DBPal: A Fully Pluggable NL2SQL Training Pipeline

Nathaniel Weir; Prasetya Utama; Alex Galakatos; Andrew Crotty; Amir Ilkhechi; Shekar Ramaswamy; Rohin Bhushan; Nadja Geisler; Benjamin Hättasch; Steffen Eger; Ugur Çetintemel; Carsten Binnig
In: David Maier; Rachel Pottinger; AnHai Doan; Wang-Chiew Tan; Abdussalam Alawini; Hung Q. Ngo (Eds.). Proceedings of the 2020 International Conference on Management of Data. ACM SIGMOD International Conference on Management of Data (SIGMOD-2020), June 14-19, Pages 2347-2361, ACM, 2020.


Natural language is a promising alternative interface to DBMSs because it enables non-technical users to formulate complex questions in a more concise manner than SQL. Recently, deep learning has gained traction for translating natural language to SQL, since similar ideas have been successful in the related domain of machine translation. However, the core problem with existing deep learning approaches is that they require an enormous amount of training data in order to provide accurate translations. This training data is extremely expensive to curate, since it generally requires humans to manually annotate natural language examples with the corresponding SQL queries (or vice versa). Based on these observations, we propose DBPal, a new approach that augments existing deep learning techniques in order to improve the performance of models for natural language to SQL translation. More specifically, we present a novel training pipeline that automatically generates synthetic training data in order to (1) improve overall translation accuracy, (2) increase robustness to linguistic variation, and (3) specialize the model for the target database. As we show, our DBPal training pipeline is able to improve both the accuracy and linguistic robustness of state-of-the-art natural language to SQL translation models.
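To make the idea of automatically generating synthetic training data more concrete, the following is a minimal sketch of template-based (natural language, SQL) pair generation in the spirit of the pipeline described above. All names here (the schema, templates, and paraphrase dictionary) are illustrative assumptions for this sketch, not DBPal's actual implementation.

```python
# Sketch: generate synthetic (NL, SQL) training pairs from paired templates
# instantiated over a target database schema, then expand paraphrases to add
# linguistic variation. Schema, templates, and paraphrases are hypothetical.

SCHEMA = {"patients": ["name", "age", "diagnosis"]}

# Paired NL/SQL templates; {col} and {table} are schema slots.
TEMPLATES = [
    ("show the {col} of all {table}", "SELECT {col} FROM {table}"),
    ("what is the {col} of each {table}", "SELECT {col} FROM {table}"),
]

# Simple paraphrase dictionary emulating linguistic variation in the NL side.
PARAPHRASES = {"show": ["show", "list", "display"]}

def generate_pairs(schema, templates, paraphrases):
    """Return (natural language, SQL) pairs specialized to the schema."""
    pairs = []
    for table, cols in schema.items():
        for col in cols:
            for nl_tpl, sql_tpl in templates:
                nl = nl_tpl.format(col=col, table=table)
                sql = sql_tpl.format(col=col, table=table)
                # Expand paraphrases of the leading verb to diversify phrasing.
                first, _, rest = nl.partition(" ")
                for alt in paraphrases.get(first, [first]):
                    pairs.append((f"{alt} {rest}", sql))
    return pairs

pairs = generate_pairs(SCHEMA, TEMPLATES, PARAPHRASES)
```

A real pipeline would add many more query templates (joins, filters, aggregates) and richer paraphrasing (e.g. via a paraphrase model), but the schema-driven instantiation shown here is the core mechanism that lets the generated data specialize a translation model to the target database.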
