Weakly Supervised Text-to-SQL Parsing through Question Decomposition (2024)

Tomer Wolfson1,2 Daniel Deutch1 Jonathan Berant1
1Blavatnik School of Computer Science, Tel Aviv University  2Allen Institute for AI
tomerwol@mail.tau.ac.il, danielde@post.tau.ac.il, joberant@cs.tau.ac.il

Abstract

Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. In contrast, training such parsers generally requires expertise in annotating natural language (NL) utterances with corresponding SQL queries. In this work, we propose a weak supervision approach for training text-to-SQL parsers. We take advantage of the recently proposed question meaning representation called QDMR, an intermediate between NL and formal query languages. Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries that are used to train text-to-SQL models. We test our approach by experimenting on five benchmark datasets. Our results show that the weakly supervised models perform competitively with those trained on annotated NL-SQL data. Overall, we effectively train text-to-SQL parsers, while using zero SQL annotations.

1 Introduction

The development of natural language interfaces to databases has been extensively studied in recent years Affolter et al. (2019); Kim et al. (2020); Thorne et al. (2021). The current standard is Machine Learning (ML) models that map utterances in natural language (NL) to executable SQL queries Wang et al. (2020); Rubin and Berant (2021). These models rely on supervised training examples of NL questions labeled with their corresponding SQL queries. Labeling copious amounts of data is cost-prohibitive, as it requires experts who are familiar both with SQL and with the underlying database structure Yu et al. (2018). Furthermore, it is often difficult to re-use existing training data from one domain in order to generalize to new ones Suhr et al. (2020). Adapting the model to a new domain requires new NL-SQL training examples, which results in yet another costly round of annotation.

[Figure 1]

In this paper we propose a weak supervision approach for training text-to-SQL parsers. We avoid the use of manually labeled NL-SQL examples and rely instead on data provided by non-expert users. Fig. 1 presents a high-level view of our approach. The input (left corner, in red) is used to automatically synthesize SQL queries (step 3, in green) which, in turn, are used to train a text-to-SQL model. The supervision signal consists of the question's answer and, uniquely, a structured representation of the question decomposition, called QDMR. The annotation of both these supervision sources can be effectively crowdsourced to non-experts Berant et al. (2013); Pasupat and Liang (2015); Wolfson et al. (2020). In a nutshell, QDMR is a series of computational steps, expressed by semi-structured utterances, that together match the semantics of the original question. The bottom left corner of Fig. 1 shows an example QDMR of the question "Which authors have more than 10 papers in the PVLDB journal?". The question is broken into five steps, where each step expresses a single logical operation (e.g., select papers, filter those in PVLDB) and may refer to previous steps. As QDMR is derived entirely from its question, it is agnostic to the underlying form of knowledge representation and has been used for questions on images, text and databases Subramanian et al. (2020); Geva et al. (2021); Saparina and Osokin (2021). In our work, we use QDMR as an intermediate representation for SQL synthesis. Namely, we implement an automatic procedure that, given an input QDMR, maps it to SQL. The QDMR can either be manually annotated or effectively predicted by a trained model, as shown in our experiments.

We continue to describe the main components of our system, using the aforementioned supervision (Fig.1). The SQL Synthesis component (step 1) attempts to convert the input QDMR into a corresponding SQL query. To this end, Phrase DB linking matches phrases in the QDMR with relevant columns and values in the database. Next, SQL join paths are automatically inferred given the database schema structure. Last, the QDMR, DB-linked columns and inferred join paths are converted to SQL by the SQL Mapper. In step 2, we rely on question-answer supervision to filter out incorrect candidate SQL. Thus, our Execution-guided SQL Search returns the first candidate query which executes to the correct answer.

Given our synthesis procedure, we evaluate its ability to produce accurate SQL using weak supervision. To this end, we run our synthesis on 9,313 examples of questions, answers and QDMRs from five standard text-to-SQL benchmarks Zelle and Mooney (1996); Li and Jagadish (2014); Yaghmazadeh et al. (2017); Yu et al. (2018). Overall, our solution successfully synthesizes SQL queries for 77.8% of the examples, thereby demonstrating its applicability to a broad range of target databases.

Next, we show that our synthesized queries are an effective alternative to training on expert-annotated SQL. We compare a text-to-SQL model trained on the queries synthesized from questions, answers and QDMRs to one trained using gold SQL. As our model of choice we use T5-large, which is widely used for sequence-to-sequence modeling tasks Raffel et al. (2020). Following past work Shaw et al. (2021); Herzig et al. (2021), we fine-tune T5 to map text to SQL. We experiment with the Spider and Geo880 datasets Yu et al. (2018); Zelle and Mooney (1996) and compare model performance based on the training supervision. When training on manually annotated QDMRs, the weakly supervised models achieve 91% to 97% of the accuracy of models trained on gold SQL. We further extend our approach to use automatically predicted QDMRs, requiring zero annotation of in-domain QDMRs. Notably, when training on predicted QDMRs, models still reach 86% to 93% of the accuracy of their fully supervised counterparts. In addition, we evaluate the cross-database generalization of models trained on Spider Suhr et al. (2020). We test our models on four additional datasets and show that the weakly supervised models are generally better than the fully supervised ones in terms of cross-database generalization. Overall, our findings show that weak supervision, in the form of questions, answers and QDMRs (annotated or predicted), is nearly as effective as gold SQL when training text-to-SQL parsers.

Our codebase and data are publicly available at https://github.com/tomerwolgithub/question-decomposition-to-sql.

2 Background

Weakly Supervised ML

The performance of supervised ML models hinges on the quantity and quality of their training data. In practice, labeling large-scale datasets for new tasks is often cost-prohibitive. This problem is further exacerbated in semantic parsing tasks Zettlemoyer and Collins (2005), as utterances need to be labeled with formal queries. Weak supervision is a broad class of methods aimed at reducing the need to manually label large training sets Hoffmann et al. (2011); Ratner et al. (2017); Zhang et al. (2019). An influential line of work has been dedicated to weakly supervised semantic parsing using question-answer pairs, referred to as learning from denotations Clarke et al. (2010); Liang et al. (2011). Past work has shown that non-experts are capable of annotating answers over knowledge graphs Berant et al. (2013) and tabular data Pasupat and Liang (2015). This approach could potentially be extended to databases by sampling subsets of their tables, such that question-answer examples can be manually annotated. A key issue in learning text-to-SQL parsers from denotations is the vast search space of potential candidate queries. Therefore, past work has focused on constraining the search space, which limited applicability to simpler questions over single tables Wang et al. (2019). Here, we handle arbitrary SQL by using QDMR to constrain the search space.

Question Decomposition

QDMR expresses the meaning of a question by breaking it down into simpler sub-questions. Given a question x, its QDMR s is a sequence of reasoning steps s^1, ..., s^|s| required to answer x. Each step s^k is an intermediate question which represents a relational operation, such as projection or aggregation. Steps may contain phrases from x, tokens signifying a query operation (e.g., "for each") and references to previous steps. Operation tokens indicate the structure of a step, while its arguments are the references and question phrases. A key advantage of QDMR is that it can be annotated by non-experts and at scale Wolfson et al. (2020). Moreover, unlike SQL, annotating QDMR requires zero domain expertise, as it is derived entirely from the original question.
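As an illustration, the flat QDMR format can be read with a few lines of code. The sketch below is a hypothetical minimal reader (not the Break toolkit): it splits a semicolon-delimited decomposition into steps and extracts each step's references to previous steps; the decomposition string is a plausible rendering of the Fig. 1 example, not taken verbatim from the dataset.

```python
import re

def parse_qdmr(qdmr: str):
    """Split a semicolon-delimited QDMR string into steps and collect
    the previous-step references (#1, #2, ...) each step mentions."""
    steps = [step.strip() for step in qdmr.split(";")]
    return [{"text": step,
             "refs": [int(m) for m in re.findall(r"#(\d+)", step)]}
            for step in steps]

# Plausible QDMR of "Which authors have more than 10 papers in PVLDB?"
decomposition = ("papers; #1 in the PVLDB journal; authors of #2; "
                 "number of #2 for each #3; #3 where #4 is more than 10")
steps = parse_qdmr(decomposition)
```

Each step is a semi-structured utterance; the operation tokens ("for each", "where ... more than") determine its operator, while the `#k` references form the step graph.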

3 Weakly Supervised SQL Synthesis

Our input data contains examples of a question x_i, a database D_i, the answer a_i, and s_i, the QDMR of x_i. The QDMR is either annotated or predicted by a trained model f, such that f(x_i) = s_i. For each example, we attempt to synthesize a SQL query Q̂_i that matches the intent of x_i and executes to its answer, Q̂_i(D_i) = a_i. The successfully synthesized examples ⟨x_i, Q̂_i⟩ are then used to train a text-to-SQL model.

3.1 Synthesizing SQL from QDMR

Given QDMR s_i and database D_i, we automatically map s_i to SQL. Alg. 1 describes the synthesis process, where the candidate query Q̂_i is incrementally synthesized by iterating over the QDMR steps. Given step s_i^k, its phrases are automatically linked to columns and values in D_i. Then, relevant join paths are inferred between the columns. Last, each step is automatically mapped to SQL based on its QDMR operator and its arguments (see Table 1).

Algorithm 1: SQL synthesis from QDMR

1:  procedure SQLSynth(s: QDMR, D: database)
2:      mapped ← []
3:      for s^k in s = ⟨s^1, ..., s^n⟩ do
4:          cols ← PhraseColumnLink(D, s^k)
5:          refs ← ReferencedSteps(s^k)
6:          join ← []
7:          for s^j in refs do
8:              other_cols ← mapped[j].cols
9:              join ← join + JoinP(D, cols, other_cols)
10:         op ← OpType(s^k)
11:         Q̂ ← MapSQL(op, cols, join, refs, mapped)
12:         mapped[k] ← ⟨s^k, cols, Q̂⟩
13:     return mapped[n].Q̂

Table 1:
QDMR Step | Phrase-DB Linking | SQL
1. ships | SELECT(ship.id) | SELECT ship.id FROM ship;
2. injuries | SELECT(death.injured) | SELECT death.injured FROM death;
3. number of #2 for each #1 | GROUP(count, #2, #1) | SELECT COUNT(death.injured) FROM ship, death WHERE death.caused_by_ship_id = ship.id GROUP BY ship.id;
4. #1 where #3 is highest | SUPER.(max, #1, #3) | SELECT ship.id FROM ship, death WHERE death.caused_by_ship_id = ship.id GROUP BY ship.id ORDER BY COUNT(death.injured) DESC LIMIT 1;
5. the name of #4 | PROJECT(ship.name, #4) | SELECT ship.name FROM ship, death WHERE death.caused_by_ship_id = ship.id AND ship.id IN (#4);
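The control flow of Alg. 1 can be sketched in a few lines of Python. All helpers (`phrase_column_link`, `referenced_steps`, `join_paths`, `op_type`, `map_sql`) are injected stubs standing in for the paper's PhraseColumnLink, ReferencedSteps, JoinP, OpType and MapSQL procedures; the toy run wires them with trivial lambdas for illustration only.

```python
import re

def sql_synth(qdmr_steps, database, phrase_column_link, referenced_steps,
              join_paths, op_type, map_sql):
    """Incrementally map QDMR steps to SQL, following Alg. 1:
    link phrases, infer joins against referenced steps, then map the
    step to SQL, re-using the SQL of previously mapped steps."""
    mapped = {}
    for k, step in enumerate(qdmr_steps, start=1):
        cols = phrase_column_link(database, step)
        refs = referenced_steps(step)
        join = []
        for j in refs:
            join += join_paths(database, cols, mapped[j]["cols"])
        op = op_type(step)
        query = map_sql(op, cols, join, refs, mapped)
        mapped[k] = {"step": step, "cols": cols, "sql": query}
    return mapped[len(qdmr_steps)]["sql"]  # SQL of the final step

# Toy run with trivial stub helpers (illustration only):
result = sql_synth(
    ["ships", "the name of #1"],
    database=None,
    phrase_column_link=lambda D, s: ["ship.name"] if "name" in s else ["ship.id"],
    referenced_steps=lambda s: [int(m) for m in re.findall(r"#(\d+)", s)],
    join_paths=lambda D, a, b: [],
    op_type=lambda s: "PROJECT" if "#" in s else "SELECT",
    map_sql=lambda op, cols, join, refs, mapped: f"SELECT {cols[0]} FROM ship;",
)
```

The design point is that `mapped` acts as the memo of Alg. 1: later steps reach back into it both for the columns of referenced steps (join inference) and for their already-mapped SQL.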

3.1.1 Phrase DB Linking

As discussed in §2, a QDMR step may have a phrase from x_i as its argument. When mapping QDMR to SQL, these phrases are linked to corresponding columns or values in D_i. For example, in Table 1 the two phrases "ships" and "injuries" are linked to columns ship.id and death.injured, respectively. We perform phrase-column linking automatically by ranking all columns in D_i and returning the top one. The ranked list of columns is later used in §3.2 when searching for a correct assignment to all phrases in the QDMR. To compute phrase-column similarity, we tokenize both the phrase and the column, then lemmatize their tokens using the WordNet lemmatizer.² The similarity score is the average GloVe word embedding similarity Pennington et al. (2014) between the phrase and column tokens. All columns in D_i are then ranked based on their word overlap and similarity with the phrase: (1) we return columns whose lemmatized tokens are identical to those in the phrase; (2) we return columns that share (non stop-word) tokens with the phrase, ordered by phrase-column similarity; (3) we return the remaining columns, ordered by similarity.

²https://www.nltk.org/
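The three-tier ranking can be sketched as follows. This is an assumption-laden simplification: `similarity` is an injected stand-in for the averaged GloVe similarity (here difflib's string ratio), and lemmatization is approximated by stripping a trailing 's' rather than using the WordNet lemmatizer.

```python
from difflib import SequenceMatcher

def rank_columns(phrase, columns, similarity):
    """Tiered column ranking: (1) exact lemma-set match, (2) token
    overlap ordered by similarity, (3) the rest ordered by similarity."""
    def toks(text):
        # naive "lemmatization": lowercase, split on . and _, strip plural s
        return {t.rstrip("s")
                for t in text.lower().replace(".", " ").replace("_", " ").split()}
    p = toks(phrase)
    exact, overlap, rest = [], [], []
    for col in columns:
        c = toks(col)
        (exact if c == p else overlap if c & p else rest).append(col)
    overlap.sort(key=lambda c: similarity(phrase, c), reverse=True)
    rest.sort(key=lambda c: similarity(phrase, c), reverse=True)
    return exact + overlap + rest

string_sim = lambda a, b: SequenceMatcher(None, a, b).ratio()
ranked = rank_columns("ships", ["death.injured", "ship.id", "ship.name"], string_sim)
```

For the phrase "ships" from Table 1, the two ship.* columns land in the overlap tier ahead of death.injured; the full ranked list feeds the assignment search of §3.2.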

We assume that literal values, such as strings or dates, appear verbatim in the database as they do in the question. Therefore, using string matching, we can identify the columns containing all literal values mentioned in s_i. If a literal value appears in multiple columns, they are all returned as potential links. This assumption may not always hold in practice due to DB-specific language, e.g., the phrase "women" may correspond to the condition gender = 'F'. Consequently, we measure the effect of DB-specific language in §4.2.

3.1.2 Join Path Inference

In order to synthesize SQL Codd (1970), we infer join paths between the linked columns returned in §3.1.1. Following past work Guo et al. (2019); Suhr et al. (2020), we implement a heuristic that returns the shortest join path connecting two sets of columns. To compute join paths, we convert D_i into a graph whose nodes are its tables, with an edge for every foreign-key constraint connecting two tables. The JoinP procedure in Alg. 1 joins the tables of the columns mentioned in step s^k (cols) with those mentioned in the previous steps that s^k refers to (other_cols). If multiple shortest paths exist, it returns the first path which contains either c_i ∈ cols as its start node or c_j ∈ other_cols as its end node. Step 3 of Table 1 underlines the join path between the death and ship tables.
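A minimal sketch of the shortest-path heuristic: treat the schema as an undirected graph of tables with foreign-key edges and run BFS from the tables of cols toward those of other_cols (the tie-breaking rule among multiple shortest paths is omitted here). The toy schema below is illustrative, loosely based on the Table 1 example.

```python
from collections import deque

def shortest_join_path(fk_edges, src_tables, dst_tables):
    """BFS over the schema graph for the shortest path from any table
    in src_tables to any table in dst_tables; returns a table list."""
    graph = {}
    for a, b in fk_edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)  # foreign keys join both ways
    queue = deque([(t, [t]) for t in src_tables])
    seen = set(src_tables)
    while queue:
        table, path = queue.popleft()
        if table in dst_tables:
            return path
        for nxt in graph.get(table, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # disconnected schema: no join path exists

# Hypothetical foreign-key edges in a battle-deaths schema:
schema = [("death", "ship"), ("ship", "battle"), ("battle", "captain")]
path = shortest_join_path(schema, ["death"], ["battle"])
```

BFS guarantees the first destination table reached is at minimal distance, which matches the "shortest join path" heuristic; the returned table list is then rendered as the FROM/WHERE join conditions.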

3.1.3 QDMR to SQL Mapper

The MapSQL procedure in Alg. 1 maps QDMR steps into executable SQL. First, the QDMR operation of each step is inferred from its utterance template, using the OpType procedure of Wolfson et al. (2020). Then, following the DB linking phase, the arguments of each step are either the linked columns and values or references to previous steps (second column of Table 1). MapSQL uses the step's operation type and arguments to automatically map s^k to a SQL query Q̂^k. Each operation has a unique mapping rule to SQL, as shown in Table 2. SQL mapping is performed incrementally for each step; when previous steps are referenced, the process re-uses parts of their previously mapped SQL (stored in the mapped array). Furthermore, our mapping procedure is able to handle complex SQL that may involve nested queries (Fig. 2) and self-joins (Fig. 3).

Table 2:
QDMR Operation | SQL Mapping
SELECT(t.col) | SELECT t.col FROM t;
FILTER(#x, =, val) | SELECT #x[SELECT] FROM #x[FROM] WHERE #x[WHERE] AND t.col = val;
PROJECT(t.col, #x) | SELECT t.col FROM t, #x[FROM] WHERE Join(t, #x[FROM]) AND #x[SELECT] IN (#x);
AGGREGATE(count, #x) | SELECT COUNT(#x[SELECT]) FROM #x[FROM] WHERE #x[WHERE];
GROUP(avg, #x, #y) | SELECT AVG(#x[SELECT]) FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE] GROUP BY #y[SELECT];
SUPER.(max, k, #x, #y) | SELECT #x[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE] ORDER BY #y[SELECT] DESC LIMIT k;
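Two of the Table 2 rules can be sketched by representing each partially built query as its SELECT/FROM/WHERE clauses, which later steps splice into. This is a deliberate simplification: join-path insertion and the remaining operators are omitted, and the clause-dict representation is our own illustration, not the paper's implementation.

```python
def map_select(column):
    """SELECT(t.col): base query over the column's table (Table 2, row 1)."""
    return {"SELECT": column, "FROM": [column.split(".")[0]], "WHERE": []}

def map_filter(prev, condition):
    """FILTER(#x, cond): copy the referenced step's clauses (#x[SELECT],
    #x[FROM], #x[WHERE]) and conjoin one more condition."""
    return {"SELECT": prev["SELECT"], "FROM": list(prev["FROM"]),
            "WHERE": prev["WHERE"] + [condition]}

def to_sql(q):
    """Render the clause dict as an executable SQL string."""
    where = " WHERE " + " AND ".join(q["WHERE"]) if q["WHERE"] else ""
    return f"SELECT {q['SELECT']} FROM {', '.join(q['FROM'])}{where};"

step1 = map_select("publication.title")
step2 = map_filter(step1, "publication.year = 2020")
```

Because each step carries its clauses forward, a referencing step can reuse them verbatim, which is exactly how the incremental mapping avoids re-deriving earlier SQL.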
xπ‘₯xitalic_x:β€œWhat are the populations of states through which the Mississippi river runs?”
s𝑠sitalic_s:the Mississippi river; states #1 runs through; the populations of #2
1.SELECT(river.river_name = β€˜Mississippi’)
2.PROJECT(state.state_name, #1)
3.PROJECT(state.population, #2)
1.SELECT river.river_name FROM river WHERE river.river_name = β€˜Mississippi’;
2.SELECT state.state_name FROM state, river WHERE river.traverse = state.state_name AND river.river_name IN (#1);
3.SELECT state.population FROM state, river WHERE river.traverse = state.state_name AND state.state_name IN (#2);
xπ‘₯xitalic_x:β€œWhat papers were written by both H. V. Jagadish and also Yunyao Li?”
s𝑠sitalic_s:papers; #1 by H. V. Jagadish; #2 by Yunyao Li
1.SELECT(publication.title)
2.FILTER(#1, author.name = β€˜H. V. Jagadish’)
3.FILTER(#2, author.name = β€˜Yunyao Li’)
1.SELECT publication.title FROM author, publication;
2.SELECT publication.title FROM author, publication, writes WHERE publication.pid= writes.pid AND writes.aid = author.aid AND author.name = β€˜H. V. Jagadish’;
3.SELECT publication.title FROM author, publication, writes WHERE publication.pid = writes.pid AND writes.aid = author.aid AND author.name = β€˜Yunyao Li’ AND publication.title IN (#2);

3.2 Execution-guided SQL Candidate Search

At this point we have Q̂_i, a potential SQL candidate. However, this candidate may be incorrect, due either to wrong phrase-column linking or to its original QDMR structure. To mitigate these issues, we search for accurate SQL candidates using the answer supervision.

Following phrase DB linking (§3.1.1), each phrase is assigned its top-ranked column in D_i. However, this assignment may be wrong. In step 1 of Fig. 1, the phrase "authors" is incorrectly linked to author.aid instead of author.name. Given our weak supervision, we do not have access to the gold phrase-column linking and rely instead on the gold answer a_i. Namely, we iterate over phrase-column assignments and synthesize their corresponding SQL. Once an assignment whose SQL executes to a_i has been found, we return it as our result. We iterate only over assignments that cover the top-k ranked columns for each phrase, which is shown to work well in practice (§4.2).
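The assignment search can be sketched as a product over each phrase's top-k columns, executed against the database until some candidate matches the gold answer. The helper `build_sql` and the SQLite backend are illustrative assumptions; the toy run mirrors the Fig. 1 example, where "authors" is first (wrongly) linked to author.aid and the answer supervision steers the search to author.name.

```python
import itertools
import sqlite3

def execution_guided_search(phrase_candidates, build_sql, conn, gold):
    """Try phrase-column assignments in rank order and return the first
    synthesized query whose execution matches the gold answer."""
    phrases = list(phrase_candidates)
    for columns in itertools.product(*(phrase_candidates[p] for p in phrases)):
        sql = build_sql(dict(zip(phrases, columns)))
        try:
            rows = conn.execute(sql).fetchall()
        except sqlite3.Error:
            continue  # unexecutable candidate: skip it
        if sorted(rows) == sorted(gold):
            return sql
    return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (aid INTEGER, name TEXT)")
conn.executemany("INSERT INTO author VALUES (?, ?)", [(1, "Ada"), (2, "Bo")])
found = execution_guided_search(
    {"authors": ["author.aid", "author.name"]},           # top-k ranked columns
    lambda link: f"SELECT {link['authors']} FROM author;",  # stand-in mapper
    conn,
    gold=[("Ada",), ("Bo",)],
)
```

Because candidates are enumerated in rank order, the search usually terminates after very few executions when the top-ranked linking is already correct.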

Failing to find a correct candidate SQL may be due to QDMR structure rather than phrase-column linking. As s_i is derived entirely from the question, it may fail to capture database-specific language. E.g., in the question "How many students enrolled during the semester?" the necessary aggregate operation may change depending on the database structure. If D_i has the column course.num_enrolled, the query should sum its entries for all courses in the semester. Conversely, if D_i has the column course.student_id, the corresponding query would need to count the number of enrolled students. We account for these structural mismatches by implementing three additional search heuristics which modify the structure of a candidate Q̂_i. If the candidate executes to the correct result following modification, it is returned by the search process. These modifications are described in detail in Appendix B. Namely, they include the addition of a DISTINCT clause, converting QDMR FILTER steps into SUPERLATIVE, and switching between the COUNT and SUM operations.
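A sketch of two of these structural modifications, adding DISTINCT and swapping COUNT with SUM (the FILTER-to-SUPERLATIVE rewrite is omitted). The naive string rewriting is our illustration only; variants that fail to execute to the gold answer are simply discarded by the execution-guided search.

```python
def candidate_modifications(sql):
    """Generate structurally modified variants of a candidate query:
    swap COUNT with SUM (or vice versa), and add DISTINCT."""
    variants = []
    if "COUNT(" in sql:
        variants.append(sql.replace("COUNT(", "SUM("))
    elif "SUM(" in sql:
        variants.append(sql.replace("SUM(", "COUNT("))
    if "DISTINCT" not in sql:
        variants.append(sql.replace("SELECT ", "SELECT DISTINCT ", 1))
    return variants

# The enrollment example: COUNT over course.student_id may need to
# become SUM over course.num_enrolled, depending on the schema.
mods = candidate_modifications("SELECT COUNT(course.student_id) FROM course;")
```

Each variant is just another candidate fed back into the execution check, so an occasionally ill-formed rewrite costs nothing beyond a failed execution.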

4 Experiments

Our experiments target two main research questions. First, given access to weak supervision of question-answer pairs and QDMRs, we wish to measure the percentage of SQL queries that can be automatically synthesized. Therefore, in §4.2 we measure SQL synthesis coverage using 9,313 examples taken from five benchmark datasets. Second, in §4.3 we use the synthesized SQL to train text-to-SQL models and compare their performance to models trained on gold SQL annotations.

4.1 Setting

Datasets

We evaluate both SQL synthesis coverage and text-to-SQL accuracy using five text-to-SQL datasets (see Table 3). The first four datasets contain questions over a single database: Academic Li and Jagadish (2014) has questions over the Microsoft Academic Search database; Geo880 Zelle and Mooney (1996) concerns US geography; IMDB and Yelp Yaghmazadeh et al. (2017) contain complex questions on a film database and a restaurant database, respectively. The Spider dataset Yu et al. (2018) measures domain generalization between databases, and therefore contains questions over 160 different databases. For QDMR data we use the Break dataset Wolfson et al. (2020). The only exception is 259 questions of IMDB and Yelp, outside of Break, which we (the authors) annotated with corresponding QDMRs and release with our code. See Appendix C for license details.

Training

We fine-tune the T5-large sequence-to-sequence model Raffel et al. (2020) for both text-to-SQL parsing and QDMR parsing (§4.2). Namely, for each task we fine-tune the pre-trained model on its task-specific data. For text-to-SQL, we fine-tune on mapping utterances x_i; cols(D_i) to SQL, where the sequence cols(D_i) is a serialization of all columns in database D_i, in arbitrary order. In QDMR parsing, input questions are mapped to output QDMR strings. We use the T5 implementation by HuggingFace Wolf et al. (2020) and train using the Adam optimizer Kingma and Ba (2014). Following fine-tuning on the dev sets, we set the batch size to 128 and the learning rate to 1e-4 (after experimenting with 1e-5, 1e-4 and 1e-3). All models were trained on an NVIDIA GeForce RTX 3090 GPU.
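For concreteness, the model input x_i; cols(D_i) might be rendered as below. The exact separator conventions are our assumption, not a documented format.

```python
def serialize_example(question, columns):
    """Render the question followed by a serialization of the database
    columns, as the text-to-SQL model input (separators assumed)."""
    return question + " ; " + ", ".join(columns)

model_input = serialize_example(
    "Which authors have more than 10 papers in the PVLDB journal?",
    ["author.aid", "author.name", "writes.aid", "publication.title"])
```

Since T5 consumes plain token sequences, any consistent serialization works; the column order is arbitrary, matching the paper's setup.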

4.2 SQL Synthesis Coverage

Our first challenge is to measure our ability to synthesize accurate SQL. We define the coverage of SQL synthesis as the percentage of examples for which it successfully produces a query Q̂ that executes to the correct answer. To ensure our procedure is domain independent, we test it on five different datasets, spanning 164 databases (Table 3).

Annotated QDMRs

The upper rows of Table 3 list the SQL synthesis coverage when using manually annotated QDMRs from Break. Overall, we evaluate on 9,313 QDMR-annotated examples, reaching a coverage of 77.8%. Synthesis coverage for single-DB datasets tends to be slightly higher than that of Spider, which we attribute to its larger size and diversity. To further ensure the quality of the synthesized SQL, we manually validate a random subset of 100 queries out of the 7,249 that were synthesized. Our analysis reveals that 95% of the queries are correct interpretations of their original question. In addition, we evaluate synthesis coverage on the subset of 8,887 examples whose SQL denotations are not the empty set. As SQL synthesis relies on answer supervision, discarding examples with empty denotations eliminates false positives from spurious SQL that incidentally executes to an empty set. Overall, coverage on examples with non-empty denotations is nearly identical, at 77.6% (see Appendix D). We also perform an error analysis on a random set of 100 failed examples, presented in Table 4. SQL synthesis failures are mostly due to QDMR annotation errors or implicit database-specific conditions. E.g., in Geo880 the phrase "major river" should implicitly be mapped to the condition river.length > 750. As our SQL synthesis is domain-general, it does not memorize any domain-specific jargon or rules.

Table 3:
Dataset | DB # | Examples | Synthesized | Coverage %
Academic | 1 | 195 | 155 | 79.5
Geo880 | 1 | 877 | 736 | 83.9
IMDB | 1 | 131 | 116 | 88.5
Yelp | 1 | 128 | 100 | 78.1
Spider dev | 20 | 1,027 | 793 | 77.2
Spider train | 140 | 6,955 | 5,349 | 76.9
Total | 164 | 9,313 | 7,249 | 77.8
Spider dev (pred.) | 20 | 1,027 | 797 | 77.6
Table 4:
Error | Description | %
Nonstandard QDMR | The annotated QDMR contains a step utterance that does not follow any of the pre-specified operation templates in Wolfson et al. (2020) | 42
DB-specific language | Phrase entails an implicit condition, e.g., "female workers" → emp.gender = 'F' | 23
Phrase-column link. | The correct phrase-column assignment falls outside the top-k candidates (§3.2) | 13
Gold SQL | An error due to an incorrectly labeled gold SQL query | 6
Predicted QDMRs

While QDMR annotation can be crowdsourced to non-experts Wolfson et al. (2020), moving to a new domain may require annotating new in-domain examples. Our first step to address this issue is to evaluate the coverage of SQL synthesis on predicted QDMRs, for out-of-domain questions. As the question domains in Spider dev differ from those in its training set, it serves as our initial testbed. We further explore this setting in §4.3.4. Our QDMR parser (§4.1) is fine-tuned on Break for 10 epochs, and we select the model with the highest exact string match (EM) on the Break dev set. Evaluating on the hidden test set reveals that our model scores 42.3 normalized EM,³ setting the state of the art on Break.⁴ The predicted QDMRs are then used in SQL synthesis together with the examples ⟨x_i, a_i, D_i⟩. In Table 3, the last row shows that coverage on Spider dev is nearly identical to that of manually annotated QDMRs (77.6% vs. 77.2%).

³The metric is a strict lower bound on performance.
⁴https://leaderboard.allenai.org/break

4.3 Training Text-to-SQL Models

Next, we compare text-to-SQL models trained on our synthesized data to models trained on expert-annotated SQL. Given examples ⟨x_i, D_i⟩, we test the following settings: (1) A fully supervised training set that uses gold SQL annotations {⟨x_i, Q_i, D_i⟩}_{i=1}^n. (2) A weakly supervised training set, where given answer a_i and QDMR s_i, we synthesize queries Q̂_i. As SQL synthesis coverage is less than 100%, the process returns a subset of m < n examples {⟨x_i, Q̂_i, D_i⟩}_{i=1}^m on which the model is trained.⁵

⁵In practice, we do not train directly on Q̂_i but on s_i following its phrase-column linking. This representation is then automatically mapped to SQL to evaluate its execution.

4.3.1 Training Data

We train models on two text-to-SQL datasets: Spider (Yu et al., 2018) and Geo880 (Zelle and Mooney, 1996). As our weakly supervised training sets, we use the synthesized examples $\langle x_i, \hat{Q}_i, D_i \rangle$ described in §4.2 (using annotated QDMRs). We successfully synthesized 5,349 training examples for Spider and 547 for the Geo880 training set.

4.3.2 Models and Evaluation

Models

We fine-tune T5-large for text-to-SQL, using the hyperparameters from §4.1. We choose T5 as it is agnostic to the structure of its input sequences; namely, it has been shown to perform competitively on different text-to-SQL datasets, regardless of their SQL conventions Shaw et al. (2021); Herzig et al. (2021). This property is particularly desirable in our cross-database evaluation (§4.3.3), where we test on multiple datasets.

We train and evaluate the following models:

  • T5-SQL-G, trained on $\{\langle x_i, Q_i, D_i \rangle\}_{i=1}^{n}$, using gold SQL annotated by experts

  • T5-QDMR-G, trained on $\{\langle x_i, \hat{Q}_i, D_i \rangle\}_{i=1}^{m}$, with $\hat{Q}_i$ synthesized using weak supervision

  • T5-SQL-G_part, trained on $\{\langle x_i, Q_i, D_i \rangle\}_{i=1}^{m}$, using gold SQL. This model helps us measure the degree to which the smaller size of the synthesized training data and its different query structure (compared to gold SQL) affect performance

Evaluation Metric

Because our SQL is automatically synthesized, its syntax often differs from that of the gold SQL (see Appendix E.2). As a result, the ESM metric of Yu et al. (2018) does not fit our evaluation setup. Instead, we follow Suhr et al. (2020) and evaluate text-to-SQL models using the execution accuracy of output queries. We define execution accuracy as the percentage of output queries which, when executed on the database, return the same set of tuples (rows) as the gold answer $a_i$.
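As a concrete illustration, execution accuracy can be computed by executing each predicted query and comparing its result set with the gold answer. The following is a minimal sketch using SQLite; the function name `exec_accuracy` and the toy schema are our own, not the paper's code.

```python
import sqlite3

def exec_accuracy(db, pairs):
    """Fraction of (predicted_sql, gold_rows) pairs whose execution
    returns exactly the gold set of tuples (order-insensitive)."""
    correct = 0
    for sql, gold in pairs:
        try:
            rows = set(db.execute(sql).fetchall())
        except sqlite3.Error:
            rows = None  # a malformed prediction counts as incorrect
        if rows == set(gold):
            correct += 1
    return correct / len(pairs)

# Toy database, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE state (name TEXT, size INTEGER)")
db.executemany("INSERT INTO state VALUES (?, ?)",
               [("alaska", 570641), ("texas", 261232)])

pairs = [
    ("SELECT name FROM state WHERE size > 300000", [("alaska",)]),
    ("SELECT name FROM state", [("alaska",)]),  # over-general prediction
]
print(exec_accuracy(db, pairs))  # 0.5
```

Comparing result *sets* makes the metric insensitive to row order, matching the definition above.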

Table 5: Spider development set results.

| Model | Supervision | Training set | Exec. % |
| --- | --- | --- | --- |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 7,000 | 68.0 ± 0.3 |
| T5-SQL-G_part | $\langle x_i, Q_i, D_i \rangle$ | 5,349 | 66.4 ± 0.8 |
| T5-QDMR-G | $\langle x_i, a_i, s_i, D_i \rangle$ | 5,349 | 65.8 ± 0.3 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | *5,075 | 62.9 ± 0.8 |
Table 6: Execution accuracy of Spider-trained models on XSP datasets.

| Model | Academic | Geo880 | IMDB | Yelp |
| --- | --- | --- | --- | --- |
| T5-SQL-G | 8.2 ± 1.3 | 33.6 ± 2.5 | 19.8 ± 3.6 | 22.7 ± 1.2 |
| T5-SQL-G_part | 4.9 ± 1.5 | 32.4 ± 1.3 | 20.9 ± 0.8 | 20.7 ± 1.4 |
| T5-QDMR-G | 10.7 ± 0.7 | 40.4 ± 1.8 | 27.1 ± 3.6 | 16.2 ± 4.7 |
| T5-QDMR-P | 8.2 ± 0.4 | 39.7 ± 1.4 | 23.6 ± 5.5 | 16.7 ± 3.7 |

4.3.3 Training on Annotated QDMRs

We begin by comparing the models trained on annotated QDMRs to those trained on gold SQL; the discussion of T5-QDMR-P, trained on predicted QDMRs, is deferred to §4.3.4. The results in Tables 5-7 list the average accuracy and standard deviation of three model instances, trained with separate random seeds.

Spider & XSP Evaluation

Tables 5-6 list the results of the Spider-trained models. All models were trained for 150 epochs and evaluated on the dev set of 1,034 examples. Compared to the model trained on gold SQL, T5-QDMR-G achieves 96.8% of its performance (65.8 vs. 68.0). The T5-SQL-G_part model, trained on the same 5,349 examples as T5-QDMR-G, performs roughly on par, scoring +0.6 points (66.4 vs. 65.8).

As Spider is used to train cross-database models, we further evaluate our models' performance on cross-database semantic parsing (XSP) Suhr et al. (2020). In Table 6 we test on four additional text-to-SQL datasets (sizes in parentheses): Academic (183), Geo880 (877), IMDB (113) and Yelp (66). For Academic, IMDB and Yelp, we removed examples whose execution results in an empty set; otherwise, the significant fraction of such examples would produce false positives from predictions that incidentally execute to an empty set. In practice, evaluation on the full datasets remains mostly unchanged and is provided in Appendix E. Similarly to Suhr et al. (2020), the results in Table 6 show that Spider-trained models struggle to generalize to XSP examples. However, T5-QDMR-G performance is generally better on XSP examples, which further indicates that QDMR and answer supervision is effective compared to gold SQL. Example predictions are shown in Appendix E.2.
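The filtering of empty-result examples described above can be sketched as follows; the helper name and toy data are our own illustration, not the authors' code.

```python
import sqlite3

def drop_empty_result_examples(db, examples):
    """Keep only evaluation examples whose gold SQL returns at least one
    row, so a prediction cannot be scored correct merely because it also
    happens to execute to an empty set."""
    return [(q, sql) for q, sql in examples if db.execute(sql).fetchall()]

# Toy database, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE movie (title TEXT, year INTEGER)")
db.execute("INSERT INTO movie VALUES ('Heat', 1995)")

examples = [
    ("movies from 1995", "SELECT title FROM movie WHERE year = 1995"),
    ("movies from 2030", "SELECT title FROM movie WHERE year = 2030"),
]
filtered = drop_empty_result_examples(db, examples)
print(len(filtered))  # 1
```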

Geo880

Table 7 lists the execution accuracy of models trained on Geo880. Models were trained for 300 epochs, tuned on the dev set, and then evaluated on the 280 test examples. We note that T5-QDMR-G achieves 90.7% of the performance of T5-SQL-G (74.5 vs. 82.1). The larger performance gap, compared to the Spider models, may be partly due to dataset size: as Geo880 has only 547 training examples, the smaller number of synthesized SQL queries available to train T5-QDMR-G (454) may have had a greater effect on its accuracy.

Table 7: Geo880 test set results.

| Model | Supervision | Train. set | Exec. % |
| --- | --- | --- | --- |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 547 | 82.1 ± 1.9 |
| T5-SQL-G_part | $\langle x_i, Q_i, D_i \rangle$ | 454 | 79.4 ± 0.4 |
| T5-QDMR-G | $\langle x_i, a_i, s_i, D_i \rangle$ | 454 | 74.5 ± 0.2 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 432 | 70.4 ± 0.2 |

4.3.4 Training on Predicted QDMRs

We extend our approach by replacing the annotated QDMRs with the predictions of a trained QDMR parser (a T5-large model; see §4.1). In this setting we have two sets of questions: (1) questions used to train the QDMR parser; (2) questions used to synthesize NL-SQL data. We want these two sets to be as separate as possible, so that training the QDMR parser does not require new in-domain annotations. Namely, the parser must generalize to questions in the NL-SQL domains while being trained on as few of these questions as possible.

Spider

Unfortunately, Spider questions make up a large portion of the Break training set used to train the QDMR parser. We therefore experiment with two alternatives that minimize in-domain QDMR annotations of NL-SQL questions. The first is to train the parser using few-shot QDMR annotations for Spider. The second is to split Spider so that some questions serve as the NL-SQL data while the rest are used to train the QDMR parser.

In Table 5, T5-QDMR-P is trained on 5,075 queries synthesized using predicted QDMRs (and answer supervision) for Spider train questions. The predictions were generated by a QDMR parser trained on a subset of Break that excludes all Spider questions save 700 (10% of Spider train). Keeping few in-domain examples minimizes additional QDMR annotation while preserving prediction quality. Training on the predicted QDMRs instead of the annotated ones lowers accuracy by 2.9 points (65.8 to 62.9), while the model still achieves 92.5% of T5-SQL-G performance on the Spider dev set. On XSP examples, T5-QDMR-P is competitive with T5-QDMR-G (Table 6).

In Table 8, we experiment with training T5-QDMR-P without any in-domain QDMR annotations, avoiding any overlap between the questions and domains used to train the QDMR parser and those used for SQL synthesis. We randomly sample 30-40 databases from Spider and use their corresponding questions exclusively as our NL-SQL data; to train the QDMR parser, we use Break with the sampled questions discarded. We experiment with three random samples of Spider train, numbering 1,348, 2,028 and 2,076 examples, with synthesized training data of 1,129, 1,440 and 1,552 examples respectively. The results in Table 8 show that, on average, T5-QDMR-P achieves 95.5% of the performance of T5-SQL-G. This indicates that even without any in-domain QDMR annotations, data induced from answer supervision and out-of-domain QDMRs is effective for training text-to-SQL models, compared to gold SQL.

Geo880

For predicted QDMRs on Geo880, we train the QDMR parser on Break after discarding all 547 Geo880 questions; the parser is therefore trained without any in-domain QDMR annotations for Geo880. SQL synthesis using the predicted QDMRs produced 432 queries. In Table 7, T5-QDMR-P reaches 85.7% of T5-SQL-G performance while being trained using question-answer supervision and no in-domain QDMR annotations.

Table 8: Results on three random samples of Spider databases, with the QDMR parser trained without in-domain annotations.

| Model | Supervision | Train. set | DB # | Exec. % |
| --- | --- | --- | --- | --- |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 1,348 | 30 | 48.4 |
| T5-SQL-G_part | $\langle x_i, Q_i, D_i \rangle$ | 1,129 | 30 | 47.4 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,129 | 30 | 46.2 |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 2,028 | 40 | 54.7 |
| T5-SQL-G_part | $\langle x_i, Q_i, D_i \rangle$ | 1,440 | 40 | 51.3 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,440 | 40 | 52.1 |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 2,076 | 40 | 56.2 |
| T5-SQL-G_part | $\langle x_i, Q_i, D_i \rangle$ | 1,552 | 40 | 53.7 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,552 | 40 | 53.8 |

5 Limitations

Our approach uses question decompositions and answers as supervision for text-to-SQL parsing. As annotating SQL requires expertise, our solution serves as a potentially cheaper alternative. Past work has shown that non-experts can provide the answers for questions on knowledge graphs Berant et al. (2013) and tables Pasupat and Liang (2015). However, manually annotating question-answer pairs on large-scale databases may present new challenges, which we leave for future work.

During SQL synthesis we assume that literal values (strings or dates) appear verbatim in the database as they do in the question. We observe that, for multiple datasets, this assumption generally holds (§4.2). Still, for questions with domain-specific jargon Lee et al. (2021), our approach might require an initial named-entity-recognition step. Failure to map a QDMR to SQL may be due to a mismatch between a QDMR and its corresponding SQL structure (§3.2). We account for such mismatches by using heuristics to modify the structure of a candidate query (Appendix B). A complementary approach could train a model mapping QDMR to SQL, to account for cases where our heuristic rules fail. Nevertheless, our SQL synthesis covers a diverse set of databases and query patterns, as shown in our experiments.
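The verbatim-literal assumption can be checked mechanically: a literal extracted from the question should exactly match some cell of a candidate column. A minimal sketch, with our own helper name and toy data:

```python
import sqlite3

def literal_in_column(db, table, column, value):
    """Return True if the question literal appears verbatim in the column.
    Table/column names here come from a trusted schema, not user input."""
    row = db.execute(
        f"SELECT 1 FROM {table} WHERE {column} = ? LIMIT 1", (value,)
    ).fetchone()
    return row is not None

# Toy database, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE city (name TEXT)")
db.execute("INSERT INTO city VALUES ('sacramento')")

print(literal_in_column(db, "city", "name", "sacramento"))  # True
print(literal_in_column(db, "city", "name", "Sacto"))       # False
```

The failing second lookup mirrors the domain-jargon case above, where the question's phrasing does not match the stored value and an NER/normalization step would be needed first.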

6 Related Work

For a thorough review of NL interfaces to databases, see Affolter et al. (2019); Kim et al. (2020). Research on text-to-SQL parsing gained significant traction in recent years with the introduction of large supervised datasets for training models and evaluating their performance Zhong et al. (2017); Yu et al. (2018). Recent approaches rely on specialized architectures combined with pre-trained language models Guo et al. (2019); Wang et al. (2020); Lin et al. (2020); Yu et al. (2021); Deng et al. (2021); Scholak et al. (2021). As our solution synthesizes NL-SQL pairs using weak supervision, it can be used to train supervised text-to-SQL models.

Also related is the use of intermediate meaning representations (MRs) in mapping text to SQL. In past work, MRs were either annotated by experts Yaghmazadeh et al. (2017); Kapanipathi et al. (2020), or directly induced from such annotations as a way to simplify the target MR Dong and Lapata (2018); Guo et al. (2019); Herzig et al. (2021). By contrast, QDMR representations are expressed as NL utterances and can therefore be annotated by non-experts. Similarly to us, Saparina and Osokin (2021) map QDMR to SPARQL. However, our SQL synthesis does not rely on the annotated linking of question phrases to DB elements Lei et al. (2020). In addition, we train models without gold QDMR annotations and test our models on four datasets in addition to Spider.

7 Conclusions

This work presents a weakly supervised approach for generating NL-SQL training data, using answer and QDMR supervision. We implemented an automatic SQL synthesis procedure capable of generating effective training data for dozens of target databases. Experiments on multiple text-to-SQL benchmarks demonstrate the efficacy of our synthesized training data: our weakly supervised models achieve 91%-97% of the performance of fully supervised models trained on annotated SQL. Further constraining our models' supervision to few or zero in-domain QDMRs still reaches 86%-93% of the fully supervised models' performance. Overall, we provide an effective solution for training text-to-SQL parsers that requires zero SQL annotations.

Acknowledgements

We would like to thank Mor Geva, Ori Yoran and Jonathan Herzig for their insightful comments. This research was partially supported by the Israel Science Foundation (grant 978/17), the Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grants DELPHI 802800 and ProDIS 804302). This work was completed in partial fulfillment of the PhD of Tomer Wolfson.

References

  • Affolter et al. (2019) Katrin Affolter, Kurt Stockinger, and A. Bernstein. 2019. A comparative survey of recent natural language interfaces for databases. The VLDB Journal, 28:793–819.
  • Berant et al. (2013) J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
  • Clarke et al. (2010) James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 18–27, Uppsala, Sweden. Association for Computational Linguistics.
  • Codd (1970) Edgar F. Codd. 1970. A relational model of data for large shared data banks. Communications of the ACM, 13(6):377–387.
  • Deng et al. (2021) Xiang Deng, Ahmed Hassan Awadallah, Christopher Meek, Oleksandr Polozov, Huan Sun, and Matthew Richardson. 2021. Structure-grounded pretraining for text-to-SQL. In NAACL.
  • Dong and Lapata (2018) Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–742, Melbourne, Australia. Association for Computational Linguistics.
  • Finegan-Dollak et al. (2018) Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351–360, Melbourne, Australia. Association for Computational Linguistics.
  • Geva et al. (2021) Mor Geva, Tomer Wolfson, and Jonathan Berant. 2021. Break, perturb, build: Automatic perturbation of reasoning paths through question decomposition. ArXiv, abs/2107.13935.
  • Guo et al. (2019) Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Association for Computational Linguistics (ACL).
  • Herzig et al. (2021) Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, and Yuan Zhang. 2021. Unlocking compositional generalization in pre-trained models using intermediate representations. ArXiv, abs/2104.07478.
  • Hoffmann et al. (2011) Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550, Portland, Oregon, USA. Association for Computational Linguistics.
  • Kapanipathi et al. (2020) Pavan Kapanipathi, I. Abdelaziz, Srinivas Ravishankar, S. Roukos, Alexander G. Gray, Ramón Fernández Astudillo, Maria Chang, Cristina Cornelio, S. Dana, Achille Fokoue, Dinesh Garg, A. Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Youngsuk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, L. Popa, Revanth Reddy, R. Riegel, G. Rossiello, Udit Sharma, G. P. Shrivatsa Bhargav, and M. Yu. 2020. Question answering over knowledge bases by leveraging semantic parsing and neuro-symbolic reasoning. ArXiv, abs/2012.01707.
  • Kim et al. (2020) Hyeonji Kim, Byeong-Hoon So, Wook-Shin Han, and Hongrae Lee. 2020. Natural language to SQL: Where are we today? Proc. VLDB Endow., 13:1737–1750.
  • Kingma and Ba (2014) D. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Lee et al. (2021) Chia-Hsuan Lee, Oleksandr Polozov, and Matthew Richardson. 2021. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2261–2273, Online. Association for Computational Linguistics.
  • Lei et al. (2020) Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-to-SQL. In EMNLP.
  • Li and Jagadish (2014) Fei Li and Hosagrahar Visvesvaraya Jagadish. 2014. NaLIR: an interactive natural language interface for querying relational databases. In International Conference on Management of Data, SIGMOD.
  • Liang et al. (2011) P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599.
  • Lin et al. (2020) Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2020. Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4870–4888, Online. Association for Computational Linguistics.
  • Pasupat and Liang (2015) P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
  • Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.
  • Ratner et al. (2017) A. Ratner, Stephen H. Bach, Henry R. Ehrenberg, Jason Alan Fries, Sen Wu, and C. Ré. 2017. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases.
  • Rubin and Berant (2021) Ohad Rubin and Jonathan Berant. 2021. SmBoP: Semi-autoregressive bottom-up semantic parsing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 311–324, Online. Association for Computational Linguistics.
  • Saparina and Osokin (2021) Irina Saparina and Anton Osokin. 2021. SPARQLing database queries from intermediate question decompositions. In EMNLP.
  • Scholak et al. (2021) Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Shaw et al. (2021) Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In ACL/IJCNLP.
  • Subramanian et al. (2020) Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, and Matt Gardner. 2020. Obtaining faithful interpretations from compositional neural networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5594–5608, Online. Association for Computational Linguistics.
  • Suhr et al. (2020) Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In ACL.
  • Thorne et al. (2021) James Thorne, Majid Yazdani, Marzieh Saeidi, F. Silvestri, S. Riedel, and A. Halevy. 2021. From natural language processing to neural databases. Proc. VLDB Endow., 14:1033–1039.
  • Wang et al. (2020) Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and M. Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. ArXiv, abs/1911.04942.
  • Wang et al. (2019) Bailin Wang, Ivan Titov, and Mirella Lapata. 2019. Learning semantic parsers from denotations with latent structured alignments and abstract programs. In EMNLP.
  • Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
  • Wolfson et al. (2020) Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics, 8:183–198.
  • Yaghmazadeh et al. (2017) Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. SQLizer: query synthesis from natural language. Proceedings of the ACM on Programming Languages, 1:1–26.
  • Yu et al. (2021) Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir R. Radev, Richard Socher, and Caiming Xiong. 2021. GraPPa: Grammar-augmented pre-training for table semantic parsing. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
  • Yu et al. (2018) Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Empirical Methods in Natural Language Processing (EMNLP).
  • Zelle and Mooney (1996) M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055.
  • Zettlemoyer and Collins (2005) L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666.
  • Zhang et al. (2019) Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, and Zhi-Hua Zhou. 2019. Learning from incomplete and inaccurate supervision. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  • Zhong et al. (2017) V. Zhong, C. Xiong, and R. Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.

Appendix A QDMR to SQL Mapping Rules

Table 9 lists all of the QDMR operations along with their mapping rules to SQL. For a thorough description of QDMR semantics, please refer to Wolfson et al. (2020).

Table 9: QDMR operations and their SQL mapping rules.

| QDMR Operation | SQL Mapping |
| --- | --- |
| SELECT(t.col) | SELECT t.col FROM t; |
| SELECT(val) | SELECT t.col FROM t WHERE t.col = val; |
| FILTER(#x, =, val) | SELECT #x[SELECT] FROM #x[FROM] WHERE #x[WHERE] AND t.col = val; |
| PROJECT(t.col, #x) | SELECT t.col FROM t, #x[FROM] WHERE Join(t, #x[FROM]) AND #x[SELECT] IN (#x); |
| AGGREGATE(count, #x) | SELECT COUNT(#x[SELECT]) FROM #x[FROM] WHERE #x[WHERE]; |
| GROUP(avg, #x, #y) | SELECT AVG(#x[SELECT]) FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE] GROUP BY #y[SELECT]; |
| SUPERLATIVE(max, k, #x, #y) | SELECT #x[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE] ORDER BY #y[SELECT] DESC k; |
| COMPARATIVE(#x, #y, >, val) | SELECT #x[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE] AND #y[SELECT] > val; |
| UNION(#x, #y) | SELECT #x[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND (#x[WHERE] OR #y[WHERE]); |
| UNION_COLUMN(#x, #y) | SELECT #x[SELECT], #y[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] AND #y[WHERE]; |
| INTERSECT(t.col, #x, #y) | SELECT t.col FROM t, #x[FROM], #y[FROM] WHERE Join(t, #x[FROM], #y[FROM]) AND #x[WHERE] AND t.col IN ( SELECT t.col FROM t, #x[FROM], #y[FROM] WHERE Join(t, #x[FROM], #y[FROM]) AND #y[WHERE] ); |
| SORT(#x, #y, asc) | SELECT #x[SELECT] FROM #x[FROM], #y[FROM] WHERE Join(#x[FROM], #y[FROM]) AND #x[WHERE] ORDER BY #y[SELECT] ASC; |
| DISCARD(#x, #y) | SELECT #x[SELECT] FROM #x[FROM] WHERE #x[WHERE] AND #x[SELECT] NOT IN ( #y ); |
| ARITHMETIC(+, #x, #y) | ( #x ) + ( #y ); |
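To illustrate how these rules compose, the sketch below chains the SELECT rule with a FILTER rule to produce an executable query string. The step representation and helper names are our own simplification; the actual synthesis also handles joins, grouping, and the remaining operations in Table 9.

```python
# Each QDMR step carries the SELECT/FROM/WHERE pieces that the Table 9
# rules refer to as #x[SELECT], #x[FROM], #x[WHERE].

def qdmr_select(table, col):
    # SELECT(t.col) -> SELECT t.col FROM t;
    return {"SELECT": f"{table}.{col}", "FROM": table, "WHERE": []}

def qdmr_filter(prev, col, op, val):
    # FILTER(#x, op, val) -> append a predicate to #x's WHERE clause.
    return dict(prev, WHERE=prev["WHERE"] + [f"{col} {op} '{val}'"])

def to_sql(step):
    where = f" WHERE {' AND '.join(step['WHERE'])}" if step["WHERE"] else ""
    return f"SELECT {step['SELECT']} FROM {step['FROM']}{where};"

s1 = qdmr_select("state", "name")                 # step 1: "states"
s2 = qdmr_filter(s1, "state.name", "=", "texas")  # step 2: "#1 named texas"
print(to_sql(s2))
# SELECT state.name FROM state WHERE state.name = 'texas';
```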

Appendix B SQL Candidate Search Heuristics

We further describe the execution-guided search over candidate SQL queries that was introduced in §3.2. Given the search space of candidate queries, we use four heuristics to find candidates $\hat{Q}_i$ that execute to the correct answer, $a_i$.

1. Phrase linking search: We avoid iterating over every phrase-column assignment by ordering assignments according to their phrase-column ranking, as described in §3.1.1. The query $\hat{Q}_i^{(1)}$ is induced from the top-ranked assignment, where each phrase in $s_i$ is assigned its top-ranked column. If $\hat{Q}_i^{(1)}(D_i) \neq a_i$, we continue the candidate search using heuristics 2-4 (described below). If these additional heuristics also fail to find a candidate $\hat{Q}_i^{(1)'}$ such that $\hat{Q}_i^{(1)'}(D_i) = a_i$, we return to the phrase linking component and resume the process with the candidate SQL induced from the next assignment, $\hat{Q}_i^{(2)}$, and so forth. In practice, we limit the number of assignments and review only those covering the top-$k$ most similar columns for each phrase in $s_i$, where $k = 20$. Our error analysis (Table 4) reveals that only a small fraction of failures are due to limiting $k$. Step 2 in Fig. 1 illustrates the iterative process: $\hat{Q}_i^{(1)}$ executes to an incorrect result, while the following candidate $\hat{Q}_i^{(2)}$ correctly links the phrase "authors" to column author.name and executes to $a_i$, thereby ending the search.
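The ranked search over phrase-column assignments can be sketched as follows. This is a minimal illustration, not the paper's implementation: `rank_columns`, `induce_sql`, and `execute` are hypothetical stand-ins for the phrase-column ranker, the QDMR-to-SQL mapper, and the query executor.

```python
# Hypothetical sketch of the phrase-linking search (heuristic 1).
from itertools import product

TOP_K = 20  # only assignments over each phrase's top-20 columns are tried

def phrase_linking_search(phrases, rank_columns, induce_sql, execute, answer):
    """Iterate over phrase-column assignments in ranked order until a
    candidate SQL executes to the gold answer, or the search is exhausted."""
    # For each phrase, keep only its top-k most similar columns.
    ranked = [rank_columns(p)[:TOP_K] for p in phrases]
    # Enumerate joint assignments; product varies the last phrase fastest,
    # so top-ranked columns of leading phrases are preferred first.
    for assignment in product(*ranked):
        candidate = induce_sql(dict(zip(phrases, assignment)))
        if execute(candidate) == answer:
            return candidate  # search ends at the first correct candidate
    return None  # no assignment within the top-k columns executed correctly
```

In the full procedure, heuristics 2-4 below would be tried on each candidate before moving to the next assignment.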

2. Distinct modification: Given a candidate SQL $\hat{Q}_i$ such that $\hat{Q}_i(D_i) \neq a_i$, we add DISTINCT to its SELECT clause. In Table 10, the SQL executes to the correct result following this modification.
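A minimal sketch of this modification, assuming candidates are plain SQL strings whose first keyword is SELECT:

```python
# Sketch of the DISTINCT modification (heuristic 2).
def add_distinct(sql: str) -> str:
    """Insert DISTINCT into the SELECT clause if it is not already present."""
    head, _, rest = sql.partition(" ")
    if head.upper() == "SELECT" and not rest.upper().startswith("DISTINCT"):
        return f"{head} DISTINCT {rest}"
    return sql
```

The modified candidate is then re-executed against $D_i$ and kept only if it now returns $a_i$.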

3. Superlative modification: This heuristic automatically corrects semantic mismatches between annotated QDMR structures and the underlying database. Concretely, steps in $s_i$ that represent PROJECT and FILTER operations may entail an implicit ARGMAX/ARGMIN operation. Consider the question "What is the size of the largest state in the USA?" in the third row of Table 10. Step (3) of the question's annotated QDMR is the PROJECT operation "state with the largest #2". While conforming to the PROJECT operation template, this step entails an ARGMAX operation. Using the NLTK part-of-speech tagger, we automatically identify superlative tokens in the PROJECT and FILTER steps of $s_i$ and replace these steps with the appropriate SUPERLATIVE type steps. In Table 10, the original step (3) is modified to "#1 where #2 is highest".
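The rewrite can be sketched as below. The paper uses the NLTK part-of-speech tagger (superlative tags JJS/RBS); here a simplified lexical check stands in for the tagger so the step-rewriting logic is self-contained, and the `-est` heuristic plus the small word list are illustrative assumptions.

```python
# Simplified stand-in for superlative detection (heuristic 3).
import re

SUPERLATIVE_WORDS = {"most", "least", "best", "worst"}  # illustrative lexicon

def has_superlative(step: str) -> bool:
    """Approximate JJS/RBS detection: '-est' adjectives or a listed word."""
    for token in re.findall(r"[a-z]+", step.lower()):
        if token in SUPERLATIVE_WORDS or (token.endswith("est") and len(token) > 4):
            return True
    return False

def rewrite_project_step(step: str, ref_entity: str, ref_metric: str) -> str:
    """Rewrite a PROJECT step containing a superlative into a SUPERLATIVE
    step, e.g. 'state with the largest #2' -> '#1 where #2 is highest'."""
    if has_superlative(step):
        return f"{ref_entity} where {ref_metric} is highest"
    return step
```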

4. Aggregate modification:This heuristics replaces instances of COUNT in QDMR steps with SUM operations, and vice-versa. In Table10, the question β€œFind the total student enrollment for different affiliation type schools.”, is incorrectly mapped to a candidate query involving a COUNT operation on university.enrollment. By modifying the aggregate operation to SUM, the new Q^isubscript^𝑄𝑖\hat{Q}_{i}over^ start_ARG italic_Q end_ARG start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT correctly executes to aisubscriptπ‘Žπ‘–a_{i}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and is therefore returned as the output.
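A sketch of the swap, assuming candidates are represented as SQL strings (the paper applies the change at the QDMR step level; operating on the induced SQL is an equivalent illustration):

```python
# Sketch of the aggregate modification (heuristic 4): swap COUNT and SUM.
import re

def swap_aggregate(sql: str) -> str:
    """Replace COUNT(...) with SUM(...) and vice versa, case-insensitively."""
    def flip(match: re.Match) -> str:
        return "SUM(" if match.group(0).upper() == "COUNT(" else "COUNT("
    return re.sub(r"\b(?:COUNT|SUM)\(", flip, sql, flags=re.IGNORECASE)
```

As with the other heuristics, the flipped candidate is accepted only if it executes to $a_i$.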

| Heuristic | Question | Candidate SQL/QDMR | Modified Candidate SQL/QDMR |
|---|---|---|---|
| Phrase linking search | What are the distinct majors that students with treasurer votes are studying? | SELECT DISTINCT student.major FROM student, voting_record WHERE student.stuid = voting_record.stuid | SELECT DISTINCT student.major FROM student, voting_record WHERE student.stuid = voting_record.treasurer_vote |
| Distinct modification | Find the number of different product types. | SELECT products.product_type_code FROM products | SELECT DISTINCT products.product_type_code FROM products |
| Superlative modification | What is the size of the largest state in the USA? | (1) states in the usa; (2) size of #1; (3) state with the largest #2; (4) size of #3 | (1) states in the usa; (2) size of #1; (3) #1 where #2 is highest; (4) the size of #3 |
| Aggregate modification | Find the total student enrollment for different affiliation type schools. | SELECT university.affiliation, COUNT(university.enrollment) FROM university GROUP BY university.affiliation | SELECT university.affiliation, SUM(university.enrollment) FROM university GROUP BY university.affiliation |

Appendix C Data License

We list the license (when publicly available) and the release details of the datasets used in our paper.

The Break dataset Wolfson et al. (2020) is under the MIT License. Spider Yu et al. (2018) is under the CC BY-SA 4.0 License. Geo880 Zelle and Mooney (1996) is available under the GNU General Public License 2.0.

The text-to-SQL versions of Geo880 and Academic Li and Jagadish (2014) were made publicly available by Finegan-Dollak et al. (2018) in: https://github.com/jkkummerfeld/text2sql-data/.

The IMDB and Yelp datasets were publicly released by Yaghmazadeh et al. (2017) in: goo.gl/DbUBMM.

Appendix D SQL Synthesis Coverage

We provide additional results of SQL synthesis coverage. Table 11 lists the coverage results, per dataset, when discarding all examples whose SQL executes to an empty set. Out of the 9,313 original examples, 8,887 have non-empty denotations. Coverage scores per dataset remain generally the same as when evaluating on all examples. These results further indicate the effectiveness of the SQL synthesis procedure; namely, they ensure the synthesis results in Table 3 are faithful, despite the potential noise introduced by SQL with empty denotations.
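The filtering and coverage computation behind Table 11 amounts to the following toy sketch, where `execute` is a hypothetical stand-in for running the gold SQL against the database:

```python
# Toy sketch of per-dataset coverage after discarding empty denotations.
def coverage(examples, execute, synthesized_ids):
    """examples: list of (example_id, gold_sql) pairs.
    Keeps only examples with a non-empty result set, then reports
    (kept, synthesized-among-kept, coverage %)."""
    kept = [(i, q) for i, q in examples if execute(q)]  # non-empty only
    hits = sum(1 for i, _ in kept if i in synthesized_ids)
    return len(kept), hits, round(100 * hits / len(kept), 1)
```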

| Dataset | DB # | Examples | Synthesized | Coverage % |
|---|---|---|---|---|
| Academic | 1 | 183 | 148 | 80.9 |
| Geo880 | 1 | 846 | 707 | 83.6 |
| IMDB | 1 | 113 | 101 | 89.4 |
| Yelp | 1 | 66 | 54 | 81.8 |
| Spider dev | 20 | 978 | 745 | 76.2 |
| Spider train | 140 | 6,701 | 5,137 | 76.7 |
| Total | 164 | 8,887 | 6,892 | 77.6 |
| Spider pred. | 20 | 978 | 750 | 76.7 |

Appendix E NL to SQL Models Results

E.1 Evaluation on the Full XSP Datasets

We provide additional results of the models trained on Spider. Namely, we evaluate on all examples of the Academic, IMDB and Yelp datasets, including examples whose denotations are empty. Table 12 lists the results of all the models trained on the original training set of Spider. In Table 13 we provide the XSP results of the models trained on the random subsets of Spider train, used in §4.3.4. Similar to our previous experiments, T5-QDMR-P is generally better than T5-SQL-G in terms of its cross-database generalization.

E.2 Qualitative Results

Table 14 includes example predictions of our Spider-trained models from Tables 5-6. For each example we describe its question and target (gold) SQL annotation, followed by each model's result.

| Model | Supervision | Training set | Spider dev. | Academic | Geo880 | IMDB | Yelp |
|---|---|---|---|---|---|---|---|
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 7,000 | 68.0 ± 0.3 | 7.9 ± 1.3 | 33.6 ± 2.5 | 19.1 ± 2.9 | 25.3 ± 1.7 |
| T5-SQL-Gpart | $\langle x_i, Q_i, D_i \rangle$ | 5,349 | 66.4 ± 0.8 | 4.9 ± 1.7 | 32.4 ± 1.3 | 21.1 ± 0.7 | 26.1 ± 1.0 |
| T5-QDMR-G | $\langle x_i, a_i, s_i, D_i \rangle$ | 5,349 | 65.8 ± 0.3 | 11.2 ± 1.0 | 40.4 ± 1.8 | 30.3 ± 3.1 | 25.8 ± 5.1 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 5,075 | 62.9 ± 0.8 | 8.4 ± 0.9 | 39.7 ± 1.4 | 27.0 ± 5.1 | 28.2 ± 2.9 |
| Model | Supervision | Train. set | DB # | Spider dev. | Academic | Geo880 | IMDB | Yelp |
|---|---|---|---|---|---|---|---|---|
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 1,348 | 30 | 48.4 | 2.1 | 29.6 | 9.9 | 22.6 |
| T5-SQL-Gpart | $\langle x_i, Q_i, D_i \rangle$ | 1,129 | 30 | 47.4 | 2.6 | 26.9 | 14.5 | 16.9 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,129 | 30 | 46.2 | 8.4 | 29.0 | 16.0 | 16.9 |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 2,028 | 40 | 54.7 | 6.3 | 28.3 | 18.3 | 21.0 |
| T5-SQL-Gpart | $\langle x_i, Q_i, D_i \rangle$ | 1,440 | 40 | 51.3 | 3.7 | 21.2 | 12.2 | 19.4 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,440 | 40 | 52.1 | 6.8 | 27.4 | 12.2 | 18.5 |
| T5-SQL-G | $\langle x_i, Q_i, D_i \rangle$ | 2,076 | 40 | 56.2 | 3.2 | 25.5 | 13.0 | 24.5 |
| T5-SQL-Gpart | $\langle x_i, Q_i, D_i \rangle$ | 1,552 | 40 | 53.7 | 2.3 | 17.8 | 10.2 | 22.8 |
| T5-QDMR-P | $\langle x_i, a_i, D_i \rangle$ | 1,552 | 40 | 53.8 | 6.1 | 32.3 | 19.8 | 21.8 |
Question: Return me the total citations of papers in the VLDB conference in 2005.
Target SQL: select sum ( publication_0.citation_num ) from conference as conference_0, publication as publication_0 where conference_0.name = "VLDB" and publication_0.year = 2005 and conference_0.cid = publication_0.cid;
T5-SQL-G: select sum(t1.citation_num) from publication as t1 join conference as t2 on t1.cid = t2.cid where t2.name = "VLDB" and t1.year < 2005; ✓
T5-QDMR-G: SELECT SUM(cite.cited) FROM publication, cite, conference WHERE conference.cid = publication.cid AND publication.pid = cite.cited AND conference.cid IN ( SELECT conference.cid FROM conference WHERE conference.name = 'VLDB' ) AND publication.year < 2005; ✗
T5-QDMR-P: SELECT SUM(cite.cited) FROM cite, publication, conference WHERE publication.pid = cite.cited AND conference.cid = publication.cid AND conference.cid IN ( SELECT conference.cid FROM publication, conference WHERE conference.cid = publication.cid AND conference.name = 'VLDB' AND publication.year < 2005 ); ✗
Question: Return me the papers written by H. V. Jagadish and Yunyao Li after 2005.
Target SQL: select publication_0.title from author as author_0, author as author_1, publication as publication_0, writes as writes_0, writes as writes_1 where author_0.name = "Yunyao Li" and author_1.name = "H. V. Jagadish" and publication_0.year > 2005 and author_0.aid = writes_0.aid and author_1.aid = writes_1.aid and publication_0.pid = writes_0.pid and publication_0.pid = writes_1.pid;
T5-SQL-G: select t3.title from author as t1 join writes as t2 on t1.aid = t2.aid join publication as t3 on t2.citation_num = t3.pid where t1.name = "H. V. Jagadish" intersect select t3.title from author as t1 join writes as t2 on t1.aid = t2.aid join journal as t3 on t2.journal = t3.jid where t3.name = "Yunyao Li"; ✗
T5-QDMR-G: SELECT publication.abstract FROM author, publication, writes WHERE publication.pid = writes.pid AND writes.aid = author.aid AND author.name = 'h.v.jagadish' AND publication.year > 2005 ✗
T5-QDMR-P: SELECT journal.homepage FROM journal, author, publication, writes WHERE journal.jid = publication.jid AND publication.pid = writes.pid AND writes.aid = author.aid AND author.name = 'h.v.jagadish' AND publication.year > 2005 ✗
Question: What is the shortest river?
Target SQL: SELECT river.river_name FROM river WHERE river.length = (SELECT min (river.length) FROM river);
T5-SQL-G: select length from river order by length limit 1; ✗
T5-QDMR-G: SELECT river.river_name FROM river WHERE river.river_name IN ( SELECT river.river_name FROM river ) ORDER BY river.length ASC LIMIT 1; ✓
T5-QDMR-P: SELECT river.river_name FROM river WHERE river.river_name IN ( SELECT river.river_name FROM river ) ORDER BY river.length ASC LIMIT 1; ✓
Question: Find the actor with most number of films.
Target SQL: select actor_0.name from actor as actor_0, cast as cast_0, movie as movie_0 where actor_0.aid = cast_0.aid and cast_0.msid = movie_0.mid order by count(distinct(movie_0.title)) desc limit 1;
T5-SQL-G: select t1.name from actor as t1 join cast as t2 on t1.aid = t2.id group by t1.aid order by count(*) desc limit 1; ✗
T5-QDMR-G: SELECT actor.name FROM actor, movie, cast WHERE actor.aid = cast.aid AND cast.msid = movie.mid GROUP BY actor.name ORDER BY COUNT(movie.mid) DESC LIMIT 1; ✓
T5-QDMR-P: SELECT actor.name FROM actor, cast, movie WHERE actor.aid = cast.aid AND cast.msid = movie.mid GROUP BY actor.name ORDER BY COUNT(movie.mid) DESC LIMIT 1; ✓
Question: Which business has the most number of checkins?
Target SQL: select business_0.name from business as business_0, checkin as checkin_0 where business_0.business_id = checkin_0.business_id group by business_0.name order by sum(checkin_0.count) desc limit 1;
T5-SQL-G: select t1.name from business as t1 join checkin as t2 on t1.business_id = t2.business_id group by t2.business_id order by count(*) desc limit 1; ✗
T5-QDMR-G: SELECT business.name FROM checkin, business WHERE business.business_id = checkin.business_id GROUP BY business.name ORDER BY COUNT(checkin.cid) DESC LIMIT 1; ✗
T5-QDMR-P: SELECT business.name FROM checkin, business WHERE business.business_id = checkin.business_id GROUP BY business.name ORDER BY COUNT(checkin.cid) DESC LIMIT 1; ✗