
Text-to-SQL: Help agents turn natural language into SQL queries #762

Draft
amotl wants to merge 1 commit into main from llm

Conversation

@amotl (Member) commented Apr 15, 2026

@coderabbitai coderabbitai bot commented Apr 15, 2026

Warning

Rate limit exceeded

@amotl has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 45 minutes and 18 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 45 minutes and 18 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: f8555cb3-4cb5-47dd-9f83-65b4a4abe714

📥 Commits

Reviewing files that changed from the base of the PR and between d6dc462 and 39957e5.

📒 Files selected for processing (10)
  • CHANGES.md
  • cratedb_toolkit/query/cli.py
  • cratedb_toolkit/query/llm/__init__.py
  • cratedb_toolkit/query/llm/api.py
  • cratedb_toolkit/query/llm/cli.py
  • cratedb_toolkit/query/llm/model.py
  • cratedb_toolkit/query/llm/util.py
  • doc/query/index.md
  • doc/query/llm/index.md
  • pyproject.toml

Walkthrough

A new LLM-to-database query module is introduced with domain models for LLM and database configuration, a utility function to initialize LLM backends (OpenAI, Azure, Ollama), and a QueryData API that converts natural language questions into SQL queries using LlamaIndex's NLSQLTableQueryEngine.
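
For orientation, here is a minimal usage sketch of the new module. The module paths come from the file list above and the `QueryData(db, model)` call from the sequence diagram below; all other constructor arguments and field names are assumptions for illustration, not the actual API.

```python
import sqlalchemy as sa

from cratedb_toolkit.query.llm.api import QueryData
from cratedb_toolkit.query.llm.model import DatabaseInfo, ModelInfo, ModelType

# Hypothetical DSN; connecting to CrateDB requires the CrateDB SQLAlchemy dialect.
db = DatabaseInfo(
    engine=sa.create_engine("crate://localhost:4200"),
    schema="doc",
    include_tables=["readings"],
)
# Credentials fall back to environment variables such as OPENAI_API_KEY.
model = ModelInfo(type=ModelType.OPENAI)

qd = QueryData(db=db, model=model)   # __post_init__() runs setup()
response = qd.ask("What was the average temperature last week?")
print(response)
```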

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Module Documentation**<br>`CHANGES.md` | Added changelog entry documenting new QueryData functionality for natural language to SQL conversion. |
| **Domain Models**<br>`cratedb_toolkit/query/llm/model.py` | Introduced ModelType enum for LLM providers (OpenAI, Azure, Ollama) and ModelInfo/DatabaseInfo dataclasses for configuring LLM and database connectivity parameters. |
| **LLM Configuration**<br>`cratedb_toolkit/query/llm/util.py` | Added configure_llm() utility function that instantiates and configures LLM backends and embedding models based on provider type, with support for OpenAI, Azure OpenAI, and Ollama with environment-based credentials. |
| **Query API**<br>`cratedb_toolkit/query/llm/api.py` | Implemented QueryData dataclass that initializes a natural language query engine on instantiation, validates setup, provides ask() method to execute queries, and includes a CLI entry point for demo execution. |
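
To make the moving parts concrete, the following is a rough sketch of the domain models as inferred from the summaries and review comments in this PR; field names beyond those cited (engine, schema, include_tables, ignore_tables, api_key, endpoint, instance, api_version) and all defaults are assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

import sqlalchemy as sa


class ModelType(Enum):
    # Providers named in the walkthrough; the actual enum values may differ.
    OPENAI = auto()
    AZURE = auto()
    OLLAMA = auto()


@dataclass
class ModelInfo:
    type: ModelType
    api_key: Optional[str] = None      # falls back to OPENAI_API_KEY
    endpoint: Optional[str] = None     # falls back to OPENAI_AZURE_ENDPOINT
    instance: Optional[str] = None     # falls back to LLM_INSTANCE
    api_version: Optional[str] = None  # falls back to OPENAI_AZURE_API_VERSION


@dataclass
class DatabaseInfo:
    engine: sa.engine.Engine
    schema: Optional[str] = None
    include_tables: Optional[List[str]] = None
    ignore_tables: Optional[List[str]] = None
```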

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant QueryData
    participant LLMConfig as LLM Config
    participant LlamaIndex
    participant CrateDB

    User->>QueryData: Create QueryData(db, model)
    activate QueryData
    QueryData->>QueryData: __post_init__()
    QueryData->>QueryData: setup()
    QueryData->>LLMConfig: configure_llm(model)
    activate LLMConfig
    LLMConfig->>LLMConfig: Instantiate LLM Backend<br/>(OpenAI/Azure/Ollama)
    LLMConfig->>LLMConfig: Create Embedding Model
    LLMConfig-->>QueryData: (llm, embedding)
    deactivate LLMConfig
    QueryData->>LlamaIndex: Create NLSQLTableQueryEngine<br/>(db, llm, embedding)
    LlamaIndex->>CrateDB: Connect & Inspect Schema
    deactivate QueryData

    User->>QueryData: ask(question)
    activate QueryData
    QueryData->>QueryData: Validate query_engine
    QueryData->>LlamaIndex: query(question)
    activate LlamaIndex
    LlamaIndex->>CrateDB: Generate & Execute SQL
    CrateDB-->>LlamaIndex: Results
    LlamaIndex-->>QueryData: Response with metadata
    deactivate LlamaIndex
    QueryData->>QueryData: Log response & sources
    QueryData-->>User: Return LlamaIndex Response
    deactivate QueryData

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A rabbit's ode to queries newfound:
With LLMs and engines spinning round,
From natural words to SQL's sweet sound,
QueryData hops where questions abound!
No more manual joins—just ask and bound! 🚀

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the main feature being added: a Text-to-SQL capability that helps agents convert natural language queries into SQL, which aligns with the new QueryData dataclass and LLM-to-DB query flow introduced in the changeset. |
| Description check | ✅ Passed | The pull request description is related to the changeset, explaining the QueryData feature for converting natural language to SQL queries mentioned in CHANGES.md and implemented across the new modules. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch llm

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cratedb_toolkit/query/llm/api.py`:
- Around line 84-88: Change the four logger.info calls that print full
query/answer/sources/metadata to DEBUG-level logging (or redact sensitive
fields) to avoid leaking query results: replace logger.info("Query was: %s",
question), logger.info("Answer was: %s", response),
logger.info(response.get_formatted_sources()), and
logger.info(response.metadata) with logger.debug equivalents (or sanitize
response/metadata before logging) so only DEBUG mode exposes full content while
normal INFO logs remain safe.
- Around line 54-59: The setup method creates a new SQLAlchemy engine and calls
an unscoped connect(), which ignores the caller-provided DatabaseInfo.engine and
leaks a connection; change setup() to use the existing engine on self.db
(self.db.engine) instead of creating engine_crate and remove the standalone
engine_crate.connect() call (or if a scoped connection is required, acquire it
via a context manager from self.db.engine). Ensure all references to
engine_crate are replaced with self.db.engine so the caller's configuration is
honored and no permanent open connection is left.
- Around line 67-70: SQLDatabase is being instantiated without passing through
DatabaseInfo filters, so schema/include_tables/ignore_tables set on DatabaseInfo
are ignored; update the setup() method where SQLDatabase(engine_crate, ...) is
created to forward DatabaseInfo.schema, DatabaseInfo.include_tables, and
DatabaseInfo.ignore_tables into the SQLDatabase constructor (or compute the
include/ignore lists from DatabaseInfo if names differ), ensuring the
SQLDatabase instance receives the schema and table filters defined on the
DatabaseInfo object.

In `@cratedb_toolkit/query/llm/util.py`:
- Around line 47-60: The code currently ignores ModelInfo fields and always
reads Azure/OpenAI config from environment variables; update the
OpenAI/AzureOpenAI construction to prefer values from the passed ModelInfo
instance (info.api_key, info.endpoint, info.instance, info.api_version) and only
fall back to os.getenv(...) when those ModelInfo fields are empty/None;
specifically modify the branches that create OpenAI and AzureOpenAI (referencing
llm, completion_model, info, ModelType.AZURE) so they pass api_key=info.api_key
or os.getenv(...), azure_endpoint=info.endpoint or os.getenv(...),
engine=info.instance or os.getenv(...), and api_version=info.api_version or
os.getenv(...).
- Around line 73-85: The embed_model variable currently returns the string
"local" in the Ollama branch which violates the declared BaseEmbedding return
type; change that branch to return None (since NLSQLTableQueryEngine.embed_model
accepts Optional[BaseEmbedding]) and update the function's return type
annotation to reflect Optional[BaseEmbedding] (or a tuple containing
Optional[BaseEmbedding] if the function returns (llm, embed_model)); also remove
the ty: ignore[invalid-return-type] comment and ensure references to ModelType
and the return tuple (llm, embed_model) in this function are updated
accordingly.
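
Taken together, the three api.py comments suggest a setup()/ask() roughly like the sketch below: reuse the caller's engine, forward the DatabaseInfo filters to SQLDatabase, and demote result logging to DEBUG. This is an illustration only; attribute names such as self.db, self.model, and self.query_engine follow the walkthrough, everything else is an assumption.

```python
import logging

from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

from cratedb_toolkit.query.llm.util import configure_llm

logger = logging.getLogger(__name__)


# Methods as they might look on the QueryData dataclass.
def setup(self) -> None:
    # Reuse the caller-provided engine instead of creating a second one,
    # and forward the schema/table filters defined on DatabaseInfo.
    sql_database = SQLDatabase(
        self.db.engine,
        schema=self.db.schema,
        include_tables=self.db.include_tables,
        ignore_tables=self.db.ignore_tables,
    )
    llm, embed_model = configure_llm(self.model)
    self.query_engine = NLSQLTableQueryEngine(
        sql_database=sql_database,
        llm=llm,
        embed_model=embed_model,
    )


def ask(self, question: str):
    if self.query_engine is None:
        raise RuntimeError("Query engine is not initialized; call setup() first.")
    response = self.query_engine.query(question)
    # Full query/answer/source/metadata content only at DEBUG level.
    logger.debug("Query was: %s", question)
    logger.debug("Answer was: %s", response)
    logger.debug(response.get_formatted_sources())
    logger.debug(response.metadata)
    return response
```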
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: e342e93e-bc9c-4a49-9446-79f272a76782

📥 Commits

Reviewing files that changed from the base of the PR and between 9800da5 and d6dc462.

📒 Files selected for processing (5)
  • CHANGES.md
  • cratedb_toolkit/query/llm/__init__.py
  • cratedb_toolkit/query/llm/api.py
  • cratedb_toolkit/query/llm/model.py
  • cratedb_toolkit/query/llm/util.py

Comment thread cratedb_toolkit/query/llm/api.py
Comment thread cratedb_toolkit/query/llm/api.py
Comment thread cratedb_toolkit/query/llm/api.py Outdated
Comment on lines +47 to +60
        llm = OpenAI(
            model=completion_model,
            temperature=0.0,
            api_key=os.getenv("OPENAI_API_KEY"),
        )
    elif info.type is ModelType.AZURE:
        llm = AzureOpenAI(
            model=completion_model,
            temperature=0.0,
            engine=os.getenv("LLM_INSTANCE"),
            azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
            api_key=os.getenv("OPENAI_API_KEY"),
            api_version=os.getenv("OPENAI_AZURE_API_VERSION"),
        )

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n cratedb_toolkit/query/llm/util.py | head -100

Repository: crate/cratedb-toolkit

Length of output: 3759


🏁 Script executed:

cat -n cratedb_toolkit/query/llm/model.py

Repository: crate/cratedb-toolkit

Length of output: 906


Honor ModelInfo fields before falling back to env vars.

ModelInfo.endpoint, instance, api_key, and api_version are defined but ignored here, which makes programmatic configuration unreliable. Prefer ModelInfo values first, then env fallbacks.

Proposed fix
     if info.type is ModelType.OPENAI:
         llm = OpenAI(
             model=completion_model,
             temperature=0.0,
-            api_key=os.getenv("OPENAI_API_KEY"),
+            api_key=info.api_key or os.getenv("OPENAI_API_KEY"),
         )
@@
         llm = AzureOpenAI(
             model=completion_model,
             temperature=0.0,
-            engine=os.getenv("LLM_INSTANCE"),
-            azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
-            api_key=os.getenv("OPENAI_API_KEY"),
-            api_version=os.getenv("OPENAI_AZURE_API_VERSION"),
+            engine=info.instance or os.getenv("LLM_INSTANCE"),
+            azure_endpoint=info.endpoint or os.getenv("OPENAI_AZURE_ENDPOINT"),
+            api_key=info.api_key or os.getenv("OPENAI_API_KEY"),
+            api_version=info.api_version or os.getenv("OPENAI_AZURE_API_VERSION"),
         )
@@
             AzureOpenAIEmbeddings(
-                azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
+                azure_endpoint=info.endpoint or os.getenv("OPENAI_AZURE_ENDPOINT"),
                 model=embedding_model,
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cratedb_toolkit/query/llm/util.py` around lines 47 - 60, The code currently
ignores ModelInfo fields and always reads Azure/OpenAI config from environment
variables; update the OpenAI/AzureOpenAI construction to prefer values from the
passed ModelInfo instance (info.api_key, info.endpoint, info.instance,
info.api_version) and only fall back to os.getenv(...) when those ModelInfo
fields are empty/None; specifically modify the branches that create OpenAI and
AzureOpenAI (referencing llm, completion_model, info, ModelType.AZURE) so they
pass api_key=info.api_key or os.getenv(...), azure_endpoint=info.endpoint or
os.getenv(...), engine=info.instance or os.getenv(...), and
api_version=info.api_version or os.getenv(...).

Comment thread cratedb_toolkit/query/llm/util.py Outdated
Comment on lines +73 to +85
    if info.type is ModelType.OPENAI:
        embed_model = LangchainEmbedding(OpenAIEmbeddings(model=embedding_model))
    elif info.type is ModelType.AZURE:
        embed_model = LangchainEmbedding(
            AzureOpenAIEmbeddings(
                azure_endpoint=os.getenv("OPENAI_AZURE_ENDPOINT"),
                model=embedding_model,
            )
        )
    else:
        embed_model = "local"

    return llm, embed_model  # ty: ignore[invalid-return-type]

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In current LlamaIndex docs for NLSQLTableQueryEngine, what is the accepted type of embed_model? Is a plain string like "local" supported?

💡 Result:

The accepted type for embed_model in the current LlamaIndex docs for NLSQLTableQueryEngine is Optional[BaseEmbedding]. A plain string like "local" is not directly supported as the parameter value; it expects an instance of an embedding model class implementing BaseEmbedding, such as HuggingFaceEmbedding(model_name="BAAI/bge-small-en"). However, if not provided (None), it falls back to Settings.embed_model, which can be set globally to such an instance.


🏁 Script executed:

# First, verify the file exists and examine the function
find . -name "util.py" -path "*/query/llm/*" -type f

Repository: crate/cratedb-toolkit

Length of output: 100


🏁 Script executed:

# Once found, read the file to verify code state
cat -n cratedb_toolkit/query/llm/util.py | head -100

Repository: crate/cratedb-toolkit

Length of output: 3759


🏁 Script executed:

# Search for usages of configure_llm
rg "configure_llm" -A 3 -B 3

Repository: crate/cratedb-toolkit

Length of output: 1541


🏁 Script executed:

# Search for usages of NLSQLTableQueryEngine to understand how embed_model is passed
rg "NLSQLTableQueryEngine" -A 5 -B 2

Repository: crate/cratedb-toolkit

Length of output: 2228


Fix type inconsistency in embed_model for Ollama branch.

Line 83 returns a string ("local") while the function signature declares it returns BaseEmbedding. This breaks the contract and will cause a runtime error when NLSQLTableQueryEngine receives the value. The type mismatch is currently masked by ty: ignore[invalid-return-type] on line 85.

Per LlamaIndex API documentation, NLSQLTableQueryEngine.embed_model accepts Optional[BaseEmbedding], not plain strings. Return None instead of "local" and update the return type annotation accordingly.

Proposed fix
-from typing import Tuple
+from typing import Optional, Tuple
@@
-def configure_llm(info: ModelInfo, debug: bool = False) -> Tuple[LLM, BaseEmbedding]:
+def configure_llm(info: ModelInfo, debug: bool = False) -> Tuple[LLM, Optional[BaseEmbedding]]:
@@
-    else:
-        embed_model = "local"
+    else:
+        embed_model = None
 
-    return llm, embed_model  # ty: ignore[invalid-return-type]
+    return llm, embed_model
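
If the "local" marker was meant to request a genuinely local embedding model rather than no model, an alternative to returning None is to construct one explicitly, along the lines of the web query result above. A sketch, assuming the llama-index HuggingFace embeddings package is installed; the model name is only an example:

```python
# pip install llama-index-embeddings-huggingface
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en")
```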
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cratedb_toolkit/query/llm/util.py` around lines 73 - 85, The embed_model
variable currently returns the string "local" in the Ollama branch which
violates the declared BaseEmbedding return type; change that branch to return
None (since NLSQLTableQueryEngine.embed_model accepts Optional[BaseEmbedding])
and update the function's return type annotation to reflect
Optional[BaseEmbedding] (or a tuple containing Optional[BaseEmbedding] if the
function returns (llm, embed_model)); also remove the ty:
ignore[invalid-return-type] comment and ensure references to ModelType and the
return tuple (llm, embed_model) in this function are updated accordingly.

@amotl amotl force-pushed the llm branch 2 times, most recently from b231b23 to 51706ec, on April 15, 2026 at 23:36
@amotl amotl changed the title from "QueryData: Help agents turn natural language into SQL queries" to "Text-to-SQL: Help agents turn natural language into SQL queries" on Apr 15, 2026
DataQuery is the little sister of Google's QueryData product.