Dialect-specific parsing and Snowflake JSON support #241
Comments
Hey, I wrote about this recently in #7 (comment)
If the question was specifically about discussing a change like c67d667, then I was too quick to close this as a duplicate, as this is something we'll need eventually.
Thanks @nickolay for the fast response :). Following your answer I went through issue #7 and the issues mentioned there. What I currently hope to do is to parse Snowflake's JSON queries with this lib, and I wonder what the correct way of doing that is. Since a JSON query in Snowflake can look non-native from an ANSI SQL perspective, the first idea that came to my mind is a dialect. JSON SQL in Snowflake could look like the example sketched below. I could also think of some other approaches to deal with that, but maybe those should go in another issue with a more suitable title :)
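To illustrate (with made-up table and column names), a Snowflake query over semi-structured data combines `:` path access into VARIANT values, `::` casts, and the `LATERAL FLATTEN` table function, roughly like this:

```sql
-- Hypothetical table "events" with a VARIANT column "payload".
SELECT
    payload:user.name::string AS user_name,   -- path access into the VARIANT, cast to string
    f.value:sku::string       AS item_sku     -- one row per element of payload:items
FROM events,
     LATERAL FLATTEN(input => payload:items) f;
```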
Right. This is a good example where different dialects conflict.

Something like c67d667 (i.e. the first of the two options I listed in #207 (comment)) does seem to be a good solution for this, although there is a question of when to use individual "toggles" versus dialect-based switches. I would be more comfortable with dialect-based switches, as that means N dialects to reason about instead of 2^N combinations with fine-grained toggles. @Dandandan, @maxcountryman any thoughts on this?

The different ways to write subscripts do not seem to conflict syntactically at first glance (see the sketch below):
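As a rough illustration (table and column names invented), compare Snowflake's colon paths with PostgreSQL's JSON operators and array subscripts:

```sql
-- Snowflake: colon-based path access into a VARIANT column
SELECT src:salesperson.name FROM car_sales;

-- PostgreSQL: -> / ->> operators for json/jsonb, plus [n] subscripts for arrays
SELECT payload -> 'salesperson' ->> 'name', tags[1] FROM car_sales;
```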
There are probably issues around precedence, though, as I can't guess the grammar PostgreSQL uses just from its docs.
I agree that dialect-based switches seem preferable here. Unfortunately I can't think of a better way of handling these kinds of things.
Cool. I think there is still one issue about the dialect-specific values in the AST. For example, currently in the derived-table definition there is no notion of "FLATTEN" or table functions in general. Currently I see 3 options:
The enum would look something like this:
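(A rough sketch only; the `DialectSpecificTableFactor` name and its fields are invented for illustration, and `Expr` is sqlparser's expression type.)

```rust
use sqlparser::ast::Expr;

/// Sketch: a place in the AST for table factors that exist in a single
/// dialect, such as Snowflake's LATERAL FLATTEN table function.
#[derive(Debug, Clone, PartialEq)]
pub enum DialectSpecificTableFactor {
    /// Snowflake's FLATTEN(input => <expr>, ...) table function.
    SnowflakeFlatten {
        /// The arguments passed to FLATTEN.
        args: Vec<Expr>,
    },
}
```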
and the derived table struct would look like this:
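(Again a sketch rather than real library code; `Query` and `TableAlias` are sqlparser AST types, and `dialect_specific` is the invented part.)

```rust
use sqlparser::ast::{Query, TableAlias};

/// Sketch: a derived table that can optionally carry a
/// dialect-specific table factor in addition to a plain subquery.
#[derive(Debug, Clone, PartialEq)]
pub struct DerivedTable {
    pub lateral: bool,
    /// The usual (SELECT ...) subquery, if any.
    pub subquery: Option<Box<Query>>,
    /// A dialect-specific construct such as Snowflake's FLATTEN
    /// (the enum sketched above), if any.
    pub dialect_specific: Option<DialectSpecificTableFactor>,
    pub alias: Option<TableAlias>,
}
```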
For me, option 3 looks like the best solution.
Thanks @nickolay for the explanation. So I think I am ready to start working on a PR. My plan is to start with #223, as it suits the dialect approach and could also help others.
Cool! Actually, #223 is the least clear part of the puzzle to me (I added a comment there), but I'll be glad to play along.
Ok, here is my try:
I also opened another PR for supporting named argument functions.
Almost 5 years without an update. Can this be closed in light of https://github.com/apache/datafusion-sqlparser-rs/blob/main/tests/sqlparser_custom_dialect.rs ?
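For context, a minimal custom dialect along those lines might look roughly like this (a sketch, not the test file's actual code; `MyDialect` is an invented name and only the identifier hooks are customized):

```rust
use sqlparser::dialect::Dialect;
use sqlparser::parser::Parser;

/// Sketch of a user-defined dialect: only the identifier rules are customized.
#[derive(Debug)]
struct MyDialect;

impl Dialect for MyDialect {
    fn is_identifier_start(&self, ch: char) -> bool {
        ch.is_ascii_alphabetic() || ch == '_'
    }
    fn is_identifier_part(&self, ch: char) -> bool {
        // Allow '$' inside identifiers, which the generic dialect may not.
        ch.is_ascii_alphanumeric() || ch == '_' || ch == '$'
    }
}

fn main() {
    let ast = Parser::parse_sql(&MyDialect, "SELECT a$b FROM t").unwrap();
    println!("{ast:?}");
}
```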
Hi,
Currently the lib supports different database dialects (PostgreSQL, MySQL, etc.) only in the tokenizer.
My question is: were there any discussions/plans about supporting per-DB dialects also in the parser (or do you think it's a good idea to start such a discussion)?
While most of the DBs support ANSI SQL, when it comes to more advanced features like JSON parsing there is great variability between the different databases.
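To make this concrete, here is a minimal sketch of how a dialect is passed in (assuming the sqlparser crate as a dependency); the dialect mainly influences tokenization, such as which characters may appear in identifiers, while the parsing logic is shared:

```rust
use sqlparser::dialect::PostgreSqlDialect;
use sqlparser::parser::Parser;

fn main() {
    // The dialect is supplied here, but most of the grammar is dialect-agnostic.
    let sql = "SELECT name FROM customers WHERE id = 1";
    let ast = Parser::parse_sql(&PostgreSqlDialect {}, sql).unwrap();
    println!("{ast:?}");
}
```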