Strongly disagree with this. At @onefact we have taught high-school students, college students, postgraduates (Master's and PhD), and faculty in non-technical subjects (e.g. the humanities) to improve Python scripts, and to write SQL. The homework from the first week of class last summer (Colab here) was featured on the @motherduckdb blog (https://motherduck.com/blog/introducing-column-explorer/) and is currently used in the onboarding user flow as an example dataset. Please let me know if you need further examples. If we are wrong about this assumption or our experience, we need to revise our data thinking curricula for the fall semester!
I was asked to keep this viewpoint separate from issues #1346 and #1405.
Also, observing the rise of DuckDB and MotherDuck convinces me that "the best dashboards are built with code" does not scale.
SQL/DuckDB, versus pandas or R, enables colleagues and users who are comfortable with Excel to adjust 90% of a dashboard themselves.
For the remaining 10%, I can help optimize their SQL or use a trick like Hive partitioning.
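To illustrate the Hive-partitioning trick: DuckDB can read a directory of Parquet files whose paths encode partition columns, and it prunes partitions when the query filters on them. A minimal sketch, assuming a hypothetical layout like `data/year=2023/month=01/part-0.parquet`:

```sql
-- hive_partitioning = true turns path segments (year=..., month=...)
-- into queryable columns; the WHERE clause lets DuckDB skip
-- non-matching files entirely instead of scanning everything.
SELECT category, sum(amount) AS total
FROM read_parquet('data/*/*/*.parquet', hive_partitioning = true)
WHERE year = 2023 AND month = 1
GROUP BY category;
```

The point for non-developers is that the query itself stays plain SQL; the optimization lives in how the files are laid out on disk.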
Promoting the "any data loader" path reduces Observable Framework to a tool for developers.
DuckDB talks underline removing the barriers to entry that a Spark cluster or even a Postgres database has.
Just as DuckDB strives to "just work" for CSV or Parquet instead of telling users that their input is not valid.
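As an example of that "just works" behavior, DuckDB can query a CSV file directly, inferring the delimiter, header, and column types without any schema declaration up front (the file name here is a hypothetical placeholder):

```sql
-- No CREATE TABLE, no type declarations: DuckDB sniffs the
-- delimiter, header row, and column types from the file itself.
SELECT *
FROM read_csv_auto('survey_responses.csv')
LIMIT 10;
```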
IMHO you should target the modern browser stack plus DuckDB-WASM, or use DuckDB as an implicit data loader for CSV or SQL/Parquet.
Python, R, etc. can have their uses, but just as with Quarto, it seems more valuable to "just work with your less-than-perfect data in CSV, Excel, or Parquet" and support incremental data updates than to say "you have to transform your data or write a Python script to start."
I can always optimize the SQL to perform better; non-developers cannot improve a Python script...