Improved typings #279

Merged: 3 commits into next from BREAK/typing on Jun 13, 2022
Conversation

@soedirgo (Member) commented Jun 2, 2022

BREAKING CHANGE: Rework typings.

Closes #172, closes #217.

The current typings DX is not great. The way users supply types is through `.from<T>()`, which must be done on each call. What ends up happening most of the time is that users omit the `T`, which defaults to `any`, because this is the path of least resistance.

Another sticking point is that the `T` above also becomes the query result type, even though you can, e.g., select only certain columns.

This change intends to alleviate the above by 1) accepting type definitions at the constructor, and 2) parsing the `.select()` query to infer the result type.

As an example, using the type definitions in test/types.ts:

```ts
type Json = string | number | boolean | null | { [key: string]: Json } | Json[]

export interface Database {
  public: {
    Tables: {
      users: {
        Row: {
          username: string
          data: Json | null
          age_range: string | null
          status: 'ONLINE' | 'OFFLINE' | null
          catchphrase: string | null
        }
        Insert: {
          username: string
          data?: Json | null
          age_range?: string | null
          status?: 'ONLINE' | 'OFFLINE' | null
          catchphrase?: string | null
        }
        Update: {
          username?: string
          data?: Json | null
          age_range?: string | null
          status?: 'ONLINE' | 'OFFLINE' | null
          catchphrase?: string | null
        }
      }
      channels: {
        Row: {
          id: number
          data: Json | null
          slug: string | null
        }
        Insert: {
          id?: number
          data?: Json | null
          slug?: string | null
        }
        Update: {
          id?: number
          data?: Json | null
          slug?: string | null
        }
      }
      messages: {
        Row: {
          id: number
          data: Json | null
          message: string | null
          username: string
          channel_id: number
        }
        Insert: {
          id?: number
          data?: Json | null
          message?: string | null
          username: string
          channel_id: number
        }
        Update: {
          id?: number
          data?: Json | null
          message?: string | null
          username?: string
          channel_id?: number
        }
      }
    }
    Functions: {
      get_status: {
        Args: {
          name_param: string
        }
        Returns: 'ONLINE' | 'OFFLINE'
      }
      get_username_and_status: {
        Args: {
          name_param: string
        }
        Returns: {
          username: string
          status: 'ONLINE' | 'OFFLINE' | null
        }[]
      }
      void_func: {
        Args: {}
        Returns: undefined
      }
    }
  }
  personal: {
    Tables: {
      users: {
        Row: {
          username: string
          data: Json | null
          age_range: string | null
          status: 'ONLINE' | 'OFFLINE' | null
        }
        Insert: {
          username: string
          data?: Json | null
          age_range?: string | null
          status?: 'ONLINE' | 'OFFLINE' | null
        }
        Update: {
          username?: string
          data?: Json | null
          age_range?: string | null
          status?: 'ONLINE' | 'OFFLINE' | null
        }
      }
    }
    Functions: {
      get_status: {
        Args: {
          name_param: string
        }
        Returns: 'ONLINE' | 'OFFLINE' | null
      }
    }
  }
}
```

We can use it like:

```ts
import { Database } from './types'

const postgrest = new PostgrestClient<Database>('https://...')

const res = await postgrest
  .from(
    'users' // 'users' | 'channels' | 'messages'
  ).update(
    { data: 'foo' }, // {
                     //   username?: string
                     //   data?: Json | null
                     //   age_range?: string | null
                     //   status?: 'ONLINE' | 'OFFLINE' | null
                     //   catchphrase?: string | null
                     // } | { ... }[]
  ).eq(
    'data', // 'username' | 'data' | 'age_range' | 'status' | 'catchphrase'
    'foo',  // string | null
  ).select('name:username, status') // <- renames work too

const {
  data,  // {
         //   name: string  <- renamed
         //   status: string
         // }[] | undefined
  error, // PostgrestError | undefined
} = res
```

Most of the hard work in .select() query parsing is done by @bnjmnt4n in https://github.com/bnjmnt4n/postgrest-query. Here we're only implementing a subset of it because certain things are tricky to do properly, e.g.:

  • column casting (`::`)
  • JSON paths (`->`, `->>`)
  • embedded tables in select query parsing (`members(id)`)
  • embedded column types in filters (`members.id`)
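To give a feel for the subset being implemented, here is a hypothetical runtime sketch of the same parsing (the PR itself does this at the type level with TypeScript's template literal types; `parseSelect` and `SelectedField` are made-up names, not part of the PR, and the sketch handles only flat column lists with renames):

```typescript
// Hypothetical sketch: split a select query on commas and handle
// `alias:column` renames. Casting, JSON paths, and embedded tables
// (the tricky cases listed above) are deliberately not handled.
interface SelectedField {
  alias: string
  column: string
}

function parseSelect(query: string): SelectedField[] {
  return query.split(',').map((part) => {
    const trimmed = part.trim()
    const [left, right] = trimmed.split(':')
    // `name:username` selects column `username` under the key `name`;
    // a bare `status` keeps its own name.
    return right !== undefined
      ? { alias: left, column: right }
      : { alias: left, column: left }
  })
}

// parseSelect('name:username, status')
// → [{ alias: 'name', column: 'username' }, { alias: 'status', column: 'status' }]
```

The type-level version walks the string the same way, but produces an object type keyed by the aliases instead of a runtime array.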

Embedded tables in `.select()` are especially tricky because they require relation cardinality information, which can be calculated using FK information in the DB. E.g. for `.from('orgs').select('members(*)')`, orgs-to-members can be one-to-many, so `members` would be an array; but for the opposite, i.e. `.from('members').select('orgs(*)')`, `orgs` would be `object | null`. Right now all embedded tables are typed as `unknown`.
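The cardinality rule described above can be sketched with made-up FK metadata (none of these names exist in the PR; this is only an illustration of the decision procedure, not an implementation):

```typescript
// Hypothetical sketch: decide the type shape of an embedded table from
// foreign-key direction. A FK from the selected table to the embed is
// many-to-one (single nullable object); the reverse is one-to-many (array).
interface ForeignKey {
  table: string            // table holding the FK column, e.g. 'members'
  referencedTable: string  // table it points to, e.g. 'orgs'
}

type EmbedShape = 'object | null' | 'array' | 'unknown'

function embedShape(from: string, embedded: string, fks: ForeignKey[]): EmbedShape {
  if (fks.some((fk) => fk.table === from && fk.referencedTable === embedded)) {
    return 'object | null' // e.g. .from('members').select('orgs(*)')
  }
  if (fks.some((fk) => fk.table === embedded && fk.referencedTable === from)) {
    return 'array' // e.g. .from('orgs').select('members(*)')
  }
  return 'unknown' // what this PR currently emits for all embeds
}
```

As the discussion below notes, even with FK information this is nontrivial in practice (composite keys, self-references, junction tables), which is why the PR defers it.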

@soedirgo force-pushed the BREAK/typing branch 3 times, most recently from ef2d5ee to 4f9635b on June 3, 2022 11:32
```diff
@@ -60,8 +67,8 @@ export default class PostgrestFilterBuilder<T> extends PostgrestTransformBuilder
    * @param column The column to filter on.
    * @param value The value to filter with.
    */
-  eq(column: keyof T, value: T[keyof T]): this {
+  eq<ColumnName extends string & keyof Table>(column: ColumnName, value: Table[ColumnName]): this {
     this.url.searchParams.append(`${column}`, `eq.${value}`)
```
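The point of the new signature is that the generic parameter correlates the column name with that column's value type, instead of the old `T[keyof T]` union. A minimal standalone reproduction of the pattern (the `Row` type and free-standing `eq` function here are made up for illustration, not the PR's code):

```typescript
// Made-up example row type, standing in for a table's Row definition.
interface Row {
  id: number
  slug: string | null
}

// The generic parameter K ties `value`'s type to the chosen column,
// mirroring the `ColumnName extends string & keyof Table` pattern above.
function eq<K extends string & keyof Row>(column: K, value: Row[K]): string {
  return `${column}=eq.${value}`
}

eq('id', 1)       // ok: value must be a number
eq('slug', null)  // ok: `slug` is nullable
// eq('id', 'x')  // compile error: string is not assignable to number
```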
A Member commented:
Looking super cool @soedirgo!

Just wondering if there are plans to make this ColumnName type even more helpful?
I see you're now allowing all strings plus suggesting the keys of the table – is it also possible to suggest the keys for embedded tables when the user provides the type?

For example:

```ts
type KeysOf<T, Key extends keyof T = keyof T> = Key extends string
  ? T[Key] extends (infer U)[]
    ? `${Key}.${KeysOf<U>}`
    : T[Key] extends Record<string, any>
      ? `${Key}.${KeysOf<T[Key]>}`
      : Key
  : never;
```

(not thoroughly tested!)

Is this in scope for these changes? Or are we not going to allow users to provide nested types and have them as unknowns? (haven't dived into this deeply, so not fully informed sorry!)

@soedirgo (Member, Author) replied:
The implementation in https://github.com/bnjmnt4n/supabase-client/blob/a1bd9d86fc04ffe9fd5172667644755d96456e29/src/postgrest/PostgrestFilterBuilder.ts#L39 actually does this, but I wanted to keep this simple and fall back to unknown for now.

I noticed that it's not falling back to unknown though - will update the PR. Thanks for noticing!

@bnjmnt4n (Contributor) commented Jun 5, 2022
Hey @soedirgo! That repo was actually largely experimental, and I haven't fully managed to integrate it into a usable Supabase client yet, so this is pretty exciting! That said, while experimenting with it, I did make some notes:

  • Using template strings still isn't an ideal DX, because you can't trigger autocompletion of fields. Using objects might allow for better IntelliSense autocompletion. Perhaps an ideal solution would be to accept both strings and objects.
  • As you mentioned, the cardinality information would allow the typings to be much more useful. Given that this is a breaking change anyway, it would be cool for the type generation APIs to provide more information about the tables in the future (possibly separate from the OpenAPI-based solution?) to allow for much richer type information when embedded resources are requested.

@soedirgo (Member, Author) commented Jun 6, 2022

> Using template strings still isn't an ideal DX, because you can't trigger autocompletion of fields. Using objects might allow for better IntelliSense autocompletion

@bnjmnt4n I agree; after implementing this I concluded that using template strings for the select query is still suboptimal. There was actually a suggestion to use arrays instead: #265. At the moment, the ideal DX in my opinion is an array of typed objects, each object representing a field.

But I do think that this approach is a good local maximum at least - I'm thinking of not changing the API too much right now (there are other breaking changes in separate PRs alongside this one) - so the plan is to implement the typed array approach in the next major version.
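A speculative sketch of what that "array of typed objects" shape could look like (this API does not exist; `FieldSpec` and `buildSelect` are invented here purely to illustrate the idea, shown from the runtime side):

```typescript
// Invented field descriptor: each object represents one selected field,
// which an editor can autocomplete property-by-property, unlike a string.
interface FieldSpec {
  column: string
  alias?: string
}

// Build the PostgREST select string from the typed field descriptors.
function buildSelect(fields: FieldSpec[]): string {
  return fields
    .map((f) => (f.alias ? `${f.alias}:${f.column}` : f.column))
    .join(',')
}

// buildSelect([{ column: 'username', alias: 'name' }, { column: 'status' }])
// → 'name:username,status'
```

In the real typed version, `column` would be constrained to `keyof Row` so both the field names and the inferred result keys come from the schema types.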

> the cardinality information would allow for the typings to be much more useful. Given that this is a breaking change anyway, it would be cool for the type generation APIs to perhaps provide more information about the tables in the future (possibly separate from the OpenAPI-based solution?) to allow for much richer type information when embedded resources are requested

Yup, I'm currently working on the typegen - which can be run using the Supabase CLI - to complement this PR. (Note that the typing changes here are incompatible with the OpenAPI-based typings.) It'll be based on https://github.com/supabase/postgres-meta, so we can get pretty much any info we want from the database, though I decided to put it off for now since even with FK info determining cardinality is still nontrivial (I probably need to look into how PostgREST does it).

@soedirgo force-pushed the BREAK/null-to-undefined branch from b192307 to 628be73 on June 7, 2022 06:24
@J0 left a comment:

LGTM

Base automatically changed from BREAK/null-to-undefined to next June 13, 2022 07:08
@soedirgo soedirgo merged commit 5a0b3be into next Jun 13, 2022
@soedirgo soedirgo deleted the BREAK/typing branch June 13, 2022 07:08
@github-actions (bot) commented:
🎉 This PR is included in version 1.0.0-next.1 🎉

The release is available on:

Your semantic-release bot 📦🚀
