
How to convert Cloud Code params from/to GraphQL #6596

Closed
Setti7 opened this issue Apr 13, 2020 · 17 comments
Labels
type:bug Impaired feature or lacking behavior that is likely assumed type:feature New feature or improvement of existing feature

Comments

@Setti7

Setti7 commented Apr 13, 2020

I'm trying to create a custom object from a Cloud Code using GraphQL, but I'm having trouble converting the fields from/to it.

For example, when creating an object that has a relation, we have the options to link and to createAndLink a new child object. How could I achieve the same, but from inside a Cloud Code function, receiving the params in GraphQL and sending the response as GraphQL too? I'm doing it field-by-field manually, but it seems wrong.

Parse.Cloud.define('initOrder', async (req) => {
  /// Inputs
  ///   - address: AddressPointerInput!
  ///   - products: [CreateOrderItemFieldsInput!]!

  console.log(req.params.products)

  // Create order
  let order = new Parse.Object('Order')
  let orderItemsRelation = order.relation('products')

  // Setting required fields
  let user = new Parse.Object('_User')
  user.id = 'Lkwpbo84eg' // from session

  let store = new Parse.Object('Store')
  store.id = 'qmKKCBVqXG' // would be another param

  let paymentMethod = new Parse.Object('PaymentMethod')
  paymentMethod.id = 'UOZrC98GJE' // another param

  order.set('orderStatus', 'draft')
  order.set('buyer', user)
  order.set('store', store)
  order.set('paymentMethod', paymentMethod)

  /// Get Address from input
  let address
  if ('link' in req.params.address) {
    address = new Parse.Object('Address')
    address.id = req.params.address.link
  } else if ('createAndLink' in req.params.address) {
    // Create and link to order

    // Do I need to manually set all those address fields? What is the best way
    // to create an object directly from params when using graphql?
  }

  order.set('address', address)

  // Create orderItems
  req.params.products.forEach(async (product) => {
    let newOrderItem = new Parse.Object('OrderItem')
    let newProduct = new Parse.Object('Gallon')

    newProduct.id = product.product.link

    newOrderItem.set('amount', 10)
    newOrderItem.set('product', newProduct)

    let savedOrderitem = await newOrderItem.save()

    // For some reason the orderItems aren't being added to the relation.
    orderItemsRelation.add(savedOrderitem)
  })

  const saved = await order.save()
  console.log(saved.toJSON())

  /// When creating an object of a generated class with graphql, the return type
  // is actually CreateOrderPayload. How can I return such thing?
  return saved
})
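A likely culprit for the "orderItems aren't being added" comment above, unrelated to GraphQL: `Array.prototype.forEach` ignores the promises returned by an async callback, so `order.save()` can run before any `newOrderItem.save()` has resolved. A minimal, Parse-free sketch of the difference (function names are illustrative):

```javascript
// forEach fires the async callbacks and moves on; the pushes happen
// only after the surrounding function has already returned its array.
async function buildWithForEach(items) {
  const collected = [];
  items.forEach(async (item) => {
    await new Promise((resolve) => setTimeout(resolve, 0)); // simulate save()
    collected.push(item); // runs too late
  });
  return collected; // still empty at this point
}

// A for...of loop awaits each "save" before continuing, so the array is
// fully populated by the time it is returned.
async function buildWithForOf(items) {
  const collected = [];
  for (const item of items) {
    await new Promise((resolve) => setTimeout(resolve, 0)); // simulate save()
    collected.push(item);
  }
  return collected;
}

async function main() {
  console.log((await buildWithForEach([1, 2, 3])).length); // 0
  console.log((await buildWithForOf([1, 2, 3])).length); // 3
}
main();
```

In the Cloud Code above, the equivalent fix is to loop with `for...of` (or `Promise.all` over the mapped saves) and await every `newOrderItem.save()` before calling `order.save()`.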

Are there any examples of how to do something similar? I found this, but the examples are so simple that they don't help me much.

Full project code here

@davimacedo
Member

I didn't get what you need to achieve. Do you want to customize your GraphQL schema by adding new mutations/queries through Cloud Code? If yes, take a look at this example here. Have you written a schema.graphql in your project? Could you please share it here? Thanks.

@Setti7
Author

Setti7 commented Apr 14, 2020

Hi @davimacedo, I wanted to make a Cloud Code mutation that returned its values following the Relay specification, but when I asked the question I didn't know what Relay was (still learning), so I didn't know exactly what I wanted to do either.

In my case, I have an Order object that has a lot of child OrderItems (a relation), but I'm having trouble returning these children (following the Relay spec, with nodes, edges...). In the example below, when I run the mutation, it works: the order is generated with all the correct fields, but in the returned payload edges is always null, so I can't get the OrderItem objects.

I found a more complex example here and tried following it, but I couldn't make a custom Cloud Code function like this. As the last commit is from 13 months ago, I thought it was outdated and tried to do it myself.

My cloud code now is this:

Parse.Cloud.define('initOrder', async req => {
  const user = req.user
  if (!user) {
    throw new Error('Unauthorized')
  }

  const fields = req.params['input']['fields']

  // Create order logic here (almost the same as above)...

  // Set fields
  order.set('address', address)
  order.set('store', store)
  order.set('paymentMethod', paymentMethod)
  order.set('orderStatus', 'pending')
  order.set('change', fields['change'])
  order.set('buyer', user)

  const savedOrder = await order.save();
  return {
    'order': savedOrder,
    'clientMutationId': req.params['clientMutationId']
  }
})

and my schema.graphql is this:

extend type Mutation {
    "Initialize an Order for the current User"
    initOrder(
        input: InitOrderInput!
    ): InitOrderPayload @resolve(to: "initOrder")
}

input InitOrderInput {
    fields: InitOrderFieldsInput
    clientMutationId: String
}

input InitOrderFieldsInput {
    "Store ID"
    store: ID!,
    "Delivery address"
    address: CreateAddressFieldsInput!,
    "A list of OrderItems being purchased"
    orderItems: [CreateOrderItemFieldsInput!]!,
    "The payment method ID"
    paymentMethod: ID!,
    "Change amount, in integers"
    change: Int!
}

type InitOrderPayload {
    order: Order!
    clientMutationId: String
}

The mutation in action:

mutation initOrder {
  initOrder(
    input: {
      fields: #...
    }
  ) {
    order {
      objectId # works! Order is created
      orderItem {
        edges { #returns null :(
          node {
            objectId
          }
        }
      }
    }
  }
}

@davimacedo
Member

Why are you trying to create your own custom mutation for this instead of running the default one? You can simply go with the mutation below, with no need for any Cloud Code or additional schema:

mutation initOrder {
  createOrder(
    input: {
      fields: #...
    }
  ) {
    order {
      objectId # works! Order is created
      orderItem {
        edges {
          node {
            objectId
          }
        }
      }
    }
  }
}

@Setti7
Author

Setti7 commented Apr 15, 2020

Because I want to control when and how orders are created, without letting the client choose, for security reasons. I'm following what is recommended in this post.

It's probably better to just use a beforeSave trigger in this specific case, but what if I wanted a function to cancel all of a user's orders? Then I would need to create a custom Cloud Code function for it, and I would have the same problem as this one: how to return a list of objects (including their children) following the Relay spec.
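For the "return a list of objects following the Relay spec" part: a Relay connection is, at its core, just a wrapper shape around the list. The helper below is an illustrative sketch of that shape (the cursor encoding is a made-up example, not what Parse Server actually generates):

```javascript
// Wrap a plain array in a minimal Relay-style connection:
// { edges: [{ node, cursor }], pageInfo: { ... } }
function toConnection(objects) {
  const edges = objects.map((node, i) => ({
    node,
    // Illustrative opaque cursor; real servers encode their own scheme.
    cursor: Buffer.from(`offset:${i}`).toString('base64'),
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: false, // a real resolver would compute these
      hasPreviousPage: false,
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}

console.log(toConnection(['orderItem1', 'orderItem2']).edges.length); // 2
```

A custom Cloud Code resolver could return something of this shape instead of relying on the auto-generated connection resolver, though the pagination arguments (`first`, `after`, ...) would still need to be handled to be fully Relay-compliant.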

@davimacedo
Copy link
Member

For this specific case, I believe it is better to control this by using ACLs/CLPs and, in the worst case, a beforeSave trigger. It will save you many lines of code, since Parse Server automatically creates the endpoints that you are trying to create. But I agree with you that there are other cases in which you will need to write your own Relay-compliant Cloud Code functions. It is not so easy, though. The Relay spec requires a lot of effort to implement, and that's a good example of where Parse Server creates a lot of value for you.

Anyway, picking up your code as an example, I don't know any easy way to implement all the automatically generated capabilities, but maybe we should provide such a way to do this. @Moumouls any thoughts here?

@Setti7
Author

Setti7 commented Apr 16, 2020

Also, if I need to set permissions column-wise (for example: a user can only edit some fields of his order), I would need to use a beforeSave trigger, right (or a custom Cloud Code function)?

@Moumouls
Member

Currently I have some custom mutations like this with a code-first approach (GraphQL-JS, Nexus). Our main issue here is that we merge schemas like a regular stitching.

In a regular merge situation, graphql-tools seems to create an execution context that does not contain the original (auto-generated) types when the extended schema is executed.

I'm working on this on my fork, trying to obtain an overload behavior that does not stitch but truly merges and imports extended types into the auto-generated schema (so all auto-generated types are reusable).

Maybe @yaacovCR, you have advice on how to use graphql-tools to create a unique execution context?

I am currently testing this implementation on my own fork to allow a true merge, with a unique execution context:

const customGraphQLSchemaTypeMap = this.graphQLCustomTypeDefs.getTypeMap();
Object.values(customGraphQLSchemaTypeMap).forEach(
  (customGraphQLSchemaType) => {
    if (
      !customGraphQLSchemaType ||
      !customGraphQLSchemaType.name ||
      customGraphQLSchemaType.name.startsWith('__')
    ) {
      return;
    }
    const autoGraphQLSchemaType = this.graphQLAutoSchema.getType(
      customGraphQLSchemaType.name
    );
    if (
      autoGraphQLSchemaType &&
      typeof customGraphQLSchemaType.getFields === 'function'
    ) {
      const findAndReplaceLastType = (parent, key) => {
        if (parent[key].name) {
          if (
            this.graphQLAutoSchema.getType(parent[key].name) &&
            this.graphQLAutoSchema.getType(parent[key].name) !== parent[key]
          ) {
            // To avoid unresolved fields on the overloaded schema,
            // replace the final type with the auto schema one
            parent[key] = this.graphQLAutoSchema.getType(parent[key].name);
          }
        } else if (parent[key].ofType) {
          findAndReplaceLastType(parent[key], 'ofType');
        }
      };

      Object.values(customGraphQLSchemaType.getFields()).forEach((field) => {
        findAndReplaceLastType(field, 'type');
      });
      autoGraphQLSchemaType._fields = {
        ...autoGraphQLSchemaType.getFields(),
        ...customGraphQLSchemaType.getFields(),
      };
    } else {
      this.graphQLAutoSchema._typeMap[customGraphQLSchemaType.name] =
        customGraphQLSchemaType;
    }
  }
);
this.graphQLSchema = mergeSchemas({
  schemas: [
    this.graphQLSchemaDirectivesDefinitions,
    this.graphQLAutoSchema,
  ],
  mergeDirectives: true,
});

Notice that I do not merge the custom schema; instead, I transfer types and merge fields into the auto schema.

@Moumouls Moumouls added type:bug Impaired feature or lacking behavior that is likely assumed discussion type:feature New feature or improvement of existing feature labels Apr 16, 2020
@yaacovCR
Contributor

yaacovCR commented Apr 17, 2020

Hi!

graphql-tools mergeSchemas is poorly named, and probably should be renamed in a future version to stitchSchemas.

It is meant for "merging" standalone well-formed GraphQL schemas, and does so by creating a wrapping schema that delegates queries to the individual subschemas; it therefore does not truly "merge" anything.

The reason it "merges" schemas this way, is because it is designed to handle merging remote schemas, whose implementations are not under the control of the outer gateway schema.

mergeSchemas does allow the true merging of additional standalone typedefs and resolvers into the merged schema. These typedefs can simply be added to the array of "schemas" (or in v5 can be added to the new typeDefs option), and the new resolvers can be added in a resolvers option similar to makeExecutableSchema. These typedefs can refer to any of the merged types in the larger wrapping schema. But if you pass a schema into mergeSchemas, that schema cannot refer to types in another schema, because mergeSchemas expects each schema to be well formed.

mergeSchemas and graphql-tools may not be the best solution for parse-server, as it is designed for schema stitching, which is not strictly necessary when all of the subschemas are under the control of the owners of the gateway.

Meaning, even if all you want to do is add a few types to a standalone schema, mergeSchemas will do that for you using the above combination of schemas, typeDefs, and resolvers, but at the cost of an additional round of delegation to your well-formed schema that you may not truly need.

You may find that @Urigo's merge-graphql-schemas is a better fit for the users of parse-server. I suspect that may be the case, but I am not sure, as I do not know if you require the ability to delegate to additional remote schemas.

I am not sure if the above addresses the exact problems you are seeing.

I would also say that this code seems suspect, as mergeSchemas should know about all the types it needs to know about. Is this code still strictly necessary in v5? If so, you should probably file a bug, as the gateway schema created by mergeSchemas should be automatically stripping variables from each subschema that do not belong. I seem to recall that this was fixed in the fork somewhere along the way prior to v5, but if there is something outstanding, a bug report would be most welcome.

All the best!

@yaacovCR
Contributor

The above has been edited slightly, in case you are reading on mobile.

@yaacovCR
Contributor

yaacovCR commented Apr 19, 2020

I should have said the following: the most appropriate tools depend on your goal, i.e., on what you want to merge:

Typedefs

  1. If your typedefs all appropriately use the extend keyword, you can just use extendSchema from the graphql-js library.
  2. If you want to merge typedefs that would otherwise conflict with each other (i.e., merging types without using extend, or merging fields in some way), you can use merge-graphql-schemas. merge-graphql-schemas can also take care of merging resolvers to eventually pass to makeExecutableSchema.

Executable schemas

If you have control over these executable schemas, you have to ask yourself why you are merging them rather than creating a unified schema, i.e., your first option is to avoid the need for merging.

  1. Having said that, if you have access to the executable schema objects themselves, you can use graphql-compose.
  2. You can also use the toConfig options within graphql-js to convert a schema to a config object and just create a new schema programmatically using new GraphQLSchema(...). This is pretty straightforward if the only types you are modifying are the root types (assuming those root types are not also nested!).
  3. If any non-root types are changed, you have to edit any types that reference them to point to the new type. This is something graphql-compose takes care of for you, but it can be a big headache if you want to do so manually. graphql-tools comes to the rescue by giving you a mapSchema function that allows you to replace any type within the schema with a new type (and/or rename/delete the type), performing all the rewiring for you. This is similar to the older visitSchema function, also exported, which does something similar, except mapSchema always gives you a new schema, while visitSchema is set up to modify the graphql objects in place. This may be a multi-step process, where you add in the new types you want using the above toConfig/new GraphQLSchema approach and then use mapSchema/visitSchema.

Non-executable schemas

Merging non-executable schema is equivalent to creating a gateway that can route to subschemas without accessing the underlying implementation.

This is where you can choose schema stitching (graphql-tools' mergeSchemas) or Apollo Federation.

  1. Schema stitching lets you (a) add additional types/resolvers on the gateway, avoiding an additional round of delegation when not necessary, (b) use transforms that can manipulate the types/fields of the subschema before adding them to the gateway schema, (c) merge different fields from similar types in different subschemas into a single type within the common gateway schema, delegating as appropriate to the individual subschemas, (d) use subscriptions, and (e) output a regular graphql-js GraphQLSchema, i.e. an executable schema object, that can be used with all the other tools within the JavaScript graphql ecosystem.
  2. Apollo Federation is designed for those who have control over the individual subschemas and can add the required Federation directives. You can use graphql-transform-federation together with graphql-tools's makeRemoteExecutableSchema to turn any schema into a Federation compliant schema, although this requires an extra round of delegation. Federation does not (yet) support subscriptions.

@Moumouls
Member

@yaacovCR it seems that we have issues uploading images with a merged schema (a serialize error). Do we need to configure something to make it work?

@yaacovCR
Contributor

Which graphql upload scalar are you using? The original doesn't let you serialize; graphql-tools exports one that does.

@Moumouls
Member

We use the original one. So we can remove the original dependency and then use the graphql-tools one?

Another question: does serialization of the GraphQL Upload scalar introduce limitations or performance issues compared to the original one?

Ps: thanks a lot for your support @yaacovCR 👍

@yaacovCR
Contributor

The serialization and parsing being referred to here are with respect to the Upload scalar exported by the graphql-upload package, which allows clients and servers to use the graphql-upload multipart request format to upload files.

The serialization and parsing are between the external <=> internal GraphQL values of this upload scalar; i.e., internally it consists of a promise that, when resolved, will have filename details. I do not know if @jaydenseric might be open to exporting a version of the scalar that supports serialization, but it is trivial to support, and so a version of the scalar that does so (and attempts cross-compatibility between different versions of graphql-upload) is included within graphql-tools.

The reason @jaydenseric does not include serialization is that serialization is meant for scalars that are included within output types, and the Upload scalar only makes sense in the context of an input type. On the other hand, schema stitching works in part by taking resolver arguments that have been parsed into internal format and "putting them back" in the external format to be sent to a different GraphQL service (that may have a different internal format!). In this sense, serialization is a concept that may be helpful for input types as well.

That is a long way of saying that the serialization of the scalar should not affect performance.

However, using schema stitching to forward files may indeed affect performance, and I doubt that it is the recommended way of organizing a file server or GraphQL server (cf. https://www.apollographql.com/blog/apollo-server-file-upload-best-practices-1e7f24cdc050).

It definitely affects performance without the use of a fix that allows you to send a stream as a multipart request from within NodeJS, as you will have to wait for the entire buffer to be loaded into memory at the gateway before forwarding (cf: https://github.com/apollographql/apollo-server/issues/3033#issuecomment-625494401).

That is why graphql-tools includes a link that patches FormData to allow sending streams of unknown length as a stream. See form-data/form-data#394 (comment). This functionality could be submitted as a PR to the form-data package...

@Moumouls
Member

Thank you for your response and the time you gave to our use case.

Currently we do not stream the upload directly to the storage (maybe we need to change this later), so serialization should not be an issue.

Here is a short code example of our use case:

const handleUpload = async (upload, config) => {
  const { createReadStream, filename, mimetype } = await upload;
  let data = null;
  if (createReadStream) {
    const stream = createReadStream();
    data = await new Promise((resolve, reject) => {
      const chunks = [];
      stream
        .on('error', reject)
        .on('data', chunk => chunks.push(chunk))
        .on('end', () => resolve(Buffer.concat(chunks)));
    });
  }
  // ...
};

I'll open a PR to use your scalar to handle uploads correctly on merged schemas.

@Moumouls
Member

Moumouls commented Oct 1, 2020

@Setti7 here is a similar issue that was resolved, so I'm closing this one:
https://community.parseplatform.org/t/adding-a-custom-graphql-mutation-to-parse-server/820

@Moumouls Moumouls closed this as completed Oct 1, 2020
@Moumouls
Member

Moumouls commented Oct 1, 2020

Here we just have a lack of docs on schema customization
