How to convert Cloud Code params from/to GraphQL #6596
I didn't get what you need to achieve. Do you want to customize your GraphQL schema by adding new mutations/queries through Cloud Code? If yes, take a look at this example here. Have you written a |
Hi @davimacedo, I wanted to make a Cloud Code mutation that returned values following the Relay specification, but when I asked the question I didn't know what Relay was (still learning), so I didn't know exactly what I wanted to do either. I found this more complex example here and tried following it, but I couldn't make a custom Cloud Code function like that. As the last commit was 13 months ago, I thought it was outdated and tried to do it myself.

My Cloud Code is now this:

```js
Parse.Cloud.define('initOrder', async req => {
  const user = req.user
  if (!user) {
    throw new Error('Unauthorized')
  }
  const fields = req.params['input']['fields']
  // Create order logic here (almost the same as above)...
  // e.g. const order = new Parse.Object('Order'), plus the
  // address/store/paymentMethod lookups, elided here
  // Set fields
  order.set('address', address)
  order.set('store', store)
  order.set('paymentMethod', paymentMethod)
  order.set('orderStatus', 'pending')
  order.set('change', fields['change'])
  order.set('buyer', user)
  const savedOrder = await order.save()
  return {
    order: savedOrder,
    clientMutationId: req.params['clientMutationId']
  }
})
```

and my schema.graphql is this:

```graphql
extend type Mutation {
  "Initialize an Order for the current User"
  initOrder(
    input: InitOrderInput!
  ): InitOrderPayload @resolve(to: "initOrder")
}

input InitOrderInput {
  fields: InitOrderFieldsInput
  clientMutationId: String
}

input InitOrderFieldsInput {
  "Store ID"
  store: ID!
  "Delivery address"
  address: CreateAddressFieldsInput!
  "A list of OrderItems being purchased"
  orderItems: [CreateOrderItemFieldsInput!]!
  "The payment method ID"
  paymentMethod: ID!
  "Change amount, in integers"
  change: Int!
}

type InitOrderPayload {
  order: Order!
  clientMutationId: String
}
```

The mutation in action:

```graphql
mutation initOrder {
  initOrder(
    input: {
      fields: #...
    }
  ) {
    order {
      objectId # works! Order is created
      orderItem {
        edges { # returns null :(
          node {
            objectId
          }
        }
      }
    }
  }
}
```
|
Why are you trying to create your own custom mutation for this instead of running the default one? You can simply go with the query below, with no need for any Cloud Code or additional schema:

```graphql
mutation initOrder {
  createOrder(
    input: {
      fields: #...
    }
  ) {
    order {
      objectId # works! Order is created
      orderItem {
        edges {
          node {
            objectId
          }
        }
      }
    }
  }
}
```
|
Because I want to control when and how orders are created, without letting the client choose, for security reasons. I'm following what is recommended in this post. It's probably better to just use a beforeSave trigger in this specific case, but what if I wanted a function to cancel all of a user's orders? Then I would need to write a custom Cloud Code function for it, and I would have the same problem as this one: how to return a list of objects (including its children) following the Relay spec. |
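As an aside: the Relay connection shape itself (edges, nodes, cursors, pageInfo) is straightforward to build by hand in a custom resolver. The sketch below is a hypothetical helper, not from this thread; cursors are base64-encoded offsets, as in the Relay spec's own examples:

```javascript
// Hypothetical helper: wrap an array of nodes in a Relay-style connection.
// Cursors here are base64-encoded offsets, as in the Relay spec examples.
function toConnection(nodes) {
  const edges = nodes.map((node, i) => ({
    node,
    cursor: Buffer.from(`offset:${i}`).toString('base64'),
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: false,
      hasPreviousPage: false,
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}

const conn = toConnection([{ objectId: 'a1' }, { objectId: 'b2' }]);
console.log(conn.edges.length); // 2
console.log(conn.edges[1].node.objectId); // b2
```

A custom Cloud Code mutation could return such a structure for a list field, provided the custom schema declares matching edge/connection types.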
For this specific case, I believe it is better to control access by using ACL/CLP and, worst case, a trigger. Anyway, picking up your code as an example, I don't know any easy way to implement all the automatically generated capabilities, but maybe we should provide a way to do this. @Moumouls any thoughts here? |
Also, if I need to set permissions column-wise (for example, a user can only edit some fields of his order), I would need to use a beforeSave trigger, right (or a custom Cloud Code function)? |
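A beforeSave trigger can enforce that kind of field-level restriction. Below is a hypothetical sketch (names and class are illustrative, not from this thread): a pure helper that checks whether an update only touches an allow-list of fields, which a Parse beforeSave trigger could call with `request.object.dirtyKeys()`:

```javascript
// Hypothetical helper: returns true if every changed key is in the allow-list.
function onlyEditableFieldsChanged(changedKeys, editableFields) {
  return changedKeys.every(key => editableFields.includes(key));
}

// Sketch of the trigger itself (assumes a Parse Server environment):
// Parse.Cloud.beforeSave('Order', req => {
//   if (!req.master &&
//       !onlyEditableFieldsChanged(req.object.dirtyKeys(), ['change', 'address'])) {
//     throw new Parse.Error(Parse.Error.OPERATION_FORBIDDEN, 'Field not editable');
//   }
// });

console.log(onlyEditableFieldsChanged(['change'], ['change', 'address'])); // true
console.log(onlyEditableFieldsChanged(['orderStatus'], ['change', 'address'])); // false
```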
Currently I have some custom mutations like this, with a code-first approach (GraphQL.js, Nexus). Our main issue here is that we […] In a regular merge situation […] I'm working on it on my fork, and I'm trying to obtain an […] Maybe @yaacovCR you have advice on how to use graphql-tools to create a unique execution context? I'm currently testing this implementation on my own fork to allow a true merge, with a unique execution context:

```js
const customGraphQLSchemaTypeMap = this.graphQLCustomTypeDefs.getTypeMap();
Object.values(customGraphQLSchemaTypeMap).forEach(
  (customGraphQLSchemaType) => {
    if (
      !customGraphQLSchemaType ||
      !customGraphQLSchemaType.name ||
      customGraphQLSchemaType.name.startsWith('__')
    ) {
      return;
    }
    const autoGraphQLSchemaType = this.graphQLAutoSchema.getType(
      customGraphQLSchemaType.name
    );
    if (
      autoGraphQLSchemaType &&
      typeof customGraphQLSchemaType.getFields === 'function'
    ) {
      const findAndReplaceLastType = (parent, key) => {
        if (parent[key].name) {
          if (
            this.graphQLAutoSchema.getType(parent[key].name) &&
            this.graphQLAutoSchema.getType(parent[key].name) !== parent[key]
          ) {
            // To avoid unresolved fields on the overloaded schema,
            // replace the final type with the auto schema one
            parent[key] = this.graphQLAutoSchema.getType(parent[key].name);
          }
        } else if (parent[key].ofType) {
          findAndReplaceLastType(parent[key], 'ofType');
        }
      };
      Object.values(customGraphQLSchemaType.getFields()).forEach((field) => {
        findAndReplaceLastType(field, 'type');
      });
      autoGraphQLSchemaType._fields = {
        ...autoGraphQLSchemaType.getFields(),
        ...customGraphQLSchemaType.getFields(),
      };
    } else {
      this.graphQLAutoSchema._typeMap[customGraphQLSchemaType.name] =
        customGraphQLSchemaType;
    }
  }
);
this.graphQLSchema = mergeSchemas({
  schemas: [
    this.graphQLSchemaDirectivesDefinitions,
    this.graphQLAutoSchema,
  ],
  mergeDirectives: true,
});
```

Notice that I do not merge the custom schema; instead I transfer its types and merge fields onto the auto schema. |
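The `findAndReplaceLastType` walk above just unwraps NonNull/List wrappers via `ofType` until it reaches a named type, then swaps it for the auto-schema's canonical instance. That recursion can be exercised in isolation with plain mock objects (a dependency-free sketch; the mock types and `lookup` function are illustrative, not graphql-js internals):

```javascript
// Walk wrapper types via ofType until a named type is found, then replace it
// with the canonical instance returned by lookup (mirrors the code above).
function findAndReplaceLastType(parent, key, lookup) {
  const type = parent[key];
  if (type.name) {
    const canonical = lookup(type.name);
    if (canonical && canonical !== type) {
      parent[key] = canonical;
    }
  } else if (type.ofType) {
    findAndReplaceLastType(type, 'ofType', lookup);
  }
}

// Mock "auto schema" with one canonical type.
const canonicalOrder = { name: 'Order', canonical: true };
const lookup = name => (name === 'Order' ? canonicalOrder : undefined);

// NonNull(List(Order)) represented as nested ofType wrappers.
const field = { type: { ofType: { ofType: { name: 'Order' } } } };
findAndReplaceLastType(field, 'type', lookup);
console.log(field.type.ofType.ofType === canonicalOrder); // true
```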
Hi! graphql-tools mergeSchemas is poorly named and should probably be renamed in a future version to stitchSchemas. It is meant for "merging" standalone, well-formed GraphQL schemas, and does so by creating a wrapping schema that delegates queries to the individual subschemas; it therefore does not truly "merge" anything. The reason it "merges" schemas this way is that it is designed to handle merging remote schemas, whose implementations are not under the control of the outer gateway schema.

mergeSchemas does allow true merging of additional standalone typedefs and resolvers into the merged schema. These typedefs can simply be added to the array of "schemas" (or, in v5, added to the new typeDefs option), and the new resolvers can be added in a resolvers option, similar to makeExecutableSchema. These typedefs can refer to any of the merged types in the larger wrapping schema. But if you pass a schema into mergeSchemas, that schema cannot refer to types in another schema, because mergeSchemas expects each schema to be well formed.

mergeSchemas and graphql-tools may not be the best solution for parse-server, as they are designed for schema stitching, which is not strictly necessary when all of the subschemas are under the control of the owners of the gateway. You may find that @Urigo's merge-graphql-schemas better fits the needs of parse-server users. I suspect that may be the case, but I am not sure, as I do not know if you require the ability to delegate to additional remote schemas.

I am not sure if the above addresses the exact problems you are seeing. I would also say that this code seems suspect, as mergeSchemas should know about all the types it needs to know about. Is this code still strictly necessary in v5? If so, you should probably file a bug, as the gateway schema created by mergeSchemas should automatically strip variables from each subschema that do not belong. I seem to recall that this was fixed in the fork somewhere along the way prior to v5, but if there is something outstanding, a bug report would be most welcome. All the best! |
The above has been edited slightly, in case you are reading on mobile. |
Should have said the following: the most appropriate tool depends on your goal. My current understanding depends on what you want to merge:

- Typedefs
- Executable schemas: if you have control over these executable schemas, you have to ask yourself why you are merging them rather than creating a unified schema, i.e. your first option is to avoid the need for merging.
- Non-executable schemas: merging non-executable schemas is equivalent to creating a gateway that can route to subschemas without accessing the underlying implementation. This is where you can choose schema stitching (graphql-tools' mergeSchemas) or Apollo Federation.
|
@yaacovCR it seems that we have issues uploading images with a merged schema. |
Which graphql-upload scalar are you using? The original one doesn't let you serialize; graphql-tools exports one that does. |
We use the original one, so can we remove the original dependency and then use the graphql-tools one? Another question: does serialization of the GraphQL upload introduce limitations or performance issues compared to the original graphql-upload? PS: thanks a lot for your support @yaacovCR 👍 |
The serialization and parsing being referred to here are with respect to the Upload scalar exported by the graphql-upload package, which allows clients and servers to use the graphql-upload multipart request format to upload files. The serialization and parsing convert between the external and internal GraphQL values of this upload scalar, i.e. internally it consists of a promise that, when resolved, will have filename details.

I do not know if @jaydenseric might be open to exporting a version of the scalar that supports serialization, but it is trivial to support, and so a version of the scalar that does so (and attempts cross-compatibility between different versions of graphql-upload) is included within graphql-tools. The reason @jaydenseric does not include serialization is that serialization is meant for scalars included within output types, and the Upload scalar only makes sense in the context of an input type. On the other hand, schema stitching works in part by taking resolver arguments that have been parsed into internal format and "putting them back" into the external format to be sent to a different GraphQL service (which may have a different internal format!). In this sense, serialization is a concept that may be helpful for input types as well.

That is a long way of saying that the serialization of the scalar should not affect performance. However, using schema stitching to forward files may indeed affect performance, and I doubt that it is the recommended way of organizing a file server or GraphQL server (cf. https://www.apollographql.com/blog/apollo-server-file-upload-best-practices-1e7f24cdc050). It definitely affects performance without the use of a fix that allows you to send a stream as a multipart request from within Node.js, as you will have to wait for the entire buffer to be loaded into memory at the gateway before forwarding (cf. https://github.com/apollographql/apollo-server/issues/3033#issuecomment-625494401).

That is why graphql-tools includes a link that patches FormData to allow sending streams of unknown length as a stream. See form-data/form-data#394 (comment). This functionality could be submitted as a PR to the form-data package... |
Thank you for your response and the time you gave to our use case. Currently we do not stream the upload directly to storage (maybe we need to change this later). Here is a short code example of our use case:

```js
const handleUpload = async (upload, config) => {
  const { createReadStream, filename, mimetype } = await upload;
  let data = null;
  if (createReadStream) {
    const stream = createReadStream();
    data = await new Promise((resolve, reject) => {
      const chunks = [];
      stream
        .on('error', reject)
        .on('data', chunk => chunks.push(chunk))
        .on('end', () => resolve(Buffer.concat(chunks)));
    });
  }
  // ...
};
```

I'll open a PR to use your scalar to handle the upload correctly on merged schemas. |
@Setti7 here is a similar issue that was resolved, so I'm closing this one. |
Here we just have a lack of docs on schema customization. |
I'm trying to create a custom object from Cloud Code using GraphQL, but I'm having trouble converting the fields from/to it. For example, when creating an object that has a relation, we have the options to link an existing child object or to createAndLink a new one. How could I achieve the same from inside a Cloud Code function, receiving the params in GraphQL and sending the response as GraphQL too? I'm doing it field by field manually, but it seems wrong. Are there any examples of how to do something similar? I found this, but the examples are so simple that they don't help me much.

Full project code here
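For reference, the auto-generated Parse GraphQL mutations accept pointer fields either as a link to an existing objectId or as a createAndLink with a nested create. A sketch of both shapes, based on my reading of Parse Server's generated schema (the Store class and its name field are hypothetical):

```graphql
mutation createOrderLinkingExistingStore {
  createOrder(input: {
    fields: {
      store: { link: "existingStoreObjectId" }
    }
  }) {
    order { objectId }
  }
}

mutation createOrderWithNewStore {
  createOrder(input: {
    fields: {
      store: { createAndLink: { name: "New Store" } }
    }
  }) {
    order { objectId }
  }
}
```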