IF-THEN-ELSE Conditional (Draft 7) #206
Conversation
@andersonf Thanks a lot for the PR. It is a very elegant implementation.
1.0.25 has been released with the PR. Thanks again for your help. https://github.com/networknt/json-schema-validator/releases/tag/1.0.25
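For context, this is roughly what a Draft 7 schema using the new `if`/`then`/`else` keywords looks like. This is an illustrative sketch, not a schema taken from the PR or its tests:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "country": { "type": "string" },
    "postalCode": { "type": "string" }
  },
  "if": {
    "properties": { "country": { "const": "US" } }
  },
  "then": {
    "properties": { "postalCode": { "pattern": "^[0-9]{5}$" } }
  },
  "else": {
    "properties": { "postalCode": { "minLength": 3 } }
  }
}
```

When the instance matches the `if` subschema, the `then` subschema must also validate; otherwise the `else` subschema applies.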
@andersonf This feels a bit incomplete. From what I can tell, only draft v4 is supported out of the box. The JsonSchemaFactory allows a meta schema to be injected, but this library only defines a meta schema for draft v4. Does your team have a meta schema for draft v7 that we could potentially add to this project? If you are using the draft v4 meta schema, then you're kind of using a Frankenstein mixture of draft v4 and draft v7. 'id' was changed to '$id' in draft v6, for example. I'd suggest sticking 100% to a draft version so that other validators also work with your json schemas.
I found something else kind of weird. The ValidatorTypeCode enumeration is being used by the JsonMetaSchema.DraftV4 class to retrieve keywords. Technically the 'if' keyword isn't a part of draft v4, so it seems like there should be a ValidatorTypeCode enumeration for each draft version. 'id' was changed to '$id', so it's possible that one of the validation keywords could also change someday. That's another reason to separate the ValidatorTypeCode enumerations by draft version.
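To make the suggestion concrete, here is a minimal sketch of keeping one keyword set per draft version, so a factory configured for draft v4 could reject keywords introduced later. All names here (`DraftKeywords`, `supports`) are invented for illustration; this is not the library's actual ValidatorTypeCode API:

```java
import java.util.Set;

// Hypothetical sketch: one keyword set per draft version, so a schema
// configured for draft v4 rejects keywords introduced later (e.g. "if").
// These names do not come from json-schema-validator itself.
public enum DraftKeywords {
    V4(Set.of("type", "properties", "required", "enum",
              "allOf", "anyOf", "oneOf", "not")),
    V6(Set.of("type", "properties", "required", "enum",
              "allOf", "anyOf", "oneOf", "not",
              "const", "contains", "propertyNames")),
    V7(Set.of("type", "properties", "required", "enum",
              "allOf", "anyOf", "oneOf", "not",
              "const", "contains", "propertyNames",
              "if", "then", "else"));

    private final Set<String> keywords;

    DraftKeywords(Set<String> keywords) {
        this.keywords = keywords;
    }

    // Returns true when the keyword exists in this draft's vocabulary.
    public boolean supports(String keyword) {
        return keywords.contains(keyword);
    }
}
```

With such a split, a strict validator could fail fast when a draft v4 schema uses `"if"` instead of silently accepting it.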
If my memory serves, I think we currently support both "id" and "$id". The library is already a mix of draft versions at the moment. Ideally, we should have the draft version defined in the configuration and let the end user choose which version to use. This requires some of the validators to be implemented multiple times, with dependency injection to load the right version at initialization time. I think we have discussed using the service module of light-4j for the job; however, a lot of members have concerns about the extra dependency. At the moment, we are just adding new features from each version and making sure everything stays backward compatible. I don't know if this is workable in the long run, but we can always change the design if we encounter obstacles we cannot surpass. I am very open to any suggestions and want to learn what everyone thinks.
@stevehu I guess it's just a matter of how you want to design the application. It's probably fine to support multiple draft versions. The only reason I brought it up is that this validator supports mixtures of various draft versions, and that might be inconsistent with validators in other languages. For example, a client could claim to use this validator for draft v4 json schemas, but this library would allow that client to accidentally use draft v7 features. I believe this can potentially be harmful because json schemas can be hosted on a web server and used by third-party clients (that's what my team does). A problematic example could be a javascript client for a REST API that needs to validate the request body long before the request is sent. The catch here is that the javascript client uses a strictly draft v4 validator and therefore wouldn't support the draft v7 features. This application is really flexible, and that is cool, but there are defined specifications for json schema versions. It just seems kind of misleading to configure the JsonSchemaFactory to use draft v4 but also get various features from later draft versions. I'm a bit more strict with my code, so I'd almost prefer to get errors for not following the configured draft version.
@jawaff I agree with you that mixing multiple specification versions will confuse users if a validator's behavior conflicts between two versions. So far, I haven't seen any such case, as we haven't implemented the newest features yet. For example, we have implemented both the "id" and "$id" keywords.
I think we can adopt the same strategy as everit. What do you think? https://github.com/everit-org/json-schema#draft-4-draft-6-or-draft-7
@stevehu Another thing I just realized is that a json schema might technically be able to utilize multiple schema specifications. Suppose we have a draft 7 schema file that has a $ref to a draft 4 schema file. Is the referenced schema supposed to be validated using the draft 4 or the draft 7 specification? I might end up in that situation if there were publicly hosted json schemas that I wanted to reference in my own. Looking at everit, it doesn't seem like they would support that sort of situation; it only seems to allow the draft version to be set when their schema loader is created. When I was looking through our JsonSchemaFactory class earlier in this conversation, I was under the impression that you were already trying to follow the same strategy as everit: JsonSchemaFactory#getInstance() configures the JsonSchemaFactory to utilize the draft 4 specification. The part that really confused me (earlier in this conversation) was that our draft 4 JsonMetaSchema supports later versions. We could simply follow everit's strategy, but I'm not entirely sure they have the correct strategy. We might want to do some further investigation. Sometimes you just have to make a decision if there isn't enough information out there, though.
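The everit-style approach amounts to inspecting the schema's `$schema` URI and falling back to a configured default when it is absent. A minimal sketch, assuming hypothetical names (`SpecVersionDetector`, `Version`) that are not part of either library:

```java
// Hypothetical sketch of everit-style version detection: choose the
// spec version from the schema's "$schema" URI, falling back to a
// configured default when the keyword is missing or unrecognized.
public final class SpecVersionDetector {

    public enum Version { DRAFT_4, DRAFT_6, DRAFT_7 }

    private SpecVersionDetector() {}

    public static Version detect(String schemaUri, Version fallback) {
        if (schemaUri == null) {
            // No "$schema" keyword: use the configured default.
            return fallback;
        }
        if (schemaUri.contains("draft-04")) return Version.DRAFT_4;
        if (schemaUri.contains("draft-06")) return Version.DRAFT_6;
        if (schemaUri.contains("draft-07")) return Version.DRAFT_7;
        return fallback;
    }
}
```

Per-schema detection like this would also let a draft 7 schema `$ref` a draft 4 schema, since each file would be resolved against its own declared version.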
Everit is using the $schema keyword or configuration to decide which version of the spec is used for validation. It is a good strategy but, as you said, there are drawbacks as well. With the new Draft 2019-09 (draft 8) release and the pending OpenAPI 3.1 release, it is very hard for the library maintainer to keep up with all the changes. Over the weekend I was entertaining the idea of using code generation to resolve the schema validation issue. The idea comes from Jsoniter, which uses javassist to generate the code dynamically or statically. This resolves several issues:

- It supports multiple versions of the specifications, even when a reference is in a different draft version.
- Performance will be much faster than the current implementation.
- We don't need to worry about the context issue with aggregated validators like AnyOf, OneOf, and AllOf.

The problem is that it needs a lot of time to implement and stabilize. In the long run, I can make the OpenAPI specification validation with code generation, which is just a superset of the schema validation. What do you think?
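To illustrate the code-generation idea: instead of walking keywords at validation time, a specialized validator class would be emitted per schema. This is a hand-written guess at what generated code for the schema `{"type": "string", "minLength": 3}` could look like; the class and method names are invented for illustration and nothing here is emitted by the library:

```java
// Hypothetical illustration of the code-generation idea: the schema
//   {"type": "string", "minLength": 3}
// compiled into a straight-line validator with no keyword dispatch.
public final class GeneratedStringValidator {

    private GeneratedStringValidator() {}

    public static boolean isValid(Object node) {
        // "type": "string"
        if (!(node instanceof String)) {
            return false;
        }
        // "minLength": 3
        return ((String) node).length() >= 3;
    }
}
```

The speedup comes from eliminating the generic keyword-lookup machinery: each generated class only contains the checks its schema actually needs.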
There are probably a few good ways to go about this.

If we were to stick with our current strategy, then we could just have different JsonMetaSchemas for each draft version and restrict which validators can be used based on the draft version. Maybe we could find an alternative to dynamically loading validator classes from the classpath. I think that would be a less invasive change and less work.

I can see code generation being possible, but I don't see any crucial performance benefits. Validators are currently loaded dynamically from the classpath, which is relatively slow. However, if we generate the JsonSchema for validation, it seems like it would be about as fast as the dynamically assembled JsonSchema.

The speedup gained from code generation is probably in the creation of the JsonSchema, not the validation itself. However, if the JsonSchemas are separated into reusable chunks and they get cached after being loaded, then there isn't much of a speed impact from our current strategy.

Those are my thoughts anyway. I may not be thinking the same thing as you regarding the code generation, though.
I think what you have suggested is the most realistic strategy with minimal modifications. The performance gain with code generation comes from not needing code to handle different situations: a different schema will generate different code.
Okay, I think I'm seeing the difference with the codegen now. That might be nice, but we already have the fastest json schema validator in the Java ecosystem. Is it worth rewriting instead of just doing some profiling?
This feature is very useful for our project.
The implementation of these keywords was guided by the JSON Schema Draft 7 specification.