sqlx in some cases more than 60 times slower than Diesel ORM (according to Diesel benchmark suite) #1481
Comments
Seems like the SQLite backend is the culprit.
Yeah, the core issue is that SQLite does not have a non-blocking API. Reading through the docs, you might get the impression that you can make it work in a non-blocking fashion, but you lose durability guarantees if you do that (trust me, we tried it). At the end of the day, since it's file I/O, you still have to go to a background thread at some point to perform blocking I/O, since there's still no true async file I/O (perhaps outside of io-uring).

In SQLx, the SQLite driver sends all commands to a background thread that performs the operations against the database, and it receives the results over an async channel. This obviously introduces a lot of overhead compared to calling the SQLite APIs directly, but it ensures that we never block the async runtime thread, which is something Diesel doesn't have to deal with since it's a purely synchronous API.

For the other benchmarks, we're within 2-3x the relative performance of Diesel, which isn't bad considering the sync/async difference and the fact that we really haven't put all that much time into optimizing SQLx; there's a lot of low-hanging fruit there that we're waiting on language features for: namely, … It's a bit more useful to compare SQLx to the … It's all apples to oranges.

There's also no concrete numbers (except the performance-over-time metrics, which we're decently consistent on, again except SQLite); it's all relative to Diesel's performance. Not great methodology, honestly. Those benchmarks are clearly designed with one intent in mind: to emphasize Diesel's performance over competing crates.

In the 0.7 release, which is blocked on GATs and …

As-is, this issue isn't really actionable, so I'm going to close it. It'd be interesting to see if there's any improvement when the SQLx benchmarks are switched to using a single-threaded Tokio runtime, but that's an issue to open in Diesel's repo.
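For readers following along, here's a minimal, hypothetical sketch of the pattern described above (not SQLx's actual code; the `Worker` and `Command` names are made up for illustration): blocking database work runs on a dedicated thread, and each command's result is handed back to the async caller over a oneshot channel.

```rust
use std::sync::mpsc;
use std::thread;

use tokio::sync::oneshot;

// A "command" here is just a boxed closure that does blocking work and returns a String.
// In a real driver this would be a SQLite statement to execute.
type Command = Box<dyn FnOnce() -> String + Send>;

struct Worker {
    tx: mpsc::Sender<(Command, oneshot::Sender<String>)>,
}

impl Worker {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<(Command, oneshot::Sender<String>)>();
        // Dedicated thread: this is where a real driver would own the SQLite
        // connection and perform all blocking file I/O.
        thread::spawn(move || {
            for (cmd, reply) in rx {
                let result = cmd(); // blocking work happens here, off the async runtime
                let _ = reply.send(result);
            }
        });
        Worker { tx }
    }

    // Async callers never block: they enqueue a command and await the reply.
    async fn execute(&self, cmd: Command) -> String {
        let (reply_tx, reply_rx) = oneshot::channel();
        self.tx.send((cmd, reply_tx)).expect("worker thread is alive");
        reply_rx.await.expect("worker sent a reply")
    }
}

#[tokio::main]
async fn main() {
    let worker = Worker::new();
    // Each call pays for two channel hops and a thread handoff -- the overhead
    // discussed above -- but the async runtime thread is never blocked.
    let row = worker.execute(Box::new(|| "SELECT 1 -> 1".to_string())).await;
    println!("{row}");
}
```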
@abonander Let me clarify a few things here.
I've chosen to use the async-std runtime for the sqlx benchmarks.
Those numbers are not designed to emphasize Diesel's performance over other crates. They are just a way for me (and other maintainers) to track how Diesel's performance evolves over time and how it compares to other crates providing similar functionality. It's not about being faster than x or slower than y, but about getting a general feeling for the performance characteristics of the various solutions. Sometimes this indicates issues in Diesel that can be solved, sometimes it indicates issues in other crates that can be solved (for example, the …
Feel free to open a PR for this. Generally speaking, I'm more than happy to share these benchmark implementations as a tool to track performance across all Rust database connector implementations.
Yeah, that's mostly a holdover from when SQLx was going to only support async-std. Still, it's not really a fair comparison. Were you just trying to compare how each crate performs straight out of the box?
Yeah, sorry we didn't get back to you before. At least you finally got your feedback?
I guess my issue with it is that it's just a rather crude comparison, and it's hard to see the utility even for Diesel besides emphasizing its performance over other crates.

The choice to make the graphs relative to Diesel means that you're really only going to notice differences in Diesel's performance if another crate is suddenly faster than Diesel when it wasn't before, or if the other crates all become better or worse at once between benchmark runs. It inherently means the metric you're aiming to improve, and advertise, is Diesel's competitiveness with other crates. And for the authors of the other crates, it means they have to ask themselves something like, "last month we were 2x slower than Diesel and now we're 3x slower... did we get slower or did Diesel get faster?" The fact that the …
I don't think so? It's still a linear scale, but instead of the step size being, say, multiples of 10 μs, it's multiples of Diesel's benchmark time. The graphs would have to go 1x, 10x, 100x, or 1x, 2x, 4x, 8x, etc. to be a logarithmic scale. The SQLite graphs with the massive spikes for SQLx are still on a linear scale, just multiples of 10x.
I really wish I had the time. I may open an issue on the repo if you don't mind, though.
* Use the tokio runtime for sqlx/sea-orm as requested here: launchbadge/sqlx#1481 (comment)
* Update quaint to a version that uses tokio 1.0 and re-enable it in the metrics job
At least for me it was not clear that you expected the tokio backend to perform better. It's definitely not the goal of the benchmarks to artificially slow down any implementation. Because of this, I've opened diesel-rs/diesel#2954 to use tokio instead of async_std as the runtime. Feel free to comment on the implementation there.
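For context, a rough sketch (not the actual change in diesel-rs/diesel#2954; `bench_iteration` is a placeholder) of how a benchmark harness might drive the same async workload on a multi-threaded versus a single-threaded Tokio runtime:

```rust
use std::time::Instant;

// Placeholder benchmark body; in the real suite this would run sqlx queries.
async fn bench_iteration() {
    // ... issue queries here ...
}

fn main() {
    // Multi-threaded runtime: what #[tokio::main] builds by default.
    let multi = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    // Single-threaded runtime: no cross-thread scheduling, which is closer to
    // how a synchronous crate like Diesel executes its queries.
    let single = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    for (name, rt) in [("multi_thread", &multi), ("current_thread", &single)] {
        let start = Instant::now();
        for _ in 0..1_000 {
            rt.block_on(bench_iteration());
        }
        println!("{name}: {:?}", start.elapsed());
    }
}
```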
I did change the visualization of the recorded metrics to only show a box plot of the measured times. At least I see some value in these plots, as I can see how the crates currently compare to each other. Yes, that does not answer the question of whether something got faster or slower, but I can see that x is currently faster or slower than y, which is, at least for me, important information.
OK, I feel there is a lot wrong with this statement:
That's correct, thanks for pointing that out.
Hi,
I just stumbled upon the following benchmark suite of the Diesel project:
https://github.com/diesel-rs/metrics/
In several of the benchmarks, sqlx is more than 60 times slower than Diesel. I'm wondering if you guys are aware of these benchmarks, and whether sqlx is really so much slower in some cases, or if there is a problem with the benchmarks.