
[WIP] using db-pool library to create a pool of databases#5846

Draft
momentary-lapse wants to merge 94 commits into LemmyNet:main from momentary-lapse:parallel-db-tests

Conversation

@momentary-lapse
Contributor

Addresses: #4979

Comment thread crates/db_schema/src/utils.rs Outdated
.await;

// TODO make compatible with ActualDbPool
db_pool.pull_immutable().await
Contributor Author


I created this WIP PR to share my progress and the issue I'm currently stuck on. The crate I use operates with its own structure wrapping connection pools: code
And we have our own ActualDbPool. They are pretty much the same, but it's not obvious to me how to correctly convert one to the other.
I had an idea to make ActualDbPool an enum with two possible variants, RegularPool and ReusablePool, but got stuck trying to adapt things like LemmyContext, which also requires the pool struct to be Clone (and ReusablePool is not). And it seems like a lot of changes to the main codebase for purely test changes.
Do you folks have any ideas how to handle this? Or should I stick to the initial plan without using this library?

Member


Our ActualDbPool is just a type alias for deadpool Pool<AsyncPgConnection>.

Their crate should be able to work with deadpool pools, but I'm not familiar with how to plug that into their crate; you'll have to ask them.

Contributor Author


Yeah, I see. I returned to this issue today after a week of a break. I'm in contact with the db-pool author; they're helping me understand a lot of details and are really willing to collaborate, so I think we'll make this work.

I'd like to clarify one point: do we still want build_db_pool_for_tests to return ActualDbPool? db-pool has its own wrapper, ReusableConnectionPool, which works like a deadpool Pool but is a bit different and needs adaptation. It might be easier to adapt the tests to work with ReusableConnectionPool than to convert ReusableConnectionPool to ActualDbPool.

Collaborator


The return type of build_db_pool_for_tests may be changed. Also, a DbPool variant may be added if needed.

Comment thread crates/api/api_utils/src/context.rs Outdated
#[derive(Clone)]
pub struct LemmyContext {
pool: ActualDbPool,
pool: ContextPool,
Contributor Author


This is the point that currently blocks me, and I think it's better to consult with you again. The LemmyContext struct must be Clone, therefore all its fields must be too, including the pool. Unfortunately, the reusable pool from the db-pool crate is not Clone, and I don't have access to its fields to implement the trait here.
But before asking the db-pool developer, I'd like to be sure we really need this pool cloning, especially for the tests. Cloning the pool seems a bit strange to me, but I may be missing something. I'm looking at the code now, but maybe you folks already have some insights on this.

Collaborator


Wrap it in Arc for now.
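A minimal std-only sketch of the Arc suggestion, using hypothetical stand-in types (ReusablePool and TestContext are illustrative, not the real db-pool or Lemmy types): a non-Clone pool can be shared through Arc, which lets the containing context derive Clone cheaply.

```rust
use std::sync::Arc;

// Hypothetical stand-in for a pool type that does not implement Clone,
// like the reusable pool from the db-pool crate discussed above.
pub struct ReusablePool {
    pub url: String,
}

// Wrapping the non-Clone pool in Arc lets the containing context derive
// Clone: cloning the context only bumps a reference count, so every
// clone shares the same underlying pool.
#[derive(Clone)]
pub struct TestContext {
    pub pool: Arc<ReusablePool>,
}

// Returns true when a clone of the context still points at the same pool
// allocation as the original.
pub fn clones_share_pool() -> bool {
    let ctx = TestContext {
        pool: Arc::new(ReusablePool {
            url: "postgres://localhost/test".into(),
        }),
    };
    let ctx2 = ctx.clone();
    Arc::ptr_eq(&ctx.pool, &ctx2.pool)
}

fn main() {
    println!("clones share pool: {}", clones_share_pool());
}
```

Since clones only share a reference, no Clone bound is needed on the wrapped pool type itself.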

@momentary-lapse
Contributor Author

Update: I'm working on this. I cannot devote much time to it, but it is slowly going forward, and I keep the code in the branch up to date. I connected the db-pool crate to our tests and reworked most of them. I currently have a runtime error and plan to look at it and fix it this week.
After that, what's left is to change the few tests that use the build_db_pool function.

@momentary-lapse
Contributor Author

Okay, for now I simply ignored cleanup errors in db-pool, and the tests are passing now. I tried disabling migrations/2025-08-01-000017_forbid_diesel_cli/ as Nutomic suggested, and that got rid of the diesel CLI error, but another one appeared: it tried to truncate the table deps_saved_ddl and failed. That table is in the util schema, so maybe it's a permissions issue. Anyway, I'm planning to look into it more carefully, since right now there's no guarantee the databases are even properly cleaned. But at least the underlying problem is located.

There's a small speed improvement compared to the main build (~4 min). But I suspect that's because one heavy test is still ignored (lemmy_diesel_utils schema_setup::tests::test_schema_setup), and it feels like something is preventing the tests from running truly in parallel. They definitely still interfere: I put serial back on the worker tests in lemmy_apub_send, because without that attribute they all tried to use the same port 8085 and failed.

@Nutomic
Member

Nutomic commented Mar 10, 2026

There's a small speed improvement comparing to main build (~4 min). But i suspect that's because one heavy test is still ignored (lemmy_diesel_utils schema_setup::tests::test_schema_setup) and it feels something is hindering tests from running truly in parallel.

How many tests are currently running at the same time? Not sure where that is defined, but try to change the number higher or lower and see if it helps.

Even tho now they definitely interfere: i put serial back to worker tests in lemmy_apub_send, because without that directive they all tried to use the same port 8085, and failed

It might be possible to pass the port number as parameter to make them parallel. But there are only a few tests in that crate so it can be changed later in another PR.
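One common way to implement the "pass the port number as parameter" idea, sketched here with only the standard library (the helper name free_local_port is hypothetical): each test binds to port 0 so the OS assigns a free ephemeral port, and that port is then handed to the worker under test instead of a hardcoded 8085.

```rust
use std::net::TcpListener;

// Ask the OS for a free ephemeral port by binding to port 0, then read
// back which port was actually assigned. Each test calling this gets a
// port that is free at bind time, avoiding collisions on a shared
// hardcoded port.
pub fn free_local_port() -> u16 {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind ephemeral port");
    listener.local_addr().expect("read local addr").port()
}

fn main() {
    // A worker test would pass this value in instead of hardcoding 8085.
    println!("worker test port: {}", free_local_port());
}
```

One caveat: the port is only guaranteed free while the listener is held, so there is a small race window between dropping the listener and the worker rebinding it; passing the bound listener itself to the worker avoids that entirely.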

@momentary-lapse
Contributor Author

momentary-lapse commented Mar 10, 2026

How many tests are currently running at the same time? Not sure where that is defined, but try to change the number higher or lower and see if it helps.

I hardcoded the pool size for tests to 30 for now. But neither pool size (restricted or privileged) seems to actually affect the test speed at all.

It might be possible to pass the port number as parameter to make them parallel. But there are only a few tests in that crate so it can be changed later in another PR.

Honestly, I tried to give each test its own port (changed it here and here) and run them together, but they started failing and hanging. I spent some time on it but then switched to other things.

@dessalines
Member

@Nutomic
Member

Nutomic commented May 4, 2026

Tried it locally and the tests are passing now, good job! Still needs cargo +nightly fmt --all and maybe other fixes to make CI pass.

And you need to use a proper release version of the db-pool crate in crates/diesel_utils/Cargo.toml.

Edit: Locally tests complete in 104s with this change, compared to 203s for main branch. So only half the time!


#[tokio::test]
#[serial]
#[tokio_shared_rt::test(shared = true, flavor = "multi_thread")]
Member

Nutomic commented May 4, 2026


Suggested change
#[tokio_shared_rt::test(shared = true, flavor = "multi_thread")]
#[tokio_shared_rt::test(shared, flavor = "multi_thread")]

Can be shortened like this (with search and replace on all files).

https://docs.rs/tokio-shared-rt/latest/tokio_shared_rt/attr.test.html

},
)
.await
.expect("diesel postgres backend");
Member


Don't use so many expects; instead simply use ? and return LemmyResult from this closure.
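A small std-only sketch of the pattern the review asks for (AppResult and parse_and_double are hypothetical stand-ins, not Lemmy's actual LemmyResult or code): the function returns a Result and propagates failures with ?, where the original code panicked through repeated .expect() calls.

```rust
// Hypothetical stand-in for a crate-wide result alias like LemmyResult.
type AppResult<T> = Result<T, Box<dyn std::error::Error>>;

// Instead of `input.trim().parse().expect("...")`, the `?` operator
// converts the parse error into the boxed error type and returns early,
// letting the caller decide how to handle the failure.
fn parse_and_double(input: &str) -> AppResult<i32> {
    let n: i32 = input.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    // The happy path yields a value; bad input yields an Err instead of
    // a panic mid-test.
    assert_eq!(parse_and_double(" 21 ").unwrap(), 42);
    assert!(parse_and_double("not a number").is_err());
    println!("ok");
}
```

The same shape applies to an async closure: give it a Result return type, replace each .expect() with ?, and handle the single Err at the call site.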


let backend = DieselAsyncPostgresBackend::new(
config,
|manager| Pool::builder(manager).max_size(30), //TODO use some env var
Member


Leftover TODO; anyway, this is probably fine to keep hardcoded.


#[test]
#[serial]
#[ignore]
Member


Need to enable this test again (it doesn't have to run in parallel).

@@ -0,0 +1 @@

Member


Delete this folder (it was removed from the main branch but came back somehow).

Comment on lines +2395 to +2397
let pool_arc = data.pool();
let pool_ref = &***pool_arc;
let pool = &mut pool_ref.into();
Member


Suggested change
let pool_arc = data.pool();
let pool_ref = &***pool_arc;
let pool = &mut pool_ref.into();
let pool = &build_db_pool_for_tests().await;
let pool = &mut pool.into();

Simplify it like the other tests. In this file it's all written in this overly complicated way.

@momentary-lapse
Contributor Author

Thanks for the review. I'll try to address the remarks soon-ish and contact the db-pool owner about releasing my forked changes as a new version.

@Nutomic
Member

Nutomic commented May 5, 2026

To fix the task that is currently failing in CI (lemmy_api_common_works_with_wasm), you need to change crates/diesel_utils/Cargo.toml: set optional = true for db-pool, then add db-pool under the full feature.
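A sketch of what that Cargo.toml change could look like (the version number is illustrative, and the exact shape of the existing [features] table may differ): the optional dependency is compiled only when the full feature is enabled, so wasm builds without it no longer pull db-pool in.

```toml
# Hypothetical fragment of crates/diesel_utils/Cargo.toml implementing
# the suggestion above. `dep:db-pool` enables the optional dependency
# without exposing an implicit `db-pool` feature.
[dependencies]
db-pool = { version = "1", optional = true }

[features]
full = ["dep:db-pool"]
```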
