2nd level cache GetMany ineffective for collections #3359
I enabled NHibernate logging and here is part of the log. You can see that it fetched everything and got a cache hit, but then it went back over each record, one by one: load, attempt to resolve, fetch from cache, cache hit, resolved.

2023-07-14 23:45:21,876 - collection initialized from cache
@hazzik, to me, the bug is inside NHibernate core, not inside NHibernate.Caches.

I'm just trying to eliminate variables from the equation.

If you could design the simplest possible test case, that would be great.
But of course, we shouldn't load collection entities one by one. We should batch it.

```csharp
[Test]
public void CollectionLazyInitializationFromCacheIsBatched()
{
    int count;
    using (var s = OpenSession())
    {
        var readOnly = s.Get<ReadOnly>(s.Query<ReadOnly>().Select(x => x.Id).First());
        count = readOnly.Items.Count;
    }
    var itemPersister = Sfi.GetEntityPersister(typeof(ReadOnlyItem).FullName);
    var itemCache = (BatchableCache) itemPersister.Cache.Cache;
    itemCache.ClearStatistics();
    using (var s = OpenSession())
    {
        var readOnly = s.Get<ReadOnly>(s.Query<ReadOnly>().Select(x => x.Id).First());
        count = readOnly.Items.Count;
    }
    // 6 items with batch-size = 4, so 2 GetMany calls are expected:
    // 4 items in the 1st call + 2 items in the 2nd.
    Assert.That(itemCache.GetMultipleCalls.Count, Is.EqualTo(2));
}
```

P.S. That's all from me - I'm not working on a fix.
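The batching arithmetic that the test above asserts can be sketched as a standalone snippet (a hypothetical helper, not an NHibernate API; the name `GetManyCallSizes` is illustrative):

```csharp
using System;
using System.Collections.Generic;

class BatchMath
{
    // Splits keyCount cached element keys into GetMany calls of at most
    // batchSize keys each, returning the size of each call.
    public static List<int> GetManyCallSizes(int keyCount, int batchSize)
    {
        var sizes = new List<int>();
        for (int remaining = keyCount; remaining > 0; remaining -= batchSize)
            sizes.Add(Math.Min(batchSize, remaining));
        return sizes;
    }

    static void Main()
    {
        // 6 items with batch-size 4: two GetMany calls, of 4 and 2 keys.
        Console.WriteLine(string.Join(",", GetManyCallSizes(6, 4))); // prints "4,2"
    }
}
```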
But in the first GetMany you already loaded all the values, so why keep calling Get/GetMany for each record afterwards?
The collection cache contains only entity ids, not the entities themselves.
Ah OK, now I get it. So at first it gets all the collection ids, but then it should load the values for the collections themselves in batches.
I think we can reuse parts of the query cache for this.

@EngSayed, I would suggest not using collection caches. In practice it is often faster to load a collection from the DB than from the L2 cache. This is a known problem.
That's very bad news indeed! Every entity has collections, and if I have to load collections from the DB, then why not load the entities from the DB as well?
Ok, turns out the fix is super easy... @bahusoid, thanks for the test.
Not sure when. You should be able to try a pre-release version as soon as the PR is merged.
Fixed #3365
The PR is merged; how do I get the pre-release version, please?
It's going to be published by our nightly build to Cloudsmith tomorrow: https://cloudsmith.io/~nhibernate/repos/nhibernate-core/packages/ See https://github.com/nhibernate/nhibernate-core#nightly-development-builds. Wait for the next 5.4.4-dev build, or compile it yourself from sources.
Thanks. I will try to test it later.
No. We do not have schedules for releases. We do them when they are ready, and when a contributor is available to do the release.
Are you sure 5.4.4-dev.4462 is used? Logs?
Yes, and same logs.
Here is part of the Monitor log from Redis:

As you can see, it goes one by one, not in batches, and I have
I see no one-by-one lookups for collection elements in your log. Parts like this (from the old log):

There was

Now it looks like this in your log:

So the entities are already preloaded into the session.
So why does it keep firing GetMany for only one cache key, over and over?
Don't know. Maybe an incorrect cache configuration (no GetMany script configured, or something). It's all good on the nhibernate-core side.
That cannot be right, because for the main entity it fires GetMany for a batch of keys. The number of keys is calculated by NHibernate core, so it is not all good on that side.
This is the main entity, not the collection. So it resolves the entity, but then it goes to the Redis cache for the collection one by one.
After debugging again, I think I see what is happening: after the GetMany for the main entities, it goes through each entity to load its collection. If the collection has more than one record, it issues a GetMany for that collection; then it moves on to the second main entity to load its collection, and so on.
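The access pattern described above can be modeled with a small self-contained sketch (illustrative only, not NHibernate code; `FakeCacheRegion` is a hypothetical stand-in for an L2 cache region that counts GetMany calls): parents are resolved in one batched GetMany, but each parent's collection is initialized independently, so the collection region sees one GetMany per parent.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FakeCacheRegion
{
    // Records the key count of every GetMany call this region receives.
    public readonly List<int> GetManyCallSizes = new List<int>();

    public object[] GetMany(object[] keys)
    {
        GetManyCallSizes.Add(keys.Length);
        return keys; // pretend every key is a cache hit
    }
}

class Demo
{
    static void Main()
    {
        var entityRegion = new FakeCacheRegion();
        var collectionRegion = new FakeCacheRegion();

        var parentIds = Enumerable.Range(1, 3).Select(i => (object)i).ToArray();

        // Parent entities: one batched GetMany for all keys.
        entityRegion.GetMany(parentIds);

        // Collections: each parent's bag is initialized on its own,
        // producing one GetMany per collection instead of one batch.
        foreach (var id in parentIds)
            collectionRegion.GetMany(new[] { id });

        Console.WriteLine(entityRegion.GetManyCallSizes.Count);     // prints 1
        Console.WriteLine(collectionRegion.GetManyCallSizes.Count); // prints 3
    }
}
```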
@fredericDelaporte, any news on releasing 5.4.4?
For whatever it's worth, this one is biting us too - would love to see a release to resolve this! |
It is now released. |
I believe there is an issue in the second-level cache: when NHibernate initializes a collection, it calls GetMany to get all the values for an array of keys, but then, inside the loop that Assembles them, it calls GetMany again for each item.
I have an Entity with a bag collection of EntityFields, and both have a batch-size=2000 attribute. When I try to load the EntityFields collection, NHibernate calls DefaultInitializeCollectionEventListener.InitializeCollectionFromCache, which gets 2000 CacheKeys and calls GetMany(). Up to that point everything is fine, but then it goes inside CacheEntry.Assemble, which calls InternalLoad and DoLoad, which again tries to load from the cache (LoadFromSecondLevelCache) and calls GetMany once more (call stack image attached).

My understanding is that once everything was found in the second-level cache, it should not keep calling GetMany again for each record, one by one, right?