
Bug: TenantAwareKey vs non-tenant-aware keys in cache #14

Closed · wants to merge 1 commit

Conversation

nPraml (Contributor) commented Apr 1, 2022

Hello @rbygrave,

@rPraml and I tested our application in a multi-Tomcat environment with Hazelcast and ran into caching problems (during user login).

For a user-password entity we had two entries in the cache map: one with the UUID as key (e.g. 1234) and a second one that used the TenantAwareKey (e.g. 1234:1, where :1 is the tenant information). Our next query wanted to load the entity from the cache but got the "wrong" entry, the one without the tenant information, which was outdated (so our login failed because it used the old password).
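The mismatch can be sketched with a plain map. All class names here are illustrative stand-ins, not the actual Ebean or Hazelcast types:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the key collision: the same entity ends up in the
// cache map under two different keys, one plain and one tenant-aware.
public class KeyMismatchSketch {

  // Simplified stand-in for Ebean's tenant-aware composite key.
  record CacheKey(Object key, Object tenantId) {}

  public static void main(String[] args) {
    Map<Object, Object> cacheMap = new HashMap<>();

    // One code path stores the entry under the plain UUID key...
    cacheMap.put("1234", "old-password-hash");
    // ...another stores it under the tenant-aware key ("1234" + tenant 1).
    cacheMap.put(new CacheKey("1234", 1), "new-password-hash");

    // A lookup with the plain key silently returns the stale entry:
    System.out.println(cacheMap.get("1234")); // prints old-password-hash
  }
}
```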

@rPraml implemented a temporary solution that works in our application.

```diff
- return new HzCache(map, config.getTenantProvider());
+ ServerCache hzCache = new HzCache(map);
+ if (config.getTenantProvider() != null) {
+   return new TenantAwareCache(hzCache, new TenantAwareKey(config.getTenantProvider()));
+ }
+ return hzCache;
```

@rbygrave I would suggest moving that code to Ebean and removing cache handling from the plugin completely. It acts as a proxy and converts all keys to be tenant-aware.

Unfortunately, we have to do some more or less expensive copy operations in the getAll/putAll/removeAll methods.
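Such a proxy might look roughly like this minimal sketch, assuming a key(k) helper that wraps each key with the current tenant id. All names here are illustrative, not Ebean's real ServerCache API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;

// Minimal sketch of a proxy cache that makes every key tenant-aware before
// delegating to the underlying map. Names are illustrative only.
public class TenantAwareProxySketch {

  // Stand-in for Ebean's tenant-aware composite key.
  record CacheKey(Object key, Object tenantId) {}

  private final Map<Object, Object> underlying = new HashMap<>();
  private final Supplier<Object> tenantProvider;

  public TenantAwareProxySketch(Supplier<Object> tenantProvider) {
    this.tenantProvider = tenantProvider;
  }

  // Wrap a plain key with the current tenant id.
  private Object key(Object k) {
    return new CacheKey(k, tenantProvider.get());
  }

  public void put(Object k, Object v) {
    underlying.put(key(k), v);
  }

  public Object get(Object k) {
    return underlying.get(key(k));
  }

  // The bulk operations need an extra copy pass to convert every key.
  public void removeAll(Set<Object> keys) {
    Set<Object> converted = new HashSet<>();
    keys.forEach(k -> converted.add(key(k)));
    underlying.keySet().removeAll(converted);
  }
}
```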

Note: this kind of bug is also present in other cache plugin(s), e.g. here: https://github.com/ebean-orm/ebean-ignite/blob/master/src/main/java/io/ebean/ignite/IgCache.java#L49

```java
@Override
public Map<Object, Object> getAll(Set<Object> keys) {
  Map<Object, Object> keyMapping = new HashMap<>();
  keys.forEach(k -> keyMapping.put(key(k), k));
  // ...
}
```

We have to convert the key set (e.g. 1234, 5678) to tenant-aware keys (1234:1, 5678:1),

and we have to remove the tenant info again after we get the result from the cache.

This could probably be done with tmp.forEach((k, v) -> ret.put(((CacheKey) k).key, v)), as every key should always be a CacheKey.
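Putting the two steps together, a getAll along these lines could be sketched as follows. Names are illustrative only; CacheKey stands in for Ebean's tenant-aware key, and the tenant id is fixed for the sketch:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a getAll that converts plain keys to tenant-aware keys for the
// lookup, then strips the tenant info from the result via a cast.
public class GetAllSketch {

  record CacheKey(Object key, Object tenantId) {}

  private final Map<Object, Object> underlying = new HashMap<>();
  private final Object tenantId = 1; // fixed tenant for the sketch

  private Object key(Object k) {
    return new CacheKey(k, tenantId);
  }

  public void put(Object k, Object v) {
    underlying.put(key(k), v);
  }

  public Map<Object, Object> getAll(Set<Object> keys) {
    // Convert the plain key set to tenant-aware keys.
    Set<Object> converted = new HashSet<>();
    keys.forEach(k -> converted.add(key(k)));

    // Look each one up, then map back to the plain key the caller passed in.
    Map<Object, Object> ret = new HashMap<>();
    converted.forEach(ck -> {
      Object v = underlying.get(ck);
      if (v != null) {
        ret.put(((CacheKey) ck).key(), v); // every key is a CacheKey here
      }
    });
    return ret;
  }
}
```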

Member

> ret.put(((CacheKey) k).key ...

Yes, that looks better to me.

rbygrave (Member) commented Apr 5, 2022

So yes I agree, TenantAwareCache should move into ebean-api or ebean-core.

rPraml commented Apr 6, 2022

I see that you are currently doing a lot of refactoring and releasing all the modules.
@rbygrave Could you do that in ebean 13, please?

rbygrave (Member) commented Apr 6, 2022

> Could you do that in ebean 13, please?

Do what? Move TenantAwareCache I am thinking?

rbygrave (Member) commented Apr 6, 2022

FYI: I am adding TenantAwareCache ... and making the appropriate adjustments ...

rbygrave (Member) commented Apr 7, 2022

Refer: ebean-orm/ebean#2638

rbygrave (Member) commented Apr 7, 2022

Fixed by a0fbfb2

@rbygrave rbygrave self-assigned this Apr 7, 2022
@rbygrave rbygrave added this to the 13.2.0 milestone Apr 7, 2022
@rbygrave rbygrave added the bug label Apr 7, 2022
@rbygrave rbygrave closed this Apr 7, 2022
rPraml commented Apr 8, 2022

@rbygrave thank you for the fix. We are going to migrate to ebean 13 next week.

3 participants