
Bounded cache #33

Open
kantp wants to merge 4 commits into main
Conversation

@kantp

kantp commented Jul 9, 2015

This is my proposal for #32. It enables users of Haxl to use caches of bounded size, but keeps "cache everything" as the default behaviour.

Please let me know if you have any remarks or want me to change anything.

Philipp Kant added 3 commits July 9, 2015 12:00
There is a new typeclass, CacheableSource, with a single function,
cacheSize. It specifies the cache size, in number of requests, for a
given data source: 'Nothing' indicates an unbounded cache, while
'Just n' means that at most n requests are cached.

Bounded caches make it possible to handle large data volumes, at the
cost of the consistency guarantee that the cache normally provides in
Haxl.
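
A rough sketch of what such a class might look like, based only on the commit message; the proxy argument and the default method are my assumptions, not necessarily the PR's code:

class CacheableSource req where
  -- 'Nothing' keeps the old "cache everything" behaviour;
  -- 'Just n' caches at most n requests for this data source.
  cacheSize :: proxy req -> Maybe Int
  cacheSize _ = Nothing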

The bounded cache is based on the psqueues package.
Also, clearly state in the docs that choosing a bounded cache sacrifices
replayability.
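
To make the psqueues-based design concrete, here is a minimal, illustrative sketch of an LRU-style bounded cache built on Data.HashPSQ, where the priority is a logical access time and the least recently used entry is evicted once capacity is exceeded. The FiniteCache name matches the module mentioned later in this PR, but the fields and eviction policy shown here are assumptions, not the PR's actual code.

import qualified Data.HashPSQ as HashPSQ
import           Data.Hashable (Hashable)

-- Illustrative only: an LRU-flavoured bounded cache on top of psqueues.
data FiniteCache k v = FiniteCache
  { fcCapacity :: !Int                        -- maximum number of entries
  , fcTick     :: !Int                        -- logical clock, used as priority
  , fcQueue    :: !(HashPSQ.HashPSQ k Int v)  -- priority = last access time
  }

insert :: (Hashable k, Ord k) => k -> v -> FiniteCache k v -> FiniteCache k v
insert k v (FiniteCache cap tick q) =
  let q'  = HashPSQ.insert k tick v q
      -- evict the entry with the smallest priority (least recently used)
      -- if this insert pushed the cache over capacity
      q'' = if HashPSQ.size q' > cap then HashPSQ.deleteMin q' else q'
  in FiniteCache cap (tick + 1) q''
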
DataCache.lookup now has result Maybe (res a, DataCache res) instead of
Maybe (res a).
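
Presumably the reason lookup now returns the cache as well is that a bounded cache has to update its recency bookkeeping on every hit. A sketch of the changed shape, with the Typeable constraint assumed from Haxl's existing DataCache API:

lookup
  :: Typeable (r a)
  => r a                            -- the request to look up
  -> DataCache res                  -- the cache to search in
  -> Maybe (res a, DataCache res)   -- result plus the updated cache
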
@facebook-github-bot

Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have you on file. In order for us to review and merge your code, please sign up at https://code.facebook.com/cla - and if you have received this in error or have any questions, please drop us a line at [email protected]. Thanks!

@@ -1,5 +1,5 @@
loadCache :: GenHaxl u ()
loadCache = do
cacheRequest (CountAardvarks "yyy") (except (LogicError (NotFound "yyy")))
cacheRequest (CountAardvarks "xxx") (Right (3))
@kantp (Author)

The order in which the cacheRequest calls are listed is changed by 3f64d48,
so I changed this file to match what I get. As far as I can tell, the order is not relevant, is it?

- Use HashMap.insertWith again in the cache. That's how it was
  originally done before introducing bounded caches, and it should be
  slightly faster (see the sketch after this list). Also, for some
  reason, using a HashMap.lookup required additional constraints on the
  request type that were problematic on older GHC versions.
- Make FiniteCache.insert strict in key and value.
- Add an export list to the module Haxl.Core.FiniteCache.
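
For readers who don't have the unordered-containers API in mind, a small generic illustration (not the PR's code) of why insertWith can be slightly faster than a lookup followed by an insert: it handles both the "key present" and "key absent" cases in a single traversal of the map.

import qualified Data.HashMap.Strict as HashMap
import           Data.HashMap.Strict (HashMap)

-- Two traversals: one for the lookup, one for the insert.
bumpTwoPasses :: String -> HashMap String Int -> HashMap String Int
bumpTwoPasses k m = case HashMap.lookup k m of
  Nothing -> HashMap.insert k 1 m
  Just n  -> HashMap.insert k (n + 1) m

-- One traversal: when the key is present, insertWith applies the
-- combining function to the new value and the existing value.
bumpOnePass :: String -> HashMap String Int -> HashMap String Int
bumpOnePass k m = HashMap.insertWith (\_new old -> old + 1) k 1 m
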
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

@simonmar
Contributor

Taking a glance over this, it is slightly intrusive and touches some performance-critical bits (e.g. DataCache.lookup). We'll need to be careful it doesn't impact performance.

@kantp
Author

kantp commented Jul 17, 2015

Yes, unfortunately it's rather intrusive. I'm also not sure whether the use case this opens up, transferring large amounts of data, is something that other Haxl users will want to do.

@jkomyno

jkomyno commented Jul 10, 2020

@simonmar, @kantp Hi, any update on this?

@kantp
Author

kantp commented Jul 10, 2020

Hi @jkomyno,
no news from my side. I have moved away from the project I had used this in (via a fork of the repository with this patch), so I didn't pursue it any further. If it's deemed useful, I'd be happy to help get it merged, though. The performance concerns are a valid point, of course; we'd need to make sure performance is not impacted by this.

@simonmar
Contributor

@jkomyno in principle I have no objection to including a bounded cache of some kind, but it needs to have zero (or close to zero) overhead.
