Discussion: Support for Multiple Datastores #2747
Comments
I had done some work a couple of months ago on making it much easier to modify the datastores used by ipfs, changing the config to look (roughly) like this: https://gist.github.com/whyrusleeping/0252846bc655632449b1
Thanks, @whyrusleeping. From what I understand, we have the infrastructure in place for using different datastores based on the key prefix. What I am proposing is to allow multiple datastores under the "/blocks" prefix, which is what I need for the filestore. For reads, each datastore is looked up in sequence until the block is found. Think of it as something like UnionFS for the IPFS datastore.
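The sequential lookup described above can be sketched as a tiered store. This is a minimal illustration only: the `Datastore` interface and `Tiered` type below are simplified stand-ins invented for this sketch, not the real go-datastore API.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound is returned when no datastore in the stack has the block.
var ErrNotFound = errors.New("block not found")

// Datastore is a deliberately minimal stand-in for the real
// go-datastore interface; this method set is an assumption
// made for illustration only.
type Datastore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// MapStore is an in-memory Datastore used for the sketch.
type MapStore struct{ m map[string][]byte }

func NewMapStore() *MapStore { return &MapStore{m: make(map[string][]byte)} }

func (s *MapStore) Put(key string, value []byte) error { s.m[key] = value; return nil }

func (s *MapStore) Get(key string) ([]byte, error) {
	v, ok := s.m[key]
	if !ok {
		return nil, ErrNotFound
	}
	return v, nil
}

// Tiered reads from each datastore in sequence, UnionFS-style:
// the first store that has the block wins.
type Tiered struct{ stores []Datastore }

func (t *Tiered) Get(key string) ([]byte, error) {
	for _, ds := range t.stores {
		if v, err := ds.Get(key); err == nil {
			return v, nil
		}
	}
	return nil, ErrNotFound
}

func main() {
	cache, permanent := NewMapStore(), NewMapStore()
	permanent.Put("/blocks/QmExample", []byte("data"))

	union := &Tiered{stores: []Datastore{cache, permanent}}
	v, _ := union.Get("/blocks/QmExample") // cache misses, permanent hits
	fmt.Printf("%s\n", v)
}
```

Note that write routing is a separate question from this read path; the union only has to decide where lookups go, not where new blocks land.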
Closing this in favor of #3119.
It would be really nice if IPFS could support using more than one datastore at the same time for storing blocks, for example a datastore for an active cache and another for more permanent data, perhaps on a read-only filesystem. Some form of multiple datastore support is required for the filestore I am working on in pull request #2634 (towards issue #875).
There are many open issues on how to handle this. The point of this issue is to open a discussion. I intend to implement something once there is some sort of agreement on the semantics.
Assuming this is something that is wanted, let's start off the discussion with this:
How should the pinner and garbage collector interact with multiple datastores? As I see it, there should be a designated datastore for the cache, and the garbage collector should only operate on that datastore. It should ignore blocks in other datastores, with the possible exception of reading blocks from them to resolve recursive pins. Blocks in all other datastores should be considered implicitly pinned.
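The proposed GC contract can be sketched as follows. This is a minimal sketch assuming a simplified in-memory `Store` type and a hypothetical `GC` function; neither corresponds to an existing ipfs interface.

```go
package main

import (
	"fmt"
	"sort"
)

// Store is a minimal in-memory block store; the names here are
// illustrative, not the real ipfs interfaces.
type Store map[string][]byte

// GC sweeps only the designated cache store: any cache block that is
// not explicitly pinned is removed. The `others` parameter exists only
// to emphasize the contract -- the collector never enumerates those
// stores, so their blocks are implicitly pinned by construction.
func GC(cache Store, others []Store, pinned map[string]bool) []string {
	var removed []string
	for key := range cache {
		if pinned[key] {
			continue // explicitly pinned: keep in cache
		}
		delete(cache, key)
		removed = append(removed, key)
	}
	sort.Strings(removed)
	return removed
}

func main() {
	cache := Store{"QmA": nil, "QmB": nil}
	permanent := Store{"QmC": nil}

	removed := GC(cache, []Store{permanent}, map[string]bool{"QmA": true})
	fmt.Println(removed, len(permanent)) // only the unpinned cache block goes
}
```

One open detail this sketch does not settle is whether a block present in both the cache and a permanent store may be dropped from the cache, since it would survive in the other store either way.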
To support the view of one datastore as the cache, new blocks should be written to the cache by default, and explicit API calls should be made to add blocks to other datastores or to move blocks from the cache to other datastores.
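The default-to-cache write path plus explicit promotion could look like this. All names here (`Manager`, `PutTo`, `Move`, the "permanent" store name) are hypothetical, chosen only to illustrate the proposed semantics.

```go
package main

import (
	"errors"
	"fmt"
)

// Store is a minimal in-memory block store used for this sketch.
type Store map[string][]byte

// Manager routes writes: new blocks land in the cache unless an
// explicit call targets another datastore.
type Manager struct {
	Cache Store
	Named map[string]Store // non-cache datastores, keyed by a hypothetical name
}

// Put writes to the cache datastore by default.
func (m *Manager) Put(key string, value []byte) { m.Cache[key] = value }

// PutTo writes directly to a named non-cache datastore.
func (m *Manager) PutTo(name, key string, value []byte) error {
	ds, ok := m.Named[name]
	if !ok {
		return fmt.Errorf("no such datastore: %s", name)
	}
	ds[key] = value
	return nil
}

// Move relocates a block from the cache to a named datastore, making
// it implicitly pinned from the garbage collector's point of view.
func (m *Manager) Move(name, key string) error {
	v, ok := m.Cache[key]
	if !ok {
		return errors.New("block not in cache")
	}
	if err := m.PutTo(name, key, v); err != nil {
		return err
	}
	delete(m.Cache, key)
	return nil
}

func main() {
	m := &Manager{Cache: Store{}, Named: map[string]Store{"permanent": {}}}
	m.Put("QmA", []byte("hot")) // default write: goes to the cache
	m.Move("permanent", "QmA")  // explicit promotion out of the cache
	fmt.Println(len(m.Cache), len(m.Named["permanent"]))
}
```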
Thoughts?