
Cannot restart daemon after files API ops and repo gc #2698

Closed · kevina opened this issue May 15, 2016 · 16 comments

@kevina (Contributor) commented May 15, 2016

$ ipfs version
ipfs version 0.4.1
$ ipfs daemon &
$ mkdir /tmp/adir
$ echo "hello world" > /tmp/adir/hello.txt
$ ipfs add -r /tmp/adir
added QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o adir/hello.txt
added QmfLiVjH2vujCVP2e75zyzBYmpcjktmDeU1YBz6Ct8BBsc adir
$ ipfs files cp /ipfs/QmfLiVjH2vujCVP2e75zyzBYmpcjktmDeU1YBz6Ct8BBsc /adir
$ ipfs pin rm QmfLiVjH2vujCVP2e75zyzBYmpcjktmDeU1YBz6Ct8BBsc
$ ipfs repo gc
$ kill %1
$ ipfs daemon

The final command, ipfs daemon, hangs. When I press Ctrl-C I get:

Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
10:41:21.661 ERROR   cmd/ipfs: error from node construction: error loading filesroot from DAG: Failed to get block for QmWikN9opNVqoFhD8D4ERt6QSBFBaRwvVoc3wdfYJExpLh: context canceled daemon.go:257
Error: error loading filesroot from DAG: Failed to get block for QmWikN9opNVqoFhD8D4ERt6QSBFBaRwvVoc3wdfYJExpLh: context canceled

This is very likely related to #2697.

@kevina (Contributor, Author) commented May 15, 2016

Note that after the steps above, nothing works, even when offline:

$ ipfs files ls
Error: error loading filesroot from DAG: merkledag: not found
$ ipfs files flush
Error: error loading filesroot from DAG: merkledag: not found
$ ipfs add -r /tmp/adir/
Error: error loading filesroot from DAG: merkledag: not found

To fix this I need to somehow remove the key "/local/filesroot" from ".ipfs/datastore/". Alternatively, I can start over by deleting the ".ipfs/datastore" directory, or, if I don't care about the cache or my node ID, delete ".ipfs" entirely and run ipfs init again.
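
A hypothetical repair sketch (not an official tool): assuming the default go-ipfs repo layout, where everything except blocks lives in a leveldb datastore at ~/.ipfs/datastore and go-ds-leveldb stores the files root under the literal key bytes "/local/filesroot", a small Go program using goleveldb can delete that key. The repo path below is an example, and the daemon must be stopped first.

package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// Assumption: default leveldb datastore; the daemon must not be
	// running, since it holds a lock on the repo.
	db, err := leveldb.OpenFile("/home/user/.ipfs/datastore", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Assumption: the files root is stored under this literal key.
	if err := db.Delete([]byte("/local/filesroot"), nil); err != nil {
		log.Fatal(err)
	}
	log.Println("removed /local/filesroot; the daemon should create a fresh files root on next start")
}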

@Kubuxu added the kind/bug (a bug in existing code, including security flaws) label on May 15, 2016
@Kubuxu (Member) commented May 15, 2016

Can you also give us the output of ipfs version --commit?

@kevina (Contributor, Author) commented May 15, 2016

$ ipfs version --commit
ipfs version 0.4.1-

@whyrusleeping (Member)

Hrm... we should probably do things a little differently. The files API isn't designed to pin content added to it, but we should probably 'pin' at least the directories (or maybe the top-level directory) to ensure things like this don't happen.

@kevina (Contributor, Author) commented May 16, 2016

If something is accessible via the files API, I as a user would not want it to be garbage collected. See #2697. A recursive pin on the root should indirectly pin anything accessible via the files API. If a file is appended to or modified, I don't care if the old content gets garbage collected; it is only the current version I care about.

@whyrusleeping (Member)

Think of this use case: I have a large (TB-scale) directory I want to be able to work on without having to have it all locally. I can do ipfs files cp /ipfs/QmThatLargeDir /place/in/my/files/api and that operation completes almost immediately. I can access content in there and have it fetched as needed. If we force a pin on everything in the files API space, we would have to download all content that we link to, preventing us from doing interesting things like working locally on datasets we couldn't possibly have the storage space for.

Pinning definitely needs some work though, and I think the solution here is to have a files-API-specific pinning command that pins content by path instead of by hash directly.

@kevina (Contributor, Author) commented May 16, 2016

Then we need some way to distinguish between files added locally via the files API and files copied/linked in from somewhere else. The latest version of any file added via "files write" should be pinned, while old versions should be allowed to be garbage collected.

@whyrusleeping (Member)

We could do a 'best effort' pin on the files API content. Essentially, when gathering pins, we take every block referenced by the files API that is local and add it to the 'do not gc' set.
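
A minimal sketch of that walk, using hypothetical stand-in types rather than the actual go-ipfs internals: starting from the files-API root, follow only links whose blocks are already local, and add each visited block to the set the garbage collector must not delete. Missing blocks are skipped rather than fetched, so nothing is pulled from the network.

package gc

// Cid is a stand-in for a block key (a multihash in go-ipfs).
type Cid string

// Node is a stand-in for a decoded DAG node: its key plus links to children.
type Node struct {
	Key   Cid
	Links []Cid
}

// Blockstore is a stand-in for local block storage.
type Blockstore interface {
	Has(c Cid) bool         // is the block already local?
	Get(c Cid) (Node, bool) // decode a local block into a node
}

// bestEffortPinSet walks the DAG rooted at root, following only links whose
// blocks are present locally, and returns the set of keys to protect from GC.
func bestEffortPinSet(bs Blockstore, root Cid) map[Cid]struct{} {
	keep := make(map[Cid]struct{})
	stack := []Cid{root}
	for len(stack) > 0 {
		c := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if _, seen := keep[c]; seen {
			continue
		}
		if !bs.Has(c) {
			continue // not local: "best effort" means skip, don't fetch
		}
		keep[c] = struct{}{}
		if n, ok := bs.Get(c); ok {
			stack = append(stack, n.Links...)
		}
	}
	return keep
}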

@Kubuxu (Member) commented May 17, 2016

This is quite a good idea, in my opinion.

@kevina (Contributor, Author) commented May 17, 2016

Doing a 'best effort' pin would work. The only problem is in the use case of a large (TB-scale) directory: if parts of the directory make their way into the local cache, there will be no way to remove them from the cache without also removing them from the files API. For now, I guess, we can ignore this problem, as I don't think it will come up that often.

@whyrusleeping (Member)

Yeah, @kevina, I can see that being a problem, but I think you're right, it likely won't come up that often yet.

@kevina (Contributor, Author) commented May 17, 2016

Okay, should I try to implement something? For now I will just add a special case and read the /local/filesroot hash. In the future we can think about generalizing it, maybe even adding a new 'best effort' pin type.

@kevina (Contributor, Author) commented May 24, 2016

@whyrusleeping @Kubuxu I would like to move forward on this. Eventually I would like a new general purpose "best effort" pin type as that will work nicely with my new filestore (#2634), but I can start small and just see how much work it will be to support a best effort pin as a special case for the files API.

Does this need more input from others before something is implemented?

@RichardLitt (Member)

@kevina: @whyrusleeping is on vacation at the moment, and he should be back early next week. Just so you know!

@whyrusleeping (Member)

@kevina, we can probably start by just augmenting what we pass into the GC function with the computed 'best effort' pinset from the files API. I agree though that having a 'best effort' pin type would be really nice, and we should get to that at some point.
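
Continuing the stand-in sketch above (the real go-ipfs GC entry point takes different arguments), that augmentation could look like unioning the regular pin set with the best-effort set before the mark phase:

// gcKeepSet unions the regular pin set with the best-effort set computed
// from the files-API root, so the GC marker treats files-API content that
// is already local as reachable.
func gcKeepSet(bs Blockstore, pinned []Cid, filesRoot Cid) map[Cid]struct{} {
	keep := bestEffortPinSet(bs, filesRoot)
	for _, c := range pinned {
		keep[c] = struct{}{}
	}
	return keep
}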

kevina added a commit to ipfs-filestore/go-ipfs that referenced this issue Jun 20, 2016
Closes ipfs#2697.  Closes ipfs#2698.

License: MIT
Signed-off-by: Kevin Atkinson <[email protected]>
@tonycai commented Jun 6, 2018

I have the same issue when executing ipfs daemon:

Error: error loading filesroot from DAG: Failed to get block for QmR5soSwhinniUbAFU52YWChnHZo4u95WopMZ11j3rsgmn: context canceled

tonycai@dolphin:~$ ipfs version --commit
ipfs version 0.4.14-
OS: Ubuntu 16.04

How do I fix it? Thank you so much!

ariescodescream pushed a commit to ariescodescream/go-ipfs that referenced this issue Apr 7, 2022
Closes ipfs#2697.  Closes ipfs#2698.

License: MIT
Signed-off-by: Kevin Atkinson <[email protected]>