Improve IPFS memory usage #52
I found some hints for reducing memory usage in these GitHub comments:
Changed title from "IPFS memory usage" to "Improve IPFS memory usage"

By the way, the config contains lots of settings that we most likely want to change for the small DO boxes that are supposed to mostly keep our own data replicated, e.g. the minimum and maximum number of peer connections: https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#basic-connection-manager
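For reference, those connection limits live under the Basic Connection Manager section of the go-ipfs config and can be set with the `ipfs config` CLI. A minimal sketch, with values that might suit a small DO box (the exact numbers were not settled in this thread):

```sh
# Sketch: tighten the connection manager on a small box (values are illustrative)
ipfs config Swarm.ConnMgr.Type basic
ipfs config --json Swarm.ConnMgr.LowWater 20
ipfs config --json Swarm.ConnMgr.HighWater 40
ipfs config Swarm.ConnMgr.GracePeriod 20s
# restart the ipfs daemon afterwards for the new limits to take effect
```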
Cool, giving these a try manually on barnard
I tried all of these, and memory usage was still climbing to 1GB within only a few minutes. I have tried something else: removing all the bootstrap nodes from the config. After this change, the ipfs process is using ~150MB after 30 minutes; let's see how this evolves.
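(For anyone reproducing this: the bootstrap list can be cleared, and later restored, with the stock go-ipfs commands below. This is just the generic CLI, not a log of what was run on barnard.)

```sh
# remove all bootstrap peers from the config
ipfs bootstrap rm --all
# restore the default bootstrap list later, if needed
ipfs bootstrap add --default
```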
I think every ipfs node needs to have the other members of the cluster as bootstrap nodes, in case they don't have 100% uptime. If the cluster is up, they get all the pins; but if the cluster daemon is down while a document is created, they wouldn't be able to load the document.
How does that make sense, when we create documents through the cluster node's API instead of the IPFS daemon one? Doesn't that mean you cannot create docs when the cluster process is down?
No, I'm just saying if you're not connected to the cluster when the document is created, you wouldn't be able to get the document after the fact (if you're not connected to any of our nodes that have the document)
Update: even removing all bootstrap nodes from the config doesn't help with the ridiculous memory consumption. After 4 hours, the ipfs daemon on barnard is using 625M of RAM.
I don't understand what this means. Who is "you" in this case? And why wouldn't one be able to get a document, when it's available on the IPFS network in general?
There's a WIP PR for go-ipfs that's supposed to fix the memory leaks caused by storing the peerstore in memory: https://github.com/ipfs/go-ipfs/pull/6080
On barnard go-ipfs is now using 716M of RAM after 15 hours
Yeah I got confused, even without bootstrap nodes it's still connecting to the network
In the meantime we can use Systemd's MemoryHigh and MemoryMax to keep the memory usage of go-ipfs under control: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryHigh=bytes
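A minimal sketch of how that could be applied, assuming the daemon runs under a unit called `ipfs.service` and picking arbitrary limits (neither the unit name nor the numbers are from this thread):

```sh
# Cap go-ipfs memory via systemd resource control; set-property persists the
# values as a drop-in and applies them to the running unit immediately.
sudo systemctl set-property ipfs.service MemoryHigh=300M MemoryMax=400M
```

Equivalently, the same two settings could go into a `[Service]` drop-in via `systemctl edit ipfs.service`.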
Testing it on barnard
We have significantly lowered memory usage on barnard by reducing the minimum and maximum number of connections and disabling bandwidth metrics. The daemon is using 80MB of RAM after 40 minutes.
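The bandwidth-metrics part corresponds to the documented Swarm.DisableBandwidthMetrics option (the connection limits are the ConnMgr settings sketched above). Whether it was set exactly like this on barnard isn't recorded here, but the generic command is:

```sh
# disable libp2p bandwidth metrics collection
ipfs config --json Swarm.DisableBandwidthMetrics true
```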
The master branch of our chef repo now uses this WIP PR in the ipfs-cookbook repo: https://github.com/67P/ipfs-cookbook/pull/4 (I need to make the attributes configurable, but first we're continuing the migration)
We forgot to close this one. We merged https://github.com/67P/ipfs-cookbook/pull/4
There's no PR here since the changes were made in an external cookbook, so I'm adding a label and assigning @raucao and myself to this issue.