Update ipfs-cluster to 0.10.1 #48

Closed
opened 2019-04-29 13:06:17 +00:00 by greg · 7 comments
Owner

https://github.com/ipfs/ipfs-cluster/blob/v0.10.1/CHANGELOG.md#v0101---2019-04-10

0.10.0 switched away from raft

The raft logs on dev appear to be broken: commands like `ipfs-cluster-ctl pin ls $HASH` fail with "cluster: the state on this node is not consistent", so we probably need to bootstrap the node again before we can upgrade: https://cluster.ipfs.io/documentation/upgrades/#troubleshooting-upgrades
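
For reference, the recovery path from that troubleshooting page boils down to roughly this (a sketch only; the systemd unit name, backup filename, and bootstrap multiaddr are placeholders, not our real values):

```sh
# Sketch of the recovery steps from the upgrade troubleshooting docs;
# the unit name and the bootstrap multiaddr below are placeholders.

# Try to save the pinset first (this may fail if the state is too broken).
ipfs-cluster-service state export -f pinset-backup.json

# Stop the daemon and wipe the inconsistent raft state.
systemctl stop ipfs-cluster
ipfs-cluster-service state cleanup

# Rejoin by bootstrapping against a healthy peer.
ipfs-cluster-service daemon --bootstrap \
  /ip4/192.0.2.10/tcp/9096/ipfs/QmHealthyPeerID
```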

Author
Owner

I have a battle plan (rough commands sketched after the list):

  • Switch the nginx vhost for ipfs.kosmos.org to port 5001 temporarily (connect to ipfs instead of the ipfs-cluster daemon)
  • Remove andromeda from the current cluster, leaving dev the only member
  • Update ipfs-cluster to 0.10.1 on andromeda, initialize a new cluster on andromeda
  • Stop the current cluster on dev, update ipfs-cluster to 0.10.1, join andromeda's new cluster
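
Roughly what I have in mind, as commands (peer IDs, unit names, and the nginx upstream are placeholders, not our real values):

```sh
# Hypothetical walkthrough of the plan above; peer IDs, unit names, and
# the nginx upstream line are placeholders.

# On dev: point the ipfs.kosmos.org vhost at the go-ipfs API port
# instead of the cluster proxy (e.g. proxy_pass http://127.0.0.1:5001;),
# then reload nginx:
nginx -t && systemctl reload nginx

# On dev: drop andromeda from the old cluster.
ipfs-cluster-ctl peers rm QmAndromedaPeerID

# On andromeda: after upgrading the package to 0.10.1, start a fresh
# cluster (init writes a new service.json with a new cluster secret).
ipfs-cluster-service init
systemctl start ipfs-cluster

# On dev: stop the old peer, upgrade to 0.10.1, clear the broken raft
# state, and join the new cluster (dev's service.json needs andromeda's
# new cluster secret first).
systemctl stop ipfs-cluster
ipfs-cluster-service state cleanup
ipfs-cluster-service daemon --bootstrap \
  /dns4/andromeda.example/tcp/9096/ipfs/QmAndromedaPeerID
```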
Owner

You do realize that we want to move away from dev in the first place, right? Hence, we have a whole new machine to play with (barnard.kosmos.org), which will contain both hal and ipfs.

Owner

Oh, and also we want to deploy a new XMPP hal and move hubot-kredits over to that instead of the IRC one...

Author
Owner

Yes, I realize that, but that doesn't mean we should do multiple things at the same time.

Owner

Why not? It actually makes this upgrade much easier if we don't have to mess around with the existing node that is in use.

greg self-assigned this 2019-04-30 11:02:04 +00:00
Owner

Hasn't this been finished?

Author
Owner

Closed by #49

greg closed this issue 2019-05-02 09:17:25 +00:00