IPFS usage roadmap #148
Discussed with @bumi yesterday how to proceed with IPFS and caching in kredits-web. There are several sides to it. Currently planned steps:
1. Run `ipfs.kosmos.org` on port 443, read-only by default, and use the normal URL schema of `https://ipfs.kosmos.org/ipfs/$hash` for GET requests. Aside from being able to remove the entire IPFS client lib dependency in `kredits-web`/`contracts`, this also enables using the IPFS Companion browser extension for intercepting GETs and loading them from a local IPFS node.
2. Use `fetch()` instead of `ipfs.cat()` for fetching documents, in case no custom IPFS config was handed in to the Kredits instance on creation.

Steps 1 and 2 would be the first phase, in order to not load the same IPFS documents over and over again in `kredits-web`. The second phase is then to make the architecture fully decentralized:

3. Use `ipfs.kosmos.org` for our docs to be pinned.
4. Remove the default IPFS config from `kredits-contracts` and require it to be handed in to the `Kredits` prototype instance, in order for write operations to work. Then require contributors to bring a local IPFS node (or a remote one they have access to), if they want to do write operations in `kredits-web` (optional).

This solves the basic problems and also makes the entire application completely unhosted/decentralized again, as anyone can use it from any source they want, with any IPFS and Ethereum node, and no write permission needed on a specific IPFS gateway.
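The read-only GET path could look roughly like this. This is only a sketch: `fetchDocument` and the fallback behavior are hypothetical, not the actual kredits-contracts API.

```javascript
// Sketch: fetch documents via plain HTTP GET against the public gateway
// when no custom IPFS config was handed in. The function name and the
// fallback logic are illustrative assumptions.
const GATEWAY = 'https://ipfs.kosmos.org/ipfs/';

async function fetchDocument(hash, ipfs = null) {
  if (ipfs) {
    // A custom IPFS instance was handed in: use the client API directly
    const buf = await ipfs.cat(hash);
    return JSON.parse(buf.toString());
  }
  // Read-only default: a plain GET against the gateway. IPFS Companion
  // can intercept this request and serve it from a local node instead.
  const res = await fetch(GATEWAY + hash);
  if (!res.ok) throw new Error(`Gateway returned ${res.status}`);
  return res.json();
}
```

With this in place, the IPFS client lib is only needed when an instance is explicitly passed in for write operations.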
Afterwards, to further improve caching and loading times, we can then easily implement a ServiceWorker, which caches the documents in the application cache and intercepts the `fetch` requests before they even try to validate anything.

Just FYI: edited/updated the original post considerably. This comment is just for notifications to work.
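The ServiceWorker idea could be sketched as follows. Cache name and URL pattern are assumptions for illustration; the key point is that IPFS content is immutable per hash, so a cache hit never needs revalidation.

```javascript
// Sketch of the ServiceWorker phase: answer repeated gateway requests
// from the Cache API without hitting the network. Names are made up.
const CACHE_NAME = 'ipfs-docs-v1';

// Pure helper: does this URL point at an IPFS gateway path?
function isIpfsRequest(url) {
  return new URL(url).pathname.startsWith('/ipfs/');
}

async function cachedFetch(request) {
  const cache = await caches.open(CACHE_NAME);
  const hit = await cache.match(request);
  if (hit) return hit; // immutable content: a cache hit is always valid
  const response = await fetch(request);
  if (response.ok) await cache.put(request, response.clone());
  return response;
}

// In the ServiceWorker script (e.g. sw.js), wire the handler up like:
// self.addEventListener('fetch', (event) => {
//   if (isIpfsRequest(event.request.url)) {
//     event.respondWith(cachedFetch(event.request));
//   }
// });
```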
PR for the gateway config: kosmos/chef#76
So, now that we tested GET requests with caching, it turns out those are still quite slow, even just loading all data from disk (on my few-years-old SSD). So we should probably implement IndexedDB caching soon (which is easy, as we never have to invalidate the cache). Alternatively, we could try ServiceWorker's application cache.
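A minimal IndexedDB cache could look like this. Database and store names are made up for the example; since documents are addressed by hash, there is no invalidation logic at all.

```javascript
// Sketch of an IndexedDB cache for IPFS documents. Entries are keyed by
// hash, so a cached entry can never go stale. All names are illustrative.
const DB_NAME = 'kredits-ipfs-cache';
const STORE = 'documents';

function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

function idbGet(db, hash) {
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE).objectStore(STORE).get(hash);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

function idbPut(db, hash, doc) {
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE, 'readwrite').objectStore(STORE).put(doc, hash);
    req.onsuccess = () => resolve();
    req.onerror = () => reject(req.error);
  });
}

// Check the local cache first; only hit the gateway on a miss.
async function getDocument(hash) {
  const db = await openDb();
  const cached = await idbGet(db, hash);
  if (cached) return cached;
  const doc = await (await fetch(`https://ipfs.kosmos.org/ipfs/${hash}`)).json();
  await idbPut(db, hash, doc);
  return doc;
}
```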