IPFS usage roadmap #148

Open
opened 2019-07-01 10:42:15 +00:00 by raucao · 3 comments
raucao commented 2019-07-01 10:42:15 +00:00 (Migrated from github.com)

Discussed with @bumi yesterday how to proceed with IPFS and caching in kredits-web. There are multiple sides to it. Current planned steps:

* [x] Make `ipfs.kosmos.org` on port 443 read-only by default, and use the normal URL schema of `https://ipfs.kosmos.org/ipfs/$hash` for GET requests. Aside from being able to remove the entire IPFS client lib dependency in `kredits-web`/`contracts`, this also enables using the IPFS Companion browser extension to intercept GETs and load them from a local IPFS node.
* [x] Set cache headers so responses never need revalidation (e.g. `Cache-Control: public, max-age=31536000, immutable`). This way, the normal browser cache already greatly improves performance. No invalidation is needed, because the hash itself is a strong ETag to begin with.
* [x] In the contract wrapper, use a normal `fetch()` instead of `ipfs.cat()` for fetching documents whenever no custom IPFS config was handed in to the Kredits instance on creation (see the sketch after this list).
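
As a rough sketch of the third item: the wrapper could fall back to a plain `fetch()` against the gateway whenever no IPFS client was injected. The function name and the minimal client interface here are illustrative, not the actual kredits-contracts API:

```typescript
// Hypothetical fallback: use the HTTP gateway when no IPFS client was injected.
// The minimal client interface below is an assumption for illustration.
interface IpfsClient {
  cat(hash: string): Promise<Uint8Array>;
}

const GATEWAY = 'https://ipfs.kosmos.org/ipfs';

async function fetchDocument(hash: string, ipfs?: IpfsClient): Promise<unknown> {
  if (ipfs) {
    // A custom IPFS config was handed in: read through the client directly
    const bytes = await ipfs.cat(hash);
    return JSON.parse(new TextDecoder().decode(bytes));
  }
  // Read-only path: a plain GET, which the browser cache (and IPFS Companion,
  // if installed) can intercept
  const response = await fetch(`${GATEWAY}/${hash}`);
  if (!response.ok) throw new Error(`Gateway responded with ${response.status}`);
  return response.json();
}
```

Because the gateway responses are immutable, repeated GETs for the same hash can then be served straight from the browser's HTTP cache.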

Steps 1 and 2 would be the first phase, in order to not load the same IPFS documents over and over again in kredits-web. The second phase is then to make the architecture fully decentralized:

* [x] Finish the IPFS pinner. This removes the dependency on `ipfs.kosmos.org` for our docs to be pinned.
* [x] Remove ipfs-cluster from all IPFS nodes, and add ipfs-pinner for pin orchestration.
* [ ] Remove the IPFS HTTP client as a dependency of `kredits-contracts` and require it to be handed in to the `Kredits` prototype instance in order for write operations to work. Then require contributors to bring a local IPFS node (or a remote one they have access to) if they want to do write operations in `kredits-web` (optional). A rough sketch follows below.
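
For the open item, a minimal sketch of the injected-client requirement, assuming a constructor option and method names that are purely illustrative:

```typescript
// Illustrative sketch: writes only work when an IPFS API client was injected.
interface IpfsApi {
  add(data: Uint8Array): Promise<{ hash: string }>;
}

class Kredits {
  constructor(private ipfs?: IpfsApi) {}

  async addDocument(doc: object): Promise<string> {
    if (!this.ipfs) {
      throw new Error('Writing requires an IPFS node; pass one when creating the Kredits instance');
    }
    const bytes = new TextEncoder().encode(JSON.stringify(doc));
    const { hash } = await this.ipfs.add(bytes);
    return hash;
  }
}
```

With this shape, read-only usage keeps working without any IPFS dependency, while writes fail early with a clear message.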

This solves the basic problems and also makes the entire application completely unhosted/decentralized again, as anyone can use it from any source they want, with any IPFS and Ethereum node, and no write permission needed on a specific IPFS gateway.

Afterwards, to further improve caching and loading times, we can then easily implement a ServiceWorker, which caches the documents in the application cache and intercepts the `fetch` requests before they even try to validate anything.
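
Such a worker could look roughly like this; the cache name and path matching are assumptions (and the file needs TypeScript's `webworker` lib for the `FetchEvent` types):

```typescript
// Hypothetical ServiceWorker: serve /ipfs/ responses from the Cache API
// before hitting the network.
declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = 'ipfs-documents';

self.addEventListener('fetch', (event: FetchEvent) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/ipfs/')) return; // only handle IPFS documents

  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached; // content-addressed: no validation needed
      const response = await fetch(event.request);
      if (response.ok) await cache.put(event.request, response.clone());
      return response;
    })
  );
});
```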

raucao commented 2019-07-01 12:04:28 +00:00 (Migrated from github.com)

Just FYI: edited/updated the original post considerably. This comment is just for notifications to work.

raucao commented 2019-07-01 13:17:53 +00:00 (Migrated from github.com)

PR for the gateway config: kosmos/chef#76 (https://gitea.kosmos.org/kosmos/chef/pulls/76)

raucao commented 2019-07-01 18:07:06 +00:00 (Migrated from github.com)

So, now that we tested GET requests with caching, it turns out those are still quite slow, even just loading all data from disk (on my few-years-old SSD). So we should probably implement IndexedDB caching soon (which is easy, as we never have to invalidate the cache). Alternatively, we could try ServiceWorker's application cache.
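
A minimal sketch of that IndexedDB cache, with database and store names invented for illustration:

```typescript
// Hypothetical IndexedDB cache keyed by IPFS hash. Since the hash is
// content-addressed, entries never need invalidation.
const DB_NAME = 'kredits-ipfs-cache';
const STORE = 'documents';

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(DB_NAME, 1);
    request.onupgradeneeded = () => request.result.createObjectStore(STORE);
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function cachedFetch(hash: string): Promise<unknown> {
  const db = await openDb();
  // Try the local cache first
  const cached = await new Promise<unknown>((resolve, reject) => {
    const req = db.transaction(STORE).objectStore(STORE).get(hash);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  if (cached !== undefined) return cached;

  // Cache miss: fetch from the gateway and store the document forever
  const doc = await (await fetch(`https://ipfs.kosmos.org/ipfs/${hash}`)).json();
  db.transaction(STORE, 'readwrite').objectStore(STORE).put(doc, hash);
  return doc;
}
```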