IPFS usage roadmap #148
Reference: kredits/contracts#148
Discussed with @bumi yesterday about how to proceed with IPFS and caching in kredits-web. There are multiple different sides to it. Current planned steps:

1. Serve `ipfs.kosmos.org` on port 443, read-only by default, and use the normal URL schema of `https://ipfs.kosmos.org/ipfs/$hash` for GET requests. Aside from letting us remove the entire IPFS client lib dependency in `kredits-web`/`contracts`, this also enables using the IPFS Companion browser extension for intercepting GETs and loading them from a local IPFS node.
2. Use `fetch()` instead of `ipfs.cat()` for fetching documents, in case no custom IPFS config was handed in to the Kredits instance on creation.

Steps 1 and 2 would be the first phase, in order to not load the same IPFS documents over and over again in `kredits-web`. The second phase is then to make the architecture fully decentralized:

3. Use `ipfs.kosmos.org` for our docs to be pinned.
4. Remove the built-in IPFS dependency from `kredits-contracts` and require an IPFS instance to be handed in to the `Kredits` prototype instance, in order for write operations to work. Then require contributors to bring a local IPFS node (or a remote one they have access to) if they want to do write operations in `kredits-web` (optional).

This solves the basic problems and also makes the entire application completely unhosted/decentralized again, as anyone can use it from any source they want, with any IPFS and Ethereum node, and no write permission needed on a specific IPFS gateway.
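The read path from steps 1 and 2 could be sketched roughly like this. The helper names (`gatewayUrl`, `fetchDocument`, `IpfsLike`) are hypothetical illustrations; only the gateway URL schema `https://ipfs.kosmos.org/ipfs/$hash` is from this issue:

```typescript
// Sketch of the read path from steps 1 and 2. Helper names are
// hypothetical; the URL schema https://ipfs.kosmos.org/ipfs/$hash
// is the one described in this issue.

const DEFAULT_GATEWAY = "https://ipfs.kosmos.org";

// Build the gateway URL for an IPFS hash.
function gatewayUrl(hash: string, gateway: string = DEFAULT_GATEWAY): string {
  return `${gateway}/ipfs/${hash}`;
}

// Minimal shape of the part of an IPFS client we need here.
interface IpfsLike {
  cat(hash: string): Promise<string>;
}

// Fetch a document via plain fetch() against the gateway, unless a
// custom IPFS config was handed in on Kredits instance creation.
async function fetchDocument(hash: string, customIpfs?: IpfsLike): Promise<string> {
  if (customIpfs) {
    // Write-capable setups bring their own IPFS node.
    return customIpfs.cat(hash);
  }
  const res = await fetch(gatewayUrl(hash));
  if (!res.ok) throw new Error(`Gateway returned ${res.status}`);
  return res.text();
}
```

With this shape, a browser with IPFS Companion installed transparently rewrites the gateway GET to a local node, and no IPFS client library is needed for read-only use.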
Afterwards, to further improve caching and loading times, we can then easily implement a ServiceWorker, which caches the documents in the application cache and intercepts the `fetch` requests before they even try to validate anything.

Just FYI: edited/updated the original post considerably. This comment is just for notifications to work.
PR for the gateway config: kosmos/chef#76
So, now that we've tested GET requests with caching, it turns out they are still quite slow, even when just loading all data from disk (on my few-years-old SSD). So we should probably implement IndexedDB caching soon (which is easy, as we never have to invalidate the cache). Alternatively, we could try the ServiceWorker application cache.
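Whichever backing store we pick, the cache logic itself is trivial precisely because entries never need invalidating: an IPFS hash always names the same bytes. An illustrative wrapper (in-memory `Map` here for clarity; the real version would write through to IndexedDB — `cachedFetcher` and `Fetcher` are hypothetical names):

```typescript
// Illustrative document cache (in-memory here; kredits-web would back
// this with IndexedDB). IPFS hashes are content-addressed, so an entry
// can be kept forever: there is no invalidation logic at all.

type Fetcher = (hash: string) => Promise<string>;

// Wrap any fetcher (e.g. a gateway fetch) in a permanent cache.
function cachedFetcher(fetchDoc: Fetcher, store: Map<string, string> = new Map()): Fetcher {
  return async (hash: string) => {
    const hit = store.get(hash);
    if (hit !== undefined) return hit; // cache hit: skip the network entirely
    const doc = await fetchDoc(hash);
    store.set(hash, doc); // never expires, never revalidated
    return doc;
  };
}
```

Wrapping the gateway fetcher once at startup means repeat lookups of the same hash never touch the network again, which is exactly the slow path measured above.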