Why build this blog, or anything, on IPFS?

beaucronin | 302 points

IPFS still has a long way to go until it is usable, in my opinion. The default configuration for the desktop client will gladly keep 1000+ peer connections open and will happily degrade the rest of your internet experience.
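For anyone hitting this, a minimal mitigation sketch, assuming the go-ipfs connection manager settings of that era (the Swarm.ConnMgr.* keys are real config options; the values below are arbitrary lower limits, not a recommendation):

```
# Lower the connection manager's watermarks, then restart the daemon.
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200
ipfs shutdown && ipfs daemon
```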

In addition the ecosystem is filled with technical/community debt that makes navigating the system a nightmare for anyone who isn't an expert. As an example: https://github.com/ipfs/go-ipfs/issues/1482

It's a shame if you ask me.

nullstyle | 4 years ago

Disclaimer: I run the unofficial IPFS Discord and Matrix (found at https://permaweb.io/discord and /matrix) and have helped organize IPFS Meetups in SF. We also run an IPFS gateway and have built a groups app on top of IPFS and Textile.

I generally agree with the conclusion, but there are a few downsides that aren't conveyed here.

Let's look at the proposed upsides: 1) Ownership, control, censorship: That's partly correct. Ownership is fair, in the sense that you can run your node and self-host. However, this is true of any self-hosting solution. You could run a WordPress or Ghost site in Docker and get ownership / control.

2) The point about censorship is muddied, however. I'll combine that point with the second upside: Resilience. Every day for the past two years, I've seen people wonder if IPFS is a magical cloud with infinite storage. People seem to think you put a file on IPFS and it magically becomes replicated, censorship-resistant hosting. That's not how it works. People need to pin your hash. You need to tell the world about your hash somehow. All of this is done via a public list of IPs that is broadcast. Think of IPFS this way: you're letting people with the hash become CDNs of your content. That's cool, but it doesn't solve discovery, keeping things up, etc. IPFS doesn't encrypt the content, or the connectivity, or hide the hosts. Solutions exist around that, but they're niche, and honestly I question the motives beyond pure ideology.
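To make the pinning point concrete, here's a minimal sketch with the stock go-ipfs CLI (the file name is a placeholder and Qm... stands in for the real CID):

```
# Adding a file produces a content hash (CID), but only your own node serves it.
ipfs add blog/index.html     # => added Qm... index.html
# The content stays up elsewhere only if someone else explicitly pins that CID:
ipfs pin add Qm...           # run by a friend, a pinning service, your VPS, etc.
```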

3) Elegance. Yeah it's a really, really cool way to solve linking. As some others pointed out, it's not as fast as classic centralized links, so it's better suited currently for solutions that don't require speed.

leshokunin | 4 years ago

The thing I worry about with IPFS is privacy. If you use IPFS directly (as intended, not via a public gateway) and you visit a site, then you automatically start seeding the visited content (like a torrent), and thus you announce/broadcast to the world that you (your node / your IP) have visited it. My current understanding is that this cannot really be avoided, since one needs to be able to find the nodes that have the content for any given hash.
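That announcement is also trivially observable; a rough sketch of what anyone with a hash can do against the public DHT (the CIDs below are placeholders):

```
# List the peer IDs currently advertising ("providing") a given CID on the DHT.
ipfs dht findprovs QmSomeExampleCID
# A returned peer ID can then be resolved to its multiaddrs (and usually IPs).
ipfs dht findpeer QmSomePeerID
```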

kalmi10 | 4 years ago

> even go super old-school and run a web server at home. It's not as if we're short on options in 2020.

Though it's old school, it's incredibly difficult to run a server at home now, at least in India. The network I connect to is behind a NAT which is behind another NAT. At least, that's what I saw when I tried to host my blog on a Raspberry Pi at home over a year ago. Ultimately I gave up on that endeavor. If anyone has a solution that doesn't involve a third party, please suggest it.

I think I will have to wait until my ISP implements IPv6. That could take another decade :/

rohan1024 | 4 years ago

Last I checked, the consensus was that the Dat project was more mature than IPFS and that it had some advantages over IPFS (such as not using as many resources to run). How is it now? Is it more mature? Even though I subscribe to their newsletter, I haven't really been keeping up to date on whether they have made any major releases.

Not to be a downer on IPFS at all, btw. I'm very glad that both it and Dat exist. IPFS has always seemed like a much larger undertaking, and it is cool that they are trying to push the dweb even further. We need that just as much as we need Dat, which, with its inclusion in the Beaker browser for example, really serves as a super cool demo of what the dweb can give us in the future.

olodus | 4 years ago

It's pieces like these that remind me why good writing skills are important, and why one shouldn't stray from the basics unless they're fully aware of the trade-offs. For this article, that would mean: write a better hook, and make sure to include a rudimentary thesis statement, because within the first few paragraphs I wasn't able to deduce what you were trying to persuade me of.

With a title like "Why build this blog -- or anything -- on IPFS?", you were trying to persuade me, right?

I had to read through what is essentially every single cooking recipe on the web before I got to the actual filling. I.e., a whole lotta aimless wandering and musing that is only tangentially related to the topic at hand, before giving me what the title promised. Similarly to cooking blogs, this page is 2/3 filler and 1/3 actually delivering on the title: "So… why IPFS?"

> 1. Ownership, control, censorship

The author goes on to chastise Medium's censorship practices, but not long before that he mentioned self-hosted WordPress and statically generated GitHub Pages. WordPress and GitHub Pages get over these hurdles and are easier to set up than IPFS.

> 2. Resilience

Suffice to say, the point of this paragraph was "DNS and HTTP aren't robust; web servers fail under unforeseen circumstances." OK, well, how does IPFS do things differently? You never explained how IPFS works, much less how it gets over any of the aforementioned issues you outlined.

> 3. Elegance

> But I will say that content addressing strikes me, and many software people who come across it, as obviously superior to host-based addressing along certain dimensions.

Never touched upon or elaborated.

> Plus, it's super cool. You should try it!

At least you have a call to action. Otherwise, this post fails to even come close to making me interested in IPFS.

endothrowho333 | 4 years ago

If you replace the word "IPFS" with "BitTorrent," this article is still true. Similarly, if you replace "IPFS" with "BitTorrent" in most of the comments here, the comments are still true.

If you understand how BitTorrent works -- including its strengths and limitations -- you'll understand how IPFS works.

jude- | 4 years ago

> From a certain perspective, the internet and the web as we know them (including fundamental technologies such as DNS, TCP/IP, HTTP, SMTP, and even Javascript) are flawed in fundamental ways. Leaving the technicalities aside, how do these flaws manifest? It is hard to stop bad actors from doing bad things: sending too many emails, stealing sensitive data, flooding websites with traffic, spreading false facts and bifurcating the shared reality that allowed for a democratic global order.

I would argue that this is even more of a problem with decentralized services because there is no one to define (or police for) spam or bad content.

bsurmanski | 4 years ago

I'm involved with a startup that has been trying to use IPFS. There are still a few problems related to incentivizing people to pin files for you. Filecoin, the ICO coin associated with IPFS, has been inching toward a testnet launch for quite a while now. That was supposed to happen at the end of last year, and as far as I know it didn't, so they are obviously a bit behind schedule on that. Without it, content on IPFS is only as durable as the node that uploaded it. There are no guarantees of long-term availability.

So, IPFS is more of a CDN than a file system currently. It's a distributed content cache. There's an enormous long tail of files that are only available on 1 node, which is typically somebody's laptop.

Another problem is that the block system does not combine well with e.g. S3 or similar file buckets on popular cloud providers. If you think of IPFS as a CDN, then you basically have to worry about hosting files somewhere that is reliable and durable. IPFS does not solve that problem currently. So you will either be self-hosting some file servers or using something off the shelf, like S3. There's an S3 backend for IPFS, but it's a bit unclear how well it performs. We've done some tests with it, and the small block size creates quite a bit of overhead in read and write HTTP requests.
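To put a rough number on that overhead (assuming go-ipfs's default chunk size of about 256 KiB; the chunker is configurable, so treat this as illustrative only):

```
# A 1 GiB file split into 256 KiB blocks means roughly this many objects,
# and therefore roughly this many GET/PUT requests against the S3 backend:
echo $(( 1024 * 1024 / 256 ))   # => 4096
```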

Access control and privacy protection are currently not really in scope for IPFS. I doubt this is a good tool for bypassing e.g. censorship unless you are willing to expose yourself to explaining why your node is hosting certain content hashes. Tor and I2P probably provide better protection here. I2P actually runs a variant of BitTorrent for file sharing. It's been a while since I looked at this, but it used to be quite slow yet reliable.

jillesvangurp | 4 years ago

I've seen hugely varying IPFS latency. `time curl -o- https://teetotality.blog/posts/how-this-blog-was-made/` took > 20 seconds, for example, yet https://ipfs.io/ipns/teetotality.blog/ returned in under a second.

I'd be really interested in a "what to do and what not to do" wrt IPFS to avoid those 20s (or completely non-functioning) URLs.

mceachen | 4 years ago

The long-term solution for web is:

- Use more static HTML
- Since your assets are versioned and static, start calculating hashes for everything you publish
- When you link to something, include the URL and the integrity: <a href="URL" integrity="sha256:...">...</a>

In the immediate future this will fix broken links: things that are important will be cached and accessible via content addressing. In the long-term future it will fix other problems, like linking to a page that is later changed into something you don't endorse.
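A minimal sketch of the proposal (note: the integrity attribute is standard today only on <script> and <link>, not on <a>; putting it on links is exactly the extension being suggested here, and the file name is made up):

```
# Hash a local copy of the target document, then emit a link carrying that hash.
HASH=$(sha256sum article.html | cut -d' ' -f1)
echo "<a href=\"https://example.com/article.html\" integrity=\"sha256:${HASH}\">article</a>"
```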

fulldecent2 | 4 years ago

Right now this is a case in point of why NOT to:

ipfs resolve -r /ipns/teetotality.blog/posts/how-this-blog-was-made/%60: no link named "`" under QmefCQnxfw2qaT5WKMxiMVGTWu2i47ttpyUCDdn7f3nA2K

TylerE | 4 years ago

This blog is “built” on Cloudflare, apparently.

Why is Medium being compared to IPFS???

netfl0 | 4 years ago

I think IPFS should be a pluggable back-end among many back-ends that become possible once we move away from the idea that websites must be hosted by a specific server / domain and start thinking in terms of “client-first” architecture.

I don’t want to write a wall of text; right now the top post on https://qbix.com/blog lays out several specific, actionable things we can all do to bring about this future. It requires a snowball effect and a critical mass for any of this to take off.

EGreg | 4 years ago

Good luck managing a website with 10 daily blog posts and multiple authors through static HTML files. The success of Wordpress is thanks to a lot more than "now you don't have to know HTML".

spiderfarmer | 4 years ago

Stopped reading after "Medium's engineers are much better at all of this than you will ever be"; can't take this writer seriously.

sergioro | 4 years ago

I think one of the interesting use cases for IPFS is a distributed build store for tools like Nix and Guix. It sounds to me like almost the perfect fit: an immutable datastore of hashes of reproducible builds is well suited to distributed storage. Imagine being able to peek into a global store of build outputs and, for the hash of any given inputs, retrieve the corresponding output. That would be an incredible capability.

There’s a proof of concept of building IPFS into Guix’s store[1]. There’s also periodic discussion on the mailing list. I’m sure it’s much more complicated than I’m giving it credit for, and there would be security and social implications (someone’s going to be building most of the software, and what happens if you have low bandwidth or a small data plan?). Still, IPFS sounds like an interesting experiment in this area.

[1] https://github.com/fps/guix-ipfs-cache
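A minimal sketch of the general idea (the store path is made up, and this is not how the linked proof of concept actually wires things together):

```
# Publish one reproducible build output as content-addressed data.
CID=$(ipfs add -Q -r /gnu/store/xxxxxxxx-hello-2.10)   # store path is illustrative
echo "hello-2.10 => $CID"
# Any peer that learns the inputs->CID mapping can fetch the output by hash.
ipfs get "$CID" -o /tmp/hello-2.10
```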

kdtsh | 4 years ago

Out of curiosity, isn’t a DNS gateway like Cloudflare’s (https://blog.cloudflare.com/distributed-web-gateway/) a single point of failure? Is there a solution to this without having to resort to a completely different desktop app?

aabhay | 4 years ago

When I first learned about content addressing a couple years ago, it sounded like the holy grail. I'm less certain now. It seems much better from the machine perspective, but I'm not sure it matches closely enough the way humans interact with data. We are spatial and temporal creatures. There's something unsettling about my video file being chopped up into a million chunks and stored who-knows-where, compared to the path/URL approach, where I have a single large file: I know where it lives and how big it is. Moving/copying/deleting/updating/etc. are all intuitive and map to analogs in the physical world. Content addressing (and object storage systems, I might add) isn't as easy to reason about.

Don't get me wrong. Content addressing is very cool, and may revolutionize everything. It's very elegant. I'm just a bit skeptical.

anderspitman | 4 years ago

I once fell for this talk about "content-addressing".

You know what? Saying stuff is addressed by its content doesn't change the fact that the internet is "location-addressed": you still have to know where the peers that have the data you want are, and connect to them.

And what is the solution for that? A DHT!

Turns out DHTs have a terrible incentive structure and don't seem to be working well.

Downloading content on IPFS is the _slowest experience ever_, for reasons I don't fully understand. Even if you are on the same LAN as another machine that has the content you need, it will still take hours to download some small file you could copy in seconds with `scp`.

Now, even if you know which peer has the content you want, tell IPFS to connect to it directly, the connection is established, and the content is being (slowly) downloaded... IPFS will drop the connection and the download will stop.
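For reference, the manual peering dance described above looks roughly like this (the multiaddr and CIDs are placeholders):

```
# Explicitly dial the peer you know has the data, then request the content by hash.
ipfs swarm connect /ip4/192.0.2.10/tcp/4001/p2p/QmExamplePeerID
ipfs get QmExampleContentCID -o ./file
```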

fiatjaf | 4 years ago

I have my own blog on DigitalOcean using Jekyll. Couldn't be easier to set up and maintain.

meerita | 4 years ago

We run a blog on top of IPFS, and we use ENS (Ethereum Name Service) to keep it always updated.

Basically, you can access it if you use the Opera browser or a browser extension. If not, there are gateways, like this one: blog.almonit.eth.link

neiman | 4 years ago

I once thought about building an app based on IPFS, but the problem is the lack of decentralized consensus. Only blockchains have cracked that problem, and they are incredibly inefficient. The IPFS app would still require a central server, and if that is the case, then it would be better to just have some sort of public database backup with encrypted user data, so that anyone can host their own fork and each fork can talk to the others through a standardized protocol. It would be a federated system, like email.

imtringued | 4 years ago

WordPress is unlike GitHub Pages or Medium. In theory a LAMP server could run on everyone's home router, and they'd publish their blog there. With commercial interest, network configurations would change to accommodate that, and people wouldn't lose their content when some service decides to shut down. It's scary how much of the web is being lost; I would wager most links to non-corporate sites more than 10 years old barely work (has anyone studied the link rot?).

buboard | 4 years ago

ISPs have made it really expensive to get a static IP. I maintained a commercial connection for way too many years because I would remote into my home systems.

heelix | 4 years ago

I'm intrigued by the idea of IPFS, but it does seem like you're giving up a lot of control. Has this been used for anything >= mid-scale yet?

lbj | 4 years ago

All the docs talk about setting up an IPFS server. What is the intended way to browse IPFS sites (extensions, or a new browser)?

thomas232233 | 4 years ago

Has Cloudflare made any commitments regarding the long-term availability of their IPFS gateway?

cyounkins | 4 years ago

Off-topic but from work this site is blocked due to a security issue. Anyone know why?

davidhbolton | 4 years ago

In conclusion, it makes no sense except as a science experiment.

If you care about people consuming your content, you won't use IPFS.

Proven | 4 years ago

God, another round of IPFS shilling.

Can't they come back when it's really usable?

bureaucrat | 4 years ago