IPFS 0.7.0, the SECIO retirement edition (ipfs.io)
125 points by georgyo on Sept 24, 2020 | hide | past | favorite | 44 comments


So I got curious about IPFS recently and spent a couple of hours reading through the docs and tutorials. Good news: it’s a very well documented and very ambitious project. The docs in fact struck me as “too good”. As in, this isn’t some amateur writing them; there is clearly a ton of work that went into it all. Which made me curious: how does a relatively unknown open source project do this? Where is the money?

Well, the answer was about halfway through the tutorial. Basically, say you want to publish a web page on IPFS. You put it together on your laptop and start up the IPFS daemon. Cool, you are now online and everyone can see your stuff. But what happens when you disconnect from your Wi-Fi? Well, you can have your content pinned by another party and have them serve it. After all, IPFS is all about content addressing, so it doesn’t matter who hosts it. But how does that work? Well, a paid service called Pinata is part of the official tutorial. No other service is mentioned. I am sure others exist, but how is this going to be different from what we have with the web currently, where a huge portion of content is served by a single company (Cloudflare)?
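To make the content-addressing point concrete, here's a minimal Python sketch. This is not the real CID algorithm (which uses multihash/multibase encodings); it's just the core idea that the address is derived from the bytes themselves:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified stand-in for an IPFS CID: the address is derived
    # purely from the content, not from who hosts it.
    return hashlib.sha256(data).hexdigest()

page = b"<html><body>hello ipfs</body></html>"
addr = content_address(page)

# Anyone serving the same bytes serves the same address, which is
# why a pinning service can host content on the author's behalf.
assert content_address(page) == addr
print(addr)
```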


Thanks! We've been making a lot of updates to the documentation to make it easier to use - so glad it feels easy to follow!

I'll follow up on the Pinata docs example. There are a lot of options for how to persist content in the IPFS network, and we should describe all of them (even if Pinata is one of the smoother/easier to use ones for those new to IPFS who don't want to run their own persistent node). Feel free to file an issue or PR on that docs page if you get a second and we'll help get that fixed ASAP.

Given interest in decentralized persistence, you may be interested in collaborative clusters which allow a group of peers to all persist each other's content: https://collab.ipfscluster.io/ & https://cluster.ipfs.io/documentation/collaborative/


Would you happen to know if the Internet Archive intends to use collaborative clusters to globally distribute archive contents on IPFS?


That's a great idea! I know there's a project in the works to have redundant copies of the Archive stored on Filecoin, so expanding that to also make the data available via Collaborative Clusters should be totally doable. We'd have to slice the archive down into pieces small enough that machines like yours and mine can help, though. Thanks for the suggestion!


Thank you for that info. I am still very new to IPFS but am going to try to learn more before I submit any PRs or anything like that. Is there some way to see how pinned content is distributed? Do pinning services have a standard API for talking to them?


We're actually implementing a standard Pinning API right now! You can check out the spec here (https://github.com/ipfs/pinning-services-api-spec) - currently being integrated by Pinata and soon others.
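For a rough sense of what talking to such a service looks like under the spec, here's a hedged sketch: a pin request is a POST to the provider's /pins endpoint with a JSON body and a Bearer token. The endpoint, token, and CID below are all placeholders, and the sketch only builds the request rather than sending it:

```python
import json
import urllib.request

def build_pin_request(endpoint: str, token: str, cid: str, name: str):
    # Builds (but does not send) a request in the shape of the IPFS
    # Pinning Service API: POST /pins with a JSON body and Bearer auth.
    body = json.dumps({"cid": cid, "name": name}).encode()
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/pins",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_pin_request(
    "https://pinning.example.com",  # hypothetical provider endpoint
    "YOUR_ACCESS_TOKEN",            # placeholder token
    "bafyexamplecid",               # placeholder CID
    "my-site",
)
print(req.get_method(), req.full_url)
```

A real client would pass the request to `urllib.request.urlopen` and handle the JSON response.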

By the way, here's a PR to add other pinning options to the docs: https://github.com/ipfs/ipfs-docs/pull/471


Well, maybe that, but they (Protocol Labs) also raised $200 million in 30 minutes with Filecoin which is basically just their attempt to create paid decentralized pinning.


>But how does that work? Well, a paid service called Pinata is part of the official tutorial. No other service is mentioned. I am sure others exist but how is this going to be different than what we have with the web currently where a huge portion of content is served by a single company (CloudFlare)?

Besides the fact that there are other pinning services, and that you can run your own (or use several at once), it's also possible for your site's users to help host content without needing any coordination with you. I'm really excited that users can help host things, even after the owner dies or otherwise gives up on hosting a project. I've been disappointed by how often old URLs to pages I've liked stop working; I'd love to help keep those pages alive at their original URLs so they work for everyone, and IPFS seems like a step towards that world.


> Where is the money?

Protocol Labs raised a bunch of money from the Filecoin ICO. Pinata has nothing to do with it.


> Which made me curious: how does a relatively unknown open source project do this

There's a lot of money behind it. A number of people are paid to work on IPFS full time.


I think Cloudflare is supporting IPFS. I can only speculate as to how this is monetized.


I'm going to shamelessly take this thread as an opportunity to link the work we recently did with IPFS, which didn't stick when I tried submitting it before.

https://blog.ipfs.io/2020-09-08-nix-ipfs-milestone-1/

I'm very keen on seeing IPFS succeed, and (unlike with economics) I think a "trickle down" approach of targeting developer workflows, in the hope that devs "reverse dogfood"[1], is very viable.

[1]: take the thing they use themselves and make their own programs use it.


I've been following this work really closely, thank you! Making IPFS work with Nix is amazing work and it will be pretty incredible when every Nix server can act as a cache automatically.


Thank you!


Is there any compelling reason to use hypercore (Dat) or IPFS over the other? To me, it seems they both have a similar feature set.


IPLD, with its ability to embed existing references (e.g. git hashes), is I think a good "big tent" approach to get people on board.

Relatedly, I think files (encouraging the application layer to marshal/unmarshal from flat bytes) were always a terrible idea. IPLD encourages not doing that.

The most important thing is getting people to agree on how we content-address data (see [1]). At that point, we can just let the network protocols duke it out, using bridges to ensure any fragmentation doesn't hurt network effects too badly.

Ultimately, a shared addressing scheme leads to commodification, something this industry desperately needs, and indeed something humanity needs to make better and fairer use of technology. It's a lot like the identity problem in human-oriented distributed systems, just way easier to solve, since the actual content addressing (as opposed to agreeing on the scheme) is trustless and coordination-free.

[1]: https://www.softwareheritage.org/2020/07/09/intrinsic-vs-ext...
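To make the "intrinsic identifier" idea concrete, here's git's blob addressing (one of the existing reference schemes IPLD can embed), sketched in Python. The identifier falls out of the content alone, with no registry or coordination needed:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git's intrinsic identifier for a blob: SHA-1 over a small
    # "blob <length>\0" header plus the raw bytes. Anyone with the
    # same bytes derives the same ID.
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `git hash-object` for the same content:
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```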


For those who didn't immediately know what IPLD is:

https://docs.ipld.io/

So, similar to JSON-LD (JSON Linked Data): if you know the schema of a blob, it can help with pre-processing... this seems solvable as an add-on, but having it included is a difference.


Dat makes it easy to update your dats (just be careful not to lose your private key). That's not possible with IPFS - you have to use IPNS. IPNS used to be slow - I'm not sure if it still is, but I'd be surprised if it's as fast as Dat.


It's been a while, but last time I tried IPFS (the official Go node) it was really CPU/memory heavy and also flooded my network, severely impacting other traffic.

Has the implementation improved?


When was the last time you tried? 0.5.0 from May had a lot of improvements, but the Go implementation is still a hungry, hungry beast. Maybe try using IPFS Desktop, which sets a lower peer count?


I have been out of the loop of IPFS. I know IPFS mostly deals with data distribution, but last I checked, it did not have any mechanisms for data durability. Has anything changed on that front?


Back in the early days of IPFS, we had the idea of setting up "pinning rings" between hackerspaces:

https://github.com/c-base/ipfs-ringpin


There is a lot of work happening in the Filecoin area, which I imagine addresses durability. Filecoin is built on top of IPFS.

https://filecoin.io/store/


Filecoin is poorly executed. Check out Sia and Skynet to see something that actually works.


This seems hyperbolic. Filecoin hasn't even launched yet, and you're calling it poorly executed. Sia has its own set of problems and trade-offs, which you're really glossing over here.


I'm familiar with Sia but know little about Filecoin, can you elaborate on the tradeoffs between the two?


Sia records the data on chain (this is a little reductionist), whereas Filecoin opens up a market for storage and retrieval. Sia requires you to run a full node to interact with the chain and data; Filecoin does not. Sia is gearing their product towards a different use case, the most compelling being personal backups, imo.

Filecoin is attempting to create an ecosystem around theirs, with an in-depth market around different on-chain actions.

They make different trade-offs. But it's incredibly premature to call Filecoin "poorly executed". The connotation of "look at Sia to see something that actually works" completely disregards that there are other decentralized storage providers that "actually work", and that Sia has some key drawbacks too, performance and node management being notable ones.


> Sia records the data on chain whereas Filecoin opens up a market for storage and retrieval

I think what you're trying to say is that with Sia, you are responsible for choosing which hosts store your data, whereas with Filecoin, you submit an open contract to the network, and any host that satisfies your terms can claim it. Both platforms have a market, but Sia's is off-chain and Filecoin's is on-chain.

(Also, to be clear, Sia doesn't store actual file data on the blockchain itself -- it's been clear since the early days of Bitcoin that storing large amounts of data on-chain isn't viable.)

> Sia requires you to run a full node to interact with the chain and data

Technically it's always been possible to store and retrieve data without running a full node (you just need a few secret keys, hashes, and IP addresses), but I'll grant that it hasn't been very user-friendly until recently.

> performance

Can you be more specific?


> Sia records the data on chain (this is a little reductionist) whereas Filecoin opens up a market for storage and retrieval. Sia requires you to run a full node to interact with the chain and data; Filecoin does not. Sia is gearing their product towards a different use case, the most compelling being personal backups imo.

Everything you wrote is no longer true of Sia and Skynet. Your information is extremely out of date.


I think your comment would be better received if you explained how the parent's information was out of date. As it stands, this isn't really helpful.


The whole idea is still equivalent to BitTorrent with magnet: links. So no, it's not a free file store - if you want to make sure your file is available, you have to seed it, or pay someone else to.


Are there minimum hardware requirements? I mean, does this stuff run on a Raspberry Pi, or do you need something better?


I run IPFS on a Raspberry Pi (along with many other network services). The daemon is hungry for memory, but it's not a big problem. I use the following systemd unit to run it and restart it when it eats too much memory:

  [Unit]
  Description=IPFS daemon
  After=syslog.target network.target

  [Service]
  Type=simple
  User=pi
  ExecStart=/usr/local/bin/ipfs daemon
  Restart=always
  KillMode=process
  MemoryHigh=250M
  MemoryMax=300M
  RestartSec=60

  [Install]
  WantedBy=multi-user.target


It's Raspberry Pi 3B+, 1 GB RAM


Is IPFS something one can participate in to provide bandwidth and diskspace - just to support the project?


Kind of. You have to deliberately pin content for it to persist on your system (NB: my experience with it is from a couple of years ago; I like it, but had little immediate personal use for it). So it's more like torrents in that regard, versus other distributed systems like Freenet. This way, you know what you're sharing (and don't have deniability like with Freenet), but you can also control what you contribute to.


You can become an IPFS Cluster Follower (https://collab.ipfscluster.io/#instructions) and help back up various datasets of IPFS content (like Project Gutenberg, Package Managers, Websites, etc)


The current problem of IPFS is that it requires a standalone app. Is there going to be a browser only version based on WebRTC and similar technologies?


There are browser extensions.

Google Chrome: https://chrome.google.com/webstore/detail/ipfs-companion/nib...

Mozilla Firefox: https://addons.mozilla.org/en-US/firefox/addon/ipfs-companio...

There are also IPFS gateways that bypass the need for anything but a browser.

Official list of public gateways: https://ipfs.github.io/public-gateway-checker/
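For illustration, a path-style gateway URL is just a naming convention; here's a small sketch (the default host is one real public gateway, but the CID in the example is a placeholder):

```python
def gateway_url(cid: str, path: str = "", host: str = "ipfs.io") -> str:
    # Path-style gateway: https://<host>/ipfs/<cid>[/<path>], so any
    # plain browser can fetch the content without running a node.
    url = f"https://{host}/ipfs/{cid}"
    if path:
        url += "/" + path.lstrip("/")
    return url

# "QmExample" is a placeholder, not a real CID.
print(gateway_url("QmExample", "index.html"))
# https://ipfs.io/ipfs/QmExample/index.html
```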


I’ve been using Tahoe LAFS for years for redundant storage. How is this better? I’ve seen some articles about IPFS and it seems similar.


Does IPNS work yet? The last few times I tried it, it was all but nonfunctional.


all but functional?


I wonder if there are statistics on the versions that the nodes currently use?



