IPFS should be enabled and treated as a first-class service by my distro. I install said distro, and it comes with a folder and a basic service with which I can replace Dropbox, Facebook, and Instagram for my target peer group, i.e. any of my friends and family who will also run the very same distro for this purpose.
I agree. I am working on an Arch distro (may roll my own for containers) that includes secure decentralized storage and configuration by default, and a federated marketplace for sharing your own local and server resources.
Haha... right now I have my desktop that I can show, which isn't perfect but I have an idea of where I want to go and how to get there (mostly). I am still running i3 for compatibility, but definitely moving towards Sway.
I've seen some great looking desktops that I still need to mimic, and then make them more fluid/responsive for multiple devices. I would also like to take a look at what George Singer is up to:
Seems like he may be offering VR devices to developers who can demonstrate some requirements. How cool is that? I would like to take something like Sway and add or combine it with a project similar to SimulaVR, with a Vulkan backend. That is the end goal.
I will likely just start with Sway and build a kick-ass plugin and application manager that uses containers. I will re-explore the current application isolation solutions, such as AppImage vs Flatpak vs Snap, but likely try my hand at building my own.
Part of that process is reproducible builds. I would like to build the OS compilation into the decentralization process, and make distributed (and localized) compilation a core component of the crypto solution, as it will later be used for rendering all types of resources on top of something like IPFS and Git.
Ideally, package maintainers would focus solely on automated builds directly from source repos, so that everything is compiled and versioned automatically and you can switch software versions at any time, with some compatibility-script support for DB and data changes.
Anyway, that is a lot of talk, so I will have to share more when I make more progress. It is a longer term goal, but feel free to help me get started:
For example, you can see a UI kit here that I would like to utilize initially, although I am still inclined to try to implement my own idea here, which involves combining a
I was thinking of something similar for applications. Just earlier today I needed a large dataset for some machine learning with Jupyter notebook in a git repo. I don't want to store the dataset in the git repo due to the size, and I don't really want to deal with "downloading the file if it isn't already downloaded into some location".
What would be awesome is something like this little bit of hypothetical Python code:
import ipfs  # hypothetical module

path = ipfs.file(FILE_HASH, keep=True, share=True)  # THIS LINE
with open(path) as fin:
    data = fin.read()  # do stuff like with a normal file
No need to distinguish between local files and cloud files! You can choose whether to keep the file permanently or temporarily, and whether to share it back to other IPFS nodes. You would probably need a system service to do this seamlessly as you suggested.
Sure, it might take some time to download it at first - but you have to deal with that either way. On subsequent runs, it all just works. (I assume for popular enough files you don't have to worry about the availability of the file, especially if people tend to share back by default.)
The big benefit being it is simpler to share datasets and actually use datasets - you can do both at the same time with one little file content identifier (FILE_HASH above).
IPFS includes a way to mount it as a filesystem. So if you know the IPNS URL for some data, or the hash, you can access it like accessing any other file. If it's already downloaded then there shouldn't be any overhead versus just opening a file; if it hasn't, it'll download it first. You oughtn't need any extra library or logic for it.
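Roughly how that might look in practice - a minimal sketch, assuming the go-ipfs daemon is running and `ipfs mount` has exposed the default /ipfs mountpoint; the hash below is a made-up placeholder:

# Read straight from the FUSE mountpoint; blocks are fetched on demand.
path = "/ipfs/QmSomeDatasetHash"  # hypothetical content hash
with open(path, "rb") as fin:
    header = fin.read(1024)  # behaves like any other local file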
That's brilliant! I think that means it should also support reading the file in chunks, as it gets downloaded, automatically.
I wonder how purging less important files is/could be handled through this? It would be good if there was some way to say, "this file is likely only going to be used temporarily (or just once), so feel free to delete it if disk space is required". That hinting might not be necessary as IPFS could use some least-recently used algorithm, but there should be some way to at least pin files, "this is really important, don't delete this".
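For what it's worth, go-ipfs already exposes the "don't delete this" half of that via pinning; unpinned content just sits in the local cache until garbage collection reclaims it. A rough sketch against the daemon's HTTP API (assuming the default 127.0.0.1:5001 listener; the hash is a placeholder):

import requests

CID = "QmSomeImportantFile"  # hypothetical content hash

# Pin it so the next `ipfs repo gc` won't collect it.
requests.post("http://127.0.0.1:5001/api/v0/pin/add", params={"arg": CID})

# Unpinning returns it to the "temporary" case: cached until GC needs the space.
requests.post("http://127.0.0.1:5001/api/v0/pin/rm", params={"arg": CID})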
Windows has been experimenting with variations of remote-files-as-local via OneDrive for some time now. I really think it needs to be baked into the filesystem, or at least the file browser, to be successful, and be absolutely transparent to the application that's requesting the file.
Once the scale problem starts to be solved, the question becomes discoverability. And the answer to that is, there is an infinite namespace of things - one might suggest it's up to the distro maintainers to adopt policies - behind which one can hide one's content. Like, if the distro has a button that says 'publish your stuff' and a button that says 'keep this stuff secret, but inform your closest friends when there are updates so they can get approved copies', well .. that's a distro thing.
I've been following the development of IPFS for the last few years and I'm getting the impression that it's fizzling out. Maybe Protocol Labs are working on Filecoin now (that's what's bringing them money), but they never really finished the pubsub implementation and js-ipfs still can't run in the browser because there is no IPFS network that communicates using WebSockets and not TCP/UDP.
> they never really finished the pubsub implementation and js-ipfs still can't run in the browser because there is no IPFS network that communicates using WebSockets and not TCP/UDP
I don't believe the pub/sub is critical functionality (there is progress at [0], but it's really just chatty). Progress on that additional feature should not be the measuring stick for the project as a whole. For js-ipfs, I see work being done frequently, e.g. the DHT work [1] and the weekly syncs [2]. I agree work is slow, but I am not getting the impression it is fizzling out.
Having said that, I think there are fundamental problems with the ideas behind it. I want MaidSafe's ideas and IPFS's completion/MVP schedule. IPFS doesn't bring enough anonymity or forced, equitable, encrypted block sharing IMO (I am aware of efforts on both fronts of course).
They already got 250M USD. Why do they need to get Filecoin to work?
The easiest and least stressful outcome is to drag it along forever while paying themselves giant executive salaries from that stash. As long as there’s some minimal progress, it’s not fraud.
I understand that it's fashionable to be cynical about ICOs. However the IPFS people have clearly been working their asses off writing solid software and giving it to us for free, for several years now. All along they've been talking about their plan for filecoin.
With that track record, the burden is on you to explain why you think they will suddenly throw all that away, and stop working on a project they're so clearly passionate about, ruining their reputation in the process.
With a track record of major ICO success, they could easily start a fund or raise money for the next venture — regardless of what happens to Filecoin.
Convincing people to give up $250 million is the part that built their reputation and ensures lucrative deals will be coming their way. Whether they’re passionate about some storage-sharing token is neither here nor there.
You're absolutely right. All human beings are selfish monsters and are motivated by money more than anything good. And that of course includes you yourself.
What’s good about Filecoin? It’s not like it would benefit humanity somehow. You make it sound like I’m criticizing Doctors Without Borders here, rather than a clunky S3 alternative.
Brings up a lot of good points, but his question at the end “Does IPFS mean I may be storing some illegal content originated by other users?” is so off-base, it casts doubt on the whole rest of the post. I only have a passing familiarity with IPFS, but I know that you only store files that you have explicitly chosen to store, or “pinned”. How did the author miss this?
If you read the white paper, it pretty clearly says that your node will cache files that peers request in order to build reputation on the network. Maybe IPFS doesn't actually do that currently, but it could and the whitepaper says it should, so it is a very valid question to ask.
edit: double-checked the whitepaper, and it says "In the case that a node has nothing that its peers want (or nothing at all), it seeks the pieces its peers want, with lower priority than what the node wants itself. This incentivizes nodes to cache and disseminate rare pieces, even if they are not interested in them directly."
I'm not sure how IPFS addresses this, but one method is to have an oracle that reports DMCA takedowns. Nodes can achieve takedown compliance by subscribing to the blacklist.
Obviously there are lots of technical challenges. Just a few:
- Who gets to add entries to the blacklist?
- How do infringement victims know that's the right person to contact?
- How do you avoid creating a central point of failure? (At minimum, nodes need the ability to choose their blacklist provider.)
- Since it's all content-addressed, how do you handle near duplicates? e.g. same movie encoded in different formats (basically have to punt on this one unless you implement some kind of contentID, which is a whole other problem)
Anyway, you need to have some figleaf policy so people have a clear way to opt-in to compliance.
I would be completely skeptical of this approach except that it appears to work for adblockers. Extensions like uBlock Origin let users choose from several third-party blacklists, with sensible defaults. Of course, there are fewer ad servers than pirate movie copies...
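A node-side sketch of what "subscribing to a blacklist" could look like - entirely hypothetical; the provider URL and the hook into the node are invented for illustration:

import requests

BLACKLIST_URL = "https://example.org/dmca-denylist.txt"  # hypothetical provider

def load_denylist(url=BLACKLIST_URL):
    # One content hash per line; the node operator chooses the provider.
    resp = requests.get(url, timeout=10)
    return {line.strip() for line in resp.text.splitlines() if line.strip()}

def should_serve(cid, denylist):
    # Refuse to cache, pin, or serve anything on the subscribed list.
    return cid not in denylist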
I think it is a non-problem, because the node is just a cache, just like an optical fiber acts as a cache (you can actually compute the storage capacity of an optical fiber of a given length from the bitrate and the speed of light). Owners of internet infrastructure (fiber & routers) are never sued for e.g. infringement of copyright, and infrastructure is exactly what an IPFS node is.
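The fiber-as-cache arithmetic, for the curious (back-of-the-envelope; assumes light travels at roughly two thirds of c in glass):

# Bits "stored" in a fiber = bitrate * one-way propagation delay.
bitrate = 10e9                       # 10 Gbps link
length_m = 100_000                   # 100 km of fiber
speed_in_glass = 3e8 * 2 / 3         # ~2e8 m/s
delay_s = length_m / speed_in_glass  # ~0.5 ms
bits_in_flight = bitrate * delay_s   # ~5 million bits
print(bits_in_flight / 8 / 1024)     # ~610 KiB "in the pipe"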
You need to understand WHY people sue others to be able to draw the correct conclusion.
There are reasons why infrastructure doesn't get sued, yet users who download illegal files very often get sent a warning from their ISP threatening to cut them off or charge them a penalty.
If IPFS takes off the same way BitTorrent took off and a lot of people end up using IPFS to download illegal content, ISPs will do the same thing, not because each individual file is illegal, but because they look at the entire network as a whole and conclude that it was this network that caused the rise in pirating, which is technically true. Their goal would be to scare the users into NOT using the network altogether unless they know EXACTLY what they're doing. That's their playbook.
The reason people don't use torrents as much as they used to is because people are afraid of getting sued by their ISPs and content owners. They know it won't completely get rid of all BitTorrent pirates, but they DO know that it will lower the usage. And they will do the same for IPFS users if that's what it takes.
> but because they look at the entire network as a whole and conclude that it was this network that caused the rise in pirating, which is technically true.
But at the same time, ISPs were also profiting from the pirating (in fact, but this is beside the point, they were financially profiting, while the users were just getting movies).
In the case of IPFS, the ones running the infrastructure are not actually profiting directly from the distribution of content in their caches. I.e., the IPFS users are more "innocent" than ISPs in the BitTorrent case.
So if BigMedia wants to sue anybody, they should sue the ISPs, OR the people who placed data in the cache in the first place.
Finally, if this is all insurmountable, then there is still a solution. All users of IPFS transfer ownership of their harddrives (or parts thereof) to the IPFS foundation. That way, the users are not liable.
PS: With IPFS, users are at the same level as ISPs. So if users are getting sued, they can say "I'm an ISP, and I got my data from that ISP, sue them!"
This is why there has been the so-called "great vertical integration" of ISPs and content providers.
It's easy to understand if you take a look at what companies own or are owned by major ISP companies like Time Warner Cable, Comcast, etc.
Once you start digging deeper into how the Internet "actually" works, you quickly realize it's not as decentralized as we thought, and you become very pessimistic. But I do hope there's a solution for this, which is why I am a fan of people working on stuff like IPFS.
That said, if you really want to overthrow the current structure, you should really understand the history and why and how things work the way they do currently. Not just at a technical level but at an economic level as well.
It's one of the reasons. Did I say it's the absolute singular reason behind all this?
Streaming services cost you money, and if you study microeconomics, humans are rational animals that always tend to move towards maximizing their gains, calculated by subtracting cost from benefit. And there are many factors involved here.
The ISP suing people problem causes people to decide between mental cost and financial cost.
If you know what you're doing and have an efficient workflow that doesn't impose a lot of mental cost, then you may use torrents; but otherwise, for most people the mental cost outweighs the financial cost, and that's why many people switched over.
Or it could be that owners of internet infrastructure are never sued because they have legal departments that can spend a lot of resources fighting back, keeping the copyright holders' lawyers busy (and draining resources) for a long time. On the other hand, end users don't have legal departments, probably don't have the resources to pay for a single lawyer for very long, and are much more likely to settle to make things go away, so it makes much more business sense to sue them.
The DMCA has explicit exceptions very clearly spelled out for service providers that do not store content and/or do not choose which content gets cached...
Perhaps nodes could seek only parts of files - a subset of the pieces that make up the full file - for the purpose of building reputation? It could offer some differentiation between users who are seeking the full file and those that are just acting as a pipe.
I talked to some of the IPFS folks a while back, and they were planning on publishing a blacklist. It would be up to the individuals running IPFS nodes to subscribe to blacklists, and anyone who wanted to do so could publish them (after all, there's very little technical capability required to publish a simple list of hashes).
This was in the context of child porn though, not copyright. I imagine that they wouldn't bother publishing a blacklist of copyrighted items (since every item is copyrighted, and a copyright holder is perfectly within their rights to publish their own copyrighted works via IPFS). Instead I suspect that they would suggest that copyright holders publish their own blacklists.
Whitelists are probably a lot more practical. With a blacklist there is the problem that vast numbers of new files will be coming on the network all the time, and so you would need some mechanism to determine which ones should be put on the list. There are only two ways to do that, neither practical.
One is machines, but then you need a huge server farm and lots of sophisticated software, and even then you get lots of false positives and false negatives.
The other is human beings, but that would require a huge army of people. You can't get that many volunteers, as people don't want to spend their time, and besides they don't want to be traumatized by looking at nasty pics. So you have to hire them, and where would the money come from?
As a consequence, I think whitelists are the way to go.
It does not "casts doubt on the whole rest of the post". It is a legitimate question afterall, and the same one faced nowadays with so many files being removed from websites. He questions how could one do it in a decentralized system since copyright issues will show up as well.
It's a legitimate question, especially for those familiar with other similar distributed systems. But it's easily answered and the article leaves it unanswered.
As I understand it though, this wouldn't be the case in the future if one becomes a Filecoin storage provider in which case one could be holding some illegal content. IIRC though some sort of blacklisting protocol will be provided in order to respond to DMCA, NSLs, etc, so I think Filecoin storage providers are covered in that regard.
The web already is decentralized. Most HTTP requests are fulfilled by a computer that is part of a CDN, not by the publisher's computer.
IPFS is like Wikipedia, Uber, AirBnB. There have been encyclopedias, taxis and hotels before. But now it's easier to participate.
My computer has this article cached right now. So if my neighbor wants to read it, he could get it from me. The infrastructure is just not there yet. IPFS will add it.
This may be an overblown response, but I can't help giving my opinion about centralisation. I think the web is decentralised by design, but it has never been this centralised in practice. What I'm about to say is about the Internet in a broader sense, but applies to the web just as well.
A few actors own most of the servers (AWS, Google, OVH, Azure), a few actors own most of the personal data (Google, Facebook, Amazon), and a few actors own the pipes to transfer the data between users and the above services (it depends on the country, but on average 3 to 4 providers per country, often the same across borders). The fact that CDNs are spread geographically has no impact on the balance of centralisation, as what matters - in my opinion - isn't geographical distribution, but the distribution of power or ownership.
It's also very vertical due to the way we consume it (think asymmetrical bandwidth). We mostly depend on someone else's computer, or someone else's service, which has nothing to do with the utopian beginnings of the web, when people were self-hosting their content on their own machines.
An example would be streaming: we only download stuff from Netflix or Youtube, when technologies like Bittorrent or webtorrent could be leveraged to limit the reliance on one actor. Technologically these actors aren't really single points of failure as they've learned to be reliable, but it's a matter of who has control over the Internet.
OP is measuring on a graph where each node is a server. You are measuring on a graph where each node is a person. "De-centralisation" can usefully apply to both. The former measures resilience against technical risks. The latter against sociopolitical risks.
I disagree, the internet is very much centralized.
In your example, this very post on HN points to an HTTPS URL. A URL that, for now, means this article, but years from now might land on a 404 page or a DNS-not-found error. Sure, I can use the Internet Archive, or find your post here on HN and ask you for it. But those are alternative services; they're not part of HTTP.
IPFS ensures that a URL means one thing and one thing only; if the content changes, the URL changes. No censorship, no ninja edits: as long as someone in the world is willing to host that specific URL, the IPFS protocol will serve it.
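That property is just content addressing: the address is a hash of the bytes, so different bytes necessarily get a different address. A toy illustration (plain SHA-256 here, standing in for IPFS's actual multihash/CID encoding):

import hashlib

def address(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

v1 = address(b"my article, first draft")
v2 = address(b"my article, ninja-edited")
assert v1 != v2                                   # any edit yields a new address
assert v1 == address(b"my article, first draft")  # same bytes, same address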
Fwiw, IPNS is mutable - you wouldn't use applications directly over IPFS, you'd view them over IPNS.
So it has mutability in the sense that a single address can change content over time. It's nice though, in that if it's important that content stays the same, it always will.
IPNS is virtually unusable at this stage. It's so slow that most people who use IPFS just use IPFS directly and try to come up with their own solution for this.
I do hope they figure out how to solve this problem but looking at the discussions around the issue, it doesn't sound like something that can be easily solved, maybe never.
Short version: we know, it sucks, and we have a couple of ways of making it better. Fear not.
I also want to add that we have many other ways of doing mutable data over IPFS, not just IPNS. While IPNS is a naming system, having services on protocols with streams, or using pubsub, can also give you the ability to keep mutable state in your application.
I thought software distribution would be a no brainer. There are plenty of projects that still have lists of mirror sites, using IPFS would be much better from a bandwidth and security perspective.
Distros are already getting distributed over BitTorrent, and since more people have it and know how to use it than IPFS, there is no point in distributing them over IPFS.
IPFS has some benefits if I understand it correctly, e.g. I think it can share duplicate data between multiple trees, while BitTorrent can't share blocks between multiple torrents. Still, it's understandable that distros with working infrastructure won't jump on every new thing.
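A toy illustration of why that block-level dedup helps (fixed-size chunking and plain SHA-256 standing in for IPFS's real chunker and CIDs):

import hashlib

def chunks(data: bytes, size: int = 256 * 1024):
    return [data[i:i + size] for i in range(0, len(data), size)]

iso_april = b"common base system" * 100_000 + b"april updates"
iso_may = b"common base system" * 100_000 + b"may updates"

unique_blocks = {hashlib.sha256(c).hexdigest()
                 for iso in (iso_april, iso_may) for c in chunks(iso)}
# The chunks covering the shared prefix hash identically, so two different
# "trees" store and transfer those blocks only once.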
I doubt there is much efficiency gain when most distros compress their packages and ISOs anyway; the few bits you could spare would most likely fall towards the installer ISOs, which are always mostly the same. But since nobody cares about the old ones, there isn't much win other than getting a tiny bit more bandwidth from people who don't offer the newest version yet.
There would also have to be a very high uptime guarantee, which means either every user of the distro operates an IPFS node by default - which nobody wants, because IPFS is extremely chatty and there is such a thing as "servers in private subnets with limited internet access" whose operators don't want them sending arbitrary data out to the internet. The next option would be opt-in, which means nobody will do it because everyone is lazy. Last option: simply maintain the status quo.
As a host, though, there's little reason not to offer both. It offers the same benefits as bittorrent but a slightly different network, and also canonical domain names using IPNS which would be a nice addition for hosting their content.
If it's easier to use than BitTorrent, then piracy people will switch over too and soon the name sounds just as frightening. The problem with BitTorrent with respect to popularizing legal use was that browsers never shipped with BT support; otherwise the experience would have been just like clicking a traditional download link.
If you're talking about browsers acting as seeds themselves, I don't think that model would get much traction, and the first thing people will do after downloading a browser is to disable that feature.
Nobody wants to keep seeding. This is the #1 problem for BitTorrent: most people only keep their torrent client on while they're downloading, so they never want it running all the time.
I think the same goes for IPFS, which is why they came up with Filecoin, but if the mental cost of maintaining a node (because of all the copyright issues mentioned elsewhere on this thread) is higher than the actual profit each node makes, it will never take off. So in the end I think Filecoin nodes will centralize just like Bitcoin nodes became centralized (that is, if they ever do end up launching what they promised), and when that happens it's AWS all over again. The content-addressable nature is indeed cool, though, and I can imagine some cool applications coming out of it. It's just that I don't think it will replace HTTP altogether.
The JS approach is still more cumbersome than a normal download, which just asks for a destination right when you click the link (if at all, I think most browsers default to just starting to download to a default destination nowadays).
As for seeding, that would just happen while the download is going, as you suggest; maybe have that configurable somewhere for the tech-savvy people. That would already scale quite well compared to no uploading at all with the traditional approach. The more demand there is, the slower the download gets, the longer you'll be seeding.
That being said, that's what would have helped 15 years ago, nowadays I'd agree that we're doing pretty fine with the centralized approach in most use cases.
Yeah, browser integration is critical. That’s why IPFS has focused on the JS implementation.
One nuance - these p2p protocols are not as simple as HTTP, FTP, etc. They require persistent connections to many peers and significant local storage. This eats up more local resources, to the point where making it default-on might cause a significant performance hit.
How many thousands of times per day are copies of software downloaded over transoceanic links when many hundreds of copies of that same data exist already in the user’s own county or even building?
These efficiency gains are real; major distributions should adopt this ASAP.
Does IPFS actually take a peer's geographic location into account when picking one? I thought they just search for peers on the local LAN and, if they don't find one, go for the one that's closest to them according to their PeerId (Kademlia DHT).
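For reference, "closest according to their PeerId" in plain Kademlia means XOR distance over the IDs, not geography - a minimal sketch:

def xor_distance(id_a: bytes, id_b: bytes) -> int:
    # Kademlia's metric: interpret both IDs as integers and XOR them.
    return int.from_bytes(id_a, "big") ^ int.from_bytes(id_b, "big")

def closest_peers(target: bytes, peers: list, k: int = 20):
    # Routing prefers the k peers whose IDs are nearest to the target key.
    return sorted(peers, key=lambda p: xor_distance(target, p))[:k]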
Well, I think it would be neat if package managers would utilize it. From my experience, it's rare that a company creates its own mirror (repo)... especially for workstations.
Most package managers should maintain a local cache of packages (pacman and apt do this).
An IPFS mirror would also mean you'd have to run an IPFS node in a possibly corporate environment, which is a big no-no if management doesn't allow P2P applications for whatever reason. (Maybe they have their secret cure for cancer on their super-secret server and don't want it using the chatty IPFS protocols.)
Then you're welcome to buy a copy I guess? I mean bandwidth isn't free and the assumption is if you download a distro, you like it enough to pass a little something on to your peers or spare a dime to the distributors.
TLDR: What's holding it back is Web standards. Extensions cannot initiate TCP connections, and there is no IPFS network made of (possibly hybrid, speaking both protocols) nodes that are able to use WebSockets instead of TCP.
The js-ipfs daemon can only fully run on Node.js where it has access to real UDP/TCP sockets. (I don't see this as a clear proclamation in their readme, so don't take this for granted)
> The js-ipfs daemon can only fully run on Node.js
Not sure what "fully" refers to here, but js-ipfs in the browser can fetch/add content like a normal IPFS node via either websockets or webrtc. Although, TCP connections would make a lot of things, a lot more efficient and nicer.
IPFS is really trivial to install on most OSes and distributions. If you don't want to run it as a daemon via terminal/systemd/whatever, you can run the browser extension together with ipfs-desktop, and it's GUIs all the way down.
Interesting to consider WebTorrent vs. BitTorrent in this context; BitTorrent has grown much closer to mainstream but WebTorrent doesn't seem to be growing.
I think the author is overly critical of the limitations of a distributed file system. With a decent incentive system (FileCoin for instance) there is no reason that distributed file storage won’t scale really well.
I for one have a stack of SSDs waiting for FileCoin to arrive. I am keen to leverage the 10Gbps fiber in my office to start ghetto hosting for some $$.
>IPFS may be a way of sticking it to the man. But the invisible hand of the free market forces also help here; when one big corporation starts playing foul and upsets the users, new companies and startups quickly move in to disrupt the space and fill in the void.
No, once someone like Facebook gets a monopoly, it becomes impossible for a startup to get anywhere. It is hard for me to believe the author doesn't know this.
He also doesn't address the problem of web companies going out of business, like Geocities, so you lose all your data forever. Likewise the if-you're-not-the-customer-you-are-the-product problem that is behind so many ills today. As far as defeating governmental censorship goes, IPFS is being combined with Tor.
In general, he seems to be looking at things from the perspective of someone who works for a company based on the present centralization model, rather than the perspective of those who are suffering under it and could solve a lot of problems with IPFS.
I understand part of the scalability concern being about the physical network capabilities ("Today's networking ecosystem evolved for the client-server model, what kind of problems could this create for switching to peer-to-peer model...") as opposed to the scalability of the IPFS protocol and node implementations.
I want to have IPFS services built up to the point that I can rely upon it so I can .. take the 100 or so of my peer group, and give them access to Content I Create.
I think I can have a machine up and running in my life that serves the principal nodes, and if a few of my other peers are also capable, we can build a scalable net among us that would serve the rest.
This is a very different scenario to mass-scale datacenter levels of delivery, and at a much more localised scale.
Among my peers, it'll scale just fine.
(So its mostly about the UI and presentation... I don't mind sending my friends a funky/cool URL. If that URL solves the IPNS->IPFS issue .. and if we also can get some pub/sub channels happening, to make the URL-swapping even more fluid .. wrapped up as a standard distro item .. )
His point was about scalability within (logically) centralized services, which people often misunderstand as a single point of failure or as unscalable. But that's not true, as he pointed out there.
In my ignorance I'm not really aware of the landscape for these types of technologies; does anyone have time to share a brief summary or link to further info?
The short answer is that a great many techies are working on decentralization because the present centralized model has a lot of serious problems, like Facebook et al making their money selling ads that may carry malware, and also selling your data to who-knows-who.
There are many projects to decentralize the web. IPFS gets so much attention because it is such a broad platform that other technologies (like Ethereum) can operate on top of, and because in some important ways it is farther along than anybody else.
First off, I'm a developer working on IPFS for Protocol Labs. Let me try to answer some inaccuracies and questions from this blogpost, to clear up any confusion.
> There is even the Beaker browser to help you surf IPFS
Beaker does not currently support IPFS, even though it used to in the beginning. We're hopeful Beaker might support IPFS once again in the future, but for now, Beaker just supports Dat.
> How common are petabyte or even gigabyte files on the Internet? There is definitely an increase in size trend due to the popularity of the multimedia files. But when will this become a pressing issue? It is not a pressing issue right now because CDNs help a lot for reducing traffic for the Internet. Also bandwidth is relatively easy to add compared to latency improvements
Setting aside the fact that centralized vs decentralized are fundamentally different models with different reliability, arguing that CDNs help with reducing traffic is just wrong. CDNs just move the traffic from "your server -> your client" to "your server -> someone's CDN -> your client"; they don't actually reduce traffic. However, using a decentralized protocol like IPFS would actually reduce traffic, as the closest nodes can help serve content, and maybe clients would not need to contact your own IPFS node at all if the data already exists on a closer node (at your ISP or even a neighbor).
> For scalability, shard it, georeplicate it, and provide CDNs for reading. For fault-tolerance, slap Paxos on it, or use chain replication systems (where Paxos guards the chain configuration), or use the globe-spanning distributed datastores available today.
IPFS is actually a globe-spanning distributed datastore, so IPFS would be an excellent choice if you're building a CDN. IPFS gives you the benefits you want from a CDN: it's scalable, content-addressed, and used to passing around a lot of data.
> Case in point, Dropbox is logically-centralized but is very highly available and fault-tolerant, while serving to millions of users. Facebook is able to serve billions of users.
For now. In the future, who knows? The argument for using IPFS is that we don't know if a central server will be around in the future, but to make sure, let's add content-addressing to data so we can serve this data from anywhere. Then it doesn't matter if Dropbox is around in the future, I can just pull down the data from anywhere and verify it's correct locally.
> If you want to make the natural disaster tolerance argument to motivate the use of IPFS, good luck trying to use IPFS over landlines when power and ISPs are down, and good luck trying to form a multihop wireless ad hoc network over laptops using IPFS. Our only hope in a big natural disaster is cell towers and satellite communication.
Here the author is comparing a physical network with a software stack. Of course IPFS is not an antenna, so it'll be hard to use it as such. But IPFS works excellently over radio and ad-hoc networks, because it's content-addressed. The current web? Not so well, as we have a bunch of assumptions that the endpoints we're using are all serving us "good" content and that you have a server-client model. Ad-hoc networks are bandwidth-constrained, and everything we can do to reduce that bandwidth is worth doing. IPFS helps a lot with that since everything is content-addressed.
> Does IPFS mean I may be storing some illegal content originated by other users?
No, IPFS only shares what you already accessed (until GC) or when you explicitly "pin" something (basically tell IPFS you want to help share this content with others)
> How does IPFS deal with the volatility? Just closing laptops at night may cause unavailability under an unfortunate sequence of events. What is the appropriate number of replicas for a data to avoid this fate? Would we have to over-replicate to be conservative and provide availability?
These questions lead me to believe that the author hasn't really understood IPFS before writing this post. Yes, shutting down your server will make content stored on that server unavailable. IPFS does not automatically distribute data, just as your local HTTP server does not automatically distribute data to other servers.
> But can the Byzantine nodes somehow collude to cause data loss in the system, making the originator think the data is replicated, but then deleting this data? What other things can go wrong?
Again, other nodes can't control your data. If you add data to your IPFS node, it's there until you remove it. No other nodes can control your data. Also, data is fetched based on hashes and when the content is downloaded, it's verified again to make sure it's correct. No way of screwing around with that.
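That verification step is simple to picture - a sketch using a bare SHA-256 instead of IPFS's actual multihash/CID machinery:

import hashlib

def fetch_and_verify(requested_digest: str, fetch_block) -> bytes:
    # `fetch_block` can be any untrusted peer that claims to have the data.
    data = fetch_block()
    if hashlib.sha256(data).hexdigest() != requested_digest:
        raise ValueError("block does not match the hash it was requested by")
    return data  # only hash-matching bytes are accepted and re-shared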
Happy to answer any more questions people have about IPFS.
> Beaker does not currently support IPFS, even though it used to in the beginning. We're hopeful Beaker might support IPFS once again in the future, but for now, Beaker just supports Dat.
Since you're a developer at Protocol Labs, I will assume you remember why Beaker dropped IPFS in favor of Dat. I think IPFS is cool, but the biggest hurdle for adoption is the naming scheme - more specifically, what happens when a file's content changes. You have IPNS to handle this, but as far as I know it's virtually unusable because it's extremely slow. And I got the impression that you guys have no real solution for this, and maybe IPNS is not even one of your top priorities because you're focused on other important things.
But as a user this matters a lot because without a name resolution system, we're left with static files only, and while this may be a good alternative to S3, this means that's about it and people can't build apps on top of it. I wonder what your plans are and if you have any solution to building an actual functional IPNS, or if you're focused on building an S3 alternative at the moment.
Indeed, I do remember why Beaker dropped IPFS support. Basically, it comes down to the URL structure and how we structure URLs in the IPFS ecosystem, which is not super easy to integrate with browsers today. There are some logs of the discussion over at GitHub: https://github.com/beakerbrowser/beaker/issues/2
There is an open feature request for (re)adding IPFS support to Beaker as well, if someone feels a bit experimental about adding IPFS support to an Electron application: https://github.com/beakerbrowser/beaker/issues/480 Reach out to me if you're interested in helping out and I can point you in the right direction. Email is in my profile.
Re: IPNS is currently slow: Yes, it is. Apologies for that; it definitely has to get faster to be really useful. The network has grown a lot and we are working on improving the performance of IPNS, both for adding records and for reading them. Currently, adding records is slower than reading them, as reading now uses pubsub to push updates, making it a lot faster than before.
We do have more solutions in mind for IPNS performance and are still doing research on the best way of implementing them.
If you haven't seen IPFS pubsub before, I urge you to take a look, as it gives you the ability to communicate with a large number of nodes cheaply.
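If you want to poke at it, the CLI is enough to get a feel for it - a minimal sketch shelling out to the go-ipfs binary, assuming the daemon was started with pubsub enabled (e.g. the --enable-pubsub-experiment flag); the topic name and message are invented:

import subprocess

TOPIC = "my-app-updates"  # hypothetical topic name

# Publish a small message to every peer subscribed to the topic.
subprocess.run(["ipfs", "pubsub", "pub", TOPIC, "new-root QmSomeHash"], check=True)

# Elsewhere, a subscriber prints messages as they arrive:
# subprocess.run(["ipfs", "pubsub", "sub", TOPIC])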
So while IPFS started out as a distributed file system with naming on top, we now have more functionality for handling dynamic data and working on making it even easier to build applications with just IPFS.
Paired with an anonymity layer and Tor hidden services for direct communication, it really solves a lot of the problems of today's Web, where DPI boxes can mess with unencrypted HTTP contents, DNS responses can be faked, web servers and CAs get hacked, and centralized services decide on content availability.
An especially provocative point is how mobile devices would most likely be leeches in a p2p world. The world wants easy-to-use mobiles. A system like this may need to constrain leeches. There will be some tension in serving both goals.
It's a wonderful concept to decentralize the internet. However, I am a little skeptical of this concept. Won't IPFS be subject to the same problem as the global blockchain for Bitcoin? That is, it can't process the kind of transaction throughput the global economy requires. Wouldn't that technical limitation be present for IPFS?
No, IPFS is not subject to the same problem, as IPFS neither uses nor is a blockchain. There are no transactions.
There is data, and data is referred via their content hash. People can request that data via the content hash and then it's downloaded. There is no mining or similar things in IPFS itself.
Ethereum tackles computation, and Filecoin tackles storage.
IPFS and Ethereum are well recognized because they actually exist and work. Our system, which also exists, and is ranked #2 "Blockchain" by developers on GitHub ( https://github.com/topics/blockchain ), will be tackling the bandwidth incentive problem.
Juan and I met a while back (we have all been doing P2P stuff since before 2014) and I'm trying to convince him + us + the Ethereum team to partner together, so app developers have a single solution to build any dApp on top of, using real tech (not vaporware!) that already exists.
Financial transactions must solve the double-spending problem, which Bitcoin addresses and which, as you rightly note, requires a form of strong consistency (per the CAP theorem) that bottlenecks global throughput.
However, storage (IPFS) and data sync/bandwidth (us) don't have those requirements, so they can be much more scalable.
Indexing of content on IPFS through Atmos (a cryptocurrency). Watch this one... it might really be the "killer app" you are all looking for. Link: Novusphere.io
Like, plug-and-play OS support, and it's a go.