Well, reddit does offer RSS support. So you could, for instance, subscribe to a multi (e.g.
reddit.com/r/programming+python+nim/.rss)
and discuss the articles you want to discuss with the community.
The cool thing about using reddit this way is that you can use your reader (like Thunderbird) to process your feeds according to rules. For instance, you could automatically delete all posts from /r/programming which don't feature Nim or Python.
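For what it's worth, here is a rough sketch of that same rule outside Thunderbird, using Python's feedparser library; the keyword list is just an example of the "must feature Nim or Python" rule.

    import feedparser  # third-party: pip install feedparser

    FEED_URL = "https://www.reddit.com/r/programming+python+nim/.rss"
    KEYWORDS = ("nim", "python")  # example rule: keep only posts about these

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Match against title and summary; drop everything else.
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(word in text for word in KEYWORDS):
            print(entry.get("title"), "->", entry.get("link"))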
RSS is feature complete. Like email. As soon as anyone tries to add something, it ruins the whole thing by centralizing it, making something proprietary, encouraging bad practices in their users, or breaking something as simple as line breaks or copy/paste (outlookin' at you, MS). It's fine for something else to handle the social stuff; in fact, commenting tools often have RSS feeds of their own.
You mean the lost world of FriendFeed/Google Buzz/PubSubHub.
It seemed like a great idea, but the social features really didn't work. People dumped their RSS feeds, but the decentralized nature meant that attempts at flowing comments between sites ended up losing context.
I would like to publish output from my reading as RSS channels - then other people could subscribe to those channels, making a full circle. I don't know yet what these channels would be - we could start with a simple 'liked articles' channel, then readers would count how many of their friends' 'liked' channels a given article appears in and apply other heuristics. But everybody would be able to make their own channels and then see whether people subscribe to them. In general it would work like a Facebook feed - but the algorithm would be executed on the user's machine and defined by the user.
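A minimal sketch of one such client-side heuristic, assuming each friend publishes a 'liked articles' feed at some URL (the URLs here are made up):

    from collections import Counter
    import feedparser  # third-party: pip install feedparser

    # Hypothetical 'liked articles' feeds published by friends.
    LIKED_CHANNELS = [
        "https://example.org/alice/liked.rss",
        "https://example.org/bob/liked.rss",
    ]

    counts = Counter()
    for url in LIKED_CHANNELS:
        for entry in feedparser.parse(url).entries:
            link = entry.get("link")
            if link:
                counts[link] += 1  # one more friend's channel contains this article

    # Rank articles by how many liked channels they appear in.
    for link, n in counts.most_common(10):
        print(n, link)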
Google Reader did something like that - I had a lot of friends on there, and I don't remember the exact UI, but I could easily see things they'd "starred" and this formed a pretty cool secondary news feed of sorts. It was perfect.
TheOldReader.com is really good, and kind of duplicates the Google Reader experience. Only problem is, none of my friends use it!
An NNO file is the end result of aggregation that can be used for the next session, but its primary intent is publication/sharing.
It is a comma-separated text document that starts with a Unix build time followed by a sorted list of items: 1) feed URL, 2) minutes before build, 3) title checksum.
The file is built after the aggregator aggregates the user's hundreds, thousands or millions of ESS, RSS and/or Atom subscriptions, reducing the result list to a configured amount (say 3000).
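A minimal sketch of the writing side, going by the description above; I'm assuming one item per line and using CRC32 as the "title checksum", both of which are my own guesses rather than part of the format.

    import time
    import zlib

    # items: (feed_url, item_unix_time, title) tuples gathered by the aggregator,
    # already reduced to the configured amount (say 3000).
    def write_nno(items, path="my.nno"):
        build_time = int(time.time())
        lines = [str(build_time)]                       # first line: Unix build time
        # Sort newest first so a consumer can stop after a screenful.
        for feed_url, item_time, title in sorted(items, key=lambda i: i[1], reverse=True):
            minutes_before_build = int((build_time - item_time) // 60)
            title_checksum = zlib.crc32(title.encode("utf-8"))  # assumption: CRC32
            lines.append("%s,%d,%d" % (feed_url, minutes_before_build, title_checksum))
        with open(path, "w") as f:
            f.write("\n".join(lines))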
Once published, a second user can load the NNO into their aggregator, which should fetch the top feed items listed in the NNO until they have a screen full of results (scrolling loads more).
The second aggregator can then proceed to fetch that user's own subscriptions and/or parse additional NNO files into the result set.
When done, they can publish their own NNO file to be used the same way.
Ideally the aggregators maintain their own manually unsubscribed feed list and implement configurable filters: a word or word-combination filter, a Bayes filter, a machine-learning implementation, or some other filter. The news items displayed should carry an indication of their origin (which NNO file), and the aggregator could provide an auto-translate link or do inline translation of headlines/articles.
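And the reading side, under the same assumptions as the writing sketch (one item per line, CRC32 title checksum); the blocklist and word filter are only stand-ins, and a real aggregator would cache feeds instead of refetching them per line:

    import zlib
    import feedparser  # third-party: pip install feedparser

    UNSUBSCRIBED = {"https://example.org/spammy.rss"}  # manually unsubscribed feeds (made up)
    BLOCKED_WORDS = ("crypto", "nft")                  # example word filter

    def show_nno(path, screenful=25):
        shown = 0
        with open(path) as f:
            f.readline()                               # Unix build time, unused for display
            for line in f:
                feed_url, _minutes, checksum = line.strip().split(",")
                if feed_url in UNSUBSCRIBED:
                    continue
                for entry in feedparser.parse(feed_url).entries:
                    title = entry.get("title", "")
                    # Identify the referenced item by its title checksum (CRC32 assumed).
                    if zlib.crc32(title.encode("utf-8")) != int(checksum):
                        continue
                    if any(w in title.lower() for w in BLOCKED_WORDS):
                        continue
                    print("[from %s] %s  %s" % (path, title, entry.get("link", "")))
                    shown += 1
                if shown >= screenful:
                    return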
Like you suggested, the point of the entire exercise is to have people access the internet without a multinational man in the middle.
I also coin ESS in the document; it is not required to make NNO work:
An ESS is a tiny comma-separated text document with a Unix build date and a URL for an RSS or Atom feed, then, for each news item, 1) a publication date that is an offset in minutes from the build date and 2) a tiny checksum.
Just as RSS is much smaller than the HTML document, the ESS is much smaller than the RSS. The example is 71 bytes long. That is considerably smaller than a kilobyte.
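To make the layout concrete, here is a made-up single-line ESS of my own (not the 71-byte example from the document): build date, feed URL, then a minute offset and checksum per item.

    1735689600,https://example.org/feed.xml,5,a1b2,190,c3d4,2880,e5f6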
I would describe the problem with RSS with an analogy to juggling chainsaws. :) When you toss a chainsaw into the air, it needs to land in your hand at the right moment for you to be able to toss it again.
When you request an RSS feed (or, more likely, x feeds per second) the results arrive with a random offset. The parser at times just sits there waiting for something to do, then it is drowned in results arriving all at the same time. If that pile is large enough, additional results arrive faster than the old ones can be parsed into the result set. Parsing feeds is not trivial the way it is with ESS: there are multiple formats (RSS and Atom), there can be multiple link elements with different attributes, the pubDate (or updated) has to be dug out and is not a plain number, and lastly the feed can be quite large.
With ESS you just take a substring (up to the first comma) and compare the number to a desired age or your oldest item.
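In code that check can be as small as this, assuming item entries shaped like the made-up example above:

    # Keep an ESS item if its leading minute offset is within the desired age.
    def item_wanted(item_entry, max_age_minutes):
        offset = int(item_entry[:item_entry.index(",")])
        return offset <= max_age_minutes

    # item_wanted("190,c3d4", max_age_minutes=1440)  ->  True (about three hours old)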
If you or anyone else is interested in non-corporate technology just for the sake of making things great again feel free to contact me: gdewilde@gmail.com