Sad that the site is now a JavaScript-powered site. It's not even old-school SEO friendly (I know most of the engines now run JS, but it slows down indexing).
I thought for sure they had done a PJAX [1]-style design to keep asset loading minimal when I saw the spinner (why c2 would need that is beyond me).
But it is even worse: a full page reload on each click, and then an XMLHttpRequest call is made for some JSON.
I wonder who is running it and why they picked such an awful design. Do people just not know how to write old web 1.0 apps?
For now I will play devil's advocate and just assume they are offering a JSON-like API so that someone else can maybe write a better skin. If someone does, hopefully they will use pushState.
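For reference, the pushState approach isn't much code. A minimal sketch of what I mean (the /wiki/pages endpoint and the JSON shape are made up, not c2's actual API):

    // Minimal pushState navigation sketch (TypeScript, browser).
    // "/wiki/pages/<name>.json" is a made-up endpoint, not c2's real one.
    async function loadPage(name: string, push = true): Promise<void> {
      const res = await fetch(`/wiki/pages/${name}.json`);
      const page = await res.json();                 // assume { title, html }
      document.title = page.title;
      document.getElementById("content")!.innerHTML = page.html;
      if (push) history.pushState({ name }, "", `/wiki/${name}`);
    }

    // Intercept in-wiki links instead of doing a full page reload.
    document.addEventListener("click", (e) => {
      const link = (e.target as HTMLElement).closest("a[data-wiki]");
      if (link) {
        e.preventDefault();
        loadPage(link.getAttribute("data-wiki")!);
      }
    });

    // Make the back/forward buttons work.
    window.addEventListener("popstate", (e) => {
      if (e.state?.name) loadPage(e.state.name, false);
    });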
EDIT
I clicked around to see what Ward has been up to, and now I can sort of see why c2 is the way it is. Basically, Ward is working on Federated Wiki. I think the idea is sort of cool, as it is yet another attempt to decentralize the web.
I believe it is going to be based on some JSON protocol, but it would be interesting, and might see greater uptake, if it were in Google AMP format, or supported that format, or converted to it (not c2, but the Federated Wiki stuff).
It appears the root of his problem is an old database in a weird format. If he just gave access to the DB, I'm sure a fellow GitHubber / HNer could get it converted correctly for him to some other DB or format.
Then again I probably don't fully understand all of the problems.
It's not that the database is in a weird format. It's that individual entries have multiple character encodings. The old Perl CGI could handle that just fine, but trying to do anything with those files is a pain in the butt, which is why the site was offline in the first place. Ward was attempting to work with a static target instead of a moving target.
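For anyone curious what that pain looks like in practice, here's a rough sketch of the kind of fallback decoding involved. I don't know which encodings the c2 entries actually use, so the UTF-8/windows-1252 pair is just an assumption:

    // Decode an entry that may be UTF-8 or a legacy single-byte encoding.
    // Purely illustrative: the real c2 entries apparently mix encodings in
    // ways a simple two-step fallback like this may not fully handle.
    function decodeEntry(buf: Uint8Array): string {
      try {
        // fatal: true makes the decoder throw on invalid UTF-8 sequences
        return new TextDecoder("utf-8", { fatal: true }).decode(buf);
      } catch {
        // fall back to windows-1252 for pre-Unicode entries
        return new TextDecoder("windows-1252").decode(buf);
      }
    }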
It took some digging, but I managed to grab a dump of every page in JSON. He hid it well, but I scraped everything and put it up for people to see (it's in its original markup form, and I'm working on converting it to HTML again).
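The conversion is mostly pattern matching. A very partial sketch of the idea (the bold/italic/CamelCase rules are the classic WikiWiki conventions, but I haven't verified them against the whole dump):

    // Very partial WikiWiki-markup-to-HTML converter (sketch only; the
    // real markup has many more rules than the two shown here).
    function toHtml(text: string): string {
      return text
        .split(/\n\n+/)                                        // paragraphs
        .map((para) =>
          "<p>" +
          para
            .replace(/'''(.+?)'''/g, "<strong>$1</strong>")    // '''bold'''
            .replace(/''(.+?)''/g, "<em>$1</em>")              // ''italic''
            // bare CamelCase words become links to other wiki pages
            .replace(/\b([A-Z][a-z]+(?:[A-Z][a-z0-9]+)+)\b/g,
                     '<a href="$1.html">$1</a>') +
          "</p>"
        )
        .join("\n");
    }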
This is a regrettable redesign. Every page loads nearly a megabyte of crap (a 640 KB "names.txt" file + 260 KB of jQuery) and requires JavaScript for what could be a static text site.
If done correctly, that could be a constant overhead that the browser caches, and you move on.
That's not particularly helpful for people crawling and not keeping state between page hits, but it should make a difference for end users if done right.
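For example, serving those big assets with long-lived cache headers would turn them into a one-time cost per visitor. A sketch, assuming a Node/Express-style server (which may not be anything like what c2 actually runs):

    import express from "express";

    const app = express();

    // Let browsers cache the big static assets (names.txt, the JS bundle)
    // for a long time, so they are a one-time cost rather than per-page.
    app.use("/static", express.static("static", {
      maxAge: "365d",
      immutable: true,
    }));

    app.listen(8080);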
So, because of the weird format that Ward has gone with, and because I'm not a fan of the federated wiki concept as he sees it, I mirrored the JSON dump of the old c2 wiki, did a little bit of processing on it, and generated a directory of all the pages.
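The processing itself is small; roughly something like this (simplified, and the file name and JSON shape here are just illustrative):

    import { readFileSync, writeFileSync } from "fs";

    // Assumes a dump shaped like { "PageName": { text: "..." }, ... };
    // the file name and shape are illustrative, not the actual dump format.
    const dump = JSON.parse(readFileSync("c2-dump.json", "utf8"));

    const entries = Object.keys(dump).sort().map(
      (name) => `<li><a href="${name}.html">${name}</a></li>`
    );

    writeFileSync("index.html", `<ul>\n${entries.join("\n")}\n</ul>\n`);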
Note though that, from what I can tell, the wiki explicitly doesn't grant the right to host the contents somewhere else (in contrast to many other wikis).
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://code.jquery.com/jquery-3.1.1.js. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
> None of the “sha256” hashes in the integrity attribute match the content of the subresource.
The basic concept behind federated wiki is interesting, but it still needs to be simple, clean, and usable. As it is, the most basic things like selecting text or navigation do not work or work in totally unexpected ways. It shouldn't actually need instructions for basic things like How to Follow Links and How to View Changes ("you can view the changed page on that site by clicking on the flag of that page. Don't expect the link to find the remote site because it will likely be hidden behind the original page on your site." What?)
Ward Cunningham has a great idea with poor execution, and he's holding years' worth of commentary on various technical subjects hostage in order to satisfy his desire to implement his idea.
I'm not a fan. I worry for the other rabbit holes of the internet that are still smart and able to be searched, mined, and mirrored easily.
Great, so now Ward Cunningham doesn't know how to write a good wiki system. Federation is all well and good, but it doesn't excuse the absolute garbage UI.
I'm happy that it's back. It seems to be running on a new engine.
When I click around I see a spinner sometimes, I don't remember seeing that in the past.
Seems like they hacked up a new system to fetch the old page quickly, nothing more. IIUC it happened because of some hardware failure, so I'm just happy it's still up, even though the full JS spinner thing is a bit sad.
Still the same content (and old bookmarks work fine), so I can live with the bizarre new delivery system (could've been worse, like one of those old Flash websites). You can override the CSS to make it look normal.
I didn't even know it was down. Sad that it's become forgotten; I used to spend a lot of time there. After GoogleLovesWikiNot, contributions dropped, and now most of the articles are very dated.
[1]: https://github.com/defunkt/jquery-pjax