Download this Blog: Websites don't need web servers (breckyunits.com)
2 points by breck 8 months ago | 4 comments



Fun fact! You don't need any specific blog engine for this; it's already enabled on most GitHub-hosted pages.

- Find a random github.io website, for example: https://arduino.github.io/arduino-cli/0.21/

- Find the corresponding GitHub repo (in this particular case linked in the top-right corner), and switch to the gh-pages branch: https://github.com/arduino/arduino-cli/tree/gh-pages

- Hit the green "Code" button and then the "Download ZIP" link.

- Voilà! You have a zip file with the generated HTML, all images, CSS, JavaScript, even client-side search.
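
If you'd rather script it, the same download works from the command line. A rough sketch using GitHub's standard branch-archive URL (the branch name varies per repo):

  # Download the gh-pages branch of arduino/arduino-cli as a zip and unpack it
  curl -L -o site.zip https://github.com/arduino/arduino-cli/archive/refs/heads/gh-pages.zip
  unzip site.zip   # extracts into arduino-cli-gh-pages/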


The HN Guidelines say "Assume good faith."

I assume you meant to be helpful with your comment.

The link you provided demonstrates that some thought must go into making a site that works offline.

That site works poorly offline.

I will assume you did not test the one example you shared with everyone.

Perhaps you would now be willing to invest some time in exploring a larger sample of GitHub Pages sites, and report back what percentage of them actually work offline.


What kind of problems did you encounter?

That particular website (arduino-cli) works offline pretty well, including search, with a caveat that it needs a local web server. I've used the "python3 -m http.server" one, which is installed by default on most Linux machines (and I think on all Macs too? not 100% sure).
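
Concretely, the whole flow is something like this (assuming the zip from the repo's gh-pages branch unpacked into arduino-cli-gh-pages/):

  cd arduino-cli-gh-pages
  # Serve the static files locally, then browse to http://localhost:8000/
  # and pick one of the version directories
  python3 -m http.server 8000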

Or you can click on "index.html" directly; then the site is _mostly_ usable (text and pictures are there), but you have to manually choose "index.html" for each page, and search does not work. Pretty annoying, yeah, but it can still save you if you are without internet and need to consult the docs.

(Note there is no top-level index in this archive; this is by design: https://arduino.github.io/ shows a 404 as well. You need to choose one of the versions and open index.html there.)

I agree that it would be nicer if more sites worked well without a web server (didn't rely on an explicit index.html, used only relative paths), but even as things are now, they are not that bad. Websites disappear all the time, and I think it's very nice that you can download many of them in one click.
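
If you want a quick guess at whether a given download will survive being opened straight from disk, one rough check (a sketch, not exhaustive) is to grep for root-relative paths, which resolve against the filesystem root under file:// and break:

  # Any hits here are likely to break when pages are opened via file://
  grep -rnE 'src="/|href="/' arduino-cli-gh-pages/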


> What kind of problems did you encounter?

The problems with search and the index that you detailed. Nice work.

> with a caveat that it needs a local web server. I've used the "python3 -m http.server" one

You and I can do this no problem.

The "Download this [Site]" functionality is designed for novice users, who have never used the terminal.

In user testing with that target user this week, it was clear that they can easily download, unzip, and open index.html, but from there everything needs to "just work". So a lot of getting little details all right to make that happen.



