You are not. I hate how browsers nowadays, especially browsers on smartphones, are unusable without access to the Internet. Sure, there is Pocket, for instance, but IMHO there shouldn't be a need for such an app. And while I'm ranting at Pocket - there is still no automated login for LWN.net. (I know I can go the manual way, but still...)
P.S. I'm thinking about making a nice dedicated cross-platform LWN.net articles & comments reader one day (well, maybe more), but it's hard to squeeze out enough time for that kind of fiddling (unless it's a really grave matter, which this isn't).
Opera Mobile (the "classic" one, before they threw it all away) let you save pages for offline reading. Not perfect, but better than nothing. Sadly, it did not keep the cached content across restarts, which is annoying on mobile, where apps get killed a lot. But if I recall correctly, at least navigating back and forward was instant, like on desktop, with no network traffic.
It infuriates me that my browser keeps this big cache of web content, but refuses to show it to me when I don't have an Internet connection! This is the main reason people keep asking for an app, when all they really want is offline access.
It's still very difficult to verify or ensure that the sites you know you'll need will actually be cached when you need them. For instance, you may not need page XYZ of the documentation now, but you might later when you're coding on the beach, and there's no way to force Chrome to crawl it besides preemptively visiting the page - and even then it might be evicted for any number of reasons!
Batch Save Pocket [1] is my stopgap solution, but Pocket's not at all optimized for documentation.
Maybe we're talking about different things, but I remember that in IE you could save a page and it would also save pages a certain number of links deep from that one.
Doesn't HTML5 AppCache allow for some offline features now? But I agree, this should absolutely already be a feature that everyone plans for when they're making websites. Even when users do have a connection, plenty of people are on slow mobile connections, and caching will prevent unnecessary page load times.
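For anyone who hasn't tried it: AppCache works off a plain-text manifest that the page points at, and the browser then keeps everything listed in it for offline use. A rough sketch (the file names and paths here are just placeholders, and the manifest has to be served as text/cache-manifest):

    <!-- in the page -->
    <html manifest="offline.appcache">

    # offline.appcache
    CACHE MANIFEST
    # v1 - change this comment to make browsers re-download the entries

    CACHE:
    /index.html
    /style.css
    /docs/page-xyz.html

    NETWORK:
    *

    FALLBACK:
    / /offline.html

The big gotcha is that nothing in the CACHE section gets re-fetched until the manifest file itself changes, which surprises a lot of people the first time.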
I would love to be able to cache websites to my server and access them in the event the site goes offline. I've tried a number of things like wget to "offline" a website and had mixed success. Does anyone know of a proven way to do something like this? (I'd even settle for no images, a la Google's cache, but pulling images and scripts would be a huge win.)
I'm younger, but I can already see link rot destroying my bookmarks. I now use (and pay for) pinboard.in, but I'd like a way to do it myself. I've considered writing a Chrome plugin that sends the URLs I visit over to a process running on my server to archive them (with the ability to blacklist/whitelist domains), but I haven't found a way to do it yet that works reliably (I'd also probably need to send a copy of my cookies for sites that require auth).
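The receiving side doesn't have to be much, though. Here's the kind of thing I've been picturing - a stdlib-only Python sketch where the port, the whitelist, and the archive directory are all made up, and the extension would just POST each URL to it:

    # archive_receiver.py - toy sketch of a URL-archiving endpoint (illustrative only)
    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse

    ARCHIVE_DIR = "/srv/web-archive"                        # made-up path
    ALLOWED_DOMAINS = {"lwn.net", "news.ycombinator.com"}   # example whitelist

    class ArchiveHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            url = body.get("url", "")
            host = urlparse(url).hostname or ""

            if host not in ALLOWED_DOMAINS:
                self.send_response(403)
                self.end_headers()
                return

            # Hand the actual mirroring off to wget: --page-requisites pulls
            # images/CSS/JS, --convert-links rewrites them for local viewing.
            subprocess.Popen([
                "wget", "--page-requisites", "--convert-links",
                "--adjust-extension", "--no-parent",
                "-P", ARCHIVE_DIR, url,
            ])
            self.send_response(202)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8765), ArchiveHandler).serve_forever()

The cookie forwarding is the part I still haven't worked out cleanly; wget can take a --load-cookies file, but getting the browser's cookies into that format automatically is the fiddly bit.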
> I've tried a number of things like wget to "offline" a website and had mixed success. Does anyone know of a proven way to do something like this?
What about httrack [0]? From the description in the OpenBSD ports tree:
HTTrack is an easy-to-use offline browser utility. It allows you to
download a World Wide Web site from the Internet to a local directory,
building recursively all directories, getting HTML, images, and other
files from the server to your computer. HTTrack arranges the original
site's relative link-structure. Simply open a page of the "mirrored"
website in your browser, and you can browse the site from link to link,
as if you were viewing it online. HTTrack can also update an existing
mirrored site, and resume interrupted downloads. HTTrack is fully
configurable, and has an integrated help system.
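If I remember the command-line usage right, a basic mirror looks something like this (the URL and output directory are just examples):

    httrack "https://example.com/docs/" -O ~/mirrors/example-docs "+*.example.com/*" -v

The "+*.example.com/*" filter is what keeps it from wandering off-site.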
Or you can use wget, either for a single page or for a recursive download. :)
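For the wget route, the combination that has worked best for me on whole sites is roughly this (adjust the URL, and keep --wait in there to be polite to the server):

    wget --mirror --convert-links --adjust-extension --page-requisites \
         --no-parent --wait=1 https://example.com/docs/

--page-requisites is what pulls in the images, CSS, and scripts the parent comment was asking about, and --convert-links rewrites them so the local copy browses cleanly offline.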
What we consider "decent" today is not always "decent" tomorrow, and things like personal blogs go down all the time or change their URL structure. Also, not everyone has a community or family that will keep their work online after they are gone, and I don't want to lose content because someone's hosting lapsed after their death.
Looking back, I wish I had archived some of the forums I used as a kid, as a number of them are just gone - no Wayback Machine, no cache, no archive, just gone.
Sidenote: I'd love to work on (or just use) a service that allows community funding of both hosting and domain registration, so that you could add a widget to your site and have it stay online even after your death, as long as people keep donating - maybe even falling back to a static copy if no one can pay, floating the cost with proceeds from other sites. There's a real chance that you could die and your close friends or relatives would either not have the access (passwords/keys) or the technical know-how to keep your site online, even if they had the funds to do so.
The feature of being able to share a URL from my browser to Offline is pretty sweet.
I was irate that I had to type out URLs, until I thought to try that :)
Bug report: hitting the Back button takes me out of the browser, while the back arrow goes backwards in web history. I expected them to be the other way around (behaving more like a normal browser).
The first URL I typed in was news.ycombinator.com, which wasn't valid until I added the http://. I don't think most users would know to do that.
Could you default to https (falling back to http) when the scheme isn't specified?
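Something like this is all I mean - sketched in Python for illustration (the function name and the https-first policy are just my guesses at sensible behaviour; the app would do the equivalent in whatever it's written in):

    # Prepend a scheme when the user leaves it off, preferring https.
    from urllib.parse import urlparse

    def normalize_url(raw: str) -> str:
        if urlparse(raw).scheme:       # user already typed http:// or https://
            return raw
        return "https://" + raw        # default; retrying with http:// on failure would cover older sites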
Thanks for this. I take the subway every day of the week, and unfortunately we don't have internet service in the subway tunnels. I've been hoping for an app like this for a long time. Will definitely download and try it!
One nice feature would be a "follow the next page button" mode for following serials like web comics. Instead of starting from a home page and following all links to a given depth, you would give it the URLs for page 1 and page 2. It would search page 1 for the button that leads to page 2, and then it would find that same button on page 2 to get page 3, and so on. In other words, it would simulate starting at page one and repeatedly pressing the "next" button to read the whole serial.
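To make that concrete, here's a rough stdlib-only Python sketch of the idea - the class and function names are mine, and a real implementation would want a proper HTML parser plus rate limiting:

    # next_button_crawl.py - sketch of "keep pressing the next button" crawling
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect (href, link text) pairs from one page."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def links_on(url):
        parser = LinkCollector()
        parser.feed(urlopen(url).read().decode("utf-8", "replace"))
        return [(urljoin(url, href), text) for href, text in parser.links if href]

    def crawl_serial(page1, page2, limit=500):
        """Learn which link on page1 leads to page2, then keep following it."""
        next_text = next(text for href, text in links_on(page1) if href == page2)
        pages, current = [page1, page2], page2
        while len(pages) < limit:
            candidates = [href for href, text in links_on(current) if text == next_text]
            if not candidates or candidates[0] in pages:
                break                      # no next button found, or we looped back
            current = candidates[0]
            pages.append(current)
        return pages

The obvious failure mode is comics where the "next" link is an image or its text changes between pages, so matching on rel="next" or on the link's position in the page would probably be more robust than matching on link text.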
When I used to access the Internet via a dial-up modem, and paid for every second online, I'd always browse via WWWOFFLE [1] and then be able to just return to where I had been after going offline. I think I remember putting something in a CHAP script to tell WWWOFFLE whether I was online or offline.
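If memory serves, it was just WWWOFFLE's control commands dropped into the pppd hook scripts - something along these lines (paths and placement from memory, so treat it as a sketch):

    # /etc/ppp/ip-up
    wwwoffle -online

    # /etc/ppp/ip-down
    wwwoffle -offline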