Show HN: Offline – Access your favorite websites without a network connection (play.google.com)
69 points by theycallmeg on Jan 22, 2015 | 48 comments


Anyone else remember when this was a standard feature in a lot of desktop browsers? Or am I crazy?


You are not. I hate how browsers nowadays, especially browsers on smartphones, are unusable without access to the Internet. Sure, there is Pocket, for instance, but IMHO there shouldn't be a need for such an app. And while I'm ranting at Pocket: there is still no automated login for LWN.net. (I know I can go the manual way, but still...)

P.S. I'm thinking about making a nice dedicated cross-platform reader for LWN.net articles & comments one day (well, maybe more), but it's hard to squeeze out enough time for that kind of fiddling (unless it's a really grave matter, which this isn't).


Opera Mobile (the "classic" one, before they threw it all away) let you save pages for offline reading. Not perfect, but better than nothing. Sadly, it did not keep the cache across restarts, which is annoying on mobile, where apps get killed a lot. But if I recall correctly, at least back and forward navigation was instant, like on desktop, with no network traffic.


What about the AppCache manifest, service workers in Chrome, and hood.ie? There are ways to make the web work offline.


It infuriates me that my browser keeps this big cache of web content, but refuses to show it to me when I don't have an Internet connection! This is the main reason people keep asking for an app, when all they really want is offline access.


Google added offline browsing to Chrome late last year. To enable it, go to:

    chrome://flags/#enable-offline-load-stale-cache
"When a page fails to load, if a stale copy of the page exists in the browser, a button will be presented to allow the user to load that stale copy."


It's still very difficult to verify or ensure that the sites you know you'll need will actually be cached when you need them. For instance, you may not need page XYZ of the documentation now, but you might later when you're coding on the beach, and there's no way to force Chrome to crawl it besides preemptively visiting the page - and even then it might be evicted for any number of reasons!

Batch Save Pocket [1] is my stopgap solution, but Pocket's not at all optimized for documentation.

[1] https://chrome.google.com/webstore/detail/batch-save-pocket/...


I haven't tried the app yet, but I think there's a difference.

With the browser, you browse a page, and it caches it. But you have to browse it first.

With an app, you could specify that you want the latest homepage, or the latest x page(s), and let the app poll for you.
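
Something like this rough Python sketch of the polling idea, say (the URL, interval, and file path are all made up):

    import time
    import urllib.request

    def poll_and_save(url, path, interval=3600):
        # Naive polling loop: refetch the page every `interval` seconds
        # and overwrite the local copy for offline reading later.
        while True:
            with urllib.request.urlopen(url) as resp:
                data = resp.read()
            with open(path, "wb") as f:
                f.write(data)
            time.sleep(interval)

    poll_and_save("http://example.com/", "homepage.html")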

So there's definitely room for innovation, I guess.


Maybe we're talking about different things, but I remember that in IE you could save a page and it would also save pages a certain number of links deep from that page.


I remember this being a massive deal in the marketing for MSIE 4, alongside the Active Desktop, also an idea more than a decade ahead of its time.


Doesn't HTML5 AppCache allow for some offline features now? But I agree, this should absolutely already be a feature that everyone plans for when they're making websites. Even when users have a connection, plenty of people are on slow mobile connections, and caching will prevent unnecessary page load times.


I would love to be able to cache websites to my server and access them in the event the site goes offline. I've tried a number of things like wget to "offline" a website and had mixed success. Does anyone know of a proven way to do something like this? (I'd even settle for no images, a la Google's cache, but pulling images and scripts would be a huge win.)

I'm younger, but I can already see link rot destroying my bookmarks. I now use (and pay for) pinboard.in; however, I'd like a way to do it myself. I've considered writing a Chrome plugin to send URLs I visit over to a process running on my server to archive them (with the ability to blacklist/whitelist domains), but I haven't found a way to do it that works reliably. (I'd also probably need to send a copy of my cookies for sites behind auth.)
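
The server half of that idea could be quite small; here's a rough Python sketch (the port, archive directory, and blacklist are invented, it assumes wget is installed, and a real version would need auth):

    import subprocess
    from urllib.parse import urlparse, parse_qs
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BLACKLIST = {"accounts.google.com"}  # domains never to archive

    class ArchiveHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expects requests like /archive?url=http://example.com/
            qs = parse_qs(urlparse(self.path).query)
            url = qs.get("url", [None])[0]
            if url and urlparse(url).hostname not in BLACKLIST:
                # Mirror the page plus the images/CSS/JS it needs.
                subprocess.Popen(["wget", "--page-requisites",
                                  "--convert-links", "-P", "archive", url])
                self.send_response(202)
            else:
                self.send_response(400)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8642), ArchiveHandler).serve_forever()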


> I've tried a number of things like wget to "offline" a website and had mixed success. Does anyone know of a proven way to do something like this?

What about httrack[0]? From the description in OpenBSD ports:

HTTrack is an easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.

Or you can use wget, for downloading a single page or for a recursive download. :)

[0]: http://www.httrack.com/
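
For the wget route specifically, the flags that tend to matter for a browsable offline mirror are roughly these (shown here via Python's subprocess so it's easy to script; the URL is just a placeholder):

    import subprocess

    # Equivalent to running wget from a shell; assumes wget is installed.
    subprocess.run([
        "wget",
        "--mirror",            # recursive download with timestamping
        "--convert-links",     # rewrite links to point at local copies
        "--page-requisites",   # also fetch images, CSS, and scripts
        "--adjust-extension",  # save pages with .html extensions
        "--no-parent",         # don't wander above the start directory
        "http://example.com/",
    ], check=True)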


How often does a decent website go offline?


What we consider "decent" today is not always "decent" tomorrow, and things like personal blogs go down all the time or change their URL structure. Also, not everyone has a community/family that will keep their work online after they are gone, and I don't want to lose content because someone's hosting lapsed after their death.

Looking back I wish I had archived some of the forums that I used as a kid as a number of them are just gone, no wayback machine, no cache, no archive, just gone.

Sidenote: I'd love to work on (or just use) a service that allows for community funding of both hosting and domain registration, so that you could add a widget to your site and have it stay online even after your death, as long as people donate. Maybe it could even make the site static if no one can pay, and use proceeds from other sites to float the cost. There's a chance you could die and your close friends/relatives would either not have the access (password/key) or the technical know-how to keep your site online, even if they had the funds to do so.


Started as a weekend project but took me almost two weeks. Please share any feedback.


Thank you. It's pretty sad that we've had to wait years for something that was available in nineties versions of Internet Explorer.


We didn't; I've been using Offline Browser[1] for quite a while now. This does seem more polished, though.

[1] https://play.google.com/store/apps/details?id=it.nikodroid.o...


The feature of being able to share a URL from my browser to Offline is pretty sweet.

I was irate that I had to type out URLs, until I thought to try that :)

Bug report: hitting the Back button takes me out of the browser, while the back arrow goes backwards in web history. I expected them to be the other way around (behaving more like a normal browser).


It's not a bug, it's a feature :) I just hate that I can't exit the browser with a back button, but I may be the only one.


I don't know if it's reasonable to ask this, but it would be cool to have an option to swap them around?

I guess I'll always be assuming it behaves like Firefox, so I'm tripping myself up a lot.

Either way, very cool app.


It should be easy to add that option, I hope it will be in the next version.


Just 2 weeks! I am very impressed. Great work.

First URL I typed in was news.ycombinator.com, which wasn't valid until I added the http://. I don't think most users would know to do that. Could you default to http:// or https:// when a scheme isn't specified?
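
Something along these lines, maybe (a Python-flavored sketch; the function name is made up):

    from urllib.parse import urlparse

    def normalize_url(url):
        # "news.ycombinator.com" has no scheme, so default to http://
        # rather than rejecting the input outright.
        if not urlparse(url).scheme:
            url = "http://" + url
        return url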


Makes sense, I'll probably include a preview of the website as well. Thanks for the feedback.


Thanks for this. I take the subway every day of the week, and unfortunately we don't have internet service in the subway tunnels. I've been hoping for an app like this for a long time. Will definitely download and try it!


Nice, thanks!


[deleted]


Wrong thread? I think you meant the tuxedo thread.


One nice feature would be a "follow the next page button" mode for following serials like web comics. Instead of starting from a home page and following all links to a given depth, you would give it the URLs for page 1 and page 2. It would search page 1 for the button that leads to page 2, and then it would find that same button on page 2 to get page 3, and so on. In other words, it would simulate starting at page one and repeatedly pressing the "next" button to read the whole serial.
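
A rough sketch of how that could work, in Python (the regex and URLs are hypothetical, and a real crawler would want politeness delays and proper HTML parsing):

    import re
    import urllib.parse
    import urllib.request

    def crawl_serial(start_url, next_href_pattern, max_pages=1000):
        # Yield (url, html) for each page, following the link matched
        # by next_href_pattern until it stops appearing or we loop.
        url, seen = start_url, set()
        while url and url not in seen and len(seen) < max_pages:
            seen.add(url)
            with urllib.request.urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
            yield url, html
            m = re.search(next_href_pattern, html)
            url = urllib.parse.urljoin(url, m.group(1)) if m else None

    # e.g. crawl_serial("http://example.com/comic/1",
    #                   r'<a class="next" href="([^"]+)"')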


Good idea, I'll consider adding it in a future version.


When I used to access the Internet via dial-up modem, paying for every second online, I'd always browse via WWWOFFLE[1], and then I could just return to where I had been after going offline. I think I remember putting something in a CHAP script to tell WWWOFFLE whether I was on- or offline.

[1] http://www.gedanken.org.uk/software/wwwoffle/


How does it compare to existing tools such as HTTrack?


Offline seems to be like HTTrack for Android (which is a great idea, actually).


I didn't know of HTTrack before; I'll check it out and let you know.


HTTrack is how I passed web design 101 way back in the day.


Seems like it doesn't handle non-responding links/webservers. Crash report sent.


Thanks for reporting. Does it work for other links?


Seems to be fine so far, but it doesn't seem to resume downloads after a stop/restart. Might be just me :)

Thanks for the dev work so far; it looks pretty good.


Microsoft's Spartan is meant to have something similar to this. Will apparently sync to your Windows 10 Mobile 'reading list' as well.

Sounds interesting.


Do you have any source?

I hope this idea will catch on and other browser vendors will implement this feature as well.


I read about it, but it seems that it will function like Pocket, i.e. saving individual webpages.


Doesn't the stock Android browser do this? At least I know it offers me a "save for offline reading" option.


Chrome for Android has a new, hidden, "Reading Mode" feature.

It only makes the page cleaner to read, it doesn't (yet) sync offline as far as I know.

http://lifehacker.com/enable-the-new-hidden-reader-mode-in-c...


I believe that's just one page. This does whole sites (or some specified subsets of them).


Can't find that option in my stock Android browser - what menu is it under?


Just the normal "menu" menu (presented when you press the menu button). I am running 4.1.1, so nothing recent.


Does anyone have a rec for similar software on iOS? Unfortunately my firm won't let me use Android.


Is this Free Software? If it is, I encourage you to make it available on F-Droid!


Very useful and simple to use :). +1 for using Material Design.



