As someone who's just written about the "small web" [1], this warms my heart. The lite version is probably a little too bare-bones for most people's tastes, but it sure is tiny -- great for people on very slow or flaky connections. Some numbers:
HTML homepage transfers 34KB (94KB uncompressed) over 6 requests. HTML search results page transfers 133KB (248KB uncompressed) over 31 requests.
Lite homepage transfers 13KB (11KB uncompressed, ha) over 4 requests. Lite search results page transfers 21KB (43KB uncompressed) over 5 requests.
In all cases, all requests are to *.duckduckgo.com, which was very good to see from a privacy perspective. Nice work, DDG!
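If you want a rough single-request version of those numbers from a shell, curl can report the downloaded body size (the figures above presumably come from browser dev tools, which also count subresources):
$ # Lite homepage, requesting compression like a browser would
$ curl -so /dev/null --compressed -w '%{size_download} bytes downloaded\n' https://lite.duckduckgo.com/lite/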
Note that this enables any bang searches as well, though you'll need to single quote these to avoid attempted history expansion.
lite is also my default w3m search bookmark entry.
So:
ddg '!w foo' # Wikipedia article on foo
ddg '!dict foo' # Dictionary search on foo
ddg '!etym foo' # Etymology Dictionary search on foo
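(The ddg helper itself isn't shown here; a minimal sketch, assuming DuckDuckGo Lite as the endpoint and w3m as the browser, might look like:)
ddg() {
    # crude plus-encoding of spaces; a proper helper would URL-encode fully
    q=$(printf '%s' "$*" | sed 's/ /+/g')
    w3m "https://lite.duckduckgo.com/lite/?q=$q"
}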
Note that if the endpoint itself relies on JS for local search, the bang won't be successful (though DDG will do its bit). Reddit and HN/Algolia, I'm looking at you.
I happened to be running some quick lookups a few days ago while a friend was watching, and they 1) wondered how I was doing that and 2) if they could have a similar feature. Power of the shell.
Similarly, in Firefox (and maybe other browsers) you can add keywords to bookmarks, meaning that the browser will open that bookmark when you first type the keyword. It will also substitute anything else you type for the %s in the bookmarked URL, if it has one. So, as an alternative and more reliable way to search MDN, I bookmarked this:
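https://developer.mozilla.org/en-US/search?q=%s
That's MDN's search endpoint with %s as the query placeholder; give it a keyword of, say, mdn, and typing "mdn flexbox" in the address bar searches MDN directly.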
The benefit of bookmarks/keywords is flexibility and not having to wait for DDG to add (or fix) a given bang search.
The downside is that you're speaking your own language, and unless someone adopts your specific keywords, you can't tell them to, say, "bang dict" (if you can get away with saying that in the context).
This is a fundamental distinction between any private vs. shared language.
(People are very uncomfortable when dropped into my computing environment --- GUI, shells, editors, browsers, etc. They've all acquired several decades of personalisation. Works for me.)
$ alias d='sr duckduckgo -text -ducky'
$ d '!man surfraw'
[www-browser opens http://manpages.org/surfraw]
[…]
DESCRIPTION
Surfraw provides a fast unix command line interface to a variety of popular WWW search engines and other artifacts of power. It reclaims google, altavista, dejanews, freshmeat, research index, slashdot and many others from the false-prophet, pox-infested heathen lands of html-forms, placing these wonders where they belong, deep in unix heartland, as god loving extensions to the shell.
[…]
Edit: even better, dict(1) (or GNU dico):
$ dict foo # same text as !dict’s http://dict.org/bin/Dict?Form=Dict2&Database=*&Query=foo
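And if the all-databases dump is too noisy, dict can target a single one (standard dict(1) flags, assuming the default dict.org server):
$ dict -D         # list the databases the server offers
$ dict -d wn foo  # query WordNet only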
These are great, and I'm glad to be able to use them without direct contact with "Big Tech" and the tracking bloat that entails. But as I understand it, DuckDuckGo is more or less a glorified (albeit relatively glorious and pleasant) proxy to Big Tech's Bing. Well, more than less, because it does add some great conveniences: the (IIUC formerly open source) Instant Answers¹, !bangs, some fraction of results from its own crawler, and likely some from Yandex. Still, the largest fraction of its core service seems dependent on Bing. Not that that's its fault, or that there are attractive alternatives.
Yeah, I wonder what their long-term plan is there. Surely to get substantially better, they'll have to move off Bing at some point (or buy Bing off Microsoft!). In any case, as long as DDG is a "privacy-stripping proxy" (which I believe it is), it's just using Bing technology as a search API, which doesn't seem too problematic.
I agree. Fortunately, to move off Bing doesn’t have to be a sudden flip of a binary switch.
> DuckDuckGo gets its results from over four hundred sources. These include hundreds of vertical sources delivering niche Instant Answers, DuckDuckBot (our crawler) and crowd-sourced sites (like Wikipedia, stored in our answer indexes). We also of course have more traditional links in the search results, which we also source from multiple partners, though most commonly from Bing (and none from Google). — https://help.duckduckgo.com/duckduckgo-help-pages/results/so...
I find their instant answers fairly useful, often enough that I don't use the other results at all. Unfortunately for them, it's also often enough that I use some of the sites those answers are sourced from directly, in particular my browsers' built-in Wikipedia searches.
DuckDuckBot I can't judge, because I don't know which results come from it (previous comments on HN seem to show that the amount of difference between DDG's and Bing's results varies between search terms¹), but it seems that non-Bing links are a thing, and DDG can adjust what fraction of the page they take as the gap between it and the bigger crawlers narrows or widens.
Among other characteristics, I'm finding DDG works when searched via a Tor proxy, and even offers an Onion URL (https://3g2upl4pq6kufc4m.onion/).
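For the curious, the onion address answers from a plain shell too; a sketch assuming a local Tor daemon listening on the default SOCKS port 9050:
$ curl -sI --socks5-hostname 127.0.0.1:9050 https://3g2upl4pq6kufc4m.onion/ | head -n 1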
Google 1) fails to work without JS for me much of the time and 2) throws up endless ReCAPTCHAs, so I generally don't bother (not just for GWS, but Scholar, Ngram viewer, and other actually-useful tools). My preferred response (https://toot.cat/@dredmorbius/104371588129861216) produces very faded joy over Tor.
DuckDuckGo's classic link results are verbatim from Bing (or sometimes Yandex instead). DuckDuckBot is only used to grab favicons and a subset of its rich results (instant answers, zero-click info).
Don't take my word for it; you can compare DDG and Bing results for esoteric queries side-by-side. The order of the results may vary since search results aren't deterministic, but you'll find them to be otherwise identical. You can also ask staff in help channels about the details of where link results are sourced from.
I've been using Bing, directly, more and more in the past few years, when Google either gives no useful results at all, or blocks me for trying to use it with more precise queries. Overall its breadth is still lacking, but occasionally it finds useful things which Google doesn't (or won't?) find. It also doesn't mangle URLs (it only truncates them when they're too long).
It always annoys me to lazily type the short "ddg.gg" in and wait for it to redirect first to duckduckgo.com, then to https://lite.duckduckgo.com/lite/, but once it's bookmarked it's easier to use than SearX.
1. My site is also smol, but it's brand new and probably doesn't warrant a mention here. Respect for the smol web, though!
2. > Because most people only view one or two articles on my site, I include my CSS inline. With HTTP/2, this doesn’t make much difference, but Lighthouse showed around 200ms with inline CSS, 300ms with external CSS.
With respect, this seems to be the frontend consensus and it... just doesn't make sense to me, unless you expect the majority of your traffic to never hit a cache header on that external CSS. That 100ms perf hit (which is probably a warning sign that something's wrong with your assets or server config anyway) should be a one-time affair, and not repeating that payload over and over surely makes up for it in the sub-200ms responses after.
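A quick way to sanity-check that assumption is to look at the cache headers the CSS is actually served with (hostname and path here are placeholders):
$ curl -sI https://example.com/assets/site.css | grep -i '^cache-control'
Anything short of a long max-age (or immutable) means repeat visitors really are re-downloading the stylesheet every time.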
The lite version has a broken image at the bottom. Other than that, it's very reminiscent of the AltaVista home page back in the day. Kudos to DDG for having such a simple non-JavaScript version. These days, non-JavaScript web pages are on the rise; all web pages should have a lightweight non-JavaScript version.
"HTML" version shows no images, just text and favicons. The total size of visible text content shown on the page is about 4KB (just did a count of characters on the page for "steve jobs" query). DDG shows those 4KB using 248KB of data.
1. That is not _that_ small. It's only small compared to the rest of the web, but not in absolute numbers, or in comparison to useful content shown.
2. The content-to-data ratio comes to 1:62, i.e. for every character shown on screen you are transferring 62 bytes of stuff. For context, the Wikipedia page for Steve Jobs, which also includes multiple images, is 1.08MB with 128KB of content, for a content-to-data ratio of about 1:9. So DuckDuckGo could in theory do ~7x better than this (62/9 ≈ 7).
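That character count is easy enough to reproduce from a shell, e.g. with w3m's text dump (URL assumed from the query above):
$ w3m -dump 'https://html.duckduckgo.com/html/?q=steve+jobs' | wc -m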
Up until not long ago, Google search result pages worked well without JS, and if you sent a suitably old UA header, you'd get the "original" version without mangled URLs or any of the other bloat. Then in 2019 or so, in what I consider to be an extremely hostile move, they started redirecting (using meta tags) to a horrible dumbed-down mobile-ish version and using styles-removed-by-JS to "hide" the actual content of the "full" version (which has the "modern" mangled URLs) if you managed to reach it anyway, so all you'd see was a blank page. It's almost like they were deliberately trying to make something that only worked in the latest version of Chrome or whatever few other "modern" browsers exist today, and decided to force non-JS users to a worse experience than they had had for the past decade or more. Nothing a filtering proxy can't fix (and I brought back true URLs at the same time), but I was absolutely incensed when that happened.
I don't think a search engine should ever require JS to be usable. The existence of search engines[1] predates the existence of JS, and of course the HTML mechanism of form submission and displaying a list of links predates both of those.
Looking at the network inspector, "lite" version search results load in about 900ms-1.4s for non-cached queries. That is fast only compared to the rest of the "big web" maybe, but not _that_ fast in absolute terms (and for context, Google's full search results usually load in about that much). The bottleneck for DuckDuckGo is probably the Bing API (usually a 600-700ms response), which there's no way around.
Rightdao is about 70ms for me; could be they're in the Bay Area and not world-wide. Impressive nevertheless: nothing else on the web loads in 70ms for me. That is 10x faster than DuckDuckGo Lite.
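If you want to compare engines yourself without any rendering in the mix, curl's timing variables give a rough server+network number (query arbitrary):
$ curl -so /dev/null -w 'total: %{time_total}s\n' 'https://lite.duckduckgo.com/lite/?q=test'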
Just tested the pure HTML version and was pleased to find out that it does not force a minimum browser width upon the user. I usually set my desktop (1080p) to use a 3-column layout with i3 and am always bothered by the fact that I cannot escape from horizontal scrolling on every search I run on DDG.
I needed to add the HTML version as a new search engine option in the browser options to have it running as default, but hey, it works! Horizontal scrolling is now a thing of the past.
> not force a minimum browser width … 1080p … 3-column layout … cannot escape from horizontal scrolling
I have my second monitor in 1920×1080 portrait and sometimes that is an irritation at 1080px wide long before getting to the 640px your columns have. To pick one example, in Google search pages you lose over half the right-hand column without scrolling (though at least the main results are fully visible). A lot of layouts seem to assume having around 1280 pixels to play with (less a little for scroll bars).
I've found zooming out a little works fine usually, and modern browsers remember the setting, but that isn't perfect. If I unzoom on the main display it does so for tabs open on that site on the second too (to use the Google example again if I open maps or an image search on the main screen I probably want it at 100%) and some sites seem to actively resist being zoomed.
Indeed, zooming out is a trick I use for a couple websites but even so it isn't perfect. I often face the situation where 80% size is still too big or brings the dreadful horizontal scroll but 70% ends up being too small and very hard to read. These and layouts that bring colour combinations that break in dark mode (browser native or dark reader) are my current bane.
I compared all 3 searching "venera" and prefer the JS one (sorry?), but I could see this being very useful for their niche audience. I worry more about tracking than about JS in browsers, since it runs super fast on my devices.
This is awesome... I was always pushing for non-JS versions of products and was always met with the dumbest reasons... 'the only reason someone wouldn't run JS is if they're hacking'
To be fair, disabling JS seems to be an extremely fringe thing to do and using resources to accommodate it should be a hard sell. I've only ever actually heard of people doing it here on HN.
Same, though it looks like maybe it doesn't actually return the same results as the full site, which is really weird. All 3 front-end clients should show identical results.
The HTML version recently gets a 403 Forbidden when searched from Firefox's context menu. It appears to be caused by Firefox adding an 'Origin: null' header to the request. Hope they can solve this bug one day.
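If that diagnosis is right, it should be reproducible outside Firefox as well; a quick check (query arbitrary):
$ curl -s -o /dev/null -w '%{http_code}\n' -H 'Origin: null' 'https://html.duckduckgo.com/html/?q=test'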
You can actually achieve it without CSS (in Chrome): just put <meta name="color-scheme" content="light dark"> in <head>. It will also switch the vanilla scrollbar to the dark theme.
I don't get why these would be preferable unless you are an HN elitist who hates JavaScript for no reason other than its own sake.
Normal DuckDuckGo loads just as fast for me as the HTML variant, which doesn't provide my location (I had to change it manually) and doesn't have image search, maps, news, etc.
I don't understand why anyone would use these versions over the normal DDG experience. It's not like the normal DDG uses ridiculous amounts of JavaScript anyway, and the normal DDG experience is pretty much just as fast. I cannot really notice any difference anyway.
The words I'd pick are 'careless' or 'sloppy' or 'laughable' use of javascript. The speed of carefully-crafted JS in today's browsers is amazing. If the mule collapses hauling that 20-ton-wagon of borax, there's the culprit.
This seems nice in theory but reaches the wrong audience.
A couple of months ago, DDG decided to block Tor Browser users, so I would guess that most people who were concerned about their privacy have already moved on to the competition.
Though it's nice to see better support for slow 2G networks, even when it's not for the sake of privacy as they claim.
"Since a couple months ago DDG decided to block TOR Browser users,"
Do you have a source for that claim? A quick search returned nothing.
Edit: I downloaded the (latest version of Tor Browser)-1 (10.0.12) and DDG is on the start page as the default search. No problems searching for things either.
Try using a mobile carrier and Tor on Android. You'll always get this (and I quote):
"We've detected that you have connected over Tor. There appears to be an issue with the Tor Exit Node you are currently using. Please recreate your Tor circuit or restart your Tor browser in order to fix this. If this error persists, please let us know: error-lite-tor@duckduckgo.com"
And no, rotating the exit node or changing the identity won't help either.
Even this works for me. Must be a US problem. Regardless, I fail to see how your first hop onto the Internet would affect DDG and no one else, and therefore be a DDG problem. Your carrier must be mangling the data somehow.
How does this equate to DDG saying that they're blocking Tor users?
> A couple of months ago, DDG decided to block Tor Browser users, so I would guess that most people who were concerned about their privacy have already moved on to the competition.
I've used the clearnet DDG site for months over Tor and the only problem I've experienced is that sometimes DDG decides that my exit node isn't acceptable for some reason. This is very rare though (happens about once per month) and the error message makes it very clear that I should just use another exit node (unlike many other sites where I get cryptic error messages or a Google captcha).
Plus as others mentioned, there's an onion service which works as well.
If you want to add this as a search bar option in Firefox, you have to visit one of these pages first and then a green plus will appear in the search bar menu to (for example) add DuckDuckGo Lite. Then in settings you can also make it your new default.
[When did Firefox remove the option to just type in a URL for this?]
I honestly don't care if they use JavaScript (responsibly, which I think they do), but why not ape the user research the other search engines have done and use real pagination? It's not like Google, Bing, etc. don't know that you can append DOM nodes.
In general, are there tutorials or guidelines on how to strip away unnecessary JS components? As in, how do I determine which components are unused from the source code of a website? I suspect I'll have to play with a browser's console, etc.
Is there any easy way to determine that? I’m not aware of one. The presence of JavaScript is not always enough to determine that the page is not usable without JavaScript.
You could check to see if there is content in the raw received HTML, as well as indexing the content after evaluating JavaScript. If the pre-JS page has content, you mark it as no-JS friendly; if it only gets content after running JS, then you mark it as JS-required.
If resources were no concern, you could rank the page twice and have separate JS and no-JS indexes. Of course this isn't perfect, as a lot of the ranking comes from pages referring to it, but I think it's fine. If the page has a good chunk of content, it likely still makes sense to include it in no-JS searches. I don't think people will be annoyed that they can't do every action on the page; the real annoyance is getting a blank screen or the main content being otherwise unavailable.
So at the end of the day, I think just having a js-required flag for each document is sufficient: set that flag if the "main content" (as determined by heuristics) is unavailable before running JS. I think that would be good enough to satisfy most users with JS disabled, without too much expense.
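As a toy version of that check, here's roughly what the pre-JS side could look like from a shell, using w3m as the "no JS" renderer (the 500-character threshold is an arbitrary guess, and a real crawler would compare against a post-JS render as described above):
#!/bin/sh
# Mark a page js-required if almost no text survives rendering without JS.
url="$1"
chars=$(w3m -dump "$url" | tr -d '[:space:]' | wc -c)
if [ "$chars" -lt 500 ]; then
    echo "js-required: $url ($chars chars of pre-JS text)"
else
    echo "no-js friendly: $url ($chars chars of pre-JS text)"
fi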
It is so fast and clean, I just set it as my duck keyworded search. I'm not on a search engine for the looks, but for the results. The faster I get them and the fewer things there are to distract the eye, the better for me.
A guy at work made a no-frills search results viewer that was just a standard Windows UI ListView (table) window with one result per row. Best search page I ever saw, because you could see 40 results at a time instead of 10.
[1] https://benhoyt.com/writings/the-small-web-is-beautiful/