
The "new" HTTP is clearly targeted at the "approved browser".

For example, this alleged "head-of-line blocking problem" that HTTP/2 purportedly "solves" was never a problem for HTTP outside of one specific kind of program: the graphical web browser, the type of client that tries to pull resources from different domains for a single website. Not all programs that use HTTP need to do that.

For instance I have been using HTTP/1.1 pipelining outside the browser for fast, reliable information retrieval for close to 20 years. It has always been supported by HTTP servers and it works great with the simple clients I use. I still rely on HTTP/1.1 pipelining today, on a daily basis. Never had a problem.

There are uses for pipelining besides the ones envisioned by "tech" companies, web developers and their advertiser customers.



If early hints breaks your proxy, it’s likely your proxy doesn’t handle 1xx status codes correctly. Could you tell me which proxy it is (privately if you think it necessary)? I’d like to chase the bug with them.
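For context, a 1xx response is only interim: the server sends it and then, later on the same connection, sends the real final status. A 103 Early Hints exchange looks roughly like this on the wire (headers are illustrative), and a proxy needs to forward or discard the 103 and keep reading until the final status arrives, rather than treating the 103 as the response:

    HTTP/1.1 103 Early Hints
    Link: </style.css>; rel=preload; as=style

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1234

    ...body...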


The big problem with pipelining in HTTP/1.x is that a response can break the pipeline part way through and there is no way to know what the server processed. A response might, for example, arrive mid-pipeline with Connection: close and that's that: did any subsequent request get processed? Who knows.
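A sketch of the failure mode, with made-up paths; three requests go out back to back on one connection and the second response carries Connection: close:

    client sends:  GET /a   GET /b   GET /c      (pipelined, one TCP connection)
    server sends:  200 OK for /a
                   200 OK for /b, with Connection: close
                   (connection closes; was /c processed? no way to tell)

If /c was a non-idempotent request, blindly resending it on a new connection is not safe either.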


HTTP/2 seems to be designed for (graphical, interactive) webpages.

The maintainer of a popular webserver has suggested HTTP/2 is slower than HTTP/1.1 for file download.

https://stackoverflow.com/questions/44019565/http-2-file-dow...

As I stated, I use HTTP/1.1 pipelining every day. I use it for a variety of information retrieval tasks, even retrieving bulk DNS data. To give an arbitrary example, sometimes I will download a website's sitemaps. This usually involves downloading a cascade of XML files. For example, there might be a main XML file called "index.xml". This file then lists hundreds more sitemap XML files, e.g., archive-2002-1.xml, archive-2002-2.xml, containing every content URL on the website from some prior year all the way up to the present day. Using a real world example, index.xml contains 246 URLs. Using HTTP/1.1 pipelining I can retrieve all of them into a single file using a single TCP connection. Then I retrieve batches of the URLs contained in that file, again over a single TCP connection. Many websites allow thousands of HTTP requests to be HTTP/1.1-pipelined over a single TCP connection, but I usually keep the batch size at around 500-1000 max. Of course I want the responses in the same order as the requests.

The process looks something like this

    ftp -4o 1 https://[domainname]/sitemaps/index.xml
    yy030 < 1|(ka;nc0) > 2
    yy030 < 2|wc -l

    1337855
1337855 is the number of URLs for [domainname]. Content URLs, not Javascript, CSS or other garbage.

yy030 is a C program that filters URLs from standard input

ka is a shell alias that sets an environment variable read by the yy025 program to indicate an HTTP header; in this case the "Connection:" header is set to "keep-alive" rather than "close" (ka- sets it back to "close")
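Purely to illustrate the mechanism, such a pair of aliases could look like the following; the variable name HTTP_CONN is a made-up placeholder, not necessarily what yy025 actually reads:

    alias ka='export HTTP_CONN=keep-alive'   # requests get Connection: keep-alive
    alias ka-='export HTTP_CONN=close'       # back to Connection: close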

nc0 is a one line shell script

    yy025|nc -vv h1b 80|yy045
yy025 is a C program that accepts URLs, e.g., dozens to hundreds to thousands of URLs, on stdin and outputs customised HTTP

h1b is a HOSTS file entry containing the address of a localhost-bound forward TLS proxy

yy045 is a C program that removes chunked transfer encoding from standard input
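For anyone wanting to try the idea without these tools, here is a minimal sketch of HTTP/1.1 pipelining using nothing but printf and nc, with example.com and the two paths standing in for real values; it skips the TLS proxy and does not de-chunk the responses:

    {
      printf 'GET /a.xml HTTP/1.1\r\nHost: example.com\r\n\r\n'
      printf 'GET /b.xml HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n'
    } | nc example.com 80 > responses

Both requests leave on the same TCP connection before any response has come back; the server answers them in order and the final Connection: close ends the session. Depending on the nc variant, a flag such as -q or -N may be needed so it exits cleanly after the server closes.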

To verify the download, I can look at the HTTP headers in file "2". I can also look at the log from the TLS proxy. I have it configured to log all HTTP requests and responses.
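Assuming the status lines survive into file "2" as described, a quick sanity check is to count them and compare against the number of requests sent:

    grep -c '^HTTP/1\.1 ' 2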

Is this a job for HTTP/2? It does not seem like it.

This type of pipelining using only a single TCP connection is not possible using curl or libcurl. Nor is it possible using nghttp. Look around the web and one will see people opening up dozens, maybe hundreds of TCP connections and running jobs in parallel, trying to improve speed, and often getting banned. As with the comment from the Jetty maintainer, I suspect using HTTP/2 would actually be slower for this type of transfer. It is overkill.

IMHO, HTTP, i.e., in the general sense, is not just for requesting webpages and resources for webpages.

I find HTTP/1.1 to be very useful. It is certainly not just for requesting webpages full of JS, CSS, images and the like. That is only one way I might use it. Perhaps HTTP/2 is the better choice for webpages. TBH, if using a "modern" graphical browser, I would be inclined to let it use HTTP/2. Most of the time I am not using a graphical browser.


This comment deserves a post on its own, so you can explain the naming scheme behind it.


One of the many programmer memes is something along the lines of "naming is difficult." Yet programmers, individuals who are often obsessed with numbers, insist on trying to do it anyway. The results speak for themselves. This extends beyond programs. The so-called "tech" industry has produced some of the most absurd, non-descriptive business names in the history of the world.

I decided to try numbering the programs I write instead of naming them. I often use a prefix that can provide a hint.^1 For example, the yy prefix indicates it was created with flex and the nc in nc0 indicates it is a "wrapper script" for nc. If the program is one I use frequently, then I have no trouble remembering its number. In the event I forget a program number, I have a small text file that lists each yy program along with a short description of less than 35 chars.

1. But not always. I have some scripts that I use daily that are just a number. I also have a series of scripts that begin with "[", where the script [000 outputs a descriptive list of the scripts, [001, [002, etc. I am constantly experimenting, looking for easier, more pleasing short strings to type.

Each source file for a yy program is just a single .l file with a 3-char filename like 025.l, so searching through source code can be as simple as

     grep whatever dir/???.l
If I put descriptions in C comments at the top of each .l file, I can do something like

     head -5 dir/???.l 
Aesthetically, I like having a directory full of files with filenames that follow a consistent pattern and are of equal length. Look at the source code for k, ngn-k or kerf. When it comes to programming, IMO, smaller is better.


They are simply constructing GET request headers "by hand" based on some XML file downloaded earlier and then sending that list of GETs via `nc`. The example is just overly confusing and uses files named 1 and 2.

I voted up because that is indeed neat tho.



