
You lose some benefits of features already implemented by existing HTTP clients (caching, redirection, authorization and authentication, cross-origin protections, understanding the nature of the error to know that this request has failed and you need to try another one...).

It's certainly not comprehensive, but it's right there and it works.

Moving to your own solution means that you have to reimplement all of this in every client.



> understanding the nature of the error to know that this request has failed and you need to try another one...

Please elaborate. In my experience, most HTTP client libraries do not automatically retry any requests, and thank goodness for that, since they don't, and can't, know whether such retries are safe or even needed.
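With curl, for instance, retries are strictly opt-in; a rough sketch (the URL is made up), where the caller decides that retrying is safe:

    # default is a single attempt; --retry only kicks in because we asked for it
    curl --fail --retry 3 --retry-connrefused https://example.com/health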

> redirection

An example of a service where, at the higher business-logic level, it makes sense to force the underlying HTTP transport to emit a 301/302 response would be appreciated. In my experience, this is usually handled in the load-balancing proxy in front of the actual service, so it's similar to QoS and network management: the application doesn't care about it, it just uses TCP.


They don't retry on errors, but they do know it is an error. E.g., imagine a shell script using curl or wget to try multiple URLs as a health check (say, across different round-robin IPs). Without these "generic" HTTP tools knowing what counts as a "failure", you would need to implement custom response parsing for every case like this instead of relying on the defined "error" and "success" behaviour.
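A rough sketch of what that looks like (the IPs are made up): curl's --fail turns HTTP error statuses into a non-zero exit code, so the script never parses the response body.

    #!/bin/sh
    # try each round-robin backend until one answers with a 2xx
    for url in https://10.0.0.1/health https://10.0.0.2/health; do
      if curl --fail --silent --max-time 5 "$url" > /dev/null; then
        echo "healthy: $url"
        exit 0
      fi
    done
    echo "all backends failed" >&2
    exit 1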

The same holds true if you are using any programming library: there is a plethora of handlers for HTTP errors.

As for redirection, a common example is offering downloads through S3 using pre-signed URLs: you share a URL on your own domain, but after auth you redirect to a pre-signed S3 URL for the actual download or upload.
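Sketched with curl (the endpoint and token are made up): the client only has to follow redirects, and never needs to know the pre-signed URL exists.

    # server authenticates the request, then answers 302 with a
    # Location header pointing at the pre-signed S3 URL
    # -L follows that redirect; --fail still surfaces errors from either hop
    curl --fail -L -H "Authorization: Bearer $TOKEN" \
         -o report.pdf https://example.com/files/report.pdf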



