Here is how one might adhere to the so-called UNIX philosophy.
Utility #1: 580-character shell script to generate HTTP (NB. printf is a built-in)
Utility #2: TCP client to send HTTP, e.g., netcat
Utility #3: (Optional) TLS proxy if encryption desired, e.g., stunnel^1
1. For more convenience, use a proxy that performs DNS name lookups. Alternatively, use a TLS-enabled client, e.g., openssl s_client.
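The 580-character script itself is not quoted in the comment; as a rough sketch of the idea (function name, headers, and defaults are all assumptions), the HTTP generator can be little more than a chain of printf calls:

```shell
# Hypothetical sketch of a "post"-style HTTP generator; the real
# 580-character script is not shown in the comment.
# Usage: post HOST PATH < body  -- writes an HTTP/1.1 POST request to stdout.
post() {
    host=$1 path=$2
    body=$(cat)    # request body from stdin; trailing newlines are stripped
    printf 'POST %s HTTP/1.1\r\n' "$path"
    printf 'Host: %s\r\n' "$host"
    printf 'Content-Type: application/x-www-form-urlencoded\r\n'
    printf 'Content-Length: %s\r\n' "${#body}"   # assumes single-byte characters
    printf 'Connection: keep-alive\r\n\r\n'      # keep-alive is what permits pipelining
    printf '%s' "$body"
}
```

Piping its output to the TCP client (utility #2) sends the request; because the connection stays open, further requests can follow on the same connection.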
Advantages over curl and similar programs: HTTP/1.1 pipelining
For the purpose of an example, the shell script will be called "post". To demonstrate pipelining POST requests, we can send multiple requests to DuckDuckGo over a single TCP connection. The TLS proxy is listening on 127.0.0.1:80.
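For utility #3, an stunnel client-mode service matching the example might be configured like this (a sketch; the service name is arbitrary and the upstream host/port are assumptions based on the DuckDuckGo example):

```ini
; Accept plaintext HTTP on 127.0.0.1:80 and wrap it in TLS to duckduckgo.com:443
[duckduckgo]
client = yes
accept = 127.0.0.1:80
connect = duckduckgo.com:443
```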
1. Put the queries in a file
2. Send the queries
3. Send the queries, save the result, then read the result

Based on personal experience as an end user, I find that using separate utilities is faster and more flexible than curl or the similar programs mentioned in this thread. For me, 1. storage space for programs, e.g., large scripting-language interpreters and/or other large binaries, is in short supply and 2. HTTP/1.1 pipelining is a must-have. Using separate, small utilities 1. conserves space and 2. lets me do pipelining easily. I write many single-purpose utilities for my own use, including one that replaces the "post" shell script in this comment.
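The steps of the example (put the queries in a file, send them, save and read the result) might look like the following sketch. gen_post is a stand-in for the "post" script, which is not reproduced in the comment; the file names are made up, and since nothing is listening locally here, the nc send is shown as a comment while the pipelined request stream is captured to a file for inspection:

```shell
# gen_post: stand-in for the "post" script; emits one HTTP/1.1 POST
# request for the form body given in $1.
gen_post() {
    printf 'POST /html HTTP/1.1\r\nHost: duckduckgo.com\r\n'
    printf 'Content-Type: application/x-www-form-urlencoded\r\n'
    printf 'Content-Length: %s\r\nConnection: keep-alive\r\n\r\n%s' "${#1}" "$1"
}

# 1. Put the queries in a file, one per line (file name is hypothetical)
printf '%s\n' 'q=openbsd' 'q=netbsd' 'q=freebsd' > queries.txt

# 2. Send the queries: all requests travel back-to-back over a single TCP
#    connection to the TLS proxy, i.e., HTTP/1.1 pipelining:
#      while read -r q; do gen_post "$q"; done < queries.txt | nc 127.0.0.1 80
#    (captured to a file here so the pipelined stream can be inspected)
while read -r q; do gen_post "$q"; done < queries.txt > stream.txt

# 3. Send the queries, save the result, then read the result:
#      ... | nc 127.0.0.1 80 > results.txt; less results.txt
```

Note that the whole stream is generated first and written in one go; the TCP client never has to wait for a response before the next request goes out, which is the point of pipelining.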