TLSv1.2 (IN), TLS alert, close notify (256):

This message notifies the recipient that the sender will not send any more messages on this connection. Note that as of TLS 1.1, failure to properly close a connection no longer requires that a session not be resumed. This is a change from TLS 1.0 to conform with widespread implementation practice.

Expire in 0 ms for 1 (transfer 0x559d3dcc1bb0)

This is actually a debug message that was forgotten in the code and removed in February 2019.

curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

Details :

This is often caused by an improperly set HTTP / HTTPS proxy value.

Solution :

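A reasonable first step, assuming the culprit is a proxy variable exported in the environment (example.com stands in for the real URL) : list the proxy variables, then retry once with them removed :

```shell
# List the proxy-related environment variables cURL honours
env | grep -i '_proxy' || echo 'no proxy variables set'

# Retry once with the proxy variables removed from the environment
# (example.com is a placeholder for the real URL)
env -u http_proxy -u https_proxy -u HTTPS_PROXY curl -sI https://example.com
```

If the request succeeds without the variables, fix (or unset) them permanently in your shell profile.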

What does additional stuff not fine transfer.c:1037: 0 0 mean ?

Situation :

A cURL query exits with code 141 and prints the following message :
* additional stuff not fine transfer.c:1037: 0 0
* SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
* Closing connection #0
What does that mean ?
This errno 104 is ECONNRESET (connection reset by peer) : a TCP-level error surfacing through the SSL layer, unrelated to cURL itself.

Details :

From Daniel Stenberg, the author of cURL :

About the 141 exit code (source) :

It usually means curl crashed (segfaulted) or similar.

About additional stuff not fine transfer.c:1037: 0 0 (source) :

> Under what circumstances will cURL print the following message:
> * additional stuff not fine transfer.c:1037: 0 0

That's a debug output I put there once to aid me debugging a transfer case I
had problems with and I then left it there. It is only present in debug builds
and basically if you need to ask about it, the message is not for you...

Solution :

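Another possibility, not mentioned above : in most shells an exit status of 128 + N means the process was killed by signal N, so 141 is death by SIGPIPE (signal 13), which typically happens when the reader side of a pipe exits before cURL has finished writing :

```shell
# 141 = 128 + 13 : the shell's encoding for "killed by SIGPIPE"
echo $((128 + 13))   # → 141

# Typical trigger : the right-hand side of the pipe exits early,
# so cURL gets SIGPIPE while still writing (URL is a placeholder) :
# curl -s https://example.com/bigfile | head -c 100 > /dev/null
# echo "${PIPESTATUS[0]}"   # bash : often reports 141 here
```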

curl: (56) Recv failure: Connection reset by peer

Failure receiving network data : the remote peer reset the TCP connection while cURL was reading from it.

curl: (7) Unsupported proxy scheme for ''

cURL versions before 7.52.0 don't support HTTPS proxies, so you'll get this message to make you aware of that, so that you don't think you're actually using an HTTPS proxy. You can work around this by resetting the https_proxy environment variable, then retrying your cURL request :

https_proxy=; curl


How to check whether HTTP keep-alive is enabled ?


  • curl -Iv 2>&1 | grep -i 'connection #0'
  • curl -Iv -H 'Host:' 2>&1 | grep -i 'connection #0'


If the output is :
* Connection #0 to host left intact
* Closing connection #0
then HTTP keep-alive is enabled.
If the output is only :
* Closing connection #0
then HTTP keep-alive is not enabled.


Alternatively, make two requests in a single invocation : a re-used connection proves keep-alive :
curl -Iv -H 'Host:' --next 2>&1 | grep -i 'connection #0'
* Connection #0 to host left intact
* Re-using existing connection! (#0) with
* Connection #0 to host left intact

curl: (52) Empty reply from server

Many reasons can cause this reply.

Reason 1 : you're doing it wrong :

curl: (52) Empty reply from server
curl: (35) Unknown SSL protocol error in connection to
When you want to use HTTPS (i.e. HTTP over SSL / TLS), you'll have to specify it in the scheme part of the URL, not with a trailing :443 (source) :
  • DO : curl
  • DON'T : curl
This is why commands using :443 fail : they're sending plain HTTP requests to port 443 of the remote host, which actually expects HTTPS requests.

Let's have a look at this in more detail :

One of the causes of this curl: (52) Empty reply from server error message would then be trying to use HTTP on a host only serving HTTPS. Let's check it :
curl: (7) couldn't connect to host
(UNKNOWN) [] 80 (http) : Connection timed out
(UNKNOWN) [] 443 (https) open
Port 80 times out while port 443 is open : confirmed !
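The port check above is netcat-style output ; a sketch to reproduce it, with example.com standing in for the real host :

```shell
# -z : scan without sending data, -v : verbose, -w 3 : 3-second timeout
nc -zv -w 3 example.com 80
nc -zv -w 3 example.com 443
```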

Reason 2 : nobody's listening :

Once you've confirmed you're using cURL the right way, the reason why nobody replies is probably that nobody is listening (or you're not talking to the right host). To debug this, make sure :
  1. you get a similar empty response (no data sent from server, or something like that) with another tool such as wget or the Web Developer Firefox extension
  2. DNS resolution works as expected (and you're actually sending requests to the right server)
  3. you're contacting the right virtualhost (if any). Have a look at the access.log / error.log
  4. some process is actually listening on the specified IP+port
  5. the network configuration (route, firewall, forward + reverse proxy, NAT, ...) actually leads requests to this listening socket.
    Don't hesitate to sketch the desired vs observed setup : the bug is on one of those arrows.
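A minimal sketch of checks 2 and 4, assuming a Linux box, example.com as a placeholder host and 443 as the port :

```shell
# 2. DNS resolution : which address would cURL actually contact ?
getent hosts example.com

# 4. On the server : is anything listening on the expected port ?
ss -tln | grep ':443 '
```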


Usage :

cURL is a tool to transfer data to/from a server, using one of the supported protocols (HTTP, HTTPS, FTP, FTPS, GOPHER, DICT, TELNET, LDAP or FILE). It is designed to work without user interaction.

Play with HTTP/1.0 and HTTP/1.1 :

Use telnet.
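telnet is interactive ; for a scriptable equivalent, the raw requests can be piped through nc (example.com is a placeholder host). HTTP/1.0 needs no Host header and closes the connection after the response ; HTTP/1.1 requires Host and keeps the connection open unless asked otherwise :

```shell
# HTTP/1.0 : no Host header needed, connection closes after the response
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc example.com 80

# HTTP/1.1 : Host is mandatory ; ask the server to close explicitly
printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
```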

Flags :

Flag Protocol Usage
-A userAgent --user-agent userAgent Specify the user agent. This can also be done with : -H "User-Agent: userAgent"
-b file --cookie file Read saved cookies from file and send them to the HTTP server.
-c file --cookie-jar file Write received cookie(s) to file
-C --continue-at offset Continue / resume a file transfer at the given position : offset bytes will be skipped from the beginning of the source file.
Use -C - to tell cURL to automatically find out where/how to resume the transfer.
--compressed Request a compressed response using one of the algorithms libcurl supports, and return the uncompressed document.
This is equivalent to : -H "Accept-Encoding: gzip, deflate".
A server that does not support the requested compression method(s) will ignore the Accept-Encoding request header and simply reply with the uncompressed contents.
-D file --dump-header file Dump protocol headers to file
-d data --data data HTTP Send data in a POST request to the HTTP server, emulating a user filling in an HTML form and pressing the submit button. You MUST NOT urlencode this data.
-F formFieldName=formFieldValue HTTP Emulate a filled-in form in which a user has pressed the submit button. This causes cURL to POST data using the Content-Type multipart/form-data (allowing to upload binary data) according to RFC 1867. Use -F multiple times to submit several fields.
--ftp-pasv FTP Use the FTP passive mode.
--ftp-ssl FTP Use SSL/TLS for the FTP connection.
-H header --header header HTTP Provide extra header when getting a web page. When using extra headers, make sure the various gateways/proxies on the line don't mangle/remove them.
-I (capital i) --head Fetch the HTTP headers only, using the HTTP HEAD command.
-i --include Include the HTTP headers in the output. This includes server name, date of the document, HTTP version, cookies and more...
Using both -i and -D is pleonastic as -i == -D /dev/stdout, but may collide with further options (like -o /dev/null). No big deal having an extra -i
-k --insecure Prevent the connection from failing when the SSL certificate check fails.
--limit-rate bytesPerSecond Limit the transfer speed (both for uploads and downloads) to bytesPerSecond bytes per second. bytesPerSecond can be suffixed with k/K, m/M or g/G for kilo / mega / gigabytes per second.
-L --location Follow HTTP redirects. cURL doesn't re-send credentials on redirects.
--noproxy hosts Bypass the system proxy for the listed hosts
-o file --output file HTTP Write output to file instead of STDOUT
This allows fetching multiple files at once : curl -o -o
-O --remote-name HTTP Write output to a local file named like the remote file. This will write index.html locally :
curl -O
cURL has no option to specify a destination directory, but it's possible to work around this with : cd destinationDir; curl [options]; cd - (source)
--resolve host:port:address Pre-populate the DNS cache with a custom entry for the host:port pair, so that redirects and everything else operating against this pair will use the provided address instead of the DNS result.
This looks like a good alternative to playing with HTTP headers during tests.
(sources : 1, 2)
-s --silent Silent mode. Don't show progress meter or error messages. Makes cURL mute.
--sslv2 -2
--sslv3 -3
HTTP Force cURL to use SSL version 2 (respectively : 3) when negotiating with a remote SSL server.
Both are insecure and obsolete (Prohibiting SSL v2, Deprecating SSL v3) and must NOT be used anymore. Consider the --tlsv1.x options instead :
  • these options ask cURL to use the specified TLS version (or higher) when negotiating with a remote TLS server
  • in old versions of cURL, these forced cURL to use exactly the specified version (check the version number)
  • TLS 1.3 is still pretty recent (August 2018) and not widely supported
-T file URL
--upload-file file URL
Upload file to URL. If no remote file name is given within URL, the local file name will be used : if URL doesn't end with a /, its last part will be considered as the remote file name.
-u login:password
--user login:password
Specify login and password to use for server authentication
-U login:password
--proxy-user login:password
Specify login and password to use for proxy authentication
-v --verbose Verbose mode :
  • lines starting with > show data sent by cURL
  • lines starting with < show data received by cURL that is hidden in normal cases
  • lines starting with * show additional info provided by cURL
-x proxy --proxy proxy Use the specified proxy. Format : protocol://proxyHost:proxyPort
-X command --request command Specify a custom request :
  • HTTP : a custom request method : POST, PUT, DELETE, ... Without -X, cURL defaults to GET.
  • FTP, SMTP and others : a custom command rather than the default one

Example :

Basic examples :

return page content to stdout
return HTTP response headers + page content to stdout
  • curl -i
  • curl -i | less
same as above (HTTP headers + page content), but written to outputFile rather than stdout
curl -i -o outputFile
The command above only outputs a progress meter to the terminal :
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1270  100  1270    0     0  75107      0 --:--:-- --:--:-- --:--:-- 79375
same as above : HTTP headers + page content written to outputFile, without the progress meter :
curl -si -o outputFile
display the HTTP headers only (see also the wget method) :
  • curl -I
  • curl -IXGET
    Forces a GET HTTP request rather than the HEAD done with -I.
  • curl -iso /dev/null -D /dev/stdout

Define an Akamai alias :

alias akcurl='curl -H "Pragma: akamai-x-get-cache-key, akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-get-true-cache-key, akamai-x-get-extracted-values, akamai-x-check-cacheable, akamai-x-get-request-id, akamai-x-serial-no, akamai-x-get-ssl-client-session-id"'

Testing Akamai :

curl -iso /dev/null -D /dev/stdout -H "Pragma: akamai-x-get-cache-key, akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-get-true-cache-key, akamai-x-get-extracted-values, akamai-x-check-cacheable, akamai-x-get-request-id, akamai-x-serial-no, akamai-x-get-ssl-client-session-id" -H "Host:"

akcurl -iso /dev/null -D /dev/stdout -H "Host:"

akcurl -iso /dev/null -D /dev/stdout -H "Host:"

Log into WordPress back-office :

siteName=''; login='username'; password='password'; baseUrl='http://'$siteName; resultFile='./index.html'; redirectTo='http%3A%2F%2F'$siteName'%2Fwp-admin%2F'; curl -L -iso $resultFile -D /dev/stdout --data "log=$login&pwd=$password&wp-submit=Se+connecter&redirect_to=$redirectTo&testcookie=1" --cookie ./cookie.txt $baseUrl/wp-login && grep --color=auto "Tableau de bord" $resultFile

Notes :
  • Even though no ./cookie.txt file is visible after executing this command, it won't work without specifying it (?)
  • So far, I've not been able to do the same with wget.
  • ./index.html contains both the HTTP headers and the page contents.
  • The form field names may change : log, pwd, wp-submit, as well as the submit button value : Se connecter.

Log into WordPress back-office with a custom HTTP header :

serverName='srvXYZ'; siteName=''; login='username'; password='password'; baseUrl='http://'$serverName; resultFile='./index.html'; redirectTo='http%3A%2F%2F'$siteName'%2Fwp-admin%2F'; curl -L -iso $resultFile -D /dev/stdout --data "log=$login&pwd=$password&wp-submit=Connexion&redirect_to=$redirectTo&testcookie=1" --cookie ./cookie.txt --header "Host: $siteName" $baseUrl/wp-login.php

Notes, same as above, but also :
  • Don't forget to post data to wp-login.php : some redirects may not work otherwise (?)

Log into another back-office while sending custom HTTP host headers :

hostnameFake=''; hostnameReal='serverName'; resultFile='./result.txt'; cookieFile='./cookie.txt'
curl -iso $resultFile -D /dev/stdout --cookie-jar $cookieFile -H "Host: $hostnameFake" http://$hostnameReal/loginPage
curl -iso $resultFile -D /dev/stdout --data 'login=admin&password=secret' --cookie $cookieFile --cookie-jar $cookieFile -H "Host: $hostnameFake" http://$hostnameReal/loginCredentialsCheck
curl -iso $resultFile -D /dev/stdout --cookie $cookieFile -H "Host: $hostnameFake" http://$hostnameReal/contentPage
grep --color=auto "some text" $resultFile

Notes :
  • The data to be sent through HTTP POST must not be urlencoded
  • Although we're sending a tampered HTTP Host header, there's no need to modify the cookie-jar file to translate the fake hostname into the real one. Just leave the cookie-jar file as-is.
  • Depending on the website, it may or may not be necessary to :
    • GET the login page first, for session cookies
    • specify the HTTP Referer header when presenting credentials
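When a form value itself contains reserved characters like & or =, cURL's --data-urlencode option encodes the value part for you, which avoids hand-encoding ; a sketch with placeholder credentials and URL :

```shell
# --data-urlencode 'name=value' URL-encodes everything after the first '='
# (credentials and URL below are placeholders)
curl --data 'log=admin' --data-urlencode 'pwd=p&ss=w0rd!' https://example.com/wp-login.php
```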

Get HTTP response times (source) :

Times :

  1. DNS lookup time : to look up the IP address of the domain in question
  2. Connect time : to set up the TCP connection
  3. Wait time : to receive the first byte after the connection has been set up (aka TTFB : time to first byte)
  4. Content download time

curl -s -o /dev/null -iD /dev/stdout -w "Effective URL :\t\t%{url_effective}\nContent-type :\t\t%{content_type}\nHTTP CODE :\t\t%{http_code}\nDNS lookup duration :\t%{time_namelookup} s\nConnect duration :\t%{time_connect} s\nTTFB :\t\t\t%{time_starttransfer} s\nTotal time :\t\t%{time_total} s\nDownloaded :\t\t%{size_download} bytes\n"

Back to the origin + bypass host cache (Varnish) :

curl -s -o /dev/null -iD /dev/stdout -w "Effective URL :\t\t%{url_effective}\nContent-type :\t\t%{content_type}\nHTTP CODE :\t\t%{http_code}\nDNS lookup duration :\t%{time_namelookup} s\nConnect duration :\t%{time_connect} s\nTTFB :\t\t\t%{time_starttransfer} s\nTotal time :\t\t%{time_total} s\nDownloaded :\t\t%{size_download} bytes\n" --header "Host:"$RANDOM

Upload a file by FTP :

curl -u login:password -T localFileName ftp://ftpHost/path/to/upload/directory/remoteFileName

Connect to a secure FTP (about cURL certificates) :

curl --insecure --ftp-ssl --ftp-pasv --user "login:password" "ftp://ftpHost:ftpPort/path/to/upload/directory/"

The trailing / looks mandatory (?).