How to Use Curl Command in Linux

An extensive guide explaining how to use the curl command to fetch webpages and download files right from your terminal

The curl command is another interesting command-line utility that Linux has to offer. It allows the user to fetch files from a server.

curl is a popular choice among application developers and frequent Linux users because of its support for a number of protocols, including RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, HTTP, HTTPS, FTP, FTPS, IMAP, IMAPS, DICT, FILE, GOPHER, LDAP, LDAPS, POP3, and POP3S.

The curl command does much more than just fetch web pages for you. Knowing the options available with this command makes it far more versatile. Let us dive into the tutorial to get a good grasp of the curl command through some brief examples.


Installation

Before using the curl command, check whether it is already installed on your system by running curl --version.

If curl is not installed, use the following steps.

On Ubuntu and Debian-based systems, use:

sudo apt-get update
sudo apt-get install curl

On RHEL, CentOS, and Fedora distros, use:

sudo yum install curl
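
On newer Fedora releases and on RHEL/CentOS 8 or later, dnf has replaced yum as the default package manager, so you may need to run this instead:

sudo dnf install curl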

Now use the curl --version command to make sure that it is properly installed.

curl --version

Output:

curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.1 zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
Release-Date: 2018-01-24
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
gaurav@ubuntu:~$

Now we are ready to use the curl command.


Options available with the CURL command

Let us first have a glance at some of the prominent options available with the curl command.

Option       Description
-u           to download files from an FTP server
-C           to resume an interrupted download
-o           to save the result of the curl command with a predefined filename
-I           to get the HTTP headers of a defined URL
-O           to save the result of the curl command with the original filename
--libcurl    to output C source code that uses libcurl to perform the specified operation
-x           to use a proxy to access the URL
-#           to display a progress bar showing the download status

Retrieving a webpage using CURL

The curl command, when used without any option, fetches the content of the URL specified in the command.

Syntax:

curl [URL]

Example:

curl https://allthings.how

Output:

gaurav@ubuntu:~$ curl https://allthings.how
<!DOCTYPE html>
<html class="no-js" lang="en-US" amp="" i-amphtml-layout="" i-amphtml-no-boilerplate="" transformed="self;v=1">
<head><meta charset="UTF-8"><style amp-runtime="" i-amphtml-version="012008290323002">html{overflow-x:hidden!important}html.i-amphtml-fie{height:100%!important;width:100%!important}html:not([amp4ads]),html:not([amp4ads]) body{height:auto!important}html:not([amp4ads]) body{margin:0!important} ...
[output truncated]

Here, the content of the webpage is fetched and displayed directly in your terminal as source code.

You can use the -o and -O options with the curl command to store this content in a file.

When the -o option is used, the content of the URL is saved in your current directory under a user-defined filename.

Syntax:

curl -o [userdefined_filename] [URL]

Example:

gaurav@ubuntu:~/workspace$ curl -o ath.html https://allthings.how
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  199k  100  199k    0     0  58743      0  0:00:03  0:00:03 --:--:-- 58743
gaurav@ubuntu:~/workspace$ ls
ath.html
gaurav@ubuntu:~/workspace$

In this example, the content from the URL ‘allthings.how’ is saved as an HTML file named ath.html in my current working directory. Opening this HTML file takes me to the saved copy of the webpage.


Downloading files using the CURL command

Using the -O option with the curl command also saves the content, webpage, or downloadable package as a file, but this time with its original filename.

Let us see this through an example:

Example:

Here I have used the curl command with the -O option to download an Ubuntu package named ‘cherrytree_0.37.6-1.1_all.deb‘ from the Ubuntu package repository.

gaurav@ubuntu:~/workspace$ curl -O http://kr.archive.ubuntu.com/ubuntu/pool/universe/c/cherrytree/cherrytree_0.37.6-1.1_all.deb
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  613k  100  613k    0     0   220k      0  0:00:02  0:00:02 --:--:--  220k
gaurav@ubuntu:~/workspace$

Output:

trinity@ubuntu:~/workspace$ ls
ath.html  cherrytree_0.37.6-1.1_all.deb
trinity@ubuntu:~/workspace$

So, the package is now downloaded and saved in the current working directory (CWD) with its original name.

Displaying a Progress Bar while downloading a file

There is one more aesthetic modification available while using the curl command to download a file. You can view the progress of your download as a progress bar in the terminal by appending the -# option to the command.

Let us see an example of this tweak.

Syntax:

curl -# -O [URL]

Example:

gaurav@ubuntu:~/workspace$ curl -# -O http://archive.ubuntu.com/ubuntu/pool/main/e/emacs-defaults/emacs-defaults_47.0.tar.xz
############################################################################################################################################### 100.0%
gaurav@ubuntu:~/workspace$

Output:

gaurav@ubuntu:~/workspace$ ls
ath.html  cherrytree_0.37.6-1.1_all.deb  emacs-defaults_47.0.tar.xz
gaurav@ubuntu:~/workspace$

In this output, you can see that the package ‘emacs-defaults_47.0.tar.xz‘ was downloaded to my CWD, and the progress bar was displayed in the terminal while the download was in progress.


Resuming interrupted download in CURL

At times, you may have to download larger files. For reasons such as a power or network failure, the download may abort midway without the complete file being downloaded. The process is also aborted if you press Ctrl+C in the terminal.

The curl command, when used with the -C option, resumes an interrupted download. The trailing dash in -C - tells curl to work out the resume offset on its own.

Syntax:

curl -C - -O [URL]

Example:

In this illustration, I have tried to download the Ubuntu 20.04 ISO image from the Ubuntu website.

gaurav@ubuntu:~/workspace$ curl -O https://releases.ubuntu.com/20.04.1/ubuntu-20.04.1-desktop-amd64.iso?_ga=2.212264532.1184373179.1600250922-1570904140.1591164974
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 2656M    0 1744k    0     0  87038      0  8:53:17  0:00:20  8:52:57 77726^C

Here, I deliberately aborted the download by pressing Ctrl+C.

Now I’ll use the -C option with the curl command to resume the interrupted download from the same source website.

Output:

gaurav@ubuntu:~/workspace$ curl -C - -O https://releases.ubuntu.com/20.04.1/ubuntu-20.04.1-desktop-amd64.iso?_ga=2.212264532.1184373179.1600250922-1570904140.1591164974
** Resuming transfer from byte position 1851392
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 2654M    0 20.2M    0     0  57940      0 13:20:35  0:06:06 13:14:29 98278

The download has been picked up from where it was aborted.


Downloading files from an FTP server using CURL

It is pretty easy to download a file from an FTP server with the curl command using the -u option. You have to put the username and password into the command before the URL.

Syntax:

curl -u [username]:[password] [URL]

For this illustration, I’ll be using an online public FTP server.

Example:

gaurav@ubuntu:~/workspace$ curl -O -u dlpuser@dlptest.com:eUj8GeW55SvYaswqUyDSm5v6N ftp://ftp.dlptest.com/16-Sep-20-16-0-0.csv
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   390  100   390    0     0     93      0  0:00:04  0:00:04 --:--:--    93
gaurav@ubuntu:~/workspace$

Here, I have downloaded a file named ‘16-Sep-20-16-0-0.csv’ from this FTP server and saved it with its original name in my CWD. I’ll check the downloaded file using the ls command.

gaurav@ubuntu:~/workspace$ ls -al
total 1092
drwxrwxr-x  3 gaurav gaurav   4096 Sep 16 16:15 .
drwxr-xr-x 87 gaurav gaurav 266240 Sep 16 10:22 ..
-rw-r--r--  1 gaurav gaurav    390 Sep 16 16:15 16-Sep-20-16-0-0.csv
-rw-r--r--  1 gaurav gaurav 204429 Sep 16 11:45 ath.html
gaurav@ubuntu:~/workspace$

Downloading multiple files together using CURL

Downloading multiple files at once using the curl command is a very simple task. You just repeat the -O option before each URL, in the same way as in the blocks above.

Syntax:

curl -O [URL-1] -O [URL-2] -O [URL-n]

Example:

gaurav@ubuntu:~/workspace$ curl -O http://archive.ubuntu.com/ubuntu/pool/universe/a/aegean/aegean_0.15.2+dfsg-1.debian.tar.xz -O http://archive.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.4.29.orig.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 63500  100 63500    0     0  55458      0  0:00:01  0:00:01 --:--:-- 55458
100 8436k  100 8436k    0     0   123k      0  0:01:08  0:01:08 --:--:--  127k
gaurav@ubuntu:~/workspace$

In this example, I have downloaded two different packages from the Ubuntu repository.

Output:

gaurav@ubuntu:~/workspace$ ls -al
total 9596
drwxrwxr-x  3 gaurav gaurav    4096 Sep 16 16:28 .
drwxr-xr-x 87 gaurav gaurav  266240 Sep 16 10:22 ..
-rw-r--r--  1 gaurav gaurav     390 Sep 16 16:15 16-Sep-20-16-0-0.csv
-rw-r--r--  1 gaurav gaurav   63500 Sep 16 16:28 aegean_0.15.2+dfsg-1.debian.tar.xz
-rw-r--r--  1 gaurav gaurav 8638793 Sep 16 16:29 apache2_2.4.29.orig.tar.gz
-rw-r--r--  1 gaurav gaurav  204429 Sep 16 11:45 ath.html
gaurav@ubuntu:~/workspace$

The two packages are downloaded at the same time using the curl command.


Fetching HTTP headers of a URL with CURL

The HTTP header fields of a URL contain useful information such as the user agent, content type, and encoding. These headers also provide information about the object sent in the message body, as well as details about the request and response.

You can use the curl command with the -I option to get the HTTP headers of a URL.

Syntax:

curl -I [URL]

Example:

gaurav@ubuntu:~/workspace$ curl -I www.firefox.com
HTTP/1.1 200 OK
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See g.co/p3phelp for more info."
Date: Wed, 16 Sep 2020 11:17:00 GMT
Server: gws
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Expires: Wed, 16 Sep 2020 11:17:00 GMT
Cache-Control: private
Set-Cookie: 1P_JAR=2020-09-16-11; expires=Fri, 16-Oct-2020 11:17:00 GMT; path=/; domain=.google.com; Secure
Set-Cookie: NID=204=SpeHTVXkKYwe6uaKYLsPWmCA0A-sGb94c9jpbw067e7uhyeJnkap6TFEIESztwLOEst7KcDSBLgGrokh1EM2IZi2VPVzllH0tsvCu-QbKiunPoPJ6dD7oAnB7rxu30rAiO630vYm6SG1zbmGgxNEiB-adXp24h7iEoSq9WsjrGg; expires=Thu, 18-Mar-2021 11:17:00 GMT; path=/; domain=.google.com; HttpOnly
gaurav@ubuntu:~/workspace$

In this example, I have fetched the HTTP headers of ‘www.firefox.com‘.


Fetching C source code using CURL

Using the curl command with the --libcurl option outputs C source code that performs the same transfer using libcurl. This has no significant use for everyday users, but it can prove very helpful for system programmers, security analysts, and application developers.

Syntax:

curl [URL] > filename --libcurl [code_filename]

Example:

In this example, I have fetched the content of the URL allthings.how and stored it in a file named gy_ath.html. The C source code is separately stored in the source.c file.

curl https://www.allthings.how > gy_ath.html --libcurl source.c

Output:

gaurav@ubuntu:~/workspace$ curl https://www.allthings.how > gy_ath.html --libcurl source.c
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
gaurav@ubuntu:~/workspace$

Let us now check the downloaded files.

gaurav@ubuntu:~/workspace$ ls -al
total 404
drwxrwxr-x  3 gaurav gaurav   4096 Sep 16 17:08 .
drwxr-xr-x 87 gaurav gaurav 266240 Sep 16 10:22 ..
-rw-r--r--  1 gaurav gaurav      0 Sep 16 17:13 gy_ath.html
-rw-r--r--  1 gaurav gaurav   1535 Sep 16 17:13 source.c
gaurav@ubuntu:~/workspace$

The source.c file contains the source code. It can be displayed in the terminal using the cat command. I have put a few lines of the output in the block below.

gaurav@ubuntu:~/workspace$ cat source.c
/********* Sample code generated by the curl command line tool **********
 * All curl_easy_setopt() options are documented at:
 * https://curl.haxx.se/libcurl/c/curl_easy_setopt.html
 ************************************************************************/
#include <curl/curl.h>

int main(int argc, char *argv[])
{
  CURLcode ret;
  CURL *hnd;

  hnd = curl_easy_init();
  curl_easy_setopt(hnd, CURLOPT_BUFFERSIZE, 102400L);
  curl_easy_setopt(hnd, CURLOPT_URL, "https://www.allthings.how");
  curl_easy_setopt(hnd, CURLOPT_USERAGENT, "curl/7.58.0");
  curl_easy_setopt(hnd, CURLOPT_MAXREDIRS, 50L);
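
If you want to experiment with the generated code, it can typically be compiled by linking against libcurl. This is only a quick sketch and assumes the libcurl development headers are installed (for example, via the libcurl4-openssl-dev package on Ubuntu); the output name fetch_page is an arbitrary choice:

gcc source.c -o fetch_page -lcurl
./fetch_page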

Using a proxy in CURL to access a URL

As discussed in the introduction, the curl command supports a wide range of protocols like FTP, SMTP, and HTTPS, and it can also route its traffic through SOCKS and HTTP proxies. Using a proxy server to transfer files becomes important when you wish to enhance the speed of your transfer or protect your identity. The curl command can easily be used to transfer files through a proxy server by appending the -x option to it.

Syntax:

curl -x [proxy_address]:[port] [URL]

In the above syntax, I have assumed that your proxy requires no authentication. If the proxy requires authentication to start the transfer, you can use the following command.

curl -u [username]:[password] -x [proxy_address]:[port] [URL]

Using this simple method, we can transfer files through a proxy server with the -x option of the curl command.
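
To make this concrete, here is an illustration that assumes a hypothetical proxy listening at 192.168.1.10 on port 8080; substitute your own proxy address, port, and credentials:

curl -x 192.168.1.10:8080 -O http://archive.ubuntu.com/ubuntu/pool/main/e/emacs-defaults/emacs-defaults_47.0.tar.xz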


Conclusion

In this brief tutorial, we learned how the curl command proves helpful in downloading content directly from your terminal. We also learned about the different options available with this command for various tasks.