Client URL, or cURL, is a library and command-line utility for transferring data between systems. It supports many protocols and tends to be installed by default on many Unix-like operating systems. Because of its general availability, it is a great choice for downloading a file to your local system, especially in a server environment.
In this tutorial, you’ll use the `curl` command to download a text file from a web server. You’ll view its contents, save it locally, and tell `curl` to follow redirects if files have moved. This knowledge is particularly useful when working with REST APIs or setting up Node.js applications.
Downloading files from the Internet can be dangerous, so be sure you are downloading from reputable sources. In this tutorial, you’ll download files from DigitalOcean, and you won’t be executing any files you download.
Out of the box, without any command-line arguments, the `curl` command fetches a file and displays its contents on standard output. Let’s give it a try by downloading the `robots.txt` file from digitalocean.com:
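For example (assuming the file is still served at this path):

```shell
# Fetch robots.txt and print its contents to standard output
curl https://www.digitalocean.com/robots.txt
```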
You’ll see the file’s contents displayed on the screen:
Give `curl` a URL and it will fetch the resource and display its contents.
Fetching a file and displaying its contents is all well and good, but what if you want to actually save the file to your system?
To save the remote file to your local system, using the same filename as the remote file, add the `--remote-name` argument, or use the shorthand `-O` option:
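Continuing with the `robots.txt` example from above:

```shell
# -O / --remote-name saves the file as "robots.txt", the remote file's name
curl -O https://www.digitalocean.com/robots.txt
```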
Your file will download:
Instead of displaying the contents of the file, `curl` displays a text-based progress meter and saves the file under the same name as the remote file. You can check on things with the `cat` command:
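For example:

```shell
# Print the downloaded file to the terminal
cat robots.txt
```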
The file contains the same contents you saw previously:
Now let’s look at specifying a filename for the downloaded file.
You may already have a local file with the same name as the file on the remote server. To avoid overwriting your local file of the same name, use the `-o` or `--output` argument, followed by the name of the local file you’d like to save the contents to.
Execute the following command to download the remote `robots.txt` file to the locally named `do-bots.txt` file:
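A sketch of the command:

```shell
# -o / --output saves the response under a name you choose
curl -o do-bots.txt https://www.digitalocean.com/robots.txt
```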
Once again, you’ll see the progress bar:
Now use the `cat` command to display the contents of `do-bots.txt` to verify it’s the file you downloaded:
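For example:

```shell
cat do-bots.txt
```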
The contents are the same:
By default, `curl` doesn’t follow redirects, so when files move, you might not get what you expect. Let’s look at how to fix that.
Thus far, all of the examples have included fully qualified URLs that include the `https://` protocol. If you happened to try to fetch the `robots.txt` file and only specified `www.digitalocean.com`, you would not see any output, because DigitalOcean redirects requests from `http://` to `https://`:
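The bare-domain request might look like this:

```shell
# Without the https:// prefix and without -L, curl receives a redirect
# response with an empty body, so nothing is printed
curl www.digitalocean.com/robots.txt
```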
You can verify this by using the `-I` flag, which displays the response headers rather than the contents of the file:
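For example:

```shell
# -I / --head sends a HEAD request and prints only the response headers
curl -I www.digitalocean.com/robots.txt
```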
The output shows that the URL was redirected. The first line of the output tells you that it was moved, and the `Location` header tells you where:
You could use `curl` to make another request manually, or you can use the `--location` or `-L` argument, which tells `curl` to redo the request at the new location whenever it encounters a redirect. Give it a try:
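The command with redirect-following enabled:

```shell
# -L / --location follows redirects until the final resource is reached
curl -L www.digitalocean.com/robots.txt
```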
This time, you see the output, because `curl` followed the redirect:
You can combine the `-L` argument with some of the aforementioned arguments to download the file to your local system:
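For example, combining `-L` with `-o`:

```shell
# Follow redirects and save the result as do-bots.txt
curl -L -o do-bots.txt www.digitalocean.com/robots.txt
```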
Warning: Many resources online will ask you to use `curl` to download and execute scripts. Before you run any script you have downloaded, it’s good practice to check its contents before making it executable and running it. Use the `less` command to review the code and ensure it’s something you want to run.
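For instance (the URL here is a placeholder, not a real installer):

```shell
# Download the script first instead of piping it straight to a shell
curl -O https://example.com/install.sh   # hypothetical URL
less install.sh                          # inspect before running
```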
Some web files are protected and require authentication, and `curl` allows you to handle these cases easily. This is particularly useful when working with proxy servers or secure API endpoints. To access a file that requires login credentials, use the `-u` flag:
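A sketch, with placeholder credentials and URL:

```shell
# -u passes username:password for HTTP basic authentication
curl -u sammy:password https://example.com/protected/file.txt
```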
You can also use headers to pass API tokens:
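For example, with a hypothetical API endpoint and a token stored in an environment variable:

```shell
# -H adds a request header; the token is read from the environment
curl -H "Authorization: Bearer $API_TOKEN" https://api.example.com/data
```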
For better security, avoid hardcoding sensitive data. Instead, use environment variables or configuration files.
Robust scripts need to account for network interruptions and delays.
Use the `-C -` option to resume an interrupted download:
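A sketch, using a placeholder URL:

```shell
# -C - resumes from where the previous partial download stopped
curl -C - -O https://example.com/large-file.iso
```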
Prevent `curl` from hanging indefinitely by setting timeouts:
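For example (the URL and timeout values are illustrative):

```shell
# --connect-timeout: give up if connecting takes longer than 10 seconds
# --max-time: abort the entire transfer after 300 seconds
curl --connect-timeout 10 --max-time 300 -O https://example.com/file.zip
```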
Automatically retry failed downloads up to 3 times:
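For example:

```shell
# --retry 3: retry up to three times on transient errors
# --retry-delay 5: wait five seconds between attempts
curl --retry 3 --retry-delay 5 -O https://example.com/file.zip
```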
Automating downloads can be useful in CI/CD pipelines or regular backups. This is especially relevant when working with Node.js applications or REST APIs that require regular data updates.
Make the script executable with `chmod +x script.sh`, then schedule it with `cron` or use it in a deployment pipeline.
Sometimes downloads might fail or behave unexpectedly. Here are some common issues and their solutions:
If `curl` isn’t downloading your file, try these troubleshooting steps:
If you’re still having issues, the verbose output (`-v`) will help identify the problem:
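For example:

```shell
# -v prints the request sent, response headers, and connection details
curl -v -O https://www.digitalocean.com/robots.txt
```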
While `curl` is powerful, sometimes `wget` might be a better choice for certain download scenarios. `wget` is specifically designed for downloading files and has some features that make it particularly useful:
To download a file with `wget`:
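Reusing the `robots.txt` example:

```shell
# wget saves to robots.txt by default and retries on network errors
wget https://www.digitalocean.com/robots.txt
```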
What is the difference between `-O` and `-o` in cURL?
The `-O` (uppercase “O”) option in `curl` saves the downloaded file using the original filename as it appears in the URL. This is particularly useful when you want to preserve the server’s naming convention, or when downloading multiple files in a batch without having to specify each filename manually.
On the other hand, the `-o` (lowercase “o”) option allows you to specify a custom filename for the downloaded file. This is helpful for organizing your downloads, preventing filename collisions, or when you want to give the file a more meaningful name locally.
Example:
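A side-by-side sketch of the two options:

```shell
# Saves as robots.txt (name taken from the URL)
curl -O https://www.digitalocean.com/robots.txt

# Saves as do-bots.txt (name you choose)
curl -o do-bots.txt https://www.digitalocean.com/robots.txt
```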
If your download was interrupted due to a network error or system reboot, you can resume it using the `-C -` option. This tells `curl` to continue the download from where it left off, assuming the server supports HTTP range requests.
Example:
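A sketch, with a placeholder URL:

```shell
# Restart the download, resuming any partial file already on disk
curl -C - -O https://example.com/ubuntu.iso   # hypothetical URL
```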
This is particularly useful for large ISO files or multi-gigabyte datasets, especially when working with unreliable internet connections or automating downloads in scripts.
Yes, `curl` supports both basic and token-based authentication methods.
For basic authentication using a username and password:
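For example, with placeholder credentials:

```shell
curl -u sammy:password https://example.com/protected/file.txt
```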
For token-based authentication, use the `Authorization` header:
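For example, reading the token from an environment variable:

```shell
curl -H "Authorization: Bearer $API_TOKEN" https://api.example.com/data
```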
Always avoid hardcoding sensitive credentials. Use environment variables or configuration files when scripting to enhance security.
Some URLs redirect from `http` to `https`, or from an old endpoint to a new one. By default, `curl` does not follow these redirects.
To handle this, add the `-L` or `--location` option:
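For example:

```shell
# Follow the redirect chain to the final location
curl -L http://www.digitalocean.com/robots.txt
```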
This is especially important when accessing public APIs, shortened URLs, or migrating download links.
Yes, cURL is included by default in Windows 10 and later. You can use it directly from Command Prompt or PowerShell.
For older versions, or for enhanced Unix-like behaviour, you can install it with Chocolatey by running `choco install curl`.
Once installed, you can run cURL commands just like on Linux or macOS. This enables cross-platform scripting and consistent development workflows.
You can download multiple files in a single command using either a list of URLs or a pattern. Here are two common approaches:
Using multiple URLs:
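For example, with placeholder URLs:

```shell
# Each -O pairs with the URL that follows it
curl -O https://example.com/file1.txt -O https://example.com/file2.txt
```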
Using a pattern with brace expansion:
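For example (quote the URL so the shell doesn’t expand the braces itself):

```shell
# curl expands the braces into one request per name
curl -O "https://example.com/{file1,file2,file3}.txt"
```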
For more complex scenarios, you can also use a text file containing URLs and the `-K` option:
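A sketch, assuming a `urls.txt` file written in curl’s config syntax:

```shell
# urls.txt contains one entry per download, for example:
#   url = "https://example.com/file1.txt"
#   output = "file1.txt"
curl -K urls.txt
```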
Sometimes, you might encounter SSL certificate errors when downloading from servers with expired or self-signed certificates. While not recommended for production use, you can bypass certificate verification using the `-k` or `--insecure` option:
For a more secure approach, you can specify a custom certificate:
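For example, with a placeholder path to the certificate bundle:

```shell
# --cacert validates the server against a CA bundle you provide
curl --cacert /path/to/ca-bundle.pem https://internal.example.com/file.txt
```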
Remember that bypassing certificate verification can expose you to security risks, so use these options cautiously.
By default, cURL shows a progress meter, but you can customise the output using various options:
For a simple progress bar:
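For example:

```shell
# -# replaces the default meter with a single progress bar
curl -# -O https://www.digitalocean.com/robots.txt
```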
For detailed progress information:
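The default meter already reports totals, transfer speed, and time estimates; it appears whenever the output goes to a file rather than the terminal:

```shell
# The default progress meter is shown because -O writes to a file
curl -O https://www.digitalocean.com/robots.txt
```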
You can also create a custom progress format using the `-w` option with various variables like `%{speed_download}`, `%{time_total}`, and `%{size_download}`.
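For example:

```shell
# -w prints a summary line after the transfer completes; -s hides the meter
curl -s -o /dev/null \
  -w "size: %{size_download} bytes, time: %{time_total}s\n" \
  https://www.digitalocean.com/robots.txt
```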
`curl` lets you quickly download files from a remote system. It supports a wide range of protocols like HTTP, HTTPS, FTP, and more, making it a reliable and script-friendly choice for file transfers. But it doesn’t stop there.
From simple downloads to complex API interactions, `curl` can handle everything from setting custom headers and authentication to managing redirects and resumable downloads. It’s a staple tool for developers, sysadmins, and DevOps engineers who need precise control over network communication without relying on heavyweight tools.
Whether you’re automating tasks in a CI/CD pipeline, integrating data from external sources, or testing endpoints in REST APIs or Node.js applications, `curl` fits naturally into modern development workflows.
To dive deeper into all its capabilities, view the manual page by running:
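For example:

```shell
man curl
```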
Or explore online examples to sharpen your command-line skills even further.
Tested on Ubuntu 20.04