
How To Use the AWK language to Manipulate Text in Linux

Posted January 22, 2014 · System Tools · Linux Basics · Ubuntu

Status: Deprecated

This article covers a version of Ubuntu that is no longer supported. If you are currently operating a server running Ubuntu 12.04, we highly recommend upgrading or migrating to a supported version of Ubuntu:

Reason: Ubuntu 12.04 reached end of life (EOL) on April 28, 2017 and no longer receives security patches or updates. This guide is no longer maintained.

See Instead:
This guide might still be useful as a reference, but may not work on other Ubuntu releases. If available, we strongly recommend using a guide written for the version of Ubuntu you are using. You can use the search functionality at the top of the page to find a more recent version.


Linux utilities often follow the Unix philosophy of design. Tools are encouraged to be small, use plain text files for input and output, and operate in a modular manner. Because of this legacy, we have great text processing functionality with tools like sed and awk.

In this guide, we will discuss awk. Awk is both a programming language and text processor that can be used to manipulate text data in very useful ways. We will be discussing this on an Ubuntu 12.04 VPS, but it should operate the same on any modern Linux system.

Basic Syntax

The awk command is included by default in all modern Linux systems, so we do not need to install it to begin using it.

Awk is most useful when handling text files that are formatted in a predictable way. For instance, it is excellent at parsing and manipulating tabular data. It operates on a line-by-line basis and iterates through the entire file.

By default, it uses whitespace (spaces, tabs, etc.) to separate fields. Luckily, many configuration files on your Linux system use this format.

The basic format of an awk command is:

awk '/search_pattern/ { action_to_take_on_matches; another_action; }' file_to_parse

You can omit either the search portion or the action portion from an awk command. If the action portion is not given, the default action is "print", which simply prints all lines that match.

If the search portion is not given, awk performs the action listed on each line.

If both are given, awk uses the search portion to decide if the current line reflects the pattern, and then performs the actions on matches.
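To make these three forms concrete, here is a short, self-contained sketch. The sample.txt file and its contents are made up purely for illustration:

```shell
# Build a throwaway sample file for the demonstration
printf 'alpha one\nbeta two\nalpha three\n' > sample.txt

# Pattern only: the default action "print" runs on matching lines
awk '/alpha/' sample.txt
# alpha one
# alpha three

# Action only: the action runs on every line
awk '{ print $2 }' sample.txt
# one
# two
# three

# Both: the action runs only on lines that match
awk '/alpha/ { print $2 }' sample.txt
# one
# three
```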

Simple Uses

In its simplest form, we can use awk like cat to simply print all lines of a text file out to the screen.

Let's print out our server's fstab file, which lists the filesystems that it knows about:

awk '{print}' /etc/fstab
# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/vda1 during installation
UUID=b96601ba-7d51-4c5f-bfe2-63815708aabd /               ext4    noatime,errors=remount-ro 0       1

This isn't very useful. Let's try out awk's search filtering capabilities:

awk '/UUID/' /etc/fstab
# device; this may be used with UUID= as a more robust way to name devices
UUID=b96601ba-7d51-4c5f-bfe2-63815708aabd /               ext4    noatime,errors=remount-ro 0       1

As you can see, awk now only prints the lines that have "UUID" in them. We can get rid of the extraneous comment line by specifying that UUID must be located at the very beginning of the line:

awk '/^UUID/' /etc/fstab
UUID=b96601ba-7d51-4c5f-bfe2-63815708aabd /               ext4    noatime,errors=remount-ro 0       1

Similarly, we can use the action section to specify which pieces of information we want to print. For instance, to print only the first column, we can type:

awk '/^UUID/ {print $1;}' /etc/fstab

We can reference every column (as delimited by whitespace) by variables associated with their column number. The first column can be referenced by $1, for instance, and the entire line can be referenced by $0.
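As a quick sketch of these field variables in action (the line below is made up to resemble an fstab entry), we can pick out fields and rearrange them however we like:

```shell
# A made-up line resembling an fstab entry
line='UUID=abcd / ext4 defaults 0 1'

# $2 and $3 are individual fields; $0 would be the entire record
echo "$line" | awk '{ print $3, "mounted at", $2 }'
# ext4 mounted at /
```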

Awk Internal Variables and Expanded Format

Awk uses some internal variables to assign certain pieces of information as it processes a file.

The internal variables that awk uses are:

  • FILENAME: References the current input file.
  • FNR: References the number of the current record relative to the current input file. For instance, if you have two input files, this gives the record number within each file rather than a running total across both.
  • FS: The current field separator used to denote each field in a record. By default, this is set to whitespace.
  • NF: The number of fields in the current record.
  • NR: The number of the current record.
  • OFS: The field separator for the outputted data. By default, this is set to whitespace.
  • ORS: The record separator for the outputted data. By default, this is a newline character.
  • RS: The record separator used to distinguish separate records in the input file. By default, this is a newline character.
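A brief sketch showing a few of these variables in action (the vars_demo.txt file below is just a throwaway example):

```shell
# Build a small demo file with a varying number of fields per line
printf 'one two\nthree four five\n' > vars_demo.txt

# FILENAME, NR, and NF update automatically as awk reads each record
awk '{ print FILENAME, "record", NR, "has", NF, "fields" }' vars_demo.txt
# vars_demo.txt record 1 has 2 fields
# vars_demo.txt record 2 has 3 fields

# OFS controls what separates fields in the output
awk 'BEGIN { OFS="-" } { print $1, $2 }' vars_demo.txt
# one-two
# three-four
```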

We can change the values of these variables at will to match the needs of our files. Usually we do this during the initialization phase of our awk processing.

This brings us to another important concept. Awk syntax is actually slightly more complex than what we showed initially. There are also optional BEGIN and END blocks that can contain commands to execute before and after the file processing, respectively.

This makes our expanded syntax look something like this:

awk 'BEGIN { action; }
/search/ { action; }
END { action; }' input_file

The BEGIN and END keywords are really just special patterns, much like the search patterns: BEGIN matches before the first record is read, and END matches after the last record has been processed.

This means that we can change some of the internal variables in the BEGIN section. For instance, the /etc/passwd file is delimited with colons (:) instead of whitespace. If we wanted to print out the first column of this file, we could type:

sudo awk 'BEGIN { FS=":"; }
{ print $1; }' /etc/passwd
. . .

We can use the BEGIN and END blocks to print simple information about the fields we are printing:

sudo awk 'BEGIN { FS=":"; print "User\t\tUID\t\tGID\t\tHome\t\tShell\n--------------"; }
{print $1,"\t\t",$3,"\t\t",$4,"\t\t",$6,"\t\t",$7;}
END { print "---------\nFile Complete" }' /etc/passwd
User        UID     GID     Home        Shell
root         0       0       /root       /bin/bash
daemon       1       1       /usr/sbin       /bin/sh
bin          2       2       /bin        /bin/sh
sys          3       3       /dev        /bin/sh
sync         4       65534       /bin        /bin/sync
. . .
File Complete

As you can see, we can format things quite nicely by taking advantage of some of awk's features.
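For tighter control over column alignment than the tab characters above give us, awk also provides a printf function with C-style format specifiers. A minimal sketch, again against /etc/passwd:

```shell
# printf gives fixed-width columns instead of relying on tabs:
# %-10s left-justifies in 10 characters, %6s right-justifies in 6
sudo awk 'BEGIN { FS=":"; printf "%-10s %6s %6s\n", "User", "UID", "GID"; }
{ printf "%-10s %6s %6s\n", $1, $3, $4; }' /etc/passwd
```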

Each of the expanded sections is optional. In fact, the main action section itself is optional if another section is defined. We can do things like this:

awk 'BEGIN { print "We can use awk like the echo command"; }'

We can use awk like the echo command

Awk Field Searching and Compound Expressions

In one of the examples above, we printed the line in the /etc/fstab file that began with "UUID". This was easy because we were looking for the beginning of the entire line.

What if we wanted to find out if a search pattern matched at the beginning of a field instead?

We can create a favorite_food.txt file which lists an item number and the favorite foods of a group of friends:

echo "1 carrot sandy
2 wasabi luke
3 sandwich brian
4 salad ryan
5 spaghetti jessica" > favorite_food.txt

If we want to find all foods from this file that begin with "sa", we might begin by trying something like this:

awk '/sa/' favorite_food.txt

1 carrot sandy
2 wasabi luke
3 sandwich brian
4 salad ryan

Here, we are matching any occurrence of "sa" anywhere on the line. This does not exclude words like "wasabi", which has the pattern in the middle, or "sandy", which is not in the column we want. We are only interested in words beginning with "sa" in the second column.

We can tell awk to only match at the beginning of the second column by using this command:

awk '$2 ~ /^sa/' favorite_food.txt

3 sandwich brian
4 salad ryan

As you can see, this allows us to only search at the beginning of the second column for a match.

The "^" character tells awk to anchor the match to the beginning of the field. The "$2 ~" portion specifies that the pattern should be tested against the second field only.

We can just as easily search for things that do not match by including the "!" character before the tilde (~). This command will return all lines that do not have a food that starts with "sa":

awk '$2 !~ /^sa/' favorite_food.txt

1 carrot sandy
2 wasabi luke
5 spaghetti jessica

If we decide later on that we are only interested in lines where the above is true and the item number is less than 5, we could use a compound expression like this:

awk '$2 !~ /^sa/ && $1 < 5' favorite_food.txt

This introduces something new: the && operator allows us to add additional requirements for the line to match. Using this, you can combine an arbitrary number of conditions, all of which must be true for the line to match.

We use this operator to add a check that the value of the first column is less than 5.
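Conditions can also be grouped with parentheses and combined with the || ("or") operator, which matches if either side is true. As an arbitrary illustration, to find low-numbered items whose food starts with either "c" or "w":

```shell
awk '$1 < 5 && ($2 ~ /^c/ || $2 ~ /^w/)' favorite_food.txt
# 1 carrot sandy
# 2 wasabi luke
```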


By now, you should have a basic understanding of how awk can manipulate, format, and selectively print text files. Awk is a much larger topic though, and is actually an entire programming language complete with variable assignment, control structures, built-in functions, and more. It can be used in scripts to easily format text in a reliable way.
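As a small taste of those programming features, here is a sketch using a user-defined variable, an if statement, and the built-in length() function, run against the favorite_food.txt file from earlier:

```shell
# Count how many foods in the second column are longer than six letters
awk '{
    if (length($2) > 6)
        long++
    total++
}
END { print long, "of", total, "foods have more than 6 letters" }' favorite_food.txt
# 2 of 5 foods have more than 6 letters
```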

To learn more about how to work with awk, check out the great online resources for awk, and more relevantly, gawk, the GNU version of awk present on modern Linux distributions.

By Justin Ellingwood


Creative Commons License