A Practical Guide to Using `sed` and `awk` in Linux
Jul 27, 2025

sed and awk are powerful text-processing tools on Linux/Unix systems. sed is a stream editor suited to search, replace, delete, and insert operations; awk is a complete text-processing language that excels at field extraction, conditional filtering, and calculations. 1. Common sed tasks: `s/old/new/g` replaces text globally, `3s///` replaces on a specific line, `/pattern/s///` replaces only on matching lines, `d` deletes lines (e.g. `/^$/d` removes blank lines), `i` and `a` insert or append text at a given position, and `-i` saves changes back to the file (`-i.bak` keeps a backup of the original). 2. Common awk tasks: `{print $1, $3}` prints selected columns, `-F','` sets the separator for CSV data, `$2 > 30` filters rows by condition, `NR > 1` skips a header, `{sum += $2} END {print sum}` totals a column, `END {print NR}` counts lines, and `print "Label: " $1` produces labelled output. 3. The two combine well in pipelines, for example filtering a web access log with sed, extracting client IPs with awk, and counting them with `sort | uniq -c | sort -nr`. 4. Common mistakes to avoid: use single quotes to prevent shell expansion, switch sed's delimiter (e.g. `s|old|new|g`) when working with paths, remember that awk is a full programming language with variables and control structures, preview changes before adding `-i`, and watch the special meaning of `.` and `*` in regular expressions. Mastering sed and awk significantly improves text-processing efficiency for automation, log analysis, and data cleaning, and can often replace longer Python or Perl scripts.
`sed` and `awk` are two of the most powerful text-processing tools in Linux and Unix-like systems. While they may seem cryptic at first, mastering them can dramatically improve your efficiency when working with text files, logs, configuration files, and data streams. This guide breaks down practical uses of both tools with real-world examples, focusing on day-to-day tasks.

What Are `sed` and `awk`?
- `sed` (Stream Editor): Processes text line by line. It's ideal for find-and-replace, inserting or deleting lines, and basic text transformations.
- `awk` (named after its creators Aho, Weinberger, and Kernighan): A full scripting language for text processing. It excels at working with structured data (like CSVs), extracting fields, and performing calculations.
They're often used together in pipelines to filter, transform, and analyze text efficiently.
Practical Uses of sed
`sed` works on streams: it reads input, applies edits, and outputs the result. By default, it doesn't modify the original file unless told to.
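To see the stream behaviour in action, here's a minimal sketch (greetings.txt is a hypothetical file used only for illustration):

```bash
# Pipe a line through sed; only stdout carries the change
echo "hello world" | sed 's/world/Linux/'
# -> hello Linux

# Running sed on a file prints the edited stream to stdout...
sed 's/world/Linux/' greetings.txt
# ...while the file on disk stays exactly as it was (until -i is used)
cat greetings.txt
```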

1. Find and Replace Text
The most common use of `sed` is replacing text with the `s` (substitute) command.
sed 's/old-text/new-text/' filename
Example: Replace all instances of "apple" with "orange" in a file:

sed 's/apple/orange/' fruits.txt
Note: this only replaces the first occurrence on each line. To replace all occurrences, add the `g` (global) flag:
sed 's/apple/orange/g' fruits.txt
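To make the difference concrete, a quick sketch using echo so no file is needed:

```bash
# Without g: only the first match on each line is replaced
echo "apple apple apple" | sed 's/apple/orange/'
# -> orange apple apple

# With g: every match on the line is replaced
echo "apple apple apple" | sed 's/apple/orange/g'
# -> orange orange orange
```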
2. Replace on Specific Lines
You can limit substitutions to certain lines.
Replace only on line 3:
sed '3s/apple/orange/' fruits.txt
Replace only in lines containing "fruit":
sed '/fruit/s/apple/orange/' fruits.txt
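Addresses can also be ranges. A sketch, with the line numbers and patterns chosen purely for illustration:

```bash
# Replace only on lines 2 through 4
sed '2,4s/apple/orange/' fruits.txt

# Replace from the first line matching "BEGIN" up to the next line matching "END"
sed '/BEGIN/,/END/s/apple/orange/' fruits.txt
```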
3. Delete Lines
Use the `d` command to remove lines.
Delete line 5:
sed '5d' file.txt
Delete all blank lines:
sed '/^$/d' file.txt
Delete lines containing "error":
sed '/error/d' log.txt
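Several delete commands can be combined in a single invocation. A common cleanup, sketched here against a hypothetical config.txt, strips comment lines and blank lines in one pass:

```bash
# Remove lines starting with # and empty lines
sed '/^#/d; /^$/d' config.txt
```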
4. Insert or Append Text
Insert "Header" before line 1:
sed '1i\Header' file.txt
Append "Footer" after line 1:
sed '1a\Footer' file.txt
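`i` and `a` also accept pattern addresses, not just line numbers. A sketch assuming a hypothetical config.ini (GNU sed accepts the text directly after the command letter):

```bash
# Append a setting after every line that reads [database]
sed '/^\[database\]/a port=5432' config.ini

# Insert a comment before every line containing "password"
sed '/password/i # sensitive setting below' config.ini
```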
5. Save Changes to File
By default, `sed` writes to stdout. To edit a file in place, use `-i`:
sed -i 's/apple/orange/g' fruits.txt
Tip: use `-i.bak` to create a backup before editing:
sed -i.bak 's/apple/orange/g' fruits.txt
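The backup also gives you an easy way to review or undo an in-place edit. A sketch building on the command above:

```bash
# Compare the edited file against the backup created by -i.bak
diff fruits.txt.bak fruits.txt

# Roll the change back if it wasn't what you wanted
mv fruits.txt.bak fruits.txt
```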
Practical Uses of awk
`awk` treats each line as a record and splits it into fields. By default, fields are separated by whitespace.
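A tiny demonstration of records and fields (`NF` holds the number of fields on the current line):

```bash
echo "alpha beta gamma" | awk '{print NF, $1, $3}'
# -> 3 alpha gamma
```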
1. Print Specific Columns
Print the first and third fields from each line:
awk '{print $1, $3}' data.txt
Useful for log files or CSV-like data.
Example input (data.txt):
John 25 Engineer
Jane 30 Designer
Bob 35 Manager
Command:
awk '{print $1, $3}' data.txt
Output:
John Engineer
Jane Designer
Bob Manager
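When the number of columns varies, `$NF` always refers to the last field. A sketch against the same data.txt:

```bash
# Print the first and the last field of each line
awk '{print $1, $NF}' data.txt
# -> John Engineer
#    Jane Designer
#    Bob Manager
```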
2. Use a Custom Field Separator
For CSV files, use `-F` to define the delimiter:
awk -F',' '{print $2, $4}' users.csv
For tab-separated files:
awk -F'\t' '{print $1}' data.tsv
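The output separator can be changed as well, via the `OFS` built-in variable. A sketch assuming the users.csv mentioned above:

```bash
# Read comma-separated input, write pipe-separated output
awk -F',' 'BEGIN {OFS=" | "} {print $1, $2}' users.csv
```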
3. Filter Rows Based on Conditions
Print lines where the second field is greater than 30:
awk '$2 > 30' data.txt
Print lines where the first field is "Jane":
awk '$1 == "Jane"' data.txt
Combine conditions:
awk '$2 > 25 && $3 == "Engineer"' data.txt
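Conditions can also be regular-expression matches with the `~` operator. A sketch against the same data.txt:

```bash
# Lines whose third field starts with "Eng"
awk '$3 ~ /^Eng/' data.txt

# Lines containing "Designer" anywhere
awk '/Designer/' data.txt
```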
4. Process Headers and Summarize Data
Skip the header line:
awk 'NR > 1 {print}' file.csv
`NR` = Number of Records (the current line number)
Sum values in a column:
awk '{sum += $2} END {print "Total:", sum}' numbers.txt
Count lines:
awk 'END {print NR}' file.txt
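The same pattern extends to other aggregates. A sketch that averages the second column while skipping a header row (scores.txt is a hypothetical whitespace-separated file):

```bash
# Skip the header (NR > 1), accumulate column 2, print the average at the end
awk 'NR > 1 {sum += $2; n++} END {if (n > 0) print "Average:", sum / n}' scores.txt
```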
5. Format Output with Labels
awk '{print "Name: " $1 ", Age: " $2}' data.txt
Output:
Name: John, Age: 25
Name: Jane, Age: 30
Name: Bob, Age: 35
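For aligned, report-style output, `printf` gives finer control than `print`. A sketch using the same data.txt:

```bash
# Left-align the name in a 10-character column, right-align the age in 3
awk '{printf "%-10s %3d\n", $1, $2}' data.txt
```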
Combining `sed` and `awk` in Pipelines
You can chain both tools for advanced processing.
Example: Filter a web access log down to GET and POST requests, extract the client IP addresses, and count how often each appears:

sed -n '/"\(GET\|POST\) /p' access.log | \
  awk '{print $1}' | \
  sort | uniq -c | sort -nr

Breakdown:
- `sed -n '/.../p'`: prints only the lines containing a GET or POST request
- `awk '{print $1}'`: extracts the first field (the client IP in the common/combined log formats)
- `sort | uniq -c`: counts unique IPs
- Final `sort -nr`: sorts by count, descending
Common Pitfalls and Tips
- Use single quotes around `sed` and `awk` scripts to avoid shell expansion.
- In `sed`, the delimiter in `s///` can be changed (e.g., `s|old|new|g`) — useful when working with paths; see the sketch after this list.
- `awk` supports variables, loops, and functions — it's a full scripting language.
- Always test without `-i` first to preview changes.
- Be cautious with regex — `.` and `*` have special meanings.
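As mentioned in the delimiter tip above, switching the `s///` delimiter avoids escaping every slash in a path. A sketch with an illustrative paths.txt:

```bash
# Awkward: every / in the path has to be escaped
sed 's/\/usr\/local\/bin/\/opt\/bin/g' paths.txt

# Cleaner: use | (or any other character) as the delimiter
sed 's|/usr/local/bin|/opt/bin|g' paths.txt
```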
Final Thoughts
`sed` and `awk` are not just legacy tools — they're fast, lightweight, and perfect for automation, log parsing, and data munging. Start with simple substitutions and field extractions, then gradually explore pattern matching, conditions, and arithmetic.
Once you're comfortable, you'll find yourself reaching for them instead of writing longer scripts in Python or Perl — especially in shell pipelines.
Basically, if it involves text, `sed` and `awk` can probably help.