Custom Shell Scripts
Tutorial – Shell Scripting
You do not need to learn low-level programming languages to become a real Linux power user. Shell scripting is all you need.
Text processing, backups, image watermarking, movie editing, digital mapping, and database management are just a few of the many tasks where shell scripting can help you, even if you are just an end user.
This tutorial is the first installment in a series that explores why and how to write shell scripts – whatever your Linux needs and skills are.
No previous shell experience is necessary to use this tutorial. This installment describes what shell variables are, how the shell handles them, and some simple ways to use and process shell variables to get the job done. Before getting started, however, a brief introduction to a few fundamental concepts is necessary.
Scripts or Programs?
Software programs can either be compiled or interpreted. Applications like Firefox, digiKam, or LibreOffice fall into the first category. Compilation makes programs much faster, because it translates their human-readable source code to low-level instructions directly executable by a processor. Compiled programs, however, also take much longer and are more difficult to write.
Interpreted programs, which are also called scripts, are more limited and often slower. However, the learning curve and development time is remarkably shorter and smoother because, when launched, their source code is passed to a command interpreter, which parses and executes it one line at a time. This spares you from dealing with memory management, full declaration of variables, and a bunch of other low-level issues.
The default command interpreter on most GNU/Linux distributions, which is also available for other operating systems, is the Bash shell [1]. To be precise, "shell" refers to a whole category of interpreters. This tutorial covers Bash and uses the terms "shell" and "Bash" interchangeably, because Bash is the default shell in Linux. However, most of what you will learn in this series will be usable, without modifications, with all the other shells you can easily install on Linux systems.
You can use Bash interactively or in scripts. Interactive mode is what happens at any Linux command-line prompt, unless you set your Linux system to use another shell.
Working at the command line is enough for quick, one-time actions, but the real power of a shell, and the topic of this tutorial, is automation. You can save long sequences of instructions in a plain text file, and the shell will execute them as one software program. All you need to do to make that happen is to mark the file as executable (with the chmod command) and give it the right header. To see what I mean, here is the Bash script version of the "Hello World!" program:
#! /bin/bash
echo "Hello World!"
Save it into a file called hello, mark it executable, and type ./hello at the prompt (the ./ prefix is needed unless the file's directory is in your PATH), and it will answer just that: Hello World!. The first line is what marks the whole file as a script to be executed by the Bash interpreter, which on Linux is the compiled program installed as /bin/bash. The second line simply tells that interpreter to launch the echo program to output the Hello World! string.
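To try this yourself, the following commands build the script in a throwaway directory (created with mktemp so nothing in your home directory is touched), mark it executable with chmod, and run it:

```shell
cd "$(mktemp -d)"        # work in a scratch directory

# Create the two-line script with a heredoc
cat > hello <<'EOF'
#! /bin/bash
echo "Hello World!"
EOF

chmod +x hello           # mark the file as executable
./hello                  # prints: Hello World!
```

The ./ prefix tells the shell to run the hello file from the current directory instead of searching $PATH for it.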
Experimenting with Shell Variables
The Bash environment makes available both predefined parameters and ways to create and process custom parameters. Strictly speaking, for Bash, a parameter is "an entity that stores values," and a variable is a parameter identified by a name.
The shell syntax for variables may seem weird, but it is easy to use. To assign a value to a variable, just write its name, followed by an equals sign and the value. To use the value, you must prepend a dollar sign to the name. In both cases, remember to avoid spaces before and after the equals sign:
MYNAME=Marco # value is assigned
YOURNAME=$MYNAME # value is *used*
(Note: Throughout the series, #> means the Linux command-line prompt; whatever follows the prompt is anything you choose to type there, not just into a script, to quickly check for yourself what you've learned.)
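For instance, you can check the assignment rules right at the prompt (the variable names below are just examples):

```shell
MYNAME=Marco                  # value is assigned
YOURNAME=$MYNAME              # value is *used* (copied into YOURNAME)
echo "My name is $YOURNAME"   # prints: My name is Marco
```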
Besides a value, variables can also have attributes that specify, or limit, what you can do with them to avoid errors. A variable's attributes must be declared before using it in any way. The statement
declare -i WEIGHT

declares that the variable WEIGHT can only have integer (-i) values. Consequently, these two statements, which have the same meaning for a human reader,
#> WEIGHT=80
#> WEIGHT=eighty
do not mean the same thing to the shell. Only the first statement stores the number 80. In the second, Bash evaluates the right-hand side as an arithmetic expression; because eighty is not a number (or the name of a variable that holds one), WEIGHT silently becomes 0. Either way, the character string is never stored in something that should only hold integers.
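A quick sketch of how an integer variable behaves in practice (WEIGHT is the variable declared above; the specific values are arbitrary):

```shell
declare -i WEIGHT

WEIGHT=80         # stored as the integer 80
echo "$WEIGHT"    # prints: 80

WEIGHT=10*8       # the right-hand side is evaluated as arithmetic
echo "$WEIGHT"    # prints: 80

WEIGHT=eighty     # not a number: evaluates to 0, the string is never stored
echo "$WEIGHT"    # prints: 0
```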
Another handy attribute is -r (readonly):
#> declare -i -r WEIGHT=90
Adding that attribute makes any further attempt to assign values to the WEIGHT variable fail:
#> WEIGHT=70
bash: WEIGHT: readonly variable
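You can verify the effect inside a script without aborting it by attempting the assignment in a subshell (the parentheses and the error-message redirect below are my additions, not part of the original example):

```shell
declare -i -r WEIGHT=90

# The subshell inherits the readonly attribute, so the assignment fails;
# 2>/dev/null hides the "readonly variable" complaint
( WEIGHT=70 ) 2>/dev/null || echo "WEIGHT is locked at $WEIGHT"

echo "$WEIGHT"    # prints: 90
```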
Special Characters and Variables
The Bash shell has a lot of predefined variables and special characters. So many, in fact, that there is no way (or need, frankly) to describe them all in one article. For now, I will only show a few samples to give you an idea of what is available:
$PWD and $OLDPWD store the working directory where the script is currently running and the previous directory (old) where it was before that, respectively. $HOSTTYPE and $OSTYPE contain the CPU architecture and operating system on which Bash is executing. On my own system, they have the following values:
#> echo $OSTYPE running on $HOSTTYPE
linux-gnu running on x86_64
The most used predefined shell variable is surely $HOME, which represents the home directory of the user running the script. The same value is also represented by the tilde character, making these two statements equivalent:
#> BACKUP_DIR="$HOME/backup"
#> BACKUP_DIR=~/backup

(Note that the tilde must remain outside of any quotes; a quoted "~" is not expanded by the shell.)
The tilde character is not just a synonym of $HOME, however: ~+ is equivalent to $PWD, and ~- to $OLDPWD.
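A short demonstration (the directories are arbitrary; as with ~, the ~+ and ~- forms must stay unquoted to be expanded):

```shell
cd /usr
cd /etc           # now $PWD is /etc and $OLDPWD is /usr

HERE=~+           # expands like $PWD
THERE=~-          # expands like $OLDPWD

echo "$HERE"      # prints: /etc
echo "$THERE"     # prints: /usr
```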
Variables like $HOME, $PWD, and $OLDPWD may seem superfluous to a novice, but they are really important in many shell scripts. It is thanks to $HOME that a script can always save files in the same place ($BACKUP_DIR) – no matter in which directory the script is executed.
$PWD is equally useful whenever a script moves from folder to folder during execution and you want to know each time that happens.
Previously, I mentioned that a shell interpreter parses your script one line at a time. Next, the shell splits the current line into words and then looks at them one at a time to decide what to do next. Actually, the shell does that in other cases as well, but I'll deal with those cases later.
To split a line, or anything else, into single words, the shell needs to know what, exactly, separates one word from another. The answer is in the special shell variable IFS (internal field separator): Any single character inside IFS marks the end of one word and the beginning of the next.
The default IFS value is the character triplet space, tab, and newline, but you can change the default to whatever you want.
This capability is extremely handy whenever you need to separate fields inside each line of a CSV file or in any generic string.
Here is an example, taken straight from my recent Linux Magazine tutorial on how to embed dynamic headlines in any Linux desktop [2]. If you have a variable called HEADLINES that contains several fields separated by the pipe character (|),
Red Hat Reports $823 Revenue|Navarro: Kavanaugh should step aside|Debian, Ubuntu... Leaving Users Vulnerable
you can save each of those headlines separately, with the single command
IFS='|' read -r -a NEWS <<< "$HEADLINES"
which, albeit cryptic, means:
- Set IFS to the pipe character.
- Split $HEADLINES into separate fields, delimited by $IFS.
- Copy each of those fields into a separate element of an array called NEWS.
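Putting the steps above together (HEADLINES holds the sample string shown earlier, and NEWS is the resulting array):

```shell
HEADLINES='Red Hat Reports $823 Revenue|Navarro: Kavanaugh should step aside|Debian, Ubuntu... Leaving Users Vulnerable'

# IFS is changed only for the duration of the read command itself
IFS='|' read -r -a NEWS <<< "$HEADLINES"

echo "${#NEWS[@]}"   # prints: 3
echo "${NEWS[0]}"    # prints: Red Hat Reports $823 Revenue
```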
I will cover arrays and other more complex shell data structures in the next tutorial installment. For now, I'll look at another great property of the IFS variable.
If the IFS value is null, no word splitting occurs. At first glance, this may seem a totally useless, or irrelevant, property. In practice, the opposite is true. Setting IFS to null is the standard Bash way to read and parse text files, one complete line at a time. This other snippet of code, from my previous tutorial [2], shows how to do it:
while IFS= read -r line
do
    # process the current COMPLETE line, saved in $line
done < $SOMEFILE
Because IFS is set to null (there is nothing immediately after the equals sign), each full line of text from the file $SOMEFILE is loaded as a single string into the shell variable $line. If IFS has any other value, each line of $SOMEFILE will be split into different words before your script ever knows what the whole line looks like.
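As a self-contained sketch of the loop (SOMEFILE here points to a throwaway file created just for the demonstration):

```shell
SOMEFILE=$(mktemp)
printf 'first line\nsecond   line\n' > "$SOMEFILE"

LINES=0
while IFS= read -r line
do
    LINES=$((LINES + 1))
    echo "read: [$line]"    # inner whitespace survives intact
done < "$SOMEFILE"

rm "$SOMEFILE"
echo "$LINES"               # prints: 2
```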