Bash Index : S - The 'S' Bash commands : description, flags and examples


shift

shift positional parameters

Example

#!/usr/bin/env bash

while [ "$#" -gt 0 ]; do
	echo "$1"
	shift
done
./myScript.sh {a..c}
a
b
c

subscription-manager

Usage

Register and subscribe are distinct actions :
  • register : declare the existence of a machine to Red Hat so that it can be managed by their tools
  • subscribe : attach a subscription (for services, updates, ...) to a registered machine

Example

Register + subscribe a machine to Red Hat, as root :

  1. check subscription status (not subscribed so far) :
    subscription-manager status
    +-------------------------------------------+
        System Status Details
    +-------------------------------------------+
    Overall Status: Unknown
  2. subscription-manager config --server.proxy_hostname=10.27.26.37 --server.proxy_port=3128
  3. Depending on context :
    • subscription-manager register --username kevin --password 'P@ssw0rd' --auto-attach
    • subscription-manager register --username kevin --password 'P@ssw0rd' --org=1234567
    	Registering to: subscription.rhsm.redhat.com:443/subscription
    	The system has been registered with ID: aaaaaaa8-bbb4-ccc4-ddd4-eeeeeeeeee12
    	The registered system name is: myNewRedHatMachine
  4. check :
    subscription-manager status
    +-------------------------------------------+
       System Status Details
    +-------------------------------------------+
    Overall Status: Current
    
    System Purpose Status: Not Specified		not relevant
    
    
    subscription-manager list
    +-------------------------------------------+
        Installed Product Status
    +-------------------------------------------+
    Product Name:   Red Hat Enterprise Linux for x86_64
    Product ID:     479
    Version:        9.1
    Arch:           x86_64
    Status:         Subscribed
    Status Details:
    Starts:         01/01/2023
    Ends:           01/01/2024

Attach to a specific pool of licences :

subscription-manager attach --pool=1234567890abcdef1234567890abcdef
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard

"Detach" a machine from Red Hat's subscription system, i.e. unsubscribe + unregister (source) :

You may run all actions below at once :
subscription-manager remove --all; subscription-manager unregister; subscription-manager clean

Detailed procedure :

  1. remove subscriptions, aka unsubscribe :
    subscription-manager remove --all
    Consumer profile "aaaaaaa8-bbb4-ccc4-ddd4-eeeeeeeeee12" has been deleted from the server. You can use command clean or unregister to remove local profile.
    Should you wish to unsubscribe from a specific pool instead of all of them :
    subscription-manager remove --pool=poolNumber
  2. unregister the machine :
    subscription-manager unregister
    System has been unregistered.
  3. clean local data (caches, ...) :
    subscription-manager clean

swapon / swapoff

Usage

Enable / disable devices and files for paging and swapping

Flags

Flag Usage
-s --summary display swap usage summary by device. Equivalent to cat /proc/swaps
This output format is DEPRECATED in favor of --show that provides better control on output data.
--show[=column...] display table of swap areas. To view all available columns :
swapon --show=NAME,TYPE,SIZE,USED,PRIO,UUID,LABEL
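
Example

Create and enable a swap file, then disable it (a minimal sketch, to run as root ; /swapfile and the 1G size are arbitrary, and not every filesystem supports swap files) :

fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon --show		the new swap area should be listed
swapoff /swapfile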

sar

sar stands for System Activity Report. It comes from sysstat. A very basic usage would be :
sar interval count
This will generate a report every interval seconds, up to count reports. Without additional settings, sar defaults to making a CPU usage report :
sar 2 3
Linux 4.19.0-0.bpo.4-amd64 (myWorkstation)	05/14/2019	_x86_64_		(2 CPU)

05:36:16 PM	CPU	%user	%nice	%system	%iowait	%steal	%idle
05:36:18 PM	all	3.89	0.00	1.81	0.00	0.00	94.30		2 seconds increment, 3 reports
05:36:20 PM	all	4.62	0.00	1.03	0.00	0.00	94.36
05:36:22 PM	all	6.19	0.00	2.84	0.00	0.00	90.98
Average:	all	4.90	0.00	1.89	0.00	0.00	93.21

systemd-analyze

Usage

Analyze system boot-up performance

Flags

Flag Usage
blame print a list of all running units, ordered by the time they took to initialize. This information may be used to optimize boot-up times.
The output might be misleading as the initialization of one service might be slow simply because it waits for the initialization of another service to complete.
critical-chain unit print a tree of the time-critical chain of units (for each of the specified units or for the default target (?) otherwise).
In the output :
  • @value : is the time after which the unit is active or started
  • +value : is the time the unit takes to start
- Looks like this is read from bottom to top
- The '+' times are the duration of each step
- The '@' times are the cumulated duration since "instant 0". They don't perfectly sum up because :
	- the initialization of one service might depend on socket activation
	- units execute in parallel
plot print an SVG graphic detailing which system services have been started at what time, highlighting the time they spent on initialization :
systemd-analyze plot > /path/to/result.svg
verify load unit files and print warnings if any errors are detected. Files specified on the command line will be loaded, but also any other units referenced by them (example)

Example

systemd-analyze verify :

systemctl status smbd
 smbd.service - Samba SMB Daemon
	Loaded: loaded (/lib/systemd/system/smbd.service; enabled; vendor preset: enabled)
	Active: active (running) since Thu 2018-11-29 08:36:29 CET; 1 weeks 0 days ago
	
systemd-analyze verify /lib/systemd/system/smbd.service
dev-mapper-hostname\x2d\x2dvg\x2dswap_1.swap: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dswap_1.device. Stopping, too.
var.mount: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dvar.device. Stopping, too.
tmp.mount: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dtmp.device. Stopping, too.
No clue whether this is normal / bothersome or not. Investigating...
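
systemd-analyze critical-chain (the unit name below is only an example) :

systemd-analyze critical-chain
systemd-analyze critical-chain multi-user.target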

shutdown

Usage

Halt, power-off or reboot the machine.
More precisely : by default shutdown is used to change from the multi-user state (state 2) to single-user state (state s (or S), where only the console has access to the operating system).
ls -l $(which shutdown)
lrwxrwxrwx 1 root root 14 Jun 13 22:20 /sbin/shutdown -> /bin/systemctl
This command is reserved to root.

shutdown [options] [time] [wallMessage]

What's the difference between halt and power-off (source) ?

Both will stop the operating system, then...
halt
display a stop screen like System halted. At that point, it is safe to press the physical power button.
power-off
send an ACPI command to signal the PSU to disconnect main power.
In practice, distributions often define aliases to shutdown with different defaults, which is why the observed behavior is not always the same.

Flags

Flag Usage
[time] one of :
  • now
  • hh:mm : to specify the shutdown time
  • +m : shutdown in m minutes (now is an alias for +0)
  • by default : +1
-c cancel a pending shutdown. Shutdowns specified with a now or +0 time value can not be cancelled.
-H --halt Halt the machine
-P --poweroff Power-off the machine (the default)
-r --reboot reboot the machine
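
Example

Schedule a reboot in 5 minutes with a wall message, then cancel it (the message is arbitrary) :

shutdown -r +5 'Reboot for maintenance in 5 minutes'
shutdown -c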

stdbuf

Usage

stdbuf options command
Run command with modified buffering operations for its standard streams.
Some man pages (used to) show this example :
tail -f access.log | stdbuf -oL cut -d aq aq -f1 | uniq
WTF does this aq aq mean ? Is it an advanced cut option ? Let's find out in the cut manpage.
No reference to aq aq in the documentation, but we can still see in the See also section :
info coreutils aqcut invocationaq

Looks like someone had a hard time importing man pages having single quotes '.
This is confirmed by reading the documentation in a terminal.

This typo has been copy-pasted as-is many times.

Flags

Flag Usage
-omode --output=mode adjust standard output stream buffering to mode

Buffering modes :

  • L : the corresponding stream will be line buffered. This option is invalid with standard input.
  • 0 : the corresponding stream will be unbuffered

Example

Immediately display unique entries from access.log :

tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq

journalctl -u ssh | cut -d ' ' -f6- | uniq
journalctl -u ssh | stdbuf -oL cut -d ' ' -f6- | uniq

	==> both _seem_ to behave the same. Maybe "journalctl" is not a good candidate to experiment on this ;-)

script

Usage

Make typescript of terminal session, which could be handy when writing tutorials or documenting an installation process, for example.

script options /path/to/typescript/file

Flags

Flag Usage
-a append to /path/to/typescript/file rather than overwriting it / creating a new file
-c command run command instead of an interactive shell
-f flush output after each write

Example

How to use it :

  1. script /path/to/typescript/file
  2. the commands you type and their output are recorded into /path/to/typescript/file, but not in real time unless -f is used
  3. stop recording :
    • exit
    • CTRL-d
  4. You can now view the typescript :
    • cat /path/to/typescript/file
    • less -R /path/to/typescript/file
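
Record a single command non-interactively (a sketch ; the commands and file path are arbitrary) :

script -f -c 'free -h; df -h' /tmp/sysinfo.typescript
less -R /tmp/sysinfo.typescript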

seq

seq is often used in for loops like :
for i in $(seq 1 10); do 
which works fine but has the pitfall of uselessly spawning a subshell (details). In most cases (especially in scripts / loops), shell brace expansion should be used instead (details), as illustrated below.
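A minimal illustration of the brace expansion alternative (same output, no subshell ; bounds are arbitrary) :
for i in {1..10}; do echo "$i"; done
for i in {0..20..5}; do echo "$i"; done		with an increment of 5 (Bash 4+)
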
Print a sequence of numbers :
ascending
seq 4 8
4
5
6
7
8
ascending with interval
seq 5 3 17
5
8
11
14
17
descending, with mandatory interval
seq 17 -3 5
17
14
11
8
5

shred

Usage

Overwrite the specified file repeatedly, in order to make it harder for even very expensive hardware probing to recover the data, and optionally delete it.
shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:
  • log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, ...)
  • file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems
  • file systems that make snapshots, such as Network Appliance's NFS server
  • file systems that cache in temporary locations, such as NFS version 3 clients
  • compressed file systems

Regarding Ext3

  • In data=journal mode (which journals file data in addition to just metadata), the above disclaimer applies and shred is of limited effectiveness.
  • In both the data=ordered (default) and data=writeback modes, shred works as usual.
  • journaling modes can be changed by adding data=something to the mount options for a particular file system in /etc/fstab.
There is no consensus on how to REALLY, securely, erase a disk. Things get even worse when it comes to SSDs. The only good advice around seems to be to :
  1. encrypt the drive itself
  2. when comes the time to dispose of the disk, destroy the LUKS header (source, more about the LUKS header). The encrypted data remains in place and its protection is only as good as the encryption used.
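An illustration of step 2 (a sketch only : cryptsetup must be installed, /dev/sdX is a placeholder, and the operation is irreversible, so triple-check the device name) :
cryptsetup luksErase /dev/sdX		erases all keyslots, making the encrypted data permanently inaccessible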
Last minute thought : for the sake of your mental health, keep in mind that the extensiveness of your secure deletion process should match the level of attack you are realistically exposed to.

Flags

Flag Usage
-n iterations --iterations=iterations overwrite iterations times instead of the default (3)
-u --remove truncate and remove file after overwriting
  • The default is not to remove the files because it is common to operate on device files like /dev/hda, which should not be removed.
  • Using --remove is safe (recommended!) when operating on regular files.
-v --verbose see shred working : successively overwriting the target file, then renaming it, then deleting it :
myTempFile=$(mktemp); echo "$myTempFile contains secret data" > "$myTempFile"; cat "$myTempFile"; shred -uv "$myTempFile"
-z --zero add a final overwrite with zeros to hide shredding

Example

shred -n 35 -z -u filename

Parameters :
-n 35
overwrite 35 times the file with random bytes
-z
then overwrite the file with zeros
-u
truncate then delete the file

ss

Usage

ss displays socket statistics and is the modern replacement for netstat.

ss -punta is a good equivalent to netstat -laputen (which is deprecated)

Flags

Flag Usage
-a --all Display all sockets
-e --extended Show detailed socket information. The output format is:
uid:uidNumber ino:inodeNumber sk:cookie
  • uidNumber : the user id the socket belongs to
  • inodeNumber : the socket's inode number in VFS
  • cookie : a uuid of the socket
-l --listening Display listening sockets only
-n --numeric Do not resolve service names : show ports numerically
-r --resolve resolve numeric address/ports
-p --processes Show process using socket
-t --tcp Display only TCP sockets
-u --udp Display only UDP sockets
-x --unix Display only Unix domain sockets
(filters)

Filters :

State filters :

state anyTcpState, to be chosen from (source) :
  • All standard TCP states: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listen and closing.
  • all : for all the states
  • connected : all the states except for listen and closed
  • synchronized : all the connected states except for syn-sent
  • bucket : states, which are maintained as minisockets, i.e. time-wait and syn-recv
  • big : opposite to bucket

Address filters :

  • dst addressPattern : matches remote address and port
  • src addressPattern : matches local address and port
  • dport operator anyPort : compares remote port to a number
  • sport operator anyPort : compares local port to a number
  • autobound : checks that socket is bound to an ephemeral port
With :
  • addressPattern follows the a.b.c.d:port format. If either the IP address or the port part is absent or replaced with *, this means wildcard match.
  • operator is one of <=, >=, ==, ... To make this more convenient for use in unix shell, alphabetic FORTRAN-like notations le, gt, etc. are accepted as well.
  • Expressions can be combined with and, or and not, which can be abbreviated in C-style as &, &&, ...
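
Example

A few filter combinations (addresses and ports below are arbitrary) :

ss -nt state established '( dport = :443 or sport = :443 )'		established TCP connections involving port 443
ss -nt dst 192.168.0.0/16		TCP connections to a given subnet
ss -nlt sport le :1024			TCP sockets listening on privileged ports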

sudo

Flags

Flag Usage
-H --set-home change the value of the $HOME environment variable to the home directory of the target user (i.e. mostly root, so /root). Normally, using sudo does not alter $HOME (details)
bash -c 'echo $USER $HOME'; sudo bash -c 'echo $USER $HOME'; sudo -H bash -c 'echo $USER $HOME'
This can be the default behavior, so the command above may not highlight anything.
-i --login simulate initial login. This runs the shell specified in /etc/passwd for the target user as a login shell. This means that login-specific resource files such as .profile or .login will be read by the shell.
If a command is specified, it is passed to the shell for execution. Otherwise, an interactive shell is executed.
full details : man -P 'less -p --login' sudo
-k --reset-timestamp invalidate the user's cached credentials instead of waiting for the cache to expire (details)
-l --list list the allowed and forbidden commands for the invoking user (or user specified with -U) on the current host.
-p prompt --prompt=prompt Use a custom password prompt with optional escape sequences (details)
-S --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device.
  • this allows highly insecure things like :
    echo P@ssw0rd | sudo -S someCommand
  • this option automagically helped fix an issue with sudo in Ansible :
    • - hosts: srvOracle
        tasks:
          - name: Do some SQL
            shell: |
              sqlplus -s / as sysdba << EOC
              select * from dual;
              EOC
            become: yes
            become_user: oracle
            become_flags: "-i"
      sudo: no tty present and no askpass program specified
    • - hosts: srvOracle
        tasks:
          - name: Do some SQL
            shell: |
              sqlplus -s / as sysdba << EOC
              select * from dual;
              EOC
            become: yes
            become_user: oracle
            become_flags: "-iS"
      works like a charm
-s [command] --shell [command] Run the shell specified by the SHELL environment variable if it is set, or the shell specified by the invoking user's password database entry. If a command is specified, it is passed to the shell for execution via its -c option. Otherwise, an interactive shell is executed.
-U bob --other-user bob To be used with -l : list bob's privileges instead of those of the invoking user. Advanced privileges required.
-u kevin --user kevin Run the specified command as the user kevin.

Example

Chain several commands within a single sudo :

sudo bash -c 'whoami; date; echo Hello World'
root
Tue Feb 27 02:53:52 PM CET 2024
Hello World
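
List privileges and switch users (a sketch ; user names are arbitrary) :

sudo -l			allowed / forbidden commands for the invoking user
sudo -l -U bob		same, for bob (requires appropriate privileges)
sudo -u kevin whoami	run a command as kevin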

strings

Usage

Find printable strings in files.

Example

strings /bin/sh | grep error
strerror
%s: I/O error
error setting limit (%s)
Syntax error: %s

strace

Installed with the Debian package strace

Usage

Trace system calls and signals
Each output line is made of 3 fields :
  1. the first entry on the left is the system call being performed
  2. the part in parentheses contains the arguments to the system call
  3. the right side of the equals sign is the return value of the system call

Flags

Flag Usage
-c count time, calls, and errors for each system call and report a summary on program exit, suppressing the regular output
This can be a good starting point when debugging stuff : once you've listed where errors are, use strace -e type1,type2,type3
-e expression use expression to select which events to trace / how to trace them :
  • -e raw=select : print raw, undecoded arguments for the specified set of system calls (here : select)
  • -e trace=%file : trace all system calls which take a file name as an argument
  • -e trace=%network : trace all the network related system calls
  • -e select,stat,openat,lstat : trace only the system calls of the listed types
  • Filter by type of syscall
The syntax without a % leading keyword (like -e trace=keyword) is deprecated
-o file Write the trace output to the file file rather than stderr
-p PID Attach to the process with the process ID PID. You may attach quickly to a process with :
strace -p $(ps aux | awk '/[s]omething/ { print $2 }') [other strace options]
with something : any pattern in the output of ps uniquely matching the process to trace. This can be a binary name, a data file, ...
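
For instance (ls and the output file below are arbitrary) :
strace -c ls > /dev/null			syscall counts / times / errors summary, written to stderr
strace -e trace=%file -o /tmp/ls.trace ls	only file-related syscalls, written to /tmp/ls.trace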

System calls :

Flag Usage
accept() When a request on a listening socket is refused / incomplete, accept() returns -1. Otherwise, it creates a new connected socket, and returns a new file descriptor referring to that socket.
fstat() Return metadata about a file (in the form of a "stat struct"). This file can be specified either by path for stat(), or by file descriptor for fstat().
recvfrom() Receive a message from a socket
select() Programs use select() to monitor file descriptors (specified as 2nd parameter) on a system for activity. select() will block for a configurable period of time waiting for activity on the supplied file descriptors, after which it returns with the number of descriptors on which activity took place.
This can be remembered as "wait for activity or timeout, and report where activity occurred"

stat

Usage

display file or file system status

Time fields

stat myFile
  File: myFile
  Size: 26583103        Blocks: 51928      IO Block: 4096   regular file
Device: 811h/2065d      Inode: 13          Links: 1
Access: (0600/-rw-------)  Uid: ( 1000/   bob)   Gid: ( 1000/   bob)
Access: 2024-09-24 16:28:57.600939443 +0200
Modify: 2024-07-24 06:41:34.000000000 +0200
Change: 2024-09-22 18:42:31.159554215 +0200
 Birth: 2024-08-16 10:39:18.755412316 +0200
Field Changes when... Details
Access the file was last read
Modify changes were written to the file
Change the file metadata changed
Birth the file is created
This field (source) :
  • is not standard
  • is filesystem-dependent
  • may not be populated

Flags

Flag Usage
-c format
--format=format
with format being :
  • %n file name
  • %x time of last access, human-readable
  • %y time of last data modification, human-readable
  • %Y time of last data modification, seconds since Epoch
  • %z time of last status change, human-readable
  • ...

Example

Get a file's last access, last change

stat -c 'LAST ACCESS:%x LAST CHANGE:%z FILE:%n' file

split

Usage

Split files into chunks
split [options] fileToSplit prefixOfSplitFiles

Flags

Flag Usage
-a suffixLength Use suffixLength letters to form the suffix portion of the filenames of the split file. Make this suffix long enough so that there can be at least as many suffixes as splits.
-d use numeric suffixes (aka digits) starting at 0, not alphabetic
-l nbLines Specify the number of lines in each resulting file piece.
-n chunkSpec
--number=chunkSpec
specifies how many chunks you want fileToSplit to be split into (and how). chunkSpec can be :
n : split fileToSplit into n chunks
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n 3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
May split lines !
k/n : split as above, but instead of writing results into n files, output the values of the kth chunk to stdout. Use case : read/extract section of fileToSplit.
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n 2/3 "$dataFile"; rm "$dataFile"
l/n : like n without splitting lines
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n l/3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
l/k/n : like k/n without splitting lines
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n l/2/3 "$dataFile"; rm "$dataFile"
r/n : like l/n but with round robin distribution
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n r/3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
r/k/n : split like r/n + the "k" effect : output the values of the kth chunk to stdout :
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n r/2/3 "$dataFile"; rm "$dataFile"

Example

Split and re-assemble :

workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir" && cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); dd if=/dev/random of="$dataFile" bs=1K count=10 status=none; split -n 3 -d "$dataFile" "$chunkPrefix"; ls -l "$chunkPrefix"*; cat "$chunkPrefix"* > "$dataFile"_REASSEMBLED; md5sum "$dataFile"*; [ "$PWD" = "$workDir" ] && rm "$chunkPrefix"* "$dataFile"*; cd - && rmdir "$workDir"
-rw------- 1 bob users 3413 Dec  9 12:05 chunk_00
-rw------- 1 bob users 3413 Dec  9 12:05 chunk_01
-rw------- 1 bob users 3414 Dec  9 12:05 chunk_02
9e5dfea5493c78eaee1e1e7158d063ad  ./tmp.4iNzQxgGae			checksums match 
9e5dfea5493c78eaee1e1e7158d063ad  ./tmp.4iNzQxgGae_REASSEMBLED	meaning we've successfully re-assembled the chunks into an exact copy of the original file

Split myBigFile into 1000 lines chunks :

split -l 1000 -a 3 myBigFile chunk_
Will create chunks : chunk_aaa, chunk_aab, chunk_aac, chunk_

source (or .)

Usage

source someFile (or . someFile) reads and executes commands from someFile in the current shell context.

What's specific with source ?

  • someFile is executed even though its execution bit is not set
  • someFile is executed within the current shell context, which allows :
    • loading variables into the current interactive shell session :
      1. source myConfigFile
      2. other commands using variables
    • using someFile as a configuration file for a script, without leaving variables in the shell environment once the script is over :
      sourcedFile=$(mktemp sourcedFile.XXXXXXXX); echo 'value=42' > "$sourcedFile"; chmod -x "$sourcedFile"; scriptFile=$(mktemp scriptFile.XXXXXXXX); echo "source $sourcedFile; echo 'sourced'; echo \"during the script : \\\$value='\$value'\"" > "$scriptFile"; chmod +x "$scriptFile"; ls -l "$sourcedFile" "$scriptFile"; cat "$sourcedFile" "$scriptFile"; ./"$scriptFile"; echo "after the script : \$value='$value'"; rm "$sourcedFile" "$scriptFile"
      value=42	cat "$sourcedFile"
      source sourcedFile.akr6paIl; echo 'sourced'; echo "during the script : \$value='$value'"	cat "$scriptFile"
      sourced		started running the script
      during the script : $value='42'	the variable exists in the script context
      after the script : $value=''	no value anymore
      Since a dedicated subshell (having its own context) is spawned when executing a script, if you want variables to survive for future scripts or commands after the script ends, you'll have to :
      • export variables within the script (details)
      • or source the script
  • The return status is the exit status of the last command executed from someFile, or zero if no commands are executed.

Trying to source a file having DOS line endings led the shell to complain about syntax errors on EVERY line. Consider converting line endings to the Unix format.

About exported variables :

configFile='./script.conf'; scriptFile='./script.sh'; echo -e '#!/usr/bin/env bash\nvar1=value1\nexport var2=value2' > "$configFile"; echo -e '#!/usr/bin/env bash\necho "\tvar1 = $var1"\necho "\tvar2 = $var2"' > "$scriptFile"; echo -e "\n'source'd config file :"; cat "$configFile"; source "$configFile"; echo -e "\nCommand line (current shell context) :\n\tvar1 = $var1\n\tvar2 = $var2\n\nScript (subshell context) :"; bash "./$scriptFile"; rm "$configFile" "$scriptFile"
'source'd config file :
#!/usr/bin/env bash
var1=value1			not exported
export var2=value2

Command line (current shell context) :
	var1 = value1		exists 
	var2 = value2

Script (subshell context) :
	var1 =			unset 
	var2 = value2

Is a shebang required in source'd configuration files (source) ?

If you have :
  • script.conf :
    variable='value'
  • script.sh :
    #!/usr/bin/env bash
    
    source script.conf
    
    ... do something with "$variable" ...

Then no shebang is required in the configuration file script.conf. It may also be a safe practice to remove its execution permissions.


sort

Usage

Flags

Flag Usage Example
-b --ignore-leading-blanks (explicit)
-h sort human numeric values echo -e "1M\n1G\n10K\n2K" | sort -h
2K
10K
1M
1G
-k keyDef
--key=keyDef
sort data by keyDef, whose format is : F[.C][opts][,F[.C][opts]]
  • F : field number (aka column), 1-indexed
    assumes columns are whitespace or TAB-separated, otherwise use -t
  • C : char position in field, 1-indexed
    • start : left of the keyDef : F[.C][opts][,F[.C][opts]]
    • stop : right of the keyDef : F[.C][opts][,F[.C][opts]], defaults to end of line
    • without -b and -t : assume field separator = SPACE
  • opts : ordering option(s)
    • override global ordering option for that key
    • 1 or more of bdfgiMhnRrV
      • see Ordering options: in the man page
      • man -P 'less -p "Ordering options:"' sort
-n sort numerically. Default is alphabetically.
-o outputFile
--output=outputFile
Write result to outputFile instead of standard output
-r --reverse sort in reverse order (i.e. : descending).
Default sorting order is ascending.
echo -e "a\nb\nc" | sort -r
c
b
a
-R --random-sort shuffle, but group identical keys.
See shuf
echo -e "a\na\nb\nc" | sort -R
b
c
a
a
-t 'x'
--field-separator 'x'
specify the field separator when using -k. Default is whitespace or TAB sort -nr -t ':' -k3 /etc/passwd | head -10
-T dir
--temporary-directory=dir
use dir for temporaries, not $TMPDIR or /tmp. Multiple options specify multiple directories
-u sort unique : don't display duplicated lines. sort -u is equivalent to sort | uniq
-V --version-sort natural sort of (version) numbers echo -e "1.10\n1.2\n1.3\n1.1" | sort -V

Example

Sort occurrences of a URL from an Apache access log by decreasing order :

grep ' 500 ' 2012-07-03-apache-access.log | cut -d ' ' -f 11 | sort | uniq -c | sort -nr | less

All the magic is in the uniq -c prior to sorting.

Delete duplicate lines from myFile :

fileWithDuplicates='myFile'; tmpFile=$(mktemp); mv "$fileWithDuplicates" "$tmpFile"; sort -u "$tmpFile" -o "$fileWithDuplicates"; rm "$tmpFile"

Sort arguments provided on a single line

  • echo '2 20 3 1' | tr ' ' '\n' | sort -n | tr '\n' ' '
    1 2 3 20
  • echo 'banana,coconut,apple' | tr ',' '\n' | sort | tr '\n' ','
    apple,banana,coconut,
The result string has a trailing separator, which can be removed with substring expansion.

sort based on substrings or multiple columns (source) :

  • Whether data is sorted numerically, alphabetically, ascending, descending, is just a matter of adding the appropriate ordering option(s).
  • to sort version numbers, consider sort -V

sort numbers ascending

cat << EOF | sort
51
63
42
EOF
42
51
63

sort by a specific digit :

sort by the 2nd digit, ascending
cat << EOF | sort -k1.2
51
63
42
EOF
51
42
63
sort by the 2nd digit, descending
cat << EOF | sort -k1.2r
51
63
42
EOF
63
42
51

sort by the 2nd then 3rd digit, ascending

cat << EOF | sort -k1.2,1.3
8120
7268
9147
EOF
8120
9147
7268

sort by column 2, then by column 1

for i in {1..5}; do echo "$(($RANDOM % 9)) $(($RANDOM % 9))"; done | sort -n -k2 -k1
3 0
1 1		sorting on column 1 only makes sense
6 1		when items of column 2 are equal
2 4
1 7

snmpwalk

Usage

snmpwalk -On -c snmpCommunity -v snmpVersion host OID

Flags

Flag Usage
-O Output formatting options :
-On : displays the OID numerically

Example

snmpwalk -On -c foo -v 2c 10.44.36.253 1.3.6.1.4.1.15497.1.1.1.11


shuf

Usage

Shuffle the input rows

Flags

Flag Usage
-e Consider every command line parameter as an input row
-i min-max
--input-range=min-max
Take numbers between min and max as input options to choose from
-n numLines Display at most numLines lines

Example

Shuffle string parameters :

shuf -e A Z E R T Y may output Z R E A T Y or T R Y E Z A

Generate numberOfRandomNumbers random numbers within a specified range (source) :

shuf -i rangeMin-rangeMax -n numberOfRandomNumbers
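For instance, 3 random numbers between 1 and 100 (arbitrary bounds) :
shuf -i 1-100 -n 3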

shopt

Usage

Toggle shell options

Flags

Flag Usage
(none) optionName return the status of optionName : on / off
-s list the options that are set
-s optionName set the option optionName
-u list the options that are unset
-u optionName unset the option optionName

Options :

Option Usage
autocd a command name that is the name of a directory is executed as if it were the argument to cd
cdspell autocorrect minor typos while using cd
(What does shopt -s dirspell do?)
dirspell autocorrect minor typos during word completion on a directory name (provided the directory name has a trailing /)
extglob enable extended pattern matching features
(Is it safe to leave extglob enabled ?)
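
Example

Check, set and unset an option, here extglob :

shopt extglob		display its current status
shopt -s extglob	enable it
shopt -u extglob	disable it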

service

Usage

Manage daemons

Example

List status of all daemons :

service --status-all | less
This executes service serviceName status on all services, and returns :
  • [ + ] : service is running
  • [ - ] : service is stopped
  • [ ? ] : unknown / doesn't reply to the "service status" command

Manage a daemon :

service serviceName start|stop|restart|status