Bash Index : S - The 'S' Bash commands : description, flags and examples


subscription-manager

Usage

Register and subscribe are distinct actions :
  • register : declare the existence of a machine to Red Hat so that it can be managed by their tools
  • subscribe : attach a subscription (for services, updates, ...) to a registered machine

Example

Register + subscribe a machine to Red Hat, as root :

  1. check subscription status (not subscribed so far) :
    subscription-manager status
    +-------------------------------------------+
        System Status Details
    +-------------------------------------------+
    Overall Status: Unknown
  2. subscription-manager config --server.proxy_hostname=10.27.26.37 --server.proxy_port=3128
  3. Depending on context :
    • subscription-manager register --username kevin --password 'P@ssw0rd' --auto-attach
    • subscription-manager register --username kevin --password 'P@ssw0rd' --org=1234567
    	Registering to: subscription.rhsm.redhat.com:443/subscription
    	The system has been registered with ID: aaaaaaa8-bbb4-ccc4-ddd4-eeeeeeeeee12
    	The registered system name is: myNewRedHatMachine
  4. check :
    subscription-manager status
    +-------------------------------------------+
       System Status Details
    +-------------------------------------------+
    Overall Status: Current
    
    System Purpose Status: Not Specified		not relevant
    
    
    subscription-manager list
    +-------------------------------------------+
        Installed Product Status
    +-------------------------------------------+
    Product Name:   Red Hat Enterprise Linux for x86_64
    Product ID:     479
    Version:        9.1
    Arch:           x86_64
    Status:         Subscribed
    Status Details:
    Starts:         01/01/2023
    Ends:           01/01/2024

Attach to a specific pool of licences :

subscription-manager attach --pool=1234567890abcdef1234567890abcdef
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard

"Detach" a machine from Red Hat's subscription system, i.e. unsubscribe + unregister (source) :

You may run all actions below at once :
subscription-manager remove --all; subscription-manager unregister; subscription-manager clean

Detailed procedure :

  1. remove subscriptions, aka unsubscribe :
    subscription-manager remove --all
    Consumer profile "aaaaaaa8-bbb4-ccc4-ddd4-eeeeeeeeee12" has been deleted from the server. You can use command clean or unregister to remove local profile.
    Should you wish to unsubscribe from a specific pool instead of all of them :
    subscription-manager remove --pool=poolNumber
  2. unregister the machine :
    subscription-manager unregister
    System has been unregistered.
  3. clean (caches, ...) :
    subscription-manager clean

swapon / swapoff

Usage

Enable / disable devices and files for paging and swapping

Flags

Flag Usage
-s --summary display swap usage summary by device. Equivalent to cat /proc/swaps
This output format is DEPRECATED in favor of --show that provides better control on output data.
--show[=column...] display table of swap areas. To view all available columns :
swapon --show=NAME,TYPE,SIZE,USED,PRIO,UUID,LABEL
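
Example

A minimal sketch of creating, enabling, then disabling a swap file (the /swapfile path is just an example), as root :

fallocate -l 1G /swapfile	# reserve 1 GiB (dd also works)
chmod 600 /swapfile		# a swap file must not be world-readable
mkswap /swapfile		# format it as swap space
swapon /swapfile		# enable it
swapon --show			# check it is listed
swapoff /swapfile		# disable it when done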

sar

sar stands for System Activity Report. It comes from sysstat. A very basic usage would be :
sar interval count
This will generate a report every interval seconds, up to count reports. Without additional settings, sar defaults to making a CPU usage report :
sar 2 3
Linux 4.19.0-0.bpo.4-amd64 (myWorkstation)	05/14/2019	_x86_64_		(2 CPU)

05:36:16 PM	CPU	%user	%nice	%system	%iowait	%steal	%idle
05:36:18 PM	all	3.89	0.00	1.81	0.00	0.00	94.30		2 seconds increment, 3 reports
05:36:20 PM	all	4.62	0.00	1.03	0.00	0.00	94.36
05:36:22 PM	all	6.19	0.00	2.84	0.00	0.00	90.98
Average:	all	4.90	0.00	1.89	0.00	0.00	93.21
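
sar can report on other subsystems with the same interval count syntax; a couple of illustrative reports :

sar -r 2 3	# memory utilization report
sar -q 2 3	# run queue length and load averages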

systemd-analyze

Usage

Analyze system boot-up performance

Flags

Flag Usage
blame print a list of all running units, ordered by the time they took to initialize. This information may be used to optimize boot-up times.
The output might be misleading as the initialization of one service might be slow simply because it waits for the initialization of another service to complete.
critical-chain unit print a tree of the time-critical chain of units (for each of the specified units, or for the default target otherwise).
In the output :
  • @value : is the time after which the unit is active or started
  • +value : is the time the unit takes to start
- Looks like this is read from bottom to top
- The '+' times are the duration of each step
- The '@' times are the cumulated duration since "instant 0". They don't perfectly sum up because :
	- the initialization of one service might depend on socket activation
	- units are executed in parallel
plot print an SVG graphic detailing which system services have been started at what time, highlighting the time they spent on initialization :
systemd-analyze plot > /path/to/result.svg
verify load unit files and print warnings if any errors are detected. Files specified on the command line will be loaded, but also any other units referenced by them (example)

Example

systemd-analyze verify :

systemctl status smbd
 smbd.service - Samba SMB Daemon
	Loaded: loaded (/lib/systemd/system/smbd.service; enabled; vendor preset: enabled)
	Active: active (running) since Thu 2018-11-29 08:36:29 CET; 1 weeks 0 days ago
	
systemd-analyze verify /lib/systemd/system/smbd.service
dev-mapper-hostname\x2d\x2dvg\x2dswap_1.swap: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dswap_1.device. Stopping, too.
var.mount: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dvar.device. Stopping, too.
tmp.mount: Unit is bound to inactive unit dev-mapper-hostname\x2d\x2dvg\x2dtmp.device. Stopping, too.
No clue whether this is normal / a problem or not. Investigating...
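
The other verbs listed above can be tried directly; a quick sketch :

systemd-analyze			# overall boot time (kernel + userspace)
systemd-analyze blame | head	# slowest units first
systemd-analyze critical-chain	# time-critical chain of the default target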

shutdown

Usage

Halt, power-off or reboot the machine.
More precisely : by default shutdown is used to change from the multi-user state (state 2) to single-user state (state s (or S), where only the console has access to the operating system).
ls -l $(which shutdown)
lrwxrwxrwx 1 root root 14 Jun 13 22:20 /sbin/shutdown -> /bin/systemctl
This command is reserved to root.

shutdown [options] [time] [wallMessage]

What's the difference between halt and power-off (source) ?

Both will stop the operating system, then...
halt
display a stop screen such as System halted. At that point, it is safe to press the physical power button.
power-off
send an ACPI command to signal the PSU to disconnect main power.
In practice, distributions often define aliases to shutdown with different defaults, which is why the observed behavior is not always the same.

Flags

Flag Usage
[time] one of :
  • now
  • hh:mm : to specify the shutdown time
  • +m : shutdown in m minutes (now is an alias for +0)
  • by default : +1
-c cancel a pending shutdown. Shutdowns specified with a now or +0 time value can not be cancelled.
-H --halt Halt the machine
-P --poweroff Power-off the machine (the default)
-r --reboot reboot the machine
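
Example

A quick sketch (the delay and message are arbitrary), as root :

shutdown -r +10 'Rebooting for kernel upgrade in 10 minutes'	# schedule a reboot and warn logged-in users
shutdown -c							# cancel the pending shutdown
shutdown now							# power off immediately (the default action)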

stdbuf

Usage

stdbuf options command
Run command with modified buffering operations for its standard streams.
Some man pages (used to) show this example :
tail -f access.log | stdbuf -oL cut -d aq aq -f1 | uniq
WTF does this aq aq mean ? Is it an advanced cut option ? Let's find out in the cut manpage.
No reference to aq aq in the documentation, but we can still see in the See also section :
info coreutils aqcut invocationaq

Looks like someone had a hard time importing man pages having single quotes '.
This is confirmed by reading the documentation in a terminal.

This typo has been copy-pasted as-is many times.

Flags

Flag Usage
-omode --output=mode adjust standard output stream buffering to mode

Buffering modes :

  • L : the corresponding stream will be line buffered. This option is invalid with standard input.
  • 0 : the corresponding stream will be unbuffered

Example

Immediately display unique entries from access.log :

tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq

journalctl -u ssh | cut -d ' ' -f6- | uniq
journalctl -u ssh | stdbuf -oL cut -d ' ' -f6- | uniq

	==> both _seem_ to behave the same. Maybe "journalctl" is not a good candidate to experiment on this ;-)

script

Usage

Make typescript of terminal session, which could be handy when writing tutorials or documenting an installation process, for example.

script options /path/to/typescript/file

Flags

Flag Usage
-a append to /path/to/typescript/file rather than overwriting it / creating a new file
-c command run command instead of an interactive shell
-f flush output after each write

Example

How to use it :

  1. script /path/to/typescript/file
  2. the commands you type and their output are recorded into /path/to/typescript/file, but not in real time unless -f is used
  3. stop recording :
    • exit
    • CTRL-d
  4. You can now view the typescript :
    • cat /path/to/typescript/file
    • less -R /path/to/typescript/file
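
A sketch combining the flags above (file paths are arbitrary) :

script -a -f -c 'ls -l /etc' /tmp/install.typescript	# record a single command, flushing in real time
less -R /tmp/install.typescript				# replay it, keeping colors and control characters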

seq

seq is often used in for loops like :
for i in $(seq 1 10); do 
which works fine but has the pitfall of uselessly spawning a subshell (details). In most cases —especially in scripts / loops— shell brace expansion should be used instead (details).
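For the record, the brace expansion equivalents of the seq examples below (Bash-only syntax, no subshell involved) :
for i in {1..10}; do echo "$i"; done	# same as $(seq 1 10)
echo {4..8}				# 4 5 6 7 8
echo {5..17..3}				# 5 8 11 14 17 (the increment form requires Bash 4+)
echo {17..5..3}				# 17 14 11 8 5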
Print a sequence of numbers :
ascending
seq 4 8
4
5
6
7
8
ascending with interval
seq 5 3 17
5
8
11
14
17
descending, with mandatory interval
seq 17 -3 5
17
14
11
8
5

shred

Usage

Overwrite the specified file repeatedly, in order to make it harder for even very expensive hardware probing to recover the data, and optionally delete it.
shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:
  • log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, ...)
  • file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems
  • file systems that make snapshots, such as Network Appliance's NFS server
  • file systems that cache in temporary locations, such as NFS version 3 clients
  • compressed file systems

Regarding Ext3

  • In data=journal mode (which journals file data in addition to just metadata), the above disclaimer applies and shred is of limited effectiveness.
  • In both the data=ordered (default) and data=writeback modes, shred works as usual.
  • journaling modes can be changed by adding data=something to the mount options for a particular file system in /etc/fstab.
There is no consensus on how to REALLY, securely, erase a disk. Things get even worse when it comes to SSDs. The only good advice around seems to be to :
  1. encrypt the drive itself
  2. when comes the time to dispose of the disk, destroy the LUKS header (source, more about the LUKS header). The encrypted data remains in place and its protection is only as good as the encryption used.
Last minute thought : for the sake of your mental health, keep in mind that the thoroughness of your secure deletion process should match the level of attack you may realistically face.

Flags

Flag Usage
-n iterations --iterations=iterations overwrite iterations times instead of the default (3)
-u --remove truncate and remove file after overwriting
  • The default is not to remove the files because it is common to operate on device files like /dev/hda, which should not be removed.
  • Using --remove is safe (recommended!) when operating on regular files.
-v --verbose see shred working : successively overwriting the target file, then renaming it, then deleting it :
myTempFile=$(mktemp); echo "$myTempFile contains secret data" > "$myTempFile"; cat "$myTempFile"; shred -uv "$myTempFile"
-z --zero add a final overwrite with zeros to hide shredding

Example

shred -n 35 -z -u filename

Parameters :
-n 35
overwrite 35 times the file with random bytes
-z
then overwrite the file with zeros
-u
truncate then delete the file

systemctl

Usage

Utility to control "stuff managed by systemd" a.k.a units

Flags

Flag Usage
-l --full Do not ellipsize unit names, process tree entries, journal output, or truncate unit descriptions in the output of status, list-units, list-jobs, and list-timers
--now
  • systemctl enable --now unit = systemctl enable unit && systemctl start unit
  • systemctl disable --now unit = systemctl disable unit && systemctl stop unit
daemon-reload Reload systemd's configuration. This will rerun all generators (see systemd.generator(7)), reload all unit files, and recreate the entire dependency tree. While the daemon is being reloaded, all sockets systemd listens on behalf of user configuration will stay accessible.
not to be confused with reload
disable unit do NOT start unit at boot time.
  1. delete symlinks (those created by enable + symlinks created manually)
  2. reload the system manager configuration
  • this does NOT stop the corresponding services. To do so, use --now
  • it can be silenced with --quiet
enable unit start unit at boot time. This actually :
  1. creates a set of symlinks, according to the [Install] section of the indicated unit file
  2. reloads the system manager configuration (in a way equivalent to daemon-reload) in order to ensure the changes are taken into account immediately
Possible causes of :
systemctl enable unit
Failed to execute operation: Invalid argument
  • unit is unknown to systemd until you run systemctl daemon-reload
  • unit is a symlink (it works with hardlinks, though)
  • a unit file must be named whatever.service
list-dependencies [options] Recursively show dependencies of the specified unit. Example :
systemctl list-dependencies graphical.target
list-unit-files [options] List unit files installed on the system and their enablement state : enabled / disabled / masked / static / generated / .... Example :
systemctl list-unit-files --type=service
reload pattern Asks all units listed on the command line to reload their configuration (i.e. ask daemons managed by systemd to reload their own configuration)
This will reload the service-specific configuration, not the unit configuration file of systemd. If you want systemd to reload the configuration file of a unit, use daemon-reload. In other words: for the example case of Apache, this will reload Apache's httpd.conf in the web server, not the apache.service systemd unit file.
not to be confused with daemon-reload
show
  • systemctl show unit : show properties of unit
  • systemctl show jobId : show properties of jobId
  • systemctl show : show properties of systemd itself
  • show is intended to be used whenever computer-parsable output is required. For human-readable output, use status.
  • It is possible to query specific properties :
    systemctl show mysql --property=ActiveEnterTimestamp
status unit Show terse runtime status information about unit, followed by most recent log data from the journal.
For a different output format :
systemctl status --output=json-pretty nginx

Example

Start / stop / status of a daemon :

systemctl start|stop|status daemon.service

While debugging, it may be useful to prefix systemctl invocation with date :

date; systemctl start docker.service
so that it's easier to identify journalctl entries if the operation failed.

View the unit configuration file of a service :

systemctl cat docker.service

Get the uptime of a service :

systemctl status mysql

	● mysql.service - MySQL Community Server
	   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
	   Active: active (running) since Fri 2019-11-29 15:10:38 UTC; 2 days ago
	  Process: 1111 ExecStartPost=/usr/share/mysql/mysql-systemd-start post (code=exited, status=0/SUCCESS)
	  Process: 1050 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
	 Main PID: 1110 (mysqld)

ps -o etime= 1110
	 2-17:50:46

==========================================
systemctl show mysql --property=MainPID | cut -d '=' -f 2
ps -o etime= $(systemctl show mysql --property=MainPID | cut -d '=' -f 2)
	 2-18:44:12

==========================================
systemctl show mysql --property=ActiveEnterTimestamp
	ActiveEnterTimestamp=Fri 2019-11-29 15:10:38 UTC

==========================================
ps -o lstart= $(systemctl show mysql --property=MainPID | cut -d '=' -f 2)
	Fri Nov 29 15:10:36 2019

echo $(($(date -d"now" +%s) - $(date -d"Fri Nov 29 15:10:36 2019" +%s)))

echo $(($(date -d"now" +%s) - $(date -d"$(ps -o lstart= $(systemctl show mysql --property=MainPID | cut -d '=' -f 2))" +%s)))
==> uptime in seconds

set

Usage

Toggle options within a script. At the point in the script where you want the options to take effect, use set -o optionName or, in short form : set -shortOption. These two forms are equivalent.
To disable an option : set +o optionName, or set +shortOption.

Flags

optionName shortOption Usage
noexec n read commands but do not execute them (syntax check)
errexit e abort the script at the first error, i.e. when a command exits with a non-zero status, except in some constructs : see the notes and examples and the Discussion about -e below.
nounset u exit the script and display an error message when an unset variable is used
pipefail
  • By default, pipelines only return a failure if the last command errors.
  • If pipefail is set, the return value of a pipeline is :
    • the value of the last (rightmost) command to exit with a non-zero status
    • or zero if all commands in the pipeline exit successfully
  • When used in combination with -e, pipefail will make a script exit if any command in a pipeline errors.
verbose v print each command to stdout before executing it
xtrace x like verbose, but expands commands
(full list)
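
A quick illustration of pipefail (safe to paste in a throwaway shell) :

false | true; echo $?	# 0 : the failure of 'false' is hidden by the last command
set -o pipefail
false | true; echo $?	# 1 : the pipeline now reports the failure
set +o pipefail		# back to the default behavior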

Discussion about -e :

-e is often presented as a mandatory flag that should be used in all scripts. But it's also considered by others as a bad / useless practice (details : 1, 2) because :
  • it's not an error when a test (if, [ ], ...) evaluates to false :
    • dealing with a false status is what tests are for
    • treating a false as an error and exiting unconditionally is an over-reaction which brings nothing to the safety of scripts
  • exceptions have been added to -e to work around the false positives, but these exceptions
    • are not consistent across Bash versions
    • vary depending on whether the process is a subshell or not
    which actually adds complexity with no gain in safety

Example

Notes and examples with -e (details) :

This doesn't work on :
  • commands within if, until, while block
  • compound commands (list using && or ||)
  • commands with return value being inverted via !
set -e; echo -n 'hello'; true; echo ' world'
Outputs : hello world
echo "set -e; echo -n 'hello'; false; echo ' world'" | bash
Outputs : hello (the echo | bash hack is just a way to see the result : because of the false, an exit is executed, which would otherwise close the current shell)
set -e; dir='/tmp'; if [ -d "$dir" ] ; then echo "$dir exists"; else echo "$dir does not exist"; fi
Outputs : /tmp exists
No non-success exit code met, -e keeps sleeping.
set -e; dir='/aDirThatDoesNotExist'; if [ -d "$dir" ] ; then echo "$dir exists"; else echo "$dir does not exist"; fi
Outputs : /aDirThatDoesNotExist does not exist
A non-success exit code is met, but -e is muzzled by if.
set -e; dir='/tmp'; [ -d "$dir" ] && echo "$dir exists" || echo "$dir does not exist"
Outputs : /tmp exists
No non-success exit code met, -e keeps sleeping.
set -e; dir='/aDirThatDoesNotExist'; [ -d "$dir" ] && echo "$dir exists" || echo "$dir does not exist"
Outputs : /aDirThatDoesNotExist does not exist
A non-success exit code is met, but -e is muzzled by the && / || compound list.
#!/usr/bin/env bash
set -e
echo -n 'hello'
true
echo ' world'
Outputs : hello world
#!/usr/bin/env bash
set -e
echo -n 'hello'
false
echo ' world'
Outputs : hello, and returns the exit code 1
#!/usr/bin/env bash
set -e
echo -n 'hello'
if true; then
	echo -n ' wonderful'
fi
echo ' world'
Outputs : hello wonderful world
#!/usr/bin/env bash
set -e
echo -n 'hello'
if false; then
	echo -n ' wonderful'
fi
echo ' world'
Outputs : hello world, and returns the exit code 0.
A non-success exit code is met, but -e is muzzled by if.

Opportunity for a joke :

If you run set -e in a terminal, this will affect the current shell and any further command your "victim" will type. At the 1st non-success return code met (which is VERY easy : try TAB-completing like cd TAB), an exit will be fired, closing the terminal

If you _unintentionally_ run that joke on yourself, you can disable the -e flag with : set +e

Hack that simulates sending parameters to a script :

  1. set -- whatever
    This construct actually turns whatever into positional parameters. whatever can be :
    • a list of files : file1 file2 file3
    • a shell expansion corresponding to a list of files : /path/to/*jpg
    • the result of a command : $(ls -1)
  2. You can then "read" these as if they were previously sent as parameters :
    set -- $(ls -1); echo $1 $2 $3
    file1 file2 file3		3 files of the current directory
    set -- $(ls -1); echo $@
    ... ... ... ...		all of them

ss

Usage

ss : display socket statistics.

ss -punta is a good equivalent to netstat -laputen (netstat is deprecated in favor of ss)

Flags

Flag Usage
-a --all Display all sockets
-e --extended Show detailed socket information. The output format is:
uid:uidNumber ino:inodeNumber sk:cookie
  • uidNumber : the user id the socket belongs to
  • inodeNumber : the socket's inode number in VFS
  • cookie : a uuid of the socket
-l --listening Display listening sockets only
-n --numeric Do not try to resolve service names : show ports and addresses numerically
-r --resolve resolve numeric address/ports
-p --processes Show process using socket
-t --tcp Display only TCP sockets
-u --udp Display only UDP sockets
-x --unix Display only Unix domain sockets
(filters)

Filters :

State filters :

state anyTcpState, to be chosen from (source) :
  • All standard TCP states: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listen and closing.
  • all : for all the states
  • connected : all the states except for listen and closed
  • synchronized : all the connected states except for syn-sent
  • bucket : states, which are maintained as minisockets, i.e. time-wait and syn-recv
  • big : opposite to bucket

Address filters :

  • dst addressPattern : matches remote address and port
  • src addressPattern : matches local address and port
  • dport operator anyPort : compares remote port to a number
  • sport operator anyPort : compares local port to a number
  • autobound : checks that socket is bound to an ephemeral port
With :
  • addressPattern is in the a.b.c.d:port format. If either the IP address or the port part is absent or replaced with *, this means wildcard match.
  • operator is one of <=, >=, ==, ... To make this more convenient for use in unix shell, alphabetic FORTRAN-like notations le, gt, etc. are accepted as well.
  • Expressions can be combined with and, or and not, which can be abbreviated in C-style as &, &&, ...
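
A few filter examples combining the above (ports and addresses are illustrative) :

ss -tn state established					# established TCP connections, numeric output
ss -o state established '( dport = :ssh or sport = :ssh )'	# SSH-related connections (example from the man page)
ss -tln src 127.0.0.1						# TCP sockets listening on the loopback only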

sudo

Flags

Flag Usage
-H --set-home change the value of the $HOME environment variable into the home directory of the target user (i.e. mostly root, so /root). Normally, using sudo does not alter $HOME (details)
bash -c 'echo $USER $HOME'; sudo bash -c 'echo $USER $HOME'; sudo -H bash -c 'echo $USER $HOME'
This can be the default behavior, so the command above may not highlight anything.
-i --login simulate initial login. This runs the shell specified in /etc/passwd for the target user as a login shell. This means that login-specific resource files such as .profile or .login will be read by the shell.
If a command is specified, it is passed to the shell for execution. Otherwise, an interactive shell is executed.
full details : man -P 'less -p --login' sudo
-k --reset-timestamp invalidate the user's cached credentials instead of waiting for the cache to expire (details)
-l --list list the allowed and forbidden commands for the invoking user (or user specified with -U) on the current host.
-p prompt --prompt=prompt Use a custom password prompt with optional escape sequences (details)
-S --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device.
  • this allows highly insecure things like :
    echo P@ssw0rd | sudo -S someCommand
  • this option automagically helped fix an issue with sudo in Ansible :
    • - hosts: srvOracle
        tasks:
          - name: Do some SQL
            shell: |
              sqlplus -s / as sysdba << EOC
              select * from dual;
              EOC
            become: yes
            become_user: oracle
            become_flags: "-i"
      sudo: no tty present and no askpass program specified
    • - hosts: srvOracle
        tasks:
          - name: Do some SQL
            shell: |
              sqlplus -s / as sysdba << EOC
              select * from dual;
              EOC
            become: yes
            become_user: oracle
            become_flags: "-iS"
      works like a charm
-s [command] --shell [command] Run the shell specified by the SHELL environment variable if it is set, or the shell specified by the invoking user's password database entry otherwise.
If command is specified, it is passed to the shell for execution via its -c option. Otherwise, an interactive shell is executed.
-U bob --other-user bob To be used with -l : list bob's privileges instead of those of the invoking user. Elevated privileges required.
-u kevin --user kevin Run the specified command as the user kevin.

Example

Chain several commands within a single sudo :

sudo bash -c 'whoami; date; echo Hello World'

strings

Usage

Find printable strings in files.

Example

strings /bin/sh | grep error
strerror
%s: I/O error
error setting limit (%s)
Syntax error: %s

strace

Installed with the Debian package

strace

Usage

Trace system calls and signals
Each output line is made of 3 fields :
  1. the first entry on the left is the system call being performed
  2. the bit in the parentheses are the arguments to the system call
  3. the right side of the equals sign is the return value of the system call

Flags

Flag Usage
-c count time, calls, and errors for each system call and report a summary on program exit, suppressing the regular output
This can be a good starting point when debugging stuff : once you've listed where errors are, use strace -e type1,type2,type3
-e expression use expression to select which events to trace / how to trace them :
  • -e raw=select : print raw, undecoded arguments for the specified set of system calls (here : select)
  • -e trace=%file : trace all system calls which take a file name as an argument
  • -e trace=%network : trace all the network related system calls
  • -e select,stat,openat,lstat : trace only the system calls of the listed types
  • Filter by type of syscall
The syntax without a % leading keyword (like -e trace=keyword) is deprecated
-o file Write the trace output to the file file rather than stderr
-p PID Attach to the process with the process ID PID. You may attach quickly to a process with :
strace -p $(ps aux | awk '/[s]omething/ { print $2 }') [other strace options]
with something : any pattern in the output of ps uniquely matching the process to trace. This can be a binary name, a data file, ...

System calls :

Flag Usage
accept() When a request on a listening socket is refused / incomplete, accept() returns -1. Otherwise, it creates a new connected socket, and returns a new file descriptor referring to that socket.
fstat() Return metadata about a file (in the form of a "stat struct"). This file can be specified either by path for stat(), or by file descriptor for fstat().
recvfrom() Receive a message from a socket
select() Programs use select() to monitor file descriptors (specified as 2nd parameter) on a system for activity. select() will block for a configurable period of time waiting for activity on the supplied file descriptors, after which it returns with the number of descriptors on which activity took place.
This can be remembered as "wait for activity or timeout, and report where activity occurred"
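
Example

A few typical invocations (paths, PID and commands are illustrative) :

strace -c ls /etc > /dev/null					# summary of the syscalls made by ls
strace -e trace=%file -o /tmp/trace.log cat /etc/hostname	# log only the file-related syscalls to a file
strace -f -p 1234						# attach to PID 1234 and follow its children (-f)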

stat

Usage

display file or file system status

Time fields

stat myFile
  File: myFile
  Size: 2923            Blocks: 6          IO Block: 1024   regular file
Device: fd03h/64771d    Inode: 25          Links: 1
Access: (0600/-rw-------)  Uid: ( 1000/   bob)   Gid: ( 1000/   admin)
Access: 2021-07-05 15:21:18.000000000 +0200
Modify: 2021-07-07 12:03:22.000000000 +0200
Change: 2021-07-07 12:03:22.000000000 +0200
 Birth: -
Field Changes when...
Access the file was last read
Modify changes were written to the file
Change the file metadata changed

Flags

Flag Usage
-c format
--format=format
with format being :
  • %n file name
  • %x time of last access, human-readable
  • %y time of last data modification, human-readable
  • %Y time of last data modification, seconds since Epoch
  • %z time of last status change, human-readable
  • ...

Example

Get a file's last access, last change

stat -c 'LAST ACCESS:%x LAST CHANGE:%z FILE:%n' file

split

Usage

Split files into chunks
split [options] fileToSplit prefixOfSplittedFiles

Flags

Flag Usage
-a suffixLength Use suffixLength letters to form the suffix portion of the filenames of the split file. Make this suffix long enough so that there can be at least as many suffixes as splits.
-d use numeric suffixes (aka digits) starting at 0, not alphabetic
-l nbLines Specify the number of lines in each resulting file piece.
-n chunkSpec
--number=chunkSpec
specifies how many chunks you want fileToSplit to be split into (and how). chunkSpec can be :
n : split fileToSplit into n chunks
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n 3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
May split lines !
k/n : split as above, but instead of writing results into n files, output the values of the kth chunk to stdout. Use case : read/extract section of fileToSplit.
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n 2/3 "$dataFile"; rm "$dataFile"
l/n : like n without splitting lines
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n l/3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
l/k/n : like k/n without splitting lines
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n l/2/3 "$dataFile"; rm "$dataFile"
r/n : like l/n but with round robin distribution
workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n r/3 "$dataFile" "$chunkPrefix"; for i in "$dataFile" "$chunkPrefix"*; do echo "$i"; cat "$i"; echo; rm "$i"; done
r/k/n : split like r/n + the "k" effect : output the values of the kth chunk to stdout :
workDir='/run/shm/testSplit'; mkdir -p "$workDir"; cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); for i in {1..10}; do echo -n "$i:"; for j in {a..j}; do echo -n " $j"; done; echo; done > "$dataFile"; split -n r/2/3 "$dataFile"; rm "$dataFile"

Example

Split and re-assemble :

workDir='/run/shm/testSplit'; chunkPrefix='chunk_'; mkdir -p "$workDir" && cd "$workDir"; dataFile=$(mktemp --tmpdir='.'); dd if=/dev/random of="$dataFile" bs=1K count=10 status=none; split -n 3 -d "$dataFile" "$chunkPrefix"; ls -l "$chunkPrefix"*; cat "$chunkPrefix"* > "$dataFile"_REASSEMBLED; md5sum "$dataFile"*; [ "$PWD" = "$workDir" ] && rm "$chunkPrefix"* "$dataFile"*; cd - && rmdir "$workDir"
-rw------- 1 bob users 3413 Dec  9 12:05 chunk_00
-rw------- 1 bob users 3413 Dec  9 12:05 chunk_01
-rw------- 1 bob users 3414 Dec  9 12:05 chunk_02
9e5dfea5493c78eaee1e1e7158d063ad  ./tmp.4iNzQxgGae			checksums match 
9e5dfea5493c78eaee1e1e7158d063ad  ./tmp.4iNzQxgGae_REASSEMBLED	meaning we've successfully re-assembled the chunks into an exact copy of the original file

Split myBigFile into 1000-line chunks :

split -l 1000 -a 3 myBigFile chunk_
Will create chunks : chunk_aaa, chunk_aab, chunk_aac, chunk_

source (or .)

Usage

source someFile (or . someFile) reads and executes commands from someFile in the current shell context.

What's specific with source ?

  • someFile is executed even though its execution bit is not set
  • someFile is executed within the current shell context, which allows :
    • loading variables into the current interactive shell session :
      1. source myConfigFile
      2. other commands using variables
    • using someFile as a configuration file for a script, without leaving variables in the shell environment once the script is over :
      sourcedFile=$(mktemp sourcedFile.XXXXXXXX); echo 'value=42' > "$sourcedFile"; chmod -x "$sourcedFile"; scriptFile=$(mktemp scriptFile.XXXXXXXX); echo "source $sourcedFile; echo 'sourced'; echo \"during the script : \\\$value='\$value'\"" > "$scriptFile"; chmod +x "$scriptFile"; ls -l "$sourcedFile" "$scriptFile"; cat "$sourcedFile" "$scriptFile"; ./"$scriptFile"; echo "after the script : \$value='$value'"; rm "$sourcedFile" "$scriptFile"
      value=42	cat "$sourcedFile"
      source sourcedFile.akr6paIl; echo 'sourced'; echo "during the script : \$value='$value'"	cat "$scriptFile"
      sourced		started running the script
      during the script : $value='42'	the variable exists in the script context
      after the script : $value=''	no value anymore
      Since a dedicated subshell (having its own context) is spawned when executing a script, if you want variables to survive for future scripts or commands after the script ends, you'll have to :
      • export variables within the script (details)
      • or source the script
  • The return status is the exit status of the last command executed from someFile, or zero if no commands are executed.

Trying to source a file having DOS line endings led the shell to complain about syntax errors on EVERY line. Consider converting the line endings to the Unix format.

About exported variables :

configFile='./script.conf'; scriptFile='./script.sh'; echo -e '#!/usr/bin/env bash\nvar1=value1\nexport var2=value2' > "$configFile"; echo -e '#!/usr/bin/env bash\necho "\tvar1 = $var1"\necho "\tvar2 = $var2"' > "$scriptFile"; echo -e "\n'source'd config file :"; cat "$configFile"; source "$configFile"; echo -e "\nCommand line (current shell context) :\n\tvar1 = $var1\n\tvar2 = $var2\n\nScript (subshell context) :"; bash "./$scriptFile"; rm "$configFile" "$scriptFile"
'source'd config file :
#!/usr/bin/env bash
var1=value1			not exported
export var2=value2

Command line (current shell context) :
	var1 = value1		exists 
	var2 = value2

Script (subshell context) :
	var1 =			unset 
	var2 = value2

Is a shebang required in source'd configuration files (source) ?

If you have :
  • script.conf :
    variable='value'
  • script.sh :
    #!/usr/bin/env bash
    
    source script.conf
    
    ... do something with "$variable" ...

Then no shebang is required in the configuration file script.conf. It may also be a safe practice to remove its execution permissions.


sort

Usage

Flags

Flag Usage Example
-h sort human numeric values echo -e "1M\n1G\n10K\n2K" | sort -h
2K
10K
1M
1G
-k m
-k m,n
define a sort key starting at field m and ending at field n (or at the end of the line if ,n is omitted). To sort on the mth column, then on the nth column, use -k m,m -k n,n
-n sort numerically. Default is alphabetically.
-o outputFile
--output=outputFile
Write result to outputFile instead of standard output
-r --reverse sort in reverse order (i.e. : descending).
Default sorting order is ascending.
echo -e "a\nb\nc" | sort -r
c
b
a
-R --random-sort shuffle, but group identical keys.
See shuf
echo -e "a\na\nb\nc" | sort -R
b
c
a
a
-t 'x'
--field-separator 'x'
specify the field separator when using -k. Default is whitespace or TAB sort -nr -t ':' -k3 /etc/passwd | head -10
-T dir
--temporary-directory=dir
use dir for temporaries, not $TMPDIR or /tmp. Multiple options specify multiple directories
-u sort unique : don't display duplicated lines. sort -u is equivalent to sort | uniq
-V --version-sort natural sort of (version) numbers echo -e "1.10\n1.2\n1.3\n1.1" | sort -V

Example

Sort occurrences of a URL from an Apache access log by decreasing order :

grep ' 500 ' 2012-07-03-apache-access.log | cut -d ' ' -f 11 | sort | uniq -c | sort -nr | less

All the magic is in the uniq -c prior to sorting.

Delete duplicate lines from myFile :

fileWithDuplicates='myFile'; tmpFile=$(mktemp); mv "$fileWithDuplicates" "$tmpFile"; sort -u "$tmpFile" -o "$fileWithDuplicates"; rm "$tmpFile"

Sort arguments provided on a single line

  • echo '2 20 3 1' | tr ' ' '\n' | sort -n | tr '\n' ' '
    1 2 3 20
  • echo 'banana,coconut,apple' | tr ',' '\n' | sort | tr '\n' ','
    apple,banana,coconut,
The result string has a trailing separator, which can be removed with substring expansion.

Sort numerically on several columns (source) :

  • each -k m,m key is limited to a single field, which is why -k1,1n -k2,2n sorts on column 1, then on column 2 (see -k above)
  • consider sort -V
cat << EOF | sort -k1,1n -k2,2n
2	2
1	10
1	1
10	1
2	10
1	2
EOF
1       1
1       2
1       10
2       2
2       10
10      1

snmpwalk

Usage

snmpwalk -On -c snmpCommunity -v snmpVersion host OID

Flags

Flag Usage
-O Output formatting options :
-On : displays the OID numerically

Example

snmpwalk -On -c foo -v 2c 10.44.36.253 1.3.6.1.4.1.15497.1.1.1.11


shuf

Usage

Shuffle the input rows

Flags

Flag Usage
-e Consider every command line parameter as an input row
-i min-max
--input-range=min-max
Take numbers between min and max as input options to choose from
-n numLines Display at most numLines lines

Example

Shuffle string parameters :

shuf -e A Z E R T Y may output Z R E A T Y or T R Y E Z A

Generate numberOfRandomNumbers random numbers within a specified range (source) :

shuf -i rangeMin-rangeMax -n numberOfRandomNumbers
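
For instance, picking 3 distinct numbers between 1 and 100 :
shuf -i 1-100 -n 3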

shopt

Usage

Toggle shell options

Flags

Flag Usage
-s List the options that are set
-s optionName set the option optionName
-u List the options that are unset
-u optionName unset the option optionName

Option Usage
autocd a command name that is the name of a directory is executed as if it were the argument to cd
cdspell autocorrect minor typos while using cd
(What does shopt -s dirspell do?)
dirspell autocorrect minor typos during word completion on a directory name (provided the directory name has a trailing /)
extglob enable extended pattern matching features
(Is it safe to leave extglob enabled ?)
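
Example

A quick sketch of toggling and checking these options :

shopt -s autocd cdspell dirspell	# enable several options at once
shopt autocd				# show the current state of a single option
shopt -u cdspell			# disable it again
shopt -s | grep glob			# list the enabled options matching 'glob'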

setfacl

Installed with the Debian package

acl

Usage

Set or modify file ACL. This allows granting specific rights to specific users / groups on specific files, without setting global permissions. For instance, if Bob's home directory is :
drwx------ 64 bob developers 4,0K jan. 15 20:38 bob/
and we would like Alice to have read access to /home/bob/, we can grant it with an ACL entry without touching the base permissions (see the examples below).

Command syntax :

setfacl -m aclEntry someFile
with aclEntry made of prefix:userOrGroup:permissions
  • prefix :
    • u: to change user rights
    • g: to change group rights
    • o: to change other rights. No need to specify userOrGroup
    • d:u: to declare default user rights
    • d:g: to declare default group rights
    • d:o: to declare default other rights
  • permissions : specified either numerically or with symbols (like in chmod) :
    • r : read permission
    • w : write permission
    • x : execute permission
    • X : execute permission, if the file is a directory or already has execute permission for some user (details)
    • - : ignored, rx is equivalent to r-x

Flags

Flag Usage
-b --remove-all
  • remove all extended ACL entries
  • the base ACL entries of the owner, group and others (i.e. Linux permissions) are retained
-d --default All operations apply to the default ACL
-m modify an existing ACL entry
-R Recursive : apply rights to all files and directories. -R must be supplied before -m : -Rm
-x remove an ACL entry :
setfacl -x u:kevin someFile

Example

Grant rw rights to a user on a single file :

setfacl -m u:alice:rw- file

Grant rw rights to a user on all files of a directory :

setfacl -Rm u:alice:rw- directory

Set default rights so that new files will inherit them (details) :

setfacl -m d:u:alice:rw directory/

Grant access to a file tree (source) :

setfacl -R -m user:stuart:rwX path/to/base/directory
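
To check the resulting entries, getfacl (shipped in the same acl package) is the companion command, and ls flags ACL'd files with a trailing '+' :

getfacl path/to/base/directory	# list owner, group and ACL entries
ls -ld path/to/base/directory	# a '+' at the end of the permission bits reveals the presence of an ACL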

If it complains : setfacl: Option -m: Invalid argument near character n :

setfacl -R -m user:httpd:rwx /data
setfacl: Option -m: Invalid argument near character 6
What's wrong ?
  1. Which is the 6th character ? It's the 6th of the value passed to -m :
    user:httpd:rwx
    123456
    Something's wrong with the username
  2. Does this username exist ? Checking for it (see below) returns nothing : there is no such httpd user.
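
Typical ways to check whether such a user exists :

getent passwd httpd	# query the configured user databases (NSS)
id httpd		# prints an error if the user does not exist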

service

Usage

Manage daemons

Example

List status of all daemons :

service --status-all | less
This executes service serviceName status on all services, and returns :
  • [ + ] : service is running
  • [ - ] : service is stopped
  • [ ? ] : unknown / doesn't reply to the "service status" command

Manage a daemon :

service serviceName start|stop|restart|status