Daemons - Processes with names ending in a "d"


What's the difference between reload and restart ?


  • reload :
    • keeps the server running while re-reading any configuration file updates
    • is safer than restart because :
      • if a syntax error is noticed in a config file, it will not proceed with the reload, and your server remains running
      • whereas restart drops all in-memory data (like the Varnish cache) and starts "fresh" with an empty cache
  • restart :
    • shuts down entirely + reads configuration + starts again
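The "validate first, reload only if OK" behaviour can be sketched in shell. `check_config` below is a hypothetical stand-in for a daemon's own validator (e.g. `nginx -t` or `apachectl configtest`) :

```shell
check_config() {
    # toy check : a real daemon parses its whole config file
    grep -q 'listen' "$1"
}

reload_if_valid() {
    if check_config "$1"; then
        echo "config OK, reloading"
        # a real daemon would now re-read its config (e.g. upon SIGHUP)
    else
        echo "syntax error in $1, keeping the current config" >&2
        return 1
    fi
}
```

If the check fails, nothing is reloaded and the running process keeps its last known-good configuration.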


NetworkManager (the network management daemon) :


  • a connection is
    • a set of data (layer 2 details, IP parameters, etc.) that describes how to create or connect to a network
    • said active when a device uses that configuration to create or connect to a network
  • a device may have multiple connections
    • like myConnection1, myConnection2, etc., all designed for a single device : eth0
    • these can be used to quickly switch between different networks and configurations
    • but only one of them can be active on that device at any given time
    So far, with the ifconfig and ip commands, we've been used to applying network parameters (IP address, DNS settings, etc.) directly to devices. Now, with NetworkManager, things are different :
    • all parameters are gathered into what is called a connection
    • you can define as many connections as you like
    • and you may plug the connection you need into a specific device (i.e. network interface) when required
    You can name a connection as you see fit, which most examples don't make explicit : indeed, the connection is often named after the device it applies to, which makes things unclear.
  • to switch from one connection to another :
    • you just have to : nmcli con up connectionToEnable
    • no need to disable the current connection. If you try to do so :
      nmcli con down connectionToDisable
      Error: 'connectionToDisable' is not an active connection.
      Error: no active connection provided.
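A quick nmcli session illustrating this (connection names, device and IP parameters below are made-up examples) :

```shell
# define 2 connections for the same device (eth0)
nmcli con add type ethernet ifname eth0 con-name homeLan \
    ip4 192.168.1.10/24 gw4 192.168.1.1
nmcli con add type ethernet ifname eth0 con-name officeLan \
    ip4 10.0.0.10/24 gw4 10.0.0.1

nmcli con show          # list all connections (active ones are flagged)
nmcli con up homeLan    # plug "homeLan" into eth0
nmcli con up officeLan  # switch : this implicitly deactivates "homeLan"
```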

Configuration files

  • Before NetworkManager, network connections were defined in (details) :
    • Debianoids : /etc/network/interfaces
    • RedHatoids : /etc/sysconfig/network-scripts/ifcfg-ifname
  • With NetworkManager :
    • now they are defined in /etc/NetworkManager/system-connections/ifname.nmconnection
    • These are not supposed to be edited manually. Use utilities such as nmcli to alter connection settings.


supervisord vs systemd (sources : 1, 2)

  • they both do the job with similar efficiency and usability
  • systemd is built into the OS, which removes a software dependency
  • supervisord :
    Looks like supervisord was built as a convenient way to manage processes in the (ancient!) days of SysVInit and rc.d scripts. Now that systemd is here, this argument may not stand anymore...

Configuration files

For supervisord itself : typically /etc/supervisor/supervisord.conf (Debian package layout)
For managed services : typically one file per service in /etc/supervisor/conf.d/

/var/log/daemon.log is full of /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken



This is a solution in my specific case because myServer is a VM with a single vCPU (How to get the number of CPUs ?). Make sure conditions match on your side.
  1. stop + disable the daemon :
    systemctl stop irqbalance.service && systemctl disable irqbalance.service
  2. check it :
    systemctl is-enabled irqbalance.service; systemctl status irqbalance.service
    disabled			as expected
    ● irqbalance.service - irqbalance daemon
       Loaded: loaded (/lib/systemd/system/irqbalance.service; disabled; vendor preset: enabled)
       Active: inactive (dead)	as expected
    Jun 26 10:31:33 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:31:43 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:31:53 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:32:03 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:32:13 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:32:23 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:32:33 myServer /usr/sbin/irqbalance[PID]: WARNING, didn't collect load info for all cpus, balancing is broken
    Jun 26 10:32:40 myServer systemd[1]: Stopping irqbalance daemon...
    Jun 26 10:32:40 myServer systemd[1]: Stopped irqbalance daemon.


Why journald (source) ?

  • journald is a system service for collecting and storing log data, introduced with systemd. It tries to make it easier for system administrators to find interesting and relevant information among an ever-increasing amount of log messages.
  • In keeping with this goal, one of the main changes in journald was to replace simple plain text logfiles with a special file format optimized for log messages. This file format allows system administrators to access relevant messages more efficiently. It also brings some of the power of database-driven centralized logging implementations to individual systems.
  • journald can be configured to drop some messages when the system is generating lots of them. This is called rate limiting, and is useful to avoid overloading the logging system. In this situation, you may see entries like :
    Jun 24 13:47:23 localhost systemd-journal[155]: Suppressed 15460 messages from /system.slice/...
    This can be configured with RateLimitBurst and RateLimitIntervalSec (source).
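For instance, to loosen the limits, in /etc/systemd/journald.conf (values below are arbitrary examples) :

```ini
[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=20000
```

then restart journald so it picks up the change : systemctl restart systemd-journald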

Configuration directives (source) :

It all takes place in /etc/systemd/journald.conf

Directives :

Compress=boolean (default : yes) : data objects that shall be stored in the journal and are larger than a certain threshold are compressed before they are written to the file system
MaxFileSec=rotationPeriod (default : 1 month)
  • The maximum time (in seconds) to store entries in a single journal file before rotating to the next one, i.e. : log rotation period
  • rotationPeriod can be specified with a suffix such as m, h, day, week, month or year
  • 0 disables log rotation
Normally, time-based rotation should not be required as size-based rotation with options such as SystemMaxFileSize should be sufficient to ensure that journal files do not grow without bounds. However, to ensure that not too much data is lost at once when old journal files are deleted, it might make sense to change this value from the default.
MaxLevelStore=level (default : 7, i.e. debug) : Controls the maximum log level of messages that are stored on disk, forwarded to syslog, kmsg, the console or wall (if enabled). As argument, takes one of :
  • 7 or debug
  • 6 or info
  • 5 or notice
  • 4 or warning
  • 3 or err
  • 2 or crit
  • 1 or alert
  • 0 or emerg
Messages equal or below the log level specified are stored / forwarded, messages above are dropped. Defaults to debug to ensure that all messages are written to disk and forwarded to syslog.
MaxRetentionSec=logfileTimeToLive (default : 0, i.e. disabled)
  • The maximum time (in seconds) to store journal entries. This controls whether journal files containing entries older than the specified time span are deleted.
  • logfileTimeToLive can be specified with a suffix such as m, h, day, week, month or year
  • 0 disables old logfiles deletion
Normally, time-based deletion of old journal files should not be required as size-based deletion with options such as SystemMaxUse should be sufficient to ensure that journal files do not grow without bounds. However, to enforce data retention policies, it might make sense to change this value from the default.
RateLimitBurst=nbMessages (default : 1000 messages)
  • If, during the intervalDuration interval, more than nbMessages messages are logged by a service, all further messages within this interval are dropped until the interval is over.
  • A message about the number of dropped messages is generated.
  • This rate limiting is applied per-service, so that two services which log do not interfere with each other's limits.
RateLimitIntervalSec=intervalDuration (default : 30s)
SplitMode=value (default : uid) : Controls whether to split up journal files per user (only in Storage=persistent mode). value is one of :
  • uid : all regular users will each get their own journal files, and system users will log to the system journal
  • none : journal files are not split up by user and all messages are instead stored in the single system journal. In this mode unprivileged users generally do not have access to their own log data.
Storage=value (default : auto) : Controls where to store journal data. value is one of :
  • volatile : log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed).
  • persistent : data will be stored preferably on disk, i.e. below the /var/log/journal hierarchy (which is created if needed), with a fallback to /run/log/journal (which is created if needed), during early boot and if the disk is not writable
  • auto : similar to persistent but the directory /var/log/journal is not created if missing
  • none : turns off all storage, all log data received will be dropped
SystemMaxUse=value (and related directives) : enforce size limits on the journal files
  • System*= : SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, SystemMaxFiles= : apply to the journal files when stored on a persistent file system, more specifically /var/log/journal
  • Runtime*= : RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize=, RuntimeMaxFiles= : apply to the journal files when stored on a volatile in-memory file system, more specifically /run/log/journal
journalctl and systemd-journald ignore all files with names not ending with .journal or .journal~, so only such files, located in the appropriate directories, are taken into account when calculating current disk usage.
find /var/log/journal/ -name '*journal*' -exec ls -l {} \; | awk '{ TOTAL += $5} END { print TOTAL/1024/1024 " MB"}'
  • *MaxUse=value : how much disk space the journal may use up at most
  • *KeepFree=value : how much disk space systemd-journald shall leave free for other uses
systemd-journald will respect both limits and use the smaller of the two values
  • *MaxUse defaults to 10% of the size of the corresponding filesystem
  • *KeepFree defaults to 15% of the size of the corresponding filesystem
  • Both are capped to 4GB
  • If the file system is nearly full and either SystemKeepFree or RuntimeKeepFree are violated when systemd-journald is started, the limit will be raised to the percentage that is actually free. This means that if there was enough free space before and journal files were created, and subsequently something else causes the file system to fill up, journald will stop using more space, but it will not be removing existing files to reduce the footprint again, either.
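The "smaller of the two limits" rule can be sketched as follows (a simplification of journald's actual logic; values in KB) :

```shell
# journald may use at most *MaxUse, but must also leave *KeepFree free,
# so the effective cap is the smaller of MaxUse and (fsSize - KeepFree).
effective_cap() {
    maxUse=$1; keepFree=$2; fsSize=$3
    fromKeepFree=$((fsSize - keepFree))
    if [ "$maxUse" -lt "$fromKeepFree" ]; then
        echo "$maxUse"
    else
        echo "$fromKeepFree"
    fi
}

effective_cap 1168664 584332 2337328    # → 1168664 (MaxUse is the binding limit here)
```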

Debugging :

/var/log/journal usage > SystemMaxUse, not in line with SystemKeepFree

  1. space initially reserved for journald logs :
    grep 'SystemMaxUse=' /etc/systemd/journald.conf
  2. space currently used by journald logs :
    find /var/log/journal -name '*journal*' -exec ls -l {} \; | awk '{ TOTAL += $5} END { print TOTAL/1024 " KB"}'
    1269780 KB
  3. files in /var/log/journal that are NOT journal logfiles !
    find /var/log/journal -type f -a ! -name "*journal*"
  4. space initially left for others :
    grep 'SystemKeepFree=' /etc/systemd/journald.conf
  5. size of the filesystem hosting /var/log/journal :
    df -k /var/log/journal | awk '/var/ {print $2}'
  6. is the /var/log/journal filesystem over-allocated ?
    confFile='/etc/systemd/journald.conf'
    SystemMaxUse=$(sed -rn 's/^SystemMaxUse=([0-9]+).$/\1/p' "$confFile")
    SystemKeepFree=$(sed -rn 's/^SystemKeepFree=([0-9]+).$/\1/p' "$confFile")
    totalJournalValues=$((SystemMaxUse+SystemKeepFree))
    fsSize=$(df -k /var/log/journal | awk '/var/ {print $2}')
    extraSize=$((fsSize-totalJournalValues))
    echo -e "SystemMaxUse $SystemMaxUse\nSystemKeepFree $SystemKeepFree\ntotalJournalValues $totalJournalValues\nfsSize $fsSize\nextraSize $extraSize" | column -s ' ' -t
    SystemMaxUse		1168664
    SystemKeepFree		584332
    totalJournalValues	1752996
    fsSize			2337328
    extraSize		584332	this is >0 : OK

How can I delete old logs ?

Use one of journalctl's --vacuum-* functions.
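For instance (size / age / count below are arbitrary examples) :

```shell
journalctl --vacuum-size=500M    # shrink archived journals to at most 500 MB
journalctl --vacuum-time=2weeks  # delete journal files containing only entries older than 2 weeks
journalctl --vacuum-files=10     # keep at most 10 archived journal files
```

These only act on archived journal files, not on the currently active ones.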

sssd : the System Security Services Daemon


SSSD is a software package originally developed for GNU/Linux that provides a set of daemons to manage access to remote directories and authentication mechanisms.
Its purpose is to simplify system administration of authenticated and authorized user access involving multiple distinct hosts. It is intended to provide single sign-on capabilities to networks based on Unix-like OS that are similar in effect to the capabilities provided by Microsoft's Active Directory Domain Services to Windows networks.


  • dyndns_update (boolean, default : True)
    • optional
    • tells SSSD to automatically update the Active Directory DNS server with the IP address of this client
    • Enabling Dynamic DNS Updates
  • dyndns_update_ptr (boolean, default : True)
    • whether the PTR record should also be explicitly updated when updating the client's DNS records
    • applicable only when dyndns_update is true


Installed with the Debian package



anacron is a computer program that performs periodic command scheduling (which is traditionally done by cron) without assuming that the system is running 24 hours a day. Thus, it can be used to control the execution of daily, weekly, and monthly jobs (or anything with a period of n days).
anacron is not a daemon. It is executed :
  • when the system boots
  • daily, by cron : grep 'anacron start' /etc/cron.d/anacron
Jobs controlled by anacron are configured in /etc/anacrontab. Each job line has 4 fields : period delay job-identifier command
  • period : run the command every period days. There are 2 syntaxes : a number of days, or a named period such as @monthly (only "monthly" so far, should be extended in future versions of anacron)
  • delay : after being started itself, anacron waits delay minutes before actually firing command
  • job-identifier : used to identify the job in anacron messages, and as the name for the job's timestamp file. Can contain any non-blank character, except slashes
  • command : this can be any shell command. If this command should be run by a non-root user, you can specify it like (source) :
    sudo -u kevin command
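For example, a minimal /etc/anacrontab showing both period syntaxes (job names and commands are made-up) :

```text
# period  delay  job-identifier  command
7         10     weekly.backup   /usr/local/bin/backup.sh
@monthly  15     monthly.report  sudo -u kevin /home/kevin/report.sh
```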


There's not a lot to say about CRON : it's there, it works. I can't remember a single time I had to fiddle with it. Actually, all the magic lies in crontab.

CRON (/usr/sbin/cron) is the clock daemon : it executes commands at specific times. These commands are stored in configuration files called "CRON tables", aka crontabs. Each user may have their own crontab. crontabs can be found at /var/spool/cron/crontabs/userName.

crontab files mustn't be edited manually : use the crontab utility instead.
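As a reminder, a crontab entry has 5 time fields followed by the command (the job below is a made-up example) :

```text
# m  h  dom  mon  dow  command
0    3  *    *    1    /usr/local/bin/backup.sh
```

i.e. run backup.sh every Monday at 03:00.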



The ftpasswd utility (source) :

ftpasswd should come with ProFTPd (look into proftpdInstallDir/bin or get it here). It is used to generate authUserFiles.
Generate a password hash :
./ftpasswd --hash --stdin newPassword
Create a new virtual user (i.e. generate an authUserFiles entry) (source) :
ftpasswd --passwd --name=kevin --uid=1234 --gid=1337 --home=/home/kevin --shell=/sbin/nologin
  • This will prompt for a password
  • By default, the new entry will be created in ./ftpd.passwd
  • When creating a virtual user, make sure this user has access to the path defined as its home dir. Otherwise, FTP login may be denied.

Quick-and-dirty tips :

As said by the title, tips given in this chapter are quick and dirty and may be wrong or NOT adapted for a permanent configuration, but they COULD save your life for short-term actions. Use at your own risk.

Allow root to connect (which is disabled by default) (source) :
RootLogin on
And possibly :
<Directory /path/to/shared/directory>
	<Limit ALL>
		AllowUser root
	</Limit>
</Directory>
Allow a group of users to escape their chroot jail (source) :
DefaultRoot ~ !groupName

About <directory ...> blocks (source) :

Even though the configuration parser will forbid having 2 <directory ...> blocks referring to the very same directory, it MAY be possible to create 2 conflicting configurations for the same directory using wildcards :

<directory /path/to/dir>
	...
</directory>
<directory /path/*/dir>
	...
</directory>
In such situations, the section which appears later in the config file wins.

I've also experienced configs such as:

<directory /path/to/dir1>
	config set 1
</directory>
<directory /path/to/dir1/dir2>
	config set 2
</directory>
where config set 2 wasn't enabled until config set 1 was commented out.



syslog is a utility for tracking and logging all manner of system messages, from the merely informational to the extremely critical. Each system message sent to the syslog server has 2 descriptive labels associated with it that make the message easier to handle : the facility (which subsystem emitted it) and the severity. syslog also allows sending logs from several machines to a central log server but, as this relies on UDP, this must be considered as "no guarantee".

Configuration files :

OS Configuration files
Red Hatoids /etc/rsyslog.conf
Debianoids /etc/syslog-ng/syslog-ng.conf

The /etc/rsyslog.conf file is 2-columned, making a kind of source-destination list for logs (source) :

facility1.severity;facility2.severity		logfile1
facility3.severity;facility4.severity		logfile2
facility1.severity				logfile3
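A concrete version of the above, with real facilities and severities (destinations are the usual Debian ones) :

```text
# all auth messages, any severity
auth,authpriv.*		/var/log/auth.log
# mail messages of severity err and worse
mail.err		/var/log/mail.err
# everything from the kernel
kern.*			/var/log/kern.log
```

Remember that a severity selector matches the given severity and anything more severe.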



incron, which is based on inotify, allows firing actions when changes are detected on a monitored file or directory.

Setup on Debian :

apt-get install incron

incron is started right after installation, but you can control it with : /etc/init.d/incron start|stop|restart|reload|force-reload|status

Configuration files :

  • /etc/incron.conf : main configuration file. It is entirely commented; it allows redefining configuration file locations and contents if the default settings (see below) don't fit your needs. You can leave it as is.
  • /etc/incron.d/ : directory containing the list of files and directories to monitor
  • /etc/incron.{allow,deny} : list of users allowed / denied to use incron.
    Don't forget to : echo kevin >> /etc/incron.allow
  • /var/spool/incron/kevin : Kevin's incron rules set with incrontab
There are 2 methods to create a new rule (aka table) :
  1. Manually create a new file in /etc/incron.d/
  2. Or with incrontab -e

inotify events :

Event name Triggered when... Returns name of affected file when monitoring directory
IN_ACCESS File was accessed (read)
IN_ATTRIB Metadata changed, e.g., permissions, timestamps, extended attributes, link count (since Linux 2.6.25), UID, GID, etc.
IN_CLOSE_WRITE File opened for writing was closed
IN_CLOSE_NOWRITE File not opened for writing was closed
IN_CREATE File/directory created in watched directory
IN_DELETE File/directory deleted from watched directory
IN_DELETE_SELF Watched file/directory was itself deleted
IN_MODIFY File was modified
IN_MOVE_SELF Watched file/directory was itself moved
IN_MOVED_FROM File moved out of watched directory
IN_MOVED_TO File moved into watched directory
IN_OPEN File was opened
IN_ALL_EVENTS Any event from the list above

Some variables that may be used to define actions :

  • $@ : full path of the monitored file / directory
  • $# : name of the file / directory on which the event occurred
  • $% : name of the event that occurred
  • $& : number of the event that occurred. (Only reference found to event numbers. Use with care.)
  • $$ : the $ character

New file in /etc/incron.d/ :

File format :
fileOrDirectoryToMonitor eventToDetect action
Field separator : one or more spaces
When watching a directory, incron can NOT detect events in subdirectories.
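For example (directory and script below are made-up) :

```text
/var/www/uploads IN_CLOSE_WRITE /usr/local/bin/process-upload.sh $@/$#
```

i.e. when a file written into /var/www/uploads is closed, run the script with the file's full path as argument.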

incrontab -e

Let's have some fun with incron : is it dumb enough to watch a directory (let's say : /tmp) for file creation, and create a file upon file creation ?
  1. Let's start a sandbox (just in case this goes into an infinite loop and fills up an entire partition) : qemu-system-x86_64 -m 256 -snapshot debian73x64.img.compressed -net nic -net tap,ifname=tap0,script=no,downscript=no
  2. Install incron : apt-get install incron
  3. Allow Kevin : echo kevin >> /etc/incron.allow
  4. Become Kevin : su - kevin
  5. Create a new incron rule : incrontab -e, then enter :
    /tmp IN_CREATE /tmp/$%_$#
  6. Then (fun starts here!) : touch /tmp/pang
  7. The incron rule will match, and files will be created in /tmp/ :
    • IN_CREATE_pang
    • ...
    • until (IN_CREATE){25}_pang
    This stopped after creating 25 files. So, in the end, incron is smart enough to avoid entering infinite loops


Configuration files (source) :

Same as MySQL : /etc/mysql/my.cnf


Elements :

  • memcache : standalone server
  • memcached : clusterized server
  • php5-memcached : client for memcached (looks like the name of this module has nothing to do with the server part. details)


To configure Exim4, you may run the "wizard" : dpkg-reconfigure exim4-config, or simply edit /etc/exim4/update-exim4.conf.conf :
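For example, the handful of dc_* variables for a "smarthost" setup (hostnames below are made-up; more variables exist) :

```text
# /etc/exim4/update-exim4.conf.conf (excerpt)
dc_eximconfig_configtype='smarthost'
dc_other_hostnames='myserver.example.com'
dc_local_interfaces='127.0.0.1'
dc_smarthost='smtp.example.com::587'
```

After editing, run update-exim4.conf, then reload Exim so the regenerated configuration is applied.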

Bind: the Berkeley Internet Name Domain

Everything below is pretty dusty and possibly quite obsolete !
  1. install it via rpmdrake or whatever tool you like. There is no special issue on that point. BIND 9.2.0 is provided on CD1 of the download edition of the Mandrake 8.1 distribution.
  2. setting up a DNS implies 2 different points of interest:
    • a resolver, configured via /etc/resolv.conf
    • a name server, using the files below:
      • /etc/named.boot : parameters for named and pointers towards domain databases used by this server.
      • /etc/named.hosts : domain zone file, converts name into IP.
      • /etc/named.rev : reverse domain zone file, converts IP into name.
      • /etc/named.local : converts the loopback IP (127.0.0.1) into localhost.
      • /etc/named.ca : "caches" resolved names.
      • /etc/named.conf : general configuration file. (Further details expected).
  3. to start it, simply type named
  4. to stop it, type /etc/rc.d/init.d/named stop
  5. it is possible to test named.conf by running the utility named-checkconf, which reports the errors found : no reply means no error
    If launched with no parameter, named-checkconf will check /etc/named.conf.
  6. you can use nslookup to simulate name requests to the name server:
    • to launch it ("-sil" option for silent mode, otherwise a warning message says nslookup is obsolete): nslookup -sil
    • at the > prompt, enter a machine name (either single or fully qualified)
    • leave with exit or CTRL-d.
  7. To list the entire content of the nameserver :
    host -l domain_name
    This can be run on any machine of the network.
  8. if security is set up to "medium" level, named is controlled by a utility called rndc, whose related files are
    • /etc/rndc.conf: configuration
    • /etc/rndc-confgen: generates keys for identification with named. Those keys are to be used in /etc/rndc.conf and /etc/named.conf.
    Commands :
    • Launching named is still the same command: named
    • to stop it, ask for its status or reload /etc/named.conf, you must pass orders through rndc :
      rndc stop | status | reload


Upstart is being obsoleted by systemd.

status of a daemon :
	initctl status ssh

stop Samba :
	initctl stop smbd

available actions : status | start | stop | restart | reload

list known daemons :
	initctl list

list running daemons :
	initctl list | grep start

config files : /etc/init/*

upstart events : http://ubuntuforums.org/showthread.php?s=3f0e10960bf7fb598136ff1a1308eedf&t=723896&page=2

the event "local-filesystems" is fired once local FS are mounted (?)



	- provides the cmd 'initctl'
	- upstart uses .conf files in the /etc/init directory as scripts instead of the ones in /etc/init.d


	- Jobs are defined in files placed in /etc/init; the name of the job is the filename under this directory without the .conf extension.
	  They are plain text files and should not be executable.
	- All job files must have either an exec or script stanza. This specifies what will be run for the job :
		- exec command + arguments : run a binary
		- script ... end script : a shell fragment, executed by /bin/sh

	- initctl list => list of current jobs
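A minimal job file along those lines (job name and binary are made-up) :

```text
# /etc/init/mydaemon.conf
description "my example daemon"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/sbin/mydaemon --no-daemonize
```

Then : initctl start mydaemon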

Tips on Postfix

Diagram of a mail system (source) :

How to bypass Name Resolution while configuring relayhost in /etc/postfix/main.cf ?

When sending mail to another MTA, Postfix always tries to resolve its name (DNS search of MX records). To workaround this, you can specify an IP address surrounded by "[]" in /etc/postfix/main.cf :
relayhost = []

Why does Postfix persist in resolving DNS names wrongly whereas /etc/resolv.conf is alright ?

This is because Postfix runs chrooted, and doesn't get its configuration for DNS resolution from /etc/resolv.conf but from /var/spool/postfix/etc/resolv.conf.
/etc/resolv.conf is copied into /var/spool/postfix/etc/resolv.conf when Postfix restarts.

How to send mail from a telnet session ?

  1. telnet smtp.example.com 25
  2. helo smtp
  3. mail from: bob@domain.com this can be counterfeit !
  4. rcpt to: alice@domain.com
  5. data
  6. subject: quack quack !
  7. from: john@... this can be counterfeit !
  8. to: jack@... this can be counterfeit !
  9. <0_/-
  10. .
  11. quit
mail from:, rcpt to: and data commands are known as the SMTP envelope. They are SMTP commands used to route the mail to destination. (See the RFC5321 (update of RFC821) for complete details).
The from: and to: fields are the mail headers. They only convey mail metadata and can easily be forged to send spam (See the RFC5322 (update of RFC822) and RFC2045 (and updates) for complete details).