NFS - The Network File System

rpcbind.socket failed to listen on sockets: Address family not supported by protocol

Situation

  1. systemctl start nfs-server takes forever (and eventually times out)
  2. journalctl -u rpcbind.socket reports :
    oct. 17 12:27:16 myServer systemd[1]: rpcbind.socket failed to listen on sockets: Address family not supported by protocol
    oct. 17 12:27:16 myServer systemd[1]: Failed to listen on RPCbind Server Activation Socket.
    oct. 17 12:27:16 myServer systemd[1]: Dependency failed for NFS status monitor for NFSv2/3 locking..
    oct. 17 12:27:16 myServer systemd[1]: Job rpc-statd.service/start failed with result 'dependency'.
    oct. 17 12:27:16 myServer systemd[1]: Starting RPCbind Server Activation Socket.
    oct. 17 12:27:16 myServer systemd[1]: Starting Kernel Module supporting RPCSEC_GSS...
    oct. 17 12:27:16 myServer systemd[1]: Starting Preprocess NFS configuration...
    oct. 17 12:27:16 myServer systemd[1]: Started Kernel Module supporting RPCSEC_GSS.
    oct. 17 12:27:16 myServer systemd[1]: Started Preprocess NFS configuration.

Details

This looks very similar to the A dependency job for rpc-statd.service failed. issue described below.
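
To confirm that disabled IPv6 is the likely culprit, you can check the kernel state directly (a quick check, assuming the usual sysctl key) :
sysctl net.ipv6.conf.all.disable_ipv6
ip -6 addr show
A value of 1 (and no IPv6 address listed) means IPv6 is disabled at the kernel level.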

Solution

  1. Use the same solution
  2. After that, you'll notice :
    journalctl -u rpcbind.socket
    -- Logs begin at mar. 2017-10-17 12:38:48 CEST, end at mar. 2017-10-17 12:41:14 CEST. --
    oct. 17 12:38:49 myServer systemd[1]: [/usr/lib/systemd/system/rpcbind.socket:6] Failed to parse address value, ignoring: [::]:111
    oct. 17 12:38:51 myServer systemd[1]: Listening on RPCbind Server Activation Socket.
    oct. 17 12:38:51 myServer systemd[1]: Starting RPCbind Server Activation Socket.
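
If the remaining Failed to parse address value warning bothers you, a possible workaround (a sketch, assuming a RHEL 7-style rpcbind.socket ; adjust the socket paths to your distribution) is a systemd drop-in making rpcbind listen on IPv4 only :
systemctl edit rpcbind.socket
then, in the editor :
[Socket]
ListenStream=
ListenDatagram=
ListenStream=/var/run/rpcbind.sock
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
finally :
systemctl daemon-reload; systemctl restart rpcbind.socket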

How to optimize NFS performance ?

This is a work in progress.

NFS performance is closely related to RPC performance. Since RPC is a request-reply protocol, it exhibits very poor performance over wide area networks. NFS performs best on fast LANs.

Server-side :

  • either increase the number of threads for nfsd in /etc/nfs.conf (sources : 1, 2) :
    
    [nfsd]
    threads=8
    
  • or in /etc/sysconfig/nfs (source) :
    # Number of nfs server processes to be started.
    # The default is 8.
    RPCNFSDCOUNT=16
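
These settings can also be tested at runtime before editing any file (a sketch ; rpc.nfsd and the proc file below come with nfs-utils and the kernel NFS server) :
# current number of nfsd threads :
cat /proc/fs/nfsd/threads
# raise it to 16 until the NFS server is restarted :
rpc.nfsd 16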
    

Client-side :

  • Tune rsize and wsize (respectively "read size" and "write size") (sources : 1, 2, 3 ; see the example after this list)
  • mount/export options : consider async (note : since nfs-utils-1.0.1, exportfs defaults to sync, so async must be requested explicitly. source)
  • these mount options have no effect on NFS mounts (source : man 5 nfs, search for atime/noatime) :
    • atime / noatime
    • diratime / nodiratime
    • relatime / norelatime
    • strictatime / nostrictatime
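
As an illustration of the rsize / wsize tuning above (a sketch ; the 1 MB values are just a common starting point, and client and server will negotiate them down if needed) :
nfsMountPoint='/mnt/nfs'; mkdir -p "$nfsMountPoint"; mount -t nfs -o rsize=1048576,wsize=1048576 nfsServer:/path/to/export "$nfsMountPoint"
# check the values actually negotiated :
nfsstat -m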

A dependency job for rpc-statd.service failed.

Situation

The full error message is :
A dependency job for rpc-statd.service failed. See 'journalctl -xe' for details.
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

Details

This happened on a RHEL 7 server, shortly after disabling IPv6.

It looks like, when IPv6 has been disabled via sysctl configuration only, it is disabled at the kernel level but not at the GRUB level, so it is not disabled completely. systemd will therefore still attempt to listen on port 111 on any IPv6 address ([::]:111) during boot (source, similar info from Red Hat documentation).

Solution

  1. Create the sysctl configuration disabling IPv6, then rebuild the initramfs so the setting applies early at boot :
    ipv6ConfFile='/etc/sysctl.d/ipv6.conf'
    if [ -f "$ipv6ConfFile" ]; then
        echo "'$ipv6ConfFile' already exists, not touching it."
    else
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' > "$ipv6ConfFile"
        sysctl -p "$ipv6ConfFile"
        dracut -vf
        echo 'Now, please reboot.'
    fi
  2. reboot
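
To disable IPv6 completely at the GRUB level instead, as mentioned in the details above (a sketch, assuming GRUB2 on RHEL 7) :
# in /etc/default/grub, append ipv6.disable=1 to the existing kernel parameters :
GRUB_CMDLINE_LINUX="... ipv6.disable=1"
# then regenerate the GRUB configuration and reboot :
grub2-mkconfig -o /boot/grub2/grub.cfg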

mount.nfs: access denied by server while mounting nfsServer:/path/to/export

Situation

On nfsServer :

/etc/exports :
/path/to/export		nfsClient(rw)

On nfsClient :

nfsMountPoint='/mnt/nfs'; mkdir -p "$nfsMountPoint"; mount -t nfs nfsServer:/path/to/export "$nfsMountPoint"
returns :
mount.nfs: access denied by server while mounting nfsServer:/path/to/export

Solution

On nfsServer (source) :

This is a quick'n'dirty fix, possibly raising security concerns. Do not use as-is in production.

  1. Edit /etc/exports :
    /path/to/export		nfsClient(rw,no_root_squash)
  2. exportfs -rv

On nfsClient (source) :

nfsMountPoint='/mnt/nfs'; mkdir -p "$nfsMountPoint"; mount -t nfs -o v3 nfsServer:/path/to/export "$nfsMountPoint"
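
To check which NFS version was actually negotiated once mounted (nfsstat comes with nfs-utils) :
nfsstat -m
# or :
grep nfs /proc/mounts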

NFS operations

getattr
setattr
lookup
The NFS protocol does not refer to files and directories by name or by path; it uses an opaque binary value called a file handle. In NFSv3, this file handle can be up to 64 bytes long; NFSv4 allows them to be even larger.
A file's file handle is assigned by an NFS server and is supposed to be unique on that server for the life of that file (it is usually derived from an i-node number or a disk block address). Clients discover the value of a file's file handle by doing a lookup operation, or by using part of the results of a readdirplus operation. A special procedure while mounting an NFS file system determines the file handle of the file system's root directory. (source)
The Linux NFS client caches the result of all NFS lookup requests (referred to as "positive lookup" if the requested directory entry exists on the server, and "negative lookup" otherwise). To detect when directory entries have been added or removed on the server, the Linux NFS client watches a directory's mtime. If a change is detected, the client drops all cached lookup results for that directory. Since the directory's mtime is a cached attribute itself, it may take some time before a client notices it has changed. (source)
When receiving a MNT request from an NFS client, rpc.mountd checks both the pathname and the sender's IP address against its export table. If the sender is permitted to access the requested export, rpc.mountd returns an NFS file handle for the export's root directory to the client. The client can then use the root file handle and NFS lookup requests to navigate the directory structure of the export. (source)
Large directories also adversely impact NFS performance. Directories are searched linearly during an NFS lookup operation : the time to locate a named directory component is directly proportional to the size of the directory and the position of a name in the directory. Doubling the number of entries in a directory will, on average, double the time required to locate any given entry. Furthermore, reading a large directory from a remote host may require the server to respond with several packets instead of a single packet containing the entire directory structure. (source)
access
readlink
read
The client must specify a file handle and starting offset for every call to read. Two identical calls to read will yield the exact same results. If the client wants to read further in the file, it must call read with a larger offset. (source)
write
create
mkdir
symlink
mknod
remove
rmdir
rename
link
readdir
readdirplus
fsstat
fsinfo
pathconf
commit
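
The per-operation counters of a live client or server can be watched with nfsstat (from nfs-utils), which breaks RPC traffic down by the operations listed above :
# client-side counters (getattr, lookup, read, write, ...) :
nfsstat -c
# server-side counters :
nfsstat -s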

NFS

Presentation

NFS allows sharing a directory with any computer over the network. The shared directory can be mounted on a client system and be used like any local directory.

NFS relies on several daemons (OBSOLETE : the exact list depends on the NFS version, v2/v3/v4...) :

  • nfsd
  • mountd
  • lockd
  • statd

Setup

Server install on CentOS 6 (http://www.howtoforge.com/setting-up-an-nfs-server-and-client-on-centos-5.5) :
yum install nfs-utils.x86_64 nfs-utils-lib.x86_64
chkconfig --levels 235 nfs on
/etc/init.d/nfs start
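
To check that the RPC services registered correctly after starting the server (rpcinfo comes with the rpcbind package) :
rpcinfo -p localhost
Expect entries for portmapper, mountd, nfs and nlockmgr.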

Client install on CentOS 6 :
yum install nfs-utils.x86_64 nfs-utils-lib.x86_64

Configuration

In /etc/exports, these lines :
  • pathName workstation(rw)
  • pathName workstation (rw)
have completely different meanings. Indeed, each line defines the permissions of a shared path in space-separated blocks of host+permission :
  • the 1st (no space between workstation and (rw)) means that the only client that can access the share is workstation, with read-write access.
  • the 2nd (space between workstation and (rw)) means that :
    • workstation can access the share with default rights (read only)
    • the rest of the world has read-write access to the shared directory. Specify *(rw) instead of simply (rw) when you REALLY mean to give rights to "the rest of the world".

Sample /etc/exports :

/filer/projects/isbadmmvn	isbadmmvn(rw)	*(ro)
/filer/projects/isbadmci	isbadmci(rw)		*(ro)

Once exports are defined, make them effective with exportfs option (see the table below). Don't forget that the daemons must be running correctly before trying to share anything.

Option	Effect
-a	export all directories listed in /etc/exports
-i	ignore /etc/exports completely and export only the directories given on the command line
-r	re-export /etc/exports after it has been updated
-v	verbose mode
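
For instance, to export a directory on the fly without touching /etc/exports (a sketch ; host and path are examples) :
exportfs -o rw,sync nfsClient:/path/to/export
# and to re-read /etc/exports after editing it :
exportfs -rv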

You can list active exports / mounted exports with :

less /proc/fs/nfs/exports
Looks like this is not updated in real time ; running exportfs -v (or exportfs with no argument) also lists the current export table.

Commands

Start / stop / get status / restart the NFS daemon :

  • systemd-style : systemctl start|stop|status|restart nfs-server.service
  • SysVInit-style :
    • /etc/init.d/nfs start|stop|status|restart|...
    • /etc/init.d/nfsserver start|stop|status|restart|... : for kernel-based NFS server.

List exports of an NFS server :

  • /sbin/showmount -e nfsServer (as a regular user, /sbin may not be in your PATH, hence the full path)
  • or, as root : showmount -e nfsServer

showmount comes with the nfs-common (Debian-style) / nfs-utils (Red Hat-style) package.

Mount an exported file system :

  • manual mount : mount nfsServer:absolutePathOfSharedDirectory mountPoint
  • with /etc/fstab for easier or automatic mount (source : CentOS doc) :
    nfsServer:/path/to/shared/directory	/mount/point	nfsType	options
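
A concrete /etc/fstab line might look like this (a sketch ; the options shown are typical examples, not mandatory) :
nfsServer:/path/to/shared/directory	/mnt/nfs	nfs	defaults,_netdev	0	0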
If, on the client side, you get an error message like :

mount: RPC: Timed out

it could mean that the current client is not declared in /etc/exports on the server side. Moreover, it seems that trying to mount an NFS share from a non-declared client can hang a daemon, since the server then no longer replies to showmount -e ...
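
Checking whether the server's RPC services are reachable at all from the client can help narrow this down (rpcinfo comes with the rpcbind package) :
rpcinfo -p nfsServer
# or probe the NFS service specifically :
rpcinfo -t nfsServer nfs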