With Linux as my daily primary environment, I need to be able to consume CIFS shares in a variety of different environments. Systemd has some amenities that let you keep using the conventional file system table while it coordinates mounting on your behalf, without user intervention. Some assumptions:
- You are running Arch Linux, or some distribution that uses systemd without mucking too deeply in its internals
- You understand how to manage mount points via the fstab
- You understand the basics of networking
- You understand the basics of user identification
As I move around to different networks I prefer to be able to dynamically mount and unmount CIFS targets transparently, without manual intervention. Many times I'm connecting to a network with CIFS targets over VPN-type services, more often than not via sshuttle. This method of managing my CIFS targets works really well for me, transparently, as I use my desktop environment's file manager. You'll want to avoid the CIFS implementation in Nautilus, as it uses the slow-performing GVFS.
Distilled, useful information can be found on the ArchWiki.
Step 1: Identification of self
Understand who you are on your own system by typing `id` in the terminal, for example:

```
[agd@enoch ~]$ id
uid=1000(agd) gid=1000(agd) groups=1000(agd),10(wheel)
```
Remember both your username and primary group name; mine above are agd in both cases, and yours will very likely follow the same pattern. Linux identifies users and groups by number, and you need to ensure you're mapping these concepts properly when you do mounts. You might be uid=1000 on your own system, but on another system you're going to have a completely different uid. The fstab entry can force-map these concepts.
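If you just want the two values the fstab entry needs, `id` can print them directly (the agd name above is from my machine; substitute your own):

```shell
# id -un prints the username, id -gn prints the primary group name;
# these are the values the fstab uid=/gid= options will use.
user="$(id -un)"
group="$(id -gn)"
echo "uid=${user},gid=${group}"
```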
Step 2: Create an fstab entry
The fstab manages all of your mounted file systems. Edit with caution: you can muck up your install and have to fix it. You'll want to add an entry that looks like this for a CIFS share:
```
//<address>/<sharename>  /mnt/<server>/<sharename>  cifs  noauto,nofail,_netdev,x-systemd.automount,credentials=/etc/smb.<server>.<username>,uid=<username>,gid=<groupname>,forceuid,forcegid  0 0
```
Breaking that down:
- `//<address>/<sharename>`: the server and share you're trying to mount. More often than not I use the server's IP address directly, because when I've hopped around in many simultaneous networks sometimes the DNS doesn't resolve properly.
- `/mnt/<server>/<sharename>`: the location you want to mount to. It is advisable to use some sane convention within `/mnt`.
- `cifs`: the mount type... obvious in this case.
- `noauto,nofail,_netdev,x-systemd.automount`: the trick to getting systemd to create an automount unit on our behalf, plus some reasonable settings for remote filesystems so your machine can shut down properly.
- `credentials=/etc/smb.<server>.<username>`: the location of your credentials text file; more on that in a moment.
- `uid=<username>,gid=<groupname>,forceuid,forcegid`: setting and forcing the mapping of your local uid and gid onto the CIFS target. Ideally, on the server side, the administrator is forcing specific users and groups in the share configuration.
- `0 0`: corresponds to dump and fsck, which you should read about in depth, but for remote file systems using zero is fine.
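Filled in with hypothetical values for illustration (a server at 192.168.1.50 that I call fileserver, a share named media, and my agd user and group), the entry might look like:

```
//192.168.1.50/media  /mnt/fileserver/media  cifs  noauto,nofail,_netdev,x-systemd.automount,credentials=/etc/smb.fileserver.agd,uid=agd,gid=agd,forceuid,forcegid  0 0
```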
Step 3: Create credentials file
Next we need to create the credentials file referenced in the fstab entry. The file should have two key=value pairs:
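A minimal sketch, using the same hypothetical path and account as above (mount.cifs reads `username=` and `password=`, and also accepts an optional `domain=` line):

```
username=agd
password=s3cret
```

Since the password is stored in plain text, restrict the file to root with `chmod 600 /etc/smb.<server>.<username>`.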
Step 4: Reboot
Reboot to have systemd examine your `/etc/fstab` and set up the mount folder structure on your behalf. You can likely force this through `systemctl daemon-reload`, but a reboot is an assured method.
Step 5: Attempt
Now when you navigate to those folders in `/mnt`, if you're able to reach the server, the share will automatically mount transparently as you request the resource with your file manager.
Step 6: Force it, if it doesn’t work
More often than not you'll be able to get the automount to trigger when you access the mountpoint; however, if it fails you can force it via:

```
systemctl start mnt-<NAME_OF_MOUNTPOINT>.mount
```
You can list the systemd-generated units with `systemctl list-units | grep mnt`.
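The unit name systemd generates comes from the mount path: drop the leading slash and replace the remaining slashes with dashes (for paths with special characters, `systemd-escape --path --suffix=mount` does this authoritatively). A small bash sketch of the simple case, using the hypothetical path /mnt/fileserver/media:

```shell
# Derive the .mount unit name for a path containing only ordinary
# characters; use systemd-escape for the general case.
mount_unit() {
  local path="${1#/}"        # drop the leading slash
  echo "${path//\//-}.mount" # remaining slashes become dashes
}

mount_unit /mnt/fileserver/media   # → mnt-fileserver-media.mount
```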