How to manage auto linger

Hi all,

Description

I am encountering an issue on my server where users are attempting to mount SMB shares via GIO, but it doesn’t seem to work as expected.

Expected behavior:

$ dbus-launch bash
$ gio mount smb://XXXX.YY/path/to/smb
$ ls /run/user/401775/gvfs
directory1  directory2

Result

When attempting to mount, I observe that if “linger” is not enabled for the user, the mount does not work correctly. Specifically, nothing appears in ~/.gvfs or /run/user/${UID}/gvfs.

I can only get access to my files by running gio commands (e.g. gio list).


When I enable linger, I get the expected result.

Since I have hundreds of users, I would like to automate this process, possibly with systemd user services. However, I am hesitant to enable “linger” for all users, since that setting allows users who are not logged in to keep long-running services running.
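For context, the kind of per-user unit I was imagining looks roughly like this (the unit name is made up, the share URL is the placeholder from above, and I have not verified it works without linger):

```
# ~/.config/systemd/user/gvfs-smb.service  -- hypothetical example
[Unit]
Description=Mount an SMB share via GIO

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/gio mount smb://XXXX.YY/path/to/smb
ExecStop=/usr/bin/gio mount -u smb://XXXX.YY/path/to/smb

[Install]
WantedBy=default.target
```

Each user would then enable it with systemctl --user enable --now gvfs-smb.service.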

Any suggestions or guidance would be greatly appreciated. :pray:

Thanks in advance!

How? Was it loginctl enable-linger ${username} as root?

In other words, you want to enable-linger on session start and disable-linger on logout,
or always enable for select users? (I don’t know how to do the former.)
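If it ends up being “always enable for select users”, it can at least be scripted. A minimal sketch, assuming the allowed users are collected in a group called smbusers (the group name is an assumption):

```shell
#!/bin/bash
# Sketch: enable linger for every member of an assumed "smbusers" group.
# Must be run as root.

group_members() {
    # Field 4 of the getent group output is the comma-separated member list.
    getent group "$1" | cut -d: -f4 | tr ',' ' '
}

for u in $(group_members smbusers); do
    loginctl enable-linger "$u"
done
```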


I’m not sure linger is the correct solution here. I’m kind of thinking of perhaps autofs might be more relevant in this situation?

Yes, I am root on the system, but linger is a new concept for me, and I don’t understand why it works… witchcraft :crystal_ball:

That could be a good idea.

However, we allow each user to mount their own SMB shares, so we don’t have a centralized SMB credentials file. Each user manages their own mount and credentials, like this:

$ dbus-launch bash
$ gio mount smb://XXXX.YY/path/to/smb < .credentials # credentials file is optional
$ ls /run/user/401775/gvfs
directory1  directory2

For example, if user1 wants to mount smb://share/best_project, and user2 wants to do the same, they will each use their own personal credentials file (since it’s based on LDAP).

How would you suggest managing this with autofs?

Enabling linger leaves processes running when the user logs out. As to why it works for your mount points, that does seem strange, since linger shouldn’t be needed for that: if the user logs in, then they are logged in and the mount point should mount. I’ve usually used linger when running Podman containers as a non-root user, where I needed it to ensure the processes kept running after I logged out. The problem with enabling linger for everyone is unnecessary processes left running after logout, which will no doubt affect CPU/RAM usage.

As for autofs, my mistake; theoretically it should have been possible, but after some googling I found out it won’t do what you want. Then I found people suggesting FUSE, which led me to people mounting with GVFS, which in turn led me to gio, which you are already using.

How about ‘mount’? Naturally, the generic

mount -t cifs [-o options] //myServerIpAdress/sharename /mnt/myFolder/

requires privileges, but mount -t cifs effectively calls mount.cifs (from the cifs-utils package), so sudo rules could be granted for /usr/bin/mount.cifs.

This is what I do. It’s a little more elaborate and more user-permission-specific than your needs, but it works well for me.

Outline for user mounts of samba shares.

On client
Create group sharemount and add users allowed to mount shares.

Create file 01_sharemount in /etc/sudoers.d/ with perms 440 and this
content:
%sharemount ALL=NOPASSWD: /usr/sbin/mount.cifs, /bin/umount

Create a credentials file in the user’s home directory,
/home/<username>/.samba/.<username>, with directory perms 700 and file
perms 600.
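For reference, the credentials file uses the standard mount.cifs key=value format; the domain line is optional and all values are placeholders:

```
username=<username>
password=<password>
domain=<workgroup>
```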


On server

Create corresponding users with passwords but no login access.
Create user mountable directories and entries in smb.conf

Now back on clients create the mount scripts in the users bin
directory with directory and file perms of 750

The mount script "sharemount":

#!/bin/bash
# sharemount <sharename 1> <sharename 2> ...
# This script tests for <servername> on the network and then mounts the
# given shares or issues an error message.

if ping -w 2 -c 1 <servername> > /dev/null && smbclient -NL <servername> > /dev/null; then
    echo "<servername> is present"
    for share in "$@"; do
        # Create the mount point if it does not already exist
        mkdir -p ~/"$share"

        sudo mount.cifs -o "username=$USERNAME,uid=$USERNAME,gid=$USERNAME,credentials=/home/$USERNAME/.samba/.$USERNAME" "//<servername>/$share" ~/"$share"
    done
else
    echo "The server <servername> could not be reached. Is it on?"
fi

#################end of script###############

Now the umount script "umountsmb"

#!/bin/bash
# umountsmb <share 1> <share 2> ...
# This script un-mounts network shares and then removes the mount points.

for share in "$@"; do
    if [ -d ~/"$share" ]; then
        sudo /bin/umount "/home/$USERNAME/$share"
        # Give the umount a moment to settle before removing the mount point
        sleep 2
        rmdir "/home/$USERNAME/$share"
    fi
done

################end of umountsmb########################


I then create panel launchers for sharemount and umountsmb with the specific user shares as targets.

I gave up creating mount points for shares in fstab as it just became cumbersome to maintain.
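For reference, the kind of fstab entry I gave up on looked something like this (all names are placeholders), one line per user and share:

```
//<servername>/<share>  /home/<username>/<share>  cifs  credentials=/home/<username>/.samba/.<username>,uid=<username>,gid=<username>,noauto,user  0  0
```

With hundreds of users, maintaining one such line per share in a system-wide file quickly gets out of hand, which is why I moved to the per-user scripts above.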
