I’m having issues running mkfs as root without using the su (or sudo) command, which was never an issue on CentOS 7. I know the problem here is my path, but I can’t for the life of me figure out the fix, or why this is different from CentOS 7. I’m sure this is something simple, but everything I’ve tried has not worked.
I’m using telnet to connect to a Rocky Linux 9 server running the Rocket D3 database, which hosts my application. My application occasionally needs to run a Linux command. When I get to the command line, my prompt is:
sh-5.1#
If I run whoami, I get root.
If I run id, I get uid=0(root) gid=0(root) groups=0(root).
If I echo $PATH, I get /usr/local/bin:/usr/bin.
If I run /sbin/mkfs -t ext3 /dev/sdb1, I get:
mkfs: failed to execute mkfs.ext3: No such file or directory
Now, the problem is the path: mkfs itself runs (I invoked it by its full path), but it can’t find mkfs.ext3, which also lives in /sbin/ and so isn’t on that PATH. The moment I type su at the sh-5.1# prompt, I get to an actual root prompt and the path/groups change.
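To illustrate what I think is happening (my own stand-in test, not the real mkfs): the mkfs wrapper apparently looks up the type-specific helper, mkfs.ext3, through $PATH, so the wrapper can run by full path while the helper lookup still fails. Here mkfs.fake is a made-up stub standing in for mkfs.ext3:

```shell
# Stub command in a temp dir, standing in for /sbin/mkfs.ext3.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho formatted\n' > "$tmp/mkfs.fake"
chmod +x "$tmp/mkfs.fake"

# With the narrow PATH from my sh-5.1# session, the lookup fails...
PATH=/usr/local/bin:/usr/bin command -v mkfs.fake || echo "not found"
# → not found

# ...and succeeds once the stub's directory is prepended.
PATH="$tmp:/usr/local/bin:/usr/bin" command -v mkfs.fake
```

So a command can be perfectly executable on disk and still be “No such file or directory” to anything that resolves it by name through $PATH.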
If I run whoami, I still get root.
If I run id, I get uid=0(root) gid=0(root) groups=0(root),4(adm).
If I echo $PATH, I get /usr/local/sbin:/root/.local/bin:/root/bin:/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin.
If I run /sbin/mkfs -t ext3 /dev/sdb1, it works fine.
The issue here is that my application software would require a huge rewrite to find every place where it needs something from /sbin/. It is much easier to simply give root the /sbin/ path it should already have… and technically already does. So, I’m confused as to how to change this. I tried export PATH=/sbin:$PATH but had no luck.
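For reference, here is roughly what I tried at the sh-5.1# prompt, and it does work within that one shell and its children, which makes me suspect (an assumption on my part, not something I’ve verified) that each D3 command is spawning a fresh sh that never sees my export:

```shell
# export changes PATH for this shell and anything it starts...
export PATH=/sbin:$PATH

# ...so a child shell sees /sbin first,
sh -c 'echo "$PATH"'
# but a brand-new session started elsewhere would not.
```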
I’m confused as to how the same “user” can have two different paths and two different groups. I realize I could probably spend a lifetime learning everything about $PATH, environments, profiles, bashrc, etc., so I’m hoping someone who already has that knowledge can impart some wisdom here as to how to correct the path.