Totally out of disk space on Rocky Linux 9.6

Hello Rocky Linux Forums,

I feel rather desperate, as I seem to have completely run out of disk space on everything root-related. I’m not an IT / truly Rocky Linux savvy person, so I’m slightly panicking!

The symptoms: the system did start up, but no longer arrived at the login screen. Instead, I see the Rocky Linux logo and a spinning circle.

I have tried the Rocky Linux 9.6 install/boot/rescue USB stick, got into bash 5.1 (the ‘regular way’ on the stick didn’t work), and tried to ‘hook onto the root’ of my system from there. But instead, I received an error message that mounting my system’s root wasn’t possible, because… the disk (or root?) was full!

Then an acquaintance of mine, who is more Linux savvy than I am, advised me NOT to use the bootable stick, but to boot directly into the Linux text environment on my very own system. To that end, I Googled and found that I had to boot into GRUB2, press e, alter the script there a tiny bit, and press Ctrl-X; and indeed, I then got the text environment with the text login screen of my very own system. So I logged in and tried to follow along these lines,

https://www.tecmint.com/fix-full-root-partition-linux/

but whenever I try to execute a ‘clean up command’, I get this message,

which I do not understand, since admittedly I’m a layperson and not an IT one.

My acquaintance advised me to cd /var/log and then enter ls -ltra , resulting in

From here, I no longer have a clue how to SAFELY clean out my biggest files in /var/log… and whether even that will work, and not result in the error message above?!

Could someone please help me with this highly technical problem and explain how to go about it from here? I use my computer as a video production system, and now I will miss my deadline :frowning:

Any help and ideas are highly welcome and appreciated!

Thank you for your attention, and help in advance!

Regards from Victor van Dijk.

Hi @victorvandijk!
When the root filesystem is 100% full, nothing will work; even the rescue system can fail to mount it. So you first need to create a little breathing space by deleting or truncating only the larger, non-critical files in /var/log (and/or the DNF cache), so that you can then run the proper cleanup tools (dnf clean all, dnf autoremove, journal vacuuming, etc.).

What I recommend, from my point of view: go into the log directory (/var/log)
and run du -sh * there. From there, it’s up to you what you want to delete.
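As an illustration of that emergency cleanup, here is a minimal sketch. The function name free_root_space, the rotated-log filename patterns, and the 100M journal limit are my own assumptions, not a standard tool; always check with du first which files are actually large on your system.

```shell
# Sketch of an emergency cleanup to regain breathing space on /.
# Run as root. Removes only rotated logs, the DNF cache, and old
# journal entries; the live log files themselves are left alone.

free_root_space() {
    logdir="${1:-/var/log}"
    # Rotated logs look like messages-20250101 or maillog.gz; the live
    # files (messages, secure, ...) have no suffix and are kept.
    rm -f "$logdir"/*.gz \
          "$logdir"/*-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]
    dnf clean all                    # drop the DNF package cache
    journalctl --vacuum-size=100M    # shrink the systemd journal to ~100 MB
}

# free_root_space        # run as root, then check the result with: df -h /
```

Once df -h / shows some free space again, the regular tools (dnf autoremove and friends) should work normally.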

All the best!


Hello @Fekthis ,

Thank you very much for your kind help!

Could you kindly clarify what you mean by ‘go to the log file’? The little I know is that in a Linux system *everything* is a file, but that’s about it; my knowledge of the Linux file system ends there… du = disk usage, isn’t it? And what does the -sh switch do? When I run this command with the switch you mentioned, I get this as a result:

Toegang geweigerd (= access denied)
du: cannot read directory “./…”

And further, what are those big files named ‘messages-[date]’? Are they safe to remove?

Thanks again!

Normally, if I have to clean up volatile files from /var, I use the disk usage command to narrow it down.

du -sh /var/* | sort -h

Look for the largest subdirectory. Then, for example, you can keep drilling down.

du -sh /var/log/* | sort -h

Once you’re confident you’ve located the offending files

either remove them

rm -fr /var/log/yourhugedirectory_or_file

Or if you’re unsure about removing the file

cat /dev/null > /var/log/yourhugefile, which will empty it out.

Keep using df -h until you’ve got it pruned down. If you do have massive log files, it may be indicative of a larger problem that’s creating them. Normally, if you have a decent amount of space on a drive, like 25G, filling up shouldn’t be a problem unless you’re storing very large files.

I’m doing this from memory, so hopefully that’ll get you going. It’s not a bad idea to fire up ChatGPT and query it; it’s usually very good about giving you options for this sort of scenario.


@anonamoose I’m extremely grateful for your great, clear, concise help! With ‘your’ commands, I could empty out a massive CUDA subdirectory, an old version of it that I was unaware I still had on my system! After that, my system started to act ‘normal’ again :grinning_face:

Just one more question: how do I go about preventing this situation from ever happening again? Are there any tips? Is there any automation or monitoring possible to prevent it?

Again, THANK YOU :heart:

Greetings from Victor.


Glad I could help, Victor. I can’t say specifically how to prevent your particular situation; often these things are just a one-off. I’d keep an eye on the directory, and it may be just fine after getting rid of the old files. If it does start ballooning, I’d look at the logs pertaining to CUDA and try to see why it’s happening.

With logs in particular, you can control log growth with “logrotate”. You can use it to compress old logs and to specify how often they roll over and how many rotated files are kept. For the specifics you can ask ChatGPT, do an internet search for logrotate, or read its manual page: man logrotate. Here’s a blog post for logrotate, if that was your issue: Logrotate
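For illustration, here is a sketch of what such a logrotate drop-in could look like. The file name myapp and the path /var/log/myapp.log are placeholders, and the values are example choices, not recommendations; saved as /etc/logrotate.d/myapp, it would take effect on the next logrotate run.

```text
# Hypothetical logrotate drop-in: save as /etc/logrotate.d/myapp.
# /var/log/myapp.log is a placeholder; the values are example choices.
/var/log/myapp.log {
    # rotate weekly, or sooner if the file passes 100 MB
    weekly
    maxsize 100M
    # keep at most 4 old copies, gzipped
    rotate 4
    compress
    # no error if the log is missing; skip rotation when it is empty
    missingok
    notifempty
}
```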

Anyway, good luck with your system. Glad it’s back in working order now.


Yes. For most basic commands there is a manual that can be read. For ‘du’: man du
The du -sh is the same as du --summarize --human-readable

You get the “no permission” errors because you run as a regular user, who indeed lacks access to some files. There is a command, sudo, that some users can use to run other commands as admin (root).

There is also an alternative to -s in du: --max-depth=N (short option -d N).
Furthermore, there is the option --one-file-system (short option -x) too.

All those together:

sudo du -hxd 1 /

Which gives on my system:

1.8G	/var
16K	/lost+found
8.0K	/mnt
4.0K	/media
4.0K	/srv
4.0K	/afs
37M	/etc
250M	/root
15G	/usr
17G	/

With depth 2 the output starts like:

$ sudo du -hxd 2 /
4.0K	/var/local
4.0K	/var/empty
777M	/var/cache
4.0K	/var/opt
4.0K	/var/adm
264K	/var/spool
4.0K	/var/nis
4.0K	/var/games
319M	/var/log
4.0K	/var/preserve
4.0K	/var/account
12K	/var/www
73M	/var/tmp
627M	/var/lib
4.0K	/var/ftp
12K	/var/db
4.0K	/var/yp
12K	/var/kerberos
1.8G	/var
16K	/lost+found
4.0K	/mnt/test
8.0K	/mnt
4.0K	/media
...

The part for /var one could get also with:

$ sudo du -hxd 1 /var
4.0K	/var/local
4.0K	/var/empty
777M	/var/cache
4.0K	/var/opt
4.0K	/var/adm
264K	/var/spool
4.0K	/var/nis
4.0K	/var/games
319M	/var/log
4.0K	/var/preserve
4.0K	/var/account
12K	/var/www
73M	/var/tmp
627M	/var/lib
4.0K	/var/ftp
12K	/var/db
4.0K	/var/yp
12K	/var/kerberos
1.8G	/var

The ls can sort output too. For example: ls -lhSr
That should show human-readable sizes, sorted by size, ascending (with largest last).


What does lsblk -o name,fstype,size,fsavail,fsuse%,mountpoints show?


PS. It is usually possible to copy-paste text from the terminal into a forum post (and format it with code tags, as I’ve done above). That is much more convenient than bitmap images.


Hi @victorvandijk
Please forgive me for not being more detailed with you so I could help you better; as a rule, I try to give answers that are as easy as possible to understand. I should have been a little more detailed.
Peace! :victory_hand:


The one I use for a really quick overview of all space on all drives (works as a standard user):
df -h
This shows columns for size, used, available, and then ‘Use%’.
If ‘Use%’ shows 90%, the filesystem is getting full.
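To automate that df -h check, here is a small sketch; the 90% threshold, the function name check_disk_usage, and the cron suggestion are my own assumptions, not a standard tool. It prints a warning for any local filesystem at or past the threshold.

```shell
# Warn about any local filesystem whose Use% is at or above a threshold.
# A sketch, not a polished tool; could run by hand or from an hourly cron job.

check_disk_usage() {
    threshold="${1:-90}"
    df -P -l | awk -v t="$threshold" 'NR > 1 {
        use = $5
        sub(/%/, "", use)               # strip the % sign from the Use% column
        if (use + 0 >= t)
            printf "WARNING: %s is %s%% full (%s)\n", $6, use, $1
    }'
}

check_disk_usage 90
```

From cron, you could mail yourself the output so you hear about a filling disk before / is 100% full again.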


First mistake: Googling instead of asking on dedicated, specific forums, or reading the official docs such as those from Red Hat.

Second: my guess is that the message indicating the disk was full referred to the filesystem on the USB stick.

Third: have you never heard of the “man” command? If not, just run “man man” in a terminal.

Have fun

:rofl: Thank you for your hilarious reply :rofl:

This issue/situation had also shown up on another distribution’s forum, from which I’m copying my response…

The ‘actual’ issue is that apparently you only have a ‘root’ filesystem for the O/S and the files that are written to it (which are probably what filled up ‘/’)…

Such that (and some would probably say this is ‘old school’ and not necessary anymore) you should have separate partitions for ANY filesystem that gets written to; otherwise runaway processes, unknowing users, or vendor applications will continually cause this issue…

HERE is what I’ve been using for the past three decades (and this is a workstation config, with my server builds looking the same), such that, if you can perform a ‘fresh’ re-installation of the O/S, you might configure it with separate partitions/filesystems:

workstation2:>  df -k | grep -v \/run
Filesystem                    1K-blocks    Used Available Use% Mounted on
/dev/mapper/osdisk_vg-root_lv  10218772 4384992   5293108  46% /
devtmpfs                           4096       0      4096   0% /dev
tmpfs                           7989980       0   7989980   0% /dev/shm
/dev/mapper/osdisk_vg-opt_lv    4046560  111588   3708876   3% /opt
/dev/sda1                        996780  355612    572356  39% /boot
/dev/mapper/osdisk_vg-var_lv    8154588  228788   7489988   3% /var
/dev/mapper/osdisk_vg-tmp_lv    1992552    9724   1861588   1% /tmp
/dev/mapper/osdisk_vg-home_lv   1992552   46560   1824752   3% /export/home

If you realize that ANYONE can write anything to “/tmp” and/or “/var/tmp”, then you should also realize that they can fill those locations up, wreaking havoc…

Red Hat docs have, for several RHEL major versions now, carried a note that /usr and /var as separate filesystems make the boot process more complex. (Their subdirectories, like /var/tmp, ought to be “fine” on separate volumes.)
The apparent rationale is that during early boot, when the / has been mounted, commands in /bin and /sbin may be used, and early starting services may write to /var/run.

The /var/run is a symlink to /run (systemd actually logs a note if a service still uses /var/run instead of /run directly), and /bin and /sbin are symlinks to /usr/bin and /usr/sbin, respectively. Hence /usr and /var have to be available early in the boot too.

The /bin and /sbin used to be real directories, holding the commands that were needed before /usr/bin and /usr/sbin became accessible (the /usr could be mounted read-only, even from the network, on systems that did not have package management like RPM).
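As a quick check, these compatibility symlinks are visible on any current RHEL-family (or otherwise usr-merged) system; the targets in the comment are what Rocky 9 uses, and other distros may show slightly different target text:

```shell
# Show where the merged-/usr compatibility symlinks point.
# On Rocky 9: /bin -> usr/bin, /sbin -> usr/sbin, /var/run -> ../run.
ls -ld /bin /sbin /var/run
```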


Some security profiles do demand a separate /var, so the “complex” setup must still be doable.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.