
Kernel log messages about /tmp: filesystem full

Hello,

I recently started getting errors in the kernel log messages section of my security run output.

This is what I'm receiving:


kernel log messages:
+pid 1582 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 2523 (cabextract), uid 0 inumber 47106 on /tmp: filesystem full
+pid 3270 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 5060 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 6239 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 7032 (cabextract), uid 0 inumber 235522 on /tmp: filesystem full
+pid 96283 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 97407 (cabextract), uid 0 inumber 47106 on /tmp: filesystem full
+pid 98527 (cabextract), uid 0 inumber 141314 on /tmp: filesystem full
+pid 2663 (cabextract), uid 0 inumber 235522 on /tmp: filesystem full
+pid 7021 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 7819 (cabextract), uid 0 inumber 235522 on /tmp: filesystem full
+pid 89650 (cabextract), uid 0 inumber 141314 on /tmp: filesystem full
+pid 90188 (cabextract), uid 0 inumber 47106 on /tmp: filesystem full
+pid 90650 (cabextract), uid 0 inumber 94210 on /tmp: filesystem full
+pid 91624 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full
+pid 92709 (cabextract), uid 0 inumber 235522 on /tmp: filesystem full
+pid 93585 (cabextract), uid 0 inumber 211970 on /tmp: filesystem full

However, my Daily Run Output looks fine:

Disk status:
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/da0a     467G    293G    137G    68%    /
devfs         1.0K    1.0K      0B   100%    /dev
/dev/da0d     1.9G     18M    1.8G     1%    /tmp
/dev/da0f     9.7G    3.4G    5.5G    38%    /usr
/dev/da0e     1.9G     92M    1.7G     5%    /var
procfs        4.0K    4.0K      0B   100%    /proc
fdescfs       1.0K    1.0K      0B   100%    /dev/fd

The /tmp partition is only 1% full. I assume the 100% entries are supposed to be that way, since those filesystems are only 1.0K and 4.0K in size. Does this look right?

Why am I getting /tmp: filesystem full errors? And is it eventually going to blow up the K1000?
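
If it helps, I could watch /tmp while the security run executes to see whether it only fills up momentarily (assuming I can get shell access to the appliance, for example via a support tether), with something like:

# print /tmp usage every few seconds so a short-lived fill by cabextract would show up
while true; do
    date
    df -h /tmp
    sleep 5
done

If /tmp briefly hits 100% while a .cab file is being extracted and then drops back, that might explain why the daily run's df output looks fine.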

I updated the K1000 to the latest version of 9.1 and the errors remain.

What can I do to eliminate these errors? 

Thanks


Comments:
  • Go to Settings › Support › Diagnostic Utilities.

    Select Top from the drop-down menu.

    Run Now


    And post the results here. - Channeler 5 years ago
  • Hi Channeler,

    This is SMA 9.1.318.
    These are the results of the Top Diagnostic:
    last pid: 7071; load averages: 0.42, 0.39, 0.46 up 4+04:31:38 13:48:51
    162 processes: 1 running, 160 sleeping, 1 zombie

    Mem: 824M Active, 38G Inact, 1195M Laundry, 1967M Wired, 548M Buf, 20G Free
    Swap: 4096M Total, 4096M Free

    PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
    1248 mysql 55 20 0 3678M 3255M select 1 774:29 0.98% mysqld
    57510 www 1 34 0 496M 88636K piperd 7 0:10 0.59% php-fpm
    867 rabbitmq 140 52 0 1804M 119M select 2 70:02 0.20% beam.smp
    1522 root 1 52 0 368M 68276K nanslp 6 31:18 0.20% php
    21226 www 1 52 0 528M 113M accept 0 0:17 0.20% php-fpm
    21581 www 1 52 0 521M 107M accept 4 0:14 0.20% php-fpm
    87518 www 1 52 0 470M 62744K accept 0 0:03 0.20% php-fpm
    2140 root 1 20 0 37576K 6700K kqread 4 4:58 0.10% haproxy
    19990 www 1 52 0 524M 108M accept 3 0:15 0.10% php-fpm
    25229 www 1 52 0 521M 104M accept 3 0:14 0.10% php-fpm
    26403 www 1 52 0 521M 104M accept 0 0:14 0.10% php-fpm
    24204 www 1 52 0 518M 101M accept 6 0:13 0.10% php-fpm
    31865 www 1 52 0 511M 98592K accept 0 0:12 0.10% php-fpm
    91509 www 1 52 0 468M 62868K accept 0 0:03 0.10% php-fpm
    1664 root 1 22 0 349M 51692K nanslp 7 20:18 0.00% php
    1513 root 26 52 0 103M 63036K uwait 6 19:55 0.00% koneas.11.1-amd64
    2198 fetchmail 1 20 0 55144K 7864K select 0 5:22 0.00% fetchmail
    1675 root 1 20 0 355M 56468K nanslp 7 3:41 0.00% php - Geoff25 5 years ago

Answers (1)

Posted by: KevinG 5 years ago

Top Answer

What version of the SMA are you running?

Based on the partition sizes, it appears that you may be running an older VM.

If you stand up a new VM that is configured with the larger partition sizes, at the same version you are currently running, and then restore your backup files, that should resolve your issue.
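
After restoring onto the new VM, you can confirm that the larger partition and swap sizes took effect from the daily run output, or, if you have console/shell access, with something like:

# verify the new disk layout and swap size on the rebuilt appliance
df -h
swapinfo -h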


Comments:
  • This is the latest SMA, 9.1.318.
    I didn't create the VM, but it might be a few years old. - Geoff25 5 years ago
    • Download the latest 9.1.317 OVF. Next, apply the hotfix to get to version 9.1.318. Then you can restore your backup files to this new VM. - KevinG 5 years ago
      • Great, I'll begin looking into that process. Hopefully that server meets the required specs to work with the new OVF. It also hosts the K2000. - Geoff25 5 years ago
  • As Kevin mentioned, your swap is only 4GB (according to the Top results).

    KACE appliances have used 32GB of swap since late 2016.

    Your VM needs to be migrated to a newer one. - Channeler 5 years ago
    • Thanks everyone, I guess it's time to upgrade that VM. Funny, we never had errors before. It must have started with the 9.1 update. - Geoff25 5 years ago
