I ran into this issue recently when one of the vCenter appliances in my test environment triggered this alarm in the vSphere web client:
Logging into the VCSA via SSH and running 'df -h' showed that the /storage/core area was indeed 100% utilized:
vcsa01:/storage/core # df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sda3                              11G  3.9G  6.3G  39% /
udev                                  4.0G  168K  4.0G   1% /dev
tmpfs                                 4.0G   40K  4.0G   1% /dev/shm
/dev/sda1                             128M   38M   84M  31% /boot
/dev/mapper/core_vg-core               25G   25G    0M 100% /storage/core
/dev/mapper/log_vg-log                9.9G  209M  9.2G   3% /storage/log
/dev/mapper/db_vg-db                  9.9G  192M  9.2G   3% /storage/db
/dev/mapper/dblog_vg-dblog            5.0G  171M  4.5G   4% /storage/dblog
/dev/mapper/seat_vg-seat              9.9G  165M  9.2G   2% /storage/seat
/dev/mapper/netdump_vg-netdump       1001M   18M  932M   2% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy  9.9G  151M  9.2G   2% /storage/autodeploy
/dev/mapper/invsvc_vg-invsvc          5.0G  146M  4.6G   4% /storage/invsvc
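If it isn't obvious from the alarm which partition filled up, a quick filter over the same df output will surface anything near capacity. This is just a generic awk one-liner over the standard df columns, with 90% as an arbitrary threshold:

# A minimal sketch: show the header plus any filesystem at or above 90% usage.
# $5 is the Use% column; adding 0 strips the trailing % for the comparison.
df -h | awk 'NR==1 || $5+0 >= 90'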
On my system, two large .tgz files had consumed all of the space:
vcsa01:/storage/core # ls -lah
total 32K
drwxr-xr-x  5 root      root      4.0K Mar  9 16:00 .
drwxr-xr-x 15 root      root      4.0K Mar  9 15:47 ..
drwx------  2 root      root       16K Mar  8 14:30 lost+found
drwxrwx---  2 netdumper netdumper 4.0K Jan 11 11:28 netdumps
-rw-------  1 root      root       12G Mar  9 16:00 vc-vcsa10-216-05-13-18.88.tgz
-rw-------  1 root      root       10G Mar  9 16:00 vc-vcsa10-216-08-28-14.17.tgz
drwxr-x--x  4 root      root      4.0K Mar  9 14:30 vmware-vws
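Here the culprits jump straight out of the listing, but if the directory tree were deeper you could rank everything by size first. A portable sketch using standard tools (sizes are reported in 1K blocks, so plain numeric sort works even where sort -h is unavailable):

# List the ten largest items under /storage/core, biggest first.
# -a includes files as well as directories; -x stays on this filesystem.
du -ax /storage/core | sort -rn | head -n 10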
The process for removing these is to stop the vpxd service, then remove the files. You can stop the vpxd service by running:
vcsa01:/ # service vmware-vpxd stop
vmware-vpxd: Stopping vpxd by administrative request.
process id was 32334
success
Before removing anything, make sure that the service is stopped by running:
vcsa01:/# service vmware-vpxd status
vmware-vpxd is stopped
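For extra assurance before deleting anything, you can also confirm that no vpxd process is still lingering in the process table. Nothing VCSA-specific here, just a standard ps/grep check:

# No output means no vpxd process remains.
# The [v] bracket trick stops grep from matching its own command line.
ps -ef | grep '[v]pxd'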
Once confirmed to be stopped, the files can be removed using the 'rm' command. For example:
# rm vc-vcsa10-216-08-28-14.17.tgz
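Since every offending file here followed the same vc-*.tgz naming, a glob with interactive confirmation saves a step. This assumes your files match the pattern seen in the listing above; verify with ls before running it:

# Remove the matching tarballs, prompting before each deletion.
# The vc-*.tgz pattern is an assumption based on my listing; adjust for yours.
rm -i /storage/core/vc-*.tgz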
Once done, run 'df -h' again to confirm that there is now free space in the /storage/core area:
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sda3                              11G  3.9G  6.3G  39% /
udev                                  4.0G  168K  4.0G   1% /dev
tmpfs                                 4.0G   40K  4.0G   1% /dev/shm
/dev/sda1                             128M   38M   84M  31% /boot
/dev/mapper/core_vg-core               25G  173M   24G   1% /storage/core
/dev/mapper/log_vg-log                9.9G  209M  9.2G   3% /storage/log
/dev/mapper/db_vg-db                  9.9G  192M  9.2G   3% /storage/db
/dev/mapper/dblog_vg-dblog            5.0G  171M  4.5G   4% /storage/dblog
/dev/mapper/seat_vg-seat              9.9G  165M  9.2G   2% /storage/seat
/dev/mapper/netdump_vg-netdump       1001M   18M  932M   2% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy  9.9G  151M  9.2G   2% /storage/autodeploy
/dev/mapper/invsvc_vg-invsvc          5.0G  146M  4.6G   4% /storage/invsvc
With the space now freed up, you can start the vpxd service to get vCenter back up and running:
vcsa01:/# service vmware-vpxd start
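vpxd can take a few minutes to come up fully. Once it has, the same status command used earlier should now report a running state (the exact wording may vary between VCSA versions):

# Verify vpxd is running again.
service vmware-vpxd status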
The error alarm should now clear in vCenter. As always, be careful when deleting any files from your VCSA. If in doubt, check with VMware Technical Support!