Is there a way to shrink disk usage of VMware - vmware

yuyang@yuyangdeMBP Virtual Machines.localized % cd Ubuntu\ 64-bit\ 22.04.1.vmwarevm
yuyang@yuyangdeMBP Ubuntu 64-bit 22.04.1.vmwarevm % ls
Ubuntu 64-bit 22.04.1-62.scoreboard Virtual Disk-s009.vmdk Virtual Disk-s026.vmdk
Ubuntu 64-bit 22.04.1-63.scoreboard Virtual Disk-s010.vmdk Virtual Disk-s027.vmdk
Ubuntu 64-bit 22.04.1-64.scoreboard Virtual Disk-s011.vmdk Virtual Disk-s028.vmdk
Ubuntu 64-bit 22.04.1.nvram Virtual Disk-s012.vmdk Virtual Disk.vmdk
Ubuntu 64-bit 22.04.1.plist Virtual Disk-s013.vmdk autoinst.flp
Ubuntu 64-bit 22.04.1.scoreboard Virtual Disk-s014.vmdk autoinst.iso
Ubuntu 64-bit 22.04.1.vmsd Virtual Disk-s015.vmdk caches
Ubuntu 64-bit 22.04.1.vmx Virtual Disk-s016.vmdk mksSandbox-0.log
Ubuntu 64-bit 22.04.1.vmxf Virtual Disk-s017.vmdk mksSandbox-1.log
Virtual Disk-s001.vmdk Virtual Disk-s018.vmdk mksSandbox-2.log
Virtual Disk-s002.vmdk Virtual Disk-s019.vmdk mksSandbox.log
Virtual Disk-s003.vmdk Virtual Disk-s020.vmdk startMenu.plist
Virtual Disk-s004.vmdk Virtual Disk-s021.vmdk vmware-0.log
Virtual Disk-s005.vmdk Virtual Disk-s022.vmdk vmware-1.log
Virtual Disk-s006.vmdk Virtual Disk-s023.vmdk vmware-2.log
Virtual Disk-s007.vmdk Virtual Disk-s024.vmdk vmware.log
Virtual Disk-s008.vmdk Virtual Disk-s025.vmdk
yuyang@yuyangdeMBP Ubuntu 64-bit 22.04.1.vmwarevm % du -sh .
164G .
yuyang@yuyang-virtual-machine:~$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 389M 2.2M 387M 1% /run
/dev/sda3 2.0T 47G 1.9T 3% /
tmpfs 1.9G 252K 1.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 512M 6.1M 506M 2% /boot/efi
tmpfs 389M 4.7M 385M 2% /run/user/1000
The Linux guest only uses 47 GB, but the VM directory on the host takes 164 GB.
As you can see, VMware uses far more disk space than it needs.
Is there a way to shrink it down to 47 GB?
By the way, I am using VMware on macOS.
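For reference, one common way to do this on VMware Fusion is to free unused blocks inside the guest, then defragment and shrink the sparse .vmdk on the host. This is only a sketch: the vmware-vdiskmanager path assumes a default Fusion install, and the VM path and disk name are taken from the listing above, so adjust as needed.

```shell
# Step 1, inside the Ubuntu guest: discard blocks freed by deleted
# files so the host can reclaim them (needs TRIM on the virtual disk)
sudo fstrim -av

# Step 2, on the macOS host with the VM shut down: defragment, then
# shrink the sparse virtual disk
cd ~/Virtual\ Machines.localized/Ubuntu\ 64-bit\ 22.04.1.vmwarevm
VDM="/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager"
"$VDM" -d "Virtual Disk.vmdk"   # defragment
"$VDM" -k "Virtual Disk.vmdk"   # shrink
```

Shrinking only reclaims space the guest has actually freed, and it will not work while snapshots exist, so delete any snapshots first. Fusion's "Clean Up Virtual Machine" button in the GUI performs essentially the same operation.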

Related

centos rsync No space left on device

On CentOS 7, writing to a file fails with "No space left on device (28)", but I checked and both disk space and inodes have room. I don't know why. Has this ever happened to anyone?
[root@GDI2390 sync_backup]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 52403200 2595000 49808200 5% /
devtmpfs 32867708 0 32867708 0% /dev
tmpfs 32874076 24 32874052 1% /dev/shm
tmpfs 32874076 279528 32594548 1% /run
tmpfs 32874076 0 32874076 0% /sys/fs/cgroup
/dev/sda1 508588 98124 410464 20% /boot
/dev/mapper/centos-home 4797426908 902326816 3895100092 19% /home
tmpfs 6574816 0 6574816 0% /run/user/0
[root@GDI2390 sync_backup]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/centos-root 52428800 89274 52339526 1% /
devtmpfs 8216927 562 8216365 1% /dev
tmpfs 8218519 2 8218517 1% /dev/shm
tmpfs 8218519 734 8217785 1% /run
tmpfs 8218519 13 8218506 1% /sys/fs/cgroup
/dev/sda1 512000 330 511670 1% /boot
/dev/mapper/centos-home 4797861888 26409024 4771452864 1% /home
tmpfs 8218519 1 8218518 1% /run/user/0
Check "Disk space vs Inode usage": compare df -h with df -i.
See also: No space left on device.
Assuming this is ext3/ext4, check whether any of the folders you are rsyncing contains around 1M files. Note that it is not a straightforward limit that has overflowed; it is more complicated, and related to the length of your filenames and the depth of the b-trees.
It is not disk space or inodes. It is the directory index that may have overflowed. See https://askubuntu.com/questions/644071/ubuntu-12-04-ext4-ext4-dx-add-entry2006-directory-index-full
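To look for such oversized directories, something like this sketch works (the root path is a placeholder for wherever your rsync targets live):

```shell
# For each directory under the given root, print its direct entry
# count, largest first; huge counts point at a strained directory index
root=/path/to/rsync_targets   # placeholder: set to your rsync tree
find "$root" -xdev -type d | while IFS= read -r d; do
    printf '%s %s\n' "$(find "$d" -mindepth 1 -maxdepth 1 | wc -l)" "$d"
done | sort -rn | head
```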

No space left on device even after increasing the volume and partition size on EC2

I have an EC2 instance with 2 EBS volumes, and I increased the size of the root volume.
First I modified the volume size in the console, then I extended the partition following this instruction.
However, when running any command, I get No space left on device.
echo "Hello" > hello.txt
-bash: hello.txt: No space left on device
df -h shows the total space correctly.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 395M 596K 394M 1% /run
/dev/xvda1 30G 19G 9.6G 67% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/xvdg 10G 6.2G 3.9G 62% /data2
tmpfs 395M 0 395M 0% /run/user/1001
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
xvdg 202:96 0 31G 0 disk /data2
The bottom of the guide said,
If the increased available space on your volume remains invisible to
the system, try re-initializing the volume as described in
Initializing Amazon EBS Volumes.
So I followed the instruction to reinitialize the volume.
I used dd to reinitialize.
sudo dd if=/dev/xvda of=/dev/null bs=1M
30720+0 records in
30720+0 records out
32212254720 bytes (32 GB, 30 GiB) copied, 504.621 s, 63.8 MB/s
However, I still get the "No space left on device" error. I have rebooted the instance multiple times and still see the same error.
Update 1: I have a large number of small files (4-10 KB), so I am running out of inodes. Please let me know how to increase the number of inodes on an ext4 partition. Thanks.
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 497270 371 496899 1% /dev
tmpfs 504717 539 504178 1% /run
/dev/xvda1 1966080 1966080 0 100% /
tmpfs 504717 1 504716 1% /dev/shm
tmpfs 504717 4 504713 1% /run/lock
tmpfs 504717 18 504699 1% /sys/fs/cgroup
/dev/xvdg 5242880 39994 5202886 1% /data2
tmpfs 504717 10 504707 1% /run/user/1001
Please let me know how to resolve this error.
You don't need to reinitialise the volumes if you are using a newer-generation EC2 instance type such as M4, M5, T2 or T3.
You also have to expand your volume on the EC2 instance:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
As I can see, your inodes are full. Try this command to find the offending directory:
for i in /*; do echo $i; find $i | wc -l; done
Then, after narrowing it down, repeat inside that directory:
for i in /path/to/directory/*; do echo $i; find $i | wc -l; done
After that, remove those files using:
rm -rf /path/to/fileorfoldername
If you don't want to remove the files, you will have to create a new filesystem using mke2fs with -N (number of inodes) or -i (bytes per inode), or keep increasing your root volume.
Here is a similar question for it:
https://unix.stackexchange.com/questions/26598/how-can-i-increase-the-number-of-inodes-in-an-ext4-filesystem
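For context, the 1,966,080 inodes shown by df -i are exactly what ext4's default bytes-per-inode ratio of 16384 yields on a 30 GiB filesystem; the inode count is fixed when the filesystem is created, so getting more means rebuilding it with a smaller -i (or an explicit -N). A sketch, where the device name is an assumption:

```shell
# ext4 creates one inode per bytes-per-inode of capacity (default 16384).
# For the 30 GiB root volume in the question:
size_bytes=$((30 * 1024 * 1024 * 1024))
echo $((size_bytes / 16384))   # prints 1966080, matching df -i above

# Halving the ratio doubles the inode count. WARNING: mkfs destroys the
# data on the device, so restore from backup afterwards.
# sudo mkfs.ext4 -i 8192 /dev/xvdh   # device name is hypothetical
```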
Also, you can move your files to a secondary drive.
You are running out of inodes on your / (root) filesystem. To debug it:
Find what the inodes are used by, with something like for i in /*; do echo $i; find $i | wc -l; done
Then look for the directories that report the largest counts (they are also the ones that take the longest to scan) and clean them up.

Where is my shared memory object saved?

I'm new to programming and I have written some shared-memory code on my BeagleBone Black, which runs Linux.
I need to make sure that the memory is kept in the BeagleBone Black's RAM, and not in flash.
The filepath is:
#define FILEPATH "/tmp/mmapped.bin"
fd = open(FILEPATH, O_RDWR | O_CREAT | O_TRUNC, (mode_t)0600);
if (fd == -1) {
    perror("Error opening file for writing");
    exit(EXIT_FAILURE);
}
This is coded in C++.
Please let me know if you need any further information.
The file path /tmp/whatever does not guarantee anything. What you need to do is log into your Beaglebone and see what filesystems are mounted where.
In order to store files in RAM you need to store them on a tmpfs or ramfs filesystem.
On my Fedora Linux laptop if I type mount or cat /proc/mounts I see everything that is mounted. These are the kinds of lines you're looking for:
$ grep tmpfs /proc/mounts
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
tmpfs /tmp tmpfs rw,seclabel,nosuid,nodev 0 0
So you can see that on Fedora anyway, the /tmp/ directory is a tmpfs and the files in /tmp/ will be in RAM.
One thing to watch out for with tmpfs is that the default maximum size is half of RAM. If the files in tmpfs plus the programs using RAM exceed the RAM size, the tmpfs data will be pushed out to swap, which will be on flash (if a BeagleBone even has swap, which I don't remember at the moment).
If there is no swap, or it is small, then putting big files in a tmpfs will cause OOM (out of memory) kills and break a lot of things, so be careful how much you use.
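On the BeagleBone itself, the check described above boils down to one command, for example:

```shell
# Print the filesystem type backing /tmp; "tmpfs" (or "ramfs") means
# files written there live in RAM rather than flash
stat -f -c %T /tmp

# findmnt, where available, answers the same question:
# findmnt -n -o FSTYPE --target /tmp
```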

Running out of inodes disk space, free space left on device

I am getting the message 'disk is full' despite having plenty of free space.
Inodes :-
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 1638400 55434 1582966 4% /
tmpfs 480574 1 480573 1% /dev/shm
/dev/xvdb2 3014656 3007131 7525 100% /export
Disk space :-
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 25G 16G 9.3G 63% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/xvdb2 46G 24G 20G 56% /export
Note: our Magento application code lives under /dev/xvdb2, mounted at /export. Please advise how to sort out this issue.
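Since /dev/xvdb2 is at 100% inode usage while only 56% full by space, the next step is usually to find which directory holds the mass of small files. A sketch (GNU find's -printf is assumed; /export is the mount from the question):

```shell
# Tally files per containing directory under /export and show the
# ten directories holding the most entries
sudo find /export -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head
```

In Magento installs, session and cache directories (e.g. var/session, var/cache) are frequent culprits and can often be pruned safely.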

create a ramdisk in C++ on linux

I need to create a ramfs and mount it on a directory in Linux using C++. I want to do this as a regular user (no sudo).
I need to call an application on a file that I create, and this will happen often. Writing it to the HDD is very slow.
All I found was:
system("mkdir /mnt/ram");
system("mount -t ramfs -o size=20m ramfs /mnt/ram");
but that is no good. I want to run as a regular user, and the mount command can only be run by root.
What can I do?
For a userspace ramfs solution, you can use python-fuse-ramfs.
I checked whether /tmp is a ramfs, but it is not; it creates files on the HDD. But when I run df -h it outputs:
rootfs 25G 9,4G 15G 40% /
devtmpfs 1,9G 0 1,9G 0% /dev
tmpfs 1,9G 1,6G 347M 83% /dev/shm
tmpfs 1,9G 1,3M 1,9G 1% /run
/dev/mapper/vg_micro-root 25G 9,4G 15G 40% /
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
tmpfs 1,9G 0 1,9G 0% /media
/dev/mapper/vg_micro-stack 289G 191M 274G 1% /stack
/dev/mapper/vg_micro-home 322G 40G 266G 14% /home
/dev/sda2 485M 89M 371M 20% /boot
/dev/sda1 200M 19M 182M 10% /boot/efi
This means that the tmpfs mounts (ramdisks) are /dev/shm, /run, /sys/fs/cgroup and /media. But only one of these is meant to be a temporary ramdisk for communication between processes using files. Here is the /dev/shm description and usage. The only caveat is that tmpfs will not grow dynamically, but for my purposes it will be enough (20 MB - 1 GB).
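Following on from that, no mount (and no root) is needed: /dev/shm is already a tmpfs that regular users can write to, so the workflow can be as simple as this sketch (the directory name is arbitrary):

```shell
# Create a scratch directory on the existing tmpfs; no sudo required
dir=/dev/shm/mywork
mkdir -p "$dir"
echo "fast scratch data" > "$dir/input.txt"
# ... run the application against "$dir/input.txt" here ...
rm -rf "$dir"   # clean up; tmpfs contents disappear on reboot anyway
```

In C++, opening a file under /dev/shm (or using shm_open, which places its objects there) achieves the same effect without any system() calls.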