create a ramdisk in C++ on linux - c++

I need to create a ramfs and mount it on a directory in Linux using C++, as a regular user (no sudo).
I have to repeatedly call an application on a file that I create, and writing it to the HDD is very slow.
I found just:
system("mkdir /mnt/ram");
system("mount -t ramfs -o size=20m ramfs /mnt/ram");
but that is no good: I want to run as a regular user, and mount can only be called as root.
What can I do?

For a userspace ramfs solution, you can use python-fuse-ramfs.

I checked whether /tmp is a ramfs, but it is not: it creates files on the HDD. But when I run df -h it outputs:
rootfs 25G 9,4G 15G 40% /
devtmpfs 1,9G 0 1,9G 0% /dev
tmpfs 1,9G 1,6G 347M 83% /dev/shm
tmpfs 1,9G 1,3M 1,9G 1% /run
/dev/mapper/vg_micro-root 25G 9,4G 15G 40% /
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
tmpfs 1,9G 0 1,9G 0% /media
/dev/mapper/vg_micro-stack 289G 191M 274G 1% /stack
/dev/mapper/vg_micro-home 322G 40G 266G 14% /home
/dev/sda2 485M 89M 371M 20% /boot
/dev/sda1 200M 19M 182M 10% /boot/efi
This means the tmpfs (ramdisk) mounts are /dev/shm, /run, /sys/fs/cgroup and /media. But only one of these is meant to be a temporary ramdisk for communication between processes using files. Here is the /dev/shm description and usage. The only catch is that a tmpfs will not grow beyond its fixed size, but for my purposes (20 MB - 1 GB) it will be enough.
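A minimal sketch of that workflow, assuming a stock Linux where /dev/shm is mounted (myapp is a placeholder name; a C++ program gets the same benefit by simply open()/fopen()-ing paths under /dev/shm):

```shell
# /dev/shm is a tmpfs that is already mounted and writable by regular
# users, so no mount and no sudo are needed at all:
df -h /dev/shm                     # confirm it is tmpfs and has free space
f=$(mktemp /dev/shm/myapp-XXXXXX)  # create a unique RAM-backed file
echo "scratch data" > "$f"         # writes land in RAM, never on the HDD
# ...run the external application on "$f" here...
rm -f "$f"                         # deleting the file frees the memory
```

Note that files left behind in /dev/shm persist until reboot and count against the tmpfs size limit, so clean up after each run.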

Related

Getting no space Error in AWS Linux instance

I am getting a "no space" error on AWS Linux when I try to git pull files. I don't know why /dev/root is so big. Is there any way to free up some space without deleting packages like Node, Composer, Git, etc.?
root@ip-172-31-8-10:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.7G 7.5G 276M 97% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 393M 864K 393M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop1 56M 56M 0 100% /snap/core18/1997
/dev/loop2 71M 71M 0 100% /snap/lxd/19647
/dev/loop3 71M 71M 0 100% /snap/lxd/21029
/dev/loop4 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop6 33M 33M 0 100% /snap/snapd/12883
/dev/loop5 56M 56M 0 100% /snap/core18/2128
/dev/loop7 33M 33M 0 100% /snap/snapd/12704
tmpfs 393M 0 393M 0% /run/user/114
tmpfs 393M 0 393M 0% /run/user/1000
You've configured an 8 GB disk, which is very small for most installs. You can do two things:
Expand the root volume. Pretty simple job to do on AWS: https://aws.amazon.com/premiumsupport/knowledge-center/expand-root-ebs-linux/
Start removing files that take up space. The operating system itself takes up a significant amount of space unless you're running something super bare-bones like Alpine, but since you haven't specified which distro you're using, it's hard to give concrete suggestions.
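For the second option, a hedged starting point (the helper name `largest` is made up here; the cleanup commands assume a Debian/Ubuntu-style system, which the thread does not confirm):

```shell
# List the ten largest subtrees of a directory, biggest last; -x stays
# on one filesystem so tmpfs and /proc do not pollute the numbers:
largest() {   # usage: largest /   (run as root to see everything)
  du -xs "$1"/* 2>/dev/null | sort -n | tail -n 10
}
largest /

# Typical quick wins once the culprit is known (Debian/Ubuntu assumed):
#   apt-get clean                   # drop the downloaded-package cache
#   journalctl --vacuum-size=100M   # cap the systemd journal at 100 MB
```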

centos rsync No space left on device

On CentOS 7, when writing to a file I get "No space left on device (28)", but I checked and both the disk and the inodes have free space. I don't know why. Has this ever happened to anyone?
[root@GDI2390 sync_backup]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 52403200 2595000 49808200 5% /
devtmpfs 32867708 0 32867708 0% /dev
tmpfs 32874076 24 32874052 1% /dev/shm
tmpfs 32874076 279528 32594548 1% /run
tmpfs 32874076 0 32874076 0% /sys/fs/cgroup
/dev/sda1 508588 98124 410464 20% /boot
/dev/mapper/centos-home 4797426908 902326816 3895100092 19% /home
tmpfs 6574816 0 6574816 0% /run/user/0
[root@GDI2390 sync_backup]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/centos-root 52428800 89274 52339526 1% /
devtmpfs 8216927 562 8216365 1% /dev
tmpfs 8218519 2 8218517 1% /dev/shm
tmpfs 8218519 734 8217785 1% /run
tmpfs 8218519 13 8218506 1% /sys/fs/cgroup
/dev/sda1 512000 330 511670 1% /boot
/dev/mapper/centos-home 4797861888 26409024 4771452864 1% /home
tmpfs 8218519 1 8218518 1% /run/user/0
Check "Disk space vs Inode usage":
df -h vs df -i
Check also: No space left on device
Assuming this is ext3/ext4, check whether any of the folders you are rsyncing contains about 1M files. Note that it is not a straightforward limit that has overflowed; it is more complicated, and related to the length of your filenames and the depth of the b-tree.
It is not disk space or inodes. It is the directory index that may have overflowed. See https://askubuntu.com/questions/644071/ubuntu-12-04-ext4-ext4-dx-add-entry2006-directory-index-full
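To test the ~1M-files hypothesis, one way is to count the direct entries of each candidate directory (`count_per_dir` is a made-up helper; /path/to/rsync/target is a placeholder):

```shell
# The ext4 directory-index (htree) limit is per directory, so count the
# direct entries of each subdirectory; ls -f skips sorting, which matters
# when a directory really does hold ~1M files:
count_per_dir() {   # usage: count_per_dir /path/to/rsync/target
  for d in "$1"/*/; do
    printf '%9d %s\n' "$(ls -f "$d" 2>/dev/null | wc -l)" "$d"
  done | sort -rn | head
}
count_per_dir /path/to/rsync/target
```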

No space left on device even after increasing the volume and partition size on EC2

I have an EC2 instance with 2 EBS volumes. On the root volume, I increased the volume size.
First I modified the volume size in the console, then I followed this instruction to extend the partition.
However, when running any command, I get No space left on device.
echo "Hello" > hello.txt
-bash: hello.txt: No space left on device
df -h shows the total space correctly:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 395M 596K 394M 1% /run
/dev/xvda1 30G 19G 9.6G 67% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/xvdg 10G 6.2G 3.9G 62% /data2
tmpfs 395M 0 395M 0% /run/user/1001
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
xvdg 202:96 0 31G 0 disk /data2
The bottom of the guide said,
If the increased available space on your volume remains invisible to
the system, try re-initializing the volume as described in
Initializing Amazon EBS Volumes.
So I followed the instruction and reinitialized the volume with dd:
sudo dd if=/dev/xvda of=/dev/null bs=1M
30720+0 records in
30720+0 records out
32212254720 bytes (32 GB, 30 GiB) copied, 504.621 s, 63.8 MB/s
However, I still get the "No space left on device" error. I have rebooted the instance multiple times and still see the same error.
Update 1: I have a large number of small files (4-10 KB), so I am running out of inodes. Please let me know how to increase the number of inodes (on an ext4 partition). Thanks.
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 497270 371 496899 1% /dev
tmpfs 504717 539 504178 1% /run
/dev/xvda1 1966080 1966080 0 100% /
tmpfs 504717 1 504716 1% /dev/shm
tmpfs 504717 4 504713 1% /run/lock
tmpfs 504717 18 504699 1% /sys/fs/cgroup
/dev/xvdg 5242880 39994 5202886 1% /data2
tmpfs 504717 10 504707 1% /run/user/1001
Please let me know how to resolve this error.
You don't need to reinitialise the volume if you are using a newer-generation EC2 instance type such as M4, M5, T2, or T3.
You also have to expand the volume on the EC2 instance itself:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
As I can see, your inode table is full; try this command to find the offending directory:
for i in /*; do echo $i; find $i |wc -l; done
Then, after finding the directory, do:
for i in /path/to/directory/*; do echo $i; find $i |wc -l; done
After that, remove those files using:
rm -rf /path/to/fileorfoldername
As you don't want to remove the files, you will have to create a new filesystem using mke2fs with the -N parameter (number of inodes) or -i (bytes-per-inode ratio), or keep increasing your root volume.
Here is a similar question for it:
https://unix.stackexchange.com/questions/26598/how-can-i-increase-the-number-of-inodes-in-an-ext4-filesystem
Also, you can move your files to the secondary drive.
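ext4 cannot grow the inode table of an existing filesystem: the count is fixed when the filesystem is created. At mkfs time, -N sets the total number of inodes and -i the bytes-per-inode ratio (by contrast, -I sets the size of each inode). A safe way to experiment is a file-backed image rather than a real device; the path and numbers below are illustrative:

```shell
PATH=$PATH:/sbin:/usr/sbin                       # mkfs tools often live here
truncate -s 512M /tmp/inode-demo.img             # sparse scratch image
mkfs.ext4 -F -q -N 100000 /tmp/inode-demo.img    # -N: ask for 100k inodes
inodes=$(tune2fs -l /tmp/inode-demo.img | awk '/^Inode count:/ {print $3}')
echo "inode count: $inodes"                      # default ratio would give ~32k
rm /tmp/inode-demo.img
```

On a real system this means creating the new filesystem on a fresh volume and copying the data across; mkfs destroys whatever is on the target.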
You are running out of inodes on your / (root) filesystem. To debug it, you need to:
Find what the inodes point to (open files and directories), with something like: for i in /*; do echo $i; find $i |wc -l; done
Then look at the directories where the loop reports the biggest counts (they also take the longest) and delete what you can from them.

Expand EC2 amazon instance disk space [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
I tried to install PyTorch on Amazon as follows:
conda install pytorch torchvision cuda80 -c soumith
I get the following error:
Package plan for installation in environment /home/ubuntu/anaconda2/envs/crnn:
The following NEW packages will be INSTALLED:
cuda80: 1.0-0 soumith
pytorch: 0.1.12-py27_2cu80 soumith [cuda80]
torchvision: 0.1.8-py27_2 soumith
Proceed ([y]/n)? y
CondaError: IOError(28, 'No space left on device')
CondaError: IOError(28, 'No space left on device')
CondaError: IOError(28, 'No space left on device')
When I run the following command:
df -h
I get the following:
Filesystem Size Used Avail Use% Mounted on
udev 30G 0 30G 0% /dev
tmpfs 6.0G 8.6M 6.0G 1% /run
/dev/xvda1 7.7G 7.4G 361M 96% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 30G 0 30G 0% /sys/fs/cgroup
tmpfs 6.0G 0 6.0G 0% /run/user/1000
Now I added a new partition, /data, which has 50 GB:
/data$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 30G 0 30G 0% /dev
tmpfs 6.0G 8.9M 6.0G 1% /run
/dev/xvda1 7.7G 6.3G 1.4G 82% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/xvdf1 50G 3.9G 43G 9% /data
tmpfs 6.0G 0 6.0G 0% /run/user/1000
But most of the installation files, such as CUDA, have to be installed in /.
How can I transfer the 50 GB of free space from /data to /, so that
/dev/xvda1 will have 58 GB?
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 50G 0 disk
`-xvdf1 202:81 0 50G 0 part /data
Thank you a lot

Running out of inodes disk space, free space left on device

I am getting the message 'disk is full' despite having plenty of free space.
Inodes:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 1638400 55434 1582966 4% /
tmpfs 480574 1 480573 1% /dev/shm
/dev/xvdb2 3014656 3007131 7525 100% /export
Disk space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 25G 16G 9.3G 63% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/xvdb2 46G 24G 20G 56% /export
Note: our Magento application code lives on /dev/xvdb2, mounted at /export. Please advise how to sort out this issue.
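The same inode-hunting approach from the answers above applies here: /export is out of inodes (IUse% 100), not bytes, so find which subtree holds the most files (`inode_hogs` is a made-up helper; the Magento hint is an assumption worth checking first, since Magento's var/session and var/cache directories commonly accumulate huge numbers of small files):

```shell
# Every file, directory, and symlink costs one inode; list which
# subtrees hold the most of them, biggest first:
inode_hogs() {   # usage: inode_hogs /export
  for d in "$1"/*/; do
    printf '%10d %s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
  done | sort -rn | head
}
inode_hogs /export
```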