I'm new to programming and I have programmed some shared memory on my BeagleBone Black, which runs Linux.
I need to make sure that the memory is saved in the BeagleBone Black's RAM, and not in flash.
The file path is:
#include <fcntl.h>    /* open, O_RDWR, O_CREAT, O_TRUNC */
#include <stdio.h>    /* perror */
#include <stdlib.h>   /* exit, EXIT_FAILURE */

#define FILEPATH "/tmp/mmapped.bin"

int fd = open(FILEPATH, O_RDWR | O_CREAT | O_TRUNC, (mode_t)0600);
if (fd == -1) {
    perror("Error opening file for writing");
    exit(EXIT_FAILURE);
}
This is coded in C++.
Please let me know if you need any further information.
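For context, the open() above would typically be followed by ftruncate() and mmap(); here is a minimal sketch of that continuation (FILESIZE is a made-up size for illustration only, not part of the original code):

#include <sys/mman.h>   /* mmap, MAP_SHARED, PROT_READ, PROT_WRITE */
#include <unistd.h>     /* ftruncate */

const size_t FILESIZE = 4096;  /* hypothetical mapping size */

/* Grow the file to the mapping size, then map it shared so that
   writes through the pointer land in the file's pages. */
if (ftruncate(fd, FILESIZE) == -1) {
    perror("Error sizing file");
    exit(EXIT_FAILURE);
}
void *map = mmap(nullptr, FILESIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (map == MAP_FAILED) {
    perror("Error mapping file");
    exit(EXIT_FAILURE);
}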
The file path /tmp/whatever does not guarantee anything. What you need to do is log into your Beaglebone and see what filesystems are mounted where.
In order to store files in RAM you need to store them on a tmpfs or ramfs filesystem.
On my Fedora Linux laptop, if I type mount or cat /proc/mounts, I see everything that is mounted. These are the kinds of lines you're looking for:
$ grep tmpfs /proc/mounts
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
tmpfs /tmp tmpfs rw,seclabel,nosuid,nodev 0 0
So you can see that on Fedora anyway, the /tmp/ directory is a tmpfs and the files in /tmp/ will be in RAM.
One thing to watch for with tmpfs is that the default maximum size is half of RAM. If the files in tmpfs plus the RAM used by running programs exceed the physical RAM, the tmpfs data will spill into swap, which will be on flash (if a BeagleBone even has swap, which I don't remember at the moment).
If there's no swap, or it is small, then using a tmpfs for big files will cause OOM (out-of-memory) errors and break a lot of things, so be careful how much you use.
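If you want the program itself to verify that the file really lives on a tmpfs, here is a minimal, Linux-specific sketch using statfs(2) and TMPFS_MAGIC (error handling kept short):

#include <sys/vfs.h>      // statfs
#include <linux/magic.h>  // TMPFS_MAGIC
#include <cstdio>

// Returns true if the filesystem holding `path` is tmpfs, i.e. RAM-backed.
static bool on_tmpfs(const char *path)
{
    struct statfs sfs;
    if (statfs(path, &sfs) != 0) {
        std::perror("statfs");
        return false;
    }
    return sfs.f_type == TMPFS_MAGIC;
}

int main()
{
    std::printf("/tmp is %s\n", on_tmpfs("/tmp") ? "tmpfs (RAM)" : "not tmpfs");
    return 0;
}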
I capture images from a MIPI camera.
I have access to the raw buffer pointer, and I want to save the images as .raw files as fast as possible.
The resolution of one image is 4224*3008.
I am running into two difficulties. Below is an example to explain my problem:
#include <cstdio>

int main()
{
    const size_t N = 3008 * 4224;  // pixels per frame
    FILE *pFile = std::fopen("file.raw", "wb");
    if (pFile == nullptr) {
        std::perror("fopen");
        return 1;
    }
    unsigned short *buffer = new unsigned short[N];
    for (int i = 0; i < 600; i++) {
        // Fill one dummy frame, then append it to the file.
        for (size_t j = 0; j < N; j++) { buffer[j] = i; }
        if (std::fwrite(buffer, sizeof(unsigned short), N, pFile) != N)
            std::perror("fwrite");  // report why a write was short
    }
    delete[] buffer;
    std::fclose(pFile);
    return 0;
}
This sample I wrote illustrates my issues perfectly.
I expect a .raw file with a size of 15.2GB, and a more or less constant time to write each image.
I can write only 4.3GB; after that, the code "skips" the fwrite() call. Has anyone already encountered this problem?
The function is really, really slow. At the beginning, the average time is around 250ms to write one image; after some iterations, this time is around 2200ms. Does anyone have an idea why?
It is probably not the most efficient way to write images as .raw, but I didn't find other solutions. I tested writing .tif with OpenCV; it is faster than .raw, but I am pretty sure that is because I have a problem in my code.
If someone has tips, that would be great!
I am working on a Jetson TX2 board, which runs L4T (Linux for Tegra).
The file system of my SD card is ext3/ext4:
nvidia@nvidia-desktop:~$ df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mmcblk0p1 ext4 28G 12G 15G 44% /
none devtmpfs 3,8G 0 3,8G 0% /dev
tmpfs tmpfs 3,9G 4,0K 3,9G 1% /dev/shm
tmpfs tmpfs 3,9G 21M 3,9G 1% /run
tmpfs tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
tmpfs tmpfs 785M 124K 785M 1% /run/user/1000
/dev/sda1 fuseblk 58G 18G 40G 32% /media/nvidia/Ubuntu
/dev/mmcblk2p1 ext4 15G 5,2G 8,7G 38% /media/nvidia/sdCard
tmpfs tmpfs 785M 0 785M 0% /run/user/0
I use the nvcc compiler, with -O3 as well as -use_fast_math.
Thank you!
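To narrow both symptoms down, one approach is to check fwrite's return value and time each frame write; here is a minimal sketch using std::chrono (the frame size is taken from the example above, and the errno shown in the comment is only one possible cause):

#include <chrono>
#include <cstdio>

// Writes one 4224x3008 16-bit frame, reports whether the write was short
// (a short write would explain the "skipped" data), and returns the
// elapsed time in milliseconds.
static double timed_write(FILE *f, const unsigned short *buf, size_t n)
{
    auto t0 = std::chrono::steady_clock::now();
    size_t written = std::fwrite(buf, sizeof(unsigned short), n, f);
    auto t1 = std::chrono::steady_clock::now();
    if (written != n)
        std::perror("fwrite");  // e.g. EFBIG if a file-size limit is hit
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}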
On CentOS 7, when writing to a file I get "No space left on device (28)", but I checked and both disk space and inodes have room. I don't know why. Has this ever happened to anyone?
[root@GDI2390 sync_backup]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 52403200 2595000 49808200 5% /
devtmpfs 32867708 0 32867708 0% /dev
tmpfs 32874076 24 32874052 1% /dev/shm
tmpfs 32874076 279528 32594548 1% /run
tmpfs 32874076 0 32874076 0% /sys/fs/cgroup
/dev/sda1 508588 98124 410464 20% /boot
/dev/mapper/centos-home 4797426908 902326816 3895100092 19% /home
tmpfs 6574816 0 6574816 0% /run/user/0
[root@GDI2390 sync_backup]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/centos-root 52428800 89274 52339526 1% /
devtmpfs 8216927 562 8216365 1% /dev
tmpfs 8218519 2 8218517 1% /dev/shm
tmpfs 8218519 734 8217785 1% /run
tmpfs 8218519 13 8218506 1% /sys/fs/cgroup
/dev/sda1 512000 330 511670 1% /boot
/dev/mapper/centos-home 4797861888 26409024 4771452864 1% /home
tmpfs 8218519 1 8218518 1% /run/user/0
check "Disk space vs Inode usage"
df -h vs df -i
check also - No space left on device
Assuming this is ext3/ext4, check whether any of the folders you are rsyncing contains about 1M files. Note that it is not a straightforward limit that has overflowed; it is more complicated, and related to the length of your filenames and the depth of the b-trees.
It is not disk space or inodes. It is the directory index that may have overflowed. See https://askubuntu.com/questions/644071/ubuntu-12-04-ext4-ext4-dx-add-entry2006-directory-index-full
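If you'd rather do the counting from a program, here is a minimal C++17 sketch that counts the entries under each subdirectory of a given root, to spot a folder with on the order of 1M files (error handling mostly omitted):

#include <cstdint>
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// Counts entries under each subdirectory of argv[1] (default "."); a
// directory with ~1M entries can fill its ext4 directory index.
int main(int argc, char **argv)
{
    fs::path root = (argc > 1) ? argv[1] : ".";
    for (const auto &dir : fs::directory_iterator(root)) {
        if (!dir.is_directory())
            continue;
        std::uintmax_t n = 0;
        fs::recursive_directory_iterator it(
            dir.path(), fs::directory_options::skip_permission_denied), end;
        for (; it != end; ++it)
            ++n;
        std::cout << dir.path() << ": " << n << " entries\n";
    }
    return 0;
}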
I have an EC2 instance with 2 EBS volumes. On the root volume, I increased the volume size.
First I modified the volume size in the console. Then I extended the partition following this instruction.
However, when running any command, I get No space left on device.
echo "Hello" > hello.txt
-bash: hello.txt: No space left on device
df -h shows the total space correctly.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 395M 596K 394M 1% /run
/dev/xvda1 30G 19G 9.6G 67% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/xvdg 10G 6.2G 3.9G 62% /data2
tmpfs 395M 0 395M 0% /run/user/1001
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 30G 0 part /
xvdg 202:96 0 31G 0 disk /data2
The bottom of the guide said,
If the increased available space on your volume remains invisible to
the system, try re-initializing the volume as described in
Initializing Amazon EBS Volumes.
So I followed the instructions and used dd to reinitialize the volume:
sudo dd if=/dev/xvda of=/dev/null bs=1M
30720+0 records in
30720+0 records out
32212254720 bytes (32 GB, 30 GiB) copied, 504.621 s, 63.8 MB/s
However, I still get the "No space left on device" error. I have done multiple reboots of the instance, and I still see the same error.
Update 1: I have a large number of small files (4-10KB), so I am running out of inodes. Please let me know how to increase the inodes on an ext4 partition. Thanks.
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 497270 371 496899 1% /dev
tmpfs 504717 539 504178 1% /run
/dev/xvda1 1966080 1966080 0 100% /
tmpfs 504717 1 504716 1% /dev/shm
tmpfs 504717 4 504713 1% /run/lock
tmpfs 504717 18 504699 1% /sys/fs/cgroup
/dev/xvdg 5242880 39994 5202886 1% /data2
tmpfs 504717 10 504707 1% /run/user/1001
Please let me know how to resolve this error.
You don't need to reinitialise the volume if you are using a new-generation EC2 instance type like M4, M5, T2 or T3.
You also have to expand the filesystem on the EC2 instance itself:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
As I can see, your inodes are full. Try this command to find the offending directory:
for i in /*; do echo $i; find $i |wc -l; done
Then, after finding the directory, do:
for i in /path/to/directory/*; do echo $i; find $i |wc -l; done
After that, remove those files using:
rm -rf /path/to/fileorfoldername
If you don't want to remove the files, you will have to create a new filesystem using mke2fs, raising the inode count with -N (or lowering the bytes-per-inode ratio with -i), or keep increasing your root volume.
Here is a similar question about it:
https://unix.stackexchange.com/questions/26598/how-can-i-increase-the-number-of-inodes-in-an-ext4-filesystem
Also, you can move your files to the secondary drive.
You are running out of inodes on your / (root) filesystem. To debug it, you need to:
Find what the inodes are being used by, with something like for i in /*; do echo $i; find $i |wc -l; done
Then look for the directories that take the most time to count, and delete those directories.
I am getting the message 'disk is full' despite having plenty of free space.
Inodes:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 1638400 55434 1582966 4% /
tmpfs 480574 1 480573 1% /dev/shm
/dev/xvdb2 3014656 3007131 7525 100% /export
Disk space:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 25G 16G 9.3G 63% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/xvdb2 46G 24G 20G 56% /export
Note: our Magento application code is under /dev/xvdb2, mounted on /export. Please advise how to sort out the issue.
I need to make a ramfs and mount it on a directory in Linux using C++. I want to do it as a regular user (no sudo).
I need to call an application on a file that I create, and this will happen often. Writing it to the HDD is very slow.
All I found is:
system("mkdir /mnt/ram");
system("mount -t ramfs -o size=20m ramfs /mnt/ram");
but that is not good. I want to run as a regular user, and the mount command can only be run as root.
What can I do?
For a userspace ramfs solution, you can use python-fuse-ramfs.
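Alternatively, if a whole mount point isn't strictly required, another unprivileged option on Linux 3.17+ (glibc 2.27+) is memfd_create(2), which gives you a RAM-only file you can hand to other programs via /proc/self/fd; a minimal sketch of that technique, not of python-fuse-ramfs:

#define _GNU_SOURCE       // for memfd_create (glibc)
#include <sys/mman.h>     // memfd_create
#include <unistd.h>       // write, getpid, pause
#include <cstdio>

int main()
{
    // Anonymous RAM-backed file; it vanishes when the last fd is closed.
    int fd = memfd_create("scratch", 0);
    if (fd == -1) {
        std::perror("memfd_create");
        return 1;
    }
    (void)write(fd, "hello\n", 6);
    // Another process can open it through this path while we hold the fd:
    std::printf("/proc/%d/fd/%d\n", (int)getpid(), fd);
    pause();  // keep the fd alive while the other application uses it
    return 0;
}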
I checked whether /tmp is a ramfs, but it is not; it creates files on the HDD. But when I run df -h it outputs:
rootfs 25G 9,4G 15G 40% /
devtmpfs 1,9G 0 1,9G 0% /dev
tmpfs 1,9G 1,6G 347M 83% /dev/shm
tmpfs 1,9G 1,3M 1,9G 1% /run
/dev/mapper/vg_micro-root 25G 9,4G 15G 40% /
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
tmpfs 1,9G 0 1,9G 0% /media
/dev/mapper/vg_micro-stack 289G 191M 274G 1% /stack
/dev/mapper/vg_micro-home 322G 40G 266G 14% /home
/dev/sda2 485M 89M 371M 20% /boot
/dev/sda1 200M 19M 182M 10% /boot/efi
This means that the tmpfs (ramdisk) mounts are: /dev/shm, /run, /sys/fs/cgroup and /media. But only one of these is meant to be a temporary ramdisk for communication between processes using files. Here is the /dev/shm description and usage. The only thing is that tmpfs will not grow dynamically, but for my purposes it will be enough (20MB - 1GB).
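Here is a minimal sketch of using /dev/shm this way from C++, via POSIX shm_open (link with -lrt on older glibc; the name "/myscratch" and the 20MB size are arbitrary choices for illustration):

#include <sys/mman.h>   // shm_open, shm_unlink
#include <sys/stat.h>   // mode constants
#include <fcntl.h>      // O_* constants
#include <unistd.h>     // ftruncate, close
#include <cstdio>

int main()
{
    // Creates /dev/shm/myscratch; no root needed, lives entirely in RAM.
    int fd = shm_open("/myscratch", O_CREAT | O_RDWR, 0600);
    if (fd == -1) {
        std::perror("shm_open");
        return 1;
    }
    if (ftruncate(fd, 20 * 1024 * 1024) == -1) {  // reserve 20MB
        std::perror("ftruncate");
        return 1;
    }
    // ... hand /dev/shm/myscratch to the other application here ...
    close(fd);
    shm_unlink("/myscratch");  // remove it when done
    return 0;
}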