I am using FreeRTOS v10.1.0, and I have also downloaded FreeRTOS+FAT from the Labs area (160919 release).
I am using an Altera Cyclone V evaluation board and have successfully run FreeRTOS projects on the board, using the Demo project and the available port for my board as the basis for my own applications.
I have also successfully mounted a partition on my SD card, read files from it, and written files to it.
My problems begin when I try to read a file bigger than 2 KB. I am using the following ff_fread call to read from a file I have previously opened and know to be 5777 bytes long:
ff_fread( &byteBuffer[0], 1, 5777, pxSourceFile );
What I find is that the byte buffer is repeatedly populated with the same 2048 bytes, up to the maximum of 5777 bytes. So byteBuffer[0] to byteBuffer[2047] contain what I expect, but then that data is repeated.
I have also tried reading the data in 512-byte chunks and in 2048-byte chunks, in case the issue was related to a sector boundary (512-byte sectors) or a cluster boundary (4 sectors per cluster).
My suspicion is that the issue is in FreeRTOS+FAT as opposed to the Altera code for interfacing with the SD card. This is because when I put a breakpoint in the following function, I can see that the FreeRTOS+FAT API does actually seem to jump back to the first sector after it has successfully read 4 sectors of data. So it would seem that the Altera API is returning the data requested by FreeRTOS+FAT.
static int32_t prvReadSd( uint8_t *pucDestination,
                          uint32_t ulSectorNumber,
                          uint32_t ulSectorCount,
                          FF_Disk_t *pxDisk )
{
    /* Translate the sector-based request into the byte values the Altera
       driver expects (512-byte sectors). */
    int32_t errorCode = alt_sdmmc_read( pucDestination,
                                        ulSectorNumber * 512,
                                        ulSectorCount * 512 );
    return errorCode;
}
Any insights anyone can offer into the issues I am having will be greatly appreciated.
OK, I have resolved my issue. My apologies for blaming FreeRTOS+FAT; I will explain the issue below in case others run into the same thing.
I had created a 1 MB partition on my SD card which I believed to be FAT16. After trying various things, I decided to reformat my SD card using the following command in Linux:
sudo mkdosfs -F 16 /dev/sdc4
Linux gave the following warning:
WARNING: Not enough clusters for a 16 bit FAT! The filesystem will be
misinterpreted as having a 12 bit FAT without mount option "fat=16".
This prompted me to enable FAT12 support in the FreeRTOS+FAT config file, and this fixed my issue.
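For reference, the switch lives in FreeRTOSFATConfig.h; a minimal sketch of the change (the macro name is taken from the FreeRTOS+FAT sources, so verify it against your release):

/* In FreeRTOSFATConfig.h: allow mounting partitions that are
   formatted as (or misinterpreted as) FAT12. */
#define ffconfigFAT12_SUPPORT    1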
I'm trying to write to the flash memory of an STM32L4R5 using the FLASH_TYPEPROGRAM_FAST mode of HAL_FLASH_Program().
The flash of the MCU is configured as Single Bank.
Writing to the flash only works when using FLASH_TYPEPROGRAM_DOUBLEWORD. The flash reads back as 0xFFFFFFFF when written in FLASH_TYPEPROGRAM_FAST mode.
This is my test project:
// Page Erase Structure
static FLASH_EraseInitTypeDef EraseInitStruct;
// Page Erase Status
uint32_t eraseStatus;
// Data Buffer
uint64_t pDataBuf[32] =
{
0x1111111122222222, 0x3333333344444444,
0x5555555566666666, 0x7777777788888888,
0x12345678ABC12345, 0x23456789DEF01234,
0x34567890AAABBB12, 0x4567890FABCDDD34,
0x1111111122222222, 0x3333333344444444,
0x5555555566666666, 0x7777777788888888,
0x12345678ABC12345, 0x23456789DEF01234,
0x34567890AAABBB12, 0x4567890FABCDDD34,
0x1111111122222222, 0x3333333344444444,
0x5555555566666666, 0x7777777788888888,
0x12345678ABC12345, 0x23456789DEF01234,
0x34567890AAABBB12, 0x4567890FABCDDD34,
0x1111111122222222, 0x3333333344444444,
0x5555555566666666, 0x7777777788888888,
0x12345678ABC12345, 0x23456789DEF01234,
0x34567890AAABBB12, 0x4567890FABCDDD34
};
// Flash Page Start Address
uint32_t pageAddr = 0x081FE000;
// Fill Erase Init Structure
EraseInitStruct.TypeErase = FLASH_TYPEERASE_PAGES;
EraseInitStruct.Banks = FLASH_BANK_1;
EraseInitStruct.Page = 255;
EraseInitStruct.NbPages = 1;
// Unlocking the FLASH Control Register
HAL_FLASH_Unlock();
// Clear OPTVERR Bit Set on Virgin Samples
__HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_OPTVERR);
// Erasing the Flash Page
HAL_FLASHEx_Erase(&EraseInitStruct, &eraseStatus);
#if 0
// Writing a double word to flash. pDataBuf[0] is the 64-bit word.
HAL_FLASH_Program(FLASH_TYPEPROGRAM_DOUBLEWORD, pageAddr, pDataBuf[0]);
#else
// Writing 32 double words. The fast-programming modes expect the starting
// address of the 64-bit array to be passed as the data argument.
HAL_FLASH_Program(FLASH_TYPEPROGRAM_FAST_AND_LAST, pageAddr, (uint64_t)(uint32_t)pDataBuf);
#endif
// Locking the FLASH Control Register
HAL_FLASH_Lock();
Am I doing anything wrong?
Thank you,
Ivan
See document RM0932, the reference manual for the STM32L4+ series, section FLASH. It covers reading and writing flash for both single-bank and dual-bank configurations and the different MCU models of this line. Most of the differences seem to concern reading from flash (64-bit for dual bank, 128-bit for single bank). As for writing, see page 128.
Flash is very picky about data width, and seemingly every STM32 has a different data width for its flash. Very recently I stumbled upon one that accepted only 16-bit reads and writes; this one likes double words. There is no universal function to read and write flash on every STM32, so it seems one of your calls doesn't respect this MCU's flash data-width rules. You can check whether any error flags appear, as per the reference manual, although, as you can see, it doesn't say anything about attempting to write a 32-bit piece of data. I would expect that write to fail, but we can't draw any conclusions about error flags from the screenshot provided. If you're curious enough, you can look at what data width each of your modes/functions uses and see what happens. 64-bit writes have to work.
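If you want to inspect those error flags from code, here is a minimal fragment (HAL_FLASH_GetError() is part of the STM32 HAL; the exact HAL_FLASH_ERROR_* flag names can vary between HAL versions, and pageAddr/pDataBuf are from the question's snippet):

#include "stm32l4xx_hal.h"
#include <stdio.h>

/* Fragment: attempt a fast program and dump the HAL error mask on failure. */
if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_FAST_AND_LAST, pageAddr,
                      (uint64_t)(uint32_t)pDataBuf) != HAL_OK)
{
    uint32_t err = HAL_FLASH_GetError(); /* bitmask of HAL_FLASH_ERROR_* flags */
    printf("Flash program failed, error mask: 0x%08lX\r\n", err);
}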
Computer:
Processor: Intel Xeon Silver 4114 CPU @ 2.19 GHz (2 processors)
RAM: 96 GB @ 2666 MHz (12 × 8 GB sticks)
OS: Windows 10
GPU: None
Hard drive: Samsung MZVLB512HAJQ-000H2, 512 GB M.2 PCIe NVMe
IDE:
Visual Studio 2019
I am including what I am doing in case it is relevant. I am running a Visual Studio project in which I read data off a GSC PCI SIO4B Sync Card 256K. Using the API for this card (documentation: http://www.generalstandards.com/downloads/GscApi.1.6.10.1.pdf), I read 150 bytes of data at a rate of 100 Hz using the code below. That data is then split into the message structure of my device. I can't give details on the message structure, but the data is combined into the various words using a union and added to an integer array, int Data[100];.
Union Example:
union data_set {
    unsigned int integer;     // assembled word
    unsigned char input[2];   // raw bytes from the card
} word;
Example of how the data is read:
PLX_PHYSICAL_MEM cpRxBuffer;
#define TEST_BUFFER_SIZE 0x400

// Allocate and map memory for the buffer.
cpRxBuffer.Size = TEST_BUFFER_SIZE;
status = GscAllocPhysicalMemory(BoardNum, &cpRxBuffer);
status = GscMapPhysicalMemory(BoardNum, &cpRxBuffer);
// Note: sizeof(cpRxBuffer) would be the size of the descriptor struct, not
// the buffer; the buffer itself is TEST_BUFFER_SIZE bytes.
memset((unsigned char*)cpRxBuffer.UserAddr, 0xa5, TEST_BUFFER_SIZE);

// Start data reception.
status = GscSio4ChannelReceivePlxPhysData(BoardNum, iRxChannel, &cpRxBuffer, SetMaxBytes, &messageID);

// Wait for the Rx operation to complete.
status = GscSio4ChannelWaitForTransfer(BoardNum, iRxChannel, 7000, messageID, &amount);
if (status)
{
    // On error, "amount" contains the number of bytes actually transferred.
    DisplayErrorMessage(status);
    printf("\n\t%04X bytes out of %04X transferred", amount, SetMaxBytes);
}
My issue is that this code works fine and keeps up for around 5 minutes, then it randomly stops being able to keep up, and the FIFO (first-in, first-out) register on the PCI card begins to fill up faster than the code can process the data. To me this seems like a memory-leak issue, since the code works fine for a long time and then starts to slow down when nothing has changed; all the code is doing is reading the data off the card. We used to save the data in a really large array, but even after removing that we had the same issue.
I am unsure how to figure out exactly what is happening, and I'm hoping for a way to determine whether there is a memory leak and how to fix it if there is.
A memory leak is only a guess, though; it could very well be something else, so any out-of-the-box suggestions for diagnosing the problem are also appreciated.
Similar to Paul's answer, but I like to strategically place two (or more) _CrtMemCheckpoint calls followed by a _CrtMemDifference, to cut down the noise.
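A minimal sketch of that pattern (MSVC debug CRT; the checkpoints bracket the code you suspect, and the leak here is deliberate for the demo):

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main(void)
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint(&before);        // snapshot 1

    char* p = (char*)malloc(150);      // suspect code goes here
    (void)p;                           // never freed on purpose

    _CrtMemCheckpoint(&after);         // snapshot 2
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);  // report only what changed in between
    return 0;
}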
Memory leaks can be detected and reported on (in Debug builds) by calling the _CrtDumpMemoryLeaks function. When running under the debugger, this will tell you (in the Output tab) how many allocations are outstanding at the time it is called, and the file and line number each was allocated from.
Call this right at the end of your program, after you (think you) have freed all the resources you use. Anything left over is a candidate for being a leak.
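For instance (Debug build under the Visual Studio debugger; the report lands in the Output tab):

#define _CRTDBG_MAP_ALLOC   // makes the report show the file/line of each allocation
#include <stdlib.h>
#include <crtdbg.h>

int main(void)
{
    char* leaked = (char*)malloc(100);  // never freed, so it will be reported
    (void)leaked;
    _CrtDumpMemoryLeaks();              // call after all cleanup is done
    return 0;
}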
I have a Windows 8.1 installation on second partition of my second HDD (/dev/sdb2 in Ubuntu) created using the command
VBoxManage internalcommands createrawvmdk -filename sdb2.vmdk -rawdisk /dev/sdb -partitions 2
Everything worked just fine: the Windows installation was runnable from VirtualBox and even bootable normally via GRUB. The last time I installed some software in Windows (the PC booted directly into Windows), I discovered there wasn't enough space on the system partition (/dev/sdb2), so I enlarged it by the 15 GB that were left spare on the HDD.
These changes of course made the Windows installation unusable in VirtualBox; it fails to boot, offering some repair options. The first thing I realized needed to be done was enlarging the partition in the VMDK file, so I backed up the old sdb2.vmdk and sdb2-pt.vmdk files and recreated them with the same command as before.
This, however, made no difference, because sdb2-pt.vmdk seems to store the boot record (the MBR, in my case, currently with GRUB) and some more data needed for Windows to work properly. My next attempt was replacing the new sdb2-pt.vmdk with the old one (with the Windows boot loader and perhaps the old partition table); this didn't work either.
How to update the VMDK files with the new partition size to make the enlarged Windows 8.1 installation bootable from VirtualBox again?
I have finally found the solution myself. Since the VBoxManage internalcommands createrawvmdk -filename sdb2.vmdk -rawdisk /dev/sdb -partitions 2 command produces two valid files based on the current disk structure, the only change needed was to recover the Windows boot loader from the old sdb2-pt.vmdk file, which is a rather straightforward process. If you only wish to learn the recovery steps, you can skip the following theoretical part.
Some background information on the VMDK file format
VMware Disk Format (VMDK) consists of two files: a descriptor file (sdb2.vmdk in the original question) and an extent file (sdb2-pt.vmdk). Their internal structure is well defined in the specification from VMware. I'll sum up the most important parts:
The descriptor file (sdb2.vmdk) contains a section annotated # Extent description which can look something like this:
# Extent description
RW 63 FLAT "sdb2-pt.vmdk" 0
RW 41943040 ZERO
RW 83886080 FLAT "/dev/sdb" 58722304
RW 2 FLAT "sdb2-pt.vmdk" 63
RW 1191843568 ZERO
One extent description (a row from those above) has the following structure:
Access Size in sectors Type of extent Filename (Offset)
The offset parameter (specified only for FLAT type extents) specifies the offset (in sectors) of the given extent within the file Filename. Notice that file sdb2-pt.vmdk consists of two extents, the first 63 sectors long and the second only 2 sectors long.
The FLAT extent file sdb2-pt.vmdk is a raw binary data file, identical to one you would obtain e.g. using the dd command on Unix-like systems. Since the sector size was 512 bytes in my case (I don't know if this is a general rule), the sdb2-pt.vmdk file (based on the new disk partitioning described in the extent description above) was (63+2)*512 = 33,280 bytes long.
Now to the second extent (the one only 2 sectors in size). This is a padding extent which arose in my new partition table after enlarging the Windows partition (the third extent in the description table). Since my previous partition table did not contain any such padding, the old sdb2-pt.vmdk file contained only the first, 63-sector extent and was thus 1024 bytes smaller than the new one generated by the VBoxManage internalcommands createrawvmdk -filename sdb2.vmdk -rawdisk /dev/sdb -partitions 2 command. This obviously rendered the old extent file and the new one incompatible.
The recovery process
Please be aware that the following steps apply to the old MBR disk structure only!
You surely want to keep the new partition structure and propagate any changes made in the partition table to the VMDK file. Proceed with these steps:
Back up your old descriptor file (sdb2.vmdk) and extent file (sdb2-pt.vmdk). In the following steps, you will only need the second one, but you never know what else could happen.
Generate new descriptor and extent files by issuing the command:
VBoxManage internalcommands createrawvmdk -filename sdb2.vmdk -rawdisk /dev/sdb -partitions 2
Now, the first extent entry in your new description file (sdb2.vmdk) should look like this:
RW ## FLAT "sdb2-pt.vmdk" 0
Knowing that you want to keep the new partition table (with everything that follows) and only restore the Windows boot loader stored in the backed-up extent file (the old sdb2-pt.vmdk), you have to copy the first 440 bytes (the boot loader) from the old extent file to the new one. This can either be done with a hex editor (copy all bytes from address 0x0 up to 0x1B8 exclusive) or, on a Unix-like system, with the command:
dd if=old-sdb2-pt.vmdk of=sdb2-pt.vmdk bs=1 count=440 conv=notrunc
(The conv=notrunc flag matters: without it, dd truncates the output file to 440 bytes and destroys the rest of the new extent.)
Voilà.
On GitHub there is a tool that will do this automatically (and re-running it with the same options will update the VMDK and auxiliary files, so you can change partitions later, too): https://github.com/vasi/vmdk-raw-parts
Trying to read the sizes of disks that were created in multiple sessions using GetDiskFreeSpaceEx() gives the size of the last session only. How do I correctly read the number and sizes of all sessions in C/C++?
Thanks.
You might want to look at the DeviceIoControl API function. See here for control codes. Here is a code example that retrieves the size of a CD disc. Replace
CreateFile(TEXT("\\\\.\\PhysicalDrive0")
with, e.g.,
CreateFile(TEXT("\\\\.\\F:") /* Drive is F: */
if you wish.
Note: The page says that DeviceIoControl can be used to "retrieve information about a floppy disk drive, hard disk drive, tape drive, or CD-ROM drive", but I have also tested it on a DVD, and it seemed to work perfectly. I did not have access to any multisession DVDs to test, so you'll have to test if that works yourself. If it doesn't work, I'd try some of the other control codes, at least IOCTL_DISK_GET_DRIVE_GEOMETRY_EX, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, IOCTL_DISK_GET_LENGTH_INFO and IOCTL_DISK_GET_PARTITION_INFO_EX.
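To get you started, a minimal sketch using IOCTL_DISK_GET_LENGTH_INFO (drive letter F: is just an example, and I have not verified what this reports for multisession discs):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE hDrive = CreateFile(TEXT("\\\\.\\F:"), GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDrive == INVALID_HANDLE_VALUE)
    {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    GET_LENGTH_INFORMATION info;
    DWORD bytesReturned = 0;
    if (DeviceIoControl(hDrive, IOCTL_DISK_GET_LENGTH_INFO, NULL, 0,
                        &info, sizeof(info), &bytesReturned, NULL))
        printf("Disc length: %lld bytes\n", info.Length.QuadPart);
    else
        printf("DeviceIoControl failed: %lu\n", GetLastError());

    CloseHandle(hDrive);
    return 0;
}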
If all else fails with DeviceIoControl, you could possibly make use of the Windows Image Mastering API (IMAPI). You'll need v2 of the API (included with Vista and later; it can be added to XP and 2003 too, see here: What's new in IMAPIv2) for DVD support. This API is primarily for CD burning, but perhaps contains some functionality for retrieving disc size; I'd find it strange if it didn't. In particular, this example looks interesting. I do not know if it works for multisession discs either, but since it can create them, I guess it's likely.
Here are some resources for IMAPI:
MSDN - IMAPI
MSDN - IMAPI interfaces
MSDN - Creating multisession disks with IMAPI (note: example with VB, not C or C++)
Hey, I've got at least two solutions for you:
1) Download dvd+rw-mediainfo.exe from http://fy.chalmers.se/~appro/linux/DVD+RW/tools/win32/; it's a tool that reads info about your disc. Then just make a system call from your app and parse the results (see the sketch after the example output below). Here's example output:
D:\Downloads>"dvd+rw-mediainfo.exe" f:
INQUIRY: [HL-DT-ST][DVDRAM GT30N ][1.01]
GET [CURRENT] CONFIGURATION:
Mounted Media: 10h, DVD-ROM
Current Write Speed: 1.0x1385=1385KB/s
Write Speed #0: 8.0x1385=11080KB/s
Write Speed #1: 4.0x1385=5540KB/s
Write Speed #2: 2.0x1385=2770KB/s
Write Speed #3: 1.0x1385=1385KB/s
Speed Descriptor#0: 00/2292991 R#8.0x1385=11080KB/s W#8.0x1385=11080KB/s
READ DVD STRUCTURE[#0h]:
Media Book Type: 01h, DVD-ROM book [revision 1]
Legacy lead-out at: 2292992*2KB=4696047616
READ DISC INFORMATION:
Disc status: complete
Number of Sessions: 1
State of Last Session: complete
Number of Tracks: 1
READ TRACK INFORMATION[#1]:
Track State: complete
Track Start Address: 0*2KB
Free Blocks: 0*2KB
Track Size: 2292992*2KB
Last Recorded Address: 2292991*2KB
FABRICATED TOC:
Track#1 : 17#0
Track#AA : 17#2292992
Multi-session Info: #1#0
READ CAPACITY: 2292992*2048=4696047616
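Here is a sketch of that system-call-and-parse approach using _popen (it assumes the tool is on your PATH and the drive is f:; the "Number of Sessions" label comes from the output above):

#include <stdio.h>
#include <string.h>

/* Run dvd+rw-mediainfo and scan its output for the session count. */
int GetSessionCount(void)
{
    FILE* pipe = _popen("dvd+rw-mediainfo.exe f:", "r");
    char line[256];
    int sessions = -1;

    if (pipe == NULL)
        return -1;

    while (fgets(line, sizeof(line), pipe) != NULL)
    {
        const char* p = strstr(line, "Number of Sessions:");
        if (p != NULL)
            sscanf(p, "Number of Sessions: %d", &sessions);
    }
    _pclose(pipe);
    return sessions;
}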
2) Investigate mciSendString from [DllImport("winmm.dll", EntryPoint = "mciSendStringA", CharSet = CharSet.Ansi)]; I suspect you can send some command and get the desired results.
PS: of course, you may download the dvd+rw-mediainfo.exe sources from here and investigate further; I am just giving you ideas to think about.
UPDATE
Link to source code updated, thanks @oystein
There are many ways to do this, since DVD drives have several interfaces for it due to legacy and backward-compatibility issues.
You could send an IOCTL_SCSI_PASS_THROUGH_DIRECT command to the DVD drive (the physical-device handle for it). With it, you issue SCSI commands that will be answered by the drive. You can read session information, disc information, disc capacity, and more.
I believe that dvd+rw-mediainfo.exe issues these.
Unfortunately, the interface is a bit tricky and obscure, since it is a command within a command. The pass-through structure has a byte buffer you will have to fill in yourself with the command structure.
Or you can call IOCTL_CDROM_READ_TOC_EX:
http://www.osronline.com/ddkx/storage/k306_2cs2.htm
I also believe that the exact set of IOCTLs/commands that will work depends on the drive and its firmware.
Older drives will not support the newer interfaces, and some of the newer drives will not support legacy interfaces.
Thus, some of the libraries & tools might use one or more of these interfaces.
Accessing the older sessions is all quite messy, really, since most OSes don't care about them, only the most recent one.
I am busy writing something to test disk IO read speeds on Linux.
At the moment I have something like this to read the files:
Edited to change code to this:
const int segsize = 1048576;
char buffer[segsize];
ifstream file;
file.open(sFile.c_str());
while(file.readsome(buffer,segsize)) {}
For foo.dat, which is 150 GB, the first read takes around 2 minutes.
However, if I run it within 60 seconds of the first run, it then takes around 3 seconds. How is that possible? Surely the only place it could be read from that fast is the buffer cache in RAM, and the file is too big to fit in RAM.
The machine has 50 GB of RAM, and the drive is an NFS mount with all the default settings. Please let me know where I could look to confirm that this file is actually being read at this speed. Is my code wrong? It appears to take the correct amount of time the first time the file is read.
Edited to Add:
I found out that my files were only being read up to a random point. I've managed to fix this by changing segsize down to 1024 from 1048576. I have no idea why this change allows the ifstream to read the whole file instead of stopping at a random point.
Thanks for the answers.
On Linux, you can do this for a quick throughput test:
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.863904 s, 243 MB/s
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0748273 s, 2.8 GB/s
$ sync && echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.919688 s, 228 MB/s
echo 3 > /proc/sys/vm/drop_caches will flush the cache properly
in_avail doesn't give the length of the file but a lower bound on what is available (in particular, if the buffer has already been filled, it returns the size available in the buffer). Its purpose is to tell you what can be read without blocking.
An unsigned int is most probably unable to hold a length of more than 4 GB, so what is actually read may well fit in the cache.
C++0x Stream Positioning may be interesting to you if you are using large files.
in_avail returns a lower bound on how much is available to read in the stream's read buffer, not the size of the file. To read the whole file via the stream, just keep calling the stream's readsome() method and checking how much was read with the gcount() method; when that returns zero, you have read everything.
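A sketch of such a loop; note, though, that readsome() only ever reports what is already in the stream's internal buffer and may return zero before end-of-file (which matches the asker's "stops at a random point" symptom), so for reading a whole file a plain read()/gcount() loop is the safer variant:

#include <fstream>

// Read an entire file in fixed-size chunks, counting the bytes seen.
long long CountFileBytes(const char* path)
{
    std::ifstream file(path, std::ios::binary);
    char buffer[65536];
    long long total = 0;

    // read() blocks until the chunk is full or EOF is hit; gcount()
    // reports how many bytes the last read actually delivered.
    while (file.read(buffer, sizeof(buffer)) || file.gcount() > 0)
        total += file.gcount();
    return total;
}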
It appears to take a correct amount of time the first time the file is read.
On that first read, you're reading 150 GB in about 2 minutes. That works out to about 10 gigabits per second. Is that what you're expecting (based on the network to your NFS mount)?
One possibility is that the file could be at least partly sparse. A sparse file has regions that are truly empty; they don't even have disk space allocated to them. These sparse regions also don't consume much cache space, so reading them essentially only requires the time to zero out the userspace pages they're being read into.
You can check with ls -lsh. The first column is the on-disk size; if it's less than the file size, the file is indeed sparse. To de-sparse the file, just write to every page of it.
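For illustration (a hypothetical file, output abridged), the first column shows the on-disk size:
$ truncate -s 1G sparse.dat
$ ls -lsh sparse.dat
0 -rw-r--r-- 1 user user 1.0G sparse.dat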
If you would like to test true disk speeds, one option is to use the O_DIRECT flag to open(2) to bypass the cache. Note that all IO using O_DIRECT must be page-aligned, and some filesystems do not support it (in particular, it won't work over NFS). Also, it's a bad idea for anything other than benchmarking; see some of Linus's rants in this thread.
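A minimal sketch of such a benchmark read, assuming a local file named foo.dat (Linux-specific; the buffer, offset, and count must all be suitably aligned, and as noted above this won't work over NFS):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t chunk = 1 << 20;  /* 1 MiB: a multiple of the block size */
    void *buf = NULL;
    long long total = 0;
    ssize_t n;

    int fd = open("foo.dat", O_RDONLY | O_DIRECT); /* bypass the page cache */
    if (fd < 0 || posix_memalign(&buf, 4096, chunk) != 0)
        return 1;

    while ((n = read(fd, buf, chunk)) > 0)
        total += n;

    printf("read %lld bytes\n", total);
    free(buf);
    close(fd);
    return 0;
}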
Finally, to drop all caches on a Linux system for testing, you can do:
echo 3 > /proc/sys/vm/drop_caches
If you do this on both client and server, you will force the file out of memory. Of course, this will have a negative performance impact on anything else running at the time.