VxWorks simulator limits? - c++

I am currently porting some code to VxWorks, so I am using the simulator to validate my changes.
This code needs to open many pipes and sockets, and I have a problem opening these file descriptors: I can open 17 of them (sockets and pipes cause the same error), but every subsequent open fails with the error "EMFILE: too many open files".
After some research on the net, I modified the global variable NUM_FILES, but this change had no effect.
Do you know if it is the simulator that limits the number of file descriptors open simultaneously?
Thank you for your help.

I also had problems with not enough file descriptors being available. Setting NUM_FILES to 50 or so solved the problem. The limitation is within the VxWorks kernel, which statically allocates the file descriptor table.
As far as I know, changing NUM_FILES requires the kernel to be recompiled, since it is a kernel configuration value.
You can count the number of free file descriptors by compiling and executing the following function on the VxWorks shell:
#include <stdio.h>   /* FILE, fopen, fclose */

int countFreeFds(void)
{
    FILE *fd[100];
    int   count;
    int   i;

    /* open the same existing file repeatedly until fopen() fails,
       i.e. until the fd table is exhausted or we hit our own limit of 100 */
    for (count = 0; count < 100; count++)
    {
        fd[count] = fopen("somefile", "r");   /* any existing file */
        if (fd[count] == NULL)
        {
            break;
        }
    }

    /* close everything again so no descriptors are leaked */
    for (i = count - 1; i >= 0; i--)
    {
        fclose(fd[i]);
    }

    return count;
}
If you do that on a freshly started VxWorks image, with no further binaries loaded and no extra tasks started, countFreeFds will return a number close to NUM_FILES.
(Also note that I have not tested the function above, since right now I don't have access to the source I used some years ago. You may also want to modify the code to use sockets or pipes instead, but as far as free file descriptors are concerned it makes no difference.)
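Since the note above suggests it, here is a hedged, untested sketch of such a socket-based variant. The header names are assumptions that depend on your VxWorks version (sockLib.h on older kernels, the POSIX <sys/socket.h> on newer ones):
#include <sys/socket.h>   /* socket(); on older VxWorks the declarations live in sockLib.h */
#include <unistd.h>       /* close() */

int countFreeSocketFds(void)
{
    int fds[100];
    int count;
    int i;

    /* open plain UDP sockets until the file descriptor table is exhausted */
    for (count = 0; count < 100; count++)
    {
        fds[count] = socket(AF_INET, SOCK_DGRAM, 0);
        if (fds[count] < 0)
        {
            break;
        }
    }

    /* close them all again so no descriptors are leaked */
    for (i = count - 1; i >= 0; i--)
    {
        close(fds[i]);
    }

    return count;
}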

I found the problem: I had to modify RTP_FD_NUM_MAX.
It is an RTP-specific configuration value.
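For anyone hitting the same wall: as a hedged sketch, in a traditional config.h-style kernel build the change looks roughly like the lines below, followed by a kernel rebuild (in Workbench the same parameters are set through the kernel configurator; names and defaults depend on your VxWorks version, and the values here are just placeholders):
/* kernel configuration - pick values that match what your application needs */
#define NUM_FILES        50   /* size of the kernel-wide file descriptor table */
#define RTP_FD_NUM_MAX   50   /* per-RTP limit on open file descriptors        */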

Related

How to obtain a drive's file copy percentage or file copy progress

I am writing a program for my friend with Qt Creator in C++. My question: how can I get the copy percentage for each drive that is the target of a file copy, while the copy is being performed by another application such as Windows Explorer or TeraCopy (on 64-bit Windows)?
The big problem is that my friend has many drives, because he works for many clients and they give him many USB pen drives to copy information onto.
I have tried to find an answer on the internet, or at least some context to understand the philosophy and imagine a way to code it, but I can't find anything.
Some code for this:
void Widget::print_copy_percentage(QString drive)
{
    int current_copy_percentage;
    // .... the answer current_copy_percentage = ... ;

    // find the index of the drive
    int i;
    for (i = 0; i < TOTAL; ++i)
        if (drives.at(i) == drive)
            break;
    if (i == TOTAL)
        return;                 // unknown drive, nothing to update

    // publish the information
    ProgressBar[i]->setValue(current_copy_percentage);
    ProgressBar[i]->show();
}

read() returns the wrong number of bytes read on some systems

I'm trying to solve a file reading issue in a legacy system.
It's a 32-bit Windows application, tested and run only on Windows 7 SP1 64-bit systems which all have the same service packs, SDKs and IDEs installed. The IDE is VS2010 SP1.
Here's the code in question:
#define ANZSEL 20

int ii, bfil, ipos;
if ((bfil = open("Z:\\whatever.bla", O_RDONLY, 0)) == -1) { goto end; } // please don't complain about this; it's just here because I didn't want to rephrase the if == -1 above, and because it's a legacy codebase; I also tried UNC paths, by the way, with the same result
ii = read(bfil, &some_struct_instance, sizeof(some_struct));
ipos = _lseek(bfil, 0, SEEK_CUR);   // ipos shows the correct position here, i.e. sizeof(some_struct)
if (ii == sizeof(some_struct)) {
    ii = read(bfil, &another_struct_instance, sizeof(another_struct) * ANZSEL);  // ii here sometimes shows 15 instead of sizeof(another_struct)*ANZSEL
    ipos = _lseek(bfil, 0, SEEK_CUR);   // ipos always shows the correct value of sizeof(some_struct) + sizeof(another_struct)*ANZSEL
    if (ii == sizeof(another_struct) * ANZSEL) {
        // should always get here as long as the file is long enough
So as you can see, it should be a plain old direct binary read into some structs. What I could observe is that when I create the file and first clear the struct with memset/ZeroMemory, so that all padding bytes are initialized to 0x00 instead of 0xCC (which is Microsoft's way of tagging uninitialized stack memory in debug builds), the problem disappears on the system where it misbehaved before.
Although it seems clear to me how I can "properly" solve the issue - specify O_BINARY in open(), like
if ((bfil = open("Z:\\whatever.bla", O_RDONLY | O_BINARY, 0)) == -1)
I don't have any clue why this can behave so differently.
I tried to step through the sources of open() and read() on both systems, but since I rarely have access to the only system where the problem can be reproduced, I haven't been able to find anything out yet.
My question therefore is whether anyone can point out why this happens and reference some docs.
This typically happens when a file contains the value 0x1a (aka control-Z). Like MS-DOS before it, Windows interprets control-Z as signaling the end of a text file, so when you open a file in text mode, and it reaches a 0x1a, it'll simply stop reading.
As you've already found, opening the file in binary mode fixes the problem--the 0x1a is no longer interpreted as signaling the end of file.
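To make the difference visible, here is a small hedged demo (Windows CRT, not compiled here; the file name is just an example) that writes a 0x1a byte and then reads the file back in text mode and in binary mode:
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <sys/stat.h>

int main()
{
    /* write 5 bytes, the third one being 0x1a (Ctrl-Z) */
    int fd = _open("ctrlz.bin", _O_WRONLY | _O_CREAT | _O_TRUNC | _O_BINARY,
                   _S_IREAD | _S_IWRITE);
    char out[5] = { 'A', 'B', 0x1a, 'C', 'D' };
    _write(fd, out, 5);
    _close(fd);

    char buf[16];

    /* text mode: the read stops at the Ctrl-Z, so only 2 bytes are reported */
    fd = _open("ctrlz.bin", _O_RDONLY | _O_TEXT);
    int nText = _read(fd, buf, sizeof buf);
    _close(fd);

    /* binary mode: all 5 bytes come back untouched */
    fd = _open("ctrlz.bin", _O_RDONLY | _O_BINARY);
    int nBin = _read(fd, buf, sizeof buf);
    _close(fd);

    printf("text mode read: %d bytes, binary mode read: %d bytes\n", nText, nBin);
    return 0;
}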

Portable way (linux & Windows) to have a file only modifiable by 1 process and not others in C/C++

I'm looking for a portable way (Linux & Windows) to have a file be modifiable by one process only, and not by others, in C/C++.
The full requirement is that I want to keep the file modifiable by only one running process, while the others should only be able to read it.
The difficulty is that this process uses a vendor library that will fopen/fclose the file many times during its lifetime (tens of seconds).
Thanks
You should make use of inter-process communication.
For instance, on Windows you could use the following code, which makes sure that only one process at a time is able to write to the file:
#include <windows.h>   // CreateMutex, GetLastError, CloseHandle

int WriteToFile()
{
    // try to become the single writer by creating a named mutex that is
    // visible to every process on the machine
    HANDLE _mutex = CreateMutex(NULL, TRUE, L"__File_Write__");
    if (GetLastError() == ERROR_ALREADY_EXISTS)
    {
        // another process already owns the mutex: do not write
        CloseHandle(_mutex);
        return -1;
    }
    else
    {
        // write to file

        // keep _mutex open for as long as this process must remain the only
        // writer; call ReleaseMutex(_mutex) and CloseHandle(_mutex) when done
        return 0;
    }
}
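The code above only covers Windows. As a hedged sketch of a Linux-side counterpart (my addition, not part of the original answer), advisory locking with flock(2) gives the same kind of cooperative exclusion; like the named mutex, it only keeps out processes that also take the lock, it does not stop an arbitrary program from writing:
#include <fcntl.h>      /* open */
#include <sys/file.h>   /* flock */
#include <unistd.h>     /* close */

int WriteToFile(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd == -1)
        return -1;

    /* try to take an exclusive lock without blocking */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1)
    {
        /* another process already holds the lock: do not write */
        close(fd);
        return -1;
    }

    /* write to file */

    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}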

Delay in ofstream::open, possibly due to mixing with _iobuf?

I have a C++ program that creates an output file "A" with ofstream. This file is then read by some legacy C code that opens the file with _iobuf. The legacy code then creates its own output file "B" using _iobuf, and this file is then read by the C++ program using ifstream. This sequence is iterated many times, with the same file names for A and B in each iteration.
Occasionally, the C++ program cannot open the output file A for writing, and I must try several times before it succeeds. This occurs nondeterministically, and maybe once in a thousand iterations. Note that the C program never has to wait to open its input or output file, nor does the C++ program ever have to wait to open its input file. This informal observation is based on hundreds of thousands of iterations.
I'm wondering if this has something to do with mixing ofstream and _iobuf in the same program? Both the C++ code and the C code are linked into the same program. And the legacy C code is technically C++ code, but was written in a very C-like style. Is there anything I can do to eliminate this waiting to open the ofstream file? I do not want to change the legacy code if I can possibly avoid it.
Pseudo code (not compiled):
void someObject::someMethod()
{
    for (int count = 0; count < someLimit; ++count)
    {
        newerObject::firstMethod();
        olderObject::secondMethod();
        newerObject::thirdMethod();
    }
}

void newerObject::firstMethod()
{
    // do some processing first
    // then write the results of the processing to a file
    ofstream A;
    A.open("A", ofstream::out); // this sometimes must be tried multiple times
    // write data to file A
    A.close();
}

void olderObject::secondMethod()
{
    FILE* f;
    f = fopen("A", "rt"); // this always works the first time
    // read data from file A
    fclose(f);
    // do some processing
    f = fopen("B", "w");
    // write data to file B
    fclose(f);
}

void newerObject::thirdMethod()
{
    ifstream B;
    B.open("B"); // this always works the first time
    // read data from file B
    B.close();
    // do some processing
}
Currently, as a workaround, I put the ofstream::open in a do-while loop. I would love to get rid of this awkwardness. Thanks in advance for any advice you can give.
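For reference, a minimal sketch of that retry workaround, written in the same pseudo-code style as above (my reconstruction, not the poster's actual code):
void newerObject::firstMethod()
{
    // do some processing first
    ofstream A;
    do
    {
        A.clear();                    // drop the failbit left by a failed attempt
        A.open("A", ofstream::out);   // retry until the open finally succeeds
    } while (!A.is_open());
    // write data to file A
    A.close();
}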
First off, the problem is almost certainly not the use of different methods to access the files: under the hood, the C and C++ I/O functions use the same system I/O facilities. You seem to be using Windows (on other systems files can typically be open multiple times simultaneously), and I don't know much about that system, but I would suspect that the file system hasn't yet been updated to reflect that the file is closed when you try to open it. This may have to do with the "t" open flag; I don't know what that is about.
On UNIXes you can force I/O operations to wait until the actual change has reached the disk. Something like this could help avoid the problem, but has the significant cost that operations become hideously slow. On UNIXes one approach would be to blow away the file system entry the moment the file has been opened successfully (after all, at that point its name isn't needed anymore):
if (FILE* fp = fopen("file", "r")) {
    remove("file");
    // do processing
}
However, if I recall correctly, on Windows you can neither remove the open file nor rename it. Personally, in solving the problem I would proceed as follows:
1. Determine under which situations the file can't be opened, e.g. by keeping the file open and trying to open it again. This is mainly intended to create a setup where the problem is reproducible, so you can verify later that you indeed found a solution.
2. Once I found a way to reproduce the problem, I would probably have a better idea of the actual root cause, and possibly googling would help. In any case, this is the point where researching the root cause comes in.
3. Once the cause is understood, it is hopefully easy to devise a solution. If not, retrying the open until it is successful may very well be the right solution.

fopen problem - too many open files

I have a multithreaded application running on Windows XP. At a certain stage one of the threads fails to open an existing file using the fopen function. _get_errno returns EMFILE, which means "Too many open files. No more file descriptors are available." FOPEN_MAX for my platform is 20; _getmaxstdio returns 512. I checked this with WinDbg and I see that about 100 files are open:
788 Handles
Type Count
Event 201
Section 12
File 101
Port 3
Directory 3
Mutant 32
WindowStation 2
Semaphore 351
Key 12
Thread 63
Desktop 1
IoCompletion 6
KeyedEvent 1
What is the reason that fopen fails?
EDIT:
I wrote a simple single-threaded test application. This app can open 510 files. I don't understand why it can open more files than the multithreaded app. Could it be because of file handle leaks?
#include <cstdio>
#include <cassert>
#include <cerrno>

int main()
{
    int counter(0);
    while (true)
    {
        // open a new file on every iteration and deliberately never close it
        char buffer[256] = {0};
        sprintf(buffer, "C:\\temp\\abc\\abc%d.txt", counter++);
        FILE* hFile = fopen(buffer, "wb+");
        if (0 == hFile)
        {
            // check the error code and the CRT stream limit, then stop
            int err(0);
            errno_t ret = _get_errno(&err);
            assert(0 == ret);
            int maxAllowed = _getmaxstdio();
            printf("failed after %d files, errno %d, _getmaxstdio() %d\n",
                   counter - 1, err, maxAllowed);
            break;
        }
    }
    return 0;
}
I guess this is a limitation of your operating system. It can depend on many things: the way the file descriptors are represented, the memory they consume, and so on.
And I suppose there isn't much you can do about it, although perhaps there is some parameter to tweak that limit.
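One such knob on the Windows CRT (a hedged example of mine, not something this answer originally named) is _setmaxstdio, which raises the number of FILE* streams that may be open at once; it changes the runtime limit reported by _getmaxstdio, not the FOPEN_MAX constant:
#include <stdio.h>

int main()
{
    printf("old stdio limit: %d\n", _getmaxstdio());

    // request a higher limit for simultaneously open FILE* streams;
    // _setmaxstdio returns -1 if the requested value is out of range
    if (_setmaxstdio(2048) == -1)
        printf("could not raise the limit\n");

    printf("new stdio limit: %d\n", _getmaxstdio());
    return 0;
}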
The real question is: do you really need to open that many files simultaneously? Even if you have 100+ threads trying to read 100+ different files, they probably won't be able to read them all at the same time, and you probably won't get better results than with, say, 50 threads.
It's difficult to be more precise since we don't know what you are trying to achieve.
In Win32, all the CRT functions ultimately end up using the Win32 API underneath, so in this case fopen is most probably using CreateFile/OpenFile. Now, CreateFile/OpenFile are not meant only for files (they also cover directories, communication ports, pipes, mail slots, drive volumes, etc.), so in a real application your maximum number of open files may vary depending on how many of these resources are in use. Since you have not described much about the application, this is my first guess. If time permits, go through this: http://blogs.technet.com/b/markrussinovich/archive/2009/09/29/3283844.aspx