firmwareDownload() prints the remaining length of the flash memory while it runs, and the whole download takes about 5 minutes.
I want to report the progress while the firmware download is still running, so I redirected stdout to a file and tried to read the progress back from that file, but it doesn't work.
It gets stuck at "check_process: Start". Is that because the file is still owned by another thread?
Is there any way to read the file and get the progress while it is still being written by the other thread?
#include <cstdio>
#include <iostream>
#include <thread>
#include <unistd.h>
using namespace std;

#define BUFFER_SIZE 1024

int counter = 0;

void fw_download()
{
    // Save position of current standard output
    fpos_t pos;
    fgetpos(stdout, &pos);
    int fd = dup(fileno(stdout));

    // Redirect stdout to the log file for the duration of the download
    freopen("firmwareDownload_log.txt", "w", stdout);
    camInstall->firmwareDownload();   // camInstall is the camera SDK handle from the surrounding application

    // Flush stdout so any buffered messages are delivered
    fflush(stdout);

    // Close the file and restore standard output - which should be the terminal
    dup2(fd, fileno(stdout));
    close(fd);
}

int main()
{
    std::thread t1(fw_download);
    while (counter <= 10)   // wait until the download completes or times out
    {
        cout << "check_process: Start" << endl;
        sleep(5);
        counter += 1;

        FILE *fp;
        char buffer[BUFFER_SIZE];
        fp = popen("cat firmwareDownload_log.txt | tail -n 2", "r");
        if (fp != NULL)
        {
            while (fgets(buffer, BUFFER_SIZE, fp) != NULL)
                printf("%s", buffer);
            pclose(fp);
        }
    }
    t1.join();
}
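For reference, on POSIX systems nothing stops one thread from reading a file that another thread is writing; what usually bites is stdio buffering. Below is a minimal, self-contained sketch of the tail-the-log idea with the camera SDK call replaced by a fake writer (the file name and the "progress: N%" format are invented for illustration): the writer flushes after every line, and the reader clears the stream's EOF state before each pass so it can pick up newly appended lines.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

void writer()
{
    std::ofstream log("firmwareDownload_log.txt");
    for (int pct = 0; pct <= 100; pct += 10) {
        log << "progress: " << pct << "%\n";
        log.flush();                      // without a flush the reader sees nothing
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}

int main()
{
    std::thread t1(writer);
    std::ifstream log;
    std::string line, last;
    for (int i = 0; i < 20; ++i) {
        if (!log.is_open())
            log.open("firmwareDownload_log.txt");
        while (std::getline(log, line))   // read whatever has been written so far
            last = line;
        log.clear();                      // clear EOF so the next pass can read more
        std::cout << "check_process: " << last << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
    }
    t1.join();
}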
I'm trying to communicate two processes via Named pipes but I'm having problems with the reader process. The idea is that the consumer reads several messages that the producer writes into the pipe in random periods. Since these periods can be long, the consumer should block in the read operation until the producer has written the message into the pipe.
This is the code that I have now.
FILE* result_pipe_stream;
...
result_pipe_stream = fopen(result_pipe , "r");
...
string read_result_from_pipe() {
    if (result_pipe_stream == NULL) {
        return "";   // a std::string cannot be constructed from NULL
    }
    char buf[BUFSIZ];
    std::stringstream oss;
    while (1) {
        if (fgets(buf, BUFSIZ, result_pipe_stream) != NULL) {
            int buflen = strlen(buf);
            if (buflen > 0) {
                if (buf[buflen - 1] == '\n') {
                    buf[buflen - 1] = '\0';
                    oss << buf;
                    return oss.str();
                } else {
                    oss << buf;
                }
            }
        } else {
            // Needed so fgets does not keep returning NULL after the writer
            // closes the pipe for the first time.
            clearerr(result_pipe_stream);
        }
    }
}
The first time the consumer reads from the pipe, the method works properly: it waits until the writer sends a message and returns the value. From that point on, however, whenever the method is called again, fgets returns NULL until the writer adds a new message.
What is missing for the consumer to block after the second read?
The producer is a Java application that writes to the pipe like this:
OutputStream output = null;   // declared outside the try so the finally block can see it
try {
    output = new FileOutputStream(writePipe, true);
    output.write(taskCMD.getBytes());
    output.flush();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (output != null) {
        output.close();
    }
}
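One thing worth knowing (a sketch, not your exact code): on Linux a common workaround is to open the FIFO for reading and writing, so the stream never reaches end-of-file when the Java side closes its end of the pipe; fgets then simply blocks until the next message instead of returning NULL. The FIFO path is assumed to exist already.
#include <cstdio>
#include <string>

FILE* open_result_pipe(const char* path)
{
    // "r+" also keeps a write descriptor on the FIFO, so fgets()
    // blocks waiting for the next message instead of hitting EOF.
    return fopen(path, "r+");
}

std::string read_line_blocking(FILE* stream)
{
    char buf[BUFSIZ];
    std::string line;
    while (std::fgets(buf, BUFSIZ, stream) != NULL) {
        line += buf;
        if (!line.empty() && line.back() == '\n') {
            line.pop_back();              // strip the trailing newline
            return line;
        }
    }
    return line;                          // only reached on a real error
}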
I'm coding a mock shell and I'm currently working on coding pipes with dup2. Here is my code:
bool Pipe::execute() {
    int fds[2];   //will hold file descriptors
    pipe(fds);
    int status;
    int errorno;
    pid_t child;

    child = fork();
    if (-1 == child) {
        perror("fork failed");
    }
    if (child == 0) {
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        this->component->lchild->execute();
        _exit(1);
    }
    else if (child > 0) {
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        this->component->rchild->execute();
        waitpid(child, &status, 0);
        if ( WIFEXITED(status) ) {
            //printf("child exited with = %d\n",WEXITSTATUS(status));
            if ( WEXITSTATUS(status) == 0) {
                cout << "pipe parent finishing" << endl;
                return true;
            }
        }
        return false;
    }
}
The this->component->lchild->execute(); and this->component->rchild->execute(); calls run execvp on the corresponding commands. I've confirmed that each of these returns by printing a statement in the parent process. However, in my Pipe::execute(), it seems like the child process does not finish, because the cout statement in the parent process is never printed and I get a segmentation fault after the prompt ($) is initialized. Here is the main function that initializes the prompt after each execution:
int main()
{
    Prompt new_prompt;
    while (1) {
        new_prompt.initialize();
    }
    return 0;
}
and here is the initialize() function:
void Prompt::initialize()
{
    cout << "$ ";
    std::getline(std::cin, input);
    parse(input);
    run();
    input.clear();
    tokens.clear();
    fflush(stdout);
    fflush(stdin);
    return;
}
It seems like ls | sort runs fine, but when the prompt is initialized again, a blank line is read into input by getline. I've tried using cin.clear(), cin.ignore(), and the fflush and clear() lines above. This blank string is "parsed" and then the run() function is called, which tries to dereference a null pointer. Any ideas on why and where this blank line is being fed into getline, and how I can resolve it? Thank you!
UPDATE: The parent process in the pipe is now finishing. I've also noticed that I'm getting seg faults for my I/O redirection classes (> and <). I think I'm not flushing the stream or closing the file descriptors correctly...
Here is my execute() function for lchild and rchild:
bool Command::execute() {
    int status;
    int errorno;
    pid_t child;

    vector<char *> argv;
    for (unsigned i = 0; i < this->command.size(); ++i) {
        char * cstr = const_cast<char*>(this->command.at(i).c_str());
        argv.push_back(cstr);
    }
    argv.push_back(NULL);

    child = fork();
    if (-1 == child) {
        perror("fork failed");
    }
    if (child == 0) {
        errorno = execvp(*argv.data(), argv.data());
        _exit(1);
    } else if (child > 0) {
        waitpid(child, &status, 0);
        if ( WIFEXITED(status) ) {
            //printf("child exited with = %d\n",WEXITSTATUS(status));
            if ( WEXITSTATUS(status) == 0) {
                //cout << "command parent finishing" << endl;
                return true;
            }
        }
        return false;
    }
}
Here is the bug:
else if (child > 0) {
    dup2(fds[0], STDIN_FILENO);
    close(fds[0]);
    close(fds[1]);
    this->component->rchild->execute();
You are redirecting stdin for the whole parent process, not just for the right child. After this, the parent's stdin is the read end of the pipe, the same as the right child's.
After that
std::getline(std::cin, input);
tries to read the output of the left child rather than the original stdin. By that point the left child has finished and that end of the pipe has been closed, so reading from stdin fails and leaves input unchanged in its original (empty) state.
Edit: Your design has a minor flaw and a major one. The minor flaw is that you don't need the extra fork in Pipe::execute. The major flaw is that the child should be the one that redirects the streams and closes the descriptors.
Simply pass the input and output descriptors through fork() and let the child dup2 them. Don't forget to have it also close the unrelated pipe ends. If you don't, the left child will finish but its output pipe will live on in other processes; as long as other copies of that descriptor remain open, the right child will never get EOF while reading its end of the pipe, and will block forever.
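A minimal, self-contained sketch of that structure (with plain execvp calls standing in for lchild->execute() and rchild->execute(), which are assumptions here): each child sets up its own end of the pipe and closes both descriptors, while the parent only closes its copies and waits, so its own stdin and stdout stay untouched.
#include <sys/wait.h>
#include <unistd.h>

bool run_pipe(char* const left[], char* const right[])
{
    int fds[2];
    if (pipe(fds) == -1) return false;

    pid_t lhs = fork();
    if (lhs == 0) {                      // left child: write end only
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execvp(left[0], left);
        _exit(1);
    }
    pid_t rhs = fork();
    if (rhs == 0) {                      // right child: read end only
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        execvp(right[0], right);
        _exit(1);
    }
    close(fds[0]);                       // parent must close both ends,
    close(fds[1]);                       // or the right child never sees EOF
    int status = 0;
    waitpid(lhs, &status, 0);
    waitpid(rhs, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
Called with {"ls", NULL}- and {"sort", NULL}-style argument vectors, the parent keeps its original stdin, so the next getline at the prompt reads from the terminal again.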
I'm creating an FTP client.
I'm getting a gif from the server, but after that the gif is corrupted.
When I change the file extension to look at the diff, I see that the
CR/LF characters are gone.
How could this be? I made sure to use image mode.
Here's my read code in the TCP socket class:
string TCPSocket::long_read()
{
    pollfd ufds;
    ufds.fd = sd;
    ufds.events = POLLIN;
    ufds.revents = 0;

    ssize_t bytesRead = 0;
    string result;
    char* buf = new char[LONGBUFLEN];
    do {
        bzero(buf, LONGBUFLEN);
        bytesRead = ::read(sd, buf, LONGBUFLEN);
        if (bytesRead == 0) {
            break;
        }
        if (bytesRead > 0) {
            result = result + string(buf, bytesRead);
        }
    } while (poll(&ufds, 1, 1000) > 0);

    delete[] buf;   // avoid leaking the buffer on every call
    return result;
}
Here's my GET code in main.cpp:
else if (command == command::GET) {
    string filename;
    cin >> filename;
    string dataHost;
    int dataPort;
    if (enterPassiveMode(dataHost, dataPort)) {
        dataSocket = new TCPSocket(dataHost.c_str(), dataPort);
        if (fork() == 0) {
            string result = dataSocket->long_read();
            size_t length = result.size();
            char* resultArr = new char[length];
            memcpy(resultArr, result.data(), length);
            // mode_t mode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;
            FILE* file = fopen(filename.c_str(), "w+b");
            if (file) {
                fwrite(resultArr, length, 1, file);
                fclose(file);
            }
            else {
                cout << "open failed";
            }
            break;
        }
        else {
            writeAndImmediateRead(rfc959::TYPE_I);
            controlSocket->write(rfc959::RETRIVE(filename));
            string result = controlSocket->read();
            cout << result;
            int reply = Parser::firstDigit(result);
            // I'll remove incomplete local file if request fails
            if (reply != rfc959::POSITIVE_PRELIMINARY_REPLY) {
                remove(filename.c_str());
                continue;
            }
            wait(NULL);
            cout << controlSocket->long_read();
        }
    }
}
EDIT
I did make sure to use binary mode, and when I transferred a text file (though a smaller one) it didn't have this problem.
EDIT 2
The Wireshark capture shows the request TYPE I and the response Opening BINARY mode.
By default, FTP servers and clients perform data transfers in "ASCII mode", which means that any CRLF sequence is translated on the fly to the host's line ending (e.g. just a bare LF on Unix machines). This behavior is mandated by RFC 959; see Section 3.1.1.1.
To transfer your data as binary, and avoid the ASCII mode translation, your FTP client will want to send the TYPE command first, e.g.:
TYPE I
Your .gif file should then be transferred as is, with no replacements/transformations on any CRLF sequences.
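In terms of the snippet above, that just means issuing TYPE I on the control connection before the RETR. A sketch (controlSocket->write() and read() are assumed to send a raw command line and return the server's reply, as in your code, and picture.gif is a made-up name):
controlSocket->write("TYPE I\r\n");            // expect a "200 Type set to I" reply
cout << controlSocket->read();
controlSocket->write("RETR picture.gif\r\n");  // the file then arrives unmodified on the data connection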
Hope this helps!
I'm trying to write a program that takes two text files and copies the text from one file to the other.
The function seems to work, but the file I write into ends up as something that is no longer a text file.
Please help!
int main(int argc, char *argv[]) {
    printf("Hello World!\n");
    char* file1 = argv[1];
    char* file2 = argv[2];
    char buffer1[SIZE+1];
    //char buffer2[SIZE+1];
    int fd1, fd2;
    int run = 1;
    int run2;

    fd1 = open(file1, O_RDONLY);
    if (fd1 < 0) {
        perror("after open "); // checks if the file was open ok
        exit(-1);
    }
    fd2 = open(file2, O_RDWR);
    if (fd2 < 0) {
        perror("after open "); // checks if the file was open ok
        exit(-1);
    }

    while (run != 0) {
        run = read(fd1, buffer1, SIZE);
        run2 = write(fd2, buffer1, SIZE);
        printf("run 2: %d", run2);
    }

    close(fd1);
    close(fd2);
    return 1;
}
Probably because
fd2 = open(file2, O_RDONLY);
opens the second file read-only, so nothing can be written to fd2.
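If the destination file might not exist yet, a writable open would look something like this (a sketch; 0644 is an assumed mode):
// Open the destination for writing, creating or truncating it as needed.
fd2 = open(file2, O_WRONLY | O_CREAT | O_TRUNC, 0644);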
Your program doesn't handle the case where the quantity of data read is less than the buffer size:
while(run != 0){
run = read(fd1, buffer1, SIZE);
run2 = write(fd2, buffer1, SIZE);
printf("run 2: %d", run2);
}
If the amount read is less than SIZE, you will be writing garbage to the second file.
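A corrected version of that loop might look like this (a sketch): write only the bytes that read() actually returned, and stop on 0 (end of file) or a negative return (error).
ssize_t run;
while ((run = read(fd1, buffer1, SIZE)) > 0) {
    if (write(fd2, buffer1, run) != run) {   // write exactly what was read
        perror("write");
        break;
    }
}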
Why are you using low level file I/O?
This can be done using C language functions, at a high level:
FILE * input;
FILE * output;
char buffer1[SIZE];

input  = fopen("input_file.bin", "rb");
output = fopen("output_file.bin", "wb");

size_t quantity;
// With an element size of 1, fread returns the number of bytes actually read
while ((quantity = fread(buffer1, 1, SIZE, input)) > 0)
{
    fwrite(buffer1, 1, quantity, output);   // write exactly what was read
}
In the above example, the number of bytes written is the number of bytes read. If the quantity of bytes read is less than SIZE, it will write exactly the quantity of bytes read to the output file, no more, no less.
I am trying to write to a pipe using C++. The following code gets called in an extra thread:
#include <cstdio>
#include <ctime>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

void writeToPipe()
{
    int outfifo;
    char buf[100];
    char outfile[] = "out";

    mknod(outfile, S_IFIFO | 0666, 0);
    if ((outfifo = open(outfile, O_WRONLY)) < 0) {
        perror("Opening output fifo failed");
        return;                       // the function returns void
    }

    int currentTimestamp = (int)time(0);
    int bufLen = sprintf(buf, "Time is %d.", currentTimestamp);
    write(outfifo, buf, bufLen);
}
The thread is called in main using:
thread writeThread(writeToPipe);
writeThread.detach();
If the pipe is not opened by another process, the C++ program just quits without an error. I don't know how to check if the pipe is opened.
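For what it's worth, open() on a FIFO with O_WRONLY normally blocks until some other process opens it for reading; because the thread is detached, main can return and terminate the whole process while the writer is still sitting inside open(), which looks like a silent exit (joining the thread instead of detaching it avoids that). One way to check whether a reader is present is a non-blocking open, sketched below with minimal error handling.
#include <cerrno>
#include <cstdio>
#include <fcntl.h>
#include <sys/stat.h>

// Sketch: with O_NONBLOCK, open() fails with ENXIO when no process has the
// FIFO open for reading, instead of blocking indefinitely.
int open_fifo_for_writing(const char* path)
{
    mkfifo(path, 0666);                         // harmless if it already exists
    int fd = open(path, O_WRONLY | O_NONBLOCK);
    if (fd < 0 && errno == ENXIO) {
        std::fprintf(stderr, "no reader has opened %s yet\n", path);
        return -1;
    }
    return fd;
}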