I have a problem with execvp() in C++. Here is my code:
char * argv[]={};
command_counter = 0;
char line[255];
fgets(line,255,stdin);
argv[0] = strtok(line, TOKEN);//separate the command with TOKEN
while (arg = strtok(NULL, TOKEN)) {
++command_counter;
cout << command_counter << endl;
argv[command_counter] = arg;
cout << argv[command_counter] << endl;
}
argv[++command_counter] = (char *) NULL;
execvp(argv[0],argv);
But multiple arguments don't work when I call execvp() like this.
For example, for ls -a -l only ls -a is executed.
What's wrong with this program?
Edit: with your help the problem was solved by changing the declaration to char * argv[128].
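For reference, here is a minimal sketch of that fix (a sketch only: it caps the argument count at 128, assumes TOKEN is defined as in the answer below, assumes the usual headers (<cstring>, <cstdio>, <unistd.h>) are already included, and strips the newline that fgets leaves in the buffer):
#define TOKEN " "                      // as in the answer below

char *argv[128];                       // room for the command, its arguments and the terminating NULL
int command_counter = 0;
char line[255];
char *arg;

fgets(line, sizeof line, stdin);
line[strcspn(line, "\n")] = '\0';      // drop the trailing newline fgets keeps
argv[0] = strtok(line, TOKEN);         // separate the command with TOKEN
while (command_counter < 126 && (arg = strtok(NULL, TOKEN)) != NULL) {
    argv[++command_counter] = arg;
}
argv[command_counter + 1] = NULL;      // execvp() expects a NULL-terminated array
execvp(argv[0], argv);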
The first thing that's wrong with it is that you're creating a zero-sized array to store the arguments:
char * argv[]={};
then populating it.
That's a big undefined behaviour red flag right there.
A quick and dirty fix would be ensuring you have some space there:
char * argv[1000];
but, to be honest, that has its own problems if you ever get to the point where you may have more than a thousand arguments.
Bottom line is, you should ensure there's enough space in the array for storing your arguments.
One way of doing this is with dynamic memory allocation, which expands the array of arguments as needed, so as to ensure there's always enough space:
#include <iostream>
#include <cstring>
#include <cstdlib>
#include <cstdio>
#include <unistd.h>
using namespace std;
#define TOKEN " "
static char **addArg (char **argv, size_t *pSz, size_t *pUsed, char *str) {
// Make sure enough space for another one.
if (*pUsed == *pSz) {
*pSz = *pSz + 25;
argv = (char **) realloc (argv, *pSz * sizeof (char*));
if (argv == 0) {
cerr << "Out of memory\n";
exit (1);
}
}
// Add it and return (possibly new) array.
argv[(*pUsed)++] = (str == 0) ? 0 : strdup (str);
return argv;
}
int main (void) {
// Initial size, used and array.
size_t sz = 0, used = 0;
char **argv = 0;
// Temporary pointer and command.
char *str, line[] = "ls -a -l";
// Add the command itself.
argv = addArg (argv, &sz, &used, strtok (line, TOKEN));
// Add each argument in turn, then the terminator.
while ((str = strtok (0, TOKEN)) != 0)
argv = addArg (argv, &sz, &used, str);
argv = addArg (argv, &sz, &used, 0);
// Then execute it.
execvp (argv[0], argv);
// Shouldn't reach here.
return 0;
}
Related
There is the CommandLineToArgvW() function, that is, CommandLineToArgv + W, where the W means wide char (wchar_t in C/C++). But a CommandLineToArgvA() counterpart, analogous to the GetCommandLineW()/GetCommandLineA() pair, apparently does not exist.
char:
int argc;
char **argv = CommandLineToArgvA(GetCommandLineA(), &argc);
wide char:
int argc;
wchar_t **wargv = CommandLineToArgvW(GetCommandLineW(), &argc);
Well, I searched every corner of the Internet for the term CommandLineToArgvA(), and the closest I found was that function in Wine on Linux... I want to know: does this function exist, and if so, is it normal for it to be "hidden"? Or does it really not exist?
edit: The question was whether the CommandLineToArgvA function exists in the Windows API; it does not (comment by Remy Lebeau). The answer I accepted is the one that best explains how to use the existing CommandLineToArgvW function and convert the wchar_t strings to char, which provides the same result CommandLineToArgvA would provide if it existed.
I don’t think you should try parsing your own command-line string. Windows does it one way. Trying to write duplicate code to do the same thing is the Wrong Thing™ to do.
Just get the command line, then use the Windows facilities to convert it to UTF-8.
#include <stdlib.h>
#include <windows.h>
#include <shellapi.h>
#pragma comment(lib, "Shell32")
void get_command_line_args( int * argc, char *** argv )
{
// Get the command line arguments as wchar_t strings
wchar_t ** wargv = CommandLineToArgvW( GetCommandLineW(), argc );
if (!wargv) { *argc = 0; *argv = NULL; return; }
// Count the number of bytes necessary to store the UTF-8 versions of those strings
int n = 0;
for (int i = 0; i < *argc; i++)
n += WideCharToMultiByte( CP_UTF8, 0, wargv[i], -1, NULL, 0, NULL, NULL ) + 1;
// Allocate the argv[] array + all the UTF-8 strings
*argv = malloc( (*argc + 1) * sizeof(char *) + n );
if (!*argv) { LocalFree( wargv ); *argc = 0; return; }
// Convert all wargv[] --> argv[]
char * arg = (char *)&((*argv)[*argc + 1]);
for (int i = 0; i < *argc; i++)
{
(*argv)[i] = arg;
arg += WideCharToMultiByte( CP_UTF8, 0, wargv[i], -1, arg, n, NULL, NULL ) + 1;
}
(*argv)[*argc] = NULL;
LocalFree( wargv ); // the array from CommandLineToArgvW must be released with a single LocalFree() call
}
This obtains an argv just like the one main() gets: writable, with a final NULL element and all.
Interface is easy enough. Don’t forget to free() the result when you are done with it. Example usage:
#include <stdio.h>
#include <stdlib.h>
void f(void)
{
int argc;
char ** argv;
get_command_line_args( &argc, &argv );
for (int n = 0; n < argc; n++)
printf( " %d : %s\n", n, argv[n] );
free( argv );
}
int main(void)
{
f();
}
Enjoy!
I'm trying to write a program that takes several arguments at runtime to append text to a file.
The program produces a segmentation fault at runtime. Here is the code:
int main(int argc, char* argv[])
{
//error checking
if (argc < 1 || argc > 4) {
cout << "Usage: -c(optional - clear file contents) <Filename>, message to write" << endl;
exit(EXIT_FAILURE);
}
char* filename[64];
char* message[256];
//set variables to command arguments depending on whether the -c option is specified
if (argc == 4) {
strcpy(*filename, argv[2]);
strcpy(*message, argv[3]);
} else {
strcpy(*filename, argv[1]);
strcpy(*message, argv[2]);
}
int fd; //file descriptor
fd = open(*filename, O_RDWR | O_CREAT, 00000); //open file if it doesn't exist then create one
fchmod(fd, 00000);
return 0;
}
I am still quite a beginner and I'm having immense trouble understanding C strings. What's the difference between char*, char[] and char* []?
UPDATE:
The code still throws a segmentation fault; here is my revised code:
using std::cout;
using std::endl;
int main(int argc, char* argv[])
{
//error checking
if (argc < 1 || argc > 4) {
cout << "Usage: -c(optional - clear file contents) <Filename>, message to write" << endl;
exit(EXIT_FAILURE);
}
char filename[64];
char message[256];
//set variables to command arguments depending on whether the -c option is specified
if (argc == 4)
{
strncpy(filename, argv[2], 64);
strncpy(message, argv[3], 256);
}
else
{
strncpy(filename, argv[1], 64);
strncpy(message, argv[2], 256);
}
int fd; //file descriptor
fd = open(filename, O_RDWR | O_CREAT, 00000); //open file if it doesn't exist then create one
fchmod(fd, 00000);
return 0;
}
char* filename[64] creates an array of 64 pointers. You intend to create space for a string with 64 characters - this would be char filename[64]. Because you only allocated space for pointers, and never made the pointers point to any memory, you get a seg fault.
Solution: use char filename[64];
This creates a block of 64 bytes for your string; the value filename points to the start of this block and can be used in a copy operation
strcpy(filename, argv[2]);
I would strongly recommend using the "copy no more than n characters" function - this prevents a really long argument from causing buffer overflow. Thus
strncpy(filename, argv[2], 64);
would be safer. Even better
strncpy(filename, argv[2], 63);
filename[63] = '\0';
This guarantees that the copied string is null terminated.
You have the same problem with message. I don't think you need the code repeating...
Let me know if you need more info.
UPDATE
Today I learnt about the existence of strlcpy - see this answer. It will take care of including the NUL string terminator even when the original string was longer than the allocated space. See this for a more complete discussion, including the reasons why this function is not available on all compilers (which is of course a major drawback if you are trying to write portable code).
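For completeness, a minimal sketch using strlcpy on the same arguments (this assumes your C library provides strlcpy: it is standard on the BSDs and macOS, and glibc only added it in version 2.38):
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    char filename[64];
    if (argc < 3) return 1;
    // strlcpy always NUL-terminates and returns the length of the source string,
    // so a result >= sizeof filename tells you the copy was truncated.
    if (strlcpy(filename, argv[2], sizeof filename) >= sizeof filename) {
        fprintf(stderr, "filename argument too long, truncated\n");
    }
    printf("%s\n", filename);
    return 0;
}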
Since you have tagged this as C++ (and no one has yet mentioned it):
The argv entries are already C-style strings, so there is no need to copy them into other raw buffers (unless you just want to waste space). If you really want to copy one into something, a std::string object would be a better approach:
int main(int argc, char* argv[])
{
// assuming your conditional checks are already done here ...
std::string filename = argv[1];
std::string message = argv[2];
// do something
return 0;
}
Your variables filename and message are char pointer arrays, not C-style strings (which should be null-terminated char arrays). So you need to declare their type as:
char filename[64];
char message[256];
and use strcpy as:
strcpy(filename, argv[2]);
strcpy(message, argv[3]);
the call to open is similar:
fd = open(filename, O_RDWR | O_CREAT, 00000);
>>> I am still quite a beginner and I'm having immense trouble understanding C strings. What's the difference between char*, char[] and char* []?
Pointers are hard to understand the first time you encounter them.
char is a single byte in memory
char* is a pointer to memory (could be a single byte or an array of characters)
char[] is an array of characters; it decays to a char* when passed to a function
char*[] is an array of pointers to char
When you have a variable filename, *filename dereferences that variable, which means that it is not the pointer, but the thing pointed at.
*filename is the first element of that array, an uninitialized char*, so passing it to strcpy is where your segfault occurs
*message is likewise an uninitialized char*, which is where your next segfault would occur
open(*filename, ...) again passes an uninitialized pointer, which is not a valid argument for open
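To make those four declarations concrete, here is a small self-contained illustration (the variable names are made up for the example):
#include <cstring>
#include <cstdio>

int main() {
    char  c = 'x';                      // char: a single byte
    char *p = &c;                       // char*: a pointer to a char (here, to c)
    char  buf[64] = "hello";            // char[]: an array of 64 chars, usable as a C string
    char *words[] = { buf, p, NULL };   // char*[]: an array of pointers to char

    strcpy(buf, "world");               // OK: buf names 64 bytes of writable storage
    printf("%s %s\n", words[0], buf);   // buf decays to a char* when passed to a function
    return 0;
}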
You mostly had the program right. The problem was your lack of clarity about how to use a pointer. Here is your code, revised a bit to work. I commented out the broken parts so you could compare broken to fixed.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h> //for write()
int
main(int argc, char* argv[])
{
//char* filename[64];
//char* message[256];
char filename[64]; //declare filename as a 64-byte buffer
char message[256]; //declare message as a 256-byte buffer
int fd; //file descriptor
printf("argc %d\n",argc);
//error checking
if ( (argc < 3) || (argc > 4) ) //need at least a filename and a message
{
//cout << "Usage: -c(optional - clear file contents) <Filename>, message to write" << endl;
printf("Usage: -c(optional - clear file contents) <Filename>, message to write\n");
exit(EXIT_FAILURE);
}
int argi=1;
if( !strcmp(argv[argi],"-c") ) { argi++; } //clear
//set variables to command arguments depending if -c option is specificed
if (argc == 4)
{
//strcpy(*filename, argv[argi++]);
//strcpy(*message, argv[argi++]);
strcpy(filename, argv[argi++]);
strcpy(message, argv[argi++]);
}
else
{
//strcpy(*filename, argv[argi++]);
//strcpy(*message, argv[argi++]);
strcpy(filename, argv[argi++]);
strcpy(message, argv[argi++]);
}
//fd = open(*filename, O_RDWR | O_CREAT, 00000); //open file if it doesn't exist then create one
if( (fd = open(filename, O_RDWR | O_CREAT, 00000)) < 0 ) //open file if it doesn't exist then create one; open() returns -1 on failure (0 is a valid descriptor)
{
//always check for failure to open
//and emit error if file open fails
exit(EXIT_FAILURE);
}
//fchmod(fd, 00000);
write(fd,message,strlen(message));
return 0;
}
I am still quite a beginner and I'm having immense trouble understanding C strings. What's the difference between char*, char[] and char* []?
The short answer to your explicit question is that char* and char[] can both be used as C-strings. char* [] on the other hand is an array of C-strings.
I am currently doing some testing with a new addition to the ICU dictionary-based break iterator.
I have code that allows me to test the word-breaking on a text document, but when the text document is too large it gives the error: bash: ./a.out: Argument list too long
I am not sure how to edit the code to break up the argument list when it gets too long so that a file of any size can be run through the code. The original code author is quite busy; would someone be willing to help out?
I tried removing the printing of what is being examined to see if that would help, but I still get the error on large files (printing what is being examined isn't necessary - I just need the result).
If the code could be modified to read the source text file line by line and export the results line by line to another text file (ending up with all the lines when it is done), that would be perfect.
The code is as follows:
/*
Written by George Rhoten to test how word segmentation works.
Code inspired by the break ICU sample.
Here is an example to run this code under Cygwin.
PATH=$PATH:icu-test/source/lib ./a.exe "`cat input.txt`" > output.txt
Encode input.txt as UTF-8.
The output text is UTF-8.
*/
#include <stdio.h>
#include <unicode/brkiter.h>
#include <unicode/ucnv.h>
#define ZW_SPACE "\xE2\x80\x8B"
void printUnicodeString(const UnicodeString &s) {
int32_t len = s.length() * U8_MAX_LENGTH + 1;
char *charBuf = new char[len];
len = s.extract(0, s.length(), charBuf, len, NULL);
charBuf[len] = 0;
printf("%s", charBuf);
delete [] charBuf; // charBuf was allocated with new[], so use delete[]
}
/* Creating and using text boundaries */
int main(int argc, char **argv)
{
ucnv_setDefaultName("UTF-8");
UnicodeString stringToExamine("Aaa bbb ccc. Ddd eee fff.");
printf("Examining: ");
if (argc > 1) {
// Override the default charset.
stringToExamine = UnicodeString(argv[1]);
if (stringToExamine.charAt(0) == 0xFEFF) {
// Remove the BOM
stringToExamine = UnicodeString(stringToExamine, 1);
}
}
printUnicodeString(stringToExamine);
puts("");
//print each sentence in forward and reverse order
UErrorCode status = U_ZERO_ERROR;
BreakIterator* boundary = BreakIterator::createWordInstance(NULL, status);
if (U_FAILURE(status)) {
printf("Failed to create sentence break iterator. status = %s",
u_errorName(status));
exit(1);
}
printf("Result: ");
//print each word in order
boundary->setText(stringToExamine);
int32_t start = boundary->first();
int32_t end = boundary->next();
while (end != BreakIterator::DONE) {
if (start != 0) {
printf(ZW_SPACE);
}
printUnicodeString(UnicodeString(stringToExamine, start, end-start));
start = end;
end = boundary->next();
}
delete boundary;
return 0;
}
Thanks so much!
-Nathan
The Argument list too long error message comes from the bash shell and happens before your code even starts executing.
The only code you could change to eliminate this problem is the shell's (or, more precisely, the kernel's, since the limit is the kernel's ARG_MAX for execve), and even then you're always going to run into some limit. If you raise it from 2048 files on the command line to 10,000, then some day you'll need to process 10,001 files ;-)
There are numerous solutions to managing 'too big' argument lists.
The standardized solution is the xargs utility.
find / -print | xargs echo
is an unhelpful but working example.
See How to use "xargs" properly when argument list is too long for more info.
Even xargs has problems, because file names can contain spaces, newline characters, and other unfriendly stuff; with GNU tools you can sidestep that with find ... -print0 | xargs -0.
I hope this helps.
The code below reads the content of the file whose name is given as the first command-line parameter and places it in a std::string buffer. Then, instead of constructing the UnicodeString from argv[1], use that buffer instead.
#include<iostream>
#include<fstream>
using namespace std;
int main(int argc, char **argv)
{
std::string buffer;
if(argc > 1) {
std::ifstream t;
t.open(argv[1]);
std::string line;
while(std::getline(t, line)){
buffer += line + '\n';
}
}
cout << buffer;
return 0;
}
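To feed that buffer into the original program, one option (a sketch, not the original author's code) is to build the UnicodeString from it with UnicodeString::fromUTF8 instead of from argv[1]:
// Inside main() of the original program, after reading the file into 'buffer':
stringToExamine = UnicodeString::fromUTF8(buffer);   // buffer holds the UTF-8 file contents
if (stringToExamine.charAt(0) == 0xFEFF) {
    // Remove the BOM, as the original code does
    stringToExamine = UnicodeString(stringToExamine, 1);
}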
Update:
The UnicodeString constructor used in the original code takes a char*. The function GetFileIntoCharPointer below provides one.
Note that only the most rudimentary error checking is implemented below!
#include<iostream>
#include<fstream>
using namespace std;
char * GetFileIntoCharPointer(char *pFile, long &lRet)
{
FILE * fp = fopen(pFile,"rb");
if (fp == NULL) return 0;
fseek(fp, 0, SEEK_END);
long size = ftell(fp);
fseek(fp, 0, SEEK_SET);
char *pData = new char[size + 1];
lRet = fread(pData, sizeof(char), size, fp);
pData[lRet] = '\0'; // NUL-terminate so the buffer can be used as a C string
fclose(fp);
return pData;
}
int main(int argc, char **argv)
{
if (argc < 2) return 1; // need a file name argument
long Len;
char * Data = GetFileIntoCharPointer(argv[1], Len);
if (Data != NULL)
{
std::cout << Data << std::endl;
delete [] Data;
}
return 0;
}
I need to get parent directory from file in C++:
For example:
Input:
D:\Devs\Test\sprite.png
Output:
D:\Devs\Test\ [or D:\Devs\Test]
I can do this with a function:
char *str = "D:\\Devs\\Test\\sprite.png";
for(int i = strlen(str) - 1; i>0; --i)
{
if( str[i] == '\\' )
{
str[i] = '\0';
break;
}
}
But I just want to know whether a built-in function exists for this.
I use VC++ 2003.
Thanks in advance.
If you're using std::string instead of a C-style char array, you can use string::find_last_of and string::substr in the following manner:
std::string str = "D:\\Devs\\Test\\sprite.png";
str = str.substr(0, str.find_last_of("/\\"));
Now, with C++17, it is possible to use std::filesystem::path::parent_path:
#include <filesystem>
#include <iostream>
#include <string>
namespace fs = std::filesystem;
int main() {
fs::path p = "D:\\Devs\\Test\\sprite.png";
std::cout << "parent of " << p << " is " << p.parent_path() << std::endl;
// parent of "D:\\Devs\\Test\\sprite.png" is "D:\\Devs\\Test"
std::string as_string = p.parent_path().string();
return 0;
}
Heavy duty and cross platform way would be to use boost::filesystem::parent_path(). But obviously this adds overhead you may not desire.
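A minimal sketch of that approach (assuming Boost.Filesystem is installed and linked):
#include <boost/filesystem.hpp>
#include <iostream>

int main() {
    boost::filesystem::path p("D:\\Devs\\Test\\sprite.png");
    std::cout << p.parent_path().string() << std::endl;  // prints D:\Devs\Test
    return 0;
}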
Alternatively you could make use of cstring's strrchr function something like this:
#include <cstring>
char * lastSlash = strrchr( str, '\\');
if ( lastSlash != NULL ) *(lastSlash + 1) = '\0'; // truncate just after the last backslash
Editing a const string is undefined behavior, so declare something like below:
char str[] = "D:\\Devs\\Test\\sprite.png";
You can use the one-liner below to get your desired result:
*(strrchr(str, '\\') + 1) = 0; // add a NULL check on strrchr's result first if the path might contain no '\\' at all
On POSIX-compliant systems (*nix) there is a commonly available function for this, dirname(3). On Windows there is _splitpath.
The _splitpath function breaks a path
into its four components.
void _splitpath(
const char *path,
char *drive,
char *dir,
char *fname,
char *ext
);
So the result (it's what I think you are looking for) would be in dir.
Here's an example:
#include <stdlib.h>
#include <iostream>
#include <windows.h>
using namespace std;
int main()
{
const char *path = "c:\\that\\rainy\\day";
char dir[256];
char drive[8];
errno_t rc;
rc = _splitpath_s(
path, /* the path */
drive, /* drive */
8, /* drive buffer size */
dir, /* dir buffer */
256, /* dir buffer size */
NULL, /* filename */
0, /* filename size */
NULL, /* extension */
0 /* extension size */
);
if (rc != 0) {
cerr << GetLastError();
exit (EXIT_FAILURE);
}
cout << drive << dir << endl;
return EXIT_SUCCESS;
}
On Windows platforms, you can use
PathRemoveFileSpec or PathCchRemoveFileSpec
to achieve this.
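For illustration, a minimal sketch using PathRemoveFileSpec (it needs Shlwapi; newer SDKs deprecate it in favour of PathCchRemoveFileSpec):
#include <windows.h>
#include <shlwapi.h>
#include <iostream>
#pragma comment(lib, "Shlwapi")

int main() {
    wchar_t path[MAX_PATH] = L"D:\\Devs\\Test\\sprite.png";
    if (PathRemoveFileSpecW(path))        // strips "\\sprite.png" in place
        std::wcout << path << std::endl;  // prints D:\Devs\Test
    return 0;
}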
However for portability I'd go with the other approaches that are suggested here.
You can use dirname to get the parent directory
Check this link for more info
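A minimal sketch (POSIX only; note that dirname may modify its argument, so give it a writable copy, and that it works with '/' separators):
#include <libgen.h>   // dirname
#include <stdio.h>

int main(void) {
    char path[] = "/home/devs/test/sprite.png";  // writable copy, since dirname may modify it
    printf("%s\n", dirname(path));               // prints /home/devs/test
    return 0;
}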
Raghu
I am writing a C program for a class that is a small shell. The user inputs a command, and the code executes it using the exec() function.
I need to have a fork in the process so all the work is done in the child process. The only problem is that the child won't terminate properly and execute the command. When I run the code without the fork, it executes commands perfectly.
The problem seems to be coming from where I am creating the string to be used in the execv call. It's the line of code where I call strcpy. If I comment that out, things work fine. I also tried changing it to strncat with the same problem. I'm clueless as to what's causing this and welcome any help.
#include <sys/wait.h>
#include <vector>
#include <sstream>
#include <cstdlib>
#include <stdio.h>
#include <iostream>
#include <string.h>
#include <unistd.h>
using namespace std;
string *tokenize(string line);
void setCommand(string *ary);
string command;
static int argument_length;
int main() {
string argument;
cout << "Please enter a unix command:\n";
getline(cin, argument);
string *ary = tokenize(argument);
//begin fork process
pid_t pID = fork();
if (pID == 0) { // child
setCommand(ary);
char *full_command[argument_length];
for (int i = 0; i <= argument_length; i++) {
if (i == 0) {
full_command[i] = (char *) command.c_str();
// cout<<"full_command " <<i << " = "<<full_command[i]<<endl;
} else if (i == argument_length) {
full_command[i] = (char *) 0;
} else {
full_command[i] = (char *) ary[i].c_str();
// cout<<"full_command " <<i << " = "<<full_command[i]<<endl;
}
}
char* arg1;
const char *tmpStr=command.c_str();
strcpy(arg1, tmpStr);
execv((const char*) arg1, full_command);
cout<<"I'm the child"<<endl;
} else if (pID < 0) { //error
cout<<"Could not fork"<<endl;
} else { //Parent
int childExitStatus;
pid_t wpID = waitpid(pID, &childExitStatus, WCONTINUED);
cout<<"wPID = "<< wpID<<endl;
if(WIFEXITED(childExitStatus))
cout<<"Completed "<<ary[0]<<endl;
else
cout<<"Could not terminate child properly."<<WEXITSTATUS(childExitStatus)<<endl;
}
// cout<<"Command = "<<command<<endl;
return 0;
}
string *tokenize(string line) //splits lines of text into separate words
{
int counter = 0;
string tmp = "";
istringstream first_ss(line, istringstream::in);
istringstream second_ss(line, istringstream::in);
while (first_ss >> tmp) {
counter++;
}
argument_length = counter;
string *ary = new string[counter];
int i = 0;
while (second_ss >> tmp) {
ary[i] = tmp;
i++;
}
return ary;
}
void setCommand(string *ary) {
command = "/bin/" + ary[0];
}
You said:
It's the line of code where I call strcpy.
You haven't allocated any memory to store your string. The first parameter to strcpy is the destination pointer, and you're using an uninitialized value for that pointer. From the strcpy man page:
char *strcpy(char *s1, const char *s2);
The stpcpy() and strcpy() functions copy the string s2 to s1 (including
the terminating `\0' character).
There may be other issues, but this is the first thing I picked up on.
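One possible fix (a sketch, reusing the names from the code above; <vector> is already included there): either give arg1 real storage, or skip the extra copy entirely and pass the string's own buffer to execv:
// Instead of:
//     char* arg1;
//     strcpy(arg1, tmpStr);           // writes through an uninitialized pointer
// copy into storage you own:
std::vector<char> arg1(command.begin(), command.end());
arg1.push_back('\0');                  // execv wants a NUL-terminated C string
execv(&arg1[0], full_command);         // &arg1[0] works pre-C++11; arg1.data() on C++11 and later

// or, since execv never modifies its path argument, avoid the copy altogether:
execv(command.c_str(), full_command);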