I've recently learned about command line arguments and I understand how to use them. But I just don't get why I should use them at all. I mean, you could use any normal variable to do the same job as a command line argument.
Could someone explain or give a scenario of how a command line argument could be essential to a program?
edit myfile.txt
You could always make an editor that edits one specific file, but it would make more sense if the user were able to tell you which file they wanted to edit. Command line args are one way of doing this.
The purpose of a command line argument is to allow you to pass information into your program without hard-coding it into the program. For example:
Foo -pages:10
Foo -pages:20
Here we've passed information into the program (in this case a pages setting). If you set a variable in your program you'd have to recompile it every time you wanted to change it!
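Here's a minimal sketch of how a program might read such a flag from argv; the Foo name and -pages: prefix come from the example above, while the parsing details are just one assumed approach:

#include <cstdlib>
#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    int pages = 10;  // default, used when no -pages: flag is given
    for (int i = 1; i < argc; ++i) {
        std::string arg = argv[i];
        if (arg.rfind("-pages:", 0) == 0)              // argument starts with "-pages:"
            pages = std::atoi(arg.substr(7).c_str());  // take the number after the prefix
    }
    std::cout << "pages = " << pages << '\n';
}

Running Foo -pages:20 prints pages = 20; changing the argument changes the behavior with no recompile.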
It means you don't have to edit the program to change something in it.
Say your program processes all files in a folder to remove icon previews. You could hardcode the folder to process in the program, or you could specify it as a command-line argument.
I use this example because it describes a bash script I use on my Mac at home.
Automation.
You cannot script or use an application/tool in a headless (or unmanned) environment if you require interactive user input.
You can use "config files" and write and read from temporary files, but this can become cumbersome quickly.
Driving the application.
Almost every non-trivial application has some variation in what or how it does something; there is some level of control that can and must be applied. Similar to functions, accepting arguments is a natural fit.
The command line is a natural and intuitive environment; supporting and using a command line allows for better and easier adoption of the application (or tool).
A GUI can be used, sure, but unless your plan is to only support GUI environments and only support input via the GUI, the command line is required.
Consider echo, which repeats its arguments; it could hardly work without them (a minimal clone is sketched after this list).
Or tar—how could it tell whether to extract, create, decompress, list, etc. etc. without command line arguments?
Or git, with its options to pull, push, merge, branch, checkout, fetch, ...
Or literally any UNIX program except maybe true and false (although the GNU versions of those do take arguments).
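To make that concrete, here is a minimal echo-like program; without its arguments it could do nothing at all:

#include <iostream>

// A bare-bones echo: print every argument, separated by spaces.
int main(int argc, char* argv[]) {
    for (int i = 1; i < argc; ++i) {
        std::cout << argv[i];
        if (i + 1 < argc) std::cout << ' ';
    }
    std::cout << '\n';
}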
There are countless applications for passing arguments to main. For example, say you are a developer and you've designed an application for processing images. Normal users only need to pass the names of the images to your application for processing; the actual source files of your application are not available to them, and they are probably not programmers anyway.
I've been looking for some time now without any luck.
Basically I was wondering if it's possible to save a variable and read from it inside the executable after the program exits.
I know that one can use fstream to store the variable in an external file, but I'm looking for a way to store it internally, like in the .exe itself.
string l;
cin >> l;
// function to save it internally.....
Thanks in advance o.o
Here are a few reasons why it's not a good idea.
It is no better than using another file.
You cannot just access a big block of memory as "all my data" and write it into a file, so you will have to serialize / unserialize properly. Once you have that code available, it is actually more work messing with a complex executable format (be it ELF or PE) than writing to a plain data file.
It is worse actually.
Bugs in writing the data could make your program unworkable.
Multiple users cannot each have their own data.
Your executable file is normally not writable.
On Unix-based systems, binary files are typically installed to system directories and a normal user simply cannot change them.
Even if running as root, it's not uncommon for system partition to be read-only (my own setup has / mounted as read-only for instance).
On Windows systems, although it is more common to run with admin rights, it's not universal and, anyway, the binary file of a running program is locked.
Even if you manage to workaround all this, it prevents data portability.
Installing or updating your program wipes the data stored in it.
There is no way to backup your data and restore it later (possibly on another system).
The only programs modifying executables these days are malware. For this reason, intercepting executable-modifying programs and shutting them down is one of the most basic features of anti-malware software.
Along those lines, on systems that implement signed binaries or any kind of trust system, your modified binary won't pass signature checks.
So, lots of quirks, lots of complex workarounds both in your program and in user experience (need to request special permissions, tricky save and backup, very probable data loss). While on the other hand a simple save to a data file is easy to implement and user-friendly.
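For comparison, a minimal sketch of the plain-file approach; the file name save.dat is a placeholder, and the sketch assumes a single string, matching the question's example:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::string l;

    // Restore the previous value, if a save file exists.
    std::ifstream in("save.dat");
    if (in >> l) std::cout << "previous value: " << l << '\n';

    std::cin >> l;                  // read the new value from the user

    std::ofstream out("save.dat");  // persist it for the next run
    out << l << '\n';
}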
As mentioned by @drescherjm and @Peter in the comments, such a practice is exactly what security software looks for, so it's not really the brightest of ideas.
I'm not sure of your intentions, but if you are trying to implement co-routines within your program, here's what you can do:
Create a static variable, say static int state = 0;, and use that to implement co-routines within a single program lifetime.
Use a file, say "Sys_Status.dat", to store those variables across runs.
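A rough sketch of that second idea, assuming a single integer state persisted in Sys_Status.dat (the file name comes from the suggestion above; the steps are purely illustrative):

#include <fstream>
#include <iostream>

int main() {
    int state = 0;

    // Pick up where the previous run left off, if the file exists.
    std::ifstream in("Sys_Status.dat");
    in >> state;

    switch (state) {
        case 0:  std::cout << "first run: doing step one\n";  break;
        case 1:  std::cout << "second run: doing step two\n"; break;
        default: std::cout << "all steps already done\n";     return 0;
    }

    // Record the progress for the next run.
    std::ofstream out("Sys_Status.dat");
    out << state + 1 << '\n';
}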
Say I have a C++ program with 100 functions, each function with 100 local variables, and each of them is an array: maybe 1D, maybe 2D, maybe 3D, maybe dynamically allocated.
Now I'm debugging the program and I have to check that all the variables are correct. Currently I simply fprintf() them to their own files and then check the data in the files. But I have to write many, many fprintf(), fopen(), and fclose() statements in the program, which is quite ugly.
Is there any better way or tool that can simplify and possibly automate this?
You can use a debugger for that, but it'll require you to check everything on your own.
If you want to check everything automatically, just write unit tests and run them.
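A minimal sketch of what such a test can look like with plain assert; the function sum_1d and its expected values are made up for illustration:

#include <cassert>

// Hypothetical function under test: sum of a 1D array.
int sum_1d(const int* a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

int main() {
    int a[] = {1, 2, 3};
    assert(sum_1d(a, 3) == 6);  // checked automatically on every run
    assert(sum_1d(a, 0) == 0);  // edge case: empty array
}

Real projects would use a framework such as Google Test rather than raw assert, but the principle is the same.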
and each function with 100 local variable
There's your problem. Cut that down so that each function is at most 100 lines (even then it's still too much!) and you'll have a fighting chance.
Create a global log file and open/close it once.
Debug printing is a powerful tool, but I suppose you'll also need a tool (written yourself) to compare the result files.
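One way to sketch the "open once" idea is a function handing out a single shared stream; the file name debug.log is a placeholder:

#include <fstream>

// Returns one shared log file, opened on first use and
// closed automatically when the program exits.
std::ofstream& debug_log() {
    static std::ofstream log("debug.log");
    return log;
}

int main() {
    debug_log() << "x = " << 42 << '\n';  // no fopen()/fclose() at every call site
    debug_log() << "entering phase 2\n";
}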
First, as @UKMonkey already said, your functions shouldn't have 100 local variables. A common best practice is to keep functions to at most 25 lines, with at most 80 characters per line. That will make it easier for you to debug and for others to understand your code.
Furthermore, if you use Linux or another Unix-like system, you can use GDB for debugging. Just compile your app with the -g flag to gcc/g++ and run it under GDB.
$ g++ -g example.cpp -o example.out
$ gdb ./example.out
There you can add breakpoints (break), step through the code (next, step), and print the values of your variables (print). Read the GDB manual for more details.
Hi, I have a C++ program that processes a set of files with the same prefix (i.e. file0, file1, file2, etc.). When I run the program (on Linux systems) I usually pass the prefix as a command line argument:
myscript file*
This processes all the files (within the folder) that have the prefix file. The C++ program includes a for loop like:
for (int i = 1; i < argc; i++) {
    // do something with argv[i]
}
I'm not an expert in C++ and I don't know how * is handled. Now, how could I pass a subset of files (i.e. from file0 to file10, or from file20 to file35) to the C++ program? How can I use shell commands to list a subset of files?
Assuming you are running on a Linux-like system, the * is evaluated by the shell before executing your program (which, by the way, is called a program, not a script, as it first has to be compiled before execution).
So the shell expands the * to match everything. This means you should modify how you call the program, rather than modifying the code. For example, file0* would match anything beginning with file0.
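A quick way to see this from the program's side is a tiny demo (separate from the original program) that prints the arguments it actually receives:

#include <iostream>

// Run as ./show_args file0* and it prints whatever the shell
// expanded file0* into, one argument per line.
int main(int argc, char* argv[]) {
    for (int i = 1; i < argc; ++i)
        std::cout << "argv[" << i << "] = " << argv[i] << '\n';
}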
Chances are good you are working with a bash terminal, in which case you should be looking for command line help. The GNU project publishes a great book called "introduction to the command line" (http://shop.fsf.org/product/Introduction_to_Command_Line/) which is released under the gpl and freely available. You might enjoy it.
You might like to be aware that your line for(i=2; i<argc; i++) raises two points.
First, you are setting i to two, which skips over the first command line parameter. argv is an array where the 0th element is the command itself, and all options are in elements 1 to argc-1. If you are intentionally skipping the first argument, then that's fine.
The second is a pretty small one, but it's a good idea to get in the habit of preferring the prefix increment operator (++i) over the postfix. It won't make a difference on a simple integer, but in some cases using the prefix operator results in more efficient code (by avoiding an unnecessary temporary). Since the prefix operator is just as readable as the postfix, you lose nothing by getting into the habit of always using it, unless you really need the postfix one. This is discussed quite well in, for example, Sutter and Alexandrescu's C++ Coding Standards (see the items on not optimizing or pessimizing prematurely).
Basile is right: the C++ program sees only real file names. The sequence of file names passed to the program is the result of the shell's file name expansion. In a directory with files a1, a2, a3, a11, a command like echo a[0-9] would result in "a1 a2 a3".
Bash globs are not true regular expressions, so you would need to pipe the ls command through grep in order to get all files named file1 ... file100 or so (with different number lengths). Example: ls | egrep 'file[0-9]+'.
A program "my_executable" would get the result on the command line with something like
my_executable $(ls a* | egrep 'a[0-9]+$')
Putting a command inside $() replaces $() with the output of that command.
Hope that helps.
Possible Duplicate:
How to avoid entering library's source files while debugging in Qt Creator with gdb?
Does anybody know how to tell gdb to only enter code that is in your project? I know it's hard for a debugger to know what is "in the project" and what is a library... but I thought some naive checks could help, e.g. don't look in any files that aren't in the user's home directory. I frequently have code like this:
MyFunction(complexVariable, complexVar); // passed by value
and gdb insists on going through the copy constructors of the two passed values, but all I care about is MyFunction. Any tips? There are two parts to the question:
ignore code that isn't mine (not in home dir)
skip copies for function calls.
thanks.
EDIT: btw I use emacs, maybe there are some tools there I missed, but I'm open to using external gdb frontends.
In my opinion, this cannot be done.
Every project has a flow of data from one function to another, and gdb is designed to work on that flow of data.
So if your project is somewhere in the middle of the flow, gdb can't help you, since every function has some purpose to fulfil with the input it gets and the output it gives.
All you can do is create the same function separately and replicate the scenario as if it were running in the flow, by giving it the inputs it needs and checking the output it gives.
What is the best (hopefully free or cheap) way to detect and then, if necessary, remove a rootkit found on your machine?
SysInternals stopped updating RootkitRevealer a couple of years ago.
The only sure way to detect a rootkit is to do an offline comparison of installed files and filesystem metadata against a trusted list of known files and their parameters. Obviously, you need to trust the machine you are running the comparison from.
For most people, booting from a CD-ROM to run a virus scanner does the trick in most situations.
Otherwise, you can start with a fresh install of whatever, boot it from CD-ROM, attach an external drive, run a Perl script to find and gather parameters (size, MD5, SHA-1), then store the parameters.
To check, run the Perl script again to gather the parameters, then compare them to the stored ones.
Also, you'd need a Perl script to update your stored parameters after a system update.
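In the spirit of those scripts, here is a rough C++17 sketch of gathering such a manifest; recording only size and timestamp is a simplification, and a real check would add cryptographic hashes (MD5, SHA-1) as described above:

#include <filesystem>
#include <fstream>
#include <iostream>

namespace fs = std::filesystem;

// Walk a directory tree and record path, size, and last-write time
// for every regular file. Comparing two such manifests offline
// reveals files that changed between snapshots.
int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: manifest <dir> <outfile>\n";
        return 1;
    }
    std::ofstream out(argv[2]);
    for (const auto& entry : fs::recursive_directory_iterator(argv[1])) {
        if (!entry.is_regular_file()) continue;
        auto t = entry.last_write_time().time_since_epoch().count();
        out << entry.path().string() << '\t'
            << entry.file_size() << '\t' << t << '\n';
    }
}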
--Edit--
Updating this to reflect available techniques. If you get a copy of any bootable rescue CD (such as Trinity or RescueCD) with an up-to-date copy of the program "chntpasswd", you'll be able to browse and edit the Windows registry offline.
Coupled with a copy of the startup list from castlecops.com, you should be able to track down the most common run points for the most common rootkits. And always keep track of your driver files and what the good versions are too.
With that level of control, your biggest problem will be the mess of spaghetti your registry is left in after you delete the rootkit and trojans. Usually.
-- Edit --
And there are Windows tools, too, but I described the tools I'm familiar with, which are free and better documented.
RootkitRevealer from SysInternals
Remember that you can never trust a compromised machine. You may think you found all signs of a rootkit, but the attacker may have created backdoors in other places: non-standard backdoors that the tools you use won't detect. As a rule, you should reinstall a compromised machine from scratch.