Say I have a C++ program with 100 functions, each function with 100 local variables, and each of them an array: maybe 1D, maybe 2D, maybe 3D, maybe dynamically allocated.
Now I'm debugging the program and I have to check that all the variables are correct. Currently I simply fprintf() them to their own files and then check the data in the files. But that means writing many, many fprintf(), fopen(), and fclose() statements throughout the program, which is quite ugly.
Is there a better way, or a tool, that can simplify and possibly automate this?
You can use a debugger for that, but it will require you to check everything on your own.
If you want to check everything automatically, just write unit tests and run them.
and each function with 100 local variables
There's your problem. Cut that down so that each function is 100 lines (even then it's still too much!) and you'll have a fighting chance.
Create a global log file and open/close it once.
Debug print is a powerful tool, but I suppose you'll also need a tool (which you could write yourself) to compare the result files.
First, as @UKMonkey already said, your functions shouldn't have 100 local variables. A common guideline is to keep functions to at most 25 lines, with at most 80 characters per line. That will make it easier for you to debug and for others to understand your code.
Furthermore, if you use Linux or another Unix-like system, you can use GDB for debugging. Just compile your app with the -g flag to gcc/g++ and run it under GDB:
$ g++ -g example.cpp -o example.out
$ gdb ./example.out
There you can add breakpoints and print the values of your variables. Read the GDB manual for more details.
I've been looking for some time now without any luck.
Basically I was wondering if it's possible to save a variable inside the executable itself and read it back after the program exits.
I know that one can use the fstream classes to store the variable in an external file, but I'm looking for a way to store it internally, i.e. in the .exe.
string l;
cin >> l;
// function to save it internally.....
Thanks in advance o.o
Here are a few hints on why it's not a good idea.
It is no better than using another file.
You cannot access one big block of memory as “all my data” and write it into a file, so you will have to serialize/deserialize properly. Once you have that code available, it is actually more work to mess with a complex file format (be it ELF or PE) than to write to an empty file.
It is worse actually.
Bugs in writing the data could make your program unworkable.
Multiple users cannot each have their own data.
Your executable file is normally not writable.
On Unix-based systems, binary files are typically installed to system directories and a normal user simply cannot change them.
Even if running as root, it's not uncommon for the system partition to be read-only (my own setup has / mounted read-only, for instance).
On Windows systems, although it is more common to run with admin rights, it's not universal, and in any case the binary file of a running program is locked.
Even if you manage to workaround all this, it prevents data portability.
Installing or updating your program overwrites the executable, and your data with it.
There is no way to backup your data and restore it later (possibly on another system).
The only programs modifying executables these days are malware. For this reason, detecting executable-modifying programs and shutting them down is one of the most basic features of anti-malware software.
Along those lines, on systems that implement signed binaries or any kind of trust system, your modified binary won't pass signature checks.
So: lots of quirks, and lots of complex workarounds both in your program and in the user experience (the need to request special permissions, tricky save and backup, very probable data loss), while on the other hand a simple save to a data file is easy to implement and user-friendly.
As mentioned by @drescherjm and @Peter in the comments, such a practice is exactly what security software looks for, so it's not really the brightest of ideas.
I'm not aware of your exact intentions, but if you are trying to implement co-routines within your programs, here's what you can do:
Create a static variable, say static int state = 0;, and use that to implement co-routines on the scale of a program lifetime.
Use a file, say "Sys_Status.dat", to store that variable's info.
I've recently learned about command line arguments and I understand how to use them. But I just don't get why I should use them at all. I mean, you could use any normal variable to do the same job as a command line argument.
Could someone explain or give a scenario of how a command line argument could be essential to a program?
edit myfile.txt
You could always make an editor that edits one specific file, but it makes more sense to let the user tell you which file he wants to edit. Command line args are one way of doing this.
The purpose of a command line argument is to allow you to pass information into your program without hard coding it into the program. For example
Foo -pages:10
Foo -pages:20
Here we've passed information into the program (in this case a pages setting). If you set a variable in your program you'd have to recompile it every time you wanted to change it!
It means you don't have to edit the program to change something in it.
Say your program processes all files in a folder to remove icon previews. You could hard-code the folder to process in the program, or you could specify it as a command-line argument.
I use this example because it describes a bash script I use on my Mac at home.
Automation.
You cannot script or use an application/tool in a headless (or unmanned) environment if you require interactive user input.
You can use "config files" and write and read from temporary files, but this can become cumbersome quickly.
Driving the application.
Almost every non-trivial application has some variation in what or how it does something; there is some level of control that can and must be applied. Similar to functions, accepting arguments is a natural fit.
The command line is a natural and intuitive environment, supporting and using a command line allows for better and easier adoption of the application (or tool).
A GUI can be used, sure, but unless your plan is to only support GUI environments and only support input via the GUI, the command line is required.
Consider echo, which repeats its arguments—it could hardly work without them.
Or tar—how could it tell whether to extract, create, decompress, list, etc. etc. without command line arguments?
Or git, with its options to pull, push, merge, branch, checkout, fetch, ...
Or literally any UNIX program except maybe true and false (although the GNU versions of those do take arguments.)
There are countless applications for passing arguments to main. For example, let's say you are a developer and you've designed an application for processing images. Normal users need only to pass the names of the images to your application for processing. The actual source files of your application are not available to them or they are probably not programmers.
I have a file with the extension .out. I'm running Windows 10. From what I understand, .out files are generated when compiling C and C++ code on Linux. I was wondering if there is any way I could execute the file on Windows. Renaming its extension to .exe gave me an error saying the file was incompatible with the 64-bit version of Windows.
So is there any way I could execute the file, or better yet, view its contents as proper code so I can work with it, while using Windows?
There's no way of directly converting a Linux executable to Windows format.
You'll have to recompile it, or use Cygwin, which allows running Linux-style commands in a Windows environment.
a.out is not necessarily related to C or C++; it can be generated by any other kind of compiler or assembler. If you read the article, you can see that it isn't even guaranteed that the file actually is in what you may think of as the a.out format.
The only way to execute it is to install a Unix OS, but even this won't guarantee that it can actually run, because there may be missing dependencies, the wrong OS version, etc.
To view the content of the file, there are different utilities on different platforms. For example, you can use objdump on Linux or Cygwin/Windows to take a look at it. You can also use a disassembler and see if you can make sense of it. On Windows you can use IDA, which covers a broad range of file formats and may be able to dissect it.
Once you've managed to take a look inside it, there is the next issue you asked about: converting it. This is a tedious process, though, because you must do it by hand. If IDA could identify it, you have a good start, because you now have an assembly source as a starting point, but it will likely not assemble, and certainly not run, on your target platform (Windows).
I asked in another thread how to profile my program, and people gave me lots of good replies, except that when I tried several free profilers, including AMD CodeAnalyst for example, they only support the Microsoft PDB format, and MinGW is unable to generate it.
So, what profiler can help me profile a multi-threaded application with Lua scripting that is compiled with MinGW?
EDIT: gprof is crap; the answer explaining why I don't want it is right on the spot. All the functions it lists as troublesome are unrelated to the issue I have (there is a certain action that causes a massive slowdown, and I can't figure out why; gprof can't figure it out either).
If you don't want to use gprof, I'm not surprised.
It took me a while to figure out how to do this under GDB, but here's what I do. I get the app running and switch focus to the app's output window, even if it's only a DOS box. Then, while it's being slow, I hit Control-Break, and GDB halts. I do info threads, which tells me what threads there are (typically 1 and 2), and switch to the thread I want, like thread 2. Then I do bt to see a stack trace; this tells me exactly what the app was doing when I hit Control-Break. I do this a number of times, like 10 or 20, and if there's a performance problem, no matter what it is, it shows up in multiple samples of the stack. The slower it makes the program, the fewer samples I have to take before I see it.
For a complete analysis of how and why it works, see that link.
P.S. I also do handle SIGINT stop print nopass when I start GDB.
Does gprof not do it?
I thought MinGW provided a gprof version to go with it.
If you want to profile Lua scripting, I could suggest using the LuaProfiler: http://luaprofiler.luaforge.net/manual.html. It works quite nicely.
I would strongly suggest implementing some sort of timers, or your own profiler, to get a simple profiling tool. A really simple approach is to output timestamps when certain points in your code are hit, write those times to a text file, and then write a simple Lua or Python script to parse the file and filter out the interesting information.
I've used this (or a slightly more complex) version of profiling in most of my hobby projects and it has proven very helpful.
Possible Duplicate:
How to avoid entering library's source files while debugging in Qt Creator with gdb?
Does anybody know how to tell gdb to only step into code that is in your project? I know it's hard for a debugger to know what is "in the project" and what is a library... but I thought some naive checks could help, e.g. don't look in any files that aren't in the user's home directory. I frequently have code like this:
MyFunction(complexVarable, complexvar); //passed by value
and gdb insists on stepping through the copy constructors of the two passed values, but all I care about is MyFunction. Any tips? There are two parts to the question:
ignore code that isn't mine (not in my home directory)
skip the copies made for function calls.
Thanks.
EDIT: BTW, I use emacs; maybe there are some tools there I missed, but I'm open to using external gdb frontends.
In my opinion, this cannot be done.
Every project has a flow of data from one function to another, and gdb is designed to work on that flow of data. So if your code is somewhere in the middle of the flow, gdb can't help you, since every function has some purpose to fulfill with the input it gets and the output it gives.
All you can do is build the same function separately and replicate the scenario as if it were running in the flow, by giving it the inputs it needs and checking the output it gives.