I'm currently porting some heavy numerical code from MATLAB to C++, involving a large amount of data. I'd like to step through the C++ code in Visual Studio and compare the contents of the key arrays with those from the MATLAB code being debugged simultaneously. Since there are many steps, it's awkward to add real C++ code just to export values.
So, is there a convenient way to export the contents of a known memory buffer from C++ to somewhere? The only thing I can think of is copying the contents of the Watch window. Any better ideas?
Update: I found the >d command in the Command Window. It almost fits my needs, except that 1) distracting meta-information is printed alongside the output, and 2) only 10,000 lines of output are possible :-|
In the end, I was able to use the >d command along with some slight "packing" of my data to fit more values into each line, and performed several dumps. Since this was a one-off task, that solution was good enough.
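For reference, if a single throwaway helper in the code is acceptable after all, something like the sketch below (the name and format are made up) can be called once from the Immediate window at a breakpoint; that sidesteps both the meta-information and the line limit:
// Throwaway debug helper: dump a buffer of doubles, one value per line,
// in a form MATLAB can load and diff against its own arrays.
#include <cstdio>

extern "C" void dump_doubles(const double *buf, int n, const char *path) {
    if (std::FILE *f = std::fopen(path, "w")) {
        for (int i = 0; i < n; ++i)
            std::fprintf(f, "%.17g\n", buf[i]);  // round-trip double precision
        std::fclose(f);
    }
}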
I currently have a Visual Basic .NET program that uses COM to communicate with some software I am automating. Part of the program requires reading a file line by line, checking for certain keywords, and storing some of them into String objects to be used by the software.
My problem lies with the slow speed of Visual Basic in reading and performing these tasks. I have been able to write much faster functions in Visual C++ to complete these tasks, but I know of no way to connect the two.
Is it possible to call VB.NET's Shell method to run my C++ code and return the three strings?
Something along the lines of:
Shell("cplusplus_Program.exe " + filename)
//and somehow return three strings
Would there be a better way? I have no experience in creating DLLs, but would one be better suited to my task?
EDIT 1: I have been told that VB.NET should be fast enough for my goals if I use it correctly, so I went back and replaced the old COM object I was using with a StreamReader. I use the ReadLine method and it is still very slow. From my research I have seen many fast ways to read the whole text document at once, but I need to go line by line, since each line I receive has to be checked and possibly fixed (though my test case never actually needs fixing). In case it plays a large role in the speed: I check each string using the .IndexOf method.
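For reference, a minimal sketch of the kind of C++ line scanner described above (the keywords are placeholders). A VB.NET caller could launch it with Process.Start and a redirected StandardOutput, rather than Shell, which does not capture output:
// Scan a file line by line and print the lines containing any keyword;
// std::string::find plays the role of VB's .IndexOf here.
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
    if (argc < 2) { std::cerr << "usage: scanner <file>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::string line;
    while (std::getline(in, line)) {
        if (line.find("KEYWORD_A") != std::string::npos ||
            line.find("KEYWORD_B") != std::string::npos)
            std::cout << line << '\n';  // the caller reads these from stdout
    }
    return 0;
}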
I would like to include NASM itself (the assembler) in a C++ project. Can I compile NASM as a shared library? If not, is there another assembler that works as a C or C++ library?
I checked libyasm but couldn't understand how to use it to assemble my code.
Woah, this exploded when I was away.
I had solved this problem by tampering with the YASM source code, and totally forgot about the question on SO since it received absolutely no attention 8 months ago. Below are the details, followed by a better suggestion.
For the project I had in mind, I needed to use YASM as a library, and I was in a hurry because I was doing this for a company. Back then there were no good libraries that I was aware of, and I had concluded that getting used to the LLVM framework was overkill for the task (because all I wanted was to assemble individual x86/x86_64 instructions and receive the bytes).
So I downloaded the source code for YASM.
Upon meddling with the code for a while, I noticed that the executable receives the file paths for the input and output files, and passes these two strings along. I wanted char arrays in memory for the input and output, not files. So I figured that if I could find all the FILE pointers being passed around, I could convert them to char pointers and change every file read/write into an array operation.
This turned out to be even more cumbersome than it sounds. Apparently YASM does not open the input/output files once and reuse the same FILE pointers; instead it passes around copies of the file path strings. I would have needed a script to make all the necessary changes for me, and that approach wasn't working out.
Eventually, I found all the fopen/fclose calls in the program with a script and replaced them with my_fopen/my_fclose. In each file where I made these replacements, I included my own header, in which I implemented these two functions.
In both of these functions, I checked the incoming string and compared it with "fake_file". If they were equal, I passed back a 'fake' FILE pointer pointing to a portion of memory, obtained from the fmemopen and open_memstream calls. Otherwise I simply called the actual fopen/fclose functions. In other words, I redirected these two calls (only for that one filename) to a memory file. Then I called the library with the filename parameter set to "fake_file".
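A rough reconstruction of that redirection, from memory (glibc/POSIX only; all names here come from my description above, not from actual YASM code):
// Redirect fopen/fclose for one magic filename to in-memory "files".
#include <cstdio>
#include <cstring>

static char   in_buf[1 << 20];    // assembly source text placed here beforehand
static char  *out_buf  = nullptr; // receives the emitted bytes
static size_t out_size = 0;

extern "C" FILE *my_fopen(const char *path, const char *mode) {
    if (std::strcmp(path, "fake_file") == 0) {
        if (mode[0] == 'r')                           // reads come from memory
            return fmemopen(in_buf, std::strlen(in_buf), mode);
        return open_memstream(&out_buf, &out_size);   // writes go to memory
    }
    return std::fopen(path, mode);                    // everything else is normal
}

extern "C" int my_fclose(FILE *fp) {
    // After closing the memstream, out_buf/out_size hold the output.
    return std::fclose(fp);
}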
Since I had limited myself to Linux at that point, this approach worked for me. I also found out (using Valgrind) that there was a memory leak in the library version, so I wrote a very primitive garbage collector for it: basically I wrapped malloc and friends to keep track of all allocations that were not freed, and cleaned them up after each execution.
This approach also allowed me to automate all the changes with a script. Unfortunately I did all of this at a company, so I cannot share any actual code.
Better suggestion:
As of May 31, 2016, you can use the Keystone Engine instead. It is "based on LLVM, but it goes much further with a lot more to offer." Together with the disassembly engine Capstone, it makes a near-perfect couple for assembly and disassembly. If you need either of these components, I suggest them instead of the hacks I described. Both engines are under active development, and even though Keystone still has some small bugs, Capstone is very robust at the moment.
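For a taste of what using it looks like, here is a minimal sketch built on Keystone's C API, assembling one x86-64 instruction and printing the encoded bytes:
// Assemble a single x86-64 instruction to machine code with Keystone.
#include <keystone/keystone.h>
#include <cstdio>

int main() {
    ks_engine *ks;
    if (ks_open(KS_ARCH_X86, KS_MODE_64, &ks) != KS_ERR_OK) return 1;

    unsigned char *encode;   // Keystone allocates the output buffer
    size_t size, count;
    if (ks_asm(ks, "add rax, rbx", 0, &encode, &size, &count) == 0) {
        for (size_t i = 0; i < size; i++)
            std::printf("%02x ", encode[i]);
        std::printf("\n");
        ks_free(encode);     // release the Keystone-owned buffer
    }
    ks_close(ks);
    return 0;
}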
TL;DR: Use Keystone.
I'm trying to write my first 'demoscene' application in MS Visual Studio Express 2010. Suddenly I realized that my binary had grown from 16 KB to ~100 KB in the fully-optimized-for-size release version. My target size is 64 KB. Is there any way to "browse" the binary to figure out which functions consume a lot of space, and which I should rewrite? I really want to know what my binary consists of.
From what I found on the web, VS2010 is not the best compiler for demoscenes, but I still want to understand what's happening inside my .exe file.
I think you should have MSVC generate a map file for you. This is a file that tells you the addresses of most of the functions in your executable; the difference between consecutive addresses tells you roughly how much space each function takes. To generate a map file, add the /MAP linker option. For more info, see:
http://msdn.microsoft.com/en-us/library/k7xkk3e2(v=VS.100).aspx
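If you'd rather not eyeball the map file, here is a rough sketch that sorts the symbols by address and prints each one's size as the gap to the next (the column layout assumed here is typical MSVC map output, so it may need adjusting):
// Rough map-file analyzer: read "section:offset  name  address ..." lines,
// sort by address, and report each symbol's size as the gap to the next.
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) { std::cerr << "usage: mapsize file.map\n"; return 1; }
    std::ifstream in(argv[1]);
    std::vector<std::pair<std::uint64_t, std::string>> syms;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string section, name, addr;
        if (!(ss >> section >> name >> addr)) continue;
        if (section.find(':') == std::string::npos) continue;  // not a symbol line
        try {
            syms.emplace_back(std::stoull(addr, nullptr, 16), name);
        } catch (...) { /* third column wasn't a hex address; skip */ }
    }
    std::sort(syms.begin(), syms.end());
    for (std::size_t i = 0; i + 1 < syms.size(); ++i)
        std::cout << syms[i + 1].first - syms[i].first
                  << "\t" << syms[i].second << "\n";
    return 0;
}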
You can strip off lots of unnecessary stuff from the executable and compress it with utilities such as mew.
I've found this useful for examining executable sizes (although not for demoscene type things): http://aras-p.info/projSizer.html
I will say this: if you are using the standard library at all, stop immediately. It is a huge code bloater. For example, each unique instantiation of std::sort adds around 5 KB, and the numbers are similar for many of the standard containers (it depends which functions you use, of course, but in general they add a lot of code).
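For instance, one common substitution is to fall back on the C library's qsort, which uses a single comparator-driven implementation instead of instantiating a template per element type (actual savings vary by toolchain; the function below is just an illustration):
// Sorting through qsort avoids a per-type std::sort instantiation.
#include <cstdlib>

static int cmp_int(const void *a, const void *b) {
    const int x = *static_cast<const int *>(a);
    const int y = *static_cast<const int *>(b);
    return (x > y) - (x < y);  // negative, zero, or positive, as qsort expects
}

void sort_ints(int *data, std::size_t n) {
    std::qsort(data, n, sizeof(int), cmp_int);
}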
Also, I'm not into the demo scene, but I believe people use Crinkler to compress their executables.
Use your version control system to see what caused the increase. Going forward, I'd log the built .exe size during the nightly builds. And don't forget you can optimize for minimal size with the compiler settings.
I am trying to optimize the interaction between two scripts I have.
Two things I thought of are having the C++ program not terminate unless you manually kill it, or generating all the info in Python before feeding it to the C++ program.
Explanation of the problem:
What the scripts do:
C++ program (not made by me, and I can't program in C++ very well): takes a 7-number array and returns a single number. Simple.
Python script (mine, and I can program a bit in Python): generates those 7-number arrays, feeds them to the C++ program, waits for an answer, and adds it to a list. It then makes the next array.
In theory this works. However, as it is right now, it opens and closes the C++ program for each call. For one array that is no problem, but I'm trying to scale up to 25k arrays, and in the future to 6+ million arrays. It is then obviously no longer feasible to open and close the program each time, especially since the C++ program first has to load a 130 MB VCD file in order to function.
The first option I thought of was to generate all the arrays in Python first, then feed them to the C++ program and analyze all the results afterwards. However, I wouldn't know how to do this with 6M arrays. (It is not important that the results come back in the same order as the arrays I fed in.)
The second option was to make the C++ program not quit after each call. I can't program in C++, though, so I don't know whether it is possible to keep it 'alive' so that you can feed it arrays from time to time and get an answer back.
(Note: I cannot program in anything other than Python, and I want to do this project in Python. The C++ program cannot be translated to Python for speed reasons.)
Thanks in advance, Max.
Firstly, just to be pedantic, there are no C++ scripts in normal use. C++ compiles, ultimately, to machine code, and the C++ program is properly referred to as a "program" and not a "script".
But to answer your question, you could indeed set up the C++ program to stay in memory, where it listens for connections and sends responses to your Python script. You'd want to study Unix IPC, particularly sockets.
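A minimal sketch of that resident-server idea with a Unix domain socket (Linux; the socket path and the raw 7-double protocol are made up for illustration, and the compute() stub stands in for the real calculation):
// Resident server: do the expensive setup once, then serve requests forever.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>

// Stand-in for the real calculation on a 7-number array.
static double compute(const double *a) {
    double s = 0;
    for (int i = 0; i < 7; ++i) s += a[i];
    return s;
}

int main() {
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strcpy(addr.sun_path, "/tmp/calc.sock");
    unlink(addr.sun_path);  // remove any stale socket file
    bind(srv, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
    listen(srv, 1);
    for (;;) {              // the 130 MB load would happen once, above this loop
        int cli = accept(srv, nullptr, nullptr);
        double arr[7];
        while (read(cli, arr, sizeof arr) == sizeof arr) {
            double r = compute(arr);   // answer each 7-double request
            write(cli, &r, sizeof r);
        }
        close(cli);
    }
}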
Another way to approach it would be to incorporate what the C++ program does into your Python script, and forget about C++ altogether.
Without the source code or the exact specifications of the Python script and the C++ program, it's difficult to provide more information, but you could modify the C++ code to repeatedly read the array from the standard input and then write the results to standard output.
Then you could use the Python subprocess module to launch the C++ program from your Python script and communicate with it.
Note that simply wrapping a loop around the main() function of the C++ program will not be very helpful, because apparently the main issue is the time the program needs in order to read its data (the VCD that you mentioned).
The loop needs to be strictly around the code that computes the result - which means that you may have to factor everything else out in a way that allows the result computation to be done repeatedly without each run contaminating the next ones.
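Concretely, a minimal sketch of that restructured main loop (load_vcd and compute are hypothetical names standing in for the existing code):
// Load the expensive data once, then answer queries until stdin closes.
#include <iostream>

// Placeholders standing in for the real 130 MB VCD load and the real
// calculation; both names are made up for this sketch.
static void load_vcd(const char * /*path*/) { /* expensive one-time setup */ }
static double compute(const double *a) {
    double s = 0;  // dummy computation for the sketch
    for (int i = 0; i < 7; ++i) s += a[i];
    return s;
}

int main() {
    load_vcd("data.vcd");  // paid once per process, not once per array

    double a[7];
    while (std::cin >> a[0] >> a[1] >> a[2] >> a[3] >> a[4] >> a[5] >> a[6])
        std::cout << compute(a) << '\n';  // one result per input line
}
On the Python side, subprocess.Popen with stdin=PIPE and stdout=PIPE can then feed arrays and read results without ever restarting the process.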
Okay, your best course of action is probably to write a C/C++ extension to Python that can call the C++ code that does the calculation you want. This is not terribly difficult; it only requires a minimal amount of C/C++ coding to make it work. A good explanation of extending Python can be found on the Python site at http://docs.python.org/extending/extending.html
What you in effect do is change your C++ program to be a dynamic library that the Python process can link in and call from the Python script.
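To make that concrete, here is a hedged sketch of such a module using the Python 3 C API (the module name fastcalc and the compute() stub are made up; the real calculation would replace the stub):
// Hypothetical minimal CPython extension exposing the calculation.
#include <Python.h>

// Stand-in for the real C++ routine.
static double compute(const double *a) {
    double s = 0;
    for (int i = 0; i < 7; ++i) s += a[i];
    return s;
}

static PyObject *py_compute(PyObject *, PyObject *args) {
    double a[7];
    if (!PyArg_ParseTuple(args, "ddddddd",
                          &a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6]))
        return nullptr;  // argument error has already been set
    return PyFloat_FromDouble(compute(a));
}

static PyMethodDef methods[] = {
    {"compute", py_compute, METH_VARARGS, "Return the result for a 7-number array."},
    {nullptr, nullptr, 0, nullptr}
};

static PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "fastcalc", nullptr, -1, methods,
    nullptr, nullptr, nullptr, nullptr
};

PyMODINIT_FUNC PyInit_fastcalc() { return PyModule_Create(&moduledef); }
Built as fastcalc.so (or fastcalc.pyd on Windows), the Python loop becomes import fastcalc; fastcalc.compute(a, b, c, d, e, f, g), with no process startup cost at all.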
If you need a bit of help getting it to work I'm sure we can help you out.
I think the best way is to build a C++ extension module for Python.
There are lots of ways to do it.
If you have the C++ sources, you can try SWIG.
After that you can use the C++ functions/objects directly inside Python and manage them with Python modules (for example, the processing module). It is really simple.
I think you're doing it wrong
What the scripts do: C++ program (not made by me, and I can't program in c++ very well): takes a 7 number array and returns a single number, simple. Python script (mine, and I can program a bit in python): generates those 7 number arrays, feeds them to the c++ program, waits for an answer and adds it to a list. It then makes the next array.
Do you have something like this?
python generate_arrays.py | someC++app | python gather_array.py
This allows you to run the three parts in parallel, using every core of every CPU on the box. The OS makes sure all three run concurrently.
If you're still not getting 100% CPU load, you'll have to do something like this:
( python generate_arrays.py --even | someC++app >oneFile ) & ( python generate_arrays.py --odd | someC++app > anotherFile )
python gather_array.py oneFile anotherFile
That will run two copies of python generate_arrays.py and two copies of your magical C++ program.
You'll have to rewrite your generate_arrays.py program so that it takes a command-line option. When the option is --even, you generate 3 million arrays; when the option is --odd, you generate the other 3 million arrays.
This (python | c++) & (python | c++) arrangement should get to 100% CPU use.
Hi,
I am currently learning C++ and am a beginner in programming in general. I've been trying to write code for a few programming problems from the book I'm using. I find that I often make mistakes in what I write, and those mistakes only come up when the program is run. It's usually quite obvious where I've gone wrong when there is regular output, but in a long computation I'm often not sure why a particular piece of code behaved the way it did. I've also looked at Python recently. Python works with an interpreter, which can take any piece of Python code and compute its output.
I was wondering if there is something similar for C++. Right now, when I want to check a line or block of code, I have to comment out a lot, save, compile, and run from the command line, and I have to do that many times for a single error until I've solved it. Is there a way to type code into an active terminal that would run it and show me the output? Better still would be a way to select a block of code (like you select text), or several blocks (to see how a function is being handled), within the IDE and click Run to execute just that selection and see its output, without having to comment out irrelevant lines or save the file. The compiled code could just reside in memory.
CINT is a c & C++ interpretter that accepts nearly all valid C++. Unfortunately many Linux distros do not offer it, and you'll probably have to build it from source... and that is a non-trivial task.
Typically a debugger is used to step through code line by line, starting from a chosen breakpoint, while keeping watch over variables and their values.
Unit testing is a technique to test smaller pieces of code.
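For instance, a tiny self-contained test of a single (made-up) function, using nothing more than assert:
// Exercise one function in isolation; silence on exit means all checks passed.
#include <cassert>

static int clamp(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

int main() {
    assert(clamp(5, 0, 10) == 5);    // value inside the range is unchanged
    assert(clamp(-3, 0, 10) == 0);   // below the range clamps to lo
    assert(clamp(42, 0, 10) == 10);  // above the range clamps to hi
    return 0;
}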
A stepping debugger, as found in most IDEs, will help you with this.
Here (for example) is a description of how to set the execution point in Visual Studio, which sounds like what you want to do.
For certain situations, the "Immediate Window" may be of use to you. It allows you to type in expressions to evaluate immediately.
Rather than just running individual lines independently, or relying on print statements to tell you the state of whatever variables you have decided to print, you can use the debugger to run to the point of interest (where you will have set a breakpoint), then you can examine the state of any in-scope variables, or even alter the normal flow of the program.
There are some solutions that try to do this - the ones I know are Ch and TextTransformer.
However, I doubt that this works very well. C++ is not at all designed to run as an interpreted language.
One of the problems is that C++ is very, very hard to parse. And this makes it very hard to provide certain types of tools that are usual for other languages. For example, I don't think there is any C++ refactoring tool that really works well.
C++ is a compiled language, unlike Python. There are a few C/C++ interpreters out there, but I'm not sure about their features. Check these out: the Ch interpreter and CINT.
If you really want to learn C++, please do not use the C/C++ interpreters.
If you insist on using an interactive interpreter, there is CINT, which has been around for a long time and is the default interpreter used in the ROOT project. It has gotten better over the years, but it still has only limited capabilities when dealing with templates. Also, within the ROOT project there is a move to replace it with a JIT-compiling interpreter based on Clang.
If I were you, I would learn how to use the compiler and an interactive debugger, as already suggested in some of the comments.