I am writing a piece of code that recognizes certain patterns at the machine function level of the LLVM stack. I would like to write a test driver for this component. The test driver would need to create a snippet of machine code and pass an instruction pointer to my matcher:
MachineInstr *MI = createSnippet(...);
bool result = matcher(MI, ...);
ASSERT_TRUE(result);
If this were not a stand-alone test driver, creating a new instruction or set of instructions in a specific basic block would be easy. However, the test driver is a stand-alone program and is not invoked from the compiler. Somehow, it seems, I need to create a machine function, a basic block, and an instance of MachineRegisterInfo before I can create any instructions. How do I do that so that everything is correctly linked up and the LLVM infrastructure is in a consistent state?
We have our own test automation software, which executes our product exe. Our test cases are not written in C++, but our product code is.
What we want is to run our automation tool on our exe, which will run the test suite, and then find the lines of code that were executed (code coverage).
Is there any way to do the above? Something similar to LCOV?
Semantic Designs' (my company) C++ Test Coverage Tool could be used for this for either MS C++ or GCC.
The tool instruments your source code before you compile it. The compiled binary is executed by whatever means; as it runs, the instrumentation collects test coverage information, and occasionally writes that data to a special file. That file is then analyzed/displayed by a special UI.
If you can get your automation tool to signal when an individual test is complete (this may happen as a natural "last action" on each test, or by other convention), then the test coverage data can be captured on a per-test basis to give you a finer-grain view of the coverage data.
The interface for my game engine is built using a markup language and Lua, similar to HTML and JavaScript. As such, visual elements have handlers for UI events such as a mouse move or click, and each time a handler is to be run, the engine checks whether it has been compiled yet and, if not, compiles it via luaL_loadstring. Handlers can be shared either by element duplication or by assignment (this.onclick = that.onclick).
How do I set the environment of a chunk of lua code before running it? The idea is to make element- and event-specific data available to the chunk, and also to link to the environment of the parent UI element. Lua 5.2 changes removed lua_setfenv, so I am not sure how to accomplish this. The function lua_load allows specifying an environment, but seems to only be used for loading code and not running it.
From the reference manual:
You can use load (or loadfile) to load a chunk with a different environment. (In C, you have to load the chunk and then change the value of its first upvalue.)
Setting upvalues is done with lua_setupvalue. So, load your code first, then push the new environment and call lua_setupvalue the same way you would have called lua_setfenv before:
luaL_loadfile(L, "file.lua"); /* load and compile handler */
lua_getglobal(L, "my_environment"); /* push environment onto stack */
lua_setupvalue(L, -2, 1); /* pop environment and assign to upvalue#1 */
/* any other setup needed */
lua_pcall(L, ...); /* call handler */
Also, from the end of your question:
The function lua_load allows specifying an environment, but seems to only be used for loading code and not running it.
This is not actually the case; load (the Lua function) lets you specify an environment, but lua_load (the C function) does not.
Also, while it is "only used for loading code, and not running it", this is the same as luaL_loadstring - indeed, luaL_loadstring is just a wrapper around it. lua_load is a lower-level API function that can be used to implement custom loaders (for example, loading code over the network, or from a compressed file). So if you're already comfortable loading code with luaL_loadstring before running it with lua_pcall, lua_load should look familiar.
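To see the same mechanism from the Lua side, here is a minimal sketch of `load` with an explicit environment in Lua 5.2+ (the chunk contents and environment table are made up for illustration):

```lua
-- Lua 5.2+: load takes the environment as its fourth argument,
-- which becomes the chunk's first upvalue (_ENV) -- exactly what
-- lua_setupvalue sets on the C side.
local env = { x = 42 }                           -- sandbox for the chunk
local chunk, err = load("return x", "handler", "t", env)
assert(chunk, err)
print(chunk())  -- reads x from env, not from the real globals: prints 42
```

The C sequence above is the manual equivalent of that fourth argument.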
I have to spy on a C++ DLL. I would like to insert trace calls into the assembly code, e.g. modify it to add a small routine that traces some variable into a text file. I do not have access to the runtime of the machine where the DLL is used; I can only access its storage. So I cannot spy dynamically using the IDA debugger: I must place files containing the spy code on the machine, start it, let it run, shut it down, and then retrieve whatever trace files were created in storage.
Is there some way to automate that spy-code insertion, using IDA Pro for example, or a similar tool?
I have decompiled the DLL using Hex-Rays and, yes, I could modify the C source code and plant the functions there, but unfortunately Hex-Rays cannot reverse all of the code, only about 90%, so I cannot go that way.
Seeing as this is a DLL, you can use the wrapgen IDA plugin to create a wrapper DLL that calls the original and inserts whatever tracking and tracing code you need.
In more advanced cases you can use the wrapper DLL to dynamically patch the original DLL if you need to monitor function-local variables.
So I'm interested in doing some unit testing of a library that interacts with a kernel module. To do this properly, I'd like to make sure that things like file handles are closed at the end of each test. The only way this really seems possible is by using fork() on each test case. Is there any pre-existing unit test framework that would automate this?
An example of what I would expect is as follows:
TEST() {
    int x = open("/dev/custom_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(x, 3);
}

TEST() {
    int y = open("/dev/other_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(y, 3);
}
In particular, the file handles should be closed after each test, which means that the file descriptor should likely end up with the same value in each test.
I am not actually testing the value of the file descriptor. This is just a simple example. In my particular case, only one user will be allowed to have the file descriptor open at any time.
This is targeting a Linux platform, but something cross platform would be awesome.
Google Test does support forking the process in order to test it, but only as "exit" and/or "death" tests. On the other hand, there is nothing to prevent you from writing every test like that.
Ideally, though, I would recommend that you approach your problem differently. For example, using the same Google Test framework, you can list the test cases and run them separately, so a simple wrapper that invokes the test binary multiple times, running a different test each time, will solve your problem. Fork has its own problems, you know.
The Check unit testing library for C by default executes each test in a separate child process.
It also supports two kinds of fixtures: ones executed before/after each test, in the child process (called 'checked' fixtures), and ones executed before/after a test suite, in the parent process (called 'unchecked' fixtures).
You can disable the forking via the environment variable CK_FORK=no or an API call, e.g. to simplify debugging an issue.
Currently, libcheck runs on Linux, Hurd, the BSDs, OS X, and several flavors of Windows (MinGW, non-MinGW, etc.).
I am working on a system that uses a Voltage Controlled Oscillator chip (VCO) to help process a signal. The makers of the chip (Analog Devices) provide a program to load setup files onto the VCO but I want to be able to setup the chip from within the overarching signal processing control system. Fortunately Analog Devices also provides a DLL to interface with their chip and load setup files myself. I am programming in Visual C++ 6.0 (old I know) and my program is a dialog application.
I got the system to work perfectly writing setup files to the card and reading its status. I then decided that I needed to handle the case where there are multiple cards attached and one must be selected. The DLL provides GetDeviceCount() which returns an integer. For some reason every time the executable runs it returns 15663105 (garbage I assume). Whenever I debug my code however the function returns the correct number of cards. Here is my call to GetDeviceCount().
#include <windows.h>

typedef int (__stdcall *GetDeviceCount)();

int AD9516_Setup()
{
    int NumDevices;
    GetDeviceCount _GetDeviceCount;
    HINSTANCE hInstLibrary = LoadLibrary("AD9516Interface.dll");
    if (hInstLibrary == NULL)
        return -1; /* DLL not found */
    _GetDeviceCount = (GetDeviceCount)GetProcAddress(hInstLibrary, "GetDeviceCount");
    if (_GetDeviceCount == NULL)
        return -1; /* export not found */
    NumDevices = _GetDeviceCount();
    return NumDevices;
}
Just to be clear: every other function from the DLL I have used is called exactly like this and works perfectly in both the executable and the debugger. I did some research and found that a common cause of Heisenbugs is threading. I know that there is some threading behind the scenes in the dialogs I am using, so I deleted all my calls to the function except one. I also discovered that code executes more slowly under the debugger, and I thought the chip might not have enough time to finish processing each command. First I tried burning time between each chip function call with an empty for loop, and when that did not work I commented out all other calls to the DLL.
I do not have access to the source code used to build the DLL and I have no idea why its function would be returning garbage in the executable and not debugger. What other differences are there between running in debugger and executing that could cause an error? What are some other things I can do to search for this error?
Some compilers/IDEs add extra padding to variables in debug builds or initialize them to 0; this might explain the differences you're encountering between debugging and "normal" execution.
Some things that might be worth checking:
- are you using the correct calling convention?
- do you get the same return value if no devices are connected?
- are you using the correct return type (uint vs int vs long vs ..)?
Try initializing _GetDeviceCount to 0 before the GetProcAddress call; that could be what the debugger is doing for you.