I want to make an interactive code-learning system. It lets users (usually young programmers) write the body of a single function in C++ and send it to a server, where it is compiled into a dynamic library and called from the main program.
The program expects the function to return the correct answer for the given parameters.
Of course, some kids will cause errors like segmentation faults (the server runs Linux).
So, can I install a signal handler that would bail out of the faulting function?
What I wish to accomplish:
for (int i = 0; i < PLAYER_NUM; i++) {
    snprintf(buf, sizeof(buf), "players/%s.so", player[i]);
    handle = dlopen(buf, RTLD_LAZY);                    // should also be checked against NULL
    add[i] = (int (*)(int, int))dlsym(handle, "sum");
} // that was simply loading the functions from the libraries

for (int x = 0; x < 10; x++)
    for (int i = 0; i < PLAYER_NUM; i++) {
        if (failed[i]) continue;
        ret = add[i](x, 5);
        if (sigfault_received() || ret != (x + 5)) {    // sigfault_received() is what I am looking for
            failed[i] = true;
        }
    }
Faulty code can cause all kinds of issues which might not be recoverable. So handling SIGSEGV won't really help.
The solution is to run that code in a separate process and use IPC, pipes or sockets to communicate with the main process.
Use a proper sandbox, not one you built yourself. You can't be expected to be as creative at predicting mischief as ten kids together. E.g. system("rm -rf /") won't immediately segfault your program, but it is certainly undesirable.
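For the segfault part specifically, something along these lines could work. This is only a rough sketch: run_in_child is a made-up helper, and a real sandbox with resource limits and a timeout is still needed on top of it.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Rough sketch: call one player's "sum" in a forked child so a crash or
// SIGSEGV only kills the child, and read the result back through a pipe.
bool run_in_child(int (*sum)(int, int), int a, int b, int expected)
{
    int fd[2];
    if (pipe(fd) != 0)
        return false;

    pid_t pid = fork();
    if (pid < 0)
        return false;

    if (pid == 0) {                        // child: run the untrusted function
        close(fd[0]);
        int result = sum(a, b);            // may crash, but only the child dies
        write(fd[1], &result, sizeof(result));
        _exit(0);
    }

    close(fd[1]);                          // parent: collect the answer and the exit status
    int result = 0;
    ssize_t got = read(fd[0], &result, sizeof(result));
    close(fd[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        return false;                      // child crashed or was killed

    return got == (ssize_t)sizeof(result) && result == expected;
}

The dlopen()ed function pointer stays valid in the child because fork() duplicates the whole address space; an infinite loop in the submitted code would additionally need a timeout (e.g. alarm() in the child).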
My goal is to read large chunks of executable memory from a target app.
ReadProcessMemory() sometimes fails, but that is okay; I can still examine the rest of the bytes I'm interested in.
I don't modify anything in the target application, such as values.
My problem is that the target app crashes after a minute or so, or when certain reallocations happen in it.
I went to extremes like reading without VirtualProtectEx() so as not to modify even the protection attributes of the memory regions in question.
I'm curious what could cause a target application to crash after reading from its memory, without modifying values or access rights.
Sidenote: the memory in question is probably being accessed simultaneously by the target application and by my application. (From the target app's perspective it is being read, executed and written.)
You can take a look at my code here:
UINT64 pageNum = 0;
BYTE page[4096];
for (UINT64 i = start; i < end; i += 0x1000)
{
    ReadProcessMemory(qtHandle, (void*)i, page, sizeof(page), &bytesRead);
    foundCode = findCode(page, pageNum);
    if (foundCode != 0)
    {
        foundCode += start - 11;
        break;
    }
    pageNum++;
}
cout << hex << foundCode << endl;
CloseHandle(qtHandle);
return 0;
}
UINT64 findCode(BYTE* pg, UINT64 pageNum)
{
    // stop early enough that the pattern cannot run past the end of the page
    for (size_t i = 0; i + sizeof(asm2) <= 4096; i++)
    {
        if (findPattern(asm2, sizeof(asm2), pg, i)) // asm2 is an array of bytes
        {
            return (pageNum * 4096 + i);
        }
    }
    return 0;
}
bool findPattern(BYTE* pattern, size_t patternLen, BYTE* page, size_t index)
{
    // the length is passed in explicitly: sizeof on a BYTE* parameter would
    // only give the size of the pointer, not the length of the pattern
    for (size_t i = 0; i < patternLen; i++)
    {
        if (page[index + i] != pattern[i])
        {
            return false;
        }
    }
    return true;
}
ReadProcessMemory() cannot cause the target program to crash.
An anticheat/antidebug mechanism might be detecting you and terminating the application.
If you use VirtualProtectEx() to change permissions, that can certainly cause a crash.
We would need to see more code to tell you what the problem is.
It was the usage of VirtualProtectEx() that caused the problem.
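For anyone hitting the same thing, a rough sketch of reading a page without ever touching its protection could look like this. readPageIfReadable is just an illustrative name; the process handle and page buffer are assumed to come from the code above.

#include <windows.h>

// Sketch: query the page first and only read it if it is committed and
// readable, so VirtualProtectEx() is never needed.
bool readPageIfReadable(HANDLE process, LPCVOID address, BYTE* page, SIZE_T pageSize)
{
    MEMORY_BASIC_INFORMATION mbi = {};
    if (VirtualQueryEx(process, address, &mbi, sizeof(mbi)) == 0)
        return false;

    const DWORD readable = PAGE_READONLY | PAGE_READWRITE |
                           PAGE_EXECUTE_READ | PAGE_EXECUTE_READWRITE;
    if (mbi.State != MEM_COMMIT ||
        (mbi.Protect & PAGE_GUARD) ||
        !(mbi.Protect & readable))
        return false;

    SIZE_T bytesRead = 0;
    return ReadProcessMemory(process, address, page, pageSize, &bytesRead) &&
           bytesRead == pageSize;
}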
Between the following two code snippets, is there actually any difference in compilation or running speed?
for (int i = 0; i < 50; i++)
{
    if (i % 3 == 0)
        continue;
    printf("Yay");
}
and
for (int i = 0; i < 50; i++)
{
    if (i % 3 != 0)
        printf("Yay");
}
Personally, in situations where there is a lot more than a print statement, I've been using the first method to reduce the amount of indentation for the contained code. I've been wondering for a while, so I figured it was about time to ask whether it actually has any effect other than a visual one.
Reply to Alf (I couldn't get code working in the comments...)
More accurate to my usage is something along the lines of a "handleObjectMovement" function which would include
for each object
    if object position is static
        continue
    deal with velocity and jazz
compared with
for each object
    if object position is not static
        deal with velocity and jazz
Hence me not using return. Essentially "if it's not relevant to this iteration, move on"
The behaviour is the same, so the runtime speed should be the same unless the compiler does something stupid (or unless you disable optimisation).
It's impossible to say whether there's a difference in compilation speed, since it depends on the details of how the compiler parses, analyses and translates the two variations.
If speed is important, measure it.
If you know which branch of the condition has the higher probability, you can use the likely/unlikely macros built on GCC's __builtin_expect, as sketched below.
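Something along these lines; the macro names here just follow the common Linux-kernel convention, and this is GCC/Clang only:

#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int main(void)
{
    for (int i = 0; i < 50; i++)
    {
        if (unlikely(i % 3 == 0))   // hint: this branch is taken only about 1/3 of the time
            continue;
        printf("Yay");
    }
    return 0;
}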
How about getting rid of the check altogether? For t = 0..32, i = t + (t >> 1) + 1 runs through exactly the 33 values between 1 and 49 that are not multiples of 3.
for (int t = 0; t < 33; t++)
{
    int i = t + (t >> 1) + 1;
    printf("%d\n", i);
}
It seems that most tutorials, guides, books and Q&A on the web refer to CUDA 3 and 4.x, which is why I'm asking specifically about CUDA 5.0. To the question...
I would like to program for an environment with two CUDA devices but use only one host thread, to keep the design simple (especially because it is a prototype). I want to know if the following code is valid:
float *x[2];
float *dev_x[2];
for (int d = 0; d < 2; d++) {
    cudaSetDevice(d);
    cudaMalloc(&dev_x[d], 1024);
}
for (int repeats = 0; repeats < 100; repeats++) {
    for (int d = 0; d < 2; d++) {
        cudaSetDevice(d);
        cudaMemcpy(dev_x[d], x[d], 1024, cudaMemcpyHostToDevice);
        some_kernel<<<...>>>(dev_x[d]);
        cudaMemcpy(x[d], dev_x[d], 1024, cudaMemcpyDeviceToHost);
    }
    cudaStreamSynchronize(0);
}
I would specifically like to know whether the cudaMalloc(...) allocations from before the test loop persist across the interleaved cudaSetDevice() calls on the same thread. I would also like to know whether the same holds for context-dependent objects such as cudaEvent_t and cudaStream_t.
I am asking because I have an application in this style that keeps getting some mapping error, and I can't tell whether the cause is a memory leak or wrong API usage.
Note: in my original code I do check every single CUDA call; I left the checks out here for readability.
Is this just a typo?
for (int d = 0; d < 2; d++) {
    cudaSetDevice(0); // shouldn't that be 'd'?
    cudaMalloc(&dev_x, 1024);
}
Please check the return value of all API calls!
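Something like this is the usual pattern (CUDA_CHECK is just an illustrative name):

#include <cstdio>
#include <cuda_runtime.h>

// report the failing call together with the file and line it came from
#define CUDA_CHECK(call)                                                 \
    do {                                                                 \
        cudaError_t err_ = (call);                                       \
        if (err_ != cudaSuccess) {                                       \
            fprintf(stderr, "CUDA error '%s' at %s:%d\n",                \
                    cudaGetErrorString(err_), __FILE__, __LINE__);       \
        }                                                                \
    } while (0)

// e.g.
// CUDA_CHECK(cudaSetDevice(d));
// CUDA_CHECK(cudaMalloc(&dev_x[d], 1024));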
I have been writing code for a neural network using the back-propagation algorithm, and for propagating inputs I have written the following code. With just two inputs it produces a segmentation fault. Is there anything wrong with the code? I was not able to figure it out...
void propagateInput(int cur, int next)
{
    cout << "propagating input " << cur << " " << next << endl;
    cout << "Number of neurons: " << neuronsInLayer[cur] << " " << neuronsInLayer[next] << endl;
    for (int i = 0; i < neuronsInLayer[next]; i++)
    {
        neuron[next][i].output = 0;
        for (int j = 0; j < neuronsInLayer[cur]; j++)
        {
            cout << neuron[cur][j].output << " ";
            cout << neuron[next][i].weight[j] << "\n";
            neuron[next][i].output += neuron[next][i].weight[j] * neuron[cur][j].output;
        }
        cout << "out[" << i << "] = " << neuron[next][i].output << endl;
    }
    cout << "completed propagating input.\n";
}
for (int i = 0; i < neuronsInLayer[next]; i++)...
neuronsInLayer[next] is a pointer. Perhaps if I knew the type of neuronsInLayer I could assist you more.
That is not anywhere near enough information to debug your code. No info about line numbers or how the structures are laid out in memory or which ones are valid, etc.
So let me tell you how you can find this yourself. If you're on Unix/Mac, use the GDB debugger on your executable, a.out (compile with -g so the backtrace shows line numbers):
$ gdb a.out
(gdb) run
*segfault*
(gdb) where
Visual Studio has a great debugger as well, just run it in Debug mode and it'll tell you where the segfault is and let you inspect memory.
I have a class in SystemC with some data members, such as:
long double x[8];
I'm initializing it in the constructor like this:
for (i = 0; i < 8; ++i) {
    x[i] = 0;
}
But the first time I use it in my code I have garbage there.
Because of the way the system is built, I can't connect a debugger easily. Is there any way to set a data breakpoint in the code so that it tells me where in the code the variables were actually changed, without hooking up a debugger?
Edit:
@Prakash:
Actually, this is a typo in the question, but not in my code... Thanks!
You could try starting a second thread which spins, looking for changes in the variable:
#include <pthread.h>

void *ThreadProc(void *arg)
{
    // spin forever, watching the eight values for anything non-zero
    volatile long double *x = (volatile long double *)arg;
    while (1)
    {
        for (int i = 0; i < 8; i++)
        {
            if (x[i] != 0)
            {
                __asm__ __volatile__ ("int3"); // software breakpoint (x86), raises SIGTRAP
            }
        }
    }
    return 0; // never reached, but placates the compiler
}

...

pthread_t threadID;
pthread_create(&threadID, NULL, ThreadProc, &x[0]);
This will raise a SIGTRAP signal to your application whenever any of the x values is not zero.
Just use printk/syslog.
It's old-fashioned, but super duper easy.
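For instance, something like this for the syslog half (the program name and the write here are only placeholders), called from every place in your own code that writes to x:

#include <syslog.h>

int main()
{
    long double x[8] = {0};

    // open the log once at start-up
    openlog("sysc_model", LOG_PID, LOG_USER);

    // then log from every spot in your own code that writes to x
    x[3] = 42;
    syslog(LOG_DEBUG, "x[3] set to %Lf at %s:%d", x[3], __FILE__, __LINE__);

    // close it at shutdown
    closelog();
    return 0;
}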
Sure, it will be garbage!
The code should have been:
for (i = 0; i < 8; ++i) {
    x[i] = 0;
}
EDIT: Oops, Sorry for underestimating ;)
@Frank
Actually, that lets me log debug prints to a file. What I'm looking for is something that will let me print something whenever a variable changes, without me explicitly looking for the variable.
How about conditional breakpoints? You could try various conditions, like the first element's value being zero or non-zero, etc.
That's assuming I can easily connect a debugger. The whole point is that I only have a library, but the executable that linked it in isn't readily available.