I'm trying to implement a kernel system call to remove the first element from a queue. I'm getting a segmentation fault (SIGSEGV) when debugging in gdb, with a line in the kernel logs: BUG: unable to handle kernel paging request at ....
My struct for the queue is as follows:
typedef struct msgQueue
{
    long len;
    void *data;
    struct list_head queue;
} msgQueue;
As you can see, it contains the pointer to a block of data, the length in bytes of that data, and a list_head struct object from list.h.
I initialize an object of type msgQueue (above) with these lines:
myQueue = (struct msgQueue *) kmalloc(sizeof(struct msgQueue), GFP_KERNEL);
INIT_LIST_HEAD(&myQueue->queue);
I implement a write function that is working correctly. The queue is not empty when I try to delete from it. Here is the initialization of the new element that I'm adding, and the lines that add it:
Function header:
asmlinkage long sys_writeMsgQueue(const void __user *data, long len)
Other lines:
tempQueue = (struct msgQueue *)kmalloc(sizeof(struct list_head), GFP_KERNEL);
tempQueue->data = kmalloc((size_t)len, GFP_KERNEL);
tempQueue->len = len;
uncopiedBytes = __copy_from_user(tempQueue->data, data, len);
list_add_tail(&(tempQueue->queue), &(myQueue->queue));
I can't paste all of my read function, because this is for a course that I'm taking. But here is what I hope are the relevant parts:
asmlinkage long sys_readMsgQueue(void __user *data, long len)
{
    long uncopiedBytes;
    uncopiedBytes = __copy_to_user(myQueue, data, len);
    printk("REMOVING FROM QUEUE AND FREEING\n\n\n");
    list_del(&(myQueue->queue));
}
When I implement this basic functionality in a self-contained C program in Eclipse to try to debug it, it runs fine. Granted, I have to adjust it for user-space code, so all of the kernel-specific stuff is removed or changed (malloc instead of kmalloc, no system-call-specific syntax, etc.). I included a copy of list.h that I downloaded, so I'm using all of the same functions and such as far as list.h goes.
Does anything stand out to you that would cause the kernel paging error in my kernel logs?
tempQueue = (struct msgQueue *)kmalloc(sizeof(struct list_head), GFP_KERNEL);
looks wrong; you probably want
tempQueue = kmalloc(sizeof *tempQueue, GFP_KERNEL);
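Allocating only sizeof(struct list_head) means the later writes to tempQueue->data and tempQueue->len land past the end of the allocation and corrupt the heap, which would explain the paging error. As a hedged illustration (not your exact code), the write path with that fix and some error handling might look like:
tempQueue = kmalloc(sizeof *tempQueue, GFP_KERNEL);  /* allocate the whole struct */
if (!tempQueue)
    return -ENOMEM;
tempQueue->data = kmalloc(len, GFP_KERNEL);
if (!tempQueue->data) {
    kfree(tempQueue);
    return -ENOMEM;
}
tempQueue->len = len;
if (copy_from_user(tempQueue->data, data, len)) {  /* checked copy from user space */
    kfree(tempQueue->data);
    kfree(tempQueue);
    return -EFAULT;
}
list_add_tail(&tempQueue->queue, &myQueue->queue);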
typedef struct
{
    char cStartByte; // Set Cmd 0xB1
    int iTotalBytes;
    char cSeqNum;    // 0 to 99 repeating
    char cCommand;
    char cPrintCmd;
    float fData[8];
} CMD, *psCmdOut;
In the code I have tried many options with no success. What should I put in ???? to send the above structure?
UDPClient1->SendBuffer(EHost->Text,12000, ????);
You can't send your structure as-is using a socket: you need to serialize it. You need to create a common format for data exchange, usually an array of char like this one.
Code:
unsigned char* ToCharArray(psCmdOut s)
{
    // Allocate on the heap: returning a pointer to a local array would dangle.
    unsigned char* serial = (unsigned char*)malloc(sizeof(CMD));
    serial[0] = s->cStartByte; // s is a pointer (psCmdOut), so use ->
    /*etc.*/
    return serial;
}
You can cast your structure to a (char*) and back, but I would strongly advise against it: the implicit conversion hides subtleties like endianness, internal memory padding, and alignment, which can break your system in unpredictable ways.
The answer depends on your version of Indy.
In Indy 8 and 9, SendBuffer() has the following signature:
void __fastcall SendBuffer(String AHost, const int APort, void* ABuffer, const int AByteCount);
So you can do this:
CMD cmd;
// fill cmd as needed...
UDPClient1->SendBuffer(EHost->Text, 12000, &cmd, sizeof(cmd));
In Indy 10, SendBuffer() was changed to take a TIdBytes (dynamic array of bytes) instead:
void __fastcall SendBuffer(const String AHost, const TIdPort APort, const TIdBytes ABuffer);
So you cannot pass the struct pointer directly anymore. However, Indy 10 has a RawToBytes() function to create a TIdBytes from a memory block, so you can do this instead:
CMD cmd;
// fill cmd as needed...
UDPClient1->SendBuffer(EHost->Text, 12000, RawToBytes(&cmd, sizeof(cmd)));
As @Sam suggested:
UDPClient1->SendBuffer(EHost->Text,12000,reinterpret_cast(&cmd_command));
But the length of the structure is also required. So it will be:
UDPClient1->SendBuffer(EHost->Text,12000,reinterpret_cast<char*>(&cmd_command), sizeof(cmd_command));
I also think it would be better to pack the structure by adding
#pragma pack(1)
before its definition. Packing removes the padding, so sizeof gives the actual size of the structure and both ends agree on its layout. With this you will be able to send the complete structure, and on the receiving side you can typecast the bytes back to the same structure.
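For illustration, a hedged sketch of that receiving side (receivedBytes is a hypothetical name for the raw bytes read from the socket; it assumes both ends compile CMD with the same #pragma pack(1)):
CMD cmd;
// Copy sizeof(CMD) raw bytes back into the struct.
memcpy(&cmd, receivedBytes, sizeof(cmd));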
I am trying to pass data from an x64 app to a x86 app using named pipes and overlapped I/O like what is defined here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365603(v=vs.85).aspx
My server application's call to WriteFileEx succeeds and the structure I am sending through the pipe seems OK; however, when I call ReadFile on the client side, the data structure I retrieve is corrupted or different from the data that I sent, even though the read also reports success.
My client application has a unicode character set and the server's character set is 'not set', which I assume defaults to multibyte. I'm not in a position to change the server's character set to unicode.
Would this data corruption be because I need to convert from multibyte to wide char on the client after I retrieve/read the data structure? If so, are there built-in helper functions that I can call to do that?
Data structure being sent (defined identically on the server and client):
typedef struct
{
    int id;
    float vertices[VERTICES_COUNT][VERTICES_COMPONENTS];
    unsigned short indices[INDICES_COUNT];
    float texCoords[TEXTURE_COORD_COUNT][TEXTURE_COORD_COMPONENT];
    unsigned char texData[TEXTURE_SIZE];
} MESHINST, *LPMESHINST;

typedef struct
{
    OVERLAPPED oOverlap;
    HANDLE pipeInst;
    int addedCount;
    MESHINST meshes[MESH_GROUP_BUFFER];
    int removedCount;
    int removed[MESH_REMOVE_BUFFER];
} MESHGROUPINST, *LPMESHGROUPINST;
WriteFileEx call on the server:
LPMESHGROUPINST meshes = (LPMESHGROUPINST)lpOverLap;
fWrite = WriteFileEx(
    meshes->pipeInst,
    (wchar_t*)meshes,
    sizeof(MESHGROUPINST),
    (LPOVERLAPPED)meshes,
    (LPOVERLAPPED_COMPLETION_ROUTINE)CompletedWriteRoutine);
ReadFile call on the client:
(in header)
MESHGROUPINST _meshes;
(in cpp)
do
{
    _success = ReadFile(
        _pipe,
        (wchar_t*)&_meshes,
        sizeof(MESHGROUPINST),
        &_numOfBytesRead,
        NULL);
} while (!_success);
What is the type of _meshes in the ReadFile call? If it's a pointer, you'll be reading into the pointer, not the data being pointed to:
&_meshes
Should be:
_meshes
Also, it looks like you're writing process-specific HANDLE and OVERLAPPED info. Did you mean to write those?
You'll need to add more code for better help.
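As a hedged sketch of that last point (MESHPAYLOAD is a hypothetical name): keep the wire payload separate from the per-process I/O bookkeeping, so the HANDLE and OVERLAPPED never cross the pipe:
typedef struct
{
    int addedCount;
    MESHINST meshes[MESH_GROUP_BUFFER];
    int removedCount;
    int removed[MESH_REMOVE_BUFFER];
} MESHPAYLOAD;

typedef struct
{
    OVERLAPPED oOverlap; // per-process, stays on the server
    HANDLE pipeInst;     // per-process, stays on the server
    MESHPAYLOAD payload; // the only part passed to WriteFileEx
} MESHGROUPINST;
The write would then pass &meshes->payload with sizeof(MESHPAYLOAD).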
You need to ensure the structure is sent and received with 1-byte packing. Use #pragma pack(1) around the struct you wish to send/receive:
#pragma pack(1)
typedef struct
{
    int id;
    float vertices[VERTICES_COUNT][VERTICES_COMPONENTS];
    unsigned short indices[INDICES_COUNT];
    float texCoords[TEXTURE_COORD_COUNT][TEXTURE_COORD_COMPONENT];
    unsigned char texData[TEXTURE_SIZE];
} MESHINST, *LPMESHINST;
#pragma pack()
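If your compiler supports it, the push/pop form is a little safer, since it restores whatever packing was previously in effect instead of resetting it to the default:
#pragma pack(push, 1)
typedef struct
{
    /* same members as above */
} MESHINST, *LPMESHINST;
#pragma pack(pop)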
Maybe I missed something from the tutorials because this is driving me nuts.
What I'm trying to accomplish: I want to create an array of structs for the OpenCL device to use as a work area. The host doesn't need to see it or interact with it in any way, it's just meant as a "scratch" space for the kernel to work within.
Here's what I have:
Declaration of struct inside header file accessible by both the main program and the OpenCL kernel:
typedef struct {
    uint64_t a;
    uint32_t b;
} result_list;
Initializing the scratch space buffer "outputBuffer" to hold MAX_SIZE elements:
cl_mem outputBuffer;
outputBuffer = clCreateBuffer(this->context,
                              CL_MEM_READ_WRITE,
                              sizeof(result_list) * MAX_SIZE,
                              NULL,
                              &status);
I never call clEnqueueWriteBuffer because the host doesn't care what the memory is. It's simply meant to be a working space for the kernel. I leave it uninitialized but allocated.
Setting it as an argument for the kernel to use:
status = clSetKernelArg(myKernel,
                        1,
                        sizeof(cl_mem),
                        &this->outputBuffer);
The kernel (simplified to remove non-issue sections):
__kernel void kernelFunc(__global const uint32_t *input, __global result_list *outputBuffer) {
    if (get_global_id(0) >= MAX_SIZE) { return; }
    // Make a few local variables and play with them
    outputBuffer[0].a = 1234; // Memory access violation here
    // Code never reaches here
}
What am I doing wrong?
I installed CodeXL from AMD and it doesn't help much with debugging issues like these. The most it gives me is "The thread tried to read from or write to a virtual address to which it does not have access."
edit: It seems like it really doesn't like typedefs. Instead of using a struct, I simplified it to typedef uint64_t result_list and it refused to compile, saying "a value of type 'ulong' cannot be assigned to an entity of type 'result_list'", even though result_list -> uint64_t -> unsigned long.
Your problem is that you cannot put the definitions for both HOST and DEVICE in a single header.
You have to separate them, like this:
// HOST header
struct mystruct {
    cl_ulong a;
    cl_uint b;
};

// DEVICE header
typedef struct {
    ulong a;
    uint b;
} mystruct;
Notice that I also changed the data types to the standard OpenCL data types. You should use those instead for compatibility.
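As a hedged illustration, the buffer creation from the question would then use the host-side definition, so host and device agree on the element size:
cl_mem outputBuffer = clCreateBuffer(context,
                                     CL_MEM_READ_WRITE,
                                     sizeof(struct mystruct) * MAX_SIZE,
                                     NULL,
                                     &status);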
I have two programs. I need one of them to send data and the other to receive that data.
I have some code in place that is hopefully sending a struct across the network.
However, I don't even know if it is working properly, because I don't know how to code the receiving program to receive structs and copy the received data into a local struct to be manipulated.
Here is the code I'm using to send if it helps any
gamePacket.PplayerX = userSprite.x;
gamePacket.PplayerY = userSprite.y;
gamePacket.Plives = lives;
gamePacket.Pstate = state;
for (int z = 0; z < 8; z++)
{
    gamePacket.PenemyX[z] = enemySprite[z].x;
    gamePacket.PenemyY[z] = enemySprite[z].y;
}
char Buffer[sizeof(gamePacket)];
UDPSocket.Send(Buffer);
The struct is called Packet and gamePacket is an instance of it.
What I am stuck with is:
1. Is the code I posted even sending the struct?
2. How do I receive the struct in the receiving program so that I can use the data inside it?
It's not sent; you only declare a buffer. To send it, you need to fill it first. Also be careful with sizeof here: it includes padding, so it may not equal the sum of the field sizes; for a wire format, count the fields up yourself.
When you receive everything, you do the opposite: you allocate a struct and fill it using offsets.
If you need examples, just ask. But learning is doing research, so I think a push in the right direction is enough. (There are a thousand examples of this.)
PS: you can use pointer + offset because the memory of the struct is laid out contiguously. It is blocks of memory, just like an array.
EDIT; this link is what you need: Passing a structure through Sockets in C
EDIT: Example using pointers:
EDIT: Is this C# or C/C++? I'm sorry if it's C#; adapt the C/C++ example accordingly. ;)
struct StructExample
{
    int x;
    int y;
};

int GetBytes(struct StructExample* s, void* buf)
{
    // Write each field at its byte offset in the raw buffer.
    *(int*)((char*)buf + 0) = s->x;
    *(int*)((char*)buf + sizeof(int)) = s->y;
    return sizeof(s->x) + sizeof(s->y);
}
PS: I typed this on my phone, so I'm not 100% sure it compiles/works.
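A hedged usage sketch of GetBytes (assuming FSocket is a connected socket, as in the answer below):
struct StructExample s = { 1, 2 };
char buf[2 * sizeof(int)];  // big enough for both ints
int n = GetBytes(&s, buf);
send(FSocket, buf, n, 0);   // send exactly the serialized bytes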
In C and C++ it is possible to use this code:
struct StructExample
{
    int x;
    int y;
};

struct StructExample a;
a.x = 1; // a is not a pointer, so use . rather than ->
a.y = 2;
send(FSocket, &a, sizeof(a), 0);
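And a minimal hedged sketch of the receiving side (assuming the identical struct definition on both machines; real code must also handle partial reads and byte order):
struct StructExample b;
recv(FSocket, &b, sizeof(b), 0);
// b.x and b.y now hold the sender's values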
My application sends varints many times. Every time, I have to allocate memory for two objects, a CodedOutputStream and a FileOutputStream, and later release them. IMO this is an unnecessary waste of time. How can I send the varint without all of this? (I don't want to do it manually, but with protobuf.)
I have found this:
delete coded_output;
/* delete raw_output; */
((FileOutputStream*)raw_output)->Flush();
but there is still one object to allocate every time.
void Connection::send(const Message& msg) throw(EmptyMessage) {
    //CodedOutputStream* coded_output = new CodedOutputStream(raw_output);
    CodedOutputStream coded_output(raw_output);
    int n = msg.ByteSize();
    if (n <= 0) throw EmptyMessage();
    //coded_output->WriteVarint32(n);
    coded_output.WriteVarint32(n);
    //delete coded_output;
    coded_output.~CodedOutputStream();
    raw_output->Flush();
    msg.SerializeToArray(buffer, n);
    SocketMaintenance::write(buffer, n);
}

Annoucement Connection::receive() throw(EmptySocket) {
    //CodedInputStream* coded_input = new CodedInputStream(raw_input);
    CodedInputStream coded_input(raw_input);
    google::protobuf::uint32 n;
    //coded_input->ReadVarint32(&n);
    coded_input.ReadVarint32(&n);
    char *b;
    int m;
    //coded_input->GetDirectBufferPointer((const void**)&b, &m);
    coded_input.GetDirectBufferPointer((const void**)&b, &m);
    Annoucement ann;
    ann.ParseFromArray(b, n);
    return ann;
}
When I use the code above, I get this runtime error from my client application (this app uses only the send function):
libprotobuf FATAL google/protobuf/io/zero_copy_stream_impl_lite.cc:346] CHECK failed: (buffer_used_) == (buffer_size_): BackUp() can only be called after Next().
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): CHECK failed: (buffer_used_) == (buffer_size_): BackUp() can only be called after Next().
Stopped
When I use the commented-out parts of the code instead of the corresponding lines, everything works fine.
You don't have to allocate the CodedOutputStream on the heap; you can just declare it as a local variable (or class member) where you need it. The constructor doesn't look particularly expensive.
Are you always writing to the same file? If so, you can use a single CodedOutputStream and FileOutputStream for all the writes.
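A hedged sketch of the send path along those lines. Note that calling the destructor explicitly, as the question's code does, makes it run a second time when the variable goes out of scope; that is undefined behavior and plausibly the source of the BackUp() CHECK failure:
void Connection::send(const Message& msg) {
    int n = msg.ByteSize();
    if (n <= 0) throw EmptyMessage();
    {
        // Stack-allocated; the destructor runs exactly once, at the end of
        // this scope, returning unused buffer space to raw_output.
        CodedOutputStream coded_output(raw_output);
        coded_output.WriteVarint32(n);
    }
    raw_output->Flush();
    msg.SerializeToArray(buffer, n);
    SocketMaintenance::write(buffer, n);
}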