I am trying to develop an application with Visual Studio C++ using some communication DLLs.
In one of the DLLs, I get a stack overflow exception.
I have two functions: one receives packets, and another performs some operations on the packets.
static EEcpError RxMessage(unsigned char SrcAddr, unsigned char SrcPort, unsigned char DestAddr, unsigned char DestPort, unsigned char* pMessage, unsigned long MessageLength)
{
EEcpError Error = ERROR_MAX;
TEcpChannel* Ch = NULL;
TDevlinkMessage* RxMsg = NULL;
// Check the packet is sent to an existing port
if (DestPort < UC_ECP_CHANNEL_NB)
{
Ch = &tEcpChannel[DestPort];
RxMsg = &Ch->tRxMsgFifo.tDevlinkMessage[Ch->tRxMsgFifo.ucWrIdx];
// Check the packet is not empty
if ((0UL != MessageLength)
&& (NULL != pMessage))
{
if (NULL == RxMsg->pucDataBuffer)
{
// Copy the packet
RxMsg->ucSrcAddr = SrcAddr;
RxMsg->ucSrcPort = SrcPort;
RxMsg->ucDestAddr = DestAddr;
RxMsg->ucDestPort = DestPort;
RxMsg->ulDataBufferSize = MessageLength;
RxMsg->pucDataBuffer = (unsigned char*)malloc(RxMsg->ulDataBufferSize);
if (NULL != RxMsg->pucDataBuffer)
{
memcpy(RxMsg->pucDataBuffer, pMessage, RxMsg->ulDataBufferSize);
// Prepare for next message
if ((UC_ECP_FIFO_DEPTH - 1) <= Ch->tRxMsgFifo.ucWrIdx)
{
Ch->tRxMsgFifo.ucWrIdx = 0U;
}
else
{
Ch->tRxMsgFifo.ucWrIdx += 1U;
}
// Synchronize the application
if (0 != OS_MbxPost(Ch->hEcpMbx))
{
Error = ERROR_NONE;
}
else
{
Error = ERROR_WINDOWS;
}
}
else
{
Error = ERROR_WINDOWS;
}
}
else
{
// That should never happen. If it does, the FIFO is full: either the FIFO
// size should be increased, or the listening thread is no longer processing
// the messages. In that case, the newly received message is lost (until the
// pending messages are processed, or forever...).
Error = ERROR_FIFO_FULL;
}
}
else
{
Error = ERROR_INVALID_PARAMETER;
}
}
else
{
// Trash the packet, nothing else to do
Error = ERROR_NONE;
}
return Error;
}
static EEcpError ProcessNextRxMsg(unsigned char Port, unsigned char* SrcAddr, unsigned char* SrcPort, unsigned char* DestAddr, unsigned char* Packet, unsigned long* PacketSize)
{
EEcpError Error = ERROR_MAX;
TEcpChannel* Ch = &tEcpChannel[Port];
TDevlinkMessage* RxMsg = &Ch->tRxMsgFifo.tDevlinkMessage[Ch->tRxMsgFifo.ucRdIdx];
if (NULL != RxMsg->pucDataBuffer)
{
*SrcAddr = RxMsg->ucSrcAddr;
*SrcPort = RxMsg->ucSrcPort;
*DestAddr = RxMsg->ucDestAddr;
*PacketSize = RxMsg->ulDataBufferSize;
memcpy(Packet, RxMsg->pucDataBuffer, RxMsg->ulDataBufferSize);
// Cleanup the processed message
free(RxMsg->pucDataBuffer); // <= Exception stack overflow after 40 min
RxMsg->pucDataBuffer = NULL;
RxMsg->ulDataBufferSize = 0UL;
RxMsg->ucSrcAddr = 0U;
RxMsg->ucSrcPort = 0U;
RxMsg->ucDestAddr = 0U;
RxMsg->ucDestPort = 0U;
// Prepare for next message
if ((UC_ECP_FIFO_DEPTH - 1) <= Ch->tRxMsgFifo.ucRdIdx)
{
Ch->tRxMsgFifo.ucRdIdx = 0U;
}
else
{
Ch->tRxMsgFifo.ucRdIdx += 1U;
}
Error = ERROR_NONE;
}
else
{
Error = ERROR_NULL_POINTER;
}
return Error;
}
The problem occurs after about 40 minutes; during all this time I receive a lot of packets, and everything goes well.
After 40 minutes, the stack overflow exception occurs on the free.
I don't know what is going wrong.
Can anyone help me, please?
Thank you.
A few suggestions:
The line
memcpy(Packet, RxMsg->pucDataBuffer, RxMsg->ulDataBufferSize);
is slightly suspect as it occurs just before the free() call which crashes. How is Packet allocated and how are you making sure a buffer overflow does not occur here?
If this is an asynchronous / multi-threaded program do you have the necessary locks to prevent data from being written/read at the same time?
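If not, here is a minimal sketch of one way to add them on Win32; the lock name and helper functions are assumptions, not part of the original code:

#include <windows.h>

static CRITICAL_SECTION g_FifoLock;  /* assumed global; protects the tEcpChannel[] FIFOs */

static void FifoLockInit(void) { InitializeCriticalSection(&g_FifoLock); }
static void FifoLock(void)     { EnterCriticalSection(&g_FifoLock); }
static void FifoUnlock(void)   { LeaveCriticalSection(&g_FifoLock); }

/* In RxMessage():        FifoLock(); ...copy the packet, advance ucWrIdx...;        FifoUnlock();
   In ProcessNextRxMsg(): FifoLock(); ...read the packet, free(), advance ucRdIdx...; FifoUnlock(); */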
Your best bet, if you still need to find the issue, is to run a tool like Valgrind to help diagnose and narrow down memory issues more precisely. As dasblinklight mentions in the comments, the issue most likely originates somewhere else and merely happens to show up at the free() call.
Related
I have a function for receiving messages of variable length over TCP. The send function creates a buffer, puts the length of the message in the first four bytes, fills the rest with the message, and sends it in parts. But the receive function was receiving 4 bytes less. And suddenly, when I added one printf, everything started working as it should.
bool TCP_Server::recvMsg(SOCKET client_sock, std::unique_ptr<char[]>& buf_ptr, int* buf_len)
{
int msg_len;
int rcvd = 0, tmp;////
/* get msg len */
if((tmp = recv(client_sock, (char*)&msg_len, sizeof(msg_len), 0)) == -1)
{
handle_error("recv");
return false;
}
*buf_len = msg_len;
printf("msg_len = %d\n", msg_len); //
printf("tmp getting msg_len = %d\n", tmp);//
rcvd += tmp;//
buf_ptr.reset((char*)malloc(msg_len));
if(buf_ptr.get() == nullptr) // not enough memory
{
handle_error("malloc");
return false;
}
/* get msg of specified len */
/* get by biggest available pieces */
int i = 1;
while(int(msg_len - 1440 * i) > 0)
{
char* cur_ptr = buf_ptr.get() + 1440 * (i - 1);
if((tmp=recv(client_sock, cur_ptr, 1440, 0)) == -1)
{
handle_error("recv");
return false;
}
printf("1440 = %d\n", tmp); // doesn't work if I comment this line
rcvd += tmp;
i++;
}
int rest = msg_len - 1440 * (i - 1);
/* get the rest */
if((tmp = recv(client_sock, buf_ptr.get() + msg_len - rest, rest, 0)) == -1)
{
handle_error("(recv)reading with msg_len");
return false;
}
rcvd += tmp;//
printf("rcvd = %d\n", rcvd);//
return true;
}
In sum, if I comment out printf("1440 = %d\n", tmp);, the function receives 4 bytes less.
I'm compiling with x86 Debug.
Here are the differing lines in asm (/FA flag): http://text-share.com/view/50743a5e
But I don't see anything suspicious there.
printf writes to the console, which is a fairly slow operation, relatively speaking. The extra delay it produces might easily change how much data has arrived in the buffer when you call recv.
As Tulon comments, reads from TCP streams can be any length. TCP doesn't preserve message boundaries, so they don't necessarily match the send sizes on the other end. And if less data has been sent across the network than you asked to read, you'll get what is available.
Solution: stop thinking in 1440-byte chunks. Get rid of i and simply compare rcvd to msg_len.
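A minimal sketch of that loop, reusing the question's names (a sketch, not a drop-in patch; the 4-byte length read above deserves the same treatment):

int rcvd = 0;
while (rcvd < msg_len)
{
    int tmp = recv(client_sock, buf_ptr.get() + rcvd, msg_len - rcvd, 0);
    if (tmp <= 0)   // -1 = error, 0 = peer closed the connection
    {
        handle_error("recv");
        return false;
    }
    rcvd += tmp;    // advance by however many bytes actually arrived
}
return true;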
I am trying to read complete messages from my GPS via a serial port.
The message I am looking for starts with:
0xB5 0x62 0x02 0x13
So I read from the serial port like so:
while (running != 0)
{
    int n = read(fd, input_buffer, sizeof input_buffer);
    for (int i = 0; i + 3 < n; i++)  // stay within the bytes actually read
    {
        if (input_buffer[i] == 0xB5 && input_buffer[i+1] == 0x62 &&
            input_buffer[i+2] == 0x02 && input_buffer[i+3] == 0x13)
        {
            // process the message.
        }
    }
}
The problem I am having is that I need to get a complete message. Half of a message could be in the buffer on one iteration, and the other half could arrive on the next iteration.
Somebody suggested freeing the buffer of the completed message and then moving the rest of the data to the beginning of the buffer.
How do I do that, or is there any other way to make sure I get every complete message of the selected type?
Edit: I want a particular class and ID, but I can also read in the length.
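For illustration, a minimal sketch of the suggested carry-over, using the question's names; ParseMessages() is a hypothetical helper that scans for complete messages and returns how many bytes it consumed:

unsigned char input_buffer[BUFFER_SIZE];
size_t pending = 0;  // bytes carried over from the previous read

while (running != 0)
{
    ssize_t n = read(fd, input_buffer + pending, sizeof input_buffer - pending);
    if (n <= 0)
        break;  // error or EOF; handle as appropriate
    size_t avail = pending + (size_t)n;
    size_t consumed = ParseMessages(input_buffer, avail);  // hypothetical parser
    // Keep the incomplete tail for the next iteration.
    memmove(input_buffer, input_buffer + consumed, avail - consumed);
    pending = avail - consumed;
}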
To minimize the overhead of making many read() syscalls of small byte counts, use an intermediate buffer in your code.
The read()s should be in blocking mode to avoid a return code of zero bytes.
#define BLEN 1024
unsigned char rbuf[BLEN];
unsigned char *rp = &rbuf[BLEN];
int bufcnt = 0;
static unsigned char getbyte(void)
{
if ((rp - rbuf) >= bufcnt) {
/* buffer needs refill */
bufcnt = read(fd, rbuf, BLEN);
if (bufcnt <= 0) {
/* report error, then abort */
}
rp = rbuf;
}
return *rp++;
}
For proper termios initialization code for the serial terminal, see this answer. You should increase the VMIN parameter to something closer to the BLEN value.
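For illustration, a hedged sketch of raising VMIN; note that VMIN is a cc_t, typically an unsigned char, so 255 is the practical ceiling even though BLEN is 1024:

#include <termios.h>

struct termios tty;
tcgetattr(fd, &tty);    // error checking omitted in this sketch
tty.c_cc[VMIN]  = 255;  // block until up to 255 bytes are buffered...
tty.c_cc[VTIME] = 1;    // ...or 0.1 s passes between bytes
tcsetattr(fd, TCSANOW, &tty);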
Now you can conveniently access the received data a byte at a time with minimal performance penalty.
#define MLEN 1024 /* choose appropriate value for message protocol */
unsigned char mesg[MLEN];
unsigned char sync, class, id, chka, chkb;  /* "class" is fine in C */
unsigned int length, i;
while (1) {
while (getbyte() != 0xB5)
/* hunt for 1st sync */ ;
retry_sync:
if ((sync = getbyte()) != 0x62) {
if (sync == 0xB5)
goto retry_sync;
else
continue; /* restart sync hunt */
}
class = getbyte();
id = getbyte();
length = getbyte();
length += getbyte() << 8;
if (length > MLEN) {
/* report error, then restart sync hunt */
continue;
}
for (i = 0; i < length; i++) {
mesg[i] = getbyte();
/* accumulate checksum */
}
chka = getbyte();
chkb = getbyte();
if ( /* valid checksum */ )
break; /* verified message */
/* report error, and restart sync hunt */
}
/* process the message */
switch (class) {
case 0x02:
if (id == 0x13) {
...
...
You can break the read into three parts. Find the start of a message. Then get the LENGTH. Then read the rest of the message.
// Should probably clear these in case data left over from a previous read
input_buffer[0] = input_buffer[1] = 0;
// First make sure first char is 0xB5
do {
n = read(fd, input_buffer, 1);
} while (0xB5 != input_buffer[0]);
// Check for 2nd sync char
n = read(fd, &input_buffer[1], 1);
if (input_buffer[1] != 0x62) {
// Error
return;
}
// Read up to LENGTH
n = read(fd, &input_buffer[2], 4);
// Parse length
//int length = *((int *)&input_buffer[4]);
// Since I don't know what size an int is on your system, this way is better
int length = input_buffer[4] | (input_buffer[5] << 8);
// Read rest of message: the payload plus the 2-byte checksum that follows it
n = read(fd, &input_buffer[6], length + 2);
// input_buffer should now hold a complete message
You should add error checking...
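One piece of that error checking is worth sketching: read() can return fewer bytes than requested, so each fixed-size read above should loop until the full count arrives (read_fully is an assumed helper name, not part of the answer's code):

static int read_fully(int fd, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            return -1;  /* error or EOF; caller decides how to recover */
        got += (size_t)n;
    }
    return 0;
}

/* e.g. replace  n = read(fd, &input_buffer[2], 4);
   with          read_fully(fd, &input_buffer[2], 4);  */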
I'm trying to get the serial number of a USB device using libusb-1.0.
The problem I have is that sometimes the libusb_get_string_descriptor_ascii() function returns -7 (LIBUSB_ERROR_TIMEOUT) in my code, but at other times the serial number is correctly written to my array, and I can't figure out what is happening. Am I using libusb incorrectly? Thank you.
void EnumerateUsbDevices(uint16_t uVendorId, uint16_t uProductId) {
libusb_context *pContext;
libusb_device **ppDeviceList;
libusb_device_descriptor oDeviceDescriptor;
libusb_device_handle *hHandle;
int iReturnValue = libusb_init(&pContext);
if (iReturnValue != LIBUSB_SUCCESS) {
return;
}
libusb_set_debug(pContext, 3);
ssize_t nbUsbDevices = libusb_get_device_list(pContext, &ppDeviceList);
for (ssize_t i = 0; i < nbUsbDevices; ++i) {
libusb_device *pDevice = ppDeviceList[i];
iReturnValue = libusb_get_device_descriptor(pDevice, &oDeviceDescriptor);
if (iReturnValue != LIBUSB_SUCCESS) {
continue;
}
if (oDeviceDescriptor.idVendor == uVendorId && oDeviceDescriptor.idProduct == uProductId) {
iReturnValue = libusb_open(pDevice, &hHandle);
if (iReturnValue != LIBUSB_SUCCESS) {
continue;
}
unsigned char uSerialNumber[255] = {};
int iSerialNumberSize = libusb_get_string_descriptor_ascii(hHandle, oDeviceDescriptor.iSerialNumber, uSerialNumber, sizeof(uSerialNumber));
std::cout << iSerialNumberSize << std::endl; // Print size of serial number <--
libusb_close(hHandle);
}
}
libusb_free_device_list(ppDeviceList, 1);
libusb_exit(pContext);
}
I see nothing wrong with your code. I would not worry too much about timeouts in the context of USB; it is a bus, after all, and can be occupied with other traffic.
As you may know, depending on the USB version, a portion of the bandwidth is reserved for control transfers. libusb_get_string_descriptor_ascii simply sends all the required control transfers to get the string; if any of them times out, it aborts. You can try to send these control transfers yourself and use bigger timeout values, but I guess the possibility of a timeout will always be there to wait for you (pun intended).
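For illustration, a hedged sketch of issuing that control transfer yourself with a larger timeout. Unlike libusb_get_string_descriptor_ascii(), this returns the raw UTF-16LE string descriptor, and the langid 0x0409 (US English) used here is an assumption; the proper value comes from string descriptor 0:

unsigned char desc[255];
int r = libusb_control_transfer(
    hHandle,
    LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_STANDARD | LIBUSB_RECIPIENT_DEVICE,
    LIBUSB_REQUEST_GET_DESCRIPTOR,
    (LIBUSB_DT_STRING << 8) | oDeviceDescriptor.iSerialNumber,
    0x0409,         // langid (assumed; read string descriptor 0 for the real one)
    desc, sizeof(desc),
    5000);          // timeout in milliseconds, much larger than the default
if (r < 0) {
    // still timed out or failed; handle as before
}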
So it turns out my device was getting into weird states, possibly not being closed properly or the like. Anyway, calling libusb_reset_device(hHandle); just after the libusb_open() call seems to fix my sporadic timeout issue.
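For reference, a placement sketch in the loop from the question (libusb_reset_device() returns 0 on success; LIBUSB_ERROR_NOT_FOUND means re-enumeration kicked in and the device must be re-opened):

iReturnValue = libusb_open(pDevice, &hHandle);
if (iReturnValue != LIBUSB_SUCCESS) {
    continue;
}
libusb_reset_device(hHandle);  // check the return value in real code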
So I have this little piece of code: it loops through memory regions, saves each one to a byte array, uses it, and finally deletes (deallocates) it. This all happens in a non-main thread, hence the use of critical sections.
The code looks like this:
SIZE_T addr_min = (SIZE_T)sysInfo.lpMinimumApplicationAddress;
SIZE_T addr_max = (SIZE_T)sysInfo.lpMaximumApplicationAddress;
while (addr_min < addr_max)
{
MEMORY_BASIC_INFORMATION mbi = { 0 };
if (!::VirtualQueryEx(hndl, (LPCVOID)addr_min, &mbi, sizeof(mbi)))
{
break; // a bare "continue" would retry the same address forever
}
if (mbi.State == MEM_COMMIT && ((mbi.Protect & PAGE_GUARD) == 0) && ((mbi.Protect & PAGE_NOACCESS) == 0))
{
SIZE_T region_size = mbi.RegionSize;
PVOID Base_Address = mbi.BaseAddress;
BYTE * dump = new BYTE[region_size + 1];
EnterCriticalSection(...);
memset(dump, 0x00, region_size + 1);
//this is where it crashes, same thing with memcpy
//Access violation reading "dump"'s address:
//memmove(unsigned char * dst=0x42aff024, unsigned char *
//src=0x7a768000, unsigned long count=1409024)
std::memmove(dump, Base_Address, region_size);
LeaveCriticalSection(...);
//Do Stuff with dump, that only involves reading from it
if (dump){
delete[] dump;
dump = NULL;
}
}
addr_min += mbi.RegionSize;
}
The code works fine most of the time, but sometimes it just crashes in memcpy/memmove. Under the Visual Studio debugger it shows that the crash is due to an error reading "dump"'s address. How is that possible if I have just defined and allocated memory for it? Thanks!
Also, could it be because the memory can change in the middle of memcpy?
The Thrift version is 0.8. I'm implementing my own Thrift transport layer on the client side in C++, using the Binary protocol; my server uses the framed transport and binary protocol, and it works fine for sure. I get a "No more data to read" exception in the readAll function in TTransport.h. I traced the call chain and found this in TBinaryProtocol.tcc:
template <class Transport_>
uint32_t TBinaryProtocolT<Transport_>::readMessageBegin(std::string& name,
TMessageType& messageType,
int32_t& seqid) {
uint32_t result = 0;
int32_t sz;
result += readI32(sz); // sz should be the whole return buf len without the first 4 bytes?
if (sz < 0) {
// Check for correct version number
int32_t version = sz & VERSION_MASK;
if (version != VERSION_1) {
throw TProtocolException(TProtocolException::BAD_VERSION, "Bad version identifier");
}
messageType = (TMessageType)(sz & 0x000000ff);
result += readString(name);
result += readI32(seqid);
} else {
if (this->strict_read_) {
throw TProtocolException(TProtocolException::BAD_VERSION, "No version identifier... old protocol client in strict mode?");
} else {
// Handle pre-versioned input
int8_t type;
result += readStringBody(name, sz);
result += readByte(type); // No more data to read in buf, so exception here
messageType = (TMessageType)type;
result += readI32(seqid);
}
}
return result;
}
So my question is: with the framed transport, the data layout should ONLY be size + content (result, seqid, function name, ...), which is exactly what my server packs. My client then reads the first 4 bytes as the length and uses it to fetch the whole content. Is there anything else left to read?
Here is my client code; I believe it is quite simple. I have emphasized the most important part.
class CthriftCli
{
......
TMemoryBuffer write_buf_;
TMemoryBuffer read_buf_;
enum CthriftConn::State state_;
uint32_t frameSize_;
};
void CthriftCli::OnConn4SgAgent(const TcpConnectionPtr& conn)
{
if(conn->connected() ){
conn->setTcpNoDelay(true);
wp_tcp_conn_ = boost::weak_ptr<muduo::net::TcpConnection>(conn);
if(unlikely(!(sp_countdown_latch_4_conn_.get()))) {
return;
}
sp_countdown_latch_4_conn_->countDown();
}
}
void CthriftCli::OnMsg4SgAgent(const muduo::net::TcpConnectionPtr& conn,
muduo::net::Buffer* buffer,
muduo::Timestamp receiveTime)
{
bool more = true;
while (more)
{
if (state_ == CthriftConn::kExpectFrameSize)
{
if (buffer->readableBytes() >= 4)
{
frameSize_ = static_cast<uint32_t>(buffer->peekInt32());
state_ = CthriftConn::kExpectFrame;
}
else
{
more = false;
}
}
else if (state_ == CthriftConn::kExpectFrame)
{
if (buffer->readableBytes() >= frameSize_)
{
uint8_t* buf = reinterpret_cast<uint8_t*>((const_cast<char*>(buffer->peek())));
read_buf_.resetBuffer(buf, sizeof(int32_t) + frameSize_, TMemoryBuffer::COPY); // all the returned buf, including the first 4 size bytes
if(unlikely(!(sp_countdown_latch_.get()))){
return;
}
sp_countdown_latch_->countDown();
buffer->retrieve(sizeof(int32_t) + frameSize_);
state_ = CthriftConn::kExpectFrameSize;
}
else
{
more = false;
}
}
}
}
uint32_t CthriftCli::read(uint8_t* buf, uint32_t len) {
if (read_buf_.available_read() == 0) {
if(unlikely(!(sp_countdown_latch_.get()))){
return 0;
}
sp_countdown_latch_->wait();
}
return read_buf_.read(buf, len);
}
void CthriftCli::readEnd(void) {
read_buf_.resetBuffer();
}
void CthriftCli::write(const uint8_t* buf, uint32_t len) {
return write_buf_.write(buf, len);
}
uint32_t CthriftCli::writeEnd(void)
{
uint8_t* buf;
uint32_t size;
write_buf_.getBuffer(&buf, &size);
if(unlikely(!(sp_countdown_latch_4_conn_.get()))) {
return 0;
}
sp_countdown_latch_4_conn_->wait();
TcpConnectionPtr sp_tcp_conn(wp_tcp_conn_.lock());
if (sp_tcp_conn && sp_tcp_conn->connected()) {
muduo::net::Buffer send_buf;
send_buf.appendInt32(size);
send_buf.append(buf, size);
sp_tcp_conn->send(&send_buf);
write_buf_.resetBuffer(true);
} else {
#ifdef MUDUO_LOG
MUDUO_LOG_ERROR << "conn error, NOT send";
#endif
}
return size;
}
So please give me some hints about this?
You seem to have mixed up the concepts of 'transport' and 'protocol'.
Binary Protocol describes how data should be encoded (protocol layer).
Framed Transport describes how encoded data should be delivered (forwarded by message length) - transport layer.
Important part: the Binary Protocol is not (and should not be) aware of any transport issues. So if you add a frame size while encoding at the transport level, you should also interpret the incoming size before passing the read bytes to the Binary Protocol for decoding. You can (for example) use it to read all the required bytes at once, etc.
After a quick look through your code: try reading the 4 bytes of frame size instead of peeking at them. Those bytes should not be visible outside the transport layer.
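A minimal sketch of that change in OnMsg4SgAgent(), using the question's names and assuming muduo's Buffer::readInt32(), which reads and retrieves the four size bytes in one step so they never reach the Thrift protocol:

if (state_ == CthriftConn::kExpectFrameSize) {
    if (buffer->readableBytes() >= 4) {
        frameSize_ = static_cast<uint32_t>(buffer->readInt32()); // consume, don't peek
        state_ = CthriftConn::kExpectFrame;
    } else {
        more = false;
    }
} else if (state_ == CthriftConn::kExpectFrame) {
    if (buffer->readableBytes() >= frameSize_) {
        uint8_t* buf = reinterpret_cast<uint8_t*>(const_cast<char*>(buffer->peek()));
        read_buf_.resetBuffer(buf, frameSize_, TMemoryBuffer::COPY); // frame payload only
        buffer->retrieve(frameSize_);
        state_ = CthriftConn::kExpectFrameSize;
        // ... countdown latch as before ...
    } else {
        more = false;
    }
}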