I'm working on a WinPcap project and trying to do some basic pointer and memory operations, but I'm getting lots of errors.
I've included the two lines I'm trying to run along with the includes.
The same lines work just fine in another Visual Studio C++ project. This is the error I am getting:
Unhandled exception at 0x75a79617 in pktdump_ex.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f8e4.
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <string>
#include "DataTypes.h"
#include <sstream>
#include "EthernetLayer.h"
#include <pcap.h>
int* testPointer = new int[2];
delete[] testPointer;
EDIT:
Found out something useful.
The following code snippet is what crashes inside the WinPcap library.
EthernetStructPointers* testData;
testData = (EthernetStructPointers*)pkt_data;
EthernetStruct newData;
memcpy(newData.DEST_ADDRESS, testData->DEST_ADDRESS, 6);
These are the definitions of the structs.
struct EthernetStructPointers
{
u_char DEST_ADDRESS[6];
u_char SOURCE_ADDRESS[6];
u_char TYPE[2];
};
struct EthernetStruct
{
u_char DEST_ADDRESS[6];
u_char SOURCE_ADDRESS[6];
u_char TYPE[2];
u_char* dataPointer;
string DestAddress;
string SourceAddress;
string Type;
int length;
};
My guess is the free store is corrupted by one of the previous statements (perhaps by an incorrect use of the pcap interface), and you only learn of the error on the next memory allocation or release, when the manager detects it and throws a bad_alloc.
std::bad_alloc should be thrown when you try to new something and have run out of memory. Can you check how much free memory is available to your process?
I'm a C++ newbie and have been writing a C++ program, but it finally breaks when it calls a library function from ctime.
The error shows info like this:
malloc(): memory corruption
AFAIK, this error (memory corruption) should result from operating on an out-of-bounds memory address. And the print format represents YYYY-MM-DD-HH-MM, which is listed here and shows that the length should definitely be less than 100.
Additional info:
- The program is compiled with flags: "-O3 -g -Wall -Wextra -Werror -std=c++17"
- Compiler: g++ 7.4.0
- System: WSL Ubuntu-18
NOTE: this code DOES NOT compile and is NOT REPRODUCIBLE for the problem; see updates below.
/** class file **/
#include <sys/wait.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>
#include <ios>
#include <fcntl.h>
#include <algorithm>
#include <cctype>
#include <ctime>
#include <limits>
#include "cache-proxy.hpp"
static int PROXY_CONFIG = 0;
void get_timestamp(char *buffer, int len);
std::string get_cwd(void);
CacheProxy::CacheProxy(__attribute__((unused)) const std::string& node)
{
curr_dir = fs::get_cwd();
Logger::get().info("curr_dir " + curr_dir);
proxy_path = "/usr/sbin/squid";
std::string squid("squid");
char buff[200];
get_timestamp(buff, 200); // error pops
std::string proxy_config_path;
/** plenty of codes following, but commented**/
}
void ~CacheProxy(){}
void get_timestamp(char *buffer, int len)
{
time_t raw_time;
struct tm *time_info;
time(&raw_time);
time_info = std::localtime(&raw_time);
std::strftime(buffer, len, "%F-%H-%M", time_info);
return;
}
// originally from other files; moved into this file for convenience
std::string get_cwd(void)
{
char path[PATH_MAX];
std::string retval;
if (getcwd(path, sizeof(path)) != NULL) {
retval = std::string(path);
} else {
Logger::get().err("current_path", errno);
}
return retval;
}
/** header file **/
#pragma once
#include <string>
class CacheProxy:
{
private:
int server_pid;
std::string proxy_path;
std::string curr_dir;
std::string squid_pid_path;
;
public:
CacheProxy(const std::string&);
~CacheProxy() override;
};
/** main file **/
int main(){
Node node(); // the parameter is never used in the CacheProxy constructor though
CacheProxy proxy(node); // error pops
proxy.init();
}
Thanks for any advice or thoughts.
Updates:
The code has been updated as above and there are three major files. It shows the exact same sequence of logic as my original codebase, leaving out irrelevant code (I commented it out when I ran into the errors), but please forgive me for giving out such rough code.
Basically the error pops up during object initialization, and I currently assume the problem is either in get_cwd or localtime.
Please let me know if you need more information, though I think the other code really is irrelevant.
Updates Dec 21:
After commenting out different parts of the original code, I managed to locate the failing part but could not fix the bug. The opinions in the comments are indeed right that the memory corruption must originate somewhere beforehand; however, what I did to fix the problem is somewhat different from the other answers, since I use setcap for my program and cannot use valgrind in that scenario.
I used another tool called ASan (AddressSanitizer) to do the memory check. It was really easy to find where the memory corruption originated with this tool, and it gives a comprehensive analysis when the error occurs at runtime. I added support for it in the compiler flags and found that the main problem in my case was the memory allocation for the string variables in the CacheProxy class.
So far, it has turned out to be another problem, namely "why are there indirect memory leaks originating from allocating memory for string objects when the constructor of this class is called", which I will not expand on here in this question.
But it is a really good lesson for me that memory problems actually have various types and causes; you cannot just stare at the source code to solve a problem that is not an "index out of bound" or "illegal address access" (segfault) problem. Many tools are really handy and specialized in dealing with these things, so go and grab your tools.
Any crash inside malloc or free is probably caused by an earlier heap corruption.
Your memory is probably corrupted earlier.
If you're using Linux, try running your program under valgrind. Valgrind can help you find out this kind of error.
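A typical invocation might look like the following (the program name is a placeholder); compile with debug info so the reports carry source line numbers:

```shell
# myprog.cc is an illustrative name; -g keeps line numbers in the report
g++ -g -O0 myprog.cc -o myprog
valgrind --leak-check=full ./myprog
```

valgrind's memcheck flags invalid reads and writes at the instruction that performs them, rather than at the later malloc/free call that happens to notice the damage.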
The 'obvious fixes' referred to by David are:
#include <iostream>
#include <ctime>
#include <cstdio>
void get_timestamp(char *buffer, int len)
{
time_t raw_time;
struct tm *time_info;
time(&raw_time);
time_info = localtime(&raw_time); // the line of code that breaks
strftime(buffer, len, "%F-%H-%M", time_info);
return;
}
int main() {
char buff[100];
get_timestamp(buff, 100);
std::cout << std::string(buff);
return 0;
}
I'm debugging a simple C++ program using gdb and see that I get an error when I try to initialize temp_grid. I compile it by running
g++ -Wall initial.cc -o initial
Is there a way to avoid this segmentation fault with something inside the program?
#include <iostream>
#include <array>
#include <valarray>
#include <stdlib.h>
#include <memory>
using namespace std;
int main()
{
using std::array;
array<array<float, 1024>, 1024> grid ={};
// temp grid
array<array<float, 1024>, 1024> temp_grid ={};
return 0;
}
You are most likely overflowing the stack, which has relatively limited storage space for local variables. Try allocating them using dynamic storage (using new). For maximum robustness, use smart pointers (unique_ptr) to manage the pointers.
I'm trying to write a C++ DLL which uses OpenSSL to secure a connection to a server.
I'm genuinely puzzled by the fact that this code
#include "stdafx.h"
#include <string.h>
#include <iostream>
//SSL stuff
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <openssl/ossl_typ.h>
#include <openssl/applink.c>
//Winsock stuff
#pragma comment(lib, "ws2_32.lib")
{... Create a method in which we set up the SSL connection ...}
char* tSend = "{\"reqtype\":\"keyexchange\"}";
int sendSize = strlen(tSend);
int net_tSend = htonl(sendSize);
SSL_write(ssl, &net_tSend, 4);
SSL_write(ssl, tSend, sendSize);
works fine in a Console application, but crashes in my DLL.
Here's my exception:
Exception thrown at 0x00007FF865207DA0 (libeay32.dll) in TestsForPKCSDLL.exe: 0xC0000005: Access violation reading location 0x0000000000000000.
Thanks a lot for your time.
After a bit of research, it looks like the problem comes from the htonl() function.
u_long mylong = 10L;
int net_tSend = htonl(mylong);
Exception thrown at 0x00007FF863807DA0 (libeay32.dll) in TestsForPKCSDLL.exe: 0xC0000005: Access violation reading location 0x0000000000000000.
The exception is raised in libeay32.dll, which apparently is not loaded properly. I think that, because my code is in a DLL, it crashes if the calling program doesn't reference the SSL DLLs. I'll try to link libeay32 and ssleay32 statically and see if that works.
I need to read the information contained in a json file like this:
{"first":10, "second":"0", "P1":"1.e-20","P2":"1000","P3":"1000","P4":"1000","P5":"1"}
Since I do not have experience with this, I started by playing with the short code you can see below these lines. It compiles with no problem but gives a segmentation fault upon execution. The file general.json is in the same folder. The information contained in the JSON file is correctly printed on the screen if I comment out the last line. Could anyone tell me what I am doing wrong?
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <fstream> // fstream.h in old versions of g++
#include <iostream> //para cout
#include <sstream>
#include <json/json.h>
using namespace std;
int main() {
struct json_object *new_json, *json_arr, *json_reg, *json_field;
string line;
stringstream jsonfile;
ifstream json("file.json", ios::in);
{getline(json, line); do {jsonfile << line;} while (getline(json, line));}
json.close();
cout << jsonfile.str().c_str();
new_json=json_tokener_parse(jsonfile.str().c_str());
json_field=json_object_object_get(json_reg, "first");
}
You are using the json_reg pointer without initializing it, and the function dereferences it. You are (most likely) using json-c, where:
json_object_object_get calls json_object_object_get_ex on the object
json_object_object_get_ex does switch(jso->o_type) dereferencing an invalid pointer
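Under that assumption, the fix is to look the field up on the object returned by the parser instead of the uninitialized json_reg. A minimal sketch against json-c (error handling kept minimal; requires linking with -ljson-c):

```cpp
#include <json/json.h>
#include <cstdio>

int main() {
    struct json_object *new_json, *json_field;
    // parse a literal here for brevity; the question reads it from a file
    new_json = json_tokener_parse("{\"first\":10, \"second\":\"0\"}");
    if (new_json == NULL)
        return 1; // parse failed
    // query the parsed object, not an uninitialized pointer
    json_field = json_object_object_get(new_json, "first");
    printf("first = %d\n", json_object_get_int(json_field));
    json_object_put(new_json); // release the parse tree
    return 0;
}
```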
So, I'm trying to create a shared-memory segment in a C++ program, so I can, for example, write a simple character to it and read that character from another C++ program.
I've downloaded the Boost libraries, as I read it simplifies this process.
Basically I have two questions: first, how do I write to the segment after it's created? Second, what should I write in the second program in order to identify the segment and read the info in it?
This is what I've got so far. It's not a lot, but I'm still new to this (first program):
#include "stdafx.h"
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
int main(int argc, char *argv[])
{
using namespace boost::interprocess;
windows_shared_memory shared (create_only, "shm", read_write, 65536);
//created shared memory using the windows native library
mapped_region region (shared, read_write, 0 , 0 , (void*)0x3F000000);
//mapping it to a region using HEX
//Here I should write to the segment
return 0;
}
Thanks in advance. I'll be more than happy to provide any information needed in order to receive the appropriate help.
The following is a slightly modified example based on the Boost documentation on shared memory.
Note: When using windows_shared_memory, keep in mind that the shared memory block will automatically be destroyed when the last process that uses it exits. In the example below that means that if the server exits before the client has a chance to open the shared memory block, the client will throw an exception.
Server side:
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <cstdlib>
#include <string>
int main(int argc, char *argv[])
{
using namespace boost::interprocess;
//Create a native windows shared memory object.
windows_shared_memory shm (create_only, "shm", read_write, 65536);
//Map the whole shared memory in this process
mapped_region region(shm, read_write);
//Write a character to region
char myChar = 'A';
std::memset(region.get_address(), myChar , sizeof(myChar));
// ... it's important that the server sticks around; otherwise the shared
// memory block is destroyed and the client will throw an exception on open
return 0;
}
Client side:
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
#include <cstdlib>
#include <string>
int main(int argc, char *argv[])
{
using namespace boost::interprocess;
//Open already created shared memory object.
windows_shared_memory shm (open_only, "shm", read_only);
//Map the whole shared memory in this process
mapped_region region(shm, read_only);
//read character from region
char *myChar = static_cast<char*>(region.get_address());
return 0;
}
Instead of memsetting raw bytes in shared memory, you'll probably be better off using Boost.Interprocess. It's designed to simplify the use of common interprocess communication and synchronization mechanisms and offers a wide range of them - including shared memory. For example you could create a vector in shared memory.
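For illustration, here is a sketch along the lines of the vector-in-shared-memory example from the Boost documentation (the segment and object names are placeholders; on Windows you can substitute managed_windows_shared_memory to keep the automatic-cleanup semantics of windows_shared_memory):

```cpp
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

int main()
{
    using namespace boost::interprocess;

    // Allocator that draws from the shared segment instead of the heap
    typedef allocator<int, managed_shared_memory::segment_manager> ShmAllocator;
    typedef vector<int, ShmAllocator> ShmVector;

    shared_memory_object::remove("MySegment"); // clean up any stale segment
    managed_shared_memory segment(create_only, "MySegment", 65536);

    // Construct a named vector inside the segment; another process can
    // retrieve it with segment.find<ShmVector>("MyVector")
    ShmVector* vec = segment.construct<ShmVector>("MyVector")(
        segment.get_segment_manager());
    vec->push_back(42);

    segment.destroy<ShmVector>("MyVector");
    shared_memory_object::remove("MySegment");
    return 0;
}
```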