I am trying to make code that throws a length_error exception. My goal is to detect and handle this exception condition. This is my attempt so far:
#include <iostream>
#include <stdexcept>   // for std::length_error
#include <string>
using namespace std;

int main(int argc, const char * argv[])
{
    string buffer("hi");
    cout << buffer.max_size() << endl;
    try {
        while (1) {
            buffer.append(buffer);
        }
    }
    catch (length_error &l) {
        cout << "Caught length_error exception: " << l.what() << endl;
    }
    catch (exception &e) {
        cout << "Caught exception: " << e.what() << endl;
    }
    return 0;
}
When I run the program I see that the max size of the string is 18446744073709551599 bytes. The program keeps running until all the memory is used up, and then it just goes quiet: no exception is thrown, the program is still running, but its CPU usage drops from 100% to around 0%.
Additional information:
OS: Mac OS 10.8.
Compiler: Clang 5.1
RAM: 16 GB
I believe your machine is thrashing virtual memory: buffer.append(buffer) doubles the string on every iteration, so once real memory is exhausted the system spends its time paging rather than making progress.
A more effective way of getting this exception is to create a string of size max_size()+1 at the outset. Here's your code modified to do this, and (for me, at least) it throws the exception you expect instantly:
#include <iostream>
#include <stdexcept>   // for std::length_error
#include <string>
using namespace std;

int main(int argc, const char * argv[])
{
    string buffer("hi");
    cout << buffer.max_size() << endl;
    try {
        std::string blah(buffer.max_size() + 1, 'X');
    }
    catch (length_error &l) {
        cout << "Caught length_error exception: " << l.what() << endl;
    }
    catch (exception &e) {
        cout << "Caught exception: " << e.what() << endl;
    }
    return 0;
}
std::length_error is only thrown in a case where a buffer size is known to exceed the container's max_size().
On a modern operating system with virtual memory, you are unlikely to request something that exceeds max_size() except by accident (for example, by passing a negative value that converts to a huge unsigned one), so the approach you are taking is unlikely to see this exception thrown. Instead, since you're using append, you will simply consume virtual memory, paging out to disk once real memory is exhausted -- which slows the whole system down.
If you want to trigger a length_error, the easiest way would be to pass something greater than max_size() (assuming this is smaller than std::numeric_limits<std::string::size_type>::max()). You should be able to just do something like:
auto str = std::string{};
str.reserve(static_cast<std::string::size_type>(-1));
or
auto str = std::string{};
str.reserve(str.max_size()+1);
Related
I currently have a memory issue using the Botan library (version 2.15) for cryptography functions in a C++ project. My development environment is Solus Linux 4.1 (kernel-current), but I have observed this issue on Debian Buster too.
I noticed that some memory allocated internally by Botan for calculations is not deallocated when going out of scope. When I call Botan::HashFunction, Botan::StreamCipher, and Botan::scrypt multiple times, always going out of scope in between, the memory footprint increases steadily.
For example, consider this code:
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>
#include "botan/scrypt.h"

void pause() {
    char ch;
    std::cout << "Insert any key to proceed... ";
    std::cin >> ch;
}

std::vector<uint8_t> get_scrypt_passhash(std::string const& password, std::string const& salt) {
    std::vector<uint8_t> key(32);
    Botan::scrypt(key.data(), key.size(), password.c_str(), password.length(),
                  salt.c_str(), salt.length(), 65536, 32, 1);
    std::cout << "From function: before closing.\n";
    pause();
    return key;
}

int main(int argc, char *argv[]) {
    std::cout << "Beginning test.\n";
    pause();
    auto pwhashed = get_scrypt_passhash(argv[1], argv[2]);
    std::cout << "Test ended.\n";
    pause();
}
I used the pause() function to observe the memory consumption (I ran top/pmap and watched KSysGuard during each pause). During the pause inside get_scrypt_passhash, the used memory (as reported by both top/pmap and KSysGuard) is about 2 MB higher than at the beginning, and it stays that way after the function returns.
I tried to dive into the Botan source code, but I cannot find memory leaks or the like. Valgrind also reported that all allocated bytes were freed, so there is no leak in the usual sense.
Just for information, I tried the same functionality with Crypto++ without observing this behavior.
Has anyone experienced the same issue? Is there a way to fix it?
We have a relatively large code base for a Linux server, with dynamically linked libraries and server modules loaded during startup using dlopen(). The server as well as most of the other components are written in C++11, but some are in C99.
What approaches could one use to test whether the server, its dependencies, and its modules properly handle memory allocation failures, e.g. malloc/calloc returning NULL, operators new and new[] throwing std::bad_alloc, including allocation failures from std::string::resize() and the like?
In the past, I've tried using memory allocation hooks to inject memory allocation failures into C applications, but I think these don't work for C++. What other options or approaches should I be looking at?
In fact, hooking C malloc is enough: under the hood, GCC's default implementation of operator new calls malloc, and you confirmed you only need a GCC-compatible solution.
I can demonstrate it with this simple program:
mem.c++:
#include <iostream>
#include <string>

class A {
    int ival;
    std::string str;
public:
    A(int i, std::string s): ival(i), str(s) {}
    A(): ival(0), str("") {}
    int getIval() const {
        return ival;
    }
    std::string getStr() const {
        return str;
    }
};

int main() {
    A a(2, "foo");
    std::cout << &a << " : " << a.getIval() << " - " << a.getStr() << std::endl;
    return 0;
}
memhook.c:
#include <stdio.h>
#include <stdlib.h>

extern void *__libc_malloc(size_t size);

void *malloc(size_t size) {
    fprintf(stderr, "Allocating %zu\n", size);  /* %zu: size_t, not %u */
    return NULL;
    /* return __libc_malloc(size); */
}
When returning NULL (as above), the program displays:
Allocating 16
Allocating 100
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
That proves that returning NULL from the hooked malloc function results in a std::bad_alloc exception in the C++ code.
When uncommenting the return __libc_malloc(size); the allocations are done by the libc malloc and the output becomes:
Allocating 16
0xbfe8d2e8 : 2 - foo
On Linux you can also hook into the operating system to force allocations to fail. See man 2 mlockall:
mlockall(MCL_CURRENT|MCL_FUTURE);
should do what you want.
I have a vector of strings and I'm assigning a string to its first element.
#include <vector>
#include <string>
using namespace std;
vector<string> buffer;

main(int argc, char *argv[]){
    buffer[0] = "foobar";
    return 0;
}
It causes a massive command-line compiler error that starts with:
Multiple definition of WinMainCRTStartUP...
The error then continues with roughly 500 lines of incoherent output.
I've successfully compiled other C++ programs with this compiler (GNU compiler). I don't know why this specific program causes an error.
There are two problems. First, you need to add a return type to main:
int main() {
    ...
}
Second, you have an empty buffer, so when you do buffer[0] = "anything" you corrupt memory (which may only be discovered later, when another function accesses the corrupted object). This happens because vector's operator[] is unchecked. If you change it to:
int main() {
    buffer.at(0) = "foobar";
    return 0;
}
you will get an exception (std::out_of_range) instead.
I've been studying C++ for 2 months and I'm having some trouble understanding the try-catch block. I'm using the book Programming: Principles and Practice Using C++; here is what it says:
the basic idea of exceptions is that if a function finds an error that it cannot handle, it does not return normally; instead, it throws an exception indicating what went wrong. Any direct or indirect caller can catch the exception, that is, specify what to do if the called code used throw.
What does "any direct or indirect caller can catch the exception" mean? Does the author mean the caller of a function, or the catch block? I'm confused about this; could you explain it to me in a simple way?
Example of an indirect caller:
Here the exception is thrown in the called function, but the try-catch is placed in the calling function, not the called function.
#include <iostream>
#include <stdexcept>
using namespace std;

void divideByZero(){
    int a = 5, b = 0;
    if (b == 0)
        throw runtime_error("division by zero");
    cout << a / b << endl;
}

int main()
{
    try{
        divideByZero();
    }
    catch (exception& e){
        cout << e.what() << endl;
    }
    return 0;
}
Example of a direct exception:
Here the exception is thrown in the function itself and handled there directly.
#include <iostream>
#include <stdexcept>
using namespace std;

int main()
{
    try{
        int a = 5, b = 0;
        if (b == 0)
            throw runtime_error("division by zero");
        cout << a / b << endl;
    }
    catch (exception& e){
        cout << e.what() << endl;
    }
    return 0;
}
The above programs are for illustration only; they are not realistic examples of the kind you will come across when writing a useful program.
Since vector's size parameter is a long unsigned int, the call to f(-1) throws bad_alloc. I suspect the call is made with 2147483648, actually 18446744073709551615 since it is an x64 system. How can I get details about the error? More generally, how can I get more details than e.what() provides?
#include <cstdio>
#include <new>
#include <vector>
using std::vector;

void f(int i){
    vector<int> v(i);
    printf("vector size: %zu", v.size());
}

int main(int argc, char** argv) {
    //f(1); // vector size: 1
    try{
        f(-1); // terminate called after throwing an instance of 'std::bad_alloc'
               // what(): std::bad_alloc
    }catch(std::bad_alloc& e){
        printf("tried to allocate: %d bytes in vector constructor", e.?);
    }
    return 0;
}
As far as the standard is concerned, there is no extra information other than what is provided by what() (whose content, by the way, is left to the implementation).
What you can do is provide vector with your own allocator that throws a class derived from bad_alloc which also carries the information you want to retrieve when catching it (e.g. the amount of memory requested).
#include <cstddef>
#include <iostream>
#include <new>
#include <stdexcept>
#include <vector>

template <typename T>
std::vector<T> make_vector(typename std::vector<T>::size_type size, const T init = T()) {
    try {
        return std::vector<T>(size, init);
    }
    catch (const std::bad_alloc&) {
        std::cerr << "Failed to allocate: " << size << std::endl;
        throw;
    }
    catch (const std::length_error&) {  // thrown when size exceeds max_size()
        std::cerr << "Requested size too large: " << size << std::endl;
        throw;
    }
}

int main()
{
    make_vector<int>(std::size_t(-1));
    return 0;
}
Reserving instead of initializing might suit you better.
Also keep copy elision/return value optimization and move semantics in mind.