Here is my program:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <iostream>
int main()
{
std::string hash = crypt("asd123","$2a$13$IP4FT4gf123I5bT6o4123123123123nbEXFqo.Oa123");
std::cout << hash;
}
Running this causes the error
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_M_construct null not valid
Aborted (core dumped)
but if I remove the $ from the salt it runs fine.
The error message tells you that crypt returned a null pointer for the given arguments; most likely that is how it signals failure. You need to check for that.
You can find out more about crypt by (1) finding documentation of the function, and (2) reading it.
For example, you can google “unistd crypt”.
And it so happens that the documentation specifies the valid set of characters you can use, in a nice table.
This code shows the message:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Process returned 3 (0x3) execution time : 0.331 s
I could not identify where the problem is. I used Code::Blocks; my PC has 8 GB of RAM.
#include <iostream>
#include <vector>
#include <stdio.h>
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <sstream>
using namespace std;
int main(){
ofstream bcrs_tensor;
bcrs_tensor.open("bcrs_tensor_Binary", ios::out | ios::binary);
int X=6187,Y=25,Z=78,M=33;
int new_dimension_1,new_dimension_2,new_x_1,new_x_2;
new_dimension_1=X*Z;
new_dimension_2=Y*M;
int* new_A = new int[ new_dimension_1*new_dimension_2 ];
vector<int> block_value,CO_BCRS,RO_BCRS;
block_value.reserve(303092010);
CO_BCRS.reserve(1554318);
RO_BCRS.reserve(37124);
cout<<"size"<<sizeof(block_value)<<endl;
return 0;
}
You are trying to allocate way more memory than your system has available for your app, so std::bad_alloc is being thrown, which you are not catching.
Assuming sizeof(int)=4 in your compiler, you are asking for:
1.48GB for new_A
1.12GB for block_value
5.92MB for CO_BCRS
145KB for RO_BCRS
For a total of 2.61GB.
Even though you have 8GB of RAM installed, your system does not have enough contiguous memory available to satisfy one of those allocations. For example, if it is a 32-bit app, the whole process is limited to 2-3GB at most (depending on configuration, memory-manager implementation, etc.), and a good chunk of that is reserved by the OS itself, so you can't use it for your code.
I have the following scenario:
#include <iostream>
#include <string>
#include <sstream>
#include <fstream>
#include <ostream>
class File_ostream final : public std::basic_ostream<char, std::char_traits<char>>
{
};
int main()
{
const std::string input_file{"file_tests/test.txt.gz"};
std::ifstream ifs{input_file, std::ios_base::in | std::ios_base::binary};
File_ostream file_os{};
file_os << ifs.rdbuf(); // Memory fault (core dumped)
}
My program always crashes when inserting into file_os and produces a core dump.
The code works fine in Linux but not in QNX :\
Do you have any explanation? hint?
The problem is that you are using the default constructor of basic_ostream which, by the standard, does not exist. I have no idea why g++ and QCC compile your code successfully, but they shouldn't.
Anyway, using non-standard functions yields non-standard behavior, in your case a crash. I don't know whether the correct usage of the default constructor is documented anywhere in the gcc docs, but simply avoiding it and using the correct constructor instead should solve your issue.
I have a class with a few attributes, as shown below. My problem is that when I remove the string s attribute, or place it before std::atomic<char*> atomic_input, the program terminates with the exception:
'std::logic_error'
what(): basic_string::_M_construct null not valid
Aborted (core dumped)
#include <string>
#include <atomic>
// In ui.cpp
class UI
{
private:
std::atomic<char*> atomic_input;
std::string s; /* this can be renamed, but removing or placing it
before the above field crashes the program */
};
// In main.cpp
#include "ui.cpp"
#include <cstdlib>  /* srand */
#include <ctime>    /* time */
#include <curses.h> /* initscr */
int main()
{
srand (time(NULL));
initscr(); /* start the curses mode */
UI* ui = new UI();
return 0;
}
The string attribute is not accessed within the program in any way, renaming it is possible. The reason why I have an atomic field is that the value is shared among several threads.
I have tried placing the string field on different lines within the class attributes; the program only crashes if the declaration comes before atomic_input.
What might be causing the problem? Is it something to do with how the classes in C++ should be defined?
Looks like I've found a solution.
The issue was that std::atomic<char*> atomic_input was not being initialized, as seen below; I still don't know how the string variable was interfering with it.
My guess is that the compiler somehow interprets the string as a constructor for atomic_input. The error only occurs when atomic_input is accessed at runtime, not at compile time.
#include <string>
#include <atomic>
// In ui.cpp
class UI
{
private:
std::atomic<char*> atomic_input{(char*)""};
// std::string s; /* Initializing the atomic char like above solved the problem */
};
Here is a minimal example:
[joel#maison various] (master *)$ cat throw.cpp
#include <iostream>
int main(int argc, char* argv[])
{
throw("pouet pouet");
}
[joel#maison various] (master *)$ ./a.out
terminate called after throwing an instance of 'char const*'
Aborted (core dumped)
Reading the docs, it seems like the default terminate handler is abort(). I couldn't find anything about triggering a segfault in the abort man page.
Throwing an exception and not handling it calls std::terminate(), which by default calls abort(), which raises SIGABRT.
You can verify it with the following
#include <iostream>
#include <stdexcept>
#include <signal.h>
extern "C" void handle_sigabrt(int)
{
std::cout << "Handling and then returning (exiting)" << std::endl;
}
int main()
{
signal(SIGABRT, &handle_sigabrt);
throw("pouet pouet");
}
I am trying to access an Infinispan server using the HotRod library in C++, because I'm not familiar with Java, but I got an exception and don't know how to proceed.
The source code is:
#include "infinispan/hotrod/ConfigurationBuilder.h"
#include "infinispan/hotrod/RemoteCacheManager.h"
#include "infinispan/hotrod/RemoteCache.h"
#include <iostream>
#include <string>
int main(int argc, char **argv) {
infinispan::hotrod::ConfigurationBuilder cb;
cb.addServer().host("192.168.1.1").port(11222);
infinispan::hotrod::RemoteCacheManager cm(cb.build());
infinispan::hotrod::RemoteCache<std::string, std::string> cache = cm.getCache<std::string, std::string>("dCache");
cm.start();
std::cout << cache.size() << std::endl;
cm.stop();
return 0;
}
and what I got is:
terminate called after throwing an instance of 'infinispan::hotrod::HotRodClientException'
what(): scala.MatchError: 24 (of class java.lang.Byte)
Aborted
P.S. A GDB backtrace indicates the error occurs on the line std::cout << cache.size() << std::endl;.
The C++ client version 8.0.0 uses the HotRod protocol VERSION_24 by default, which is too new for Infinispan 6.0.0.
Try to configure VERSION_13 this way:
cb.addServer().host("192.168.1.1").port(11222).protocolVersion(Configuration::PROTOCOL_VERSION_13);
I don't know the HotRod C++ client, and I don't know if it's the cause of your exception, but according to this page the RemoteCacheManager constructors start the manager by default; so the following cm.start() is a second start (?).
In this example I see that the manager is created without starting it, so...
Suggestion: try with
infinispan::hotrod::RemoteCacheManager cm(cb.build(), false);