Persistent storage across several runs - C++

I was wondering what the best solution would be for a storage container that does not lose its contents across several executions (runs), without using file I/O or an external database.
Say I have a class foo that stores integers. From main() I want to call a method that adds an integer, and the class must not forget its former contents.
//
// Data storage across different runs
// This should go into the daemon process
//
#include <iostream>
#include <list>
using namespace std;

class foo {
public:
    foo(int add): add(add) {}
    void store(int i) {
        vec.push_back(i + add);
    }
private:
    list<int> vec;
    int add;
};
The main function should check for an already running daemon and, if there is none, start it.
//
// Main program. Should check whether the daemon runs already; if not, start it.
//
int main(int argc, char *argv[]) {
    // if (daemon is not running)
    //     start daemon( some_number )
    // call daemon::add( atoi(argv[1]) );
}
How would one best do this: with shared libraries or with a daemon process? The storage and the caller program are on the same Linux host.

Look at Linux Pipes for interprocess communication.
http://linux.die.net/man/2/pipe

Named pipes are one way. If you want non-blocking behavior, though, you might want to try the message-queue route. Here is a link to one of the system calls, http://linux.die.net/man/2/msgctl; you can find the other calls from there.

You may also consider http://en.wikipedia.org/wiki/Memcached

Related

code that runs once in a lifetime

I am trying to make a program that associates a selected file with a hard-drive serial number so it doesn't run on any other hard drive. The part where I set the hard-drive serial number should run only once in the lifetime of the program.
I have no idea how to implement this, and the solutions on the internet use macros, which I am not familiar with. I am hoping to implement this using Visual Studio 2017 and Windows 10.
std::call_once ensures that its callable runs only once for the duration of a program.
#include <mutex>
#include <iostream>

void set_serial_number()
{
    static std::once_flag flag1;
    std::call_once(flag1, [](){ std::cout << "Set hard drive serial number\n"; });
}
You can call set_serial_number as much as you'd like, but the logic contained within the lambda passed to the call of std::call_once will only execute once.
e.g.,
int main() {
    set_serial_number();
    set_serial_number();
    set_serial_number();
}
Output:
Set hard drive serial number
You can create a file on disk, write into it a hash of the serial-number string appended with some other data, and read it back on the next run of your program. You can use OpenSSL for this.
#include <openssl/sha.h>
...
string to_hash = serial + "secret_suffix";
unsigned char hash[SHA_DIGEST_LENGTH]; // SHA_DIGEST_LENGTH == 20
SHA1(reinterpret_cast<const unsigned char*>(to_hash.c_str()), to_hash.length(), hash);
write_to_file(hash);
You can run this while installing your application.
Static class members are guaranteed to exist only once. So something like
class myThing {
public:
    static const std::string hardDiskId;
private:
    static std::string queryHardDiskId();
};

const std::string myThing::hardDiskId = queryHardDiskId();
will ensure that your hard disk id is only queried once.

Is there a way to save a value across multiple executions of a C++ application?

For example,
#include <iostream>
using namespace std;

int var;

int main() {
    if (var == 5) {
        cout << "Program has been run before" << endl;
    } else {
        var = 5;
        cout << "First run" << endl;
    }
}
This would print First run the first time, and Program has been run before each time after. Is this possible at all?
EDIT: a file won't work, is there any other method?
You need to save that counter somewhere outside of the application. Variables live in memory reserved for the process, so when your process dies, the values in memory are gone as well.
If a flat file does not work, other options could be a database or perhaps even a separate daemon that keeps track of the run count of a certain application. But if you want the counter to persist across power cycles, you will need to save it somewhere in persistent storage (e.g. a hard drive).
OK, so here's the gist of it:
If the kernel you are running doesn't provide files, you need to give specific details about which kernel and/or device you are using and whether you need to store values across reboots, since not being able to create files is quite a specific situation.
If you don't have any flash/HDD/SSD or other durable storage to save data to, saving values between executions is impossible: you can't keep values in RAM because of its volatile nature.
What you could do is:
a) Write your own primitive filesystem-management tool. If your architecture only ever runs your app this should be easy, since you don't need many checks, but you need some static memory of sorts to store the bytes in.
b) At the end of execution, re-compile the initial program, replacing the values you want to keep with the ones present in the current run.
c) Save the values in external variables using a shell (assuming my_variable is a std::string):
#include <cstdlib>
#include <string>
...
std::string entry = "EXTERNAL_STATE=" + my_variable;
putenv(entry.data()); // putenv keeps a pointer to this buffer, so it must stay alive
d) Send the state you wish to save over the network to a machine that has a filesystem, and read/write it from there.
e) Have a separate application that runs in a loop and listens for input on the console. When it receives input, it runs your program with that variable as a parameter; when your program returns, it outputs the variable and the "parent" application reads it and stores it internally.
I came up with the idea of using shared memory from the Boost libraries.
The concept is that the first time the program runs, it spawns another instance of itself, invoked with a specific parameter (yes, it's a sort of fork, but this way we have a portable solution). That parallel process just handles the initialization of the shared memory and waits for a termination signal.
The major downside of the following implementation is that, in theory, the shared memory of the client (not the manager) could be opened before the server (which handles the shared memory) has completed the initialization.
For demonstration, the program just prints the zero-based index of the current run. Here is the code.
#include <cstring>
#include <iostream>
#include <thread>
#include <chrono>
#include <mutex>
#include <condition_variable>
#include <csignal>
#include <boost/process.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>

static constexpr const char* daemonizer_string = "--daemon";
static constexpr const char* shared_memory_name = "shared_memory";

static std::mutex waiter_mutex;
static std::condition_variable waiter_cv;

struct shared_data_type
{
    std::size_t count = 0;
};

extern "C"
void signal_handler(int)
{
    waiter_cv.notify_one();
}

int main(int argc, const char* argv[])
{
    namespace bp = boost::process;
    namespace bi = boost::interprocess;

    if(argc == 2 and std::strcmp(argv[1], daemonizer_string) == 0)
    {
        struct shm_remove
        {
            shm_remove() { bi::shared_memory_object::remove(shared_memory_name); }
            ~shm_remove() { bi::shared_memory_object::remove(shared_memory_name); }
        } shm_remover;

        bi::shared_memory_object shm(bi::create_only, shared_memory_name, bi::read_write);
        shm.truncate(sizeof(shared_data_type));

        bi::mapped_region region(shm, bi::read_write);
        void* region_address = region.get_address();
        shared_data_type* shared_data = new (region_address) shared_data_type;

        std::signal(SIGTERM, signal_handler);
        {
            std::unique_lock<std::mutex> lock(waiter_mutex);
            waiter_cv.wait(lock);
        }

        shared_data->~shared_data_type();
    }
    else
    {
        bi::shared_memory_object shm;
        try
        {
            shm = bi::shared_memory_object(bi::open_only, shared_memory_name, bi::read_write);
        }
        catch(std::exception&)
        {
            using namespace std::literals::chrono_literals;
            bp::spawn(argv[0], daemonizer_string);
            std::this_thread::sleep_for(100ms);
            shm = bi::shared_memory_object(bi::open_only, shared_memory_name, bi::read_write);
        }

        bi::mapped_region region(shm, bi::read_write);
        shared_data_type& shared_data = *static_cast<shared_data_type*>(region.get_address());
        std::cout << shared_data.count++ << '\n';
    }
}

configuration update on a fast path packet processing

We have an application that processes incoming packets with a thread pool. Each thread has a configuration which is used while processing packets.
We are currently using a mutex that each thread locks before checking whether the configuration has changed.
This makes the threads spend too much time locking the mutex just to check for a configuration update. We are wondering if there is a faster alternative you can suggest.
The implementation is in C++.
Regards.
One possible way to address this is through atomics, via std::atomic. The following is a simplified solution to a simplified version of your problem: a single processor thread (the multiple-thread case is the same in principle). The first version of the solution "leaks" on config changes. For rare enough config changes (which, at least in my experience, is a very common case), this might be acceptable. Otherwise, I'll describe at the end two ways to address it.
Say you start with the following configuration class:
#include <thread>
#include <vector>
#include <list>
#include <iostream>
#include <atomic>
#include <chrono>
constexpr int init_config_val = 3;

struct config {
    int m_val = init_config_val;
};
The configuration has a single value field, m_val.
Now let's set types for an atomic pointer to a configuration, and a list of configurations:
using config_atomic_ptr_t = std::atomic<config *>;
using config_list_t = std::list<config>;
The thread process takes a pointer to an atomic configuration pointer. When it needs to access the configuration, it calls std::atomic::load.
void process(config_atomic_ptr_t *conf) {
    while(true) {
        const config *current_config = conf->load();
        ...
    }
}
(Note that the above shows the thread checking the configuration at each iteration; in some types of applications, it might be enough to check it "often enough".)
When a different thread wants to set the configuration, it calls the following function:
void modify_config(config_list_t &configs, config_atomic_ptr_t &current_config, config conf) {
    configs.push_back(conf);
    current_config.store(&*configs.rbegin());
}
The function takes a reference to the list of configurations, a reference to an atomic configuration pointer, and a new configuration object. It pushes the configuration object to the end of the list, then uses std::atomic::store to set the pointer to the end element in the list.
This is how main can set up things:
int main() {
    config_list_t configs;
    configs.push_back(config{});
    config_atomic_ptr_t current_config{&*configs.rbegin()};

    std::thread processor(process, &current_config);

    config new_conf{init_config_val + 1};
    modify_config(configs, current_config, new_conf);

    processor.join();
}
As stated before, each configuration change pushes a new configuration object to the list, and hence this program effectively has unbounded memory requirements.
At least from my experience, many applications need to support config changes in principle, but they're expected to be rare. If this is so, the above solution might be acceptable. (In fact, you can simplify things by removing the list, and just allocating new configurations on the heap.)
If not, there are at least two alternatives.
The first alternative involves fixing the above as follows:
In config, add another field describing the configuration version - say, an integer.
Send the process thread also a pointer to an std::atomic<int>.
Periodically (say once every 1000 iterations), the thread would check the version of the config it's using, and set the std::atomic<int> to reflect it.
A cleanup thread (possibly the main thread) would also periodically check the value of the std::atomic<int>, and clean up the list accordingly.
The second alternative is just passing your thread function a pointer to something like boost::lockfree::queue. At each iteration (or once every number of iterations), the thread could check the queue for a new configuration, and then use it.
Full Example
#include <thread>
#include <vector>
#include <list>
#include <iostream>
#include <atomic>
#include <chrono>
constexpr int init_config_val = 3;

struct config {
    int m_val = init_config_val;
};

using config_atomic_ptr_t = std::atomic<config *>;
using config_list_t = std::list<config>;

void process(config_atomic_ptr_t *conf) {
    while(true) {
        const config *current_config = conf->load();
        if(current_config->m_val != init_config_val)
            break;
    }
}

void modify_config(config_list_t &configs, config_atomic_ptr_t &current_config, config conf) {
    configs.push_back(conf);
    current_config.store(&*configs.rbegin());
}

int main() {
    using namespace std::chrono_literals;

    config_list_t configs;
    configs.push_back(config{});
    config_atomic_ptr_t current_config{&*configs.rbegin()};

    std::thread processor(process, &current_config);
    std::this_thread::sleep_for(1s);

    config new_conf{init_config_val + 1};
    modify_config(configs, current_config, new_conf);

    processor.join();
}

pthread - accessing multiple objects with a thread

I'm trying to get my hands on multithreading and it's not working so far. I'm creating a program which allows serial communication with a device, and it works quite well without multithreading. Now I want to introduce threads: one thread to continuously send packets, one thread to receive and process packets, and another thread for a GUI.
The first two threads need access to four classes in total, but with pthread_create() I can only pass one argument. I then stumbled upon a post here on Stack Overflow (pthread function from a class) where Jeremy Friesner presents a very elegant way. I then figured that it's easiest to create a Core class which contains all the objects my threads need access to, as well as all the thread functions. So here's a sample from my class Core:
/** CORE.CPP **/
#include "SerialConnection.h" // Class for creating a serial connection using termios
#include "PacketGenerator.h"  // Allows to create packets to be transferred
#include <iostream>
#include <pthread.h>
using namespace std;

#define NUM_THREADS 4

class Core {
private:
    SerialConnection serial;    // One of the objects my threads need access to
    PacketGenerator generator;
    pthread_t threads[NUM_THREADS];
public:
    Core();
    ~Core();
    void launch_threads();               // Supposed to launch all threads
    static void *thread_send(void *arg); // See the linked post above
    void thread_send_function();         // See the linked post above
};

Core::Core() {
    // Open serial connection
    serial.open_connection();
}

Core::~Core() {
    // Close serial connection
    serial.close_connection();
}

void Core::launch_threads() {
    pthread_create(&threads[0], NULL, thread_send, this);
    cout << "CORE: Killing threads" << endl;
    pthread_exit(NULL);
}

void *Core::thread_send(void *arg) {
    cout << "THREAD_SEND launched" << endl;
    ((Core *)arg)->thread_send_function();
    return NULL;
}

void Core::thread_send_function() {
    generator.create_hello_packet();
    generator.send_packet(serial);
    pthread_exit(NULL);
}
The problem is that my serial object crashes with a segmentation fault (that pointer business going on in Core::thread_send(void *arg) makes me suspicious). Even when it does not crash, no data is transmitted over the serial connection, even though the program executes without any errors. Execution from main:
/** MAIN.CPP (extract) VARIANT 1 **/
int main() {
    Core core;
    core.launch_threads(); // No data is transferred
}
However, if I call the thread_send_function directly (the one the thread is supposed to execute), the data is transmitted over the serial connection flawlessly:
/** MAIN.CPP (extract) VARIANT 2 **/
int main() {
    Core core;
    core.thread_send_function(); // Data transfer works
}
Now I'm wondering what the proper way of dealing with this situation is. Instead of that trickery in Core.cpp, should I just create a struct holding pointers to the different classes I need and then pass that struct to the pthread_create() function? What is the best solution for this problem in general?
The problem you have is that your main thread exits the moment it created the other thread, at which point the Core object is destroyed and the program then exits completely. This happens while your newly created thread tries to use the Core object and send data; you either see absolutely nothing happening (if the program exits before the thread ever gets to do anything) or a crash (if Core is destroyed while the thread tries to use it). In theory you could also see it working correctly, but because the thread probably takes a bit to create the packet and send it, that's unlikely.
You need to use pthread_join to block the main thread just before quitting, until the thread is done and has exited.
And anyway, you should be using C++11's thread support or at least Boost's. That would let you get rid of the low-level mess you have with the pointers.

Multithreading using the boost library

I wish to call a function multiple times simultaneously, using threads so as to utilize the machine's capabilities to the fullest. This is an 8-core machine, and my requirement is to push CPU usage from 10% to 100% or more.
My requirement is to use Boost. Is there any way I can accomplish this using the boost thread or threadpool library? Or some other way to do it?
Also, if I have to call multiple functions with different parameters each time (with separate threads), what is the best way to do this? [using boost or not using boost] and how?
#include <iostream>
#include <fstream>
#include <string.h>
#include <time.h>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>

using namespace std;
using boost::mutex;
using boost::thread;

int threadedAPI1( );
int threadedAPI2( );
int threadedAPI3( );
int threadedAPI4( );

int threadedAPI1( ) {
    cout << "Thread0" << endl;
}

int threadedAPI2( ) {
    cout << "Thread1" << endl;
}

int threadedAPI3( ) {
    cout << "Thread2" << endl;
}

int threadedAPI4( ) {
    cout << "Thread3" << endl;
}

int main(int argc, char* argv[]) {
    boost::threadpool::thread_pool<> threads(4);
    // start a new thread that calls the "threadLockedAPI" function
    threads.schedule(boost::bind(&threadedAPI1,0));
    threads.schedule(boost::bind(&threadedAPI2,1));
    threads.schedule(boost::bind(&threadedAPI3,2));
    threads.schedule(boost::bind(&threadedAPI4,3));
    // wait for the thread to finish
    threads.wait();
    return 0;
}
The above does not work and I am not sure why. :-(
I suggest that you read the documentation for the functions you use. From your comment on James Hopkin's answer, it seems you don't know what boost::bind does, but simply copy-pasted the code.
boost::bind takes a function (call it f), and optionally a number of parameters, and returns a function which, when called, calls f with the specified parameters.
That is, boost::bind(threadedAPI1, 0)() (creating a function which takes no arguments and calls threadedAPI1() with the argument 0, and then calling that) is equivalent to threadedAPI1(0).
Since your threadedAPI functions don't actually take any parameters, you can't pass any arguments to them; that is just fundamental C++. You can only call threadedAPI1(), not threadedAPI1(0), and yet when you call the function you try (via boost::bind) to pass the integer 0 as an argument.
So the simple answer to your question is to simply define threadedAPI1 as follows:
int threadedAPI1(int i);
However, one way to avoid the boost::bind calls is to call a functor instead of a free function when launching the thread. Declare a class something like this:
struct threadedAPI {
    threadedAPI(int i) : i(i) {} // A constructor taking the arguments you wish to pass to the thread, saving them in the class instance.
    void operator()() {          // The () operator is what is actually called when the thread starts; being a regular member function, it can see the 'i' initialized by the constructor.
        cout << "Thread" << i << endl; // No need for 4 identical functions; reuse this one with a different 'i' each time.
    }
private:
    int i;
};
Finally, depending on what you need, plain threads may be better suited than a threadpool. In general, a thread pool only runs a limited number of threads, so it may queue up some tasks until one of its threads finish executing. It is mainly intended for cases where you have many short-lived tasks.
If you have a fixed number of longer-duration tasks, creating a dedicated thread for each may be the way to go.
You're binding parameters to functions that don't take parameters:
int threadedAPI1( );
boost::bind(&threadedAPI1,0)
Just pass the function directly if there are no parameters:
threads.schedule(&threadedAPI1)
If your interest is in using your processor efficiently, then you might want to consider Intel's Threading Building Blocks, http://www.intel.com/cd/software/products/asmo-na/eng/294797.htm. I believe it is designed specifically to utilize multi-core processors, while Boost threads leave control up to the user (i.e. TBB will thread differently on a quad-core than on a dual-core).
As for your code, you are binding a parameter to functions which don't take parameters. Why? You might also want to check the return code from schedule.