Can I reuse boost::ssl::stream? - c++

Suppose I use a boost::asio::ssl::stream<boost::asio::ip::tcp::socket>:
asio::ssl::stream<asio::ip::tcp::socket> s;
asio::connect(s.lowest_layer(), endpointIterator);
s.handshake(asio::ssl::stream_base::client);
And so on. Then, for whatever reason, the connection fails, or I simply disconnect. Is it possible to reuse the stream for future connections, or do I have to create a new object for each connection? I.e. wrap the stream in something like this:
class BetterSslConnection
{
public:
    BetterSslConnection() : mSocket(new asio::ssl::stream<asio::ip::tcp::socket>())
    {
    }
    ~BetterSslConnection()
    {
        if (mSocket)
        {
            mSocket->shutdown();
            mSocket->lowest_layer().close();
        }
    }
    void connect(... endpointIterator)
    {
        if (mSocket)
        {
            mSocket->shutdown();
            mSocket->lowest_layer().close();
        }
        mSocket.reset(new asio::ssl::stream<asio::ip::tcp::socket>());
        asio::connect(mSocket->lowest_layer(), endpointIterator);
        mSocket->handshake(asio::ssl::stream_base::client);
        // and so on.
    }
private:
    std::unique_ptr<asio::ssl::stream<asio::ip::tcp::socket>> mSocket;
};
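As a side note on the sketch: asio::ssl::stream has no default constructor, so each newly created stream needs an io_context (or executor) and an ssl::context. A minimal, self-contained illustration of building a fresh stream per connection under recent Boost (the host name and TLS method here are placeholders, not from the question):
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

namespace asio = boost::asio;

int main()
{
    asio::io_context io;
    asio::ssl::context ctx(asio::ssl::context::tls_client); // sslv23_client on older Boost
    asio::ip::tcp::resolver resolver(io);
    auto endpoints = resolver.resolve("example.com", "443"); // placeholder host/port

    // A fresh stream per connection: it is constructed from the io_context
    // and the ssl::context, then connected and handshaken as in the question.
    asio::ssl::stream<asio::ip::tcp::socket> stream(io, ctx);
    asio::connect(stream.lowest_layer(), endpoints);
    stream.handshake(asio::ssl::stream_base::client);
}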

Related

Singleton pattern in C++ for a DI container

I am trying to create a DI container in C++ (for studying purposes). I know about the Boost DI container option, but I just want to have fun writing one myself.
I would like the created container to hold only one instance per "registered" object, so I should apply the Singleton design pattern.
But what would be the best (idiomatic) way to implement the Singleton pattern in C++20 or, at least, in modern C++, and why?
Do you mean something like this, using a Meyers singleton?
(https://www.modernescpp.com/index.php/thread-safe-initialization-of-a-singleton)
I never use singletons that need to be created with new, since their destructors never get called. With this pattern the destructors do get called when the program terminates.
#include <iostream>
//-----------------------------------------------------------------------------
// create an abstract baseclass (closest thing C++ has to an interface)
struct data_itf
{
virtual int get_value1() const = 0;
virtual ~data_itf() = default;
protected:
data_itf() = default;
};
//-----------------------------------------------------------------------------
// two injectable instance types
struct test_data_container :
public data_itf
{
int get_value1() const override
{
return 0;
}
~test_data_container()
{
std::cout << "test_data_container deleted";
}
};
struct production_data_container :
public data_itf
{
int get_value1() const override
{
return 42;
}
~production_data_container()
{
std::cout << "production_data_container deleted";
}
};
//-----------------------------------------------------------------------------
// meyers threadsafe singleton to get to instances implementing
// interface to be injected.
//
data_itf& get_test_data()
{
static test_data_container test_data;
return test_data;
}
data_itf& get_production_data()
{
static production_data_container production_data;
return production_data;
}
//-----------------------------------------------------------------------------
// object that needs data
class my_object_t
{
public:
explicit my_object_t(const data_itf& data) :
m_data{ data }
{
}
~my_object_t()
{
std::cout << "my_object deleted";
}
void function()
{
std::cout << m_data.get_value1() << "\n";
}
private:
const data_itf& m_data;
};
//-----------------------------------------------------------------------------
int main()
{
auto& data = get_production_data();
my_object_t object{ data };
object.function();
return 0;
}
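If the goal is a DI container with one instance per registered type, the same local-static idea can be lifted into a function template. A minimal sketch (not part of the answer above; the initialization of the local static is guaranteed thread-safe since C++11):
#include <type_traits>

// One lazily created, thread-safe instance per (interface, implementation) pair.
template <typename Itf, typename Impl>
Itf& instance()
{
    static_assert(std::is_base_of_v<Itf, Impl>, "Impl must derive from Itf");
    static Impl obj;
    return obj;
}

// Usage with the types above:
//   data_itf& d = instance<data_itf, production_data_container>();
//   std::cout << d.get_value1() << "\n";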

Trying to access an object that is being destroyed

I have an object which contains a thread which indirectly accesses this object like so:
#include <iostream>
#include <thread>
#include <atomic>
class A;
class Manager
{
public:
Manager(void) = default;
void StartA(void)
{
a = std::make_unique<A>(*this);
}
void StopA(void)
{
a = nullptr;
}
A& GetA(void)
{
return *a;
}
private:
std::unique_ptr<A> a;
};
class A
{
public:
A(Manager& manager)
: manager{manager},
shouldwork{true},
thread{[&]{ this->Run(); }}
{
}
~A(void)
{
shouldwork = false;
thread.join();
}
private:
Manager& manager;
std::atomic<bool> shouldwork;
std::thread thread;
void Run(void)
{
while (shouldwork)
{
// Here goes a lot of code which calls manager.GetA().
auto& a = manager.GetA();
}
}
};
int main(int argc, char* argv[])
try
{
Manager man;
man.StartA();
man.StopA();
}
catch (std::exception& e)
{
std::cerr << "Exception caught: " << e.what() << '\n';
}
catch (...)
{
std::cerr << "Unknown exception.\n";
}
The problem is that when one thread calls Manager::StopA and enters destructor of A, the thread inside A segfaults at Manager::GetA. How can I fix this?
In StopA() you set a = nullptr;. This in turn destroys the a object, and any further access to its members results in undefined behaviour (a likely cause of the segmentation fault).
Simply moving the a = nullptr; to the destructor of the Manager could resolve this problem. Even better, allow the RAII mechanism of the std::unique_ptr to destroy the a object when the destructor of the Manager runs (i.e. remove the line of code completely).
With active object implementations, careful control of the member variables is important, especially the "stop variable/control" (here the shouldwork = false;). Allow the manager to access the variable directly or via a method to stop the active object before its destruction.
Some of the code here looks out of place or obscure, e.g. a = std::make_unique<A>(*this);. A redesign could help simplify some of the code. The Manager class could be removed.
class A
{
public:
A(): shouldwork{true}, thread{[&]{ this->Run(); }}
{
}
void StopA()
{
shouldwork = false;
thread.join();
}
private:
std::atomic<bool> shouldwork;
std::thread thread;
void Run(void)
{
while (shouldwork)
{
// code...
}
}
};
The code is modelled along the lines of std::thread, where the stopping of the thread is controlled before an attempt is made to join it. The destructor is left empty in this case to mimic the standard library's behaviour, where destroying a joinable thread calls std::terminate. Threads must be explicitly joined (or detached) before destruction.
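To illustrate that last point, a minimal standalone snippet (not from the answer):
#include <thread>

int main()
{
    std::thread t([]{ /* work */ });
    // If t went out of scope while still joinable, its destructor would call
    // std::terminate(). Join (or detach) explicitly before destruction:
    t.join();
}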
Re-introducing the Manager, the code could look as follows:
class Manager; // forward declaration: A keeps a reference back to its Manager,
               // because Run() calls manager.GetA()
class A
{
public:
    explicit A(Manager& manager) : manager{manager}, shouldwork{true}, thread{[&]{ this->Run(); }} {}
    void StopA() { shouldwork = false; thread.join(); }
private:
    void Run();
    Manager& manager;
    std::atomic<bool> shouldwork;
    std::thread thread;
};
class Manager
{
public:
Manager() = default;
void StartA(void)
{
a = std::make_unique<A>(*this);
}
void StopA(void)
{
a->StopA();
}
A& GetA(void)
{
return *a;
}
private:
std::unique_ptr<A> a;
};
void A::Run()
{
while (shouldwork)
{
// Here goes a lot of code which calls manager.GetA().
auto& a = manager.GetA();
}
}
And your main remains as it is.

std::shared_ptr becomes invalid when passing as an argument by value

Dear people of stackoverflow.
I have a class, named FcgiManager, that is intended to handle a proper amount of FastCGI workers.
Currently it has a simple method to add a new worker to the list:
bool FcgiManager::AddWorker() {
workers_.push_back(std::make_unique<FcgiWorker>(server_));
return true;
}
The "workers_" field is:
private:
// cut
std::list<std::unique_ptr<FcgiWorker>> workers_;
The program exits with "bad weak pointer" exception.
When I trace it, I see that from this method it does reach the FcgiWorker constructor; here it is:
FcgiWorker::FcgiWorker(std::shared_ptr<Dolly> server) :
running_{false}, pool_{std::make_unique<FcgiRequestPool>()}, server_{server} {
// actual code is not important, because execution doesn't get here.
}
And I see that the "server" argument shared_ptr has some Martian values in its structure, like count = 1443278, weak = -1.
But in the AddWorker() method, just before entering the FcgiWorker constructor, the "server_" field has appropriate values.
I have even added an "IsAlive()" method to the server and tried calling server_->IsAlive() from AddWorker(), and yes, it's alive.
I also tried to plainly create a worker like this:
bool FcgiManager::AddWorker() {
FcgiWorker test(server_);
return true;
}
This was to check whether I am doing something wrong with make_unique, but the result was the same.
Any advice appreciated.
Update-1:
FcgiWorker::server_ is:
std::shared_ptr<Dolly> server_;
it is initialized in the FcgiWorker constructor's initialization list, see constructor code above.
Update-2
FcgiManager::server_ is:
std::shared_ptr<Dolly> server_;
Here is how it is created in main.cpp:
auto server = std::make_shared<Dolly>();
server->BindFcgiManagerToServer();
server->Run();
server.cpp:
Dolly::Dolly() : start_time_{std::time(nullptr)} {
fcgi_manager_ = std::make_unique<FcgiManager>();
}
void Dolly::BindFcgiManagerToServer() {
fcgi_manager_->set_server(self());
}
server.h:
class Dolly : public std::enable_shared_from_this<Dolly> {
private:
// cut
inline std::shared_ptr<Dolly> self() { return shared_from_this(); }
Update-3 & Update-4 & 5
Created an MCVE, and, oh well - it compiles and does not throw. So I guess I'll have to solve my problem myself now :)
Thanks everyone!
(MCVE updated twice, since it was not complete the first time)
#include <memory>
#include <list>
#include <iostream>
class Dolly;
class FcgiRequestPool {
};
class FcgiWorker {
public:
FcgiWorker(std::shared_ptr<Dolly> server) :
pool_{std::make_unique<FcgiRequestPool>()}, server_{server} {
std::cout << "worker created" << std::endl << std::flush;
}
private:
std::shared_ptr<FcgiRequestPool> pool_;
std::shared_ptr<Dolly> server_;
};
class FcgiManager {
public:
inline void set_server(std::shared_ptr<Dolly> server) { server_ = server; }
bool InitWorkers() {
auto workers_num = 4;
auto created_workers_num = 0;
try {
for (created_workers_num = 0; created_workers_num < workers_num; created_workers_num++) {
AddWorker();
}
} catch (std::exception& e) {
std::cout << e.what() << std::endl << std::flush;
}
return true;
}
void Run() {
InitWorkers();
}
private:
bool AddWorker() {
workers_.push_back(std::make_unique<FcgiWorker>(server_));
return true;
}
std::list<std::unique_ptr<FcgiWorker>> workers_;
std::shared_ptr<Dolly> server_;
};
class Dolly : public std::enable_shared_from_this<Dolly> {
public:
Dolly() {
fcgi_manager_ = std::make_unique<FcgiManager>();
}
void BindFcgiManagerToServer() {
fcgi_manager_->set_server(self());
}
void Run() {
fcgi_manager_->Run();
}
private:
std::unique_ptr<FcgiManager> fcgi_manager_;
std::shared_ptr<Dolly> self() {
return shared_from_this();
}
};
int main() {
auto server = std::make_shared<Dolly>();
server->BindFcgiManagerToServer();
server->Run();
return 0;
}
Resolved
After creating the MCVE and comparing it step by step with the actual program, I found out that the shared_ptr exception was generated here:
pool_{std::make_unique<FcgiRequestPool>()}
and that it has no relation to the classes' creation logic.
I am marking the answer with the MCVE advice as accepted, because it is actually great advice and it did help me.
The root cause: I was creating a unique_ptr to a class that was derived from std::enable_shared_from_this.
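For context, std::bad_weak_ptr is typically thrown when shared_from_this() is called on an object that is not owned by any std::shared_ptr, for example one held only through a std::unique_ptr. A minimal sketch (the names here are illustrative, not from the code above):
#include <memory>

struct Pool : std::enable_shared_from_this<Pool> {
    std::shared_ptr<Pool> self() { return shared_from_this(); }
};

int main() {
    auto p = std::make_unique<Pool>();  // owned by a unique_ptr, not a shared_ptr
    auto sp = p->self();                // throws std::bad_weak_ptr (since C++17; undefined behaviour before)
    return 0;
}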

C++ setting up a callback function

I am working on a program which runs a custom web server that should output some active HTML content:
// This is the webserver library...
class myWebServer
{
public:
myWebServer() {}
~myWebServer() {}
// ...
void sendPageToClient()
{
// ... "client" is the TCP socket
// ... "html" should contain the output of myMainProgram::ProcessASP
client->send(html);
}
void runServer()
{
while (1)
{
// listens to TCP socket
client->listen();
// receive query from browser
// send HTML using sendPageToClient()
// ...
sendPageToClient();
}
}
};
// This is the main program class...
class myMainProgram
{
public:
myMainProgram() {}
~myMainProgram() {}
// ...
string ProcessASP(string query)
{
return
"<html>The query string you have passed contains:<br>"
+ query +
"</html>";
}
void runProgram()
{
// do something
}
};
// This is a multi-threaded application
int main()
{
myMainProgram myProgram;
myWebServer myServer;
myProgram.runProgram();
myServer.runServer();
}
How can I set up a callback function that from the class myWebServer calls myMainProgram::ProcessASP passing parameters and receiving its output?
You probably want to use a std::function<std::string(std::string)>:
class myWebServer {
// not really a "callback"?
std::function<std::string(std::string)> callback;
public:
template <typename F>
void setCallback(F&& f) { callback = std::forward<F>(f); }
void runServer() {
// ...
std::string foo = callback("hello");
// do something with foo
}
};
And then, you can do:
myServer.setCallback([&](std::string query){
return myProgram.ProcessASP(query);
});
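Putting the pieces together, a self-contained sketch of how the wiring might look (the networking is stubbed out; class and member names follow the question):
#include <functional>
#include <iostream>
#include <string>
#include <utility>

class myMainProgram {
public:
    std::string ProcessASP(const std::string& query) {
        return "<html>The query string you have passed contains:<br>" + query + "</html>";
    }
};

class myWebServer {
    std::function<std::string(std::string)> callback;
public:
    template <typename F>
    void setCallback(F&& f) { callback = std::forward<F>(f); }

    void runServer() {
        // In the real server the query would come from the TCP client;
        // here the callback is exercised once with a fixed string.
        std::string html = callback("name=value");
        std::cout << html << "\n";
    }
};

int main() {
    myMainProgram myProgram;
    myWebServer myServer;
    myServer.setCallback([&](std::string query) { return myProgram.ProcessASP(query); });
    myServer.runServer();
}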

What is the right way to switch on the actual type of an object?

I'm writing an XML parser and I need to add objects to a class generically, switching on the actual type of the object. The problem is, I'd like to keep to an interface which is simply addElement(BaseClass*) and then place the object correctly.
void E_TableType::addElement(Element *e)
{
QString label = e->getName();
if (label == "state") {
state = qobject_cast<E_TableEvent*>(e);
}
else if (label == "showPaytable") {
showPaytable = qobject_cast<E_VisibleType*>(e);
}
else if (label == "sessionTip") {
sessionTip = qobject_cast<E_SessionTip*>(e);
}
else if (label == "logoffmedia") {
logoffMedia = qobject_cast<E_UrlType*>(e);
}
else {
this->errorMessage(e);
}
}
This is the calling class, an object factory. myElement is an instance of E_TableType.
F_TableTypeFactory::F_TableTypeFactory()
{
this->myElement = myTable = 0;
}
void F_TableTypeFactory::start(QString qname)
{
this->myElement = myTable = new E_TableType(qname);
}
void F_TableTypeFactory::fill(const QString& string)
{
// don't fill complex types.
}
void F_TableTypeFactory::addChild(Element* child)
{
myTable->addElement(child);
}
Element* F_TableTypeFactory::finish()
{
return myElement;
}
void F_TableTypeFactory::addAttributes(const QXmlAttributes &attribs) {
QString tName = attribs.value(QString("id"));
myTable->setTableName(tName);
}
Have you considered using polymorphism here? If a common interface can be implemented by each of your concrete classes then all of this code goes away and things become simple and easy to change in the future. For example:
class Camera {
public:
    virtual ~Camera() = default;
    virtual void Init() = 0;
    virtual void TakeSnapshot() = 0;
};
class KodakCamera : public Camera {
public:
    void Init() override { /* initialize a Kodak camera */ }
    void TakeSnapshot() override { std::cout << "Kodak snapshot"; }
};
class SonyCamera : public Camera {
public:
    void Init() override { /* initialize a Sony camera */ }
    void TakeSnapshot() override { std::cout << "Sony snapshot"; }
};
So, let's assume we have a system which contains a hardware device, in this case, a camera. Each device requires different logic to take a picture, but the code has to support a system with any supported camera, so we don't want switch statements littered throughout our code. So, we have created an abstract class Camera.
Each concrete class (e.g., SonyCamera, KodakCamera) implementation will include different headers, link to different libraries, etc., but they all share a common interface; we just have to decide which one to create up front. So...
std::unique_ptr<Camera> InitCamera(CameraType type) {
    std::unique_ptr<Camera> ret;
    Camera *cam = nullptr;
    switch (type) {
    case Kodak:
        cam = new KodakCamera();
        break;
    case Sony:
        cam = new SonyCamera();
        break;
    default:
        // throw an error, whatever; here we just return an empty pointer
        return nullptr;
    }
    ret.reset(cam);
    ret->Init();
    return ret;
}
int main(...) {
// get system camera type
std::unique_ptr<Camera> cam = InitCamera(cameraType);
// now we can call cam->TakeSnapshot
// and know that the correct version will be called.
}
So now we have a concrete instance that implements Camera. We can call TakeSnapshot without checking for the correct type anywhere in code because it doesn't matter; we know the correct version for the correct hardware will be called. Hope this helped.
Per your comment below:
I've been trying to use polymorphism, but I think the elements differ too much. For example, E_SessionTip has an amount and status element where E_Url just has a url. I could unify this under a property system but then I lose all the nice typing entirely. If you know of a way this can work though, I'm open to suggestions.
I would propose passing the responsibility for writing the XML data to your types which share a common interface. For example, instead of something like this:
void WriteXml(Entity *entity) {
switch(/* type of entity */) {
// get data from entity depending
// on its type and format
}
// write data to XML
}
Do something like this:
class SomeEntity : public EntityBase {
public:
    void WriteToXml(XmlStream &stream) {
        // write xml to the data stream.
        // the entity knows how to do this,
        // you don't have to worry about what data
        // there is to be written from the outside
    }
private:
    // your internal data
};
void WriteXml(Entity *entity) {
    XmlStream stream = GetStream();
    entity->WriteToXml(stream);
}
Does that work for you? I've done exactly this before and it worked for me. Let me know.
Double-dispatch may be of interest. The table (in your case) would call a virtual method of the base element, which in turn calls back into the table. This second call is made with the dynamic type of the object, so the appropriate overloaded method is found in the Table class.
#include <iostream>
class Table; //forward declare
class BaseElement
{
public:
virtual void addTo(Table* t);
};
class DerivedElement1 : public BaseElement
{
virtual void addTo(Table* t);
};
class DerivedElement2 : public BaseElement
{
virtual void addTo(Table* t);
};
class Table
{
public:
void addElement(BaseElement* e){ e->addTo(this); }
void addSpecific(DerivedElement1* e){ std::cout<<"D1"; }
void addSpecific(DerivedElement2* e){ std::cout<<"D2"; }
void addSpecific(BaseElement* e){ std::cout<<"B"; }
};
void BaseElement::addTo(Table* t){ t->addSpecific(this); }
void DerivedElement1::addTo(Table* t){ t->addSpecific(this); }
void DerivedElement2::addTo(Table* t){ t->addSpecific(this); }
int main()
{
Table t;
DerivedElement1 d1;
DerivedElement2 d2;
BaseElement b;
t.addElement(&d1);
t.addElement(&d2);
t.addElement(&b);
}
output: D1D2B
Have a look at the Visitor pattern; it might help you.
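For completeness, a minimal sketch of the classic Visitor pattern (the names are illustrative, not taken from the question's code base):
#include <iostream>

class ElementA;
class ElementB;

class Visitor
{
public:
    virtual void visit(ElementA&) = 0;
    virtual void visit(ElementB&) = 0;
    virtual ~Visitor() = default;
};

class Element
{
public:
    virtual void accept(Visitor& v) = 0;
    virtual ~Element() = default;
};

class ElementA : public Element
{
public:
    void accept(Visitor& v) override { v.visit(*this); }
};

class ElementB : public Element
{
public:
    void accept(Visitor& v) override { v.visit(*this); }
};

class Printer : public Visitor
{
public:
    void visit(ElementA&) override { std::cout << "A"; }
    void visit(ElementB&) override { std::cout << "B"; }
};

int main()
{
    Printer p;
    ElementA a;
    ElementB b;
    Element* elements[] = { &a, &b };
    for (Element* e : elements)
        e->accept(p); // dispatches on the dynamic type: prints "AB"
}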