Wrapping Winsock functions in C++ classes - c++

I've seen some people creating a "is-a" relationship like the following:
class TCPClient : public Socket
{
public:
    TCPClient(const std::string& host, unsigned short port);
};
where the Socket class implements Winsock functions such as Connect(), Close(), Bind() etc.
But this doesn't feel natural to me, as a newbie to socket programming.
Does the above hierarchy make more logical sense than the following "has-a" counterpart?
class TCPClient
{
public:
    TCPClient(const std::string& host, unsigned short port);
    ....
private:
    Socket m_socket;
};

A TCPClient uses a socket or has a socket, but is not itself a socket, and you wouldn't normally expect to be able to substitute a TCPClient anywhere a socket was expected. As such, public inheritance doesn't make sense.
You could use private inheritance for this case, but (at least in a typical case) it probably doesn't make much sense either. Private inheritance makes sense primarily when the base class provides at least one virtual function you plan to override in the child class. If you have a virtual function and need to override it, you have no real choice but to use inheritance. I wouldn't expect a Socket class to have any virtual functions, though, so that wouldn't normally apply here.
That basically leads to your second solution: the TCPClient should contain an instance of a Socket, rather than using inheritance at all.
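To make the containment concrete, here is a minimal sketch of that second design. The Socket shown is a hypothetical, pared-down stand-in (a real one would wrap a SOCKET handle and the Winsock calls), so the example is self-contained:

```cpp
#include <cassert>
#include <string>

// Hypothetical minimal Socket stand-in; a real one would wrap a SOCKET
// handle and Winsock calls such as connect() and closesocket().
class Socket {
    bool open_ = false;
public:
    void Connect(const std::string& /*host*/, unsigned short /*port*/) { open_ = true; }
    void Close() { open_ = false; }
    bool IsOpen() const { return open_; }
};

// has-a: TCPClient owns a Socket and forwards only the operations it needs,
// so a TCPClient cannot be substituted where a raw Socket is expected.
class TCPClient {
    Socket m_socket;
public:
    TCPClient(const std::string& host, unsigned short port) { m_socket.Connect(host, port); }
    ~TCPClient() { m_socket.Close(); }
    bool Connected() const { return m_socket.IsOpen(); }
};
```

The client exposes its own, smaller interface and delegates to the member, which is exactly what containment buys you over inheritance.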
I should add, however, that the Socket class you've shown seems to conflate the notion of an actual socket with the notion of an address. My first socket class (years ago) worked about like that, but since then I've concluded that it's not really an ideal design. I've become convinced that it's worthwhile to keep the notion of an address separate from the socket itself. Though mine is a bit less elaborate, I find it interesting that what I came up with looks almost like it could have been the prototype from which Boost ASIO was derived. It's a little smaller and simpler, but a lot of the basic ideas are generally pretty similar anyway.
That leads to my next recommendation: take a look at Boost ASIO. Lacking a fairly specific reason to do otherwise, it's what I'd advise (and generally use) in most new code. Although (as I said above) I've written several socket classes over the years, I haven't used any of them in much (any?) new code in quite a while now -- they really only have two possible advantages over ASIO. The first applies only to me: since I wrote and used them before ASIO existed, I already understand them and how they work. The second may be similar: at least to me, they seem a little bit smaller and simpler (but, again, that may be just because I used them first). Even so, the advantage of (for example) using something other people already understand trumps those quite easily.

Use has-a. A TCPClient uses a socket like a person uses a telephone. Would you derive a Person from a Telephone?

class TCPClient : public Socket
{
public:
    TCPClient(const std::string& host, unsigned short port);
};
Network sockets are used for more than just TCP/IP, and the above design is more suitable if you plan to reuse your "Socket" class to implement other protocols over network sockets. For example:
class UDPClient : public Socket
{
};

I would say so. A socket is an abstraction, a file descriptor (UNIX) or handle (Windows), which has resources associated with it and is managed by the operating system. If we consider the OSI model, the socket fits well into the presentation layer (it presents, or describes, a communication channel between two nodes), whereas a program that uses the socket sits on the application layer.
Considering this, I would prefer not to inherit from the socket, unless I implement a kind of advanced socket (by analogy: C-pointer vs. smart pointer) to present and handle a logical connection and somehow manage the resources associated with the socket. If XYZClient is an application, whose goal is to implement some business or data processing logic, I would not mix these two concepts together and use the second approach (has-a).
I would split infrastructure/resource-specific and business/application-specific logic.

Related

C++ design pattern for sockets

The goal is to be able to support both IPv4 and IPv6 for a rather complex application. At present, only IPv4 is handled. There is a class called Socket and a class called TlsSocket which derives from it. Let's say that Socket has a set of methods M1, M2, ..., M9. TlsSocket overrides M7, M8 and M9.
Given the current design I was thinking of making Socket an abstract class and extending it twice - SocketIPv4 and SocketIPv6 which would implement methods M5 and M6 differently. However, then I would have to extend them twice again to have a TLS version for both IPv4 socket and IPv6 socket leading to code duplication. I was looking at the best design pattern for the problem at hand and I was convinced that the decorator design pattern would work best.
However, then TlsSocket would inherit from the abstract Socket class and then be composed of a concrete implementation of Socket (either IPv4 or IPv6). Therefore, I would essentially be initializing two Socket instances (one for composition and the other is TlsSocket itself) pointing to the same file descriptor. Everything should work fine but I am slightly uncomfortable initializing two socket instances pointing to the same file descriptor. Is there an alternative design pattern that I have missed and should consider?
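One way to sidestep the two-instances-one-descriptor discomfort is to let the decorator own the wrapped socket outright, so only the inner object ever manages the descriptor. A rough sketch (the method names are invented for illustration, standing in for M1..M9):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical abstract Socket; SocketIPv4/SocketIPv6 would differ only in
// their address handling (the M5/M6 equivalents).
struct Socket {
    virtual ~Socket() = default;
    virtual std::string send(const std::string& data) = 0;  // stand-in for M1..M9
};

struct SocketIPv4 : Socket {
    std::string send(const std::string& data) override { return "v4:" + data; }
};

// Decorator: TlsSocket owns the wrapped socket rather than duplicating its
// descriptor, so there is only ever one object managing the fd.
struct TlsSocket : Socket {
    explicit TlsSocket(std::unique_ptr<Socket> inner) : inner_(std::move(inner)) {}
    std::string send(const std::string& data) override {
        return inner_->send("tls(" + data + ")");  // pretend-encrypt, then delegate
    }
private:
    std::unique_ptr<Socket> inner_;
};
```

With ownership expressed this way, TlsSocket composes either concrete protocol class without any TLS/IPv4/TLS/IPv6 duplication.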

Why is there no asio::ssl::iostream? (and how to implement it)

I'm currently exploring the Asio library and have working code for regular TCP connections. I used asio::ip::tcp::iostream objects, since the objects I want to transmit can already serialize to/deserialize from iostreams, so this was really handy and worked well for me.
I then tried to switch to SSL connections and that's when everything turned crazy. There is apparently no built-in support to get the same iostream interface that all other protocols support for a secured connection. From a design perspective this is really perplexing to me. Is there any reason why this is the case?
I am aware of the discussion in How to create a boost ssl iostream?, which concludes with a wrapper class to provide iostream functionality using Boost. Apart from the fact that, according to a comment, the implementation is flawed, it also does not give the same interface as the other protocols (a basic_socket_iostream), which also allows one to e.g. set expiration times and close the connection. (I am also using Asio in the non-Boost version and want to avoid adding Boost as an additional dependency if possible.)
So, I guess my questions are:
What exactly would I need to implement to get a basic_socket_iostream for an SSL connection? I assume it would be a derivation of asio::basic_streambuf or asio::basic_socket_streambuf, but I somehow can't figure out how they work and how they would need to be tweaked... there's just a bunch of weird pointer movement and buffer allocation, and the documentation is quite unclear to me on what happens when, exactly, to achieve what...
Why is this not already present in the first place? It seems very unreasonable to have this one protocol behave entirely differently from any other, and thus require major refactoring to change a tcp::iostream-based project to support secured connections.
> Well, the problem I have is that the ssl::stream really does neither: it doesn't give me a socket, but it also doesn't give me a stream interface compatible with those available for the other protocols, and, yes, in that sense it behaves very differently from the others (for no apparent reason)
I don't think the stream behaves any differently from the other protocols (see
https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/overview/core/streams.html):
Streams, Short Reads and Short Writes
Many I/O objects in Boost.Asio are stream-oriented. This means that:
There are no message boundaries. The data being transferred is a continuous sequence of bytes.
Read or write operations may transfer fewer bytes than requested. This is referred to as a short read or short write.
Objects that provide stream-oriented I/O model one or more of the following type requirements:
SyncReadStream, where synchronous read operations are performed using a member function called read_some().
AsyncReadStream, where asynchronous read operations are performed using a member function called async_read_some().
SyncWriteStream, where synchronous write operations are performed using a member function called write_some().
AsyncWriteStream, where asynchronous write operations are performed using a member function called async_write_some().
Examples of stream-oriented I/O objects include ip::tcp::socket, ssl::stream<>, posix::stream_descriptor, windows::stream_handle, etc.
Perhaps the confusion is that you're comparing to the iostream interface, which is simply not the same concept (it comes from the standard library).
To the question of how you could make an iostream-compatible stream wrapper for the ssl stream: I cannot devise an answer without consulting the documentation more and using a compiler, which I don't have on hand at the moment.
I think there is room for improvement in the library here. If you read the ip::tcp::iostream class (i.e. basic_socket_iostream<ip::tcp>), you'll see that it has two base classes:
private detail::socket_iostream_base<ip::tcp>
public std::basic_iostream<char>
The former contains a basic_socket_streambuf<ip::tcp> (a derived class of std::streambuf and basic_socket<ip::tcp>), whose address is passed to the latter at construction-time.
For the most part, basic_socket_streambuf<ip::tcp> performs the actual socket operations via its basic_socket<ip::tcp> base class. However, there is the connect_to_endpoints() member function that jumps the abstraction and calls several low-level functions from the detail::socket_ops namespace directly on socket().native_handle(). (This seems to have been introduced in Git commit b60e92b13e.) Those functions will only work on TCP sockets, even though the class is a template for any protocol.
Until I discovered this issue, my plan to integrate SSL support as an iostream/streambuf was to provide an ssl protocol class and a basic_socket<ssl> template specialization to wrap the existing ssl::context and ssl::stream<ip::tcp::socket> classes. Something like this (won't compile):
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/basic_socket.hpp>
#include <boost/asio/ssl.hpp>

namespace boost {
namespace asio {
namespace ip {

class ssl
    : public tcp // for reuse (I'm lazy!)
{
public:
    typedef basic_socket_iostream<ssl> iostream;
    // more things as needed ...
};

} // namespace ip

template <>
class basic_socket<ip::ssl>
{
    class SslContext
    {
        ssl::context ctx;
    public:
        SslContext() : ctx(ssl::context::sslv23_client)
        {
            ctx.set_options(ssl::context::default_workarounds);
            ctx.set_default_verify_paths();
        }
        ssl::context & context() { return ctx; }
    } sslContext;

    ssl::stream<ip::tcp::socket> sslSocket;

public:
    explicit basic_socket(const executor & ex)
        : sslSocket(ex, sslContext.context())
    {}

    executor get_executor() noexcept
    {
        return sslSocket.lowest_layer().get_executor();
    }

    void connect(const ip::tcp::endpoint & endpoint_)
    {
        sslSocket.next_layer().connect(endpoint_);
        sslSocket.lowest_layer().set_option(ip::tcp::no_delay(true));
        sslSocket.set_verify_mode(ssl::verify_peer);
        sslSocket.set_verify_callback(
            ssl::rfc2818_verification("TODO: pass the domain here through the stream/streambuf somehow"));
        sslSocket.handshake(ssl::stream<ip::tcp::socket>::client);
    }

    void close()
    {
        sslSocket.shutdown();
        sslSocket.next_layer().close();
    }
};

} // namespace asio
} // namespace boost
But due to the design issue I'll have to specialize basic_socket_streambuf<ip::ssl> as well, to avoid the detail::socket_ops routines. (I should also avoid injecting the ssl protocol class into the boost::asio::ip namespace, but that's a side concern.)
Haven't spent much time on this, but it seems doable. Fixing basic_socket_streambuf<>::connect_to_endpoints() first should help greatly.

Object-oriented networking

I've written a number of networking systems and have a good idea of how networking works. However, I always end up with a packet-receive function that is a giant switch statement. This is beginning to get to me. I'd much rather have a nice, elegant, object-oriented way to handle receiving packets, but every time I try to come up with a good solution I always end up coming up short.
For example, let's say you have a network server. It is simply waiting there for responses. A packet comes in, and the server needs to validate the packet and then decide how to handle it.
At the moment I have been doing this by switching on the packet id in the header and then having a huge bunch of function calls that handle each packet type. With complicated networking systems this results in a monolithic switch statement and I really don't like handling it this way. One way I've considered is to use a map of handler classes. I can then pass the packet to the relevant class and handle the incoming data. The problem I have with this is that I need some way to "register" each packet handler with the map. This means, generally, I need to create a static copy of the class and then in the constructor register it with the central packet handler. While this works it really seems like an inelegant and fiddly way of handling it.
Edit: Equally, it would be ideal to have a nice system that works both ways, i.e. a class structure that easily handles sending the same packet types as well as receiving them (through different functions, obviously).
Can anyone point me towards a better way to handle incoming packets? Links and useful information are much appreciated!
Apologies if I haven't described my problem well as my inability to describe it well is also the reason I've never managed to come up with a solution.
About the way to handle the packet type: for me, the map is the best approach. However, I'd use a plain array (or a vector) instead of a map. It makes access time constant if you enumerate your packet types sequentially from 0.
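A sketch of that suggestion, with packet ids enumerated from 0 and a made-up Packet layout (first byte = type id) purely for illustration:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical packet: first byte is the type id, the rest is payload.
using Packet = std::vector<uint8_t>;
using Handler = std::function<int(const Packet&)>;  // result code, for the demo

constexpr std::size_t kMaxPacketTypes = 256;

// Handlers indexed directly by packet id: constant-time dispatch, no switch.
std::array<Handler, kMaxPacketTypes> g_handlers{};

int dispatch(const Packet& p) {
    if (p.empty() || !g_handlers[p[0]]) return -1;  // unknown or unregistered id
    return g_handlers[p[0]](p);
}
```

Registration is a plain assignment into the array, which avoids the static-instance-per-handler trick described in the question.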
As to the class structure: there are libraries that already do this job, see Available Game network protocol definition languages and code generation. E.g. Google's Protocol Buffers seems promising. It generates a storage class with getters, setters, and serialization and deserialization routines for every message in the protocol description. The protocol description language is reasonably rich.
A map of handler instances is pretty much the best way to handle it. Nothing inelegant about it.
In my experience, table driven parsing is the most efficient method.
Although std::map is nice, I end up using static tables. The std::map cannot be statically initialized as a constant table. It must be loaded during run-time. Tables (arrays of structures) can be declared as data and initialized at compile time. I have not encountered tables big enough where a linear search was a bottleneck. Usually the table size is small enough that the overhead in a binary search is slower than a linear search.
For high performance, I'll use the message data as an index into the table.
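A minimal version of such a compile-time table might look like this (the message ids and handlers are invented for the example):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical message handlers.
int handleHello(const uint8_t* /*payload*/) { return 1; }
int handleData(const uint8_t* /*payload*/)  { return 2; }

// Static table: declared as data, initialized at compile time.
struct Entry { uint8_t id; int (*handler)(const uint8_t*); };
constexpr Entry kTable[] = {
    {0x01, handleHello},
    {0x02, handleData},
};

// Linear search; at typical table sizes this beats fancier lookups.
int dispatch(uint8_t id, const uint8_t* payload) {
    for (const Entry& e : kTable)
        if (e.id == id) return e.handler(payload);
    return -1;  // unknown message id
}
```

If the ids are dense, the table can instead be indexed directly by the message id, as the answer suggests for high performance.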
When you are doing OOP, you try to represent everything as an object, right? So your protocol messages become objects too; you'll probably have a base class YourProtocolMessageBase which encapsulates any message's behavior and from which you will inherit your polymorphically specialized messages. Then you just need a way to turn every message (i.e. every YourProtocolMessageBase instance) into a string of bytes, and a way to do the reverse. Such methods are called serialization techniques; some metaprogramming-based implementations exist.
Quick example in Python:
from socket import *
sock = socket(AF_INET6, SOCK_STREAM)
sock.bind(("localhost", 1234))
sock.listen(1)
rsock, addr = sock.accept()
Server blocks, fire up another instance for a client:
from socket import *
clientsock = socket(AF_INET6, SOCK_STREAM)
clientsock.connect(("localhost", 1234))
Now use Python's built-in serialization module, pickle; client:
import pickle
obj = {1: "test", 2: 138, 3: ("foo", "bar")}
clientsock.send(pickle.dumps(obj))
Server:
>>> import pickle
>>> r = pickle.loads(rsock.recv(1000))
>>> r
{1: 'test', 2: 138, 3: ('foo', 'bar')}
So, as you can see, I just sent a Python object over a local connection. Isn't this OOP?
I think the only possible alternative to serializing is maintaining a bimap of IDs ⇔ classes. That seems unavoidable.
You want to keep using the same packet network protocol, but translate it into an object in your program, right?
There are several protocols that allow you to treat data as programming objects, but it seems you don't want to change the protocol, just the way it's treated in your application.
Do the packets come with something like a "tag", metadata, an "id", or a "data type" that allows you to map to a specific object class? If so, you can create an array that stores the id and the matching class, and generate an object from it.
A more OO way to handle this is to build a state machine using the state pattern.
Handling incoming raw data is parsing, where state machines provide an elegant solution (you will have to choose between elegance and performance).
You have a data buffer to process; each state has a handle-buffer method that parses and processes its part of the buffer (if already possible) and sets the next state based on the content.
If you want to go for performance, you still can use a state machine, but leave out the OO part.
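For illustration, here is the non-OO flavor: an enum-driven state machine parsing a hypothetical framing (one length byte followed by that many payload bytes); the framing is invented purely for the example:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Enum-driven parser: each state consumes bytes and decides the next state.
class FrameParser {
    enum class State { Length, Payload } state_ = State::Length;
    std::size_t remaining_ = 0;
    std::string current_;
public:
    std::vector<std::string> frames;  // completed frames, in arrival order

    void feed(uint8_t byte) {
        switch (state_) {
        case State::Length:
            remaining_ = byte;
            current_.clear();
            if (remaining_ == 0) frames.push_back(current_);  // empty frame
            else state_ = State::Payload;
            break;
        case State::Payload:
            current_.push_back(static_cast<char>(byte));
            if (--remaining_ == 0) {
                frames.push_back(current_);
                state_ = State::Length;
            }
            break;
        }
    }
};
```

The OO state-pattern variant would replace the enum and switch with one class per state; the structure of feed() stays the same.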
I would use Flatbuffers and/or Cap’n Proto code generators.
I solved this problem as part of my B.Tech in network security and network programming, and I can assure you it's not one giant packet switch statement. The library is called cross platform networking; I modeled it around the OSI model, exposing everything as simple object serialization. The repository is here: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/
There are countless protocols, like NACK, HTTP, TCP, UDP, RTP, and multicast, and they are all invoked via C++ metatemplates. OK, that is the summarized answer; now let me dive a bit deeper and explain how you solve this problem and why this library can help you out, whether you design it yourself or use the library.
First, let's talk about design patterns in general. To keep things nicely organized, you first need some design patterns as a way to frame your problem. For my C++ templates, I framed it initially around the OSI model (https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_layer) down to the transport level (which becomes sockets at that point). To recap OSI:
Application Layer: What it means to the end user. IE signals getting deserialized or serialized and passed down or up from the networking stack
Presentation: Data independence from application and network stack
Session: dialogues between sessions
Transport: transporting the packets
But here's the kicker: when you look at these closely, they aren't design patterns but more like namespaces around transporting from A to B. So, for the end user, I designed cross platform network with the following standardized C++ metatemplate:
template <class TFlyWeightServerIncoming, // a class representing the server's incoming payload. Note: a flyweight is a design pattern that is a union of types, i.e. putting things together. This is where you pack your incoming objects
          class TFlyWeightServerOutgoing, // a class representing the server's outgoing payload of different types
          class TServerSession, // a hook class that represents how to translate the payload in the form of a session-layer translation. The key is to stay true to separation of concerns (https://en.wikipedia.org/wiki/Separation_of_concerns)
          class TInitializationParameters> // a class representing initialization of the server (ie ports, etc.)
two examples: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Transport/TCP/TCPTransport.h
https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Transport/HTTP/HTTPTransport.h
And each protocol can be invoked like this:
OSI::Transport::Interface::ITransportInitializationParameters init_parameters;
const size_t defaultTCPPort = 80;
init_parameters.ParseServerArgs(&(*argv), argc, defaultTCPPort, defaultTCPPort);
OSI::Transport::TCP::TCP_ServerTransport<
    SampleProtocol::IncomingPayload<OSI::Transport::Interface::ITransportInitializationParameters>,
    SampleProtocol::OutgoingPayload<OSI::Transport::Interface::ITransportInitializationParameters>,
    SampleProtocol::SampleProtocolServerSession<OSI::Transport::Interface::ITransportInitializationParameters>,
    OSI::Transport::Interface::ITransportInitializationParameters
> tcpTransport(init_parameters);
tcpTransport.RunServer();
citation:
https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Application/Stub/TCPServer/main.cc
I also have, in the code base under MVC, a full MVC implementation that builds on top of this, but let's get back to your question. You mentioned:
"At the moment I have been doing this by switching on the packet id in the header and then having a huge bunch of function calls that handle each packet type."
" With complicated networking systems this results in a monolithic switch statement and I really don't like handling it this way. One way I've considered is to use a map of handler classes. I can then pass the packet to the relevant class and handle the incoming data. The problem I have with this is that I need some way to "register" each packet handler with the map. This means, generally, I need to create a static copy of the class and then in the constructor register it with the central packet handler. While this works it really seems like an inelegant and fiddly way of handling it."
In cross platform network the approach to adding new types is as follows:
In cross platform network, the approach to adding new types is as follows: after you have defined the server type, you just need to make the incoming and outgoing types. The actual mechanism for handling them is embedded within the incoming object type. The methods within it are ToString(), FromString(), size() and max_size(). These deal with the security concerns of keeping the layers below the application layer secure. But since you're defining object handlers, you now need to write the translation code for the different object types. You'll need, at minimum, within this object:
1. A list of enumerated object types for the application layer. This could be as simple as numbering them. But for things like the session layer, have a look at session-layer concerns (for instance, RTP has things like jitter and how to deal with an imperfect connection, i.e. session concerns). You could also switch from an enumeration to a hash/map, but that's just another way of dealing with the problem of how to look up the variable.
2. Serialize and deserialize methods for the object (for both incoming and outgoing types).
3. After you serialize or deserialize, the logic to dispatch it to the appropriate internal design pattern to handle the application layer. This could be a builder, a command, or a strategy; it really depends on the use case. In cross platform network, some concerns are delegated to the TServerSession layer and others to the incoming and outgoing classes. It just depends on the separation of concerns.
4. Dealing with performance concerns, i.e. not blocking (which becomes a bigger concern when you scale up concurrent users).
5. Dealing with security concerns (pen testing).
If you're curious, you can review my API implementation; it's a single-threaded async Boost reactor implementation, and when you combine it with something like mimalloc (to override new/delete) you can get very good performance. I measured around 50k connections on a single thread easily.
But yeah, it's all about framing your server in good design patterns, separating the concerns, and selecting a good model to represent the server design. I believe the OSI model is appropriate for that, which is why I built cross platform network around it to provide object-oriented networking.

Implementation communication protocols in C/C++

I am in the process of starting to implement some proprietary communication protocol stack in software but not sure where to start. It is the kind of work I have not done before and I am looking for help in terms of resources for best/recommended approaches.
I will be using C/C++ and I am free to use libraries (BSD/Boost/Apache licensed) but no GPL. I have used C++ extensively, so using the features of C++ is not a problem.
The protocol stack has three layers and is already fully specified and formally verified. So all I need to do is implement and test it fully in the specified languages. I should also mention that the protocol is very simple but can run on different devices over a reliable physical transport layer. I know the events, inputs, outputs, side effects and the behaviour of the protocol state machine(s). Generally, an interrupt is received, the message from the physical layer is read, and it is sent to the waiting device. The receiving device can process the message and pass the response to the protocol layer to send out on the physical layer.
Any help with references/recommendations will be appreciated. I am willing to use a different language if only to help me understand how to implement them but I will have to eventually resort to the language of choice.
Update: An example protocol I wish to implement is something like SNEP.
I do not need to worry about connection management. We can assume the connection is already established, and all the protocol does is data exchange, where the protocol messages are already well defined in the specifications.
Start with interfaces and messages.
Declare the session interfaces that allow peers to exchange messages. Declare the messages as C++ structs with simple types, like ints, doubles, std::strings and std::vectors. For example:
#include <cstdint>
#include <string>

// these are your protocol messages
struct HelloRequest {
    uint32_t seq_no;
    // more stuff
};
struct HelloResponse {
    uint32_t seq_no;
    // more stuff
};

struct Session; // forward declaration, so the callbacks below can refer to it

// Session callback for received messages
struct SessionReceiver {
    virtual void connected(Session*) = 0;
    virtual void receive(Session* from, HelloRequest msg) = 0;
    virtual void receive(Session* from, HelloResponse msg) = 0;
    virtual void disconnected(Session*) = 0;
};

// Session interface to send messages
struct Session {
    virtual void send(HelloRequest msg) = 0;
    virtual void send(HelloResponse msg) = 0;
};

// this connects asynchronously and then calls SessionReceiver::connected() with a newly established session
struct SessionInitiator {
    virtual void connect(SessionReceiver* cb, std::string peer) = 0;
};

// this accepts connections asynchronously and then calls SessionReceiver::connected() with a newly accepted session
struct SessionAcceptor {
    virtual void listen(SessionReceiver* cb, std::string port) = 0;
};
Then test your interfaces by coding the business logic that uses them. Once you are confident that the interfaces allow you to implement the required logic, implement the interfaces and the serialization of your messages using your preferred event-driven framework, such as libevent or Boost.Asio.
Edit:
Note that interfaces allow you to have mock or test implementations. Also, the fact that serialization happens behind the interface means that for in-process peers you don't have to serialize and deserialize the messages; you can pass them as-is.
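For example, a mock session along these lines can simply record outgoing messages, so the business logic is testable without any transport (simplified re-declarations of the sketch above; the names are illustrative, not from a library):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for the message and interface declarations above.
struct HelloRequest { uint32_t seq_no; };

struct Session {
    virtual ~Session() = default;
    virtual void send(HelloRequest msg) = 0;
};

// Mock session: records outgoing messages instead of serializing them.
struct MockSession : Session {
    std::vector<HelloRequest> sent;
    void send(HelloRequest msg) override { sent.push_back(msg); }
};

// Business logic written against the interface, unaware of the transport.
void sayHello(Session& s, uint32_t seq) { s.send(HelloRequest{seq}); }
```

A test can then assert on MockSession::sent rather than sniffing bytes off a socket.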
Boost.ASIO is pretty cutting edge when it comes to asynchronous (or synchronous) network communication in C++.
Have a look at Google Protocol Buffers.
From the description:
Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages. You can even update your data structure without breaking deployed programs that are compiled against the "old" format.
Protocol Buffers are language- and platform-neutral, so they should fit into your project. I couldn't find the license, but at least it doesn't say "GPL" anywhere that I could find.
This will help you with the protocols. As for the actual data transmission: unless you are writing the OS yourself, there should be OS primitives you can use. It's hard to give more exact implementation help unless you provide a bit more detail. For instance, what communication channel are you using? Ethernet?
But as a rule of thumb, you should make the ISR as short as possible. In these kinds of solutions, that usually means copying data to a ring buffer. This way you don't have to allocate memory in the ISR. The ISR, after having copied the data, should inform the upper layers of the packet. If you can use DMA, use that; in that case it might even be possible to send the notification before the DMA transfer starts.
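A single-producer/single-consumer ring buffer along those lines might look like this sketch (the sizes and memory-ordering choices are illustrative; a real ISR-facing version must follow your platform's concurrency rules):

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// SPSC ring buffer: the ISR only copies bytes in; the upper layer drains it
// later. No allocation ever happens on the ISR path.
template <std::size_t N>
class RingBuffer {
    std::array<uint8_t, N> buf_{};
    std::atomic<std::size_t> head_{0};  // advanced by the producer (ISR)
    std::atomic<std::size_t> tail_{0};  // advanced by the consumer
public:
    bool push(uint8_t b) {  // called from the ISR
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        buf_[head] = b;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(uint8_t& b) {  // called from the upper layer
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
        b = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }
};
```

One slot is sacrificed to distinguish full from empty, so a RingBuffer<N> holds at most N-1 bytes.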
You might also want to check out Linux Device Drivers, chapter 10 in particular. Check out the part about Bottom and Top Halves.

TDD, Unit Test and architectural changes

I'm writing an RPC middleware in C++. I have a class named RPCClientProxy that contains a socket client inside:
class RPCClientProxy {
    ...
private:
    Socket* pSocket;
    ...
};
The constructor:
RPCClientProxy::RPCClientProxy(const std::string& host, unsigned short port) {
    pSocket = new Socket(host, port);
}
As you can see, I don't need to tell the user that I have a socket inside.
However, to unit test my proxies it is necessary to create mock sockets and pass them to the proxies, and to do so I must either use a setter or pass a socket factory to the proxies' constructors.
My question: According to TDD, is it acceptable to do this ONLY because of the tests? As you can see, these changes would alter the way the library is used by a programmer.
I don't adhere to a particular canon; I would say that if you think you would benefit from testing through a mock socket, then do it. You could implement a parallel constructor:
RPCClientProxy::RPCClientProxy(Socket* socket)
{
    pSocket = socket;
}
Another option would be to implement a test host to connect to, one that you can configure to expect certain messages.
What you describe is a perfectly normal situation, and there are established patterns that can help you implement your tests in a way that won't affect your production code.
One way to solve this is to use a Test-Specific Subclass, where you add a setter for the socket member and use a mock socket in the case of a test. Of course, you would need to make the variable protected rather than private, but that's probably no biggie. For example:
class RPCClientProxy
{
    ...
protected:
    Socket* pSocket;
    ...
};

class TestableClientProxy : public RPCClientProxy
{
public:
    TestableClientProxy(Socket *pSocket)
    {
        this->pSocket = pSocket;
    }
};

void SomeTest()
{
    MockSocket *pMockSocket = new MockSocket(); // or however you do this in your world.
    TestableClientProxy proxy(pMockSocket);
    ....
    assert(pMockSocket->foo);
}
In the end it comes down to the fact that you often (more often than not in C++) have to design your code in such a way as to make it testable, and there is nothing wrong with that. If you can avoid these decisions leaking out into the public interfaces, that may sometimes be better, but in other cases it can be better to choose, for example, dependency injection through constructor parameters over, say, using a singleton to provide access to a specific instance.
Side note: it's probably worth taking a look through the rest of the xunitpatterns.com site: there is a whole load of well-established unit-testing patterns to understand, and hopefully you can gain from the knowledge of those who have been there before you :)
Your issue is more a problem of design.
If you ever wish to implement another behavior for Socket, you're toast, as it involves rewriting all the code that creates sockets.
The usual idea is to use an abstract base class (interface) Socket and then use an Abstract Factory to create the socket you wish, depending on the circumstances. The factory itself could be either a Singleton (though I prefer Monoid) or passed down as an argument (according to the tenets of Dependency Injection). Note that the latter means no global variable, which is much better for testing, of course.
So I would advise something along the lines of:
int main(int argc, char* argv[])
{
    SocketsFactoryMock sf;

    std::string host, port;
    // initialize them

    std::unique_ptr<Socket> socket = sf.create(host, port);
    RPCClientProxy rpc(std::move(socket));
}
It has an impact on the client: you no longer hide the fact that you use sockets behind the scenes. On the other hand, it gives control to the client, who may wish to develop some custom sockets (to log, to trigger actions, etc.).
So it IS a design change, but it is not caused by TDD itself. TDD just takes advantage of the higher degree of control.
Also note the clear resource ownership expressed by the use of unique_ptr.
As others have pointed out, a factory architecture or a test-specific subclass are both good options in this situation. For completeness, one other possibility is to use a default argument:
RPCClientProxy::RPCClientProxy(Socket *socket = NULL)
{
    if (socket == NULL) {
        socket = new Socket();
    }
    //...
}
This is perhaps somewhere between the factory paradigm (which is ultimately the most flexible, but more painful for the user) and newing up a socket inside your constructor. It has the benefit that existing client code doesn't need to be modified.