How to create a simple HTTP server with Boost capable of receiving, editing, and sharing data? - c++

So, using any free, open-source, cross-platform library such as Boost, how do you create a web service capable of receiving a data stream (for example, a stream of MP3 frames) on one URL like http://adress:port/service1/write/ and capable of sharing the latest received data with all consumers on http://adress:port/service1/read/? Of course MP3 is just an example of packed streamable data - generally it can be anything packed. How do you create such a thing?
Generally, I am honestly trying to understand how to do this with the C++ Network Library, but it is just quite unclear to me.

The boost::asio documentation has four examples of complete HTTP server implementations, each with a slightly different threading architecture.
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/examples.html
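If you just need something to start from, below is a very rough synchronous sketch (not one of the documented examples; the port, the read handling, and the latest_data variable are placeholders) of accepting a connection and replying with the last payload that was stored. A real /write/ and /read/ service would parse the HTTP request line to decide whether to store the body or serve it, and would use one of the threading models from the examples above.

```cpp
#include <boost/asio.hpp>
#include <string>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;                                  // io_service in older Boost releases
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));  // port chosen arbitrarily for the sketch

    std::string latest_data = "no data yet";                     // would hold the last chunk POSTed to /write/

    for (;;) {
        tcp::socket socket(io);
        acceptor.accept(socket);

        // Read whatever the client sent (a real server must parse the HTTP
        // request line and headers to distinguish /write/ from /read/).
        boost::asio::streambuf request;
        boost::system::error_code ec;
        boost::asio::read_until(socket, request, "\r\n\r\n", ec);

        // Reply with the most recently stored payload.
        std::string response =
            "HTTP/1.0 200 OK\r\nContent-Type: application/octet-stream\r\n\r\n" + latest_data;
        boost::asio::write(socket, boost::asio::buffer(response), ec);
    }
}
```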

You do not say which platform you are targeting, but if Windows is an option, the Windows HTTP API is easy to use and a great performer.
http://msdn.microsoft.com/en-us/library/aa364510(VS.85).aspx

Related

ZeroC ICE vs 0MQ/ZeroMQ vs Crossroads IO vs Open Source DDS

How does ZeroC ICE compare to 0MQ? I know that 0MQ/Crossroads and DDS are very similar, but I can't seem to figure out where ICE comes in.
I need to quickly implement a system that offloads real-time market-data from C++ to C#, as a first phase of my project. The next phase will be to implement an Event Based architecture with an underlying Pub/Sub design.
I am willing to use TCP, but the system is currently running on a single 24-core server, so an IPC option would be nice. From what I understand, ICE is TCP-only, while DDS and 0MQ have an IPC option.
Currently, I am leaning towards using Protobuf with either ICE or Crossroads IO. I got turned off by the OpenSplice DDS website. I've done lots of research on the various options and was originally considering OpenMPI + boost::mpi, but there does not seem to be an MPI for .NET.
My question is:
How does ICE compare to 0MQ? I can't wrap my head around this. I was unable to find anything online that compares the two.
Thanks in advance.
........
More about my project:
Currently using CMake and C++ on Windows, but the plan is to move to CentOS at some point. An additional desired feature is to store the tick data and all the messages in a "NoSQL" database such as HBase/Hadoop or HDF5. Do any of these middleware/messaging/pub-sub libraries have any database integration?
Some thoughts about ZeroC:
Very fast; able to have multiple endpoints; able to load balance across the endpoints; able to reconnect to a different endpoint in case one of the nodes goes down, transparently to the end user; has a good tool chain (IceGrid, IceStorm, IceBox, etc.); distributed, high availability, multiple failover, etc.
Apart from that, I have used it for hot swapping code modules (something similar to Erlang) by having the client create the proxy with multiple endpoints, and later bringing down each endpoint for a quick upgrade, one by one. With the transparent retry to a different endpoint, I could keep the system up and running the whole time I did an upgrade. Not sure if this is an advertised feature or an unadvertised side-effect :)
Overall, it is very easy to scale out your servers if need be using ZeroC Ice.
I know ZeroMQ provides a fantastic set of tools and messaging patterns, and I would keep using it for my pet projects. However, the problem that I see is that it is very easy to go overboard and lose track of all your distributed components, and keeping track of them is a must in a distributed environment. How will you know where your clients/servers are when you need to upgrade? If one of the components down the chain does not receive a message, how do you identify where the issue is? The publisher? The client? Or any one of the bridges (REP/REQ, XREP/XREQ, etc.) in between?
Overall, ZeroC provides a much better toolset and ecosystem for enterprise solutions.
And it is open source :)
Jaybny,
ZMQ:
If you want really good performance and the only job for Phase 1 of your project is to move data from C++ to C#, then ZMQ is the best option.
Having a pub/sub model for an event-driven architecture is also something that ZMQ can help you with, via its built-in messaging patterns.
ZMQ also supports your IPC requirements in this case. E.g., you can have one instance of your application that consumes 24 cores by multithreading and communicating via IPC.
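As a rough illustration (not production code), a publisher over the ipc:// transport with the cppzmq header might look like the sketch below; the endpoint path, topic string, and payload are made up for the example, and the send call uses the older cppzmq signature, which differs slightly in newer releases.

```cpp
#include <zmq.hpp>
#include <string>

int main() {
    zmq::context_t ctx(1);

    // PUB socket bound to an IPC endpoint (on Windows, where ipc:// is not
    // available, a loopback tcp:// endpoint would be used instead).
    zmq::socket_t pub(ctx, ZMQ_PUB);
    pub.bind("ipc:///tmp/marketdata");

    // Subscribers filter on the message prefix, so the topic leads the payload.
    std::string tick = "AAPL 123.45";
    zmq::message_t msg(tick.data(), tick.size());
    pub.send(msg);  // newer cppzmq: pub.send(msg, zmq::send_flags::none)
}
```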
ZeroC Ice:
Ice is an RPC framework, very much like CORBA.
E.g.:
Socket/ZMQ - you send a message over the wire, read it at the other end, parse the message, perform some action, etc.
ZeroC Ice - you create a contract between client and server. The contract is nothing but a template of a class. The client calls a proxy method of that class, and the server implements/actions it and returns the value. Thus, int result = mathClass.Add(10,20) is what the client calls. The method, parameters, etc. are marshalled and sent to the server; the server implements the Add method, returns the result, and the client gets 30 as the result. Thus, on the client side, the API is nothing but a proxy for a servant running on a remote host.
Conclusion:
ZeroC ICE has some nice enterprisy features which are really good. However, for your project requirements, ZMQ is the right tool.
Hope this helps.
For me, the correct answer was Crossroads I/O. It does everything I need, though I am still unable to do pub/sub when using protobufs. I'm sure ZeroC ICE is great for distributed IPC, but 0MQ/Crossroads gives you the added flexibility to use inter-thread communication.
Note: on Windows, 0MQ does not have IPC.
So, all in all, the Crossroads fork of 0MQ is the best, but you will have to roll your own Windows IPC (or use tcp::127..) and your own publisher-side topic filtering for pub/sub.
nanomsg, from the guy who wrote Crossroads and 0MQ (I think).
http://nanomsg.org/

Reliable Multicast over local network

I am implementing a messaging system using C++ and Qt. After much thought, I have determined that multicasting or a multicast-style technique will work best to solve my problem. However, I have learned about UDP's unreliability and believe it to be unacceptable.
My requirements are as follows:
Messages are to be sent in a binary serialized form.
From any given node on the network, I must be able to send messages to the other nodes.
Message delivery must be ensured.
I have heard of OpenPGM and NORM as alternatives for UDP. If anyone has experience with either of these, could you please share?
I am also open to the possibility of implementing "reliable" multicast by myself, in the application layer, but I would prefer not to if there is a library that already implements this.
I am using C++ and Qt, therefore .NET or Java-based solutions are not acceptable unless they are open-source and I may port them to C++.
Thank you very much.
EDIT 20120816T1853 MDT: An additional question: would either PGM or NORM have to be implemented at the hardware/IP level? Or can they be overlaid on top of existing protocols?
We have implemented our own reliable multicast protocol over UDP called RSP, since we needed something cross-platform and at the time couldn't find a good solution between Linux and Windows. The Windows PGM implementation disconnects slow clients that fall out of the send window, whereas our implementation throttles the sender, similarly to TCP. AFAIK OpenPGM can be configured to do the same.
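Note that none of these need hardware/IP-level support: PGM, NORM, and home-grown protocols like RSP are all overlaid on ordinary UDP multicast sockets. As a rough sketch of that UDP baseline (arbitrary group address and port, and with none of the ACK/NAK or retransmission machinery that actually provides reliability), the receive side looks roughly like this:

```cpp
// Minimal POSIX sketch of joining a multicast group and receiving one datagram.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(30001);                        // arbitrary port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");  // arbitrary multicast group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[1500];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);         // no delivery guarantee here
    std::printf("received %zd bytes\n", n);

    close(sock);
}
```

A reliable layer (yours or a library's) adds sequence numbers, NAK/ACK traffic, and retransmission on top of exactly this kind of socket.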
The open source NORM at http://cs.itd.nrl.navy.mil/work/norm can be built as a library and has C++ API with Python and Java language bindings. If you ping the developer (me) via the mailing list, I can help get you started.
There is a large body of RFCs covering reliable multicast protocols, and many implementations out there. It's a long time since I looked at this, but there are TRAM, LRMP, ...

C++ library for dealing with multiple HTTP connections

I'm looking for a library to deal with multiple simultaneous HTTP connections (pref. on a single thread) to use in C++ in Windows so it can be Win32 API based. This is for a client application that must process a list of requests but keep 4 running at all times until the list is complete.
So far, I have tried cURL (multi interface), which seems to be the most appropriate option I have found, but my problem is that I may have a queue of 200 requests yet need to run only 4 of them at a time. This becomes problematic when one request may take 2 seconds and another may take 2 minutes, as you have to wait on all handles and receive the results of all requests in one block. If anyone knows a way around this it would be very useful.
I have also tried rolling my own using WinHTTP but I need to throttle the requests so they would ideally need to be on a single thread and use callbacks for data which WinHTTP does not do.
The best thing I've found which would solve all my problems is ASIHTTPRequest but unfortunately it's Mac OSX only.
Thanks,
J
Have you looked at boost.asio? http://www.boost.org/doc/libs/1_46_1/doc/html/boost_asio.html
It's meant to scale well and has HTTP server examples:
http://www.boost.org/doc/libs/1_46_1/doc/html/boost_asio/examples.html
Did you try Boost Asio?
It is multiplatform with stellar performance, and it comes with nice HTTP examples.
http://www.boost.org/doc/libs/1_46_1/doc/html/boost_asio.html
Asio is a great library, but it's generic; the HTTP examples are just that: examples. There's no support for redirection, authentication, and so on.
I know of two libraries built on top of Boost & Asio that support the HTTP protocol: cpp-netlib and Pion Network Library but AFAIK neither directly supports what you want.
All that being said if you're comfortable with using libcurl it should be pretty easy to use the "easy" interface with callbacks and implement the requests queue yourself.
libcurl's multi interface supports exactly what you're asking for.
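A hedged sketch of that approach is below: it keeps at most four easy handles in flight and tops the pool up as transfers finish, so a slow request no longer blocks the rest of the queue. The URL list and the write callback are placeholders; curl_multi_wait needs a reasonably recent libcurl (older versions use curl_multi_fdset + select).

```cpp
#include <curl/curl.h>
#include <deque>
#include <string>

// Placeholder sink; a real client would route each response somewhere useful.
static size_t sink(char*, size_t size, size_t nmemb, void*) { return size * nmemb; }

int main() {
    std::deque<std::string> queue(200, "http://example.com/");  // pending requests (placeholder URLs)
    const int kMaxParallel = 4;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();

    auto start_one = [&](const std::string& url) {
        CURL* easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url.c_str());
        curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, sink);
        curl_multi_add_handle(multi, easy);
    };

    int running = 0;
    while (running < kMaxParallel && !queue.empty()) {
        start_one(queue.front());
        queue.pop_front();
        ++running;
    }

    do {
        curl_multi_perform(multi, &running);
        curl_multi_wait(multi, nullptr, 0, 1000, nullptr);

        // Retire finished transfers and start new ones to keep 4 in flight.
        int msgs = 0;
        while (CURLMsg* msg = curl_multi_info_read(multi, &msgs)) {
            if (msg->msg == CURLMSG_DONE) {
                curl_multi_remove_handle(multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
                if (!queue.empty()) {
                    start_one(queue.front());
                    queue.pop_front();
                    ++running;
                }
            }
        }
    } while (running > 0 || !queue.empty());

    curl_multi_cleanup(multi);
    curl_global_cleanup();
}
```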

How to get started implementing a video streaming server in c/c++?

In my project I need a dedicated server that dispatches the streams over to multiple clients.
More specifically, I have a callback function that gets called to gather the stream data, but no idea how to stream it over to other applications.
What's the best way to get started on this ?
What type of video are you planning to stream?
There's an open source library called liveMedia available at http://www.live555.com. This C++ library is available under the LGPL and implements the RTSP and RTP/RTCP protocols and payload formats for many different media types. There is a class called DeviceSource, IIRC, that facilitates getting data into the library. There is an active mailing list and you should be able to find lots of information by searching the archives.
There are also a bunch of example test projects that illustrate how to stream mpeg, mp3, etc.
Should you choose to use standardized protocols, you might want to read up on RTP and RTSP.
I think you should look into communication through network sockets.
There is no networking concept in standard C++, so you have to rely on your system API or on libraries (such as Boost.Asio, for instance).

Adding SSL support to existing TCP & UDP code?

Here's my question.
Right now I have a Linux server application (written using C++ - gcc) that communicates with a Windows C++ client application (Visual Studio 9, Qt 4.5.)
What is the very easiest way to add SSL support to both sides in order to secure the communication, without completely gutting the existing protocol?
It's a VOIP application that uses a combination of UDP and TCP to initially set up the connection and do port tunneling stuff, and then uses UDP for the streaming data.
I've had lots of problems in the past with creating the security certificates from scratch that were necessary to get this stuff working.
Existing working example code would be ideal.
Thank you!
SSL is very complex, so you're going to want to use a library.
There are several options, such as Keyczar, Botan, cryptlib, etc. Each and every one of those libraries (or the libraries suggested by others, such as Boost.Asio or OpenSSL) will have sample code for this.
Answering your second question (how to integrate a library into existing code without causing too much pain): it's going to depend on your current code. If you already have simple functions that call the Winsock or socket methods to send/receive ints, strings, etc. then you just need to rewrite the guts of those functions. And, of course, change the code that sets up the socket to begin with.
On the other hand, if you're calling the Winsock/socket functions directly then you'll probably want to write functions that have similar semantics but send the data encrypted, and replace your Winsock calls with those functions.
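As a rough, non-authoritative sketch of that "replace the guts of your send/receive functions" idea using OpenSSL (one of the libraries mentioned here), the client-side TCP path might look like this; certificate loading, verification, and error handling are omitted, and the already-connected socket descriptor is assumed to come from your existing code.

```cpp
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <string>

// Hypothetical wrapper: same shape as a plain send_string(sock, data) helper,
// but the bytes go through an established TLS session instead of send().
bool ssl_send_string(SSL* ssl, const std::string& data) {
    return SSL_write(ssl, data.data(), static_cast<int>(data.size())) > 0;
}

SSL* wrap_connected_socket(int sockfd) {
    SSL_library_init();               // not needed with OpenSSL 1.1.0+
    SSL_load_error_strings();

    SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());  // TLS_client_method() on newer OpenSSL
    SSL* ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);          // attach to the already-connected TCP socket

    if (SSL_connect(ssl) != 1) {      // perform the TLS handshake
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return nullptr;
    }
    return ssl;                       // ctx is deliberately kept alive for the session in this sketch
}
```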
However, you may want to consider switching to something like Google Protocol Buffers or Apache Thrift (a.k.a. Facebook Thrift). Google's Protocol Buffers documentation says, "Prior to protocol buffers, there was a format for requests and responses that used hand marshalling/unmarshalling of requests and responses, and that supported a number of versions of the protocol. This resulted in some very ugly code. ..."
You're currently in the hand marshalling/unmarshalling phase. It can work, and in fact a project I work on does use this method. But it is a lot nicer to leave that to a library; especially a library that has already given some thought to updating the software in the future.
If you go this route you'll set up your network connections with an SSL library, and then you'll push your Thrift/Protocol Buffer data over those connections. That's it. It does involve extensive refactoring, but you'll end up with less code to maintain. When we introduced Protocol Buffers into the codebase of that project I mentioned, we were able to get rid of about 300 lines of marshalling/demarshalling code.
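If you went the Protocol Buffers route, the marshalling side shrinks to something like the sketch below; the VoipPacket message and its fields are invented for illustration and would be defined in a .proto file and generated with protoc.

```cpp
#include <string>
#include "voip_packet.pb.h"   // hypothetical header generated by protoc from voip_packet.proto

// Serialize a hypothetical message before handing the bytes to the
// (SSL-wrapped) socket layer; parsing on the receiving side is symmetric.
std::string marshal_packet(int session_id, const std::string& payload) {
    demo::VoipPacket packet;               // message type defined in the hypothetical .proto file
    packet.set_session_id(session_id);
    packet.set_payload(payload);

    std::string wire_bytes;
    packet.SerializeToString(&wire_bytes); // replaces hand-written marshalling code
    return wire_bytes;
}
```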
I recommend using GnuTLS on both the client and the server side, but only for the TCP connection. Forget about the UDP data for now. The GnuTLS documentation has example code for writing both clients and servers. Please understand that at least the server side (typically the TCP responder) needs to have a certificate; the client side can work with anonymous identification (although there is even an example without a server certificate, using only DH key exchange - which would allow man-in-the-middle attacks).
In general, it is likely that you will have to understand the principles of SSL, no matter what library you use. Library alternatives are OpenSSL (both Unix and Windows), and SChannel (only Windows).
Have you tried the SSL support in Boost.Asio or ACE? Both use OpenSSL under-the-hood, and provide similar abstractions for TCP, UDP and SSL. Sample code is available in both the Boost.Asio and ACE distributions.
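A minimal sketch of the Boost.Asio route for the TCP side is below (the host name and port are placeholders, and certificate verification is left out); it shows how the ssl::stream wraps the plain TCP socket so that reads and writes go through the handshake-established session.

```cpp
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <string>

namespace asio = boost::asio;
using asio::ip::tcp;

int main() {
    asio::io_context io;                                   // io_service in older Boost releases
    asio::ssl::context ctx(asio::ssl::context::sslv23);    // negotiates the best available protocol

    // The SSL stream wraps a plain TCP socket; all I/O goes through it.
    asio::ssl::stream<tcp::socket> stream(io, ctx);

    tcp::resolver resolver(io);
    asio::connect(stream.lowest_layer(),
                  resolver.resolve("example.com", "443")); // placeholder endpoint

    stream.handshake(asio::ssl::stream_base::client);      // TLS handshake over the TCP connection

    std::string message = "hello over TLS\n";
    asio::write(stream, asio::buffer(message));            // encrypted on the wire
}
```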
One thing you may need to keep in mind is that SSL is record-oriented, unlike the stream-oriented TCP underneath it. This may affect how you multiplex events, since you must, for example, read the full SSL record before you can consider a read operation complete.
To help handle this with no changes to the application, you may want to look at the stunnel project (http://www.stunnel.org/). I don't think that it will handle the UDP for you, though.
The yaSSL and CyaSSL embedded SSL/TLS libraries have worked well for me in the past. Being targeted at embedded systems, they are optimized for both speed and size. yaSSL is written in C++ and CyaSSL is written in C. In comparison, CyaSSL can be up to 20 times smaller than OpenSSL.
Both support the most current industry standards (up to TLS 1.2), offer some cool features such as stream ciphers, and are dual licensed under the GPLv2 and a commercial license (if you need commercial support).
They have an SSL tutorial which touches on adding CyaSSL into your pre-existing code as well: http://www.yassl.com/yaSSL/Docs-cyassl-manual-11-ssl-tutorial.html
Product Page: http://yassl.com/yaSSL/Products.html
Regards,
Chris