Here is my problem: I have a C++ application that consists of a Qt GUI and quite a lot of backend code. Currently it is linked into one executable and runs on Solaris. Now I would like to run the GUI on Windows and leave the rest of the code running on Solaris (porting it would be a huge effort). The interface between the GUI and the backend is pretty clean and consists of one C++ abstract class (it also uses some STL containers). This is the part I would like to turn into a web service.
The problem is that our backend code is not thread-safe, so I will need to run a separate process on Solaris for every GUI instance on Windows. However, for performance reasons I cannot start and stop a process for every request from the GUI.
This design means that I need to take care of several problems:
there must be a single point of contact for the GUI code,
the communication must happen with the instance started during the first call (either the messages should be routed to it, or the first call should return the address of the actual server instance),
there must be some keep-alive messages sent between the GUI and the server process to manage the server process's lifetime (the server process cannot run forever).
Could you recommend a framework that would take care of these details (message routing/dispatching and lifetime management)?
You could technically configure Apache httpd to spawn a new instance per connection. The configuration also allows you to manage the time the processes stay alive when idle, and how many processes to leave running at a minimum. This would work well as long as the web service is stateless. A little weird, but technically feasible.
If you use something like gSoap, you can compile your C++ classes on Solaris directly into a gSoap module and won't have to adapt them to any front-end like PHP or Java. It'll just plug into Apache httpd and start working.
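To give an idea of the shape of a gSoap service, here is a rough sketch of an interface header plus a standalone server loop. The service name, namespace, operation, and port are made-up placeholders, and with mod_gsoap the accept loop below would be replaced by Apache itself:

```cpp
/* backend.h -- gSoap interface header (fed to soapcpp2 to generate the
   skeletons). Service name, namespace and operation are invented here. */
//gsoap ns service name: Backend
//gsoap ns service namespace: urn:backend
int ns__runQuery(std::string request, std::string &response);

/* server.cpp -- standalone gSoap server; one process per GUI instance,
   so the non-thread-safe backend is never called concurrently. */
#include "soapH.h"        // generated by soapcpp2
#include "Backend.nsmap"  // generated namespace table

int ns__runQuery(struct soap *soap, std::string request, std::string &response)
{
    // Call into the existing backend through your abstract C++ interface here.
    response = "result for: " + request;
    return SOAP_OK;
}

int main()
{
    struct soap soap;
    soap_init(&soap);
    if (soap_valid_socket(soap_bind(&soap, NULL, 8080, 100))) {
        for (;;) {
            if (!soap_valid_socket(soap_accept(&soap)))
                break;
            soap_serve(&soap);   // dispatch to the generated skeleton
            soap_destroy(&soap); // free deserialized C++ objects
            soap_end(&soap);     // free temporary data
        }
    }
    soap_done(&soap);
    return 0;
}
```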
Edit:
I just thought about it, and you could probably use HTTP 1.1 keep-alives to manage the life of the process too. Apache lets you configure how long it will allow the keep-alive to remain open, which keeps the thread/process for the connection active.
I am writing a simple web server in C++ that handles long-lived connections. However, I need to reload my web server from time to time. I wonder if there is a way to hand the established connections over from one process to another so that they survive the reload.
Would it be enough to pass only the file descriptors? What would happen to the connection states?
Any similar open source project that does the same thing?
Any thoughts or ideas?
Thanks,
I really have no idea whether this is possible, but I think not. If you fork(), the child will "inherit" the descriptors, but I don't know whether they behave like they should (though I suspect they do). And with forking alone you can't run new code (can you?). Plain descriptor numbers are process-specific, so just passing the numbers to a new, unrelated process won't work either, and the descriptors will be closed when your process terminates anyway.
One solution (in the absence of a simpler one) is to break your server into two processes:
Front-end: A very simple process that just accepts the connections, keeps them open, and forwards any data it receives to the second process, and vice versa.
Server: The real web server, that does all the logic and processing, but does not communicate with the clients directly.
The first and second processes communicate via a simple protocol. One requirement of this protocol is that it must support the second process being terminated and relaunched.
Now you can reload the actual server process without losing the client connections (since they are held by the front-end process). And since this front-end is extremely simple and probably has little configuration and few bugs, you will rarely need to reload it at all. (I'm assuming that you need to reload your server process because it runs into bugs that need fixing, or because you need to change its configuration.)
Another important and helpful feature this system can have is the ability to transition between server processes "gradually". That is, you already have a front-end and a server running, but you decide to reload the server. You launch another server process that connects to the front-end (while the old server is still running and connected), and the front-end forwards all new client connections (or even all new requests coming from existing client connections) to the new server process. When the old server finishes the requests it still has in flight, it exits gracefully and cleanly.
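Just to illustrate the shape of that front-end, here is a rough sketch (POSIX sockets, one client handled at a time, no error handling; the port and Unix-socket path are made up). A real version would multiplex many clients and reconnect when the server process is relaunched:

```cpp
// Minimal sketch of the front-end idea: accept a client on a public port
// and shuttle bytes to/from the real server over a Unix domain socket.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

// Forward bytes in both directions until either side closes.
static void forward(int client_fd, int backend_fd) {
    struct pollfd fds[2] = {{client_fd, POLLIN, 0}, {backend_fd, POLLIN, 0}};
    char buf[4096];
    for (;;) {
        if (poll(fds, 2, -1) <= 0) return;
        for (int i = 0; i < 2; ++i) {
            if (fds[i].revents & (POLLIN | POLLHUP)) {
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) return;
                write(fds[1 - i].fd, buf, (size_t)n);
            }
        }
    }
}

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                 // public port (example)
    bind(listener, (sockaddr*)&addr, sizeof addr);
    listen(listener, 64);

    for (;;) {
        int client_fd = accept(listener, nullptr, nullptr);
        if (client_fd < 0) continue;

        // Connect to the real server; this is the process you can restart
        // without dropping the client connections held by the front-end.
        int backend_fd = socket(AF_UNIX, SOCK_STREAM, 0);
        sockaddr_un backend = {};
        backend.sun_family = AF_UNIX;
        strncpy(backend.sun_path, "/tmp/webserver.sock", sizeof backend.sun_path - 1);
        connect(backend_fd, (sockaddr*)&backend, sizeof backend);

        forward(client_fd, backend_fd);
        close(backend_fd);
        close(client_fd);
    }
}
```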
As I said, this is a solution you might want to try only if nothing easier and simpler is found.
I have an application running in a Java EE App Server and it needs to call a web service of a partner company.
Using wsimport.exe from my JDK (1.6) I have generated the client classes. I instantiate the service and get the port in order to call the web service.
I noticed that the first call to the web service is slow, and I am led to believe this is because it is validating the WSDL. Subsequent calls are fast.
I could keep the WSDL locally, and apparently that will speed up the first call.
In order to optimise my app, I was thinking I could create a pool of clients. This has the added advantage of giving me some throttling in the app - let's say I have a pool of 5 clients, then at most I will be using memory for 5 clients. If the load on my server increases suddenly, I don't have to worry about an unlimited number of clients causing an out-of-memory error. I am assuming, based on past experience, that the web service clients use a lot of memory...
Would you bother with a pool?
How would you get over the first call to the web service being slow?
What is the best way to create that pool, so that I have to do the least amount of programming (i.e. I'd like to use a library / API / whatever, so that I don't have to reinvent the wheel and code some hairy bugs).
The Apache Commons Pool might be exactly what I am after.
It is configurable and seems to have thought of everything.
A colleague of mine suggested that you can use the @WebServiceRef annotation on a field in an EJB. The idea is that the server would inject a reference to a client, from which one can create a port for each thread that calls the EJB.
I assume that injected references come from a pool, although the specification doesn't appear to talk about this. The Javadoc for the annotation explicitly mentions that:
"the injected references are not thread safe"
Akka with a master/slave setup as shown in the link could work well, albeit a little more complex than the Apache Commons Pool listed in another answer. Akka also uses an execution pool with its own threads, which isn't strictly allowed in the Java EE world. I'd argue that because a well-tested framework is in charge of the threads there is no danger, and it shouldn't interfere with the app server's control of threads anyway, since the number of threads handled by Akka is minimal.
I have a RESTful web service with a C++ API at the back-end. I am using the FastCGI library to facilitate the REST interface. My C++ API has multiple functions that may be used independently. I am looking for a way to make it as fast as possible. Here are a few ideas I got:
Have one FastCGI application that gets the function to be executed, executes that function and returns the output. This way the API calls keep waiting until one 'function' is complete, even though the next call is for a different independent function.
Have multiple FastCGI applications, each having access to only one function from the API, each getting inputs for that particular app and returning outputs of that particular app alone.
This way I can have concurrent calls made to all the different functions, and separate process queues would be made for each function that I have, instead of having one generic process queue to the FastCGI application consisting of calls to different independent functions.
While this looks like it would perform better, I am not sure whether it is possible to implement such a system - i.e. having many FastCGI apps running in parallel on the same server. If it is possible, can someone tell me how to implement it?
Each FastCGI application is a separate program, running in a loop and communicating with Apache in a binary protocol defined by the FastCGI specification. The only possible concurrency problems are the same ones you would experience running concurrent CGI or PHP requests, with just one exception: since FastCGI processes do not terminate, any limited resources have to be managed carefully. For example, if you only have a ten-client licence for a database server, you can't have eleven FastCGI processes using the database unless you manage connections better than the "open at start, let it close at the end" approach often used in CGI or PHP.
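For a sense of scale, a FastCGI "application" can be as small as the loop below (this sketch uses the fcgi_stdio.h wrapper from the FastCGI development kit, libfcgi). Each of your independent functions could live in its own small program like this, and the web server keeps the processes alive between requests:

```cpp
// Minimal FastCGI responder. Each request runs one loop iteration;
// the process itself stays alive between requests.
#include "fcgi_stdio.h"   // from the FastCGI development kit (libfcgi)
#include <stdlib.h>

int main(void) {
    while (FCGI_Accept() >= 0) {          // blocks until the next request
        const char *query = getenv("QUERY_STRING");
        printf("Content-type: text/plain\r\n\r\n");
        printf("This process handled query: %s\n", query ? query : "(none)");
        // Call the one API function this executable is responsible for here.
    }
    return EXIT_SUCCESS;
}
```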
I am writing a program in C++ and I need a web interface to control it. What would be an efficient way to do this, and what is the best programming language for the web part?
Your application will just have to listen for messages from the network that your web application sends to it.
Any web application (in whatever language) can use sockets, so don't worry about those details; just make sure your application handles the messages of a small protocol you define.
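As a very rough sketch of that idea (the port number and the command names are invented), the long-running program could listen on a loopback TCP port for simple line-based commands from the web application:

```cpp
// Sketch: the long-running C++ program listens on localhost for simple
// text commands ("status", "stop") sent by the web application.
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // only local clients
    addr.sin_port = htons(9000);                     // example port
    bind(listener, (sockaddr*)&addr, sizeof addr);
    listen(listener, 8);

    for (;;) {
        int fd = accept(listener, nullptr, nullptr);
        if (fd < 0) continue;
        char buf[256] = {};
        ssize_t n = read(fd, buf, sizeof buf - 1);
        std::string cmd(buf, n > 0 ? (size_t)n : 0);
        if (cmd.rfind("status", 0) == 0) {
            write(fd, "running\n", 8);
        } else if (cmd.rfind("stop", 0) == 0) {
            write(fd, "stopping\n", 9);
            close(fd);
            break;                                   // shut the program down
        }
        close(fd);
    }
    close(listener);
    return 0;
}
```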
Now, if you want to keep it all C++, you could use CPPCMS for your web application.
If it were Windows, I could advise you to register a COM component for your program; at least from ASP.NET that is easily accessible.
You could try some in-memory exchange techniques like reading/writing over a localhost socket connection. That, however, requires you to design an exchange protocol first.
Or exchange data via a database: your program writes/reads data from the database, and the web front-end reads/writes data to the database.
You could use a framework like Thrift to communicate between a PHP/Python/Ruby/whatever webapp and a C++ daemon, or you could even go the extra mile (probably harder than just using something like Thrift) and write language bindings for the scripting language of your choice.
Either of the two options gives you the ability to write web-facing code in a language more suitable for the task while keeping the "heavy lifting" in C++.
Did you take a look at Wt? It's a widget-centric C++ framework for web applications, has a solid MVC system, an ORM, ...
The Win32 API method.
MSDN - Getting Started with Winsock:
http://msdn.microsoft.com/en-us/library/ms738545%28v=VS.85%29.aspx
(Since you didn't specify an OS, we're assuming Windows)
This is not as simple as it seems!
There is a mismatch between your C++ program (which is presumably long-running, otherwise why would it need controlling?) and a typical web program, which starts up when it receives the HTTP request and dies once the reply is sent.
You could possibly use one of the Java-based web servers where it is possible to have a long-running task.
Alternatively, you could use a database or other storage as the communication medium:
Your program periodically writes its current status to a well-known table. When a user invokes the control application, it reads the current status and presents an appropriate set of options to the user; the user's choice can then be stored in the DB and actioned by your program the next time it polls for a request.
This works better if you have a queuing mechanism available, as it can then be event-driven rather than polled.
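A rough sketch of the polling side, assuming something like SQLite as the shared store (the table and column names are invented):

```cpp
// Sketch of the polling loop in the long-running program, assuming a shared
// SQLite database. Table/column names ("status", "requests") are invented.
#include <sqlite3.h>
#include <unistd.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open("control.db", &db) != SQLITE_OK) return 1;

    for (;;) {
        // 1. Publish current status for the web front-end to read.
        sqlite3_exec(db,
            "INSERT OR REPLACE INTO status(id, text) VALUES (1, 'running')",
            nullptr, nullptr, nullptr);

        // 2. Pick up any pending requests stored by the web front-end.
        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db,
            "SELECT id, action FROM requests WHERE handled = 0", -1, &stmt, nullptr);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            int id = sqlite3_column_int(stmt, 0);
            const char *action =
                reinterpret_cast<const char *>(sqlite3_column_text(stmt, 1));
            std::printf("handling request %d: %s\n", id, action ? action : "");
            // ... act on the request here ...
        }
        sqlite3_finalize(stmt);
        sqlite3_exec(db, "UPDATE requests SET handled = 1 WHERE handled = 0",
                     nullptr, nullptr, nullptr);

        sleep(5);   // poll interval
    }
}
```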
Go PHP :) Look at the Program Execution functions.
(Edited to try to explain better)
We have an agent, written in C++ for Win32. It needs to periodically post information to a server. It must support disconnected operation. That is: the client doesn't always have a connection to the server.
Note: This is for communication between an agent running on desktop PCs, to communicate with a server running somewhere in the enterprise.
This means that the messages to be sent to the server must be queued (so that they can be sent once the connection is available).
We currently use an in-house system that queues messages as individual files on disk, and uses HTTP POST to send them to the server when it's available.
It's starting to show its age, and I'd like to investigate alternatives before I consider updating it.
It must be available by default on Windows XP SP2, Windows Vista and Windows 7, or must be simple to include in our installer.
This product will be installed (by administrators) on a couple of hundred thousand PCs. They'll probably use something like Microsoft SMS or ConfigMgr. In this scenario, "frivolous" prerequisites are frowned upon. This means that, unless the client-side code (or a redistributable) can be included in our installer, the administrator won't be happy. This makes MSMQ a particularly hard sell, because it's not installed by default with XP.
It must be relatively simple to use from C++ on Win32.
Our client is an unmanaged C++ Win32 application. No .NET or Java on the client.
The transport should be HTTP or HTTPS. That is: it must go through firewalls easily; no RPC or DCOM.
It should be relatively reliable, with retries, etc. Protection against replays is a must-have.
It must be scalable -- there's a lot of traffic. Per-message impact on the server should be minimal.
The server end is C#, currently using ASP.NET to implement a simple HTTP POST mechanism.
(The slightly odd one). It must support client-side in-memory queues, so that we can avoid spinning up the hard disk. It must allow flushing to disk periodically.
It must be suitable for use in a proprietary product (i.e. no GPL, etc.).
How is your current solution showing its age?
I would push the logic on to the back end, and make the clients extremely simple.
Messages are simply stored in the file system. Have the client write to c:/queue/{uuid}.tmp. When the file is written, rename it to c:/queue/{uuid}.msg. This makes writing messages to the queue on the client "atomic".
A C++ thread wakes up, scans c:\queue for "*.msg" files, and if it finds one it checks for the server and HTTP POSTs the message to it. When it receives the 200 status back from the server (i.e. the server has got the message), it can delete the file. It only scans for *.msg files; the *.tmp files may still be being written to, and you'd have a race condition trying to send a .msg file that was still being written. That's what the rename from .tmp is for. I'd also suggest scanning by creation date so earlier messages go first.
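A sketch of that client-side queue in C++17 (the queue directory and the http_post() helper are placeholders; any HTTP client such as WinHTTP or libcurl would do for the POST):

```cpp
// Sketch of the client-side queue described above, using C++17
// std::filesystem. Directory path and http_post() are placeholders.
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;
static const fs::path kQueueDir = "c:/queue";

// Hypothetical stub: replace with a real HTTP POST (WinHTTP, libcurl, ...).
// Must return true only when the server replies 200.
bool http_post(const fs::path &file) { (void)file; return false; }

// Enqueue: write to a .tmp file, then rename to .msg so the sender thread
// never sees a half-written message.
void enqueue(const std::string &uuid, const std::string &body) {
    fs::path tmp = kQueueDir / (uuid + ".tmp");
    fs::path msg = kQueueDir / (uuid + ".msg");
    std::ofstream(tmp, std::ios::binary) << body;
    fs::rename(tmp, msg);                        // atomic on the same volume
}

// Sender pass: oldest .msg files first; delete only after a 200 response.
void send_pending() {
    std::vector<fs::path> pending;
    for (auto &entry : fs::directory_iterator(kQueueDir))
        if (entry.path().extension() == ".msg")
            pending.push_back(entry.path());

    std::sort(pending.begin(), pending.end(),
              [](const fs::path &a, const fs::path &b) {
                  return fs::last_write_time(a) < fs::last_write_time(b);
              });

    for (const auto &msg : pending) {
        if (http_post(msg))                      // server acknowledged with 200
            fs::remove(msg);
        else
            break;                               // server unreachable; retry later
    }
}
```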
Your server receives the message, and there it can do any necessary dupe checking. Push this burden onto the server to centralize it. You could simply record every uuid for every message to do duplicate elimination. If that list gets too long (I don't know your traffic volume), perhaps you can cull items older than 30 days (I also don't know how long your clients can remain offline).
This system is simple, but pretty robust. If the file sending thread gets an error, it will simply try to send the file next time. The only time you should be getting a duplicate message is in the window between when the client gets the 200 ack from the server and when it deletes the file. If the client shuts down or crashes at that point, you will have a file that has been sent but not removed from the queue.
If your clients are stable, this is a pretty low risk. With dupe checking based on the message ID you can mitigate it at the cost of some bookkeeping; maintaining a list of uuids isn't spectacularly daunting, but again it depends on your message volume and other performance requirements.
The fact that you are allowed to work "offline" suggests you have some "slack" in your absolute messaging performance.
To be honest, the requirements listed don't make a lot of sense and show you have a long way to go in your MQ learning. Given that, if you don't want to use MSMQ (probably the easiest overall on Windows -- but with [IMO severe] limitations), then you should look into:
qpid - Decent use of AMQP standard
zeromq - (the best, IMO, technically, but it also requires the most familiarity with MQ technologies; see the sketch after this list)
I'd recommend rabbitmq too, but that's an Erlang server and last I looked it didn't have usable C or C++ libraries. Still, if you are shopping for an MQ, take a look at it...
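If zeromq ends up on your shortlist, the sending side can be as small as the sketch below (plain libzmq C API; the endpoint address is made up). Note that zeromq gives you transport and reconnection, not durable on-disk queuing, so the offline store would still be your own code:

```cpp
// Minimal zeromq PUSH client using the plain libzmq C API; the endpoint
// address is an example. Messages are queued in memory by zeromq and
// delivered when the connection is up; durable storage is up to you.
#include <zmq.h>
#include <cstring>

int main() {
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(sock, "tcp://server.example.com:5555");

    const char *msg = "agent status report";
    zmq_send(sock, msg, std::strlen(msg), 0);

    zmq_close(sock);
    zmq_ctx_term(ctx);
    return 0;
}
```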
[EDIT]
I've gone back and reread your reqs as well as some of your comments and think, for you, that perhaps client MQ -> server is not your best option. I would maybe consider letting your client -> server operations be HTTP POST or SOAP and allow the HTTP endpoint in turn queue messages on your MQ backend. IOW, abstract away the MQ client into an architecture you have more control over. Then your C++ client would simply be HTTP (easy), and your HTTP service (likely C# / .Net from reading your comments) can interact with any MQ backend of your choice. If all your HTTP endpoint does is spawn MQ messages, it'll be pretty darned lightweight and can scale through all the traditional load balancing techniques.
Last time I wanted to do any messaging I used C# and MSMQ. There are MSMQ libraries available that make using MSMQ very easy. It's free to install on your servers, and it has never lost a message to this day. It handles reboots etc. all by itself. It's a thing of beauty, and hundreds of thousands of messages are processed daily.
I'm not sure why you ruled out MSMQ and I didn't get point 2.
Quite often for queues we just dump record data into a database table and another process lifts rows out of the table periodically.
How about using the Asynchronous Agents Library that ships with Visual C++ 2010 (part of the Concurrency Runtime)? At the time of writing it is still in beta, though.
http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx