How to measure the speed of execution via IPC between two applications - C++

I have a sending application that sends a command via an anonymous pipe (step 1).
I have a receiving application that receives the command, handles it, and returns a result (step 2).
The sending application receives the result (step 3).
I can time the complete operation in the first application (steps 1-3). I can also time the handling in the second application that executes the command (step 2).
Looking at both logs, I can subtract the time for step 2 from the time for steps 1-3; assuming the sending application doesn't waste any time, I then know how much time was used for the transfer.
Is there any way to sync both logs so that they both show the same timestamps in msecs? Or to sync all operations in some way so that I can see all the timing in the log of the first application?
I know that the applications would need some special commands to sync them in some way. That would be acceptable, but I have no clue whether this is possible at all.
Or, the question in other words: is it possible to time everything in application 1 without looking at both logs individually?
Best possible result:
- the time used for the I/O (pipe)
- the time of the code executed in app 2 (without I/O)
- the time of the code executed in app 1 (without I/O)
Further information: the applications run on different machines, even in different networks connected via VPN.

Similar to what Joseph Larson suggested, but without any concern about clocks being synchronized: could you just append the time spent in app 2 to the result it returns? Then log it from app 1, together with the total time.
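A minimal sketch of that idea in C++, assuming the result travels back over the pipe as text; execute, send_over_pipe, and the '|' framing are hypothetical stand-ins for the real applications:

    #include <chrono>
    #include <cstdio>
    #include <string>

    // Hypothetical hooks into the real applications:
    std::string execute(const std::string& command);        // app 2's actual work
    std::string send_over_pipe(const std::string& command); // app 1's blocking round trip

    // App 2: time only the handling, then append it to the result so
    // app 1 can log everything itself.
    std::string handle_command(const std::string& command) {
        auto start = std::chrono::steady_clock::now();
        std::string result = execute(command);
        auto handled = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);
        return result + "|" + std::to_string(handled.count());
    }

    // App 1: time the whole round trip and subtract app 2's share;
    // the remainder is the transfer (pipe/VPN) overhead.
    void send_and_log(const std::string& command) {
        auto start = std::chrono::steady_clock::now();
        std::string reply = send_over_pipe(command);
        auto total = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);

        auto sep = reply.rfind('|');
        long long app2_ms = std::stoll(reply.substr(sep + 1));
        std::printf("total=%lld ms, app2=%lld ms, transfer=%lld ms\n",
                    static_cast<long long>(total.count()), app2_ms,
                    static_cast<long long>(total.count()) - app2_ms);
    }

Since app 2 measures its duration with its own steady clock and ships it inside the reply, no clock synchronization between the two machines is needed.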

Related

Speed up boot time of Compute Engine instances

I've written a simple batch file that starts Apache and sends a curl request to my server at start time. I am using Windows Server 2016 and an n-4 Compute Engine instance.
I've noticed that two identical machines require vastly different startup times. One sends the message in just 40 s, the other takes almost 80 s. In the console both seem to start at the same time, but the reality is different, since the slower one is inaccessible via Remote Desktop tools for 80 s.
The second machine was made from a disk image of the first one. What factors contribute to the start time? Where should I trim the fat?
The delay could occur if the instances are in different regions, or if the second instance has some additional memory-intensive applications or additional customizations. The boot disk type of the instance also contributes to the boot time. Are you getting any information from the logs about this delay during startup? You could also compare traceroute results on both instances to see if there is a delay at some point in the network.

Why use a messaging queue in web applications?

While developing my web application with Django, I faced a problem: when I call some functions locally they work correctly, but once I call them over an HTTP request they are not executed.
I asked around and was told to execute them asynchronously, outside the request-response cycle, using Celery and a messaging queue server. It worked well, but I still don't understand why I have to execute some tasks asynchronously even when I don't have a race condition and there's only one client calling the web service.
This is a big blind spot for me, because I made it work without really knowing how.
Can anyone explain it to me?
Thanks.
The two main benefits I know of for queue-based systems are:
First, a response can be given to the client without having to wait for the work to be done. This lets pages load faster and clients spend less time waiting; a sketch of this decoupling follows below.
Second, a queue gives you a central location for scheduled jobs that multiple workers can draw from. If a certain component of your application can't keep up with the amount of work there is to do (or if it fails for some reason), you can have other instances of that component doing the work, and there is a single place where all of the work that needs to be done can be found.
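To make the first point concrete, here is a toy stand-in for Celery and its broker, sketched in C++ rather than Python: the request handler only enqueues the slow work and answers immediately, while a separate worker drains the queue.

    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<std::function<void()>> tasks; // the "message queue"
    std::mutex m;
    std::condition_variable cv;

    // The "web request": enqueue the slow job and respond immediately.
    void handle_request() {
        {
            std::lock_guard<std::mutex> lock(m);
            tasks.push([] {
                std::this_thread::sleep_for(std::chrono::seconds(2)); // slow work
                std::cout << "task done\n";
            });
        }
        cv.notify_one();
        std::cout << "response sent immediately\n"; // client never waits the 2 s
    }

    // The "worker": runs jobs outside the request-response cycle.
    void worker() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return !tasks.empty(); });
            auto task = std::move(tasks.front());
            tasks.pop();
            lock.unlock();
            if (!task) return; // empty task = shutdown signal
            task();
        }
    }

    int main() {
        std::thread w(worker);
        handle_request();
        { std::lock_guard<std::mutex> lock(m); tasks.push({}); } // stop the worker
        cv.notify_one();
        w.join();
    }

Running this prints "response sent immediately" two seconds before "task done": the caller is never blocked by the slow job.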

How to get intermediate feedback from my REST service for a single GET request

In my user interface, I am trying to implement a progress bar which shows the percentage of completion of work for a user request.
My back-end REST service needs to do a lot of computation, hence it is relatively slow. I want to show the user which work in the backend is finished. For instance: Task 1 finished, working on Task 2 (hence show 50% on the progress bar).
My problem:
The service returns its result only after it has finished the entire task. I do not know how to get intermediate feedback to show the user that a certain percentage of the work is complete, so that he/she will be patient.
Just to clarify, before you start suggesting any of the following:
I do not want to use a GIF Ajax loader.
The service is already optimized; it cannot be fine-tuned any further.
The service's work is already very atomic; it cannot be broken down into more than one service without causing a further performance penalty due to additional network traffic.
Let me know if the above is not possible to accomplish, so I can stop my search.
What you want is bi-directional communication with an HTTP server, and there are basically two ways to do it:
REST polling:
Set up a second API call that the client polls at regular intervals to get the current status of the computation (a sketch of such a polling loop follows below).
WebSockets:
Set up a WebSocket connection between your client and your server, which allows the server to initiate communication with the client and push a message as soon as a task is finished. Adding WebSockets just for this would probably cause even more network traffic than REST polling.
If neither of these is an option for you, then I don't think that what you want is possible.
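A minimal sketch of the client side of REST polling, in C++ with libcurl; the /status endpoint returning a bare percentage such as "50" is an assumption, not part of the original service:

    #include <curl/curl.h>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    // libcurl write callback: collect the response body into a std::string.
    static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
        static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        int percent = 0;
        while (percent < 100) {
            std::string body;
            // Hypothetical status endpoint that returns a bare number, e.g. "50".
            curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/api/job/42/status");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
            if (curl_easy_perform(curl) == CURLE_OK && !body.empty())
                percent = std::stoi(body); // assumes a numeric body
            std::cout << "progress: " << percent << "%\n"; // drives the progress bar
            std::this_thread::sleep_for(std::chrono::seconds(1)); // poll interval
        }

        curl_easy_cleanup(curl);
        curl_global_cleanup();
    }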

Asynchronous server stops getting data from client for no visible reason

I have a problem with a client-server application. As I've almost run out of sane ideas for solving it, I am asking for help. I've stumbled into the described situation about three or four times now. The data provided is from the last failure, when I had turned on all possible logging, message dumping, and so on.
System description
1) Client. Runs under Windows. I take it as an assumption that there is no problem with its work (judging from the logs).
2) Server. Runs under Linux (RHEL 5). The server is where I have the problem.
3) Two connections are maintained between client and server: one for commands and one for sending data. Both work asynchronously. Both connections live in one thread and on one boost::asio::io_service.
4) The data sent from client to server consists of messages delimited by '\0'.
5) The data load is about 50 MB/hour, 24 hours a day.
6) Data is read on the server side using boost::asio::async_read_until with the corresponding delimiter; a sketch of such a read loop is shown after this list.
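For reference, a minimal sketch of a read loop like the one point 6 describes; the socket setup and message handling are illustrative, not the poster's actual code:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    boost::asio::io_service io;
    boost::asio::ip::tcp::socket data_socket(io); // assumed already connected
    boost::asio::streambuf buf;

    // Read '\0'-delimited messages, re-arming the async read after each one.
    void start_read() {
        boost::asio::async_read_until(data_socket, buf, '\0',
            [](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                if (ec) { std::cerr << "read error: " << ec.message() << "\n"; return; }
                std::istream is(&buf);
                std::string msg;
                std::getline(is, msg, '\0'); // consume exactly one message
                // ... handle msg ...
                start_read(); // arm the next read
            });
    }

    int main() {
        // In the real server the socket would be accepted/connected first.
        start_read();
        io.run();
    }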
Problem
- For two days the system worked as expected.
- On the third day at 18:55 the server read one last message from the client and then stopped reading. No info in the logs about new data.
- From 18:55 to 09:00 (14 hours) the client reported no errors. So it sent the data (about 700 MB) successfully and no errors arose.
- At 08:30 I started investigating the problem. The server process was alive, and both connections between server and client were alive too.
- At 09:00 I attached to the server process with gdb. The server was in a sleeping state, waiting for some signal from the system. I believe I accidentally hit Ctrl+C, and maybe there was some message.
- Later in the logs I found a message saying something like 'system call interrupted'. After that, both connections to the client were dropped. The client reconnected and the server started to work normally.
- The first message then processed by the server was timestamped 18:57 on the client side. So after resuming normal work, the server didn't drop the messages sent up to 09:00; they had been stored somewhere, and it processed them accordingly.
Things I've tried
- I simulated the scenario above. As the server dumped all incoming messages, I wrote a small script which presented itself as the client and sent all the messages back to the server again. The server died with an out-of-memory error, but unfortunately that was because of the high data load (about 3 GB/hour this time), not because of the original error. As it was Friday evening, I had no time to repeat the experiment correctly.
- Nevertheless, I ran the server through Valgrind to detect possible memory leaks. Nothing serious was found (apart from the fact that the server died because of the high load); no huge memory leaks.
Questions
- Where were these 700 MB of data which the client sent and the server didn't get? Why were they persistent, and why weren't they lost when the server restarted the connection?
- It seems to me that the problem is somehow connected with the server not getting messages from boost::asio::io_service. The buffer gets filled with data, but no calls to the read handler are made. Could this be a problem on the OS side? Something wrong with the asynchronous calls, maybe? If so, how could this be checked?
- What can I do to find the source of the problem? As I said, I've run out of sane ideas, and each experiment is very costly in terms of time (it takes about two or three days to get the system into the described state), so I need to pack as many checks as possible into each experiment.
I would be grateful for any ideas I can use to get to the bottom of this error.
Update: OK, it seems that the error was a synchronous write left in the middle of the asynchronous client-server interaction. As both connections lived in one thread, this synchronous write was blocking the thread for some reason, and all interaction on both the command and the data connection stopped. So I changed it to the async version, and now it seems to work.
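Schematically, the fix amounts to replacing the one blocking write with its asynchronous counterpart (send_reply and the buffer handling here are illustrative, not the poster's code):

    #include <boost/asio.hpp>
    #include <iostream>
    #include <memory>
    #include <string>

    // Before: boost::asio::write(sock, boost::asio::buffer(reply));
    // A blocking write stalls the single thread running the io_service,
    // so both the command and the data connection stop.

    // After: queue the write and return immediately. The buffer must
    // outlive the operation, hence the shared_ptr captured by the handler.
    void send_reply(boost::asio::ip::tcp::socket& sock, const std::string& reply) {
        auto data = std::make_shared<std::string>(reply);
        boost::asio::async_write(sock, boost::asio::buffer(*data),
            [data](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                if (ec) std::cerr << "write error: " << ec.message() << "\n";
            });
    }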
"As I said, I've run out of sane ideas, and each experiment is very costly in terms of time (it takes about two or three days to get the system into the described state)"
One way to simplify the investigation of this problem is to run the server inside a virtual machine until it reaches this broken state. Then you can take a snapshot of the whole system and revert to it every time things go wrong during the investigation. At least you won't have to wait three days to get to this state again.

Is there any way to access information about a ColdFusion server's load from within ColdFusion?

I am writing a scheduled task which I would like to run frequently.
The problem is that I do not want this task to run if the server is experiencing a high traffic load.
Is there any way, other than getting the free/total/max memory from Java, to figure out whether this task should continue?
GetMetricData() is going to give you a very good indication of how busy your server is, i.e. how many requests are running and how many are queued, as well as other info.
It's the same info that you get from running cfstat from the command line (you'll find that under {cfroot}\bin\cfstat.exe).
However, knowing how busy you are at this very moment might not be very useful if you only call that function once. It might be better to log performance data to a file or a database table using Windows perfmon. You can then get the average number of running/queued requests over the past 5 minutes (or whatever) and base the decision on whether to run your task on that.
There's an easy way to retrieve the memory usage information.
http://misterdai.wordpress.com/2009/11/25/retrieving-coldfusion-memory-usage/
For CPU load I think you can get it from getMetricData(), but there are other methods too; since this is my first Stack Overflow post I'm only allowed one link :P It's on my blog, though, so just do a search for CPU when you look at the link above.
You might find it useful to dig into getMetricData() for the performance monitoring stats. It's a good way of telling how busy your server is by the number of running and queued requests.
Hope this helps,
Dave (aka Mister Dai)
Use the ColdFusion AdminApi. Call http://servername/CFIDE/adminapi/servermonitor.cfc in your browser to get the cfcdocs of the component. It gives you many methods to get the health of your CF server instance.