C++ Boost program slower on a different server

I have been using Amazon EC2 instances to run a C++ program. Its heaviest task is running Boost's Dijkstra algorithm. On my laptop (running Ubuntu 15.04) and on the EC2 instances (Ubuntu 14.04) I got similar performance: a typical Dijkstra run took about 60 ms. Interestingly enough, the EC2 instance type had little impact on the duration of a single Dijkstra run.
Now, I've just set up an OVH cloud server running Ubuntu 14.04, and I have followed the same steps to install all the dependencies I need. The very same Dijkstra run now takes 130 ms.
The Boost version is the same, as are my compiler options (I'm using -O3). I've tried different types of OVH instances (RAM-oriented, CPU-oriented); the timing remains unchanged.
I doubt that OVH cloud performance could be this bad. Any ideas about what I could have missed, or tests I could run to understand what is going on?
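Since the Boost version and flags match, the usual suspects are the machines themselves: compare the CPU model, clock speed, and frequency governor on both servers (e.g. with lscpu and /proc/cpuinfo), since instances with the same vCPU count can sit on very different processors. To take the rest of the program out of the picture, you can also time an isolated Boost Dijkstra run on a synthetic graph on both servers. A minimal sketch, assuming C++11; the graph size and weight range are arbitrary placeholders:

    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/dijkstra_shortest_paths.hpp>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Directed graph with integer edge weights, vertices stored in a vector.
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS,
            boost::no_property,
            boost::property<boost::edge_weight_t, int> > Graph;

    int main() {
        const int n = 100000, m = 500000;   // placeholder sizes
        std::mt19937 rng(42);               // fixed seed: same graph on both machines
        std::uniform_int_distribution<int> vert(0, n - 1), wt(1, 100);

        Graph g(n);
        for (int i = 0; i < m; ++i)
            boost::add_edge(vert(rng), vert(rng), wt(rng), g);

        std::vector<int> dist(n);
        auto t0 = std::chrono::steady_clock::now();
        boost::dijkstra_shortest_paths(g, boost::vertex(0, g),
                                       boost::distance_map(&dist[0]));
        auto t1 = std::chrono::steady_clock::now();

        std::printf("dijkstra: %ld ms\n", (long)
            std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());
        return 0;
    }

Compile with the same g++ -O3 -std=c++11 on both servers; if the gap persists on identical input, the difference is the machine (CPU generation, steal time, frequency scaling) rather than your code or dependencies.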

Related

Speed up Chromedriver/Selenium in AWS EC2 instance

Hi, I've developed a bot that automates the shopping process on a specific website. When testing it on my Mac it works perfectly and can place an order quite fast. I have tried to run the script on an AWS EC2 instance, using the free-tier t2.micro with an Ubuntu image.
The script runs fine and all the packages work, but I've noticed that opening Chrome in headless mode and finishing the process takes 5-6 times longer than when I run it on my local MacBook. I've tried all the suggested Chromedriver options to do with the proxy server, but my EC2 instance still isn't fast enough.
Is it the small t2.micro free tier that's slowing me down, or should I be using something other than Ubuntu if I want to speed up my Selenium script?
You're using an incredibly small machine (a t2.micro has a single burstable vCPU and 1 GiB of RAM), which is going to be much slower than the powerful machine you're running locally. The operating system is not the issue.

Cloud Redis latency causes (vs. local redis on macbook pro)

Redis can give sub-millisecond response times. That's a great promise. I'm testing Heroku Redis and I get anywhere from 1 ms up to about 8 ms for a ZINCRBY, measured by wrapping the call with microtime() in PHP. This Heroku Redis (I'm using the free plan) is a shared instance with resource contention, so I expect response times for identical queries to vary, and they certainly do.
I'm curious about the cause of the difference in performance vs. Redis installed on my MacBook Pro via Homebrew, where there's obviously no network latency. Does this mean that any cloud Redis (i.e. one reached over the network, say within AWS) is always going to be quite a bit slower than running Redis on the same physical machine as the application, thus eliminating network latency?
There is also resource contention in these cloud offerings, unless you choose a private server, which costs a lot more.
Some numbers: my local MacBook Pro consistently gives 0.2 ms for the identical ZINCRBY that takes between 1 ms and 8 ms on Heroku Redis.
Is network latency the cause of this?
No, probably not.
The typical latency of a 1 Gbit/s network is about 200 µs, i.e. 0.2 ms, and within AWS you're probably on 10 Gbit/s at least.
As this page in the Redis manual explains, the main cause of the latency variation between these two environments is almost certainly the higher intrinsic latency that comes from running in a Linux container; there is a Redis command to measure it on any particular system: redis-cli --intrinsic-latency 100 (see the manual page above). In other words, network latency is not the dominant cause of the variation seen here.
Here is a checklist, from the Redis manual page linked above:
If you can afford it, prefer a physical machine over a VM to host the server.
Do not systematically connect/disconnect to the server (especially true for web-based applications). Keep your connections as long-lived as possible.
If your client is on the same host as the server, use Unix domain sockets.
Prefer to use aggregated commands (MSET/MGET), or commands with variadic parameters (if possible) over pipelining.
Prefer to use pipelining (if possible) over a sequence of round trips (see the sketch after this checklist).
Redis supports Lua server-side scripting to cover cases that are not suitable for raw pipelining (for instance, when the result of a command is an input to the following commands).
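To make the pipelining item concrete, here is a minimal sketch in C++ using the hiredis client (the address, key, and member names are placeholders): it queues a batch of commands and then reads all the replies, paying the network round trip roughly once instead of once per command.

    #include <hiredis/hiredis.h>

    int main() {
        // Connect to Redis (placeholder address/port).
        redisContext* c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        // Pipelined: queue commands locally without waiting for replies...
        for (int i = 0; i < 100; ++i)
            redisAppendCommand(c, "ZINCRBY scores 1 member:%d", i);

        // ...then drain all replies in one pass over the socket.
        for (int i = 0; i < 100; ++i) {
            redisReply* reply;
            if (redisGetReply(c, (void**)&reply) == REDIS_OK)
                freeReplyObject(reply);
        }

        redisFree(c);
        return 0;
    }

On a link with 1 ms round-trip time, 100 sequential commands cost at least 100 ms in latency alone, while the pipelined version costs roughly one round trip plus transfer time.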

Deployment of VM workers running a C++ executable on Azure

I have been given a program, developed in C++, that implements a compute-intensive algorithm and uses MPI to achieve better results. The executable has been tested on a single VM with 16 cores on Azure, and the results have been satisfying. Now we need to test its performance and scalability on 64 or 128 cores. As far as I know, a single VM can't employ more than 16 cores, so I think I need to set up a group of VMs that will execute the computation, but I don't know where to start with the deployment and the communication among the VMs. Any guidance or ideas would be appreciated.
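As a general (not Azure-specific) starting point: MPI implementations such as Open MPI span machines by listing them in a hostfile and launching with mpirun, which requires passwordless SSH between the VMs and the binary available at the same path on each. A minimal sketch to verify that ranks really land on different nodes (hostnames and slot counts in the hostfile are placeholders):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        char name[MPI_MAX_PROCESSOR_NAME];
        int len;
        MPI_Get_processor_name(name, &len);    // which VM this rank runs on

        std::printf("rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

With Open MPI this would be launched with something like mpirun -np 128 --hostfile hosts ./worker, where hosts lists one VM per line with its slot count (e.g. vm1 slots=16). Once every rank reports a distinct host, the same launch mechanism runs the real executable.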

Vagrant "Waiting for VM to boot. This can take a few minutes" is slow

I'm working on Chef recipes and often need to test the full run-through with a clean box by destroying a VM and bringing it back up. However, this means I very often see this message in Vagrant/VirtualBox:
Waiting for VM to boot. This can take a few minutes.
What are some steps I can take to make the boot faster?
I am aware this is an "opinion" question and would welcome suggestions to make it more acceptable, short of breaking it into a bunch of small questions like "Will switching to an SSD make my VirtualBox boot faster?", "Will reducing the number of forwarded ports make my VirtualBox boot faster?", etc.
I would go for LXC containers instead of VirtualBox; that gives you a much faster feedback cycle.
Here is a nice introduction to the vagrant-lxc provider.
You could set up a VirtualBox VM for Vagrant/Chef development with LXC containers (e.g. like this dev-box). Then take this sample-cookbook and run either the ChefSpec unit tests via rake test or the kitchen-ci integration tests via rake integration. You will see that it's much faster with LXC than with VirtualBox (or any other full-virtualization hypervisor).
Apart from that:
yes, SSDs help a lot :-)
use vagrant-cachier which speeds up loads of other things via caching
use a recent Vagrant version which uses Ruby 2.0+ (much faster than 1.9.3)
don't always run a full integration test, some things can be caught via unit tests / chefspec already
use SSH connection sharing and persistent connections (see the sketch after this list)
etc...
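For the connection-sharing point, a minimal ~/.ssh/config sketch (the Host pattern and timeout are example values); with it, repeated SSH sessions to the same box reuse one master connection instead of paying a full handshake each time:

    Host *
        # Multiplex sessions over one master connection per host
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        # Keep the master alive for 10 minutes after the last session
        ControlPersist 10m

Note that the ControlPath directory has to exist (mkdir -p ~/.ssh/sockets).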
As another alternative, you could also use chef-runner, which explicitly tries to solve the fast-feedback problem.

Increasing Jenkins process priority?

When my C++ program's build script is called from a Jenkins job, it takes far more time to build: CPU usage, instead of being at 100%, is at only 16%.
Of course I don't want Jenkins to fully occupy my computer, rendering it unusable while doing a build, but making it faster would be very useful.
I have installed Jenkins via brew on Mac OS.
Does anyone know how to change the priority of the Jenkins process so it's allowed to use more CPU while building?
Following one of the comments' suggestions, I decided to increase the heap size of the Java VM in the homebrew.mxcl.jenkins.plist file:
<string>-Xmx2048m</string>
And then call:
brew services stop jenkins
brew services start jenkins
The behaviour was the same, so I decided to restart the machine and try again, and now it is working as expected. I'm not sure if this was a general glitch or if it was related to the Java heap-size parameter.
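For completeness: if the goal is literally to raise the priority of the Jenkins process rather than its memory limit, the usual tool on macOS/Linux is renice (the PID below is a placeholder; find the real one first):

    # Find the Jenkins Java process id
    pgrep -f jenkins

    # Raise its priority: niceness ranges from -20 (highest) to 19 (lowest);
    # negative values require sudo
    sudo renice -n -5 -p 12345

Note that a low CPU percentage during a build can also simply mean the build is single-threaded; in that case a parallel build flag (e.g. make -j4) helps more than any priority change.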