When my C++ program's build script is called from a Jenkins job, the build takes far longer than usual. Instead of running at 100%, CPU usage only reaches about 16%.
Of course I don't want Jenkins to fully occupy my computer and render it unusable during a build, but making the build faster would be very useful.
I have installed Jenkins via Homebrew on macOS.
Does anyone know how to change the priority of the Jenkins process so it's allowed to use more CPU while building?
Following one of the suggestions in the comments, I decided to increase the heap size of the Java VM in the homebrew.mxcl.jenkins.plist file:
<string>-Xmx2048m</string>
And then call:
brew services stop jenkins
brew services start jenkins
The behaviour was the same, so I decided to restart the machine and try again, and now it is working as expected. I'm not sure whether this was a general glitch or whether it was related to the Java heap size parameter.
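For anyone who wants to confirm that the new heap setting actually took effect, a quick sanity check after the restart might look like this (a minimal sketch; the exact process command line depends on how Homebrew launches Jenkins):
# confirm the service is running under brew
brew services list
# show the Jenkins java process and check that -Xmx2048m appears in its arguments
ps aux | grep -i '[j]enkins'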
I am running into some strange problems with my AWS Linux 16.04 instance running Hadoop 2.9.2.
I have just successfully installed and configured Hadoop to run in pseudo-distributed mode. Everything seems to be fine: when I start HDFS and YARN I don't get any errors. But as soon as I try to do even something as simple as listing the contents of the root HDFS directory, or creating a new directory, the whole instance becomes extremely slow. I wait for about 10 minutes and it never produces a directory listing, so I hit Ctrl+C and it takes another 5 minutes to kill the process. Then I try to stop both HDFS and YARN, which succeeds but also takes a long time. Even after HDFS and YARN have been stopped, the instance is still barely responsive. At that point all I can do to make it function normally again is to go to the AWS console and restart it.
Does anyone have any idea what I might've screwed up (I am pretty sure it's something I did. It usually is :-))?
Thank you.
Well, I think I figured out what was wrong, and the answer is trivial. Basically, my EC2 instance doesn't have enough RAM. It's a basic free-tier-eligible instance and by default it comes with only 1 GB of RAM. Hilarious. Totally useless.
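If you want to confirm that memory pressure is the culprit before resizing the instance, a couple of quick checks help (a minimal sketch, assuming the usual Hadoop daemons are running):
free -m       # how much RAM and swap the instance actually has
jps           # list the running Hadoop JVMs (NameNode, DataNode, ResourceManager, ...)
vmstat 1 5    # watch the si/so columns for swapping while you run an hdfs command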
But I learned something useful anyway. One other thing I had to do to make my Hadoop installation work (I was getting a "connection refused" error, but I did make it work) was to change the line in core-site.xml that says
<value>hdfs://localhost:9000</value>
to
<value>hdfs://ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com:9000</value>
(replace the XXXs above with your instance's public IP address)
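If you're not sure which hostname or address to use, the EC2 instance metadata service will tell you from inside the instance itself (a small sketch; these endpoints are standard on EC2):
# public DNS name and public IP of the instance you are logged into
curl http://169.254.169.254/latest/meta-data/public-hostname
curl http://169.254.169.254/latest/meta-data/public-ipv4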
I have been using Amazon EC2 instances to run a C++ program. Its heaviest task is running Boost's Dijkstra algorithm. On my laptop (Ubuntu 15.04) and on the EC2 instances (Ubuntu 14.04) I got similar performance: a typical Dijkstra run took about 60 ms. Interestingly, the EC2 instance type had little impact on the timing of a single Dijkstra run.
Now I've just set up an OVH cloud server, running Ubuntu 14.04. I followed the same steps to install all the dependencies I need. The very same Dijkstra now takes 130 ms to run.
The Boost version is the same, as are my compiler options (I'm using -O3). I've tried different types of OVH instances (RAM-oriented, CPU-oriented); the timing remains unchanged.
I doubt that OVH cloud performance could really be this bad. Any ideas about what I could have missed, or tests I could run to understand what is going on?
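A few host-level checks are often useful when the same single-threaded binary runs at very different speeds on two machines (a hedged sketch; nothing here is specific to OVH):
# compare CPU model and clock speed between the two hosts
lscpu | grep -E 'Model name|MHz'
# check the CPU frequency governor, if exposed (powersave vs performance can matter)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# watch the steal column (st) to see whether the hypervisor is taking CPU time away
vmstat 1 5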
I'm working on Chef recipes, and often need to test the full run-through with a clean box by destroying a VM and bringing it back up. However, this means I get this message in Vagrant/VirtualBox:
Waiting for VM to boot. This can take a few minutes.
very often. What are some steps I can take to make the boot faster?
I am aware this is an "opinion" question and would welcome suggestions to make it more acceptable, short of breaking it into a bunch of small questions like "Will switching to an SSD make my VirtualBox boot faster?", "Will reducing the number of forwarded ports make my VirtualBox boot faster?", etc.
I would go for using LXC containers instead of VirtualBox. That gives you a much faster feedback cycle.
Here is a nice introduction to the vagrant-lxc provider.
You could set up a VirtualBox VM for Vagrant / Chef development with LXC containers (e.g. like this dev-box). Then take this sample-cookbook and run either the ChefSpec unit tests via rake test or the kitchen-ci integration tests via rake integration. You will see that it's much faster with LXC than it is with VirtualBox (or any other full virtualization hypervisor).
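Getting the LXC provider into Vagrant is just a plugin install; a minimal sketch (assuming an Ubuntu host with LXC already installed):
# install the provider plugin, then bring the box up with LXC instead of VirtualBox
vagrant plugin install vagrant-lxc
vagrant up --provider=lxc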
Apart from that:
yes, SSDs help a lot :-)
use vagrant-cachier which speeds up loads of other things via caching
use a recent Vagrant version which uses Ruby 2.0+ (much faster than 1.9.3)
don't always run a full integration test, some things can be caught via unit tests / chefspec already
use SSH connection sharing and persistent connections
etc...
As another alternative you could also use chef-runner, which explicitly tries to solve the fast-feedback problem.
We are trying to use VisualVM to track down some memory leakage in CF8; however, we cannot get the tool to work 100%. Basically, everything comes up except the memory sampling, which says "JVM is not supported".
All the other features work (we can do CPU sampling, just not memory). We found it kind of weird that we can do everything else but the memory stuff, so we are wondering whether we need to specify another JVM argument to allow this.
Some other info:
We are connecting locally via 127.0.0.1 or localhost.
I installed the Visual GC plugin, and it cannot connect either.
VisualVM and JRun/CF8 are both using the same Java version (1.6.0_31); however, they are not run from the same location (maybe this matters). VisualVM uses the installed JDK, whereas JRun/CF8 uses binaries that we copied locally into the CF8 installation folder.
I installed another plugin that shows JVM properties, and it says that the JVM is not "attachable". I don't know what that means, but wanted to mention it.
Any help with this would be greatly appreciated. If we can just get that memory sampling, I think we can get on top of our performance issues that have plagued us here recently. Thanks in advance!
EDIT:
Also, I just checked, and JRun is being started under "administrator", whereas I am launching VisualVM under a different user. Maybe this is relevant?
Yes, it is relevant that you are running VisualVM under a different user. Memory sampling uses the Attach API, which only works if the monitored application and VisualVM are running as the same user. This is also the reason the JVM properties plugin reports that your application is not attachable. If you run VisualVM as "administrator", it will automatically detect your ColdFusion 8 application and the memory sampler will work.
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or, conversely, accepts git push updates), and then has me ready to launch an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way that I can run a command line utility to fire up my instance when I'm ready to test, and then to shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes and more yes. Here are some good things to google / hunt down on SO and SF:
- the EC2 command line tools (see the sketch after this list)
- making your own AMIs from running instances (to save tedious and time-consuming startup gumf)
- the Route 53 APIs for doing DNS magic
- Ubuntu cloud-init for startup scripts
- 32-bit micro instances are your friend for dev work, as they fall into the free usage tier
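To give a flavour of the command line tools item above, starting and stopping a pre-built instance looks roughly like this (a hedged sketch using the modern AWS CLI; the older ec2-api-tools have equivalent commands, and the instance ID is a placeholder):
# fire up an existing stopped instance when you're ready to test
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# look up its current public DNS name to point your browser (or a Route 53 record) at
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicDnsName'
# shut it down when you're done so you stop paying for compute
aws ec2 stop-instances --instance-ids i-0123456789abcdef0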
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances. Basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html
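For a feel of what that looks like in practice, the classic workflow from that tutorial era went roughly like this (a hedged sketch; the charm names are just examples):
juju bootstrap                      # stand up the environment on EC2
juju deploy mysql                   # deploy example charms
juju deploy wordpress
juju add-relation wordpress mysql   # wire them together
juju expose wordpress               # open it up to the outside world
juju status                         # see instance addresses and service state
juju destroy-environment            # tear everything down when you're done paying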