GitLab + Laravel 5, faster build, maybe without Docker?

Hi there!
Please excuse me for not knowing much about GitLab. I'll sum up what I'm hoping for; please tell me whether it's possible, and if so, point me to a how-to. :slight_smile:
I want to use GitLab to store our repos, which are mainly Laravel 5 projects. I also want to run some tests on them: PHPUnit, Behat, etc. For this I currently use GitLab's Docker-based builds. It puts the files into a Docker container, where I have to run composer install and a few other things, but this takes so long that it really slows down development.
Is it possible to run "composer install", "npm install", and the other setup steps ONCE for the repository, and from then on only run the tests?

After you set up Docker to cache the dependency downloads, your next step is to move your runners to a new host or give your current host more RAM.
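For the caching part: GitLab CI can keep vendor/ and node_modules/ between builds with the cache: keyword in .gitlab-ci.yml, so composer install and npm install mostly reuse already-downloaded packages instead of fetching everything again. A minimal sketch, assuming the Docker executor and a stock Laravel layout; the image and job names are just placeholders:

# .gitlab-ci.yml (sketch)
image: php:7.0                 # placeholder; use whatever image your builds already run in

cache:
  key: "$CI_PROJECT_ID"        # one shared cache per project; could also key per branch
  paths:
    - vendor/
    - node_modules/

test:
  script:
    - composer install --prefer-dist --no-interaction
    - npm install
    - vendor/bin/phpunit

The cache is best-effort and stored per runner by default, so the first build on a new runner will still be slow, but subsequent builds should skip most of the download work.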
I'm using GitLab's Omnibus installation and my GitLab instance uses 1.7GB of RAM with very little traffic, and my runners use up to 1GB when running some of my builds and tests. If your GitLab instance and runners have a similar memory footprint, then your machine will start to use the swap memory during tests and that will really slow down your runners.
Also, your runners likely have high CPU usage while running tests; add to that the CPU overhead of swapping, and builds slow down even further.
I would recommend moving the runners to a different machine, for performance and security reasons. If you can't do that, then at least increase the RAM to 3GB.

Related

Can I run Eclipse Che on a small VPS?

I have a relatively small VPS that I use as a remote dev environment:
1 vCore
2 MB of RAM
I plan to have up to 3 dev environments on the VPS. I don't need to run two simultaneously, however.
The biggest project is roughly the same size as a small Magento eShop. It actually runs on Python and Django.
The environment runs on Ubuntu + Nginx + uWSGI, but this could be changed.
I can code remotely on the VPS using Eclipse RSE or Codeanywhere.
However, Eclipse Che offers very interesting functionality for this type of remote environment.
The main risk is that the VPS configuration is very small. It is exactly the minimal configuration stated in the docs, and I don't know if I can use it this way without making things really slow...
My instinct is "no", I think 2MB of RAM is not sufficient given that the Che workspace server is itself a Java application that needs about 750 MB of RAM. If you are running the workspace somewhere else, and just using the VPS as a compute node for container workspaces, I would suspect the answer is still no, as your container OS and language runtime will need more than 2MB of RAM. If you meant 2GB of RAM, it's still difficult, but maybe feasible, to run a workspace with a full Django environment on there, using a workspace server running on a separate host.
It sure would be nice to see whether you can make it work, though, and I would love to hear about it if you do!

Executing Django Unit Tests with a Continuous Integration server

This may seem like a very broad question, but I'm really interested in possible approaches. Our team has a Django web app and a huge number of unit tests for its features. In GitHub we have a master branch, a develop branch, and individual feature/bug branches. The problem I want to solve is:
Every time code is merged into the develop branch, I want to run all (or a subset) of the unit tests against that branch. It would be cool to have this automated, i.e. I don't have to trigger the test run myself.
I have read and heard about Jenkins - http://michal.karzynski.pl/blog/2014/04/19/continuous-integration-server-for-django-using-jenkins/. It is currently one of the approaches I'm leaning towards.
But I wanted to know if there are better approaches or tools I could use.
I appreciate all your help.
For what it's worth, you can't really go wrong with Jenkins for the functionality you are looking to achieve.
Travis CI may be an even better option, though, given that it's meant to work seamlessly with GitHub and it appears all of your repositories have already been moved to GitHub.
It really depends on your business needs.
Getting Jenkins up and running has, in my past experience, always gone very smoothly, and it gives you the benefit of keeping all data in house, since you can host Jenkins on your own private servers. Depending on your setup, though, it probably doesn't scale or run as efficiently as Travis CI.
Travis CI will probably allow for an even more seamless workflow because it's already hosted for you and tied directly into GitHub, but you don't get the privacy of running Jenkins on your own servers. Travis CI does have a paid option which, again depending on your business needs, may be the better choice.
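If you do go the Travis CI route, the per-repository setup is a single .travis.yml checked into the repo. A minimal sketch for a Django project, assuming dependencies live in requirements.txt and the tests run through manage.py (the Python version and the Postgres service are just examples; swap them for whatever your app actually uses):

# .travis.yml (sketch)
language: python
python:
  - "2.7"                      # example version; list every version you support
services:
  - postgresql                 # assumption: drop or replace if your tests use SQLite/MySQL
install:
  - pip install -r requirements.txt
before_script:
  - psql -c 'CREATE DATABASE travis_ci_test;' -U postgres   # hypothetical test database
script:
  - python manage.py test

With that in place, every push and pull request on GitHub gets a build automatically, which covers the "run tests on every merge to develop" requirement without any manual trigger.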

How can I speed up my unit tests using cloud computing?

I work on a project with a lot of legacy code that has some tests (in JUnit). Those unit tests are pretty slow, and running them often slows me down. I plan on refactoring and optimizing them (so they're real unit tests), but before I do that I'd like to speed them up for now by running them in parallel. Of course I could get myself a cluster and run them there, but that's not worth the hassle and the money. Is it possible to do this with a cloud service such as Amazon AWS? Can you recommend some articles where I could read more about it?
Running unit tests in parallel requires 2 things:
Creating groups of multiple tests using either suites or JUnit categories
A way to launch each group of tests
Once you have #1, an easy way to get #2 is to use Continuous Integration.
Do you already use Continuous Integration?
If not, it is easy to add; one of the more popular options is Jenkins.
Jenkins also supports distributed builds, where a master node launches build jobs on slave nodes. You could use this to run each of your groups in parallel.
Continuous Integration in the Cloud
Once you have Continuous Integration set up, you can move it into the cloud if you wish.
If you want to stick with Jenkins, you can use:
CloudBees, which costs money.
Red Hat OpenShift "gears", which can be used to run Jenkins. I think it is possible to run gears on Amazon AWS, but I've never done it. You can run a few gears with limited memory for free; if you need more, it will cost money.
If you don't want to use Jenkins, there are other cloud-based CI services, including:
Travis CI, which integrates with GitHub. Here is an article about how to speed up builds in Travis.
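Putting the two pieces together on a hosted service: once the tests are grouped into suites or categories, most CI tools can launch each group as its own parallel job. A sketch in Travis CI syntax, assuming a Maven build and two hypothetical JUnit suite classes named FastTestSuite and SlowTestSuite:

# .travis.yml (sketch)
language: java
jdk:
  - oraclejdk8                  # example JDK; match whatever your project builds with
env:
  - TEST_SUITE=FastTestSuite    # each env entry becomes a separate job in the build matrix
  - TEST_SUITE=SlowTestSuite
script:
  - mvn test -Dtest=$TEST_SUITE

The matrix entries run concurrently (within your plan's limits), so wall-clock time drops to roughly the slowest group; the same idea works as parallel downstream jobs in Jenkins.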

Rails app, Continuous Integration/Deployment Environments

When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against, and eventually to assist with automated deployments.
Since the CI server is really just running automated tests, some think it should run in the testing environment. However, for the CI server to actually be useful, my feeling is that it needs to run in production mode, mirroring the actual production environment as closely as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should run under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running tests on the PROD environment, which as you said
seems the only logical answer
is not quite as logical as it looks. There is a real risk that your tests seriously damage the live environment/application, to the point where you are looking at a recovery operation. After all, the dark side of testing is to show that your software does not just have minor bugs but is genuinely not working as expected.
I can think of at least these 'why not test production' considerations:
When the product is launched, the customer relies on it and expects the software to work (having already been tested). Your live environment should do its job, not be loaded with tests. If the product misbehaves (or underperforms), the technical team has to be sent in to contain the damage, fix the gaps, and make it run hassle-free again. That not only increases the product's cost, it can delay project deadlines in a major way, with a knock-on effect on the vendor's profits and the next few projects.
When the development team completes a product on their end, they have to provide a test environment to the testing team before the newly developed product is loaded onto it for testing.
To me, no matter that you
also have staging and production environments
it is essential to use the Test environment accordingly. Furthermore, the Testing environment should be configured as closely as possible to Production. Also, one person could be trying to test while another person breaks the very thing being tested; without the two being separate, there is no way to do proper testing.
Just to give a full answer: your STAGE environment can play different roles depending on the company.
One option is a QA/STAGE environment that holds an exact copy of production and is used for both QA and system testing (testing the system when a lot of updates/changes or an upgrade is about to go into production).
UPDATE:
That was my point too: the QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is to create pre-/post-step scripts (.bat files, let's assume).
In our current test project we use this approach. In the pre-steps we set up the files needed for test execution (for example, removing files from previous runs and downloading the latest copies/artifacts). In the post-steps we set up the reporting files we need. The advantage is that your files are collected and synced before every execution.
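Most CI servers expose the same pre-/post-step idea directly in their configuration, so the scripts don't have to be hand-rolled .bat files. Purely as an illustration, here is what it could look like in GitLab CI syntax (the paths and helper scripts are made up):

test:
  before_script:
    - rm -rf tmp/previous_run              # clear leftovers from the last execution
    - ./scripts/fetch_latest_artifacts.sh  # hypothetical helper: download fresh copies
  script:
    - bundle exec rspec                    # assumption: an RSpec suite for the Rails app
  after_script:
    - ./scripts/collect_reports.sh         # hypothetical helper: gather reporting files
  artifacts:
    paths:
      - reports/                           # keep the reports with the build results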
About the
not on the same physical hardware
point: in my case we maintain a dedicated remote test server. The advantages are clear; the only thing to consider is that it requires maintenance (administration).

Continuous deployment of C/C++ executable to Linux production servers

I wonder if there is any best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.
I have Jenkins up and running as a CI server, and I created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post another question on whether svn:externals is the correct way to do this.)
So the main question is about the deployment steps. I am planning to make all production servers Jenkins slaves with a parameterized config, for the purpose of building from SVN tags, and then use some scripts to copy all the executables to, e.g., /opt/mytools/bin on multiple production servers.
Any recommendations?
The best deployment route is the one specified by your distribution, IMHO. That is, for Debian-based servers, bundle your applications into .deb files, put them into a repository, and let apt-get take care of the rest. This way you have a minimal impact on the production environment, and most admins are already familiar with the deployment scheme.
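The packaging step itself is small. A bare-bones sketch, written here as a GitLab-CI-style YAML job only to keep the example compact (with Jenkins the same commands would simply go into a build step); the package name, version, and paths are made up, and a production-grade package would normally be built with debhelper/dpkg-buildpackage rather than raw dpkg-deb:

package:
  script:
    - mkdir -p build/mytools_1.0-1/DEBIAN build/mytools_1.0-1/opt/mytools/bin
    - cp bin/* build/mytools_1.0-1/opt/mytools/bin/      # the executables the CI job just built
    - |
      printf 'Package: mytools\nVersion: 1.0-1\nArchitecture: amd64\nMaintainer: Ops <ops@example.com>\nDescription: internal C++ tools\n' \
        > build/mytools_1.0-1/DEBIAN/control
    - dpkg-deb --build build/mytools_1.0-1               # produces build/mytools_1.0-1.deb
  artifacts:
    paths:
      - build/mytools_1.0-1.deb

From there, the "copy to /opt/mytools/bin" step becomes "publish the .deb to an internal apt repository and run apt-get install/upgrade on the servers", which also gives you versioning and easy rollback.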
I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.
The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback steps being triggered by a real person.
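To make that pipeline idea concrete, here is a skeleton, again in GitLab CI syntax purely as an illustration (in Jenkins the same shape would be a chain of jobs or a pipeline script); all stage names and commands are placeholders:

stages:
  - build
  - unit-test
  - acceptance-test
  - deploy

build:
  stage: build
  script:
    - make all                        # placeholder build command

unit-test:
  stage: unit-test
  script:
    - make check                      # fast automated tests, run on every commit

acceptance-test:
  stage: acceptance-test
  script:
    - ./run_acceptance_tests.sh       # hypothetical slower end-to-end tests

deploy:
  stage: deploy
  when: manual                        # the final step is triggered by a real person
  script:
    - ./deploy_to_production.sh       # hypothetical deployment script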