Meteor build locally or on AWS host

A Meteor app is running on the local machine. It gets built with appDir$ meteor build . and the resulting myApp.tar.gz is copied to the AWS cloud. A script then runs on the cloud host to put the app into a Docker container, following the Dockerfile commands below.
Every time a change needs to be made, the whole process is repeated. Is there a better way to reduce the effort of re-building/copying/dockerizing?
Is it possible to use a volume and docker-compose and just sync the changes from the local development machine to the AWS EC2 volume directory? How?
# Dockerfile on AWS EC2
FROM lambdalinux/baseimage-amzn:2016.09-000
RUN curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
RUN yum install -y tar nodejs
ADD ./myApp.tar.gz /opt/
EXPOSE 80
ENV ROOT_URL http://example.com
ENV MONGO_URL "mongodb://username:pass..."
ENV PORT 80
# Install nodejs modules
WORKDIR /opt/bundle/
RUN npm install fibers
RUN npm install underscore
RUN npm install source-map-support
RUN npm install semver
# Start the app
CMD node ./main.js

There is a command called rsync that will do a smart sync of a whole directory structure - if you unpack the build locally, you can then rsync it up to the server.
It can use either file dates or checksums to work out what has changed, which makes the process quicker. Minified files will probably change every time, but many assets won't.
I would set it up with a mirror of your production directory, sync the files into there, do some (automated) sanity checks first, and then switch the new version into place. If it doesn't work, you can switch the old version back. There is a little work required to get this set up, but it will make deployment faster and easier.
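As a rough sketch of that flow (the host name and paths here are placeholders, adjust them to your setup), the sync step could look something like this:
# Unpack the build locally so rsync can compare individual files
mkdir -p build-out && tar -xzf myApp.tar.gz -C build-out
# Sync only what changed (by checksum) into a staging directory on the server
rsync -az --checksum --delete build-out/bundle/ user@ec2-host:/opt/bundle-staging/
# After the (automated) sanity checks pass, switch the new version into place
ssh user@ec2-host 'mv /opt/bundle /opt/bundle-prev && mv /opt/bundle-staging /opt/bundle'
You would still need to restart or rebuild the container afterwards so it picks up the new bundle.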

How to make GitLab Windows shared runners build faster?

Background
I have a CI pipeline for a C++ library I've been developing. So far, I can distribute this lib to Linux and Windows systems. Since I use GitLab to build, test and package my lib, I'd like to have my Windows builds run faster, and I have no clue how to do that.
Currently, I use the following script for my Windows builds:
.windows_template:
  tags:
    - windows
  before_script:
    - choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
    - choco install python --pre -y
    - choco install git -y
    - $env:ChocolateyInstall = Convert-Path "$((Get-Command choco).Path)\..\.."; Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"; refreshenv
    - python -m pip install --upgrade pip
    - pip install conan monotonic
The problem
Any build with the script above can take up to 10 minutes; worse, I have two stages, each taking the same amount of time. This means my whole CI pipeline takes about 20 minutes to finish because of the slowness of the Windows builds.
Ideal solution
EVERYTHING in my before_script can be cached or stored as an image. I only need some hints on how to do it properly.
Additional information
I use the following tools for my builds:
CMake: to support my building process;
Python3: to test and build packages;
Conan (requires Python3): to support the creation of packages with several features, as well as distribute them;
Git: to download Googletest in the CMake configuration step. This is already provided in the cookbooks - I might just remove this extra installation step in my before_script;
Googletest (requires Python3): testing library;
Visual Studio DEV Tools: to compile the library. This is already in the cookbooks.
Installing packages like this (whether it's OS packages through apt-get install..., pip, or anything else) is generally against best practices for CI/CD jobs, because every job that runs has to do the same thing, costing a lot of time as you run more pipelines, as you've seen already.
A few alternatives are to search for an existing image that has everything you need (possible but not likely with more dependencies), split up your job into pieces that might be solved by an image with just one or two dependencies, or create a custom docker image to use in your jobs. I answered a similar question with an example a few weeks ago here: "Unable to locate package git" when running GitLab CI/CD pipeline
But here's an example Dockerfile with Windows:
# Dockerfile
FROM mcr.microsoft.com/windows
RUN ./install_chocolatey.sh
RUN choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
RUN choco install python --pre -y
RUN choco install git -y
...
The FROM line says that our new image extends the mcr.microsoft.com/windows base image. You can extend any image you have access to, even if it already extends another image (in fact, that's how most images work: they start with something small, like a base OS installation, then add things needed for that package. PHP for example starts on an Ubuntu image, then installs the necessary PHP packages).
The first RUN line is just an example. I'm not a Windows user and don't have experience installing Chocolatey, but you'd do here whatever you'd normally do to install it locally. The rest are for installing whatever else you need.
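If it helps, Chocolatey's documented install command can be adapted for that placeholder line; this is taken from Chocolatey's install instructions and is worth double-checking against their current docs before relying on it:
# Install Chocolatey inside the Windows image (adapted from the official one-liner)
RUN powershell -Command "Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))"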
Then run
docker build /path/to/dockerfile-dir -t mygroup/mytag:version
The path you supply needs to be the directory that contains the Dockerfile, not the Dockerfile itself. The -t flag sets the image's tag after it's built (though you can do that with a separate command, docker tag too).
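For example, if you built the image without -t, you could tag it afterwards (the image ID below is just a placeholder):
docker tag <image-id> mygroup/mytag:version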
Next, you'll have to log into whichever registry you're using: Docker Hub (https://docs.docker.com/docker-hub/repos/), the GitLab Container Registry (https://docs.gitlab.com/ee/user/packages/container_registry/), a private registry your employer may support, or any other option.
docker login my.docker.hub.com
Now you can push the image to the registry:
docker push my.docker.hub.com/mygroup/mytag:version
You'll have to review the information in the docs about telling your GitLab runner or pipelines how to authenticate with the registry (unless the image is public on Docker Hub or you use the GitLab Container Registry): https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
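As one concrete option described in those docs (the registry host and credentials below are placeholders), GitLab lets you set a DOCKER_AUTH_CONFIG CI/CD variable containing a Docker config JSON, where "auth" is the base64 of username:password:
{
  "auths": {
    "my.docker.hub.com": {
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}
(The example value decodes to myuser:mypassword.)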
Once all that's done, you can use your new image in your CI jobs, and everything we put into the image will be ready to use:
.windows_template:
  image: my.docker.hub.com/mygroup/mytag:version
  tags:
    - windows
  ...

Cloud Run needs NGINX or not?

I am using Cloud Run for my blog and a work site, and I really love it.
I have deployed Python APIs and Vue/Nuxt apps by containerising them according to the Google tutorials.
One thing I don't understand is why there is no need to have NGINX in front.
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
RUN npm run build
CMD [ "npm", "start" ]
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN apt-get update && apt-get install -y \
libpq-dev \
gcc
RUN pip install -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver with four worker processes.
# For environments with multiple CPU cores, increase the number of workers
# to match the cores available.
CMD exec gunicorn -b :$PORT --workers=4 main:server
All this works without me ever running NGINX.
But I read a lot of articles where people bundle NGINX in their container, so I would like some clarity. Are there any downsides to what I am doing?
One considerable advantage of using NGINX or a static file server is the size of the container image. When serving SPAs (without SSR), all you need is to get the bundled files to the client. There's no need to bundle build dependencies or runtime that's needed to compile the application.
Your first image is copying whole source code with dependencies into the image, while all you need (if not running SSR) are the compiled files. NGINX can give you the "static site server" that will only serve your build and is a lightweight solution.
Regarding Python, unless you can bundle it somehow, it looks OK to run it without NGINX.
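To make the static-server idea concrete, here is a sketch of a multi-stage build that compiles the SPA and ships only the output with NGINX. It assumes the build emits static files into dist/ and that no SSR is involved; adjust paths to your project, and note that on Cloud Run you would also need NGINX to listen on the port Cloud Run expects (8080 by default, or whatever container port you configure):
# Build stage: compile the SPA
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
# Serve stage: only the static output, no Node runtime or node_modules
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80
The resulting image contains just NGINX and the built assets, which is typically far smaller than the Node image above.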

Where should a C++ application be compiled in a GitLab CI Docker workflow?

I’m looking to understand how to properly structure my .gitlab-ci.yml and Dockerfile such that I can build a C++ application into a Docker container.
I’m struggling with where the actual compilation and link of the C++ application should take place within the CI workflow.
What I’ve done:
My current approach is to use Docker-in-Docker with a private GitLab Docker registry.
My gitlab-ci.yml uses a dind Docker image service I created, based on the docker:19.03.1-dind image but including my certificates to talk securely to my private GitLab Docker registry.
I also have a custom base image referenced by my gitlab-ci.yml, based on docker:19.03.1, that includes what I need for building, e.g. cmake, build-base, mariadb-dev, etc.
My build script is added to the gitlab-ci.yml to build the application: cmake … && cmake --build .
The Dockerfile then copies the final binary produced in my build step.
Having done all of this, it doesn't feel quite right to me and I'm wondering if I'm missing the intent. I've tried to find a C++ example online to follow as an example, but have been unsuccessful.
What I’m not fully understanding is the role of each player in the docker-in-docker setup: docker image, dind image, and finally the container I’m producing…
What I’d like to know…
Who should perform the build and contain the build environment, the base image specified in my .gitlab-ci.yml or my Dockerfile?
If I build with the Dockerfile, how do I get the contents of the source into the Docker container? Do I copy the /builds dir? Should I mount it?
Where to divide who performs work, gitlab-ci.yml or Dockerfile?
A reference to a working example of a C++ Docker application built with Docker-in-Docker GitLab CI.
.gitlab-ci.yml
image: $CI_REGISTRY/building-blocks/dev-mysql-cpp:latest
#image: docker:19.03.1
services:
  - name: $CI_REGISTRY/building-blocks/my-dind:latest
    alias: docker
stages:
  - build
  - release
variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
  stage: build
  script:
    - mkdir build
Both approaches are equally valid. If you look at other SO questions, one thing you'll probably notice is that Java/Docker images almost universally build a jar file on their host and then COPY it into an image, but Go/Docker images tend to use a multi-stage Dockerfile starting from sources.
If you already have a fairly mature build system and your developers already have a very consistent setup, it makes sense to do more work in the CI environment (in your .gitlab-ci.yml file). Build your application the same way you already do, then COPY it into a minimal Docker image. This approach is also helpful if you need to ship both Docker and non-Docker artifacts. If you have a make dist style tar file and want to get a Docker image out of it, you could use a very straightforward Dockerfile like
FROM ubuntu
RUN apt-get update && apt-get install ...
# Unpacks the tar file into /usr/local
ADD dist/myapp.tar.gz /usr/local
EXPOSE 12345
# Runs /usr/local/bin/myapp
CMD ["myapp"]
On the other hand, if your developers have a variety of desktop environments and you're really trying to standardize things, and you only need to ship the Docker image, it could make sense to centralize most things in the Dockerfile. This would have the advantage that every developer could run the exact build sequence themselves locally, rather than depending on the CI system to try simple changes. Something built around GNU Autoconf might look more like
FROM ubuntu AS build
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      build-essential \
      lib...-dev
WORKDIR /app
COPY . .
RUN ./configure --prefix=/usr/local \
 && make \
 && make install

FROM ubuntu
RUN apt-get update \
 && apt-get install --no-install-recommends --assume-yes \
      lib...
COPY --from=build /usr/local /usr/local
CMD ["myapp"]
If you do the primary build in a Dockerfile, you need to COPY the source code in. Volume mounts don't work at this point in the sequence. CI systems should avoid bind-mounting source code into a container in any case: you want to run tests against the actual artifact you've built, and not a hybrid of a built Docker image but with all of its source code replaced.
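For the CI side of the Dockerfile-centric approach, the build job hands the checked-out source to docker build as the context, roughly like this (a sketch reusing the variables from your .gitlab-ci.yml and assuming the Dockerfile sits in the repo root):
build:
  stage: build
  script:
    # The job's working directory is the checked-out repo, so "." is the build
    # context and the Dockerfile's COPY . . picks the source up from there.
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE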

Install composer dependencies while deploying

I'm using Elastic Beanstalk to deploy my application as a Single Docker Application.
My Dockerfile runs composer install while deploying, but I get a "Could not authenticate against github.com" error.
I use these lines in my Dockerfile to install my dependencies:
WORKDIR /www
RUN ["composer", "install", "-o"]
How would I solve this issue?
I think you need to configure Composer inside your container with your key or something like that; remember that inside your container you're basically on another OS and you don't have your public keys, etc.
I'd try to install it from source rather than from git (as you don't have keys).
Try this:
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
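If the failure is really about authenticating against github.com when Composer downloads packages, another option is to hand Composer a GitHub token at build time; GITHUB_TOKEN here is an illustrative build argument you would pass with --build-arg, not something from your current setup:
# Hypothetical build argument carrying a GitHub personal access token
ARG GITHUB_TOKEN
# Register the token with Composer so it can authenticate against github.com
RUN composer config -g github-oauth.github.com "$GITHUB_TOKEN" && composer install -o
Keep in mind that build arguments end up in the image history, so a proper secrets mechanism is preferable for anything sensitive.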

How do I make bower install work with aws.push?

As a starting point for making my own app that uses meanjs, I went to the meanjs website and used their Yeoman generator to create the template/sample app. Following the instructions, I got the sample application running out of the box on my local desktop machine within minutes. To complete the exercise, I tried to deploy the sample app to an AWS/EC2 instance before making any changes to it. I have used the command-line deployment tools in the past and liked them. It is also nice that you can now just select an EC2 Linux instance with node and npm already installed and ready.
After checking the sample into git, I run "git aws.push" to deploy the app.
The problem is this line in the package.json:
"postinstall": "bower install --config.interactive=false"
In the eb-activity.log:
npm WARN cannot run in wd meansample#0.0.1 bower install --config.interactive=false (wd=/tmp/deployment/application)
The result is that AngularJS ends up not getting installed in /public/lib.
The first thing I tried was giving the full path in the package.json file: node_modules/bower/bin/bower. This didn't help and resulted in the same error. Note that other commands like "grunt" don't need the full path specified in the package.json, and they work.
I don't understand enough of the black-box magic that aws.push does to understand why this error is happening. For example, what user does it run as? What permissions does that user have? What options, if any, does it use when it runs npm install?
I did figure out a work-around, but it adds a lot of extra steps that shouldn't be required if aws.push was able to run bower install directly. Basically I can manually run the bower install in the ssh client connected to my EC2 instance, set the owner/group on the installed files, and restart the server.
Work-around steps:
1) On the local command prompt, run git aws.push. Wait for the unsuccessful deployment to finish.
2) Connect an ssh client to the EC2 instance. From the command prompt:
cd /var/app/current
# NOTE: without sudo, the ec2-user I am logged in as does not have permission to create /public/lib, which AngularJS is installed into
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
# NOTE: just changing the owner and group to match the other files that aws.push deployed
sudo chown -R nodejs public/lib
sudo chgrp -R nodejs public/lib
3) From the AWS dashboard, select the correct EC2 instance, Action = Restart App Server(s).
Now AngularJS is installed and the sample app works.
How do I eliminate the extra steps and make it so aws.push can do the bower install successfully?
I experienced the same problem when trying to publish my nodejs app on a private server running CentOS as the root user. The same error is fired by "postinstall": "./node_modules/bower/bin/bower install" in my package.json file, so the only solution that worked for me was to use both of the following options to avoid the error:
1: use the --allow-root option for the bower install command
"postinstall": "./node_modules/bower/bin/bower --allow-root install"
2: use the --unsafe-perm option for the npm install command
npm install --unsafe-perm