Building a container with gcloud, kubectl and python3.6 - dockerfile

I want my deployment pipeline to run a Python 3.6 script against my GKE-hosted database.
Locally, I use kubectl port-forward and then run the script successfully.
However, to run it in my pipeline I need to start a container that supports both GKE access and Python 3.6.
To run Python 3.6 I'm using the image python:3.6.
To run gcloud and kubectl I'm using the image google/cloud-sdk:latest.
However, gcloud uses Python 2, which makes it very difficult for me to orchestrate a container that includes all of these tools.
For reference, I'm using Bitbucket Pipelines. I might be able to solve it with the services feature, but currently it's too complicated since I need to run many commands on both potential containers.
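One way to get all of these tools into a single image, sketched below and untested, is to start from python:3.6 and add gcloud and kubectl from Google's public apt repository (the repository URL and package names are the documented ones; the entrypoint is just an example):

FROM python:3.6-slim

# Add Google's apt repository, then install the Cloud SDK and kubectl next to Python 3.6
RUN apt-get update && \
    apt-get install -y curl gnupg apt-transport-https ca-certificates && \
    echo "deb https://packages.cloud.google.com/apt cloud-sdk main" \
        > /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    apt-get update && \
    apt-get install -y google-cloud-sdk kubectl && \
    rm -rf /var/lib/apt/lists/*

# The apt package pulls in the Python it needs for gcloud, so it coexists with python3.6
ENTRYPOINT ["/bin/bash"]

The reverse direction (start from google/cloud-sdk:latest and install Python 3.6 on top) should also work; either way, the Bitbucket Pipelines step can then use one image for both kubectl port-forward and the script.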

Related

Is it possible to install Apache Superset on an ECS container

I am working on Apache Superset. I was able to install it on a Linux EC2 instance using Docker; is there any possibility of installing it on ECS?
There are a couple of approaches to this.
First, you can take the container image here and build an ECS task definition / ECS service around it, bringing it up standalone. Make sure you enable ECS Exec so you are able to exec into the container and launch those commands. I have not tested this, but I see no reason why it should not work.
I have also spent some time trying to make the docker compose files in the Superset GH repo work with Amazon ECS. You can read more about my findings here.
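To make the first approach a little more concrete, here is a rough sketch only (the cluster, service, task-definition, and container names are made up, and networking/IAM details are omitted) of enabling and using ECS Exec from the AWS CLI:

# Create the service with ECS Exec enabled (assumes the task definition is already registered)
aws ecs create-service \
  --cluster my-cluster \
  --service-name superset \
  --task-definition superset-task \
  --desired-count 1 \
  --enable-execute-command

# Open a shell inside the running container to launch the Superset setup commands
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container superset \
  --interactive \
  --command "/bin/bash"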

Can I use Cloud Build to perform CI/CD tasks in VM instances?

I'm using Google Cloud Platform and exploring its CI/CD tools.
I have an app deployed on a VM instance and I'm wondering if I can use GCP tools such as Cloud Build to do CI/CD instead of using Jenkins.
From what I've learned over several resources, Cloud Build seems to be a nice tool for Cloud Run (deploying Docker images) and Cloud Functions.
Can I use it for apps deployed in VM instances?
When you create a job in Cloud Build, you set up a cloudbuild.yaml file in which you specify the build steps. How would I write a step so that it goes into a Linux VM, logs in as a particular user, cds into a directory, pulls the master branch of the project repo, and starts running main.py (say it's a Python project)?
You can do it like this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args:
    - "-c"
    - |
      gcloud compute ssh --zone us-central1-a my_user@oracle -- "whoami; ls -la; echo cool"
However, that's not a cloud-native way to deploy an app. VMs aren't "pets" but "cattle": when you no longer need one, kill it, no emotion!
So, a modern way to use the cloud is to create a new VM with the new version of your app. Optionally, you can keep the previous VM stopped (so you pay nothing for it) in case of rollback. To achieve this, you can add a startup script which installs all the required packages, libraries, and your app on the VM, and starts it.
An easier way is to create a container. That way, all the system dependencies are inside the container, and the VM doesn't need any customization: simply download the container and run it.
Cloud Build lets you create a VM with a startup script using the gcloud CLI. You can also stop the previous one. Do you have a persistent disk to reuse (for the data between versions)? With Cloud Build you can also clone it and attach it to the new VM, or detach it from the previous one and attach it to the new one!
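For instance, a Cloud Build step along these lines could stand up the new VM (the instance names, zone, and startup-script.sh file are placeholders):

- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args:
    - "-c"
    - |
      # Stop (don't delete) the previous version so it stays available for rollback
      gcloud compute instances stop my-app-v1 --zone us-central1-a
      # Create the new VM; the startup script installs dependencies and starts the app
      gcloud compute instances create my-app-v2 \
          --zone us-central1-a \
          --metadata-from-file startup-script=startup-script.sh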

Installation of Jenkins in AWS EC2 Ubuntu 16.04 LTS

I am trying to implement a CI/CD pipeline for my Spring Boot application deployment using Jenkins on an AWS EC2 machine, and I am using containerized deployment of microservices with Docker. While exploring the installation of Jenkins, I found that we can use the Jenkins Docker image, and also that we can install it normally. I found the following link as an example of a normal installation of Jenkins:
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
My confusion is this: if I am using Dockerized deployment of my microservices, can I use a normal installation of Jenkins on my VM, and can I use Docker commands inside a Jenkins pipeline job?
Can anyone help me clarify this, please?
If you want to run Docker commands in Jenkins pipelines on the same machine where Jenkins runs, you should run it without a container, as that configuration will be much easier for you: you just need to add the jenkins user to the "docker" group so it can run Docker containers.
When you run Jenkins from within a container, configuration is a little harder, as you probably need to map the host's Docker daemon socket into the Jenkins container so it can start Docker containers on the host, or you need to use the docker-in-docker feature. Please take a look at this article: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
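Concretely, the two options look roughly like this (the volume name and image tag are only examples):

# Option 1: Jenkins installed on the host - let the jenkins user use the host's Docker
sudo usermod -aG docker jenkins

# Option 2: Jenkins in a container - map the host's Docker daemon socket into it
docker run -d \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

With option 2, the docker CLI binary must also be available inside the Jenkins container for pipeline docker commands to work.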

Using plugin in Google Composer make it crash

I wrote a small plugin for Apache Airflow, which runs fine on my local deployment. However, when I use Google Composer, the user interface hangs and becomes unresponsive. Is there any way to restart the webserver in Google Composer?
(Note: This answer is currently more suggestive than finalized.)
As far as restarting the webserver goes...
What doesn't work:
I reviewed the Airflow Web Interface page in the docs, which describes using the webserver but not accessing it from a CLI or restarting it.
While you can also run Airflow CLI commands on Composer, I don't see a command for restarting the webserver in the Airflow CLI today.
I checked the gcloud CLI in the Google Cloud SDK but didn't find a restart related command.
Here are a few ideas that may work for restarting the Airflow webserver on Composer:
In the gcloud CLI, there's an update command to change environment properties. I would assume that it restarts the scheduler and webserver (in new containers) after you change one of those properties to apply the new setting. You could set an arbitrary environment variable to check, but just running the update command with no changes may work.
gcloud beta composer environments update ...
Alternatively, you can update environment properties excluding environment variables in the GCP Console.
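For illustration, going the gcloud route with an arbitrary environment variable might look like this (the environment name, location, and variable are purely hypothetical):

gcloud beta composer environments update my-environment \
    --location us-central1 \
    --update-env-variables=DUMMY_RESTART_TRIGGER=1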
I think re-running the import plugins command would cause a scheduler/webserver restart as well.
gcloud beta composer environments storage plugins import ...
In a more advanced setup, Composer supports deploying a self-managed Airflow web server. Following the linked guide, you can: connect into your Composer instance's GKE cluster, create deployment and service Kubernetes configuration files for the webserver, and deploy both with kubectl create. Then you could run a kubectl replace or kubectl delete on the pod to trigger a fresh start.
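Sketched out, and assuming you labeled your self-managed webserver deployment app=airflow-webserver (an arbitrary label from your own config, not something Composer sets), that restart could be as simple as:

# Point kubectl at the Composer environment's GKE cluster
gcloud container clusters get-credentials my-composer-cluster --zone us-central1-a
# Delete the webserver pod; its deployment recreates it, which amounts to a restart
kubectl delete pod -l app=airflow-webserver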
This all feels like a bit much, so hopefully documentation or a simpler way to achieve webserver restarts emerges to supersede these workarounds.

Automate code deploy from Git lab to AWS EC2 instance

We're building an application for which we are using a GitLab repository. Manual deployment of code to the test server, which is an Amazon AWS EC2 instance, is tedious, so I'm planning to automate the deployment process such that when we commit code, it is reflected in the test instance.
To my knowledge we can use the AWS CodeDeploy service to fetch the code from GitHub, but CodeDeploy does not support GitLab repositories. Is there a way to automate the code deployment process to an AWS EC2 instance through GitLab, or is there a shell scripting possibility to achieve this? Kindly educate me.
One way you could achieve this with AWS CodeDeploy is by using the S3 option in conjunction with GitLab CI: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Depending on how your project is set up, you may be able to generate a distribution zip (Gradle offers this through the application plugin). You may need to generate your "distribution" file manually if your project does not offer such a capability.
GitLab does not offer a direct S3 integration; however, through the .gitlab-ci.yml you can install the AWS CLI in the job container and run the necessary upload commands to put the generated zip file in the S3 bucket, as per the AWS instructions, to trigger the deployment.
Here is an example of what your before_script could look like in the .gitlab-ci.yml file:
before_script:
  - apt-get update --quiet --yes
  - apt-get --quiet install --yes python python-pip
  - pip install -U pip
  - pip install awscli
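To sketch the rest (the bucket, application, and deployment-group names are placeholders, and AWS credentials are assumed to be provided as CI variables), the deploy job could then upload the zip and trigger CodeDeploy:

deploy:
  stage: deploy
  script:
    # Upload the generated distribution zip to S3
    - aws s3 cp build/distributions/myapp.zip s3://my-deploy-bucket/myapp.zip
    # Ask CodeDeploy to roll it out to the instances in the deployment group
    - aws deploy create-deployment
        --application-name my-app
        --deployment-group-name my-deployment-group
        --s3-location bucket=my-deploy-bucket,key=myapp.zip,bundleType=zip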
The AWS tutorial on how to use CodeDeploy with S3 is very detailed, so I will skip attempting to reproduce the contents here.
As for the actual deployment commands and actions that you are currently performing manually, AWS CodeDeploy provides the capability to run certain actions through scripts defined in the AppSpec file, depending on the application's event hooks:
http://docs.aws.amazon.com/codedeploy/latest/userguide/writing-app-spec.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-hooks.html
I hope this helps.
This is one of my old posts, but I happened to find an answer for it. Although my question is specifically about CodeDeploy, I would say there is no need for any AWS tooling when deploying from GitLab.
We don't require CodeDeploy at all. There is no need to use an external CI server like TeamCity or Jenkins to perform CI from GitLab anymore.
We need to add a .gitlab-ci.yml file in the source directory of the branch and write a YAML script in it. GitLab has pipelines that will perform the CI/CD automatically.
GitLab CI/CD pipelines work much like a Jenkins server. Using the YAML script we can SSH into the EC2 instance and place the files on it.
An example of how to write the .gitlab-ci.yml file to SSH into an EC2 instance is here: https://docs.gitlab.com/ee/ci/yaml/README.html
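For a rough idea only (the $SSH_PRIVATE_KEY and $EC2_HOST CI variables, the ubuntu user, and the target path are all hypothetical), such a job could look like:

deploy_to_ec2:
  stage: deploy
  before_script:
    # Install an SSH client and load the private key stored as a CI variable
    - apt-get update -y && apt-get install -y openssh-client rsync
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
  script:
    # Copy the checked-out sources onto the EC2 instance
    - rsync -avz -e "ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa" ./ ubuntu@$EC2_HOST:/var/www/myapp/
  only:
    - master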