AWS CodeBuild / using local Redis (Ubuntu image)

Getting into CodeBuild; we're currently looking to use Redis locally on an Ubuntu image, using the following buildspec:
version: 0.2
phases:
  install:
    commands:
      - apt update
      - apt install -y redis-server wget
  pre_build:
    commands:
      - wget https://raw.githubusercontent.com/Hronom/wait-for-redis/master/wait-for-redis.sh
      - chmod +x ./wait-for-redis.sh
      - service redis-server start
      - ./wait-for-redis.sh localhost:6379
  build:
    commands:
      - redis-cli info
      - redis-cli info server
For now it seems to us that docker-compose is not strictly required, so we wanted to try it this way first, expecting standard Ubuntu behaviour.
We install Postgres with a similar approach; it starts properly and is fully usable.
Here, however, we're unable to start Redis properly: wait-for-redis keeps retrying, and we keep getting the error Could not connect to Redis at localhost:6379: Connection refused.
With an EC2 Linux image (yum based) we don't have this issue.
What would be the correct way to start Redis in that Ubuntu context?

Just ran into the same problem.
When I added a cat /var/log/redis/*.log to the buildspec, I discovered that Redis was not able to bind:
Creating Server TCP listening socket ::1:6379: bind: Cannot assign requested address
Further research showed this to be a known issue: https://github.com/redis/redis/issues/3241
... which can be fixed by adding these lines to the buildspec (before using Redis):
- sed -i '/^bind/s/bind.*/bind 127.0.0.1/' /etc/redis/redis.conf
- service redis-server restart
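For reference, the adjusted pre_build commands might look like this (a sketch assembled from the lines above; the final redis-cli ping is just an extra sanity check):
- sed -i '/^bind/s/bind.*/bind 127.0.0.1/' /etc/redis/redis.conf   # listen on the IPv4 loopback only
- service redis-server restart
- ./wait-for-redis.sh localhost:6379
- redis-cli ping   # should answer PONG once the socket is bound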

Related

Provisioning AWS Ubuntu using Ansible errors out with permission errors when gathering facts or installing apps regardless of become escalations

When provisioning an AWS instance running Ubuntu 20.04.3 LTS (GNU/Linux 5.11.0-1028-aws x86_64), provisioning stops at the following tasks:
- The TASK [Gathering Facts] task when become: true is set in the playbook. The following error message is displayed: Missing sudo password
- The apt: update_cache=yes task when become: true is set in the playbook and gather_facts: false. The following error message is displayed: Missing sudo password
- The apt: update_cache=yes task when become: true is not set in the playbook. The following error message is displayed: Failed to lock apt for exclusive operation: Failed to lock directory /var/lib/apt/lists/: E:Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
- The TASK [geerlingguy.pip : Ensure Pip is installed.] task when become: true is not set in the playbook. The following error messages are displayed:
  "E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)",
  "E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?"
I suspect this is happening because this is an AWS-modified OS, being GNU/Linux 5.11.0-1028-aws. I can ssh into the instance and run sudo apt update and sudo apt install python3-pip, and it works without a password because I have ALL=(ALL:ALL) NOPASSWD:ALL set in sudoers for my ssh user. However, when the playbook runs apt update and apt install python3-pip without become: true set, the error messages above are displayed.
I have run all of these and re-run the playbook:
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/apt/lists/lock
sudo rm /var/lib/dpkg/lock-frontend
I found this Stack Overflow answer, Ansible playbook fails to lock apt, and added those tasks, but it would fail at this step: raw: apt-get -y purge unattended-upgrades.
unattended-upgrades seems to be disabled, because I no longer see this message when I log in to the instance:
2 updates could not be installed automatically. For more details, see /var/log/unattended-upgrades/unattended-upgrades.log
This is the first time I have had an issue with running these very basic ansible tasks. Given that AWS has its own ansible modules, I am sure I am doing something wrong or missing something obvious. I've had a hard time finding a solution to this problem because Googling has been ineffective. There are too many irrelevant results due to the popularity of the AWS ansible modules. I'm not trying to create or modify any AWS instances. I'm just trying to provision one.
I'm hoping one of the many AWS or Ansible experts here can help me out.
Here's the code: https://github.com/kahunacoder/ansible-wikijs
Here's an example
playbook.yml:
- hosts: all
  gather_facts: true
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python3
    pip_package: python3-pip
    pip_install_packages:
      - name: docker
  tasks:
    - name: Update apt
      apt: update_cache=yes
  roles:
    - geerlingguy.pip
hosts.yml:
ansible_host: wiki.mydomain.com # dev machine
ansible_ssh_user: wiki
ansible_ssh_private_key_file: "~/.ssh/id_rsa"
ansible_connection: ssh
ansible_python_interpreter: /usr/bin/python3
sudoers:
wiki ALL=(ALL:ALL) NOPASSWD:ALL
The fix was adding two lines to my ansible.cfg file:
[sudo_become_plugin]
flags = -H -S
I found the answer here: Ansible: sudo without password
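To sanity-check that passwordless become works before re-running the playbook, an ad-hoc command helps (a sketch; it assumes the inventory file is the hosts.yml shown above):
# should come back with "pong" without prompting for a sudo password
ansible all -i hosts.yml -m ping --become
# the same escalation path the playbook uses for apt
ansible all -i hosts.yml -m apt -a "update_cache=yes" --become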

MERN stack app deployment to AWS EC2 instance

Hello, I'm trying to set up my AWS instance and deploy my MERN app (it's not a static app), but I've found so many people doing different things that it got me a little confused. Can anyone explain the process I will have to go through to have a functional MERN app deployed on AWS? There is no need to go into details; I just need someone to explain the basics.
Setting up an AWS server with NodeJS:
- Create an instance.
- ssh into the instance.
- git clone the repo.
- sudo apt-get update
- Install npm.
- npm install
- Add any env or other required files that are in .gitignore.
- sudo ufw allow ssh
- sudo ufw allow 443/tcp
- sudo ufw allow 80/tcp
Set up PM2 and configure it for port 80:
- $ sudo npm install pm2 -g
- $ pm2 start index.js
- $ pm2 stop index
- Open up your app's index.js file and change the port from 5000 (the default) to 80.
- You also need to upload and configure certificate files to use port 443 with HTTPS.
- $ sudo apt-get install libcap2-bin
- $ sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``
- $ pm2 start index
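A quick way to verify that the setcap step and the port change took effect (a sketch; it assumes the pm2 process is named index as above):
- $ getcap "$(readlink -f "$(which node)")"   # should list cap_net_bind_service on the node binary
- $ pm2 restart index
- $ curl -I http://localhost                  # the app should now answer on port 80 without root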

Docker Cloud autotest can't find service

I am currently trying to dockerize one of my Django API projects. It uses postgres as the database. I am using Docker Cloud as a CI so that I can build, lint and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails, as Django can't connect to any database when running the unit tests; since no Postgres instance is running at that point, it fails with the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like what I needed to run the tests, as it would allow me to build my Django image, reference an already existing Postgres image and run the tests.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file:
version: '3.2'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, using the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (172.18.0.2) and accepting TCP/IP connections on port 5432?
So it seems like the db service isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found https://docs.docker.com/compose/startup-order/, where it says that the containers don't really wait for each other to be 100% ready, and they recommend writing a wrapper script to wait for Postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script.
Juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
and replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py", "test"]
I then pushed to GitHub and Docker Cloud started building. Building the image works, but now the autotest just waits for Postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud).
I have Googled a fair bit today and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
It seems strange to me that it runs perfectly fine locally, but when Docker Cloud runs it, it fails!
I seem to have fixed it by downgrading the compose file version from 3.2 to 2.1 and using a healthcheck.
With version 3.2, the condition form gives a syntax error in the depends_on clause, because there depends_on only accepts a plain list; the long form with condition: service_healthy from the 2.1 format was apparently dropped in the 3.x file format.
But here is my new docker-compose.test.yml that works
version: '2.1'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      # does not work in the 3.2 file format
      db:
        condition: service_healthy
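If the healthcheck itself ever needs debugging, its status can be read from the host (a sketch; the exact container name depends on your compose project name):
# prints "starting", "healthy" or "unhealthy"
docker inspect --format '{{.State.Health.Status}}' <project>_db_1
# includes the output of the most recent probe runs
docker inspect --format '{{json .State.Health}}' <project>_db_1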

Deploy docker container to digital ocean droplet from gitlab-ci

So here is what I want to do:
1. Push to master in git.
2. Have gitlab-ci hear that push and start a pipeline.
3. The pipeline builds the code and pushes a docker container to the gitlab registry.
4. The pipeline logs into a digital ocean droplet via ssh.
5. The pipeline pulls the docker container from the gitlab registry.
6. The pipeline starts the container.
I can get up to step 4 no problem, but step 4 just fails every which way. I've tried the ssh key approach:
https://gitlab.com/gitlab-examples/ssh-private-key/blob/master/.gitlab-ci.yml
But that did not work.
So I tried a plain text password approach like this:
image: gitlab/dind:latest
before_script:
  - apt-get update -y && apt-get install sshpass
stages:
  - deploy
deploy:
  stage: deploy
  script:
    - sshpass -p "mypassword" ssh root@x.x.x.x 'echo $HOME'
This version just exits with code 1, like so:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
/usr/local/bin/wrapdocker: line 113: 54 Killed docker daemon $DOCKER_DAEMON_ARGS &> /var/log/docker.log
Timed out trying to connect to internal docker host.
Is there a better way to do this? How can I at the very least access my droplet from inside the gitlab-ci build environment?
I just answered this related question: Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Here's the solution he is using to get the ssh credentials set up:
before_script:
  ## Install ssh agent (so we can access the Digital Ocean Droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test it!
  - ssh -t ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} 'echo $HOME'
Code credit goes to https://stackoverflow.com/users/6655011/leonardo-sarmento-de-castro
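With ssh access in place, the remaining steps (pulling the image from the GitLab registry on the droplet and starting it) could be sketched roughly like this; $CI_JOB_TOKEN, $CI_REGISTRY and $CI_REGISTRY_IMAGE are GitLab's built-in CI variables, while the container name and port mapping are assumptions:
# run the deployment commands on the droplet over ssh (container name "myapp" is hypothetical)
ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} \
  "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY && \
   docker pull $CI_REGISTRY_IMAGE:latest && \
   docker rm -f myapp 2>/dev/null; \
   docker run -d --name myapp -p 80:80 $CI_REGISTRY_IMAGE:latest"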

Service name for Docker Compose remote interpreter in PyCharm 5.1 Beta 2

I've imported a tutorial project into PyCharm 5.1 Beta 2, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote python interpreter is causing problems.
I've been trying to work out what the service name field is expecting (screenshot of the remote interpreter / Docker Compose window: http://i.stack.imgur.com/Vah7P.png).
My docker-compose.yml file is:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all the links - I have the new-user link limit.
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (the password in the code sample is 'screencast'):
$ ssh root@192.168.99.100 -p 2000
Note: be aware the IP and port might change depending on your docker and compose configs.
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html
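If the container's SSH port isn't published on a fixed host port, the mapping can be looked up before filling in PyCharm's dialog (a sketch; web is the service name from the question's compose file, and the machine name default is an assumption):
# which host port is mapped to the container's port 22?
docker-compose port web 22        # e.g. prints 0.0.0.0:32768
# when docker runs under docker-machine (as the 192.168.99.100 address above suggests), this gives the host IP
docker-machine ip default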