Installing CPhalcon on an AWS Docker image

I have a Dockerfile that installs Phalcon into a Docker image. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error: PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows the Phalcon extension as installed, but on AWS there is no mention of CPhalcon at all. How could the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?
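One way to narrow this down is to check, inside the running container in both environments, whether the extension is actually loaded and which ini files each SAPI reads. A minimal sketch, assuming you can shell into the container (the container name is a placeholder):
# Replace <container> with the name or ID from `docker ps`
docker exec -it <container> php -m | grep -i phalcon
docker exec -it <container> php --ini
docker exec -it <container> ls /etc/php5/apache2/conf.d /etc/php5/mods-available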

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
so it's copying all the files correctly. The thing is that when I SSH into the instance, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected; the yum installs, Python, Docker, etc. were all successful, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path and then look there, because as written we don't know which directory the script is using as its working directory.
Use the following command to copy to a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, run the pwd command and echo its output inside the script so you can see which working directory the downloads actually landed in.
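As a sketch of that suggestion, the download portion of the user-data script could pin everything to absolute paths and log where it ran; the destination directory below is an assumption, while the bucket names come from the question:
#!/bin/bash
# User data runs as root via cloud-init; log its working directory for reference
echo "user-data working directory: $(pwd)" >> /var/log/user-data-debug.log
# Download to absolute paths instead of the implicit working directory
mkdir -p /home/ec2-user/content/images /home/ec2-user/content/themes
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/
aws s3 cp s3://jv-pocho/backup.sql /home/ec2-user/
aws s3 cp s3://jv-pocho/dns-updater.sh /home/ec2-user/
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
aws s3 sync s3://jv-pocho/themes/ /home/ec2-user/content/themes
chown -R ec2-user:ec2-user /home/ec2-user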

Cloud SQL proxy not working from Docker container

My application runs in a Docker container and is deployed with Google Compute Engine instance groups with autoscaling enabled.
The problem I am facing is connecting to the MySQL instance from the auto-scaled compute instances; it is not working as expected.
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y software-properties-common && \
...install other extensions
RUN curl -sS https://getcomposer.org/installer | \
php -- --install-dir=/usr/bin/ --filename=composer
COPY . /var/www/html
CMD cd /var/www/html
RUN composer install
ADD nginx.conf/default /etc/nginx/sites-available/default
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN mkdir /cloudsql
RUN chmod 777 /cloudsql
RUN chmod 777 -R storage bootstrap/cache
EXPOSE 80
CMD service php7.1-fpm start && nginx -g "daemon off;" && ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &
The last line, ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &, is not executed when I run my container.
If I run ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json & inside the container (by attaching to it with a docker command) it works, but when I close the terminal it stops working again.
I even tried running it in the background, but no luck.
Does anyone have an idea?
This has been fixed by:
Creating a start.sh file and moving all of the commands into it.
Starting the SQL proxy first, then sleep 10, then starting nginx and PHP.
Now it works as expected. (The order matters because nginx -g "daemon off;" runs in the foreground, so anything chained after it in the original CMD could only run once nginx exited.)
Dockerfile
FROM ubuntu:16.04
...other command
ADD start.sh /
RUN chmod +x /start.sh
EXPOSE 80
CMD ["/start.sh"]
and this is start.sh file
# start.sh
#!/bin/sh
./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=<file>.json &
sleep 10
service php7.1-fpm start
nginx -g "daemon off;"
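A quick way to sanity-check this is to build and run the image and watch the container log, since the backgrounded proxy writes its startup output to the container's stdout/stderr; the image and container names below are placeholders:
docker build -t app-with-proxy .
docker run -d --name app -p 80:80 app-with-proxy
# Cloud SQL proxy startup messages appear in the container output
docker logs -f app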

sdkman does not install java in a dockerfile

I have this docker file:
# We are going to start from the jhipster image
FROM jhipster/jhipster
# install as root
USER root
### Setup docker cli (don't need docker daemon) ###
# Install some packages
RUN apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
# Add Docker's official GPG key:
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -"]
# Add a stable repository
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Setup aws credentials as environment variables
ENV AWS_ACCESS_KEY_ID "change it!"
ENV AWS_SECRET_ACCESS_KEY "change it!"
# noninteractive install for tzdata
ARG DEBIAN_FRONTEND=noninteractive
# set timezone for tzdata
ENV TZ=America/Sao_Paulo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install the latest version of Docker Engine - Community and also aws cli
RUN apt-get update && apt-get install docker-ce docker-ce-cli containerd.io awscli -y
# change back to default user
USER jhipster
# install sdk and java version 1.8
RUN curl -s "https://get.sdkman.io" | bash
RUN bash $HOME/.sdkman/bin/sdkman-init.sh
RUN bash -c "sdk install java 8.0.222.j9-adpt"
When I run a command to build an image from this Dockerfile, it fails on the last step with the message:
/bin/sh: 1: sdk: not found
When I install it on my local machine, sdkman (sdk) runs in bash, but in this script it is called from sh, not bash. How can I make it call sdkman (sdk) from sh? What I actually want to do is install a specific Java version through sdkman (sdk). Is there another way to do it?
For the sdk command to be available you need to run source sdkman-init.sh.
Here is a working sample with java 11 on centos.
FROM centos:latest
ARG CANDIDATE=java
ARG CANDIDATE_VERSION=11.0.6-open
ENV SDKMAN_DIR=/root/.sdkman
# update the image
RUN yum -y upgrade
# install requirements, install and configure sdkman
# see https://sdkman.io/usage for configuration options
RUN yum -y install curl ca-certificates zip unzip openssl which findutils && \
update-ca-trust && \
curl -s "https://get.sdkman.io" | bash && \
echo "sdkman_auto_answer=true" > $SDKMAN_DIR/etc/config && \
echo "sdkman_auto_selfupdate=false" >> $SDKMAN_DIR/etc/config
# Source sdkman to make the sdk command available and install candidate
RUN bash -c "source $SDKMAN_DIR/bin/sdkman-init.sh && sdk install $CANDIDATE $CANDIDATE_VERSION"
# Add candidate path to $PATH environment variable
ENV JAVA_HOME="$SDKMAN_DIR/candidates/java/current"
ENV PATH="$JAVA_HOME/bin:$PATH"
ENTRYPOINT ["/bin/bash", "-c", "source $SDKMAN_DIR/bin/sdkman-init.sh && \"$@\"", "-s"]
CMD ["sdk", "help"]
The problem is that every RUN command in a Dockerfile is executed in a new shell, so sourcing sdkman-init.sh and calling sdk have to happen within the same RUN instruction, like this:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt"

How to run Django Daphne service on Google Kubernetes Engine and Google Container Registry

Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install build-essential -y
WORKDIR /app
COPY . /app/
# Python
RUN apt-get install python3-pip -y
RUN python3 -m pip install virtualenv
RUN python3 -m virtualenv /env36
ENV VIRTUAL_ENV /env36
ENV PATH /env36/bin:$PATH
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Start Daphne [8443]
ENV DJANGO_SETTINGS_MODULE=settings
CMD daphne -e ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem asgi:application
# Open port 8443
EXPOSE 8443
Enable Google IP Alias so that we can connect to Google Memorystore/Redis.
Build & Push
$ docker build -t [GCR_NAME] -f path/to/Dockerfile .
$ docker tag [GCR_NAME] gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
$ docker push gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
Deploy to GKE
$ envsubst < k8s.yml > patched_k8s.yml
$ kubectl apply -f patched_k8s.yml
$ kubectl rollout status deployment/[GKE_WORKLOAD_NAME]
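For reference, a minimal sketch of the kind of k8s.yml the envsubst step above expects; every name and variable here is a placeholder mirroring the commands above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${GKE_WORKLOAD_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${GKE_WORKLOAD_NAME}
  template:
    metadata:
      labels:
        app: ${GKE_WORKLOAD_NAME}
    spec:
      containers:
      - name: daphne
        image: gcr.io/${GOOGLE_PROJECT_ID}/${GCR_NAME}:${TAG}
        ports:
        - containerPort: 8443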
This is how I configured Daphne on GKE/GCR. If you have other solutions, please give me your advice.
systemd is not included in the ubuntu:18.04 Docker image.
Add an ENTRYPOINT to your Dockerfile with the commands from the ExecStart property of project-daphne.service.
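For example, if project-daphne.service has an ExecStart equivalent to the Daphne command already used in the Dockerfile above, the ENTRYPOINT can run it directly; this is only a sketch that mirrors that CMD:
# Run Daphne directly instead of relying on a systemd unit
ENTRYPOINT ["daphne", "-e", "ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem", "asgi:application"]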

ECS Docker container won't start

I have a Docker container with this Dockerfile:
FROM node:8.1
RUN rm -fR /var/lib/apt/lists/*
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
RUN echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
RUN apt-get update
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | \
debconf-set-selections
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/
RUN npm install
# Bundle app source
COPY . /app
# Environment Variables
ENV PORT 8080
# start the SSH daemon service
RUN service ssh start
# create a non-root user & a home directory for them
RUN useradd --create-home --shell /bin/bash tunnel-user
# set their password
RUN echo 'tunnel-user:93wcBjsp' | chpasswd
# Copy the SSH key to authorized_keys
COPY tunnel.pub /app/
RUN mkdir -p /home/tunnel-user/.ssh
RUN cat tunnel.pub >> /home/tunnel-user/.ssh/authorized_keys
# Set permissions
RUN chown -R tunnel-user:tunnel-user /home/tunnel-user/.ssh
RUN chmod 0700 /home/tunnel-user/.ssh
RUN chmod 0600 /home/tunnel-user/.ssh/authorized_keys
# allow the tunnel-user to SSH into this machine
RUN echo 'AllowUsers tunnel-user' >> /etc/ssh/sshd_config
EXPOSE 8080
EXPOSE 22
CMD [ "npm", "start" ]
My ECS task has this definition. I'm using a role which has AmazonEC2ContainerServiceforEC2Role.
When I try to start it as a task in my ECS cluster I get this error:
CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-ssh-4-ssh-8cc68dbfaa8edbdc0500 (387e024a87752293f51e5b62de9e2b26102d735e8da500c8e7fa5e1b4b4f0983): Error starting userland proxy: listen tcp 0.0.0
How do I fix this?
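The "Error starting userland proxy: listen tcp ..." message usually means Docker could not bind one of the task's host ports because something on the container instance is already listening on it; for port 22 that is typically the instance's own sshd. A quick check to run on the ECS container instance over SSH (a sketch; the ports match the EXPOSE lines above):
# See what is already listening on the host ports the task maps
sudo ss -tlnp | grep -E ':(22|8080) '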