I'm using Elastic Beanstalk to deploy my application as a Single Docker Application.
My Dockerfile runs composer install during deployment, but I get a "Could not authenticate against github.com" error.
I use these lines in my Dockerfile to install my dependencies:
WORKDIR /www
RUN ["composer", "install", "-o"]
How would I solve this issue?
I think you need to configure Composer inside your container with your key (or a token). Remember that inside the container you're basically on another OS, so your public keys, SSH config, etc. are not available there.
I'd also install Composer itself from the official installer rather than from git (as you don't have keys).
Try this:
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
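If the authentication error comes from Composer hitting GitHub's API rate limit (or needing access to private repositories), you can also register a GitHub OAuth token with Composer during the build. This is only a sketch: GITHUB_TOKEN is a made-up build argument that you would pass in yourself with docker build --build-arg GITHUB_TOKEN=... .
# hypothetical build argument holding a GitHub personal access token
ARG GITHUB_TOKEN
# let Composer authenticate against github.com with that token
RUN composer config -g github-oauth.github.com "$GITHUB_TOKEN"
WORKDIR /www
RUN ["composer", "install", "-o"]
Keep in mind that build arguments are visible in the image history, so treat this as a convenience for your own builds rather than something to publish.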
Currently, I'm trying to access a simple Django application that I created in an Azure Virtual Machine. As the application is still simple, I only want to reach the "The install worked successfully! Congratulations!" page from my local machine by visiting http://VM_IP:PORT/. I was able to do just that, but when I tried to Dockerize the project and access the running container from my local machine, it didn't work.
I've already done some setup in the Azure portal so that the Virtual Machine listens on a specific port, in this case 8080 (so http://VM_IP:8080/). I'm quite new to Docker, so I'm assuming something is missing in the Dockerfile I've created for the project.
Dockerfile
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8080
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
python3-setuptools \
python3-pip \
python3-dev \
python3-venv \
git \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn simple_project.wsgi:application --bind 0.0.0.0:$PORT
I'm not sure what's happening. It would be very much appreciated if someone could point out what I'm missing. Thanks in advance.
I suspect the problem may be that you're confusing the EXPOSE build-time instruction with the publish runtime flag. Without the latter, any containers on your VM would be inaccessible to the host machine.
Some background:
The EXPOSE instruction is best thought of as documentation; it has no effect on container networking. From the docs:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
Seems kind of odd at first glance, but it's for good reason: the image itself does not have the permissions to declare host port-mappings — that's up to the container runtime and whoever is operating it (you!).
The way to do this is by passing the --publish or -p flag to the docker run command, allowing you to define a mapping between the port open in the container network and a port on the host machine. So, for example, if I wanted to run an nginx container that can be accessed at port 8080 on my localhost, I'd run: docker run --rm -d -p 8080:80 nginx. The running container is then accessible at localhost:8080 on the host. Of course, you can also use this to expose container ports from one host to another. Without this, any networking configuration in your Dockerfile is executed in the context of the container network, and is basically inaccessible to the host.
TL;DR: you probably just need to publish your ports when you create and run the container on your VM: docker run -p {vm_host_port}:{container_port} {image_name}. Note that port mappings cannot be added or changed for existing containers; you'd have to destroy the container and recreate it.
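For this particular app, that would look something like the sketch below; simple_project is just a placeholder for whatever you tag the image as, and 8080 on the right-hand side matches the $PORT that gunicorn binds to inside the container.
# build the image on the VM (the tag is a placeholder)
docker build -t simple_project .
# publish container port 8080 (gunicorn's $PORT) on host port 8080
docker run -d -p 8080:8080 simple_project
After that, http://VM_IP:8080/ should reach the container, assuming the Azure-side port rules from the question are in place.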
Side note: while docker run is quick and easy, it can quickly become unmanageable as your project grows and you add in environment variables, attach volumes, define inter-container dependencies, etc. An alternative with native support is docker-compose, which lets you define the runtime configuration of your container (or containers) declaratively in YAML — basically picking up where the Dockerfile leaves off. And once it's set up, you can just run docker-compose up, instead of having to type out lengthy docker commands, and waste time debugging when you forgot to include a flag, etc. Just like we use Dockerfiles to have a declarative, version-controlled description of how to build our image, I like to think of docker-compose as one way to create a declarative, version-controlled description for how to run and orchestrate our image(s).
Hope this helps!
When running a bash script during CodeBuild, I get this error:
./scripts/test.sh: line 95: docker: command not found
However, I've made sure to install docker at the start of the script using:
curl -sSL https://get.docker.com/ | sh
apt-get install -y docker-ce docker-compose
But this results in the following error:
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'docker-ce' has no installation candidate
Any ideas on how to get docker working during CodeBuild?
There are a few different options for this in CodeBuild:
You can use CodeBuild-provided images, which already have Docker installed on them. To use any of these images, select privileged mode when creating the CodeBuild project.
You can enable Docker in a custom image (an image not managed by CodeBuild, e.g. one hosted in your ECR repo or on public Docker Hub) when configuring the CodeBuild project. Select privileged mode in your project settings. Instructions here: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
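If you'd rather flip the setting from the CLI than from the console, something along these lines should work for an existing project; the project name, image, and compute type below are only examples, so substitute your project's own values:
# enable privileged mode so the build container can run the Docker daemon
aws codebuild update-project \
  --name my-build-project \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true
Note that --environment replaces the whole environment block, so include the image and compute type your project already uses.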
I'm not sure whether this is related to Docker, Elastic Beanstalk, or the Docker image, but my problem is that I'm running the command eb local run to start the local environment alongside Docker.
Expected behavior
The command runs seamlessly
Actual behavior
ERROR: DockerVersionError - Your local host has the 'docker-py' version 1.10.6 Python package installed on it.
When you run a Multicontainer Docker application locally, the EB CLI requires the 'docker' Python package.
To fix this error:
Be sure that no applications on your local host require 'docker-py', and then run this command:
pip uninstall docker-py
The EB CLI will install 'docker' the next time you run it.
$ eb --version : EB CLI 3.12.2 (Python 2.7.1)
$ docker -v : Docker version 17.12.0-ce, build c97c6d6
If you want to launch Multicontainer Docker environments using eb local run, you need to have docker-py uninstalled and docker installed.
As the error message indicates:
perform pip uninstall docker-py (if you don't need it).
run pip install "docker>=2.6.0,<2.7" immediately after.
docker and docker-py cannot coexist. The docker-py release notes highlight the change in the package name and allude to the breakage the rename caused.
Not to be confused with Docker the engine/client, docker-py/docker is a Python wrapper around the Docker client, which the EB CLI relies on.
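Put together, the fix on the machine running the EB CLI looks roughly like this (the version range is the one from the steps above; adjust it if your EB CLI expects a newer release):
# remove the conflicting package, then install the renamed one
pip uninstall -y docker-py
pip install "docker>=2.6.0,<2.7"
# confirm which package is now installed
pip show docker
# try the Multicontainer Docker environment again
eb local run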
A Meteor app is running on the local machine. Then it gets built (appDir$ meteor build .) and the resulting myApp.tar.gz gets copied to the AWS cloud. Then a script runs on the cloud to put the app into a Docker container, following some Dockerfile commands.
Every time a change is made, all of the above has to be repeated. Is there a better way to reduce the effort of re-building/copying/dockerizing?
Is it possible, by using a volume and docker-compose, to just sync the changes from the local development machine to the AWS EC2 volume directory? How?
# Dockerfile on AWS EC2
FROM lambdalinux/baseimage-amzn:2016.09-000
RUN curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
RUN yum install -y tar nodejs
ADD ./myApp.tar.gz /opt/
EXPOSE 80
ENV ROOT_URL http://example.com
ENV MONGO_URL "mongodb://username:pass..."
ENV PORT 80
# Install nodejs modules
WORKDIR /opt/bundle/
RUN npm install fibers
RUN npm install underscore
RUN npm install source-map-support
RUN npm install semver
# Start the app
CMD node ./main.js
There is a command called rsync that will do a smart sync of a whole directory structure - if you unpacked the build locally you could then rsync it up to the server.
It can use either file dates or checksums to work out what has changed, and will make the process quicker. Minified files will probably change every time, but certainly many assets won't change every time.
I would set it up with a mirror of your production directory, sync the files into there, do some (automated) sanity checks first, and then switch the new version into place. If it doesn't work you can switch the old version back. There is a little work required to get this set up, but it will make deployment faster/easier
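As a rough sketch (the paths, user, and host below are placeholders), after unpacking the build locally you could push only the changed files to a staging directory on the server:
# unpack the build locally so rsync can compare individual files
mkdir -p build && tar -xzf myApp.tar.gz -C build
# sync only what changed to a staging directory on the EC2 instance
rsync -az --delete build/ user@your-ec2-host:/opt/myApp-staging/
From there you can run your sanity checks against the staging copy and switch it into place, as described above.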
As a starting point for making my own app that uses meanjs, I went to the meanjs website and used their Yeoman generator to create the template/sample app. Following the instructions, getting the sample application running out of the box on my local desktop machine worked within minutes. To complete the exercise, I tried to deploy the sample app to an AWS/EC2 instance before making any changes to it. I have used the command-line deployment tools in the past and liked them. It is also nice that you can now just select an EC2 Linux instance with node and npm already installed and ready.
After checking the sample into git, I run "git aws.push" to deploy the app.
The problem is in the package.json the line:
"postinstall": "bower install --config.interactive=false"
In the eb-activity.log:
npm WARN cannot run in wd meansample#0.0.1 bower install --config.interactive=false (wd=/tmp/deployment/application)
The result is that AngularJS ends up not getting installed in /public/lib.
The first thing I tried was giving the full path in the package.json file: node_modules/bower/bin/bower. This didn't help and resulted in the same error. Note also that other commands like "grunt" don't need the full path specified in the package.json, and they work.
I don't understand enough of the black-box magic that aws.push does to understand why this error is happening. For example, what user does it run as? What permissions does that user have? What options, if any, does it use when it runs npm install?
I did figure out a work-around, but it adds a lot of extra steps that shouldn't be required if aws.push was able to run bower install directly. Basically I can manually run the bower install in the ssh client connected to my EC2 instance, set the owner/group on the installed files, and restart the server.
Work-around steps:
1) On the local command prompt, run git aws.push. Wait for the unsuccessful deployment to finish.
2) Connect an ssh client to the EC2 instance. From the command prompt:
cd /var/app/current
/* NOTE: if I don't use sudo, the ec2-user I am logged in as does not have permission to create /public/lib, which AngularJS needs to be installed into */
sudo node_modules/bower/bin/bower install --config.interactive=false --allow-root
/* NOTE: just changing the owner and group to match the same as the other files that aws.push deployed */
sudo chown -R nodejs public/lib
sudo chgrp -R nodejs public/lib
From AWS dashboard, select the correct EC2 instance, Action = Restart App Server(s)
Now AngularJS is installed and the sample app works.
How do I eliminate the extra steps and make it so aws.push can do the bower install successfully?
I have experienced the same problem when trying to publish my nodejs app on a private server running CentOS as the root user. The same error is thrown by "postinstall": "./node_modules/bower/bin/bower install" in my package.json file, so the only solution that worked for me was to use both of the following options to avoid the error:
1: use --allow-root option for bower install command
"postinstall": "./node_modules/bower/bin/bower --allow-root install"
2: use --unsafe-perm option for npm install command
npm install --unsafe-perm
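For reference, this is roughly how the two options look when run by hand in the project directory (the bower path is the one from the question):
# let npm run lifecycle scripts such as postinstall even as root
npm install --unsafe-perm
# or call bower directly, allowing it to run as root
./node_modules/bower/bin/bower install --allow-root --config.interactive=false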