I have created a VM instance using a Docker image stored in Google Container Registry. The instance is working fine and I'm happy. What I am trying to figure out is where the source files are stored.
When I access the VM via SSH and navigate to /usr/src/app I don't see anything. In my Dockerfile I specified that directory as the app directory.
WORKDIR /usr/src/app
Where can I see the source code?
The source files are located inside the container in question, at the path you specified in your Dockerfile.
After you access the VM via SSH, you can run docker ps to get the list of containers running on your instance. You'll then be able to open a shell inside your running container with the following command:
docker exec -it <container name> /bin/bash
Once in there you should be able to see your source code in /usr/src/app.
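For example, a minimal sketch of the whole sequence (the container name my-app is hypothetical):
docker ps                            # note the container's NAME or ID
docker exec -it my-app /bin/bash     # open a shell inside that container
ls -la /usr/src/app                  # the files copied in at build time live under the WORKDIR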
So I have set up CI/CD using GitLab and am now able to:
1. Build the Docker image
2. Tag it properly
3. Push it to ECR
4. SSH to the EC2 instance
5. Pull the image to the EC2 instance
However, I still need to run the Docker image for the pipeline to be complete.
Right now, I am using the --env-file flag to specify the env file for that container, but I still have to create the env file manually on the EC2 instance first.
Is there a way for me to just copy the .env file I have in my repository to the EC2 instance and replace the existing one, so it gets updated from that file instead of me having to redo it every time there's a change?
You can use scp <path-to-env-file> <ec2-user>@<ec2-address>:<path-to-docker-directory>.
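For example, a hedged sketch assuming the default ec2-user account and a hypothetical app directory on the instance:
scp .env ec2-user@ec2-203-0-113-10.us-east-2.compute.amazonaws.com:/home/ec2-user/app/.env
docker run -d --env-file /home/ec2-user/app/.env <your-image>    # then (re)start the container on the instance against the copied file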
Another solution: you could also build an Ansible playbook to execute all of your steps. You would need to write your steps as Ansible tasks or roles, then target the correct host. For example, steps 1-3 would be executed locally, and steps 5-7 (where 6 is copying the .env file and 7 is starting the Docker container) would be executed remotely on the EC2 instance; a rough playbook sketch follows the link below.
More on this: https://www.redhat.com/en/topics/automation/what-is-an-ansible-playbook#:~:text=Ansible%20Playbooks%20are%20lists%20of,as%20which%20user%20executes%20it.
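A minimal sketch of the remote part only (the host group, paths, and image URI below are all hypothetical):
# deploy.yml
- hosts: ec2
  become: true
  tasks:
    - name: Copy the .env file from the repository to the instance
      copy:
        src: .env
        dest: /home/ec2-user/app/.env
    - name: Pull the latest image from ECR
      command: docker pull 123456789012.dkr.ecr.us-east-2.amazonaws.com/my-app:latest
    - name: Start the container with the copied env file (assumes any old container was removed first)
      command: docker run -d --name my-app --env-file /home/ec2-user/app/.env 123456789012.dkr.ecr.us-east-2.amazonaws.com/my-app:latest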
I have an EC2 instance on AWS.
I tried
SSH into that box
install Docker
Pull the Docker image from my repository URI:
docker pull bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
tag it
docker tag bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
I'm trying to build it, and I don't know what command I should use.
I've tried
docker build bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest .
I kept getting an error.
How would one go about debugging this further?
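(For context, a hedged aside: since the image was already built by CI/CD and has been pulled and tagged on the instance, the usual next step is docker run rather than docker build; a minimal sketch, assuming the API listens on port 80 and using a hypothetical container name:)
docker run -d --name bheng-api -p 80:80 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest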
I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in the Google Container Registry in the same project as the VM. However, the VM starts but does not pull and run the Docker image. I SSHed into the VM and docker images ls returns an empty list.
Pulling the image doesn't work.
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
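Once logged in this way, a plain docker pull against the full image path should work (the project and image names here are placeholders):
docker pull gcr.io/<your-project-id>/<your-image>:latest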
To pull the image from the gcr.io container registry you can use the gcloud sdk, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically your problem is that you are specifying neither the project you are working in nor the image you are trying to pull, unless (very unlikely) your project ID is literally project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, gets:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
If for some reason you don't have permission to install the Google Cloud SDK (which is very advisable to have when working with Google Cloud), you can see your uploaded images in the Google Cloud Console by navigating to Container Registry -> Images.
I am using Amazon ECS and launched an EC2 instance with the ECS-optimized Linux AMI. On that EC2 instance I have multiple Docker containers, each running a different application.
I want to create a folder on the host where I can add/edit anything and have it reflected in a shared folder inside the Docker container. I have tried the following command:
docker run --name {new_container_name} -d -p 80:80 -v /{source folder}:/{destination folder}:ro {container name}
I referred to this link.
Actual result: it creates a new container similar to {container name}.
But what I want is a way to create a shared folder with the currently running Docker container. How can I do that?
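(A hedged aside: a bind mount can only be attached when a container is created, so the usual approach is to stop and recreate the container with the -v flag; a rough sketch with hypothetical names and paths:)
docker stop my-app && docker rm my-app
docker run --name my-app -d -p 80:80 -v /home/ec2-user/shared:/usr/src/app/shared:ro my-image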
I want to mount a file from my host EC2 instance into a container running on ECS. Every change made to the file on the host should be reflected in the file inside the container.
What I have tried is as follows:
Dockerfile:
FROM nginx
COPY conf/default.conf /etc/nginx/conf.d/default.conf
VOLUME /etc/nginx/conf.d
RUN ln -sf conf/default.conf /etc/nginx/conf.d/
EXPOSE 80
Then I pushed the image to an ECR repo and created a task, adding a volume (source path: conf) and a mount point (/etc/nginx/conf.d), and created a service to run the container. However, the changes I make on the host server in /conf/default.conf do not show up in the container's /etc/nginx/conf.d/default.conf.
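(In ECS terms, that combination of a host volume and a mount point corresponds to a task-definition fragment roughly like the following; the names and paths are illustrative only:)
"volumes": [
  { "name": "conf", "host": { "sourcePath": "/conf" } }
],
"containerDefinitions": [
  {
    "name": "nginx",
    "mountPoints": [
      { "sourceVolume": "conf", "containerPath": "/etc/nginx/conf.d", "readOnly": false }
    ]
  }
]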
I know there is docker run -v, but since I'm using ECS, it runs the container itself through the service setup.
Any suggestion would be appreciated.
Just a suggestion: in your build system, copy the file to S3, and then in your Docker container, run a script on startup that copies the latest file back down from S3 and puts it in place. Then you won't have to rebuild/redeploy your Docker container constantly.
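A minimal sketch of such a startup script (the bucket name and paths are hypothetical; it assumes the container image has the AWS CLI installed and the task role can read the bucket):
#!/bin/sh
# entrypoint.sh: fetch the latest config from S3, then start nginx in the foreground
aws s3 cp s3://my-config-bucket/default.conf /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'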