AWS CloudShell unable to start Docker service - amazon-web-services

I have created an EKS cluster.
Now I'm trying to build a Docker image and push it to my private ECR, so I installed Docker using the following command:
amazon-linux-extras install docker
The installation succeeded, but when I tried to use Docker I got the following:
[cloudshell-user@ip-10-0-73-203 ~]$ docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
When I tried to start the Docker service I got:
[cloudshell-user@ip-10-0-73-203 ~]$ sudo systemctl start docker
Failed to get D-Bus connection: Operation not permitted
How can I solve this? Do I need to use another user?

Unfortunately this cannot be done (today). From the doc page:
Currently, the AWS CloudShell compute environment doesn't support Docker containers.
An alternative would be to run a full-fledged instance using Cloud9. Note that Cloud9 has a cost, as it is backed by an EC2 instance.
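Once you have an environment where the Docker daemon actually runs (e.g. a Cloud9 or plain EC2 instance), the build-and-push to a private ECR repository typically looks like the sketch below. The region, account ID, and repository name are placeholders you would replace with your own values.

```shell
# Authenticate Docker to the private ECR registry
# (region, account ID, and repo name are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the image
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```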

Related

AWS - are there any CI/CD tools for uploading server code to an EC2 instance?

I am hosting the server on an EC2 instance using a Docker container.
Every time I want to deploy a new version of the server code, I have to do the following:
1. use FileZilla to SFTP the source code from my local machine to the EC2 instance
2. ssh -i server.pem ec2-user@ec2-xxx-xxx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
3. run docker-compose down to stop the container
4. sudo tar -zxvf my-server.tgz -C server
5. run docker-compose up -d to start the container again
I'm just wondering if there are any ways to automate these steps using some CI/CD tool.

How to start docker in AWS EC2?

I have started an EC2 instance based on the Amazon Linux 2 AMI (HVM), SSD Volume Type. I want to install Docker on that instance. I ran the following commands:
sudo yum update -y
sudo yum install -y docker
sudo chkconfig docker on
chkconfig --list docker
I get following message in my putty session:
Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.
If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.
error reading information on service docker: No such file or directory
I think Docker got installed all right, but it is not starting, because in the PuTTY log I find:
Installed:
docker.x86_64 0:18.06.1ce-8.amzn2
When I gave the command
sudo chkconfig docker on
PuTTY told me:
Note: Forwarding request to 'systemctl enable docker.service'
So I even tried
sudo systemctl enable docker.service
Do I have to use some other AMI?
If you are using ECS, unless you have a reason to use a custom AMI, you should be using a supported ECS-optimized AMI. These AMIs are preconfigured with Docker and all other ECS requirements:
The Amazon ECS-optimized AMIs are preconfigured with these requirements and recommendations. We recommend that you use the Amazon ECS-optimized Amazon Linux 2 AMI for your container instances unless your application requires a specific operating system or a Docker version that is not yet available in that AMI.
See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html

How to open the GlassFish admin UI (console) in AWS Elastic Beanstalk installed with GlassFish 4.1 / Java 8?

I have deployed my WAR file on AWS Elastic Beanstalk (set up with GlassFish 4.1 / Java 1.8). I want to open the GlassFish admin UI in a browser.
Thanks in advance!
I am not sure it's possible to access the GlassFish console UI (at least I never got to that point so far, but it might be possible using Docker port forwarding ...).
What I do is the following:
SSH into the EC2 instance Elastic Beanstalk has provisioned
run sudo docker ps -a to find out which container is running on the instance
exec into the container: sudo docker exec -it <container id here> bash
this will log you into the container running GlassFish; from there you can run the asadmin command
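Put together, the steps above look roughly like this once you have SSHed into the instance (the container ID is whatever docker ps reports on your instance):

```shell
# List all containers on the Elastic Beanstalk instance,
# and note the ID of the GlassFish container
sudo docker ps -a

# Open a shell inside that container (replace CONTAINER_ID)
sudo docker exec -it CONTAINER_ID bash

# Inside the container, GlassFish's CLI is available, e.g.:
#   asadmin list-applications
```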

Restart ecs-agent from user-data

I mounted an EBS volume on an ECS-enabled instance in AWS.
For the EBS volume to be visible to Docker, the Docker daemon has to be restarted. I added the appropriate commands to the user data, but I am unable to restart the ecs-agent Docker container from the user data.
The following is my user data:
#!/bin/bash
echo ECS_CLUSTER=MYCLUSTER>> /etc/ecs/ecs.config
mkfs -t ext4 /dev/sdb
mkdir /db/
mount /dev/sdb /db/
service docker stop
service docker start
docker start ecs-agent
Over SSH, I can see that the ecs-agent container is created but not running. When I start the container manually, it works. What is the correct way to start it during instance launch? What am I missing in my user-data script?
I need to create a launch configuration for use in my auto-scaling group. Instances should have the EBS volume mounted and visible to Docker.
If you need to restart the Docker daemon, it seems likely that you're dealing with an existing EC2 instance. In that case, user data scripts won't help you because according to the EC2 User Guide they "only run during the first boot cycle when an instance is launched".
As for the correct way to start the ECS agent during instance launch, it depends on which distribution you're running. For Amazon Linux instances the ECS Developer Guide recommends the ecs-init package:
sudo yum install -y ecs-init
sudo service docker start
sudo start ecs
(If you put this in your user data scripts, do not use sudo.)
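Applied to the question's script, a user-data version of this approach (for a freshly launched Amazon Linux instance in a launch configuration, not an existing one) might look like the sketch below; the cluster name and device path are taken from the question.

```shell
#!/bin/bash
# Runs as root on first boot only
echo ECS_CLUSTER=MYCLUSTER >> /etc/ecs/ecs.config

# Format and mount the EBS volume
mkfs -t ext4 /dev/sdb
mkdir -p /db
mount /dev/sdb /db

# Restart Docker so it sees the new mount,
# then let ecs-init bring the agent back up
yum install -y ecs-init
service docker restart
start ecs
```

Using ecs-init's start ecs (rather than docker start ecs-agent) lets the init system supervise the agent, which is why it survives the Docker restart here.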

How to docker attach to a container - Google Cloud Platform / Kubernetes

I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers and then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine.
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers.
I created the pod.
I opened the firewall for the port specified in the pod config file.
At this point:
I look at the pod (gcloud preview container kubectl get pods); it shows both containers are running.
I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps, and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running; it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
Yes, you can attach to a container in a pod.
Using Kubernetes 1.0, issue the following commands:
kubectl get po to get the pod name
kubectl describe po POD-NAME to find the container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming you have bash)
It's similar to docker exec -it CONTAINER-NAME WHAT_EVER_LOCAL_COMMAND
On the machine itself, you can see crash-looping containers via:
docker ps -a
and then
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.
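Collected into one sequence, the diagnosis steps above look like this sketch (the pod and container names are whatever your cluster reports):

```shell
# Find the pod and the names of its containers
kubectl get po
kubectl describe po POD-NAME

# Open a shell in a specific container of the pod
kubectl exec -it POD-NAME -c CONTAINER-NAME bash

# Check restart counts to confirm a crash loop
kubectl get pods -o yaml | grep -i restartCount
```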