I am trying to set the current context of the cluster using a shell script.
#!/usr/bin/env bash
#Set Parameters
echo ${cluster_arn}
sh "aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster"
export k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"
However, the above command gives error:
sh: 0: Can't open aws eks update-kubeconfig --name abc-eks-cluster --role-arn arn:aws:iam::4399999999873:role/abc-eks-cluster-cluster-admin --alias abc-eks-cluster
Can someone suggest what the issue is and how I can set the current context? The k8s_host command yields another error that the context is not set.
I have no idea why you are even involving sh as a subshell, when you're already in a shell script, but
Can someone suggest me ... what is the issue
You have provided the entire command to sh but failed to use the -c flag that tells it the first argument is an inline shell snippet; without it, sh expects the first non-option argument to be a file, which is why it says it cannot open a file with that horrifically long name.
There are two ways out of this mess: use -c, or stop trying to use a subshell.
sh -c "aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster"
or:
aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster
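Once update-kubeconfig succeeds, the entry created with --alias is also set as the current context, so the kubectl config view pipeline should work. A quick sanity check (assuming the alias from the question):
kubectl config current-context    # should print abc-eks-cluster
kubectl config use-context abc-eks-cluster    # or switch to it explicitly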
I was using the following command to import the inventory from AWS, and it worked well:
ansible-inventory -i /etc/ansible/inventory/ec2.py --list -y > $some_dic
Now I want to use specific AWS credentials, so I modified the command as follows:
/etc/ansible/inventory/ec2.py --list --profile my-profile
which works fine.
However, when I put it all together it doesn't work:
ansible-inventory -i /etc/ansible/inventory/ec2.py --list --profile my-profile -y > $some_dic
error:
ansible-inventory: error: unrecognized arguments: --profile
any ideas on this issue?
The ansible-inventory command tries to parse all of the options itself, including --profile, which it doesn't support.
Running /etc/ansible/inventory/ec2.py --list --profile my-profile executes ec2.py directly with the --profile option, but when the same ec2.py is passed to ansible-inventory via the -i option, the file is just an inventory source; ansible-inventory does not forward extra options to it.
Though I haven't tried it myself, you can try setting AWS_PROFILE and then running the command, similar to what is described here.
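For example, a sketch assuming the boto-based ec2.py picks up AWS_PROFILE from the environment:
AWS_PROFILE=my-profile ansible-inventory -i /etc/ansible/inventory/ec2.py --list -y > $some_dic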
Also have a look at the documentation for the available options of ansible-inventory.
I'm trying to make an ECS deploy via Travis using this script: https://github.com/silinternational/ecs-deploy. Basically, this is the script that I'm using:
# login AWS ECR
eval $(aws ecr get-login --region us-east-1)
# build the docker image and push to an image repository
npm run docker:staging:build
docker tag jobboard.api-staging:latest $IMAGE_REPO_URL:latest
docker push $IMAGE_REPO_URL:latest
# update an AWS ECS service with the new image
ecs-deploy -c $CLUSTER_NAME -n $SERVICE_NAME -i $IMAGE_REPO_URL:latest -t 3000 --verbose
This is the error I'm getting, along with the verbose output:
+RUNNING_TASKS=arn:aws:ecs:eu-west-1:xxxxxxx:task/yyyyyy
+[[ ! -z arn:aws:ecs:eu-west-1:xxxxxxx:task/yyyyy ]]
++/home/travis/.local/bin/aws --output json ecs --region eu-west-1 describe-tasks --cluster jobboard-api-cluster-staging --tasks arn:aws:ecs:eu-west-1:xxxxxxx:task/yyyyyy
++grep -e RUNNING
++jq '.tasks[]| if .taskDefinitionArn == "arn:aws:ecs:eu-west-1:xxxxxxx:task-definition/jobboard-api-task-staging:7" then . else empty end|.lastStatus'
+i=2610
+'[' 2610 -lt 3000 ']'
++/home/travis/.local/bin/aws --output json ecs --region eu-west-1 list-tasks --cluster jobboard-api-cluster-staging --service-name jobboard-api-service-staging --desired-status RUNNING
++jq -r '.taskArns[]'
ERROR: New task definition not running within 3000 seconds
I don't know if it could be related, but I'm also getting this warning from Python:
/home/travis/.local/lib/python2.7/site-packages/urllib3/util/ssl_.py:369: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
The problem is that the first deploy works correctly, but from the second attempt onwards it starts to fail. Do I need to make some field dynamic to avoid this?
What do you think the problem is?
Thanks
I usually connect to EC2 container instances following these steps: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance-connect.html. I am wondering how I can connect to FARGATE-managed container instances instead.
Looking at this issue on GitHub, https://github.com/aws/amazon-ecs-cli/issues/143, I think it's not possible to docker exec from a remote host into a container on ECS Fargate. You can try running an SSH daemon and your main process in one container, e.g. using systemd (https://docs.docker.com/config/containers/multi-service_container/), and connecting to your container over SSH, but generally that's not a good idea in the container world.
Starting from mid-March 2021, executing a command in an ECS container is possible even when the container runs on AWS Fargate. Check the blog post Using Amazon ECS Exec to access your containers on AWS Fargate and Amazon EC2.
Quick checklist:
Enable command execution in the service.
Make sure to use the latest platform version in the service.
Add ssmmessages:.. permissions to the task role (see the policy sketch after this list).
Force new deployment for the service to run tasks with command execution enabled.
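As a sketch, those permissions can be attached as an inline policy with the AWS CLI (the role and policy names below are placeholders; the four ssmmessages actions are the ones ECS Exec relies on):
aws iam put-role-policy \
--role-name my-task-role \
--policy-name ecs-exec-ssm \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*"
}]
}'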
AWS CLI command to run bash inside the container:
aws ecs execute-command \
--region eu-west-1 \
--cluster [cluster-name] \
--task [task id, for example 0f9de17a6465404e8b1b2356dc13c2f8] \
--container [container name from the task definition] \
--command "/bin/bash" \
--interactive
The setup explained above should allow you to run the /bin/bash command and get an interactive shell into the container running on AWS Fargate. Please check the documentation Using Amazon ECS Exec for debugging for more details.
It is possible, but not straightforward.
In short: install SSH, don't expose the SSH port outside the VPC, add a bastion host, and SSH in through the bastion.
A little more detail:
Spin up sshd with key-based (password-less) authentication. Docker instructions
Fargate task: expose port 22
Configure your VPC, instructions
Create an EC2 bastion host
From there, SSH into your task's private IP address (see the sketch below)
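A sketch of the final hop with OpenSSH's ProxyJump (the bastion host name, users, and task IP below are placeholders):
ssh -J ec2-user@bastion-public-dns root@10.0.1.23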
Enable execute command on the service.
aws ecs update-service --cluster <Cluster> --service <Service> --enable-execute-command
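Only tasks launched after this change have ECS Exec enabled, so the service may need to be rolled once, for example:
aws ecs update-service --cluster <Cluster> --service <Service> --force-new-deployment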
Connect to the Fargate task.
aws ecs execute-command --cluster <Cluster> \
--task <taskId> \
--container <ContainerName> \
--interactive \
--command "/bin/sh"
Ref - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html
Here is an example of adding SSH/sshd to your container to gain direct access:
# Dockerfile
FROM alpine:latest
RUN apk update && apk add --no-cache \
    openssh
COPY sshd_config /etc/ssh/sshd_config
RUN mkdir -p /root/.ssh/
COPY authorized-keys/*.pub /root/.ssh/authorized-keys/
RUN cat /root/.ssh/authorized-keys/*.pub > /root/.ssh/authorized_keys
RUN chown -R root:root /root/.ssh && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
RUN ln -s /usr/local/bin/docker-entrypoint.sh /
# We have to set a password to be let in for root - MAKE THIS STRONG.
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
# docker-entrypoint.sh
#!/bin/sh
if [ "$SSH_ENABLED" = true ]; then
if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
# generate fresh rsa key
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
fi
if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
# generate fresh dsa key
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
fi
#prepare run dir
if [ ! -d "/var/run/sshd" ]; then
mkdir -p /var/run/sshd
fi
/usr/sbin/sshd
env | grep '_\|PATH' | awk '{print "export " $0}' >> /root/.profile
fi
exec "$#"
More details here: https://github.com/jenfi-eng/sshd-docker
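A quick local smoke test of this image (the tag and host port are arbitrary; the trailing tail command keeps the container alive, since the entrypoint execs its arguments):
docker build -t sshd-test .
docker run -d -p 2222:22 -e SSH_ENABLED=true sshd-test tail -f /dev/null
ssh -p 2222 root@localhost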
When I generate the docker login command for my AWS ECR with the following command:
aws ecr get-login --region us-east-2
I get an output like:
docker login -u AWS -p [bigbass] -e none https://xxxx.dkr.ecr.us-east-2.amazonaws.com
The problem is the -e flag that throws an error:
unknown shorthand flag: 'e' in -e
See 'docker login --help'.
I first thought that the problem was a misconfigured aws configure, as I was using none as the "Default output format" option. But even after I fixed the format option inside aws configure, it still happens.
They changed their CLI not so long ago. It now looks like this:
get-login
[--registry-ids <value> [<value>...]]
[--include-email | --no-include-email]
So simply replace -e none with --no-include-email.
See the corresponding documentation here.
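For example, the login one-liner for AWS CLI v1 becomes:
eval $(aws ecr get-login --no-include-email --region us-east-2)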
I saw the following code at the beginning of the userdata script. What is it for?
if (${PROXY}) {
&"cfn-init.exe" -v -s ${STACK_ID} -r ${RESOURCE} --region ${REGION} --http-proxy=${PROXY} --https-proxy=${PROXY}
} else {
write-host "Unable to determine Proxy setting"
&"cfn-init.exe" -v -s ${STACK_ID} -r ${RESOURCE} --region ${REGION}
}
"cfn-init.exe" is a utility from cloudformation to execute instance initialization provisioning defined in cloudformation template .
This ensures the server initialization configurations defined under "AWS::CloudFormation::Init" section inside the cloud formation template is executed.
Please check the following documentation for reference
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
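For comparison, on a Linux instance the same pattern is typically a short bash user data snippet (a sketch; the stack name, resource logical ID, and region below are placeholders):
#!/usr/bin/env bash
# Apply the configuration defined under AWS::CloudFormation::Init
# for the logical resource "WebServerInstance" of stack "my-stack"
/opt/aws/bin/cfn-init -v \
--stack my-stack \
--resource WebServerInstance \
--region us-east-1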