I have a Fargate container which runs a PHP program.
What I want to do is exec a command in the container,
equivalent to this docker command run locally:
docker exec -it mycontainer /usr/bin/php mycommand
So, I want to trigger this from EventBridge (like cron).
Is it possible to do this?
In short:
It is possible to execute a command inside the container with the AWS CLI:
aws ecs execute-command \
--region $AWS_REGION \
--cluster ecs-exec-demo-cluster \
--task ef6260ed8aab49cf926667ab0c52c313 \
--container nginx \
--command "/bin/bash" \
--interactive
Most features that can be run via the CLI can also be run using the AWS SDK in any supported programming language.
Any program written in a supported language can be deployed as an AWS Lambda function.
Any Lambda can be invoked by EventBridge.
So the answer is yes.
The details depend on how you plan to implement the code that will be triggered by EventBridge.
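For example, the scheduled job (whether it is a Lambda calling the SDK or a small script) would issue essentially the same call as above, just with your PHP command instead of /bin/bash. A minimal sketch, assuming the cluster, task and container values are replaced with your own:
# Hedged sketch: cluster, task and container values are placeholders for your setup
aws ecs execute-command \
--region "$AWS_REGION" \
--cluster my-cluster \
--task "$TASK_ID" \
--container mycontainer \
--interactive \
--command "/usr/bin/php mycommand"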
I am using CircleCI for my CI/CD along with CodeDeploy. I would like to run an ecs run-task command and have the task complete before moving on to the more intricate deployment stages, which we use CodeDeploy for and trigger through the CircleCI config. In a previous version of the aws cli the --wait flag was an option for this, but it is not an option in aws cli version 2+. Are there any other simple alternatives that people are using to get around this?
Adding my solution here thanks to Mark B's response.
TASK_ID=$(aws ecs run-task \
--profile staging \
--cluster <cluster-name> \
--task-definition <task-definition> \
--count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[$SUBNET_ONE_STAGING, $SUBNET_TWO_STAGING, $SUBNET_THREE_STAGING],securityGroups=[$SECURITY_GROUP_IDS_STAGING],assignPublicIp=ENABLED}" \
--overrides '{"containerOverrides":[{"name": "my-app","command": ["/bin/sh", "-c", "bundle exec rake db:migrate && bundle exec rake after_party:run"]}]}' \
| jq -r '.tasks[0].taskArn') \
&& aws ecs wait tasks-stopped --cluster <cluster-name> --tasks ${TASK_ID}
You would use the aws ecs wait capability in the CLI. Note that this is the same in version 1 of the CLI and version 2; there was never a --wait for ECS tasks in the core AWS CLI as far as I'm aware.
Specifically, after starting the task and getting the task ID returned from the run-task command, you would use aws ecs wait tasks-stopped --tasks <task-id> to wait for the task to be done/stopped.
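Note that tasks-stopped only waits for the task to stop; it does not by itself fail your pipeline when the container exits with an error. A minimal sketch of checking the exit code afterwards (the container name "my-app" is taken from the overrides above; adjust to your task definition):
# Inspect the stopped task and fail the step if the container exited non-zero
EXIT_CODE=$(aws ecs describe-tasks \
--cluster <cluster-name> \
--tasks ${TASK_ID} \
| jq -r '.tasks[0].containers[] | select(.name=="my-app") | .exitCode')
[ "$EXIT_CODE" = "0" ] || exit 1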
I am currently facing issues with deploying my Docker image on AWS. I managed to push my image into an Elastic Container Registry repository. I created an Elastic Container Service Cluster with a Task. Everything seems fine so far.
However, it does not start as I expect. I noticed that locally my Docker image must be executed with the "-it" argument (interactive shell).
Can you tell me how to enable such an "-it" parameter?
Thanks!
You can set the 'initProcessEnabled' parameter to true in the container definition. This will allow you to access the running container.
The following doc might help:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_linuxparameters
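For example, a minimal fragment of a container definition with this flag set (the container name and image below are placeholders, not taken from your task definition):
{
    "name": "my-app",
    "image": "my-image",
    "linuxParameters": {
        "initProcessEnabled": true
    }
}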
Once this parameter is set to true, you can access the running container using the CLI command below.
aws ecs execute-command --cluster <cluster-name> \
--region <aws-region> \
--task <task-id> \
--container <container-name> \
--interactive \
--command "/bin/sh"
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html
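Also worth noting from the ECS Exec doc linked above: the task itself has to be started with execute command enabled, otherwise the CLI call above is rejected. A minimal sketch, assuming a Fargate task started with run-task (placeholders as before; network configuration omitted for brevity):
# ECS Exec requires the task to be launched with --enable-execute-command
aws ecs run-task --cluster <cluster-name> \
--task-definition <task-definition> \
--launch-type FARGATE \
--enable-execute-command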
Question:
How can I install the AWS CLI from WITHIN the ECS task?
DESCRIPTION:
I'm using a docker container to run the logstash application (it is part of the elastic family).
The docker image name is "docker.elastic.co/logstash/logstash:7.10.2"
This logstash application needs to write to S3, thus it needs AWS CLI installed.
If aws is not installed, it crashes.
# STEP 1 #
To avoid crashing, when I used this application only as a Docker container, I ran it in a way that delayed the logstash start until after the Docker container was started.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts logstash.
This is how it looks in the docker-entrypoint file:
# delay startup so the container can be prepared before logstash starts
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
  exec logstash "$@"
else
  exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it will use my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install and configure the AWS CLI from the server hosting the Docker container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me,
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (meaning not running it on the EC2 server hosting the ECS cluster)
When I was working with the plain Docker container I also considered these options for using the AWS CLI:
Find an official Elastic Docker image containing both logstash and the AWS CLI. <-- I did not find one.
Create such an image myself and use it. <-- I prefer not to, because I want to avoid the maintenance of building new custom images when needed (e.g. when a new version of the logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task (logstash plus an awscli container from the image "amazon/aws-cli"), with the logstash container using the aws cli container, is not working.
THANKS A LOT IN ADVANCE :-)
Your option #2, create the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the logstash docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK to connect to S3.
The only thing required is to allow the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy)
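For reference, a minimal sketch of granting that access from the CLI (the role name is a placeholder for whatever role your task definition actually references):
# Attach the S3 policy to the role used by the ECS task
aws iam attach-role-policy \
--role-name <ecs-task-role-name> \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess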
I created a cluster on Dataproc and it works great. However, after the cluster is idle for a while (~90 min), the master node will automatically stop. This happens to every cluster I created. I see there is a similar question here: Keep running Dataproc Master node
It looks like it's an initialization action problem. However, the post does not give me enough info to fix the issue. Below are the commands I used to create the cluster:
gcloud dataproc clusters create $CLUSTER_NAME \
--project $PROJECT \
--bucket $BUCKET \
--region $REGION \
--zone $ZONE \
--master-machine-type $MASTER_MACHINE_TYPE \
--master-boot-disk-size $MASTER_DISK_SIZE \
--worker-boot-disk-size $WORKER_DISK_SIZE \
--num-workers=$NUM_WORKERS \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh \
--metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
--metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
--scopes cloud-platform \
--metadata JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn \
--optional-components=ANACONDA,JUPYTER \
--image-version=1.3
I need the BigQuery connector, GCS connector, Jupyter and DataLab for my cluster.
How can I keep my master node running? Thank you.
As summarized in the comment thread, this is indeed caused by Datalab's auto-shutdown feature. There are a couple ways to change this behavior:
Upon first creating the Datalab-enabled Dataproc cluster, log in to Datalab and click on the "Idle timeout in about ..." text to disable it: https://cloud.google.com/datalab/docs/concepts/auto-shutdown#disabling_the_auto_shutdown_timer - The text will change to "Idle timeout is disabled"
Edit the initialization action to set the environment variable as suggested by yelsayed:
function run_datalab(){
  if docker run -d --restart always --net=host -e "DATALAB_DISABLE_IDLE_TIMEOUT_PROCESS=true" \
    -v "${DATALAB_DIR}:/content/datalab" ${VOLUME_FLAGS} datalab-pyspark; then
    echo 'Cloud Datalab Jupyter server successfully deployed.'
  else
    err 'Failed to run Cloud Datalab'
  fi
}
And use your custom initialization action instead of the stock gs://dataproc-initialization-actions one. It could be worth filing a tracking issue in the github repo for dataproc initialization actions too, suggesting to disable the timeout by default or provide an easy metadata-based option. It's probably true that the auto-shutdown behavior isn't as expected in default usage on a Dataproc cluster since the master is also performing roles other than running the Datalab service.
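A minimal sketch of swapping in the customized script, assuming you stage it in a bucket of your own (the bucket path is a placeholder):
# Upload the edited datalab.sh to your own bucket
gsutil cp datalab.sh gs://<your-bucket>/init-actions/datalab.sh
# Then replace the --initialization-actions flag in the original create command with:
# --initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://<your-bucket>/init-actions/datalab.sh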
Once you've created a task definition in Amazon's EC2 Container Service, how do you delete or remove it?
It's a known issue. Once you de-register a Task Definition it goes into INACTIVE state and clutters up the ECS Console.
If you want to vote for it to be fixed, there is an issue on Github. Simply give it a thumbs up, and it will raise the priority of the request.
I've recently found this gist (thanks a lot to the creator for sharing!) which will deregister all task definitions for your specific region - maybe you can adapt it to skip some which you want to keep: https://gist.github.com/jen20/e1c25426cc0a4a9b53cbb3560a3f02d1
You need to have jq to run it:
brew install jq
I "hard-coded" my region, for me it's eu-central-1, so be sure to adapt it for your use-case:
#!/usr/bin/env bash
get_task_definition_arns() {
aws ecs list-task-definitions --region eu-central-1 \
| jq -M -r '.taskDefinitionArns | .[]'
}
delete_task_definition() {
local arn=$1
aws ecs deregister-task-definition \
--region eu-central-1 \
--task-definition "${arn}" > /dev/null
}
for arn in $(get_task_definition_arns)
do
echo "Deregistering ${arn}..."
delete_task_definition "${arn}"
done
Then when I run it, it starts removing them:
Deregistering arn:aws:ecs:REGION:YOUR_ACCOUNT_ID:task-definition/NAME:REVISION...
One-liner approach inspired by Anna A's reply:
aws ecs list-task-definitions --region eu-central-1 \
| jq -M -r '.taskDefinitionArns | .[]' \
| xargs -I {} aws ecs deregister-task-definition \
--region eu-central-1 \
--task-definition {} \
| jq -r '.taskDefinition.taskDefinitionArn'
There is no option to delete a task definition on the AWS console.
But you can deregister (delete) a task definition by executing the following command once for each revision that you have:
aws ecs deregister-task-definition --task-definition task_definition_name:revision_no
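If you are not sure which revisions exist, a minimal sketch for listing them first (the family name is a placeholder):
# List all revisions of a single task definition family before deregistering them
aws ecs list-task-definitions --family-prefix task_definition_name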
Created the following gist to safely review, filter and deregister AWS task definitions and revisions in bulk (max 100 at a time) using the JS CLI.
https://gist.github.com/shivam-nagar/aa79b02b74f616f8714d51e419bd10de
You can use this to deregister all revisions of a task definition. This will result in the task definition itself being marked as inactive.
Now it's supported.
I just went into Task Definitions, clicked on Actions, then on Deregister, and it was removed from the UI.