aws ecs-cli extra-user-data: flag provided but not defined

I'm facing a strange problem trying to configure an AWS ECS cluster through ecs-cli.
Specifically, if I use the --extra-user-data flag, it says: flag provided but not defined.
Here's my command:
ecs-cli up --capability-iam --keypair test --size 1 --instance-type t2.small --extra-user-data file://init-ec2 --launch-type EC2 --force --cluster test --region eu-west-1
Here's the error:
ERRO[0000] flag provided but not defined: -extra-user-data
Any help is appreciated...

--extra-user-data was introduced in ecs-cli version 1.9.0. The error message you've provided indicates that you're running an earlier version. Update to the latest version of ecs-cli and try again.
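You can check which version is installed with:
ecs-cli --version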

Related

aws emr cli failed for InvalidRequestException

I was able to run the create-cluster CLI command successfully and launch my EMR cluster, but when I tried to run the command below to add a step:
aws emr add-steps --cluster-id j-your-cluster-id --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3 --profile my-test-account
it failed with this error:
An error occurred (InvalidRequestException) when calling the DescribeCluster operation: Cluster id 'j-your-cluster-id' is not valid.
and I've double-checked that j-your-cluster-id matches my cluster ID exactly.
I feel like this is a permissions issue, but how come the same profile can create a cluster yet cannot describe it?
How can I dig further and fix this, please?
Based on the comments.
The issue was caused by running the AWS CLI in a different region than intended. The solution was to use the --region option to pass the correct region to the CLI.
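For example, passing the region explicitly (the region below is a placeholder; use whichever region the cluster was actually created in):
aws emr describe-cluster --cluster-id j-your-cluster-id --region us-west-2 --profile my-test-account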

AWS EMR - Terminated with errors: On the master instance, application provisioning failed

I'm provisioning an EMR cluster with release emr-5.30.0. I create it with Terraform, and when it fails I get the following error in the AWS console.
Amazon EMR Cluster j-11I5FOBxxxxxx has terminated with errors at 2020-10-26 19:51 UTC with a reason of BOOTSTRAP_FAILURE.
I don't have any bootstrap steps. I can't view any logs to see what is happening either: the Log URI is blank, and I can't SSH to the cluster because it has already terminated.
Any pointers would be appreciated.
Here is the AWS CLI export output:
aws emr create-cluster --auto-scaling-role EMR_AutoScaling_DefaultRole --applications Name=Spark --tags 'Account=xxx' 'Function=xxx' 'Repository=' 'Mail=xxx#xxx.com' 'Slack=xxx' 'Builder=xxx' 'Environment=xxx' 'Service=xxx xxx xxx' 'Team=xxx' 'Name=xxx-xxx-xxx' --ebs-root-volume-size 100 --ec2-attributes '{"KeyName":"xxx","AdditionalSlaveSecurityGroups":[""],"InstanceProfile":"EMR_EC2_DefaultRole","ServiceAccessSecurityGroup":"sg-xxx","SubnetId":"subnet-xxx","EmrManagedSlaveSecurityGroup":"sg-xxx","EmrManagedMasterSecurityGroup":"sg-xxx","AdditionalMasterSecurityGroups":[""]}' --service-role EMR_DefaultRole --release-label emr-5.30.0 --name 'xxx-xxx-xxx' --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":4}]},"InstanceGroupType":"MASTER","InstanceType":"m5.2xlarge","Name":""},{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":40,"VolumeType":"gp2"},"VolumesPerInstance":1}]},"InstanceGroupType":"CORE","InstanceType":"m5.2xlarge","Name":""}]' --configurations '[{"Classification":"hadoop-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0"}}]},{"Classification":"spark-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0"}}]}]' --scale-down-behavior TERMINATE_AT_TASK_COMPLETION --region eu-west-2
The issue was due to JAVA_HOME being set incorrectly:
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
Resolution: check the logs in S3 under provision-node/reports; they should tell you which provisioning step failed...
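If the cluster has a log URI configured, something along these lines can help locate the report (the bucket name below is a placeholder):
aws s3 ls s3://my-emr-log-bucket/j-11I5FOBxxxxxx/node/ --recursive | grep provision-node/reports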
Try changing the instance type and running it in a different AZ to see if the problem persists.
Building a cluster with emr-6.2.0 on m5.xlarge, this is JAVA_HOME:
/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64
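For example, the export classifications from the command above could be updated along these lines for emr-6.x (a sketch; verify the exact JAVA_HOME path on your AMI first):
--configurations '[{"Classification":"spark-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64"}}]}]'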

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
For example, I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to the old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but they're not there), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials and its cluster configurations at ~/.ecs/config.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original credentials.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default:
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration you may need to add the cluster again and associate to the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
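To see exactly what is stored, you can print the files directly (these paths are the ECS CLI defaults):
cat ~/.ecs/config
cat ~/.ecs/credentials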
Some resources:
ECS CLI Configurations

AWS deploying ECS services through CLI

I want to deploy (restart) my ECS tasks (of launch type Fargate) through aws cli (in last step of CI/CD).
The issue is that it seems I have to stop the tasks and start them again. That's still OK, but in the following command:
aws --region regionName ecs stop-task --cluster example-cluster --task taskID
for --task I must use either the task UUID or the task's ARN, neither of which is fully fixed.
The task's UUID changes with each revision, and the ARN is also a name whose last part is the revision number. Is there a fully fixed identifier I can use as the ARN?
Also, in the ARN, if I have nginx:4 for example, I cannot use "latest" instead of 4, which makes it difficult to handle and automate.
I found the solution; it was a mistake to use the *-task family of commands. To redeploy a service, we simply need to use the update-service command, like this:
aws --region regionName ecs update-service --cluster clusterName --force-new-deployment --service serviceName
The key is --force-new-deployment, and this command is useful for those who do not use CodeDeploy.
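In a CI/CD step you may also want to block until the new deployment has finished rolling out; a wait like this (same placeholder cluster and service names) should work:
aws --region regionName ecs wait services-stable --cluster clusterName --services serviceName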

Error while executing aws eks commands on aws-cli

I successfully installed the EKS CLI in the terminal, but when I try to execute the command
aws eks --us-east-1 region update-kubeconfig --name codefresh
it shows an error saying
aws: error: argument command: Invalid choice
It would be great if someone could help me with the proper solution.
You have an error in your call: the region is specified with --region us-east-1, not with --us-east-1 region.
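The corrected call would be:
aws eks update-kubeconfig --name codefresh --region us-east-1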