aws emr cli failed with InvalidRequestException

I was able to run create-cluster successfully and launch my EMR cluster, but when I tried to run the command below to add steps:
aws emr add-steps --cluster-id j-your-cluster-id --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3 --profile my-test-account
it failed with this error:
An error occurred (InvalidRequestException) when calling the DescribeCluster operation: Cluster id 'j-your-cluster-id' is not valid.
and I've double-checked that j-your-cluster-id matches my cluster ID exactly.
I feel like this is a permission issue, but how could the same profile let me create a cluster yet not describe it?
How can I dig further and fix this?

Based on the comments:
The issue was caused by running the AWS CLI in a different region than the one the cluster was launched in. The solution is to pass the --region option so the CLI targets the correct region.
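For example, if the cluster was launched in us-east-1 (a hypothetical region for illustration), the call becomes:
aws emr add-steps --region us-east-1 --cluster-id j-your-cluster-id --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 --profile my-test-account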

Related

AWS EMR - Terminated with errors: On the master instance application provisioning failed

I'm provisioning an EMR cluster (emr-5.30.0) using Terraform, and it fails with the following error in the AWS console:
Amazon EMR Cluster j-11I5FOBxxxxxx has terminated with errors at 2020-10-26 19:51 UTC with a reason of BOOTSTRAP_FAILURE.
I don't have any bootstrap steps. I can't view any logs to see what is happening either: the Log URI is blank, and I can't SSH to the cluster since it's already terminated.
Any pointers would be appreciated.
Providing the AWS CLI export output:
aws emr create-cluster --auto-scaling-role EMR_AutoScaling_DefaultRole --applications Name=Spark --tags 'Account=xxx' 'Function=xxx' 'Repository=' 'Mail=xxx@xxx.com' 'Slack=xxx' 'Builder=xxx' 'Environment=xxx' 'Service=xxx xxx xxx' 'Team=xxx' 'Name=xxx-xxx-xxx' --ebs-root-volume-size 100 --ec2-attributes '{"KeyName":"xxx","AdditionalSlaveSecurityGroups":[""],"InstanceProfile":"EMR_EC2_DefaultRole","ServiceAccessSecurityGroup":"sg-xxx","SubnetId":"subnet-xxx","EmrManagedSlaveSecurityGroup":"sg-xxx","EmrManagedMasterSecurityGroup":"sg-xxx","AdditionalMasterSecurityGroups":[""]}' --service-role EMR_DefaultRole --release-label emr-5.30.0 --name 'xxx-xxx-xxx' --instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":4}]},"InstanceGroupType":"MASTER","InstanceType":"m5.2xlarge","Name":""},{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":40,"VolumeType":"gp2"},"VolumesPerInstance":1}]},"InstanceGroupType":"CORE","InstanceType":"m5.2xlarge","Name":""}]' --configurations '[{"Classification":"hadoop-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0"}}]},{"Classification":"spark-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0"}}]}]' --scale-down-behavior TERMINATE_AT_TASK_COMPLETION --region eu-west-2
The issue was due to JAVA_HOME being set incorrectly:
JAVA_HOME":"/usr/lib/jvm/java-1.8.0"
Resolution: check the logs in S3 under provision-node/reports; they should tell you which bootstrap step failed.
Also try changing the instance type and running the cluster in a different AZ to see if the problem persists.
Building a cluster with emr-6.2.0 on m5.xlarge, this is JAVA_HOME:
/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64
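A minimal sketch of the corrected --configurations fragment, assuming the Corretto path shown above (the exact path varies by release and architecture, so verify it on a running node before reusing it):
--configurations '[{"Classification":"hadoop-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64"}}]},{"Classification":"spark-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3","JAVA_HOME":"/usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64"}}]}]'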

AWS CLI command to get "AWS CLI Export"

When an EMR cluster has been provisioned, I want to get the "AWS CLI Export" output using an AWS CLI command.
Please let me know if you have an idea how to get this export output through the AWS CLI.
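As far as I know, the console's "AWS CLI Export" is generated client-side and has no direct API equivalent; a possible starting point is to describe the cluster and reconstruct the create-cluster call from the returned configuration:
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX
Here j-XXXXXXXXXXXXX is a placeholder for your real cluster ID.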

Error while executing aws eks commands on aws-cli

I successfully installed the EKS CLI in the terminal. But when I try to execute the command
aws eks --us-east-1 region update-kubeconfig --name codefresh
it shows an error saying
aws: error: argument command: Invalid choice
It would be great if someone could help me with the proper solution.
You have an error in your call: you specify the region with --region us-east-1, not with --us-east-1 region.
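The corrected call would be:
aws eks update-kubeconfig --region us-east-1 --name codefresh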

NoSuchBucket error when running Kubernetes on AWS

Downloaded Kubernetes 1.1.8 from:
https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz
Followed the instructions at:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/aws.md
And got the following error:
kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
Creating kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149
make_bucket: s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/
A client error (NoSuchBucket) occurred when calling the GetBucketLocation operation: The specified bucket does not exist
+++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --region: expected one argument
AWS Console showed that the bucket was created but was empty.
It's probably a region issue; I'm guessing the bucket is created in a different region than the one Kubernetes tries to access it in.
It looks like the aws command-line tool is confused about the region:
aws: error: argument --region: expected one argument
When it can't determine the region, it defaults to one of the US regions.
EDIT: the S3 sync is triggered by the script cluster/aws/util.sh.
The command executed is aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/".
You can add an echo of ${s3_bucket_location} just before that line, as sketched below; it should tell you what the region is actually set to.
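A minimal sketch of that debug edit in cluster/aws/util.sh (only the echo line is new):
echo "s3_bucket_location=${s3_bucket_location}"  # debug: print the region the sync will use
aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/"
If the echo prints an empty value, --region receives no argument, which matches the 'aws: error: argument --region: expected one argument' output above.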

aws ec2 get-console-output prints nothing to the screen

I am creating an AWS EC2 instance using this tutorial, and I can't find any information on troubleshooting my issue, or any evidence that anyone else has even experienced it!
I used an IAM user with admin permissions to set up an ec2 instance, and when I run
$> aws ec2 get-console-output --instance-id <my-ec2-id>
a blank line is output, followed by
'Output'
and nothing else!
According to the tutorial, this command would enable me to see the remote RSA fingerprint to verify I'm making the right connection.
I can log into my ec2 instance just fine (though I suppose without the previous step there's no way to be absolutely sure).
Additionally, the IAM user I'm working with is not my CLI's default user, and I set up a profile to handle it. But if I try
$> aws ec2 get-console-output --profile <user-profile> --instance-id <my-ec2-id>
I still get the same results as before. The maddening thing is that I have solved this problem before, but I can't remember how.
Certain AWS CLI operations may not state explicitly whether the credentials are invalid or whether the user lacks the roles/permissions to access the resources in question. In this case it is likely that the access credentials are invalid; you can verify this with describe-instances or a similar command.
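For example, reusing the hypothetical profile placeholder from the question:
aws ec2 describe-instances --profile <user-profile>
If this call fails with an authorization or credential error, the profile itself is the problem rather than get-console-output.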
In older versions of the CLI (~1.7), you can use the --debug argument to diagnose this more easily, for example:
> aws ec2 get-console-output --instance-id i-<id> --debug
<Errors><Error><Code>InvalidInstanceID.NotFound</Code><Message>The instance ID 'i-e7bffa43' does not exist</Message></Error></Errors>
In newer versions of the CLI (1.9), the error message itself gives a bit more detail:
> aws ec2 get-console-output --instance-id i-<id>
A client error (InvalidInstanceID.NotFound) occurred when calling the GetConsoleOutput operation: The instance ID 'i-<id>' does not exist