NoSuchBucket error when running Kubernetes on AWS - amazon-web-services

Downloaded Kubernetes 1.1.8 from:
https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz
Followed the instructions at:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/aws.md
And got the following error:
kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
Creating kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149
make_bucket: s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/
A client error (NoSuchBucket) occurred when calling the GetBucketLocation operation: The specified bucket does not exist
+++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --region: expected one argument
AWS Console showed that the bucket was created but was empty.

It's probably a region issue; I'm guessing the bucket is created in a different region than the one Kubernetes tries to access.
It looks like the aws command-line tool is confused about the region:
aws: error: argument --region: expected one argument
When it can't determine the region, it defaults to one of the US regions.
EDIT: the S3 sync is triggered by the script cluster/aws/util.sh.
The command executed is aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/".
You can add an echo ${s3_bucket_location} before that line; it should tell you what the region is set to.
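For example, a minimal debugging sketch; the eu-west-1 value below is only an assumed example, use the region your bucket actually lives in:
# in cluster/aws/util.sh, just before the sync call, print the resolved region
echo "s3_bucket_location=${s3_bucket_location}"
aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/"
# or pin the bucket region explicitly before re-running kube-up.sh
export AWS_S3_REGION=eu-west-1
./kubernetes/cluster/kube-up.sh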

Related

aws emr cli failed for InvalidRequestException

I was able to run the create-cluster CLI command successfully and launched my EMR cluster, but when I tried to run the command below to add a step:
aws emr add-steps --cluster-id j-your-cluster-id --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3 Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,MainClass=mymainclass,Args=arg1,arg2,arg3 --profile my-test-account
it failed with this error:
An error occurred (InvalidRequestException) when calling the DescribeCluster operation: Cluster id 'j-your-cluster-id' is not valid.
and I've double-checked that j-your-cluster-id matches my cluster ID exactly.
I feel like this is a permission issue, but how come the same profile lets me create a cluster yet cannot describe it?
How can I dig further and fix this, please?
Based on the comments, the issue was caused by running the AWS CLI in a different region than intended. The solution was to use the --region option to pass the correct region to the CLI.
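For example, a minimal sketch of the fixed calls; the us-east-1 value is an assumption, use the region the cluster was actually created in:
aws emr describe-cluster --cluster-id j-your-cluster-id --region us-east-1 --profile my-test-account
aws emr add-steps --cluster-id j-your-cluster-id --region us-east-1 --profile my-test-account --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://mybucket/mytest.jar,Args=arg1,arg2,arg3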

How to run an AWS CLI: Elastic Beanstalk Wait command in Azure DevOps

The structure of the wait command is:
$ aws <command> wait <subcommand> [options and parameters]
However, in DevOps it only seems to support:
$ aws <command> <subcommand> [options and parameters]
See the example below, where there is a Command and a Subcommand. Where does the wait go? I'm trying to run this command: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/elasticbeanstalk/wait/environment-updated.html
I had to set the Subcommand to wait and move environment-updated down into the Options and parameters.
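In other words, the task fields end up looking roughly like this; the --environment-names parameter and the my-env name are assumptions for illustration:
Command: elasticbeanstalk
Subcommand: wait
Options and parameters: environment-updated --environment-names my-env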
It looks like you won't be able to do this using the extension. However, you have the AWS CLI installed on the agent, so what you need to do is set up a few variables and then call your commands from a PowerShell step (see the sketch after the variable list below).
Supply standard AWS environment variables in the build agent process
You can specify credentials with standard named AWS environment variables. These variables can be used to get credentials from a custom credentials store.
The following are all the supported standard named AWS environment variables:
AWS_ACCESS_KEY_ID – IAM access key ID.
AWS_SECRET_ACCESS_KEY – IAM secret access key.
AWS_SESSION_TOKEN – IAM session token.
AWS_ROLE_ARN – Amazon Resource Name (ARN) of the role you want to assume.
AWS_REGION – AWS Region code, for example, us-east-2.
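A minimal sketch of such a script step, assuming a Bash task (the same idea works from PowerShell) and that the pipeline variable names and the environment name below are hypothetical:
export AWS_ACCESS_KEY_ID=$(MyAccessKeyId)         # pipeline secret variable, name is an assumption
export AWS_SECRET_ACCESS_KEY=$(MySecretAccessKey) # pipeline secret variable, name is an assumption
export AWS_REGION=us-east-2
aws elasticbeanstalk wait environment-updated --environment-names my-env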
You can also create a feature request on GitHub to support the wait command in the extension.

aws transfer update-server is throwing errors

I am using Terraform to create an AWS SFTP server and trying to use IP whitelisting to secure my server.
The Terraform aws_transfer_server resource only supports the endpoint_type values PUBLIC or VPC_ENDPOINT at this time. So I am using a null_resource to execute an AWS CLI command to update the SFTP server after it is created. The Terraform snippet is below:
resource "null_resource" "update_sftp_server" {
provisioner "local-exec" {
command = <<EOF
aws transfer update-server --server-id ${aws_transfer_server.sftp.id} --endpoint-type VPC --endpoint-details SubnetIds="${join("\", \"", var.subnet_ids)}", AddressAllocationIds="${join("\", \"", toset(aws_eip.nlb.*.id))}", VPCEndpointID="${aws_vpc_endpoint.transfer.id}", VpcId="${var.vpc_id}"
EOF
}
depends_on = [aws_transfer_server.sftp, aws_vpc_endpoint.transfer]
}
This executes the following aws command:
aws transfer update-server --server-id s-######## --endpoint-type VPC --endpoint-details SubnetIds="subnet-#####", "subnet-#####", AddressAllocationIds="eipalloc-######", "eipalloc-######", VPCEndpointID="vpce-#######", VpcId="vpc-#####"
But I am getting the error below:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: AddressAllocationIds=eipalloc-######, eipalloc-######, VPCEndpointID=vpce-######, VpcId=vpc-######, subnet-######
Can someone help me understand why this error is thrown? My environment details are below:
Terraform v0.12.28
provider.aws v3.0.0
provider.null v2.1.2
aws-cli/2.0.33 Python/3.7.7 Windows/10 botocore/2.0.0dev37
Have you tried building your argument list without spaces, so that it looks like SubnetIds="subnet-#####","subnet-#####",AddressAllocationIds="eipalloc-######","eipalloc-######",VPCEndpointID="vpce-#######",VpcId="vpc-#####"?
Otherwise, when the command line is broken up into tokens, most of those pieces will not be parsed as part of the --endpoint-details argument.
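For example, the raw command from the question rewritten without spaces (keeping the question's placeholders and key names as-is) would look like:
aws transfer update-server --server-id s-######## --endpoint-type VPC --endpoint-details SubnetIds="subnet-#####","subnet-#####",AddressAllocationIds="eipalloc-######","eipalloc-######",VPCEndpointID="vpce-#######",VpcId="vpc-#####"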

AWS Translate | Asynchronous Batch Processing | CLI | describe-text-translation-job not valid command

I have installed and configured the AWS CLI on both my Windows 10 machine and an AWS EC2 Linux machine, and I also have one AWS Translate batch job in the Frankfurt AWS region. I am following this document to initiate the batch translation process using the CLI: https://docs.aws.amazon.com/translate/latest/dg/translate-dg.pdf
Now, suppose I am using this sample command:
aws translate describe-text-translation-job --job-id xxxxxxx
I am getting this error everywhere:
[ec2-user@ip-xx-xx-xx-xx ~]$ aws translate describe-text-translation-job --job-id xxxxxxx
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
delete-terminology | get-terminology
import-terminology | list-terminologies
translate-text | help
It only shows five valid choices other than help, but as per the documentation there should be more:
https://docs.aws.amazon.com/cli/latest/reference/translate/index.html#cli-aws-translate
Why am I not getting these options?
describe-text-translation-job
start-text-translation-job
stop-text-translation-job
list-text-translation-jobs
It was an AWS CLI version problem; I updated the version and now I get all the required subcommands.
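For reference, a minimal sketch of checking and upgrading, assuming a pip-based install (for the AWS CLI v2 installer, follow the AWS install docs instead):
aws --version
pip install --upgrade awscli
aws translate describe-text-translation-job --job-id xxxxxxx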

How to resolve this error on a Kubernetes Installation on AWS?

I'm trying to install Kubernetes on AWS for the first time, according to this tutorial: http://kubernetes.io/docs/getting-started-guides/aws/#prerequisites
I can use the AWS CLI, but after running the following command:
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
Then I get this error:
Unpacking kubernetes release v1.3.0
Creating a kubernetes on aws...
... Starting cluster in us-west-2a using provider aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: jessie
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
list
Uploading to Amazon S3
+++ Staging server tars to S3 Storage: kubernetes-staging-a9b7435c8fc7b6c3d3e26fdd5b84aaae/devel
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument --region: expected one argument
Any help/insight appreciated.
I had the same issue; it turned out I had run pip install aws instead of pip install awscli. After uninstalling aws and installing awscli I was good to go.
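A minimal sketch of that fix:
pip uninstall aws
pip install awscli
aws --version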
It seems the region is missing from the environment variables. Below is an example of the env vars for the Singapore region:
export KUBE_AWS_ZONE=ap-southeast-1a
export NUM_NODES=2
export MASTER_SIZE=t2.micro
export NODE_SIZE=t2.micro
export AWS_S3_REGION=ap-southeast-1
export AWS_S3_BUCKET=mudrii-kubernetes-artifacts
export KUBE_AWS_INSTANCE_PREFIX=k8s
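With these exported, re-run the installer, for example:
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash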