Enable AWS Batch in AWS CLI

I am working in US-East-1 (N. Virginia) and have even configured the Default Region Name to us-east-1 using the aws configure command.
But I am not able to access Batch using the CLI; it is not even listed as one of the available services in aws help.
Any ideas on how to enable Batch in the AWS CLI? I have administrative access in the IAM console, so permissions don't seem to be the issue.

The Batch service is relatively new, so its commands only exist in fairly recent versions of the AWS CLI.
Commands for Batch in the latest CLI documentation: http://docs.aws.amazon.com/cli/latest/reference/batch/index.html?highlight=batch
If you are running Windows, simply download the updated installer: https://aws.amazon.com/cli/
If you are using OS X or Linux, upgrade with pip:
pip install --upgrade awscli
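Once the CLI is upgraded, you should be able to confirm that the Batch commands are present with a quick, read-only check (us-east-1 is just the region from the question):
# check the installed CLI version
aws --version
# batch should now appear in the command list
aws batch help
# a simple read-only call against the Batch API
aws batch describe-job-queues --region us-east-1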

Connection to sts.amazonaws.com timed out when calling Python boto3 API from EC2 instance

I am trying to set up some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine and can deploy CloudFormation templates, upload files to S3, etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code to parse a config file and use the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploying CloudFormation and uploading files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to set up a VPC endpoint for STS? However, I can't work out why and how the AWS CLI works fine but Boto3 doesn't, especially since the AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
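The metadata check was along these lines, against the instance metadata service (the role name comes back from the first call):
# name of the role attached via the instance profile
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>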
To reproduce the error, this command fails for me:
python -c "import boto3; print(boto3.Session(profile_name='x').client('s3').list_objects(Bucket='bucket'))"
However, this AWS CLI command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
The AWS CLI and boto3 both use botocore, but that is only a minor detail. Nevertheless, both the CLI and boto3, when run in the same environment with the same access to the credentials, should indeed be able to reach the same endpoint.
This:
aws sts get-caller-identity --profile x
and:
python -c "import boto3;print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"
are equivalent and should make the same API calls to the same endpoint.
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that: just export AWS_PROFILE and run the code. This saves other users of the script from having to have a profile with the same name.
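For example, something along these lines, using the profile name from the question:
# select the profile via the environment; the code itself just uses the default session
export AWS_PROFILE=x
python -c "import boto3; print(boto3.client('sts').get_caller_identity())"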
Yeah, so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'.
After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it to 'legacy' by default.
Whereas in v2 of the AWS CLI, it is set to 'regional'.
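In other words, a minimal sketch of the workaround (profile x as above; the same setting can also go in ~/.aws/config as sts_regional_endpoints = regional):
# use the regional STS endpoint (sts.<region>.amazonaws.com) instead of the global sts.amazonaws.com,
# which is the endpoint an STS VPC interface endpoint can actually serve
export AWS_STS_REGIONAL_ENDPOINTS=regional
python -c "import boto3; print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"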

AWS Aurora - How to enable serverless mode via CLI

I am using the following command to create an AWS Aurora Serverless cluster
aws rds create-db-cluster --db-cluster-identifier test-cluster --database-name testdb --master-username test --master-user-password testtest --engine aurora --engine-mode serverless --region us-east-1
but I am getting the following error.
Unknown options: --engine-mode, serverless
The above command works great on my AWS account, but it's not working on my client's account (I just have programmatic access to that account). I have double-checked the permissions, and I have similar permissions to those on my own account.
Summary: the AWS command to create a serverless Aurora cluster works on one account but not on another account with similar permissions.
The error message states that it does not know about the --engine-mode argument. This is a clear indication that your AWS CLI version is outdated. Serverless support was added in a late-2018 release, so you need to update your client's AWS CLI for it to recognize these inputs.
I have figured it out. I was using awscli version 1.14 on my server and 1.16 on my laptop. I updated the awscli and now it's working fine.
sudo pip install --upgrade awscli
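For reference, the version check that surfaced the mismatch; the exact versions are just the ones from this question:
# the 1.14 install on the server did not know --engine-mode; the 1.16 install on the laptop did
aws --version
After the upgrade, the original create-db-cluster command ran without the Unknown options error.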

Create and Configure a Cognito User Pool from the AWS CLI

I'm currently trying to automate the Cognito User Pool creation process via bash scripts using the AWS CLI. Following the steps from the AWS console, I'm trying to reproduce the same process via the CLI. I'd like to know which commands I should be looking at and in what sequence. The AWS docs don't really say much, and the commands sometimes tend to be confusing.
Any ideas will be greatly appreciated.
Cheers!
Nyah
First, install the AWS CLI using the following command:
sudo pip install awscli
Configure your AWS credentials. Run the command below; it will prompt for the AWS Access Key ID, AWS Secret Access Key, Default region name, and Default output format:
sudo aws configure
Create the user pool:
sudo aws cognito-idp create-user-pool --pool-name MyUserPool
You can first install the AWS CLI:
sudo pip install awscli
Then configure the AWS CLI with your access key ID and secret access key:
aws configure
For all the details of Cognito, you can find the available commands here: https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/index.html
Create the user pool in Cognito:
aws cognito-idp create-user-pool --pool-name <Whatever name you want to add>
Update the user pool in Cognito:
aws cognito-idp update-user-pool --user-pool-id <value>
You can also refer to this bash script:
https://github.com/awslabs/aws-cognito-angular-quickstart/blob/master/aws/createResources.sh
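Putting the steps together, a minimal sketch of the sequence for a pool plus an app client (the names are placeholders, and a real script would usually add a password policy, schema attributes, and so on):
# create the pool and capture its ID
POOL_ID=$(aws cognito-idp create-user-pool --pool-name MyUserPool --query 'UserPool.Id' --output text)
# create an app client in that pool and capture its ID
CLIENT_ID=$(aws cognito-idp create-user-pool-client --user-pool-id "$POOL_ID" --client-name MyAppClient --query 'UserPoolClient.ClientId' --output text)
echo "User pool: $POOL_ID, app client: $CLIENT_ID"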

aws-cli equivalent of eb deploy?

I'm attempting to deploy an app from one Elastic Beanstalk instance to another. Running pip install awsebcli --upgrade --user doesn't install the eb CLI tool on the EC2 machine, for some odd reason.
Does anyone know the equivalent of eb deploy using only the aws CLI options?
This question is a bit confusing. Are you attempting to move code between EC2 instances in your Beanstalk environment?
If I'm assuming correctly, you've pulled/changed your code on one Beanstalk host, and now you're trying to propagate that change to the other instances using the EB CLI. That's not a best practice; Beanstalk has a mechanism to deploy your code to all instances.
The EB CLI is meant to be run from your workstation to push code from your IDE/editor to the Beanstalk hosts in AWS.
Beanstalk keeps a copy of that code revision in S3, and if the Beanstalk environment is load balanced, then all instances will be running the same application version when scaling events or deployments occur, because they pull your code from a common source.
But to answer your question:
Does anyone know the equivalent of eb deploy using only the aws CLI options?
You're going to want to ZIP and upload your code to S3, and note the S3 bucket and key where it's located.
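That first step isn't shown here in full, but roughly (the archive name is arbitrary, and the bucket and key placeholders match the ones used below):
# bundle the application source from the project root
zip -r app.zip . -x '*.git*'
# upload the bundle; this bucket/key pair is what --source-bundle points at
aws s3 cp app.zip s3://<S3_BUCKET_NAME>/<S3_KEY>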
Then create a new application version.
% aws elasticbeanstalk create-application-version --application-name="<APPLICATION_NAME>" --version-label="<NEW_VERSION_LABEL>" --source-bundle="{\"S3Bucket\": \"<S3_BUCKET_NAME>\",\"S3Key\": \"<S3_KEY>\"}"
Then deploy your new application version to the running environment.
% aws elasticbeanstalk update-environment --environment-id="<ENVIRONMENT_ID>" --version-label="<NEW_VERSION_LABEL>"
Reading is hard...
Linux requires you to "[a]dd the path to the executable file to your PATH variable"
export PATH=~/.local/bin:$PATH
eb --version now works

Jenkins on AWS EC2 instance unable to use instance profile after upgrade

I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. using Terraform, uploading files to S3, accessing CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in a shell on the instance as the jenkins user still works fine (e.g. sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists the bucket's files, while running the same command from Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum, and the AWS CLI was installed using the bundled installer provided by AWS.