So Amazon Web Services (AWS) has just upgraded its CLI to v2. I've updated my version to the latest (and checked with aws --version to make sure it's using it). Trouble is, now my simple command doesn't work and I can't figure out why.
With CLI v1, I used: aws s3 sync myfile s3bucket
This worked fine and I had no issues with it.
Now, with CLI v2, it throws this error:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: the following arguments are required: paths
I've looked up the documentation and the only mention of "paths" is this:
Options
*******
"paths" (string)
Literally just that. Any idea what this paths option (that is seemingly not an option, but a required parameter) does? And how then do I get aws s3 sync to start working again on this new CLI version?
Very disgruntled right now. I had just got this app working, sent it to a coworker to test it and bam! Broken!
Edit2: On checking the path files, I've realised the mistake. Each new subfolder I added to the path appends the \ AFTER the folder name, so the path ends in a trailing \ after the final folder ("CloudSync\Data\" for example). It's not the updated CLI that's messing things up (although it was the update message that broke my program on my coworker's PC). It was that darned \
Thanks for the help though.
The sync command can be used to sync two paths. The possibilities are as follows:
<LocalPath> <S3Uri> or <S3Uri> <LocalPath> or <S3Uri> <S3Uri>
i.e. syncing a local path to S3, S3 to local, or two S3 paths.
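For example, the three forms look like this (bucket and folder names here are hypothetical):

# local to S3
aws s3 sync ./local-dir s3://my-bucket/prefix
# S3 to local
aws s3 sync s3://my-bucket/prefix ./local-dir
# S3 to S3
aws s3 sync s3://my-bucket/a s3://my-bucket/b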
There are some changes in v2. I ran this on my local system and made sure it works: basically, LocalPath should now be a folder, and no longer just a file, so that it syncs all the contents inside the temp folder in this example.
aws s3 sync temp s3://datalake-dl/
Alex's comment on the accepted answer was my problem too.
This fails with the error "the following arguments are required: paths"
aws s3 sync 'C:\my-folder\' s3://my-bucket-my-folder/
This succeeds
aws s3 sync 'C:\my-folder' s3://my-bucket-my-folder
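If the local path is built programmatically (as in the question's Edit2), here is a minimal sketch of a construction that avoids the trailing separator, assuming Python and a hypothetical bucket name:

import os
import subprocess

# os.path.join never appends a trailing separator, so the argument
# ends on the folder name rather than on a backslash.
local_path = os.path.join("C:\\", "CloudSync", "Data")

# Belt and braces: strip any trailing separator before calling the CLI.
local_path = local_path.rstrip("\\/")

subprocess.run(["aws", "s3", "sync", local_path, "s3://my-bucket/"], check=True)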
aws iam list-users command not working
I have set up the AWS CLI in Windows. The path has been added under the system environment variables.
When I try the commands aws --version and aws configure, they succeed. But listing users with aws iam list-users throws this error:
'more' is not recognized as an internal or external command, operable program or batch file.
I am stuck. Could anyone help please?
It sounds like the AWS CLI is trying to use an output paginator that is not in the path.
Put simply, AWS CLI sends its output via a utility that lets you 'page' through the results. In your case, it is trying to use the more command.
You can tell the AWS CLI not to use a paginator by putting this in the .aws/config file:
[default]
cli_pager=
For more details, see: Using AWS CLI pagination options - Client-side pager - AWS Command Line Interface
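If you'd rather not edit the config file, the same thing can be done per session or per command. Both of the following assume AWS CLI v2; the first is for a Windows Command Prompt:

set AWS_PAGER=
aws iam list-users

or, per command, with the v2 global option:

aws iam list-users --no-cli-pager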
I am trying to set up some build and deployment servers based on EC2 instances to deploy software to AWS via CloudFormation.
The current setup uses the AWS CLI to deploy CloudFormation templates, and authentication is handled using a credentials profile where the ~/.aws/config file has a profile with:
[profile x]
role_arn = x
credential_source = Ec2InstanceMetadata
region = x
The setup using the AWS CLI appears to be working fine, and can deploy CloudFormation templates, upload files to S3 etc.
I wanted to automate this further and use a configuration-based approach to allow for more flexibility in our deployments. To achieve this, I have written some Python code to parse a config file and use the Boto3 library (which the AWS CLI also uses) to replicate the functionality. However, when I try to do similar things in Boto3 (like deploying CloudFormation templates and uploading files to S3), I get the following error: Connection to sts.amazonaws.com timed out. Unfortunately I can't provide the full stack trace since it's on a separate network. I am running Python 3.7 with boto3 1.21.13 and botocore 1.24.13.
I assume it might be because I need to set up a VPC endpoint for STS? However, I can't work out why and how the AWS CLI works fine but Boto3 doesn't, especially since the AWS CLI uses Boto3 under the hood.
In addition, I have confirmed that I can retrieve instance metadata using curl from the EC2 instances.
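For reference, that check was something like the standard IMDSv1 call:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/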
To reproduce the error, this command fails for me:
python -c "import boto3;print(boto3.Session(profile_name='x').client('s3').list_objects('bucket')"
However this AWS cli command works:
aws --profile x s3 ls bucket
I guess I don't understand why the AWS CLI command works when the boto3 command fails. Why does boto3 need to call the sts.amazonaws.com endpoint when the AWS CLI seemingly doesn't? What am I missing?
The aws cli and boto3 both use botocore, which is only a minor detail. Nevertheless, both the cli and boto3, when run in the same environment with the same access to the credentials, should indeed be able to reach the same endpoint.
This:
aws sts get-caller-identity --profile x
and:
python -c "import boto3;print(boto3.Session(profile_name='x').client('sts').get_caller_identity())"
are equivalent and should make the same api calls to the same endpoint.
As an aside, I find it is often best not to have your code concerned with session handling at all. It seems simplest to me for the code to expect the environment to handle that. So just export AWS_PROFILE and run the code. This prevents other users of the script from having to have the same profile and name it the same.
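For example, with the profile name from the question:

export AWS_PROFILE=x
python -c "import boto3; print(boto3.client('sts').get_caller_identity())"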
Yeah so it turns out I just needed to set/export AWS_STS_REGIONAL_ENDPOINTS='regional'.
After many hours of trawling the botocore and awscli source and logs, I found out that botocore sets it by default to 'legacy'.
Whereas in v2 of the AWS CLI, they set it to 'regional'.
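For reference, either of these should work; the config key is the one botocore reads, and the profile name is the one from the question:

export AWS_STS_REGIONAL_ENDPOINTS=regional

or, per profile, in ~/.aws/config:

[profile x]
sts_regional_endpoints = regional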
I want to display some help text, and search it with grep.
aws ec2 help | grep instance
AWS CLI uses more to paginate the help.
To disable it I've already tried:
aws --no-cli-pager ec2 help | grep instance
export AWS_PAGER=''; aws ec2 help | grep instance
and changing cli_pager in config file:
[default]
cli_pager=
It still uses the pager.
I'm using the AWS CLI v2 Windows version on Cygwin.
How does one disable it?
There are two ways to disable pagination in the AWS CLI.
1: Using the cli_pager option in the config file:
[default]
cli_pager=
2: Using the AWS_PAGER environment variable:
$ export AWS_PAGER=""
Please note: They only work if you’re using the AWS CLI version 2. They aren’t available if you run AWS CLI version 1. For information on how to install version 2, see Installing, updating, and uninstalling the AWS CLI version 2.
There is in fact no well-supported way to do this for the special case of the help output. The help output is treated specially by the v2 aws-cli and ignores the configured cli_pager gadgetry.
The workaround is simply to remove the tty and pipe to cat:
aws help | cat
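Applied to the original command, and assuming the same trick works for subcommand help:

aws ec2 help | cat | grep instance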
see:
https://github.com/aws/aws-cli/issues/4972
I am using the Amazon Web Services CLI. I use a makefile to build my lambda project and upload it to AWS Lambda. I am on a Windows machine and using PowerShell to call make.
I try to delete my lambda function with the following lines
AWS_PATH = /cygdrive/c/Users/TestBox/AppData/Roaming/Python/Scripts/aws
AWS_WIN_PATH = $(shell cygpath -aw ${AWS_PATH})
AWS_REGION = eu-west-2
lambda_delete:
	$(AWS_WIN_PATH) lambda delete-function --function-name LambdaTest --region $(AWS_REGION) --debug
I get this error..
NoCredentialsError: Unable to locate credentials
Unable to locate credentials. You can configure credentials by running "aws configure".
Running aws configure list prints out a valid default profile.
I think the problem is that I am using GNU Make installed by Cygwin on a Windows machine, and using PowerShell to call make.
So when aws evaluates ~/.aws/credentials, the path to the credentials looks like "/cygdrive/c/users/testbox/.aws/credentials" instead of "c:\users\testbox\.aws\credentials". I think :)
I had the same problem with the path to aws itself and had to use $(shell cygpath -aw ${AWS_PATH}) to convert it to a path windows python could use.
Is there any way to pass the credentials directly to lambda delete-function, or indirectly through a path to a file? I can't seem to think of a way because the code that searches for the credentials is internal to botocore.
Is there a way around this that you know of?
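One possible direct route, sketched on the assumption that botocore honors its standard AWS_SHARED_CREDENTIALS_FILE environment variable here: convert the credentials path with cygpath, just like AWS_PATH, and set the variable in the recipe:

AWS_CREDS_PATH = /cygdrive/c/Users/TestBox/.aws/credentials
AWS_CREDS_WIN_PATH = $(shell cygpath -aw ${AWS_CREDS_PATH})

lambda_delete:
	AWS_SHARED_CREDENTIALS_FILE="$(AWS_CREDS_WIN_PATH)" $(AWS_WIN_PATH) lambda delete-function --function-name LambdaTest --region $(AWS_REGION)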
Alternative solution: consider using AWS SAM templates.
Use AWS SAM templates to deploy your Lambda functions and AWS resources using CloudFormation.
Edit your SAM template and define your AWS resources. For example, define Lambda functions/path to your code.
aws cloudformation package to package and upload your local code to S3.
aws cloudformation deploy to provision and update AWS resources with the updated code on S3.
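For example, with hypothetical template, bucket, and stack names:

aws cloudformation package --template-file template.yaml --s3-bucket my-deploy-bucket --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml --stack-name lambda-test --capabilities CAPABILITY_IAM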
This would work in CMD/PowerShell without the make hassle. You will also have the benefit of having your resources versioned as code, and you won't need to worry about tracking or adding new AWS APIs in your makefile.
More complex serverless frameworks for reference:
AWS Chalice https://github.com/aws/chalice
Django/Flask + Lambda https://github.com/Miserlou/Zappa
Cross cloud serverless solution https://github.com/serverless/serverless
For a month or so, I've been studying AWS services and now I have to accomplish some basic stuff on AWS Elastic Beanstalk via the command line. As far as I understand, both the aws elasticbeanstalk [command] CLI and the eb [command] CLI are installed on the build instance.
When I run eb status inside the application folder, I get a response in the form:
Environment details for: app-name
Application name: app-name
Region: us-east-1
Deployed Version: app-version
Environment ID: env-name
Platform: 64bit Amazon Linux ........
Tier: WebServer-Standard
CNAME: app-name.elasticbeanstalk.com
Updated: 2016-07-14 .......
Status: Ready
Health: Green
That tells me eb init has been run for the application.
On the other hand if I run:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1
I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In the home folder of the current user there is a .aws directory with a credentials file containing a [profile] line and aws_access_key_id and aws_secret_access_key lines, all set up.
Besides the obvious problem with the credentials, what I really lack is an understanding of the two CLIs. Why is the EB CLI not asking for credentials while the AWS CLI is? When do I use one or the other? Can I use only the AWS CLI? Any clarification on the matter will be highly appreciated.
EDIT:
For anyone ending up here, having the same problem with "Unable to locate credentials". Adding --profile profile-name option solved the problem for me. profile-name can be found in ~/.aws/config (or credentials) file on [profile profile-name] line.
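In other words, the failing command from above becomes:

aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1 --profile profile-name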
To verify that the AWS CLI is configured on your system, run aws configure and provide it with all the details it requires. That should fix your credentials problem, and reviewing what changed in the configuration will help you understand what was wrong with your current conf.
The eb CLI and the aws CLI have very similar capabilities, and I too am a bit confused as to why they both exist. From my experience, the main difference is that the aws CLI is used to interact with your AWS account using simple requests, while the eb CLI maintains connections between you and the EB environments and so allows for finer control over them.
For instance, I've just developed a CI/CD pipeline for our Beanstalk apps. When I use the eb CLI I can monitor the deployment of our apps and notify the developers when it's finished. The aws CLI does not offer that functionality, and the only way to achieve it is to repeatedly query the service until you receive the desired result.
The AWS CLI is a general tool that works on all AWS resources. It's not tied to a specific software project, the type of machine you're on, the directory you're in, or anything like that. It only needs credentials, whether they've been put there manually if it's your own machine, or generated by AWS if it's an EC2 instance.
The EB CLI is a high level tool to wrangle your software project into place. It's tied to the directory you're in, it assumes that the stuff in your directory is your project, and it has short commands that do a lot of background work to magically put everything in the right place.
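To make the contrast concrete, these two calls retrieve roughly the same information (the environment name here is hypothetical). The first infers everything from the directory it is run in; the second needs everything spelled out:

eb status
aws elasticbeanstalk describe-environments --environment-names app-name-env --region us-east-1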