Configuring aws cli to use fakes3 - amazon-web-services

Keen to set up fakes3; I have it working via a Docker setup, running on port 4569. I cannot figure out how to test it using the aws cli (version 1.10.6), specifically how to change the port used for access.
I.e. I want to run a command like
$ aws s3 cp test.txt s3://mybucket/test2.txt
I need to specify the port. I've tried:
a --port setting on the command line, i.e. AWS_ACCESS_KEY_ID=ignored AWS_SECRET_ACCESS_KEY=ignored aws s3 --profile fakes3 cp test.txt s3://mybucket/test2.txt (says it is not a valid parameter)
adding a profile with end_point="localhost:4569" to the config in ~/.aws (gives an error about the AUTH key)
running fakes3 on 443, but that then clashes with my local machine
Has anyone got aws cli working with fakes3?
$ aws s3 --version
aws-cli/1.10.6 Python/2.7.11 Darwin/15.2.0 botocore/1.3.28

Use the --endpoint-url argument. If fakes3 is listening on port 4569, try this:
aws --endpoint-url=http://localhost:4569 s3 ls
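The same flag should carry over to the copy command from the question (a sketch, assuming the bucket mybucket already exists in fakes3):
aws --endpoint-url=http://localhost:4569 s3 cp test.txt s3://mybucket/test2.txt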

Related

AWS S3 cli not working on Windows server

This works on my Linux box, but I can't get a simple AWS S3 cli command to work on a Windows server (2012).
I'm running a simple copy command to a bucket. I get the following error:
Parameter validation failed:
Invalid length for parameter Key, value: 0, valid range: 1-inf
I googled this, couldn't find anything relevant. And I'm not the best at working with Windows servers.
What does this error actually mean?
Here's the command:
aws s3 cp test.zip s3://my-bucket
Version:
aws-cli/1.11.158 Python/2.7.9 Windows/2012Server botocore/1.7.16
You might try this:
aws s3 cp test.zip s3://my-bucket --recursive
The error message:
Invalid length for parameter Key
is telling you that you need to specify a Key for your object (basically a filename), like so:
aws s3 cp test.zip s3://my-bucket/test.zip
The error message has nothing to do with specifying the file name on the destination path (that will be taken from the origin). It has everything to do with having a valid access key and secret key set up.
Run the following command to verify whether you have configured your credentials:
aws configure list
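If nothing is configured yet, the output will look roughly like this (illustrative; your values and locations will differ):
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                <not set>             None    None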

How can I transfer a remote file to my S3 bucket using AWS CLI?

I tried to follow advice provided at https://stackoverflow.com/a/18136205/6608952 but was unsure how to share myAmazonKeypair path in a .pem file on the remote server.
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
The command completed after a few minutes with this display:
ssh: connect to host myBucketEndpointName port 22: Connection timed out
lost connection
I have a couple of very large files to transfer and would prefer not to have to download the files to my local computer and then re-upload them to the S3 bucket.
Any suggestions?
There is no direct way to upload files to S3 from a remote location, i.e. a URL.
So to achieve that, you have two options:
Download the file to your local machine and then upload it via the AWS Console or AWS CLI.
Download the file to an AWS EC2 instance and upload it to S3 with the AWS CLI.
The first method is pretty simple and needs no explanation.
But for the second method, you'll need to:
Create an EC2 instance in the same region as the S3 bucket, or, if you already have an instance, log in/SSH to it.
Download the file from the source to the EC2 instance, via wget or curl, whichever is comfortable (or stream it straight to S3; see the sketch after these steps).
Install the AWS CLI on the EC2 instance.
Create an IAM user and grant it permission to your S3 bucket.
Configure the AWS CLI with the IAM user's credentials.
Upload your file to the S3 bucket with the AWS CLI s3 cp utility.
Terminate the instance, if you set it up only for this.
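If the EC2 instance is short on disk space, the download can also be piped straight into S3 instead of being saved to disk first. A minimal sketch with placeholder URL, bucket and key (aws s3 cp accepts - to read the object body from stdin, so the destination must be a full object key):
curl -L <FILE_URL> | aws s3 cp - s3://<YOUR_BUCKET>/<FILE_NAME>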
You can do it easily with a shell script. If you have a list of URLs in files.txt, do it like this:
#!/bin/bash
input="files.txt"
while IFS= read -r line; do
  name=$(basename "$line")
  echo "$name"
  wget "$line"
  aws s3 mv "$name" <YOUR_S3_URI>
done < "$input"
Or for one file:
wget <FILE_URL> && aws s3 mv <FILE_NAME> <YOUR_S3_URI>

aws ec2 modern.ie image upload

Has anyone been successful in uploading a modern.ie vmdk image to aws ec2?
I've tried via the ec2 import instance command:
ec2-import-instance IE10.Win7.For.Windows.VMWare\IE10_-_Win7-disk1.vmdk -f vmdk -t t2.small -a i386 -b xxxx --subnet subnet-xxxxx -p Windows -o %AWS_ACCESS_KEY% -w %AWS_SECRET_KEY% ...
but once I described the import, I got: ClientError: Unsupported Windows OS
After some reading I attempted to create an AMI via the aws cli after uploading the file to S3 and creating the policies etc.:
aws ec2 import-image --cli-input-json "{ \"Description\": \"ModernIE Win7IE10\", \"DiskContainers\": [ { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"xxx_temp\", \"S3Key\": \"IE10_-_Win7-disk1.vmdk\" } } ], \"LicenseType\": \"BYOL\", \"Architecture\": \"i386\", \"Platform\": \"Windows\"}"
But describing the import i get : "StatusMessage": "ClientError: Disk validation failed [Invalid S3 source location]"
I've even made the bucket url public!
Anyone have any ideas?
Thanks!
Use the AWS CLI to test that error:
aws s3 ls s3://xxx_temp
If you do not see the IE10_-_Win7-disk1.vmdk listed there, then the S3 upload is your problem. Re-verify your S3 key.
Also check the bucket policy and make sure the configured IAM user for your CLI has access to that bucket.
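To double-check which identity the CLI is actually using, and to inspect the bucket policy itself, something like this may help (bucket name taken from the question):
aws sts get-caller-identity
aws s3api get-bucket-policy --bucket xxx_temp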
If you're seeing the Unsupported Windows OS error, I would check the prerequisites very carefully:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites-ImportInstance.html
Not all operating systems can be imported. I frequently have an issue importing a Linux VM where I've upgraded the kernel version and it becomes "Unsupported". The importer is very picky.
During the import process you can use the identifier returned from the import command to follow its status like so:
aws ec2 describe-import-image-tasks --cli-input-json "{\"ImportTaskIds\":[\"$IMPORT_ID\"]}"
I have been most successful converting the VM to an OVA first, uploading THAT to S3 and running the import command against that.
If you are using VirtualBox you can do that from the command line:
vboxmanage export "${VM_NAME}" -o MyExportedVM.ova
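From there, the upload and import would look roughly like this (a sketch reusing the xxx_temp bucket from the question; the --disk-containers shorthand replaces the hand-written --cli-input-json):
aws s3 cp MyExportedVM.ova s3://xxx_temp/MyExportedVM.ova
aws ec2 import-image --description "ModernIE Win7IE10 (OVA)" --disk-containers "Format=ova,UserBucket={S3Bucket=xxx_temp,S3Key=MyExportedVM.ova}"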

Pass AWS credentials (IAM role credentials) to code running in Docker container

When running code on an EC2 instance, the SDK I use to access AWS resources automagically talks to the instance metadata service at 169.254.169.254 and gets that instance's AWS credentials (access key, secret) needed to talk to other AWS services.
Also there are other options, like setting the credentials in environment variables or passing them as command line args.
What is the best practice here? I would really prefer to let the container access 169.254.169.254 (by routing the requests), or even better, to run a proxy container that mimics the behavior of the real server at 169.254.169.254.
Is there already a solution out there?
The EC2 metadata service will usually be available from within Docker (unless you use a more custom networking setup - see this answer on a similar question).
If your Docker network setup prevents the metadata service from being accessed, you might use the ENV directive in your Dockerfile or pass the credentials directly at docker run time, but keep in mind that credentials obtained from IAM roles are rotated automatically by AWS.
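For local development, a common alternative is to mount your credentials file into the container read-only rather than baking keys into the image. A minimal sketch, assuming the amazon/aws-cli image (also used in the script further down):
docker run --rm -v ~/.aws:/root/.aws:ro amazon/aws-cli sts get-caller-identity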
Amazon does have mechanisms for allowing containers to access IAM roles via the SDK, by routing/forwarding requests through the ECS agent container or the host. There is too much to copy and paste here, but note that --net host is the LEAST recommended option because, without additional filtering, it gives your container full access to anything its host has permission to do.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
declare -a ENVVARS
declare AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

get_aws_creds_local () {
  # Use this to get credentials on a non-AWS host, assuming you've configured them previously.
  # Don't pass a profile into gitlab-runner; it won't see the ~/.aws/credentials file where profiles are looked up.
  awsProfile=${AWS_PROFILE:-default}
  AWS_ACCESS_KEY_ID=$(aws --profile "$awsProfile" configure get aws_access_key_id)
  AWS_SECRET_ACCESS_KEY=$(aws --profile "$awsProfile" configure get aws_secret_access_key)
  AWS_SESSION_TOKEN=$(aws --profile "$awsProfile" configure get aws_session_token)
}

get_aws_creds_iam () {
  TEMP_ROLE=$(aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session)
  AWS_ACCESS_KEY_ID=$(echo "$TEMP_ROLE" | jq -r .Credentials.AccessKeyId)
  AWS_SECRET_ACCESS_KEY=$(echo "$TEMP_ROLE" | jq -r .Credentials.SecretAccessKey)
  AWS_SESSION_TOKEN=$(echo "$TEMP_ROLE" | jq -r .Credentials.SessionToken)
}

# Call whichever applies to your environment (if both run, the assume-role credentials win):
get_aws_creds_local
get_aws_creds_iam

ENVVARS=("AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN")

# passing creds into GitLab runner
gitlab-runner exec docker stepName $(printf " --env %s" "${ENVVARS[@]}")

# using creds with a docker container
docker run -it --rm $(printf " --env %s" "${ENVVARS[@]}") amazon/aws-cli sts get-caller-identity

How to use multiple AWS accounts from the command line?

I've got two different apps that I am hosting (well the second one is about to go up) on Amazon EC2.
How can I work with both accounts at the command line (Mac OS X) but keep the EC2 keys & certificates separate? Do I need to change my environment variables before each ec2-* command?
Would using an alias that sets the environment variable in-line work? Something like: alias ec2-describe-instances1='export EC2_PRIVATE_KEY=/path; ec2-describe-instances'
You can work with two accounts by creating two profiles for the aws command line.
The aws configure command will prompt you for your AWS Access Key ID, AWS Secret Access Key and desired region, so have them ready.
Examples:
$ aws configure --profile account1
$ aws configure --profile account2
You can then switch between the accounts by passing the profile on the command line.
$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2
Note:
If you name a profile default, it becomes the default profile, i.e. the one used when there is no --profile param in the command.
More on the default profile:
If you spend more time using account1, you can make it the default by setting the AWS_DEFAULT_PROFILE environment variable. When the default environment variable is set, you do not need to specify the profile on each command.
Linux, OS X Example:
$ export AWS_DEFAULT_PROFILE=account1
$ aws dynamodb list-tables
Windows Example:
$ set AWS_DEFAULT_PROFILE=account1
$ aws s3 ls
How to set up multiple AWS accounts manually?
1) Get the access keys
AWS Console > Identity and Access Management (IAM) > Your Security Credentials > Access Keys
2) Set up the credentials file and its content
~/.aws/credentials
[default]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
[{{profile_name}}]
aws_access_key_id={{aws_access_key_id}}
aws_secret_access_key={{aws_secret_access_key}}
3) Set up the config file (profiles) and its content
~/.aws/config
[default]
region={{region}}
output={{output:"json||text"}}
[profile {{profile_name}}]
region={{region}}
output={{output:"json||text"}}
4) Run commands with a profile
Install the AWS Command Line Interface and use it, for example with AWS EC2:
aws ec2 describe-instances                                # uses [default]
aws ec2 describe-instances --profile {{profile_name}}     # uses [{{profile_name}}]
Ref
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
IMHO, the easiest way is to edit .aws/credentials and .aws/config files manually.
It's easy and it works for Linux, Mac and Windows. Just read this for more detail (1 minute read).
.aws/credentials file:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
.aws/config file:
[default]
region=us-west-2
output=json
[profile user1]   <-- note the 'profile ' prefix before the profile name (not used for the default profile)
region=us-east-1
output=text
You should be able to use the following command-options in lieu of the EC2_PRIVATE_KEY (and even EC2_CERT) environment variables:
-K <private key>
-C <certificate>
You can put these inside aliases, e.g.
alias ec2-describe-instances1='ec2-describe-instances -K /path/to/key.pem'
Create or edit this file:
vim ~/.aws/credentials
List as many key pairs as you like:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Set an environment variable to select the pair of keys you want to use:
export AWS_PROFILE=user1
Do what you like:
aws s3api list-buckets # any aws cli command now using user1 pair of keys
You can also do it command by command by including --profile user1 with each command:
aws s3api list-buckets --profile user1
# any aws cli command now using user1 pair of keys
More details: Named profiles for the AWS CLI
The new aws tools now support multiple profiles.
If you configure access with the tools, it automatically creates a default in ~/.aws/config.
You can then add additional profiles - more details at: Getting started with the AWS CLI
I created a simple tool, aaws, to switch between AWS accounts.
It works by setting the AWS_DEFAULT_PROFILE environment variable in your shell. Just make sure you have some entries in your ~/.aws/credentials file and it will easily switch between multiple accounts.
/tmp
$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
/tmp
$ aaws luk3
[luk3] 🔐 /tmp
$ aws s3 ls
2013-11-05 21:40:04 luk3thomas.com
I wrote a toolkit to switch the default AWS profile.
The mechanism is physically moving the profile keys to the default section in the config and credentials files.
The better solution today would be one of the following:
Use the aws command option --profile.
Use the environment variable AWS_PROFILE.
I don't remember why I didn't use the --profile solution; maybe I wasn't aware of its existence.
However, the toolkit can still be useful for other things. I'll add a soft-switch flag based on AWS_PROFILE in the future.
$ xsh list aws/cfg
[functions] aws/cfg/move
[functions] aws/cfg/set
[functions] aws/cfg/activate
[functions] aws/cfg/get
[functions] aws/cfg/delete
[functions] aws/cfg/list
[functions] aws/cfg/copy
Repo: https://github.com/xsh-lib/aws
Install:
curl -s https://raw.githubusercontent.com/alexzhangs/xsh/master/boot | bash && . ~/.xshrc
xsh load xsh-lib/aws
Usage:
xsh aws/cfg/list
xsh aws/cfg/activate <profilename>
You can write a shell script to set the corresponding environment variable values for each account based on user input. Doing so, you don't need to create any aliases and, furthermore, tools like the ELB tools and the Auto Scaling Command Line Tools will work across multiple accounts as well.
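A minimal sketch of such a script, with placeholder account names and key paths (source it so the exported variables persist in your current shell, e.g. . ./switch-account.sh account1):
#!/bin/bash
# switch-account.sh - point the legacy EC2 tools at one account's key and certificate
case "$1" in
  account1)
    export EC2_PRIVATE_KEY=~/keys/account1/pk.pem
    export EC2_CERT=~/keys/account1/cert.pem
    ;;
  account2)
    export EC2_PRIVATE_KEY=~/keys/account2/pk.pem
    export EC2_CERT=~/keys/account2/cert.pem
    ;;
  *)
    echo "usage: . ./switch-account.sh <account1|account2>" >&2
    ;;
esac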
To use an IAM role, you have to make an API call to STS:AssumeRole, which will return a temporary access key ID, secret key, and security token that can then be used to sign future API calls. Formerly, to achieve secure cross-account, role-based access from the AWS Command Line Interface (CLI), an explicit call to STS:AssumeRole was required, and your long-term credentials were used. The resulting temporary credentials were captured and stored in your profile, and that profile was used for subsequent AWS API calls. This process had to be repeated when the temporary credentials expired (after 1 hour, by default).
More details: How to Use a Single IAM User to Easily Access All Your Accounts by Using the AWS CLI
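These days the CLI can make that AssumeRole call for you if the role is described in a profile. A sketch of ~/.aws/config, with a placeholder account ID, role name and profile name:
[default]
region=us-west-2
[profile crossaccount]
role_arn=arn:aws:iam::123456789012:role/example-role
source_profile=default
A command such as aws s3 ls --profile crossaccount will then assume the role and refresh the temporary credentials automatically.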