I'm trying to execute Exercise 1.1 in the AWS Certified Solutions Architect Study Guide, and am stymied right away.
It says to "Install (if necessary) and configure te AWS CLI on your local system...."
I don't know if it is installed, and have no idea what to do here, but I will press on.
It also says "To get you started, here are some basice CLI commets:"
aws s3 ls
aws s3 mb <bucketname>
aws s3 cp /path/to/file.txt s3://bucketname
OK. I type aws s3 ls and get an error saying that my access keys aren't set up and that I can fix this by running aws configure.
I run aws configure and am asked for: 1) AWS Access Key ID, 2) AWS Secret Access Key, 3) Default region name, and 4) Default output format.
I have no idea whatsoever what any of this is. I recall having set up a key pair at some point in the not-too-distant past, and managed to find a reference to it. I put the fingerprint of the public key in the first field, then found the file I was sent for the secret key and tried to cut and paste it. But it has a bunch of lines, so cutting and pasting doesn't work. I really doubt that this is the right stuff anyway, but I have no idea what would be. Can anyone help?
The access key and secret are your credentials to hit the AWS API. You have to create an IAM user (as a best practice), then create the access key and secret for that user, and then configure the details using aws configure.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
Configuring the AWS CLI
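Once your credentials are configured, the book's sample commands should work, with one caveat: aws s3 mb expects a full s3:// URL rather than a bare bucket name. A sketch, using a hypothetical bucket name:

```shell
aws s3 ls                                            # list your buckets
aws s3 mb s3://my-example-bucket                     # "make bucket" (name is hypothetical)
aws s3 cp /path/to/file.txt s3://my-example-bucket/  # upload a file
```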
I have an old archive folder on an on-premises Windows server that I need to put into an S3 bucket, but I'm having issues. It's more my knowledge of AWS, to be honest, but I'm trying.
I have created the S3 bucket, and I was able to attach it to the server using net share (AWS gives you the command via the AWS gateway) and give it a drive letter. I then tried to use robocopy to copy the data, but it didn't like the drive letter for some reason.
I then read I can use the AWS CLI so I tried something like:
aws s3 sync z: s3://archives-folder1
I get - fatal error: Unable to locate credentials
I guess I need to put some credentials somewhere (.aws), but after reading too many documents I'm not sure what to do at this point. Could someone advise?
Maybe there is a better way.
Thanks
You do not need to 'attach' the S3 bucket to your system. You can simply use the AWS CLI command to communicate directly with Amazon S3.
First, however, you need to provide the AWS CLI with a set of AWS credentials that can be used to access the bucket. You can do this with:
aws configure
It will ask for an Access Key and Secret Key. You can obtain these from the Security Credentials tab when viewing your IAM User in the IAM management console.
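For reference, a typical session looks something like this (the key values shown are the placeholders from AWS's own documentation, not real credentials, and the bucket name is the one from your question):

```shell
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json

# After that, the sync should work directly, no drive letter needed:
aws s3 sync Z:\ s3://archives-folder1
```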
I am starting in the AWS world and recently configured my local environment to connect to my AWS account through the terminal, but I'm having a hard time finding the correct command to log in. Could someone please point me to how to do this?
Thank you beforehand
The AWS CLI does not "log in". Rather, each individual request is authenticated with a set of credentials (similar to a username and password). It's a bit like making a phone call -- you do not need to "log in" to your telephone. Instead, the system is configured to already know who you are.
To store credentials for use with the AWS CLI, you can run the aws configure command. It will prompt you for an Access Key and Secret Key, which will be stored in a configuration file. These credentials will then be used with future AWS CLI commands.
If you are using your own AWS Account, you can obtain an Access Key and Secret Key by creating an IAM User in the Identity and Access Management (IAM) management console. Simply select programmatic access to obtain these credentials. You will need to assign appropriate permissions to this IAM User. (It is not recommended to use your root login for such purposes.)
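If you prefer the command line over the console, the same setup can be sketched with the CLI itself. The user name and the attached policy below are illustrative, and you need an identity that already has IAM permissions to run these:

```shell
# Create a dedicated IAM user for CLI/programmatic access:
aws iam create-user --user-name cli-user

# Grant least-privilege permissions (here, read-only S3 as an example):
aws iam attach-user-policy --user-name cli-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Generate the Access Key and Secret Key for that user:
aws iam create-access-key --user-name cli-user
```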
To log in from the shell you will need:
IAM keys (https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/)
AWS Cli (https://aws.amazon.com/cli/)
According to much advice, we should not configure an IAM user but use an IAM role instead, to avoid someone managing to grab the user credentials from the .aws folder.
Let's say I don't have any EC2 instances. Am I still able to perform S3 operations via the AWS CLI? Say, aws s3 ls:
MacBook-Air:~ user$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
You are correct that, when running applications on Amazon EC2 instances or as AWS Lambda functions, an IAM role should be assigned that will provide credentials via the EC2 metadata service.
If you are not running on EC2/Lambda, then the normal practice is to use IAM User credentials that have been created specifically for your application, with least possible privilege assigned.
You should never store the IAM User credentials in an application -- there have been many cases of people accidentally saving such files into GitHub, and bad actors grab the credentials and have access to your account.
You could store the credentials in a configuration file (eg via aws configure) and keep that file outside your codebase. However, there are still risks associated with storing the credentials in a file.
A safer option is to provide the credentials via environment variables, since they can be defined through a login profile and will never be included in the application code.
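As a sketch, with the placeholder values from the AWS documentation standing in for real credentials (in practice these would come from your login profile or a secrets manager, never from application code):

```shell
# Credentials supplied via the environment; the CLI checks these
# variables before falling back to the ~/.aws/credentials file.
# The values below are AWS's documented placeholders, not real keys.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
# Any subsequent CLI command (e.g. aws s3 ls) now picks these up.
```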
I don't think you can use service roles on your personal machine.
You can, however, use multi-factor authentication with the AWS CLI.
You can use credentials on any machine, not just EC2.
Follow the steps as described by the documentation for your OS.
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Thus far I get access to my AWS resources using an Access Key ID and Secret Access Key, but every time I end my session I have to manually re-enter these keys by typing aws configure.
Is there an automated way, perhaps with an SSH private key on the local host?
Generally speaking when you use "aws configure", and enter your credentials, those credentials are saved in the .aws/credentials file in a path on your machine (exactly where will depend on the OS). You shouldn't have to run 'aws configure' again unless your credentials change.
Once that is done - one time - every further execution of an AWS CLI command should just use those stored credentials - you should not have to ever enter them more than once.
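For illustration, this is roughly the file that aws configure leaves behind (placeholder keys from the AWS docs, written here to a local demo directory rather than the real .aws folder):

```shell
# Recreate the INI layout of ~/.aws/credentials in a demo directory.
AWS_DIR="./demo-aws"            # stand-in for ~/.aws
mkdir -p "$AWS_DIR"
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
# Once this file exists, CLI commands find the keys automatically.
echo "wrote $AWS_DIR/credentials"
```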
AWS provides a feature to handle access and secret keys in the form of profiles.
Let's say you have multiple accounts, or multiple regions.
You can set those up as profiles with the help of aws configure --profile <profilename>
Then, when performing operations in one particular account and region:
export AWS_PROFILE=<profilename>
This makes it easy to work with multiple environments.
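A minimal sketch of what that looks like on disk and in the shell, assuming a hypothetical "prod" profile and using AWS's documented placeholder keys (written to a local demo directory rather than the real .aws folder):

```shell
# Two profiles in one credentials file (placeholder keys throughout).
AWS_DIR="./demo-aws-profiles"   # stand-in for ~/.aws
mkdir -p "$AWS_DIR"
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[prod]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBFd2Zp9Utkdh3yCo8nvbEXAMPLEKEY
EOF
# Select the profile for subsequent CLI commands:
export AWS_PROFILE=prod
echo "active profile: $AWS_PROFILE"
```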
I am new to Amazon EMR and Hadoop in general. I am currently trying to set up a Pig job on an EMR cluster and to import and export data from S3. I have set up a bucket in s3 with my data named "datastackexchange". In an attempt to begin to copy the data to Pig, I have used the following command:
ls s3://datastackexchange
And I am met with the following error message:
AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
I presume I am missing some critical steps (presumably involving setting up the access keys). As I am very new to EMR, could someone please explain what I need to do to get rid of this error and allow me to use my S3 data in EMR?
Any help is greatly appreciated - thank you.
As you correctly observed, your EMR instances do not have the privileges to access the S3 data. There are many ways to specify the AWS credentials to access your S3 data, but the recommended way is to create IAM role(s) for accessing your S3 data.
Configure IAM Roles for Amazon EMR explains the steps involved.
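As a rough sketch of the CLI side (the release label and instance details are illustrative; adjust them for your cluster):

```shell
# One-time creation of the default EMR service and instance roles:
aws emr create-default-roles

# Launch a Pig cluster that uses those roles; the instance role is what
# gives the cluster's nodes permission to read s3://datastackexchange.
aws emr create-cluster --name "pig-cluster" \
    --release-label emr-5.36.0 \
    --applications Name=Pig \
    --use-default-roles \
    --instance-type m5.xlarge --instance-count 3
```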