I want to assign the result of a shell command to a variable like this:
SOME_KEY = $(shell aws secretsmanager get-secret-value ...)
And for a specific target, I want to override the AWS credentials:
get-some-key: export AWS_ACCESS_KEY_ID=$(SOME_OTHER_AWS_ACCESS_KEY_ID)
get-some-key: export AWS_SECRET_ACCESS_KEY=$(SOME_OTHER_AWS_SECRET_ACCESS_KEY)
get-some-key:
	echo $$SOME_KEY
I expected to get a value, but I don't, since the $(shell) call runs when make parses the Makefile, using the initial AWS credentials.
What is the correct way to pass the AWS credentials to that shell command?
Thanks in advance!
I think I finally understand what you're trying to do. You have AWS credential environment variables set outside make, and you want to use overriding credentials when calling this target. If that is correct, consider the following:
id = aws sts get-caller-identity
get-id-with-creds: export AWS_ACCESS_KEY_ID=redacted
get-id-with-creds: export AWS_SECRET_ACCESS_KEY=redacted
get-id-with-creds: get-id
get-id:
	$(id)
And I verified that works:
make get-id # gives back my default user
make get-id-with-creds # gives back my user with creds in Makefile
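Applied back to your original goal, a minimal sketch might look like the following (the secret id is a placeholder). The key point is that the aws call happens inside the recipe, so it sees the target-specific exports, whereas $(shell ...) runs while make parses the Makefile and therefore only sees the initial environment:
# The aws call below runs in the recipe's shell, so it sees the per-target exports.
get-some-key: export AWS_ACCESS_KEY_ID=$(SOME_OTHER_AWS_ACCESS_KEY_ID)
get-some-key: export AWS_SECRET_ACCESS_KEY=$(SOME_OTHER_AWS_SECRET_ACCESS_KEY)
get-some-key:
	aws secretsmanager get-secret-value --secret-id my-secret --query SecretString --output text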
I have a script written in bash where I create a secret with a certain name.
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
gcloud secrets create "$secret_id" --replication-policy="automatic"
I want to be able to directly add the secret value to my secret as well, so that I do not have to go into my GCP account and set it manually (which would defeat the purpose). I have seen that it is possible to attach a file through the following flag; however, there does not seem to be a similar option for a plain secret value.
--data-file="/path/to/file.txt"
From https://cloud.google.com/sdk/gcloud/reference/secrets/create#--data-file:
--data-file=PATH
File path from which to read secret data. Set this to "-" to read the secret data from stdin.
So set --data-file to - and pass the value over stdin. Note: if you use echo, add -n to avoid appending a trailing newline.
echo -n "$secret_value" | gcloud secrets create ... --data-file=-
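Putting it together with your script, a minimal sketch might look like this (printf is used instead of echo -n, which sidesteps echo's portability quirks):

#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"

gcloud config set project "$project_id"

# Create the secret and store its first version from stdin in one step.
printf '%s' "$secret_value" | gcloud secrets create "$secret_id" \
  --replication-policy="automatic" \
  --data-file=-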
The aws command is
aws s3 ls --endpoint-url http://s3.amazonaws.com
Can I load the endpoint-url from a config file instead of passing it as a parameter?
This is an open bug in the AWS CLI. There's a link in that issue to a CLI plugin which might do what you need.
It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3) you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service and that the URL in your example was just, well, an example...
alias aws='aws --endpoint-url http://website'
Updated Answer
Here is an alternative alias to address the OP's specific need and comments above
alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '
The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing
http://localhost:4566
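For example, assuming SOME_CONFIG_FILE is exported and readable (the file name below is just an illustration):

export SOME_CONFIG_FILE="$HOME/.aws-endpoint-override"
echo 'http://localhost:4566' > "$SOME_CONFIG_FILE"
aws s3 ls   # effectively runs: aws --endpoint-url http://localhost:4566 s3 ls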
Original Answer
Thought I'd share an alternative version of the alias
alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
I replicated this idea from another alias I use for Terraform:
alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '
I happen to use direnv with a /Users/darren/Workspaces/current-client/.envrc containing
source_up
PATH_add bin
export AWS_PROFILE=saml
export AWS_REGION=eu-west-1
export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project
...
A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains
source_up
...
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
where LocalStack is running in Docker, exposed on port 4566.
You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable via whatever mechanism, and with whatever value, suits your use case.
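With the alias from the original answer in place, a quick sanity check might look like this (again assuming LocalStack on port 4566):

export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
aws s3 ls    # expands to: aws --endpoint-url http://localhost:4566 s3 ls
unset AWS_ENDPOINT_OVERRIDE
aws s3 ls    # falls back to the real AWS endpoints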
I want to specify the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID at run-time.
I already tried using
hadoop -Dfs.s3a.access.key=${AWS_ACESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} fs -ls s3a://my_bucket/
and
export HADOOP_CLIENT_OPTS="-Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}"
and
export HADOOP_OPTS="-Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}"
In the last two examples, I tried to run with:
hadoop fs -ls s3a://my-bucket/
In all the cases I got:
-ls: Fatal internal error
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
What am I doing wrong?
This is the correct way to pass the credentials at runtime:
hadoop fs -Dfs.s3a.access.key=${AWS_ACCESS_KEY_ID} -Dfs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} -ls s3a://my_bucket/
Your syntax needs a small fix. Also make sure that empty strings are not passed as the values of these properties; that would make the runtime properties invalid, and the client would go on searching for credentials further down the authentication chain.
The S3A client follows this authentication chain:
1) If login details were provided in the filesystem URI, a warning is printed and then the username and password are extracted as the AWS key and secret respectively.
2) The fs.s3a.access.key and fs.s3a.secret.key are looked for in the Hadoop XML configuration.
3) The AWS environment variables are then looked for.
4) An attempt is made to query the Amazon EC2 Instance Metadata Service to retrieve credentials published to EC2 VMs.
Other possible methods to pass the credentials at runtime (note that it is neither safe nor recommended to supply them at runtime):
1) Embed them in the S3 URI
hdfs dfs -ls s3a://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@my-bucket/
If the secret key contains any + or / symbols, escape them with %2B and %2F respectively.
Never share the URL, logs generated using it, or use such an inline authentication mechanism in production.
2) Export environment variables for the session
export AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
hdfs dfs -ls s3a://my-bucket/
I think part of the problem is that, confusingly, unlike the JVM -D options, the Hadoop -D option expects a space between the -D and the key, e.g.:
hadoop fs -ls -D fs.s3a.access.key=AAIIED s3a://landsat-pds/
I would still avoid doing that on the command line though, as anyone who can do a ps command can see your secrets.
Generally we stick them into core-site.xml when running outside EC2; in EC2 it's handled magically.
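For reference, a minimal core-site.xml fragment for that approach might look like this (values are placeholders):

<configuration>
  <!-- S3A credentials; keep this file readable only by trusted users -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_AWS_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
  </property>
</configuration>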
I seem to be having difficulty deleting the access key profile I created for a test user using
aws configure --profile testuser
I have tried deleting the entries in my ~/.aws directory; however, when I run aws configure, I get the following error.
botocore.exceptions.ProfileNotFound: The config profile (testuser) could not be found
A workaround is adding [profile testuser] to my ~/.aws/config file, but I don't want to do that. I want to remove all traces of this testuser profile from my machine.
The Configuring the AWS Command Line Interface documentation page lists various places where configuration files are stored, such as:
Linux: ~/.aws/credentials
Windows: C:\Users\USERNAME\.aws\credentials
There is also a default profile, which sounds like something that might be causing your situation:
Linux: export AWS_DEFAULT_PROFILE=user2
Windows: set AWS_DEFAULT_PROFILE=user2
I suggest checking to see whether that environment variable has been set.
Look for a hidden folder, .aws; its path is most likely '/Users/COMPUTER_NAME/.aws/credentials' (change COMPUTER_NAME to your own). There you will find two files, config and credentials; edit them with a regular text editor.
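To remove all traces of the testuser profile, delete its sections from both files if they are present. A sketch of what to look for (assuming the profile was written by aws configure --profile testuser; values are placeholders):

~/.aws/credentials:

[testuser]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

~/.aws/config:

[profile testuser]
region = ...
output = ...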
I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried passing them directly to the method rather than using env variables, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even hard-coded the credentials. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
This is a sign that your system time or time zone is off. Check your:
1. Time zone
2. Time settings.
If there are automatic time settings, enable them to fix your clock.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.