Is there a way to specify AWS_SESSION_TOKEN when using SQLWorkbench and Athena JDBC driver?

I am using SQLWorkbench to connect to AWS Athena and the SQLWorkbench Variables section to specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This works. However, when I have to connect to accounts that require AWS_SESSION_TOKEN, the connection fails. I can connect by modifying the credentials file, but that's inconvenient. Is there a better way?

I received an answer from AWS support, and at this point, according to them, it appears that the driver does not support the AWS_SESSION_TOKEN parameter.
To answer the question that appeared in the thread: if you have to use a session token, it appears that the only way is to modify your AWS credentials file. This can be done either by adding a new profile section or by modifying the default one. Here is an example of a connection string for the former, where simba_session is a profile in the credentials file:
jdbc:awsathena://AwsRegion=us-west-2;AwsCredentialsProviderClass=com.simba.athena.amazonaws.auth.profile.ProfileCredentialsProvider;AwsCredentialsProviderArguments=simba_session;
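For reference, the corresponding profile section in the AWS credentials file might look something like this (placeholder values):
[simba_session]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_session_token = xxxxxxxxxxxxxxxxxxxxxxxx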
If you don't need to use a session token, you can specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY by pressing the Variables button and adding the keys/values. In this case, the connection string can look like this:
jdbc:awsathena://AwsRegion=us-west-2;AwsCredentialsProviderClass=com.simba.athena.amazonaws.auth.DefaultAWSCredentialsProviderChain;
Also note that you can add an S3OutputLocation (if needed) and a Workgroup (if needed) by pressing the Extended Properties button and adding keys/values, rather than putting them in the connection string. A sketch of those entries is shown below.
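For example, the Extended Properties entries might be as simple as the following (placeholder bucket and workgroup names):
S3OutputLocation    s3://your-athena-results-bucket/
Workgroup           primary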

Related

Is there a way to switch between or reset role based credentials using `okta-aws` command line utility?

Going through Okta's GitHub repo, it doesn't seem like there is an easy way to switch between AWS roles using their command-line utility.
Normally I get temporary credentials using okta-aws default sts get-caller-identity. If I'm switching between projects and need to likewise switch my temporary credentials to a different role, the best method I've found thus far is to delete two files, ~/.okta/profile & ~/.okta/.current-session, then re-run the above command.
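In other words, the reset amounts to:
rm ~/.okta/profile ~/.okta/.current-session
okta-aws default sts get-caller-identity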
I've also found that when OKTA_STS_DURATION in .okta/config.properties is set for longer than is configured for that AWS role, you either need to wait until the local duration setting expires or reset the credentials process using the same method as above.
Is there a way to switch between or reset role based credentials using okta-aws command line utility?
Note: I'm using role as interchangeable with profile -- please correct me if this is inaccurate.
At this point the best solution is to switch over to gimme-aws-creds. After you run through the setup, it ends up being easier to use, and it appears to be better maintained (it is published by Nike).

What's the correct format of private_key when using it as an environment variable?

I am trying to use private_key for some GCP Node.js client libraries, e.g. @google-cloud/pubsub and @google-cloud/trace-agent.
I got the private_key from the service account credentials JSON file like this:
I am trying to use it as an environment variable for a Cloud Function.
.env.yaml:
And use it like this:
// ...
credentials: {
  private_key: envs.private_key,
  client_email: envs.client_email
},
projectId: envs.X_GOOGLE_GCLOUD_PROJECT
But got an error:
Error: error:0906D06C:PEM routines:PEM_read_bio:no start line
I checked the Stackdriver logs; here is the private_key environment variable I got:
My guess is the format of private_key is not correct. It's probably caused by the newline symbol \n. So, what's the correct format when using private_key like this?
Setting the key in the .env.yaml file is not a good idea. You could end up committing it to git, perhaps in a public repo, and it would sit in plain text as an environment variable of your function.
It would be better to store the key file in a bucket and load it at runtime. That way you keep no secrets in the project files.
Another solution is to encrypt the key with KMS and decrypt it at runtime. In this case you still have the secret in your project files, but encrypted.
But why do you need another service account? Isn't the one already attached to the function enough?
GCLOUD_KEY='{"private_key_id":"XXX", "private_key":"YYY",
"client_email":"ZZZ#ZZZ.COM", "client_id":"ABC123",
"type":"service_account"}'

Apache Zeppelin with Athena handling session token using jdbc Interpreter

I am trying to connect Athena with Apache Zeppelin. I need to handle secret_key, access_key, and session_token. I am finding it hard to establish the connection with the Zeppelin JDBC interpreter.
I am following the steps as mentioned in this blog post,
If anyone can help me establish the connection using the AWS session token approach, that would be helpful.
Thank You
The main docs for this are here:
https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html
I found there are two driver versions, 1.1.0 and 1.0.1. I could only get Zeppelin working with 1.1.0, and the links on that page don't go to that file; the only way to get it was using the aws s3 cp command,
e.g.
aws s3 cp s3://athena-downloads/drivers/AthenaJDBC41-1.1.0.jar .
although I've given feedback on that page so it should be fixed soon.
Regarding the parameters, you use default.user to enter the Access_Key and default.password to enter the Secret_key. The default.driver should be com.amazonaws.athena.jdbc.AthenaDriver.
The default.s3_staging_dir is actually the bucket where CSV results are written, so it needs to match your Athena settings.
There is no mention of where you might put a session token; however, you could always try putting it in the JDBC connection string (which goes in the default.url parameter value),
e.g.
jdbc:awsathena://athena.{REGION}.amazonaws.com:443?SessionToken=blahblahsomethingrealsessiontokengoeshere
but of course, replace {REGION} with the actual aws region and use your real session token.
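Putting the pieces together, the relevant JDBC interpreter properties might end up looking something like this (placeholder values; the SessionToken idea above is untested):
default.driver          com.amazonaws.athena.jdbc.AthenaDriver
default.url             jdbc:awsathena://athena.{REGION}.amazonaws.com:443?SessionToken=<your-session-token>
default.user            <your-access-key-id>
default.password        <your-secret-access-key>
default.s3_staging_dir  s3://<your-athena-results-bucket>/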

access credentials error in Copy Command in S3

I am facing an access credentials error when I run the COPY command in S3.
My copy command is:
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=D93vB$;yYq'
csv;
The error message is:
error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secret_access_key=...[;token=...]'
'aws_access_key_id=?; aws_secret_access_key=?'
Could anyone please explain what aws_access_key_id and aws_secret_access_key are?
Where can we see them?
Thanks in advance.
Mani
The access key you're using looks more like a secret key; access keys usually look something like "AKIAXXXXXXXXXXX".
Also, don't post them openly in Stack Overflow questions. If someone gets hold of a set of access keys, they can access your AWS environment.
Access Key & Secret Key are the most basic form of credentials / authentication used in AWS. One is useless without the other, so if you've lost one of the two, you'll need to regenerate a set of keys.
To do this, go into the AWS console, go to the IAM service (Identity and Access Management) and go into Users. Here, select the user that you're currently using (probably yourself) and go to the Security Credentials tab.
Here, under Access keys, you can see which sets of keys are currently active for this user. You can only have 2 sets active at a time, so if there are already 2 sets present, delete one and create a new pair. You can download the new pair as a file called "credentials.csv", and this will contain your user, access key and secret key.
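With those two values in hand, the COPY command from the question would then follow the format the error message asks for, roughly like this (placeholder values):
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=<your-access-key-id>;aws_secret_access_key=<your-secret-access-key>'
csv;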

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but passing them directly in the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that also should be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
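As a rough sketch of that approach for the Query call mentioned in the error, with a made-up table name and key (adjust to your own schema):
import boto3
from boto3.dynamodb.conditions import Key

# Let boto3 pick up credentials from the environment/config and resolve
# the real DynamoDB endpoint from the region instead of endpoint_url.
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')

# Hypothetical table and partition key, just to illustrate the Query call.
table = dynamoConnection.Table('my-table')
response = table.query(KeyConditionExpression=Key('id').eq('some-id'))
print(response['Items'])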
This is a sign that your time zone or time settings are off. Maybe you can check your:
1. Time zone
2. Time settings.
If your time is not being set automatically, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.