Kinesis stream account incorrect - python-2.7

I have set up my PC with Python and connections to AWS. This has been successfully tested using the s3_sample.py file; I had to create an IAM user account with the credentials in a file, which worked fine for S3 buckets.
My next task was to create an MQTT bridge and put some data into a Kinesis stream using awslabs/mqtt-kinesis-bridge.
This all seems to be OK, except I get an error when I run bridge.py. The error is:
Could not find ACTIVE stream:my_first_stream error:Stream my_first_stream under account 673480824415 not found.
Strangely, this is not the account I use in the .boto file that the bridge suggests setting up; those are the same credentials I used for the S3 bucket:
[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis
It would seem to me that bridge.py has a hardcoded account, but I cannot see it, and I can't see where it points to the .boto file for credentials.
Thanks in advance.

So the issue of not finding the ACTIVE stream for the account is resolved by:
ensuring you are connected to the US-EAST-1 region, as this is the default region for bridge.py
creating your stream; you will only need 1 shard (see the sketch after this list)
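For reference, here is a minimal sketch of creating such a stream with boto 2 (the library bridge.py imports). Treat it as an illustration under those assumptions rather than as part of the bridge itself:
import boto.kinesis

# Connect to US-EAST-1, the region bridge.py defaults to
conn = boto.kinesis.connect_to_region('us-east-1')

# A single shard is enough for the bridge
conn.create_stream(stream_name='my_first_stream', shard_count=1)

# Wait for the stream to become ACTIVE before starting bridge.py
print(conn.describe_stream('my_first_stream')['StreamDescription']['StreamStatus'])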
The next problem stems from the specific versions of MQTT and the Python library paho-mqtt I installed. The bridge application was written with the API of MQTT 1.2.1, using paho-mqtt 0.4.91, in mind.
The new version available for download on their website interacts with the paho-mqtt library differently: it passes an additional "flags" object to the on_connect callback. This generates the error I was experiencing, since the callback is not expecting the fifth argument.
You should be able to fix it by making the following change to bridge.py.
Line 104 currently looks like this:
def on_connect(self, mqttc, userdata, msg):
Simply add flags after userdata, so that the callback function looks like this:
def on_connect(self, mqttc, userdata, flags, msg):
This should resolve the final error about the incorrect number of arguments being passed.
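For comparison, a standalone sketch of the newer callback signature with paho-mqtt 1.x (the broker host and topic below are placeholders, not values from bridge.py):
import paho.mqtt.client as mqtt

# paho-mqtt 1.x passes four arguments to on_connect: client, userdata, flags, rc
def on_connect(client, userdata, flags, rc):
    print('Connected with result code %s' % rc)
    client.subscribe('test/topic')  # placeholder topic

client = mqtt.Client()
client.on_connect = on_connect
client.connect('localhost', 1883, 60)  # placeholder broker
client.loop_forever()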
Hope this helps others; thanks for the efforts.

When you call the Python SDK for an AWS service, there is a line in bridge.py that imports the boto module:
import boto
By default, boto looks for credentials in the .boto file. Here is the explanation from Boto Config:
A boto config file is a text file formatted like an .ini configuration file that specifies values for options that control the behavior of the boto library. In Unix/Linux systems, on startup, the boto library looks for configuration files in the following locations and in the following order:
/etc/boto.cfg - for site-wide settings that all users on this machine will use
~/.boto - for user-specific settings
~/.aws/credentials - for credentials shared between SDKs
Of course, you can also set the environment variables directly:
export AWS_ACCESS_KEY_ID="Your AWS Access Key ID"
export AWS_SECRET_ACCESS_KEY="Your AWS Secret Access Key"
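As a quick sanity check (an illustration only, not part of bridge.py), boto 2 exposes the configuration it read at import time, so you can confirm which access key it will actually use:
import boto

# boto.config is populated from /etc/boto.cfg and ~/.boto when boto is imported;
# environment variables such as AWS_ACCESS_KEY_ID are handled separately by the credential chain
access_key = boto.config.get('Credentials', 'aws_access_key_id')
print('boto will sign requests with access key: %s' % access_key)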

Related

How to specify the GCP credential location in the application.properties file (for using Pub/Sub in GCP)?

It seems straightforward to do this by passing the service account key file (generated from the GCP console) and specifying the file location in the application.properties file. However, I tried all the following options:
1. spring.cloud.gcp.credentials.location=file:/home/my_user_id/mp6key.json
2. spring.cloud.gcp.credentials.location=file:src/main/resources/mp6key.json
3. spring.cloud.gcp.credentials.location=file:./main/resources/mp6key.json
4. spring.cloud.gcp.credentials.location=file:/src/main/resources/mp6key.json
It all ended up with the same error:
java.io.FileNotFoundException: /home/my_user_id/mp6key.json (No such file or directory)
Could anyone advise where I should put the key file and how I should specify the path to it properly?
The same programs run successfully in Eclipse, with messages published and subscribed using Pub/Sub on GCP (using the project ID/service account key generated in GCP), but I am now stuck with the above issue after deploying to run on GCP.
As mentioned in the official documentation, the credentials file can be obtained from a number of different locations such as the file system, classpath, URL, etc.
For example, if the service account key file is stored on the classpath as src/main/resources/key.json, pass the following property:
spring.cloud.gcp.credentials.location=classpath:key.json
If the key file is stored somewhere else on your local file system, use the file prefix in the property value:
spring.cloud.gcp.credentials.location=file:<path to key file>
My line looks like this:
spring.cloud.gcp.credentials.location=file:src/main/resources/[my_json_file]
And this works.
The following also works if I put it in the root of the project directory:
spring.cloud.gcp.credentials.location=file:./[my_json_file]
Have you tried following this quickstart? Please try to follow it carefully and explain if you get any errors while finishing it.
Anyway, before running your Java program, try running the following on the console (modify it with the exact path where you store your key):
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/mp6key.json"
How are you authenticating your credentials in your Java code?
My answer is easy: if you run your code on GCP, you don't have to use a service account key file. Problem eliminated, problem solved!
More seriously, have a look at service identity. I don't know what your current service is (Compute? Functions? Cloud Run?). Anyway, you can attach a service account to GCP components. Then, when you code, simply use the default credentials; the component identity is loaded automatically. No key to manage, no key to store securely, no key to rotate!
If you provide more detail on your target platform, I can provide some guidance on how to achieve this.
Keep in mind that service account key files are designed to be used by automated apps (without a user account involved) hosted outside GCP (on-prem, another cloud provider, a CI/CD, Apigee, ...).
UPDATE
When you use your personal account, you can also use the default credential.
Install gcloud SDK on your computer
Use the command gcloud auth application-default login
Follow the instructions
Enjoy!
If it doesn't work, get the <path> displayed after the login command and set this value in the environment variable named GOOGLE_APPLICATION_CREDENTIALS.
If you definitely want to use a service account key file (which is a security issue for the reason above, but...), you can use it locally:
Either set the JSON key file path in the GOOGLE_APPLICATION_CREDENTIALS environment variable
Or run this command: gcloud auth activate-service-account --key-file=<path to your json key file>
Provided your file is in the resources folder, try
file://mp6key.json
Using file:// instead of file:/ works for me, at least.

How do I set up AWS credentials on Media Temple DV

I am having a hard time setting up the credentials for AWS S3 usage via aws-php-sdk within Media Temple.
I continue to receive the error: Cannot read credentials from /.aws/credentials
I tried to follow the guide to install the AWS CLI via https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html. I then used the following to set the credentials via https://docs.aws.amazon.com/cli/latest/userguide/tutorial-ec2-ubuntu.html#configure-cli
... but I still get that error.
I then had a chat with Media Temple support, who created the .aws/credentials file in root, but then the error message changed to:
Warning: is_readable(): open_basedir restriction in effect. File(/.aws/credentials) is not within the allowed path(s)
MT advised me not to change the basedir settings. They also advised me to simply change where the credentials are read from, if possible.
Anyone successfully use AWS credentials on MT?
Trying to do this with the AWS CLI via SSH was like beating my head against a brick wall on Media Temple.
I then tried to set the credentials via environment variables, but that was a no-go.
I then got the idea to put the credentials file within a directory that PHP could access. However, I had to set the location where aws-php-sdk would look for it. I found the environment variable in some documentation and tried to set it via PHP's putenv() function. No dice.
I then searched aws-php-sdk for the initial error I was seeing and backtracked until I found where the credentials file location was being set. It turns out the documentation was wrong and the correct environment variable name was HOME.
In the end, all that was needed was to set HOME prior to using AWS. Easy enough, but it should have been 100x easier to figure out. Something along these lines:
// Set environment variable for credentials location
putenv('HOME=../');
// Set bucket name
$this->bucket = $bucket;
// Create an S3Client
$this->s3Client = new Aws\S3\S3Client([
    'profile' => $this->profile,
    'version' => $this->version,
    'region'  => $this->region
]);

Load a JSON file's content into a Heroku environment variable

I am using the Google Speech API in my Django web app. I have set up a service account for it and am able to make API calls locally. I have pointed the local GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account's JSON file, which contains all the credentials.
This is a snapshot of my service account's JSON file (image not included in this text).
I have tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS environment variable by running:
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< myProjCreds.json)"
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: {
^^ It gets terminated at the first occurrence of " in the JSON file, which is immediately after {
and
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS='$(< myProjCreds.json)'
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: $(< myProjCreds.json)
^^ The command gets saved into the environment variable
I tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS env variable to the content of the service account's JSON file, but it didn't work (because apparently this variable's value needs to be an absolute path to the JSON file). I found a method which authorizes a developer account without loading a JSON file, instead using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY. Here is the GitHub discussion page for it.
I want something similar (or something different) for my Django web app, and I want to avoid uploading the JSON file to my Django web app's directory (if possible) for security reasons.
Depending on which library you are using to communicate with the Speech API, you may use several approaches:
You may serialize your JSON data using base64 or something similar and set the resulting string as one environment variable. Then, during app boot, you can decode this data and configure your client library appropriately (see the sketch after this list).
You may set each pair from the credentials file as a separate env variable and use them accordingly. Maybe the library that you're using supports authentication using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY, similar to the Ruby client that you're linking to.
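A rough sketch of the first approach, assuming the google-cloud Speech client and an env var name of your own choosing (here SPEECH_CREDS_B64, set to the base64-encoded contents of the key file):
import base64
import json
import os

from google.cloud import speech
from google.oauth2 import service_account

# Decode the base64-encoded service account JSON stored in the env var
info = json.loads(base64.b64decode(os.environ['SPEECH_CREDS_B64']).decode('utf-8'))
credentials = service_account.Credentials.from_service_account_info(info)

# Pass the credentials explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS
client = speech.SpeechClient(credentials=credentials)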
EDIT:
Assuming that you are using the official Google client library, you have several options for authenticating your requests, including the one you are using (service account): https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html. You may save your credentials to a temp file and pass its path to the Client object (https://google-auth.readthedocs.io/en/latest/user-guide.html#service-account-private-key-files), but that seems to me a very hacky workaround. There are a couple of other auth options that you may use.
EDIT2:
I've found one more link with a more robust approach: http://codrspace.com/gargath/using-google-auth-apis-on-heroku/. It contains Ruby code, but you can certainly do something similar in Python.
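In Python, the per-field variant might look roughly like this (the env var names follow the GitHub discussion above; GOOGLE_PRIVATE_KEY usually needs its escaped newlines restored, and token_uri is the standard Google OAuth token endpoint):
import os

from google.oauth2 import service_account

# Rebuild just enough of the service account info from separate env vars
info = {
    'type': os.environ.get('GOOGLE_ACCOUNT_TYPE', 'service_account'),
    'client_email': os.environ['GOOGLE_CLIENT_EMAIL'],
    'private_key': os.environ['GOOGLE_PRIVATE_KEY'].replace('\\n', '\n'),
    'token_uri': 'https://oauth2.googleapis.com/token',
}
credentials = service_account.Credentials.from_service_account_info(info)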
Let's say the filename is key.json
First, copy the content of the key.json file and add it to an environment variable, let's say KEY_DATA.
Solution 1:
If my command to start the server is node app.js, I'll do echo "$KEY_DATA" > key.json && node app.js (quoting $KEY_DATA so the JSON is written out intact).
This will create a key.json file with the data from KEY_DATA and then start the server.
Solution 2:
Save the data from the KEY_DATA env variable in some variable and then parse it as JSON, so you have an object which you can pass for authentication purposes.
Example in Node.js:
const data = process.env.KEY_DATA;
const dataObj = JSON.parse(data);

Apache Zeppelin with Athena: handling the session token using the JDBC interpreter

I am trying to connect Athena with Apache Zeppelin. I need to handle secret_key, Access_key, and Session_token. I am finding it hard to establish the connection with the Zeppelin JDBC interpreter.
I am following the steps mentioned in this blog.
If anyone can help me establish the connection using the AWS session token approach, that would be helpful.
Thank you
The main docs for this are here:
https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html
I found there are two driver versions, 1.1.0 and 1.0.1. I could only get Zeppelin working with 1.1.0, and the links on that page don't go to that file; the only way to get it was using the aws s3 cp command,
e.g.
aws s3 cp s3://athena-downloads/drivers/AthenaJDBC41-1.1.0.jar .
although I've given feedback on that page so it should be fixed soon.
Regarding the parameters: use default.user and enter the Access_Key, and use default.password and enter the Secret_key. The default.driver should be com.amazonaws.athena.jdbc.AthenaDriver.
The default.s3_staging_dir is actually the bucket where CSV results are written, so it needs to match your Athena settings.
There is no mention of where you might put a session token; however, you could always try putting it on the JDBC connection string (which goes in the default.url parameter value),
e.g.
jdbc:awsathena://athena.{REGION}.amazonaws.com:443?SessionToken=blahblahsomethingrealsessiontokengoeshere
but of course, replace {REGION} with the actual AWS region and use your real session token.

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine, and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then:
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, it fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried passing them directly to the method rather than using env variables, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure there are only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
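For example, a query against the real service might then look like this (the table name and key below are hypothetical placeholders):
import boto3
from boto3.dynamodb.conditions import Key

# No endpoint_url: boto3 builds the correct regional endpoint itself
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')

table = dynamoConnection.Table('my_table')  # placeholder table name
response = table.query(KeyConditionExpression=Key('id').eq('123'))  # placeholder key
print(response['Items'])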
It's a sign that your system time is off (for example, a time zone mismatch). Maybe you can check your:
1. Time zone
2. Time settings
If there are automatic settings, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
Worked immediately after I changed those keys in the code.