Play! and AWS - provider chain is not respected

This is likely going to be very simple. I have a Play! app that needs to talk to AWS, and I'm trying to keep any secrets out of code, even in dev mode.
I have the following code in a function (the case class is declared outside it):
case class AwsCredentials(token: String, accessId: String, secretKey: String)

val client = AWSSecurityTokenServiceClientBuilder.defaultClient()
val token = client.getSessionToken
client.shutdown()

val cred = AmazonS3ClientBuilder.standard()
  .withRegion(Regions.US_EAST_1)
  .getCredentials
  .getCredentials

AwsCredentials(token.getCredentials.getSessionToken,
  cred.getAWSAccessKeyId,
  cred.getAWSSecretKey)
The cred.getAWSAccessKeyId and cred.getAWSSecretKey always fall back to the instance's "magic" keys, not the ones from the environment variables that can be set, from user.dir/.aws/credentials, or from /home/play/.aws/ directly (the service does run as play).
Locally, with Play! running via sbt, this works just fine and uses my local keys, but as soon as I deploy it and start it with an init.d script it no longer does. I have confirmed the keys are in the environment variables, and other environment variables we use work just fine.
Ultimately this is an AWS issue, but since I can get it to work locally with Ammonite as the play user, there must be something amiss in Play.
Thanks!
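For background, the lookup order the SDK's default provider chain walks can be modeled roughly like this. This is a simplified Python sketch, not the actual SDK code; the real chain also checks Java system properties, ECS containers, and EC2 instance profiles:

```python
import configparser
import os

def resolve_credentials(env=os.environ, credentials_path="~/.aws/credentials"):
    """Simplified sketch of the AWS default credential provider chain:
    environment variables first, then the shared credentials file,
    then (in the real SDK) instance-profile credentials."""
    # 1. Environment variables of the *running process*
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return ("env", env["AWS_ACCESS_KEY_ID"], env["AWS_SECRET_ACCESS_KEY"])
    # 2. Shared credentials file in the service user's home directory
    path = os.path.expanduser(credentials_path)
    if os.path.exists(path):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        if cfg.has_section("default"):
            return ("file",
                    cfg.get("default", "aws_access_key_id"),
                    cfg.get("default", "aws_secret_access_key"))
    # 3. Otherwise the SDK falls through to instance-profile credentials
    return ("instance-profile", None, None)
```

The relevant point for the init.d case: variables exported in an interactive shell are not automatically present in a daemon's environment, so the chain can fall straight through to the instance profile even though the keys look "set" when you check by hand.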

Sorry for wasting the time of anyone who chose to read and ponder this issue. I'm nearly positive the underlying problem is this: I need to provide the accessKey to Fine Uploader, so it is on the page, and in role mode I also need a token, which has to be passed as a header. As soon as I do this, I'm no longer signing the request with the privateKey that matches Fine Uploader's accessKey.
If I figure it out, or want to invest the time to modify the JS code, I'll post back.

Related

How to safely return hash tokens or sensitive data in unit tests

I've got some password reset functionality that I'm re-writing but adding tests.
My test should:
1. Create a user entry in the DB
2. Create a hash token entry in the DB for that user
3. Verify that the hash token matches an entry in the DB.
The last one (3), I'm having trouble testing, because typically, I wouldn't return the hash token to the user (client) in production. Instead, I'd fire off an event to my taskworker with the hash token string being passed around in memory and then send them an email with the secret link.
My route for verification looks like:
server.get('/api/v1/users/password/reset/:token', userPasswordReset.fetch)
Such that, as you can see, I need to grab token from the request parameter, which means I need to have it sent back to me somehow (but only in a test environment, NOT in production).
To currently solve for this, in my controller I'm currently doing this:
return process.env.NODE_ENV === 'test'
? res.status(201).send({ token: record.token })
: res.status(201).end()
However, I'd like to know if there is a safer way to go about this. I don't love the idea of putting fragile code in my production code that is dependent upon an env variable.
I've thought about writing the token to a file on the system, but because these tests will run within different environments I'm not sure that's entirely reliable based upon virtual filesystems.
You should only be testing your public API, and you shouldn't test a scenario that is invalid in production. You can test hash creation and storage of the token at the DB layer. Hope this helps.
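One way to follow that advice without the NODE_ENV branch is for the test itself to read the token straight from the database after exercising the reset flow, rather than having the API echo it back. A minimal Python sketch using an in-memory SQLite database as a stand-in for the real store (the table and column names here are invented for illustration):

```python
import sqlite3

def create_reset_token(db, user_id, token):
    """Production-style code path: store the token; nothing is
    returned to the HTTP caller."""
    db.execute("INSERT INTO reset_tokens (user_id, token) VALUES (?, ?)",
               (user_id, token))

def test_reset_token_is_stored():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE reset_tokens (user_id INTEGER, token TEXT)")
    create_reset_token(db, 1, "s3cret-hash")
    # The test queries the DB directly instead of the HTTP response,
    # so the route never needs to leak the token in any environment.
    row = db.execute("SELECT token FROM reset_tokens WHERE user_id = ?",
                     (1,)).fetchone()
    assert row == ("s3cret-hash",)

test_reset_token_is_stored()
```

This keeps the environment-dependent branch out of production code entirely: the route always returns 201 with an empty body, and only the test suite has the DB handle needed to fetch the token.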

Path-based AWS API Caching Keys Issue

I have several API paths set up in a test API Gateway setup with a simple 'api' stage. I am using AWS Lambda and wish to cache the results of the lambda call.
There are three test paths (no authentication):
/a/{thing} (GET Caching turned on in stage)
/b/{thing} (GET Caching turned off in stage)
/c/{thing} (GET Caching turned off in stage)
They all map to the same lambda function. The lambda function returns the current time and the value of {thing}.
If I request /a/0000 through /a/1000 I get back the same result for a function that ran for thing=0000.
If I request /b/0000 through /b/1000 (or /c/) I get back uncached results.
thing is selected as 'cache' in the resources for /a/{thing}. Nothing else is set to 'cache'.
It is my understanding that selecting 'cache' next to a path element, query element, or header would construct a cache key - possibly a multi-key cache key hash. That would be ideal!
Ideally /a/0000 and /a/1234 would return a cached version keyed to the {thing} value.
What did I do wrong, misread, or step over? Am I hitting a bug when it comes to AWS Lambda? Is caching keyed to authorization? These URLs are public and unauthenticated. I'm just using curl to request these, and nothing is being cached on the client side, of course.
I've also tried using a query argument as the only cache key, let the cache flush, and waited 30 minutes to try again. Still not getting the results I would expect.
Pro Tip:
You still have to deploy from resources to stage when you set up cache keys. This makes sense of course but it would be good if the management console showed more about the method parameters than it does.
I am using Chalice, which is why I wasn't deploying in the normal fashion.

InvalidSignatureException when using boto3 for dynamoDB on aws

Im facing some sort of credentials issue when trying to connect to my dynamoDB on aws. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the dynamoDB on aws this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid as they are used in a different app which talks to the same dynamoDB. Ive also tried not using env variables but rather directly in the method but the error persisted. Furthermore, to avoid any issues with trailing spaces Ive even used the credentials directly in the code. Im using Python v3.4.4.
Is there maybe a header that also should be set that Im not aware of? Any hints would be apprecihated.
EDIT
Ive now also created new credentials (to make sure there are only alphanumerical signs) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It sign that your time zone is different. Maybe you can check your:
1. Time zone
2. Time settings.
If there are some automatic settings, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that accessing DynamoDB from a C# environment (using AWS.NET SDK) I ran into this error and the way I solved it was to create a new pair of AWS access/secret keys.
Worked immediately after I changed those keys in the code.

activemerchant auth.net login error

I cannot get our auth.net account to work on a new staging server.
The same code, and credentials work find on the current production server, and on my local machine.
The activemerchant config looks like this:
ActiveMerchant::Billing::Base.mode = :production #(Rails.env.production? ? :production : :test)
ActiveMerchant::Billing::CreditCard.require_verification_value = false
I have checked every where i can think of for some config that would be changing staging vs. development vs. production and can find nothing!
I have put logging in to confirm that I am passing the correct login / password to activemerchant.
well this was stupid, apparently another developer when copying the credentials somehow changed a single inner character (from an 8 to a 6) so it was not easily noticable.
I am just posting this answer because I found a great tool that is not very well documented in activemerchant...
add these two lines to an initializer, and you will get a complete log of the low level transactions going on...
ActiveMerchant::Billing::PaypalGateway.wiredump_device = File.new(File.join([Rails.root, "log", "paypal.log"]), "a")
ActiveMerchant::Billing::PaypalGateway.wiredump_device.sync = true
You can substitute PaypalGateway with AuthorizeNetCimGateway (or probably whatever gateway you are using)

Amazon Web Services - CreateDBSnapshot

I am completely new to Amazon Web Services, however, I did get an account and I am able to browse our list of servers. I am trying to create a database backup programmatically using .NET. I have installed AWS for .NET and I have built and run the sample Empty console program.
I can see that I can create an instance of the RDS service with the following line:
AmazonRDS rds = AWSClientFactory.CreateAmazonRDSClient(RegionEndPoint.USEast1);
However, I notice that the rds.CreateDBSnapshot(); needs a request object but I don't see anything like CreateDBSnapshotRequest in the reference .dll, can anyone help with a working example?
Like you said CreateDBSnapshotRequest is the parameter you have to pass to this function.
CreateDBSnapshotRequest is defined in the Amazon.RDS.Model namespace within the AWSSDK.dll assembly (version 1.5.25.0)
Within CreateDBSnapshotRequest you must pass the the DB Instance Identifier (for example mydbinstance-1), that you defined when you invoked the CreateDBInstance (or one of it's related methods) and the identifier for the snapshot you wish to generate (example: my-snapshot-id) for this DB Instance.
edit / example
Well there are a couple ways to achieve this, here's one example - hope it clears up your doubts
using Amazon.RDS;
using Amazon.RDS.Model;
...
...
//gets the credentials from the default configuration
AmazonRDS rdsClient = AWSClientFactory.CreateAmazonRDSClient();
CreateDBSnapshotRequest dbSnapshotRequest = new CreateDBSnapshotRequest();
dbSnapshotRequest.DBInstanceIdentifier = "my-oracle-instance";
dbSnapshotRequest.DBSnapshotIdentifier = "daily-snapshot";
rdsClient.CreateDBSnapshot(dbSnapshotRequest);
Dont't forget that the DB Instance (in the example my-oracle-instance) must exist (duh :) and must be in the available state, like this: