Access environment variables from the template.yaml file in my Jest test file (unit testing)

In the attached folder is my folder structure.
There are a few environment variables defined in my template.yaml file for my RDS connection; their values live in Parameter Store in my AWS console. I access them in my index.js file as 'process.env.DB_NAME', 'process.env.DB_PASSWORD' and so on, but those values come back as undefined in my index.test.js file. Online blogs suggest assigning values to the process.env variables on the fly in the tests and restoring the originals afterwards, like so:
const original = process.env.DB_NAME;
process.env.DB_NAME = 'database name';
// ... run the test ...
process.env.DB_NAME = original;
But I don't want those values exposed, since my tests connect to a test MySQL database on my RDS instance. Is there a way I could supply the values anonymously, using mock functions or some other approach?
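For reference, the pattern those blogs describe usually looks something like this with Jest lifecycle hooks; here the values are read from the shell/CI environment rather than hard-coded, so nothing sensitive ends up in the test file. The variable names TEST_DB_NAME and TEST_DB_PASSWORD are made up for this sketch; it illustrates the suggested approach, not a prescribed solution:

// index.test.js -- sketch of the set/restore pattern described above
const ORIGINAL_ENV = process.env;

beforeEach(() => {
  // Start each test from a copy of the real environment
  process.env = { ...ORIGINAL_ENV };
  // Values come from the shell/CI environment (hypothetical names), not literals in the repo
  process.env.DB_NAME = process.env.TEST_DB_NAME || 'dummy-db';
  process.env.DB_PASSWORD = process.env.TEST_DB_PASSWORD || 'dummy-password';
});

afterEach(() => {
  // Restore the untouched environment after every test
  process.env = ORIGINAL_ENV;
});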

Related

AWS DotNet Core credentials

I have an existing dotnet core web application that I need to use a profile other than [default] when I'm developing locally.
I'm running into an issue in that the credential file location does not appear to default to ~/.aws/credentials yet. Based on the credential lookup sequence, check 2 should work if I set the value of AWSConfigs.AWSProfileName before creating the SSM client, but it doesn't; it just falls through the remaining flow and throws an error saying it can't find the EC2 metadata. The same is the case for check 3. When the credentials are in the [default] definition, check 4 succeeds, which I expected would fail as well if the defaults haven't been initialized yet. I have multiple AWS accounts that I get temporary security tokens for from an SSO system based on the config file, and because of the temporary-token requirement I can't use the [default] profile; I need to be able to switch between accounts to run the same code base.
I've been able to get around this by explicitly accessing the credential store and generating a set of credentials to pass into the constructor for the SSM Client.
Amazon.Runtime.CredentialManagement.CredentialProfile developerProfile;
AmazonSimpleSystemsManagementClient ssmClient;

// Test whether a local credentials file contains the requested profile
if (new Amazon.Runtime.CredentialManagement.SharedCredentialsFile().TryGetProfile(Configuration["AWS:Profile"], out developerProfile))
{
    AWSCredentials credentials = new Amazon.Runtime.SessionAWSCredentials(
        developerProfile.Options.AccessKey,
        developerProfile.Options.SecretKey,
        developerProfile.Options.Token);
    ssmClient = new AmazonSimpleSystemsManagementClient(credentials, developerProfile.Region);
}
else
{
    // No local profile found: assume EC2/ECS and let the client pull credentials from instance metadata
    ssmClient = new AmazonSimpleSystemsManagementClient(Region);
}
The above snippet is designed to allow running locally with a specific profile and file location; when either does not exist, it assumes the code is running in an EC2 or ECS environment and can source the credentials from the instance metadata.
The code that needs access to AWS Parameter Store is located in the Startup method so other properties can be initialized before ConfigureServices runs. I initialize clients for additional AWS services that work as expected after ConfigureServices has run. Should I not expect the credential provider to be properly initialized before the ConfigureServices method is run?

Better way of handling multiple environment variables in AWS CodeBuild

I have an AWS CodeBuild project connected to my GitHub account. Within my GitHub repository I have a separate branch for each environment.
I currently have 4 environments (and, by that relationship, 4 GitHub branches): dev, qa, customer1-poc, customer2-prod.
Now, I use a multitude of environment variables within my project, and initially I was setting these env vars up within the CodeBuild project under Environment > Environment variables. So each logical variable ends up defined 4 times, once per environment, distinguished by the environment name.
For example, an env var called apiKey is saved in CodeBuild 4 times under the names
apiKey_dev
apiKey_qa
apiKey_customer1poc
apiKey_customer2prod
You get the idea. Same goes for other env vars which need to be different across all envs.
These env vars are read from the buildspec file and passed on to serverless.yml file.
Now the issue is that as I keep creating new environments (more POC and prod envs), I need to keep replicating the set of env vars for each one, and it's getting tedious.
Is there some way I can save these env vars outside the Codebuild project which can then be passed on to the Lambda function upon successful builds?
CodeBuild has native integration with Parameter Store:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store
1. In Parameter Store, keep your variables as a JSON document under a name like /config/prod.
2. Then retrieve it in CodeBuild and parse it with jq.
This way, all the environment-specific variables are in one place. If you go this route, make sure to encrypt the Parameter Store value with a KMS key if it contains secrets. Also have a look at AWS Secrets Manager.
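A minimal buildspec sketch of that idea (the parameter name /config/prod and the apiKey field are just the placeholders from this answer, and the deploy command is illustrative):

version: 0.2
env:
  parameter-store:
    # the whole JSON document stored under /config/prod lands in $APP_CONFIG
    APP_CONFIG: "/config/prod"
phases:
  build:
    commands:
      # split individual values out of the JSON blob (jq must be available in the build image)
      - export apiKey=$(echo "$APP_CONFIG" | jq -r '.apiKey')
      # then deploy, passing the values on to serverless.yml
      - sls deploy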

Retrieving an RDS endpoint from within USER DATA

I have a single MySQL RDS instance and an AMI containing a Grails application. I would like to use the User Data function to populate the Grails application.yml file with the RDS endpoint. How do I retrieve the RDS endpoint from within User Data?
There are two ways to use User Data:
Just as data: The contents of User Data are accessible via http://169.254.169.254/latest/user-data/, so your application could simply parse the contents and do what you wish with them.
As an executable script: On Linux, starting User Data with #! will cause it to be executed, so you could write a script to update the application.yml file.
An alternate concept would be to store the RDS Endpoint in the AWS Systems Manager Parameter Store. Then, use a User Data script to extract it from there and store it in application.yml. This way, the endpoint can be easily updated in Parameter Store without modifying any scripts.
User Data is nothing but a shell script when running on a Linux AMI.
You can edit the application.yml file using a shell script and add your parameters.
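A minimal User Data sketch combining both suggestions, assuming the endpoint has been stored in Parameter Store under a made-up name like /myapp/rds-endpoint and that application.yml contains a placeholder to replace (the file path and placeholder are assumptions too):

#!/bin/bash
# Fetch the RDS endpoint that was stored in Parameter Store (parameter name is an assumption)
RDS_ENDPOINT=$(aws ssm get-parameter --name /myapp/rds-endpoint --query 'Parameter.Value' --output text)

# Substitute a placeholder token in application.yml with the real endpoint
sed -i "s|RDS_ENDPOINT_PLACEHOLDER|${RDS_ENDPOINT}|" /opt/myapp/application.yml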

Load JSON file's content to Heroku's environment variable

I am using Google Speech API in my Django web-app. I have set up a service account for it and am able to make API calls locally. I have pointed the local GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account's json file which contains all the credentials.
This is the snapshot of my Service Account's json file:
I have tried setting heroku's GOOGLE_APPLICATION_CREDENTIALS environment variable by running
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< myProjCreds.json)"
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: {
^^ It gets truncated at the first occurrence of a double quote in the JSON file, which comes immediately after the opening {
and
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS='$(< myProjCreds.json)'
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: $(< myProjCreds.json)
^^ The command gets saved into the environment variable
I tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS env variable to the content of the service account's json file, but it didn't work (apparently because this variable's value needs to be an absolute path to the json file). I found a method which authorizes a developer account without loading the json file, instead using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY. Here is the GitHub discussion page for it.
I want something similar (or something different) for my Django web-app and I want to avoid uploading the json file to my Django web-app's directory (if possible) for security reasons.
Depending on which library you are using to communicate with the Speech API, you may take several approaches:
You may serialize your JSON data using base64 or something similar and set the resulting string as a single environment variable. Then, during your app's boot, decode this data and configure your client library appropriately (see the sketch after this list).
You may set each pair from the credentials file as a separate env variable and use them accordingly. The library you're using may support authentication via GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY, similar to the Ruby client that you're linking to.
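A minimal sketch of the first approach, assuming the base64-encoded file has already been stored in a config var (here arbitrarily named GOOGLE_CREDS_B64, e.g. via heroku config:set GOOGLE_CREDS_B64="$(base64 < myProjCreds.json | tr -d '\n')"). Node.js is shown for illustration; the same idea works in Python for a Django app:

// At app boot, decode the base64 config var back into the service account object
// (GOOGLE_CREDS_B64 is a made-up variable name for this sketch)
const creds = JSON.parse(
  Buffer.from(process.env.GOOGLE_CREDS_B64, 'base64').toString('utf8')
);
// creds.client_email, creds.private_key, etc. can now be used to configure the client
// library instead of pointing GOOGLE_APPLICATION_CREDENTIALS at a file on disk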
EDIT:
Assuming that you are using Google's official client library, you have several options for authenticating your requests, including the one you are using (a service account): https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html You may save your credentials to a temp file and pass its path to the Client object, https://google-auth.readthedocs.io/en/latest/user-guide.html#service-account-private-key-files (but this seems to me like a very hacky workaround). There are a couple of other auth options that you may use.
EDIT2:
I've found one more link with a more robust approach: http://codrspace.com/gargath/using-google-auth-apis-on-heroku/. The code there is Ruby, but you can surely do something similar in Python.
Let's say the filename is key.json
First, copy the content of the key.json file and add it to an environment variable, say KEY_DATA.
Solution 1:
If my command to start the server is node app.js, I'll use echo "$KEY_DATA" > key.json && node app.js (quoting $KEY_DATA so the shell doesn't mangle the JSON).
This will create a key.json file with the data from KEY_DATA and then start the server.
Solution 2:
Save the data from the KEY_DATA env variable in some variable and then parse it as JSON, so you have an object which you can pass for authentication purposes.
Example in Node.js:
const data = process.env.KEY_DATA;
const dataObj = JSON.parse(data);

How to set up different uploaded file storage locations for Laravel 5.2 in local deployment and AWS EB w/ S3?

I'm working on a Laravel 5.2 application where users can send a file by POST; the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to be stored in a specified local folder. When I deploy to AWS-EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard-code something like \Storage::disk('s3')->put(...) because that won't work locally.
What I'm trying to do here is similar to what I was able to do with environment variables for database connectivity... I found some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
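The deployment config those tutorials describe looks roughly like this (a sketch only; the file name 01envconfig.config is the one mentioned above, and the copy step is an assumption about how the .env swap is done):

container_commands:
  01_replace_env_file:
    # swap in the Elastic Beanstalk-specific env file during deployment
    command: "cp .env.elasticbeanstalk .env"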
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as the code us-west-2 even though the S3 console shows my environment's region as Oregon.
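Those S3_* names line up with the stock s3 disk entry in Laravel 5.2's config/filesystems.php, which reads them roughly like this (shown as a sketch for reference):

's3' => [
    'driver' => 's3',
    'key'    => env('S3_KEY'),
    'secret' => env('S3_SECRET'),
    'region' => env('S3_REGION'),
    'bucket' => env('S3_BUCKET'),
],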
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.