Django best practice for storing access keys

What's the best way to store API access keys that you need in your settings.py but that you don't want to commit into git?

I use an environment file that stays on my computer and contains some variables linked to my environment.
In my Django settings.py (which is pushed to GitHub):
# MANDRILL API KEY
MANDRILL_KEY = os.environ.get('DJANGO_MANDRILL_KEY')
In my dev environment, my .env file (which is excluded from my Git repo) contains:
DJANGO_MANDRILL_KEY=PuoSacohjjshE8-5y-0pdqs
This is a "pattern" proposed by Heroku: https://devcenter.heroku.com/articles/config-vars
I suppose there is a simple way to set it up without using Heroku though :)
To be honest, the primary goal for me is not security but environment splitting, though I guess it can help with both.
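Replicating this pattern without Heroku is just a matter of reading the file yourself before the settings are evaluated. Here is a minimal sketch, assuming a KEY=value file named .env in the project root; the load_env helper is purely illustrative, and a package such as python-dotenv does the same job more robustly:
import os

def load_env(path='.env'):
    # Copy KEY=value lines into os.environ, skipping blanks and comments.
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ.setdefault(key.strip(), value.strip())

load_env()
MANDRILL_KEY = os.environ.get('DJANGO_MANDRILL_KEY')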

I use something like this in settings.py:
import json

if DEBUG:
    secret_file = '/path/to/development/config.json'
else:
    secret_file = '/path/to/production/config.json'

with open(secret_file) as f:
    SECRETS = json.load(f)  # json.load, not json.loads, since f is a file object

secret = lambda n: str(SECRETS[n])

SECRET_KEY = secret('secret_key')
DATABASES['default']['PASSWORD'] = secret('db_password')
and the JSON file:
{
    "db_password": "foo",
    "secret_key": "bar"
}
This way you can omit the production config from git or move it outside your repository.

Related

cloud-init - I am trying to copy files to a Windows EC2 instance through cloud-init by passing them through user data

I am trying to copy files to a Windows EC2 instance through cloud-init by passing them through user data. The cloud-init template runs and creates a folder, but it does not copy the files. Can you help me understand what I am doing wrong in my code?
This code is passed through the launch configuration of an autoscaling group:
data template_cloudinit_config ebs_snapshot_scripts {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    content      = <<EOF
<powershell>
$path = "C:\aws"
If(!(test-path $path))
{
    New-Item -ItemType Directory -Force -Path $path
}
</powershell>
write_files:
  - content: |
      ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")}
    path: C:\aws\1-start-ebs-snapshot.ps1
    permissions: '0744'
  - content: |
      ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/2-run-backup.cmd")}
    path: C:\aws\2-run-backup.cmd
    permissions: '0744'
  - content: |
      ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/3-ebs-snapshot.ps1")}
    path: C:\aws\3-ebs-snapshot.ps1
    permissions: '0744'
EOF
  }
}
Your current approach involves using the Terraform template language to produce YAML by concatenating together strings, some of which are multi-line strings from an external file, and that will always be pretty complex to get right because YAML is a whitespace-sensitive language.
I have two ideas to make this easier. You could potentially do both of them, although doing one or the other could work too.
The first idea is to follow the recommendations about generating JSON and YAML from Terraform's templatefile function documentation. Although your template is inline rather than in a separate file, you can apply a similar principle here to have Terraform itself be responsible for producing valid YAML, and then you can just worry about making the input data structure be the correct shape:
  part {
    content_type = "text/cloud-config"

    # JSON is a subset of YAML, so cloud-init should
    # still accept this even though it's jsonencode.
    content = jsonencode({
      write_files = [
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")
          path        = "C:\\aws\\1-start-ebs-snapshot.ps1"
          permissions = "0744"
        },
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/2-run-backup.cmd")
          path        = "C:\\aws\\2-run-backup.cmd"
          permissions = "0744"
        },
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/3-ebs-snapshot.ps1")
          path        = "C:\\aws\\3-ebs-snapshot.ps1"
          permissions = "0744"
        },
      ]
    })
  }
The jsonencode and yamlencode Terraform functions know how to escape newlines and other special characters automatically, so you can just include the file content as an attribute in the object and Terraform will encode it into a valid string literal for you.
The second idea is to use base64 encoding instead of direct encoding. Cloud-init allows passing file contents as base64 if you set the additional property encoding to b64. You can then use Terraform's filebase64 function to read the contents of the file directly into a base64 string which you can then include into your YAML without any special quoting or escaping.
Using base64 also means that the files placed on the remote system should be byte-for-byte identical to the ones on disk in your Terraform module, whereas when interpolating file into a YAML string there is the potential for line endings and other whitespace to be changed along the way.
On the other hand, one disadvantage of using base64 is that the file contents won't be directly readable in the terraform plan output, and so the plan won't be as clear as it would be with just plain YAML string encoding.
You can potentially combine both of these ideas together by using the filebase64 function as part of the argument to jsonencode in the first example:
  # ...
  {
    encoding    = "b64"
    content     = filebase64("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")
    path        = "C:\\aws\\1-start-ebs-snapshot.ps1"
    permissions = "0744"
  },
  # ...
cloud-init only reliably writes files when you provide their content inline, so I'd suggest storing your files in S3 (for example) and pulling them down during boot instead.
Sorry for the incoming Windows/Linux mix-up in the example.
Using the same write_files mechanism, write a short download script, e.g.:
#!/bin/bash
wget something.ps1
wget something-else.ps2
Then run the files using runcmd/bootcmd:
bootcmd:
- ./something.ps1
- ./something-else.ps2
Job done, without any encoding or character-escaping headaches.

How to avoid plaintext master passwords for RDS when deployed through Terraform, and how to retrieve the password to use it on a server

I'm new to Stack Overflow, so apologies if I didn't format this right.
I'm currently using Terraform to provision Aurora RDS. The problem is that I shouldn't have the DB master password sitting as plaintext in the .tf file.
I've been using this config, initially with a plaintext password:
engine         = "aurora-mysql"
engine_version = "5.7.12"
cluster_family = "aurora-mysql5.7"
cluster_size   = "1"
namespace      = "eg"
stage          = "dev"
admin_user     = "admin"
admin_password = "passwordhere"
db_name        = "dbname"
db_port        = "3306"
I'm looking for a solution where I can avoid a plaintext password like the one shown above and instead have something auto-generated that can be included in the Terraform configuration. I also need to be able to retrieve the password so that I can use it to configure a WordPress server.
https://gist.github.com/smiller171/6be734957e30c5d4e4b15422634f13f4
I came across this solution, but I'm not sure how to retrieve the password so I can use it on the server. I haven't deployed it yet either.
As you mentioned in your question, there is a workaround which you haven't tried yet.
I suggest trying that first; if it's successful, you can retrieve the password with a Terraform output:
output "db_password" {
value = ${random_string.db_master_pass.result}
description = "db password"
}
Once your Terraform run has completed, you can retrieve the value with terraform output db_password. If you want to use the password somewhere in the Terraform code itself, reference random_string.db_master_pass.result there directly.
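If the WordPress server is configured outside Terraform, one way to consume that output from a script is the CLI's JSON mode. A rough sketch, assuming the db_password output above exists and the terraform binary is on the PATH (with -json, a single named output is printed as its JSON-encoded value):
import json
import subprocess

raw = subprocess.check_output(['terraform', 'output', '-json', 'db_password'])
db_password = json.loads(raw)

# db_password can now be passed to whatever configures the WordPress server
print('retrieved a password of %d characters' % len(db_password))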

Using boto3, how to put a publicly-readable object into S3 (or DigitalOcean Spaces)

Can someone provide a complete example of the following: use boto3 and Python (2.7) to upload a file from a desktop computer to DigitalOcean Spaces, such that the uploaded file is publicly readable from Spaces.
DigitalOcean says their Spaces API is the same as the S3 API. I don't know if this is 100% true, but here is their API: https://developers.digitalocean.com/documentation/spaces
I can do the file upload with the code below, but I can't find an example of how to specify that the file should be publicly readable. I do not want to make the entire Space (i.e. the S3 bucket) readable -- only the object.
import boto3

boto_session = boto3.session.Session()
boto_client = boto_session.client(
    's3',
    region_name='nyc3',
    endpoint_url='https://nyc3.digitaloceanspaces.com',
    aws_access_key_id='MY_ACCESS_KEY_ID',
    aws_secret_access_key='MY_SECRET_ACCESS_KEY')

boto_client.upload_file( FILE_PATHNAME, BUCKETNAME, OBJECT_KEYNAME )
Changing the last statement to the following did not work:
boto_client.upload_file( FILE_PATHNAME, BUCKETNAME, OBJECT_KEYNAME,
ExtraArgs={ 'x-amz-acl': 'public-read', } )
Thank you.
Thanks to @Vorsprung for his comment above. The following works:
import boto3

boto_session = boto3.session.Session()
boto_client = boto_session.client(
    's3',
    region_name='nyc3',
    endpoint_url='https://nyc3.digitaloceanspaces.com',
    aws_access_key_id='MY_ACCESS_KEY_ID',
    aws_secret_access_key='MY_SECRET_ACCESS_KEY')

boto_client.upload_file( FILE_PATHNAME, BUCKETNAME, OBJECT_KEYNAME )
boto_client.put_object_acl( ACL='public-read', Bucket=BUCKETNAME, Key=OBJECT_KEYNAME )
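For what it's worth, the ACL can also be set in the same call as the upload: boto3's ExtraArgs expects the key 'ACL' rather than the raw 'x-amz-acl' header name, which is most likely why the earlier attempt did not work. Continuing from the client created above, something like this should also work (untested against Spaces, but it follows the boto3 S3 transfer API):
boto_client.upload_file( FILE_PATHNAME, BUCKETNAME, OBJECT_KEYNAME,
                         ExtraArgs={ 'ACL': 'public-read' } )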

Using different settings on an AWS Lambda function coded in Python

I'm using a Lambda function, coded in Python, as the backend for an API Gateway method.
The API is complete, but now I have a new problem: it should be deployed to multiple environments (production, test, etc.), and each one should use a different configuration for the backend. Let's say I have this handler:
import json
import logging

import boto3

import settings

logger = logging.getLogger()


def dummy_handler(event, context):
    logger.info('got event {}'.format(event))
    utils = Utils(event["stage"])
    response = utils.put_ticket_on_dynamodb(event["item"])
    return json.dumps(response)


class Utils:
    def __init__(self, stage):
        self.stage = stage

    def put_ticket_on_dynamodb(self, item):
        # Write record to DynamoDB
        try:
            dynamodb = boto3.resource('dynamodb')
            table = dynamodb.Table(settings.TABLE_NAME)
            table.put_item(Item=item)
        except Exception as e:
            logger.error("Failed to put item on DynamoDB: {0}".format(str(e)))
            raise
        logger.info("Item successfully written to DynamoDB")
        return item
Now, in order to use a different TABLE_NAME for each stage, I replaced the settings.py file with a module with this structure:
settings/
    __init__.py
    _base.py
    _servers.py
    development.py
    production.py
    testing.py
Following this answer here.
But I have no idea how to use it in my solution, considering that stage (passed as a parameter to the Utils class) will match the settings filename in the settings module. What should I change in my Utils class to make it work?
An alternative way to handle this use case is to use API Gateway's stage variables and pass the settings that vary by stage as parameters to your Lambda function.
Stage variables are name-value pairs associated with a specific API deployment stage and act like environment variables for use in your API setup and mapping templates. For example, you can configure an API method in each stage to connect to a different backend endpoint by setting different endpoint values in your stage variables.
Here is a blog post on using stage variables.
Here is the full documentation on using stage variables.
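If you go the stage-variables route, the function no longer needs to know about stages at all; it just reads whatever the deployment passes in. A rough sketch, assuming the integration forwards stage variables to the function (the Lambda proxy integration puts them under event['stageVariables']; with a custom integration you would map them in the request template yourself), and with fallback values that are only illustrative:
def dummy_handler(event, context):
    # stageVariables is absent or None when no stage variables are defined
    stage_vars = event.get('stageVariables') or {}
    table_name = stage_vars.get('TABLE_NAME', 'dynamodbTableTest')
    bucket_name = stage_vars.get('BUCKET_NAME', 'sssBucketNameTest')
    # ... use table_name / bucket_name instead of values hard-coded in settings.py ...
    return {'table': table_name, 'bucket': bucket_name}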
I finally used a different approach here. Instead of a Python module for the settings, I used a single settings script with a dictionary containing the configuration for each environment. I would prefer a separate settings script per environment, but so far I haven't found a way to do that.
So now my settings file looks like this:
COUNTRY_CODE = 'CL'
TIMEZONE = "America/Santiago"
LOCALE = "es_CL"
DEFAULT_PAGE_SIZE = 20

ENV = {
    'production': {
        'TABLE_NAME': "dynamodbTable",
        'BUCKET_NAME': "sssBucketName"
    },
    'testing': {
        'TABLE_NAME': "dynamodbTableTest",
        'BUCKET_NAME': "sssBucketNameTest"
    },
    'test-invoke-stage': {
        'TABLE_NAME': "dynamodbTableTest",
        'BUCKET_NAME': "sssBucketNameTest"
    }
}
And my code:
def put_ticket_on_dynamodb(self, item):
    # Write record to DynamoDB
    try:
        dynamodb = boto3.resource('dynamodb')
        table = dynamodb.Table(settings.ENV[self.stage]["TABLE_NAME"])
        table.put_item(Item=item)
    except Exception as e:
        logger.error("Failed to put item on DynamoDB: {0}".format(str(e)))
        raise
    logger.info("Item successfully written to DynamoDB")
    return item
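For completeness, the per-environment module layout from the question can also be made to work by importing the module whose name matches the stage at runtime. A hypothetical sketch (the load_settings helper and the fallback to the testing settings are my own choices, not part of the original setup):
import importlib

def load_settings(stage):
    # e.g. stage == 'production' resolves to settings/production.py
    try:
        return importlib.import_module('settings.' + stage)
    except ImportError:
        return importlib.import_module('settings.testing')

# Inside Utils.__init__ this could become:
#     self.settings = load_settings(stage)
# and put_ticket_on_dynamodb would then read self.settings.TABLE_NAME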

boto EC2 find all Ubuntu images?

How do I find all Ubuntu images available in my region?
Attempt:
from boto.ec2 import connect_to_region

conn = connect_to_region(**dict(region_name=region, **aws_keys))
if not filters:  # Not as default arguments, as they should be immutable
    filters = {
        'architecture': 'x86_64',
        'name': 'ubuntu/images/ebs-*'
    }
print conn.get_all_images(owners=['amazon'], filters=filters)
I've also tried setting the name filter to ubuntu/images/ebs-ssd/ubuntu-trusty-14.04-amd64-server-20140927, ubuntu*, *ubuntu and *ubuntu*.
The problem is the owner: 'amazon' is not the owner of the Ubuntu images; Canonical is.
Change owners=['amazon'] to owners=['099720109477'].
There is no owner alias for Canonical as far as I can see, so you will have to use the owner ID instead.
Hope this helps.
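For reference, here is the question's snippet with that change applied. A minimal sketch using the legacy boto 2 API, as in the question; region and aws_keys are assumed to be defined as they were there:
from boto.ec2 import connect_to_region

conn = connect_to_region(**dict(region_name=region, **aws_keys))
images = conn.get_all_images(
    owners=['099720109477'],  # Canonical's owner ID, not 'amazon'
    filters={
        'architecture': 'x86_64',
        'name': 'ubuntu/images/ebs-*'
    })
for image in images:
    print image.id, image.name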