Get secrets for GCP deployments from KMS

I want to deploy a Cloud VPN tunnel in GCP using Deployment Manager.
I set up a deployment template using Python for this, and I don't want the shared secret for the VPN tunnel to appear in plain text in my configuration.
So I included the secret encrypted via KMS and then call the KMS API from the Python template to get the plain-text secret.
The Python code to decrypt the secret looks like this:
import base64
import googleapiclient.discovery

def decryptSecret(enc_secret, context):
    """Decrypts the given secret via KMS."""
    # KMS configuration
    KEY_RING = '<Key Ring>'
    KEY_ID = '<Key>'
    KEY_LOCATION = '<Region>'
    KEY_PROJECT = context.env['project']

    # Creates an API client for the KMS API.
    kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
    key_name = 'projects/{}/locations/{}/keyRings/{}/cryptoKeys/{}'.format(
        KEY_PROJECT, KEY_LOCATION, KEY_RING, KEY_ID)
    crypto_keys = kms_client.projects().locations().keyRings().cryptoKeys()
    request = crypto_keys.decrypt(
        name=key_name,
        body={'ciphertext': enc_secret})
    response = request.execute()
    plaintext = base64.b64decode(response['plaintext'].encode('ascii'))
    return plaintext
But if I deploy this code I just get the following error message from Deployment Manager:
Waiting for update [operation-<...>]...failed.
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1517326129267-5640004f18139-450d8883-8d57c3ff]: errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  message: |
    Manifest expansion encountered the following errors: Error compiling Python code: No module named googleapiclient.discovery Resource: cloudvpn-testenv.py Resource: config
I also tried to include the complete google-api-python-client library in my configuration YAML, but I still get this error.
Does anyone have an idea?

To answer your question directly:
# requirements.txt
google-api-python-client

# main.py
import base64
import os

import googleapiclient.discovery

crypto_key_id = os.environ['KMS_CRYPTO_KEY_ID']

def decrypt(client, s):
    response = client \
        .projects() \
        .locations() \
        .keyRings() \
        .cryptoKeys() \
        .decrypt(name=crypto_key_id, body={"ciphertext": s}) \
        .execute()
    return base64.b64decode(response['plaintext']).decode('utf-8').strip()

kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
auth = decrypt(kms_client, '...ciphertext...')
You can find more examples and samples on GitHub.
To indirectly answer your question, you may be interested in Secret Manager instead.
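If you do go that route, a minimal sketch (assuming the google-cloud-secret-manager client library; project and secret names are placeholders) looks like this:
# Sketch: read the latest version of a secret from Secret Manager.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/<project>/secrets/<secret-name>/versions/latest"
response = client.access_secret_version(request={"name": name})
shared_secret = response.payload.data.decode("utf-8")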

Related

how to use pageSize or pageToken in python code for GCP

This is the Python code I got from GitHub. Running it, I got 300 roles, but when I use gcloud to count roles, I get a total of 479. I was told by GCP support that pageSize needs to be used. Where can I find documentation on how pageSize can be used? In my code below, where should pageSize go? Or does pageToken need to be used instead?
(gcptest):$ gcloud iam roles list |grep name |wc -l
479
(gcptest) : $ python quickstart.py
300
def quickstart():
    # [START iam_quickstart]
    import os

    from google.oauth2 import service_account
    import googleapiclient.discovery
    import pprint

    # Get credentials
    credentials = service_account.Credentials.from_service_account_file(
        filename=os.environ['GOOGLE_APPLICATION_CREDENTIALS'],
        scopes=['https://www.googleapis.com/auth/cloud-platform'])

    # Create the Cloud IAM service object
    service = googleapiclient.discovery.build(
        'iam', 'v1', credentials=credentials)

    # Call the Cloud IAM Roles API
    # If using pylint, disable weak-typing warnings
    # pylint: disable=no-member
    response = service.roles().list().execute()
    roles = response['roles']

    print(type(roles))
    print(len(roles))

if __name__ == '__main__':
    quickstart()
You will need to write code similar to this:
roles = service.roles()
request = roles.list()

while request is not None:
    role_list = request.execute()

    # process each role here
    for role in role_list['roles']:
        print(role)

    # Get next page of results
    request = roles.list_next(request, role_list)
Documentation link for the list_next method
In addition to @JohnHanley's solution, you can also pass query parameters to the method, like this:
# Page size of 10
response = service.roles().list(pageSize=10).execute()
Here is the definition of the list method.
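Putting the two answers together, a minimal sketch (reusing the service object from the question's code) that pages through every role with an explicit pageSize and pageToken might look like this:
# Sketch: collect all roles by following nextPageToken explicitly.
roles = []
request = service.roles().list(pageSize=100)
while True:
    response = request.execute()
    roles.extend(response.get('roles', []))
    token = response.get('nextPageToken')
    if not token:
        break
    request = service.roles().list(pageSize=100, pageToken=token)
print(len(roles))  # should match the count reported by gcloud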

Adding Headers to AWS API Gateway Response using Chalice

My use case requires my app to return CORS headers when the error response is a 401.
This functionality was added by AWS last year (see this). It can be done using a CloudFormation and Swagger template, but I'm not sure whether it's possible with Chalice yet.
I solved my problem with a Python script that adds the CORS headers to the 401 response and redeploys the API. Redeploying the API this way takes only a second or two, since it doesn't have to deploy all the Lambdas the way Chalice does.
deploy.sh
#!/usr/bin/env bash
cd services
A="$(chalice deploy --stage $1)"
cd ..
python update_api_response_headers.py "$A" "$1"
update_api_response_headers.py
import re
import sys

import boto3

if len(sys.argv) != 3:
    print("usage: python script.py <CHALICE_DEPLOYMENT_RESULT> <STAGE>")
    exit()

# Extract the API id from the "URL: https://<api_id>..." line in the chalice output
search = re.search(r'URL: https://([a-zA-Z0-9]+).+', sys.argv[1])
api_id = search.group(1) if search else None
print(api_id)
if not api_id:
    print(sys.argv[1])
    exit()

client = boto3.client('apigateway')
response = client.put_gateway_response(
    restApiId=api_id,
    responseType='UNAUTHORIZED',
    statusCode='401',
    responseParameters={
        "gatewayresponse.header.Access-Control-Allow-Origin": "'*'",
        "gatewayresponse.header.Access-Control-Allow-Headers": "'*'"
    }
)
response = client.create_deployment(
    restApiId=api_id,
    stageName=sys.argv[2])
print(sys.argv[1])
The services folder contains my Chalice app; deploy.sh and update_api_response_headers.py are placed one level above it. To deploy the app I simply run:
./deploy.sh stage_name
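As a sanity check, a minimal sketch (the API id is a placeholder) that reads the gateway response back after the deployment to confirm the headers were applied:
# Sketch: verify the UNAUTHORIZED gateway response now carries the CORS headers.
import boto3

client = boto3.client('apigateway')
resp = client.get_gateway_response(restApiId='<api-id>', responseType='UNAUTHORIZED')
print(resp['statusCode'], resp['responseParameters'])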

AWS S3 Bucket Upload/Transfer with boto3

I need to upload files to S3 and I was wondering which boto3 api call I should use?
I have found two methods in the boto3 documentation:
http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.upload_file
http://boto3.readthedocs.io/en/latest/reference/customizations/s3.html
Do I use the client.upload_file() ...
#!/usr/bin/python
import boto3

session = boto3.Session(aws_access_key_id, aws_secret_access_key, region_name=region)
s3 = session.resource('s3')
s3.Bucket('my_bucket').upload_file('/tmp/hello.txt', 'hello.txt')
or do I use S3Transfer.upload_file() ...
#!/usr/bin/python
import boto3
from boto3.s3.transfer import S3Transfer

client = boto3.client('s3', region,
                      aws_access_key_id=aws_access_key_id,
                      aws_secret_access_key=aws_secret_access_key)
S3Transfer(client).upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')
Any suggestions would be appreciated. Thanks in advance.
Possible solution:
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#examples
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object
import boto3

client = boto3.client("s3", "us-west-1",
                      aws_access_key_id="xxxxxxxx",
                      aws_secret_access_key="xxxxxxxxxx")

with open('drop_spot/my_file.txt', 'rb') as file:
    client.put_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt', Body=file)

response = client.get_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt')
print("Done, response body: {}".format(response['Body'].read()))
It's better to use the method on the client. They do the same thing, but using the client method means you don't have to set things up yourself.
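For the original upload question, a minimal sketch (bucket name and paths are placeholders) using the client-level upload_file, which handles multipart uploads and retries for you:
# Sketch: upload via the client; credentials come from the default chain here.
import boto3

client = boto3.client('s3')
client.upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')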
You can also use the client (low-level service access); I saw sample code at https://www.techblog1.com/2020/10/python-3-how-to-communication-with-aws.html

AWS Python script vs AWS CLI

I downloaded the AWS CLI and was able to successfully list objects in my bucket, but doing the same from a Python script does not work; I get a Forbidden error.
How should I configure boto to use the same default AWS credentials as the AWS CLI uses?
Thank you.
import logging
import sys
import time
import urllib, subprocess
import boto, boto.utils, boto.s3

logger = logging.getLogger("test")
formatter = logging.Formatter('%(asctime)s %(message)s')
file_handler = logging.FileHandler("test.log")
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler(sys.stderr)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

# wait until user data is available
while True:
    logger.info('**************************** Test starts *******************************')
    userData = boto.utils.get_instance_userdata()
    if userData:
        break
    time.sleep(5)

bucketName = ''
deploymentDomainName = ''

if bucketName:
    from boto.s3.key import Key
    s3Conn = boto.connect_s3('us-east-1')
    logger.info(s3Conn)
    bucket = s3Conn.get_bucket('testbucket')
    key = Key(bucket)
    key.key = 'test.py'
    key.get_contents_to_filename('test.py')
The CLI command is:
aws s3api get-object --bucket testbucket --key test.py my.py
Is it possible to use the latest Python SDK from Amazon (Boto 3)? If so, set up your credentials as outlined here: Boto 3 Quickstart.
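For example, a minimal sketch (bucket name taken from the question) that relies on the same default credential chain the CLI uses, so no keys appear in the script:
# Sketch: boto3 picks up ~/.aws/credentials automatically, like the CLI does.
import boto3

s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket='testbucket')
for obj in response.get('Contents', []):
    print(obj['Key'])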
Also, you might check your environment variables. If they don't exist, that is okay; if they don't match those on your account, that could be the problem, as some AWS SDKs and other tools will use environment variables over the config files.
*nix:
echo $AWS_ACCESS_KEY_ID && echo $AWS_SECRET_ACCESS_KEY
Windows:
echo %AWS_ACCESS_KEY_ID% & echo %AWS_SECRET_ACCESS_KEY%
(sorry if my windows-foo is weak)
When you use the CLI it takes credentials from the ~/.aws/credentials file by default, but when running boto you will have to specify the access key and secret key in your Python script.
import boto
import boto.s3.connection

access_key = 'put your access key here!'
secret_key = 'put your secret key here!'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='bucketname.s3.amazonaws.com',
    # is_secure=False,  # uncomment if you are not using ssl
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

django boto3: NoCredentialsError -- Unable to locate credentials

I am trying to use boto3 in my django project to upload files to Amazon S3. Credentials are defined in settings.py:
AWS_ACCESS_KEY = xxxxxxxx
AWS_SECRET_KEY = xxxxxxxx
S3_BUCKET = xxxxxxx
In views.py:
import os

import boto3

s3 = boto3.client('s3')
path = os.path.dirname(os.path.realpath(__file__))
s3.upload_file(path + '/myphoto.png', S3_BUCKET, 'myphoto.png')
The system complains about Unable to locate credentials. I have two questions:
(a) It seems that I am supposed to create a credential file ~/.aws/credentials. But in a django project, where do I have to put it?
(b) The s3 method upload_file takes a file path/name as its first argument. Is it possible that I provide a file stream obtained by a form input element <input type="file" name="fileToUpload">?
This is what I use for a direct upload; I hope it provides some assistance.
import boto
from boto.exception import S3CreateError
from boto.s3.connection import S3Connection

conn = S3Connection(settings.AWS_ACCESS_KEY,
                    settings.AWS_SECRET_KEY,
                    is_secure=True)
try:
    bucket = conn.create_bucket(settings.S3_BUCKET)
except S3CreateError as e:
    bucket = conn.get_bucket(settings.S3_BUCKET)

k = boto.s3.key.Key(bucket)
k.key = filename
k.set_contents_from_filename(filepath)
Not sure about (a), but Django is very flexible with file management.
Regarding (b), you can also sign the upload and do it directly from the client to reduce bandwidth usage; it's quite sneaky and secure too. You need some JavaScript to manage the upload. If you want details I can include them here.
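To address both parts with boto3 directly, here is a minimal sketch (assuming the settings names from the question and a hypothetical Django view); upload_fileobj accepts the file-like object Django gives you in request.FILES, so no temporary path is needed:
# Sketch: pass the settings-based credentials explicitly and stream the
# uploaded form file straight to S3. The view and form field names are hypothetical.
import boto3
from django.conf import settings
from django.http import HttpResponse

def upload_photo(request):
    s3 = boto3.client(
        's3',
        aws_access_key_id=settings.AWS_ACCESS_KEY,
        aws_secret_access_key=settings.AWS_SECRET_KEY,
    )
    uploaded = request.FILES['fileToUpload']  # file-like object from the form
    s3.upload_fileobj(uploaded, settings.S3_BUCKET, uploaded.name)
    return HttpResponse('uploaded')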