Adding Headers to AWS API Gateway Response using Chalice

My use case requires my app to return CORS headers when the error response is a 401.
This functionality was added by AWS last year (See this). It can be done using CloudFormation and a Swagger template, but I'm not sure if it's possible with Chalice yet.

I solved my problem with a Python script that adds the CORS headers to the 401 response and redeploys the API. Redeploying the API this way takes only a second or two, since unlike a full Chalice deploy it doesn't have to redeploy all the Lambdas.
deploy.sh
#!/usr/bin/env bash
cd services
A="$(chalice deploy --stage $1)"
cd ..
python update_api_response_headers.py "$A" "$1"
update_api_response_headers.py
import boto3
import sys
import re

if len(sys.argv) != 3:
    print("usage: python script.py <CHALICE_DEPLOYMENT_RESULT> <STAGE>")
    exit()

# Pull the API Gateway REST API id out of the Chalice deployment output
search = re.search(r'URL: https://([a-zA-Z0-9]+).+', sys.argv[1])
if not search:
    print(sys.argv[1])
    exit()
api_id = search.group(1)
print(api_id)

client = boto3.client('apigateway')
# Add the CORS headers to the 401 UNAUTHORIZED gateway response
response = client.put_gateway_response(
    restApiId=api_id,
    responseType='UNAUTHORIZED',
    statusCode='401',
    responseParameters={
        "gatewayresponse.header.Access-Control-Allow-Origin": "'*'",
        "gatewayresponse.header.Access-Control-Allow-Headers": "'*'"
    }
)
# Redeploy the API so the gateway response change takes effect
response = client.create_deployment(
    restApiId=api_id,
    stageName=sys.argv[2])
print(sys.argv[1])
The services folder contains my Chalice app; deploy.sh and update_api_response_headers.py are placed one level above it. To deploy the app I simply run:
./deploy.sh stage_name
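To confirm that the headers were applied, a quick check with the same boto3 client can look like this (a sketch; replace the API id placeholder with whatever the script printed):

import boto3

client = boto3.client('apigateway')
# Prints the configured UNAUTHORIZED gateway response, including the
# Access-Control-Allow-* parameters set by the script above
print(client.get_gateway_response(
    restApiId='<api id>',
    responseType='UNAUTHORIZED'))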

Related

Cannot install a library in lambda layer and use it in lambda layer custom script

I am deploying a Lambda function using the CDK for Python.
This is my stack for the Lambdas:
import os
from aws_cdk import (
    aws_stepfunctions as _aws_stepfunctions,
    aws_stepfunctions_tasks as _aws_stepfunctions_tasks,
    aws_lambda,
    App, Duration, Stack,
    aws_ec2 as ec2,
    aws_sns as sns,
    aws_sns_subscriptions as sns_subs,
    aws_iam as iam,
)

class LambdaStack(Stack):
    def __init__(self, app: App,
                 id: str,
                 upload_image_bucket,
                 **kwargs) -> None:
        super().__init__(app, id, **kwargs)

        schema_response_layer = aws_lambda.LayerVersion(
            self, 'lambda-layer',
            code=aws_lambda.AssetCode('lambdas/lambda_layers/schema_response_layer/'),
            compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_9],
            layer_version_name="schema_response_layer"
        )

        policy_textract = iam.PolicyStatement(  # Restrict to the Textract actions needed
            actions=["textract:AnalyzeDocument",
                     "textract:DetectDocumentText",
                     "textract:GetDocumentAnalysis",
                     "textract:GetDocumentTextDetection",
                     "textract:AnalyzeExpense"],
            resources=["*"]
        )

        store_image = aws_lambda.Function(
            self, 'store_imagey',
            function_name="storage_image_test_1",
            runtime=aws_lambda.Runtime.PYTHON_3_9,
            code=aws_lambda.Code.from_asset('lambdas/lambda_functions/store_image'),
            handler='store_image.store_image_handler',
            environment={
                'BUCKET_NAME': upload_image_bucket.bucket_name,
            },
            initial_policy=[policy_textract],
            layers=[schema_response_layer]
        )
        upload_image_bucket.grant_read_write(store_image)
        self.store_image_ld = store_image
As you can see, I am creating a Lambda layer that I want to use in my store_image function.
I can import from this layer without problems using this import:
from response_schema import Response
This is my layer's Python code:
from pydantic import BaseModel, Field, validator

class Headers(BaseModel):
    content_type: str = "application/json"
    access_control: str = "*"
    allow: str = "GET, OPTIONS, POST"
    access_control_allow_methods: str = "*"
    access_control_allow_headers: str = "*"

class Response(BaseModel):
    status_code: str = "200"
    body: str
    headers: Headers = Headers()
I am getting the following error:
Runtime.ImportModuleError: Unable to import module 'store_image': No module named 'pydantic'
I don't know how to install the pydantic library in my Lambda layer and use it in the layer's code.
In the layer's requirements.txt file I have:
pydantic==1.10.4
But it seems that it is not installing the pydantic library into my Lambda layer. I have tried to install the library in the Lambda layer folder using:
pip install -t . pydantic==1.10.4
But that is not working either.
How can I install a library in my Lambda layer and use it in my Lambda layer custom script?
If you want to install Python packages from the requirements.txt file, you can use the aws_cdk.aws_lambda_python_alpha.PythonFunction construct. In this case you need to replace your LayerVersion construct with the PythonLayerVersion construct.
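As a rough sketch, the layer definition could then look like this (assuming the aws-cdk.aws-lambda-python-alpha package is installed; note that the bundling step runs in Docker):

from aws_cdk import aws_lambda
from aws_cdk.aws_lambda_python_alpha import PythonLayerVersion

# Bundles the dependencies listed in the requirements.txt found under
# `entry` into the layer automatically
schema_response_layer = PythonLayerVersion(
    self, 'lambda-layer',
    entry='lambdas/lambda_layers/schema_response_layer/',
    compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_9],
    layer_version_name='schema_response_layer',
)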
If you want to keep the plain Function and LayerVersion constructs, you need to download the libraries into your project yourself, as sketched below; you can use this article as a reference.
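For that manual route, note that Lambda mounts layers under /opt and the Python runtime only searches /opt/python, so the packages must sit in a python/ subfolder of the asset directory rather than at its root, roughly like this (assuming response_schema.py is the module imported in the question):

lambdas/lambda_layers/schema_response_layer/
    python/
        response_schema.py
        pydantic/
        ...

With that layout, pip install -t lambdas/lambda_layers/schema_response_layer/python pydantic==1.10.4 puts the library where the runtime can find it.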

SSL EOF Error when using Python Requests Session

Summary
I have a Flask application deployed to Kubernetes with Python 2.7.12, Flask 0.12.2, and the requests library. I'm getting an SSLError while using requests.Session to send a POST request from inside the container. When using a requests session to connect to an https URL, requests throws an SSLError.
Some background
I have not added any certificates.
The project works when I run the Docker image locally, but after deployment to Kubernetes the POST request is not being sent to the URL from inside the container.
verify=False does not work either.
System Info - What I am using:
Python 2.7.12, Flask==0.12.2, Kubernetes, python-requests-2.18.4
Expected Result
Get HTTP Response code 200 after sending a POST request
Error Logs
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 511, in send
raise SSLError(e, request=request)
SSLError: HTTPSConnectionPool(host='dev.domain.nl', port=443): Max retries exceeded with url: /ingestion?LrnDevEui=0059AC0000152A03&LrnFPort=1&LrnInfos=TWA_100006356.873.AS-1-135680630&AS_ID=testserver&Time=2018-06-22T11%3A41%3A08.163%2B02%3A00&Token=1765b08354dfdec (Caused by SSLError(SSLEOFError(8, u'EOF occurred in violation of protocol (_ssl.c:661)'),))
/usr/local/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
Reproduction Steps
import json
import threading

import requests
from flask import Flask, request, jsonify
from requests import Request, Session

app = Flask(__name__)

sess = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries=200)
sess.mount('http://', adapter)
sess.mount('https://', adapter)
sess.cert = '/usr/local/lib/python2.7/site-packages/certifi/cacert.pem'

def test_post():
    url = 'https://dev.domain.nl/ingestion/?'
    header = {'Content-Type': 'application/json', 'Accept': 'application/json'}
    # somepara and data are defined elsewhere in the real application
    response = sess.post(url, headers=header, params=somepara,
                         data=json.dumps(data), verify=True)
    print response.status_code
    return response.status_code

def main():
    threading.Timer(10.0, main).start()
    test_post()

if __name__ == '__main__':
    main()
    app.run(host="0.0.0.0", debug=True, port=5001, threaded=True)
Dockerfile
FROM python:2.7-alpine
COPY ./web /web
WORKDIR /web
RUN pip install -r requirements.txt
ENV FLASK_APP app.py
EXPOSE 5001
EXPOSE 443
CMD ["python", "app.py"]
The problem may be that the Alpine Docker image lacks CA certificates. On your laptop the code works because it uses the CA certs from your local workstation. I would expect running the Docker image locally to fail too, so the problem is not k8s.
Try to add the following line to the Dockerfile:
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
It will install CA certs inside the container.
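After rebuilding the image, a quick way to verify from inside the container is to hit the same host with requests (a hypothetical probe; any https URL will do):

import requests

# Should return a status code instead of raising SSLError,
# now that requests can find the system CA bundle
print(requests.get('https://dev.domain.nl').status_code)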

Get secrets for GCP deployments from KMS

I want to deploy a Cloud VPN tunnel in GCP using Deployment Manager.
I set up a deployment script using Python for this, and I don't want the shared secret for the VPN tunnel to be in plain text in my configuration.
So I tried to include the secret encrypted via KMS, and then call KMS from the Python script to get the plaintext secret.
The Python code to decrypt the secret looks like this:
import base64
import googleapiclient.discovery

def decryptSecret(enc_secret, context):
    """Decrypts the given secret via KMS."""
    # KMS configuration
    KEY_RING = <Key Ring>
    KEY_ID = <Key>
    KEY_LOCATION = REGION
    KEY_PROJECT = context.env['project']
    # Creates an API client for the KMS API.
    kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
    key_name = 'projects/{}/locations/{}/keyRings/{}/cryptoKeys/{}'.format(
        KEY_PROJECT, KEY_LOCATION, KEY_RING, KEY_ID)
    crypto_keys = kms_client.projects().locations().keyRings().cryptoKeys()
    request = crypto_keys.decrypt(
        name=key_name,
        body={'ciphertext': enc_secret})
    response = request.execute()
    plaintext = base64.b64decode(response['plaintext'].encode('ascii'))
    return plaintext
But if I deploy this code I just get the following error message from deployment manager:
Waiting for update [operation-<...>]...failed.
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1517326129267-5640004f18139-450d8883-8d57c3ff]: errors:
- code: MANIFEST_EXPANSION_USER_ERROR
message: |
Manifest expansion encountered the following errors: Error compiling Python code: No module named googleapiclient.discovery Resource: cloudvpn-testenv.py Resource: config
I also tried to include the complete google-api-python-client library in my configuration YAML, but I still get this error.
Does anyone have an idea?
To answer your question directly:
# requirements.txt
google-api-python-client

# main.py
import base64
import os
import googleapiclient.discovery

crypto_key_id = os.environ['KMS_CRYPTO_KEY_ID']

def decrypt(client, s):
    response = client \
        .projects() \
        .locations() \
        .keyRings() \
        .cryptoKeys() \
        .decrypt(name=crypto_key_id, body={"ciphertext": s}) \
        .execute()
    return base64.b64decode(response['plaintext']).decode('utf-8').strip()

kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
auth = decrypt(kms_client, '...ciphertext...')
You can find more examples and samples on GitHub.
To indirectly answer your question, you may be interested in Secret Manager instead.
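For illustration, reading a secret with Secret Manager looks roughly like this (a sketch assuming the google-cloud-secret-manager package; the project and secret names are placeholders):

from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
# Project id and secret name below are placeholders
name = "projects/my-project/secrets/vpn-shared-secret/versions/latest"
response = client.access_secret_version(request={"name": name})
shared_secret = response.payload.data.decode("utf-8")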

HTTP Deadline exceeded waiting for python Google Cloud Endpoints on python client localhost

I want to build a Python client to talk to my Python Google Cloud Endpoints API. My simple HelloWorld example is suffering from an HTTPException in the Python client and I can't figure out why.
I've set up simple examples as suggested in this extremely helpful thread. The GAE Endpoints API is running on localhost:8080 with no problems; I can successfully access it in the API Explorer. Before I added the offending service = build() line, my simple client ran fine on localhost:8080.
When trying to get the client to talk to the endpoints API, I get the following error:
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/dist27/gae_override/httplib.py", line 526, in getresponse
raise HTTPException(str(e))
HTTPException: Deadline exceeded while waiting for HTTP response from URL: http://localhost:8080/_ah/api/discovery/v1/apis/helloworldendpoints/v1/rest?userIp=%3A%3A1
I've tried extending the HTTP deadline. Not only did that not help, but such a simple first call on localhost should not exceed the default 5s deadline. I've also tried accessing the discovery URL directly in a browser, and that works fine too.
Here is my simple code. First the client, main.py:
import webapp2
import os
import httplib2
from apiclient.discovery import build

http = httplib2.Http()

# HTTPException happens on the following line:
# Note that I am using http, not https
service = build("helloworldendpoints", "v1", http=http,
                discoveryServiceUrl=("http://localhost:8080/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest"))
# result = service.resource().method([parameters]).execute()

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-type'] = 'text/plain'
        self.response.out.write("Hey, this is working!")

app = webapp2.WSGIApplication(
    [('/', MainPage)],
    debug=True)
Here's the Hello World endpoint, helloworld.py:
"""Hello World API implemented using Google Cloud Endpoints.
Contains declarations of endpoint, endpoint methods,
as well as the ProtoRPC message class and container required
for endpoint method definition.
"""
import endpoints
from protorpc import messages
from protorpc import message_types
from protorpc import remote
# If the request contains path or querystring arguments,
# you cannot use a simple Message class.
# Instead, you must use a ResourceContainer class
REQUEST_CONTAINER = endpoints.ResourceContainer(
message_types.VoidMessage,
name=messages.StringField(1),
)
package = 'Hello'
class Hello(messages.Message):
"""String that stores a message."""
greeting = messages.StringField(1)
#endpoints.api(name='helloworldendpoints', version='v1')
class HelloWorldApi(remote.Service):
"""Helloworld API v1."""
#endpoints.method(message_types.VoidMessage, Hello,
path = "sayHello", http_method='GET', name = "sayHello")
def say_hello(self, request):
return Hello(greeting="Hello World")
#endpoints.method(REQUEST_CONTAINER, Hello,
path = "sayHelloByName", http_method='GET', name = "sayHelloByName")
def say_hello_by_name(self, request):
greet = "Hello {}".format(request.name)
return Hello(greeting=greet)
api = endpoints.api_server([HelloWorldApi])
Finally, here is my app.yaml file:
application: <<my web client id removed for stack overflow>>
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /_ah/spi/.*
  script: helloworld.api
  secure: always

# catchall - must come last!
- url: /.*
  script: main.app
  secure: always

libraries:
- name: endpoints
  version: latest
- name: webapp2
  version: latest
Why am I getting an HTTP Deadline Exceeded error, and how do I fix it?
In your main.py you forgot to add some variables to your discovery service URL string, or you just copied the code here without them. By the looks of it, you were probably supposed to use the string format method:
"http://localhost:8080/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest".format(api='helloworldendpoints', apiVersion="v1")
By looking at the logs you'll probably see something like this:
INFO 2015-11-19 18:44:51,562 module.py:794] default: "GET /HTTP/1.1" 500 -
INFO 2015-11-19 18:44:51,595 module.py:794] default: "POST /_ah/spi/BackendService.getApiConfigs HTTP/1.1" 200 3109
INFO 2015-11-19 18:44:52,110 module.py:794] default: "GET /_ah/api/discovery/v1/apis/helloworldendpoints/v1/rest?userIp=127.0.0.1 HTTP/1.1" 200 3719
It's timing out first and then "working".
Move the service discovery request inside the request handler, so it doesn't run while the module is still being loaded:

class MainPage(webapp2.RequestHandler):
    def get(self):
        service = build(
            "helloworldendpoints", "v1",
            http=http,
            discoveryServiceUrl=(
                "http://localhost:8080/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest"
                .format(api='helloworldendpoints', apiVersion='v1')))
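Building the service object per request avoids what looks like a startup deadlock on the dev server: the module-level build() issues an HTTP request to the very server that is still busy importing that module, which matches the 500 followed by successful requests in the log above.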

AWS Python script vs AWS CLI

I downloaded the AWS CLI and was able to successfully list objects from my bucket. But doing the same from a Python script does not work; the request fails with a Forbidden error.
How should I configure boto to use the same default AWS credentials as the AWS CLI?
Thank you
import logging
import sys
import time
import urllib, subprocess, boto, boto.utils, boto.s3

logger = logging.getLogger("test")
formatter = logging.Formatter('%(asctime)s %(message)s')
file_handler = logging.FileHandler("test.log")
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler(sys.stderr)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

# wait until user data is available
while True:
    logger.info('**************************** Test starts *******************************')
    userData = boto.utils.get_instance_userdata()
    if userData:
        break
    time.sleep(5)

bucketName = ''
deploymentDomainName = ''

if bucketName:
    from boto.s3.key import Key
    s3Conn = boto.connect_s3('us-east-1')
    logger.info(s3Conn)
    bucket = s3Conn.get_bucket('testbucket')
    key = Key(bucket)
    key.key = 'test.py'
    key.get_contents_to_filename('test.py')
The CLI command is:
aws s3api get-object --bucket testbucket --key test.py my.py
Is it possible to use the latest Python SDK from Amazon (Boto 3)? If so, set up your credentials as outlined here: Boto 3 Quickstart.
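For reference, a minimal boto3 sketch that picks up the same default credentials the CLI uses (the bucket and key names are taken from your example):

import boto3

# boto3 reads ~/.aws/credentials and ~/.aws/config automatically,
# exactly like the AWS CLI does
s3 = boto3.client('s3')
s3.download_file('testbucket', 'test.py', 'my.py')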
Also, you might check your environment variables. If they don't exist, that is okay. If they don't match those on your account, then that could be the problem, as some AWS SDKs and other tools will use environment variables over the config files.
*nix:
echo $AWS_ACCESS_KEY_ID && echo $AWS_SECRET_ACCESS_KEY
Windows:
echo %AWS_ACCESS_KEY_ID% & echo %AWS_SECRET_ACCESS_KEY%
When you use the CLI, by default it takes credentials from the .aws/credentials file, but for boto you will have to specify the access key and secret key in your Python script.
import boto
import boto.s3.connection

access_key = 'put your access key here!'
secret_key = 'put your secret key here!'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='bucketname.s3.amazonaws.com',
    # is_secure=False,  # uncomment if you are not using ssl
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
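The connection can then be used the same way as in the question, for example (the bucket name is a placeholder):

bucket = conn.get_bucket('testbucket')
# List the bucket contents to verify the credentials work
for key in bucket.list():
    print(key.name)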