How to query AppSync using e.g. urllib3

I have created an AppSync API on AWS. I can easily query it using the console or some AWS-specific package, but I want to query it using a simple package such as urllib3. It's surprisingly hard to find anyone doing this with a direct API call (everyone uses some kind of AWS-related package, or solutions that I can't seem to get to work). The query I want to run is:
mutation provision {
  provision(
    noteId: {
      ec2Instance: "t2.micro",
      s3Bucket: "dev"
    }
  )
}
I have tried with different variations of:
query = """
mutation provision {
provision(
noteId:
{ec2Instance: "t2.micro",
s3Bucket: "dev"})}
"""
headers = {"x-api-key": api_key}
http = urllib3.PoolManager()
data = json.dumps("query": query, "variables": {}, "operationName": "somename")
r = http.request('POST', url, headers=headers,
data=data.encode('utf8'))
But I somehow cannot get it to work; I keep getting messages that the API can't understand the POST request.

I found the solution:
import requests
import json

APPSYNC_API_KEY = APPSYNC_API_KEY
APPSYNC_API_ENDPOINT_URL = APPSYNC_API_ENDPOINT_URL

headers = {
    'Content-Type': "application/graphql",
    'x-api-key': APPSYNC_API_KEY,
    'cache-control': "no-cache",
}

query = """
mutation provision {
  provision(
    inputParameters: {
      ec2Instance: "t2.micro",
      s3Bucket: "dev"
    }
  )
}
"""

payload_obj = {"query": query}
payload = json.dumps(payload_obj)
response = requests.request("POST", APPSYNC_API_ENDPOINT_URL, data=payload, headers=headers)
print(response)
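If you want to stick with urllib3, as in the original question, roughly the same call would look like the untested sketch below. It reuses query, APPSYNC_API_ENDPOINT_URL and APPSYNC_API_KEY from above and the same headers as the requests version:

import json
import urllib3

http = urllib3.PoolManager()
payload = json.dumps({"query": query})  # json.dumps needs a dict, not bare key/value pairs

r = http.request(
    "POST",
    APPSYNC_API_ENDPOINT_URL,
    body=payload.encode("utf-8"),  # urllib3 takes the raw request body via `body`, not `data`
    headers={
        "Content-Type": "application/graphql",
        "x-api-key": APPSYNC_API_KEY,
    },
)
print(r.status, r.data.decode("utf-8"))

The two differences from the failing attempt are that the payload is a proper dict passed to json.dumps, and that the encoded body is passed via body rather than data.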

Related

How do you set up AWS CloudFront to provide custom access to an S3 bucket with signed cookies using wildcards

AWS Cloudfront with Custom Cookies using Wildcards in Lambda Function:
The problem:
On AWS S3 storage, the preferred method for granular access control is to use AWS CloudFront with signed URLs.
Here is a good example of how to set up CloudFront (a bit old though, so you need to use the recommended settings rather than the legacy ones, and copy the generated policy over to S3):
https://medium.com/#himanshuarora/protect-private-content-using-cloudfront-signed-cookies-fd9674faec3
I have provided an example below of how to create one of these signed URLs using Python and the newest libraries.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html
However, this requires the creation of a signed URL for each item in the S3 bucket. To give wildcard access to a directory of items in the S3 bucket you need to use what is called a custom policy. I could not find any working examples of this code using Python; many of the online examples use libraries that are deprecated. But attached is a working example.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html
I had trouble getting the Python cryptography package to work when building the Lambda function on an Amazon Linux 2 instance on AWS EC2; it always came up with an error about a missing library. So I used Klayers for AWS and it worked:
https://github.com/keithrozario/Klayers/tree/master/deployments
A working example of cookies for a canned policy (meaning a signed URL specific to each S3 file):
https://www.velotio.com/engineering-blog/s3-cloudfront-to-deliver-static-asset
My code for cookies for a custom policy (meaning a single policy statement with URL wildcards etc.). You must use the cryptography package as in the examples, but the private_key.signer function was deprecated in favour of a new private_key.sign function with an extra argument: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#signing
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
import base64
import datetime


class CFSigner:
    def sign_rsa(self, message):
        private_key = serialization.load_pem_private_key(
            self.keyfile, password=None, backend=default_backend()
        )
        signature = private_key.sign(
            message.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1()
        )
        return signature

    def _sign_string(self, message, private_key_file=None, private_key_string=None):
        if private_key_file:
            self.keyfile = open(private_key_file, "rb").read()
        elif private_key_string:
            self.keyfile = private_key_string.encode("utf-8")
        return self.sign_rsa(message)

    def _url_base64_encode(self, msg):
        # CloudFront-safe base64: replace characters that are not valid in URLs/cookies.
        msg_base64 = base64.b64encode(msg).decode("utf-8")
        msg_base64 = msg_base64.replace("+", "-")
        msg_base64 = msg_base64.replace("=", "_")
        msg_base64 = msg_base64.replace("/", "~")
        return msg_base64

    def generate_signature(self, policy, private_key_file=None):
        signature = self._sign_string(policy, private_key_file)
        encoded_signature = self._url_base64_encode(signature)
        return encoded_signature

    def create_signed_cookies2(self, url, private_key_file, keypair_id, expires_at):
        policy = self.create_custom_policy(url, expires_at)
        encoded_policy = self._url_base64_encode(policy.encode("utf-8"))
        signature = self.generate_signature(policy, private_key_file=private_key_file)
        cookies = {
            "CloudFront-Policy": encoded_policy,
            "CloudFront-Signature": signature,
            "CloudFront-Key-Pair-Id": keypair_id,
        }
        return cookies

    def create_signed_cookies(self, object_url, expires_at):
        cookies = self.create_signed_cookies2(
            url=object_url,
            private_key_file="xxx.pem",
            keypair_id="xxxxxxxxxx",
            expires_at=expires_at,
        )
        return cookies

    def create_custom_policy(self, url, expires_at):
        return (
            '{"Statement":[{"Resource":"'
            + url
            + '","Condition":{"DateLessThan":{"AWS:EpochTime":'
            + str(round(expires_at.timestamp()))
            + "}}}]}"
        )


def sign_to_cloudfront(object_url, expires_at):
    # Signed-URL variant; create_signed_url is not shown in this snippet.
    cf = CFSigner()
    url = cf.create_signed_url(
        url=object_url,
        keypair_id="xxxxxxxxxx",
        expire_time=expires_at,
        private_key_file="xxx.pem",
    )
    return url


def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response.get("headers", None)
    cf = CFSigner()
    path = "https://www.example.com/*"
    expire = datetime.datetime.now() + datetime.timedelta(days=3)
    signed_cookies = cf.create_signed_cookies(path, expire)
    headers["set-cookie"] = [{
        "key": "set-cookie",
        "value": f"CloudFront-Policy={signed_cookies.get('CloudFront-Policy')}"
    }]
    headers["Set-cookie"] = [{
        "key": "Set-cookie",
        "value": f"CloudFront-Signature={signed_cookies.get('CloudFront-Signature')}",
    }]
    headers["Set-Cookie"] = [{
        "key": "Set-Cookie",
        "value": f"CloudFront-Key-Pair-Id={signed_cookies.get('CloudFront-Key-Pair-Id')}",
    }]
    print(response)
    return response

Uploading to Amazon S3 using a signed URL

I'm implementing file upload functionality that will be used by an Angular application, but I am having numerous issues getting it to work and need help figuring out what I am missing. Here is an overview of the resources in place, and of the testing and results I'm getting.
Infrastructure
I have an Amazon S3 bucket with versioning enabled, encryption enabled, and all public access blocked.
An API gateway with a Lambda function that generates a pre-signed URL. The code is shown below.
def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
The bucket name and file path are set as part of the class constructor. In this example the bucket and file path are:
def construct_file_names(self):
    self.file_path = self.account_number + '-' + self.user_id + '-' + self.experiment_id + '-experiment-data.json'
    self.bucket_name = self.account_number + '-' + self.user_id + '-resources'
Testing via Postman
Before implementing it within my Angular application, I am testing the upload functionality via Postman.
The response from my API endpoint for the pre-signed URL is shown below.
Using these values, I make another API call from Postman and receive the response below.
Can anybody see what I might be doing wrong here? I have played around with different fields in the boto3 method, but ultimately I am getting 403 errors with different messages related to policy conditions. Any help would be appreciated.
Update 1
I tried to adjust the order of "file" and "acl" but received another error shown below
Update 2 - Using Signature Version 4
I updated the pre-signed URL code, shown below
def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)
When the Lambda function is triggered by the API call, the following is received by Postman
Using the new key values returned from the API, I proceeded to try another test upload. The results are shown below
Once again an error, but I think I'm going in the right direction.
Try moving "acl" above the "file" row. Make sure "file" is at the end.
I finally got this working, so I will post an answer here summarising the steps taken.
Python code for generating the pre-signed URL via boto3 in eu-west-1
Use signature v4 signing - https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
from botocore.config import Config
from botocore.exceptions import ClientError
import boto3
import logging

def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)

def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
Uploading via Postman
Ensure the order is correct with "file" being last
Ensure "Content-Type" matches what you have in the code to generate the URL. In my case it was "". Once added, the conditions error received went away.
S3 Bucket
Enable a CORS policy if required. I needed one and it is shown below, but this link can help - https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
Upload via Angular
My issue arose during testing with Postman, but I was implementing the functionality to be used by an Angular application, which first calls the API for the pre-signed URL and then uploads directly.
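As a rough illustration of that two-step flow in Python (api_url below is a hypothetical placeholder for the API Gateway endpoint that returns the generate_presigned_post response), a minimal sketch could look like this:

import requests

# Step 1: call the (hypothetical) API for the pre-signed POST data:
# {"url": "...", "fields": {"acl": "private", "key": "...", "policy": "...", ...}}
presigned = requests.get(api_url).json()

# Step 2: upload directly to S3. Field order matters: the policy fields
# (including "acl" and "Content-Type") go first, and "file" must come last.
with open("experiment-data.json", "rb") as f:
    data = {**presigned["fields"], "Content-Type": ""}
    files = {"file": ("experiment-data.json", f)}
    upload = requests.post(presigned["url"], data=data, files=files)

print(upload.status_code)  # 204 on success

requests encodes the data fields before the files, which keeps "file" as the last form field, matching the ordering requirement above.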
Summary
Check your S3 infrastructure
Enable CORS if need be
Use Signature Version 4 explicitly in the SDK of your choice, if working in an older region
Ensure the form data order is correct
Hope all these pieces help others who are trying to achieve the same. Thanks for all the hints from SO members.

How to use NextToken in Boto3

The code below was created to export all the findings from Security Hub to an S3 bucket using a Lambda function. The filters are set to export only CIS AWS Foundations Benchmark findings. There are more than 20 accounts added as members in Security Hub. The issue I'm facing is that even though I'm using the NextToken configuration, the output doesn't have information about all the accounts; instead, it just displays one account's data, seemingly at random.
Can somebody look at the code and let me know what the issue could be, please?
import boto3
import json
from botocore.exceptions import ClientError
import time
import glob

client = boto3.client('securityhub')
s3 = boto3.resource('s3')
storedata = {}
_filter = {
    'GeneratorId': [
        {
            'Value': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark',
            'Comparison': 'PREFIX'
        }
    ],
}

def lambda_handler(event, context):
    response = client.get_findings(
        Filters={
            'GeneratorId': [
                {
                    'Value': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark',
                    'Comparison': 'PREFIX'
                },
            ],
        },
    )
    results = response["Findings"]
    while "NextToken" in response:
        response = client.get_findings(Filters=_filter, NextToken=response["NextToken"])
        results.extend(response["Findings"])
        storedata = json.dumps(response)
        print(storedata)
    save_file = open("/tmp/SecurityHub-Findings.json", "w")
    save_file.write(storedata)
    save_file.close()
    for name in glob.glob("/tmp/*"):
        s3.meta.client.upload_file(name, "xxxxx-security-hubfindings", name)
I am also now getting a TooManyRequestsException error.
The problem is in this code that paginates the security findings results:
while "NextToken" in response:
response = client.get_findings(Filters=_filter,NextToken=response["NextToken"])
results.extend(response["Findings"])
storedata = json.dumps(response)
print(storedata)
The value of storedata after the while loop has completed is the last page of security findings, rather than the aggregate of the security findings.
However, you're already aggregating the security findings in results, so you can use that:
save_file = open("/tmp/SecurityHub-Findings.json", "w")
save_file.write(json.dumps(results))
save_file.close()
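As a side note, boto3 also provides a built-in paginator for get_findings, which removes the manual NextToken loop entirely. A rough sketch using the same client and _filter as above (this does not by itself fix the TooManyRequestsException, which is a throttling issue):

paginator = client.get_paginator("get_findings")
results = []
for page in paginator.paginate(Filters=_filter):
    results.extend(page["Findings"])

with open("/tmp/SecurityHub-Findings.json", "w") as save_file:
    save_file.write(json.dumps(results, default=str))  # default=str guards against non-JSON-serializable values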

How to search in AWS Elasticsearch via Lambda?

I am working through the following AWS documentation - https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/search-example.html - to create, essentially, a Lambda function to search the Elasticsearch domain I created. I am not 100% clear on how the search works here. The sample code below makes a GET request to the URL 'https://' + host + '/' + index + '/_search'. What is the "/_search" here? The index is also part of the URL; how do indexing and searching within an index work in Elasticsearch? In the event there are multiple indexes in the ES domain, and we want to set up an API Gateway and Lambda, how can we make it so that we can search within multiple indexes?
import boto3
import json
import requests
from requests_aws4auth import AWS4Auth

region = ''  # For example, us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

host = ''  # For example, search-mydomain-id.us-west-1.es.amazonaws.com
index = 'movies'
url = 'https://' + host + '/' + index + '/_search'

# Lambda execution starts here
def handler(event, context):
    # Put the user query into the query DSL for more accurate search results.
    # Note that certain fields are boosted (^).
    query = {
        "size": 25,
        "query": {
            "multi_match": {
                "query": event['queryStringParameters']['q'],
                "fields": ["fields.title^4", "fields.plot^2", "fields.actors", "fields.directors"]
            }
        }
    }

    # ES 6.x requires an explicit Content-Type header
    headers = {"Content-Type": "application/json"}

    # Make the signed HTTP request
    r = requests.get(url, auth=awsauth, headers=headers, data=json.dumps(query))

    # Create the response and add some extra content to support CORS
    response = {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": '*'
        },
        "isBase64Encoded": False
    }

    # Add the search results to the response
    response['body'] = r.text
    return response
Here is an example. The host is your Elasticsearch endpoint, datacards/datacard is your index (and document type), and _search is primarily a keyword for search. If you use Kibana, this keyword will be there in all of your searches.
url = host + '/datacards/datacard/_search'
I think you can use the Elasticsearch client for Python:
https://elasticsearch-py.readthedocs.io/en/master/
It's more "Pythonic" to query Elasticsearch like this.

Python Fiware Orion Context Broker problems

I can't create an entity.
Payload:
datos = {
    "id": "1",
    "type": "Car"
}
Query:
jsonData = json.dumps(datos)
url = 'http://130.206.113.177:1026/v2/entities'
response = requests.post(url, data=jsonData, headers=head)
Error:
'{"error":"BadRequest","description":"attribute must be a JSON object, unless keyValues option is used"}'
Did you define the head object? I don't see it defined in the code you provided.
My intuition is that you forgot to define the 'Content-Type' header, which must be set to the value:
"Content-Type": "application/json"
On the other hand, defining the headers in the following way works perfectly for me, even using the Orion instance that you pointed out in the description of your question.
import json
import requests
head = {"Content-Type": "application/json"}
datos = { "id": "1", "type": "Car"}
jsonData = json.dumps(datos)
url = 'http://130.206.113.177:1026/v2/entities'
response = requests.post(url, data=jsonData, headers=head)
print(response)
Note that if you invoke your example as it is, you will probably get an HTTP 422 error back, because the entity already exists (the one that I created during the test).
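As an aside, the "attribute must be a JSON object, unless keyValues option is used" error will come back as soon as attributes are added to the entity: in NGSIv2 every attribute must itself be a JSON object with at least a "value" field, unless the request uses ?options=keyValues. A rough sketch (the "speed" attribute and the ids are only illustrative):

import json
import requests

head = {"Content-Type": "application/json"}
url = 'http://130.206.113.177:1026/v2/entities'

# Normalized form: each attribute is an object with at least a "value".
datos = {"id": "2", "type": "Car", "speed": {"value": 80, "type": "Number"}}
requests.post(url, data=json.dumps(datos), headers=head)

# Flat form: keep simple key/values and tell Orion to interpret them as such.
datos_flat = {"id": "3", "type": "Car", "speed": 80}
requests.post(url + '?options=keyValues', data=json.dumps(datos_flat), headers=head)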
Regards!