I'm in the process of building a Web API with AWS Lambda using .NET Core.
I have run into a problem where the code below works as expected on my Windows machine (it echoes the image back), but when deployed to AWS Lambda, the returned image is broken. On further investigation, the echoed file's size is nearly double the size of the file that was sent when deployed on AWS.
[HttpPost]
public async Task<IActionResult> Post(IFormFile file)
{
    using (var tmpStream = new MemoryStream())
    {
        await file.CopyToAsync(tmpStream);
        var fileExtension = Path.GetExtension(file.FileName);
        return File(tmpStream.ToArray(), file.ContentType);
    }
}
Am I missing some configuration or overlooking something? Could this be related to API Gateway?
(I'm testing the issue via Postman)
Did you look at the contents of the file? My guess is that it's an HTML error result or something.
In this blog post (Serverless ASP.NET Core 2.0 Applications) they mention:
If your web application displays images, we recommend you serve those images from Amazon S3. This is more efficient for returning static content like images, Cascading Style Sheets, etc. Also, to return images from your Lambda function to the browser, you need to do extra configuration in API Gateway for binary data.
See API Gateway for binary data for how to configure that.
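For what it's worth, the size inflation described in the question is consistent with API Gateway passing the Lambda's base64-encoded response body through verbatim instead of decoding it. A quick standalone check (plain Python, no AWS involved) shows the roughly one-third growth; multipart framing or double-encoding can push it higher:

```python
import base64

# Simulate a binary payload like an uploaded image.
payload = bytes(range(256)) * 100  # 25,600 bytes

# Without a matching binary media type, API Gateway returns the
# base64 text itself instead of the decoded bytes, inflating the
# body by roughly one third.
encoded = base64.b64encode(payload)

print(len(payload), len(encoded))  # 25600 34136
```

Decoding the broken response body by hand is a quick way to confirm this is what's happening.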
In case anyone is looking for a solution: in addition to adding "multipart/form-data" as a binary media type in the API Gateway settings, you need to add a model in the method request body of the resource.
Details can be found at https://github.com/aws/aws-lambda-dotnet/issues/635#issuecomment-616226910
Steps:
Add "multipart/form-data" as a binary type in the LambdaEntryPoint.cs file (if that is how it is named).
public class LambdaEntryPoint : APIGatewayProxyFunction
{
    /// <summary>
    /// The builder has configuration, logging and Amazon API Gateway already configured. The startup class
    /// needs to be configured in this method using the UseStartup<>() method.
    /// </summary>
    /// <param name="builder"></param>
    protected override void Init(IWebHostBuilder builder)
    {
        RegisterResponseContentEncodingForContentType("multipart/form-data", ResponseContentEncoding.Base64);
        builder.UseStartup<Startup>();
    }
}
Add BinaryMediaTypes in the settings section of the AWS API Gateway, as shown here: BinaryMediaTypes in API Gateway.
Create a new model for the API Gateway with the following configuration:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "MediaFileUpload",
  "type": "object",
  "properties": {
    "file": { "type": "string" }
  }
}
Update the method request by adding an entry under "Request Body" that uses the created model with a content type of "multipart/form-data" (as shown in the API Gateway resource).
Make sure that you deploy the API so that the changes take effect.
EDIT: Added code and additional steps for clarity, per the comment by @JeremyCaney.
Add "multipart/form-data" under BinaryMediaTypes in the settings section of the AWS API Gateway, then deploy the API (if you have deployed before, deploy again after changing the settings).
I get the following error message when calling actions for CloudWatch in API Gateway.
"Error": {
"Code": "InvalidAction",
"Message": "Could not find operation DescribeAlarms for version 2009-05-15",
"Type": "Sender"
}
I've been using DescribeAlarms for testing. My setup is as follows.
Integration Type = AWS Service
AWS Service = CloudWatch
HTTP method = POST
Action = DescribeAlarms
The error references API version 2009-05-15, which only has ListMetrics and GetMetricStatistics according to its documentation (page 54). ListMetrics does indeed work as expected with my setup.
The current version is 2010-08-01, but I don't see any way to reference that in API Gateway. An example POST request in the documentation shows a header labeled x-amz-target with a value of GraniteServiceVersion20100801.API_Name.
My interpretation is that I can put Name = x-amz-target with the value 'GraniteServiceVersion20100801.DescribeAlarms' in the HTTP headers for the Integration Request in API Gateway.
This doesn't change the response and gives the same error message.
I also used --debug in the CLI when calling describe-alarms, and the request body shows:
"body": {
    "Action": "DescribeAlarms",
    "Version": "2010-08-01"
}
So I also set the HTTP headers to include Content-Type with a value of 'application/x-amz-json-1.1' and used the body
{
"Action":"DescribeAlarms",
"Version":"2010-08-01"
}
but nothing changed with that either.
Any help or guidance would be greatly appreciated.
Under Method Integration -> URL Query String Parameters, I added Version as the Name and '2010-08-01' under Mapped From.
All actions now work as expected.
I'm trying to put metrics directly from API Gateway -> CloudWatch using PutMetricData; Version in the query string params didn't work for me.
These three HTTP headers in the Integration Request solved it for me:
Content-Type 'application/json'
X-Amz-Target 'GraniteServiceVersion20100801.PutMetricData'
Content-Encoding 'amz-1.0'
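To sanity-check those values outside the console, here's a small sketch (plain Python; the helper name and the PutMetricData payload are made up for illustration) that assembles the same headers and JSON body the integration sends to the CloudWatch endpoint:

```python
import json

# Hypothetical helper: build the raw request pieces that the API Gateway
# integration sends to CloudWatch, using the three headers listed above.
def build_cloudwatch_request(action, payload):
    headers = {
        "Content-Type": "application/json",
        "X-Amz-Target": f"GraniteServiceVersion20100801.{action}",
        "Content-Encoding": "amz-1.0",
    }
    body = json.dumps(payload)
    return headers, body

headers, body = build_cloudwatch_request(
    "PutMetricData",
    {"Namespace": "MyApp", "MetricData": [{"MetricName": "Requests", "Value": 1}]},
)
print(headers["X-Amz-Target"])
```

The X-Amz-Target value carries the API version (20100801), which is why it resolves actions that the default 2009-05-15 version does not have.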
I can't see any option anywhere to set up a custom domain for my Google Cloud Function when using HTTP Triggers. Seems like a fairly major omission. Is there any way to use a custom domain instead of their location-project.cloudfunctions.net domain or some workaround to the same effect?
I read an article suggesting using a CDN in front of the function with the function URL specified as the pull zone. This would work, but it introduces unnecessary cost, and in my scenario none of the content can be cached, so a CDN is far from ideal.
If you connect your Cloud project with Firebase, you can connect your HTTP-triggered Cloud Functions to Firebase Hosting to get vanity URLs.
Using Cloudflare Workers (CDN, reverse proxy)
Why? Because it not only lets you set up a reverse proxy over your Cloud Function but also lets you configure things like server-side rendering (SSR) at CDN edge locations, hydrating API responses for the initial (SPA) page load, CSRF protection, DDoS protection, advanced caching strategies, etc.
Add your domain to Cloudflare; then go to DNS settings and add an A record pointing to 192.0.2.1 with the Cloudflare proxy enabled for that record (orange cloud icon).
Create a Cloudflare Worker script similar to this:
function handleRequest(request) {
  const url = new URL(request.url);
  url.protocol = "https:";
  url.hostname = "us-central1-example.cloudfunctions.net";
  url.pathname = `/app${url.pathname}`;
  return fetch(new Request(url.toString(), request));
}

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});
Finally, open the Workers tab in the Cloudflare Dashboard and add a new route mapping your domain URL (pattern) to this worker script, e.g. example.com/* => proxy (script).
For a complete example, refer to GraphQL API and Relay Starter Kit (see web/workers).
Also, vote for "Allow me to put a Custom Domain on my Cloud Function" in the GCF issue tracker.
Another way to do it while avoiding Firebase is to put a load balancer in front of the Cloud Function or Cloud Run and use a "Serverless network endpoint group" as the backend for the load balancer.
Once you have the load balancer set up, just modify the DNS record of your domain to point to the load balancer and you are good to go.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless
It's been a while since this question was asked, but yes, you can now use a custom domain for your Google Cloud Functions.
Go over to Firebase and associate your project with it; what we are interested in here is Hosting. Install the Firebase CLI as per the Firebase documentation (very good and sweet docs there).
Now, as you may have noticed in the docs, to add Firebase to your project you run firebase init. Select Hosting and that's it.
Once you are done, look for the firebase.json file, then customize it like this:
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "/myfunction/custom",
        "function": "myfunction"
      }
    ]
  }
}
By default you get a domain like https://project-name.web.app, but you can add your own domain in the console.
Now deploy your site. Since you are probably not interested in web hosting, you can leave it as is. Your function will now execute like this:
Function to execute > myfunction
Custom url > https://example.com/myfunction/custom
If you don't mind the final appearance of the URL, you could also set up a CNAME DNS record:
function.yourdomain.com -> us-central1******.cloudfunctions.net
Then you could call it like:
function.yourdomain.com/function-1/?message=Hello+World
I want to run a PowerShell script in an AWS Lambda that will update an API Gateway endpoint to require IAM authorization. The API Gateway is auto generated from the swagger generated by another application (.NET Core C#).
I've completed some code that gets me close, but I'm getting an error that I'm not sure how to resolve. Here's what I have so far:
$patchOperation = New-Object -Type Amazon.APIGateway.Model.PatchOperation
$patchOperation.Path = '/ResourceMethods/GET/AuthorizationType'
$patchOperation.Value = 'AWS_IAM'
$patchOperation.Op = 'add'
Update-AGResource -RestApiId $ApiId -ResourceId $resource.Id -PatchOperation $patchOperation
The error I'm getting is:
Invalid patch path '/ResourceMethods/GET/AuthorizationType' specified for op 'add'. Must be one of: []
The desired result is that the API Gateway endpoint specified by the IDs will be updated to require IAM authorization when using the verb GET. Ideally, the operation will be idempotent.
I was able to figure this out by using the network tab in my browser's debugger to see the values sent to the server. The structure for the PatchOperation object I want is:
{"op":"replace","path":"/authorizationType","value":"AWS_IAM"}
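If you're scripting the same change outside PowerShell, the patch document is easy to build and verify first. A plain Python sketch (the boto3 call in the comment is just where it would be applied, with placeholder IDs):

```python
import json

# The working patch document captured from the browser's network tab.
# "replace" is idempotent: re-running it leaves the method unchanged.
IAM_AUTH_PATCH = {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"}

# With boto3 this would be applied once per method, e.g. (not run here):
#   apigw = boto3.client("apigateway")
#   apigw.update_method(restApiId=api_id, resourceId=resource_id,
#                       httpMethod="GET", patchOperations=[IAM_AUTH_PATCH])

print(json.dumps(IAM_AUTH_PATCH, sort_keys=True))
```

Using "replace" instead of "add" matches the error message: authorizationType already exists on every method, so there is nothing valid to "add".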
I also determined this was not a great approach. We generate the OpenAPI doc with SwaggerGen, so we can use an OperationFilter to add the authentication to the endpoint like so:
public class ApiGatewayIntegrationFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        operation.Extensions.Add("x-amazon-apigateway-auth", new ApiGatewayAuth
        {
            Type = "AWS_IAM"
        });
    }
}
Background
We are implementing CKAN on AWS with the DataStore extension and interact with it via the Python CKAN API. AWS is essentially split into two environments:
PRIV: for internal and QA/Staging purposes, hosted in https://private.company.com
PUB: for external end-users, hosted in https://public.company.com
PUB is created via CloudFront and uses Read Replica of the PRIV database. It is basically the same as PRIV except that it is read-only.
Challenge
Resource URLs in PUB point to the PRIV environment. For example, running the PUB_ckan.resource_show(id='123') API call in the public environment returns the following:
{ ...
'datastore_active': False,
'id': '123',
'name': 'Resource 1',
'package_id': 'abc',
'state': 'active',
'url': 'https://private.company.com/dataset/f688/resource/e3c785/download/file.zip',
'url_type': 'upload'
... }
This is the same for files uploaded through the CKAN API or the DataStore extension (in which case they are labeled 'url_type': 'datastore').
Expectation
All of the package/resource metadata should be the same between environments, with the exception of the resource URL, which must reflect the PUB URL so that end-users make API calls against the highly available, secure environment, i.e.:
'url': 'https://public.company.com/dataset/f688/resource/e3c785/download/file.zip'
So far, I have looked into whether the config file contains a setting for using relative URLs and also tried updating the URLs manually via a Python script, both without success. Any help with this would be greatly appreciated.
If you want to keep using the same underlying database then the easiest way is probably a small plugin on the public instance which implements the IResourceController interface and uses the before_show method of that interface to change the resource URL.
Note, however, the following warning from the documentation of before_show:
Be aware that this method is not only called for UI display, but also in other methods like when a resource is deleted because showing a package is used to get access to the resources in a package.
You should therefore definitely test that your URL modifications do not affect other parts of CKAN.
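As a sketch of what that before_show hook might do (plain Python; only the URL-swapping helper is shown, with the hostnames from this question, and the plugin registration elided):

```python
from urllib.parse import urlsplit, urlunsplit

PRIVATE_HOST = "private.company.com"
PUBLIC_HOST = "public.company.com"

def rewrite_resource_url(resource):
    """Swap the private host for the public one. This is the kind of
    change an IResourceController.before_show implementation would
    apply to the resource dict before it is returned."""
    parts = urlsplit(resource.get("url", ""))
    if parts.netloc == PRIVATE_HOST:
        resource["url"] = urlunsplit(parts._replace(netloc=PUBLIC_HOST))
    return resource

res = {"id": "123",
       "url": "https://private.company.com/dataset/f688/resource/e3c785/download/file.zip"}
print(rewrite_resource_url(res)["url"])
```

Keeping the rewrite conditional on the exact private hostname avoids mangling external URLs (linked rather than uploaded resources), which matters given the warning above about before_show being called from non-UI code paths.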