Upload file to s3 using api gateway and read query params inside lambda - amazon-web-services

I've found this example for uploading an image to an S3 bucket using API Gateway.
In the end the image is stored in the S3 bucket by hitting the following endpoint:
https://abc.execute-api.ap-southeast-1.amazonaws.com/v1/mybucket/myobject.jpeg
I have a Lambda function which accepts several parameters:
public async Task<string> FunctionHandler(MyRequest request, ILambdaContext context)
{
    ...
}

public class MyRequest
{
    public double Price { get; set; }
    public string Name { get; set; }
    public Guid Id { get; set; }
}
My question is:
Is it possible to extend the file upload through API Gateway to accept query parameters (MyRequest in this case) and pass those parameters to the Lambda function? The idea is to trigger the Lambda function once the file is uploaded and read the passed parameters inside the Lambda function.
Another idea is to encode all parameters in the filename, for example:
https://abc.execute-api.ap-southeast-1.amazonaws.com/v1/mybucket/price_20-name_boston-id_123-myobject.jpeg
and then parse the filename inside the Lambda.
Or is there another option you would suggest?
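If you do go with the filename approach, the Lambda side only needs a small key parser. Here is a minimal sketch in Go, assuming the price_20-name_boston-id_123-myobject.jpeg encoding shown above (parseKeyParams is a name I made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseKeyParams splits an S3 object key of the form
// "price_20-name_boston-id_123-myobject.jpeg" into a parameter map
// plus the remaining file name (assumed to be the last '-' segment).
func parseKeyParams(key string) (map[string]string, string) {
	params := map[string]string{}
	parts := strings.Split(key, "-")
	fileName := parts[len(parts)-1]
	for _, p := range parts[:len(parts)-1] {
		kv := strings.SplitN(p, "_", 2)
		if len(kv) == 2 {
			params[kv[0]] = kv[1]
		}
	}
	return params, fileName
}

func main() {
	params, name := parseKeyParams("price_20-name_boston-id_123-myobject.jpeg")
	fmt.Println(params["price"], params["name"], params["id"], name)
	// → 20 boston 123 myobject.jpeg
}
```

The obvious caveat is that parameter values must not themselves contain '-' or '_' with this scheme, so passing the values as query parameters mapped to the integration (or as S3 object metadata) is usually the cleaner option.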

Related

How to set environment variables for complex configuration parameters in AWS lambda using asp.net core 3.1 serverless?

In my ASP.NET Core 3.1 web API's launchSettings.json I have an environment variable named "AdminstratorConfig:AdminstratorPassword": "myPasswordValue".
Now in my code I also have a class named AppSettings defined like this:
public class AppSettings
{
    public AdminstratorConfiguration AdminstratorConfig { get; set; }
}

public class AdminstratorConfiguration
{
    public string AdminstratorPassword { get; set; }
}
When running locally I can bind the environment variable into my AppSettings instance with something like this in Startup:
public class Startup
{
    public IConfiguration Configuration { get; }

    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        var appSettings = new AppSettings();
        Configuration.Bind(appSettings);
        // Here appSettings.AdminstratorConfig.AdminstratorPassword contains value 'myPasswordValue'
    }
}
I can also load the same from my appsettings.json if I have my configuration defined as:
{
    "AdminstratorConfig":
    {
        "AdminstratorPassword": "myPasswordValue"
    }
}
However, after deploying my application as an AWS serverless Lambda, I tried to set the same environment variable in the Lambda configuration section, but it doesn't allow the special character ':' in variable names.
Is there a way to set and load these complex environment variables in AWS Lambda as I do locally? If not, what are the possible alternative approaches?
You can use __ (double underscore) instead of : (colon), so in Lambda the environment variable would have AdminstratorConfig__AdminstratorPassword as the key and myPasswordValue as the value.
See the documentation.
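The mapping the answer relies on can be illustrated outside .NET as well. Below is a small Go sketch (envToConfigKey is my own helper name, not part of any SDK) showing how a double-underscore variable name corresponds to the colon-separated configuration key the application binds against:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// envToConfigKey mimics what .NET's environment-variable configuration
// provider does: a "__" in the variable name is treated as the ":"
// section separator of the configuration key.
func envToConfigKey(envName string) string {
	return strings.ReplaceAll(envName, "__", ":")
}

func main() {
	// The Lambda-safe variable name (no colons allowed there)...
	os.Setenv("AdminstratorConfig__AdminstratorPassword", "myPasswordValue")

	// ...maps back to the hierarchical key the app code expects.
	key := envToConfigKey("AdminstratorConfig__AdminstratorPassword")
	fmt.Println(key, "=", os.Getenv("AdminstratorConfig__AdminstratorPassword"))
	// → AdminstratorConfig:AdminstratorPassword = myPasswordValue
}
```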

Access Amazon S3 public bucket

Hello, I am trying to download data from an Amazon S3 public bucket, for example https://registry.opendata.aws/noaa-gfs-bdp-pds/.
The bucket has a web-accessible folder and I want to download the files inside it.
I know I can do this with the AWS CLI tool, but I want to know whether there is any way to do it with the AWS SDK API (S3 client) in C# / Visual Studio.
I think the issue is authentication: creating an S3 client requires credentials such as an access key. I don't have an AWS account, and the bucket I am trying to reach is public.
Does anyone know how to access this public bucket via the API without any credentials?
Thanks.
If you specify the AnonymousAWSCredentials as the credentials object, any requests that are made to S3 will be unsigned. After that, interacting with the bucket is done like any other call:
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;

namespace S3TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var unsigned = new AnonymousAWSCredentials();
            var client = new AmazonS3Client(unsigned, Amazon.RegionEndpoint.USEast1);
            var listRequest = new ListObjectsRequest
            {
                BucketName = "noaa-gfs-bdp-pds",
                Delimiter = "/",
            };

            ListObjectsResponse listResponse;
            do
            {
                listResponse = client.ListObjects(listRequest);
                foreach (var obj in listResponse.CommonPrefixes)
                {
                    Console.WriteLine("PRE {0}", obj);
                }
                foreach (var obj in listResponse.S3Objects)
                {
                    Console.WriteLine("{0} {1}", obj.Size, obj.Key);
                }
                listRequest.Marker = listResponse.NextMarker;
            } while (listResponse.IsTruncated);
        }
    }
}

Internal error Unable to get object metadata from S3. Check object key, region and/or access permissions in aws Textract awssdk.core

I am trying to run a document analysis request using an S3 bucket, but it gives me an internal error. I am extracting table values from a document. Here is my code; note that I am using the AWS SDK for .NET.
public async Task<IActionResult> Index()
{
    var res = await StartDocumentAnalysis(BucketName, S3File, "TABLES");
    return View();
}

public async Task<string> StartDocumentAnalysis(string bucketName, string key, string featureType)
{
    var request = new StartDocumentAnalysisRequest();
    var s3Object = new S3Object
    {
        Bucket = bucketName,
        Name = key
    };
    request.DocumentLocation = new DocumentLocation
    {
        S3Object = s3Object
    };
    request.FeatureTypes = new List<string> { featureType };

    var response = await _textract.StartDocumentAnalysisAsync(request);
    WaitForJobCompletion(response.JobId, 5000);
    return response.JobId;
}
Error message:
Internal error Unable to get object metadata from S3. Check object key, region and/or access permissions in aws Textract awssdk.core

How to pass AWS Gateway parameter to AWS lambda function in Go

I deployed a REST API as an AWS Lambda function. I am trying to use an API Gateway GET method with the Lambda function to get a single value from a MySQL database. I want to pass the id in the URL. Here is my handler function:
package main

import (
    "strconv"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

type User struct {
    Id   int    `json:"id"`
    Name string `json:"name"`
}

type Ids struct {
    Id int `json:"id"`
}

//func handler(id Ids) (*User, error) {
//    id1 := id.Id
func handler(id events.APIGatewayProxyRequest) (*User, error) {
    id1, _ := strconv.Atoi(id.Body)
    user := &User{
        Id:   id1,
        Name: "abc",
    }
    return user, nil
}

func main() {
    lambda.Start(handler)
}
I get a result when I create a test event in the Lambda console, with the test event for handler(events.APIGatewayProxyRequest) being
{
    "body": 123
}
or, for handler(id Ids),
{
    "id": 123
}
When I use API Gateway to create a GET method for the above Lambda function, I get an "internal server error", since a null value is passed to the database read. I use the URL https://xxxx.xxxxx.amazonaws.com/test/user/123, but it is not reading the value of the id. What am I doing wrong here? Do I need to pass JSON to the handler function? If so, how do I pass JSON in the URL? Or is there another method by which I can pass the value to the handler?
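One approach worth trying (assuming a Lambda proxy integration with the resource path defined as /user/{id}) is to read the id from the event's path parameters rather than the body. The sketch below uses a stand-in proxyRequest struct so it is self-contained; in the real handler this would be events.APIGatewayProxyRequest, which exposes the same PathParameters map:

```go
package main

import (
	"fmt"
	"strconv"
)

// proxyRequest stands in for events.APIGatewayProxyRequest; only the
// field used here is reproduced.
type proxyRequest struct {
	PathParameters map[string]string
}

type User struct {
	Id   int    `json:"id"`
	Name string `json:"name"`
}

// With an API Gateway resource defined as /user/{id}, the path segment
// "123" of /test/user/123 arrives in PathParameters["id"], not in the body.
func handler(req proxyRequest) (*User, error) {
	id, err := strconv.Atoi(req.PathParameters["id"])
	if err != nil {
		return nil, fmt.Errorf("invalid id: %w", err)
	}
	return &User{Id: id, Name: "abc"}, nil
}

func main() {
	u, _ := handler(proxyRequest{PathParameters: map[string]string{"id": "123"}})
	fmt.Println(u.Id, u.Name)
	// → 123 abc
}
```

With a proxy integration the handler would also normally return an events.APIGatewayProxyResponse (status code plus JSON body) instead of the struct directly, so API Gateway can map it to an HTTP response.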

How do we access and respond to CloudFormation custom resources using an AWS Lambda function written in Java?

I have an AWS Lambda function written in Java that I would like to use as part of a response to an AWS CloudFormation function. Amazon provides two detailed examples of how to create a CloudFormation custom resource that returns its value based on an AWS Lambda function written in Node.js, but I have had difficulty translating the Lambda examples into Java. How can we set up our AWS Java function so that it reads the value of the pre-signed S3 URL passed in as a parameter to the Lambda function from CloudFormation, and sends back our desired response to the waiting CloudFormation template?
After back and forth conversation with AWS, here are some code samples I've created that accomplish this.
First of all, assuming you want to leverage the predefined interfaces for creating handlers, you can implement RequestHandler and define the handleRequest method like so:
public class MyCloudFormationResponder implements RequestHandler<Map<String, Object>, Object> {
    public Object handleRequest(Map<String, Object> input, Context context) {
        ...
    }
}
The Map<String, Object> is a Map of the values sent from your CloudFormation resource to the Lambda function. An example CF resource:
"MyCustomResource": {
    "Type": "Custom::String",
    "Version": "1.0",
    "Properties": {
        "ServiceToken": "arn:aws:lambda:us-east-1:xxxxxxx:function:MyCloudFormationResponderLambdaFunction",
        "param1": "my value1",
        "param2": ["t1.micro", "m1.small", "m1.large"]
    }
}
can be analyzed with the following code
String responseURL = (String)input.get("ResponseURL");
context.getLogger().log("ResponseURLInput: " + responseURL);
context.getLogger().log("StackId Input: " + input.get("StackId"));
context.getLogger().log("RequestId Input: " + input.get("RequestId"));
context.getLogger().log("LogicalResourceId Context: " + input.get("LogicalResourceId"));
context.getLogger().log("Physical Context: " + context.getLogStreamName());

@SuppressWarnings("unchecked")
Map<String, Object> resourceProps = (Map<String, Object>)input.get("ResourceProperties");
context.getLogger().log("param 1: " + resourceProps.get("param1"));

@SuppressWarnings("unchecked")
List<String> myList = (ArrayList<String>)resourceProps.get("param2");
for (String s : myList) {
    context.getLogger().log(s);
}
The key things to point out here, beyond what is explained in the Node.js examples in the AWS documentation, are:
(String)input.get("ResponseURL") is the pre-signed S3 URL that you need to respond back to (more on this later)
(Map<String,Object>)input.get("ResourceProperties") returns the map of your CloudFormation custom resource "Properties" passed into the Lambda function from your CF template. I provided a String and ArrayList as two examples of object types that can be returned, though several others are possible
In order to respond back to the CloudFormation template custom resource instantiation, you need to execute an HTTP PUT call back to the ResponseURL previously mentioned and include most of the following fields in the variable cloudFormationJsonResponse. Below is how I've done this
try {
    URL url = new URL(responseURL);
    HttpURLConnection connection = (HttpURLConnection)url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("PUT");
    OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());

    JSONObject cloudFormationJsonResponse = new JSONObject();
    try {
        cloudFormationJsonResponse.put("Status", "SUCCESS");
        cloudFormationJsonResponse.put("PhysicalResourceId", context.getLogStreamName());
        cloudFormationJsonResponse.put("StackId", input.get("StackId"));
        cloudFormationJsonResponse.put("RequestId", input.get("RequestId"));
        cloudFormationJsonResponse.put("LogicalResourceId", input.get("LogicalResourceId"));
        cloudFormationJsonResponse.put("Data", new JSONObject().put("CFAttributeRefName", "some String value useful in your CloudFormation template"));
    } catch (JSONException e) {
        e.printStackTrace();
    }

    out.write(cloudFormationJsonResponse.toString());
    out.close();
    int responseCode = connection.getResponseCode();
    context.getLogger().log("Response Code: " + responseCode);
} catch (IOException e) {
    e.printStackTrace();
}
Of particular note is the node "Data" above which references an additional com.amazonaws.util.json.JSONObject in which I include any attributes that are required in my CloudFormation template. In this case, it would be retrieved in CF template with something like { "Fn::GetAtt": [ "MyCustomResource", "CFAttributeRefName" ] }
Finally, you can simply return null, since nothing needs to be returned from this function; it is the HttpURLConnection that actually responds to the CF call.
Neil,
I really appreciate your great documentation here. I would add a few things that I found useful:
input.get("RequestType") - This comes back as "Create", "Delete", etc. You can use this value to determine what to do when a stack is created, deleted, and so on.
As far as security goes, I uploaded the Lambda functions and set the VPC, subnets, and security group (default) manually so I can reuse them with several CloudFormation scripts. That seems to be working okay.
I created one Lambda function that gets called by the CF scripts and one I can run manually in case the first one fails.
This excellent Gradle AWS plugin makes it easy to upload Java Lambda functions to AWS:
Gradle AWS Plugin