How to access params in Go AWS lambda function - amazon-web-services

I'm using Go on AWS lambda and I'm writing the following code:
func HanldeLambdaFunction(ctx context.Context, request events.APIGatewayProxyRequest) (Response, error) {
	limitString := request.QueryStringParameters["limit"]
	fmt.Println("limitString", limitString) // nothing is written
}
I'm testing the Lambda function through API Gateway and providing the parameter:
limit=2
The Lambda function executes successfully, but the limit is never read.
How can I access the params?

The problem with the code you posted is that it should not even compile. The biggest issue is that there is no Response struct; it should probably be events.APIGatewayProxyResponse.
Furthermore, the code does not return anything, even though its signature says it returns Response and error.
I took your code, fixed all of this, and it works for me. The fixed code looks like this:
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func HanldeLambdaFunction(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	limitString := request.QueryStringParameters["limit"]
	fmt.Println("limitString", limitString)
	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
	lambda.Start(HanldeLambdaFunction)
}
The output is:
START RequestId: 0c63f94f-b0aa-49de-ba6d-b1150d711b8a Version: $LATEST
limitString 2
END RequestId: 0c63f94f-b0aa-49de-ba6d-b1150d711b8a
REPORT RequestId: 0c63f94f-b0aa-49de-ba6d-b1150d711b8a Duration: 0.56 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 34 MB
If I had to guess, I would say your code does not even run. If it ran but could not read the limit parameter, it would at least print limitString.
Remember, when you compile a Go binary for AWS Lambda, you need to compile it for Linux and amd64, like so:
$ GOOS=linux GOARCH=amd64 go build
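As a small follow-up (a minimal sketch, not part of the original answer): once the raw query string value is available, you would typically convert it with strconv.Atoi and fall back to a default when the parameter is missing or malformed, for example:
package main

import (
	"fmt"
	"strconv"

	"github.com/aws/aws-lambda-go/events"
)

// limitFromRequest reads the "limit" query parameter and falls back to the
// given default when the parameter is missing or not a valid integer.
func limitFromRequest(request events.APIGatewayProxyRequest, def int) int {
	if n, err := strconv.Atoi(request.QueryStringParameters["limit"]); err == nil {
		return n
	}
	return def
}

func main() {
	req := events.APIGatewayProxyRequest{
		QueryStringParameters: map[string]string{"limit": "2"},
	}
	fmt.Println(limitFromRequest(req, 10)) // prints 2
}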

Related

BigQuery Storage Write / managedwriter api return error server_shutting_down

Given the advantages of the BigQuery Storage Write API, we replaced insertAll with the managedwriter API on our server about a month ago. It seemed to work well for a month; however, we have recently been seeing the following errors:
rpc error: code = Unavailable desc = closing transport due to: connection error:
desc = "error reading from server: EOF", received prior goaway: code: NO_ERROR,
debug data: "server_shutting_down"
The versions of the managedwriter API dependencies are:
cloud.google.com/go/bigquery v1.25.0
google.golang.org/protobuf v1.27.1
There is retry logic for the Storage Write API on our server side that detects these error messages. We notice that the response time of the Storage Write API becomes longer after retrying, and as a result our server runs out of memory (OOM). We also tried increasing the request timeout to 30 seconds, and most of those requests still could not be completed within it.
How to handle the error server_shutting_down correctly?
Update 02/08/2022
Our server uses the default stream of the managedwriter API, and the server_shutting_down error comes up periodically. The issue started on 02/04/2022 12:00 PM UTC; before that, the default stream had worked well for over a month.
Here is a wrapper function around AppendRows; we log how long this function takes.
func (cl *GBOutput) appendRows(ctx context.Context, datas [][]byte, schema *gbSchema) error {
	var result *managedwriter.AppendResult
	var err error
	if cl.schema != schema {
		cl.schema = schema
		result, err = cl.managedStream.AppendRows(ctx, datas, managedwriter.UpdateSchemaDescriptor(schema.descriptorProto))
	} else {
		result, err = cl.managedStream.AppendRows(ctx, datas)
	}
	if err != nil {
		return err
	}
	_, err = result.GetResult(ctx)
	return err
}
When the server_shutting_down error comes up, this function can take several hundred seconds. It is very strange, and there seems to be no way to put a timeout on the append itself.
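One way to at least bound how long the wrapper blocks is to wrap the acknowledgement wait in a context deadline. This is only a sketch and assumes GetResult returns once the context is cancelled; the append itself may still complete on the server side:
import (
	"context"
	"time"

	"cloud.google.com/go/bigquery/storage/managedwriter"
)

// getResultWithTimeout bounds the wait on the append acknowledgement with a
// context deadline. Sketch only: it assumes GetResult honors context
// cancellation.
func getResultWithTimeout(ctx context.Context, result *managedwriter.AppendResult, d time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, d)
	defer cancel()
	_, err := result.GetResult(ctx)
	return err
}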
Are you using the "raw" v1 storage API, or the managedwriter? I ask because the managedwriter should handle stream reconnection automatically. Are you simply observing connection closes periodically, or does something about your retry traffic induce the closes?
The interesting question is how to deal with in-flight appends for which you haven't yet received an acknowledgement back (or the ack ended in failure). If you're using offsets, you should be able to re-send the append without risk of duplication.
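To illustrate the offset-based idea, here is a sketch; it assumes the WithOffset append option in the managedwriter package (check the version you are on) and an explicitly created stream, since the default stream does not accept offsets:
import (
	"context"

	"cloud.google.com/go/bigquery/storage/managedwriter"
)

// appendAtOffset re-sends rows at an explicit offset, so that retrying an
// append whose acknowledgement was lost cannot create duplicates.
func appendAtOffset(ctx context.Context, ms *managedwriter.ManagedStream, rows [][]byte, offset int64) error {
	result, err := ms.AppendRows(ctx, rows, managedwriter.WithOffset(offset))
	if err != nil {
		return err
	}
	_, err = result.GetResult(ctx)
	return err
}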
Per the GCP support guy,
The issue is hit once 10MB has been sent over the connection, regardless of how long it takes or how much is inflight at that time. The BigQuery Engineering team has identified the root cause and the fix would be rolled out by Friday, Feb 11th, 2022.

AWS Lambda - Body Size is Too Large Error, but Body Size is Under Limit

I am using a Lambda function to service a REST API. In one endpoint I am getting "body size is too long" printed to the CloudWatch log.
The response I get from the function is status code 502 with response body { "message": "Internal server error" }. If I call the same endpoint but use a filter, the response body size is 2.26 MB and it works. This rules out that I am hitting the asynchronous response payload limit.
The response body size when it errors out is 5622338 bytes (5.36 MB).
This is how I am calculating the response size (python 2.7):
import urllib2
...
out = {}
resp = urllib2.urlopen(req)
out['statusCode'] = resp.getcode()
out['body'] = resp.read()
print("num bytes: " + str(len(bytearray(out['body'], 'utf-8'))))
The advertised maximum response body size for synchronous invocations is 6 MB. From what I understand, I should not be receiving the error.
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
Other information:
Duration: 22809.11 ms Billed Duration: 22810 ms Memory Size: 138 MB Max Memory Used: 129 MB Init Duration: 1322.71 ms
Any help would be appreciated.
Update 4/22/21
After further research I found that the lambda function errors out if the size of the response is 1,048,574 bytes (0.999998 MB) or more.
If the response is 1,048,573 bytes (0.999997 MB) or less it works.
This is how I am returning responses. I hard code the view function to return a bytearray of a specific size.
Ex.
return bytearray(1048573)
I turned on logging for the API Gateway stage that I am using, and the following error is getting written to its log. It implies that the function itself is erroring out, not the invocation of the function:
Lambda execution failed with status 200 due to customer function error: body size is too long.
It's my understanding that AWS Lambda functions have a max response size of 6 MB and API Gateway has a max response size of 10 MB.
AWS Lambda: Invocation payload response
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
API Gateway: API Gateway quotas for configuring and running REST API -> Payload size
https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Am I misunderstanding the limits?
I created a new Lambda function on Python 3.7 and found that it returns a more descriptive error. I was able to determine that roughly 1 MB gets added to the size of the response after it leaves my handler code, which explains why it was erroring out in the Lambda function but not in the endpoint code. In one case it added 0.82 MB and in another 0.98 MB, so the overhead seems to scale with the size of the response. I suspect the response is getting base64 encoded, URL encoded, or something similar. I did not find documentation that could confirm this, and the responses were not encoded in any way on the receiving end.
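One quick way to test that suspicion is to compare the raw payload length with its base64-encoded and JSON-escaped lengths. A sketch in Go (the payload is synthetic; the actual cause of the overhead remains an assumption):
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// A synthetic payload standing in for the real response body.
	payload := bytes.Repeat([]byte(`{"key":"value"}`), 100000)

	encoded := base64.StdEncoding.EncodeToString(payload)
	escaped, _ := json.Marshal(string(payload))

	fmt.Println("raw bytes:         ", len(payload))
	fmt.Println("base64 bytes:      ", len(encoded))
	fmt.Println("json-escaped bytes:", len(escaped))
}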
Error message returned by the Lambda function built with Python 3.7:
Response payload size (8198440 bytes) exceeded maximum allowed payload size (6291556 bytes).
Lambda Function Code:
import requests, base64, json
def lambda_handler(event, context):
headers = {...}
url = ...
response = requests.get(url, headers=headers)
print (len(response.content))
print (len(response.content.decode("utf-8")))
return response.content.decode('UTF-8')
Below I have the size of the responses when printed inside the lambda function and the size of the response as determined by lambda in the error message.
I used these two print statements to determine the size of the response, and they always returned the same length. response.content is a byte string, so my thought process is that taking its length returns the number of bytes in the response:
print (len(response.content))
print (len(response.content.decode("utf-8")))
Ex 1.
Size when printed inside the lambda function:
7,165,488 (6.8335 MB)
Size as defined in the error message:
8,198,440 (7.8186 MB)
Extra:
1,032,952 (0.9851 MB)
Error Message:
Response payload size (8198440 bytes) exceeded maximum allowed payload size (6291556 bytes).
Ex 2.
Size when printed inside the lambda function:
5,622,338 (5.3619 MB)
Size as defined in the error message:
6,482,232 (6.1819 MB)
Extra:
859,894 (0.820059 MB)
Error Message:
Response payload size (6482232 bytes) exceeded maximum allowed payload size (6291556 bytes).
I have decided to lower the soft limit in the endpoint code by another 1 MB (4.7MB) to prevent hitting the limit in the lambda function.
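A minimal sketch of that kind of guard in Go (the names and the exact limit are illustrative; the point is simply to fail fast instead of handing API Gateway an oversized body):
import (
	"fmt"

	"github.com/aws/aws-lambda-go/events"
)

// maxBodyBytes is a deliberately conservative soft limit, well under the
// documented 6 MB synchronous payload limit, to leave headroom for any
// encoding overhead added after the handler returns.
const maxBodyBytes = 4700000

// respond returns a 413 instead of an oversized body.
func respond(body string) (events.APIGatewayProxyResponse, error) {
	if len(body) > maxBodyBytes {
		return events.APIGatewayProxyResponse{
			StatusCode: 413,
			Body:       fmt.Sprintf("response too large: %d bytes", len(body)),
		}, nil
	}
	return events.APIGatewayProxyResponse{StatusCode: 200, Body: body}, nil
}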
I tried the same in Go. events.APIGatewayV2HTTPResponse.Body could be around 5,300,000 bytes (I didn't test the exact limit). The value was measured inside the Lambda function with len(resp.Body).

V-lang: How to send +2500 HTTP requests per second?

I am planning to write my scraper in V and I need to send an estimated ~2500 requests per second, but I can't figure out what I am doing wrong. It should be sending the requests concurrently, but it is deadly slow right now. It feels like I'm doing something really wrong, but I can't figure it out.
import net.http
import sync
import time

fn send_request(mut wg sync.WaitGroup) ?string {
	start := time.ticks()
	data := http.get('https://google.com')?
	finish := time.ticks()
	println('Finish getting time ${finish - start} ms')
	wg.done()
	return data.text
}

fn main() {
	mut wg := sync.new_waitgroup()
	for i := 0; i < 50; i++ {
		wg.add(1)
		go send_request(mut wg)
	}
	wg.wait()
}
Output:
...
Finish getting time 2157 ms
Finish getting time 2173 ms
Finish getting time 2174 ms
Finish getting time 2200 ms
Finish getting time 2225 ms
Finish getting time 2380 ms
Finish getting time 2678 ms
Finish getting time 2770 ms
V Version: 0.1.29
System: Ubuntu 20.04
You're not doing anything wrong. I'm getting similar results in multiple languages and in multiple ways. Many sites run rate-limiting software that prevents repeated reads like this; that's what you're running up against.
You could try using channels now that they're in, but you'll still run up against the rate limiter.
The best way to send that many requests is to use what is called a HEAD request. It relies on the status code rather than a response body, since it doesn't return one, which is what makes the HTTP requests faster.
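For illustration, here is the same idea as a sketch in Go (the target URL is a placeholder): issue HEAD requests concurrently and look only at the status code, so no response body has to be downloaded.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			start := time.Now()
			resp, err := http.Head("https://google.com")
			if err != nil {
				fmt.Println("error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(resp.StatusCode, "in", time.Since(start))
		}()
	}
	wg.Wait()
}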

Memory crash on sending 100MB+ file to S3 on chrome

I'm currently using JavaScript to upload some video files to S3. The process works for files under 100 MB, but for roughly 100 MB and above on Chrome I run into an error (it works on Safari). I am using ManagedUpload in this example, which should be doing a multipart upload in the background.
Code snippet:
...
let upload = new AWS.S3.ManagedUpload({
  params: {
    Bucket: 'my-bucket',
    Key: videoFileName,
    Body: videoHere,
    ACL: "public-read"
  }
});
upload.promise();
...
Chrome crashes with the error RESULT_CODE_INVALID_CMDLINE_URL, dev tools crash, and in the Chrome terminal logs I get this:
[5573:0x3000000000] 27692 ms: Scavenge 567.7 (585.5) -> 567.7 (585.5) MB, 23.8 / 0.0 ms (average mu = 0.995, current mu = 0.768) allocation failure
[5573:0x3000000000] 28253 ms: Mark-sweep 854.6 (872.4) -> 609.4 (627.1) MB, 235.8 / 0.0 ms (+ 2.3 ms in 2 steps since start of marking, biggest step 1.4 ms, walltime since start of marking 799 ms) (average mu = 0.940, current mu = 0.797) allocation fa
<--- JS stacktrace --->
[5573:775:0705/140126.808951:FATAL:memory.cc(38)] Out of memory. size=0
[0705/140126.813085:WARNING:process_memory_mac.cc(93)] mach_vm_read(0x7ffee4199000, 0x2000): (os/kern) invalid address (1)
[0705/140126.880084:WARNING:system_snapshot_mac.cc(42)] sysctlbyname kern.nx: No such file or directory (2)
I've also tried using HTTP PUT; both approaches work for smaller files, but once the files get bigger they both crash.
Any ideas? I've been through tons of SO posts / AWS docs but nothing helped this issue yet.
Edit: I've filed the issue with Chrome; it seems like it's an actual bug. I will update this post when I have an answer.
This issue came from loading the big file into memory (several times), which would crash Chrome before it even had a chance to upload.
The fix was using createObjectURL (a URL pointing to the file) instead of readAsDataURL (which loads the entire file into memory). When sending the file to your API, use const newFile = new File([await fetch(objectURL).then(req => req.blob())], 'example.mp4', {type: 'video/mp4'});
This worked for me because I was doing many conversions to get the readAsDataURL result into the file type I wanted; this way I use much less memory.

How to test bytes.ErrTooLarge panic error

I want to simulate the bytes.ErrTooLarge panic from the bytes.Buffer.Write method and test my panic handling. I have tried writing an unlimited amount of data to exhaust memory, but then the whole test crashed. What are the other options?
Sounds like a job for a mock object. Use this (badBuffer) in place of your bytes.Buffer during your test.
type badBuffer bytes.Buffer

// Write panics with bytes.ErrTooLarge, just like bytes.Buffer does when it
// cannot grow, so the caller's panic handling can be exercised.
func (b *badBuffer) Write(p []byte) (n int, err error) {
	panic(bytes.ErrTooLarge)
}
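For completeness, here is a sketch of how the mock could be used in a test; writeAll is a hypothetical stand-in for the code under test that recovers the panic and turns it into an error:
package main

import (
	"bytes"
	"errors"
	"io"
	"testing"
)

// writeAll is a hypothetical function under test: it recovers a
// bytes.ErrTooLarge panic from the writer and returns it as an error.
func writeAll(w io.Writer, p []byte) (err error) {
	defer func() {
		if r := recover(); r != nil {
			if e, ok := r.(error); ok && errors.Is(e, bytes.ErrTooLarge) {
				err = e
				return
			}
			panic(r)
		}
	}()
	_, err = w.Write(p)
	return err
}

func TestWriteTooLarge(t *testing.T) {
	var b badBuffer
	if err := writeAll(&b, []byte("payload")); !errors.Is(err, bytes.ErrTooLarge) {
		t.Fatalf("expected bytes.ErrTooLarge, got %v", err)
	}
}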