Best approach to handle GraphQL for AWS Lambda? - amazon-web-services

I'm following the tutorial https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html
and I have some doubts about using just a switch statement to handle GraphQL queries.
Is there a better approach for handling more complicated requests?

The choice is yours as to how to set up Lambda within your AppSync API. It is entirely reasonable to have one Lambda function per resolver, with each function responsible for a single field. You can alternatively take the tutorial's approach and use a single function with some lightweight routing code that dispatches to the correct resolver. Using a single function can often offer performance benefits because of how Lambda container warming works (especially for Java & C#, where VM startup time can add up), but it gives you less separation of concerns.
Here are some approaches I have taken in the past:
Option 1: JS
This approach uses JavaScript and should feel familiar to those who have run their own GraphQL servers before.
const Resolvers = {
    Query: {
        me: (source, args, identity) => getLoggedInUser(args, identity)
    },
    Mutation: {
        login: (source, args, identity) => loginUser(args, identity)
    }
}

exports.handler = (event, context, callback) => {
    // We are going to wire up the resolver template to pass all of this
    // information in this format.
    const { TypeName, FieldName, Identity, Arguments, Source } = event
    const typeResolver = Resolvers[TypeName]
    if (!typeResolver) {
        return callback(new Error(`No resolvers found for type: "${TypeName}"`))
    }
    const fieldResolver = typeResolver[FieldName]
    if (!fieldResolver) {
        return callback(new Error(`No resolvers found for field: "${FieldName}" on type: "${TypeName}"`))
    }
    // Wrap in Promise.resolve so both sync and async resolvers work.
    Promise.resolve(fieldResolver(Source, Arguments, Identity))
        .then(result => callback(null, result))
        .catch(callback)
};
You can then use a standard Lambda resolver from AppSync. For now, we have to provide the TypeName and FieldName manually in the request mapping template.
#**
The value of 'payload' after the template has been evaluated
will be passed as the event to AWS Lambda.
*#
{
    "version" : "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "TypeName": "Query",
        "FieldName": "me",
        "Arguments": $util.toJson($context.arguments),
        "Identity": $util.toJson($context.identity),
        "Source": $util.toJson($context.source)
    }
}
Option 2: Go
For the curious, I have also used Go Lambda functions successfully with AppSync. Here is one approach that has worked well for me.
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/fatih/structs"
    "github.com/mitchellh/mapstructure"
)

type GraphQLPayload struct {
    TypeName  string                 `json:"TypeName"`
    FieldName string                 `json:"FieldName"`
    Arguments map[string]interface{} `json:"Arguments"`
    Source    map[string]interface{} `json:"Source"`
    Identity  map[string]interface{} `json:"Identity"`
}

type ResolverFunction func(source, args, identity map[string]interface{}) (data map[string]interface{}, err error)

type TypeResolverMap = map[string]ResolverFunction
type SchemaResolverMap = map[string]TypeResolverMap

func resolverMap() SchemaResolverMap {
    return map[string]TypeResolverMap{
        "Query": map[string]ResolverFunction{
            "me": getLoggedInUser,
        },
    }
}

func Handler(ctx context.Context, event GraphQLPayload) (map[string]interface{}, error) {
    // Almost the same routing logic as the JS option.
    resolvers := resolverMap()
    typeResolver := resolvers[event.TypeName]
    if typeResolver == nil {
        return nil, fmt.Errorf("no type resolver for type %s", event.TypeName)
    }
    fieldResolver := typeResolver[event.FieldName]
    if fieldResolver == nil {
        return nil, fmt.Errorf("no field resolver for field %s", event.FieldName)
    }
    return fieldResolver(event.Source, event.Arguments, event.Identity)
}

func main() {
    lambda.Start(Handler)
}

/**
 * Resolver Functions
 */

/**
 * Get the logged in user
 */
func getLoggedInUser(source, args, identity map[string]interface{}) (data map[string]interface{}, err error) {
    // Decode the map[string]interface{} into a struct I defined
    var typedArgs myModelPackage.GetLoggedInUserArgs
    err = mapstructure.Decode(args, &typedArgs)
    if err != nil {
        return nil, err
    }
    // ... do work
    res, err := auth.GetLoggedInUser()
    if err != nil {
        return nil, err
    }
    // Map the struct back to a map[string]interface{}
    return structs.Map(res), nil
}

// ... Add as many more as needed
You can then use the same resolver request mapping template as in option 1. There are many other ways to do this, but this is one method that has worked well for me.
Hope this helps :)

You are not forced to handle every request with one single AWS Lambda function. The tutorial uses that approach because it is easier for newcomers to follow.
Ultimately, the implementation is up to you. An alternative is to create a separate AWS Lambda function for each resolver, which eliminates the switch and follows the Single Responsibility Principle (SRP).
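As a hypothetical sketch (the getLoggedInUser helper and the event shape are assumptions, matching the request template from the first answer), a per-resolver function reduces to just the business logic, with no routing at all:

// This hypothetical function is wired to exactly one field (Query.me),
// so no switch or routing table is needed.
exports.handler = async (event) => {
    const { Arguments, Identity } = event;
    return getLoggedInUser(Arguments, Identity);
};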

You can also proxy all of the queries to a GraphQL server.

Apollo GraphQL Server provides a very good setup for deploying a GraphQL server to AWS Lambda.
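For example, here is a minimal sketch using the apollo-server-lambda package (the schema and resolver are placeholders):

const { ApolloServer, gql } = require('apollo-server-lambda');

// Placeholder schema and resolver, just to show the wiring.
const typeDefs = gql`
    type Query {
        hello: String
    }
`;
const resolvers = {
    Query: {
        hello: () => 'Hello from Lambda!',
    },
};

const server = new ApolloServer({ typeDefs, resolvers });

// Apollo produces a Lambda-compatible handler for API Gateway.
exports.handler = server.createHandler();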

Related

How to write a custom unmarshaller for AWS ION?

I'm using Amazon ION to marshal and unmarshal data received from various AWS services.
I need to write a custom unmarshal function, and I found an example of how that's achieved in Amazon ION's official docs (see here).
Using that example, I have written the code below:
package main

import (
    "bytes"
    "fmt"

    "github.com/amzn/ion-go/ion"
)

func main() {
    UnmarshalCustomMarshaler()
}

type unmarshalMe struct {
    Name   string
    custom bool
}

func (u *unmarshalMe) UnmarshalIon(r ion.Reader) error {
    fmt.Print("UnmarshalIon called")
    u.custom = true
    return nil
}

func UnmarshalCustomMarshaler() {
    ionBinary, err := ion.MarshalBinary(unmarshalMe{
        Name: "John Doe",
    })
    if err != nil {
        fmt.Println("Error marshalling ion binary: ", err)
        panic(err)
    }
    dec := ion.NewReader(bytes.NewReader(ionBinary))
    var decodedResult unmarshalMe
    ion.UnmarshalFrom(dec, &decodedResult)
    fmt.Println("Decoded result: ", decodedResult)
}
Problem: the above code does not work as expected. The UnmarshalIon function is not called, even though according to the docs it should be. What am I doing wrong?
You are probably using v1.1.3, the default version, which does not contain this feature. Upgrading github.com/amzn/ion-go to a newer release should make UnmarshalIon get called.

What's the equivalent of `grpc.WithPerRPCCredentials` in C++?

We have some Go code which uses a JWT as per-RPC credentials with gRPC. I'm trying to implement a client in C++, but I can't figure out what the equivalent of the grpc.WithPerRPCCredentials dial option is in C++. The Go code is like this:
type claims struct {
    claims map[string]string
    secure bool
}

func (c claims) GetRequestMetadata(ctx context.Context, a ...string) (map[string]string, error) {
    return c.claims, nil
}

func (c claims) RequireTransportSecurity() bool {
    return c.secure
}

func connect() {
    clms := claims{
        claims: make(map[string]string),
        secure: !c.insecure,
    }
    clms.claims["token"] = ourJWT
    conn, err := grpc.DialContext(context.Background(), addr, grpc.WithPerRPCCredentials(clms))
}
What's the equivalent way in C++ to create credentials which attaches a JWT to each request?
The way in C++ to create credentials which attach metadata to each request is to use MetadataCredentialsPlugin, documented at https://grpc.io/docs/guides/auth/#extending-grpc-to-support-other-authentication-mechanisms.

AWS lambda - how to use conditionals depending on if query parameters exist?

I want my function to return a list of everything in a table if there are no query parameters, and a single row if the parameter id exists:
var mysql = require('mysql');
var config = require('./config.json');

var pool = mysql.createPool({
    host     : config.host,
    user     : config.user,
    password : config.password,
    database : config.database
});

exports.handler = (event, context, callback) => {
    var whereClause
    if (event.queryStringParameters.id !== null) {
        let id = event.queryStringParameters.id
        whereClause = ' where id=' + id
    }
    context.callbackWaitsForEmptyEventLoop = false;
    pool.getConnection(function(err, connection) {
        // Use the connection
        connection.query('SELECT * from users' + whereClause, function (error, results, fields) {
            // And done with the connection.
            connection.release();
            // Handle error after the release.
            if (err) callback(err);
            else {
                var response = {
                    "statusCode": 200,
                    "headers": {
                        "my_header": "my_value"
                    },
                    "body": JSON.stringify(results),
                    "isBase64Encoded": false
                };
                callback(null, response);
            }
        });
    });
};
The function fails when no query parameter is present, with the error:
"Cannot read property 'id' of null"
Why is that?
You didn't supply any line number information or a stack trace, so I'm guessing this if statement fails because event.queryStringParameters is null:
if (event.queryStringParameters.id !== null) {
    let id = event.queryStringParameters.id
    whereClause = ' where id=' + id
}
And you should instead write:
if (event.queryStringParameters && event.queryStringParameters.id !== null) {
    let id = event.queryStringParameters.id;
    whereClause = ' where id=' + id;
}
Having said that, you should not inject user-supplied values (such as id) into SQL queries using string concatenation. This opens you up to a SQL Injection attack. Here are ideas for how to write this code more safely: How to prevent SQL Injection in Node.js
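For example, here is a minimal sketch using the mysql library's placeholder syntax, which escapes the value for you (same connection and callback shape as in the question):

// The "?" placeholder is replaced with the safely escaped value of id,
// so user input can never change the structure of the query.
connection.query('SELECT * from users WHERE id = ?', [id], function (error, results, fields) {
    // ... build and return the response exactly as before
});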
Are you using AWS Lambda with Amazon API Gateway?
In that case, make sure that you create a body mapping template in API Gateway (Integration Request -> Body Mapping Templates). As an example, here's a body mapping template that would pass along the query parameter id to your Lambda function: { "id": "$input.params('id')" }
(Source: AWS Developer Forums)

Stripe Error: No signatures found matching the expected signature for payload

I have a Stripe webhook that calls a Firebase function. In this function I need to verify that the request comes from Stripe's servers. Here is the code:
const functions = require('firebase-functions');
const bodyParser = require('body-parser');
const stripe = require("stripe")("sk_test_****");
const endpointSecret = 'whsec_****';
const app = require('express')();

app.use(bodyParser.json({
    verify: function (req, res, buf) {
        var url = req.originalUrl;
        if (url.startsWith('/webhook')) {
            req.rawBody = buf.toString()
        }
    }
}));

app.post('/webhook/example', (req, res) => {
    let sig = req.headers["stripe-signature"];
    try {
        console.log(req.bodyRaw)
        let event = stripe.webhooks.constructEvent(req.body, sig, endpointSecret);
        console.log(event);
        res.status(200).end()
        // Do something with event
    }
    catch (err) {
        console.log(err);
        res.status(400).end()
    }
});

exports.app = functions.https.onRequest(app);
As mentioned in the Stripe documentation, I have to use the raw body to perform this security check.
I have tried with my current code and with:
app.use(require('body-parser').raw({type: '*/*'}));
But I always get this error:
Error: No signatures found matching the expected signature for payload. Are you passing the raw request body you received from Stripe? https://github.com/stripe/stripe-node#webhook-signing
Cloud Functions automatically parses body content of known types. If you're receiving JSON, then it's already parsed and available to you in req.body. You shouldn't need to add other body-parsing middleware.
If you need to process the raw data, you should use req.rawBody, but I don't think you'll need to do that here.
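In other words, against the question's code it should be enough to change the constructEvent call; a sketch:

// Cloud Functions already exposes the unparsed payload as req.rawBody,
// so pass that to constructEvent instead of the parsed req.body.
let event = stripe.webhooks.constructEvent(req.rawBody, sig, endpointSecret);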
Here is what is working for me. Add this line:
app.use('/api/subs/stripe-webhook', bodyParser.raw({ type: "*/*" }))
(The first argument specifies which route the raw body parser should be used on; see the app.use() reference docs.)
just before this line:
app.use(bodyParser.json());
It doesn't affect all your other operations, just this one route ('/api/subs/stripe-webhook').
Note: if you are using Express 4.16+ you can replace bodyParser with express:
app.use('/api/subs/stripe-webhook', express.raw({ type: "*/*" }));
app.use(express.json());
Then:
const endpointSecret = 'whsec_........'

const stripeWebhook = async (req, res) => {
    const sig = req.headers['stripe-signature'];
    let eventSecure = {}
    try {
        eventSecure = stripe.webhooks.constructEvent(req.body, sig, endpointSecret);
        //console.log('eventSecure :', eventSecure);
    }
    catch (err) {
        console.log('err.message :', err.message);
        res.status(400).send(`Webhook Secure Error: ${err.message}`)
        return
    }
    res.status(200).send({ received: true });
}
Here is code which is working for me:
app.use(bodyParser.json({
    verify: function (req, res, buf) {
        var url = req.originalUrl;
        if (url.startsWith('/stripe')) {
            req.rawBody = buf.toString();
        }
    }
}));
And then pass the req.rawBody for verification:
stripe.checkWebHook(req.rawBody, signature);
Reference: https://github.com/stripe/stripe-node/issues/341
Two things to note:
1. Pass req.rawBody instead of req.body to constructEvent:
const event = stripe.webhooks.constructEvent(
    req.rawBody,
    sig,
    STRIPE_WEBHOOK_SECRET
);
2. Make sure you're using the correct webhook secret. It's unique per webhook URL!
2021 - Solution
I faced this error, and after a lot of research I could not figure out the problem easily, but I finally solved it based on my architecture below:
// App.js
this.server.use((req, res, next) => {
    if (req.originalUrl.startsWith('/webhook')) {
        next();
    } else {
        express.json()(req, res, next);
    }
});

// routes.js
routes.post(
    '/webhook-payment-intent-update',
    bodyParser.raw({ type: 'application/json' }),
    // your Stripe logic (I'm using a controller, but put it wherever)
    (req, res) => {
        stripe.webhooks.constructEvent(...)
    }
)
Two big warnings to pay attention to:
Make sure to send the req.headers['stripe-signature'] header.
Make sure that your endpointSecret is right; if not, it will keep giving the same error.
Tips:
Test it locally by installing the Stripe CLI: https://stripe.com/docs/webhooks/test
Verify your key on the Stripe dashboard, or check your Stripe logs to make sure you have the right key.
I hope it helps you. :)
// Use JSON parser for all non-webhook routes
app.use(
    bodyParser.json({
        verify: (req, res, buf) => {
            const url = req.originalUrl;
            if (url.startsWith('/api/stripe/webhook')) {
                req.rawBody = buf.toString();
            }
        }
    })
);
The code above looks the same as in the earlier answers, but I still made one mistake and got the same error. I finally figured it out: the body-parser configuration has to come after the rawBody verify code, and then it works.
Like this:
// Use JSON parser for all non-webhook routes
app.use(
    bodyParser.json({
        verify: (req, res, buf) => {
            const url = req.originalUrl;
            if (url.startsWith('/api/stripe/webhook')) {
                req.rawBody = buf.toString();
            }
        }
    })
);

// Setup express response and body parser configurations
app.use(express.json());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
Hopefully, it'll help someone.
This is late, but it may help others (from a GitHub answer):
const payload = req.body
const sig = req.headers['stripe-signature']
const payloadString = JSON.stringify(payload, null, 2);
const secret = 'webhook_secret';

const header = stripe.webhooks.generateTestHeaderString({
    payload: payloadString,
    secret,
});

let event;
try {
    event = stripe.webhooks.constructEvent(payloadString, header, secret);
} catch (err) {
    console.log(`Webhook Error: ${err.message}`)
    return res.status(400).send(`Webhook Error: ${err.message}`);
}

switch (event.type) {
    case 'checkout.session.completed': {
        // ... handle the event
        break;
    }
}
If you are trying to add a Stripe webhook to your NextJS API route, here's how to do so (ref):
import initStripe from "stripe";
import { buffer } from "micro";
import { NextApiRequest, NextApiResponse } from "next";

export const config = { api: { bodyParser: false } };

const handler = async (req: NextApiRequest, res: NextApiResponse) => {
    const stripe = initStripe(process.env.STRIPE_SECRET_KEY || '');
    const signature = req.headers["stripe-signature"];
    const signingSecret = process.env.STRIPE_WEBHOOK_SECRET || '';
    const reqBuffer = await buffer(req);

    let event;
    try {
        event = stripe.webhooks.constructEvent(reqBuffer, signature, signingSecret);
    } catch (error: any) {
        console.log(error);
        return res.status(400).send(`Webhook error: ${error?.message}`);
    }

    console.log({ event });
    res.send({ received: true });
};

export default handler;
This uses buffer from the micro library, combined with disabling the default API body parsing so the request's raw body can be read. In some frameworks (like NextJS), rawBody doesn't come out of the box, hence the workaround of retrieving the raw body via buffer(req), which is what stripe.webhooks.constructEvent needs.
I was able to obtain data from one webhook but not from a second one. The problem was that the secret key I used was the same as the one for the first webhook; it turns out every webhook has a different signing secret, which is why I kept getting that same message.
AWS API Gateway + Lambda (Express.js CRUD): I'm using this for a Stripe webhook endpoint and it works for me:
app.use(require('body-parser').text({ type: "*/*" }));
This happened to me when sending a test webhook from the Stripe dashboard after I had renamed a Firebase cloud function. All my other functions were working fine. I solved it by re-setting the secret in the terminal:
firebase functions:config:set stripe.webhook_signature="Your webhook signing secret"
(if you're using that) and redeploying the functions: firebase deploy --only functions
On a second occasion I solved the problem by rolling the Stripe signature in the Stripe dashboard.
Please use this script:
app.use(
    bodyParser.json({
        verify: (req, res, buf) => {
            req.rawBody = buf;
        },
    })
);
My favorite was combining two of the above great answers.
Then you can use req.rawBody when you construct the event.
Replace "webhook" with whatever route you wish to have a raw body for.
app.use(
    "/webhook",
    express.json({
        verify: (req, res, buf) => {
            req.rawBody = buf.toString();
        },
    })
);
BEFORE
app.use(express.json());
Works well if you are using routes and controllers.
To use the raw body in Express for a specific endpoint in a separate middleware, my solution is just to enable the router to use express.raw for the webhook endpoint.
(Node.js v12, Express v4.17.1)
export const handleBodyRequestParsing = (router: Router): void => {
    router.use('/your_webhook_endpoint', express.raw({ type: '*/*' }))
    router.use(express.json({ limit: '100mb' }))
    router.use(express.urlencoded({ extended: true }))
}
Here is a quick tip which may save you hours!
If you are adding Stripe payments to your existing Express app, you may already be parsing every request as JSON near the top of the application with the middleware app.use(express.json()); or some other middleware (body-parser, for example).
If you are doing that, change it so it skips your webhook URL.
Example: assume your payment webhook URL is /payments/webhook:
app.use((req, res, next) => {
    if (req.originalUrl.includes("/payments/webhook")) {
        next();
    } else {
        express.json()(req, res, next);
    }
});
When using Stripe in Express, if you have the following line in your code:
app.use(express.json());
it is going to prevent you from providing the raw body to Stripe even when you explicitly set bodyParser.raw, which will throw an error. This was the reason my code failed. Finally sorted it out.
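A minimal sketch of one way around this: register the webhook route with a route-level express.raw parser before the global JSON middleware (the route path and webhookHandler name are placeholders):

// Route-level raw parser, registered before the global JSON parser,
// so the webhook handler sees the unparsed request body.
app.post('/webhook', express.raw({ type: 'application/json' }), webhookHandler);

// Every other route still receives parsed JSON.
app.use(express.json());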
I tried all the solutions above and none of them worked; I figured out that the only solution was not to use Express at all for this endpoint. You just have to create another HTTP function:
export const webhook = functions.https.onRequest(async (req, res) => {
    try {
        const sig = req.headers['stripe-signature']
        const endpointSecret = 'web_secret'
        const event = stripe.webhooks.constructEvent(
            req.rawBody,
            sig,
            endpointSecret
        )
        console.log(event.data.object)
        res.status(200).send(event.data.object)
    } catch (err) {
        console.error('an error occurred', err)
        res.status(400).send(`Webhook Error: ${err.message}`)
    }
})

How to unit test google cloud storage in golang?

I'm writing an App Engine app in Go that uses Google Cloud Storage.
For example, my "reading" code looks like:
client, err := storage.NewClient(ctx)
if err != nil {
    return nil, err
}
defer func() {
    if err := client.Close(); err != nil {
        panic(err)
    }
}()

r, err := client.Bucket(BucketName).Object(id).NewReader(ctx)
if err != nil {
    return nil, err
}
defer r.Close()
return ioutil.ReadAll(r)
... where ctx is a context from appengine.
When I run this code in a unit test (using aetest), it actually sends requests to my cloud storage; I'd like to run this hermetically instead, similar to how aetest allows fake datastore calls.
(Possibly related question, but it deals with Python, and the linked GitHub issue indicates it's solved in a Python-specific way.)
How can I do this?
One approach, also suggested here, is to allow your GCS client to have its downloader swapped out for a stub while unit testing. First, define an interface that matches how you use the Google Cloud Storage library, then reimplement it with fake data in your unit tests.
Something like this:
type StorageClient interface {
    Bucket(string) Bucket // ... and so on, matching the Google library
}

type Storage struct {
    client StorageClient
}

// New creates a new Storage client.
// This is the function you use in your app.
func New() Storage {
    return NewWithClient(&realGoogleClient{}) // provide real implementation here as argument
}

// NewWithClient creates a new Storage client with a custom implementation.
// This is the function you use in your unit tests.
func NewWithClient(client StorageClient) Storage {
    return Storage{
        client: client,
    }
}
Mocking an entire 3rd-party API can be a lot of boilerplate, so you may be able to make it easier by generating some of those mocks with golang/mock or mockery.
I have done something like this: since the storage client sends HTTPS requests, I mocked the HTTPS server using httptest:
func Test_StorageClient(t *testing.T) {
    tests := []struct {
        name        string
        mockHandler func() http.Handler
        wantErr     bool
    }{
        {
            name: "test1",
            mockHandler: func() http.Handler {
                return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    w.Write([]byte("22\n96\n120\n"))
                })
            },
            wantErr: false,
        },
        {
            name: "test2",
            mockHandler: func() http.Handler {
                return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    w.WriteHeader(http.StatusNotFound)
                })
            },
            wantErr: true,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            serv := httptest.NewTLSServer(tt.mockHandler())
            defer serv.Close()
            httpclient := http.Client{
                Transport: &http.Transport{
                    TLSClientConfig: &tls.Config{
                        InsecureSkipVerify: true,
                    },
                },
            }
            client, _ := storage.NewClient(context.Background(), option.WithEndpoint(serv.URL), option.WithoutAuthentication(), option.WithHTTPClient(&httpclient))
            _, err := readFileFromGCS(client)
            if (err != nil) != tt.wantErr {
                t.Errorf("error = %v, wantErr %v", err, tt.wantErr)
                return
            }
        })
    }
}
Cloud Storage on the Python development server is emulated using local files with the Blobstore service, which is why the solution of using a Blobstore stub with testbed (also Python-specific) worked. However, there is no such local emulation for Cloud Storage on the Go runtime.
As Sachin suggested, the way to unit test Cloud Storage is to use a mock. This is the way it's done internally and on other runtimes, such as Node.
I would advise you to reduce the mocks as much as possible; you might want to take a hermetic approach so the tests are almost identical to the real thing:
https://testing.googleblog.com/2012/10/hermetic-servers.html