WhatsApp Cloud API - Unable to verify webhook signature

When I receive a webhook event I try to verify the signature as described in the docs; sometimes the verification succeeds and sometimes it doesn't.
What am I doing wrong?
static createSha256Hash(data: string, key: string): string {
  if (isEmpty(data)) {
    throw new ValidationException();
  }
  const sha256Hasher = crypto.createHmac('sha256', key);
  return sha256Hasher.update(data).digest('hex');
}

public validateWebhookSignature(requestContext: RequestContextModel): void {
  // Optional chaining before .replace, otherwise a missing header throws
  // TypeError instead of ValidationException.
  const signature = requestContext?.headers?.['X-Hub-Signature-256']?.replace('sha256=', '');
  const body = JSON.stringify(requestContext?.body);
  const hash = CryptoUtil.createSha256Hash(body, this._appSecret);
  if (!signature || !hash) {
    throw new ValidationException();
  } else if (signature !== hash) {
    throw new UnauthorizedException();
  }
}
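A likely cause of the intermittent failures: Meta signs the raw request body bytes, and `JSON.stringify(requestContext.body)` is not guaranteed to reproduce those bytes once the body has been parsed. A minimal sketch of verifying against the raw payload with only `node:crypto` (`verifyMetaSignature` is an illustrative name, not from the post):

```typescript
import * as crypto from "node:crypto";

// Hypothetical helper (name not from the post): verify Meta's
// X-Hub-Signature-256 header against the *raw* request body.
function verifyMetaSignature(rawBody: string, header: string, appSecret: string): boolean {
  const expected = crypto.createHmac("sha256", appSecret).update(rawBody).digest("hex");
  const received = header.replace("sha256=", "");
  if (received.length !== expected.length) return false;
  // Constant-time comparison avoids leaking the match prefix via timing.
  return crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));
}

// Why verification is intermittent: JSON.stringify(JSON.parse(raw)) is not
// guaranteed to reproduce the exact bytes Meta signed (whitespace, key
// order, and unicode escapes can all differ after a parse round-trip).
const raw = '{"object":"whatsapp_business_account", "entry":[]}'; // note the space
const secret = "app-secret";
const header = "sha256=" + crypto.createHmac("sha256", secret).update(raw).digest("hex");

verifyMetaSignature(raw, header, secret); // true: hashed the raw bytes
verifyMetaSignature(JSON.stringify(JSON.parse(raw)), header, secret); // false: whitespace lost
```

Capturing the raw body (e.g. with a raw-body middleware) before any JSON parsing, and comparing with `crypto.timingSafeEqual`, should make verification deterministic.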


How to hide some dom element if user hasn't jwt token in cookies

Since the cookie can't be read from the client side, how can we verify it in _middleware and pass an isAuth flag to the client side in Next.js?
// pages/_middleware.ts
const JWT_SECRET = process.env.JWT_TOKEN as string;

export async function middleware(req: NextRequest, ev: NextFetchEvent) {
  const { cookies } = req;
  const token = cookies.appToken;

  /*
    Pass some value to the client side if it is not logged in yet.
  */
  if (!token) {
  }

  try {
    // verify the token
    const { payload: jwtData } = await jose.jwtVerify(
      token,
      new TextEncoder().encode(JWT_SECRET)
    );
    return NextResponse.next();
  } catch (error) {
    return NextResponse.next();
    // return NextResponse.redirect(new URL('/login', req.url));
  }
}
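One common pattern is to verify the token in the middleware and forward the result as a header that downstream code can read. As a sketch of what `jose.jwtVerify` checks under the hood for HS256 tokens, here is a minimal sign/verify pair using only `node:crypto` (`signHS256` and `isAuth` are illustrative names; this skips `exp`/claim validation, which jose handles for you):

```typescript
import * as crypto from "node:crypto";

// base64url without padding, as used by the JWT compact serialization.
const b64url = (b: Buffer) =>
  b.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Produce a compact HS256 JWT (header.payload.signature).
function signHS256(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(crypto.createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// True only if the token's signature matches a recomputation with the secret.
function isAuth(token: string | undefined, secret: string): boolean {
  if (!token) return false;
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const expected = b64url(
    crypto.createHmac("sha256", secret).update(`${parts[0]}.${parts[1]}`).digest()
  );
  return expected === parts[2];
}
```

In the middleware you could then do something like `const res = NextResponse.next(); res.headers.set('x-is-auth', String(ok)); return res;` so client-visible code can branch on the flag (assuming exposing a boolean auth signal is acceptable for your app).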

How to make AWS API gateway to "understand" trailing slash?

I have an AWS API Gateway configured such that /auth method calls a Lambda.
However, an existing product tries to call /auth/ with a trailing slash, and it ends up as a 404 error.
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?
It turns out the way to solve it is to configure the path like this (Terraform code from the API Gateway config):
WAS
"GET /auth" = {
  integration_type        = "AWS_PROXY"
  integration_http_method = "POST"
  payload_format_version  = "2.0"
  lambda_arn              = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth work)
NOW
"GET /auth/{proxy+}" = {
  integration_type        = "AWS_PROXY"
  integration_http_method = "POST"
  payload_format_version  = "2.0"
  lambda_arn              = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth/ work and breaks /auth).
You could configure the route as ANY /{proxy+} so that any HTTP method (GET, POST, PATCH, DELETE) for any routes are directed to the same handler. Alternatively, you could also specify the HTTP method to narrow it down, like POST /{proxy+}.
So...
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?
Technically speaking, this solves your problem, but now it is up to you to differentiate routes and know what to do.
As far as I know, this is the only way to achieve it with API Gateway: per RFC 3986, "/auth" and "/auth/" are actually different URIs, and API Gateway complies with that.
This is what I ended up doing (using the ANY /{proxy+}) and, if it is any help, this is the code I have to handle my routes and know what to do:
// Queue.ts
class Queue<T = any> {
  private items: T[];

  constructor(items?: T[]) {
    this.items = items || [];
  }

  get length(): number {
    return this.items.length;
  }

  enqueue(element: T): void {
    this.items.push(element);
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }

  peek(): T | undefined {
    if (this.isEmpty()) return undefined;
    return this.items[0];
  }

  isEmpty(): boolean {
    return this.items.length === 0;
  }
}

export default Queue;
// PathMatcher.ts
import Queue from "./Queue";

type PathMatcherResult = {
  isMatch: boolean;
  namedParameters: Record<string, string>;
};

const NAMED_PARAMETER_REGEX = /(?!\w+:)\{(\w+)\}/;

class PathMatcher {
  static match(pattern: string, path: string): PathMatcherResult {
    const patternParts = new Queue<string>(this.trim(pattern).split("/"));
    const pathParts = new Queue<string>(this.trim(path).split("/"));
    const namedParameters: Record<string, string> = {};
    const noMatch = { isMatch: false, namedParameters: {} };

    if (patternParts.length !== pathParts.length) return noMatch;

    while (patternParts.length > 0) {
      const patternPart = patternParts.dequeue()!;
      const pathPart = pathParts.dequeue()!;

      if (patternPart === "*") continue;
      if (patternPart.toLowerCase() === pathPart.toLowerCase()) continue;

      if (NAMED_PARAMETER_REGEX.test(patternPart)) {
        const [name, value] = this.extractNamedParameter(patternPart, pathPart);
        namedParameters[name] = value;
        continue;
      }

      return noMatch;
    }

    return { isMatch: true, namedParameters };
  }

  private static trim(path: string) {
    return path.replace(/^[\s\/]+/, "").replace(/[\s\/]+$/, "");
  }

  private static extractNamedParameter(
    patternPart: string,
    pathPart: string
  ): [string, string] {
    const name = patternPart.replace(NAMED_PARAMETER_REGEX, "$1");
    let value = pathPart;
    if (value.includes(":")) value = value.substring(value.indexOf(":") + 1);
    return [name, value];
  }
}

export default PathMatcher;
export { PathMatcherResult };
Then, in my Lambda handler, I do:
const httpMethod = event.requestContext.http.method.toUpperCase();
const currentRoute = `${httpMethod} ${event.rawPath}`;

// This will match both:
//   GET /products/asdasdasdas
//   GET /products/asdasdasdas/
const match = PathMatcher.match("GET /products/{id}", currentRoute);

if (match.isMatch) {
  // Here, the id parameter has been extracted for you
  const productId = match.namedParameters.id;
}
Of course you can build a registry of routes and their respective handler functions to automate the matching and parameter passing, but that is the easy part.
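The trailing-slash tolerance above comes entirely from the trim step: both /auth and /auth/ normalize to the same segment list before comparison. In isolation (a standalone sketch, not part of the original PathMatcher):

```typescript
// Standalone sketch: strip leading/trailing slashes and whitespace, then
// split into segments, dropping any empty segments. "/auth" and "/auth/"
// both normalize to ["auth"], so they match the same route.
function normalize(path: string): string[] {
  return path
    .replace(/^[\s/]+/, "")
    .replace(/[\s/]+$/, "")
    .split("/")
    .filter(Boolean);
}

normalize("/auth");  // ["auth"]
normalize("/auth/"); // ["auth"] -- trailing slash trimmed before splitting
```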

nswag generated service has no return logic

I have an ASP.NET Web API service for user login that takes an email and password. The API method has the following signature; LoginDto has two fields, Email and Password.
public async Task<IActionResult> Login(LoginDto dto)
Once the user is authenticated, WebAPI returns an object that has token and Id:
return Ok(new { Token = GenerateJwtTokenFromClaims(claims), Id=user.Id });
On the client side (a Blazor app), I used the nswag command line tool by running nswag run, and it "successfully" generated the Service and Contract files. Everything compiles. The nswag-generated code is pasted below.
When I want to use the login nswag Service, I have the following method (I also have an overloaded method with CancellationToken but I only use this method):
public System.Threading.Tasks.Task Login2Async(LoginDto body)
{
    return Login2Async(body, System.Threading.CancellationToken.None);
}
My question is: how do I get the response that the Web API login sent back out of the nswag-generated code? When I try to assign the method's result to a var, I get "Cannot assign void to an implicitly-typed variable", which makes sense since I don't see a return type. I also don't see any logic in the nswag-generated service file that returns the response to the caller. How do I get the response back from the nswag-generated API call? Is there an option I have to set in nswag run to get a response object back? Thanks in advance.
public async System.Threading.Tasks.Task Login2Async(LoginDto body, System.Threading.CancellationToken cancellationToken)
{
    var urlBuilder_ = new System.Text.StringBuilder();
    urlBuilder_.Append(BaseUrl != null ? BaseUrl.TrimEnd('/') : "").Append("/api/Account/Login");

    var client_ = _httpClient;
    var disposeClient_ = false;
    try
    {
        using (var request_ = new System.Net.Http.HttpRequestMessage())
        {
            var content_ = new System.Net.Http.StringContent(Newtonsoft.Json.JsonConvert.SerializeObject(body, _settings.Value));
            content_.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json");
            request_.Content = content_;
            request_.Method = new System.Net.Http.HttpMethod("POST");

            PrepareRequest(client_, request_, urlBuilder_);

            var url_ = urlBuilder_.ToString();
            request_.RequestUri = new System.Uri(url_, System.UriKind.RelativeOrAbsolute);

            PrepareRequest(client_, request_, url_);

            var response_ = await client_.SendAsync(request_, System.Net.Http.HttpCompletionOption.ResponseHeadersRead, cancellationToken).ConfigureAwait(false);
            var disposeResponse_ = true;
            try
            {
                var headers_ = System.Linq.Enumerable.ToDictionary(response_.Headers, h_ => h_.Key, h_ => h_.Value);
                if (response_.Content != null && response_.Content.Headers != null)
                {
                    foreach (var item_ in response_.Content.Headers)
                        headers_[item_.Key] = item_.Value;
                }

                ProcessResponse(client_, response_);

                var status_ = (int)response_.StatusCode;
                if (status_ == 200)
                {
                    return;
                }
                else if (status_ == 400)
                {
                    var objectResponse_ = await ReadObjectResponseAsync<ProblemDetails>(response_, headers_).ConfigureAwait(false);
                    throw new ApiException<ProblemDetails>("Bad Request", status_, objectResponse_.Text, headers_, objectResponse_.Object, null);
                }
                else
                {
                    var responseData_ = response_.Content == null ? null : await response_.Content.ReadAsStringAsync().ConfigureAwait(false);
                    throw new ApiException("The HTTP status code of the response was not expected (" + status_ + ").", status_, responseData_, headers_, null);
                }
            }
            finally
            {
                if (disposeResponse_)
                    response_.Dispose();
            }
        }
    }
    finally
    {
        if (disposeClient_)
            client_.Dispose();
    }
}
Big thanks to the NSwag team; the issue is resolved. I was returning an anonymous object from the Web API method. The correct way to do it is the following. Notice that IActionResult was changed to ActionResult with a concrete object type to return to the caller.
public async Task<ActionResult<LoginDtoResponse>> Login(LoginDto dto)
then returning
return Ok(new LoginDtoResponse { Token = GenerateJwtTokenFromClaims(claims), Id=user.Id });
After I did that, the following code was generated:
if (status_ == 200)
{
    var objectResponse_ = await ReadObjectResponseAsync<LoginDtoResponse>(response_, headers_).ConfigureAwait(false);
    return objectResponse_.Object;
}

User Logging automation via Cloudwatch

I have a task at my company where I have to do a monthly user access review via CloudWatch.
This is a manual process where I have to go to CloudWatch > CloudWatch Logs > Log groups > /var/log/example_access > example-instance and then document the logs for a list of users from a randomly generated date. The example instance is a certificate manager box which is linked to our entire fleet of production nodes. I also have to document what command that user ran on specific nodes.
I'm wondering, is there any way I can automate this process and dump the results into Word docs? It's getting painful as the list of users/employees keeps increasing. Thanks.
Sure there is, though I don't reckon you want Word docs; I'd launch an Elasticsearch instance on AWS and then give users who want data Kibana access.
Also, circulating Word docs in an org is bad juju; depending on your Windows/Office version it carries risks.
Add this Lambda function, then go into CloudWatch and add it as a subscription filter on the right log groups.
Note you may get missing log entries if they're not logged in JSON format or have funky formatting; if you're using a standard log format it should work.
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const elasticsearch = require('elasticsearch')

/**
 * This is an example function to stream CloudWatch logs to Elasticsearch.
 * @param event
 * @param context
 * @param callback
 */
export default (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = true
  const payload = Buffer.from(event.awslogs.data, 'base64')
  const esClient = new elasticsearch.Client({
    httpAuth: process.env.esAuth, // your params here
    host: process.env.esEndpoint, // your params here.
  })

  zlib.gunzip(payload, (err, result) => {
    if (err) {
      return callback(err)
    }
    const logObject = JSON.parse(result.toString('utf8'))
    const elasticsearchBulkData = transform(logObject)
    const params = { body: [] }
    params.body.push(elasticsearchBulkData)

    // Only signal success once the bulk request has actually completed.
    esClient.bulk(params, (err, resp) => {
      if (err) {
        return callback(err)
      }
      callback(null, 'success')
    })
  })
}

function transform(payload) {
  if (payload.messageType === 'CONTROL_MESSAGE') {
    return null
  }
  let bulkRequestBody = ''
  payload.logEvents.forEach((logEvent) => {
    const timestamp = new Date(1 * logEvent.timestamp)

    // index name format: cwl-YYYY.MM.DD
    const indexName = [
      `cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
      (`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
      (`0${timestamp.getUTCDate()}`).slice(-2), // day
    ].join('.')

    const source = buildSource(logEvent.message, logEvent.extractedFields)
    source['@id'] = logEvent.id
    source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
    source['@message'] = logEvent.message
    source['@owner'] = payload.owner
    source['@log_group'] = payload.logGroup
    source['@log_stream'] = payload.logStream

    const action = { index: {} }
    action.index._index = indexName
    action.index._type = 'lambdaLogs'
    action.index._id = logEvent.id

    bulkRequestBody += `${[
      JSON.stringify(action),
      JSON.stringify(source),
    ].join('\n')}\n`
  })
  return bulkRequestBody
}

function buildSource(message, extractedFields) {
  if (extractedFields) {
    const source = {}
    for (const key in extractedFields) {
      if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
        const value = extractedFields[key]
        if (isNumeric(value)) {
          source[key] = 1 * value
          continue
        }
        const jsonSubString = extractJson(value)
        if (jsonSubString !== null) {
          source[`$${key}`] = JSON.parse(jsonSubString)
        }
        source[key] = value
      }
    }
    return source
  }
  const jsonSubString = extractJson(message)
  if (jsonSubString !== null) {
    return JSON.parse(jsonSubString)
  }
  return {}
}

function extractJson(message) {
  const jsonStart = message.indexOf('{')
  if (jsonStart < 0) return null
  const jsonSubString = message.substring(jsonStart)
  return isValidJson(jsonSubString) ? jsonSubString : null
}

function isValidJson(message) {
  try {
    JSON.parse(message)
  } catch (e) { return false }
  return true
}

function isNumeric(n) {
  return !isNaN(parseFloat(n)) && isFinite(n)
}
Now you should have your logs going into Elastic; go into Kibana and you can search by date, and even write endpoints to allow people to query their own data!
The easy way is to just give stakeholders Kibana access and let them check it out.
It might not be exactly what you wanted, but I reckon it'll work better.

Cognito send confirmation email using custom email

Is there a way to send email from an address other than the one specified in the "Message customisation" tab of the Cognito user pool?
I would like to use a different address based on some parameters.
E.g.
verification@my-service.com for the verification email
welcome@my-service.com for the welcome email
You can go to the general settings in Cognito, then click on Triggers. There you can select a Post Confirmation Lambda function (this example is in Node) to send the email. In the Lambda function you can make the subject whatever you like and change the from email address.
var aws = require('aws-sdk');
var ses = new aws.SES();

exports.handler = function(event, context) {
    console.log(event);
    if (event.request.userAttributes.email) {
        // Pull another attribute if you want
        sendEmail(event.request.userAttributes.email,
            "Congratulations " + event.userName + ", you have been registered!",
            function(status) {
                context.done(null, event);
            });
    } else {
        // Nothing to do, the user's email ID is unknown
        console.log("Failed");
        context.done(null, event);
    }
};

function sendEmail(to, body, completedCallback) {
    var eParams = {
        Destination: {
            ToAddresses: [to]
        },
        Message: {
            Body: {
                Text: {
                    Data: body
                }
            },
            Subject: {
                Data: "Welcome to My Service!"
            }
        },
        Source: "welcome@my-service.com"
    };

    var email = ses.sendEmail(eParams, function(err, data) {
        if (err) {
            console.log(err);
        } else {
            console.log("===EMAIL SENT===");
        }
        completedCallback('Email sent');
    });
    console.log("EMAIL CODE END");
};
You will also have to set up SES.
If you want to handle all emails yourself, you can specify this with a CustomEmailSender Lambda. This trigger isn't currently available through the AWS Console, but you can specify it with the CLI or CDK/CloudFormation. See the docs here.
Those docs are pretty terrible though. The gist is that you'll be given a code property on the event, which is a base64-encoded blob that was encrypted with the KMS key you specified on your user pool. Depending on the triggering event, this is the verification code, temporary password, etc, generated by Cognito. Here's a simplified version of what my Lambda looks like:
import { buildClient, CommitmentPolicy, KmsKeyringNode } from '@aws-crypto/client-node';

const { decrypt } = buildClient(CommitmentPolicy.REQUIRE_ENCRYPT_ALLOW_DECRYPT);
const kmsKeyring = new KmsKeyringNode({
    keyIds: [process.env.COGNITO_EMAILER_KEY_ARN]
});

export async function lambdaHandler(event, context) {
    try {
        let payload = '';
        if (event.request.code) {
            const { plaintext, messageHeader } = await decrypt(
                kmsKeyring,
                Buffer.from(event.request.code, "base64")
            );
            if (event.userPoolId !== messageHeader.encryptionContext["userpool-id"]) {
                console.error("Encryption context does not match expected values!");
                return;
            }
            payload = plaintext.toString();
        }

        let messageHtml = "";
        switch (event.triggerSource) {
            case "CustomEmailSender_SignUp": {
                const verificationCode = payload;
                messageHtml = `<p>Use this code to verify your email: ${verificationCode}</p>`;
                break;
            }
            case "CustomEmailSender_AdminCreateUser":
            case "CustomEmailSender_ResendCode": {
                const tempPassword = payload;
                messageHtml = `<p>Your temporary password is ${tempPassword}</p>`;
                break;
            }
            default: {
                console.warn("unhandled trigger:", event.triggerSource);
                return;
            }
        }

        await sendEmail({
            subject: "Automated message",
            to: event.request.userAttributes.email,
            messageHtml,
        });
        return true;
    } catch (err) {
        console.error(err.message);
        process.exit(1);
    }
}