Nest.js hanging after a certain time without any logs - amazon-web-services

We have been using Nest.js as our backend solution for almost 2 years now. It has worked fine overall and didn't give us any big issues until this last week. As the title describes, it hangs all of a sudden, without returning any kind of log. We run it under pm2, hosted on an AWS t3.medium machine running Ubuntu.
We are using the exception filter provided in the Nest.js documentation:
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  HttpStatus,
} from '@nestjs/common';
import { HttpAdapterHost } from '@nestjs/core';

@Catch()
export class AllExceptionsFilter implements ExceptionFilter {
  constructor(private readonly httpAdapterHost: HttpAdapterHost) {}

  catch(exception: unknown, host: ArgumentsHost): void {
    // In certain situations `httpAdapter` might not be available in the
    // constructor method, thus we should resolve it here.
    const { httpAdapter } = this.httpAdapterHost;

    const ctx = host.switchToHttp();

    const httpStatus =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;

    const responseBody = {
      statusCode: httpStatus,
      timestamp: new Date().toISOString(),
      path: httpAdapter.getRequestUrl(ctx.getRequest()),
    };

    httpAdapter.reply(ctx.getResponse(), responseBody, httpStatus);
  }
}
Is it possible to log the reason why the application was shut down and the specific route that caused the problem? It would be easier to fix that way.
Edit: We are also subscribed to Sentry, but these errors don't reach it either.
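For reference, here is a minimal sketch (not part of our current setup, and assuming a standard main.ts bootstrap; the AppModule path is just our project layout) of the process-level handlers we are considering, so that pm2 would at least capture the crash reason in its error log:

// main.ts (sketch): log fatal errors before the process dies so pm2 can capture them.
import { NestFactory } from '@nestjs/core';
import { Logger } from '@nestjs/common';
import { AppModule } from './app.module'; // hypothetical path to the root module

async function bootstrap() {
  const logger = new Logger('Bootstrap');

  // Standard Node.js process events; pm2 writes anything logged here to its error log.
  process.on('uncaughtException', (err: Error) => {
    logger.error(`Uncaught exception: ${err.message}`, err.stack);
  });
  process.on('unhandledRejection', (reason) => {
    logger.error(`Unhandled rejection: ${String(reason)}`);
  });

  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();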

Related

AWS .Net Core SDK Simple Email Service Suppression List Not Working

I am trying to retrieve the SES account-level suppression list using the AWS SDK in .NET Core. Below is my code:
public class SimpleEmailServiceUtility : ISimpleEmailServiceUtility
{
    private readonly IAmazonSimpleEmailServiceV2 _client;

    public SimpleEmailServiceUtility(IAmazonSimpleEmailServiceV2 client)
    {
        _client = client;
    }

    public async Task<ListSuppressedDestinationsResponse> GetSuppressionList()
    {
        ListSuppressedDestinationsRequest request = new ListSuppressedDestinationsRequest();
        request.PageSize = 10;

        ListSuppressedDestinationsResponse response = new ListSuppressedDestinationsResponse();
        try
        {
            response = await _client.ListSuppressedDestinationsAsync(request);
        }
        catch (Exception ex)
        {
            Console.WriteLine("ListSuppressedDestinationsAsync failed with exception: " + ex.Message);
        }

        return response;
    }
}
But it doesn't seem to be working. The request takes too long and then returns an empty response, or the error below if I remove the try/catch:
An unhandled exception occurred while processing the request.
TaskCanceledException: A task was canceled.
System.Threading.Tasks.TaskCompletionSourceWithCancellation<T>.WaitWithCancellationAsync(CancellationToken cancellationToken)
TimeoutException: A task was canceled.
Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken)
Can anyone please advise if I am missing something?
Thank you!
I have tested your code and everything works correctly.
using Amazon;
using Amazon.SimpleEmailV2;
using Amazon.SimpleEmailV2.Model;

internal class Program
{
    private async static Task Main(string[] args)
    {
        var client = new AmazonSimpleEmailServiceV2Client("accessKeyId", "secretAccessKey", RegionEndpoint.USEast1);
        var utility = new SimpleEmailServiceUtility(client);
        var result = await utility.GetSuppressionList();
    }
}
<PackageReference Include="AWSSDK.SimpleEmailV2" Version="3.7.1.127" />
Things that you can check:
Try again; it may have been a temporary problem.
Try with the latest version that I am using (if not already).
How far are you from the region you are trying to get the list from? Try making the same request from an EC2 instance in that region.
Finally found the issue: I was setting awsConfig.DefaultClientConfig.UseHttp = true; in Startup, which was causing the problem. Removing it fixed the issue and everything seems to be working fine now.

Is there a simple API to transfer ERC-1155(Polygon Mainnet) tokens to other users?

Sorry I am new to blockchain development, so pardon my silly basic question.
I have created a bunch of ERC-1155 tokens on the Polygon mainnet. I have all the addresses and IDs of the tokens. Now I want to transfer them to other users from my backend (Node.js) API.
What I have tried so far:
I used opensea-js with the following code but am getting an Alchemy error.
import { OpenSeaPort, Network } from 'opensea-js'
import Web3 from "web3"

// This example provider won't let you make transactions, only read-only calls:
const provider = new Web3.providers.HttpProvider('https://polygon-mainnet.g.alchemy.com/v2/**************************')

const seaport = new OpenSeaPort(provider, {
  networkName: Network.Main,
})

const transactionHash = await seaport.transfer({
  asset: {
    tokenId: '**************************************',
    tokenAddress: '**********************************',
    schemaName: "ERC1155"
  },
  fromAddress: '***********************************', // Must own the asset
  toAddress: '*************************************',
  quantity: 1,
})

console.log(transactionHash);
It's giving the error "Unsupported method: eth_sendTransaction".
I then searched for this error, and Alchemy has a somewhat involved solution for it. But I believe this is a very simple task and there must be a simpler solution that I could not find.
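One direction I'm looking at (a sketch only, not something I have verified): since the HttpProvider above is read-only and cannot sign, calling safeTransferFrom on the token contract directly with ethers.js and a server-side signer. The RPC URL, private key, and addresses below are placeholders, and the quantity of 1 mirrors the opensea-js call:

// Sketch using ethers v5; all credentials and addresses are placeholders.
import { ethers } from 'ethers';

// ERC-1155 exposes safeTransferFrom(from, to, id, amount, data).
const erc1155Abi = [
  'function safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes data)',
];

const provider = new ethers.providers.JsonRpcProvider('https://polygon-mainnet.g.alchemy.com/v2/<key>');
const wallet = new ethers.Wallet('<owner-private-key>', provider); // signer that owns the tokens
const contract = new ethers.Contract('<token-contract-address>', erc1155Abi, wallet);

async function transferToken(to: string, tokenId: string) {
  // The wallet signs locally, so no eth_sendTransaction call reaches the node.
  const tx = await contract.safeTransferFrom(wallet.address, to, tokenId, 1, '0x');
  await tx.wait(); // wait for the transaction to be mined
  return tx.hash;
}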

aws lambda function using serverless template of asp.net core

I don't have enough knowledge of AWS, but my company asked me to do a job which I guess is exactly what AWS Lambda is for. The requirement is that I have to create a service with an endpoint that needs to be called twice a day. The approach I followed was to create a serverless Web API through Visual Studio and create an API Gateway endpoint for each endpoint. Then I added a trigger through CloudWatch Events to run it twice a day, but whenever the function is triggered I get this error:
Object reference not set to an instance of an object.: NullReferenceException
at Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
at Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction`2.FunctionHandlerAsync(TREQUEST request, ILambdaContext lambdaContext)
at lambda_method(Closure , Stream , Stream , LambdaContextInternal )
I had the same issue and was able to fix it recently.
If you use Lambda with ASP.NET Core, you should have a LambdaEntryPoint class to handle all the requests.
Try overriding the MarshallRequest method in this class, add logging, and see what you have in the apiGatewayRequest parameter. The code can look something like this:
protected override void MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
{
    LambdaLogger.Log($"Request path: {apiGatewayRequest.Path}");
    LambdaLogger.Log($"Request path parameters: {apiGatewayRequest.PathParameters}");
    LambdaLogger.Log($"Request body: {apiGatewayRequest.Body}");
    LambdaLogger.Log($"Request request context: {apiGatewayRequest.RequestContext}");

    base.MarshallRequest(features, apiGatewayRequest, lambdaContext);
}
In my case, all these values were null. The reason was that Amazon EventBridge was being used to keep the Lambda warm and avoid a cold start. If you also use EventBridge, try to configure the request there properly. If not, you can try to update MarshallRequest the following way:
protected override void MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
{
    if (apiGatewayRequest.RequestContext == null) // Or other property
    {
        return;
    }

    base.MarshallRequest(features, apiGatewayRequest, lambdaContext);
}
A few days ago I had the same problem. Grigory Zhadko's answer helped me a lot by pointing out which method I should override. LambdaEntryPoint requires any other process (for example, EventBridge) to instantiate an APIGatewayProxyRequest object manually. The configuration I implemented to fix the problem is as follows.
protected override void MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
{
    var endpoint = "my/endpoint";

    if (apiGatewayRequest != null && apiGatewayRequest?.RequestContext == null)
    {
        apiGatewayRequest.Path = $"/{endpoint}";
        apiGatewayRequest.Resource = $"/{endpoint}";
        apiGatewayRequest.HttpMethod = "ANY METHOD";
        apiGatewayRequest.RequestContext = new APIGatewayProxyRequest.ProxyRequestContext
        {
            Path = $"/path/{endpoint}", // your path request
            Identity = new APIGatewayProxyRequest.RequestIdentity
            {
                ClientCert = new APIGatewayProxyRequest.ProxyRequestClientCert
                {
                    Validity = new APIGatewayProxyRequest.ClientCertValidity()
                }
            },
            ResourcePath = $"/{endpoint}",
            HttpMethod = "ANY METHOD",
            Authorizer = new APIGatewayCustomAuthorizerContext()
        };
    }

    base.MarshallRequest(features, apiGatewayRequest, lambdaContext);
}

How do I call another micro-service from my micro-service?

This might sound a little odd, but I'm facing a situation where I have a micro-service that assembles some pricing logic; for that, it needs a bunch of information that another micro-service provides.
I believe I have two options: (1) grab all the data I need from the database and ignore the GraphQL work that was done in this other micro-service or (2) somehow hit this other micro-service from within my current service and get the data I need.
How would someone accomplish (2)?
I have no clear path of how to get that done without creating a mess.
I imagine that turning my pricing micro-service into a small client could work, but I'm just not sure if that's bad practice.
After much consideration and reading the answers I got here, I decided to turn my micro-service into a mini-client by using apollo-client.
In short, I have something like this:
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

// Instantiate required constructor fields
const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'http://localhost:3000/graphql',
});

const client = new ApolloClient({
  // Provide required constructor fields
  cache: cache,
  link: link,
});

export default client;
That HttpLink points to the federated schema, so I can call it from my resolver or anywhere else like this:
import gql from 'graphql-tag'; // needed for the gql template tag

const query = gql`
  query {
    something(uid: "${uid}") {
      field1
      field2
      field3
      anotherThing {
        field1
        field2
      }
    }
  }
`;

const response = await dataSources.client.query({ query });
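As a small variation (a sketch, not what I originally wrote), the uid can also be passed as a GraphQL variable instead of being interpolated into the query string; the ID! type for $uid is an assumption about the remote schema:

import gql from 'graphql-tag';

const QUERY = gql`
  query Something($uid: ID!) {
    something(uid: $uid) {
      field1
      field2
      field3
      anotherThing {
        field1
        field2
      }
    }
  }
`;

// Inside the resolver, same call as above but with the variable passed separately:
const result = await dataSources.client.query({ query: QUERY, variables: { uid } });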

Starting a StepFunction and exiting doesn't trigger execution

I have a Lambda function transportKickoff which receives an input and then sends/proxies that input forward into a Step Function. The code below does run and I am getting no errors, but at the same time the Step Function is NOT executing.
Also, critical to the design, I do not want the transportKickoff function to wait around for the Step Function to complete, as it can be quite long running. I was, however, expecting that any errors in starting the Step Function would be reported back synchronously. Maybe that assumption is at fault and I'm somehow missing an error that is thrown somewhere. If that's the case, though, I'd still like a way to have the kickoff Lambda exit as soon as the Step Function has started execution.
Note: I can execute the Step Function independently and I know that it works correctly.
const stepFn = new StepFunctions({ apiVersion: "2016-11-23" });
const stage = process.env.AWS_STAGE;
const name = `transport-steps ${message.command} for "${stage}" environment at ${Date.now()}`;

const params: StepFunctions.StartExecutionInput = {
  stateMachineArn: `arn:aws:states:us-east-1:999999999:stateMachine:transportion-${stage}-steps`,
  input: JSON.stringify(message),
  name
};

const request = stepFn.startExecution(params);
request.send();

console.info(
  `startExecution request for step function was sent, context sent was:\n`,
  JSON.stringify(params, null, 2)
);

callback(null, {
  statusCode: 200
});
I have also checked from the console that I have what I believe to be the right permissions to start the execution of a Step Function.
I've now added more permissions (see below) but am still experiencing the same problem:
'states:ListStateMachines'
'states:CreateActivity'
'states:StartExecution'
'states:ListExecutions'
'states:DescribeExecution'
'states:DescribeStateMachineForExecution'
'states:GetExecutionHistory'
OK, I have figured this one out myself; hopefully this answer will be helpful for others.
First of all, the send() method is not a synchronous call, but it does not return a promise either. Instead you must set up listeners on the Request object before sending, so that you can respond appropriately to success/failure states.
I've done this with the following code:
const stepFn = new StepFunctions({ apiVersion: "2016-11-23" });
const stage = process.env.AWS_STAGE;
const name = `${message.command}-${message.upc}-${message.accountName}-${stage}-${Date.now()}`;

const params: StepFunctions.StartExecutionInput = {
  stateMachineArn: `arn:aws:states:us-east-1:837955377040:stateMachine:transportation-${stage}-steps`,
  input: JSON.stringify(message),
  name
};

const request = stepFn.startExecution(params);

// listen for success
request.on("extractData", req => {
  console.info(
    `startExecution request for step function was sent and validated, context sent was:\n`,
    JSON.stringify(params, null, 2)
  );

  callback(null, {
    statusCode: 200
  });
});

// listen for error
request.on("error", (err, response) => {
  console.warn(
    `There was an error -- ${err.message} [${err.code}, ${
      err.statusCode
    }] -- that blocked the kickoff of the ${message.command} ITMS command for ${
      message.upc
    } UPC, ${message.accountName} account.`
  );

  callback(err.statusCode, {
    message: err.message,
    errors: [err]
  });
});

// send request
request.send();
Now please bear in mind there is a "success" event, but I used "extractData" to capture success because I wanted to get a response as quickly as possible. It's possible that "success" would have worked equally well, but looking at the language in the TypeScript typings it wasn't entirely clear, and in my testing I'm certain that the "extractData" event does work as expected.
As for why I was not getting any execution of my Step Function: it had to do with the way I was naming the execution. You're limited to a subset of characters in the name, and I'd stepped over that restriction but didn't realize it until I was able to capture the error with the code above.
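To illustrate (a sketch only, not my exact code): execution names are limited to 80 characters and reject whitespace and most special characters, so a conservative helper that keeps only alphanumerics, hyphens, and underscores avoids the problem:

// Sketch: build an execution name from free-form parts, keeping only characters
// that Step Functions definitely accepts, and capping the length at 80.
const safeExecutionName = (...parts: Array<string | undefined>): string =>
  parts
    .filter(Boolean)
    .join('-')
    .replace(/[^A-Za-z0-9_-]/g, '-') // strip spaces, quotes, colons, etc.
    .slice(0, 80);

// e.g. instead of the original template literal with spaces and quotes:
const name = safeExecutionName('transport-steps', message.command, stage, String(Date.now()));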
For anyone encountering issues executing state machines from Lambdas: make sure the permission 'states:StartExecution' is added to the Lambda's permissions and that the regions match up.
Promise based version:
import { StepFunctions } from 'aws-sdk';
const clients = {
stepFunctions: new StepFunctions();
}
const createExecutor = ({ clients }) => async (event) => {
console.log('Executing media pipeline job');
const params = {
stateMachineArn: '<state-machine-arn>',
input: JSON.stringify({}),
name: 'new-job',
};
const result = await stepFunctions.startExecution(params).promise();
// { executionArn: "string", startDate: number }
return result;
};
const startExecution = createExecutor({ clients });
// Pass in the event from the Lambda e.g S3 Put, SQS Message
await startExecution(event);
The result should contain the execution ARN and the start date.