Is there a TIMEOUT environment variable for Lambda functions in AWS? - amazon-web-services

I really don't understand why no one seems to have asked this question before, but is there a TIMEOUT environment variable which references the set timeout in the Lambda function in AWS?
It doesn't appear on the list of available environment variables, which doesn't seem to make sense to me: https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html

I'm not sure of your programming environment, but every Lambda runtime I've seen includes a Context object, which lets you check how much time is left in the current run. In Java, for example:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class ShowTimeout implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        LambdaLogger logger = context.getLogger();
        logger.log("there are " + context.getRemainingTimeInMillis() + "ms left to run");
    }
}
This logs how much time is left. The default timeout is 3 seconds but, of course, it changes depending on how you configure the Lambda.
EDIT
This is not the total time set, but read as the very first thing the Lambda does it's pretty close. For a Lambda with a 15 second timeout I got:
there are 14998ms left to run
and
there are 14999ms left to run
If your Lambda has already been running for a while then the number will look too small, and you can account for that. If reading it is the first thing you do, as in my simple code, you'll be very close. I'd argue that simple rounding is accurate enough.
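For Python runtimes the same trick looks like this; rounding the remaining time up to the nearest second at the very top of the handler recovers the configured timeout closely enough for most purposes. This is only a sketch of the idea above, not an official way to read the timeout setting:

import math

def handler(event, context):
    # Queried as the first statement of the handler, the remaining time
    # is within a few milliseconds of the configured timeout.
    remaining_ms = context.get_remaining_time_in_millis()
    approx_timeout_s = math.ceil(remaining_ms / 1000)
    print(f"~{approx_timeout_s}s configured, {remaining_ms}ms left to run")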

Related

Is there any Callback option in GCP cloud function?

I am looking for a way to "wake up" the cloud function when a related process is done.
To explain in depth, these are the functions I have:
1. A cloud function that gets called every X amount of time; its purpose is to call another function (function #2).
2. An external provider's function that requests information (I can't edit the code, I only have the request body). The information is not received in real time; once the process ends, it sends a callback. Note that the process can take many minutes or even hours.
I want to create a process where every X amount of time function 1 calls function 2, and as soon as function 2 finishes, it returns the information to function 1, which stores it in the DB.
Example code for func1:
import requests

def entry_point():  # func1
    response = requests.get('https://outsourceapi.com/get_info')  # func2
    save_response_in_DB(response.json())  # this will happen after getting the response
Because I cannot keep function 1 awake for so long, is there a way to "wake it up" again?
Or alternatively another solution?
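One pattern that fits the callback the provider already sends: expose a second HTTP-triggered function as the callback target and let it write the result to the DB, so function 1 never has to stay awake. A minimal Python sketch under the assumption that the provider lets you register a callback URL; the callback_url parameter, the function names, and save_response_in_DB are illustrative placeholders:

import requests

def entry_point(request):  # func1, triggered every X amount of time (e.g. by Cloud Scheduler)
    # Start the long-running job and tell the provider where to send the result.
    # 'callback_url' is a hypothetical parameter; the real provider API may differ.
    requests.post(
        'https://outsourceapi.com/get_info',
        json={'callback_url': 'https://REGION-PROJECT.cloudfunctions.net/receive_callback'},
    )
    return 'job started', 202

def receive_callback(request):  # a separate HTTP-triggered function
    # The provider calls this once the job is finished, possibly hours later.
    save_response_in_DB(request.get_json())  # placeholder for your DB write
    return 'ok', 200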

AWS Lambda NodeJS locale/variable isolation

There is a concern about a potential problem with reused variables in aws-lambda.
A user's locale is passed as
Browser cookies => AWS API Gateway => Lambda (NodeJS 6.10)
On the server side, localization is implemented with a static variable in a class. TypeScript code is shown for clarity, but the same can be done in pure ECMAScript.
Module Language.ts
export default class Language
{
    public static Current: LanguageCode = LanguageCode.es;
}
The static Language.Current variable is used across different parts of the application for manual localization, and it works perfectly on the client side (react + redux).
Lambda function
import {APIGatewayEvent, Context, Callback} from 'aws-lambda';
import Language from './pathToModule/Language';

export const api = function(event: APIGatewayEvent, context: Context, callback: Callback)
{
    Language.Current = event.headers.cookie.locale;
    // do the logic here
};
Potential problem
According to the AWS documentation, NodeJS instances can be reused for different requests. This means that well-known concurrency problems have to be considered, e.g.:
User 1 calls lambda function. The locale is set to English.
In parallel, user 2 calls the same Lambda instance. The locale is changed to Spanish.
User 1's code continues and reads the modified (wrong) locale variable from the shared Language module.
How do you resolve this problem?
For convenience it is good to have only one place where the locale is changed. As I understand it, the same concern exists for all the popular i18n npm packages (i18next, i18n, yahoo i18n, etc.).
One of the best practices for Lambda functions is to avoid writing code that maintains state.
Here you are initializing the locale based on an initial request and applying it to all future requests, which is inherently flawed even in server-based code, let alone serverless.
To fix this, you will need to initialize the localization library for each request, or at least maintain an in-memory lazy map keyed by the current request's locale, and use that to achieve the desired localization.
There are several solutions:
The Node JS container is reused only after a function invocation has finished (its callback has been called or an error has occurred) (thanks to #idbehold). Thus there is always a unique context per function call.
Refactor the code and pass a locale variable back and forth (Yeshodhan Kulkarni's suggestion).
For example, return a function as an intermediate result and use it before calling the result back.
var localizableResult = ...;
var result = localizableResult.Localize(requestedLocale);
If there is a need for a local stack (a kind of thread context) in other projects, there is an npm package, node-continuation-local-storage.
Case 1 makes it really simple: because calls do not overlap within a single container, a global variable for the current locale is safe enough.
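Whichever option you pick, the underlying rule is that per-request data should travel with the request rather than live in shared module state, and that applies to every Lambda runtime, not just Node. A minimal Python sketch of the pass-it-explicitly approach (option 2 above); the names here are illustrative, not taken from the question:

GREETINGS = {'en': 'Hello', 'es': 'Hola'}

def handler(event, context):
    # Read the locale from this request only; never stash it in a module-level variable.
    locale = (event.get('headers') or {}).get('locale', 'es')
    # Pass the locale explicitly to anything that needs it.
    return {'statusCode': 200, 'body': localize('greeting', locale)}

def localize(key, locale):
    # Each call receives the locale for its own request, so a reused execution
    # environment cannot leak one user's locale into another user's response.
    if key == 'greeting':
        return GREETINGS.get(locale, GREETINGS['es'])
    return key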

How to lock a long async call in a WebApi action?

I have this scenario where I have a WebApi and an endpoint that when triggered does a lot of work (around 2-5min). It is a POST endpoint with side effects and I would like to limit the execution so that if 2 requests are sent to this endpoint (should not happen, but better safe than sorry), one of them will have to wait in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task onto a service might be the optimal way, however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be only fired once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job) and I would like to ensure the API doesn't run into concurrency issues if the developer e.g. double-sends the request accidentally etc.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no longer any need to lock or wait while processing the long-running request.
The flow could be:
The client POSTs /ResourcesToProcess and should quickly receive 202 Accepted
The HttpController simply queues the task to be processed (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task to process
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of previously made requests:
If the task is not found: 404 Not Found. Resource not found for id 123
If the task is processing: 200 OK. 123 is processing.
If the task is done: 200 OK. The process response.
Your controller could look like:
public class TaskController : ApiController
{
    //constructor and private members

    [HttpPost, Route("")]
    public void QueueTask(RequestBody body)
    {
        messageQueue.Add(body);
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return NotFound(); // thing does not exist
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (thing.ResponseContent == null)
        {
            // queued but not started yet
            return Ok("thing is not processing yet");
        }
        // here we assume thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design suggests that you do not handle the long-running process inside your WebApi. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/

Set or modify an AWS Lambda environment variable with Python boto3

I want to set or modify an environment variable in my lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable with the AWS Lambda console and don't set a value. After that I try this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])

os.environ['ENV_VAR'] = "new value"
In this case my value will never print.
I tried with:
os.putenv()
but it's the same result.
Do you know why this environment variable is not set?
Thank you!
Consider using the boto3 Lambda API call update_function_configuration to update the environment variable.
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
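One caveat worth noting: the Variables map passed to update_function_configuration replaces the function's existing environment variables, so if the function already has other variables you will usually want to read the current configuration first and merge. A hedged sketch of that pattern, with the function and variable names as placeholders:

import boto3

client = boto3.client('lambda')

def set_env_var(function_name, key, value):
    # Fetch the variables the function already has so the update doesn't wipe them out.
    current = client.get_function_configuration(FunctionName=function_name)
    variables = current.get('Environment', {}).get('Variables', {})
    variables[key] = value
    client.update_function_configuration(
        FunctionName=function_name,
        Environment={'Variables': variables},
    )

set_env_var('test-env-var', 'env_var', 'hello')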
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how lambda works. Environment variables cannot be set in a child process for the parent - a process can only set environment variables in its own and child process environments.
This may be confusing to you if you set environment variables at the shell, but in that case, the shell is the long running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This will only set env A for itself. If you run it several times, the initial value of A is always the shell's value, never the value python sets.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that wasn't the case, it wouldn't work reliably with aws lambda. Lambda runs your code on whatever compute resources are available at the time; it will typically cache runtimes for frequently executed functions, so in these cases data could be written to the filesystem to preserve it. But if the next invocation wasn't run in that runtime, your data would be lost.
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would read from that location, achieving the desired result.
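As a concrete illustration of the S3 option, here is a minimal sketch; the bucket and key names are placeholders for resources you would create yourself:

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-lambda-state-bucket'  # placeholder: an existing bucket you own
KEY = 'state/env_var.txt'

def load_value(default=''):
    # Read the value saved by a previous invocation, if any.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        return obj['Body'].read().decode('utf-8')
    except s3.exceptions.NoSuchKey:
        return default

def save_value(value):
    # Persist the value for the next invocation.
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=value.encode('utf-8'))

def handler(event, context):
    previous = load_value()
    save_value('new value')
    return {'previous': previous}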
AWS Lambda just executes the piece of code with a given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, then you probably need to store it in a DB or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, like success or failure, in SQS and use that for the next trigger. Just throwing the options out here; the rest depends on your requirements.

Is it possible to change the tick count value returned from GetTickCount()?

I'm trying to do some testing, and it requires the Windows system to be up and running for 15 real-time minutes before a certain action can occur. However, it is very time consuming to have to wait the 15 real-time minutes.
Is there a way to change the value GetTickCount() returns so as to make it appear that the system has been running for 15 real-time minutes?
Edit: There is an app that does something close to what I want, but it doesn't quite seem to work and I have to deal with hexadecimal values instead of straight decimal values: http://ysgyfarnog.co.uk/utilities/AdjustTickCount/
Not directly.
Why not just mock the call, or replace the chunk of code that does the time check with a strategy object?
struct Waiter
{
    virtual void Wait() = 0;
    virtual ~Waiter() {}
};

// renamed from "15MinWaiter": a C++ identifier cannot begin with a digit
struct FifteenMinWaiter : public Waiter
{
    virtual void Wait()
    {
        //Do something that waits for 15 mins
    }
};

struct NothingWaiter : public Waiter
{
    virtual void Wait()
    {
        //Nil
    }
};
You could do something similar to mock out the call to GetTickCount, but doing this at the higher level of abstraction of whatever is doing the wait is probably better.
For debugging purposes, you can just replace all the calls to GetTickCount() with _GetTickCount(), which you can implement to return either GetTickCount() or GetTickCount() + 15 minutes, depending on whether or not you are debugging.
Why not make it one minute, confirm it works, then change it back to fifteen?
You could do something quite hideous like #define GetTickCount() MyReallyEvilReplacement().
You can use the Application Verifier provided with the Windows SDK to run your app with the "Miscellaneous > TimeRollOver" test. It will fake a tick count which starts at a time that will overflow after a short moment.
Another possibility is to hibernate / hybrid shutdown / sleep the Windows system, then boot into the BIOS and change the date and time to something you require, for example adding 30 days if you want to test unsigned tick counts. When Windows boots again, it has no way of detecting how long the computer has really been running and thinks it has been up for 30 more days. It is important to use sleep / hibernate / hybrid shutdown (the latter being the default since Windows 8), not a full shutdown, as the uptime is otherwise reset.
Yet another possibility could be to hook imports of GetTickCount to your own code and let it return arbitrary results.