Set or modify an AWS Lambda environment variable with Python boto3 - python-2.7

I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable in the AWS Lambda console without setting a value. After that I try this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])
os.environ['ENV_VAR'] = "new value"
In this case the value never prints.
I also tried:
os.putenv()
but the result is the same.
Do you know why this environment variable is not set?
Thank you!

Consider using the boto3 Lambda API call update_function_configuration to update the environment variable.
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
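Note that Environment.Variables replaces the function's entire environment, so to change a single variable without wiping the others, read the current set first and merge. A minimal sketch (assuming the caller is also allowed to call get_function_configuration):

import boto3

client = boto3.client('lambda')

# Fetch the current environment so other variables are preserved
config = client.get_function_configuration(FunctionName='test-env-var')
variables = config.get('Environment', {}).get('Variables', {})
variables['env_var'] = 'hello'

client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={'Variables': variables},
)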

I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how Lambda works. A process cannot set environment variables for its parent - it can only set them in its own environment and in the environments of the child processes it spawns.
This may be confusing if you set environment variables at the shell, but in that case the shell is the long-running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ
print environ['A']
environ['A'] = "Set from python"
print environ['A']
This only sets A for the python process itself. If you run it several times, the initial value of A is always the shell's value, never the value python set.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that weren't the case, it wouldn't work reliably with AWS Lambda. Lambda runs your code on whatever compute resources are available at the time; it will typically cache runtimes for frequently executed functions, so in those cases data could be written to the filesystem to preserve it. But if the next invocation isn't run in that runtime, your data would be lost.
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would read from that location, achieving the desired result.
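For example, a minimal sketch using S3 as the store (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-state-bucket'   # hypothetical bucket
KEY = 'lambda/state.txt'     # hypothetical key

def handler(event, context):
    # Read the value left behind by the previous invocation, if any
    try:
        previous = s3.get_object(Bucket=BUCKET, Key=KEY)['Body'].read().decode('utf-8')
        print(previous)
    except s3.exceptions.NoSuchKey:
        pass
    # Persist a value for the next invocation
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=b'new value')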

AWS Lambda just executes the piece of code with a given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, then you probably need to store it in a DB or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, like success or failure, in SQS and use it for the next trigger. Just throwing out the options here; the rest depends on your requirements.
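As a rough sketch of the SQS option (the queue URL is a placeholder):

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-state-queue'  # hypothetical

# At the end of one invocation, record the outcome
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='success')

# At the start of the next invocation, read it back
messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get('Messages', [])
if messages:
    print(messages[0]['Body'])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=messages[0]['ReceiptHandle'])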

Related

Local machine environment variable

I have the following problem: I have the same environment for a site on my work and home PCs, but different database records in each.
So for test requests in my local environment I constantly need to change the values under test.
Postman has different scopes for variables (see the documentation).
In my case the production values are saved at collection scope, and at environment scope I override them with my local configuration.
Collection variables
SITE_DOMAIN - https://www.prod.com/
USER_ID - 1234567890
Environment variable
SITE_DOMAIN - https://dev.loc/
USER_ID - 123
At home I have the same domain but a different user id, and I need to change it in the environment variable every time I want to run a request at home.
I want to set USER_ID to another value only on my home machine.
Recorded interface example
Is it possible to override a variable with local machine scope? There is a local layer, but it isn't described in the documentation.
If I understand the question correctly:
You could add a value into the local environment file, local_dev or something, and run a check to see if it's there - then have some logic in the pre-request script that looks for the value; if it exists, change the USER_ID variable to the one you want before the request is made, and if not, do nothing.
Roughly, something like this but more elegant:
if (pm.environment.get('local_dev') === 'some_value') {
    pm.environment.set('USER_ID', 1234)
}
I might have totally misunderstood the question though.
As I see it, a local variable is one we set in the Pre-request Script section via the pm.variables scope.
So we can override an environment value without changing it:
pm.variables.set("VAR_NAME", "VAR_VALUE");
Unfortunately this will run on every PC on each sent request, so we need to add some logic to it.
As suggested by @Danny Dainton, we can add an environment variable indicating which PC we are on.
So as a workaround I add a variable PC_ENV to the local environment and put some logic for it in the Pre-request Script section.
if (pm.environment.get('PC_ENV') === 'home') {
    pm.variables.set("USER_ID", "35");
}
How can we use this? When we start work with Postman, we go to our environment and set the PC_ENV value to home or office, depending on where we are.
Recorded example
If we don't want the pre-request script to run on every request, we can add all the local variable values for each PC and run it only once at the beginning of work, guarded by a setup condition.
const needSetupEnvironment = true; // change to false when setup is finished

if (needSetupEnvironment) {
    const currentEnvironment = 'home'; // set up environment before starting work
    let userId;

    switch (currentEnvironment) {
        case 'home':
            userId = 35;
            break;
        default:
            userId = 123;
            break;
    }

    pm.environment.set('USER_ID', userId);
}
We can enable the script when we need to change environment variables, run it once with the correct environment, and then disable it after setup.
Recorded example

AWS Lambda NodeJS locale/variable isolation

There is a concern about a potential problem with reused variables in aws-lambda.
A user's locale is passed as
Browser cookies => AWS API Gateway => Lambda (NodeJS 6.10)
On the server side, localization is implemented with a static variable in a class. TypeScript code is presented for clarity, but it can be done in pure ECMAScript.
Module Language.ts
export default class Language
{
    public static Current: LanguageCode = LanguageCode.es;
}
The static Language.Current variable is used across different parts of the application for manual localization, and it works perfectly on the client side (react + redux).
Lambda function
import {APIGatewayEvent, Context, Callback} from 'aws-lambda';
import Language from './pathToModule/Language.ts';

export const api = function(event: APIGatewayEvent, context: Context, callback: Callback)
{
    Language.Current = event.headers.cookie.locale;
    // do the logic here
}
Potential problem
According to the AWS documentation, Node.js instances can be reused for different requests. This means the usual concurrency problems have to be considered, e.g.:
User 1 calls the lambda function. The locale is set to English.
In parallel, user 2 calls the same lambda instance. The locale is changed to Spanish.
User 1's code continues and reads the modified (wrong) locale variable from the shared module Language.
How do you resolve this problem?
For convenience it is good to have only one place where the locale is changed. As I understand it, the same concern exists for all the popular i18n npm packages (i18next, i18n, Yahoo i18n, etc.).
One of the best practices for Lambda functions is to avoid writing code that maintains state.
Here you are initializing the locale based on an initial request and applying it to all future requests, which is inherently flawed even in server-based code, let alone serverless.
To fix this, you will need to initialize the localization library for each request, or at least maintain an in-memory lazy map and use the current request's locale to look up the desired localization.
There are several solutions:
A Node.js container is reused only after a function invocation has finished (its callback has fired or an error occurred) (thanks to @idbehold). Thus there is always a unique context per function call.
Refactor the code and pass a locale variable back and forth (@Yeshodhan Kulkarni's suggestion).
For example, return a localizable result as an intermediate value and localize it before passing the result to the callback:
var localizableResult = ...;
var result = localizableResult.Localize(requestedLocale);
If there is a need for a local stack (a kind of thread context) in other projects, there is an npm package, node-continuation-local-storage.
Case 1 makes it really simple to use global variables for the current locale.

PBS Professional hook not updating Priority

I am trying to implement a hook to determine a job's priority upon entering the queue.
The hook is enabled, imported, and its event type is "queuejob", so it is in place (like other hooks we have enabled). This hook, however, does not seem to alter a job's priority as I am expecting.
Here is a simplified example of how I'm trying to alter the Priority for a job:
import pbs
try:
    e = pbs.event()
    j = e.job
    if j.server == 'myserver':
        j.Priority = j.Priority + 50
    e.accept()
except SystemExit:
    pass
Whenever I submit a job after importing this hook and run 'qstat -f' on it, the Priority is always 0, whether I set it to another value in my qsub script or leave it at the default.
Thank you.
A couple of things I discovered:
It appears that PBS does not like using j.Priority in a calculation and assignment, so I had to use another internal variable (which was fine since I already had one for something else),
i.e.:
j.Priority = High_Priority
if pbs.server() == 'myserver':
    j.Priority = High_Priority + 50
Also (as can be seen in the example above), j.server should actually be pbs.server().
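Putting both fixes together, a corrected version of the hook might look like this (a sketch; High_Priority stands in for whatever base value your site uses):

import pbs

High_Priority = 50  # hypothetical base value; replace with your own

try:
    e = pbs.event()
    j = e.job
    # Compare the server name via pbs.server(), not j.server
    if pbs.server() == 'myserver':
        # Assign from a separate variable instead of computing from j.Priority
        j.Priority = High_Priority + 50
    e.accept()
except SystemExit:
    pass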

How to increase deploy timeout limit at AWS Opsworks?

I would like to increase the deploy timeout in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
Error
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in this ticket reporting a similar issue, the OpsWorks folks recommend trying to speed up the call first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: this gets called, which in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass to the initializer!
Now you can use an OpsWorks custom cookbook to override the original method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, therefore you only need to add and edit one single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout.
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice how the only additions are DEFAULT_OPTIONS and merging those options into the Mixlib::ShellOut.new call.
An improvement to this method would be changing the timeout via a Chef attribute, which you could in turn update via your custom JSON in the OpsWorks interface. This means passing the timeout option in the initial OpsWorks::ShellOut.shellout call, not in the method definition. But that depends on how the shellout method actually gets called...

Properly setting DeleteOnTermination on an existing EBS volume using boto

Digging through the code (consider this, for instance), I found that I can read the attribute using:
instance.block_device_mapping['/dev/sdz'].delete_on_termination
...and toggle it using:
instance.modify_attribute('blockdevicemapping', ['/dev/sdz=1']) # toggle on
instance.modify_attribute('blockdevicemapping', ['/dev/sdz']) # toggle off
But it's asymmetrical and I feel like I'm missing some higher-level functionality.
Shouldn't it be more like:
block_device_type = instance.block_device_mapping['/dev/sdz']
block_device_type.delete_on_termination = True
block_device_type.save() # I made this API up
?
You turn this setting on and off by passing a list of strings formatted as '%s=%d' (device=flag).
Switch to on
>>> inst.modify_attribute('blockDeviceMapping', ['/dev/sda1=1'])
Switch to off
>>> inst.modify_attribute('blockDeviceMapping', ['/dev/sda1=0'])
I verified changes outside of python after each attempt to change the setting using:
$ aws ec2 describe-instance-attribute --instance-id i-7890abcd --attribute blockDeviceMapping
Calling inst.modify_attribute('blockDeviceMapping', ['/dev/sda1']) (with the =0 omitted from the string) did not produce any change.
Assigning to inst.block_device_mapping['/dev/sda1'].delete_on_termination also did not produce any change.
After calling modify_attribute, the value of delete_on_termination on the local block device objects is unchanged.
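Wrapping that up, a small helper sketch (boto 2 style, assuming inst is a boto.ec2 Instance object from a live connection):

def set_delete_on_termination(instance, device, delete=True):
    # boto expects the 'device=flag' string form, e.g. '/dev/sda1=0'
    flag = '%s=%d' % (device, 1 if delete else 0)
    instance.modify_attribute('blockDeviceMapping', [flag])
    # Note: instance.block_device_mapping is not refreshed locally;
    # re-fetch the instance (or use describe-instance-attribute) to verify.

set_delete_on_termination(inst, '/dev/sda1', delete=False)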
I walk through the whole process at:
http://f06mote.com/post/77239804736/amazon-ec2-instance-safety-tweak-turn-off-delete-on