How to access custom environment variables in an AWS Batch job

I'm setting an environment variable in my AWS Batch job like so:

BatchJobDef:
  Type: 'AWS::Batch::JobDefinition'
  Properties:
    Type: container
    JobDefinitionName: xxxxxxxxxx
    ContainerProperties:
      Environment:
        - Name: 'PROC_ENV'
          Value: 'dev'
When I look at my job definition, I can see it listed under the Environment variables configuration.
Then I'm trying to access it in my job's Python code like this:

env = os.environ['PROC_ENV']

but there is no PROC_ENV variable set, and I get the following error when I run my job:

    raise KeyError(key) from None
KeyError: 'PROC_ENV'
Can anyone tell me what I'm missing here? Am I accessing this environment variable the correct way?
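For what it's worth, os.environ['PROC_ENV'] is the right access pattern once the variable actually reaches the container; a small defensive sketch that either falls back to a default or fails with a clearer message than KeyError (the helper name here is my own, not from the question):

```python
import os
from typing import Optional

def require_env(name: str, default: Optional[str] = None) -> str:
    """Return an environment variable, or raise a clearer error than KeyError."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"{name} is not set in the container environment")
    return value

# env = require_env("PROC_ENV")          # raises RuntimeError if missing
# env = require_env("PROC_ENV", "dev")   # falls back to "dev" if missing
```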

CFN-Lint error in nested Cloudformation stack

I am creating AWS resources using a CloudFormation nested stack, and the pipeline runs in GitLab.

Resources:
  CF-resource:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: local path of yaml file
      Parameters:

When running cfn-lint on the parent stack I get the error below.

W3002 This code may only work with package cli command as the property (/TemplateURL) is a string

Can you help me solve this?
It's a warning to let you know you can't deploy this template directly without first packaging it. You can suppress it by adding -i W3002.
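Both options in practice, as a sketch (template.yaml and my-artifact-bucket are placeholder names, not from the question):

```shell
# Option 1: suppress the warning while linting the parent template
cfn-lint -i W3002 template.yaml

# Option 2: resolve it properly by packaging, which uploads local
# TemplateURL files to S3 and rewrites them to S3 URLs
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yaml
```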

Serverless framework - New variables resolver

When I run sls offline - I'm facing a deprecation warning:
Serverless: Deprecation warning: Variables resolver reports following resolution errors:
- Variable syntax error at "functions.Test.environment.TEST_URL": Invalid variable type at index 20 in "${file(./env.yml):${'${self:provider.stage}.TEST_URL'}}"
From a next major this will be communicated with a thrown error.
Set "variablesResolutionMode: 20210326" in your service config, to adapt to new behavior now
The documentation is not clear about this.
env.yml

dev:
  TEST_URL: https://example.com/

serverless.yml

frameworkVersion: '2'
...
functions:
  Test:
    handler: handler.test
    environment:
      TEST_URL: ${file(./env.yml):${'${self:provider.stage}.TEST_URL'}} # <-------

It works correctly with frameworkVersion (>=1.1.0 <2.0.0).
What is a new approach to get data from another file?
This is the new approach to get data from another file:

environment:
  TEST_URL: ${file(./env.yml):${self:provider.stage}.TEST_URL}
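As an aside, I believe the new resolver also accepts a fallback default after a comma, used when the stage key is missing from env.yml (hedged sketch; the fallback URL is illustrative):

```yaml
environment:
  TEST_URL: ${file(./env.yml):${self:provider.stage}.TEST_URL, 'https://example.com/'}
```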

ECS with Docker Compose environment variables

I'm deploying to ECS with the Docker Compose API, however, I'm sort of confused about environment variables.
Right now my docker-compose.yml looks like this:
version: "3.8"
services:
simple-http:
image: "${IMAGE}"
secrets:
- message
secrets:
message:
name: "arn:aws:ssm:<AWS_REGION>:<AWS_ACCOUNT_ID>:parameter/test-env"
external: true
Now in my Container Definitions, I get a Simplehttp_Secrets_InitContainer that references this environment variable as message and with the correct ARN, but there is no variable named message inside my running container.
I'm a little confused, as I thought this was the correct way of passing secrets such as DB passwords, AWS credentials, and so forth.
In the docs we see:
services:
  test:
    image: "image"
    environment:
      - "FOO=BAR"
But is this the right and secure way of doing this? Am I missing something?
I haven't played much with secrets in this ECS/Docker integration, but there are a couple of things that don't add up between your understanding and the docs. First, the integration seems to work with Secrets Manager, not SSM. Second, according to the docs the content won't be available as a variable but rather as a flat file at runtime at /run/secrets/message (in your example).
Check out this page for the fine details: https://docs.docker.com/cloud/ecs-integration/#secrets
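So inside the running container the code would read the secret from a flat file rather than from the environment; a minimal sketch, assuming the default /run/secrets mount path:

```python
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a secret that the ECS integration mounts as a flat file."""
    return Path(secrets_dir, name).read_text().strip()

# message = read_secret("message")
```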

How to inject environment variables in elastic beanstalk config.yml

In the link below
http://docs.shippable.com/deploy/aws-elastic-beanstalk/
it seems like environment variables are used in config.yml. How do we achieve that? The official AWS documentation does not seem to have details on using variables inside config.yml.
Any suggestions will be of great help.
I am looking to set something like default_platform using environment variables, not application variables alone.
Yes, you won't find anything in the AWS documentation, because using environment variables to template config.yml is a Shippable feature, not an AWS feature.
Shippable details how to add extra ENVs (environment variables) in the documentation you posted, which reads:
Define deploy-eb-basic-params
Description: deploy-eb-basic-params is a params resource that defines variables we want to make easily configurable. These variables definitions replace the placeholders in the Dockerrun.aws.json and config.yml files.
Steps:
Add the following yml block to the resources section in your shippable.yml file.
# shippable.yml
resources:
  - name: deploy-eb-basic-params
    type: params
    version:
      params:
        ENVIRONMENT: "sample"
        PORT: 80
        AWS_EB_ENVIRONMENT_SINGLE: "Sample-env"
        AWS_EB_APPLICATION: "deploy-eb-basic"
        CUSTOM_ENV_HERE: "some value" # <------------ your custom value here.
Then you should be able to reference that ENV, CUSTOM_ENV_HERE, in your config.yml:
# config.yml
branch-defaults:
  default:
    environment: ${AWS_EB_ENVIRONMENT_SINGLE}
environment-defaults:
  ${AWS_EB_ENVIRONMENT_SINGLE}:
    branch: null
    repository: null
global:
  application_name: ${AWS_EB_APPLICATION}
  default_ec2_keyname: null
  default_platform: ${CUSTOM_ENV_HERE} # <------------ you reference ENV here.
  default_region: ${DEPLOYEBBASICCONFIG_POINTER_REGION}
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: null
  workspace_type: Application
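Conceptually, this placeholder replacement is plain shell-style templating; a minimal Python sketch of the same idea (string.Template stands in for Shippable's actual engine here, which is an assumption for illustration only):

```python
from string import Template

# Values that would come from the deploy-eb-basic-params resource
params = {"AWS_EB_APPLICATION": "deploy-eb-basic", "CUSTOM_ENV_HERE": "node.js"}

# A line from config.yml containing a ${...} placeholder
line = Template("default_platform: ${CUSTOM_ENV_HERE}")
print(line.safe_substitute(params))  # default_platform: node.js
```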
Best of luck.

How to use dynamic key for `parameter-store` in AWS CodeBuild spec file?

I have a buildspec.yml file in my CodeBuild project that I want to have read values out of EC2 Systems Manager Parameter Store. CodeBuild supports doing this via the parameter-store attribute in your spec file.
Problem is, I can't figure out how to use environment variables that are set BEFORE the buildspec executes.
Here is an example:
version: 0.2
env:
  variables:
    RUNTIME: "nodejs8.10"
  #parameter-store vars are in the format /[stage]/[repo]/[branch]/[eyecatcher]/key
  parameter-store: #see https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
    LAMBDA_EXECUTION_ROLE_ARN: "/${STAGE}/deep-link/${BRANCH}/GetUri/lambdaExecutionRoleArn"
    ENV_SAMPLE_KEY: "/${STAGE}/deep-link/${BRANCH}/GetUri/key1"
phases:
  install:
    commands:
      ...
As you can see, I'm following the AWS best practice for namespacing the EC2 Systems Manager Parameter Store keys. I want to re-use this buildspec for all my stages, so hard-coding is not an option. The vars I use in the Value string are populated as EnvironmentVariables in my CodeBuild project, so they are available before the spec runs.
How do I dynamically populate the Value of the parameter-store Keys with something that is not hard coded?
This variable expansion is now supported in CodeBuild for the parameter-store use case. You can define any environment variable in your buildspec and have it referenced in the path used to fetch the parameter. For example, if you have an environment variable called $stage you could use it like this:
version: 0.2
env:
  variables:
    stage: PRE_PROD
  parameter-store:
    encryptedVar: CodeBuild-$stage
phases:
  build:
    commands:
      - echo $encryptedVar
I found this StackOverflow post; unfortunately, the feature you describe does not seem to exist.
It would have been nice to be able to use parameters and functions akin to the features in CloudFormation templates.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html
It doesn't say so explicitly, but I'm guessing you can use !Sub in whatever CloudFormation template you are using to build that resolve string, and use it in a ParameterOverride to pass into your buildspec in a regular parameter block instead of a parameter-store block.
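A hypothetical sketch of that idea, in the CloudFormation template that defines the CodeBuild project (the resource names and the parameter path are illustrative, not from the question; environment variables of Type PARAMETER_STORE are resolved from SSM at build time):

```yaml
Parameters:
  Stage:
    Type: String
Resources:
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Environment:
        # ... other Environment properties ...
        EnvironmentVariables:
          - Name: LAMBDA_EXECUTION_ROLE_ARN
            Type: PARAMETER_STORE
            Value: !Sub "/${Stage}/deep-link/GetUri/lambdaExecutionRoleArn"
```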