How to inject environment variables in elastic beanstalk config.yml - amazon-web-services

In the link below
http://docs.shippable.com/deploy/aws-elastic-beanstalk/
it seems like environment variables are used in config.yml. How do we achieve that? The official AWS documentation does not seem to have any details on using variables inside config.yml.
Any suggestions will be of great help.
I am looking to set something like default_platform using environment variables, not application variables alone.

Yes, you won't find anything in the AWS documentation, because using environment variables to template config.yml is a Shippable feature, not an AWS feature.
Shippable explains how to add extra ENVs (environment variables) in the documentation you posted, which reads (highlighted in bold):
Define deploy-eb-basic-params
Description: deploy-eb-basic-params is a params resource that defines variables we want to make easily configurable. These variables definitions replace the placeholders in the Dockerrun.aws.json and config.yml files.
Steps:
Add the following yml block to the resources section in your shippable.yml file.
# shippable.yml
resources:
  - name: deploy-eb-basic-params
    type: params
    version:
      params:
        ENVIRONMENT: "sample"
        PORT: 80
        AWS_EB_ENVIRONMENT_SINGLE: "Sample-env"
        AWS_EB_APPLICATION: "deploy-eb-basic"
        CUSTOM_ENV_HERE: "some value" # <------------ your custom value here.
Then you should be able to reference that ENV, CUSTOM_ENV_HERE, in your config.yml:
# config.yml
branch-defaults:
  default:
    environment: ${AWS_EB_ENVIRONMENT_SINGLE}
environment-defaults:
  ${AWS_EB_ENVIRONMENT_SINGLE}:
    branch: null
    repository: null
global:
  application_name: ${AWS_EB_APPLICATION}
  default_ec2_keyname: null
  default_platform: ${CUSTOM_ENV_HERE} # <------------ you reference ENV here.
  default_region: ${DEPLOYEBBASICCONFIG_POINTER_REGION}
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: null
  workspace_type: Application
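For context, the ${...} substitution above is performed by Shippable before the file is handed to the EB CLI; as noted, neither AWS nor the eb tool expands environment variables in config.yml. If you ever need to reproduce the same effect outside Shippable, a minimal sketch in Python (the file paths here are assumptions for illustration, not something from the Shippable or AWS docs) could look like this:

import os
from string import Template

# Read a templated config and substitute ${VAR} placeholders from the
# current environment; unknown placeholders are left untouched.
with open("config.yml.template") as f:
    rendered = Template(f.read()).safe_substitute(os.environ)

# The EB CLI normally keeps its settings in .elasticbeanstalk/config.yml.
with open(".elasticbeanstalk/config.yml", "w") as f:
    f.write(rendered)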
Best of luck.

Related

How to access custom environment variables in an AWS Batch job

I'm setting an environment variable in my AWS Batch job like so.
BatchJobDef:
  Type: 'AWS::Batch::JobDefinition'
  Properties:
    Type: container
    JobDefinitionName: xxxxxxxxxx
    ContainerProperties:
      Environment:
        - Name: 'PROC_ENV'
          Value: 'dev'
When I look at my job definition, I can see it listed in the Environment variables configuration.
Then I'm trying to access it in my job's Python code like this:
env = os.environ['PROC_ENV']
but there is no PROC_ENV variable set, and I get the following error when I run my job:
raise KeyError(key) from None
KeyError: 'PROC_ENV'
Can anyone tell me what I'm missing here? Am I accessing this environment variable the correct way?
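One way to narrow this down is to dump the environment the job actually sees, which separates "the variable never reached the container" from "the name is wrong in the code". A small sketch, assuming Python 3 inside the job's container:

import os

# Look the variable up defensively instead of raising KeyError.
proc_env = os.environ.get("PROC_ENV")

if proc_env is None:
    # Print every variable the process actually received, so you can see
    # whether PROC_ENV is missing entirely or just named differently.
    print("PROC_ENV is not set. Variables visible to this job:")
    for key in sorted(os.environ):
        print(f"  {key}")
else:
    print(f"PROC_ENV={proc_env}")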

Environment variables in a CodeBuild project are overridden at the action level when using CDK

I am using CDK to deploy CodePipeline and CodeBuild to AWS. I found that the environment variables at the project level are completely replaced by those at the action level.
Below is the code:
new actions.CodeBuildAction({
  actionName,
  type: actions.CodeBuildActionType.BUILD,
  input,
  outputs,
  project: new codebuild.PipelineProject(this, name, {
    projectName: name,
    environment: {
      ...
    },
    environmentVariables: envs,
    role: this.codeBuildRole,
    buildSpec: codebuild.BuildSpec.fromSourceFilename(specFile),
  }),
  environmentVariables: additionalEnvs,
  runOrder: 1,
});
As you can see in the code above, there are two environmentVariables configurations: one under CodeBuildAction and one under PipelineProject. Based on the AWS doc https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-codepipeline-actions.CodeBuildActionProps.html#environmentvariables,
(optional, default: No additional environment variables are specified.)
The environment variables to pass to the CodeBuild project when this action executes.
If a variable with the same name was set both on the project level, and here, this value will take precedence.
the environmentVariables under CodeBuildAction should be additional variables. But what happens is that the environment variable list on the CodeBuild project becomes empty, and during execution the running CodeBuild project only has the variables from the action level. It seems to be a complete replacement rather than an append. Does anyone know what I did wrong here?
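One way to sidestep the question of how the two lists are merged is to merge them yourself and pass the result in a single place. A minimal sketch of that idea using CDK v1's Python bindings (the variable names and values below are illustrative, not taken from the question; this is a workaround sketch, not an explanation of the observed behavior):

from aws_cdk import core
from aws_cdk import aws_codebuild as codebuild

class BuildStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Variables you would otherwise set on the project ...
        project_envs = {
            "STAGE": codebuild.BuildEnvironmentVariable(value="dev"),
        }
        # ... and variables you would otherwise set on the action.
        action_envs = {
            "DEPLOY_TARGET": codebuild.BuildEnvironmentVariable(value="blue"),
        }

        # Merge both maps and hand the result to the project only, so the
        # build does not depend on project/action merge semantics.
        merged = {**project_envs, **action_envs}

        codebuild.PipelineProject(
            self, "Project",
            environment_variables=merged,
        )

app = core.App()
BuildStack(app, "build-stack")
app.synth()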

Understanding the following environment entry in manifest.yml (Pivotal Cloud Foundry)

I have this manifest.yml:
applications:
- name: xx
  buildpack: java-bp480-v2
  instances: 2
  memory: 2G
  path: webapp/build/libs/trid.war
  services:
  - xxservice
  - xxservice
  - xxcktbrkrcnfgsvc
  - xxappdynamics
  - autoscaler-xx
env:
  spring_profiles_active: cloud
  swagger_active: false
  JAVA_OPTS: -Dspring.profiles.active=cloud -Xmx1G -Xms1G -XX:NewRatio=1 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
What will env do?
Will that create three environment variables, or will it append JAVA_OPTS to the start command if the active Spring profile is cloud?
What will env do?
The env block will instruct the cf cli to create environment variables on your behalf. Entries take the form variable_name: variable_value. From your example, you'll end up with a variable named spring_profiles_active with a value of cloud. Plus the other two you have defined.
JAVA_OPTS is a special env variable for the Java buildpack. Whatever you put into JAVA_OPTS will be included in the start command for your application. It is an easy way to add additional arguments, system properties and configuration flags to the JVM.
Please note, at least in the example above, the spacing is wrong on your env: block. It's all the way to the left, but the env: should be indented two spaces. Then each env variable defined under the env: block should be indented two more spaces, for a total of four spaces. YAML is very picky about spaces and indentation. When in doubt, use a YAML validator to confirm your YAML is valid.

Use custom folder instead of ~/.aws

I want to find a way to avoid having to specify aws_access_key and aws_secret_key when using the aws modules.
Does Ansible by default try to use the credentials in ~/.aws when running playbooks?
If yes, how do I instruct Ansible to use AWS credentials under whatever folder I want, e.g. ~/my_ansible_folder?
I ask this because I really want to use Ansible to create a vault (cd ~/my_ansible_folder; ansible-vault create aws_keys.yml) and then run the playbook with ansible-playbook -i ./inventory --ask-vault-pass site.yml so that it uses the AWS credentials in the vault and I don't have to specify aws_access_key and aws_secret_key in the tasks that need them.
The list of boto3 configuration options will interest you, most notably the $AWS_SHARED_CREDENTIALS_FILE environment variable.
I would expect you can create that shared credentials file using a traditional copy: content="[default]\naws_access_key_id=whatever\netc\netc\n" and then set the ansible_python_interpreter fact to be env AWS_SHARED_CREDENTIALS_FILE=/path/to/that/credential-file /the/original/ansible_python_interpreter to cause the actual python invocation to carry that environment variable with it. For non-boto modules, doing that will just cost you running env as well as python, but to be honest the bizarre module serialization and deserialization that ansible does anyway will cause that extra binary runtime to be invisible in the scheme of things.
You may have to override $AWS_CONFIG_FILE and $BOTO_CONFIG in the same manner, even pointing them at /dev/null, in order to force boto to not look in your $HOME/.aws directory.
So, for clarity:
- name: create our boto config
  copy:
    content: |
      [default]
      aws_access_key_id={{ access_key_from_vault }}
      aws_secret_access_key={{ secret_key_from_vault }}
    dest: /somewhere/sekrit
    mode: '0600'
  no_log: yes
  register: my_aws_config

- name: grab existing python interp
  set_fact:
    backup_a_py_i: '{{ ansible_python_interpreter | default(ansible_playbook_python) }}'

- name: patch in our env-vars
  set_fact:
    ansible_python_interpreter: >-
      env AWS_SHARED_CREDENTIALS_FILE={{ my_aws_config.dest }}
      {{ backup_a_py_i }}

# and away you go!
- ec2_instance_facts:

# optionally put this in a "rescue:" or whatever you think is reasonable
- file: path={{ my_aws_config.dest }} state=absent

How to use dynamic key for `parameter-store` in AWS CodeBuild spec file?

I have a buildspec.yml file in my CodeBuild project that I want to read values out of EC2 Systems Manager Parameter Store. CodeBuild supports doing this via the parameter-store attribute in your spec file.
Problem is, I can't figure out how to use environment variables that are set BEFORE the buildspec executes.
Here is an example:
version: 0.2
env:
  variables:
    RUNTIME: "nodejs8.10"
  # parameter-store vars are in the format /[stage]/[repo]/[branch]/[eyecatcher]/key
  parameter-store: # see https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
    LAMBDA_EXECUTION_ROLE_ARN: "/${STAGE}/deep-link/${BRANCH}/GetUri/lambdaExecutionRoleArn"
    ENV_SAMPLE_KEY: "/${STAGE}/deep-link/${BRANCH}/GetUri/key1"
phases:
  install:
    commands:
      ...
As you can see I'm doing the AWS best practice for name-spacing the EC2 Systems Manager Parameter Store keys. I want to re-use this build spec for all my stages, so hard coding is not an option. The vars I use in the Value string are populated as EnvironmentVariables in my CodeBuild project - so they are available before the spec runs.
How do I dynamically populate the Value of the parameter-store Keys with something that is not hard coded?
This variable expansion is now supported in CodeBuild for the parameter-store use case. You can define any environment variable in your buildspec and have it referenced in the path used to fetch the parameter. For example, if you have an environment variable called $stage, you could use it like this:
version: 0.2
env:
  variables:
    stage: PRE_PROD
  parameter-store:
    encryptedVar: CodeBuild-$stage
phases:
  build:
    commands:
      - echo $encryptedVar
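If the build still fails to resolve a key, it can help to confirm that the namespaced parameter actually exists and is readable with the same credentials. A quick sketch with boto3 (the parameter name below is illustrative only, not taken from the question):

import boto3

ssm = boto3.client("ssm")

# Substitute your own /[stage]/[repo]/[branch]/[eyecatcher]/key path here.
name = "/dev/deep-link/master/GetUri/key1"

# WithDecryption=True also decrypts SecureString parameters.
response = ssm.get_parameter(Name=name, WithDecryption=True)
print(response["Parameter"]["Value"])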
I found this StackOverflow post - unfortunately the feature you describe does not seem to exist.
It would have been nice to be able to use parameters and functions akin to the features in CloudFormation templates.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html
It doesn't say it explicitly, but I'm guessing you can use a !Sub in whatever CloudFormation template you are using to build that resolve string, and use it in a ParameterOverride to pass it into your buildspec in the regular parameter block instead of a parameter-store block.