So, I have 3 different cdk.json files with different name/value pairs: cdk.json, cdk-west.json, cdk-east.json.
When I run a synth or deploy, can I specify a specific JSON file instead of the default cdk.json?
I know you can change profiles with the --profile option, but I'm not sure about cdk.json.
Unfortunately, there is no such option.
A good way around this is to add an "environments" attribute to your cdk.json file, with an entry for each environment that contains the settings for that environment.
{
  ...
  "environments": {
    "xxxxxxxxxxxx": {
      "name": "dev",
      "awsAccountId": "xxxxxxxxxxxx",
      "awsRegion": "us-west-2",
      "logLevel": "debug",
      "customSetting": "value 1",
      ...
    },
    "prod-west-1-account-id": {
      "name": "prod-us-west-1",
      ...
    },
    "prod-west-2-account-id": {
      "name": "prod-us-west-2",
      ...
    }
  }
}
Then each stack can grab the appropriate environment configuration based on the value of this.account.
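As a sketch of that lookup (Python shown; key names assumed to match the JSON above, and "111111111111" is a made-up account ID), a stack could select its settings like this. In a real CDK app you would pass this.account (self.account in Python) as the lookup key:

```python
import json

def environment_config(cdk_json_text, account_id):
    """Return the settings block for one environment, keyed by AWS account ID."""
    config = json.loads(cdk_json_text)
    return config["environments"][account_id]

# Trimmed-down cdk.json content; "111111111111" is a made-up account ID
cdk_json_text = """
{
  "environments": {
    "111111111111": {
      "name": "dev",
      "awsAccountId": "111111111111",
      "awsRegion": "us-west-2",
      "logLevel": "debug"
    }
  }
}
"""

env = environment_config(cdk_json_text, "111111111111")
print(env["name"])
```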
While developing sample apps with the AWS CDK, I found the app section in cdk.json, and I have a question about it. What is app? It seems that app.js is executed.
When is this command executed? Is it run when we run cdk synth?
I don't understand its purpose.
Thanks
{
  "app": "node packages/cdk/dist/app",
  "output": "build/cdk.out",
  "context": {
    "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true,
    "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true,
    "@aws-cdk/aws-kms:defaultKeyPolicies": true,
    "@aws-cdk/aws-s3:grantWriteWithoutAcl": true,
    "@aws-cdk/aws-ecs-patterns:removeDefaultDesiredCount": true,
    "@aws-cdk/aws-rds:lowercaseDbIdentifier": true,
    "@aws-cdk/aws-efs:defaultEncryptionAtRest": true,
    "systemName": "devio",
    "envType": "stg"
  }
}
Per the CDK documentation:
Since the AWS CDK supports programs written in a variety of languages, it uses a configuration option to specify the exact command necessary to run your app. This option can be specified in two ways.
First, and most commonly, it can be specified using the app key inside the file cdk.json, which is in the main directory of your AWS CDK project. The CDK Toolkit provides an appropriate command when creating a new project with cdk init.
Here is the cdk.json from a fresh TypeScript project, for instance:
{
  "app": "npx ts-node bin/hello-cdk.ts"
}
I have a dockerized Django application that I want to deploy on SAP Cloud Platform via the cloudfoundry CLI utility. I have added a couple of User Provided Services, each with its own set of credentials. For example, I added AWS S3 as a User Provided Service and provided credentials for it.
Now those credentials are available in an environment variable:
VCAP_SERVICES={"user-provided": [{
  "label": "user-provided",
  "name": "s3",
  "tags": [],
  "instance_name": "s3",
  "binding_name": null,
  "credentials": {
    "aws_access_key": "****",
    "aws_secret_key": "****",
    "bucket": "****",
    "region": "****",
    "endpoint": "*****"
  },
  "syslog_drain_url": "",
  "volume_mounts": []
}]}
I have a .env file where variables are defined, e.g. AWS_ACCESS_KEY. Usually I pass a string value to the variable, which is then consumed by my app. But given that I have configured it via the User Provided Service mechanism and the credentials are already there, I was wondering how to access those credentials.
There are a few ways to extract service information in Python applications.
You can do it programmatically using the cfenv library. You would simply integrate this library into the start-up of your application. This generally offers the most flexibility, but it can sometimes be difficult to integrate with frameworks, depending on how they expect configuration to be fed in.
You can generate environment variables or configuration files (from your example, the .env file) on the fly. This can be done using a .profile script. A .profile script, if placed in the root of your application, will execute prior to your application but inside the runtime container. This allows you to adjust your application's configuration at the last moment.
A .profile script is just a shell script and in it, you can use tools like jq or sed to extract information from the VCAP_SERVICES environment variable and put that information elsewhere (possibly other environment variables or into a .env file).
Because you are pushing a Python application, the .profile could also execute a Python script. The Python buildpack will run and guarantee that a Python runtime is available on the PATH for use by your .profile script. Thus you can do something like this, to execute a Python script.
.profile script:
python $HOME/.profile.py
.profile.py script (made up this name, you can call it anything):
#!/usr/bin/env python3
print("Hello from python .profile script")
You can even import Python libraries in your script that are included in your requirements.txt file from this script.
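For example, a minimal sketch of such a .profile.py (service and key names assumed to match the VCAP_SERVICES example above) that turns the bound credentials into KEY=value lines suitable for a .env file:

```python
import json

def env_lines(vcap_json, service_name="s3"):
    """Extract the credentials of a user-provided service as KEY=value lines."""
    services = json.loads(vcap_json)
    for svc in services.get("user-provided", []):
        if svc["name"] == service_name:
            creds = svc["credentials"]
            return [
                f"AWS_ACCESS_KEY={creds['aws_access_key']}",
                f"AWS_SECRET_KEY={creds['aws_secret_key']}",
            ]
    raise KeyError(f"no user-provided service named {service_name!r}")

# In a real .profile.py you would read os.environ["VCAP_SERVICES"];
# a sample payload is used here so the sketch is self-contained.
sample = ('{"user-provided": [{"name": "s3", "credentials":'
          ' {"aws_access_key": "AKIA...", "aws_secret_key": "****"}}]}')
for line in env_lines(sample):
    print(line)
```

You could then append these lines to .env (open(".env", "a")) so the application picks them up the same way it does locally.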
I am trying to specify a build request with the source specified as a repoSource:
{
  "source": {
    "repoSource": {
      "repoName": "myRepo",
      "branchName": "master"
    }
  },
  "steps": [
    {
      "name": "gcr.io/cloud-builders/docker",
      "args": ["build", "-t", "gcr.io/$PROJECT_ID/zookeeper", "."]
    }
  ],
  "images": [
    "gcr.io/$PROJECT_ID/zookeeper"
  ]
}
However, when I attempt to submit it with gcloud, I get an error:
$ gcloud container builds submit --no-source --config cloudbuild.json
ERROR: (gcloud.container.builds.submit) cloudbuild.json: config cannot specify source
In "Writing Custom Build Requests" it says:
When you submit build requests using the gcloud command-line tool, the source field may not be necessary if you specify the source as a command-line argument. You may also specify a source in your build request file. See the gcloud documentation for more information.
Note: The storageSource and repoSource fields described in the Build resource documentation differ from the source field. storageSource instructs Container Builder to find the source files in a Cloud Storage bucket, and repoSource refers to a Cloud Source Repository containing the source files.
So how, then, do you specify repoSource with gcloud? I am only seeing the gs:// url prefix documented.
I'm developing a PWA using Ionic 3.
Normally the JavaScript files generated by the build to browser process of Ionic 3 are in www/build folder.
I wish to change this folder to www/build/v1 and, of course, keep the application working.
How could I change the build process of Ionic 3 to achieve this result?
The simplest way is to add a "config" section to your package.json:
{
  "name": "projectname",
  "version": "0.0.1",
  "author": "Ionic Framework",
  ...,
  "config": {
    "ionic_www_dir": "www/v1",
    "ionic_build_dir": "www/v1/build"
  },
  "dependencies": {
    ...
  }
}
You can read about "Custom Project Structure" in the Ionic docs.
You may want to try the config option in the package.json file to provide custom build configuration.
To get started, add a config entry to the package.json file. From there, you can provide your own configuration file.
Here's an example of specifying a custom configuration:
"config": {
...
"ionic_rollup": "./config/rollup.config.js",
"ionic_cleancss": "./config/cleancss.config.js",
...
},
You may want to see this Ionic documentation for more information.
The Amazon Elastic Beanstalk blurb says:
Elastic Beanstalk lets you "open the hood" and retain full control ... even pass environment variables through the Elastic Beanstalk console.
http://aws.amazon.com/elasticbeanstalk/
How do I pass other environment variables besides the ones in the Elastic Beanstalk configuration?
As a heads up to anyone who uses the .ebextensions/*.config way: nowadays you can add, edit and remove environment variables in the Elastic Beanstalk web interface.
The variables are under Configuration → Software Configuration:
Creating the vars in .ebextensions like in Onema's answer still works.
It can even be preferable, e.g. if you will deploy to another environment later and are afraid of forgetting to manually set them, or if you are ok with committing the values to source control. I use a mix of both.
The console limits you to only five values, and you may want a custom environment variable name. You can get around both by using configuration files. Create a directory at the root of your project called
.ebextensions/
Then create a file called environment.config (it can be called anything, but it must have the .config extension) and add the following values:
option_settings:
  - option_name: CUSTOM_ENV
    value: staging
After you deploy your application you will see this new value under
Environment Details -> Edit Configuration -> Container
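Inside the application, the value set above is then read like any other environment variable; for example in Python (a sketch; "development" is just an assumed local fallback):

```python
import os

# CUSTOM_ENV is set by Elastic Beanstalk from the option_settings above;
# fall back to a default when running locally
custom_env = os.environ.get("CUSTOM_ENV", "development")
print(custom_env)
```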
For more information, check the documentation here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-options
Update
To prevent committing to your repository values like API keys, secrets and so on, you can put a placeholder value.
option_settings:
  - option_name: SOME_API_KEY
    value: placeholder-value-change-me
Later you can go to the AWS admin panel (Environment Details -> Edit Configuration -> Container) and update the values there. In my experience these values do not change after subsequent deployments.
Update 2
As @Benjamin stated in his comment, since the new look and feel rolled out on July 18, 2013, it is possible to define any number of environment variables directly from the console:
Configuration > Software Configuration > Environment Properties
In the 2016 Java 8 Tomcat 8 AMI, Elastic Beanstalk fails to set environment variables from the web configuration; it actually sets JVM -D system properties instead.
"The following properties are passed into the application as environment variables. Learn more."
This statement is incorrect for the Java Tomcat AMI. Amazon does not set these as environment variables; they are set as system properties passed on the command line to Tomcat as -D JVM options.
In Java, reading an environment variable is not the same as reading a system property: System.getenv vs. System.getProperty.
I ssh'd into the box and verified that the environment variable was never set. However, in the Tomcat logs I can see that the -D property is set.
I've changed my code to check both locations as a workaround.
AWS will interpret CloudFormation template strings in your environment variables. You can use this to access information about your EB environment inside your application:
In the AWS web interface the following will be evaluated as the name of your environment (note the back ticks):
`{ "Ref" : "AWSEBEnvironmentName" }`
Or, you can use an .ebextensions/*.config and wrap the CloudFormation template in back ticks (`):
{
  "option_settings": [
    {
      "namespace": "aws:elasticbeanstalk:application:environment",
      "option_name": "ENVIRONMENT_NAME",
      "value": "`{ \"Ref\" : \"AWSEBEnvironmentName\" }`"
    }
  ]
}
Alternatively, you could use the Elastic Beanstalk CLI to set environment variables.
To set an environment variable: eb setenv FOO=bar
To view the environment variables: eb printenv
Environment Details -> Edit Configuration -> Container
This seems to be the only way to set ENVs with dynamic values in beanstalk. I came up with a workaround that works for my multi-docker setup:
1) Add this to your Dockerfile before building and uploading to your ECS repository:
CMD eval `cat /tmp/envs/env_file`; <base image CMD goes here>;
2) In your Dockerrun.aws.json file create a volume:
{
  "name": "env-file",
  "host": {
    "sourcePath": "/var/app/current/envs"
  }
}
3) Mount volume to your container
{
  "sourceVolume": "env-file",
  "containerPath": "/tmp/envs",
  "readOnly": true
}
4) In your .ebextensions/options.config file, add a container_commands block like so:
container_commands:
  01_create_mount:
    command: "mkdir -p envs/"
  02_create_env_file:
    command: { "Fn::Join" : [ "", [ 'echo "', "export ENVIRONMENT_NAME=", { "Ref" : "RESOURCE" }, ';" > envs/env_file;' ] ] }
5) eb deploy, and your ENVs should be available in your Docker container.
You can add more ENVs by adding more container_commands like:
02_create_env_file_2:
  command: { "Fn::Join" : [ "", [ 'echo "', "export ENVIRONMENT_NAME_2=", { "Ref" : "RESOURCE2" }, ';" >> envs/env_file;' ] ] }
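To make the Fn::Join above less opaque, here is a sketch in Python of what it evaluates to ("my-env" stands in for whatever value CloudFormation substitutes for the Ref):

```python
def fn_join(separator, parts):
    """Mimic CloudFormation's Fn::Join intrinsic: join parts with a separator."""
    return separator.join(parts)

# "my-env" stands in for the resolved { "Ref" : "RESOURCE" } value
command = fn_join("", ['echo "', "export ENVIRONMENT_NAME=", "my-env", ';" > envs/env_file;'])
print(command)
```

The container command that gets run at deploy time is therefore a plain shell redirect that writes an export line into envs/env_file, which the Dockerfile's CMD later evals.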
Hope this helps!