Deploy from multiple sources with CodePipeline - amazon-web-services

I am trying to set up a project on AWS. I am using CodePipeline to deploy my code to Elastic Beanstalk, and the source is coming from a git repository. This works fine.
The project has some configuration files (passwords and settings and such) that I don't want to include in the git repository. Since they are not in the git repository, they are not deployed by CodePipeline.
How can I include the configuration files in the CodePipeline without including them in the git repository?
Idea: I have tried adding an extra S3 source to the CodePipeline, containing the configuration files. I then had to add an extra deployment action to deploy the new S3 source. But the two deployment actions conflict with each other, and only one of them succeeds. If I retry the one that failed, whatever the successful one deployed is removed again. It doesn't seem to be possible to add two input artifacts (sources) to a single deployment action.

It is possible to use .ebextensions to copy files from an S3 bucket or another source during deployment. Amazon describes it well in their documentation.
Here is an example:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-west-2-123456789012"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"

files:
  "/tmp/data.json":
    mode: "000755"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-west-2-123456789012/data.json

Rather than storing the configuration files in a repository I'd recommend using the Software Configuration feature that Elastic Beanstalk has.
Here's a related answer explaining how to do that: https://stackoverflow.com/a/17878600/7433105
If you want to model your config as a separate source action then you would either have to have a build step that merges the source artifacts into one deployable artifact, or have some independent deployment process for the configuration that won't interfere with your application deployment (eg. copy to S3 in a Lambda function, then pull down the configuration when your application starts).
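For the second option, the application can pull the configuration back down from S3 when it starts. A minimal sketch of that startup step with boto3 (the bucket name, key, and local path are hypothetical placeholders, not anything from the original setup):

import boto3

# Minimal sketch: fetch the externally managed config file at application startup.
# Bucket, key and destination path are hypothetical placeholders.
s3 = boto3.client("s3")
s3.download_file("my-app-config-bucket", "config/settings.json", "/tmp/settings.json")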

Related

How to rename AWS CodePipeline object

I use AWS CodePipeline linked with an S3 bucket to deploy applications to some Elastic Beanstalks. It is a very simple pipeline with only two stages: a Source (S3 bucket -> war file) and a Deploy (EB reference).
Say I have an application called "app.war". When I deploy it manually to EB in AWS, an incremental number is appended to my application name (app-1.war, app-2.war, ...) based on how many times I have deployed an application with the same name.
I want to achieve this with CodePipeline. Is there something I can do? Some stages I have to configure? Variables?
I would like to rename the "code-pipeline-123abc..." name of my war file to something more specific to my application, as happens with a manual deploy.

How do I create a dev and prod instance of the same cloud function using serverless?

I have a Cloud Function named getVendors that is deployed using serverless. I need to create both a dev and production instance of the same function. When I deploy using the dev variable in my yml file, it overwrites the function currently in GCP.
How do I deploy a dev instance of the same cloud function without overwriting the production instance?
For reference, I have two stacks (dev and prod) visible in the Deployment Manager, but when I look at Cloud Functions in the project, there is only one function listed. This is the function that gets overwritten.
service: get-vendor-info

provider:
  name: google
  stage: production
  runtime: nodejs8
  region: us-central1
  project: {PROJECT NAME IS HIDDEN}
  # the path to the credentials file needs to be absolute
  credentials: {MY_CREDENTIALS}.json

plugins:
  - serverless-google-cloudfunctions

package:
  exclude:
    - node_modules/**
    - .gitignore
    - .git/**

functions:
  getVendors:
    handler: getVendors
    events:
      - event:
          eventType: providers/cloud.pubsub/eventTypes/topic.publish
          resource: projects/{MY_PROJECT_NAME}/topics/getVendors
I use two different GCP projects; one for dev/test and another for prod. If there are multiple developers on the team you should consider giving each of them their own dev project as well.
By doing that you reduce the risk of development or testing work messing up production data. You could copy the production database into the dev/test project nightly, and potentially add special test data to it as well. This special test data could be corner cases, well-known data to run automated tests against, etc.
For a more in-depth discussion, check out this talk from Google Cloud Next: https://youtu.be/41QvqGfbz9o?t=1142

Serverless deploying to AWS, Azure or GCP

Does anyone using the Serverless framework know if it's possible to use the same serverless deploy file to deploy to all three cloud providers if the underlying code is capable?
Or are the serverless files specific to each cloud provider?
Thanks
Assuming all your function code is provider-agnostic...
Each provider has its own specific way of defining and configuring things, so you would expect the low-level details of the serverless.yml file for each to be different.
That being said, the high-level properties of serverless.yml are pretty much common to most, if not all, providers.
service:
provider:
plugins:
functions:
This would allow you to have one serverless.yml for all providers that simply references other YAML files depending on an environment variable. Assuming you have serverless-aws.yml, serverless-azure.yml, and serverless-google.yml for your provider-specific configuration, you should be able to use this in your serverless.yml:
service: ${file(serverless-${env:PROVIDER}.yml):service}
plugins: ${file(serverless-${env:PROVIDER}.yml):plugins}
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
functions:
  getSomething: ${file(serverless-${env:PROVIDER}.yml):functions.getSomething}
  createSomething: ${file(serverless-${env:PROVIDER}.yml):functions.createSomething}
  updateSomething: ${file(serverless-${env:PROVIDER}.yml):functions.updateSomething}
  deleteSomething: ${file(serverless-${env:PROVIDER}.yml):functions.deleteSomething}
Whenever you deploy, you can choose which provider to use by specifying the PROVIDER environment variable.
$ PROVIDER=aws sls deploy # Deploys to AWS
$ PROVIDER=azure sls deploy # Deploys to Azure
$ PROVIDER=google sls deploy # Deploys to GCP
#dashmug's answer should work but doesn't. If you try to include the entire provider section, it doesn't get evaluated, i.e. sls print just spits out the un-evaluated expression:
provider: ${file(serverless-${env:PROVIDER}.yml):provider}
Trying to parameterize each key doesn't work either, because it changes the order of the keys, which seems to cause the deploy to fail:
# serverless.yml
...
provider:
  name: ${file(serverless-${env:PROVIDER}.yml):provider.name}
  runtime: ${file(serverless-${env:PROVIDER}.yml):provider.runtime}
  stage: ${file(serverless-${env:PROVIDER}.yml):provider.stage}
...
Results in this:
> sls print
service: my-crossplatform-service
provider:
  stage: prod
  name: aws
  runtime: nodejs8.10
I ended up just maintaining separate serverless.yml files and deploying with a little bash script that copies the appropriate file first:
#!/bin/bash
if [ "$1" != "" ]; then
  echo "copying serverless-$1.yml to serverless.yml and running serverless deploy"
  cp serverless-$1.yml serverless.yml && sls deploy
else
  echo "Please append provider, like 'deploy.sh aws' or 'deploy.sh azure'"
fi
Really wish you could just specify the config file as a deploy option, as requested here: https://github.com/serverless/serverless/issues/4485

I'm trying to integrate Bitbucket into AWS Code Pipeline? What is the best approach?

I want to integrate my code from Bitbucket into AWS CodePipeline. I am unable to find proper examples of this. My source code is in .NET.
Can someone please guide me?
Thanks.
You can integrate Bitbucket with AWS CodePipeline by using webhooks that call to an AWS API Gateway, which invokes a Lambda function (which calls into CodePipeline). There is an AWS blog that walks you thru this: Integrating Git with AWS CodePipeline
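The moving parts in that post are an API Gateway endpoint that receives the Bitbucket webhook and a Lambda function behind it. As a rough sketch only (not the blog's exact code; the pipeline name is a placeholder), the Lambda can kick off the pipeline like this, assuming the pipeline's source artifact is already in place:

import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    # Invoked by API Gateway when Bitbucket sends a push webhook.
    # "my-app-pipeline" is a placeholder for your pipeline's name.
    response = codepipeline.start_pipeline_execution(name="my-app-pipeline")
    return {"statusCode": 200, "body": response["pipelineExecutionId"]}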
Bitbucket has a service called Pipelines which can deploy code to AWS services. Use Pipelines to package and push updates from your master branch to an S3 bucket which is hooked up to CodePipeline.
Note:
You must enable Pipelines in your repository.
Pipelines expects a file named bitbucket-pipelines.yml which must be placed inside your project.
Ensure you set your account's AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Bitbucket Pipelines UI. This comes with an option to encrypt, so all is safe and secure.
Here is an example bitbucket-pipelines.yml which copies the contents of a directory named DynamoDb to an S3 bucket:
pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update # required to install zip
            - apt-get install -y zip # required if you want to zip repository objects
            - zip -r DynamoDb.zip .
            - apt-get install -y python-pip
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            - python s3_upload.py LandingBucketName DynamoDb.zip DynamoDb.zip # run the deployment script
Here is a working example of a Python upload script which should be deployed alongside the bitbucket-pipelines.yml file in your project. Above I have named my Python script s3_upload.py:
from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError


def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)


if __name__ == "__main__":
    main()
Here is an example CodePipeline with only one Source stage (you may want to add more):
Pipeline:
  Type: "AWS::CodePipeline::Pipeline"
  Properties:
    ArtifactStore:
      # Where codepipeline copies and unpacks the uploaded artifact
      # Must be versioned
      Location: !Ref "StagingBucket"
      Type: "S3"
    DisableInboundStageTransitions: []
    RoleArn:
      !GetAtt "CodePipelineRole.Arn"
    Stages:
      - Name: "Source"
        Actions:
          - Name: "SourceTemplate"
            ActionTypeId:
              Category: "Source"
              Owner: "AWS"
              Provider: "S3"
              Version: "1"
            Configuration:
              # Where PipeLines uploads the artifact
              # Must be versioned
              S3Bucket: !Ref "LandingBucket"
              S3ObjectKey: "DynamoDb.zip" # Zip file that is uploaded
            OutputArtifacts:
              - Name: "DynamoDbArtifactSource"
            RunOrder: "1"

LandingBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    AccessControl: "Private"
    VersioningConfiguration:
      Status: "Enabled"

StagingBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    AccessControl: "Private"
    VersioningConfiguration:
      Status: "Enabled"
Reference to this Python code along with other examples can be found here: https://bitbucket.org/account/user/awslabs/projects/BP
Follow up for anyone finding this now:
AWS CodeBuild now supports Atlassian Bitbucket Cloud as a Source Type, making it the fourth alongside the existing supported sources: AWS CodeCommit, Amazon S3, and GitHub.
This means you no longer need to implement a lambda function as suggested in #Kirkaiya's link to integrate with Bitbucket - it is still a valid solution depending on your use case or if you're integrating with the non-cloud version of Bitbucket.
Posted on the AWS blog Aug 10, 2017 -
https://aws.amazon.com/about-aws/whats-new/2017/08/aws-codebuild-now-supports-atlassian-bitbucket-cloud-as-a-source-type/
And to clarify for the commenters, this link talks about integrating with CodeBuild, not CodePipeline: you still need to find a way to trigger the pipeline, but when it is triggered, CodeBuild will pull the code from Bitbucket rather than having to copy the code to S3 or AWS CodeCommit before triggering the pipeline.
If you are looking for a way to automate your build and deploy process using AWS CodePipeline with Bitbucket as the source, without using Lambdas, do the following steps.
Create a CodeBuild project, which supports Bitbucket as of now. https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html
Also create a webhook which rebuilds every time code is pushed to the repository.
You cannot use a webhook if you use a public Bitbucket repository.
CodeBuild will trigger automatically on commit, create a zip file, and store it in an S3 bucket.
Create a CodePipeline with S3 as the source, and deploy it using CodeDeploy, as S3 is a valid source.
So the process from commit to deployment is totally automated.
Note 1: In order to create a webhook, you need to have Bitbucket admin access.
Note 2: As of now (April 2019), CodeBuild does not support webhooks on pull request merge. If you want, you can create a trigger which will run CodeBuild, say, every day.
You can also create triggers to build code periodically: https://docs.aws.amazon.com/codebuild/latest/userguide/trigger-create.html
Update (June 2019): Pull request builds for PR_Merge are now supported in CodeBuild. Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html#sample-bitbucket-pull-request-filter-webhook-events.
An alternative to #binary's answer, and a clarification of #OllyTheNinja's answer:
In short: let CodeBuild listen to Bitbucket's webhook and write to an S3 object; in the pipeline, listen to the update event of the latter.
In the AWS CodeSuite:
Define a CodeBuild project with
Source: Bitbucket, using its webhook to listen to git-push events.
Buildspec: build the project according to buildspec.yml.
Artifacts: store the output of the build directly in an S3 container.
Define the pipeline:
Source: listen to updates to the previously defined S3 object.
Remove the Build step.
Add other steps and configure the deploy step.
AWS CodeBuild Now Supports Building Bitbucket Pull Requests, and we can make use of this for a better solution without using webhooks/API Gateway/Lambda.
You can use CodeBuild to zip your code to S3 and use that as a source in your CodePipeline:
https://lgallardo.com/2018/09/07/codepipeline-bitbucket
For me, the best way to integrate Bitbucket with any AWS service is to use Pipelines to mirror any commit into a (mirror) AWS CodeCommit repo. From there, you have prime integration into any service on AWS.
You can find an excellent how-to for this approach.
In December 2019, AWS launched support for Atlassian Bitbucket Cloud in beta mode.
So now you can natively integrate AWS CodePipeline with Bitbucket Cloud.

How to specify sensitive environment variables at deploy time with Elastic Beanstalk

I am deploying a Python Flask application with Elastic Beanstalk. I have a config file /.ebextensions/01.config where among other things I set some environment variables - some of which should be secret.
The file looks something like this:
packages:
  yum:
    gcc: []
    git: []
    postgresql93-devel: []

option_settings:
  "aws:elasticbeanstalk:application:environment":
    SECRET_KEY: "sensitive"
    MAIL_USERNAME: "sensitive"
    MAIL_PASSWORD: "sensitive"
    SQLALCHEMY_DATABASE_URI: "sensitive"
  "aws:elasticbeanstalk:container:python:staticfiles":
    "/static/": "app/static/"
What are the best practices for keeping certain values secret? Currently the .ebextensions folder is under source control and I like this because it is shared with everyone, but at the same time I do not want to keep sensitive values under source control.
Is there a way to specify some environment variables through the EB CLI tool when deploying (e.g. eb deploy -config ...)? Or how is this use case covered by the AWS deployment tools?
The AWS documentation recommends storing sensitive information in S3 because environment variables may be exposed in various ways:
Providing connection information to your application with environment properties is a good way to keep passwords out of your code, but it's not a perfect solution. Environment properties are discoverable in the Environment Management Console, and can be viewed by any user that has permission to describe configuration settings on your environment. Depending on the platform, environment properties may also appear in instance logs.
The example below is from the documentation, to which you should refer for full details. In short, you need to:
Upload the file to S3 with minimal permissions, possibly encrypted.
Grant read access to the role of the instance profile for your Elastic Beanstalk autoscaling group. The policy would be like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "database",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-secret-bucket-123456789012/beanstalk-database.json"
      ]
    }
  ]
}
Add a file with a name like s3-connection-info-file.config to /.ebextensions in your application bundle root with these contents:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["my-secret-bucket-123456789012"]
          roleName: "aws-elasticbeanstalk-ec2-role"

files:
  "/tmp/beanstalk-database.json":
    mode: "000644"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/my-secret-bucket-123456789012/beanstalk-database.json
Then update your application code to extract the values from the file /tmp/beanstalk-database.json (or wherever you decide to put it in your actual config.)
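For example, in a Flask/SQLAlchemy setup the file could be read at startup roughly like this (the key name inside beanstalk-database.json is a placeholder; use whatever structure you actually uploaded):

import json

# Minimal sketch: load the connection info that .ebextensions copied to /tmp.
with open("/tmp/beanstalk-database.json") as f:
    db_config = json.load(f)

# "connection_string" is a placeholder key; match it to your uploaded JSON.
SQLALCHEMY_DATABASE_URI = db_config["connection_string"]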
This question already has an answer, but I want to contribute an alternative solution to this problem. Instead of having to keep secrets in environment variables (which then have to be managed and stored somewhere out of version control, plus you need to remember to set them at deployment), I put all my secrets in an encrypted S3 bucket only accessible from the role the EB is running as. I then fetch the secrets at startup. This has the benefit of completely decoupling deployment from configuration, and you never ever have to fiddle with secrets in the command line again.
If needed (for example if secrets are needed during app setup, such as keys to repositories where code is fetched) you can also use an .ebextensions config file with an S3Auth directive to easily copy the contents of said S3 bucket to your local instance; otherwise just use the AWS SDK to fetch all secrets from the app at startup.
EDIT: As of April 2018, AWS offers a dedicated managed service for secrets management: AWS Secrets Manager. It offers convenient, secure storage of secrets in string or JSON format, versioning, stages, rotation and more. It also eliminates some of the configuration around KMS, IAM, etc. for a quicker setup. I see no real reason to use any other AWS service for storing static sensitive data such as private keys, passwords, etc.
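For instance, if the secrets are stored as a single JSON secret, the application can load them at startup with boto3. This is only a sketch; the secret name, region, and key names are hypothetical:

import json
import boto3

# Minimal sketch: read a JSON secret at startup. Secret name and region are placeholders.
client = boto3.client("secretsmanager", region_name="us-west-2")
response = client.get_secret_value(SecretId="my-app/production")
secrets = json.loads(response["SecretString"])

SECRET_KEY = secrets["SECRET_KEY"]
MAIL_PASSWORD = secrets["MAIL_PASSWORD"]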
You should be able to specify sensitive values as environment variables from eb web console: Your EB app -> Your EB environment -> Configuration -> Software Configuration -> Environment Properties
Alternatively, you can make use of this: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
EDIT: While this was the accepted answer in 2015, this should not be how you handle it anymore. Now you can use AWS Secrets Manager for this purpose.
I have been using a separate shell script, something like ./deploy_production.sh, to set environment-specific variables. In the shell script, you can use "eb setenv NAME1=VAR1 NAME2=VAR2 ..." to set environment variables.
This file doesn't need to go into the git repository.
Some of the other answers are mentioning that there might be a better way with Parameter Store / Secrets Manager.
I described how I did this with AWS Systems Manager Parameter Store (which also gives you an interface to Secrets Manager) in this answer: https://stackoverflow.com/a/59910941/159178. Basically, you give your Beanstalk ECS IAM role access to the relevant parameter and then load it from your application code at startup.
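A minimal sketch of that startup step with boto3 (the parameter name and region are placeholders, not anything from the linked answer):

import boto3

# Minimal sketch: fetch a SecureString parameter at application startup.
ssm = boto3.client("ssm", region_name="us-west-2")
response = ssm.get_parameter(Name="/my-app/prod/SECRET_KEY", WithDecryption=True)
SECRET_KEY = response["Parameter"]["Value"]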