How to have multiple CodePipeline triggers in AWS?

How can I have AWS CodePipeline be triggered by multiple sources? For example, I want the same pipeline to be triggered whenever I push to either of two different repositories. In addition, the build stage must know which repository triggered the pipeline and pull from the right one.

Well, it will depend on the pipeline itself. AWS says CodePipelines are made per project only. However, one way you could tackle this problem is by:
building a Lambda function that triggers CodeBuild
giving the Lambda function as many triggers as there are repositories that should start the same pipeline
having the Lambda function pass environment variables to CodeBuild and start its execution
having CodeBuild work out which repo to pull from based on the value of that environment variable
How-To:
To begin with, log in to the AWS console and head to Lambda functions
Create a new function, or edit one if you prefer
Choose to create a new function from scratch if you created the function in the step above
Choose the runtime environment. In this example it is going to be Node.js 10.x, but you can pick the one you prefer
Use the default permission settings, or an already created role for this function if you have one. This is important because we will be editing these permissions later
Click Create function to create it. If you already have a function you can jump to the next step!
You should then land on the function overview screen. Click on "Add trigger"
After that you must choose your provider. This could be any of the options, including CodeCommit. However, if you want another service to be your trigger and it is not listed here, you can always create an SNS topic, have that service publish to it, and then make the SNS topic the trigger for your function. That is going to be the subject of another tutorial later on...
In this case it is going to be CodeCommit, so choose it from the list. You will then be prompted with a screen to configure your preferences
One thing to keep in mind is that the trigger's Event name is quite crucial, since we are going to use it in our function to decide what happens. So choose it carefully
After that the trigger should show up on your function. Now it's time to code our function
Since we are going to have multiple repositories trigger the same CodeBuild project, you can always refer to the AWS SDK CodeBuild documentation here
The method we are going to use is CodeBuild's startBuild method
To configure it properly we need to know which region our build project is in. You can see it by going to your project in the AWS console and looking at the region prefix in the URL
Coming back to our Lambda function, we are going to create a JSON file that stores all of the data to be transferred to CodeBuild when the function runs. It is important to name the file WITH THE EXTENSION, for example repositories.json. The values of the environment variables will differ from trigger to trigger, and this is where the trigger's Event name becomes so important: it is the key of our selector. The selector takes the event trigger name and uses it to look into the JSON and define the environment variables
In the repositories.json file we are going to put all the data we want. If you know a little bit of JSON, you can store whatever you want and have the function pass it on to CodeBuild as environment variables.
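For example, repositories.json could look like the sketch below. The trigger names and fields are only placeholders: the keys must match the Event names you configured on the CodeCommit triggers, and the fields can be whatever your build needs.
{
    "trigger-repo-one": {
        "url": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/repo-one",
        "branch": "master"
    },
    "trigger-repo-two": {
        "url": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/repo-two",
        "branch": "master"
    }
}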
The code is as follows:
const AWS = require('aws-sdk'); // import the AWS SDK
const fs = require('fs');       // import fs to read our JSON file

const handler = (event, context, callback) => {
    AWS.config.update({ region: 'us-east-2' }); // your region may differ from mine
    var codebuild = new AWS.CodeBuild();        // create a CodeBuild client

    // This block is not required; it just prints the access key ID so you can see
    // that the SDK loaded its credentials correctly. Never log the secret key.
    AWS.config.getCredentials(function(err) {
        if (err) console.log(err.stack); // credentials not loaded
        else console.log("Access key:", AWS.config.credentials.accessKeyId);
    });

    // repositories.json maps each CodeCommit trigger name to the data we want to
    // pass on to CodeBuild (for example the repository's clone URL)
    var repositories = JSON.parse(fs.readFileSync(__dirname + '/repositories.json').toString());
    var selectedRepo = event.Records[0].eventTriggerName; // the Event name we configured on the trigger

    var params = {
        projectName: 'lib-patcher-build', /* required */
        artifactsOverride: {
            type: 'CODEPIPELINE' /* required */
        },
        environmentVariablesOverride: [
            {
                // use names without dashes so the buildspec shell can reference them
                name: 'NAME_OF_THE_ENVIRONMENT_VARIABLE', /* required */
                value: 'its-value',                       /* required */
                type: 'PLAINTEXT'
            },
            {
                name: 'REPO_URL',                         /* required */
                value: repositories[selectedRepo].url,    /* required - no quotes: we want the value from the JSON */
                type: 'PLAINTEXT'
            }
            /* more items */
        ]
    };

    codebuild.startBuild(params, function(err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
            callback(err);
        } else {
            console.log(data);           // successful response
            callback(null, data);
        }
    });
};

exports.handler = handler;
Then, in our build project, we can run a quick test to confirm that our function worked by printing the environment variable we passed from our JSON file, through our function, and into the CodeBuild project
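For instance, a minimal buildspec for the build project could simply echo the variable. This is only a sketch: REPO_URL matches the variable name set in the Lambda code above, and the clone step is left commented out because it needs credentials for your repositories.
version: 0.2
phases:
  build:
    commands:
      # if the Lambda passed the variable through, this prints the selected repository's URL
      - echo "Triggered for repository $REPO_URL"
      # the build could then pull from the right repository, for example:
      # - git clone "$REPO_URL" source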
For our function to be able to start the build of our CodeBuild project, we must grant it access. So:
Go back to your lambda and click on the permissions tab
Click on Manage these permissions
Click your policy name but keep in mind it will NOT be the same as mine
Edit your function's policy
Click to add permissions
Click to choose a service and choose CodeBuild
Add the permission for our function to start the build
Now you have two choices: you can either grant this permission for all build projects or restrict it to specific projects. In our case we are going to restrict it, since we have only one build project
To restrict it, click the dropdown button under Resources, choose Specific, and click Add ARN
Then open a new tab, go to your CodeBuild project, and copy the project's ARN
Paste the ARN in the permissions tab you were on previously and click Save changes
Review the policy
Click to save the changes
Now, if you push anything to the repository you configured, the build project should get triggered and, in the build logs section, you should see the value of the variable you set in your function

Related

Evaluate AWS CDK Stack output to another Stack in different account

I am creating two Stacks using AWS CDK. I use the first Stack to create an S3 bucket and upload a Lambda zip file to the bucket using the BucketDeployment construct, like this.
//FirstStack
const deployments = new BucketDeployment(this, 'LambdaDeployments', {
    destinationBucket: bucket,
    destinationKeyPrefix: '',
    sources: [
        Source.asset(path)
    ],
    retainOnDelete: true,
    extract: false,
    accessControl: BucketAccessControl.PUBLIC_READ,
});
I use the second Stack just to generate a CloudFormation template for my clients. In the second Stack, I want to create a Lambda function with parameters for the S3 bucket name and the key of the Lambda zip I uploaded in the first Stack.
//SecondStack
const lambdaS3Bucket = "??"; //TODO
const lambdaS3Key = "??"; //TODO

const bucket = Bucket.fromBucketName(this, "Bucket", lambdaS3Bucket);

const lambda = new Function(this, "LambdaFunction", {
    handler: 'index.handler',
    runtime: Runtime.NODEJS_16_X,
    code: Code.fromBucket(
        bucket,
        lambdaS3Key
    ),
});
How do I refer to these parameters automatically for the second Lambda?
In addition to that, the lambdaS3Bucket needs to use the AWS::Region parameter so that my clients can deploy it in any region (I just need to run the first Stack in the region they require).
How do I do that?
I had a similar use case to this one.
The very simple answer is to hardcode the values. The bucketName is obvious.
The lambdaS3Key you can look up in the synthesized template of the first stack.
The more complex answer is to use pipelines for this. I did this, and in the build step of the pipeline I extracted all lambdaS3Keys and exported them as environment variables, so in the second stack I could reuse them in the code, like:
code: Code.fromBucket(
    bucket,
    process.env.MY_LAMBDA_KEY
),
I see you are aware of this PR, because you are using the extract flag.
Knowing that, you can probably reuse this property for the Lambda key.
Nevertheless, the problem of sharing the names between stacks in different accounts remains. My suggestion is to use pipelines and the constants exported there in the different steps, but a local build script would also do the job.
Do not forget to update the BucketPolicy and KeyPolicy if you use encryption, otherwise the customer account won't have access to the file.
You could also read about the AWS Service Catalog. It is probably an easier way to share your CDK products with your customers (the CDK team is going to support out-of-the-box lambda sharing next).
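As a minimal sketch of the simple route, the second stack could look roughly like this. Everything here is an assumption to adapt: the per-region bucket naming convention, the MY_LAMBDA_KEY variable exported by a build step, and the fallback key you would copy from the first stack's synthesized template.
import { Aws, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Function, Runtime, Code } from 'aws-cdk-lib/aws-lambda';
import { Bucket } from 'aws-cdk-lib/aws-s3';

export class SecondStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // Hypothetical naming convention: the first stack creates one bucket per region,
        // so this stack can rebuild the name from the AWS::Region pseudo parameter.
        const lambdaS3Bucket = `my-lambda-assets-${Aws.REGION}`;

        // Hypothetical key: looked up in the first stack's synthesized template,
        // or exported as an environment variable by a pipeline build step.
        const lambdaS3Key = process.env.MY_LAMBDA_KEY ?? 'replace-with-asset-hash.zip';

        const bucket = Bucket.fromBucketName(this, 'AssetsBucket', lambdaS3Bucket);
        new Function(this, 'LambdaFunction', {
            handler: 'index.handler',
            runtime: Runtime.NODEJS_16_X,
            code: Code.fromBucket(bucket, lambdaS3Key),
        });
    }
}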

AWS CDK V2: How to create CodePipeline Action Group within a Stage

I have an AWS CodePipeline defined in AWS CDK V2 (Typescript). I'm looking to add an 'action group' to my beta deployment stage. Currently I only see a way to add a list of actions which all execute concurrently through the 'actions' property in the StageProps.
In the AWS console there is an option to add an action group, which allows another set of actions that don't execute until the first set of actions complete (almost like a sub-stage). You can view this by going to your pipeline, then Edit -> Edit stage -> Add action group. (Sorry, I don't have the reputation to upload a screenshot yet.)
How do I define and add action groups to my CodePipeline in CDK? Is it even possible? I have some arrays of deployment actions that I want to run sequentially, however currently I am having to run them concurrently. I know I could just make a separate stage to run each list of actions but I would prefer to have them in the same stage.
Please see my pipeline code below:
let stagesToDeployInOrder = []

// Deploy infrastructure for each stage
StageConfigurations.ACTIVE_STAGES.forEach((stage: Stage) => {
    const stageToDeploy: StageProps = {
        stageName: `${stage.stageType}`,
        transitionToEnabled: true,
        actions: [
            ...codeDeploymentManager.getDeploymentActionsForStage(stage.stageType),
            ...stage.stageDeploymentActions
        ]
    }
    stagesToDeployInOrder.push(stageToDeploy);
});

// Define Pipeline itself. Stages are in order of deployment.
new Pipeline(this, `Code-pipeline`, {
    pipelineName: `ProjectNamePipeline`,
    crossAccountKeys: false,
    stages: stagesToDeployInOrder
});
You can create action groups with the CDK by adding the key runOrder.
Actions that should run together as one group are given the same runOrder; any action with a higher runOrder will only be run after the ones with a lower runOrder have been executed.
More details can be found in the documentation here
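For instance, a minimal sketch could look like this. buildProject and sourceOutput are assumed to exist elsewhere in your stack, and the action names are placeholders.
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as codepipeline_actions from 'aws-cdk-lib/aws-codepipeline-actions';

declare const buildProject: codebuild.IProject;    // assumed to exist elsewhere
declare const sourceOutput: codepipeline.Artifact; // assumed to exist elsewhere

// Both actions live in the same stage. The runOrder: 1 action forms the first
// "action group"; the runOrder: 2 action only starts once it has completed.
const deployInfrastructure = new codepipeline_actions.CodeBuildAction({
    actionName: 'DeployInfrastructure',
    project: buildProject,
    input: sourceOutput,
    runOrder: 1,
});

const deployApplication = new codepipeline_actions.CodeBuildAction({
    actionName: 'DeployApplication',
    project: buildProject,
    input: sourceOutput,
    runOrder: 2,
});

const betaStage: codepipeline.StageProps = {
    stageName: 'Beta',
    actions: [deployInfrastructure, deployApplication],
};
This keeps both groups in a single stage, so you avoid splitting sequential work across separate stages.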

Is it possible to deploy a background Function "myBgFunctionInProjectB" in "project-b" and have it triggered by my topic "my-topic-project-a" from "project-a"?

It's possible to create a topic "my-topic-project-a" in project "project-a" so that it can be publicly visible (this is done by setting the role "pub/sub subscriber" to "allUsers" on it).
Then from project "project-b" I can create a subscription to "my-topic-project-a" and read the events from "my-topic-project-a". This is done using the following gcloud commands:
(these commands are executed on project "project-b")
gcloud pubsub subscriptions create subscription-to-my-topic-project-a --topic projects/project-a/topics/my-topic-project-a
gcloud pubsub subscriptions pull subscription-to-my-topic-project-a --auto-ack
So, OK, this is possible when creating a subscription in "project-b" linked to "my-topic-project-a" in "project-a".
In my use case I would like to be able to deploy a background function "myBgFunctionInProjectB" in "project-b" and have it triggered by my topic "my-topic-project-a" from "project-a".
But... this doesn't seem to be possible, since the gcloud CLI is not happy when you provide the full topic name while deploying the cloud function:
gcloud beta functions deploy myBgFunctionInProjectB --runtime nodejs8 --trigger-topic projects/project-a/topics/my-topic-project-a --trigger-event google.pubsub.topic.publish
ERROR: (gcloud.beta.functions.deploy) argument --trigger-topic: Invalid value 'projects/project-a/topics/my-topic-project-a': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
Is there a way to achieve that, or is this actually not possible?
Thanks
So, it seems that this is not actually possible. I found this out by checking it in 2 different ways:
If you try to create a function through the API Explorer, you will need to fill in the location where you want to run it, for example projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION, and then provide a request body like this one:
{
    "eventTrigger": {
        "resource": "projects/PROJECT_FOR_TOPIC/topics/YOUR_TOPIC",
        "eventType": "google.pubsub.topic.publish"
    },
    "name": "projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION/functions/NAME_FOR_FUNCTION"
}
This will result in a 400 error code, with a message saying:
{
    "field": "event_trigger.resource",
    "description": "Topic must be in the same project as function."
}
It will also say that you missed the source code, but, nonetheless, the API already shows that this is not possible.
There is an already open issue in the Public Issue Tracker for this very same issue. Bear in mind that there is no ETA for it.
I also tried to do this from gcloud, as you did, and obviously got the same result. I then tried to remove the projects/project-a/topics/ part from my command, but this creates a new topic in the same project where you create the function, so it's not what you want.

Can I store config in memory and use it in AWS Lambda

I have a lambda function which listens to a DynamoDB stream and processes records for any update or insert in DynamoDB.
Currently this lambda code has a list of variables which I want to convert to a config, since this list can change.
So I want my lambda function to read this list from a config, but I don't want any network call, so I can't make a call to S3/DynamoDB on every invocation. I want this config stored locally in memory.
I want to initialize the lambda and, during this initialization, read the config from a table, store it in some variable, and use it in every invocation.
Can I do this?
I have my Lambda functions (Node.js) read static config from a YAML file. You could do the same with a JSON file if needed. The app also reads dynamic data in from S3 at run time, noting that this is not what you want to do.
This means I was able to move the variables out of the code as hard-coded values and into a separate config file that you can change pre-deployment with CI tools or the like, per environment. It also means you can exclude your config from your version control if needed.
The only downside is that the config has to be uploaded with the Lambda function when you deploy it, so it's available with the other Lambda assets at run time. AFAIK you can't write back to a config during runtime.
You can see in the project folder that I have a config.yml. I'm using the Node.js module node-yaml-config to load the config file into memory each time the lambda is instantiated. It doesn't require any network call either.
In the config file I have all the params I need:
# Handler runtime config set
default:
  sourceRssUri: http://www.sourcerss.com/rss.php?key=abcd1234
  bucket: myappbucket
  region: us-east-1
  dataKey: data/rssdata
  dataOutKey: data/rssdata
  rssKey: myrss.xml
I load the config in at runtime, and then can reference any config items in my code by the key name. I just so happen to be using it for s3 operations here, you can do whatever.
const yaml_config = require("node-yaml-config"); // loads YAML config into a plain object
const config = yaml_config.load(__dirname + "/config.yml"); // read once when the module is loaded

const aws = require("aws-sdk");
const bbpromise = require("bluebird");
const s3 = bbpromise.promisifyAll(new aws.S3({}));

var params = {
    Bucket: config.bucket,              // values come straight from config.yml
    Key: config.dataOutKey,
    Body: JSON.stringify(feed.entries), // 'feed' comes from elsewhere in my handler
    ContentType: "application/json"
};

s3.putObjectAsync(params).catch(SyntaxError, function(e) {
    console.log("Error: ", e);
}).catch(function(e) {
    console.log("Catch: ", e);
});
This makes it super easy to add new configuration for the lambda handler: anything I add to config.yml, such as myNewVariable, is immediately available to reference in the handler as config.myNewVariable without any extra work.
It allows the config to change per environment, or before each deployment. The config is then loaded before the handler and kept locally in memory for the duration of the lambda execution.
No, you can't. Lambda is stateless: you can't count on anything you read into memory in one invocation being available to the next invocation. You will need to store your config information somewhere and read it back in each time.

How to use AWS Lambda to check file in S3

Brand new to AWS Lambda so I'm not even sure if this is the right tool to accomplish what I'm trying to do.
Basically what I'm trying to do is check if a file exists or if it was updated recently in S3. If that file isn't there or wasn't updated recently I want an AMI to be cloned to an AWS instance.
Is the above possible?
Is Lambda the right tool for the job?
I'm fairly competent in JavaScript but have never used node.js or Python so writing a Lambda function seems complex to me.
Do you know of any resources that can help with building Lambda functions?
Thanks!
It will be easy if you know JavaScript and NPM. Let me show you an easy way with Node.js:
log in to your AWS account.
go to the AWS console menu, the button at the top right corner.
choose Lambda, click Functions and create a new function.
click the skip button on the blueprint page.
skip the configure triggers page.
you will see the configure function page, where you can fill in the function name, set the runtime to NodeJS 4.3, and choose the code entry type Edit code inline.
at the bottom of the Edit code inline box, you must choose an IAM role that you already have. If you don't have any IAM roles, go to the AWS console, choose Identity and Access Management (IAM), select Roles and create a new one.
Once you have filled in all the required fields, you can click next and create the Lambda function.
NOTE: in your Edit code inline box, write down this code:
exports.handler = function(event, context, callback) {
    var AWS = require('aws-sdk');
    // Hard-coding keys works, but in Lambda it is better to rely on the
    // function's IAM execution role instead of access keys.
    AWS.config.update({accessKeyId: 'xxxxxxxxxxx', secretAccessKey: 'xxxxxxxxxxxxxxxxxxxx'});

    var s3 = new AWS.S3();
    var params = {Bucket: 'myBucket', Key: 'myFile.html'};

    s3.getObject(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            // file does not exist, do something
        }
        else {
            console.log(data);
            // file exists, do something
        }
    });
};
You can get the accessKeyId from the IAM menu -> Users -> Security Credentials -> Create Access Key; you will get the secretAccessKey there too.
Hope this answer helps you.
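The snippet above only checks whether the file exists. To also cover the "updated recently" part of the question, a sketch along these lines could work. The bucket, key, AMI ID and the one-hour freshness threshold are all assumptions to adapt, and it relies on the Lambda execution role having s3:GetObject and ec2:RunInstances permissions.
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();   // uses the Lambda execution role's credentials
const ec2 = new AWS.EC2();

export const handler = async () => {
    const oneHourAgo = Date.now() - 60 * 60 * 1000; // assumed freshness threshold

    let isFresh = false;
    try {
        // headObject reads the object's metadata without downloading the file
        const head = await s3.headObject({ Bucket: 'myBucket', Key: 'myFile.html' }).promise();
        isFresh = !!head.LastModified && head.LastModified.getTime() >= oneHourAgo;
    } catch (err) {
        // a NotFound error means the object does not exist
        isFresh = false;
    }

    if (!isFresh) {
        // file is missing or stale: launch an instance from the AMI (hypothetical AMI ID and instance type)
        await ec2.runInstances({
            ImageId: 'ami-0123456789abcdef0',
            InstanceType: 't3.micro',
            MinCount: 1,
            MaxCount: 1,
        }).promise();
    }
};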