I have been trying to use the update functionality of the AWS CLI to update a CodePipeline pipeline with the following command:
aws codepipeline update-pipeline --cli-input-json file://Pipelines/AWS/SomeName.json
And I keep getting the following error:
Unknown parameter in pipeline.stages[0].actions[0]: "region", must be one of: name, actionTypeId, runOrder, configuration, outputArtifacts, inputArtifacts, roleArn
I have checked the AWS documentation and I don't think there's anything wrong with the way the actions are set up. Here is the snippet from the JSON:
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "S3",
"version": "1"
},
"runOrder": 1,
"configuration": {
"PollForSourceChanges": "false",
"S3Bucket": "some-bucket-name",
"S3ObjectKey": "someEnvironment/someZip.zip"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "eu-west-1"
},...
]
According to the documentation provided at https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html
everything seems to be correct. Removing the region parameter lets the pipeline update correctly, but I am unsure what consequences that could have on the pipeline itself.
Any help is appreciated.
Cheers
Sky
If you try to create the pipeline through the AWS console and choose S3 as a source, you will notice the region option is not available (as shown in the screenshot below). I would say this is a current limitation of the service rather than a deliberate design choice, and a gap in the documentation (however, happy to be proven wrong).
However, you could try including the full S3 bucket ARN, which would include the region. Or take comfort in the fact that any action deployed without a region specified defaults to the same region the pipeline is in, as per the AWS documentation.
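If you want to strip the unsupported key programmatically before updating, here is a minimal boto3 sketch (assuming the pipeline is named SomeName, matching the JSON file in the question): it fetches the pipeline, drops "region" from every action, and pushes the definition back.

import boto3

codepipeline = boto3.client("codepipeline")

# Fetch the current definition, remove the unsupported "region" key from
# every action, and push the cleaned definition back.
pipeline = codepipeline.get_pipeline(name="SomeName")["pipeline"]
for stage in pipeline["stages"]:
    for action in stage["actions"]:
        action.pop("region", None)

codepipeline.update_pipeline(pipeline=pipeline)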
Related
I am trying to set up a resource pipeline, where I want to deploy all my resources using CloudFormation. I have a separate pipeline to deploy code.
Using the CloudFormation template below:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": "devanimalhubstorage"
}
},
"HelloLambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "HelloLambdaRole",
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
},
"AnimalhubLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": "AnimalhubLambdaFunction",
"Role": {
"Fn::GetAtt": ["HelloLambdaRole","Arn"]
},
"Code": {},
"Runtime": "dotnetcore2.1",
"Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
}
}
}
}
Problem:
Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications and will have a lot of examples to help you with your use case.
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would override any other code that you have deployed to the Lambda function using your other pipeline.
In a serverless application like this, you could split things into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack, and deploy it either manually or in a pipeline. Then deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs etc.) needed from the more stable resources can be injected as a parameter in the template, or pulled in with CloudFormation's ImportValue function. I'd personally recommend using the parameter, since it is more flexible for future changes.
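As an illustration of the parameter approach, here is a rough boto3 sketch; the stack names, output key, parameter name and template file are hypothetical. It reads an output from the "stable" infrastructure stack and injects it into the stack that deploys the Lambda code.

import boto3

cloudformation = boto3.client("cloudformation")

# Read an output (e.g. the bucket name) from the "stable" infrastructure stack.
outputs = cloudformation.describe_stacks(StackName="animalhub-infra")["Stacks"][0]["Outputs"]
bucket_name = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "BucketName")

# Deploy the code stack, passing the value in as a template parameter.
with open("lambda-stack.json") as f:
    template_body = f.read()

cloudformation.update_stack(
    StackName="animalhub-lambda",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "CodeBucket", "ParameterValue": bucket_name}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)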
I have set up a replication rule on my S3 bucket to populate a preprod bucket for testing purposes. This means I will want to be able to turn the replication on and off easily, and likely dump and refresh the replication bucket as necessary. I'm creating a script for this but am having a hard time finding a way to easily turn the replication rule on and off outside of using the AWS Console.
Is there an option beyond put-bucket-replication? That works but is basically restating the whole replication config each time, instead of just enabling or disabling the existing one.
It looks like the only solution is to pass a full put-bucket-replication configuration each time, with the Status set to Enabled or Disabled. Example of disabling below, using Python and boto3:
import boto3
client = boto3.client('s3')
## Disable the rule (set "Status": "Enabled" below to turn replication back on)
client.put_bucket_replication(Bucket='yoursourcebucketname', ReplicationConfiguration={
"Role": "arn:aws:iam::999999999:role/service-role/yourrolename",
"Rules": [
{
"Status": "Disabled",
"Priority": 1,
"DeleteMarkerReplication": { "Status": "Disabled" },
"Filter" : { "Prefix": ""},
"Destination": {
"Bucket": "arn:aws:s3:::yourlandingbucket",
"Account": "838382828"
}
}
]
}
)
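If you would rather not restate the whole configuration in your script, a small helper can read the existing configuration, flip the Status, and write it back (still a put-bucket-replication under the hood; the bucket name and status values below are just examples):

import boto3

client = boto3.client('s3')

def set_replication_status(bucket, status):
    # Fetch the existing replication configuration, flip the Status of
    # every rule, and write the whole configuration back unchanged otherwise.
    config = client.get_bucket_replication(Bucket=bucket)['ReplicationConfiguration']
    for rule in config['Rules']:
        rule['Status'] = status
    client.put_bucket_replication(Bucket=bucket, ReplicationConfiguration=config)

set_replication_status('yoursourcebucketname', 'Disabled')
# set_replication_status('yoursourcebucketname', 'Enabled')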
I am trying to set up CI/CD with AWS CodePipeline and now I am stuck on the pipeline autostart.
It looks like CloudWatch does not detect the ECR events and so does not start the pipeline.
The target and role are configured correctly, but in Access Advisor for the role I don't see any invocations.
The region is us-west-2.
Here is the event pattern that I use:
{
"detail": {
"eventName": [
"PutImage"
],
"requestParameters": {
"imageTag": [
"service.develop.latest"
],
"repositoryName": [
"repository"
]
}
},
"source": [
"aws.ecr"
]
}
I can see the PutImage events in CloudTrail, but this rule does not work. Any help appreciated, thanks.
Well, magically this thing started to work.
Looks like an AWS bug, as I did not fix anything myself.
I have a series of tasks defined in ECS that run on a recurring schedule. I recently made a minor change to update my task definition in Terraform to change default environment variables for my container (from DEBUG to PRODUCTION):
"environment": [
{"name": "ENVIRONMENT", "value": "PRODUCTION"}
]
I had this task running using the Scheduled Tasks feature of Fargate, setting it at a rate of every 4 hours. However, after updating my task definition, I began to see that the tasks were not being triggered by CloudWatch, since my last container log was from several days ago.
I dug deeper into the issue using CloudTrail, and noticed one particular part of the entry for a RunTask event:
"eventTime": "2018-12-10T17:26:46Z",
"eventSource": "ecs.amazonaws.com",
"eventName": "RunTask",
"awsRegion": "us-east-1",
"sourceIPAddress": "events.amazonaws.com",
"userAgent": "events.amazonaws.com",
"errorCode": "InvalidParameterException",
"errorMessage": "TaskDefinition is inactive",
Further down in the log, I noticed that the task definition ECS was attempting to run was
"taskDefinition": "arn:aws:ecs:us-east-1:XXXXX:task-definition/important-task-name:2",
However, in my ECS task definitions, the latest version of important-task-name was 3. So it looks like the events are not triggering because I am using an "inactive" version of my task definition.
Is there any way for me to schedule tasks in AWS Fargate without having to manually go through the console and stop/restart/update each cluster's scheduled update? Isn't there any way to simply ask CloudWatch to pull the latest active task definition?
You can use CloudWatch Events rules to control scheduled tasks, and whenever you update a task definition you can also update your rule. Say you have two files:
myRule.json
{
"Name": "run-every-minute",
"ScheduleExpression": "cron(0/1 * * * ? *)",
"State": "ENABLED",
"Description": "a task that will run every minute",
"RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
"EventBusName": "default"
}
myTargets.json
{
  "Rule": "run-every-minute",
  "Targets": [
    {
      "Id": "scheduled-task-example",
      "Arn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:cluster/mycluster",
      "RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
      "Input": "{\"containerOverrides\":[{\"name\":\"myTask\",\"environment\":[{\"name\":\"ENVIRONMENT\",\"value\":\"production\"},{\"name\":\"foo\",\"value\":\"bar\"}]}]}",
      "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:task-definition/myTaskDefinition",
        "TaskCount": 1,
        "LaunchType": "FARGATE",
        "NetworkConfiguration": {
          "awsvpcConfiguration": {
            "Subnets": [
              "subnet-xyz1",
              "subnet-xyz2"
            ],
            "SecurityGroups": [
              "sg-xyz"
            ],
            "AssignPublicIp": "ENABLED"
          }
        },
        "PlatformVersion": "LATEST"
      }
    }
  ]
}
Now, whenever there's a new revision of myTaskDefinition you may update your rule, e.g.:
aws events put-rule --cli-input-json file://myRule.json --region $REGION
aws events put-targets --cli-input-json file://myTargets.json --region $REGION
echo 'done'
But of course, replace IAM_NUMBER and REGION with your own account number and region.
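Also note that the TaskDefinitionArn in myTargets.json has no revision suffix, so ECS should resolve it to the latest ACTIVE revision when the rule fires. If you prefer to pin the target to an explicit revision on every deploy instead, a rough boto3 sketch of the same idea (rule name and task definition family taken from the files above) could look like this:

import boto3

ecs = boto3.client("ecs")
events = boto3.client("events")

# Resolve the latest ACTIVE revision of the task definition family.
latest_arn = ecs.describe_task_definition(
    taskDefinition="myTaskDefinition"
)["taskDefinition"]["taskDefinitionArn"]

# Point the rule's existing targets at that revision and write them back.
targets = events.list_targets_by_rule(Rule="run-every-minute")["Targets"]
for target in targets:
    if "EcsParameters" in target:
        target["EcsParameters"]["TaskDefinitionArn"] = latest_arn

events.put_targets(Rule="run-every-minute", Targets=targets)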
Cloud Map seems like a solution for these types of problems.
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-fargate-and-amazon-ecs-now-integrate-with-aws-cloud-map/
I had previously created a Jenkins build provider using CodePipeline console. During creation, it asks for a Jenkins server URL.
Now, I need to change my Jenkins server URL, but when I try to edit, there isn't any option to change the build provider. See snapshot below:
The only solution I see is to add a new one.
I tried to get the pipeline using the AWS CLI:
aws codepipeline get-pipeline --name <pipeline-name>
But the JSON response just has a reference to the build provider:
...
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "APIServer"
}
],
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "Custom",
"version": "1",
"provider": "jenkins-api-server"
},
"outputArtifacts": [
{
"name": "APIServerTarball"
}
],
"configuration": {
"ProjectName": "api-server-build"
},
"runOrder": 1
}
]
},
{
I couldn't find any other command to manage the build provider either. So my question is: where and how should I update the existing build provider's configuration in AWS CodePipeline?
The Jenkins action is actually defined as a custom action in your account. If you want to update the action configuration, you can define a new version of it using the create-custom-action-type API. Your changes will be a new "version" of the action type, so you then update the actionTypeId in your pipeline to point to your new version.
Once you're done, you can also delete the old version to prevent it from appearing in the action list.
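For example, here is a rough boto3 sketch of registering a version 2 of the Jenkins action and then removing version 1. The URLs and configuration properties below are placeholders; mirror whatever your existing action type defines.

import boto3

codepipeline = boto3.client("codepipeline")

# Register a new version of the custom Jenkins action pointing at the new server URL.
codepipeline.create_custom_action_type(
    category="Build",
    provider="jenkins-api-server",
    version="2",
    settings={
        "entityUrlTemplate": "https://new-jenkins.example.com/job/{Config:ProjectName}",
        "executionUrlTemplate": "https://new-jenkins.example.com/job/{Config:ProjectName}/{ExternalExecutionId}",
    },
    configurationProperties=[
        {"name": "ProjectName", "required": True, "key": True, "secret": False, "type": "String"}
    ],
    inputArtifactDetails={"minimumCount": 0, "maximumCount": 5},
    outputArtifactDetails={"minimumCount": 0, "maximumCount": 5},
)

# Once the pipeline's actionTypeId points at version "2", the old version can be removed.
codepipeline.delete_custom_action_type(category="Build", provider="jenkins-api-server", version="1")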
Regarding the Jenkins URL changing, one solution is to set up a DNS record (e.g. via Route 53) pointing to your Jenkins instance and use the DNS hostname in your action configuration. That way you can remap the DNS record in the future without updating your pipeline.