I am having a weird issue with CodePipeline + CodeDeploy. We have checked the AWS forums and Stack Overflow, but no one has had this particular issue, and the suggestions from similar issues have already been taken into account; nothing has helped.
The issue in particular is the following: we have a CodePipeline, and "randomly" we get the error:
(x) An AppSpec file is required, but could not be found in the revision
But the required file is in the revision; we have checked dozens of times, and the file is there with the same name and format as in the runs that complete without problems.
This is happening in the same deployment group, with the same configuration, so it is not a misconfigured group, because most of the time it works without issues.
Just to be sure, I add both .yml and .yaml versions to the revision. And the appspec is as simple as this:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:xxxxxxxx:task-definition/my_app_cd:258"
        LoadBalancerInfo:
          ContainerName: "nginx_main"
          ContainerPort: 80
        PlatformVersion: null
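A way to double-check the exact revision bundle the Deploy stage received (rather than the source artifact we think it received) is to pull the artifact from the pipeline's artifact store and list it. A rough sketch, where the artifact bucket and key are placeholders read from the failed execution's details:
aws s3 cp s3://<artifact-bucket>/<artifact-key> revision.zip
unzip -l revision.zip    # the appspec.yml / appspec.yaml should appear at the root of the bundle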
I suspect the above error is related to a wrong configuration of your CodePipeline. To perform ECS CodeDeploy deployments, the provider in your pipeline's deploy stage must be "ECS (Blue/Green)", not "CodeDeploy" (the "CodeDeploy" provider is used for EC2 deployments).
Even though in the back end it uses CodeDeploy, the name of the provider is "ECS (Blue/Green)".
The pipeline configuration can be checked with:
$ aws codepipeline get-pipeline --name <pipeline_name>
{
    "name": "Deploy",
    "blockers": null,
    "actions": [
        {
            "name": "Deploy",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "CodeDeploy",    <===== should be "CodeDeployToECS"
                "version": "1"
            },
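If the provider is wrong, one way to fix it is to export the pipeline definition, edit the provider, and push it back. A rough sketch using the standard get-pipeline / update-pipeline workflow (the pipeline name is a placeholder):
aws codepipeline get-pipeline --name <pipeline_name> > pipeline.json
# Edit pipeline.json: remove the "metadata" block and change the deploy action's
# "provider" from "CodeDeploy" to "CodeDeployToECS".
aws codepipeline update-pipeline --cli-input-json file://pipeline.json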
I am implementing a blue/green deployment using the aws-code-deploy orb. My infrastructure is built with Terraform and consists of the following resources:
S3 bucket → stores the appspec.yml which is used to create the deployment.
VPC for networking (it was easier to spin up my own for this demo; too lazy to navigate the Legitscript networking, lol).
An Application Load Balancer, 2 listeners and 2 target groups. On initial deployment of the infrastructure, go to EC2 → Target groups and you will see that TG1 has a healthy target associated with it but TG2 does not. That will change once we implement the blue/green deployment.
ECS → A cluster, service and task definition will be available.
CodeDeploy → CodeDeploy application and deployment group.
This is my Terraform file for the S3 resources:
resource "aws_s3_bucket" "bucket" {
bucket = "blue-green-cd-ls"
}
resource "aws_s3_object" "appspec" {
bucket = aws_s3_bucket.bucket.id
key = "appspec.yaml"
content = templatefile("${path.module}/appspec.yaml.tpl", {
task_definition_arn = var.task_definition_arn
})
}
This successfully creates the S3 bucket with the appspec.yaml file in it. I am trying to create a deployment using CircleCI, and my config.yml looks like this:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1.3
  aws-code-deploy: circleci/aws-code-deploy@2.0.0
jobs:
  deploy:
    executor: aws-cli/default
    steps:
      - checkout
      - aws-cli/setup
      - aws-code-deploy/deploy-bundle:
          application-name: "blue-green"
          bundle-bucket: "blue-green-cd-ls"
          bundle-key: "appspec.yaml"
          deployment-group: "blue-green-ls"
          bundle-type: "YAML"
          deployment-config: "CodeDeployDefault.ECSAllAtOnce"
workflows:
  build-and-deploy:
    jobs:
      - deploy
But my deployment keeps failing with the following error:
Deployment failed!
{
    "deploymentInfo": {
        "applicationName": "blue-green",
        "deploymentGroupName": "*************",
        "deploymentConfigName": "CodeDeployDefault.ECSAllAtOnce",
        "deploymentId": "d-85LKXCPMJ",
        "revision": {
            "revisionType": "S3",
            "s3Location": {
                "bucket": "blue-green-cd-ls",
                "key": "appspec.yaml.YAML",
                "bundleType": "YAML"
            }
        },
        "status": "Failed",
        "errorInformation": {
            "code": "INVALID_REVISION",
            "message": "The AppSpec file cannot be located in the specified S3 bucket. Verify your AppSpec file is present and that the name and key value pair specified for your S3 bucket are correct. The S3 bucket must be in your current region"
I double checked and the S3 bucket is definitely in the right region, i.e. us-east-1. Does anyone have any ideas about what might be wrong? Thank you.
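One thing worth noting: the revision in the error output above refers to the key appspec.yaml.YAML, while the object was uploaded as appspec.yaml. To double-check what is actually in the bucket and which region it lives in, a quick sketch (assuming the AWS CLI is configured with the same account and credentials):
aws s3api head-object --bucket blue-green-cd-ls --key appspec.yaml
aws s3api get-bucket-location --bucket blue-green-cd-ls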
I am following the instructions at https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/module-four/
aws apigateway create-deployment --rest-api-id a2kpkzqme1 --stage-name prod
An error occurred (BadRequestException) when calling the CreateDeployment operation: The Vpc link is not yet available for deployment
I had the same issue, so I checked the creation status of the VPC link:
aws apigateway get-vpc-link --vpc-link-id MY_VPC_LINK_ID
It showed that the creation had failed:
{
    "id": "xxxxxx",
    "name": "MysfitsApiVpcLink",
    "targetArns": [
        "arn:aws:elasticloadbalancing:MY_REGION:MY_ID:loadbalancer/net/mysfits-nlb"
    ],
    "status": "FAILED",
    "statusMessage": "NLB ARN is malformed",
    "tags": {}
}
So my mistake was a wrong NLB ARN; I had left off part of it at the end. I repeated all the steps from the creation of the VPC link (with the correct NLB ARN), and then it worked for me.
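In case it helps, a rough sketch of how the link can be re-created with the full ARN looked up from the CLI (this assumes the NLB is named mysfits-nlb, as in the tutorial):
# Look up the complete ARN of the NLB instead of typing it by hand.
NLB_ARN=$(aws elbv2 describe-load-balancers --names mysfits-nlb --query 'LoadBalancers[0].LoadBalancerArn' --output text)
# Re-create the VPC link with the full ARN, then poll it until its status is AVAILABLE.
aws apigateway create-vpc-link --name MysfitsApiVpcLink --target-arns "$NLB_ARN"
aws apigateway get-vpc-link --vpc-link-id <new_vpc_link_id>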
When building a Docker image with CodeBuild, one specifies an imagedefinitions.json file. As I've understood it, it is used to specify the container definition of a task definition.
However, I would like to include environment variables such as in this definition:
{
    "containerDefinitions": [{
        "secrets": [{
            "name": "environment_variable_name",
            "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
        }]
    }]
}
How can I update the imagedefinitions.json file to include this information?
You cannot include any parameter other than the container name and image URI, as stated in the CodePipeline documentation.
The imagedefinitions.json file is used by CodePipeline in ECS deploy stages to pass both fields to ECS for task revisioning and the eventual deployment.
containerDefinitions, on the other hand, is a property within a task definition; that one is covered in the CloudFormation documentation, just for you to check.
Hope it helps.
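For reference, the file only carries those two fields. In a CodeBuild buildspec it is commonly generated with something like the sketch below (the container name and the REPOSITORY_URI / IMAGE_TAG variables are illustrative placeholders, not taken from your setup):
# Write an imagedefinitions.json containing only the two supported fields: name and imageUri.
printf '[{"name":"my-container","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
The secrets block from your question would instead live in the containerDefinitions of the task definition itself.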
I created an AWS CodePipeline pipeline to pull from GitHub, build with Jenkins, and deploy to an Elastic Beanstalk project. I can deploy the WAR to Beanstalk directly and validate it.
When I try to do the same from CodePipeline, I see the below error in the AWS CodePipeline Polling Log of Jenkins:
ERROR: Failed to record SCM polling for hudson.model.FreeStyleProject@ae44565e6[AppPortal]
com.amazonaws.services.codepipeline.model.ActionTypeNotFoundException: ActionType (Category: 'Build', Owner: 'Custom', Provider: 'MPiplelineProvider', Version: '1') is not available (Service: AWSCodePipeline; Status Code: 400; Error Code: ActionTypeNotFoundException; Request ID: e35456561d-999f-56e7-3rgf-75985675533b3)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1401)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:945)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:723)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:475)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:437)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:386)
at com.amazonaws.services.codepipeline.AWSCodePipelineClient.doInvoke(AWSCodePipelineClient.java:2078)
I have set the SCM poll to * * * * * for testing purposes.
Post-build Actions - AWS CodePipeline Publisher - Location - target/AppPortal
I installed only the AWS CodePipeline plugin in Jenkins.
Can you let me know what I'm missing?
Thanks
Did you register the Jenkins custom action type in CodePipeline, in the same region you're polling?
Check your Jenkins job configuration for:
AWS Region
Category
Provider
Version
From your error message:
ActionType (Category: 'Build', Owner: 'Custom', Provider: 'MPiplelineProvider', Version: '1')
Then use the AWS CLI to list your custom action types, in that region, and make sure the Category, Provider, and Version match:
aws codepipeline list-action-types --action-owner-filter Custom --region us-west-2
If you created the Jenkins action type through the AWS Console, it should have these values:
ActionType (Category: 'Build', Owner: 'Custom', Provider: 'Jenkins', Version: '1')
If that's the case, updating your Jenkins job Provider from MPiplelineProvider to Jenkins should fix your problem.
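To compare field by field, the listing can be narrowed down to just the action type IDs, for example (the region is assumed to match the one your pipeline lives in):
aws codepipeline list-action-types --action-owner-filter Custom --region us-west-2 --query "actionTypes[?id.category=='Build'].id"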
In our scenario:
Trigger: moving the Jenkins master (EC2) behind a load balancer.
Symptom: we started getting the same error (as above) after updating all the security group settings so that the load balancer does not get in the way.
Resolution:
On the Jenkins (EC2) box, we deleted the "project" and re-created it with the exact same settings (including the name) as before. This allowed Jenkins to reconnect with CodePipeline, and the job started working again.
Here are the CodePipeline stage action settings:
{
    "inputArtifacts": [],
    "name": "foobar-test",
    "region": "us-west-2",
    "actionTypeId": {
        "category": "Test",
        "owner": "Custom",
        "version": "1",
        "provider": "foobar-provider"
    },
    "outputArtifacts": [],
    "configuration": {
        "ProjectName": "foobar-api-qa-aws_trigger"
    },
    "runOrder": 1
}
Kubernetes docs say using AWS ECR is supported, but it's not working for me. My nodes have an EC2 instance role associated with all the correct permissions, but kubectl run debug1 -i --tty --restart=Never --image=672129611065.dkr.ecr.us-west-2.amazonaws.com/debug:v2 results in failed to "StartContainer" for "debug1" with ErrImagePull: "Authentication is required."
Details
The instances all have a role associated with them, and that role has this policy attached:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ecr:GetAuthorizationToken",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:GetRepositoryPolicy",
            "ecr:DescribeRepositories",
            "ecr:ListImages",
            "ecr:BatchGetImage"
        ],
        "Resource": "*"
    }]
}
And the kubelet logs look like:
Apr 18 19:02:12 ip-10-0-170-46 kubelet[948]: I0418 19:02:12.004611 948 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 18 19:02:12 ip-10-0-170-46 kubelet[948]: E0418 19:02:12.112142 948 pod_workers.go:138] Error syncing pod b21c2ba6-0593-11e6-9ec1-065c82331f7b, skipping: failed to "StartContainer" for "debug1" with ErrImagePull: "Authentication is required."
Apr 18 19:02:27 ip-10-0-170-46 kubelet[948]: E0418 19:02:27.006329 948 pod_workers.go:138] Error syncing pod b21c2ba6-0593-11e6-9ec1-065c82331f7b, skipping: failed to "StartContainer" for "debug1" with ImagePullBackOff: "Back-off pulling image \"672129611065.dkr.ecr.us-west-2.amazonaws.com/debug:v2\""
From those logs, I suspect one of three things:
1. You haven't passed the --cloud-provider=aws arg to the kubelet.
2. The correct IAM permissions weren't in place when your kubelet started up. If this is the case, a simple bounce of the kubelet daemon should work for you (see the sketch after this list).
3. You're on a k8s version < 1.2, although this one seems unlikely given the date of your question.
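A quick way to check points 1 and 2 on a node, sketched under the assumption that the kubelet runs as a systemd unit named kubelet:
ps aux | grep '[k]ubelet'        # look for --cloud-provider=aws among the kubelet arguments
sudo systemctl restart kubelet   # bounce the kubelet if the IAM permissions were added after it started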
I think you also need an image pull secret configured for ECR images. You can refer to the links below for details.
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html
https://github.com/kubernetes/kubernetes/issues/499
1) Retrieve the docker login command that you can use to authenticate your Docker client to your registry:
aws ecr get-login --region us-east-1
2) Run the docker login command that was returned in the previous step.
3) The Docker login credentials are saved to /root/.dockercfg.
4) Encode the Docker config file:
echo $(cat /root/.dockercfg) | base64 -w 0
5) Copy and paste the result into a Secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: aws-key
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
6) Use this aws-key secret to pull the image:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: aws-key
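To wire it together, something like the following (the file names are illustrative, and the awesomeapps namespace from the example is assumed to exist):
kubectl create -f aws-key-secret.yaml          # the Secret from step 5
kubectl create -f pod.yaml                     # the Pod from step 6
kubectl --namespace=awesomeapps get pod foo    # confirm the image pulls successfully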
Generally, if you change permissions on an InstanceProfile they take effect immediately. However, there must be some kind of setup phase for the kubelet that requires the permissions to already be set. I completely bounced my CloudFormation stack so that the nodes booted with the new permissions active, and that did the trick. I can now use ECR images without issue.