I had previously created a Jenkins build provider using the CodePipeline console. During creation, it asks for a Jenkins server URL.
Now I need to change my Jenkins server URL, but when I try to edit the pipeline, there isn't any option to change the build provider (screenshot omitted).
The only solution I see is to add a new one.
I tried fetching the pipeline definition with the AWS CLI:
aws codepipeline get-pipeline --name <pipeline-name>
But the JSON response only contains a reference to the build provider:
...
    },
    {
      "name": "Build",
      "actions": [
        {
          "inputArtifacts": [
            {
              "name": "APIServer"
            }
          ],
          "name": "Build",
          "actionTypeId": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "jenkins-api-server"
          },
          "outputArtifacts": [
            {
              "name": "APIServerTarball"
            }
          ],
          "configuration": {
            "ProjectName": "api-server-build"
          },
          "runOrder": 1
        }
      ]
    },
    {
...
I couldn't find any other command to manage the build provider either. So my question is: where and how should I update an existing build provider's configuration in AWS CodePipeline?
The Jenkins action is actually defined as a custom action in your account. If you want to update the action configuration, you can define a new version using the CreateCustomActionType API. Your changes become a new version of the action type, so you then update the actionTypeId in your pipeline to point to the new version.
Once you're done, you can also delete the old version so it no longer appears in the action list.
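For example, a rough sketch with the AWS CLI (the URL templates, artifact counts, and configuration properties below are placeholders; copy the real values from your existing action):

# Register version 2 of the custom Jenkins action.
aws codepipeline create-custom-action-type \
  --category Build \
  --provider jenkins-api-server \
  --version 2 \
  --settings entityUrlTemplate=https://jenkins.example.com/job/{Config:ProjectName},executionUrlTemplate=https://jenkins.example.com/job/{Config:ProjectName}/{ExternalExecutionId} \
  --configuration-properties name=ProjectName,required=true,key=true,secret=false,type=String \
  --input-artifact-details minimumCount=0,maximumCount=1 \
  --output-artifact-details minimumCount=0,maximumCount=1

# Edit "version": "1" to "2" in the actionTypeId of your saved pipeline
# definition, then push it.
aws codepipeline update-pipeline --cli-input-json file://pipeline.json

# After the pipeline is migrated, retire the old version.
aws codepipeline delete-custom-action-type \
  --category Build --provider jenkins-api-server --version 1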
Regarding the Jenkins URL changing, one solution is to set up a DNS record (e.g. via Route 53) pointing to your Jenkins instance and use that DNS hostname in your action configuration. That way you can remap the DNS record in the future without updating your pipeline.
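A sketch of that DNS record with the AWS CLI (the hosted zone ID, hostname, and IP here are all hypothetical):

# UPSERT a stable hostname for Jenkins; remap it later without touching
# the pipeline or the custom action.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "jenkins.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'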
We have an AWS Amplify project, and I am in the process of migrating its API from Transformer 1 to 2.
As part of this, we have a number of custom resolvers that previously had their own stack JSON template in the stacks/ folder as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which lets me create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the stack JSON into the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {},
  "Parameters": {
    "AppSyncApiId": {
      "Type": "String",
      "Description": "The id of the AppSync API associated with this project."
    },
    "S3DeploymentBucket": {
      "Type": "String",
      "Description": "The S3 bucket containing all deployment assets for the project."
    },
    "S3DeploymentRootKey": {
      "Type": "String",
      "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
    }
  },
  ...
}
So these are standard parameters that were used previously, and my challenge now is accessing the deployment bucket and root key, as these values are generated at deployment time.
The exact use case is the AppSync function configuration, where I attempt to reference the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is:
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values through to the parameters in the JSON, but I can't find any documentation suggesting how to do this with the updated Amplify CLI options.
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!
I am trying to set up a resource pipeline where I want to deploy all my resources using CloudFormation. I have a separate pipeline to deploy code.
I am using the CloudFormation template below:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": "devanimalhubstorage"
      }
    },
    "HelloLambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "HelloLambdaRole",
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      }
    },
    "AnimalhubLambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": "AnimalhubLambdaFunction",
        "Role": {
          "Fn::GetAtt": ["HelloLambdaRole", "Arn"]
        },
        "Code": {},
        "Runtime": "dotnetcore2.1",
        "Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
      }
    }
  }
}
Problem:
Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications and will have a lot of examples to help you with your use case.
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would override any code that you have deployed to the Lambda function using your other pipeline.
In a serverless application like this, you could split things into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack and deploy it either manually or in a pipeline, then deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs etc.) needed from the more stable resources you can inject as parameters into the template, or use CloudFormation's Fn::ImportValue function. I'd personally recommend the parameter, since it is more flexible for future changes; a sketch of both options follows.
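For illustration only, a minimal sketch of that split (the output and export names are hypothetical). The stable stack would export the bucket name via an output:

"Outputs": {
  "AnimalhubBucketName": {
    "Value": { "Ref": "S3Bucket" },
    "Export": { "Name": "animalhub-bucket-name" }
  }
}

The code stack can then either declare its own BucketName parameter and have the pipeline pass the value in, or import the export directly wherever the bucket name is needed:

"BucketName": { "Fn::ImportValue": "animalhub-bucket-name" }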
I have been trying to use the update functionality of the AWS CLI to update CodePipeline with the following command:
aws codepipeline update-pipeline --cli-input-json file://Pipelines/AWS/SomeName.json
And I keep getting the following error:
Unknown parameter in pipeline.stages[0].actions[0]: "region", must be one of: name, actionTypeId, runOrder, configuration, outputArtifacts, inputArtifacts, roleArn
I have checked the AWS documentation and I don't think there's anything wrong with the way actions is set up. Here is the snippet from the JSON:
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "S3",
"version": "1"
},
"runOrder": 1,
"configuration": {
"PollForSourceChanges": "false",
"S3Bucket": "some-bucket-name",
"S3ObjectKey": "someEnvironment/someZip.zip"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "eu-west-1"
},...
]
According to the documentation provided at https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html, everything seems to be correct. Removing the region parameter updates the pipeline correctly, but I am unsure what consequences that could have on the update itself.
Any help is appreciated.
Cheers
Sky
If you try to create the pipeline through the AWS console and choose S3 as a source, you will notice the region option is not available (screenshot omitted). I would say this is a current limitation of the service more than a chosen design, and a gap in the documentation (however, I am happy to be proven wrong).
However, you could try including the full S3 bucket ARN, which would include the region. Or take comfort in the fact that any action deployed without a region specified defaults to the same region the pipeline is in, as per the AWS documentation.
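If you would rather strip the field than hand-edit the file, here is a small sketch using jq (this assumes your JSON has the top-level "pipeline" key that update-pipeline expects):

# Remove the unsupported "region" key from every action, then re-run the update.
jq 'del(.pipeline.stages[].actions[].region)' Pipelines/AWS/SomeName.json > SomeName-fixed.json
aws codepipeline update-pipeline --cli-input-json file://SomeName-fixed.json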
I need to set a custom environment variable in EMR that is available when running a Spark application.
I have tried adding this:
...
--configurations '[
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Configurations": [],
        "Properties": { "SOME-ENV-VAR": "qa1" }
      }
    ],
    "Properties": {}
  }
]'
...
and also tried replacing "spark-env" with "hadoop-env",
but nothing seems to work.
There is this answer from the AWS forums, but I can't figure out how to apply it.
I'm running EMR 5.3.1 and launching it with a preconfigured step from the CLI: aws emr create-cluster...
Add the custom configuration, like the JSON below, to a file, say custom_config.json:
[
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "VARIABLE_NAME": "VARIABLE_VALUE"
        }
      }
    ]
  }
]
Then, when creating the EMR cluster, pass the file reference to the --configurations option:
aws emr create-cluster --configurations file://custom_config.json --other-options...
For me, replacing spark-env with yarn-env fixed the issue.
Use the classification yarn-env to pass environment variables to the worker nodes.
Use the classification spark-env to pass environment variables to the driver when using deploy mode client. When using deploy mode cluster, use yarn-env.
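For reference, a minimal sketch of the yarn-env variant (the variable name and value are placeholders):

[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "SOME_ENV_VAR": "qa1"
        }
      }
    ]
  }
]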
I would like to create a simple single-instance environment with AWS Elastic Beanstalk. I am able to do this from the AWS console, but when I try to do it from the CLI, it creates a load balancer for me, seemingly regardless of what I put in my option_settings.
Here is the config file I've placed in my .ebextensions folder:
{
  "option_settings": [
    {
      "namespace": "aws:autoscaling:launchconfiguration",
      "option_name": "InstanceType",
      "value": "t2.micro"
    },
    {
      "namespace": "aws:elasticbeanstalk:environment",
      "option_name": "EnvironmentType",
      "value": "SingleInstance"
    },
    {
      "namespace": "aws:autoscaling:launchconfiguration",
      "option_name": "SecurityGroups",
      "value": "sg-XXXXXXX"
    },
    {
      "namespace": "aws:autoscaling:launchconfiguration",
      "option_name": "EC2KeyName",
      "value": "XXXXXXXX"
    },
    {
      "namespace": "aws:ec2:vpc",
      "option_name": "VPCId",
      "value": "vpc-XXXXXX"
    },
    {
      "namespace": "aws:ec2:vpc",
      "option_name": "Subnets",
      "value": "subnet-XXXXXXX"
    },
    {
      "namespace": "aws:autoscaling:asg",
      "option_name": "MinSize",
      "value": 1
    },
    {
      "namespace": "aws:autoscaling:asg",
      "option_name": "MaxSize",
      "value": 1
    }
  ],
  "packages": {
    "yum": {
      "postgresql94-devel": [],
      "git": []
    }
  }
}
I see the load balancer listed in the "Network Tier" section of my EB environment configuration dashboard, and I've verified that it was created in the EC2 section of the AWS console.
How can I launch a Beanstalk environment from the CLI without a load balancer? Any help would be greatly appreciated.
Which CLI are you using, the aws cli or the eb cli? I am guessing the eb cli.
At least the EB CLI and the AWS Management Console pass some option settings by default in the API parameters, in addition to the option settings you specify in your ebextensions. In the web console you get a dropdown to select a load-balanced or single-instance environment. In the EB CLI, I think you get a prompt to select a load-balanced environment; you can also pass the --single option to eb create. If you don't specify anything, it assumes the default, which is load balanced. So even though you specified the option setting in the ebextension, there is an option setting being passed in the API parameter, and Elastic Beanstalk gives preference to the value set via the API over the value in the ebextension.
As this documentation says:
The EB command line interface (CLI) and Elastic Beanstalk console provide recommended values for some configuration options. These values can be different from the default values and are set at the API level when your environment is created. Recommended values allow Elastic Beanstalk to improve the default environment configuration without making backwards incompatible changes to the API.
For example, both the EB CLI and Elastic Beanstalk console set the configuration option for EC2 instance type (InstanceType in the aws:autoscaling:launchconfiguration namespace). Each client provides a different way of overriding the default setting. In the console you can choose a different instance type from a drop down menu on the Configuration Details page of the Create New Environment wizard. With the EB CLI, you can use the --instance_type parameter for eb create.
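Putting that together, something like this should create a single-instance environment (a sketch; the environment name is made up, and --instance_type is the flag named in the quoted documentation):

# Explicitly request a single-instance environment so the EB CLI does not
# pass its load-balanced default at the API level.
eb create my-single-env --single --instance_type t2.micro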