Terraform issue in gitlab - amazon-web-services

I have an application whose build is configured in GitLab and uses Terraform, and the software is finally deployed to AWS.
I see the following error during deployment:
null_resource.server_canary_bouncer (local-exec): Executing: ["/bin/sh" "-c" "./bouncer canary -a 'my-asg':$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name 'my-asg' --query 'AutoScalingGroups[0].DesiredCapacity')"]
null_resource.server_canary_bouncer (local-exec): /bin/sh: ./bouncer: No such file or directory
Error: Error running command './bouncer canary -a 'my-asg':$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name 'my-asg' --query 'AutoScalingGroups[0].DesiredCapacity')': exit status 127. Output: /bin/sh: ./bouncer: No such file or directory
[terragrunt] 2020/11/12 12:16:31 Hit multiple errors:
exit status 1
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
I don't have much knowledge of Terraform and hence don't really understand what to make of this log.
Any idea how this can be solved?

"Output: /bin/sh: ./bouncer: No such file or directory" means you are trying to run a file/script/command that does not exist in the directory Terraform is being run from. The local-exec command expects a ./bouncer executable in Terraform's working directory, so either put the binary there or point the command at its actual location.
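One way to make the path explicit is to give the provisioner a working directory, roughly as in the sketch below; the scripts/ path is an assumption, not something from the question, so adjust it to wherever the bouncer binary actually lives in your repository or CI image:

resource "null_resource" "server_canary_bouncer" {
  provisioner "local-exec" {
    # Hypothetical location of the bouncer binary - adjust to your repo layout.
    working_dir = "${path.module}/scripts"
    command     = "./bouncer canary -a 'my-asg':$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name 'my-asg' --query 'AutoScalingGroups[0].DesiredCapacity')"
  }
}

Alternatively, make sure the GitLab job downloads the bouncer binary and marks it executable (chmod +x bouncer) before terragrunt/terraform runs.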

Related

AWS ElasticBeanstalk deployment is throwing error, "An error occurred during execution of command [app-deploy] - [PostBuildEbExtension]"

I am trying to deploy my Laravel application to an ElasticBeanstalk environment. I am using the "eb deploy" command for the deployment. The command was working fine and I had been deploying my application successfully. At some point, I updated my CloudFormation template to change the name of the ElasticBeanstalk environment.
Then I also updated the .elasticbeanstalk/config.yml file as follows:
branch-defaults:
  master:
    environment: PatheinDirectoryTesting
    group_suffix: null
environment-defaults:
  MyanEat-test-env:
    branch: null
    repository: null
global:
  application_name: PatheinDirectoryApplication
  default_ec2_keyname: null
  default_platform: arn:aws:elasticbeanstalk:eu-west-1::platform/64bit Amazon Linux 2 v3.1.0 running PHP 7.3
  default_region: eu-west-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Then, in the terminal, I updated the "eb" CLI to use the right environment by running the following command.
eb use PatheinDirectoryTesting
Then I run "eb deploy" to deploy my application. It deployed the zip file successfully. Then it threw the error after uploading the zip.
Then I run the "eb logs" to get the error in the logs. This is the error I found in the logs.
2020/08/29 21:42:29.413325 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:eu-west-1:733553390213:stack/awseb-e-gntnptfj8v-stack/90004470-ea3e-11ea-8e57-02d39f83e350 -r AWSEBAutoScalingGroup --region eu-west-1 --configsets Infra-EmbeddedPostBuild
2020/08/29 21:42:31.691170 [ERROR] An error occurred during execution of command [app-deploy] - [PostBuildEbExtension]. Stop running the command. Error: Container commands build failed. Please refer to /var/log/cfn-init.log for more details.
I tried deleting the CloudFormation template and deploying the application again, but the error persists. How can I fix it?
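The message points at /var/log/cfn-init.log on the instance, which is where the actual container_commands failure is logged. A couple of ways to get at it, assuming the environment still has a running instance and the EB CLI is configured (these are standard eb subcommands, not taken from the question):

# Pull the full log bundle for the environment (saved under .elasticbeanstalk/logs/)
eb logs PatheinDirectoryTesting --all

# Or SSH into the instance and read the file directly
eb ssh PatheinDirectoryTesting
sudo cat /var/log/cfn-init.log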

Why do I get /bin/sh: aws: command not found in local-exec provisioner?

I want to invoke a Lambda function from Terraform when destroy is running.
The Terraform job is run by Jenkins on a remote server.
According to this documentation I defined the following provisioner:
provisioner "local-exec" {
when = destroy
command = "aws lambda invoke --function-name ${var.lambda_name} --payload '{ \"someProperty\": \"someValue\" }' response.json"
}
The command syntax for lambda invoke follows the AWS CLI Command Reference.
But when running Terraform, I get the following error:
Error running command 'aws lambda invoke --function-name my-lambda-name --payload '{ "someProperty": "someValue" }' response.json': exit status 127. Output: /bin/sh: aws: command not found
Why do I get /bin/sh: aws: command not found in local-exec provisioner?
local-exec runs on the machine that is running Terraform (in this case your Jenkins agent), not on the resource being created. So I would guess you are missing the AWS CLI on that machine.
The command "which aws" will show whether you have the AWS CLI installed and available within your PATH. If it is not installed, follow the AWS CLI installation instructions for your OS.
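A rough sketch of the check and a standard AWS CLI v2 install on a 64-bit Linux Jenkins agent (the download URL and steps are the stock procedure from the AWS docs, not anything specific to this setup):

# Check whether the AWS CLI is already on the PATH
which aws || echo "aws not found"

# Standard AWS CLI v2 install for x86_64 Linux
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version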

Run aws cli command in CodeBuild with environment variable substitution

I am trying to run an aws cli command at the end of a CodeBuild from the buildspec.yml.
The container/image is "aws/codebuild/amazonlinux2-x86_64-standard:1.0"
I have an environment variable of $Branch (currently set to 'master')
and I want to run the command "aws codepipeline start-pipeline-execution --name bbentityinterface-master-Pipeline"
I have tried "aws codepipeline start-pipeline-execution --name $(bbentityinterface-$Branch-Pipeline)"
and "aws codepipeline start-pipeline-execution --name bbentityinterface-$Branch-Pipeline"
and both fail.
"aws codepipeline start-pipeline-execution --name $(bbentityinterface-$Branch-Pipeline)" fails as below (from the log):
How can I properly construct this line to execute the command?
Running command aws codepipeline start-pipeline-execution --name $(bbentityinterface-$Branch-Pipeline)
/codebuild/output/tmp/script.sh: line 4: bbentityinterface-master-Pipeline: command not found
usage: aws [options] [ ...] [parameters]
To see help text, you can run:
aws help
aws help
aws help
aws: error: argument --name: expected one argument
[Container] 2020/01/08 15:46:40 Command did not exit successfully aws codepipeline start-pipeline-execution --name $(bbentityinterface-$Branch-Pipeline) exit status 2
Figured it out...
eval "aws codepipeline start-pipeline-execution --name bbentityinterface-$Branch-Pipeline"

Failed to successfully validate kops cluster state

I am using this command to try and get a jenkins-x cluster set up and running :
jx create cluster aws --ng
I've also tried :
jx create cluster aws
the output looks like this :
Waiting to for a valid kops cluster state...
WARNING: retrying after error: exit status 2
error: Failed to successfully validate kops cluster state: after 25 attempts, last error: exit status 2
All help appreciated.
Try running the kops validate cluster command directly, as shown below, to see the underlying error:
AWS_ACCESS_KEY_ID=<YOUR_KEY_HERE> AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY_HERE> kops validate cluster --wait 10m --state="s3://<YOUR_S3_BUCKET_NAME_HERE>" --name=<YOUR_CLUSTER_NAME_HERE>
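If the inline form gets unwieldy, the same check can be run with exported environment variables; KOPS_STATE_STORE is the variable kops reads in place of --state, and the placeholders are yours to fill in:

export AWS_ACCESS_KEY_ID=<YOUR_KEY_HERE>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY_HERE>
export KOPS_STATE_STORE="s3://<YOUR_S3_BUCKET_NAME_HERE>"

# Give the cluster up to 10 minutes to become healthy before failing
kops validate cluster --wait 10m --name=<YOUR_CLUSTER_NAME_HERE>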

Using command line to create AWS Elastic Beanstalk fail

I tried to set up an Elastic Beanstalk worker tier environment by using the following command:
eb create -t worker
But I receive the following error:
2015-11-04 16:44:01 UTC+0800 ERROR Stack named 'awseb-e-wh4epksrzi-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry, AWSEBSecurityGroup].
2015-11-04 16:43:58 UTC+0800 ERROR Creating security group named: sg-7ba1f41e failed Reason: Resource creation cancelled
Is there something specific needed to run this command?
I found the eb command line buggy. Try using the web console instead; it is much more reliable.