AWS CodePipeline - artifact file permission denied

I have an issue with yaml / json and an error like the one below. I didn't change the lines responsible for producing artifacts in AWS CodePipeline, yet it throws the error below...
[Container] 2020/07/30 17:18:27 Running command printf '[{"name":"production-celery","imageUri":"%s"}]' $CELERY_REPO_URI:$IMAGE_TAG > build/codebuild/imagedefinitions-prod-celery.json || true
[Container] 2020/07/30 17:18:27 Running command ls -la build/codebuild/
total 36
drwxr-xr-x 2 root root 4096 Jul 30 17:18 .
drwxr-xr-x 5 root root 4096 Jul 30 17:16 ..
-rw-rw-r-- 1 root root 2569 Jul 30 17:06 Buildspec_production.yml
-rw-rw-r-- 1 root root 1157 Jul 30 17:06 Buildspec_staging.yml
-rw-rw-r-- 1 root root 351 Jul 30 17:06 buildspec_ci.yml
-rw-rw-r-- 1 root root 351 Jul 30 17:06 buildspec_prod_ci.yml
-rw-r--r-- 1 root root 110 Jul 30 17:18 imagedefinitions-prod-app.json
-rw-r--r-- 1 root root 108 Jul 30 17:18 imagedefinitions-prod-celery.json
-rw-rw-r-- 1 root root 580 Jul 30 17:06 imagedefinitions-staging.json
[Container] 2020/07/30 17:18:27 Running command cat build/codebuild/imagedefinitions-prod-celery.json
[{"name":"production-celery","imageUri":"xxxxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com/celery-repo:7fb56ff"}]
[Container] 2020/07/30 17:18:27 Running command build/codebuild/imagedefinitions-prod-celery.json
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: build/codebuild/imagedefinitions-prod-celery.json: Permission denied
[Container] 2020/07/30 17:18:27 Command did not exit successfully build/codebuild/imagedefinitions-prod-celery.json exit status 126
[Container] 2020/07/30 17:18:27 Phase complete: POST_BUILD State: FAILED
[Container] 2020/07/30 17:18:27 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: build/codebuild/imagedefinitions-prod-celery.json. Reason: exit status 126
I have no idea what is wrong or why it throws Permission denied. Has anyone encountered this error?
EDIT: yesterday evening it was working fine... no changes...

Running command build/codebuild/imagedefinitions-prod-celery.json
This is the failing step, and it is not a valid command: the buildspec asks the shell to execute the JSON artifact itself,
build/codebuild/imagedefinitions-prod-celery.json
If a command is found but is not executable, the return status is 126. The ls -la output above shows the file with mode -rw-r--r--, so it is not executable. Check that command in your buildspec; most likely a leading command such as cat was dropped from that line.
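As a minimal sketch of the suspected fix (the actual buildspec isn't shown in the question, so the assumption here is that a leading cat was lost from one post_build command):
# Works: prints the artifact, as seen one command earlier in the log
cat build/codebuild/imagedefinitions-prod-celery.json
# Fails: with no command in front, the shell tries to execute the
# JSON file itself; since its mode is -rw-r--r--, that is exit status 126
build/codebuild/imagedefinitions-prod-celery.json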

Related

GCP cloudbuild react-scripts build doesn't find env file

I'm doing something I thought was simple:
# Fetch config
- name: 'gcr.io/cloud-builders/gsutil'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: [ 'cp', 'gs://servicesconfig/devs/react-app/env.server', '/persistent_volume/env.server' ]
# Install dependencies
- name: node:$_NODE_VERSION
  entrypoint: 'yarn'
  args: [ 'install' ]
# Build project
- name: node:$_NODE_VERSION
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - -c
  - |
    cp /persistent_volume/env.server .env.production &&
    cat .env.production &&
    ls -la &&
    yarn run build:prod
while in my package.json:
"build:prod": "sh -ac '. .env.production; react-scripts build'",
All of this works well locally, but this is the output in GCP Cloud Build:
Already have image: node:14
REACT_APP_ENV="sandbox"
REACT_APP_CAPTCHA_ENABLED=true
REACT_APP_CAPTCHA_PUBLIC_KEY="akey"
REACT_APP_DEFAULT_APP="home-btn"
REACT_APP_API_URL="akey2"
REACT_APP_STRIPE_KEY="akey3"
REACT_APP_COGNITO_POOL_ID="akey4"
REACT_APP_COGNITO_APP_ID="akey5"
total 2100
drwxr-xr-x 6 root root 4096 Feb 25 12:15 .
drwxr-xr-x 1 root root 4096 Feb 25 12:15 ..
-rw-r--r-- 1 root root 382 Feb 25 12:15 .env.production <- it's here!
drwxr-xr-x 8 root root 4096 Feb 25 12:13 .git
-rw-r--r-- 1 root root 230 Feb 25 12:13 .gitignore
-rw-r--r-- 1 root root 371 Feb 25 12:13 Dockerfile
-rw-r--r-- 1 root root 3787 Feb 25 12:13 README.md
-rw-r--r-- 1 root root 1019 Feb 25 12:13 cloudbuild.yaml
drwxr-xr-x 1089 root root 36864 Feb 25 12:14 node_modules
-rw-r--r-- 1 root root 1580131 Feb 25 12:13 package-lock.json
-rw-r--r-- 1 root root 1896 Feb 25 12:13 package.json
drwxr-xr-x 2 root root 4096 Feb 25 12:13 public
drwxr-xr-x 9 root root 4096 Feb 25 12:13 src
-rw-r--r-- 1 root root 535 Feb 25 12:13 tsconfig.json
-rw-r--r-- 1 root root 478836 Feb 25 12:13 yarn.lock
/workspace
yarn run v1.22.17
$ sh -ac '. .env.production; react-scripts build'
sh: 1: .: .env.production: not found
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm unsure if I'm doing something completely wrong or if it's a bug on GCP's side.
Alright, I'm not expert enough in the bash and sh documentation to understand what the issue is, but I ended up solving it.
One thing to pay attention to:
everything in the working directory is actually shared between steps in Cloud Build, so there is no need for a volume or any specific path
So on the cloudbuild side I changed the yaml to reflect:
- name: node:$_NODE_VERSION
  entrypoint: 'bash'
  args:
  - -c
  - |
    mv env.server .env.production &&
    yarn run build:prod
And in package.json I'm now using an extra lib, env-cmd,
which changes the build command to:
"build:prod": "env-cmd -f .env.production react-scripts build",
This works like a charm.
I'm a bit annoyed I had to add another lib for this, but, well.
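For the record, the skipped explanation is probably the POSIX dot builtin: sh inside the node image is dash, and dash's `.` searches only $PATH for the file to source, with no fallback to the current directory the way interactive bash has, so `. .env.production` fails even though ls shows the file sitting in /workspace. If that reading is right, the original package.json script would have worked with an explicit path (an untested sketch):
"build:prod": "sh -ac '. ./.env.production; react-scripts build'",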

How to properly configure my terraform folders for AWS deployment?

This is what my folder structure looks like:
total 248
drwxrwxr-x 6 miki miki 4096 Mar 7 16:01 ./
drwxrwxr-x 5 miki miki 4096 Mar 3 14:53 ../
-rw-rw-r-- 1 miki miki 460 Mar 4 11:59 application_01.tf
drwxrwxr-x 3 miki miki 4096 Mar 8 10:54 application-server/
-rw-rw-r-- 1 miki miki 862 Mar 4 09:06 ecr.tf
-rw-rw-r-- 1 miki miki 3169 Mar 4 11:36 iam.tf
-rw-rw-r-- 1 miki miki 1023 Mar 4 14:11 jenkins_01.tf
drwxrwxr-x 2 miki miki 4096 Mar 7 15:33 jenkins-config/
-rw------- 1 miki miki 3401 Mar 3 09:41 jenkins.key
-r-------- 1 miki miki 753 Mar 3 09:41 jenkins.pem
drwxrwxr-x 3 miki miki 4096 Mar 8 10:53 jenkins-server/
Yesterday I ran both terraform init and terraform apply.
I found out that the content of my application-server folder is not being applied.
I have a script file (update, install Docker, log in to ECR, and pull the image):
sudo yum update -y
sudo amazon-linux-extras install docker
sudo systemctl start docker
sudo systemctl enable docker
/bin/sh -e -c 'echo $(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin ${repository_url}'
sudo docker pull ${repository_url}:release
sudo docker run -p 80:8000 ${repository_url}:release
Anyway, I checked the instance from the console.
I ran
terraform plan
and this is what it says:
No changes. Your infrastructure matches the configuration.
Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan:
terraform apply -refresh-only
My application.tf file:
module "application-server" {
source = "./application-server"
ami-id = "ami-0742b4e673072066f" # AMI for an Amazon Linux instance for region: us-east-1
iam-instance-profile = aws_iam_instance_profile.simple-web-app.id
key-pair = aws_key_pair.simple-web-app-key.key_name
name = "Simple Web App"
device-index = 0
network-interface-id = aws_network_interface.simple-web-app.id
repository-url = aws_ecr_repository.simple-web-app.repository_url
}
And the application-server folder:
-rw-rw-r-- 1 miki miki 417 Mar 2 11:18 application-server_main.tf
-rw-rw-r-- 1 miki miki 164 Mar 2 11:21 application-server_output.tf
-rw-rw-r-- 1 miki miki 398 Mar 2 11:17 application-server_variables.tf
drwxr-xr-x 3 miki miki 4096 Mar 8 10:54 .terraform/
-rw-r--r-- 1 miki miki 1076 Mar 8 10:54 .terraform.lock.hcl
-rw-rw-r-- 1 miki miki 866 Mar 4 14:39 user_data.sh
And application-server_main.tf:
resource "aws_instance" "default" {
  ami                  = var.ami-id
  iam_instance_profile = var.iam-instance-profile
  instance_type        = var.instance-type
  key_name             = var.key-pair

  network_interface {
    device_index         = var.device-index
    network_interface_id = var.network-interface-id
  }

  user_data = templatefile("${path.module}/user_data.sh", { repository_url = var.repository-url })

  tags = {
    Name = var.name
  }
}
My script is not executed. Why? And how do I properly structure Terraform across many folders?
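One likely explanation, given that the plan reports no changes: user_data is run by cloud-init only on the instance's first boot, so a script that failed then, or was edited afterwards, leaves no trace in terraform plan; the evidence lives on the instance, and a rerun requires replacing it. A minimal sketch (the resource address is taken from the module above; confirm it with terraform state list):
# On the instance: see what the first-boot run actually did
sudo cat /var/log/cloud-init-output.log

# Locally: recreate the instance so user_data runs again (Terraform >= 0.15.2)
terraform apply -replace='module.application-server.aws_instance.default'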

loop over a list of instances to do yum update failed with exit status 126

I need to automate yum update across a list of instances. I tried something like aws ssm send-command --document-name "AWS-RunShellScript" --parameters 'commands=["sudo yum -y update"]' --targets "Key=instanceids,Values=<target instance id>" --timeout-seconds 600 in my local terminal (MFA enabled, logged in as an IAM user that can list all EC2 instances across all regions with aws ec2 describe-instances). I got output with "StatusDetails": "Pending", and the update never took place.
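As for the looping part of the title: send-command accepts several instance IDs in a single call, so an explicit shell loop is often unnecessary (the IDs below are placeholders):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["sudo yum -y update"]' \
  --targets "Key=instanceids,Values=i-0123456789abcdef0,i-0fedcba9876543210" \
  --timeout-seconds 600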
I checked the SSM log after starting an SSM session on the target instance:
2021-12-08 00:03:32 INFO [ssm-agent-worker] [MessagingDeliveryService] Sending reply {
  "additionalInfo": {
    "agent": {
      "lang": "en-US",
      "name": "amazon-ssm-agent",
      "os": "",
      "osver": "1",
      "ver": ""
    },
    "dateTime": "2021-12-08T00:03:32.061Z",
    "runId": "",
    "runtimeStatusCounts": {
      "Failed": 1
    }
  },
  "documentStatus": "InProgress",
  "documentTraceOutput": "",
  "runtimeStatus": {
    "aws:runShellScript": {
      "status": "Failed",
      "code": 126,
      "name": "aws:runShellScript",
      "output": "\n----------ERROR-------\nsh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126",
      "startDateTime": "2021-12-08T00:03:32.024Z",
      "endDateTime": "2021-12-08T00:03:32.061Z",
      "outputS3BucketName": "",
      "outputS3KeyPrefix": "",
      "stepName": "",
      "standardOutput": "",
      "standardError": "sh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126"
    }
  }
}
I checked the directory permissions:
ls -al /var/lib/amazon/
total 4
drwxr-xr-x 3 root root 17 Jul 26 23:53 .
drwxr-xr-x 32 root root 4096 Aug 6 18:49 ..
drwxr-xr-x 6 root root 80 Aug 7 00:03 ssm
and one level further down:
ls -al /var/lib/amazon/ssm
total 0
drwxr-xr-x 6 root root 80 Aug 7 00:03 .
drwxr-xr-x 3 root root 17 Jul 26 23:53 ..
drw------- 2 root root 6 Aug 7 00:03 daemons
drw------- 8 root root 111 Dec 8 00:03 i-074cfdd5be7fe517b
drwxr-x--- 2 root root 39 Aug 7 00:03 ipc
drw------- 3 root root 23 Aug 7 00:03 localcommands
I also tried more basic commands like echo HelloWorld and got the same 126 error.
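One cause worth ruling out, since the listings above only show read/write bits (this is an assumption; the mount table isn't in the question): if the filesystem backing /var/lib/amazon/ssm is mounted noexec, the agent can write its generated _script.sh but can never execute it, which yields exactly this Permission denied / exit status 126 no matter how trivial the command is. A quick check from a session on the instance:
# Look for "noexec" in the OPTIONS column for the agent's work directory
findmnt -T /var/lib/amazon/ssm -o TARGET,SOURCE,OPTIONS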

Terraform AWS provider issue

Can anyone suggest how to fix this issue?
Initializing provider plugins...
terraform.io/builtin/terraform is built in to Terraform
Finding hashicorp/aws versions matching "~> 3.56.0"...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/aws: provider registry.terraform.io/hashicorp/aws was not found
│ in any of the search locations
│
│ - /var/terraform/terraform-provider-aws-3.56.0/
╵
That directory does exist:
drwxr-xr-x. 1 root root 4096 Oct 15 03:21 terraform-provider-aws-3.56.0
[root@7c3369092d09 terraform]# pwd
/var/terraform
[root@7c3369092d09 terraform]#
[root@7c3369092d09 terraform]# cd terraform-provider-aws-3.56.0/
[root@7c3369092d09 terraform-provider-aws-3.56.0]# ll
total 572
drwxr-xr-x. 5 root root 151552 Aug 26 22:30 aws
drwxr-xr-x. 5 root root 4096 Aug 26 22:30 awsproviderlint
-rw-r--r--. 1 root root 275106 Aug 26 22:30 CHANGELOG.md
drwxr-xr-x. 4 root root 4096 Aug 26 22:30 docs
drwxr-xr-x. 27 root root 4096 Aug 26 22:30 examples
-rw-r--r--. 1 root root 6140 Aug 26 22:30 GNUmakefile
-rw-r--r--. 1 root root 1137 Aug 26 22:30 go.mod
-rw-r--r--. 1 root root 62213 Aug 26 22:30 go.sum
drwxr-xr-x. 3 root root 4096 Aug 26 22:30 infrastructure
-rw-r--r--. 1 root root 16725 Aug 26 22:30 LICENSE
-rw-r--r--. 1 root root 580 Aug 26 22:30 main.go
-rw-r--r--. 1 root root 2067 Aug 26 22:30 README.md
-rw-r--r--. 1 root root 7105 Aug 26 22:30 ROADMAP.md
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 scripts
-rw-r--r--. 1 root root 86 Aug 26 22:30 staticcheck.conf
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 tools
drwxr-xr-x. 2 root root 4096 Aug 26 22:30 version
drwxr-xr-x. 3 root root 4096 Aug 26 22:30 website
[root@7c3369092d09 terraform-provider-aws-3.56.0]#
I am creating a Docker image/container to install Terraform and run the operations in it.
Dockerfile snippet:
It downloads the provider archive from our repo, named terraform-provider-aws_3.56.0_linux_amd64:
RUN wget $FILES_REPO_LOCAL/sd.svtstand.files/distributive/terraform/providers/terraform-provider-aws_${PROVIDER_VERSION}_linux_amd64.zip \
&& unzip terraform-provider-aws_*.zip -d /tmp \
&& rm terraform-provider-aws_*.zip \
&& mkdir -p /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64 \
&& mv /tmp/terraform-provider-aws_* /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64 \
&& chmod 775 /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/${PROVIDER_VERSION}/linux_amd64
This initializes Terraform:
terraform init -plugin-dir=/opt/terraform/plugins
#Provider block
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">=3.35.0"
    }
  }
}
This is the output:
2021-10-18T13:47:01.766Z [DEBUG] will search for provider plugins in [/opt/terraform/plugins]
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding hashicorp/aws versions matching "~> 3.62.0"...
- Installing hashicorp/aws v3.62.0...
Output in the container:
[root@bf34ab9c5277 aws]# pwd
/opt/terraform/plugins/registry.terraform.io/hashicorp/aws
[root@bf34ab9c5277 aws]# ll
total 0
drwxr-xr-x 3 root root 25 Oct 18 12:59 3.62.0
[root@bf34ab9c5277 aws]#
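Two details in the logs seem worth noting. First, the failing init searched /var/terraform/terraform-provider-aws-3.56.0/, which is the unzipped provider source tree, while the init that succeeded searched /opt/terraform/plugins, which has the local-mirror layout that terraform init -plugin-dir expects. Second, the version constraints differ across the logs ("~> 3.56.0", ">=3.35.0", "~> 3.62.0"), and the pinned constraint has to match the version directory that was actually populated. A sketch of the expected mirror layout, assuming version 3.56.0:
# Layout that `terraform init -plugin-dir=/opt/terraform/plugins` expects:
#   /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/3.56.0/linux_amd64/terraform-provider-aws_v3.56.0_x5
# The leaf must be the provider binary itself, and it must be executable:
chmod +x /opt/terraform/plugins/registry.terraform.io/hashicorp/aws/3.56.0/linux_amd64/terraform-provider-aws_v3.56.0_x5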

AWS CodeDeploy script exited with code 127

This is my first time using AWS CodeDeploy and I'm having problems creating my appspec.yml file.
This is the error I'm getting:
2019-02-16 19:28:06 ERROR [codedeploy-agent(3596)]:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Error during perform:
InstanceAgent::Plugins::CodeDeployPlugin::ScriptError -
Script at specified location: deploy_scripts/install_project_dependencies
run as user root failed with exit code 127 -
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:183:in `execute_script'
This is my appspec.yml file:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/admin_panel_backend
hooks:
  BeforeInstall:
    - location: deploy_scripts/install_dependencies
      timeout: 300
      runas: root
    - location: deploy_scripts/start_server
      timeout: 300
      runas: root
  AfterInstall:
    - location: deploy_scripts/install_project_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: deploy_scripts/stop_server
      timeout: 300
      runas: root
And this is my project structure:
drwxr-xr-x 7 501 20 224 Feb 6 20:57 api
-rw-r--r-- 1 501 20 501 Feb 16 16:29 appspec.yml
-rw-r--r-- 1 501 20 487 Feb 14 21:54 bitbucket-pipelines.yml
-rw-r--r-- 1 501 20 3716 Feb 14 20:43 codedeploy_deploy.py
drwxr-xr-x 4 501 20 128 Feb 6 20:57 config
-rw-r--r-- 1 501 20 1047 Feb 4 22:56 config.yml
drwxr-xr-x 6 501 20 192 Feb 16 16:25 deploy_scripts
drwxr-xr-x 264 501 20 8448 Feb 6 17:40 node_modules
-rw-r--r-- 1 501 20 101215 Feb 6 20:57 package-lock.json
-rw-r--r-- 1 501 20 580 Feb 6 20:57 package.json
-rw-r--r-- 1 501 20 506 Feb 4 08:50 server.js
And the deploy_scripts folder:
-rwxr--r-- 1 501 20 50 Feb 14 22:54 install_dependencies
-rwxr--r-- 1 501 20 61 Feb 16 16:25 install_project_dependencies
-rwxr--r-- 1 501 20 32 Feb 14 22:44 start_server
-rwxr--r-- 1 501 20 31 Feb 14 22:44 stop_server
This is my install_project_dependencies script:
#!/bin/bash
cd /var/www/html/admin_panel_backend
npm install
All the other scripts are working OK, except this one (install_project_dependencies).
Thank you all.
After reading a lot, I realized I was having the same problem as NPM issue deploying a nodejs instance using AWS codedeploy: I didn't have my PATH variable set.
So leaving my script like this worked fine!
#!/bin/bash
source /root/.bash_profile
cd /var/www/html/admin_panel_backend
npm install
Thanks!
I had the exact same problem because npm was installed for EC2-user and not for root. I solved it by adding this line to my install_dependencies script.
su - ec2-user -c 'cd /usr/local/nginx/html/node && npm install'
You can replace your npm install line with the line above to install as your user.
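A final note on the error code itself: exit status 127 means "command not found", and CodeDeploy hooks run in a minimal non-login shell, so an npm installed under another user's home is invisible until a profile is sourced (as above) or PATH is extended explicitly. A sketch of the explicit variant (the nvm path is hypothetical; check the real one with which npm as the user that installed Node):
#!/bin/bash
# Hypothetical Node location: verify with `which npm` on the instance
export PATH="$PATH:/home/ec2-user/.nvm/versions/node/v8.9.4/bin"
cd /var/www/html/admin_panel_backend
npm install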