Allure workspace issue in Jenkins

I am trying to run Allure from Jenkins. I have installed the Allure Jenkins plugin
version 2.30.2, and my Jenkins version is 2.346.1.
Logs:
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] allure
[useruk_pipeline-2_develop] $ /var/lib/jenkins/tools/ru.yandex.qatools.allure.jenkins.tools.AllureCommandlineInstallation/allure/bin/allure generate -c -o /var/lib/jenkins/workspace/useruk_pipeline-2_develop/allure-report
allure-results does not exist
Report successfully generated to /var/lib/jenkins/workspace/useruk_pipeline-2_develop/allure-report
Allure report was successfully generated.
Creating artifact for the build.
Artifact was added to the build.
[Pipeline] }
Code:
post {
    always {
        script {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: " ${env.WORKSPACE}/allure-results"]]
                //results: [[path: " ${ALLURE_DIR}/allure-results"]]
            ])
        }
        deleteDir()
    }
}
It tries to generate the report under '/var/lib/jenkins/workspace/useruk_pipeline-2_develop/allure-report'. When I log in to the Jenkins box via PuTTY, I cannot find allure-results in the useruk_pipeline-2_develop workspace.
jenkins@ip-xxx.xx.x.xx:~/workspace/useruk_pipeline-2_develop$ ls
Dockerfile Jenkinsfile behave.ini features requirements.txt amt
But I could see 'allure-results' in the useruk_pipeline-2_develop@2 workspace.
jenkins@ip-xxx.xx.x.xx:~/workspace/useruk_pipeline-2_develop@2$ ls -l | grep "all*"
total 4332
drwxr-xr-x 2 jenkins jenkins 282624 Aug 18 12:14 allure-results
-rw-r--r-- 1 jenkins jenkins 889 Aug 3 11:49 allure.py
drwxr-xr-x 3 jenkins jenkins 4096 Aug 3 11:49 allure_behave
drwxr-xr-x 2 jenkins jenkins 4096 Aug 3 11:49 allure_behave-2.5.2.dist-info
drwxr-xr-x 3 jenkins jenkins 4096 Aug 3 11:49 allure_commons
drwxr-xr-x 2 jenkins jenkins 4096 Aug 3 11:49 allure_python_commons-2.5.2.dist-info
Could someone please provide some pointers on where to direct my investigation? Any links would also be appreciated.

I was eventually able to load the test results by manually creating a target/allure-results folder. It looks like Jenkins infrastructure folders get created based on pipeline nodes (in my case there are 6 nodes plus 1 infra; the last one runs the agent and attempts to create allure-results, splitting the results and the report into different paths).
As a workaround, I added target/allure-results exactly where allure-report was generated and copied the folder over, i.e.:
Folder structure:
./home/fyre/home/fyre/workspace/Jobs/Sandbox/Eduardo/copy_sw_test_validation_test@2/target/allure-results
./home/fyre/home/fyre/workspace/Jobs/Sandbox/Eduardo/copy_sw_test_validation_test@2/target/allure-report
pipeline script:
stage('Allure Report') {
    agent {
        node {
            label "node-sw-slave${BUILD_NUMBER}"
        }
    }
    steps {
        ws("/home/fyre/workspace/Jobs/Sandbox/Eduardo/copy_sw_test_validation_test@2/") {
            script {
                allure([
                    includeProperties: false,
                    jdk: '',
                    properties: [],
                    reportBuildPolicy: 'ALWAYS',
                    results: [[path: "target/allure-results"]]
                ])
            }
        }
    }
}
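For reference, the copy itself can be done from a plain sh step in the same workspace, before the allure step runs; the source and destination below are illustrative and match the layout above:

# copy the raw results next to where the report is generated
mkdir -p target
cp -r allure-results target/allure-results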
Hopefully this is the same issue you are hitting at your end.


AWS CodeBuild not pausing on breakpoint

Using the steps provided here, I kicked off a CodeBuild build with the following advanced options checked:
Enable session connection
Allow AWS CodeBuild to modify this service role so it can be used with this build project
The buildspec included a codebuild-breakpoint:
version: 0.2
phases:
  pre_build:
    commands:
      - ls -al
      - codebuild-breakpoint
      - cd "${SERVICE_NAME}"
      - ls -al
      - $(aws ecr get-login)
      - TAG="$SERVICE_NAME"
  build:
    commands:
      - docker build --tag "${REPOSITORY_URI}:${TAG}" .
  post_build:
    commands:
      - docker push "${REPOSITORY_URI}:${TAG}"
      - printf '{"tag":"%s"}' $TAG > ../build.json
artifacts:
  files: build.json
The build started and produced the following logs without pausing:
[Container] 2022/02/28 13:49:03 Entering phase PRE_BUILD
[Container] 2022/02/28 13:49:03 Running command ls -al
total 148
drwxr-xr-x 2 root root 4096 Feb 28 13:49 .
drwxr-xr-x 3 root root 4096 Feb 28 13:49 ..
-rw-rw-rw- 1 root root 1818 Feb 28 10:54 user-manager\Dockerfile
-rw-rw-rw- 1 root root 140 Feb 28 10:34 user-manager\body.json
-rw-rw-rw- 1 root root 0 Feb 28 10:54 user-manager\shared-modules\
-rw-rw-rw- 1 root root 4822 Feb 21 14:52 user-manager\shared-modules\config-helper\config.js
-rw-rw-rw- 1 root root 2125 Feb 21 14:52 user-manager\shared-modules\config-helper\config\default.json
-rw-rw-rw- 1 root root 366 Feb 21 14:52 user-manager\shared-modules\config-helper\package.json
-rw-rw-rw- 1 root root 9713 Feb 21 14:52 user-manager\shared-modules\dynamodb-helper\dynamodb-helper.js
-rw-rw-rw- 1 root root 399 Feb 21 14:52 user-manager\shared-modules\dynamodb-helper\package.json
-rw-rw-rw- 1 root root 451 Feb 21 14:52 user-manager\shared-modules\token-manager\package.json
-rw-rw-rw- 1 root root 13885 Feb 21 14:52 user-manager\shared-modules\token-manager\token-manager.js
-rw-rw-rw- 1 root root 44372 Feb 28 10:34 user-manager\src\cognito-user.js
-rw-rw-rw- 1 root root 706 Feb 28 10:34 user-manager\src\package.json
-rw-rw-rw- 1 root root 32734 Feb 28 10:34 user-manager\src\server.js
[Container] 2022/02/28 13:49:03 Running command codebuild-breakpoint
2022/02/28 13:49:03 Build is paused temporarily and you can use codebuild-resume command in the session to resume this build
[Container] 2022/02/28 13:49:03 Running command cd "${SERVICE_NAME}"
/codebuild/output/tmp/script.sh: 4: cd: can't cd to user-manager
My primary question is: Why didn't the build pause and session manager link become available?
Side-quest: The reason I'm trying to debug the session is to try to determine why the process can't CD to the user-manager folder (which clearly exists). Any ideas why?
TLDR: The image on the build machine was too old.
Main quest
The template specified aws/codebuild/ubuntu-base:14.04 as the CodeBuild image. Presumably that image pre-dated the Session Manager functionality (which requires a specific version of the SSM agent to be installed).
I updated the image to aws/codebuild/standard:5.0 and was able to successfully pause on the breakpoint and connect to the session.
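If the project is not managed through the template, the image can also be switched directly on the build project; a minimal sketch with an illustrative project name:

aws codebuild update-project \
    --name my-build-project \
    --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL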
Side quest
Once I connected I was able to investigate the cause of the inability to CD to the folder. I can confirm that Tim's shot in the dark was correct! All the entries were in fact files - no folders.
This QuickStart is the gift that keeps on giving! When/if I get all the issues resolved I'll submit a PR to update the project. Those interested in the cause of the file/folder issue can follow up there.
Side quest update
The strange flattening behaviour was due to creating the zip file on a Windows machine and unzipping it on a unix machine (the build agent uses an Ubuntu image). Zipping it with 7-Zip instead did the job.
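If you want to sanity-check a bundle before uploading it, listing the archive shows whether the directory structure survived (archive name illustrative):

# directories should appear as their own entries (e.g. user-manager/src/),
# not as file names containing backslashes
zipinfo my-source-bundle.zip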

Define specific docker-compose file to use for AWS Elastic Beanstalk Deployment

Before I run eb create command, how can I tell Elastic Beanstalk to use a DIFFERENT docker-compose file?
For example, my project directory:
HelloWorldDocker
├── .elasticbeanstalk
│   └── config.yml
├── app/
├── proxy/
├── docker-compose.prod.yml
└── docker-compose.yml
My docker-compose.yml is what I use for local development
My docker-compose.prod.yml is what I want to use for production
Is there a way to define this configuration before running the eb create command from the EB CLI?
Stating the obvious: I realize I could use docker-compose.yml as my production file and docker-compose.dev.yml for local development, but then running the docker-compose up command locally becomes more tedious (i.e. docker-compose -f docker-compose.dev.yml up --build...). Furthermore, I'm mainly interested in whether this is even possible, as I'm learning Elastic Beanstalk, and how I could do it if I wanted to.
EDIT / UPDATE: June 11, 2021
I attempted to rename docker-compose.prod.yml to docker-compose.yml in .ebextensions/docker-settings.config with this:
container_commands:
  rename_docker_compose:
    command: mv docker-compose.prod.yml docker-compose.yml
Running eb deploy:
2021-06-11 16:44:45 ERROR Instance deployment failed.
For details, see 'eb-engine.log'.
2021-06-11 16:44:45 ERROR Instance deployment: Both
'Dockerfile' and 'Dockerrun.aws.json' are missing in your
source bundle. Include at least one of them. The deployment
failed.
In eb-engine.log, I see:
2021/06/11 16:44:45.818876 [ERROR] An error occurred during
execution of command [app-deploy] - [Docker Specific Build
Application]. Stop running the command. Error: Dockerfile and
Dockerrun.aws.json are both missing, abort deployment
Based on my testing, this is due to AWS needing to call /bin/sh -c docker-compose config before getting to the later steps of container_commands.
Edit / Update #2
If I use commands instead of container_commands:
commands:
  rename_docker_compose:
    command: mv docker-compose.prod.yml docker-compose.yml
    cwd: /var/app/staging
it does seem to do the replacement successfully:
2021-06-11 21:40:44,809 P1957 [INFO] Command find_docker_compose_file
2021-06-11 21:40:45,086 P1957 [INFO] -----------------------Command Output-----------------------
2021-06-11 21:40:45,086 P1957 [INFO] ./var/app/staging/docker-compose.prod.yml
2021-06-11 21:40:45,086 P1957 [INFO] ------------------------------------------------------------
2021-06-11 21:40:45,086 P1957 [INFO] Completed successfully.
but I still am hit with:
2021/06/11 21:40:45.192780 [ERROR] An error occurred during
execution of command [app-deploy] - [Docker Specific Build
Application]. Stop running the command. Error: Dockerfile and
Dockerrun.aws.json are both missing, abort deployment
EDIT / UPDATE: June 12, 2021
I'm on a Windows 10 machine. Before running the eb deploy command locally, I opened Git Bash, which uses a MINGW64 terminal. I cd'd to the prebuild directory where build.sh exists and ran:
chmod +x build.sh
If I do ls -l, it returns:
-rwxr-xr-x 1 Jarad 197121 58 Jun 12 12:31 build.sh*
I think this means the file is executable.
I then committed to git.
I then ran eb deploy.
I am seeing a build.sh: permission denied error in eb-engine.log. Below is an excerpt of the relevant portion.
...
2021/06/12 19:41:38.108528 [INFO] application/zip
2021/06/12 19:41:38.108541 [INFO] app source bundle is zip file ...
2021/06/12 19:41:38.108547 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2021/06/12 19:41:38.108556 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2021/06/12 19:41:38.149125 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2021/06/12 19:41:38.149142 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2021/06/12 19:41:38.149190 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2021/06/12 19:41:38.149249 [INFO] Following platform hooks will be executed in order: [build.sh]
2021/06/12 19:41:38.149255 [INFO] Running platform hook: .platform/hooks/prebuild/build.sh
2021/06/12 19:41:38.149457 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPreBuildHooks]. Stop running the command. Error: Command .platform/hooks/prebuild/build.sh failed with error fork/exec .platform/hooks/prebuild/build.sh: permission denied
2021/06/12 19:41:38.149464 [INFO] Executing cleanup logic
2021/06/12 19:41:38.149572 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1623526898,"severity":"ERROR"}]}]}
2021/06/12 19:41:38.149706 [INFO] Platform Engine finished execution on command: app-deploy
...
Any idea why I am getting a permission denied error?
My Conclusion From This Madness
Elastic Beanstalk's EB CLI eb deploy command does not zip files (the app_source_bundle it creates) correctly on Windows machines.
Proof
I was able to recreate Marcin's example by zipping it locally and manually uploading it through the Elastic Beanstalk online interface. When I do that and check the source bundle, it shows that build.sh does have executable permissions (-rwxr-xr-x).
[root@ip-172-31-11-170 deployment]# zipinfo app_source_bundle
Archive: app_source_bundle
Zip file size: 993 bytes, number of entries: 5
drwxr-xr-x 3.0 unx 0 bx stor 21-Jun-13 03:08 .platform/
drwxr-xr-x 3.0 unx 0 bx stor 21-Jun-13 03:08 .platform/hooks/
drwxr-xr-x 3.0 unx 0 bx stor 21-Jun-13 03:08 .platform/hooks/prebuild/
-rwxr-xr-x 3.0 unx 58 tx defN 21-Jun-13 03:09 .platform/hooks/prebuild/build.sh
-rw-r--r-- 3.0 unx 98 tx defN 21-Jun-13 03:08 docker-compose.prod.yml
When I initialize and create using the EB CLI and the exact same files, build.sh does NOT have executable permissions (-rw-rw-rw-).
[ec2-user@ip-172-31-5-39 deployment]$ zipinfo app_source_bundle
Archive: app_source_bundle
Zip file size: 1092 bytes, number of entries: 5
drwxrwxrwx 2.0 fat 0 b- stor 21-Jun-12 20:32 ./
-rw-rw-rw- 2.0 fat 98 b- defN 21-Jun-12 20:08 docker-compose.prod.yml
-rw-rw-rw- 2.0 fat 993 b- defN 21-Jun-12 20:15 myzip.zip
drwxrwxrwx 2.0 fat 0 b- stor 21-Jun-12 20:08 .platform/hooks/prebuild/
-rw-rw-rw- 2.0 fat 58 b- defN 21-Jun-12 20:09 .platform/hooks/prebuild/build.sh
Therefore, I think this is a bug with AWS EB CLI deploy command in regards to how it zips files for Windows users.
You can't do this at the command level. But I guess you could write a container_commands script to rename your production docker-compose file to docker-compose.yml:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.
UPDATE 12 Jun 2021
I tried to replicate the issue using a simplified setup with just docker-compose.prod.yml, on the Docker running on 64bit Amazon Linux 2 v3.4.1 EB platform.
docker-compose.prod.yml
version: "3"
services:
  client:
    image: nginx
    ports:
      - 80:80
I can confirm and reproduce the issue with container_commands. So in my tests, the solution was to set up a prebuild deployment hook.
So my deployment zip had the structure:
├── docker-compose.prod.yml
└── .platform
    └── hooks
        └── prebuild
            └── build.sh
where
build.sh
#!/bin/bash
mv docker-compose.prod.yml docker-compose.yml
I also made build.sh executable before creating the deployment zip.
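On a unix-like machine that can look like the following (bundle name illustrative); Info-ZIP's zip preserves the unix permission bits inside the archive:

chmod +x .platform/hooks/prebuild/build.sh
zip -r app_source_bundle.zip docker-compose.prod.yml .platform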
app_source_bundle permissions (zipinfo -l)
Zip file size: 1008 bytes, number of entries: 5
drwxr-xr-x 3.0 unx 0 bx 0 stor 21-Jun-12 07:37 .platform/
drwxr-xr-x 3.0 unx 0 bx 0 stor 21-Jun-12 07:37 .platform/hooks/
drwxr-xr-x 3.0 unx 0 bx 0 stor 21-Jun-12 07:38 .platform/hooks/prebuild/
-rwxr-xr-x 3.0 unx 77 tx 64 defN 21-Jun-12 07:24 .platform/hooks/prebuild/build.sh
-rw-r--r-- 3.0 unx 92 tx 68 defN 21-Jun-12 07:01 docker-compose.prod.yml
I was able to circumvent this annoying bug by:
Using git and AWS CodeCommit
Running git add --chmod=+x .platform/hooks/prebuild/build.sh
This circumvents the Windows-related issue because:
When you configure CodeCommit with your EB CLI repository, the EB CLI
uses the contents of the repository to create source bundles. When you
run eb deploy or eb create, the EB CLI pushes new commits and uses the
HEAD revision of your branch to create the archive that it deploys to
the EC2 instances in your environment.
Source: Deploying from your CodeCommit repository
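To confirm the executable bit really is recorded in the repository (and will therefore end up in the bundle the EB CLI builds from CodeCommit), you can inspect the staged file mode:

git ls-files -s .platform/hooks/prebuild/build.sh
# 100755 means the executable bit is recorded; 100644 means the file is still non-executable in the index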

error: error creating output file /var/lib/logrotate.status.tmp: Permission denied

I am trying to logrotate my log files. Here is my configuration file:
/home/deploy/apps/production_app/current/log/*.log {
daily
missingok
rotate 52
compress
create 0644 deploy deploy
delaycompress
notifempty
sharedscripts
copytruncate
}
And this is the output of
ll apps/production_app/current/log/
on my log files:
-rw-rw-r-- 1 deploy deploy 0 Jul 1 10:01 production.log
-rw-rw-r-- 1 deploy deploy 1124555 Jul 1 10:01 production.log.1
And when I run this command
logrotate -v /etc/logrotate.d/production_app
I get the following:
error: error creating output file /var/lib/logrotate.status.tmp:
Permission denied
And here are the permissions on my logrotate config file:
lrwxrwxrwx 1 root root 67 Feb 25 2019 /etc/logrotate.d/production_app -> /home/deploy/apps/production_app/shared/config/log_rotation
Please check whether the directory /var/lib is read-only for the user you are running logrotate as; logrotate needs to write its state file (/var/lib/logrotate.status) there.
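If the intent is to run logrotate manually as the deploy user rather than root, one option is to point it at a state file that user can write (the state-file path below is just an example):

# run as root so the default state file under /var/lib is writable
sudo logrotate -v /etc/logrotate.d/production_app

# or stay as the deploy user and use an alternate state file
logrotate -v -s /home/deploy/logrotate.status /etc/logrotate.d/production_app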

What is the default user that CodeDeploy runs the hook scripts as?

Background: I am facing this error: AWS CodeDeploy deployment throwing "[stderr] Could not open input file" while trying to invoke a php file from the sh file at the afterInstall step.
In the afterInstall step, I am trying to run a PHP file from the afterInstall.sh file, and I am getting this error: unable to open the PHP file.
I am not sure what exactly to do. I thought of manually checking whether I could run the file as that user.
The CodeDeploy agent default user is root.
The directory listing below shows the ownership of the deployed files in their destination folder, /tmp, after a successful deployment.
ubuntu@ip-10-0-xx-xx:~$ ls -l /tmp
total 36
-rw-r--r-- 1 root root 85 Aug 2 05:04 afterInstall.php
-rw-r--r-- 1 root root 78 Aug 2 05:04 afterInstall.sh
-rw-r--r-- 1 root root 1397 Aug 2 05:04 appspec.yml
-rw------- 1 root root 3189 Aug 2 05:07 codedeploy-agent.update.log
drwx------ 2 root root 16384 Aug 2 03:01 lost+found
-rw-r--r-- 1 root root 63 Aug 2 05:04 out.log
runas is an optional field in the AppSpec file: the user to impersonate when running the script. By default, this is the user the AWS CodeDeploy agent runs as on the instance (if you don't specify a non-root user, it will be root).
To run the host agent as a non-root user, the environment variable CODEDEPLOY_USER needs to be set, as the host agent source code shows. The env variable can be set to whatever user you want the host agent to run as.
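A quick way to confirm on a given instance which user the agent (and therefore your hook scripts, absent runas) runs as:

# the owner of the codedeploy-agent process is the default user for hook scripts
ps aux | grep [c]odedeploy-agent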

AWS Code Deploy Failing Scripts Due To Permissions

I am attempting to run a few scripts while deploying using AWS Code Deploy, but they never run due to not having permissions to run the scripts.
Here is my appspec.yml file:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
permissions:
  - object: /var/www/html/codedeploy-scripts
    owner: root
    mode: 777
    type:
      - directory
hooks:
  ApplicationStop:
    - location: codedeploy-scripts/application-stop
      timeout: 300
      runas: root
  BeforeInstall:
    - location: codedeploy-scripts/before-install
      timeout: 300
      runas: root
  AfterInstall:
    - location: codedeploy-scripts/after-install
      timeout: 600
      runas: root
  ApplicationStart:
    - location: codedeploy-scripts/application-start
      timeout: 300
      runas: root
  ValidateService:
    - location: codedeploy-scripts/validate-service
      timeout: 300
      runas: root
The codedeploy-scripts folder gets deployed with the app, but the permissions I set on the folder do not get applied. The permissions on the folder always end up as:
[ec2-user@ip-10-0-8-181 html]$ ls -al
total 156
drwxrwsr-x 7 ec2-user www 4096 Oct 13 16:36 .
drwxrwsr-x 3 ec2-user www 4096 Oct 13 15:01 ..
-rw-rw-r-- 1 ec2-user www 740 Oct 13 16:28 appspec.yml
drwxr-sr-x 2 ec2-user www 4096 Oct 13 16:36 codedeploy-scripts
...
The files in the folder seem to have executable rights:
[ec2-user@ip-10-0-8-181 alio]$ ls -al codedeploy-scripts
total 28
drwxr-sr-x 2 ec2-user www 4096 Oct 13 16:36 .
drwxrwsr-x 7 ec2-user www 4096 Oct 13 16:36 ..
-rwxr-xr-x 1 ec2-user www 343 Oct 13 16:28 after-install
-rwxr-xr-x 1 ec2-user www 12 Oct 13 16:28 application-start
-rwxr-xr-x 1 ec2-user www 12 Oct 13 16:28 application-stop
-rwxr-xr-x 1 ec2-user www 889 Oct 13 16:28 before-install
-rwxr-xr-x 1 ec2-user www 12 Oct 13 16:28 validate-service
Why doesn't the code get deployed with the permissions I set in the appspec file? The codedeploy-scripts folder should have 777 permissions, but it never does.
This is the error I get in /var/log/aws/codedeploy-agent/codedeploy-agent.log for each of those scripts:
2015-10-13 16:36:23 WARN [codedeploy-agent(9918)]: InstanceAgent::Plugins::CodeDeployPlugin::HookExecutor: Script at specified location: codedeploy-scripts/validate-service is not executable. Trying to make it executable.
Any help would be appreciated.
The agent executes the scripts directly from the extracted archive bundle, not from any arbitrary place you might have copied them to using the files section. You'll need to set the execute bit in your archive in S3 or in your Git repository (see the sketch below).
What you have, as is, does this:
Copy all the files to /var/www/html.
Set permissions on the contents of /var/www/html/codedeploy-scripts to 777, but not on the directory itself (see the appspec.yml reference). This is also affected by umask, which you might be setting in /etc/profile.
Execute each of the scripts for the lifecycle events (as they occur) from the archive root. So your ValidateService script runs from <deployment-archive-root>/codedeploy-scripts/validate-service, not from /var/www/html/codedeploy-scripts/validate-service.
Note: ApplicationStop is special because it runs before the new archive bundle is downloaded.
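A minimal sketch of setting that execute bit before the revision bundle is created (paths assume the codedeploy-scripts layout above; use whichever applies to how you build the revision):

# set the execute bit in the working copy before zipping the revision for S3
chmod +x codedeploy-scripts/*

# or, if the revision comes from a Git repository, record the bit in the index as well
git add --chmod=+x codedeploy-scripts/*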
Without more details, I won't be able to speak to why setting your scripts to be executable fixed your issue, but the accepted answer shouldn't have resolved anything other than the log statement you were seeing.
Take a closer look at the log:
2015-10-13 16:36:23 WARN [codedeploy-agent(9918)]: InstanceAgent::Plugins::CodeDeployPlugin::HookExecutor: Script at specified location: codedeploy-scripts/validate-service is not executable. Trying to make it executable.
It's only a warning, not an error. The Code Deploy agent noticed that your validate_service.sh script wasn't executable and it was "Trying to make it executable". If we look at the relevant Code Deploy agent code, you'll see that the agent will chmod +x the script itself.
When you set your scripts to be executable, you only silenced this warning, and it shouldn't have affected anything else. Looking back at the Code Deploy agent code, in L106, if the agent wasn't able to make your scripts executable you would have seen an error in your logs.
To answer your question on the permissions, you have a misconfigured appspec.yml. When you say:
permissions:
  - object: /var/www/html/codedeploy-scripts
    owner: root
    mode: 777
    type:
      - directory
You are telling Code Deploy to set all files of type "directory" within /var/www/html/codedeploy-scripts to have permissions 777.
All of your scripts under codedeploy-scripts are of type "file" (not "directory"), which is why their permissions weren't set; and the permissions only apply to objects under the directory you specify, which is why the permissions on the codedeploy-scripts directory itself weren't set either.
Here's the description of the appspec.yml permission's type option from the AWS docs:
type – Optional. The types of objects to apply the specified permissions to. This can be set to file or directory. If file is specified, the permissions will be applied only to files that are immediately contained within object after the copy operation (and not to object itself). If directory is specified, the permissions will be recursively applied to all directories/folders that are anywhere within object after the copy operation (but not to object itself).
I'd like to expand on an issue mentioned by Jonathan Turpie which can create a very weird situation.
From the docs on ApplicationStop:
This deployment lifecycle event occurs even before the application revision is downloaded. ... The AppSpec file and scripts used for this deployment lifecycle event are from the previous successfully deployed application revision.
Now imagine this situation:
A revision was deployed with botched ApplicationStop script permissions. The deployment still went fine because a previous version was used.
A new revision is pushed and fails the ApplicationStop step (because it now tries to execute the botched script from step 1).
You notice your mistake, fix the code, publish a new revision, but it still fails with the same error!
At this point it's not possible to fix the error by deploying new code. You only have two options:
In the deployment settings, enable "Ignore Stop failures" (e.g. with the --ignore-application-stop-failures CLI flag [1]; see the example below).
Manually fix the file permissions in the previous successful deployment's root.
This concerns any stop script failures, not just permissions of course.
[1] https://docs.aws.amazon.com/cli/latest/reference/deploy/create-deployment.html
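For the first option, the flag is passed when creating the deployment; a minimal sketch with illustrative names:

aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name MyDeploymentGroup \
    --s3-location bucket=my-bucket,key=app.zip,bundleType=zip \
    --ignore-application-stop-failures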
Solving permission issues:
Assuming you are in the root directory where all your .sh scripts reside:
chmod +x ./*.sh
This makes all .sh files executable.
Add a script change_permissions.sh with the following contents:
#!/bin/bash
chmod -R 777 /var/www/html/
This gives your destination folder, /var/www/html/, executable permissions.
Finally, add the following to your appspec.yml file:
BeforeInstall:
  - location: change_permissions.sh
    timeout: 6
    runas: root
This applies the executable permission to the files on your EC2 instance at deploy time.