Failure: Build failed with an exception

Error: FAILURE: Build failed with an exception.
* What went wrong:
Task 'stacktrace' not found in root project 'GatePass14mar17'.
* Try:
Run gradle tasks to get a list of available tasks. Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
I am getting this error while running the application.
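For what it's worth, that message usually means Gradle parsed stacktrace as a task name, i.e. the option was passed without its leading dashes or without an actual task to run. A sketch of the difference (assembleDebug is an assumed task name for an Android project, not something from the post):

```shell
# Wrong: Gradle looks for a task literally named "stacktrace"
gradle stacktrace

# Right: --stacktrace is an option, passed alongside a real task
gradle assembleDebug --stacktrace
```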

Related

Template launch failed while running a streaming dataflow job using flex-template

I'm trying to automate the provisioning of a streaming job using Cloud Build. For the POC I tried https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/dataflow/flex-templates/streaming_beam
It worked as expected when I ran the commands manually.
When I add the commands to a cloudbuild.yaml file, the build is created successfully, but the Dataflow job fails each time with the error below:
Error occurred in the launcher container: Template launch failed. See console logs
This is the only error log that I get. I tried adding extra permissions to the Cloud Build service account, but that didn't help either.
Since there's no other information in the logs, I find it hard to debug as well.
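For reference, a minimal sketch of a cloudbuild.yaml step that launches a Flex Template job, along the lines of what the question describes (the bucket path, region, and job name here are illustrative placeholders, not values from the post):

```yaml
# cloudbuild.yaml sketch -- placeholder bucket/region/job name
steps:
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - dataflow
      - flex-template
      - run
      - streaming-beam-poc
      - --template-file-gcs-location=gs://MY_BUCKET/templates/streaming_beam.json
      - --region=us-central1
```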

mlUnitTest not working for MarkLogic Unit Testing

I have set up the unit testing framework for MarkLogic using the URL below:
https://github.com/marklogic-community/marklogic-unit-test
I am able to deploy my project using mlDeploy and to load my test modules using mlUnitTestLoadModules, but when I run the test cases via gradle mlUnitTest I get the error below:
Task ':mlUnitTest' is not up-to-date because:
Task has not declared any outputs.
Releasing connection
:mlUnitTest (Thread[Daemon worker Thread 16,5,main]) completed. Took 0.064 secs
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':mlUnitTest'.
> java.lang.NullPointerException (no error message)
* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to
get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Any suggestions?
We had this issue come up recently when migrating to marklogic-unit-test-1.0.0. There is a workaround, thanks to one of the MarkLogic gurus. If this is still an issue, let us know.
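One thing worth checking for an NPE from mlUnitTest (an assumption on my part, not confirmed in this thread) is that a test app server is configured, since ml-gradle reads the test port from gradle.properties:

```properties
# gradle.properties -- illustrative values; assumes ml-gradle's
# mlTestRestPort convention for creating a test app server
mlHost=localhost
mlRestPort=8014
mlTestRestPort=8015
```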

AWS Code Deploy Deployment Failed for shell scripts

I am trying to create a CodeDeploy deployment group using a CloudFormation stack. Every time I run the stack, I get script errors like "bad interpreter", rm/ll command not found, and \r\n (carriage return) errors. I tried converting the shell script files with dos2unix, zipping them, and uploading them to CodeDeploy, but with no success.
The following is the error I get in the logs:
2018-09-01 10:41:45 INFO [codedeploy-agent(2681)]: [Aws::CodeDeployCommand::Client 200 0.037239 0 retries] put_host_command_complete(command_status:"Failed",diagnostics:{format:"JSON",payload:"{\"error_code\":4,\"script_name\":\"BeforeInstall.sh\",\"message\":\"Script at specified location: BeforeInstall.sh run as user root failed with exit code 127\",\"log\":\"LifecycleEvent - BeforeInstall\\nScript - BeforeInstall.sh\\n[stderr]/usr/bin/env: bash\\r: No such file or directory\\n\"}"},host_command_identifier:"WyJjb20uYW1hem9uLmFwb2xsby5kZXBsb3ljb250cm9sLmRvbWFpbi5Ib3N0Q29tbWFuZElkZW50aWZpZXIiLHsiZGVwbG95bWVudElkIjoiQ29kZURlcGxveS91cy1lYXN0LTEvUHJvZC9hcm46YXdzOnNkczp1cy1lYXN0LTE6OTkzNzM1NTM2Nzc4OmRlcGxveW1lbnQvZC05V0kzWk5DNlYiLCJob3N0SWQiOiJhcm46YXdzOmVjMjp1cy1lYXN0LTE6OTkzNzM1NTM2Nzc4Omluc3RhbmNlL2ktMDk1NGJlNjk4OTMzMzY5MjgiLCJjb21tYW5kTmFtZSI6IkJlZm9yZUluc3RhbGwiLCJjb21tYW5kUG9zaXRpb24iOjMsImNvbW1hbmRBdHRlbXB0IjoxfV0=")
2018-09-01 10:41:45 ERROR [codedeploy-agent(2681)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location: BeforeInstall.sh run as user root failed with exit code 127 - /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:173:in `execute_script'
......
......
2018-09-01 10:41:45 INFO [codedeploy-agent(2681)]: [Aws::CodeDeployCommand::Client 200 0.018288 0 retries] put_host_command_complete(command_status:"Failed",diagnostics:{format:"JSON",payload:"{\"error_code\":5,\"script_name\":\"\",\"message\":\"Script at specified location: BeforeInstall.sh run as user root failed with exit code 127\",\"log\":\"\"}"},host_command_identifier:"WyJjb20uYW1hem9uLmFwb2xsby5kZXBsb3ljb250cm9sLmRvbWFpbi5Ib3N0Q29tbWFuZElkZW50aWZpZXIiLHsiZGVwbG95bWVudElkIjoiQ29kZURlcGxveS91cy1lYXN0LTEvUHJvZC9hcm46YXdzOnNkczp1cy1lYXN0LTE6OTkzNzM1NTM2Nzc4OmRlcGxveW1lbnQvZC05V0kzWk5DNlYiLCJob3N0SWQiOiJhcm46YXdzOmVjMjp1cy1lYXN0LTE6OTkzNzM1NTM2Nzc4Omluc3RhbmNlL2ktMDk1NGJlNjk4OTMzMzY5MjgiLCJjb21tYW5kTmFtZSI6IkJlZm9yZUluc3RhbGwiLCJjb21tYW5kUG9zaXRpb24iOjMsImNvbW1hbmRBdHRlbXB0IjoxfV0=")
What can be the possible reason for failing?
The logs indicate that there is some problem with your scripts, specifically BeforeInstall.sh. Something in that script is failing with an exit code of 127. I would recommend adding logging to that script to see where it's actually failing. Once you identify the command that's failing, you can see what exit code 127 means for that particular command.
If you want help debugging that particular script, you should open another question and provide the script, including the logs from when it gets run.
A note on CodeDeploy lifecycle hooks
In your case, your BeforeInstall script is failing, and that script is deployed with your application. However, if it had been your ApplicationStop script failing, it's important to understand that ApplicationStop uses the scripts from the last successful deployment, so if the last successful deployment had a faulty script, it can cause future deployments to fail until the documented recovery steps are followed.
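Given the [stderr] line `/usr/bin/env: bash\r: No such file or directory` in the log above, one likely culprit is leftover carriage returns in the scripts. A minimal, self-contained sketch of normalizing them before zipping (the file here is a stand-in for your actual hook scripts):

```shell
# Simulate a deploy script saved with Windows (CRLF) line endings
mkdir -p scripts
printf '#!/usr/bin/env bash\r\necho ok\r\n' > scripts/BeforeInstall.sh

# Strip the carriage returns in place; sed works even where dos2unix
# is not installed (dos2unix scripts/*.sh would be equivalent)
sed -i 's/\r$//' scripts/*.sh
```

After this, re-zip the bundle and upload it again; the `bash\r` interpreter error should disappear.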

Google Cloud Container Build trigger crashes during gradle build

I was trying to set up a build trigger for a Kotlin app that is built using Gradle. For that I put together the following Dockerfile:
FROM gradle:jdk8 as builder
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /home/gradle/project/Kuroji-Eventrouter-Server/build/libs/kuroji-eventrouter-server-*-all.jar kuroji-eventrouter-server.jar
ENTRYPOINT ["java", "-jar", "kuroji-eventrouter-server.jar"]
That file works on my machine with docker build, and the resulting image (pushed to Google Container Registry) starts normally; however, on Google's builders the RUN gradle shadowJar step crashes with a Gradle error:
Step 5/9 : RUN gradle shadowJar
---> Running in ddd190fc2323
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.

* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 3s
The command '/bin/sh -c gradle shadowJar' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
I tried building the image on Docker Hub and the same thing happened: https://hub.docker.com/r/usbpc/kuroji-eventrouter-server/builds/bnknnpqowwabdy82ydxiypc/
This is very confusing to me, as I thought containers should be able to run anywhere and not depend on the environment. What can I do to make Google build my container?
The problem was file permissions. Using the --stacktrace option, I found that the Gradle process didn't have permission to create a folder inside the sources.
The solution I would have liked is to use the --chown=gradle:gradle option on the COPY instruction; unfortunately this is not supported in Google Cloud yet.
So the solution is to add USER root before executing the Gradle build.
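Applied to the build stage of the Dockerfile above, the workaround looks roughly like this (only the USER root line is new; the rest mirrors the original):

```dockerfile
FROM gradle:jdk8 as builder
# COPY below creates root-owned files; switch to root so the Gradle
# build can create its output folders (workaround until
# COPY --chown=gradle:gradle is supported by the builder)
USER root
WORKDIR /home/gradle/project
COPY . .
WORKDIR ./Kuroji-Eventrouter-Server
RUN gradle shadowJar
```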

I am deploying with 'eb deploy' in AWS but getting the following error

I am using 'eb deploy' to deploy my commits but I am getting this error:
WARNING: You have uncommitted changes.
Creating application version archive "6fea".
Uploading: [##################################################] 100% Done...
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: [Instance: i-10d1f9ec] Command failed on instance. Return code: 126
Output: /bin/sh: ./scripts/update-ftp-dns.sh: /bin/sh^M: bad interpreter: No such file or directory.
container_command 07-update_ftp_dns in .ebextensions/03-vsftpd.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
INFO: New application version was deployed to running EC2 instances.
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
Please help me solve this.
The error message is a bit hidden, but it's in there:
Output: /bin/sh: ./scripts/update-ftp-dns.sh: /bin/sh^M: bad interpreter: No such file or directory.
If I had to guess, you have a line break with both a line feed and a carriage return in it. It's treating the carriage return character as if it's part of the name of the executable.
Make sure that you've converted the /scripts/update-ftp-dns.sh script so that it uses Unix line endings only.
See ./configure : /bin/sh^M : bad interpreter
I had something similar, and the cause was having Git's autocrlf value set to true. This means Git converts the file to a Windows-formatted one when git checkout is run, which unfortunately means that the Elastic Beanstalk tool tries to upload Windows-formatted files to your Linux server; this manifests in errors like the one above.
I fixed it by switching autocrlf to false and committing the relevant file again. Do be aware of the repercussions of this, however.
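A sketch of that fix, demonstrated in a scratch repository (the repo and file contents are simulated here; the script path is the one from the question):

```shell
# Scratch repo with a CRLF-formatted deploy script
cd "$(mktemp -d)" && git init -q
mkdir -p scripts
printf '#!/bin/sh\r\necho ok\r\n' > scripts/update-ftp-dns.sh

git config core.autocrlf false              # stop converting LF to CRLF
sed -i 's/\r$//' scripts/update-ftp-dns.sh  # strip carriage returns

# Commit the normalized file so future checkouts stay clean
git add scripts/update-ftp-dns.sh
git -c user.email=you@example.com -c user.name=You \
    commit -qm "Use Unix line endings for EB deploy scripts"
```

A per-repo alternative that survives other people's autocrlf settings is a `.gitattributes` line such as `*.sh text eol=lf`.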