In GoCD mvn clean install is giving 'mvn' is not recognized as an internal or external command but normal cmd project builds successfully - go-cd

In GoCD, mvn clean install gives the error 'mvn' is not recognized as an internal or external command, but in a normal cmd I am able to build the project successfully.
(Screenshot: error in the GoCD build)
(Screenshot: normal cmd build)

It looks like mvn is not in the PATH that the go-agent uses.
Try specifying an absolute path to the mvn executable in your GoCD task.
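For example, if Maven lives at C:\tools\apache-maven-3.8.6 (a hypothetical location), the task command would look roughly like this:
REM full path to the Maven launcher instead of relying on PATH
C:\tools\apache-maven-3.8.6\bin\mvn.cmd clean install
Alternatively, add Maven's bin directory to the PATH of the user the go-agent service runs as, and restart the agent so it picks up the change.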

Related

How to install Conan build tools in the Jenkins machine

I have an installation of Jenkins on Azure and wish to build a c++ project using Conan. Many examples show the following pipeline command to initiate Conan:
def conanClient = Artifactory.newConanClient()
however this throws an error:
sh: 1: conan: not found ERROR: Couldn't execute Conan task.
RuntimeException: Conan build failed with exit code 127
I assumed the newConanClient() would install Conan but that is not the case as verified by:
sh 'conan -v' resulting in conan: not found
From the JFrog documentation you would think there shouldn't be any problems as they say:
There is no need for any special setup for it, just install Conan and
your build tools in the Jenkins machine and call the needed Conan
commands.
https://docs.conan.io/en/latest/integrations/ci/jenkins.html?highlight=jenkins
So how does one "just install Conan" in Jenkins?
From the Documentation:
Conan can be installed in many Operating Systems. It has been extensively
used and tested in Windows, Linux (different distros), OSX, and is also
actively used in FreeBSD and Solaris SunOS. There are also several
additional operating systems on which it has been reported to work.
Based on your OS, you can install it from here. The newConanClient() method is part of the Artifactory plugin for Jenkins, but it does not install Conan itself.
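For example, on a Linux Jenkins agent a common route is to install it with pip (a minimal sketch, assuming Python and pip are already available on the agent):
# install Conan on the Jenkins agent via pip
pip install conan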
Now you can verify the installation
conan -v
and can then execute Conan commands in the Jenkins pipeline
sh 'conan build .'

AWS EB deploy fails: npm WARN config production Use `--omit=dev` instead

I'm trying to circumvent the well-known issue with the latest versions of NPM and AWS Elastic Beanstalk where npm install fails because it can't find node_modules. I'm using platform hooks with my NUXTJS application.
It fails when AWS Code Pipeline runs a deploy and returns with this warning:
[ERROR] An error occurred during execution of command [app-deploy] - [Use NPM to install dependencies]. Stop running the command.
Error: Command /bin/sh -c npm --production install failed with error signal: killed.
Stderr: npm WARN config production Use `--omit=dev` instead.
So, I've added platform hooks at the app root but it's still failing. Also, I have added an environment variable to the Elastic Beanstalk environment:
NODE_ENV=production
Here's what my platform hooks look like. I thought this would work but something is obviously wrong. Can anyone spot it? Thanks for any helpful tips.
The custom-prebuild-script.sh looks like this:
#!/bin/bash
mkdir node_modules
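For context, Elastic Beanstalk only runs platform hooks that sit under .platform/hooks/ in the application root and are marked executable; a sketch of the assumed layout for the prebuild script above:
# expected location inside the app bundle (hypothetical, based on the description)
.platform/hooks/prebuild/custom-prebuild-script.sh
# the hook must be executable
chmod +x .platform/hooks/prebuild/custom-prebuild-script.sh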

Virtualenv and django not working in bash on windows 10

I have a problem with using virtualenv and django in bash. If I type python -m venv env in cmd, then env\Scripts\activate, and then virtualenv - I get 'virtualenv' is not recognized as an internal or external command, operable program or batch file. If I do the same in bash I get bash: virtualenv: command not found. How do I fix this?
Try the following to resolve your issue.
1. Check all of the environment variables related to the software you need; at the very least, make sure its install directory is on PATH (see the quick check after this list).
2. Check the permissions on the files and folders used by the software.
3. Sometimes uninstalling and reinstalling the software with issues can solve problems quickly. If you have performed step 2 and still have errors, proceed to this step.
4. You may have missing dependencies. A good tool I have used on Windows is Dependency Walker; it will check whether any files or dependencies are missing, and you should be able to download them. An error message may say a file is not found when in fact a dependency of the software you are trying to run is missing.
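A quick way to run the PATH check from step 1 is to ask the shell where (or whether) it can find the command at all; a sketch that applies to virtualenv and django-admin alike:
REM cmd: prints the full path if the command is on PATH, otherwise reports it cannot be found
where virtualenv
REM bash equivalent: which virtualenv (or: command -v virtualenv)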
Try the following steps in the terminal; they may solve your problem.
In the terminal, use mkdir to make a directory for your project.
cd into your project folder/dir.
Type pip3 freeze; it will show all of the packages and dependencies installed at the global/system scope,
but we are going to use a venv where we will install only the packages and dependencies we need.
type python3 -m venv ./venv to create venv inside your current project folder, please ensure you are inside the folder before running this command
[if you are not using python 3, then the command will be python -m venv ./venv]
to activate the environment,
on mac, run source ./venv/bin/activate ||
on windows, run .\venv\Scripts\activate.bat [if it doesn't work, try to put your absolute path]
you can check what is installed inside the venv using pip freeze; you will see nothing inside a fresh venv
Now you can install Django inside the venv for your project
to deactivate the environment, just type deactivate
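Putting the steps above together on Windows (a minimal sketch; the project folder name is arbitrary and Python 3 is assumed to be on PATH):
REM create the project folder and a venv inside it
mkdir myproject
cd myproject
python -m venv venv
REM activate the venv; pip freeze now shows nothing because the venv starts empty
venv\Scripts\activate.bat
pip freeze
REM install Django inside the venv, and deactivate when done
pip install django
deactivate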

Error while setting up Movesense platform with Cmake commands

I've been trying to set up the Movesense platform on my Windows 10 machine and am facing issues with the CMake commands.
I pulled the movesense container using docker
docker pull movesense/sensor-build-env:latest
I cloned the movesense repo using the command below
git clone git@bitbucket.org:suunto/movesense-device-lib.git
Then I moved to the cloned folder
cd movesense-device-lib
Then I started the docker image on the terminal
docker run -it --rm -v c:/My/Project/Folder/movesense-device-lib:/movesense:delegated movesense/sensor-build-env:latest
Docker gave me a prompt and I ran the commands below
cd /movesense
mkdir myBuild
cd myBuild
Now I ran CMake, following the documentation step "Run the CMake (needs to be done only once unless you add files to the project). It's possible to build both the debug and release version. In both cases the command will contain the following:", using this command:
cmake -G Ninja -DMOVESENSE_CORE_LIBRARY=../MovesenseCoreLib/ -DCMAKE_TOOLCHAIN_FILE=../MovesenseCoreLib/toolchain/gcc-nrf52.cmake <sample_directory>
I created a sample folder named build1 and saved it. In place of <sample_directory>, I pasted "build" and executed the command.
But in return I get an error as
CMake Error: The source directory "/movesense/myBuild/build" does not appear to contain CMakeLists.txt. Specify --help for usage, or press the help button on the CMake GUI.
The objective is to create the project as a zip file and run it in Visual Studio. Please help me solve the issue. I've attached the links to the documents which I followed from Movesense.
Movesense Set up document
The "<sample_directory>" should be a path to the folder where the firmware source code is (i.e. the sample app folder). If you create the build folder as /movesense/myBuild and cd into it, the path would be ../samples/blinky_app if you are building the blinky_app -sample.
Full disclosure: I work for the movesense team

Running gcloud run deploy from inside Cloud Build results in error

I have a custom build step in Google Cloud Build, which first builds a docker image and then deploys it as a cloud run service.
This last step fails, with the following log output;
Step #2: Deploying...
Step #2: Setting IAM Policy.........done
Step #2: Creating Revision............................................................................................................................failed
Step #2: Deployment failed
Step #2: ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT.
Step #2: [name: "gcr.io/opencobalt/silo#sha256:fb860e758eb1957b90ff3761fcdf68dedb9d10f832f2bb21375915d3de2aaed5"
Step #2: error: "Invalid command \"/bin/sh\": file not found"
Step #2: ].
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
The build steps look like this;
["run","deploy","silo","--image","gcr.io/opencobalt/silo","--region","us-central1","--platform","managed","--allow-unauthenticated"]}
The image is built and exists in the registry, and if I change the last build step to deploy a Compute Engine VM instead, it works. Those build steps look like this;
{"name":"gcr.io/cloud-builders/gcloud","args":["compute","instances",
"create-with-container","silo","--container-image","gcr.io/opencobalt/silo","--zone","us-central1-a","--tags","silo,pharo"]}
I can also build the image locally but run into the same error when running gcloud run deploy locally.
I am trying to figure out how to solve this problem. The image works, since it runs fine locally and runs fine when deployed as a Compute Engine VM, the error only show up when I'm trying to deploy the image as a Cloud Run service.
(added) The Dockerfile looks like this;
######################################
# Based on Ubuntu image
######################################
FROM ubuntu
######################################
# Basic project infos
######################################
LABEL maintainer="PeterSvensson"
######################################
# Update Ubuntu apt and install some tools
######################################
RUN apt-get update \
&& apt-get install -y wget \
&& apt-get install -y git \
&& apt-get install -y unzip \
&& rm -rf /var/lib/apt/lists/*
######################################
# Have an own directory for the tool
######################################
RUN mkdir webapp
WORKDIR webapp
######################################
# Download Pharo using Zeroconf & start script
######################################
RUN wget -O- https://get.pharo.org/64/80+vm | bash
COPY service_account.json service_account.json
RUN export certificate="$(cat service_account.json)"
COPY load.st load.st
COPY setup.sh setup.sh
RUN chmod +x setup.sh
RUN ./setup.sh; echo 0
RUN ./pharo Pharo.image load.st; echo 0
######################################
# Expose port 8080 of Zinc outside the container
######################################
EXPOSE 8080
######################################
# Finally run headless as server
######################################
CMD ./pharo --headless Pharo.image --no-quit
Any advice warmly welcome.
Thank you.
After a lot of testing, I managed to get further. It seems that the missing /bin/sh file is a red herring.
I tried to change the startup command from CMD to ENTRYPOINT, since that was mentioned in the error, but it did not work. However, when I copied the startup instruction into a new file 'startup.sh' and changed the last line of the Dockerfile to;
ENTRYPOINT ./startup.sh
It did work. I needed to chmod +x the new file of course, but the strange thing is that ENTRYPOINT ./pharo --headless Pharo.image --no-quit gave the same error, and even ENTRYPOINT ["./pharo", "--headless", "Pharo.image", "--no-quit"] also gave the same error.
But having just one argument to ENTRYPOINT made Cloud Run work. Go figure.
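For reference, the wrapper was simply the old CMD line moved into a script (a sketch reconstructed from the description above):
#!/bin/bash
# startup.sh -- copied into the image, made executable with chmod +x,
# and referenced from the Dockerfile as: ENTRYPOINT ./startup.sh
./pharo --headless Pharo.image --no-quit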
It appears that Google Cloud Run has a dislike for the ubuntu:20.04 image. I have the exact same problem with a Play framework application.
The command
ENTRYPOINT /opt/play-codecheck/bin/play-codecheck -Dconfig.file=/opt/codecheck/production.conf
failed with
error: "Invalid command \"/bin/sh\": file not found"
I also tried
ENTRYPOINT ["/bin/bash", "/opt/play-codecheck/bin/play-codecheck", "-Dconfig.file=/opt/codecheck/production.conf"]
and was rewarded with
error: "Invalid command \"/bin/bash\": file not found"
The trick of putting the command in a shell script didn't work for me either. However, when I changed
FROM ubuntu:20.04
to
FROM ubuntu:18.04
the image deployed. At this point, that's an acceptable fix for me, but it seems like something that Google needs to address.
See also:
Unable to deploy Ubuntu 20.04 Docker container on Google Cloud Run
My workaround was to use a CMD directive that calls Python directly rather than a shell (either /bin/sh or /bin/bash). It's working well so far.
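In Dockerfile terms that means an exec-form CMD that invokes the interpreter directly instead of going through a shell, for example (the script name here is hypothetical):
# exec form: no /bin/sh is involved when the container starts
CMD ["python3", "main.py"]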