I am new to Jenkins, especially to using Python scripts in Jenkins. The problem I am facing is as follows:
I am trying to run a Python script from a file in a Jenkins post-build step. To my understanding I have added all the plugins required for that purpose, i.e. the Post-BuildScript plugin, the Python Jenkins plugin, etc.
Now when I build, the console output shows that an invalid script command caused the failure. I have attached the results below. Can anybody help me with this, please?
In the post-build step I am providing the full (absolute) path to the Python script file, i.e.
[screenshot: "Execute Python script" post-build step showing the script path]
[screenshot: console output results]
It may be useful to mention here that I have also tried using just the path, without python preceding it, and tried both forward and backward slashes in the path, without any success.
I have managed to resolve the issue. There are two parts to the solution:
First: if you want to run a simple Python script as a post-build action, add a post-build step of type "Execute Python Script" (this requires the post-build plugin to be installed). In the window created after adding the post-build step, you can put any Python command to run.
Second: when you would like to run a list of commands from a Python script file in that same post-build step window, make sure to put all the required Python files you want to execute into the Jenkins workspace's project directory (the directory of the project for which the job is running).
Moreover, for Python 2.7, to execute such a script file you simply need to write:
execfile("file.py")
(Note that execfile was removed in Python 3; there the equivalent is exec(open("file.py").read()).)
One more thing to remember: add the path to python.exe to the PATH environment variable.
Related
I have an EMR 5.28.1 cluster running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I am simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this:
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this by simply using command-runner.jar, but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes, master and core.
Thanks
The issue is the xxx.sh file's EOL/carriage-return type.
In other words, if it uses Windows line endings ("\r\n"), the script will not work and will return the "No such file or directory" error.
Convert it to the Unix type ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), hit save, and try again.)
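If you prefer the command line over Notepad++, a quick sketch of the same conversion (assuming the script is named as in the question; dos2unix may need to be installed first):
dos2unix install_python_libraries.sh
# or, without dos2unix, strip the trailing carriage returns with sed:
sed -i 's/\r$//' install_python_libraries.sh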
I am migrating a Django application from OpenShift v2 to v3 (in case you don't know, Red Hat is shutting down v2 on September 30th; see https://blog.openshift.com/migrate-to-v3-v2-eol/).
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all these Docker/Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app, yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its execute (x) permission. I log into the failing container in debug mode and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the execute permission on app.sh? Thank you.
Without looking into more detail: any S2I builder image will gladly use a custom run script that you supply to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it, and rebuild the app in OpenShift; it will automatically use your custom run script upon deployment.
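For illustration, a minimal sketch of such a .s2i/bin/run script (the gunicorn command line and the wsgi module name are assumptions; use whatever actually starts your Django app):
#!/bin/bash
# hypothetical replacement for app.sh: start the Django app directly
exec gunicorn wsgi --bind 0.0.0.0:8080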
This is the preferred way of starting applications using custom commands in OpenShift.
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not the builder pod. Deployed pods run under different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence the "Operation not permitted" error.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you create .s2i/bin/assemble in your project's source code and make it look something like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make app.sh executable and push it to your repo as such.
If Git does not track this permission change (as happened for me), you have to use git update-index --chmod=+x app.sh for it to work.
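A quick sketch of the full sequence, assuming app.sh sits at the repository root:
chmod +x app.sh
git update-index --chmod=+x app.sh
git commit -m "Make app.sh executable"
git push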
I'm trying to run a grunt build in the container_commands section of my .ebextensions/init.config file. The command in question looks like this:
10-run-grunt:
command: $NODE_HOME/bin/node $NODE_HOME/bin/grunt --gruntfile /tmp/deploy/application/Gruntfile.js build
Since you would usually run the grunt build command from within the application root, which I'm unable to do from container_commands, you can see I'm specifying the path to the gruntfile using the --gruntfile <path_to_gruntfile> option.
My question is: how do you get the path to application files from within a container_commands command? I assume there is an environment variable which represents this.
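For what it's worth, the Elastic Beanstalk documentation says container_commands run from the staging directory where the application source is extracted before deployment, so a throwaway debug command can reveal that path. A sketch in the same init.config style (the 00-debug-cwd name is just an arbitrary example):
00-debug-cwd:
  command: pwd && ls -la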
I'm trying to come up with a sensible solution for a build written using SCons, which relies on quite a lot of applications being accessible in a Unix-like way, using Unix-like paths, etc. However, when I try to use the SCons plugin or the Git plugin in Jenkins, it invokes the tools using something like cmd /c git.exe, and this will certainly fail, because Git was installed using Cygwin and is only known in the Cygwin shell, not in CMD. But even if I could make git and the rest available to cmd.exe, other problems arise: the Cygwin version of Git expects paths to have forward slashes and treats backslashes as escape characters. Idiotic Windows filesystem-related issues kick in too (I can't give Jenkins permission to delete my own files!).
So, is there a way to make Jenkins use only the Cygwin shell, and never cmd.exe? Or should I be prepared to run Linux in a VM to handle this?
You could configure Jenkins to execute a Cygwin shell with a specific shell command, as follows:
c:\cygwin\bin\mintty --hold always --exec /cygdrive/c/path/to/bash/script.sh
where script.sh executes all the commands needed for the Jenkins job.
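A minimal sketch of what such a script.sh might contain (the scons invocation is an assumption based on the build tool mentioned in the question):
#!/bin/bash
# change to the Jenkins workspace, translated to a Cygwin-style path
cd "$(cygpath "$WORKSPACE")"
# run the build; replace with whatever your job actually needs
scons build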
Just for the record, here's what I ended up doing:
Added a SYSTEM user to Cygwin: mkpasswd -u SYSTEM
Edited /etc/passwd, adding the newly created user's home directory to the record. Now it looks something like this:
SYSTEM:*:18:544:,S-1-5-18:/home/SYSTEM:
Copied my own user's configuration settings, such as .netrc, .ssh and so on, into the SYSTEM home. Then, from Windows Explorer, through an array of popups, I transferred ownership of all of these files to the SYSTEM user. One by one! I love Microsoft!
In Jenkins I now run a wrapper for my build that sets some other environment variables etc. by calling c:\cygwin\bin\bash --login -i /path/to/script/script
Eventually I gave this up because of other configuration difficulties and made the Jenkins service run under my own user rather than SYSTEM. Here's a blog post on how to do it: http://antagonisticpleiotropy.blogspot.co.il/2012/08/running-jenkins-in-windows-with-regular.html but, basically, you need to open Windows services, find the Jenkins service, open its properties, go to the "Log On" tab, and change the user to "This account".
One way to do this is to start your "execute shell" build steps with
#!c:\cygwin\bin\bash --login
The trick is of course that it resets your current directory so you need to
cd `cygpath $WORKSPACE`
to get back to the workspace.
Adding to thon56's good answer, this is helpful: set -ex
#!c:\cygwin\bin\bash --login
cd `cygpath $WORKSPACE`
set -ex
Details:
-e to exit on error. This is important if you want your jobs to fail on error.
-x to echo commands to the screen, if desired.
You can also use #!c:\cygwin\bin\bash --login -ex, but that echoes a lot of login steps that you most likely don't care to see.
When I build the project from the terminal using the xcodebuild command it succeeds, but when I try to run the same script from a cron task I receive the error:
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems cron does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to cron?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to those encountered with cron, namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, and thus the default keychains you expect will be available; otherwise, you are stuck with only the System keychain, despite the task running as your user.
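For illustration, a minimal sketch of such a LaunchDaemon plist (the label, user name, script path, and schedule below are all made-up placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- hypothetical label and build script path -->
    <key>Label</key>
    <string>com.example.nightlybuild</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/me/bin/nightly_build.sh</string>
    </array>
    <!-- run as your user, not root -->
    <key>UserName</key>
    <string>me</string>
    <!-- the key that makes the user's keychains available -->
    <key>SessionCreate</key>
    <true/>
    <!-- run nightly at 02:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
It would then be loaded with something like:
sudo launchctl load /Library/LaunchDaemons/com.example.nightlybuild.plist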
I found the answers (though not the currently top answer) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see which user the script is launched as.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (it runs as a non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like set_build_env.sh. It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH, etc. Then, in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
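For completeness, a rough sketch of what set_build_env.sh might contain (every value below is a made-up assumption; copy the real ones from your own .profile):
#!/bin/bash
# hypothetical build environment; adjust to match your .profile
export HOME=/home/dmitry
export PATH="$HOME/bin:/usr/local/bin:/usr/bin:/bin"
export CLASSPATH="$HOME/lib/build.jar"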