I'm using Hudson for the expected purpose of testing our Django application. In initial testing, I would deploy Hudson using the war method:
java -jar hudson.war
This worked great. However, we wanted to run the Hudson instance on Tomcat for stability and more flexibility around security.
Now, with Hudson running under Tomcat, it does not seem to recognize previously-recognized Python tools like virtualenv. Here's output from a test:
+ bash ./config/testsuite/hudson-build.sh
./config/testsuite/hudson-build.sh: line 5: virtualenv: command not found
./config/testsuite/hudson-build.sh: line 6: ./ve/bin/activate: No such file or directory
./config/testsuite/hudson-build.sh: line 7: pip: command not found
virtualenv and pip were both installed using sudo easy_install. Where are they?
virtualenv: /usr/local/bin/virtualenv
pip: /usr/local/bin/pip
Hudson now runs under the tomcat6 user. If I su to the tomcat6 user and check for virtualenv, the shell finds it. Thus, I am at a loss as to why Hudson doesn't.
I tried taking the commands out of the script and placing them line by line into the shell execute box in Hudson, and got the same issue.
Any ideas? Cheers.
You can configure your environment variables globally via Manage Hudson -> Environment Variables, or per machine via Machine -> Configure -> Environment Variables (or per build with the Setenv plugin). It sounds like you may need to set PATH and PYTHONPATH appropriately; at least, that's the simple solution.
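For example, a minimal "Execute shell" build step along these lines (a sketch; the path is the /usr/local/bin reported in the question, and the requirements file is hypothetical):
# make the easy_install'd tools visible to Hudson's non-login shell
export PATH=/usr/local/bin:$PATH
virtualenv ve
. ./ve/bin/activate
pip install -r requirements.txt   # hypothetical requirements file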
Edited to add: I feel as though the following is a bit of a rant, though not really directed at you or your situation. I think that you already have the right mindset here since you're using virtualenv and pip in the first place -- and it's not unreasonable for you to say, "we expect our build machines to have virtualenv and pip installed in /usr/local," and be done with it. Take the rest as you will...
While the PATH is a simple thing to set up, having different build environments (or relying on a user's environment) is an integration "smell". If you depend on a certain environment in your build, then you should either verify the environment or explicitly set it up as part of the build. I put environment setup in the build scripts rather than in Hudson.
Maybe your only assumption is that virtualenv and pip are in the PATH (because those are good tools for managing other dependencies), but build assumptions tend to grow and get forgotten (until you need to set up a new machine or user). I find it useful either to have explicit checks or to refer to explicit executable paths that are part of my defined build environment. An explicitly defined environment is especially useful when you have legacy builds or depend on specific versions of your build tools.
As part of builds where I've had environment problems (especially on Windows with Cygwin), I print the environment as the first build step. (But I tend to be a little paranoid, er, proactive.)
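Tying those two points together, a defensive preamble for a build script might look like this (tool names taken from the question above):
# dump the environment first so failures are diagnosable
env | sort
# fail fast if the tools the build relies on are missing
command -v virtualenv >/dev/null 2>&1 || { echo "virtualenv not on PATH" >&2; exit 1; }
command -v pip >/dev/null 2>&1 || { echo "pip not on PATH" >&2; exit 1; }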
I don't mean to sound so preachy; I'm just trying to share my perspective.
Just to add to Dave Bacher's comment:
If you set your path in .profile, it is most likely not executed when running Tomcat. The .profile (or whatever it is called on your system) is only executed when you have a login shell. To set the necessary environment variables, you have to use a different set of files. Sometimes they are called .env, and they exist at the global and user level. In my environment (AIX), the user-level .env file can have a different name (the name is set in the ENV variable, either in the global environment file (e.g. /etc/environment) or by a parameter when starting the shell).
Disclaimer: This is for the IBM AIX ksh, but should be the same for ksh on other systems.
P.S. I just found a nice explanation for .profile and .env from the HP site. Notice that they speak of a login shell (!) when they speak about the execution of the .profile file.
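To make the mechanics concrete, a sketch of what that looks like (file names and paths are assumptions; check your system's documentation):
# global level: /etc/environment (read on AIX for every login), e.g.
#   PATH=/usr/local/bin:/usr/bin:/bin
# user level: name the file via the ENV variable, e.g. in .profile:
export ENV=$HOME/.env
# then put the variables the build needs in $HOME/.env:
echo 'export PATH=/usr/local/bin:$PATH' >> $HOME/.env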
Related
I'm trying to set up a work environment on a new machine, and I am a bit confused about how best to proceed.
I've set up a new Windows machine and have WSL2 set up; I plan on using that with VS Code for my development environment.
I have a previous Django project that I want to continue working on, stored in a folder on a thumb drive.
Do I move the [Windows] project folder into the Linux file system, and everything is magically ready to go?
Will my previous virtual environment in the existing folder still work, or do I need to start a new one?
Is it better to just start a new folder via the Linux terminal and pull the project from GitHub?
I haven't installed pip, Python, or Django on either the Windows or the Linux side just yet.
Any other things to look out for while setting this up would be really appreciated. I'm trying to avoid headaches later by getting it all set up correctly now!
I would pull it from GitHub, and make sure you have the correct settings for line endings, since they differ between Windows and Linux. Just let Git manage these, though:
https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
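On the Linux/WSL side, the setting that document recommends typically boils down to one command (verify against the doc for your situation):
# commit LF and leave line endings alone on checkout (the usual Linux/WSL choice)
git config --global core.autocrlf input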
Some other suggestions:
Use a version manager in Linux to manage your Python versions, something like pyenv or asdf. It will make life easier; a sketch of the whole setup follows these suggestions.
Make sure to always create a virtual environment for everything, and don't pip install anything into your main Python. (I use direnv for virtualenv management.)
The single exception to the previous suggestion is pipx, which I do install in the main Python and then use to install things like CLI tools, black, isort, pip-tools, etc.
Configure VS Code to use the pipx-installed versions of black, flake8, etc. for linting purposes.
If you're using Docker, enable the WSL integration for your WSL flavour (probably Ubuntu). Note that Docker Desktop needs starting before your WSL session.
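A rough sketch of that setup inside WSL (version numbers, tool choices, and the requirements file are placeholders):
curl https://pyenv.run | bash       # install pyenv (asdf works too)
pyenv install 3.12.3                # placeholder Python version
pyenv local 3.12.3
python -m venv .venv                # one virtual environment per project
source .venv/bin/activate
pip install -r requirements.txt
pipx install black                  # CLI tools go through pipx, not the main Python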
This is a Django and Python question, and maybe just a general web development question.
What's the difference between using virtualenv vs. Vagrant vs. VirtualBox, etc.?
I'm kinda confused as to when to use which one. I've been using virtualenv this whole time, creating new virtual environments for different projects...
Is this the right way to do it?
One virtualenv per project?
I'm not really sure when and where Vagrant comes into play... Am I supposed to set up Vagrant and then use virtualenv?
This is probably a silly question, but... if I were working on this project with other people, would they too have to set up a virtualenv, just to collaborate?
Wouldn't it make more sense for us all to work on our local machines and then push to the main branch? I'm just kinda confused... I feel like I'm doing it all wrong...
Thanks for the replies everybody!
Virtualenv sets up a local sandbox for you to install Python modules into.
Vagrant is an automation tool for creating virtual machines.
VirtualBox is a free, open source environment for running virtual machines, like those created by Vagrant.
Virtualenv is really about all you'll need to do sandboxed development on your local machine. We use Vagrant at my work to automate the creation of VMs. This way new developers coming on to a project have basically zero configuration to do in order to start working.
If you're collaborating with other devs, they don't need to do any of the above to work on your Django project, but if there's a lot of configuration involved that can't be done with pip and a requirements.txt, then you might look at Vagrant to ease some of that automation.
But you are correct in your assumption that you can all just work on a local branch and push back to the repo. Everything else is just icing.
Virtualenv is a Python construct that holds a specific set of packages, separate from your system packages (that is, the Python and packages that came with your OS or that you installed globally).
VirtualBox is totally different -- it runs virtual machines, entire operating systems in a box.
I'm not familiar with Vagrant.
All you need is virtualenv. Create a new virtualenv for each project (they're very lightweight!). You need to do this because the whole point of virtualenv is to isolate the exact packages, and versions of those packages, that you need for your project. Then activate the virtualenv and use pip install to install the packages you need, presumably starting with Django itself.
Once you have all the packages you need, use pip freeze > requirements.txt to create a file called requirements.txt that records all of the packages you've decided to use.
When other people collaborate on your project, they can start a virtualenv, pull your code into it, and run pip install -r requirements.txt to replicate your environment. They can even modify requirements.txt, push that back to you via your version control system, and you can run pip install -r requirements.txt yourself to modify your environment to match their changes.
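A minimal session showing that round trip (names are placeholders):
virtualenv env
source env/bin/activate
pip install django
pip freeze > requirements.txt       # record the exact versions you chose
# a collaborator, inside their own virtualenv:
pip install -r requirements.txt     # replicate the environment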
This is all essential because without virtualenv, the problem of, for instance, having one project on your computer that requires Django 1.4 and one that requires Django 1.5 becomes very complicated.
Virtualenv is not an entire operating system in a box, just a python environment, so even if you are using it, you are still working on your local machine.
We use virtualenv and a Ubuntu virtual machine. Here's why:
virtualenv allows us to have isolated Python environments on a given operating system instance
Using an Ubuntu desktop in a virtual machine for our Python development mimics what the code will see when deployed on the server, which is also Ubuntu. This means that we understand precisely the external OS package dependencies and configuration. You don't get this easily when you use OS X or Windows for development and Linux for deployment.
One important point is that a virtual machine is portable. You can take a snapshot and deploy it elsewhere easily. With the Vagrant and Ansible combination, you can automate a remote deployment.
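A sketch of that flow (the box name is an assumption; pick an Ubuntu box matching your server):
vagrant init ubuntu/focal64   # write a Vagrantfile for the named box
vagrant up                    # create and boot the VM (in VirtualBox by default)
vagrant ssh                   # work inside the Ubuntu guest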
I'm a strong proponent of version control, and am starting work on a Django project. I've done a few before, and have tried a few different approaches, but I haven't yet found a decent structure that I actually feel comfortable with.
Here's what I want:
a) Source code checked into version control
b) Preferably the environment is not checked into version control (something like buildout or pip requirements.txt is fine for setting up the environment)
c) A reasonable "get a new developer going" story
d) A reasonable deployment story - preferably the entire deployment environment could be generated by a script on the server
It seems to me like someone has to have done this before, but many hours of searching have all led to half-baked solutions that don't really address all of these.
Any thoughts on where I should look?
Look at fabric to manage deployments.
This is what I use to manage servers/deployments with fabric: louis (it is just a collection of fabric commands). I keep a louisconf.py file with each project.
I'd recommend using a distributed VCS (git, hg, ...) instead of svn. The reason is that the ease of branching allows for several deployment schemes. You can have, for example, production and staging branches. Then you enforce, by convention, that the only merges into production happen from staging.
As for getting developers started quickly, you have it right with pip and requirements.txt. I think that also means you are using virtualenv, but if not, that's the third piece. I'd recommend getting a basic README in place. Have the first assignment of each developer who joins the project be to update the README.
The rough way to get someone on board is to have her check out the code, create a virtualenv, and install the requirements.
I'd recommend having a settings.py file that works with sqlite3, such that a new developer can use it to get going fast (i.e. right after installing the requirements). However, how you manage the different settings files depends on your project layout. There should be some set of default settings for new developers to use, though.
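Concretely, that onboarding story might look like this (the repo URL and the dev_settings.py name are hypothetical, and the sqlite3 block is just illustrative defaults):
git clone https://example.com/yourproject.git && cd yourproject
virtualenv ve
source ve/bin/activate
pip install -r requirements.txt
# hypothetical sqlite3 defaults a new developer can run immediately
cat > dev_settings.py <<'EOF'
from settings import *
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.db',
    }
}
EOF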
I keep a projects/ directory in my home directory (on Linux). When I need to start a new project, I make a new dir in projects/ with a short name that sufficiently describes the project; that dir becomes the root of a new virtualenv (with --no-site-packages) for that project.
Inside that dir (after I've created the venv, sourced it, and installed the copy of Django I'll be working with), I "django-admin.py startproject" a subdir, normally with the same short name. That dir becomes the root of my hg repo (with a quick hg init and ci), no matter how small the project.
If there's any chance of sharing the project with other developers (a project for work, for example), I include a pip requirements.txt at the repo root. Only project requirements go in there; django-debug-toolbar and django-extensions, staples for my dev workflow, are not project requirements, for example. South, when we use it, is.
As for the django project, I normally keep the default settings.py, possibly with a few changes, and add the local_settings convention to the end of it (try: from local_settings import *; except ImportError: pass). My and other devs' specific environment settings (adding django-extensions and django-debug-toolbar to installed apps, for example) go in local_settings.py, which is not checked in to version control. To help a new dev out, you could provide a template of that file as local_settings.py.temp, or some other name that won't be used for any other purpose, but I find that this unnecessarily clutters the repo.
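Spelled out, the hook appended to the end of settings.py is just:
cat >> settings.py <<'EOF'

try:
    from local_settings import *
except ImportError:
    pass
EOF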
For personal projects, I normally include a README if I plan on releasing it publicly. At work, we maintain Trac environments and good communication to get new devs up to speed on a project.
As for deployment, as rz mentioned, I hear fabric is really good for that kind of automated local/remote scripting, though I haven't really taken the chance myself to look into it.
For the uninitiated, a typical shell session for this might look like the following:
$ cd ~/projects/
$ mkdir newproj
$ cd newproj/
$ virtualenv --no-site-packages .
$ source bin/activate
(newproj)$ pip install django django-debug-toolbar django-extensions
... installing stuff ...
(newproj)$ django-admin.py startproject newproj
(newproj)$ cd newproj/
(newproj)$ hg init .; hg ci -A -m "Initial code"
I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I could use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. I'd therefore get no real advantage from using RPMs, and managing dependencies manually is cumbersome enough that I'd like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang ("bang") lines of any Python scripts written to the virtualenv's bin directory. These will end up containing the full path names used in your build environment, which probably won't be the same directory where you end up installing the virtualenv. So you may need to add some sed calls in your RPM's postinstall script to adjust the paths.
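For example, a postinstall scriptlet along these lines (every path here is an assumption about your build root and install prefix):
# rewrite build-time shebangs to the final install location
find /opt/myapp/ve/bin -type f -exec \
  sed -i 's|^#!/home/build/rpmbuild/.*/ve/bin/python|#!/opt/myapp/ve/bin/python|' {} +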
Is there a way to run Pinax without virtualenv?
I want to run it without virtualenv as I want to run it in a DjangoContainer on Media Temple's grid-hosting service. Their containers can scale up to 1 GB of dedicated memory, so I wouldn't have to worry about my own VPS or scaling issues. But their response was:
" because of the way the DjangoContainer works, you won't be able to configure your server to use your virtualenv. Essentially the DjangoContainer is a virtualized server (to which you don't have access other than the AccountCenter tools, or the 'mtd' command line tool) with the specific purpose of serving your Django applications. It mounts your django container folder so that it has your application code, but you cannot modify the version or location of python it uses. This probably means you'll have to use Pinax without virtualenv support, as the general idea of using virtualenv in this way would be to create a custom environment for your Pinax application, which as I mentioned here is impossible to instruct the server to use. "
As of 0.9a1, Pinax can be used without pinax-boot.py, which was the virtualenv dependency (we bundled it). Requirements are project-level and must be installed with pip. However, setup_project does enforce a virtual environment when installing requirements (it calls pip for you as a convenience; I would be open to not enforcing a virtual environment here). You can pass --no-reqs to setup_project, forcing it to skip dependency installation. You can then run pip yourself and install the requirements however you like.
Technically yes, but you would have to change quite a bit of the configuration that is handed out and hand-install a lot of libraries. Pinax has virtualenv as a very low-level, built-in assumption.
You can; all you need to do is find out what is in the virtualenv. Set it up, install yolk in the virtualenv, and type yolk -l to see what you need to install to get it to work.
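That inspection might look like this, inside the existing virtualenv:
pip install yolk   # yolk lists the installed packages
yolk -l            # note each package, then install the same set on the target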