How to publish a directory with Ring middleware? - clojure

I am trying to use Clojure + Compojure + Ring in combination with the qooxdoo JS library. This is actually going well, but qooxdoo runs in two modes: "build" (which works for me) and "source" (not so good). In the latter case, the JS generated by qooxdoo hardcodes references (well, using relative addresses like ../../..) back to the qooxdoo installation, and at run time it asks for something like:
http://localhost:3000/opt/qooxdoo-5.0.1-sdk/framework/source/class/qx/bom/client/OperatingSystem.js
...since I have the library installed under /opt/qooxdoo-5.0.1-sdk.
Serious sanity check: if I directly open the index.html in the browser, it works fine. So it seems I just have to figure out how to serve the static files under the /opt library install.
I have tried wrap-file from ring.middleware.file because that sounds like what I want, but no matter what path I give it I get hundreds of 404s as it tries to pick up each framework file individually from the installed library.
I can work OK under "build" (qooxdoo cobbles together a single minified .js that I serve with wrap-resource), but on occasion I need to run in source mode to find JS bugs.
Am I missing something simple?

The correct way to handle this is to configure Qooxdoo with the URIs you would like to use. By default the source build just uses relative paths, but you can easily override this by editing config.json.
In your config.json you will have a "jobs" section containing a "libraries" section, which contains a "library" array. Your application is a library, as is Qooxdoo, as is any contrib, so it will look something like this:
"jobs" : {
"libraries" : {
"=library" : [ {
"manifest" : "${QOOXDOO_PATH}/framework/Manifest.json"
}, {
"manifest" : "Manifest.json"
}
},
Each "library" object can have a "uri" property, so for your example you probably want something like this:
"jobs" : {
"libraries" : {
"=library" : [ {
"manifest" : "${QOOXDOO_PATH}/framework/Manifest.json",
"uri" : "/opt/qooxdoo-5.0.1-sdk"
}, {
"manifest" : "Manifest.json"
}
},

Simple indeed: (wrap-file "/")
In development/source mode qooxdoo works off the installation directory instead of code pulled into my local tree, and does so by hardcoding relative paths that resolve to an absolute /opt/qooxdoo-etc path.
This looks to the server like a relative "opt/qooxdoo..." file request, so I had to offer root ("/") as a search directory.
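For the record, wrap-file wraps a handler and takes the root directory as its second argument, so the full wiring looks roughly like this (a minimal sketch; myapp.handler and app-routes are placeholder names for your own namespace and Compojure routes):
(ns myapp.handler
  (:require [compojure.core :refer [defroutes GET]]
            [ring.middleware.file :refer [wrap-file]]))

(defroutes app-routes
  (GET "/" [] "index"))

;; wrap-file first tries to resolve each request as a file under the given
;; root ("/"), so /opt/qooxdoo-5.0.1-sdk/... hits the real install; anything
;; that is not a file falls through to app-routes.
(def app
  (wrap-file app-routes "/"))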

(wrap-file "/") is a serious security issue since you're offering the root directory for all.
What you can potentially do is to make a dedicate directory to server your static file, and serve your content from there.
su
mkdir -p /var/clojure/static/opt
cd /var/clojure/static/opt
ln -s /opt/qooxdoo-5.0.1-sdk qooxdoo-5.0.1-sdk
chown -R YOUR-USER-ID /var/clojure/static/opt
And use the middleware (wrap-file "/var/clojure/static") to serve your files.
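In Ring terms that looks something like this (a sketch reusing the app-routes handler and the wrap-file require from the earlier answer; the :allow-symlinks? flag is an assumption about your Ring version):
;; Only files under /var/clojure/static are ever exposed; the symlink created
;; above makes a request for /opt/qooxdoo-5.0.1-sdk/... resolve inside it.
;; Depending on your Ring version you may need :allow-symlinks? true, since
;; file responses do not follow symlinks by default.
(def app
  (wrap-file app-routes "/var/clojure/static" {:allow-symlinks? true}))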

There is a section in the manual that deals with the issue of serving a source version through a web server.
I guess you found the basic insight yourself: the web server must export a root where the relative paths used in the generated loader match URL paths under that web server. (The rationale behind this is that the source version uses all the JS files from all involved libraries directly from disk.)
In the worst case that might mean you need to export the file system root ("/") from your web server. As you are doing this on a local development machine this shouldn't be much of a problem. If security is a concern you might want to collect your qooxdoo environment under some innocent path like /home/kenny/devel/qooxdoo, containing the qooxdoo SDK, your app, and all libs/contribs you might be using.
If you follow the above link you will also find some help from the qooxdoo tool chain. E.g. if you run generate.py source-httpd-config it will tell you which path on your local disk is the closest parent directory encompassing all necessary libraries, i.e. the one that needs to be exported from your web server for the source version to work.
Alternatively, as John wrote, you can export each qooxdoo lib through an individual path under your web server, and then tell your main application where to find it. You might actually need to give a proper URL like http://localhost/libs/qooxdoo-5.0.1-sdk/framework, not just a file system path as John suggests. Also remember that you have to go as far down as the directory where the Manifest file resides (hence the above path ending in .../framework). See this manual section for a deep dive.
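If you go this per-library route with Compojure, compojure.route/files can map a URL prefix onto a directory on disk. A sketch (the /libs prefix, namespace, and routes are example names, not anything qooxdoo requires):
(ns myapp.handler
  (:require [compojure.core :refer [defroutes routes GET]]
            [compojure.route :as route]))

(defroutes app-routes
  (GET "/" [] "index"))

(def app
  (routes
    ;; Serve http://localhost/libs/qooxdoo-5.0.1-sdk/... from the SDK install.
    (route/files "/libs/qooxdoo-5.0.1-sdk" {:root "/opt/qooxdoo-5.0.1-sdk"})
    app-routes))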

Related

External jars with Dropwizard

I am trying to write a Dropwizard application, and its documentation tells me that I need to ship everything as an uber jar.
However, my application needs to support multiple databases, which requires multiple JDBC driver jars on my classpath, none of which are expected to ship with the application. Users are expected to place the appropriate JDBC jar, like mysql-connector-java-5.1.39.jar, into a particular folder on their own.
After reading Dropwizard's documentation I am not sure if this kind of usage is supported. Does anyone have experience making it to work this way?
Since Java 6, you can use wildcards in classpaths.
With the application plugin, the generated bin folder will have a start script that sets the classpath. What we want is, instead of listing every possible jar in the start script, to simply include all of them with a wildcard.
Note: You can also do the same thing with different folders if you want the classpath in a different location.
This can be achieved most easily as follows (in a workaround manner, since there are problems with this plugin in my version). In build.gradle you do:
startScripts {
    doLast {
        def windowsScriptFile = file getWindowsScript()
        def unixScriptFile = file getUnixScript()
        // Swap the generated jar-by-jar CLASSPATH for a wildcard on the lib folder
        windowsScriptFile.text = windowsScriptFile.text.replaceAll('CLASSPATH=.*', 'CLASSPATH=\\$APP_HOME/lib/*')
        unixScriptFile.text = unixScriptFile.text.replaceAll('CLASSPATH=.*', 'CLASSPATH=\\$APP_HOME/lib/*')
    }
}
This will wildcard your lib folder in the start scripts. When starting up, your classpath will simply be
lib/*
When you drop jars into that folder, they will automatically be picked up (at startup, not at runtime).
I hope this helps,
Artur

Using AsConfigured and still be able to get UnitTest results in TFS

So I am running into an issue when I go to build my projects using the TFS build controller: with the Output location "AsConfigured" it will not detect my unit tests. Let me give a little info on my setup.
TFS 2013 Update 2, Default Process Template
Here are a few screenshots that can hopefully help fill in what I can't in typing. I am copying my build output to a file share on our network so that other utilities can use the output. I don't want to use "PerProject" or "SingleFolder" because they mess up the file structure we have configured (both of these will run the tests). So I have the files copied to a folder named "SingleOutputFolder" which is a child of the DropLocation. I would like to be able to run the tests from the drop folder or from the bin folder for each of my tests (I don't care which). However it doesn't seem to detect/run ANY of the tests. Any help would be greatly appreciated. Please let me know if you need any additional information.
I have tried using **\*test*.dll, Install\SingleFolderOutput\**\*test*.dll, and $(TF_BUILD_DROPLOCATION)\Install\SingleFolderOutput\*test*.dll
But I am not sure what variables are available, nor do I understand what the scope of its execution is.
Given that you're using the Build Output location set to AsConfigured, you have to change the default values of the Test sources spec setting to allow the build to find the test libraries in the bin folders. Here's an example.
If the full path to the unit test libraries is:
E:\Builds\7\<TFS Team Project>\<Build Definition>\src\<Unit Test Project>\bin\Release\*test*.dll
use
..\src\*UnitTest*\bin\*\*test*.dll;
This question was asked on MSDN forums here.
MSDN Forums Suggested Workaround
The suggested workaround in the accepted answer (as of 8 a.m. on June 20) is to specify the full path to the test projects' binary folders. For example:
C:\Builds\{agentId}\{teamProjectName}\{buildDefinitionName}\src\{solutionName}\{testProjectName}\bin*\Debug\*test*.dll*
which really should have been shown as
{agentWorkingFolder}\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
However, this approach is very brittle, for the following reasons:
1. Any new test projects you add to the solution will not be executed until you add them to the build definition's list of test sources.
2. It will break under any of the following circumstances:
- the build definition is renamed
- the working folder in the build agent properties is modified
- you have multiple build agents, and a different agent than the one you specified in {id} runs the build
Improved Workaround
My workaround mitigates the issues listed in #2 (can't do anything about #1).
In the path specified above, replace the initial part:
{agentWorkingFolder}
with
..
so you have
..\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
This works because the internal working directory is apparently the \binaries\ folder that is a sibling of the \src\ folder. Navigating up to the parent folder (whatever it is named, we don't care) and back in to \src\ before specifying the path to the test projects binaries does the trick.
Note: If you have multiple test projects, you add additional entries, separated with semicolons:
..\src\{relativePathToTestProjectONEBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTWOBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTHREEBinariesFolder}\*test*.dll;
What I ended up doing was adding a post-build event to each test project that copies all of the test .dlls into a staging folder inside the specific build, basically equivalent to where they would go in a SingleFolder build.
if "$(TeamBuildOutDir)" == "" (
echo "Building Interactively not in TFS"
) else (
echo "Building in TFS"
xcopy "$(TargetDir)*.*" "$(TeamBuildBinaries)\" /Y /E /S
)
Then I added an MSBuild parameter in the build definition that tells it to drop the binaries in the folder TFS looks for them in:
/p:TeamBuildBinaries="$(TF_BUILD_BINARIESDIRECTORY)"
Kept the default Test assembly file specification:
**\*test*.dll
See this link for information on the variable I used and the relative path it lives at.
Another solution is to do the reverse.
Leave all of the files in the root so that all of the built-in functionality works. There is more than just test execution in there: what about static code analysis, impact analysis, among others? You would have to do something custom for all of them.
Instead, use a pre-drop PowerShell script to create your Install arrangement from the root files.
If it is an application, then you can use the _ApplicationFolder NuGet package to create a _PublishApplications folder, the same as you get for web applications.

Deploying an executable with a configuration file

I'm new to deploying programs written in C/C++ on Linux and I'm wondering what you'd do in this situation.
I have a binary file (compiled with GNU Make) that needs to read a config file (such as myprogram.conf). But when I write a Makefile to deploy this file to /usr/bin/, where should the config file go? And how does the executable know where it is?
You have endless options, but the best way depends on a couple of things. First, is it a user-specific configuration file, or is it global to all users?
If it's user specific, you could, for example, keep it in ~/.myprogram/config.file and have the program check there. As a service to your users, it's up to you to decide what to do if it's not found -- perhaps copy a default config there from somewhere else, or generate a default, or use hard-coded default options, or display a configuration wizard, or just fail. That's entirely up to you.
If it's global, the traditional place to put it on Linux is in /etc, e.g. /etc/config.file or /etc/myprogram/config.file. See Linux File System Structure. You will generally always have a /etc on Linux. Handling a situation where the file does not exist is the same as above - there's no "right" way to handle that, it's based purely on how convenient you want to make it for a user.
What I usually do for global config files is put them in /etc/wherever on install, have the program default to loading the config file from /etc/wherever, but also give a command line option to override the configuration file (especially useful for testing or other situations).
What I usually do to handle missing config files depends entirely on the application. I'll generally either have hard-coded defaults (if that's appropriate) or simply fail and direct the user to some documentation describing a config file (which I find adequate in situations where my installer installs a config file).
Hope that helps.
It kind of depends on what the configuration parameters are, and whether they are "per system" or "per user" or "per group" or ...
System configurations typically live somewhere in /etc/.... The same directory that the program lives in is a very good place too.
User configurations go in the home directory of the user.
Group configurations are the trickiest, as you'll probably need to come up with a scheme where there is a configuration file per "group". /etc/myprog/groups/<groupname>/config or something similar would work.
On Linux, the usual location for configuration files is '/etc', so it is acceptable to deploy a configuration file like /etc/myprog.conf. That requires root privileges, however. Other good options include putting a configuration file in the user's home directory, making it something like ~/.myprog.conf, or ~/.myprog/.conf if you want a folder where you can keep several config files, a cache, or anything else.
As for how the executable knows where the file is, one solution is to look for the file in several common locations. For example, if you decided to place your config in the user's home directory, look for it there first; if it's not found, look under /etc. And allow a special command line argument that lets a different config file be loaded: so, say, a plain invocation of myprog checks for a config file in the home folder, but myprog -c /some/path/config will use /some/path/config as the file. It's also a good idea to have some default settings to fall back on if there is no valid config file anywhere.
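That lookup order is straightforward to sketch; here it is in Clojure (the language of the main question on this page) rather than C, with find-config and the myprog.conf names being hypothetical:
(require '[clojure.java.io :as io])

(defn find-config
  "Return the first config file that exists: an explicit -c path if given,
  then ~/.myprog.conf, then /etc/myprog.conf. nil means fall back to defaults."
  [cli-path]
  (->> [cli-path
        (str (System/getProperty "user.home") "/.myprog.conf")
        "/etc/myprog.conf"]
       (remove nil?)
       (filter #(.exists (io/file %)))
       first))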
The config file can go anywhere, but I'd try to put it in the same directory as any other files the program will read or write.
As for how the executable will find it, I'd pass the config file's path to the executable on the command line as an argument, with a default value of "." (which is the current directory, the one you're in when you launch the executable).

Django tutorial path: TEMPLATE_PATH dir on a Mac networked computer

I'm (slowly!) working my way through the Django tutorial, and I've reached the point in Part 2 where I'm supposed to set the template_dir. I'm on a Mac (at work) where my user profile resides on a server, and I can't figure out how to set the path.
The tutorial files are in a folder called "tutorialshell", inside a folder called "Django", which is a first-level folder inside my user folder "mattshepherd". That folder is the native folder when I launch the Terminal, for instance: it always starts me inside "mattshepherd".
I've tried
"~/Django/tutorialshell/templates"
and
"home/Django/tutorialshell/templates"
with no luck so far. I imagine there's some trick to doing this, as the files I'm trying to link to are on the network drive in my user folder, not on my local hard drive. Advice?
You want the absolute, not relative path. If you go to ~/Django/tutorialshell/templates in your terminal and then type pwd, it will tell you the full path to that folder. That's the value you should enter for the path.
Also: I assume you're actually talking about TEMPLATE_DIRS? If so, keep in mind that it's a list of paths, so it should look like:
TEMPLATE_DIRS = (
    "/path/to/Django/tutorialshell/templates",  # don't forget that trailing comma!
)
/Users/mattshepherd/Django/tutorialshell/templates
Please keep in mind that there is a LIST of template directories, as mentioned by Jordan. The above location should work. On a Mac, user home directories are located at /Users/ + yourusername.
It might be /home/ if /Users/ does not work.
As others have said, you need the absolute path. But I would really suggest you build it relative to the settings file instead; it will make your code much more flexible. There are many ways to do it; this is what I usually do:
import os
# Absolute filesystem path to the directory containing this file
ROOT = os.path.abspath(os.path.dirname(__file__))
then for the templates just do:
os.path.join(ROOT, 'templates')
You said you are using a Mac, but building the path with os.path.join like this (rather than hardcoding "/" or "\\" separators) also keeps your code portable in case you are ever on Windows.

What should my LESS #import path be?

Here's the scenario:
I'm running Django 1.3.1, utilizing staticfiles, and django-compressor (latest stable) to, among other things, compile LESS files.
I have an "assets" directory that's hooked into staticfiles with STATICFILES_DIRS (for project-wide static resources). In that directory I have a "css" directory and in that a "lib.less" file that contains LESS variables and mixins.
So the physical path is <project_root>/assets/css/lib.less and it's served at /static/css/lib.less.
In one of my apps' static directory, I have another LESS file that needs to import the one above. The physical path for that is <project_root>/myapp/static/myapp/css/file.less and it would be served at /static/myapp/css/file.less.
My first thought was:
#import "../../css/lib.less"
(i.e. based on the URL, go up two levels from /static/myapp/css to /static/, then traverse down into /static/css/lib.less).
However, that doesn't work, and I've tried just about every combination of URLs and physical paths I can think of, all of which give me FilterErrors in the template, resulting from not being able to find the file to import.
Anyone have any ideas what the actual import path should be?
After tracking down exactly where the error was coming from in the django-compressor source, it turns out to be passed directly from the shell. That clued me into removing all the variables and literally just trying to get the lessc compiler to parse the file.
It turns out it wants a relative path from the source file to the file to be imported, in terms of the physical filesystem path. So I had to back all the way out to my <project_root> and then reference assets/css/lib.less from there. The actual import that finally worked was:
#import "../../../../assets/css/lib.less"
What's very odd though is that lessc would not accept an absolute filesystem path (i.e. /path/to/project/assets/css/lib.less). I'm not sure why.
UPDATE (02/08/2012)
Had a complete "DUH" moment when I finally pushed my code to my staging environment and ran collectstatic. The #import path I was using worked fine in development because that was the physical path to the file then, but once collectstatic has done it's thing, everything is moved around and relative to <project_root>/static/.
I toyed with the idea of using symbolic links to try to match up the pre and post-collectstatic #import paths, but I decided that that was far too complicated and fragile in the long run.
SO... I broke down and moved all the LESS files together under <project_root>/assets/css/, and rationalized moving the LESS files out of the apps: since they're tied to a project-level file in order to function, they're inherently project-level themselves.
I'm sort of in the same bind, and this is what I've come up with to get the most recent versions of compressor and lessc to integrate with staticfiles. Hopefully this will help some other people out.
As far as I can tell from experimenting, lessc doesn't have a notion of absolute or relative paths. Rather, it seems to maintain a search path, which includes the current directory, the containing directory of the less file, and whatever you pass to it via --include-path.
so in my configuration for compressor I put
COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc --include-path=%s {infile} {outfile}' % STATIC_ROOT),
)
Say, after running collectstatic I have bootstrap living at
STATIC_ROOT/bootstrap/3.2.0/bootstrap.css.
Then from any less file, I can now write
#import (less, reference) "/bootstrap/3.2.0/bootstrap.css"
which allows me to use the bootstrap classes as less mixins in any of my less files!
Every time I update a less file, I have to run collectstatic to aggregate the files in a local directory so that compressor can give lessc the right source files to work on. Otherwise, compressor handles everything smoothly. You can also use collectstatic -l to symlink, which means you only need to collect the files when you add a new one.
I'm considering implementing a management command to smooth out the development process that either subclasses runserver to call collectstatic each time the server is reloaded, or uses django.utils.autoreload directly to call collectstatic when things are updated.
Edit (2014/12/01): My approach as outlined above requires a local static root. I was using remote storage with offline compression in my production environment, so deployment requires a couple of extra steps. In addition to calling collectstatic to sync the static files to the remote storage, I call collectstatic with a different Django config file that uses local storage. After I have collected the files locally, I can call compress, having configured it to upload the resulting files to remote storage but to look in local storage for source files.