I've been looking for a web server for my project but haven't found one that satisfies me. I need an HTTP server for Windows that supports compiled CGI scripts (exe) and can use relative paths. It would be a bonus if the server were as minimal/lightweight as possible.
The hardest part of my search so far has been finding a server that supports both CGI and relative paths. By relative path I mean relative to the server root directory. I want to be able to pack the server along with my project, so the paths in the conf files cannot be absolute.
The only one I've gotten to satisfy every criterion is Abyss Web, but its license is proprietary and only free for personal use.
EDIT:
I have found the error in my ways. I started the process via cmd while at the root, so the relative paths Apache was using from the conf were resolved relative to where I was: the root. By changing into the server's root directory and running the process there, everything works as gbjbaanb has mentioned. But I suppose the context of my question may still be valid: if I were to run the server as a process from my program (C# .NET), what would the 'current directory' be then? Would I have to make sure I've changed the current directory prior to launching it?
What's wrong with Apache? You can set DocumentRoot to any directory (though I've not tried it for Windows on C:)
Relative paths also work against the web server's root directory: just don't begin the directive with a / and it works.
I quickly booted up a mock version of the server I'm supposed to use, and it seems that what gbjbaanb said is valid for Windows as well. As with *nix, the relative paths are resolved against whatever the current directory is when Apache is launched. So for Windows, just make sure the current directory is set to the one you want Apache's paths to be relative to. For .NET, you call System.IO.Directory.SetCurrentDirectory() or set System.Environment.CurrentDirectory appropriately. I suppose for *nix, you would either cd into the directory before running or use chroot.
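For illustration, here is a minimal C# sketch of the launcher side (the folder layout and file names are made up; the point is only that the server process ends up with the right working directory):

using System;
using System.Diagnostics;
using System.IO;

class ServerLauncher
{
    static void Main()
    {
        // Hypothetical layout: the bundled Apache lives in an "apache" folder next to our executable.
        string serverRoot = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "apache");

        // Option 1: change the launcher's own current directory before starting the server.
        Directory.SetCurrentDirectory(serverRoot);

        // Option 2 (usually cleaner): give only the child process that working directory,
        // so relative paths in its conf resolve against serverRoot without touching our own state.
        var psi = new ProcessStartInfo
        {
            FileName = Path.Combine(serverRoot, "bin", "httpd.exe"),
            WorkingDirectory = serverRoot,
            UseShellExecute = false
        };
        Process.Start(psi);
    }
}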
We are building Qt 5.10 internally, and installing it to a given prefix on the build environments.
We would like to be able to relocate the installation (notably, but not only, for distribution). We are aware of qt.conf, as pointed out by this answer.
Yet, is there a maintained way to directly edit the values of those hardcoded paths in the installed files?
EDIT:
More rationale on why we think qt.conf is inferior to directly patching the binaries.
On development machines, it means that instead of simply patching the installed binaries once, we have to provide a configuration file in each folder containing an application depending on Qt.
Even worse than that, we discovered through failures (and the help of this post) that qtwebengineprocess.exe, in qtprefix/bin, expects its own qt.conf file; otherwise it will use the paths hardcoded in the libraries. This means that we have to touch the library folder anyway, in order to edit the configuration file so that it matches the folder location on each development machine.
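For reference, the qt.conf we have to drop next to each executable boils down to something like the following (shown here as it would sit in the prefix's bin folder, so the prefix is one level up; the actual value has to match each machine's layout):

[Paths]
Prefix = ..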
I was wondering whether there is a convention for storing config files in C++. At the moment I am storing the config files in the same directory as the source code that uses them. When building, I make sure via the CMakeLists to copy them to the correct location so I can just access them in a relative way (e.g. "config.cfg" instead of "/foo/bar/config.cfg") for convenience.
The practice for config files is not portable and is operating-system dependent. You also have to ask yourself whether your configuration is defined per installation/system or per user.
In general, it is a very bad idea to store config in the same directory as your executable, at least once development is finished. Executables may be shared between several users and should therefore, for security reasons, be located in directories that are write-protected for everybody but the system administrator.
For unix/linux, you could for example consider:
/etc or a subfolder thereof, if your configuration is per installed system,
~/ if it's user defined configuration. The usual practice would be to start the filename with a dot. This article will tell you more.
For Windows systems, you should consider:
the registry, which is the usual approach nowadays. Of course, this uses the Windows API and is not portable at all.
a subfolder of C:\ProgramData or C:\Users\All Users\AppData\Local if your configuration is per installed system,
a subfolder of C:\Users\%USERNAME%\AppData\Local for the user's own configuration.
This SO question shows how to find the right folders.
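The question is about C++, where the usual route to these folders is the Win32 SHGetKnownFolderPath API; purely as a sketch of the same idea, and using the C# that appears elsewhere in this document, the locations can be resolved like this (the vendor and file names are made up):

using System;
using System.IO;

class ConfigLocations
{
    static void Main()
    {
        // Per-machine configuration, e.g. C:\ProgramData\MyVendor\MyApp\config.cfg
        string systemConfig = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            "MyVendor", "MyApp", "config.cfg");

        // Per-user configuration, e.g. C:\Users\<name>\AppData\Local\MyVendor\MyApp\config.cfg
        string userConfig = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "MyVendor", "MyApp", "config.cfg");

        Console.WriteLine(systemConfig);
        Console.WriteLine(userConfig);
    }
}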
Currently I am messing around with a Click-Once WPF application. That application is some third-party application that was not developed by me. I also do not have access to its sources.
It is run on a Windows server periodically and automatically (using a self-made launcher written in standard C++) by executing the corresponding *.appref-ms link that was placed in the Start Menu path when the application was installed. This works fine.
Due to periodically arising problems with that application, my launcher needs to wipe all of its configuration files before starting it, so I get a well-defined run every time. Those files are placed in one of the application's folders. The config path for its settings reads like this (I found it by searching the AppData tree manually):
C:\Users\<UserName>\AppData\Local\Apps\2.0\Data\WM4WPKCW.P5Z\67QVXD6C.0NT\<app>_f6187a2321850a68_0003.0004_1a67f9f1633c43fc\Data\AppFiles\
Please note that this config path is pretty different from the application path (which uses differently named folders):
C:\Users\<User>\AppData\Local\Apps\2.0\5HN2CKMO.MPL\YOL20MYR.O8L\<app>_f6187a2321850a68_0003.0004_f6ab8c93b3a43b7c\
Since this config path changes on each update of the Click-Once application, I need to find it by code (preferably C++) automatically. Unfortunately, I could not figure out a way to do this.
How can I make my launcher find the config path of the Click-Once application based on its *.appref-ms file?
From Raghavendra Prabhu’s blog entry “Client Settings FAQ”:
If you want to get to the path programmatically, you can do it using the Configuration Management API (you need to add a reference to System.Configuration.dll). For example, here is how you can get the local user.config file path:
Configuration config =
    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.PerUserRoamingAndLocal);
Console.WriteLine("Local user config path: {0}", config.FilePath);
The code is C# (evidently), but shouldn't be that hard to translate to C++/CLI.
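For what it's worth, an untested C++/CLI sketch of that translation (compiled with /clr) might look like this; note that OpenExeConfiguration reports the user.config path of whatever executable calls it, so it would have to run in the context of the Click-Once application itself rather than inside the native launcher:

#using <System.Configuration.dll>

using namespace System;

int main()
{
    // Ask .NET for the per-user, non-roaming user.config path of the calling application.
    System::Configuration::Configuration^ config =
        System::Configuration::ConfigurationManager::OpenExeConfiguration(
            System::Configuration::ConfigurationUserLevel::PerUserRoamingAndLocal);

    Console::WriteLine("Local user config path: {0}", config->FilePath);
    return 0;
}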
Raghavendra Prabhu further writes:
If you need to store the settings in a different location for some reason, the recommended way is to write your own SettingsProvider. This is fairly simple to implement and you can find samples in the .NET 2.0 SDK that show how to do this. Keep in mind however that you may run into the same isolation issues mentioned above.
Disclaimer: I have not tested any of this.
I'm trying to use NetBeans (7.3.1) to work on a remote project.
My configuration is as follows:
My local machine is a Windows 7 laptop. It doesn't have any development tools, in particular neither a compiler nor a debugger, but it does have the NetBeans IDE and PuTTY, for example.
Source code, Make scripts and (eventually) build results are located on remote storage shared across servers and "locals". (I might switch to storage on a single server only, as it is faster, but I don't think that matters at all.)
I'm accessing it using SSHFS Manager, which takes a server name, a path on the server, a user name and an SSH private key, and mounts that directory on the server as a disk on Windows. This works fine. (Although some directories, possibly links, are represented as files in Windows Explorer; I don't know if that matters...)
The NetBeans project is located on the local machine, but I don't think that matters; I could place it remotely as well. I would prefer to keep it "off source", though, so that I don't have to add any ignores to version control.
In NetBeans I followed the procedure described in the Remote Development Tutorial. It seems to have been successful: NetBeans connected to the server and found the GNU Compiler Collection.
Then I added the project using File | New Project..., choosing C/C++ | C/C++ Project with Existing Sources there. That also seems to have been successful: all files are visible and all that stuff.
The issue, however, is that our work "procedure" requires us to set up the environment first. So when I log in with PuTTY, for example, I have to first call setsee with a proper argument. That heavily influences the environment by adding lots of variables, for example including:
GCC_HOME, which is set to /opt/gcc/linux64/ix86/gcc_4.3.2-7p3 (as opposed to the /user/bin/g++ that NetBeans shows as the C++ Compiler in its GNU Compiler Collection), and
CPLUS_INCLUDE_PATH, which points to some path (while NetBeans doesn't see many includes, probably because it lacks that path).
So is there a way to tell NetBeans to call setsee on the remote server before doing anything else?
It turned out that setsee is more or less an internal tool. Yet “the core question” remains: how to have an arbitrary script executed on behalf of an SSH session created by the IDE, before the IDE actually uses that session.
The answer to the “How can I set environment variables for a remote rsync process?” question on Super User says it all.
To summarize briefly: in ~/.ssh/authorized_keys one has to modify the entry corresponding to the key with which the IDE will log in, adding a command option that points to a script to be executed after logging in but before “returning control”.
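As an illustration only (the key data is shortened and the file names are made up), the authorized_keys entry could look roughly like:

command="/home/me/nb-env.sh" ssh-rsa AAAAB3Nza... netbeans-key

and nb-env.sh would set up the environment, then hand over to whatever command the IDE actually requested, which SSH passes in SSH_ORIGINAL_COMMAND:

#!/bin/sh
# Source the site-specific environment setup (anything it prints may confuse the calling tool, see below).
. /path/to/your/environment-setup-script
# Run the command the IDE originally asked for.
exec /bin/sh -c "$SSH_ORIGINAL_COMMAND"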
There is also another catch with that solution. If the script executed via the command option outputs anything, it will break many tools (possibly the IDE as well). Such tools expect to see on the output whatever the called tool produces, and their parsing then fails on whatever the command script outputs.
A simple workaround is to pipe through tail, but the disadvantage is that you lose “progress”: a lengthy operation will look like it hung and then output everything in one shot. Also, in some cases it simply doesn't work (for example, doing git clone --progress through SSH with TortoiseGit will fail if the command script outputs anything).
I have NUnit installed on my machine in "C:\Program Files\NUnit 2.4.8\", but on my integration server (running CruiseControl.NET) I have it installed in "D:\Program Files\NUnit 2.4.8\". The problem is that my NAnt build file works correctly on my development machine because in the task I'm using the path "C:\Program Files\NUnit 2.4.8\bin\NUnit.Framework.dll" to add a reference to the 'NUnit.Framework.dll' assembly, but this same build file cannot build the project on my integration server (because the reference path is different there). Do I have to have NUnit installed at the same location as it is on my integration server? That seems too restrictive to me. Are there any better solutions? What is the general solution to this kind of problem?
Typically I distribute NUnit and any other dependencies with my project, in some common location (for me that's a libs directory in the top level).
/MyApp
/libs
/NUnit
/NAnt
/etc...
/src
/etc...
I then just reference those libs from my application, and they're always in the same location relative to the project solution.
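In NAnt terms (the build tool from the original question), the reference then becomes a path relative to the build file rather than an install location; a rough sketch with invented file and target names:

<csc target="library" output="build\MyApp.Tests.dll">
    <sources>
        <include name="src\**\*.cs" />
    </sources>
    <references>
        <include name="libs\NUnit\NUnit.Framework.dll" />
    </references>
</csc>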
In general, dependencies on absolute paths should be avoided. As far as CI goes, you should be able to build and run your solution on a clean machine completely from scratch, using only resources found in your source code control, via automated scripts.
The "ultimate" solution can be to have the entire tool-chain stored in your source-control, and to store any libraries/binaries you build in source-control as well. Set up correctly, this can ensure you have the ability to rebuild any release, from any point in time, exactly as it was shipped, but that, furthermore, you don't need to do that as every binary you#ve ever generated is source-controlled.
However, getting to that point is some serious work.
I'd use two approaches:
1) use two different staging scripts (dev build/integration build) with different paths.
2) put all needed executables in a folder on your PATH and call them directly.
I'd agree that absolute paths are evil. If you can't get around them, you can at least set an NUNIT_HOME property within your script that defaults to C:..., and on your CI server call your script passing in the NUNIT_HOME property at the command line.
Or you can make your script require an NUNIT_HOME environment variable to be set in order for NUnit to work. Now, instead of requiring that the machine it runs on has NUnit in some exact location, your script requires that NUnit be present and that its location be available in the environment variable.
Either approach would allow you to change the version of NUnit you are using without modifying the build script. Is that what you want?
The idea of having all the tools in the tool chain under version control is a good one. But while you're on your way there, you can use a couple of different techniques to specify different paths per machine.
NAnt lets you define a <property> that you can override with -Dname=value. You could use this to have a default location for your development machines that you override in your CI system.
You can also get values of environment variables using environment::get-variable to change the location per machine.
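A rough sketch of both ideas (the property names and paths are mine, and the exact command-line flag syntax may vary between NAnt versions, so double-check against nant -help):

<property name="nunit.home"
          value="C:\Program Files\NUnit 2.4.8"
          overwrite="false" />
<property name="nunit.framework.dll"
          value="${nunit.home}\bin\NUnit.Framework.dll" />

On the CI server you would then override the default when invoking the build, along the lines of:

nant -D:nunit.home="D:\Program Files\NUnit 2.4.8"

or pull the location from an environment variable inside the build file:

<property name="nunit.home" value="${environment::get-variable('NUNIT_HOME')}" />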