Problems building geocommons geocoder - geocoding

I've downloaded the geocommons/geocoder source and have one small sample TIGER/Line zip file from the census site saved as /opt/tiger/tl_2010_01_state10.zip.
I've tried to run the tiger_import tool on this file with the command:
build/tiger_import /opt/tiger/geocoder.db /opt/tiger
with all of the prerequisite gems installed (specifically the Text, fastercsv, and sqlite3-ruby gems), and after running make and make install.
However, when I execute tiger_import, I get the error:
ls: /opt/tiger/*/*/tl_*_edges.zip: No such file or directory
although there seems to be a geocoder.db file created in /opt/tiger.
Does anyone have better information on the steps necessary to build the TIGER/Line data with the geocoder?

The script expects a directory structure more like the 2009 data:
ftp://ftp2.census.gov/geo/tiger/TIGER2009/
Under your tiger directory, you'll need one state (01_ALABAMA, for instance) and one county (01001_Autauga_County), and inside that you'll need the _addr.zip, _edges.zip, and _featnames.zip files.
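For example (taking /opt/tiger as your tiger directory, as in your command, and using the 2009 county file naming), the layout the import script's glob is looking for is roughly:
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_addr.zip
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_edges.zip
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_featnames.zip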
It's true: the 2010 data isn't set up this way (there's a giant directory for each shape type, and a file for each county in those directories), but the import script isn't set up to use the 2010 data as it's written now. Given the way the script is set up, you might have less heartache using the 2009 data until the import scripts get updated for the 2010 data, or until all of the 2010 data gets published.


flatpak-builder with local sources and dependencies

How can I build local sources and dependencies with flatpak-builder?
I can build local sources:
flatpak build ../dictionary ./configure --prefix=/app
I can extract and build an application with its dependencies from a .json manifest:
flatpak-builder --repo=repo dictionary2 org.gnome.Dictionary.json
But is there no way to build dependencies and local sources together? I can't find a source type like dir or similar, only archive, git (no hg?), ...
flatpak-builder is meant to automate the whole build process, with a single entry-point: the JSON manifest.
Everything else it obtains from Git, Bazaar or tarballs. Note that for these the "url" property may be a local URL starting with file://.
(There is indeed no support for Hg. If that's important for you, feel free to request it.)
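For example, a minimal sketch of a module that builds from a local Git checkout (the module name and path here are just placeholders) could look like this inside the manifest's modules list:
{
  "name": "dictionary",
  "sources": [
    {
      "type": "git",
      "url": "file:///home/user/src/dictionary"
    }
  ]
}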
In addition to that, there are a few more source types (see the flatpak-manifest(5) manpage), which can be used to modify the extracted sources:
file, which points to a local file to copy somewhere into the extracted sources;
patch, which points to a local patch file to apply to the extracted sources;
script, which creates a script in the extracted sources from an array of commands;
shell, which modifies the extracted sources by running an array of commands.
Adding a dir source type might be useful.
However (I have only flatpaked a few apps and contributed 2 or 3 patches to the code, so I might be completely wrong), care must be taken, as this could easily make builds completely unreproducible, and reproducibility is one thing flatpak-builder tries very hard to enable.
For example, when using a local file source, flatpak-builder will base64-encode the content of that file and use it as a data:text/plain;charset=utf8;base64,<content> URL for the file, which it stores in the manifest included inside the final build.
Something similar might be needed for a dir source (tar the folder then base64-encode the content of the tar?), otherwise it would be impossible to reproduce the build. I've just been told (after submitting this answer) that this changed in Git master, in favour of a new flatpak-builder --bundle-sources option. This would probably make it easier to support reproducible builds with a dir source type.
In any case, feel free to start the conversation around a new dir source type in the upstream bug tracker. :)
There's an experimental CLI tool if you want to use it: https://gitlab.com/csoriano/flatpak-dev-cli
You can read the docs:
http://docs.flatpak.org/en/latest/building-simple-apps.html
http://docs.flatpak.org/en/latest/flatpak-builder.html
In a nutshell, this is what you need to use flatpak as a development workbench:
https://github.com/albfan/gnome-builder/wiki/flatpak

input cascade could not be found/opened

Guys, I need your help! I am making my own haarcascade.xml file for vehicle detection using the haartraining function in OpenCV. Anyway, I stopped my training at the 9th stage and the following files were created:
params
stage0
stage1
stage2
.
.
.
stage9
All of these are xml files.
Then I compiled convert_cascade.c from the OpenCV samples folder and got the .exe file, in order to generate the final xml file from those created xml files. Then I gave parameters like this in cmd (after entering the project folder):
convert_cascade --size="40x40" file_path_to_created xml files vehicle.xml
to that exe file, and it says "input cascade could not be found/opened". I searched all over the internet but couldn't find any working solution. Please tell me how to solve this problem.
Note: I compiled convert_cascade.c (not in the OpenCV directory; in another directory) as a C++ file in the VS 2010 environment (with OpenCV linked) and it built successfully.
My OS is Windows 7.
OpenCV 2.4.8.
Tell me if something is unclear in my question and I will edit it.
It seems you aborted the training (pressed Ctrl+C), so the cascade.xml was not generated.
No worries, just restart the opencv_traincascade tool from the command line in the same way, with the same args as you used before, just with -numStages 9 (maybe even 8, as the last stage might be corrupted from the abort).
This should finish the training at the last stage and generate a cascade.xml in the data folder.
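A rough sketch of that restart (every path and count below is a placeholder; reuse exactly what you passed the first time, only capping the number of stages):
opencv_traincascade -data data -vec vehicles.vec -bg negatives.txt -numPos 1000 -numNeg 2000 -w 40 -h 40 -numStages 9
Because the earlier stage files are already in the data folder, it should load them and mostly just need to write out the combined cascade.xml.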
Then use that as the argument to generate your vehicle.xml:
convert_cascade --size="40x40" data\cascade.xml vehicle.xml

Using AsConfigured and still be able to get UnitTest results in TFS

So I am running into an issue when I build my projects using the TFS build controller: with the Output location set to "AsConfigured" it will not detect my unit tests. Let me give a little info on my setup.
TFS 2013 Update 2, Default Process Template
Here are a few screenshots that can hopefully help fill in what I can't in typing. I am copying my build output to a file share on our network so that other utilities can use the output. I don't want to use "PerProject" or "SingleFolder" because they mess up the file structure we have configured (both of these will run the tests). So I have the files copy to a folder named "SingleOutputFolder" which is a child of the DropLocation. I would like to be able to run the tests from the drop folder or from the bin folder (I don't care which). However, it doesn't seem to detect/run ANY of the tests. Any help would be greatly appreciated. Please let me know if you need any additional information.
I have tried using ***test*.dll, Install\SingleFolderOutput**.test.dll, and $(TF_BUILD_DROPLOCATION)\Install\SingleFolderOutput*test*.dll
But I am not sure what variables are available, or what the scope of execution is.
Given that you're using the Build Output location set to AsConfigured, you have to change the default value of the Test sources spec setting to allow the build to find the test libraries in the bin folders. Here's an example.
If the full path to the unit test libraries is:
E:\Builds\7\<TFS Team Project>\<Build Definition>\src\<Unit Test Project>\bin\Release\*test*.dll
use
..\src\*UnitTest*\bin\*\*test*.dll;
This question was asked on MSDN forums here.
MSDN Forums Suggested Workaround
The suggested workaround in the accepted answer (as of 8 a.m. on June 20) is to specify the full path to the test projects' binary folders. For example:
C:\Builds\{agentId}\{teamProjectName}\{buildDefinitionName}\src\{solutionName}\{testProjectName}\bin*\Debug\*test*.dll*
which really should have been shown as
{agentWorkingFolder}\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
However this approach is very brittle, for the following reasons:
Any new test projects you add to the solution will not be executed until you add them to the build definition's list of test sources:
It will break under any of the following circumstances:
the build definition is renamed
the working folder in build agent properties is modified
you have multiple build agents, and a different agent than the one you specified in {agentId} runs the build
Improved Workaround
My workaround mitigates the issues listed in #2 (can't do anything about #1).
In the path specified above, replace the initial part:
{agentWorkingFolder}
with
..
so you have
..\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
This works because the internal working directory is apparently the \binaries\ folder that is a sibling of the \src\ folder. Navigating up to the parent folder (whatever it is named, we don't care) and back into \src\ before specifying the path to the test projects' binaries does the trick.
Note: If you have multiple test projects, you add additional entries, separated with semicolons:
..\src\{relativePathToTestProjectONEBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTWOBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTHREEBinariesFolder}\*test*.dll;
What I ended up doing was adding a post-build event to each test project that copies all of the test .dlls into the staging location folder for the specific build, which is basically equivalent to where they would go on a SingleFolder build.
if "$(TeamBuildOutDir)" == "" (
echo "Building Interactively not in TFS"
) else (
echo "Building in TFS"
xcopy "$(TargetDir)*.*" "$(TeamBuildBinaries)\" /Y /E /S
)
Then I added an MSBuild parameter to the build definition that tells it to drop the binaries in the folder where TFS looks for them:
/p:TeamBuildBinaries="$(TF_BUILD_BINARIESDIRECTORY)"
I kept the default Test assembly file specification:
**\*test*.dll
View this link for information on the variable that I used and the relative path it exists at.
Another solution is to do the reverse.
Leave all of the files in the root so that all of the built-in functionality works. There is more than just test execution in there; what about static code analysis, impact analysis, among others? You would have to do something custom for all of them.
Instead, use a pre-drop PowerShell script to create your Install arrangement from the root files.
If it is an application, you can use the _ApplicationFolder NuGet package to create a _PublishApplications folder, the same as you get for web applications.

Installing OpenCart extensions locally

When installing OpenCart extensions, you're generally given a bunch of folders that should be copied to the root directory, and the extension files will find their way to the right subfolders. This works great in FTP software, but on a local installation (Mac OS X) using Finder, this operation makes Finder want to overwrite the folders completely, deleting the actual site and keeping just the extension.
I can hold Alt when dragging the folders and it will give me the option to not overwrite. The problem is I have hidden files visible, which means there's now a .DS_Store file in each folder, and the "Hold Alt" approach doesn't work if there are ANY duplicate files in any of the folders.
I'm sure someone out there has stumbled upon the same problem. Any ideas for how to solve such a simple but annoying problem? I do not wish to use FTP software for local file management.
I had the same problem, and I found 3 different ways to solve this:
a - use another file manager; I personally use "Transmit" to do this sort of thing;
b - use the terminal, like: ditto <source> <destination>. Or, the easier way: just type ditto, drag in the source folder, then drag in the destination folder; everything inside source will be merged into destination (see the example after this list);
c - unzip the plugin inside the OpenCart folder using the terminal, like: tar -zxvf plugin.zip;
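For example, with option b (both paths here are placeholders for wherever your extension download and your local OpenCart root live):
ditto ~/Downloads/extension/upload ~/Sites/opencart
Everything under the source folder is merged into the existing opencart folder instead of replacing it, so duplicate files like .DS_Store simply get overwritten rather than forcing a whole-folder replace.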

PyAIML not loading startup

I am beginning a project in Python that uses PyAIML, and I wrote the following code to create a brain for my project:
import aiml
k=aiml.Kernel()
k.learn("std-startup.xml")
k.respond("LOAD AIML B")
k.saveBrain("jarvis.brn")
When I run the program I get this warning:
WARNING: No match found for input: LOAD AIML B
I understand that I need to download an AIML set to begin development. So I did, but I'm stuck there.
Please help. I'm a noob programmer so don't be rough on me for this dumb mistake.
Thanks in advance!
The .learn() method will not throw an error if the file you pass it does not exist, and I'm guessing that you are trying to learn patterns from "std-startup.xml" without having this file in your directory.
Make sure the file std-startup.xml is in the directory you are running your script from. You should also have a directory called standard in your working directory that contains the standard set of aiml files. Basically your directory should look like this:
mydir/my_script.py
mydir/std-startup.xml
mydir/standard/a-bunch-of-std-aiml-files.aiml
These files can be found in the "Other Files/Standard AIML Set/" folder on the pyaiml SourceForge site. Go to that folder and download one of the tarballs or the zip.
A few things:
If your AIML is loading properly, pyAIML will respond with a line that will read something like:
Loading std-startup.aiml... done (1.00 seconds)
It will not necessarily throw an error if it does not find a file to load, so if you don't see this line, pyAIML has not loaded the AIML file.
I don't see 'std-startup.xml' in the SourceForge directory either, but this shouldn't matter. All you need to load is any AIML file that will allow you to test the kernel. Try loading the 'self-test.aiml' file in the /aiml directory instead. (Double-check to make sure the file suffix in your code is .aiml and not .xml.)
k.respond() is for giving the bot some input and 'LOAD AIML B' is just a test phrase. Once you've loaded 'self-test.aiml' try k.respond('test date') and you should get
The date is Wed Mar 13 01:37:07 2013 in response.
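Putting that together, a minimal sketch of the adjusted script (assuming you've copied self-test.aiml from pyAIML's aiml directory into the folder next to your script) might be:
import aiml

k = aiml.Kernel()
k.learn("self-test.aiml")      # should print a "Loading self-test.aiml... done" line
print(k.respond("test date"))  # 'test date' is a pattern defined in self-test.aiml
k.saveBrain("jarvis.brn")      # save the brain once loading actually works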