I'm following the instructions at github.com/jcjohnson/torch-rnn and have it working up until the training section. When I run th train.lua -input_h5 my_data.h5 -input_json my_data.json I get the error: Error: unable to locate HDF5 header file at /usr/local/Cellar/hdf5/1.10.0-patch1/include;/usr/include;/usr/local/opt/szip/include/hdf5.h
I'm new to luarocks and torch, so I'm not sure what's wrong. I installed torch-hdf5. Any advice would be very much appreciated.
Check that the file exists and that you have the correct path.
If the .h5 file is missing, you skipped the preprocess step. If it does exist, it's likely in your data directory and not in the same directory as the train.lua code:
th train.lua -input_h5 data/my_data.h5 -input_json data/my_data.json
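If you did skip the preprocess step, the torch-rnn README generates the .h5/.json pair with a script along these lines (the file names here are just placeholders matching your question; check the README for the exact flags):
python scripts/preprocess.py --input_txt data/my_data.txt --output_h5 data/my_data.h5 --output_json data/my_data.json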
I recently finished the basic gRPC tutorial for C++ here and wanted to set up a project based on the proto files from here. I followed a directory structure similar to the tutorial's and changed the CMakeLists.txt file to accommodate the new files.
I'm currently trying to compile just the manager.proto file. I was able to compile it and get my server/client files, however the files are output into cmake/build/minknow_api rather than the expected cmake/build folder. This meant the make command would return the error:
clang: error: no such file or directory: '/Users/name/Documents/grpc/examples/cpp/minknow_api/cmake/build/manager.grpc.pb.cc'
clang: error: no input files
I read that this is because protoc chooses output paths based on the imports of the proto file, i.e. the file comes with import minknow_api/device.proto, for instance. I copied the files from cmake/build/minknow_api into cmake/build/ and reran make, and it seemed to work; however, the generated C++ files' includes, as expected, still look inside a minknow_api directory, meaning I'd have to manually edit those includes to point at the current directory in order to compile successfully.
I've tried experimenting with removing the minknow_api prefix from the proto imports, but had no luck and only got more import issues during compilation. It seems some files use the same names for messages etc., which means I had to keep minknow.somename as the package so I could distinguish which imported values I wanted to access. I've also tried moving files into their own directories (e.g. instance.proto, which has package minknow.instance, would go inside an instance directory), but still no luck.
I was wondering if anyone could figure out how to get the minknow_api prefix out of my proto imports properly, so that I won't run into these import and output directory issues down the track?
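Not a definitive answer, but the usual way around this is to keep the minknow_api/ prefix rather than strip it: run protoc with --proto_path pointing at the directory that contains the minknow_api folder, accept that the generated files land in a minknow_api/ sub-folder of the output directory, compile them from that location, and add the output directory to the include path so the generated headers' own #include "minknow_api/device.pb.h" lines resolve. A rough sketch, run from the folder that contains minknow_api/ (the directory layout and grpc_cpp_plugin being on PATH are assumptions on my part, not details taken from your CMake setup):
protoc --proto_path=. --cpp_out=minknow_api/cmake/build --grpc_out=minknow_api/cmake/build --plugin=protoc-gen-grpc=$(which grpc_cpp_plugin) minknow_api/manager.proto
The generated manager.pb.cc and manager.grpc.pb.cc then end up in minknow_api/cmake/build/minknow_api/, and the build should compile them from that path and pass -Iminknow_api/cmake/build (or the CMake include-directory equivalent) to the compiler, rather than editing the generated includes by hand.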
I'm new to Electron and want to call a C++ DLL; that part works fine. I now want to know how to add my own header file in Electron. I tried the root directory and some other, deeper places, but only got:
fatal error C1083: cannot open file: “MyDriver.h”: No such file or directory [c:\Users\75803\Documents\GitHub\native_addon\node_modules\hello\build\hello.vcxproj
Any help will be appreciated!
The OP worked this out by himself.
The solution is to put the custom header file in the folder
iojs-(your electron version)/src
I would like to compile a C++ file, namely "mexLasso.cpp", as a MEX file in MATLAB.
These are the steps I take and the error I get. What is the problem?
1. Put the files "mexLasso.cpp" and "mexutils.h" in a folder.
2. Set the compiler:
mex -setup C++
I receive:
MEX configured to use 'Microsoft Visual C++ 2013 Professional' for C++ language compilation.
3. Run the command:
mex C:\...\mexLasso.cpp
I receive the following error:
Error using mex
mexLasso.cpp
C:\...\mexLasso.cpp(33) : fatal error C1083: Cannot
open include file: 'mexutils.h': No such file or directory
Can somebody tell us what we are missing?
From the mex command line tool reference, there is an argument for adding include paths:
-Ipathname Adds pathname to the list of folders to search for #include files.
Do not add a space between I and pathname.
Like this:
mex -v -IC:\path\to\mexutils_h\ mexLasso.cpp
Note that with -I you are not specifying the header, you are specifying a path containing one or more header files.
Thanks for your comments. I think the best way to solve my problem is to install SPAMS on my machine and compile it successfully. I have posted the steps I take in this post:
How to install SPAMS toolbox in Matlab 2014b under windows 8.1
Could I have your opinion there? Sorry for the duplicated message; it's only because of the relevance and importance of the topic being discussed.
Many thanks.
I currently have a file called addressbook.proto sitting next to my protoc.exe. I am having difficulty generating the .h and the .cc files. Here is what I am doing
protoc --cpp_out=c:\addressbook.proto
However I get the following response
Missing input file.
Any suggestions on what I might be doing wrong?
The --cpp_out flag specifies the output directory for the generated C++ source code.
I would suggest trying (if the proto file is actually stored at the root of the C: drive, i.e. c:\addressbook.proto)
protoc c:\addressbook.proto --cpp_out=./
or
protoc addressbook.proto --cpp_out=./
Since the only answer in this thread didn't lead me to the solution I needed, here it is.
The syntax for calling the protoc.exe is as follows:
protoc --proto_path=<proto_directory> --cpp_out=<output_directory> <proto_file>
The important part is that the argument to proto_path is a directory rather than the path to a specific .proto file. The actual proto files to compile are appended at the end of the command (<proto_file>).
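For example (paths purely illustrative), if addressbook.proto lives in c:\protos and you want the generated files in the current directory:
protoc --proto_path=c:\protos --cpp_out=. addressbook.proto
This writes addressbook.pb.h and addressbook.pb.cc into the current directory.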
I am beginning a project in Python that uses PyAIML, and I wrote the following code to create a brain for my project:
import aiml

k = aiml.Kernel()
k.learn("std-startup.xml")   # load the startup AIML file
k.respond("LOAD AIML B")     # send the input that std-startup.xml is expected to handle
k.saveBrain("jarvis.brn")    # save the learned brain to disk
When I run the program I get this warning: WARNING: No match found for input: LOAD AIML B
I understand that I needed to download an AIML set to begin development. So I did, but I'm stuck there.
Please help. I'm a noob programmer so don't be rough on me for this dumb mistake.
Thanks in advance!
The .learn() method will not throw an error if the file you pass it does not exist, and I'm guessing that you are trying to learn patterns from "std-startup.xml" without having this file in your directory.
Make sure the file std-startup.xml is in the directory you are running your script from. You should also have a directory called standard in your working directory that contains the standard set of aiml files. Basically your directory should look like this:
mydir/my_script.py
mydir/std-startup.xml
mydir/standard/a-bunch-of-std-aiml-files.aiml
These files can be found in the "Other Files/Standard AIML Set/" folder on the PyAIML SourceForge site. Go to that folder and download one of the tarballs or the zip.
A few things:
If your AIML is loading properly, pyAIML will respond with a line that will read something like:
Loading std-startup.aiml... done (1.00 seconds)
It will not necessarily throw an error if it does not find a file to load, so if you don't see this line, pyAIML has not loaded the AIML file.
I don't see 'std-startup.xml' in the SourceForge directory either, but this shouldn't matter. All you need to load is an AIML file that will let you test the kernel. Try loading the 'self-test.aiml' file in the /aiml directory instead. (Double-check that the file suffix in your code is .aiml and not .xml.)
k.respond() is for giving the bot some input, and 'LOAD AIML B' is just a test phrase. Once you've loaded 'self-test.aiml', try k.respond('test date') and you should get
The date is Wed Mar 13 01:37:07 2013 in response.