wodi64: ocamlopt issues an error - ocaml

I installed wodi64 on windows 7. When I try to compile a simple hello world program with:
ocamlopt -o hello hello.ml
I get an error:
File "hello.ml", line 1:
Error: Corrupted compilation unit description
C:/wodi64/opt/wodi64/lib/ocaml/std-lib\pervasives.cmx
The contents of the hello.ml file are just:
print_string "Hello world!\n";;
Any idea on how to solve this?
Thanks.

First of all, check that your files are still OK. There is various anti-virus software that doesn't like the OCaml compiler and manipulates or removes its files.
Instructions (from the installed cygwin shell):
cd /tmp
godi_console wget 'http://wodi.forge.ocamlcore.org/wodi64o.md5sum'
# or: wget 'http://wodi.forge.ocamlcore.org/wodi64o.md5sum' -O /tmp/wodi64o.md5sum
cd /opt/wodi64
md5sum -c /tmp/wodi64o.md5sum
# install md5sum via cygwin's setup, if it's not already installed
There can be some mismatches, because configuration files are updated during operation (e.g. /opt/wodi64/lib/ocaml/std-lib/ld.conf and Makefile.config will differ), but binary files should be identical.
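For example, to narrow the check to the standard-library directory where the corrupted pervasives.cmx lives, you can filter the md5sum output (the grep patterns are only an assumption about how paths appear in the checksum file):
cd /opt/wodi64
md5sum -c /tmp/wodi64o.md5sum 2>/dev/null | grep 'std-lib' | grep -v ': OK$'
# any line printed here is a standard-library file whose checksum no longer matches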

Related

How do I compile and use the MariaDB C++ Connector library in Debian 10?

Setup: BeagleBone Black, Debian 10, ARM, MariaDB v10.3.36.
Following this guide: https://mariadb.com/docs/connect/programming-languages/cpp/install/
I reach this step:
$ sudo install include/mariadb/* /usr/include/mariadb/
When executing the above command, I get the following message: install: omitting directory 'include/mariadb/conncpp'
I ran through the rest of the Install MariaDB Connector/C++ guide, but when I try to compile my task.cpp app using:
$ g++ -o tasks tasks.cpp -std=c++11 -lmariadbcpp
following this example: https://mariadb.com/docs/connect/programming-languages/cpp/sample-app/ I get:
# g++ -o tasks task.cpp -std=c++11 -lmariadbcpp
task.cpp:3:10: fatal error: mariadb/conncpp.hpp: No such file or directory
#include <mariadb/conncpp.hpp>
The main issue is with -lmariadbcpp, I think; it's not installed in the correct place. Can someone explain to me how the MariaDB connector library is installed, where it resides, and how I can use it when compiling?
It might be that I can't use the C++ connector with my version of MariaDB, since the C++ connector requires an "enterprise" version of MariaDB. However, it should still compile and install. Please help me understand the installation process of a library in Debian. :)
Update:
After some struggle, it seems I can install the files by cd'ing into the include directories and installing the files manually, explicitly using each file name. At least I think there is some progress now...
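A sketch of that manual copy would be something like the following; the conncpp subdirectory names are assumptions based on the guide's include/mariadb layout, so adjust them to whatever ls shows in the extracted tree:
sudo mkdir -p /usr/include/mariadb/conncpp/compat
sudo install include/mariadb/*.hpp /usr/include/mariadb/
sudo install include/mariadb/conncpp/*.hpp /usr/include/mariadb/conncpp/
sudo install include/mariadb/conncpp/compat/*.hpp /usr/include/mariadb/conncpp/compat/
After that, #include <mariadb/conncpp.hpp> can resolve; linking with -lmariadbcpp additionally needs libmariadbcpp.so in a directory the linker searches (e.g. /usr/lib).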

C++ compiling project with shared object (libtensorflow_cc.so) failed

At the moment I'm facing some problems compiling (and running) a (huge) project of my own with TensorFlow support. On my own system (Ubuntu 16.04 LTS) everything works fine. The same procedure on a cluster leads to a compile error and I haven't been able to find a solution yet.
System information
Have I written custom code: YES
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 7.4.1708
TensorFlow installed from (source or binary): source (using the git repo)
TensorFlow version (use command below): 1.9
Python version: 2.7.15
Bazel version (if compiling from source): 0.16
GCC/Compiler version (if compiling from source): 7.3.0
CUDA/cuDNN version: not used
GPU model and memory: Tesla K20m
Exact command to reproduce:
Cloned tensorflow repo from github
Configure Bazel (./configure in tensorflow repo)
Built libtensorflow_cc.so with bazel (worked fine!!!)
Downloaded dependencies with the provided script tensorflow/contrib/makefile/download_dependencies.sh
(I tried installing protobuf & eigen manually, too!)
Installed protobuf with ./autogen.sh && ./configure && make && make install
Installed Eigen from downloaded dependencies
Copied libraries, headers and includes into my own project:
$ cp bazel-bin/tensorflow/libtensorflow_cc.so ../tf_project/lib/
$ cp bazel-bin/tensorflow/libtensorflow_framework.so ../tf_project/lib/
$ cp /tmp/proto/lib/libprotobuf.a ../tf_project/lib/
$ mkdir -p ../tf_project/include/tensorflow
$ cp -r bazel-genfiles/* ../tf_project/include/
$ cp -r tensorflow/cc ../tf_project/include/tensorflow
$ cp -r tensorflow/core ../tf_project/include/tensorflow
$ cp -r third_party ../tf_project/include
$ cp -r /tmp/proto/include/* ../tf_project/include
$ cp -r /tmp/eigen/include/eigen3/* ../tf_project/include
Remark: On my own system it has worked this way for weeks now. I can use TensorFlow in my own project with an exported model trained with Keras inside a Python project. I make predictions using client sessions and many other functions of the TensorFlow framework, and it works properly.
Problem: On the cluster I can compile TensorFlow as a dynamic library and install protobuf and Eigen.
But when I try to compile my project (the same process as on my own system) without significant changes, it doesn't work and stops with the following error message:
.../tensorflow/include/tensorflow/core/framework/tensor.pb.h:12:2: error: #error This file was generated by a newer version of protoc which is
#error This file was generated by a newer version of protoc which is
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:13:2: error: #error incompatible with your Protocol Buffer headers. Please update
#error incompatible with your Protocol Buffer headers. Please update
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:14:2: error: #error your headers.
#error your headers.
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:27:10: fatal error: google/protobuf/inlined_string_field.h: No such file or directory
#include <google/protobuf/inlined_string_field.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
So, clearly this should be the issue:
This file was generated by a newer version of protoc which is incompatible with your Protocol Buffer headers. Please update your headers.
and
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
Tried solutions:
I tried different versions of protobuf but there was an error every single try.
I tried installing protobuf and eigen manually (without the download_dependencies.sh script)
I'm puzzled because my own installation, following exactly the same steps, works properly. Maybe there is an issue with one of the components, although I tried different versions to make sure these aren't "new" issues.
Can someone help me solve this error so that I can compile and run this project on the other machine?
Looking forward to helpful solutions :) Thank you very much for the support!
Best regards from Germany!
First, I checked on GitHub: tensorflow r1.9 requires protobuf >= 3.6.0. With the download_dependencies.sh script, you always get protobuf 3.5.0, which lacks inlined_string_field.h and some other headers.
Second, about the errors that ask you to update your protobuf version: I tried many versions of protobuf too. Only version 3.6.0 works well, not version 3.6.1 or older ones.
For these errors,
error This file was generated by a newer version of protoc which is
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
It is a version mismatch problem. My solution is to manually download protobuf 3.6.0 from
https://github.com/protocolbuffers/protobuf/releases/download/v3.6.0/protoc-3.6.0-linux-x86_64.zip
install it, and cp /usr/local/include/google to somewhere/tf/include. It works quite well for me. I don't know if you tried version 3.6.0.
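If you prefer the same from-source route the question already used (./configure && make && make install), a rough sketch pinned to 3.6.0 would be as follows; the tarball name and the tf_project include path are assumptions:
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.0/protobuf-cpp-3.6.0.tar.gz
tar xzf protobuf-cpp-3.6.0.tar.gz
cd protobuf-3.6.0
./configure && make && sudo make install
sudo ldconfig
cp -r /usr/local/include/google ../tf_project/include/   # copy the freshly installed headers into your project's include tree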
The download_dependencies.sh script provides a mismatched version of protobuf. See the issue I posted on GitHub:
https://github.com/tensorflow/tensorflow/issues/22536
Also, I noticed the files you copied are different from mine. I did it this way:
sudo mkdir /usr/local/tensorflow/include
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/Eigen /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/unsupported /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/gen/protobuf/include/google /usr/local/tensorflow/include/
sudo cp tensorflow/contrib/makefile/downloads/nsync/public/* /usr/local/tensorflow/include/
sudo cp -r bazel-genfiles/tensorflow /usr/local/tensorflow/include/
sudo cp -r tensorflow/cc /usr/local/tensorflow/include/tensorflow
sudo cp -r tensorflow/core /usr/local/tensorflow/include/tensorflow
sudo mkdir /usr/local/tensorflow/include/third_party
sudo cp -r third_party/eigen3 /usr/local/tensorflow/include/third_party/
sudo mkdir /usr/local/tensorflow/lib
sudo cp bazel-bin/tensorflow/libtensorflow_*.so /usr/local/tensorflow/lib
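With the headers and libraries laid out like this, compiling against them looks roughly like the following; the source file name is made up and your project will likely need more flags:
g++ -std=c++11 my_app.cc \
    -I/usr/local/tensorflow/include \
    -L/usr/local/tensorflow/lib \
    -ltensorflow_cc -ltensorflow_framework \
    -Wl,-rpath,/usr/local/tensorflow/lib \
    -o my_app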
Btw, I ran build_all_linux.sh instead of download_dependencies.sh.
I hope it will be helpful for you.
I met this problem before; the following is my solution:
I successfully built tensorflow-r1.8 with Bazel, and I found that the protoc at the following path is version 3.5.0:
/home/zsb/.cache/bazel/_bazel_zsb/1372f28eb0671f692e7ac38330377d8c/execroot/org_tensorflow/bazel-out/host/bin/external/protobuf_archive/protoc --version
but my system was actually using protoc version 3.4.0.
I confirmed this by typing "protoc --version" directly.
So finally I updated the system protobuf version to 3.5.0 and the problem was fixed.

clang compiler not working on terminal Mac OSX

I just upgraded to El Capitan and found out that the C compiler (Clang) is not working from the command line. I wrote a "hello world" test, tried to compile, and I get the following error:
$ cc test.c -o test
error: unable to open output file '/var/folders/Ge/GeRStfi8Ek8jojLcqf1vsE+++TI/-Tmp-/test-ad7039.o': 'No such file or directory'
1 error generated.
... do I have a permissions problem somewhere? Thanks!
Either you're running into permissions problems (the compiler is unable to create a folder inside /var, hence "no such file or directory"), or opening the output file in the current compilation directory isn't allowed. Check your permissions on:
The file
The directory
Run the command under sudo. If that fixes your problem, then use ls -la to check your permissions in the current folder. Then, use chown or chmod to change the permissions on the file/folder.
Example:
chown owner-user test.c
Now, you may not actually have access to the /var/ folder. If so, then the temp folder cc is creating is the problem, and you'd have to call cc under sudo. For a more permanent fix, you can chown the binary or the directory clang is in.
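A hedged way to check this from the shell (on OS X the per-user temp directory the compiler writes to is normally exported as $TMPDIR):
echo "$TMPDIR"                        # should point at the per-user temp folder under /var/folders
ls -ld "$TMPDIR"                      # does it exist, and do you own it?
sudo chown -R "$(whoami)" "$TMPDIR"   # repair ownership if it belongs to root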

Squid cross compile

I've been trying to cross-compile Squid 3.5.7 for an ARM Cortex-A8 (Linux).
I downloaded it from http://www.squid-cache.org/Versions/v3/3.5/
I have arm-linux-gnueabi-gcc and arm-linux-gnueabi-g++.
tar -zxvf squid-3.5.7.tar.gz
cd squid-3.5.7
./configure --prefix=/usr/local/squid
make all
make install
Next I copy the folders /usr/local/squid and ~/squid-3.5.7 to an SD card.
When I try to run ./squid -z from the SD card on the ARM board, I have a problem:
root@am335x:/# ls
bin etc lib mnt srv usr
boot findHelp linuxrc proc sys var
dev home media sbin tmp
root@am335x:/media/mmcblk0/squid/sbin# ls
squid
root@am335x:/media/mmcblk0/squid/sbin# ./squid -z
./squid: line 20: syntax error: ")" unexpected
root@am335x:/media/mmcblk0/squid/sbin# ./squid
./squid: line 20: syntax error: ")" unexpected
root@am335x:/media/mmcblk0/squid/sbin#
I don't know what to do :/
The binary you have built targets your PC's architecture, not ARM (you can confirm this with the check just below). To build Squid for ARM, follow the instructions after it.
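A quick way to confirm the mismatch before rebuilding; the path is the --prefix used in the question:
# run this on the build PC, not on the board
file /usr/local/squid/sbin/squid
# an x86-64 result explains the "syntax error" when the ARM shell tries to execute the file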
The configure script tries to run tests which will fail when you are using a cross compiler, so add a cache file to override those tests.
For example, create a cache file squid.cache with the line shown below:
squid_cv_gnu_atomics=no
Export the BUILDCXX variable required for compiling squid
export BUILDCXX=g++
Make sure you have exported the toolchain path to the PATH variable ($PATH):
export PATH=<TOOLCHAIN_PATH>:$PATH
Then configure Squid by running configure as shown below:
./configure --host=arm-linux-gnueabi --cache-file=squid.cache --prefix=<install/dir>
Finally, compile Squid by running make:
make
Then install the binaries using make install
make install

ocaml-glpk (glpk bindings) and OASIS

Preface: I am new to OCaml, OPAM, and OASIS.
tldr question: How do I properly set up a package with opam that is not already available in the repository (I can't just do opam install X)? More details follow:
I am trying to include ocaml-glpk in an OCaml project. I installed ocaml-glpk just by running make and make install as stated in the README, and the given example compiles and runs correctly. However, I am using OASIS to generate the build system of my project, and I am not sure how to set it up. I have the same example (renamed to glpkExample.ml in a src folder) and the following in my _oasis file:
Executable "glpkExample"
Path: src
MainIs: glpkExample.ml
CompiledObject: best
BuildDepends:
glpk
After running oasis setup -setup-update dynamic, I run make and get the following error:
ocaml setup.ml -build
Finished, 0 targets (0 cached) in 00:00:00.
+ /home/dimitrios/.opam/system/bin/ocamlfind ocamlopt -g -linkpkg -package glpk src/glpkExample.cmx -o src/glpkExample.native
File "_none_", line 1:
Error: Cannot find file /home/dimitrios/.opam/system/lib/glpk/glpk.cmxa
Command exited with code 2.
Compilation unsuccessful after building 4 targets (3 cached) in 00:00:00.
E: Failure("Command ''/usr/bin/ocamlbuild' src/glpkExample.native -tag debug' terminated with error code 10")
make: *** [build] Error 1
It seems the glpk library is missing a cmxa file needed to compile a native executable. I am not sure how to fix this. To compile glpkExample.ml correctly, my Makefile includes /home/dimitrios/.opam/system/lib/glpk and also uses the OCamlMakefile, which is extremely long and convoluted. Any help on setting this up with OASIS or how to get ocaml-glpk to work nicely with OASIS would be greatly appreciated.
Thanks!
This website is not appropriate for bug reports. You should really report it here.
The temporary solution is to use CompiledObject: byte to compile in bytecode.
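Applied to the _oasis snippet from the question, that workaround is just one field changed:
Executable "glpkExample"
  Path: src
  MainIs: glpkExample.ml
  CompiledObject: byte
  BuildDepends: glpk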
If you're using opam then it is best to install applications with it, not manually. Try to clean up your system and remove whatever you installed, and then do:
$ eval `opam config env`
$ opam install ocaml-glpk
Afterwards, if glpk is packaged in opam correctly, it should work with your setup, i.e., just with oasis's BuildDepends field and nothing more.
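To check whether the opam-installed package actually provides the native archive that oasis needs, you can inspect it with ocamlfind; the package name glpk matches the BuildDepends above:
ocamlfind list | grep -i glpk
ls "$(ocamlfind query glpk)"    # look for glpk.cmxa next to glpk.cma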