I am trying to build the OpenSSL library on Windows for a Raspberry Pi Pico microcontroller, which has an ARM Cortex-M0+ processor.
I downloaded the OpenSSL sources from the official site and unzipped them, then installed MSYS2 and launched it. I changed into the directory containing the OpenSSL sources:
cd /C/openssl-3.0.0-beta1
I selected the following Configure settings:
./Configure gcc --cross-compile-prefix=arm-none-eabi- --prefix=/K/OpenSSL-x32-arm -mcpu=cortex-m0plus PROCESSOR=ARM -DL_ENDIAN no-shared -DNO_SYSLOG -DOPENSSL_NO_X509 -DOPENSSL_NO_X509V3 -DOPENSSL_NO_X509_VFY no-idea no-camellia no-seed no-bf no-cast no-des no-rc4 no-rc5 no-md2 no-md4 no-ripemd no-mdc2 no-dsa no-dh no-ec no-ecdsa no-ecdh no-sock no-ssl2 no-ssl3 no-err no-engine no-hw
and started compiling:
make depend && make
It throws an error:
In file included from c:\msys64\mingw64\arm-none-eabi\include\dirent.h:39,
from crypto/LPdir_unix.c:44,
from crypto/o_dir.c:28:
c:\msys64\mingw64\arm-none-eabi\include\sys\dirent.h:10:2: error: #error "<dirent.h> not supported"
10 | #error "<dirent.h> not supported"
What am I doing wrong?
The Raspberry Pi Pico does not actually have a filesystem, so it does not support file operations, which is why the arm-none-eabi toolchain's <dirent.h> stops you with that #error. Furthermore, the Pico only has 264 KB of SRAM (plus 2 MB of external flash). Are you sure you want to spend it all on OpenSSL? There are far smaller TLS stacks, such as Mbed TLS (<30 KB) and wolfSSL (<100 KB).
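If you do go the smaller-stack route, a rough cross-build sketch for Mbed TLS with the same arm-none-eabi toolchain might look like the following (unverified on a Pico; the repository URL and CMake options are assumptions about Mbed TLS's CMake build, and you will likely still need to trim mbedtls_config.h for a platform without an OS or entropy source):
git clone https://github.com/Mbed-TLS/mbedtls.git
cd mbedtls
# Bare-metal cross build: libraries only, no test programs.
cmake -B build \
      -DCMAKE_SYSTEM_NAME=Generic \
      -DCMAKE_C_COMPILER=arm-none-eabi-gcc \
      -DCMAKE_C_FLAGS="-mcpu=cortex-m0plus -mthumb -Os" \
      -DCMAKE_TRY_COMPILE_TARGET_TYPE=STATIC_LIBRARY \
      -DENABLE_PROGRAMS=OFF -DENABLE_TESTING=OFF
cmake --build build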
I'm trying to cross-compile Google Breakpad. I'm executing the following commands:
$ ./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm
$ make
$ make install
It generates and installs some files in the prefix path. In the include path it has:
|-common
|-google_breakpad
|-processor
but it should have:
|-client
|-common
|-google_breakpad
|-processor
|-third_party
It seems to be a problem related to Breakpad client. What should be the right way to cross-compile Breakpad?
My host is a Ubuntu 18.04 x86-64, target ARM-32.
I have reproduced your problem on my side. In fact, the issue is related to the --host configure flag.
Breakpad documentation shows that:
when building on Linux it will also build the client libraries.
So, in order to get the client binaries and headers, you should use the correct compiler prefix.
For example, if you are using the GNU cross compiler arm-linux-gnueabihf-gcc, the --host flag value should be arm-linux-gnueabihf.
In your case (arm-linux-gcc), try changing your configure command as follows:
./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm-linux
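As a quick check (a sketch, assuming the default install layout under your --prefix), the include tree should now contain the client headers as well:
make && make install
# with the corrected --host, the listing should now also show client/ and
# third_party/ alongside common/, google_breakpad/ and processor/
ls -R /opt/breakpad/include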
I'm trying to compile C/C++ code from my Debian partition to generate some executable files for Windows.
Running $ uname -a on the command line gives Linux machine 5.14.0-2-amd64 #1 SMP Debian 5.14.9-2 (2021-10-03) x86_64 GNU/Linux. My processor is an Intel® Core™ i5-1035G4 CPU @ 1.10GHz × 8, with a Mesa Intel® Iris(R) Plus Graphics (ICL GT1.5) integrated GPU.
A minimal example to show my current situation includes the following code (called code.cpp):
#include <iostream>
#include <CL/opencl.hpp>

int main()
{
    std::vector<cl::Platform> all_platforms; // Get all platforms
    cl::Platform::get(&all_platforms);
    if (all_platforms.size() == 0)
    {
        std::cout << "No platforms found. Check OpenCL installation." << std::endl;
        exit(1);
    }
    int pz = all_platforms.size();
    std::cout << "Platforms size: " << pz << std::endl;
    for (int i = 0; i < pz; i++)
    {
        cl::Platform default_platform = all_platforms[i];
        std::cout << "Using platform: " << default_platform.getInfo<CL_PLATFORM_NAME>() << std::endl;
    }
    return 0;
}
which uses OpenCL to print all recognized devices. I compile my code writing g++ code.cpp -o code.out -lOpenCL. The executable file code.out works fine, doing what you would expect it to do. I have another program which uses GSL (GNU Scientific Library) written in C which also works well, linking with -lgsl (therefore I think there's not a problem with my code or the regular compilation process). Both OpenCL and GSL were installed from the official repositories (~# apt install ...) with no problem at all. When I execute code.out the output is
Platforms size: 2
Using platform: Intel(R) OpenCL HD Graphics
Using platform: Portable Computing Language
I installed mingw (via ~# apt install mingw-w64) to create executable files to be run on Windows, and for basic programs (i.e. without "external" libraries) it works well (replacing gcc by x86_64-w64-mingw32-gcc or i686-w64-mingw32-gcc). However for the code written above (and for the one using GSL) it doesn't work. Most of the error outputs are very similar for both examples, and I will show the command line outputs for the code using OpenCL.
When I try x86_64-w64-mingw32-g++ code.cpp -o code.out -lOpenCL the output is
code.cpp:2:10: fatal error: CL/opencl.hpp: No such file or directory
2 | #include <CL/opencl.hpp>
| ^~~~~~~~~~~~~~~
compilation terminated.
I thought this meant that I needed to be more specific when linking and including, so I gave the explicit path where the headers are located (found them via dpkg -S opencl.hpp or dpkg -S gsl*.h), and the .so file for OpenCL was found via dpkg -S *OpenCL.so, while the one for GSL was found using dpkg -S *gsl.so. When I try x86_64-w64-mingw32-g++ code.cpp -o code.out -I/usr/include/ -L/usr/lib/x86_64-linux-gnu/libOpenCL.so the output is
In file included from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/cwchar:44,
from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/bits/postypes.h:40,
from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/iosfwd:40,
from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/ios:38,
from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/ostream:38,
from /usr/lib/gcc/x86_64-w64-mingw32/10-win32/include/c++/iostream:39,
from code.cpp:1:
/usr/include/wchar.h:27:10: fatal error: bits/libc-header-start.h: No such file or directory
27 | #include <bits/libc-header-start.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Therefore it seems that MinGW needs additional instructions to properly find, include and/or link the libraries. I don't know how to solve this problem. Those are my attempts based on some answers I've found, and the documentation provided by MinGW says nothing about this. The exact same problem occurs no matter if I use x86_64-w64-mingw32-g++ or i686-w64-mingw32-g++, or their gcc counterparts.
When cross-compiling, make sure you only link things that target the same platform. In other words, your dependencies (and their dependencies) must be built for the same target platform; you can't link against the libraries installed for your build platform.
So if you have a Windows 64-bit application that depends on OpenCL, you will need to link it against a Windows 64-bit build of OpenCL.
The OpenCL sources can be found here:
https://github.com/KhronosGroup/OpenCL-Headers
https://github.com/KhronosGroup/OpenCL-ICD-Loader
so you would need to build those first.
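For example, a rough sketch of cross-building the Khronos ICD loader with the MinGW toolchain via CMake (untested; the OPENCL_ICD_LOADER_HEADERS_DIR variable and the final link line are assumptions you may need to adapt):
git clone https://github.com/KhronosGroup/OpenCL-Headers.git
git clone https://github.com/KhronosGroup/OpenCL-ICD-Loader.git
cmake -S OpenCL-ICD-Loader -B build-mingw \
      -DCMAKE_SYSTEM_NAME=Windows \
      -DCMAKE_C_COMPILER=x86_64-w64-mingw32-gcc \
      -DOPENCL_ICD_LOADER_HEADERS_DIR=$PWD/OpenCL-Headers
cmake --build build-mingw
# then compile against the cross-built headers and library:
x86_64-w64-mingw32-g++ code.cpp -o code.exe -I OpenCL-Headers -L build-mingw -lOpenCL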
I am trying to cross-compile the npm sqlite3 package with SQLCipher support. I am using Ubuntu 16.04 to cross-compile for a Linux ARMv7-based SoC (system on chip).
So I started by cross-compiling OpenSSL in order to build SQLCipher for ARM. I successfully cross-compiled SQLCipher to produce a static library (libsqlcipher.a).
Now I am working on the Node.js side of the project. I need sqlite3 with SQLCipher support, compiled for ARM. I have been using the SoC's SDK for the build so far.
I am using node v4.6.1 and npm v2.15.9 to cross-compile. I made sure I have the same versions installed on Ubuntu as on the SoC.
The command I use to cross-compile is as follows:
npm install sqlite3 --target_arch=arm --enable-static=yes --build-from-source --sqlite_libname=sqlcipher -fPIC --sqlite=home/onkar/Library/sqlcipher-master/.libs --verbose
I exported the location of the libsqlcipher.a to LDFLAGS. I get the following error when I try to cross compile. Can someone help me with this error?
/home/linuximage/sdk/sysroots/x86_64-angstromsdk-linux/usr/libexec/arm-angstrom-linux-gnueabi/gcc/arm-angstrom-linux-gnueabi/5.2.1/real-ld: error: /home/Library/sqlcipher-master/.libs/libsqlcipher.a(sqlite3.o): requires unsupported dynamic reloc R_ARM_THM_MOVW_ABS_NC; recompile with -fPIC
collect2: error: ld returned 1 exit status
node_sqlite3.target.mk:129: recipe for target 'Release/obj.target/node_sqlite3.node' failed
make: *** [Release/obj.target/node_sqlite3.node] Error 1
Please let me know if you require any additional information, I would be more than happy to provide you with the same.
Thanks,
Onkar
In the first instance, you should check if the -fPIC (position independent code) flag was correctly applied when the libsqlcipher.a file was originally created.
In your output above, it looks like the linker is using the file at:
/home/Library/sqlcipher-master/.libs/libsqlcipher.a
Run the command
objdump -r /home/Library/sqlcipher-master/.libs/libsqlcipher.a | more
... and inspect the relocation entries listed under the lines beginning with
RELOCATION RECORDS FOR
If you see absolute relocation types such as R_ARM_THM_MOVW_ABS_NC (the one named in your error), then the library doesn't contain position-independent code.
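If that turns out to be the case, a sketch of rebuilding the static library with -fPIC follows (the --host triplet is taken from the toolchain path in your error; SQLCipher's usual crypto-related configure options, such as pointing it at your cross-compiled OpenSSL, are omitted here and should stay as they were in your working build):
cd /home/Library/sqlcipher-master
make clean
./configure --host=arm-angstrom-linux-gnueabi --enable-static \
            CFLAGS="-fPIC -Os" LDFLAGS="-fPIC"
make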
Compiling the Xcode project fails with the following error:
'missing required architecture arm64 in file /Users/*/Git/ocr/opencv2.framework/opencv2'
It works well if I change Architectures (under Build Settings) to (armv7, armv7s) instead of (armv7, armv7s, arm64).
How to change the opencv python build script, to add arm64 support to opencv2.framework?
The latest OpenCV iOS framework supports 64-bit by default.
It can be downloaded at: OpenCV download page
I modified the following to make it build, though I haven't got an arm64 iOS device to test at the moment.
Edit: I also had to follow https://stackoverflow.com/a/17025423/1094400
Assuming "opencv" is the folder containing the opencv source from Github:
in each of gzlib.c, gzread.c, gzwrite.c located in opencv/3rdparty/zlib/ add:
#include <unistd.h>
at the top after the existing include.
In addition open opencv/platforms/ios/cmake/Modules/Platform/iOS.cmake and change line 88 from:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_32_BIT)" CACHE string "Build architecture for iOS")
to:
set (CMAKE_OSX_ARCHITECTURES "$(ARCHS_STANDARD_INCLUDING_64_BIT)" CACHE string "Build architecture for iOS")
Furthermore change the buildscript at opencv/platforms/ios/build_framework.py in lines 99 and 100 from:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "i386"]
to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
The resulting library will include the following:
$ xcrun -sdk iphoneos lipo -info opencv2
Architectures in the fat file: opencv2 are: armv7 armv7s i386 x86_64 arm64
I have a remaining concern regarding opencv/platforms/ios/cmake/Toolchain-iPhoneOS_Xcode.cmake, which defines the size of a data pointer as 4 in lines 14 and 17.
It should presumably be 8 for 64-bit, and since I haven't tested whether the compiled library works on arm64, I would suggest further investigation at this point if it does not run properly.
micahp's answer was almost perfect, but missed the simulator version. So modify platforms/ios/build_framework.py to:
targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator", "iPhoneSimulator"]
archs = ["armv7", "armv7s", "arm64", "i386", "x86_64"]
You'll need to download the command line tools for Xcode 5.0.1 and then run
python opencv/platforms/ios/build_framework.py ios
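To confirm that all five slices made it into the framework, you can inspect the resulting binary as shown earlier (the output path is an assumption based on the "ios" argument; adjust it to wherever the framework is produced):
xcrun -sdk iphoneos lipo -info ios/opencv2.framework/opencv2
# expected: armv7 armv7s i386 x86_64 arm64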
Try waiting until next month. Apple will release a new Xcode with better 32/64-bit support:
https://developer.apple.com/news/index.php?id=9162013a
Modify "build_frameworks.py" to:
def build_framework(srcroot, dstroot):
    "main function to do all the work"
    targets = ["iPhoneOS", "iPhoneOS", "iPhoneOS", "iPhoneSimulator"]
    archs = ["armv7", "armv7s", "arm64", "i386"]
    for i in range(len(targets)):
        build_opencv(srcroot, os.path.join(dstroot, "build"), targets[i], archs[i])
    put_framework_together(srcroot, dstroot)
#Jan, I followed your instructions, but OpenCV still doesn't run on arm64. You made such a detailed and wonderful answer - why not check it out on a simulator and see if you can make it run? :-)
FWIW, I think it might be harder than it seems. On the openCV stackoverflow clone, there's an indication that this problem might be non-trivial.
Instead of using the terminal commands given in the OpenCV installation guide on the official website, use the following commands. This worked for me.
cd OpenCV-2.3.1
mkdir build
cd build
cmake -G "Unix Makefiles" ..
make
sudo make install
I was having a similar error, but the issue wasn't related to the arm64 compilation. I fixed it by adding libc++.dylib to the project.
I am trying to build omniORB libraries on RHEL 5.5.
I tried running configure with
CC=gcc and CXX=g++ and PYTHON=bin/omnipython
I run into this problem, where it complains:
gmake[3]: Entering directory `/home/local/NT/jayanthv/omniORB-4.1.4/src/lib/omniORB'
../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl
omniidl: ERROR!
omniidl: Could not open IDL compiler module _omniidlmodule.so
omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib
omniidl: (or set the PYTHONPATH environment variable)
omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: wrong ELF class: ELFCLASS64')
So, I tried to use the Intel C++ compiler instead, with
export CXX=/opt/intel/Compiler/11.1/080/bin/ia32/icc
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/080/lib/ia32
export PYTHON=/home/local/NT/jayanthv/omniORB-4.1.4/bin/omnipython
But, now it complains about
../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl
omniidl: ERROR!
omniidl: Could not open IDL compiler module _omniidlmodule.so
omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib
omniidl: (or set the PYTHONPATH environment variable)
omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: undefined symbol: __cxa_pure_virtual')
The OS is RHEL 5.5 on x86_64 architecture, and I am trying to build 32-bit binaries. I would appreciate any insight into this problem.
That's because omniidl is implemented as a Python extension module.
The _omniidlmodule.so was built as a 64-bit library (the compiler's default on x86_64), but the Python executable you are using is 32-bit, so it can't load it.
Check this out http://objectmix.com/object/196129-compiling-omniorb-32bits-libraries-64bits-machine-suse.html
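A quick way to confirm the mismatch (paths relative to the omniORB root, as in your output; the exact file output will vary):
file lib/_omniidlmodule.so   # reports ELF 64-bit
file bin/omnipython          # presumably reports ELF 32-bit; if it is a wrapper script, run file on the interpreter it invokes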
I finally found the magic combination to building omniORB on Linux using Intel compiler.
You see where it complains about '__cxa_pure_virtual' not found? This happens because the link can't find the library that provides that symbol, libstdc++.
So set CC="icc -lstdc++" or CC="gcc -lstdc++", depending on which compiler you are using. Do the same for CXX (if using g++, specify it as "g++ -lstdc++").
For Python, I used omnipython, which is a Python 1.5: PYTHON=bin/omnipython, which is interpreted relative to the omniORB root path.
You can see where it complains about 'wrong ELF class: ELFCLASS64'; that happens because a 64-bit library was built where a 32-bit one is needed.
So force your compiler and linker flags to 32-bit:
CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32
Once done, run your configure
./configure --prefix=/opt/omniInst --build=i686-pc-linux-gnu
Run gmake followed by gmake install, and you will see all the binaries and libs under omniInst or whichever prefix directory you specified.
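Putting the pieces above together, the whole sequence looks roughly like this (a sketch; swap in icc for gcc/g++ if you are using the Intel compiler):
export CC="gcc -lstdc++"
export CXX="g++ -lstdc++"
export PYTHON=bin/omnipython
export CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32
./configure --prefix=/opt/omniInst --build=i686-pc-linux-gnu
gmake && gmake install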