I am trying to cross-compile DPDK (for ARM64) from source, as instructed here:
https://doc.dpdk.org/guides/linux_gsg/cross_build_dpdk_for_arm64.html
But when I run make, I see this:
$ make config T=arm64_armv8_linux_gcc
make: Nothing to be done for 'config'.
I have checked out the main branch, and I am wondering whether compiling through the "Makefile" is no longer supported and the Meson build system has replaced it.
I am on the top commit of the master branch:
https://github.com/DPDK/dpdk/commit/9d620630ea30386d7fc2ff192656a9051b6dc6b5
DPDK version:
21.02.0-rc0
Toolchain version is:
aarch64-linux-gnu-gcc --version
aarch64-linux-gnu-gcc (Linaro GCC 7.3-2018.05) 7.3.1 20180425 [linaro-7.3-2018.05 revision d29120a424ecfbc167ef90065c0eeb7f91977701]
Host machine details are:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
DPDK removed support for the Makefile-based build system in release 20.11. You have to rely on Meson and Ninja instead.
Please use the commands below as a guide for your cross build:
meson arm64-build --cross-file config/arm/arm64_armv8_linux_gcc
ninja -C arm64-build
DPDK LTS 19.11.6 still uses the Makefile build system.
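If you also want to install the cross-built libraries into a staging directory for your target sysroot, a minimal sketch of the full sequence could look like this (the arm64-build directory name comes from the commands above; the staging path is just a placeholder):
meson arm64-build --cross-file config/arm/arm64_armv8_linux_gcc
ninja -C arm64-build
# install into a staging directory instead of the host's /usr/local
DESTDIR=/path/to/arm64-staging ninja -C arm64-build install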
I'm using an M1 Pro MacBook Pro.
Up until now, I used an Intel MacBook.
My program is written in C/C++ and the target is Ubuntu x86_64.
I tried running an Ubuntu x86 Docker container (via QEMU) and it's super slow, to the point of being unusable.
I have Ubuntu Linux (ARM) installed using Parallels and would like to compile for an x86 target instead of ARM.
How do I do it?
On Ubuntu, I would suggest an apt install gcc-x86-64-linux-gnu g++-x86-64-linux-gnu, and then invoking the installed compilers with the x86-64-linux-gnu prefix (for gcc, x86-64-linux-gnu-gcc) to create x86_64 binaries.
Do note that if you target x86_64 you won't be able to run the resulting programs natively on your machine, but you should be able to package the binaries for execution on an x86_64 machine.
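As a quick sanity check, here is a minimal sketch of cross-building and inspecting a binary (the file names are just examples, and on some toolchain versions the installed driver is spelled x86_64-linux-gnu-gcc, with an underscore in the triplet):
x86-64-linux-gnu-gcc -o hello hello.c    # cross-compile a C source file for x86_64
x86-64-linux-gnu-g++ -o hello hello.cpp  # or use the g++ driver for C++ code
file hello                               # should report an x86-64 ELF executable, not ARM aarch64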
Install Docker Desktop on your Mac and run this Docker container with the command:
docker container run --platform=linux/amd64 -it -p 6080:6080 -e WIDTH=1920 -e HEIGHT=1080 yoas1/xubuntu-desktop:1.0
Don't forget to mount a volume for your code directory.
In your browser, go to http://localhost:6080/vnc.html to access the Xubuntu desktop.
Image on Docker Hub
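For the volume, a hedged sketch of the same command with a bind mount added (the host and container paths here are just examples, not something the image requires):
docker container run --platform=linux/amd64 -it -p 6080:6080 \
  -e WIDTH=1920 -e HEIGHT=1080 \
  -v "$PWD":/root/code \
  yoas1/xubuntu-desktop:1.0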
Good morning. I have created a new Phoenix 1.3 app on macOS Sierra 10.12.6 (16G29), with Erlang 19.3 and Elixir 1.4.4. I added ejabberd as a dependency ({:ejabberd, ">= v2.1.13", github: "processone/ejabberd"}) and ran mix deps.get without error. When I try to compile the project, it fails with the following:
==> fast_xml
Unchecked dependencies for environment prod:
* p1_utils (Hex package)
the dependency is not locked (run "mix deps.get" to generate "mix.lock" file)
* elixir_make (Hex package)
the dependency is not available, run "mix deps.get"
could not compile dependency :fast_xml, "mix compile" failed. You can recompile this dependency with "mix deps.compile fast_xml", update it with "mix deps.update fast_xml" or clean it with "mix deps.clean fast_xml"
==> sebago
** (Mix) Can't continue due to errors on dependencies
If I try to compile fast_xml from the deps directory using rebar (rebar 2.6.4 18 20170508_132308 git 2.6.4-6-g2a52f60), I initially get an error that it is missing dependencies, so I run mix deps.get followed by mix compile and get the following:
==> fast_xml
make: Makefile: No such file or directory
make: *** No rule to make target `Makefile'. Stop.
** (Mix) Could not compile with "make" (exit status: 2).
Depending on your OS, make sure to follow these instructions:
Mac OS X: You need to have gcc and make installed. Try running the
commands "gcc --version" and / or "make --version". If these programs
are not installed, you will be prompted to install them.
Linux: You need to have gcc and make installed. If you are using
Ubuntu or any other Debian-based system, install the packages
"build-essential". Also install "erlang-dev" package if not
included in your Erlang/OTP version. If you're on Fedora, run
"dnf group install 'Development Tools'".
gcc --version:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
make --version:
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
Does anybody have a suggestion where to start?
Thanks.
I am compiling an application using the GCC ARM cross compiler (arm-eabi-g++). I want to execute the code on a Jetson TK1 target. When I copy the executable and run it on the target, I get an error saying -bash: ./Proj: No such file or directory.
Should I include any extra options while building in order to run it on the target?
Can anyone suggest any other cross compiler that works?
The problem might be a mismatch between the system architecture and the program architecture.
Check the architecture of the TX1 with the command:
uname -p
Starting with JetPack v2.2, the Jetson TX1 can be installed with either the aarch64 or the armhf architecture.
The simplest way to cross compile is to use arm-linux-gnueabihf-g++ for armhf and aarch64-linux-gnu-g++ for aarch64. You can run armhf programs on aarch64, but you need to install armhf versions of all the necessary libraries, such as libc or libstdc++:
sudo apt-get install libc6-dev:armhf
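To see whether such a mismatch is the problem here, a small sketch (Proj is the binary name from the question; main.cpp is a made-up source file name):
uname -p       # architecture of the Jetson itself
file ./Proj    # architecture and ABI the binary was actually built for
# if they disagree, rebuild with a Linux (glibc) cross toolchain, e.g. for armhf:
arm-linux-gnueabihf-g++ -o Proj main.cpp
The "-bash: ./Proj: No such file or directory" message usually means the kernel could not find the dynamic loader the binary was linked against, which is what happens when the binary was built by a bare-metal toolchain such as arm-eabi-g++ or for the wrong architecture.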
I am currently running Ubuntu 14.04 x86_64. I want to cross-compile RethinkDB for my Raspberry Pi for experimental purposes; this is supported as of 1.12 (and people have apparently compiled it successfully).
I have installed the toolchain:
sudo apt-get install g++-4.7-arm-linux-gnueabi gcc-arm-linux-gnueabi
export CXX=/usr/bin/arm-linux-gnueabi-g++-4.7
export CC=/usr/bin/arm-linux-gnueabi-gcc-4.7
export AR=/usr/bin/arm-linux-gnueabi-ar
export LD=/usr/bin/arm-linux-gnueabi-ld
Configuration runs:
./configure --ccache --allow-fetch --without-tcmalloc
* Detecting system configuration
Bash: 4.3.8(1)-release
Use ccache: yes
C++ Compiler: GCC 4.7 (/usr/bin/arm-linux-gnueabi-g++-4.7)
Host System: arm-linux-gnueabi
Build System: Linux 3.13.0-24-generic x86_64
Cross-compiling: yes
Host Operating System: Linux
Without tcmalloc: yes
Build client drivers: no
Build Architecture: x86_64
Precompiled web assets: no
Protobuf compiler: /usr/bin/protoc
Node.js package manager: /usr/bin/npm
LESS css: external/less_1.6.2
CoffeeScript: external/coffee-script_1.7.1
Handlebars: external/handlebars_1.3.0
Browserify: external/browserify_3.24.13
ProtoBuf.js: external/protobufjs_2.0.4
wget: /usr/bin/wget
curl: /usr/bin/curl
protobuf: external/protobuf_2.5.0
v8: external/v8_3.22.24.17
RE2: external/re2_20140111
z: external/zlib_1.2.8
Google Test: external/gtest_1.6.0
termcap: no
Test protobuf: external/protobuf_2.5.0
Test boost: external/boost_1.55.0
Installation prefix: /usr/local
Configuration prefix: /usr/local/etc
Runtime data prefix: /usr/local/var
* Warning: ARM support is still experimental
* Wrote configuration to config.mk
However, make fails:
/bin/bash: ccache: command not found
Any pointers to getting this working?
Just install ccache. It would be helpful even if you did not have this issue, because it speeds up compilation somewhat on embedded targets. We also use it at our company, FWIW, even together with icecream.
sudo apt-get install ccache
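Once it is installed, a quick sanity check (just a sketch, nothing RethinkDB-specific) before re-running ./configure and make:
which ccache       # should print the path of the installed binary
ccache --version   # confirm it runs
ccache -s          # show cache statistics; the numbers grow as the build proceeds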
The current state of cross-compiling RethinkDB for the Raspberry Pi is complicated. It does not seem to be actively maintained, and setting it up has become harder over the years. It is, however, possible with a little bit of effort. It requires delving into the RethinkDB build framework, which is based on good ol' makefiles...
You can find an attempt at doing so here in a Dockerfile. It basically creates a Docker container with all the dependencies (especially the cross compiler), modifies the config, and builds RethinkDB. The outcome is a RethinkDB package for the Raspberry Pi.
You can either use it as is, or have a look at it and reproduce the steps on your own.
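If you go the Dockerfile route, the workflow would presumably look something like the sketch below (the image tag, container name, and output path are made-up placeholders; the linked Dockerfile defines where the package actually ends up):
docker build -t rethinkdb-rpi-build .               # build the image from the Dockerfile
docker run --name rpi-build rethinkdb-rpi-build     # run the cross build inside a container
docker cp rpi-build:/path/to/rethinkdb_armhf.deb .  # copy the resulting package out of the container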
Are there specific steps I can take to build the Xuggle Xuggler source code for Windows 32-bit, Windows 64-bit, Linux 32-bit, and Linux 64-bit? I've tried multiple times on multiple systems and keep getting lots of different errors.
Update
I spent several days trying to get Xuggle Xuggler to compile (and cross-compile). I successfully tackled compiling both the original GPL version of the code and an LGPL version. I thought I'd post an answer to my own question on Stack Overflow to share my knowledge.
Update on Raspberry Pi
I was also able to build and run Xuggler on the Raspberry Pi following these same basic instructions below. I just used my LGPL version of the code that I maintain on Github, and made modifications for the Pi. I can use the compiled JAR file and binaries on my Radxa Rock (another ARM device) too. If you're interested in building on the Pi, you can use my pi branch:
https://github.com/e-d/xuggle-xuggler
If you are lazy and just want the precompiled .jar files for the Pi/ARM:
GPL Version (supports H.264)
LGPL Version (no H.264 support)
Here is a formatted version of my answer in a published Google Document.
For completeness (and in case the link goes dead one day), here is less-nicely-formatted text:
Building Xuggle Xuggler (GPL and LGPL Licensed Versions)
[Linux 32-bit, Linux 64-bit, Windows 32-bit, Windows 64-bit]
To build the Xuggle Xuggler library, you will need two Linux virtual machines running Ubuntu 11.10 (32-bit and 64-bit versions of the operating system). The 32-bit version of the OS is required to build Linux 32-bit binaries and to cross-compile Windows 32-bit and Windows 64-bit binaries. The 64-bit version of the OS is required to build Linux 64-bit binaries.
Using VirtualBox, I created the two virtual machines discussed above with the ubuntu-11.10-server-i386.iso and ubuntu-11.10-server-amd64.iso disk images. These are headless server versions of Ubuntu. After installation of the OS, follow these steps to build Xuggler (you are welcome to try different dependency versions and not use the root user, but this is what I did to build successfully):
Change to root user:
sudo su
Just use root’s home directory:
cd /root
Update apt-get to use specific repository:
apt-get install python-software-properties
add-apt-repository ppa:ferramroberto/java
apt-get update
Install Java:
apt-get install sun-java6-jdk sun-java6-plugin
Verify the HotSpot Java 6 JVM is the default java:
java -version
If the incorrect version of Java appears, configure the default by running:
update-alternatives --config java
Install gcc, g++, make and all the other build essentials:
apt-get install build-essential
Install YASM:
apt-get install yasm
Install Open SSL:
apt-get install openssl
Install Package Config:
apt-get install pkg-config
Install Git:
apt-get install git
Install Ant:
apt-get install ant-optional
Install JUnit:
apt-get install junit
Install MinGW-w64 to be able to build for Windows (mingw-w64 can target both 32-bit and 64-bit Windows):
apt-get install mingw-w64
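As a quick check that the MinGW cross toolchains were installed, you can query the prefixed compilers the mingw-w64 packages normally provide (a sketch; the prefixes match the build targets used further below):
x86_64-w64-mingw32-gcc --version   # 64-bit Windows target
i686-w64-mingw32-gcc --version     # 32-bit Windows target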
Download the LGPL-configured Xuggle source code (Ed's fork of Jeff Wallace's fork of the original GPL Xuggle code) or the original GPL version:
LGPL: git clone https://github.com/e-d/xuggle-xuggler.git
GPL: git clone https://github.com/xuggle/xuggle-xuggler.git
Compile and build the JAR files (with the binaries inside). Be sure to run the 64-bit Linux build on the 64-bit version of Ubuntu. Also note that between builds you will need to run "ant clobber" to remove all of the compiled files from the previous architecture (see the sketch after this list). To build, run:
(32/64-bit Linux): ant stage
(64-bit Windows): ant -Dbuild.configure.os=x86_64-w64-mingw32 stage
(32-bit Windows): ant -Dbuild.configure.os=i686-w64-mingw32 stage
The JAR files will be in the /dist/lib directory.
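For example, switching cleanly from a Linux build to a 64-bit Windows build might look like the following sketch (the targets come straight from the commands above):
ant clobber                                        # remove the artifacts compiled for the previous architecture
ant -Dbuild.configure.os=x86_64-w64-mingw32 stage  # rebuild, this time for 64-bit Windows
ls dist/lib                                        # the resulting JAR files end up here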
If you need the Linux binaries to additionally work on CentOS, you'll now need to change the version of GCC and G++ from 4.6 to 4.4.
Install GCC 4.4:
apt-get install gcc-4.4
Update symbolic links to use 4.4 (the arch-specific link will be different on 32-bit VM):
rm /usr/bin/gcc
ln -s /usr/bin/gcc-4.4 /usr/bin/gcc
rm /usr/bin/x86_64-linux-gnu-gcc
ln -s /usr/bin/x86_64-linux-gnu-gcc-4.4 /usr/bin/x86_64-linux-gnu-gcc
Install G++ 4.4:
apt-get install g++-4.4
Update symbolic links to use 4.4 (the arch-specific link will be different on 32-bit VM):
rm /usr/bin/cpp
ln -s /usr/bin/cpp-4.4 /usr/bin/cpp
rm /usr/bin/x86_64-linux-gnu-cpp
ln -s /usr/bin/x86_64-linux-gnu-cpp-4.4 /usr/bin/x86_64-linux-gnu-cpp
rm /usr/bin/g++
ln -s /usr/bin/g++-4.4 /usr/bin/g++
rm /usr/bin/x86_64-linux-gnu-g++
ln -s /usr/bin/x86_64-linux-gnu-g++-4.4 /usr/bin/x86_64-linux-gnu-g++
Verify default versions:
gcc --version
c++ --version
cpp --version
g++ --version
You can now run the builds the same way as before (you only need to re-build the Linux binaries). The binaries will now be compatible with slightly older versions of many Linux distros (including CentOS). These 4.4-compiled binaries should still work everywhere the 4.6-compiled versions would run.
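To see why the GCC downgrade helps, one way to check which libstdc++ and glibc symbol versions a built library actually requires is sketched below (the library file name is only an example; point the commands at whatever the build produced in dist/lib):
objdump -T dist/lib/libxuggle-xuggler.so | grep -o 'GLIBCXX_[0-9.]*' | sort -u   # libstdc++ version tags required
strings dist/lib/libxuggle-xuggler.so | grep -o 'GLIBC_[0-9.]*' | sort -u        # glibc version tags required
The lower the maximum versions reported, the older the distributions (such as CentOS) the binaries will load on.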
Special thanks to this blog for pointing me in the right direction and giving me the majority of what I detailed above.