make: *** [package/mac80211/compile] Error 1 in OpenWRT - build

I have compiled my SDK several times and I always get the same result when I run make V=99. Here are the errors that appear:
build_dir/linux-brcm47xx/compat-wireless-2011-05-27/drivers/net/wireless/b43/main.c:4240:3: error: implicit declaration of function 'ssb_commit_settings'
make[8]: *** [/home/rik/client/openwrt/build_dir/linux-brcm47xx/compat-wireless-2011-05-27/drivers/net/wireless/b43/main.o] Error 1
make[3]: Leaving directory `/home/rik/client/openwrt/package/mac80211'
make[2]: *** [package/mac80211/compile] Error 2
make[2]: Leaving directory `/home/rik/client/openwrt'
make[1]: *** [/home/rik/client/openwrt/staging_dir/target-mipsel_uClibc-0.9.32/stamp/.package_compile] Error 2

The answer to the first error can be found here: Why this "Implicit declaration of function 'X'"?
For the other part of the question ("I compiled my SDK several times and always I have the same result when I did make V=99" and the make[1][2][3] errors), keep in mind that if you get an error while cross-compiling a package, you first need to (obviously) fix the error in your source code (main.c in your case) and also (important!) go to /home/rik/client/openwrt/dl and delete [name_of_your_package].tar.gz. For some reason the toolchain fetches the source archive ([name_of_your_package].tar.gz) only once and does not overwrite it when you run make package/[name]/compile V=99, even after you have changed your source code, so you need to delete that file manually. You kept getting those errors because the toolchain was always compiling the very first source code you wrote, and of course the result was always the same.
Simply put, the cross-compilation steps are as follows:
1. Run make menuconfig and select the desired package.
2. Run make package/[name]/compile.
3. If you get a compilation error, delete [name_of_your_package].tar.gz from /home/rik/client/openwrt/dl.
4. Correct the source code and repeat from step 1.
That is, every time gcc raises an error, you first need to delete the source fetched by the toolchain before trying to compile again.
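As a concrete sketch of that cycle (the archive name is a placeholder; check the dl/ directory for the file that matches your package):

cd /home/rik/client/openwrt
rm dl/[name_of_your_package].tar.gz      # delete the stale source archive fetched by the toolchain
# fix the error in your source code (main.c in this case), then recompile:
make package/mac80211/compile V=99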

Related

Buildroot compile error - Maybe you need to increase the filesystem size (BR2_TARGET_ROOTFS_EXT2_SIZE)

I would appreciate help if anyone has insight into how to fix the following Buildroot compile error, which I have been struggling with for almost a week.
I used the following commands to pull the Buildroot repo and have tried this multiple times. I am using the arm64 default config, and before running make I enable the following flags:
BR2_TARGET_ROOTFS_CPIO=y
BR2_TARGET_GENERIC_GETTY=y
BR2_TARGET_GENERIC_GETTY_PORT="ttyAMA0"
The compile is started with the following commands:
git clone git://git.buildroot.net/buildroot
make qemu_aarch64_virt_defconfig
make
Then I see the following error:
mke2fs 1.46.3 (27-Jul-2021)
mkfs.ext4: No such file or directory while trying to determine filesystem size
*** Maybe you need to increase the filesystem size (BR2_TARGET_ROOTFS_EXT2_SIZE)
fs/ext2/ext2.mk:46: recipe for target '/home/jn4/linux-buildroot/buildroot/output/images/rootfs.ext2' failed
make[1]: *** [/home/jn4/linux-buildroot/buildroot/output/images/rootfs.ext2] Error 1
Makefile:84: recipe for target '_all' failed
make: *** [_all] Error 2
To change the filesystem size I have tried modifying the following parameter in the .config file:
BR2_TARGET_ROOTFS_EXT2_SIZE="250M"
I have tried many sizes (60M, 120M, 250M, 256M, 512M, 1024M), but all of them fail to compile with the same error. This seems to be a common problem with Buildroot, and there are a few other posts (in Git issue trackers and elsewhere) that recommend a size of 250M to solve it, yet I continue to see the compile error with every size I have tried.
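To be explicit, this is roughly how I apply the change and rebuild (a sketch; the menuconfig location is from memory and may differ between Buildroot versions):

# set the exact rootfs size, either by editing .config directly:
#   BR2_TARGET_ROOTFS_EXT2_SIZE="250M"
# or via: make menuconfig -> Filesystem images -> ext2/3/4 root filesystem -> exact size
make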
I would appreciate any insight, since I am stuck at this point. Thank you.

CARPET driver creates errors

I am using the Einstein Toolkit on Windows via Cygwin.
When I use the Carpet driver, I get errors caused by the HDF5 library.
I installed the following packages:
curl, perl, subversion, git, gcc-{core,fortran,g++}, make, patch, libjpeg-devel, openssl-devel, xgraph, vim.
It works well with PUGH, but CARPET does not.
Kindly tell me how I can fix it.
The errors:
/home/hp/cactus/configs/carpet/build/CarpetLib/limits.cc: In function ‘void CarpetLib::set_system_limits()’:
/home/hp/cactus/configs/carpet/build/CarpetLib/limits.cc:27:13: error: ‘RLIMIT_RSS’ was not declared in this scope
   set_limit(RLIMIT_RSS, "resident set size", max_memory_size_MB);
/home/hp/cactus/configs/carpet/build/CarpetLib/limits.cc:27:13: note: suggested alternative: ‘RLIMIT_AS’
   set_limit(RLIMIT_RSS, "resident set size", max_memory_size_MB);
Running configuration script for thorn MPI:
MPI selected, but MPI_DIR is not set.
Computing settings... Found MPI compiler wrapper at /usr/bin/mpic++! Successfully configured MPI.
Finished running configuration script for thorn MPI.
make[3]: *** [/home/hp/cactus/configs/carpet/config-data/make.config.rules:281: limits.cc.o] Error 1
make[2]: *** [/home/hp/cactus/lib/make/make.thornlib:113: make.checked] Error 2
make[1]: *** [/home/hp/cactus/lib/make/make.configuration:179: /home/hp/cactus/configs/carpet/lib/libthorn_CarpetLib.a] Error 2
make: *** [Makefile:263: carpet] Error 2
This was reported in 2013:
The warnings that are reported are harmless, since the content of the file does not matter -- what only matters is that there is at least one file generated when the self-test succeeds.
In general, scheduling a routine into a non-existing schedule bin means that this routine is not executed.
In many cases, this is just the right thing to do. In other cases, this is e.g. due to an error in schedule.ccl, which is why we moved from "silently not scheduling" to reporting warnings about these.
In this case, the warnings are harmless and no need to worry, since the thorns Boundary and SymBase are not actually required by CartGrid3D. One wishes there was a way to indicate this in the schedule.ccl so that these warnings could be omitted.
Regarding the use of CARPET and the errors related to HDF5, here are all the current issues for the CARPET component with HDF5 in their description.
A similar error was seen in this thread.
It illustrates that the error messages printed before the make ... Error lines can help in figuring out what is going on.

While compiling with the --fast flag, I ran into an error I am not sure of

I am using the --fast flag for the first time; on my first attempt I got this error:
warning: --specialize was set, but CHPL_TARGET_CPU is 'unknown'.
If you want any specialization to occur please set CHPL_TARGET_CPU to a proper value.
so I ran this command:
export CHPL_TARGET_CPU=aarch64
since it is the architecture of my Jetson Nano board
then I got this error:
/home/chico/chapel-1.20.0/third-party/gasnet/Makefile.setup:6: /home/chico/chapel-1.20.0/third-party/gasnet/install/linux64-gnu-aarch64-none/substrate-udp/seg-everything/nodbg/include/udp-conduit/udp-par.mak: No such file or directory
make: *** No rule to make target '/home/chico/chapel-1.20.0/third-party/gasnet/install/linux64-gnu-aarch64-none/substrate-udp/seg-everything/nodbg/include/udp-conduit/udp-par.mak'. Stop.
error: compiling generated source
I do not get an executable after trying to compile my code.
This error is a (poor) indication that the Chapel runtime has not been built for your current CHPL_* configuration; in this case, the change to CHPL_TARGET_CPU is the issue. If you run cd $CHPL_HOME && make (or gmake) while CHPL_TARGET_CPU is still set, the runtime will be rebuilt for your current settings, and when you recompile the Chapel program the error should go away.
Note that multiple builds of Chapel can coexist with different CHPL_TARGET_CPU settings.
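For reference, the full sequence looks roughly like this (a sketch; the program name myprogram.chpl and its path are placeholders):

export CHPL_TARGET_CPU=aarch64
cd $CHPL_HOME
make                          # or gmake; rebuilds the runtime for the new CHPL_TARGET_CPU
cd /path/to/your/code         # placeholder: wherever your Chapel program lives
chpl --fast myprogram.chpl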

Error when building Ogre on Linux: narrowing conversion

I am trying to build Ogre on Linux. I downloaded the source files and uncompressed them, created a build folder, and ran "cmake ..". After that completed, I ran "make -j4" (I do have 4 cores, and I have also tried plain make). It gets to 49% and stops every time. I have also downloaded the CMake GUI, ran configure, checked all the boxes, hit configure again and then generate, and tried running "make" again.
Downloads/ogre_src_v1-8-1/RenderSystems/GL/src/atifs/src/ps_1_4.cpp:689:1: error: narrowing conversion of ‘-35051’ from ‘int’ to ‘uint {aka unsigned int}’ inside { } [-Wnarrowing] };
That error pops up several times, except each instance refers to a different line of the code in ps_1_4.cpp and the number ‘-35051’ is different.
There are also several warnings about casting const GLboolean* to GLboolean* throughout the build, but this is the message I get at the end:
RenderSystems/GL/CMakeFiles/RenderSystem_GL.dir/build.make:542: recipe for target 'RenderSystems/GL/CMakeFiles/RenderSystem_GL.dir/__/__/RenderSystem_GL/compile_RenderSystem_GL_0.cpp.o' failed
make[2]: *** [RenderSystems/GL/CMakeFiles/RenderSystem_GL.dir/__/__/RenderSystem_GL/compile_RenderSystem_GL_0.cpp.o] Error 1
CMakeFiles/Makefile2:1057: recipe for target 'RenderSystems/GL/CMakeFiles/RenderSystem_GL.dir/all' failed
make[1]: *** [RenderSystems/GL/CMakeFiles/RenderSystem_GL.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
Also, every time I try a new approach I delete the build folder and start all over, and each time it ends with this message. I am still relatively new to Linux and CMake, so can you explain what is going on and how you came to your conclusion?
Note: I have found one forum thread that talks about this, but I don't know where the build function is or how to change the CXX flags.
The referenced post suggests that Ogre can be built successfully using the gnu++98 standard (which is C++98 plus GNU extensions).
The standard is set via compiler flags; in the case of CMake, the flags may be passed as:
cmake -DCMAKE_CXX_FLAGS="--std=gnu++98" ..
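For example, starting from a clean build directory as you already do (a sketch; the source path is an assumption based on your error message):

cd ~/Downloads/ogre_src_v1-8-1
rm -rf build && mkdir build && cd build
cmake -DCMAKE_CXX_FLAGS="--std=gnu++98" ..
make -j4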

opencv_perf_stitching_Release.gch not generated by cmakefiles

I am trying to configure OpenCV with Code::Blocks, but it gets stuck during the mingw32-make step, giving me this error:
[ 94%] Generating perf_precomp.hpp.gch/opencv_perf_stitching_Release.gch
C:/openCV/opencv/build/x86/mingw/modules/stitching/perf_precomp.hpp:1:0: fatal error: can't create precompiled header C:/opeCV/opencv/build/x86/mingw/modules/stitching/perf_precomp.hpp.gch/opencv_perf_stitching_Release.gch: No such file or directory
 #ifdef __GNUC__
 ^
compilation terminated.
modules\stitching\CMakeFiles\pch_Generate_opencv_perf_stitching.dir\build.make:61: recipe for target 'modules/stitching/perf_precomp.hpp.gch/opencv_perf_stitching_Release.gch' failed
mingw32-make[2]: *** [modules/stitching/perf_precomp.hpp.gch/opencv_perf_stitching_Release.gch] Error 1
CMakeFiles\Makefile2:6569: recipe for target 'modules/stitching/CMakeFiles/pch_Generate_opencv_perf_stitching.dir/all' failed
mingw32-make[1]: *** [modules/stitching/CMakeFiles/pch_Generate_opencv_perf_stitching.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
mingw32-make: *** [all] Error 2
I am unable to resolve it!
I am using Windows 7 32-bit.
I met a similar problem, which may be caused by the cmd input-length limitation.
The cmd input limit in Windows 7 is about 8,000 characters, and in most cases that is enough.
However, in the OpenCV build, one component (opencv_perf_stitching_Release) may break this limit because of a very long command line. Since the folder path length differs from user to user, the cut-off point in the executed command falls at an unpredictable position, so the resulting error messages are also totally different. In my case I got the message unrecognized command line option '-sse', but I suspect we hit the same problem.
The simplest solution is to reduce the nesting level of your build folder. In your case, you can change your install path from C:/opeCV/opencv/ to C:/opencv/ and try again.
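For example, something along these lines (a sketch; the CMake generator and the fresh build directory are assumptions, adjust them to your setup):

move C:\opeCV\opencv C:\opencv
cd C:\opencv
rem the old build tree still caches the old paths, so start with a fresh build folder
rmdir /s /q build
mkdir build
cd build
cmake -G "MinGW Makefiles" ..
mingw32-make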