Adding OpenGL to core-image-base in Yocto

I am trying to create a basic Linux image with OpenGL support for an RPi3. I have successfully created a bootable image and the SDK, but it doesn't contain OpenGL.
I have run
bitbake core-image-base
with the following changes/additions to conf/local.conf, without success:
MACHINE ??= "raspberrypi3"
...
IMAGE_FEATURES += "ssh-server-dropbear"
MACHINE_FEATURES_append = " vc4graphics"
DISTRO_FEATURES_append = " opengl"
It correctly adds the SSH server, but when I log into the flashed RPi, I can't find any OpenGL libraries.
What is the correct way to add OpenGL to the image?
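For reference, a minimal local.conf sketch that should pull the GL stack into the image; the IMAGE_INSTALL line is an assumption based on the oe-core mesa sub-package names, since the opengl DISTRO_FEATURE only configures recipes and does not install any packages by itself:
MACHINE ??= "raspberrypi3"
MACHINE_FEATURES_append = " vc4graphics"
DISTRO_FEATURES_append = " opengl"
# assumed mesa sub-package names; adjust to what your mesa recipe actually packages
IMAGE_INSTALL_append = " libegl-mesa libgles2-mesa libgl-mesa"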

Related

How to transfer RealSense data via ROS to other device(s)

I want to make the program under the
catkin_ws/src/realsense-ros-3.2.13/realsense2_camera
directory (from the ROS wrapper site) transfer RealSense data to other devices.
I've chosen to work with the .cpp files, since I've already written my program in C++ with the RealSense camera.
I want to know how the files inside the realsense2_camera/src directory relate to each other, and how and where the executables appear after catkin_make etc., because I want to modify those file(s), if they are able to transfer the RealSense data to other devices, to do so.
I think this is related to the publisher and subscriber programs from roscpp_tutorials, rospy_tutorials and beginner_tutorials. I was able to get publisher and subscriber programs communicating between different devices, though I don't know the theory of why they run (how and where the executable files appear).
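For the "how and where the executables appear" part, here is a minimal sketch, assuming a standard catkin workspace layout (the package name comes from the question):
cd ~/catkin_ws
catkin_make
# nodes declared with add_executable() in a package's CMakeLists.txt
# end up under devel/lib/<package_name>/
ls devel/lib/realsense2_camera/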
Environment
Device OS: Ubuntu 18.04
Device HW: Jetson Nano
rosdistro: melodic
python: Python 2.7.17
Realsense ROS Wrapper: 2.2.13
Realsense Viewer Version: 2.34.0
Camera: D435
D435 Firmware: 5.12.03.00
Thank you for your information.
I'd been wondering why I couldn't get the D435's data, but I solved it.
Starting realsense-viewer while roslaunching the .launch file, and restarting the terminal several times, can make the default .launch file launch, so rewriting the launch file might be able to drive the C++/Python file(s).
And I think your include-based method will make the problem simpler to solve.

Enabling the GStreamer qmlgl plug-in via Yocto

I would like to use the qmlgl plug-in (qmlglsink, qmlglsrc) in my application, but it is not available in the image.
Also, my environment is an ARM-based board, the Phytec_nunki.
gst-inspect-1.0 | grep qml returns no results.
I use Yocto for building images. As I understand from this link, qmlgl is part of the "GStreamer Good Plug-ins" bundle, but it is not enabled by default.
I inspected the sources of GStreamer downloaded by Yocto; the files with "qmlgl" are there. So I guess I have to enable it in some config file.
I tried to add
CORE_IMAGE_EXTRA_INSTALL += " \
gst-plugins-good-qmlgl\
"
into my local.conf file. Bitbake executed successfully, but the plug-in did not appear.
So, does anyone have an idea for solving it?
@UncleSav: using your own layer, do the following.
For example, if your layer is meta-xpto, create:
meta-xpto/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_%.bbappend
Inside the .bbappend, add:
inherit qmake5_paths
PACKAGECONFIG[qt5] = '--enable-qt \
--with-moc="${OE_QMAKE_PATH_EXTERNAL_HOST_BINS}/moc" \
--with-uic="${OE_QMAKE_PATH_EXTERNAL_HOST_BINS}/uic" \
--with-rcc="${OE_QMAKE_PATH_EXTERNAL_HOST_BINS}/rcc" \
,--disable-qt,gstreamer1.0-plugins-base qtbase qtdeclarative qtbase-native'
PACKAGECONFIG_append = " qt5"
With this change, we tell gstreamer1.0-plugins-good to build with the Qt option enabled and declare the necessary dependencies.
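To verify that the option actually took effect, one standard BitBake check is to inspect the expanded variable before building the image:
bitbake -e gstreamer1.0-plugins-good | grep '^PACKAGECONFIG='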
Additionally, if you are using an i.MX8 with newer BSPs, especially those with a 5.x Linux kernel, the PACKAGECONFIG option should be:
QT5WAYLANDDEPENDS = "${@bb.utils.contains("DISTRO_FEATURES", "wayland", "qtwayland", "", d)}"
PACKAGECONFIG[qt5] = "-Dqt5=enabled,-Dqt5=disabled,qtbase qtdeclarative qtbase-native ${QT5WAYLANDDEPENDS}"
PACKAGECONFIG_append = " qt5"
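Either way, after rebuilding and reflashing the image, the question's original check should now find the element on the target:
gst-inspect-1.0 | grep qml
gst-inspect-1.0 qmlglsink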

OpenGL version 2.1 is not supported by the graphics driver and may need to be updated

I am using mayavi to do some visualization tasks on my remote server with GPUs. When my code runs mlab.show(), the following error occurs:
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (8 8 8 0)
...
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 1 0)
ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797
vtkXOpenGLRenderWindow (0x559c336fd4e0): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
I am using Ubuntu 16.04, and here is some info about my remote server.
(base) zz@SYS-4028GR-TR:~$ glxinfo | grep OpenGL
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.3 Mesa 4.0.4
OpenGL extensions:
(base) zz@SYS-4028GR-TR:~$ glxinfo | grep render
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
OpenGL renderer string: Mesa GLX Indirect
Does anyone have any ideas about this situation? I tried to find a way to update Mesa on Ubuntu but failed. If there is any way to deal with this kind of problem, it would be very helpful.
I am using mayavi to do some visualization task on my remote server with GPUs.
"Remote Server", that's your problem right there. If you log in via SSH forwarding the X11 connection, all OpenGL commands are serialized as GLX commands and tunneled through the X11 connection over the network to your computer to be executed on your local graphics system.
If you have a GPU on the remote system, your best option these days is to use Xpra, configuring so that it launches its backing X server on the GPU and not with a virtual framebuffer device.
What this comes down to is installing the regular Xorg server. Modify /etc/X11/Xwrapper.config to allow the X server to be started by a regular user. You can then start the X server with xpra as the first client using the command line
startx /usr/bin/xpra start :100 --use-display --daemon=no -- :100
If you don't want a fixed display number, then create an executable file /usr/local/bin/xpra_display
#!/bin/sh
exec xpra start $DISPLAY --use-display --daemon=no
which you can then launch with
startx /usr/local/bin/xpra_display
without further arguments.
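For completeness, attaching from your local machine then looks something like this (user and host names are placeholders):
xpra attach ssh:user@remote-host:100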

Realtime desktop capturing on Mac OS X Mojave with X11

I'm working on a project which streams the desktop image from a Mac OS X computer to an iOS device in realtime. My main problem is the screen capture. I'm not allowed to use ready-made libraries which let you write a few lines of code in 5 minutes and stream video all over the world.
I've found a really good project on GitHub which grabs an image of the whole screen using X11 and C++:
https://github.com/Butataki/cpp-x11-make-screenshot
I've tested this code on my Ubuntu machine and everything works like a charm: it takes about 12 ms just to capture one frame without saving the data, and about 25 ms with encoding to .jpg and saving to disk.
To be able to build it, I did the following:
$ sudo apt install libjpeg-dev libpng-dev libx11-dev
then changed 'true' to 'TRUE' in these lines:
//(screenshot.cpp : 232,233 lines)
jpeg_set_quality (&cinfo, quality, TRUE);
jpeg_start_compress(&cinfo, TRUE);
and changed Z_BEST_COMPRESSION to PNG_Z_DEFAULT_COMPRESSION.
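For reference, on Ubuntu the whole build then boils down to something like this (a sketch; the single-file compile and the source file name are assumptions about the repository layout):
g++ -O2 -o screenshot screenshot.cpp -lX11 -ljpeg -lpng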
The problem is that I did almost the same operations in Xcode (macOS Mojave 10.14), downloaded and linked all the necessary libraries, ran the executable and finally... I got a blank image. No errors occurred, everything works 'fine' and saves a .jpg image to my folder on the desktop.
Then I figured out that X11 has something called the 'root window', which covers all of the desktop, and you can just find this window and capture everything on your screen. But I think that's true for Ubuntu, not for my Mac.
Actually, there is something about the 'root window' in this article, but I just can't fix anything:
https://finkers.wordpress.com/running-x11/#intro.rootless
P.S. If this is not a good approach, maybe there are some other ways to accomplish my task (realtime screen capturing on Mac OS)?

x11vnc on an embedded Raspberry Pi Qt5 app

I need to start a VNC server (x11vnc) on my Raspberry Pi 3. It's running without an X server (Raspbian Lite). My app (C++ Qt 5) writes directly to the Linux framebuffer.
Following some instructions on the Qt Creator forum, I've made some progress.
At this moment I can start an x11vnc server, connect to the Raspberry via a regular VNC client and use my app with mouse and keyboard.
YES, IT'S WORKING.
BUT... if I change the Raspberry's resolution (raspi-config) to something different from 1280x720, I don't know why, but I can't see the screen properly. The VNC client shows a distorted display, like the image below.
(And, unfortunately, I can't set a fixed resolution.)
These are my current settings to start x11vnc:
x11vnc -permitfiletransfer -nopw -rawfb +/dev/fb0 -forever -noxrecord -noxfixes -noxdamage -xrandr -bg -shared -pipeinput UINPUT:accel=0.7,reset=0 -cursor none -nodragging
I already tried starting with -clip 1280x720+0+0, -geometry 1280x720 and -scale 1280x720 (with other values too), but had the same problem. =/
And I start my application like this:
my-app -platform linuxfb
Both are started as the root user.
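One hypothetical diagnostic, assuming the standard fbdev sysfs interface: distorted raw-framebuffer output often means the line length x11vnc assumes no longer matches the visible resolution, which can be checked directly on the Pi:
# compare the kernel's idea of /dev/fb0 geometry with the raspi-config setting
cat /sys/class/graphics/fb0/virtual_size
fbset -fb /dev/fb0 -i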
If someone has an idea of how to fix this, please let me know! Thanks!