I have a GUI program written in C++ which I need to run Valgrind on. When I set up a VNC server on a random display (instance :35), I always run into the same problem:
Xlib: extension "GLX" missing on display ":35".
I've also started the VNC server manually, connected to it, and tried to run the program, and I get the exact same error message.
glxinfo shows only the same error message. I'm running an openbox session in the VNC session and GNOME on the desktop, with Nvidia's proprietary driver.
I'm currently running Fedora 24 with kernel 4.7.9.
Does anyone know how to solve this problem? I've even tried to run the application with vglrun -d :35 ./application, which fails with the following error:
Xlib: extension "GLX" missing on display ":35".
Xlib: extension "GLX" missing on display ":35".
[VGL] ERROR: in glXGetConfig--
[VGL] 1071: Could not obtain RGB visual on the server suitable for off-screen rendering
Running nvidia-smi displays the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.44 Driver Version: 367.44 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 760 Off | 0000:03:00.0 N/A | N/A |
| 29% 44C P8 N/A / N/A | 308MiB / 1998MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
What is the problem? How can I solve this?
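For what it's worth, the failure can be reproduced outside the application with a tiny GLX check; this is a minimal sketch (assuming the Xlib and GLX development headers are available), not part of my actual program:
#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main() {
    // Open the display the VNC server provides (e.g. ":35").
    Display *dpy = XOpenDisplay(":35");
    if (!dpy) {
        std::fprintf(stderr, "Cannot open display\n");
        return 1;
    }
    int errorBase = 0, eventBase = 0;
    // Returns False when the X server behind this display has no GLX extension,
    // which is exactly what Xvnc reports here.
    if (!glXQueryExtension(dpy, &errorBase, &eventBase)) {
        std::fprintf(stderr, "GLX extension missing on this display\n");
    } else {
        std::printf("GLX is available\n");
    }
    XCloseDisplay(dpy);
    return 0;
}
Since glxinfo fails with the same message, this would point at the Xvnc server itself not exporting GLX rather than at the application.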
While upgrading DPDK from version 17.02 to 21.11, rte_eth_dev_configure is failing with return code -22 (-EINVAL). Because of that, my application is not working.
Please find below the details of the system.
Using Intel Corporation Ethernet Connection X722.
lspci | grep "Ethernet"
3d:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
3d:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
af:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
Driver used:
ethtool -i eth0
driver: i40e
version: 2.7.29
firmware-version: 3.31 0x80000d31 1.1767.0
expansion-rom-version:
bus-info: 0000:3d:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
Number of RX queues: 4
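For reference, this is roughly how the port is being configured; it is a simplified sketch of the failing call, not the exact application code:
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

/* Simplified sketch: 4 RX queues, 1 TX queue, RSS hash as used in the application. */
static int configure_port(uint16_t port_id)
{
    struct rte_eth_conf eth_config;
    memset(&eth_config, 0, sizeof(eth_config));

    eth_config.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    eth_config.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6; /* = 260 */

    int ret = rte_eth_dev_configure(port_id, 4, 1, &eth_config);
    if (ret < 0)
        printf("rte_eth_dev_configure failed: %d (%s)\n", ret, rte_strerror(-ret));
    return ret;
}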
I enabled logging using --log-level=pmd,8; please find below the dpdk.log output.
"[Wed Jan 11 04:00:34 2023][ms_dpi: 1150] Starting DPDK logging
session EAL: Detected CPU lcores: 40 EAL: Detected NUMA nodes: 1 EAL:
Static memory layout is selected, amount of reserved memory can be
adjusted with -m or --socket-mem EAL: Detected shared linkage of DPDK
EAL: Trace dir: /root/dpdk-traces/rte-2023-01-11-AM-04-00-34 EAL:
Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA
mode 'PA' EAL: NUMA support not available consider that all memory is
in socket_id 0 EAL: Probe PCI driver: net_i40e (8086:37d2) device:
0000:3d:00.1 (socket 0) eth_i40e_dev_init(): >> i40e_pf_reset(): Core
and Global modules ready 0 i40e_init_shared_code():
i40e_init_shared_code i40e_set_mac_type(): i40e_set_mac_type
i40e_set_mac_type(): i40e_set_mac_type found mac: 3, returns: 0
i40e_init_nvm(): i40e_init_nvm i40e_allocate_dma_mem_d(): memzone
i40e_dma_0 allocated with physical address: 65496862720"
The application was working fine with DPDK version 17.02. Is there any change with respect to the i40e driver that is causing this issue? All libraries needed to build the application are present on the system, e.g. -Wl,-lrte_net_enic -Wl,-lrte_net_i40e.
Has anybody faced the same issue with the configuration provided?
I am not able to figure out the root cause of this error. Any help is appreciated. Thanks.
The issue has been resolved. The application code was previously using eth_config.rx_adv_conf.rss_conf.rss_hf = 260 (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6), neither of which appears in the i40e driver's supported set below.
After running testpmd with the same configuration (4 RX queues and 1 TX queue), I found out which RSS offload attributes are supported.
The output of the testpmd command shows the supported offload flags:
testpmd> show port 0 rss-hash
RSS functions:
ipv4-frag ipv4-other ipv6-frag ipv6-other
testpmd> quit
RSS offload attributes supported by the i40e driver:
#define I40E_RSS_OFFLOAD_ALL ( \
RTE_ETH_RSS_FRAG_IPV4 | \
RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
RTE_ETH_RSS_FRAG_IPV6 | \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
RTE_ETH_RSS_L2_PAYLOAD)
eth_config.rx_adv_conf.rss_conf.rss_hf = 8840 (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
After providing the right hash value, the application started running fine.
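For anyone hitting the same thing, a more defensive variant is to mask the requested hash against what the PMD actually advertises instead of hard-coding it; a sketch (port_id and eth_config as in the sketch in the question, not the exact application code):
/* Keep only the RSS hash types the port reports as supported. */
struct rte_eth_dev_info dev_info;
int ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
    printf("rte_eth_dev_info_get failed: %d\n", ret);

uint64_t requested = RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
                     RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER;

/* Dropping unsupported bits keeps rte_eth_dev_configure() from
 * rejecting the configuration with -EINVAL on the i40e PMD. */
eth_config.rx_adv_conf.rss_conf.rss_hf = requested & dev_info.flow_type_rss_offloads;

ret = rte_eth_dev_configure(port_id, 4, 1, &eth_config);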
My system is a V100 with the following information:
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.6 |
NVIDIA Nsight Systems version 2021.5.2.53-28d0e6e
sudo sh -c "echo 2 >/proc/sys/kernel/perf_event_paranoid"
/bin/bash: /proc/sys/kernel/perf_event_paranoid: Read-only file system
Note that perf_event_paranoid is 3.
Output:
Generated:
/home/build/Baseline.nsys-rep
That's my command prefix:
nsys profile --capture-range=cudaProfilerApi --trace-fork-before-exec true --force-overwrite true -s cpu --cudabacktrace=all --stats=true -t cuda,nvtx,osrt,cudnn,cublas -o Baseline -w true
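One detail worth keeping in mind with this command line: --capture-range=cudaProfilerApi means nsys only records between explicit CUDA profiler API calls, so the application has to bracket the region of interest itself. Roughly (just a sketch, not my real code):
#include <cuda_profiler_api.h>
#include <cuda_runtime.h>

// ...
cudaProfilerStart();          // capture begins here under --capture-range=cudaProfilerApi
// ... launch the kernels / iterations to be profiled ...
cudaDeviceSynchronize();      // make sure the work is finished before stopping
cudaProfilerStop();           // capture ends here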
This is what I get when I check the nsys status:
nsys status -e
Timestamp counter supported: No
Sampling Environment Check
Linux Kernel Paranoid Level = -1: OK
Linux Distribution = Ubuntu
Linux Kernel Version = 5.0.0-1032-azure: OK
Linux perf_event_open syscall available: OK
Sampling trigger event available: OK
Intel(c) Last Branch Record support: Not Available
Sampling Environment: OK
This is the output from the Nsight viewer (no kernel data):
Profile Output
That's the diagnostics view:
Diagnostics View
I tried CUDA version 11.0, and that was the only version that made Nsight produce profiles with my device driver; other CUDA versions did not give me the Nsight profiles.
Please check the following post for more details:
https://forums.developer.nvidia.com/t/nsys-does-not-show-the-kernels-output/229526/17
I have a Windows 10 computer with Atom version 1.52.0 and g++ (MinGW.org GCC Build-2) 9.2.0. I can run C++ programs in Atom with the gpp-compiler package, but I don't like that the program output goes to a new window rather than to the bottom of the Atom window. I'm trying to set up C++ with the script package, but when I run a program with the script package I get the following error:
g++: error: /mnt/c/Users/user/Documents/USACO/2015-2016/December/Silver/test.cpp: No such file or directory
g++: fatal error: no input files
compilation terminated.
I can run Java programs with the script package, by the way (screenshot).
A bit late to reply, but for those who come here from Google: the script package page clearly says:
+---------+------------+-----------------+------------------+-------------------+---------------------------------------------------------+
| Grammar | File Based | Selection Based | Required Package | Required in PATH  | Notes                                                   |
+---------+------------+-----------------+------------------+-------------------+---------------------------------------------------------+
| C++     | Yes        | Yes             |                  | xcrun clang++/g++ | Available only on macOS and Linux. Run with -std=c++14. |
+---------+------------+-----------------+------------------+-------------------+---------------------------------------------------------+
Available only on macOS and Linux. Run with -std=c++14.
So, it seems it's not available for Windows. Instead, you can use another package called gpp-compiler:
https://atom.io/packages/gpp-compiler
It works fine on Windows:
You'll need to install MinGW and add it to your PATH.
When I run my game on the win32 platform, my sounds don't play, but they play normally on the Android platform.
I'm using:
auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
audio->playEffect("sounds/jump.ogg");
How can I fix this? Thanks.
According to the cocos2d-x wiki, only .mid and .wav are supported on Windows desktop.
Sound Effects
| Platform        | Supported sound effect formats |
|-----------------|:------------------------------:|
| Android         | .ogg, .wav                     |
| iOS             | .mp3, .wav, .caf               |
| Windows Desktop | .mid, .wav                     |
The file format table above is for SimpleAudioEngine; currently I am using an .ogg file on the win32 desktop with the new, experimental AudioEngine:
#include "audio/include/AudioEngine.h"
experimental::AudioEngine::play2d("sounds/jump.ogg", false, 1.0);
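If you want to stay on SimpleAudioEngine instead, the workaround that follows from the table is to ship a .wav copy of the effect for win32 (assuming you also add a sounds/jump.wav resource); a rough sketch:
#include "cocos2d.h"
#include "audio/include/SimpleAudioEngine.h"

auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
#if (CC_TARGET_PLATFORM == CC_PLATFORM_WIN32)
    audio->playEffect("sounds/jump.wav");   // win32 SimpleAudioEngine only plays .mid/.wav
#else
    audio->playEffect("sounds/jump.ogg");   // .ogg is fine on Android
#endif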
I'm looking for strategies to improve my build times with googletest, and I'm wondering whether what I'm seeing is typical, whether there's a particular feature that can be avoided to speed things up, or whether I'm just doing something wrong.
I have seen this post, but it's about 2 years old now.
I've been profiling with a moderately simple test fixture that has 24 tests and uses the following googlemock features (a rough, made-up sketch of the usage style follows the list). I apologize for not being able to provide the complete example here, but obviously for trivial examples the build times are negligible. If you have experience on this topic and have a hunch, I can certainly fill in more details upon request. In total, the build is about 37 files, including the googletest sources.
using ::testing::_;
using ::testing::AnyNumber;
using ::testing::DoAll;
using ::testing::Exactly;
using ::testing::InSequence;
using ::testing::Mock;
using ::testing::NiceMock;
using ::testing::Return;
using ::testing::SetArgReferee;
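To give a feel for the style, here is that rough, made-up sketch (hypothetical names, not the real fixture code):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

using ::testing::NiceMock;
using ::testing::Return;
using ::testing::_;

class Dependency {                    // hypothetical interface
public:
    virtual ~Dependency() {}
    virtual int Compute(int input) = 0;
};

class MockDependency : public Dependency {
public:
    MOCK_METHOD1(Compute, int(int));  // old-style macro, as in gmock 1.6/1.7
};

TEST(ExampleTest, UsesMock) {
    NiceMock<MockDependency> dep;
    EXPECT_CALL(dep, Compute(_)).WillRepeatedly(Return(42));
    EXPECT_EQ(42, dep.Compute(7));
}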
I've built my example with both clang 3.7.0 and mingw64-g++ 5.3.0 using CMake and Ninja. See the times below. A full build is the time required to build all sources in the project, including googletest. Compile+Link is the time required to build the single test fixture source and link it. Link is the time needed to create the test executable. I tried the tuple flag, but as you can see, it didn't make much difference.
With the times as they are, it's challenging to keep the fix/build/test cycle snappy. It was interesting to me that the configuration made such a huge difference and that Release was faster than Debug; I expected Release to spend more time on optimizations.
GTEST_USE_OWN_TR1_TUPLE=1
Compiler | Config | Full | Compile+Link | Link
clang | Debug | 29.975s | 16.166s | 10.046s
clang | Release | 29.621s | 13.317s | 0.972s
mingw64 | Debug | 1m6.751s | 39.590s | 22.923s
mingw64 | Release | 44.287s | 15.075s | 1.031s
GTEST_USE_OWN_TR1_TUPLE=0
Compiler | Config | Full | Compile+Link | Link
clang | Debug | 28.565s | 15.815s | 9.545s
clang | Release | 28.354s | 12.955s | 1.075s
mingw64 | Debug | 1m7.954s | 37.611s | 24.304s
mingw64 | Release | 42.615s | 15.329s | 0.895s
Further dissection of the build time for the release clang build:
#include <gmock/gmock.h> ~ 2s
Instantiating 11 mocks ~ 9s
24 test cases ~ 1s
EDIT:
I followed the advice in the cookbook, Making the Compilation Faster, and that helped a lot. I also did a comparison with release-1.6.0. I don't know what the consensus is regarding how fast is fast enough (0 seconds, -2 seconds, time travel?). Sub-10 seconds starts to feel tolerable, and 5 seconds is certainly preferred. Ideally, I would like to push this example under 1 second, but perhaps that's not possible. Given that it's a rather simple example, my hunch is that these times won't hold up at scale, but maybe that's not true. (A sketch of what that cookbook advice boils down to is at the end of this post.)
googlemock 1.7+ ddb8012
GTEST_USE_OWN_TR1_TUPLE=0
Compiler | Config | Full | Compile+Link | Link
clang | Debug | 31.151s | 7.572s | 4.567s
clang | Release | 39.806s | 4.742s | 0.689s
googlemock 1.6
GTEST_USE_OWN_TR1_TUPLE=0
Compiler | Config | Full | Compile+Link | Link
clang | Debug | 26.551s | 5.104s | 3.218s
clang | Release | 28.932s | 3.777s | 0.676s
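For reference, the piece of cookbook advice that made the biggest difference here is keeping the mock class's constructor and destructor out of the header, so their generated code is compiled once instead of in every translation unit that includes the mock. A sketch with the same made-up MockDependency as above (old-style MOCK_METHODn macros, as in gmock 1.6/1.7):
// mock_dependency.h
#include <gmock/gmock.h>
#include "dependency.h"   // hypothetical interface with a virtual int Compute(int)

class MockDependency : public Dependency {
public:
    MockDependency();              // only declared here...
    virtual ~MockDependency();
    MOCK_METHOD1(Compute, int(int));
};

// mock_dependency.cc
#include "mock_dependency.h"

MockDependency::MockDependency() {}   // ...and defined out of line, so the heavy
MockDependency::~MockDependency() {}  // constructor/destructor code is built only once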