Is there a way of creating a dummy sound card on an EC2 instance?
Unable to find snd-dummy, or any other snd module, using modprobe.
sudo apt-get install linux-generic didn't help either.
My goal is to run ALSA with a dummy sound card.
lspci output:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
uname output:
Linux 5.4.0-1037-aws
lsb_release -a output:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
Any help would be greatly appreciated.
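A hedged sketch of what may work here (based on how Ubuntu packages sound modules for AWS kernels, not on anything confirmed in the question): snd-dummy is normally shipped in the linux-modules-extra package matching the running -aws kernel, so installing that and loading the module should be enough for ALSA to see a dummy card:
$ sudo apt-get install linux-modules-extra-$(uname -r)
$ sudo modprobe snd-dummy
$ aplay -l
If aplay -l now lists a "Dummy" card, adding snd-dummy to /etc/modules makes it persist across reboots.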
Related
I created a deep learning VM to run a project that uses some custom TensorFlow models along with the Google Vision and Google NLU APIs.
I set up a machine with Debian 10 and TensorFlow 2.4 (CUDA 11) and chose one NVIDIA K80 GPU. I installed CUDA 11 using this link. When I run nvidia-smi, I get this famous ugly message:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
I tried to install CUDA 10 or another version, but it does not exist for Debian at all: see this CUDA 10.
How can I resolve this problem?
I tried to reproduce this error in my own project.
I have installed a VM Instance with the following characteristics:
Machine type: n1-standard-1
GPUs: 1 x NVIDIA Tesla K80
Boot disk: debian-10-buster-v20201216
As you mentioned in your post, there are no drivers for Linux under CUDA Toolkit 10, so I used the steps described in this link to install it. I had some complications installing the drivers, and in the end I was able to reproduce your issue; I got the following message after the installation:
$ sudo nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
I tried again, but now I changed my installation a little bit:
Machine type: n1-standard-1
GPUs: 1 x NVIDIA Tesla K80
Boot disk: c0-common-gce-gpu-image-20200128
The boot disk I used this time, c0-common-gce-gpu-image-20200128, is a GPU Optimized Debian image, m32 (with CUDA 10.0): a Debian 9 based image with CUDA/cuDNN/NCCL pre-installed.
When I accessed this instance through SSH for the first time, I received the following question:
This VM requires Nvidia drivers to function correctly. Installation takes ~1 minute.
Would you like to install the Nvidia driver? [y/n] y
Installing Nvidia driver.
And it automatically installed the drivers.
$ sudo nvidia-smi
Thu Jan 7 19:08:06 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 75C P0 91W / 149W | 0MiB / 11441MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I also tried with a TensorFlow image, since you mentioned that you are using TensorFlow: c0-deeplearning-tf-1-15-cu110-v20201229-debian-10
According to the information for this image, it is a Deep Learning Image: TensorFlow 1.15, m61 (CUDA 11.0), a Debian 10 based Linux image with TensorFlow 1.15 (with CUDA 11.0 and Intel® MKL-DNN, Intel® MKL) plus Intel®-optimized NumPy, SciPy, and scikit-learn.
In this case I verified the TensorFlow installation too:
$ python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
2021-01-07 20:29:02.854218: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
Tensor("Sum:0", shape=(), dtype=float32)
And it works well.
Hence, there seems to be a problem between the installed image (Debian 10) and the CUDA Toolkit needed for this GPU type (NVIDIA K80).
My suggestion here is to use a Deep Learning VM image. You can see the full list at this link: Choosing an image.
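As a rough sketch of that suggestion (the instance name, zone and image family below are placeholders/assumptions; pick the Deep Learning VM image that matches your TensorFlow version), such an instance can be created with the driver installed automatically on first boot:
$ gcloud compute instances create my-dl-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-1 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --image-family=tf-latest-gpu \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --metadata=install-nvidia-driver=True
The install-nvidia-driver=True metadata key is what triggers the driver installation step shown above, so nvidia-smi should work right after the instance comes up.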
I had gcc-5 and gcc-7 installed, and when I tried to compile a CUDA sample with 'make' I got lots of errors. After some research I saw that I needed to downgrade my gcc, so I thought the system was using gcc-7 instead of the other one and uninstalled it using purge, but then gcc was not even recognized; gcc --version gave an error. So I purged the other gcc too and installed it again with 'sudo apt-get update' and 'sudo apt-get install build-essential'. 'gcc --version' now works, but my CUDA drivers aren't working anymore. nvidia-smi results in "command not found" and I can't run any CUDA sample, although I can now compile them. For example, deviceQuery returns:
cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
'nvcc --version' also works; here's the output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
Running 'lshw -numeric -C display' results in:
WARNING: you should run this program as super-user.
*-display
description: 3D controller
product: GM107M [GeForce GTX 950M] [10DE:139A]
vendor: NVIDIA Corporation [10DE]
physical id: 0
bus info: pci#0000:01:00.0
version: a2
width: 64 bits
clock: 33MHz
capabilities: bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:38 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:f7000000-f707ffff
*-display
description: VGA compatible controller
product: 4th Gen Core Processor Integrated Graphics Controller [8086:416]
vendor: Intel Corporation [8086]
physical id: 2
bus info: pci#0000:00:02.0
version: 06
width: 64 bits
clock: 33MHz
capabilities: vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:34 memory:f7400000-f77fffff memory:d0000000-dfffffff ioport:f000(size=64) memory:c0000-dffff
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
I didn't change anything on my drivers, but reinstalling gcc broke them. How can I solve this?
Thanks
-- EDIT --
When I do 'locate nvidia-smi' I get the following result:
/etc/alternatives/x86_64-linux-gnu_nvidia-smi.1.gz
/usr/bin/nvidia-smi
/usr/share/man/man1/nvidia-smi.1.gz
However, when I go into those directories there is no nvidia-smi executable under /usr/bin and no nvidia-smi.1.gz under /usr/share/man/man1/.
Doing 'cat /proc/driver/nvidia/version' I get:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 384.111 Tue Dec 19 23:51:45 PST 2017
GCC version: gcc version 7.2.0 (Ubuntu 7.2.0-1ubuntu1~16.04)
It still shows the old gcc; I now have gcc-5, not 7.
I managed to solve this; it was actually very simple. I just had to reinstall my NVIDIA drivers by doing:
sudo apt-get purge nvidia*
sudo apt-get update
sudo apt-get install nvidia-384
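For anyone landing here with the same symptoms, a quick sanity check after the reinstall (assuming the Ubuntu packages build the kernel module through DKMS, which is their default) would be:
$ dkms status
$ nvidia-smi
$ cat /proc/driver/nvidia/version
dkms status should list an nvidia module built for the running kernel, and the /proc file should now report the gcc that is actually installed. Most likely the earlier purge of gcc also removed packages depending on it (DKMS and the nvidia packages), which is why nvidia-smi disappeared while locate still showed stale paths.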
I'm running this configuration:
Ubuntu 12.04
Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller
glxinfo gives me these parameters:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Desktop
OpenGL version string: 3.0 Mesa 10.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
I did the following:
1.) Add the PPA Repository
$ sudo add-apt-repository ppa:oibaf/graphics-drivers
2.) Update sources
$ sudo apt-get update
3.) Dist-upgrade (rebuilds many packages)
$ sudo apt-get dist-upgrade
4.) Reboot!
I got Mesa 10, but OpenGL is still 3.0. I found some people saying Intel graphics doesn't support OpenGL 3.3 yet, while others say that in the last official release of Mesa (10.0), GL 3.3 only works on Intel hardware.
Yet some people do get it working.
Does Ubuntu 12.04 support OpenGL 3.3?
Does the Intel graphics driver support it?
What should I do to enable GL 3.3?
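One check that may clear this up (assumption: on the Mesa 10.x i965 driver, OpenGL 3.3 is exposed only through core profile contexts, so the plain version string stays at 3.0):
$ glxinfo | grep -i "core profile version"
If that line reports 3.3, the driver does support it on this Ivybridge GPU; the application then has to request a core profile context instead of the default compatibility one.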
I compiled a simple program on my i386 laptop. Then I tried to execute it on an amd64 PC; it starts, but then:
SDL_Init Error: No available video device
OS: Debian 7.6.0 on both computers.
I installed libc6:i386, but it didn't help.
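One direction that might help (an assumption, since the question doesn't say which SDL version the binary links against; the package names below are the Debian 7 names for SDL 1.2 and X11): a 32-bit binary on an amd64 host needs the i386 builds of every library it loads, not just libc6, and SDL reports "No available video device" when its X11 backend can't be loaded:
$ sudo dpkg --add-architecture i386
$ sudo apt-get update
$ sudo apt-get install libsdl1.2debian:i386 libx11-6:i386
Running ldd on the binary afterwards should show no "not found" entries; any that remain point to further :i386 packages to install.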
My code finally compiles fine on my ARM cluster.
Now I want to run it.
It does not run on the ARM itself, as there is no screen attached:
1 OpenCL Platforms found
Platform 0: (EMBEDDED_PROFILE OpenCL 1.1 ) Vivante Corporation Vivante OpenCL Platform
1 OpenCL devices found for this platform
Device 0: Vivante Corporation Vivante OpenCL Device
Initializing GLUT...
freeglut (./prognonmpi): failed to open display ''
When I access the cluster with ssh -Y name I get the following error message:
1 OpenCL Platforms found
Platform 0: (EMBEDDED_PROFILE OpenCL 1.1 ) Vivante Corporation Vivante OpenCL Platform
1 OpenCL devices found for this platform
Device 0: Vivante Corporation Vivante OpenCL Device
Initializing GLUT...
init 160 x 100
Loading extensions: Missing GL version
Error: failed to get minimal extensions for demo
This sample requires:
OpenGL version 1.5
GL_ARB_vertex_buffer_object
GL_ARB_pixel_buffer_object
glxinfo, glxgears and so on run fine and show on my screen when run on the cluster.
Which package is missing to get the program running?
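A possible direction rather than a definite answer (assumption: the failure is that the demo needs OpenGL 1.5 plus the buffer-object extensions listed above, which indirect GLX rendering over ssh -Y generally cannot provide since the GLX protocol tops out around GL 1.4): run the program against a virtual X server on the cluster itself instead of forwarding the display:
$ sudo apt-get install xvfb
$ xvfb-run -s "-screen 0 1024x768x24" ./prognonmpi
This only helps if the program can render off-screen through Mesa or the Vivante driver on that X server; if it strictly needs those extensions from the forwarded desktop display, installing extra packages on the desktop won't add them.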