Pyaudio: no method 'get_device_count' - python-2.7

I'm using the Python Speech Recognition library to recognize speech input from the microphone.
This works fine with my default microphone.
This is the code I'm using. According to what I understood of the documentation:
Creates a new Microphone instance, which represents a physical
microphone on the computer. Subclass of AudioSource.
If device_index is unspecified or None, the default microphone is used
as the audio source. Otherwise, device_index should be the index of
the device to use for audio input. (https://pypi.python.org/pypi/SpeechRecognition/)
The problem is that when I try to get the device index with pyaudio.get_device_count() - 1, I get this error:
AttributeError: 'module' object has no attribute 'get_device_count'
So I'm not sure how to configure the Microphone to use a USB microphone.
import pyaudio
import speech_recognition as sr

index = pyaudio.get_device_count() - 1
print index

r = sr.Recognizer()
with sr.Microphone(index) as source:
    audio = r.listen(source)
try:
    print("You said " + r.recognize(audio))
except LookupError:
    print("Could not understand audio")

get_device_count() is a method of the pyaudio.PyAudio class, not a module-level function, so create a PyAudio instance first:

myPyAudio = pyaudio.PyAudio()
print "Seeing pyaudio devices:", myPyAudio.get_device_count()

That's a bug in the library. I just pushed out a fix in 1.3.1, so this should now be fixed!
Version 1.3.1 retains full backwards compatibility with previous versions.

Related

How to access an IP camera in Python using ONVIF to record a video

Friends, I'm trying to record video from an IP camera in Python 2.
I am only able to get the device name using "devicemgmt"; when I do the same for "media" and "recording", I get errors like these:
for media: "WARNING:suds.umx.typed:attribute (ViewMode) type, not-found"
for recording: "onvif.exceptions.ONVIFError: Unknown error: Device doesn`t support service: recording"
Can anyone please share an idea if you know how to do this?
In your situation, ONVIF is just a way to get the RTSP address of the video stream to capture. Instead, you might search for a way to capture RTSP.
If you can't find the RTSP address of the camera, you might try ONVIF Device Manager. With this software, you will be able to retrieve the RTSP address of the camera. Here are some screenshots of how to find the RTSP address: https://surveilleur.com/2019/02/25/adresse-rtsp-dune-camera-onvif/
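Once you have the RTSP address, a minimal capture loop with OpenCV might look like the sketch below (the URL, credentials and stream path are placeholders; substitute whatever your camera actually exposes):

import cv2

# Hypothetical RTSP URL -- replace with your camera's real address/credentials
rtsp_url = 'rtsp://user:password@192.168.0.51:554/stream1'

cap = cv2.VideoCapture(rtsp_url)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break                      # stream ended or a frame could not be decoded
    cv2.imshow('camera', frame)    # or write frames out with cv2.VideoWriter to record
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()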
You can use the software Motion for motion detection and video recording. It is highly configurable.
I can share a piece of my code (Python) where I capture a single frame from an IP camera using OpenCV.
import urllib.request
import cv2
import numpy as np

def CaptureFrontCamera():
    _bytes = bytes()
    stream = urllib.request.urlopen('http://192.168.0.51/video.cgi?resolution=1920x1080')
    while True:
        _bytes += stream.read(1024)
        # Look for the JPEG start-of-image and end-of-image markers in the MJPEG stream
        a = _bytes.find(b'\xff\xd8')
        b = _bytes.find(b'\xff\xd9')
        if a != -1 and b != -1:
            # Decode the complete JPEG frame and save it to disk
            jpg = _bytes[a:b+2]
            _bytes = _bytes[b+2:]
            filename = '/home/pi/capture.jpeg'
            i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imwrite(filename, i)
            return filename
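Usage is then a single call (the camera URL and the output path inside the function are of course specific to my setup):

filename = CaptureFrontCamera()
print(filename)   # e.g. /home/pi/capture.jpeg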

Changing volume in Python program on Raspberry Pi

I use a Raspberry Pi B+ 2. I have a Python program that uses an ultrasonic sensor to measure the distance to an object. What I would like to do is change the volume depending on the distance to a human. I already have the Python code to obtain the distance, but I have no idea how to change the Raspberry Pi's volume from Python.
Is there any way to do that?
You can use the package python-alsaaudio.
The installation and usage is very simple.
To install run:
sudo apt-get install python-alsaaudio
In your Python script, import the module:
import alsaaudio
Now, you need to get the main mixer and get/set the volume:
m = alsaaudio.Mixer()
current_volume = m.getvolume() # Get the current Volume
m.setvolume(70) # Set the volume to 70%.
If the line m = alsaaudio.Mixer() throws an error, then try:
m = alsaaudio.Mixer('PCM')
This might happen because the Pi uses PCM rather than a Master channel.
You can see more information about your Pi's audio channels, volume (etc..), by running the command amixer.
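To tie this back to the ultrasonic sensor, a rough sketch could look like the following (the thresholds are arbitrary examples, and measure_distance() stands in for your own measurement code):

import alsaaudio

def distance_to_volume(distance_cm):
    # Map a measured distance to a volume percentage (example thresholds only).
    if distance_cm < 50:
        return 100
    elif distance_cm < 200:
        return 60
    else:
        return 20

def set_volume_for_distance(distance_cm):
    try:
        mixer = alsaaudio.Mixer()          # may fail on the Pi...
    except alsaaudio.ALSAAudioError:
        mixer = alsaaudio.Mixer('PCM')     # ...so fall back to the PCM channel
    mixer.setvolume(distance_to_volume(distance_cm))

# distance_cm = measure_distance()   # hypothetical: your ultrasonic-sensor function
# set_volume_for_distance(distance_cm)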
Collecting the sound cards that are actually available (this returns a list of cards):
import alsaaudio as audio
scanCards = audio.cards()
print("cards:", scanCards)
In my case I got the following list:
[u'PCH', u'headset']
Scanning for the mixers on each card:
for card in scanCards:
    scanMixers = audio.mixers(scanCards.index(card))
    print("mixers:", scanMixers)
In my case I got the following two lists:
[u'Master', u'Headphone', u'Speaker', u'PCM', u'Mic', u'Mic Boost', u'IEC958', u'IEC958', u'IEC958', u'IEC958', u'IEC958', u'Beep', u'Capture', u'Auto-Mute Mode', u'Internal Mic Boost', u'Loopback Mixing']
[u'Headphone', u'Mic', u'Auto Gain Control']
As you can see, a "Master" mixer is not always available; traditionally an equivalent of the Master mixer is expected at index 0, but that is not guaranteed.
To control the volume of the USB headset in this case, the procedure would be as follows.
Volume Up
def volumeMasterUP():
    mixer = audio.Mixer('Headphone', cardindex=1)
    volume = mixer.getvolume()
    newVolume = int(volume[0]) + 10
    if newVolume <= 100:
        mixer.setvolume(newVolume)
Volume Down
def volumeMasterDOWN():
    mixer = audio.Mixer('Headphone', cardindex=1)
    volume = mixer.getvolume()
    newVolume = int(volume[0]) - 10
    if newVolume >= 0:
        mixer.setvolume(newVolume)
I made a simple Python service for a two-button volume control, based on what @ant0nisk posted.
https://gist.github.com/peteristhegreat/3c94963d5b3a876b27accf86d0a7f7c0
It shows getting and setting the volume, and muting.
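For reference, muting with python-alsaaudio is just another mixer call; a minimal sketch (assuming the chosen mixer exposes a playback switch) looks like this:

import alsaaudio

mixer = alsaaudio.Mixer('PCM')     # or 'Master'/'Headphone', depending on your card

mixer.setmute(1)                   # mute
mixer.setmute(0)                   # unmute
print(mixer.getmute())             # one entry per channel: 1 = muted, 0 = unmuted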

Issues reading frames from an RTSP h.264 IP camera

The Problem:
I'm working on a reader for multiple IP camera streams. The application needs to run on an Ubuntu AWS EC2 instance. I've been unsuccessful in trying to reliably fetch and decode the frames from RTSP h.264 streams.
What I've Tried:
I've used OpenCV's and SciKit-Video's VideoCapture classes, neither of which could both fetch and decode the frames from my test stream. I have verified that my test stream is readable using VLC and openRTSP, so I believe this is an encoding issue.
I've also attempted to build some solutions using Python's subprocess module to run the aforementioned command-line applications. This lets me read the stream reliably, but it introduces a new problem: the decoder fails, apparently because it cannot find the keyframe data it needs to decode the stream.
Below is the code for this approach. It tells openRTSP to periodically save a short chunk of video as a separate file and uses an OpenCV VideoCapture to get a single frame from each of those samples. Code:
import os
import signal
import subprocess

import cv2

def openrtsp_thread(queue, feed_name, source, sample_time, intruder, cleanup=True):
    (major, minor, subminor) = (cv2.__version__).split('.')
    cmd = 'openRTSP -V -4 -v -P ' + str(sample_time) + ' ' + source
    out_dir = '/aws_odw/frame-store/' + feed_name.replace(' ', '-') + '/'
    try:
        os.mkdir(out_dir)
    except:
        print '[?][' + feed_name + ']: Not creating new frame directory'
        pass
    os.chdir(out_dir)
    p = subprocess.Popen(cmd.split(' '))
    while True:
        try:
            for dirs, files, filenames in os.walk(out_dir):
                for f in filenames:
                    cap = cv2.VideoCapture(os.path.join(out_dir, f))
                    if int(major) < 3:
                        fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
                    else:
                        fps = cap.get(cv2.CAP_PROP_FPS)
                    #for i in range(int(float(sample_time)*fps*0.5)):
                    ret, frame = cap.read()
                    cap.release()
                    print 'enqueueing...'
                    queue.put((feed_name, frame, intruder))
        except (KeyboardInterrupt, SystemExit):
            print '[x][' + feed_name + ']: keyboard interrupt, cleaning up...'
            break
    p.send_signal(signal.SIGUSR1)
    p.wait()
    print '[*][' + feed_name + ']: exiting gracefully.'
Can anyone offer any pointers? I don't know much about video encoding, so I'm feeling pretty lost. Any help would be greatly appreciated.
Edit: the end goal here is to queue the frames in Python for real-time processing in a computer vision application.

Analog output from USB-6009 using Python and NI-DAQmx Base on Mac OS X

All,
I'm attempting to use Python and DAQmx Base to record analog input and generate analog output from my USB-6009 device. I've been using a wrapper I found and have been able to get analog input (AI) working, but am struggling with analog output (AO).
There is a base class NITask which handles task generation etc. The class I'm calling is below. The function throws an error when I try to configure the clock. When I don't configure it, there is no error, but no voltage is generated on the output either. Any help would be appreciated.
Thanks!
class AOTask(NITask):
    def __init__(self, min=0.0, max=5.0,
                 channels=["Dev1/ao0"],
                 timeout=10.0):
        NITask.__init__(self)

        self.min = min
        self.max = max
        self.channels = channels
        self.timeout = timeout
        self.clockSource = "OnboardClock"
        sampleRate = 100
        self.sampleRate = 100
        self.timeout = timeout
        self.samplesPerChan = 1000
        self.numChan = chanNumber(channels)
        if self.numChan is None:
            raise ValueError("Channel specification is invalid")

        chan = ", ".join(self.channels)

        self.CHK(self.nidaq.DAQmxBaseCreateTask("", ctypes.byref(self.taskHandle)))

        self.CHK(self.nidaq.DAQmxBaseCreateAOVoltageChan(self.taskHandle, "Dev1/ao0", "",
            float64(self.min), float64(self.max), DAQmx_Val_Volts, None))

        self.CHK(self.nidaq.DAQmxBaseCfgSampClkTiming(self.taskHandle, "",
            float64(self.sampleRate), DAQmx_Val_Rising, DAQmx_Val_FiniteSamps,
            uInt64(self.samplesPerChan)))

    """Data needs to be of type ndarray"""
    def write(self, data):
        nWritten = int32()
        # data = numpy.float64(3.25)
        data = data.astype(numpy.float64)
        self.CHK(self.nidaq.DAQmxBaseWriteAnalogF64(self.taskHandle,
            int32(1000), 0, float64(-1), DAQmx_Val_GroupByChannel,
            data.ctypes.data, None, None))
        # if nWritten.value != self.numChan:
        #     print "Expected to write %d samples!" % self.numChan
Your question covers two problems:
1. Why does DAQmxBaseCfgSampClkTiming return an error?
2. Without using that function, why isn't any output generated?
1. Hardware vs Software Timing
rjb3 wrote:
The function throws an error when I try to configure the clock. When I do not there is no error but nor is there voltage generated on the output.
Your program receives the error because the USB 600x devices do not support hardware-timed analog output [1]:
The NI USB-6008/6009 has two independent analog output channels that can generate outputs from 0 to 5 V. All updates of analog output channels are software-timed. GND is the ground-reference signal for the analog output channels.
"Software-timed" means a sample is written on demand by the program whenever DAQmxBaseWriteAnalogF64 is called. If an array of samples is written, then that array is written one at a time. You can learn more about how NI defines timing from the DAQmx help [2]. While that document is for DAQmx, the same concepts apply to DAQmx Base since the behavior is defined by the devices and not their drivers. The differences are in how much of the hardware's capabilities are implemented by the driver -- DAQmx implements everything, while DAQmx Base is a small select subset.
2. No Output When Software Timed
rjb3 wrote:
When I do not there is no error but nor is there voltage generated on the output.
I am not familiar with the Python bindings for the DAQmx Base API, but I can recommend two things:
Try using the installed genVoltage.c C example and confirm that you can see voltage on the ao channel.
Examples are installed here: /Applications/National Instruments/NI-DAQmx Base/examples
If you see output, you've confirmed that the device and driver are working correctly, and that the bug is likely in the python file.
If you don't see output, then the device or driver has a problem, and the best place to get help troubleshooting is the NI discussion forums at http://forums.ni.com.
Try porting genVoltage.c using the python bindings. At first glance, I would try:
Use DAQmxBaseStartTask before DAQmxBaseWriteAnalogF64
Or set the autostart parameter in your call to DAQmxBaseWriteAnalogF64 to true (a rough sketch of both options follows below).
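As a sketch only, adapting the write() method from the question (it reuses the question's ctypes wrappers and task handle; I haven't run it against DAQmx Base myself), the two options would look roughly like this:

def write(self, data):
    data = data.astype(numpy.float64)

    # Option 1: explicitly start the task, then write with autostart disabled
    self.CHK(self.nidaq.DAQmxBaseStartTask(self.taskHandle))
    self.CHK(self.nidaq.DAQmxBaseWriteAnalogF64(
        self.taskHandle,
        int32(len(data)),            # samples per channel
        0,                           # autostart off, because StartTask was already called
        float64(-1),                 # timeout: wait as long as needed
        DAQmx_Val_GroupByChannel,
        data.ctypes.data, None, None))

    # Option 2 (alternative): skip DAQmxBaseStartTask and let the write start the
    # task itself by passing 1 for the autostart parameter instead of 0.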
References
[1] NI USB-6008/6009 User Guide And Specifications :: Analog Output (page 16)
http://digital.ni.com/manuals.nsf/websearch/CE26701AA052E1F0862579AD0053BE19
[2] Timing, Hardware Versus Software
http://zone.ni.com/reference/en-XX/help/370466V-01/TOC11.htm

How to count cameras in OpenCV 2.3?

I want to get the number of available cameras.
I tried to count cameras like this:
for(int device = 0; device < 10; device++)
{
    VideoCapture cap(device);
    if (!cap.isOpened())
        return device;
}
If I have a camera connected, it never fails to open.
So I tried to preview the different devices, but I always get the image of my camera.
If I connect a second camera, device 0 is camera 1 and devices 1-10 are camera 2.
I think there is a problem with DirectShow devices.
How can I solve this problem? Or is there a function like cvcamGetCamerasCount() from OpenCV 1?
I am using Windows 7 and USB cameras.
OpenCV still has no API to enumerate the cameras or get the number of available devices. See this ticket on OpenCV bug tracker for details.
The behavior of VideoCapture is undefined for device numbers greater than the number of devices connected, and depends on the API used to communicate with your camera. See OpenCV 2.3 (C++, QtGui), Problem Initializing some specific USB Devices and Setups for the list of APIs used in OpenCV.
Even though it's an old post, here is a solution for OpenCV 2 / C++:
/**
 * Get the number of cameras available
 */
int countCameras()
{
    cv::VideoCapture temp_camera;
    int maxTested = 10;
    for (int i = 0; i < maxTested; i++) {
        cv::VideoCapture temp_camera(i);
        bool res = (!temp_camera.isOpened());
        temp_camera.release();
        if (res)
        {
            return i;
        }
    }
    return maxTested;
}
Tested under Windows 7 x64 with:
OpenCV 3 [Custom Build]
OpenCV 2.4.9
OpenCV 2.4.8
with 0 to 3 USB cameras
This is a very old post, but I found that under Python 2.7 on Ubuntu 14.04 with OpenCV 3 none of the solutions here worked for me. Instead, I came up with something like this in Python:
import cv2

def clearCapture(capture):
    capture.release()
    cv2.destroyAllWindows()

def countCameras():
    n = 0
    for i in range(10):
        try:
            cap = cv2.VideoCapture(i)
            ret, frame = cap.read()
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            clearCapture(cap)
            n += 1
        except:
            clearCapture(cap)
            break
    return n

print countCameras()
Maybe someone will find this useful.
I do this in Python:
def count_cameras():
    for i in range(10):
        temp_camera = cv.CreateCameraCapture(i-1)
        temp_frame = cv.QueryFrame(temp_camera)
        del(temp_camera)
        if temp_frame == None:
            del(temp_frame)
            return i-1  # MacBook Pro counts embedded webcam twice
Sadly, OpenCV opens the camera object anyway, even if there is nothing there, but if you try to extract its content, there is nothing to read. You can use that to check your number of cameras. It worked on every platform I tested, so it is good.
The reason for returning i-1 is that the MacBook Pro counts its own embedded camera twice.
Python 3.6:
import cv2

# Get the number of cameras available
def count_cameras():
    max_tested = 100
    for i in range(max_tested):
        temp_camera = cv2.VideoCapture(i)
        if temp_camera.isOpened():
            temp_camera.release()
            continue
        return i
print(count_cameras())
I have also faced a similar issue. I solved it by using the videoInput.h library instead of OpenCV to enumerate the cameras, and passed the index to the VideoCapture object. That solved my problem.