I am using the following Python code on the Raspberry Pi to collect an audio signal and output the volume. I can't understand why my output is only ever an integer.
#!/usr/bin/env python
import alsaaudio as aa
import audioop

# Set up audio
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, 'hw:1')
data_in.setchannels(2)
data_in.setrate(44100)
data_in.setformat(aa.PCM_FORMAT_S16_LE)
data_in.setperiodsize(256)

while True:
    # Read data from device
    l, data = data_in.read()
    if l:
        # catch frame error
        try:
            max_vol = audioop.max(data, 2)
            scaled_vol = max_vol / 4680
            if scaled_vol == 0:
                print "vol 0"
            else:
                print scaled_vol
        except audioop.error, e:
            if e.message != "not a whole number of frames":
                raise e
Also, I don't understand the syntax in this line:
l,data = data_in.read()
It's likely reading in bytes. The line l,data = data_in.read() returns a tuple and unpacks it into the two variables l and data. Run the type() builtin function on those variables and see what you've got to work with.
Otherwise, look at the 'PCM Terminology and Concepts' section of the documentation for the pyalsaaudio package, located here.
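For example, a quick sketch (reusing the capture setup from the question) that inspects those types and also shows why the printed volume is always a whole number under Python 2:

l, data = data_in.read()       # read() returns a (frame count, raw sample bytes) tuple
print(type(l))                 # <type 'int'> -- number of frames captured
print(type(data))              # <type 'str'> in Python 2, i.e. a byte string

max_vol = audioop.max(data, 2)
# In Python 2, dividing two ints with "/" is integer division, which is why the
# scaled volume only ever prints as an integer; a float divisor avoids that.
scaled_vol = max_vol / 4680.0
print(scaled_vol)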
I have a stream created at port 9999 of my computer, and I have to implement the DGIM algorithm on it.
However, I am not able to read the bits in the data stream one by one.
Below is my code:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
import math

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream("localhost", 9999)  # DStream from the socket on port 9999 (host assumed to be localhost)
When I use the following command, I am able to print the stream in batches:
lines.pprint()
ssc.start() # Start the computation
ssc.awaitTermination()
But when I try to print each bit it gives an error:
for l in lines.iter_lines():
    print l
ssc.start()             # Start the computation
ssc.awaitTermination()
Can someone tell me how I can read each bit from the stream so that I can implement the algorithm on it?
I used the following code:
def function(c):
    # collect the elements of this batch's RDD on the driver
    return c.collect()

streams.foreachRDD(lambda c: function(c))
foreachRDD turns each batch of the stream into an RDD and passes it to the function, and collect() gathers that RDD's elements back on the driver.
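As a rough sketch of how this could be wired into the question's code (the process_bits name and the localhost host are my assumptions, not part of the original), the function passed to foreachRDD can walk each collected line character by character:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream("localhost", 9999)   # assumed host, port from the question

def process_bits(rdd):
    # collect this batch on the driver and iterate over it bit by bit
    for line in rdd.collect():
        for bit in line.strip():
            print(bit)   # feed each bit into the DGIM update step here

lines.foreachRDD(process_bits)

ssc.start()              # Start the computation
ssc.awaitTermination()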
Hi there, I am trying to make Python recognize ® as a symbol (in case it doesn't show up well here, it is the symbol with a capital R within a circle, known as the 'registered' symbol).
I understand that it is not recognized in Python due to ASCII; however, I was wondering if anyone knows of a way to use a different encoding that includes this symbol, or a method to make Python 'ignore' it.
For some context:
I am trying to make an auto-checkout program for a website, so my program needs to match the item that the user wants. To do this I am using BeautifulSoup to scrape information; however, the symbol '®' is within the names of a few of the items, causing Python to crash.
Here is the current code that I am using, which is not working due to the ASCII issue:
for colour in soup.find_all('a', attrs={"class":"name-link"}, href=True):
    CnI.append(str(colour.text))
    Uhrefs.append(str(colour.get('href')))
Any help would be appreciated
Here is the entirety of the program so far (ignore the mess, it's nowhere near done):
import time
import webbrowser
from selenium import webdriver
import mechanize
from bs4 import BeautifulSoup
import urllib2
from selenium.webdriver.support.ui import Select

CnI = []
item = []
colour = []
Uhrefs = []
Whrefs = []
FinalColours = []
selectItemindex = []
selectColourindex = []
#counters
Ccounter = 0
Icounter = 0
Splitcounter = 1
#wanted items suffix options:jackets, shirts, tops_sweaters, sweatshirts, pants, shorts, hats, bags, accessories, skate
suffix = 'accessories'
Wcolour = 'Black'
Witem = '2-Tone Nylon 6-Panel'

driver=webdriver.Chrome()
driver.get('http://www.supremenewyork.com/shop/all/'+suffix)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
print(soup)
for colour in soup.find_all('a', attrs={"class":"name-link"}, href=True):
    CnI.append(str(colour.text))
    Uhrefs.append(str(colour.get('href')))
    print(colour)
    print('#############')
for each in CnI:
    each.split(',')
    print(each)
while Splitcounter<=len(CnI):
    item.append(CnI[Splitcounter-1])
    FinalColours.append(CnI[Splitcounter])
    Whrefs.append(Uhrefs[Splitcounter])
    Splitcounter+=2
print(Uhrefs)
for each in item:
    print(each)
for z in FinalColours:
    print(z)
for i in Whrefs:
    print(i)
##for i in item:
##    hold = item.index(i)
##    print(hold)
##    if Witem == i and Wcolour == FinalColours[i]:
##        print('correct')
##
##
for count,elem in enumerate(item):
    if Witem in elem:
        selectItemindex.append(count+1)
for count,elem in enumerate(FinalColours):
    if Wcolour in elem:
        selectColourindex.append(count+1)
print(selectColourindex)
print(selectItemindex)
for each in selectColourindex:
    if selectColourindex[Ccounter] in selectItemindex:
        point = selectColourindex[Ccounter]
        print(point)
    else:
        Ccounter+=1
web = 'http://www.supremenewyork.com'+Whrefs[point-1]
driver.get(web)
elem1 = driver.find_element_by_name('commit')
elem1.click()
time.sleep(1)
elem2 = driver.find_element_by_link_text('view/edit basket')
elem2.click()
time.sleep(1)
elem3 = driver.find_element_by_link_text('checkout now')
elem3.click()
"®" is not a character but a unicode codepoint so if you're using Python2, your code will never work. Instead of using str(), use something like this:
unicode(input_string, 'utf8')
# or
unicode(input_string, 'unicode-escape')
Edit: Given the code surrounding the initial snippet, which was posted later, and the fact that BeautifulSoup actually returns Unicode already, it seems that removing str() is the best course of action, and @MarkTolonen's answer is spot-on.
BeautifulSoup returns Unicode strings. Stop converting them back to byte strings. Best practice when dealing with text is to:
1. Decode incoming text to Unicode (which is what BeautifulSoup is doing).
2. Process all text as Unicode.
3. Encode outgoing text only at the output boundary (to file, to database, to sockets, etc.).
Small example of your issue:
text = u'\N{REGISTERED SIGN}' # syntax to create a Unicode codepoint by name.
bytes = str(text)
Output:
Traceback (most recent call last):
File "test.py", line 2, in <module>
bytes = str(text)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xae' in position 0: ordinal not in range(128)
Note that the first line works and supports the character. Converting it to a byte string fails because str() defaults to encoding in ASCII. You can explicitly encode it with another encoding (e.g. bytes = text.encode('utf8')), but that breaks rule 2 above and creates other issues.
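Applied to the scraping loop from the question, a minimal sketch of rules 1 and 2 is simply to drop the str() calls and keep the Unicode strings that BeautifulSoup hands back (variable names taken from the question):

for colour in soup.find_all('a', attrs={"class":"name-link"}, href=True):
    # colour.text and colour.get('href') are already Unicode; no str() needed
    CnI.append(colour.text)
    Uhrefs.append(colour.get('href'))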
Suggested reading:
https://nedbatchelder.com/text/unipain.html
https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/
Recently I came across a question and was confused about a possible solution. The code part is:
# code part in result reader
result = map(int, input())
# consumer call
result_consumer(result)
It's not about how they work. The problem is that when you run this in Python 2 it raises an exception in the result-fetching part, so the result reader can handle the exception; but in Python 3 a map object is returned, so only the consumer will be able to handle the exception.
Is there any solution that keeps the map function and handles the exception in both Python 2 and Python 3?
python3
>>> d = map(int, input())
1,2,3,a
>>> d
<map object at 0x7f70b11ee518>
>>>
python2
>>> d = map(int, input())
1,2,3,'a'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'a'
>>>
The behavior of map is not the only difference between Python 2 and Python 3; input is also different. You need to keep the basic differences between the two in mind to make code compatible with both.
Python 3 builtins and their Python 2 equivalents:
map = itertools.imap
zip = itertools.izip
filter = itertools.ifilter
range = xrange
input = raw_input
So, to write code that works in both, you can use alternatives like list comprehensions that behave the same in both versions, and for the cases that don't have easy alternatives you can define new functions and/or use conditional renames, for example:
my_input = input
try:
    raw_input
except NameError:  # we are in python 3
    my_input = lambda msj=None: eval(input(msj))
(or use your favorite way of checking which version of Python is running)
# code part in result reader
result = [ int(x) for x in my_input() ]
# consumer call
result_consumer(result)
That way your code does the same thing regardless of which version of Python you run it on.
But as jsbueno mentioned, eval and Python 2's input are dangerous, so prefer the safer raw_input or Python 3's input:
try:
    input = raw_input
except NameError:  # we are in python 3
    pass
(or use your favorite way of checking which version of Python is running)
Then, if your plan is to provide your input as 1,2,3, add an appropriate split:
# code part in result reader
result = [ int(x) for x in input().split(",") ]
# consumer call
result_consumer(result)
If you always need the exception to occur at the same place you can always force the map object to yield its results by wrapping it in a list call:
result = list(map(int, input()))
If an error occurs in Python 2 it will be during the call to map while, in Python 3, the error is going to surface during the list call.
The slight downside is that in the case of Python 2 you'll create a new list. To avoid this you could alternatively branch based on sys.version and use the list call only in Python 3, but that might be too tedious for you.
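A small sketch of that version-branching idea (using sys.version_info rather than parsing sys.version, and a helper name of my own choosing):

import sys

def read_ints(text):
    # Force evaluation here so a ValueError surfaces at this call
    # in both Python 2 and Python 3.
    if sys.version_info[0] >= 3:
        return list(map(int, text))
    return map(int, text)   # map already returns a list in Python 2

result = read_ints("123")   # [1, 2, 3]
# read_ints("12a") raises ValueError inside read_ints in both versions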
I usually use my own version of map in these situations to escape any possible problems that may occur:
def my_map(func, some_list):
    done = []
    for item in some_list:
        done.append(func(item))
    return done
and my own version of input too
def getinput(text):
    import sys
    ver = sys.version[0]
    if ver == "3":
        return input(text)
    else:
        return raw_input(text)
If you are working on a big project, add them to a Python file and import them whenever you need them, like I do.
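For example, used together (the comma-separated prompt below is just an assumed input format):

# assuming my_map and getinput are defined as above (or imported from a helper module)
raw = getinput("Enter comma-separated integers: ")   # e.g. the user types: 1,2,3
result = my_map(int, raw.split(","))                 # raises ValueError here on bad input
print(result)                                        # [1, 2, 3]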
I would like to retrieve tweets on a specific date based on their hashtag. For this purpose I'm using tweepy and the following code:
results = api.search('#brexit OR #EUref', since="2016-06-24",
until="2016-06-30", monitor_rate_limit=True,wait_on_rate_limit=True)
with open('24june_bx.txt', 'w') as f:
    for tweet in results:
        try:
            f.write('{}\n'.format(tweet.text.decode('utf-8')))
        except BaseException as e:
            print 'ascii codec can\'t encode characters'
            continue
As you can see, I'm trying to get all the tweets with the hashtag '#brexit' or '#EUref' from the day after the vote and store them in the file '24june_bx.txt'.
It kind of works... but in the file I only get about 10 tweets. The terminal also reports the exception 7 times and prints 'ascii codec...'.
What do you think may be the problem?
Sorry for the noobish question.
Many thanks.
You can use Tweepy's Cursor in conjunction with api.search to get as many tweets as you want.
def search_tweets_from_twitter_home(query, max_tweets, from_date, to_date):
    """Search using twitter search_home. "result_type=mixed" means both
    'recent' & 'popular' tweets will be returned in search results.
    Returns a generator (for memory efficiency).
    """
    searched_tweets = (status._json for status in tweepy.Cursor(api.search,
                           q=query, count=300, since=from_date, until=to_date,
                           result_type="mixed", lang="en").items(max_tweets))
    return searched_tweets
This will return as many tweets as you specify in max_tweets, assuming that many tweets are available to return.
You can then iterate over the generator and write it to a file.
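A rough usage sketch (the query, dates, file name, and the count of 1000 are placeholders, and an authenticated api object is assumed):

import io

results = search_tweets_from_twitter_home('#brexit OR #EUref', 1000,
                                          "2016-06-24", "2016-06-30")
with io.open('24june_bx.txt', 'w', encoding='utf-8') as f:
    for status_json in results:
        f.write(status_json['text'] + u'\n')   # each item is the raw status dict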
Use the io lib, setting the encoding to utf-8 to handle your encoding errors:
import io

with io.open('24june_bx.txt', 'w', encoding="utf-8") as f:
    for tweet in results:
        try:
            f.write(u'{}\n'.format(tweet.text))
        except UnicodeEncodeError as e:
            print(e)
If you use the regular open you need to encode to utf-8 as you already have a unicode string:
with open('24june_bx.txt', 'w') as f:
    for tweet in results:
        try:
            f.write('{}\n'.format(tweet.text.encode("utf-8")))
        except UnicodeEncodeError as e:
            print(e)
'#brexit OR #EUref'
I think using this as the search query will return tweets that contain that exact string. Try searching for '#brexit' and '#EUref' separately and concatenating the results.
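A minimal sketch of that idea, reusing the api object and date range from the question:

brexit_tweets = api.search('#brexit', since="2016-06-24", until="2016-06-30")
euref_tweets = api.search('#EUref', since="2016-06-24", until="2016-06-30")
results = brexit_tweets + euref_tweets   # combine the two result lists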
Try adding
# -*- coding: utf-8 -*-
as the first line of your script.
I have written the following code. I am able to print out the parsed values of lat and lon, but I am unable to write them to a file. I tried flush and I also tried closing the file, but to no avail. Can somebody point out what's wrong here?
import os
import serial

def get_present_gps():
    ser=serial.Serial('/dev/ttyUSB0',4800)
    ser.open()
    # open a file to write gps data
    f = open('/home/iiith/Desktop/gps1.txt', 'w')
    data=ser.read(1024) # read 1024 bytes
    f.write(data) #write data into file
    f = open('/home/iiith/Desktop/gps1.txt', 'r')# fetch the required file
    f1 = open('/home/iiith/Desktop/gps2.txt', 'a+')
    for line in f.read().split('\n'):
        if line.startswith('$GPGGA'):
            try:
                lat, _, lon= line.split(',')[2:5]
                lat=float(lat)
                lon=float(lon)
                print lat/100
                print lon/100
                a=[lat,lon]
                f1.write(lat+",")
                f1.flush()
                f1.write(lon+"\n")
                f1.flush()
                f1.close()
            except:
                pass

while True:
    get_present_gps()
You're covering the error up by using the except: pass. Don't do that... ever. At least log the exception.
One error which it definitely covers up is lat+",", which is going to fail because it is a float plus a str, and that operation is not supported. But there may be more.
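A minimal sketch of the fix, converting the values back to strings before writing (and, as a side note, f1.close() probably shouldn't sit inside the loop either):

lat, _, lon = line.split(',')[2:5]
lat = float(lat)
lon = float(lon)
f1.write(str(lat) + "," + str(lon) + "\n")   # build the line as a string
f1.flush()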