Unable to write to file using Python 2.7 - python-2.7

I have written the following code. I am able to print out the parsed values of lat and lon, but I am unable to write them to a file. I tried flushing and also closing the file, but to no avail. Can somebody point out what's wrong here?
import os
import serial

def get_present_gps():
    ser = serial.Serial('/dev/ttyUSB0', 4800)
    ser.open()
    # open a file to write gps data
    f = open('/home/iiith/Desktop/gps1.txt', 'w')
    data = ser.read(1024)  # read 1024 bytes
    f.write(data)  # write data into file
    f = open('/home/iiith/Desktop/gps1.txt', 'r')  # fetch the required file
    f1 = open('/home/iiith/Desktop/gps2.txt', 'a+')
    for line in f.read().split('\n'):
        if line.startswith('$GPGGA'):
            try:
                lat, _, lon = line.split(',')[2:5]
                lat = float(lat)
                lon = float(lon)
                print lat/100
                print lon/100
                a = [lat, lon]
                f1.write(lat+",")
                f1.flush()
                f1.write(lon+"\n")
                f1.flush()
                f1.close()
            except:
                pass

while True:
    get_present_gps()

You're covering the error up by using except: pass. Don't do that... ever. At least log the exception.
One error it definitely covers is lat+",", which is going to fail because it is float + str, and that operation is not supported. But there may be more.
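For illustration, a minimal sketch of the question's loop with those issues addressed: the floats are formatted to strings before writing, the file is closed only once after the loop, and exceptions are reported instead of silenced (f is the gps1.txt handle from the question's code):

import traceback

f1 = open('/home/iiith/Desktop/gps2.txt', 'a+')
for line in f.read().split('\n'):
    if line.startswith('$GPGGA'):
        try:
            lat, _, lon = line.split(',')[2:5]
            lat = float(lat)
            lon = float(lon)
            f1.write("%f,%f\n" % (lat, lon))  # format floats to str; float + str raises TypeError
            f1.flush()
        except Exception:
            traceback.print_exc()  # at least see what went wrong
f1.close()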

Related

How can I convert a transcribed .wav into txt in full? - Google Speech API

I'm having trouble converting fully transcribed speech to a text file. Eventually I get what I need, but not the entire text from the audio file. Note that I can see the whole text when I use the print() function (pic 1), but I get only one line of that text when I try to write it to a .txt file (pic 2).
Also, you can look at my code if you need additional info. Thank you in advance!
from google.cloud import speech
import os

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'PATH'
client = speech.SpeechClient()

with open('sample.wav', "rb") as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,
    language_code="en-US",
    # Enable automatic punctuation
    enable_automatic_punctuation=True,
)
response = client.recognize(config=config, audio=audio)

for result in response.results:
    extr = result.alternatives[0].transcript
    print(extr)
    with open("guru9.txt", "w+") as f:
        f.write(extr)
    f.close()
What happens in your code is that on every iteration you open, write to, and close the file, and because mode "w+" truncates on open, each transcript overwrites the previous one. You should move the opening and closing of the file outside the loop.
myfile = open("guru9.txt", "w+")
for result in response.results:
    extr = result.alternatives[0].transcript
    myfile.write(extr)
myfile.close()
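An equivalent sketch using a with block, so the file is closed automatically; writing a newline after each transcript is an assumption about the desired layout:

with open("guru9.txt", "w") as myfile:
    for result in response.results:
        extr = result.alternatives[0].transcript
        myfile.write(extr + "\n")  # one transcript per line; drop "\n" if not wanted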

download log - modify and use last line

I'm trying to shorten or simplify my code.
I want to download a log file from an internal server which is updated every 10 seconds, but I'm only running my script every 10 or 15 minutes.
The log file is semicolon separated and has many rows I don't use, so my workflow is as follows:
get the current date in YYYYMMDD format
download the file
wait for the file to finish downloading
trim the file to the rows I need
only process the last line of the file
delete the files
I'm new to Python; if you could help me shorten/simplify my code into fewer steps, I would be thankful.
import urllib
import time
from datetime import date
import csv

today = str(date.today())
url = "http://localserver" + today + ".log"
urllib.urlretrieve(url, "output.log")
time.sleep(15)

with open("output.log", "rb") as source:
    rdr = csv.reader(source, delimiter=';')
    with open("result.log", "wb") as result:
        wtr = csv.writer(result)
        for r in rdr:
            wtr.writerow((r[0], r[1], r[2], r[3], r[4], r[5], r[15], r[38], r[39], r[42], r[54],
                          r[90], r[91], r[92], r[111], r[116], r[121], r[122], r[123], r[124]))

with open('result.log') as myfile:
    print(list(myfile)[-1])  # how do I access certain rows here?
You could make use of the requests module, as below. The timeout can be increased depending on how long the download takes to complete. Furthermore, the two with open statements can be consolidated into a single one. The response is read with the iter_lines generator; note that stream=True should be set so the content is not downloaded until it is iterated.
from datetime import date
import csv
import requests

# Declare variables
today = str(date.today())
url = "http://localserver" + today + ".log"
outfile = 'output.log'

# Instead of waiting 15 seconds explicitly, use the requests module's timeout parameter
response = requests.get(url, timeout=15, stream=True)
if response.status_code != 200:
    print('Failed to get data:', response.status_code)

with open(outfile, 'w') as dest:
    writer = csv.writer(dest)
    # Take only the last line of the response (list() materialises the generator here)
    line = list(response.iter_lines())[-1]
    # Decode the line to a string and feed it to the csv reader
    reader = csv.reader(line.decode('utf-8').splitlines(), delimiter=';')
    # Write the selected columns to the output file
    for r in reader:
        writer.writerow((r[0], r[1], r[2], r[3], r[4], r[5], r[15], r[38], r[39], r[42], r[54],
                         r[90], r[91], r[92], r[111], r[116], r[121], r[122], r[123], r[124]))
print('File written successfully: ' + outfile)
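To the inline question about accessing certain rows of result.log: once the rows are read into a list they can be indexed directly. A minimal sketch using the filenames from the question:

import csv

with open('result.log') as myfile:
    rows = list(csv.reader(myfile))  # one list of columns per row

print(rows[-1])    # the last row
print(rows[0][3])  # fourth column of the first row (indexing starts at 0)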

3D Drawing from a file in an extra directory [duplicate]

I'm trying to get a data parsing script up and running. It works as far as the data manipulation is concerned. What I'm trying to do is set this up so I can pass in multiple user-defined CSVs with a single command.
e.g.
> python script.py One.csv Two.csv Three.csv
If you have any advice on how to automate the naming of the output CSV so that if input = test.csv, output = test1.csv, I'd appreciate that as well.
Getting
TypeError: coercing to Unicode: need string or buffer, list found
for the line
for line in csv.reader(open(args.infile)):
My code:
import csv
import pprint
import argparse

pp = pprint.PrettyPrinter(indent=4)
res = []

parser = argparse.ArgumentParser()
#parser.add_argument("infile", nargs="*", type=str)
#args = parser.parse_args()
parser.add_argument("infile", metavar="CSV", nargs="+", type=str, help="data file")
args = parser.parse_args()

with open("out.csv", "wb") as f:
    output = csv.writer(f)
    for line in csv.reader(open(args.infile)):
        for item in line[2:]:
            # to skip empty cells
            if not item.strip():
                continue
            item = item.split(":")
            item[1] = item[1].rstrip("%")
            print([line[1]+item[0], item[1]])
            res.append([line[1]+item[0], item[1]])
            output.writerow([line[1]+item[0], item[1].rstrip("%")])
I don't really understand what is going on with the error. Can someone explain this in layman's terms?
Bear in mind that I am new to programming/Python as a whole and am basically learning alone, so if possible, could you explain what is going wrong and how to fix it so I can note it for future reference?
args.infile is a list of filenames, not one filename. Loop over it:
import os

for filename in args.infile:
    base, ext = os.path.splitext(filename)
    with open("{}1{}".format(base, ext), "wb") as outf, open(filename, 'rb') as inf:
        output = csv.writer(outf)
        for line in csv.reader(inf):
            # ... rest of the original processing loop goes here
Here I used os.path.splitext() to split the extension from the base filename, so you can generate a new output filename by adding 1 to the base.
If you specify an nargs argument to .add_argument, the argument will always be returned as a list.
Assuming you want to deal with all of the files specified, loop through that list:
for filename in args.infile:
    for line in csv.reader(open(filename)):
        for item in line[2:]:
            # to skip empty cells
            [...]
Or, if you really just want to be able to specify a single file, just get rid of nargs="+".
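Putting both answers together, a minimal runnable sketch (Python 2, matching the question's "wb" mode) that loops over every input file and derives each output name from the input; the processing body is the question's own, and the base + "1" naming is carried over from the first answer:

import argparse
import csv
import os

parser = argparse.ArgumentParser()
parser.add_argument("infile", metavar="CSV", nargs="+", type=str, help="data file")
args = parser.parse_args()

for filename in args.infile:                      # nargs="+" always yields a list
    base, ext = os.path.splitext(filename)
    outname = "{}1{}".format(base, ext)           # e.g. test.csv -> test1.csv
    with open(filename, "rb") as inf, open(outname, "wb") as outf:
        output = csv.writer(outf)
        for line in csv.reader(inf):
            for item in line[2:]:
                if not item.strip():              # skip empty cells
                    continue
                parts = item.split(":")
                output.writerow([line[1] + parts[0], parts[1].rstrip("%")])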

Issue with writing multiple lines into a file in python

I want to download multiple specific links (image URLs) into a txt file (or any file where all the links can be listed underneath each other).
I get them, but the code writes each link over the top of the previous one, and in the end only one link remains :(. I also don't want repeated URLs.
def dlink(self, image_url):
    r = self.session.get(image_url, stream=True)
    with open('Output.txt', 'w') as f:
        f.write(image_url + '\n')
The issue is most simply that opening a file with mode 'w' truncates any existing file. You should change 'w' to 'a' instead. This will open an existing file for writing, but append instead of truncating.
More fundamentally, the problem may be that you are opening the file over and over in a loop. This is very inefficient. The only time the approach you use could really be useful is if your program is approaching the OS-imposed limit on the number of open files. If this is not the case, I would recommend putting the loop inside the with block, keeping the mode as 'w' since you now open the file just once, and passing the open file to your dlink function (see the sketch below).
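A minimal sketch of that restructuring, under the assumption that the caller has a list of URLs (here called image_urls, a hypothetical name) and the same self.session as in the question:

def dlink(self, f, image_url):
    # the open file handle is passed in; the request itself is unchanged
    r = self.session.get(image_url, stream=True)
    f.write(image_url + '\n')

# caller side, e.g. inside another method of the same class:
# open the file once in 'w' mode and keep the loop inside the with block
with open('Output.txt', 'w') as f:
    for url in set(image_urls):   # set() also drops repeated URLs
        self.dlink(f, url)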
Edit
Huge mistake on my part: since this is a method and you will call it several times, opening the file in write mode ('w') or similar will overwrite the existing file if it exists.
So if you use 'a' mode instead, the documentation says:
Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in append mode. If the file does not exist, it creates a new file for writing.
The other problem lies in the fact that image_url is a list, so you need to write it line by line:
def dlink(self, image_url):
    r = self.session.get(image_url, stream=True)
    with open('Output.txt', 'a') as f:
        for url in list(set(image_url)):  # set() removes duplicate URLs
            f.write(url + '\n')           # write each url, not the whole list
Another way to do it:
your_file = open('Output.txt', 'a')
r = self.session.get(image_url, stream=True)
for url in list(set(image_url)):
    your_file.write("%s\n" % url)
your_file.close()  # don't forget to close it :)
The file open mode is wrong: 'w' mode makes the file get overwritten every time you open it, not appended to. Replace it with 'a' mode.
See https://stackoverflow.com/a/23566951/8178794 for more detail.
Opening a file with mode w overwrites the file if it exists; use mode a to append data to an existing file.
Try:
import requests
from os.path import splitext

# use mode='a' to append results without erasing filename
def dlink(url, filename, mode='w'):
    r = requests.get(url)
    if r.status_code != 200:
        return
    # here the link is valid
    with open(filename, mode) as desc:
        desc.write(url + '\n')  # newline so links are listed one per line

def dimg(img_url, img_name):
    r = requests.get(img_url, stream=True)
    if r.status_code != 200:
        return
    _, ext = splitext(img_url)
    with open(img_name + ext, 'wb') as desc:
        for chunk in r:
            desc.write(chunk)

dlink('https://image.flaticon.com/teams/slug/freepik.jpg', 'links.txt')
dlink('https://image.flaticon.com/teams/slug/freepik.jpg', 'links.txt', 'a')
dimg('https://image.flaticon.com/teams/slug/freepik.jpg', 'freepik')

Python on Raspberry Pi: results are only integers

I am using the following Python code on the Raspberry Pi to collect an audio signal and output the volume. I can't understand why my output consists only of integers.
#!/usr/bin/env python
import alsaaudio as aa
import audioop

# Set up audio
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, 'hw:1')
data_in.setchannels(2)
data_in.setrate(44100)
data_in.setformat(aa.PCM_FORMAT_S16_LE)
data_in.setperiodsize(256)

while True:
    # Read data from device
    l, data = data_in.read()
    if l:
        # catch frame error
        try:
            max_vol = audioop.max(data, 2)
            scaled_vol = max_vol / 4680
            if scaled_vol == 0:
                print "vol 0"
            else:
                print scaled_vol
        except audioop.error, e:
            if e.message != "not a whole number of frames":
                raise e
Also, I don't understand the syntax in this line:
l,data = data_in.read()
It's likely that it's reading in bytes. The line l, data = data_in.read() returns a tuple and unpacks it into l and data. Run the type() builtin function on those variables and see what you've got to work with.
Otherwise, look into the PCM Terminology and Concepts section of the documentation for the pyalsaaudio package.
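A quick sketch of that inspection inside the question's read loop; as an aside, in Python 2 dividing two ints with / truncates, which is likely why max_vol/4680 only ever prints whole numbers:

l, data = data_in.read()             # read() returns a (length, data) tuple
print type(l), type(data)            # e.g. <type 'int'> and <type 'str'> on Python 2

# aside: divide by a float to keep the fractional part of the volume
scaled_vol = audioop.max(data, 2) / 4680.0
print scaled_vol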