Soccer Ball Detection using YOLOv2 (Darkflow) - AssertionError: expect 202335260 bytes, found 203934260

https://github.com/deep-diver/Soccer-Ball-Detection-YOLOv2
When I run it, I get Loading yolo.weights ... and then
AssertionError: expect 202335260 bytes, found 203934260
However, when I run the same command with the default dataset, it works. I downloaded the weights file from https://drive.google.com/drive/folders/0B1tW_VtY7onidEwyQ2FtQVplWEU
I modified the line self.offset = 16 in ./darkflow/utils/loader.py and replaced it with self.offset = 20, but that did not solve the issue.
How can I solve this issue?

Just to add to Zrufy's answer: in darkflow/utils/loader.py,
class weights_walker(object):
    """incremental reader of float32 binary files"""
    def __init__(self, path):
        self.eof = False  # end of file
        self.path = path  # current pos
        if path is None:
            self.eof = True
            return
        else:
            self.size = os.path.getsize(path)  # save the path
            major, minor, revision, seen = np.memmap(path,
                shape=(), mode='r', offset=0,
                dtype='({})i4,'.format(4))
            self.transpose = major > 1000 or minor > 1000
            self.offset = 16 + 203934260 - 202335260
Make the change so that the last line is of the form
self.offset = 16 + found_value - expected_value
where found_value and expected_value can be taken from the assertion error that you face.
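For the values in this question, that works out to self.offset = 16 + (203934260 - 202335260) = 1599016.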

I got the same problem, and solved it with Ign0reLee's help.
You can find the detail in https://github.com/deep-diver/Soccer-Ball-Detection-YOLOv2/issues/3
Basically, it happens when your network configuration (.cfg) and weight file (.weights) mismatch; I think the cfg file in this repo is not correct for the official weight file.
Please try this weight file
https://pjreddie.com/media/files/yolov2.weights
with the cfg file Ign0reLee posted on the issue page.
Wish you luck

The method where you change self.offset from 16 to 20 does not work.
The only working method for this bug is to update self.offset to:
old_offset_value + (found_value - expected_value)
For example, in your case, put this in self.offset instead of 16:
16 + (203934260 - 202314760)
let me know!

Related

multiple openpyxl xlsx workbooks into one .zip file for download

I am trying to get some xlsx files from a form. I load them using openpyxl and do some data processing, and finally I need to send all of the processed xlsx files, zipped, to the user as a download.
Here is an example of what I did so far:
if form.is_valid():
    s = StringIO.StringIO()
    zf = zipfile.ZipFile(s, mode="w")
    for xlsx in request.FILES.getlist('xlsxs'):
        # wb / ws come from loading the uploaded workbook (omitted in the question)
        element_column = "G"
        element_row = 16
        massar_column = "C"
        massar_row_start = 18
        loop = column_index_from_string(element_column)
        while (loop <= ws.max_column):
            for i in range(massar_row_start, ws.max_row + 1):
                # ...
                ws["%s%s" % (element_column, i)] = 0
                # ...
            loop += 2
            element_column = get_column_letter(loop)
        buf = save_virtual_workbook(wb)
        zf.write(buf)  # or zf.write(wb)
    zf.close()
    response = HttpResponse(s.getvalue(), content_type="application/x-zip-compressed")
    response['Content-Disposition'] = "attachment; filename=notes.zip"
    return response
I get the error
TypeError at My_view
stat() argument 1 must be encoded string without null bytes, not str
Thanks in advance for any help you can offer.
save_virtual_workbook returns a bytestream (see the openpyxl source).
You are passing this value to ZipFile.write which is expecting a filename.
I think you should be using ZipFile.writestr, and you need to provide a filename that will be used inside the archive. I'm not sure how you are getting the error message you see, but this is the first mistake I can see.
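For example, here is a minimal sketch of that change using an in-memory zip; the Workbook() stand-in, the loop, and the archive names are illustrative, not the question's actual processing:
import zipfile
from io import BytesIO
from openpyxl import Workbook
from openpyxl.writer.excel import save_virtual_workbook

buf = BytesIO()
zf = zipfile.ZipFile(buf, mode="w")
for idx in range(3):  # stand-in for looping over request.FILES.getlist('xlsxs')
    wb = Workbook()   # stand-in for the processed workbook
    wb.active["A1"] = idx
    # writestr stores the given bytes under an archive name;
    # ZipFile.write would instead try to open a path on disk
    zf.writestr("processed_%d.xlsx" % idx, save_virtual_workbook(wb))
zf.close()
print(len(buf.getvalue()), "bytes in the in-memory zip")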

PyYAML, safe_dump adding line breaks and indent to the YAML file

I want to receive the following YAML file:
---
classes:
  - apache
  - ntp
apache::first: 1
apache::package_ensure: present
apache::port: 999
apache::second: 2
apache::service_ensure: running
ntp::bla: bla
ntp::package_ensure: present
ntp::servers: '-'
After parsing, I received such output:
---
apache::first: 1
apache::package_ensure: present
apache::port: 999
apache::second: 2
apache::service_ensure: running
classes:
- apache
- ntp
ntp::bla: bla
ntp::package_ensure: present
ntp::servers: '-'
I have found properties that should make it possible to style the document. I tried to set line_break and indent, but it does not work:
with open(config['REPOSITORY_PATH'] + '/' + file_name, 'w+') as file:
    yaml.safe_dump(data_map, file, indent=10, explicit_start=True, explicit_end=True,
                   default_flow_style=False, line_break=1)
    file.close()
Please advise me on a simple approach to style the output.
You cannot do that in PyYAML. The indent option only affects mappings and not sequences. PyYAML also doesn't preserve order of mapping keys on round-tripping.
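A quick illustration of that limitation, with made-up data rather than the question's file:
import yaml

data = {'classes': ['apache', 'ntp'], 'outer': {'inner': 1}}
print(yaml.safe_dump(data, indent=4, default_flow_style=False))
# classes:
# - apache          <- the block-sequence dashes ignore indent
# - ntp
# outer:
#     inner: 1      <- only the nested mapping picks up indent=4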
If you use ruamel.yaml (disclaimer: I am the author of that package), then getting the exact same input as output is easy:
import ruamel.yaml

yaml_str = """\
---
classes:
  - apache # keep the indentation
  - ntp
apache::first: 1
apache::package_ensure: present
apache::port: 999
apache::second: 2
apache::service_ensure: running
ntp::bla: bla
ntp::package_ensure: present
ntp::servers: '-'
"""

data = ruamel.yaml.round_trip_load(yaml_str)
res = ruamel.yaml.round_trip_dump(data, indent=4, block_seq_indent=2,
                                  explicit_start=True)
assert res == yaml_str
Please note that it also preserves the comment I added to the first sequence element.
You can build this from "scratch" but adding a newline is not something for which a call exists in ruamel.yaml:
import ruamel.yaml
from ruamel.yaml.tokens import CommentToken
from ruamel.yaml.error import Mark
from ruamel.yaml.comments import CommentedMap, CommentedSeq

data = CommentedMap()
data['classes'] = classes = CommentedSeq()
classes.append('apache')
classes.append('ntp')
data['apache::first'] = 1
data['apache::package_ensure'] = 'present'
data['apache::port'] = 999
data['apache::second'] = 2
data['apache::service_ensure'] = 'running'
data['ntp::bla'] = 'bla'
data['ntp::package_ensure'] = 'present'
data['ntp::servers'] = '-'

m = Mark(None, None, None, 0, None, None)
data['classes'].ca.items[1] = [CommentToken('\n\n', m, None), None, None, None]
# ^ index 1 is the last item in the list
data.ca.items['apache::service_ensure'] = [None, None, CommentToken('\n\n', m, None), None]

res = ruamel.yaml.round_trip_dump(data, indent=4, block_seq_indent=2,
                                  explicit_start=True)
print(res, end='')
You will have to add the newline as comment (without '#') to the last element before the newline, i.e. the last list element and the apache::service_ensure mapping entry.
Apart from that, you should ask yourself if you really want to use PyYAML, which only supports (most of) YAML 1.1 from 2005 and not the latest revision, YAML 1.2 from 2009.
The WordPress page you linked to doesn't seem very serious (it doesn't even have the package name, PyYAML, correct).

What is wrong with Django csv upload code?

Here is my code. I would like to import a CSV file and save it to the database via a model.
class DataInput(forms.Form):
    file = forms.FileField(label="Select CSV file")

    def save(self, mdl):
        records = csv.reader(self.cleaned_data["file"].read().decode('utf-8'), delimiter=',')
        if mdl == 'auction':
            auction = Auction()
            for line in records:
                auction.auction_name = line[0]
                auction.auction_full_name = line[1]
                auction.auction_url = line[2]
                auction.is_group = line[3]
                auction.save()
Now, it throws the following error:
Exception Type: IndexError
Exception Value: list index out of range
CSV file:
RTS,Rapid Trans System,www.rts.com,TRUE
ZAA,Zelon Advanced Auton,www.zaa.info,FALSE
I'm really stuck. Please help.
First of all, the full stacktrace should reveal exactly where the error is. Give Django the --traceback argument, e.g. ./manage.py --traceback runserver.
As Burhan Khalid mentioned, you are missing the 5th column in your CSV file (index 4), so that is the root of the error.
Once you read the file with .read(), you are passing in the complete string - which is why each row is an individual character.
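A quick illustration with a made-up string rather than the uploaded file:
import csv

# Iterating a plain str feeds csv.reader one character at a time, so each
# parsed "row" comes from a single character and line[1] raises IndexError.
for line in csv.reader("RTS,Rapid Trans System"):
    print(line)   # e.g. ['R'], ['T'], ['S'], ['', ''], ['R'], ...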
You need to pass the entire file object, without reading it first:
records = csv.reader(self.cleaned_data["file"], delimiter=',')
If you need to decode it first, then you had better run through the file yourself:
for line in self.cleaned_data['file'].read().decode('utf-8').split('\n'):
    if line.strip():
        try:
            name, full_name, url, group = line.split(',')
        except ValueError:
            print('Invalid line: {}'.format(line))
            continue
        i = Auction()
        i.auction_name = name
        i.auction_full_name = full_name
        i.auction_url = url
        i.is_group = group
        i.save()
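As a variation on the same idea (my own sketch, not part of the original answer), you could let csv.reader handle the splitting and quoting over the decoded lines:
import csv

decoded = self.cleaned_data['file'].read().decode('utf-8')
for row in csv.reader(decoded.splitlines()):
    if len(row) < 4:
        continue  # skip blank or malformed lines
    i = Auction()
    i.auction_name, i.auction_full_name, i.auction_url, i.is_group = row[:4]
    i.save()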

Reading zipped ESRI BIL files with Python

I have precipitation data from the PRISM Climate Group which are now offered in .bil format (ESRI BIL, I think) and I'd like to be able to read these datasets with Python.
I've installed the spectral package, but the open_image() method returns an error:
def ReadBilFile(bil):
    import spectral as sp
    b = sp.open_image(bil)

ReadBilFile(r'G:\truncated\ppt\1950\PRISM_ppt_stable_4kmM2_1950_bil.bil')
IOError: Unable to determine file type or type not supported.
The documentation for spectral clearly says that it supports BIL files; can anyone shed any light on what's happening here? I am also open to using GDAL, which supposedly supports the similar/equivalent ESRI EHdr format, but I can't find any good code snippets to get started.
It's now 2017 and there is a slightly better option. The package rasterio supports bil files.
>>> import rasterio
>>> tmean = rasterio.open('PRISM_tmean_stable_4kmD1_20060101_bil.bil')
>>> tmean.affine
Affine(0.041666666667, 0.0, -125.0208333333335,
       0.0, -0.041666666667, 49.9375000000025)
>>> tmean.crs
CRS({'init': 'epsg:4269'})
>>> tmean.width
1405
>>> tmean.height
621
>>> tmean.read().shape
(1, 621, 1405)
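As a small follow-up sketch (the nodata masking is my own addition, not from the original answer; the file name is the same as above):
import numpy as np
import rasterio

with rasterio.open('PRISM_tmean_stable_4kmD1_20060101_bil.bil') as src:
    band = src.read(1)                           # first (and only) band
    band = np.ma.masked_equal(band, src.nodata)  # mask the nodata cells
print(band.min(), band.max())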
Ok, I'm sorry to post a question and then answer it myself so quickly, but I found a nice set of course slides from Utah State University that has a lecture on opening raster image data with GDAL. For the record, here is the code I used to open the PRISM Climate Group datasets (which are in the EHdr format).
import gdal

def ReadBilFile(bil):
    gdal.GetDriverByName('EHdr').Register()
    img = gdal.Open(bil)
    band = img.GetRasterBand(1)
    data = band.ReadAsArray()
    return data

if __name__ == '__main__':
    a = ReadBilFile(r'G:\truncated\ppt\1950\PRISM_ppt_stable_4kmM2_1950_bil.bil')
    print a[44, 565]
EDIT 5/27/2014
I've built upon my answer above and wanted to share it here since the documentation seems to be lacking. I now have a class with one main method that reads the BIL file as an array and returns some key attributes.
import gdal
import gdalconst
import numpy as np  # needed for the masked arrays below

class BilFile(object):

    def __init__(self, bil_file):
        self.bil_file = bil_file
        self.hdr_file = bil_file.split('.')[0] + '.hdr'

    def get_array(self, mask=None):
        self.nodatavalue, self.data = None, None
        gdal.GetDriverByName('EHdr').Register()
        img = gdal.Open(self.bil_file, gdalconst.GA_ReadOnly)
        band = img.GetRasterBand(1)
        self.nodatavalue = band.GetNoDataValue()
        self.ncol = img.RasterXSize
        self.nrow = img.RasterYSize
        geotransform = img.GetGeoTransform()
        self.originX = geotransform[0]
        self.originY = geotransform[3]
        self.pixelWidth = geotransform[1]
        self.pixelHeight = geotransform[5]
        self.data = band.ReadAsArray()
        self.data = np.ma.masked_where(self.data == self.nodatavalue, self.data)
        if mask is not None:
            self.data = np.ma.masked_where(mask == True, self.data)
        return self.nodatavalue, self.data
I call this class using the following function, where I use GDAL's /vsizip/ virtual file system to read the BIL file directly from a zip file.
import numpy as np
import pandas as pd
import prism

def getPrecipData(years=None):
    grid_pnts = prism.getGridPointsFromTxt()
    flrd_pnts = np.array(pd.read_csv(r'D:\truncated\PrismGridPointsFlrd.csv').grid_code)
    mask = prism.makeGridMask(grid_pnts, grid_codes=flrd_pnts)
    for year in years:
        bil = r'/vsizip/G:\truncated\PRISM_ppt_stable_4kmM2_{0}_all_bil.zip\PRISM_ppt_stable_4kmM2_{0}_bil.bil'.format(year)
        b = prism.BilFile(bil)
        nodatavalue, data = b.get_array(mask=mask)
        data *= mm_to_in  # conversion factor defined elsewhere in the script
        b.write_to_csv(data, 'PrismPrecip_{}.txt'.format(year))
    return

# Get datasets
years = range(1950, 2011, 5)
getPrecipData(years=years)
You've already figured out a good solution to reading the file so this answer is just in regard to shedding light on the error you encountered.
The problem is that the spectral package does not support the Esri multiband raster format. BIL (Band Interleaved by Line) is not a specific file format; rather, it is a data interleave scheme (like BIP and BSQ), which can be used in many file formats. The spectral package does support BIL for the file formats that it recognizes (e.g., ENVI, Erdas LAN) but the Esri raster header is not one of those.
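For completeness, a hedged sketch of the kind of file spectral can open directly; the ENVI header name below is purely hypothetical, not a PRISM file:
import spectral as sp

# spectral reads BIL-interleaved data when it is described by a header format
# it recognizes, such as an ENVI .hdr file.
img = sp.open_image('example_scene.hdr')
data = img.load()   # read the whole cube into memory as an array
print(data.shape)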

Python: Trouble getting image to download/save to file

I am new to Python and seem to be having trouble getting an image to download and save to a file. I was wondering if someone could point out my error. I have tried two methods in various ways to no avail. Here is my code below:
import datetime
import urllib2
from bs4 import BeautifulSoup  # imports implied by the snippet below

# Ask user to enter URL
url = "http://hosted.ap.org/dynamic/stories/A/AF_PISTORIUS_TRIAL?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT&CTIME=2014-04-15-15-48-52"
timestamp = datetime.date.today()
soup = BeautifulSoup(urllib2.urlopen(url).read())
#soup = BeautifulSoup(requests.get(url).text)

# ap
links = soup.find("td", {'class': 'ap-mediabox-td'}).find_all('img', src=True)
for link in links:
    imgfile = open('%s.jpg' % timestamp, "wb")
    link = link["src"].split("src=")[-1]
    imgurl = "http://hosted.ap.org/dynamic/files" + link
    download_img = urllib2.urlopen(imgurl).read()
    #download_img = requests.get(imgurl, stream=True)
    #imgfile.write(download_img.content)
    imgfile.write(download_img)
    imgfile.close()

# link outputs: /photos/F/f5cc6144-d991-4e28-b5e6-acc0badcea56-small.jpg
# imgurl outputs: http://hosted.ap.org/dynamic/files/photos/F/f5cc6144-d991-4e28-b5e6-acc0badcea56-small.jpg
I receive no console error, just an empty picture file.
The relative path of the image can be obtained simply by doing:
link = link["src"]
Your statement:
link = link["src"].split("src=")[-1]
is excessive. Replace it with the above and you should get the image file created. When I tried it out, the image file was created; however, I was not able to view the image - it said the image was corrupted.
I have had success in the past doing the same task using Python's requests library with this code snippet:
r = requests.get(url, stream=True)
if r.status_code == 200:
    with open('photo.jpg', 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)
        f.close()
url in the snippet above would be your imgurl computed with the changes I suggested at the beginning.
Hope this helps.