I want to show 40 images in the image upload dialog.
I am also trying to change the pagination in controller/filemanager.php from 16 to 36, but when I run it I get this error:
syntax error: unexpected class commonController in line 1
What could the problem be?
Please modify filemanager.php under admin\controller: replace 16 with 40, and 14 with 40 - 2 = 38.
You can raise the number of images per page as high as you like; I have gone up to 144 and it works like a charm.
Don't forget to refresh Modifications after every change, otherwise you won't see the effect.
Raj
From the title it seems to be a pretty obvious issue, right?
But for the life of me I swear I have 21 labels and 21 classes.
So just as a sanity check I thought I'd ask!
I have a load of training images (640x640).
I've gone through them and used DataTurks to annotate the data.
From that I've created a set of PNG masks, using 255 for blank space and an int for the corresponding class number to build a NumPy array, which I then convert to a PNG.
I've then followed this SageMaker example for segmentation, which seems to work until I run ss_model.fit.
This is where I start to get some errors. The full log can be seen in this Gist.
The first error to jump out at me is:
label maps not provided, using defaults.
Which is strange, as I believe I've uploaded it correctly to S3: <bucket>/label_map/train_label_map.json
That label map looks like so: Gist. (Perhaps it fails because it's not valid JSON? However, I was copying how another SageMaker example uses it.)
The second error to jump out is the one in the title.
Now it could be that my masks are completely wrong (I'm still very new to ML), but they look like this, only 640x640:
[
  [255, 255, 255],
  [255,   2,   2],
  [255,   2,   2]
]
Where 255 is null and 2 is the annotation.
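For reference, I build each mask roughly like this (a simplified sketch; my real code takes the class ids from the DataTurks annotations):

    import numpy as np
    from PIL import Image

    # 255 = blank space / no annotation; class ids everywhere else
    mask = np.full((640, 640), 255, dtype=np.uint8)
    mask[100:200, 100:200] = 2   # e.g. a region annotated with class id 2
    Image.fromarray(mask).save('img_0001.png')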
Could this error be because I'm not including 255: "null" in the label_map?
Any insight would be really helpful! Thanks.
-- But for the life of me I swear I have 21 labels and 21 classes.
If you have 21 classes, the maximum label index should be 20, not 21; hence the error. Label indices start at 0. Notes on this can be found on the documentation page.
From your comment on your post, it seems like you have 23 classes if you have to set num_classes to 22. num_classes covers only the real classes and does not include 255, the hole class. Note that the algorithm will run without error if you give num_classes greater than your number of labels, because the num_classes parameter is used to create the softmax layer; if num_classes is larger than the actual number of labels seen, some labels are simply never learned.
Diving a little deeper, the label map in the link that you shared is wrong. Label maps only accept ints, not strings; it's an int-to-int mapping.
Following on, it is not enough to simply have the label map in the S3 bucket; it needs to be provided as a data channel to the algorithm when creating the training job.
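Something along these lines, a minimal sketch assuming the v1-style SageMaker Python SDK (the bucket name, prefixes, and content types are placeholders to adapt):

    import sagemaker

    bucket = '<bucket>'

    def channel(prefix, content_type):
        # an S3Prefix input, fully replicated to the training instance(s)
        return sagemaker.session.s3_input(
            's3://{}/{}'.format(bucket, prefix),
            distribution='FullyReplicated',
            content_type=content_type,
            s3_data_type='S3Prefix')

    data_channels = {
        'train': channel('train', 'image/jpeg'),
        'validation': channel('validation', 'image/jpeg'),
        'train_annotation': channel('train_annotation', 'image/png'),
        'validation_annotation': channel('validation_annotation', 'image/png'),
        # the label map only takes effect if it is wired in as its own channel
        'label_map': channel('label_map', 'application/json'),
    }

    ss_model.fit(inputs=data_channels)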
The gcov data files (*.gcda) accumulate counts across multiple tests, which is a wonderful thing. The problem is that I can't figure out how to get the .gcov files to accumulate the same way the .gcda files do.
I have a large project (53 headers, 54 .cpp files), and some headers are used in multiple .cpp files. The following example is radically simplified; the brute-force approach would take days of manual, tedious work if that turns out to be required.
Say for example I have xyz.hpp that defines the xyz class. On line 24 it defines the build() method that builds xyz data, and on line 35 it defines the data() method that returns a reference to the data.
Say I run my test suite, then execute gcov on abc.cpp. The xyz.hpp.gcov report has a count of 5 for line 24 (build) and a count of zero for line 35 (data). Now I run gcov on def.cpp, and the xyz.hpp.gcov report has a count of zero for line 24 and a count of 7 for line 35. So, instead of accumulating the report information and showing a count of 5 for line 24 (build) and 7 for line 35 (data), gcov replaces xyz.hpp.gcov each time, so all counts are reset. I understand why that's the default behavior, but I can't seem to override it. If I'm unable to accumulate the .gcov reports programmatically, I'll be forced to manually compare, say, a dozen different xyz.hpp.gcov files in order to assess the coverage.
It looks like LCOV is able to do this accumulation, but it takes weeks to get new software installed in my current work culture.
Thanks in advance for any help.
I came across a problem and could not find an elegant way to solve it...
We have an application that monitors audio input and tries to assign matches based on acoustic fingerprints.
The application takes a sample every few seconds, does a lookup, and stores the timestamped result in the database.
The fingerprinting is not always accurate, so "wrong" items sometimes get assigned. The data looks something like this:
timestamp   foreign_id   comment
--------------------------------------------------
12:00:00 17
12:00:10 17
12:00:20 17
12:00:30 17
12:00:40 723 wrong match
12:00:50 17
12:01:00 17
12:01:10 17
12:01:20 None no match
12:01:30 17
12:01:40 18
12:01:50 18
12:02:00 18
12:02:10 18
12:02:20 18
12:02:30 992 wrong match
12:02:40 18
12:02:50 18
So I'm looking for a way to "clean up" the data periodically.
Could anyone imagine a nice way to achieve this? In the given example, the entry with foreign_id 723 should be corrected to 17, and so on. And, if possible, with a threshold for how many entries before and after should be taken into account.
Not sure if my question is clear enough this way, but any input is welcome!
Check that a foreign id is in the database so many times, then check if those times are close together?
Why not just disregard the 'bad' data when using the data?
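Along the lines of the first comment, a minimal sketch of a windowed majority vote (a hypothetical clean() helper; it assumes the matches are already ordered by timestamp):

    from collections import Counter

    def clean(ids, window=3):
        """Replace each id by the majority value among its
        `window` neighbours on each side (a simple mode filter)."""
        cleaned = []
        for i in range(len(ids)):
            lo, hi = max(0, i - window), min(len(ids), i + window + 1)
            neighbours = [x for x in ids[lo:hi] if x is not None]
            if not neighbours:
                cleaned.append(ids[i])
                continue
            majority, count = Counter(neighbours).most_common(1)[0]
            # only overwrite clear outliers; genuine transitions survive
            cleaned.append(majority if count > (hi - lo) // 2 else ids[i])
        return cleaned

    ids = [17, 17, 17, 17, 723, 17, 17, 17, None,
           17, 18, 18, 18, 18, 18, 992, 18, 18]
    print(clean(ids))   # 723 -> 17, 992 -> 18, the None gap -> 17

Here the window parameter is the "how many entries before and after" threshold from the question.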
What's the best way to write a frequently changing dataset to a file?
i.e., a 12 MB dataset with 4 KB segments that change every 2 seconds. Rewriting the entire 12 MB seems like a waste.
Is there any way to do this using C/C++?
Yes, you can write at a particular offset in a file. In C it is the fseek function; if you look for something similar in C++ you will probably find it (std::fstream and seekp, for instance).
See http://www.cplusplus.com/reference/clibrary/cstdio/fseek/ for an example.
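In outline (a sketch in C; the function name, segment layout, and error handling are illustrative):

    #include <stdio.h>

    /* Overwrite a single 4 KB segment in place instead of
       rewriting the whole 12 MB file. */
    int write_segment(const char *path, long index,
                      const void *data, size_t segment_size)
    {
        FILE *f = fopen(path, "rb+");   /* update mode: no truncation */
        if (!f)
            return -1;
        if (fseek(f, index * (long)segment_size, SEEK_SET) != 0) {
            fclose(f);
            return -1;
        }
        size_t written = fwrite(data, 1, segment_size, f);
        fclose(f);                      /* flushes the buffered write */
        return written == segment_size ? 0 : -1;
    }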
I'm trying to generate a chart via the following URL:
http://chart.apis.google.com/chart?chxl=0:|Mon|Tue|Wed|Thu|Fri|Sat|Sun&chxt=x&chbh=a,6,10&chs=320x225&cht=bvg&chco=A2C180&chds=0,95&chd=t:0,0,300,500,0,0,0&chtt=Test
The values are 0 for everything except Wed and Thu, which are 300 and 500 respectively. However, the bars for Wed and Thu come out identical in length, even though Wed represents 300 and Thu represents 500.
I've checked the URL format many times and can't find any problem with it. Am I doing something wrong, or is this a bug in Google Charts?
I figured it out.
The chds argument must contain the highest value of the dataset; all values are scaled against that range, and anything above the maximum is cropped to it. I had copied and pasted my URL and still had 95 as the chds maximum, which is why 300 and 500 both showed as identical full-height bars; changing it fixed the problem.
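With the data above, that means chds=0,500:

    http://chart.apis.google.com/chart?chxl=0:|Mon|Tue|Wed|Thu|Fri|Sat|Sun&chxt=x&chbh=a,6,10&chs=320x225&cht=bvg&chco=A2C180&chds=0,500&chd=t:0,0,300,500,0,0,0&chtt=Test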
It might be a scale problem; can you print values on the vertical axis, every 10 for example?