How to visualize more than one kernel per layer in histograms using TensorBoard

I am currently using TensorFlow 2.0 with a simple CNN. I am initializing the first layer with some handcrafted filters that I would like to visualize during the learning process.
In the histogram section of TensorBoard I only see the first kernel of the layer, but I would like to see all of them. Is there an easy way to do this?
Thanks in advance

I solved it by creating a small function that runs in the DisplayCallback at the end of each epoch. It is not the cleanest solution, and it would be nice if someone could improve it :)
class DisplayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        variable_names = [v.name for v in model.trainable_variables]
        layer_name = variable_names[0].split('/')[0]
        # Kernel tensor of the first layer, shape (height, width, in_channels, n_filters).
        kernels = model.layers[0].get_weights()[0]
        with file_writer_cm.as_default():
            # Write one histogram per filter so every kernel shows up in TensorBoard.
            for i in range(kernels.shape[3]):
                tf.summary.histogram(layer_name + "/kernel_" + str(i), kernels[:, :, :, i], step=epoch)
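
For completeness, a minimal sketch of how this callback might be wired up; the log directory and the training data names are assumptions, and model and file_writer_cm must exist at module scope, as in the snippet above:

import tensorflow as tf

# Hypothetical log directory; model, x_train and y_train are assumed
# to be defined by the surrounding script.
file_writer_cm = tf.summary.create_file_writer("logs/kernels")
model.fit(x_train, y_train, epochs=10, callbacks=[DisplayCallback()])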

Related

How to add RMSE to a premade estimator

I have been struggling to efficiently add an RMSE output for my premade Estimator model, like the one you get when using Keras' training function. I was eyeing add_metrics, but I am not even sure whether it can be used with premade Estimators, and if yes, how? That is, how do I need to code the metric_fn?
The way Google shows, calling predict and transforming the result into an np.array, takes ages for me.
I am happy to receive any idea on how to make this work.
Thanks in advance!
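
No answer is recorded here, but for reference, a hedged sketch of what a metric_fn for tf.estimator.add_metrics might look like; the 'predictions' key is an assumption that holds for the premade regressors:

import tensorflow as tf

def metric_fn(labels, predictions):
    # Premade regressors expose their output under the 'predictions' key (assumption).
    return {
        "rmse": tf.compat.v1.metrics.root_mean_squared_error(
            labels, predictions["predictions"])
    }

# Hypothetical premade estimator; any tf.estimator regressor should work the same way.
feature_columns = [tf.feature_column.numeric_column("x")]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
estimator = tf.estimator.add_metrics(estimator, metric_fn)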

Caffe Batch processing no speedup

I would like to speedup the forward pass of classification of a CNN using caffe.
I have tried batch classification in Caffe using the code provided here:
Modifying the Caffe C++ prediction code for multiple inputs
This solution enables me to pass in a vector of Mat, but it does not speed up anything, even though the input layer is modified.
I am processing pretty small images (3x64x64) on a powerful PC with two GTX 1080s, and there is no issue in terms of memory.
I tried also changing the deploy.prototxt, but I get the same result.
It seems that at one point the forward pass of the CNN becomes sequential.
I have seen someone pointing this out here also:
Batch processing mode in Caffe - no performance gains
Another similar thread, for Python: batch size does not work for caffe with deploy.prototxt
I have seen some things about MemoryDataLayer, but I am not sure this will solve my problem.
So I am kind of lost on what to do exactly... Does anyone have any information on how to speed up classification time?
Thanks for any help!
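
No answer is recorded for this one. For reference, a minimal pycaffe sketch of a batched forward pass, which is what the linked Python thread is about; the file names and the random input are hypothetical:

import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)  # hypothetical files

batch_size = 64
# Reshape the input blob once so a single forward pass covers the whole batch.
net.blobs["data"].reshape(batch_size, 3, 64, 64)
net.blobs["data"].data[...] = np.random.rand(batch_size, 3, 64, 64)  # stand-in images
output = net.forward()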

Using Microsoft Solver Foundation to solve a linear programming task requiring thousands of data points

Using Microsoft Solver Foundation, I am trying to solve a linear program of the form Ax <= b, where A is a matrix containing thousands of data points.
I know that I can new up a Model object and then use the AddConstraint method to add constraints in equation form. However, putting those equations together, where each contains thousands of variables, is just not possible. I looked at the Model class and cannot find a way to just give it the matrix and other info.
How can I do this?
Thanks!
You can make A a parameter and bind data to it. Warning: Microsoft Solver Foundation was discontinued a while ago, so you are advised to consider an alternative modeling system.
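
As one such alternative, here is a minimal sketch that solves a problem of this form with scipy.optimize.linprog; the objective vector and the data are hypothetical stand-ins:

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: minimize c.x subject to A x <= b, x >= 0.
rng = np.random.default_rng(0)
A = rng.random((5000, 100))   # thousands of constraint rows
b = rng.random(5000) * 100.0
c = np.ones(100)              # objective coefficients

result = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(result.status, result.fun)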

Can I receive a boundingPoly for LABEL_DETECTION results?

How can this be accomplished with the Google Vision API?
1. Send the image to the Vision API.
2. Request: 'features': [{'type': 'LABEL_DETECTION', 'maxResults': 10}]
3. Receive the labels; in particular, the one I'm interested in is "clock".
4. Receive the boundingPoly so that I know the exact location of the clock within the image.
5. Having received the boundingPoly, I would want to use it to create a dynamic AR marker to be tracked by the AR library.
Currently it doesn't look like the Google Vision API supports a boundingPoly for LABELS, hence the question whether there is a way to solve this with the Vision API.
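
For concreteness, the request above might look like this with the google-cloud-vision Python client (the image file name is hypothetical); as the answers below confirm, the response carries labels only, with no boundingPoly:

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("clock.jpg", "rb") as f:  # hypothetical image file
    image = vision.Image(content=f.read())

# Equivalent to the 'LABEL_DETECTION' feature request above.
response = client.label_detection(image=image, max_results=10)
for label in response.label_annotations:
    print(label.description, label.score)  # labels only; no boundingPoly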
Currently, Label Detection does not provide this functionality. We are always looking at ways to enhance the API.
After two years, it's the same. I am facing similar challenges and am thinking of opting for other solutions. I think custom solutions like the TensorFlow Object Detection API or Darknet YOLO will do this job very easily.

Min and max in OpenCV

I have a huge data set of points, and I want to find the min and max values in it. Right now I am using a normal for loop for this purpose, and it works, but I want to know whether it is possible to use the OpenCV library, since I am already using it. Please, can anyone help me? Thanks.
There are several options. Using OpenCV for this may give you an easy way to use SSE or other partially parallelized implementations:
http://docs.opencv.org/search.html?q=minMax&check_keywords=yes&area=default
Some of those can use the GPU to help. Of course, the GPU will only be faster if your data was already in the GPU. Pushing data across the bus onto your video card just for this kind of search would be a net loss.
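
As an illustration, a minimal sketch using OpenCV's minMaxLoc from Python; the random array is a stand-in for the real point data:

import numpy as np
import cv2

# minMaxLoc expects a single-channel array, so the points are stored as (N, 1) float32.
points = np.random.rand(1_000_000, 1).astype(np.float32)  # stand-in for the point set
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(points)
print(min_val, max_val)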
Use std::max_element() with a single-channel cv::Mat like this:
// Normalize by the maximum element of a single-channel float Mat.
img = img / *std::max_element(img.begin<float>(), img.end<float>());
No need for OpenCV in this case: it's already in the standard library (std::min_element and std::max_element).