Integrated Gradient on question classification - gradient

I'm trying to implement Integrated Gradients for question classification (https://github.com/ankurtaly/Integrated-Gradients). The problem is that I don't understand how to compute the gradient of the predictions. Does anyone have a hint? Thank you
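A minimal sketch of the gradient step (TensorFlow 2.x; the model, input, and baseline are placeholders you would supply): Integrated Gradients needs the derivative of the predicted class score with respect to the input, evaluated along an interpolation path between a baseline and the input. For a text classifier you would differentiate with respect to the embedded input, since the token-lookup step itself is not differentiable.

import tensorflow as tf

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # x and baseline are flat feature (or flattened embedding) vectors.
    x = tf.convert_to_tensor(x, tf.float32)
    baseline = tf.convert_to_tensor(baseline, tf.float32)
    # Straight-line interpolation path from the baseline to the input.
    alphas = tf.linspace(0.0, 1.0, steps + 1)
    path = baseline + alphas[:, None] * (x - baseline)  # (steps+1, features)
    with tf.GradientTape() as tape:
        tape.watch(path)
        preds = model(path)                  # (steps+1, num_classes)
        scores = preds[:, target_class]      # the prediction being explained
    grads = tape.gradient(scores, path)      # d(score)/d(input), per step
    # Trapezoidal approximation of the path integral of the gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x - baseline) * avg_grads        # attribution per input feature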

Related

Are there any existing models to aid in extracting shot chart data from game video?

I'm currently struggling to extract data from basketball game footage captured from the traditional 'broadcast' camera angle. Since the perspective is warped, it is obviously not straightforward to get coordinates on the court for each shot, but I was wondering whether a relevant model is available for this purpose? If not, does anyone with experience in this sort of analysis have recommendations for a methodology to adopt?
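One standard building block for the perspective problem, sketched below under placeholder values (OpenCV Python; the landmark pixel positions are assumptions you would detect or annotate by hand), is a planar homography: locate a few known court landmarks in the frame, map them to their real court coordinates, and project each detected shot position through the resulting transform.

import cv2
import numpy as np

# Pixel positions of four known court landmarks in the frame (placeholders).
image_pts = np.float32([[420, 380], [1500, 385], [1750, 900], [170, 905]])
# The same landmarks in court coordinates, here metres on an NBA full court
# (28.65 m x 15.24 m).
court_pts = np.float32([[0, 0], [28.65, 0], [28.65, 15.24], [0, 15.24]])

H = cv2.getPerspectiveTransform(image_pts, court_pts)

# Project a detected shot location (pixels) into court coordinates.
shot_px = np.float32([[[960, 600]]])             # placeholder detection
shot_court = cv2.perspectiveTransform(shot_px, H)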

Hand rectangle detection with OpenPose

I'm using OpenPose, and I have no clue how to start this task.
I need to draw a rectangle over people's hands (not the pose of the fingers, just the rectangle) using the skeleton estimation that OpenPose provides. I don't really have experience with this framework, and I'm having a hard time understanding the OpenPose code, so I would appreciate any advice or clues on how to achieve this; with the right direction I could get it done.
Any comment is welcome; thanks in advance for any help.
It is not easy to start at first; you should read more from the OpenPose home page before diving in. After that, you could follow the suggestion below to achieve what you want.
Download the source code from GitHub: OpenPose on github
Implement the function void work(TDatums& tDatums) in the file /include/openpose/pose/wPoseExtractor.hpp
Get the keypoints and the image as in the code below:
// Inside work(): take the first element of the processed datums.
auto& processedData = (*tDatums)[0];
// Keypoints for every detected person: (people x body parts x {x, y, score}).
op::Array<float> keypoints = processedData.poseKeypoints;
// The rendered output frame as an OpenCV image.
cv::Mat processedImage = processedData.cvOutputData;
With the keypoints and the image in hand, you can compute and draw the hand rectangles yourself.
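As a rough illustration (Python; the 0.33 and 1.5 constants and the BODY_25 arm indices are assumptions to tune), one common heuristic extrapolates a hand box past the wrist along the forearm, similar in spirit to how OpenPose seeds its own hand detector:

import numpy as np

# BODY_25 indices: right elbow = 3, right wrist = 4 (left arm: 6 and 7).
def hand_rectangle(keypoints, person=0, elbow=3, wrist=4):
    # keypoints has shape (people, 25, 3) with (x, y, confidence) per part.
    if keypoints[person, elbow, 2] == 0 or keypoints[person, wrist, 2] == 0:
        return None                          # arm not detected
    e = keypoints[person, elbow, :2]
    w = keypoints[person, wrist, :2]
    center = w + 0.33 * (w - e)              # hand lies a bit beyond the wrist
    side = 1.5 * np.linalg.norm(w - e)       # box size scales with the forearm
    x, y = (center - side / 2).astype(int)
    return x, y, int(side), int(side)

# Drawing (hypothetical variable names): rect = hand_rectangle(poseKeypoints)
# if rect: x, y, s, _ = rect; cv2.rectangle(image, (x, y), (x + s, y + s), (0, 255, 0), 2)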

Feature tracking using OpenCV (C++)

I am trying to do feature tracking for a white bead (circular in shape) under the microscope. I tried using a threshold, getting contours, and then a Kalman filter, but the problem is that there is no major color difference between the bead and the noisy background. Also, the light from the microscope is not uniform, so thresholding is not working well. I read about adaptive thresholding, but I don't understand it very well. Any help or tips to track it would be highly appreciated.
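To illustrate the adaptive thresholding mentioned (OpenCV Python; the file name, block size, and constant are placeholders to tune): each pixel is compared against a threshold computed from its own neighborhood, which compensates for the non-uniform microscope illumination.

import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
blurred = cv2.GaussianBlur(frame, (5, 5), 0)           # suppress sensor noise

binary = cv2.adaptiveThreshold(
    blurred, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # threshold = weighted local mean - C
    cv2.THRESH_BINARY,
    31,                               # neighborhood size; odd, roughly bead-sized
    5)                                # subtracted constant; tune per video

# The white bead should survive as a blob; track its contour per frame.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)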

Remove noise from the computed optical flow

I compute the optical flow on grayscale videos which contain true-white and noisy-black patches besides the useful information. I want to remove those patches because the corresponding optical flow is meaningless.
Those patches are on the edges of the image and their sizes vary from one video to another. My goal is to extract a bounding box describing the useful information in my video, using the optical flow.
How can I compute this bounding box? Or at least, how can I remove the computed optical flow in those regions?
Edit: I saw your answers. I'll try that next weekend, then come back to discuss. Thank you!
Removing noise from optical flow can be a complicated task. A simple, naive approach is to threshold the optical flow vectors by magnitude.
But if you only need bounding boxes, why not use a simple background/moving-object segmentation such as MOG or GMG? OpenCV has nice implementations of both; they work well and are quite fast. See this tutorial.
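A short sketch of that segmentation idea using OpenCV's MOG2 implementation (Python; the video path and the area threshold are placeholders):

import cv2

cap = cv2.VideoCapture("video.avi")                        # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # 255 where motion is detected
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:          # ignore tiny specks; tune this
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()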
It's a little tough to understand what the problem is. If, as you say, the noise consists of true-white and noisy-black patches in a grayscale image, then I suggest you look at eroding and dilating. More information can be found here: Eroding and Dilating
Should this not be what you are asking, post some sample images with the patches and comment, so that I can get a clearer idea of the problem. Cheers.
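For reference, a minimal sketch of the eroding/dilating suggestion (OpenCV Python; the kernel size is an assumption to tune). Opening removes small white specks; closing fills small black holes:

import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder input
kernel = np.ones((5, 5), np.uint8)

opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)      # erode, then dilate
cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # dilate, then erode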
If I understand correctly, you are getting noisy optical flow in patches which are grey/white or basically uniform. A simple approach would be to divide the image into small patches and compute the entropy over each patch. Patches with very low entropy can then be discarded by choosing an appropriate threshold, because they carry little information.
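A small sketch of that patch-entropy idea (NumPy; the patch size, histogram bins, and threshold are assumptions to tune):

import numpy as np

def patch_entropy(patch, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))            # 0 for perfectly uniform patches

def useful_mask(gray, patch=16, threshold=1.0):
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if patch_entropy(gray[y:y+patch, x:x+patch]) > threshold:
                mask[y:y+patch, x:x+patch] = True
    return mask   # zero out or ignore the flow outside this mask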

Stitching aerial images to create a map

I am working on a project to stitch together around 400 high-resolution aerial images (around 36000x2600) to create a map. I am currently using OpenCV, and so far I have obtained the match points between the images. Now I am at a loss as to how to get the matrix transformation between the images so I can begin the stitching process. I have absolutely no background in working with images or graphics, so this is a first for me. Can I get some advice on how to approach this?
The images I received also came with a data sheet showing the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to compute the matrix transformation that I need.
Thanks
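As an illustration of the transformation step being asked about (OpenCV Python; the point arrays below are dummy values standing in for real match points), the usual tool is cv2.findHomography with RANSAC:

import cv2
import numpy as np

# Matched point coordinates from the matching step, shape (N, 2), N >= 4
# (dummy values here; replace with the real matches).
pts_a = np.float32([[10, 10], [300, 15], [290, 200], [20, 210], [150, 40], [160, 180]])
pts_b = np.float32([[5, 40], [295, 30], [300, 230], [15, 235], [148, 70], [162, 210]])

# RANSAC rejects mismatched pairs while estimating the 3x3 matrix H that
# maps image A's plane onto image B's.
H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)

# Warp image A into image B's coordinate frame to begin compositing:
# warped = cv2.warpPerspective(img_a, H, (canvas_w, canvas_h))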
Do you want to understand the math behind the process, or just have a superficial idea of what's going on and simply use it?
The usual term for "image stitching" is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
Best regards,
zhengtonic
In the recent OpenCV 2.3 release they implemented a whole image stitching pipeline. Maybe it is worth looking at.
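For reference, a minimal sketch of that pipeline through the modern Python API (the file names are placeholders, and the SCANS mode is an assumption that often suits flat, map-like scenes):

import cv2

# Load the frames to be stitched (placeholder file names).
images = [cv2.imread(name) for name in ["img1.jpg", "img2.jpg", "img3.jpg"]]

# The Stitcher handles matching, transform estimation, warping, and blending.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("map.jpg", pano)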