I have data consisting of events in time. At the moment they are represented by black markers at y = 1 on a scatter plot whose x-axis is time.
Unfortunately there is an uncertainty of about 2 hours in x, so I want to turn my black markers into something more like grey sausage shapes: bars in which the event has an equal probability of occurring at each point in time within that 2-hour window, shaded lightly enough (not solid black) that, when all the events are plotted, I can see the density building up at different points in time.
Any idea how I can blur my precise markers over time into these partially-transparent, uniform-probability sausages?
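Not part of the original question, but here is a minimal sketch of the accumulation idea in plain C++ (the plotting toolkit isn't named above, so this only builds the density curve that the overlapping grey sausages would visually produce; centring a 2-hour window on each event is an assumption about how the uncertainty should be read):

```cpp
#include <algorithm>
#include <vector>

// Spread each event uniformly over a `window`-hour interval centred on its
// nominal time and accumulate the overlaps on a regular time grid.
// Hypothetical helper; bin count, range and window width are all tunable.
std::vector<double> uniformKernelDensity(const std::vector<double>& events,
                                         double tMin, double tMax,
                                         int bins, double window = 2.0)
{
    std::vector<double> density(bins, 0.0);
    const double dt = (tMax - tMin) / bins;

    for (double t : events)
    {
        int lo = std::max(0,        static_cast<int>((t - window / 2 - tMin) / dt));
        int hi = std::min(bins - 1, static_cast<int>((t + window / 2 - tMin) / dt));
        if (hi < lo) continue;                     // event falls outside the grid
        for (int b = lo; b <= hi; ++b)
            density[b] += 1.0 / (hi - lo + 1);     // each event carries total weight 1
    }
    return density;                                // plot as shaded bars or a curve
}
```

Overlapping windows simply add, which is exactly the "density building up" effect; plotting each event as one semi-transparent bar over the same window gives the same picture directly.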
Thanks very much and let me know if this is not clearly stated as a question.
I have written code to calculate the spectrogram of sine and cosine signals: I applied a Hann window, computed the FFT, and calculated the log magnitude of the frequency coefficients.
I tested that it is all working by writing a simple function in OpenGL to plot a magnitude-frequency spectrum, and I got the following results:
As you can see, there are two bars, which indicate the sine and cosine waves.
I have all the information I need to plot a spectrogram (frequencies, magnitudes, times).
Now my question is: how can I draw it? My first thought was to draw dots, using the time array to place them along the X axis, the frequency array to place them along the Y axis, and the magnitude to set the color of each dot.
Maybe that's an inefficient idea, since I've read that drawing individual dots is really slow in OpenGL, but I don't know what a better approach would be; I couldn't find any simple examples of an OpenGL spectrogram online.
@HolyBlackCat's comment is the answer:
"Make an array of colors, fill it once, then give it to GL as a texture"
First of all, sorry for my bad English.
I have an object like the one in the following picture, and it always spins around a horizontal axis. Can anybody recommend how I can take a photo that captures the full label of the tube while it is spinning? I can capture an image from my camera via OpenCV C++, but while the tube is spinning I can't take a good photo (my image is blurry, not clear).
The tube is facing the camera directly, and its rotation speed is about 500 RPM.
Hope to get your help soon,
Thank you very much!
This is my object:
Some sample images:
Here is my image when I use the camera of an iPhone 5 with flash:
Motion blur
This can be improved by lowering the exposure time, but you need to increase the lighting to compensate. Most modern compact cameras cannot set the exposure time directly (so the companies can sell their expensive professional cameras instead), even though it would be just a few lines of GUI code; however, if you increase the lighting, the automatic exposure should shorten on its own.
In industry this problem is solved by special TDI cameras like
HAMAMATSU TDI Line Scan Cameras
TDI stands for time delay integration, which means the camera's CCD pixels pass their charge to the next pixel in sync with the motion. The result is as if you moved the camera synchronously with the object's surface. The blur is still present, but much, much smaller (only a fraction of the real exposure time).
In computer vision and DIP you can de-blur the image with a deconvolution process if you know the motion properties (which you do). It is the inversion of the blur filter, using the FFT and an optimization process to find the inverse filter.
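As an illustration of that deconvolution idea (not the answerer's own code), here is a rough Wiener-deconvolution sketch with OpenCV C++. It assumes the blur near the image centre can be approximated by a straight horizontal streak; the PSF length in pixels and the SNR value are assumptions you would estimate from the RPM, exposure time and label size, then tune:

```cpp
#include <opencv2/opencv.hpp>

// Horizontal line PSF of the given length, normalized to unit sum.
static cv::Mat motionPsf(cv::Size size, int len)
{
    cv::Mat psf = cv::Mat::zeros(size, CV_32F);
    cv::line(psf, cv::Point(size.width / 2 - len / 2, size.height / 2),
                  cv::Point(size.width / 2 + len / 2, size.height / 2),
             cv::Scalar::all(1), 1);
    return psf / cv::sum(psf)[0];
}

// Swap quadrants so the PSF centre moves to the origin (assumes even sizes).
static void fftShift(cv::Mat& m)
{
    int cx = m.cols / 2, cy = m.rows / 2;
    cv::Mat q0(m, cv::Rect(0, 0, cx, cy)),  q1(m, cv::Rect(cx, 0, cx, cy));
    cv::Mat q2(m, cv::Rect(0, cy, cx, cy)), q3(m, cv::Rect(cx, cy, cx, cy));
    cv::Mat tmp;
    q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3);
    q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2);
}

// Wiener deconvolution: OUT = IMG * conj(H) / (|H|^2 + 1/snr).
// `gray` must be a single-channel image.
static cv::Mat wienerDeblur(const cv::Mat& gray, int psfLen, float snr)
{
    cv::Mat img;
    gray.convertTo(img, CV_32F);

    cv::Mat psf = motionPsf(img.size(), psfLen);
    fftShift(psf);

    cv::Mat IMG, H;
    cv::dft(img, IMG, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf, H,  cv::DFT_COMPLEX_OUTPUT);

    cv::Mat planes[2];
    cv::split(H, planes);
    cv::Mat denom = planes[0].mul(planes[0]) + planes[1].mul(planes[1]);
    denom += 1.0f / snr;
    planes[0] /= denom;
    planes[1] /= denom;
    cv::Mat Hw;
    cv::merge(planes, 2, Hw);

    cv::Mat OUT, out;
    cv::mulSpectrums(IMG, Hw, OUT, 0, true);   // true = use conj(Hw)
    cv::idft(OUT, out, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
    cv::normalize(out, out, 0, 255, cv::NORM_MINMAX);
    out.convertTo(out, CV_8U);
    return out;
}
```

The straight-line PSF is only an approximation of the rotational blur, so this works best on a narrow strip around the image centre.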
Out of focus blur
This is due to the fact that your surface is curved and the camera chip is not, so the outer pixels are at a different distance from the chip than the center pixels. Without special optics you can handle this with line cameras. Of course, I do not expect you have one, so you can use your ordinary camera for this too.
Just mount your camera so that one of its axes (for example the x axis) is parallel to your object's rotation axis (surface). Then sample many images with a constant time step and use only the center line/slice of each image (the height of the line/slice depends on your exposure time and the object's speed; the slices should overlap a bit). Then combine these lines/slices from all the sampled images to form the focused image.
[Edit1] home made TDI setup
So mount the camera so that its view axis is perpendicular to the surface.
Take burst shots or video with a constant frame rate.
The shorter the exposure time (the higher the frame rate), the more focused the whole image will be (less optical blur) and the bigger the usable area dy left by motion blur; and the higher the rotation RPM, the smaller dy will be. So find the best option for your camera, RPM and lighting conditions (adding strong light usually helps, provided the tube has no reflective surfaces).
For correct output you need to balance the parameters so that:
the exposure time is as short as possible;
the focused areas overlap between shots (if not, you can sample over more rotations, similar to old FDD sector reading...).
extract the focused part of the shots
You need just the focused middle part of each shot, so empirically take a few shots with your setup and choose the dy size; then use that as a constant later. Extract the middle part (slice) from each shot; in my example image it is the red area.
combine the slices
Just copy the slices together (or average the overlapping parts). They should overlap a bit so you do not have holes in the final image. As you can see, my final example image uses smaller slices than the ones acquired, to make this more obvious.
Your camera image can be off by a few pixels due to vibrations, so if that is a problem in the final image you can use SIFT/SURF + RANSAC auto-stitching for higher-precision output.
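Here is a rough sketch of the slice-extraction and stitching steps with OpenCV C++. The camera index, the slice height dy and the frame count are placeholders to tune for your own RPM and exposure; overlap averaging and the SIFT/SURF alignment mentioned above are left out:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                    // your camera
    if (!cap.isOpened()) return 1;

    const int sliceHeight = 8;                  // "dy" in pixels -- choose empirically
    const int frameCount  = 200;                // enough frames to cover one rotation

    std::vector<cv::Mat> slices;
    for (int i = 0; i < frameCount; ++i)
    {
        cv::Mat frame;
        if (!cap.read(frame)) break;

        // Keep only the focused band of rows around the image centre.
        int y0 = frame.rows / 2 - sliceHeight / 2;
        slices.push_back(frame(cv::Rect(0, y0, frame.cols, sliceHeight)).clone());
    }
    if (slices.empty()) return 1;

    cv::Mat unwrapped;
    cv::vconcat(slices, unwrapped);             // stack the bands: the "unrolled" label
    cv::imwrite("label_unwrapped.png", unwrapped);
    return 0;
}
```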
Can anyone help?
I work on detecting the contours of red blood cells in phase-contrast images. After transforming from a Cartesian coordinate system into a radial (polar) coordinate system, I try to find the exact contour position with a Gaussian fit.
This results in 360 × (number of frames) Gaussian fits, which comes to roughly 20 minutes of calculation time per image stack... way too long.
Is there a method to speed up the process, or to outsource the problem to a C/C++ function?
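Not from the original post, but one common way to avoid an iterative fit for every one of the 360 profiles is to estimate the Gaussian centre (the contour position) and width from intensity-weighted moments in a single pass; whether that is accurate enough for your data is something to verify against the full fit. A minimal C++ sketch, with all names hypothetical:

```cpp
#include <cmath>
#include <vector>

struct GaussEstimate { double mean; double sigma; };

// r[i]       : radial position of sample i along one direction
// profile[i] : background-subtracted, non-negative intensity at r[i]
// Returns moment-based estimates of the Gaussian centre and width.
GaussEstimate estimateGaussian(const std::vector<double>& r,
                               const std::vector<double>& profile)
{
    double w = 0.0, wr = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i)
    {
        w  += profile[i];
        wr += profile[i] * r[i];
    }
    const double mean = wr / w;                 // intensity-weighted centroid

    double wrr = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i)
        wrr += profile[i] * (r[i] - mean) * (r[i] - mean);

    return { mean, std::sqrt(wrr / w) };        // sigma from the second moment
}
```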
Many thanks in advance.
I'm currently working on a simple raytracer, and so far I've successfully implemented several features, such as antialiasing, depth of field and soft shadows with area lights.
An image representing my work could be this one:
(there's no AA here)
The next step was adding realism to the render with a global illumination algorithm, so I decided to go the photon-mapping way, which seemed the easiest.
To do so I read some papers I found on the web, such as this one: http://graphics.stanford.edu/courses/cs348b-01/course8.pdf
which is very well written.
Now my program can shoot photons into the scene and store them after the first bounce (diffuse or specular), then scale the power of every single photon to LIGHT_POWER / PHOTON_AMOUNT.
A direct visualization of this is shown in these images, where I shot 1000k and 50k photons, each allowed to bounce 6 times, for a total of 5000k and 250k photons in the global map:
The effect looked right to me, so I moved on to the next part: using the photons within a certain radius around the intersection point of the raytraced rays to calculate the indirect illumination.
In my raytracer I do as follows:
for each pixel I send a ray through it to intersect the scene and calculate the direct illumination (dot(N, L) * primitive.color * primitive.diffuseFactor * light.power) and the specular term;
Here is the tricky part: I look for the nearest photons that lie within a fixed-radius disc around the intersection point and sum the light contributed by each one in this way:
for each photon within the radius:
    calculate its light contribution the same way as for direct lighting
    (dot(-photonDir, N) * primitive.color * photonColor)
    and add it to a running sum.
When every relevant photon has been processed and has added its contribution to the final color, I divide the sum by the area of the disc that defines the search region.
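For reference, the gathering step described above written out as a sketch; Vec3, Hit and Photon here are stand-ins for whatever types the raytracer already has, not a fixed API:

```cpp
#include <vector>

struct Vec3
{
    float x = 0, y = 0, z = 0;
    Vec3() = default;
    Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) {}
    Vec3 operator-() const { return { -x, -y, -z }; }
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(const Vec3& o) const { return { x * o.x, y * o.y, z * o.z }; } // per channel
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    Vec3 operator/(float s) const { return { x / s, y / s, z / s }; }
    Vec3& operator+=(const Vec3& o) { x += o.x; y += o.y; z += o.z; return *this; }
};
inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Photon { Vec3 position, direction, power; };   // direction = incoming direction
struct Hit    { Vec3 point, normal, diffuseColor; };

// The photons inside the fixed-radius disc are assumed to be found already
// (e.g. by a kd-tree query); this just sums their contributions and divides
// by the disc area, exactly as described above.
Vec3 estimateIndirect(const Hit& hit, const std::vector<Photon>& nearby, float radius)
{
    Vec3 sum;
    for (const Photon& p : nearby)
    {
        float cosTheta = dot(-p.direction, hit.normal);
        if (cosTheta <= 0.0f) continue;                 // photon arriving from behind
        sum += hit.diffuseColor * p.power * cosTheta;   // same form as the direct term
    }
    const float area = 3.14159265f * radius * radius;
    return sum / area;
}
```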
The problem is that doing so I don't get the desired result; in particular, the ceiling is very dark compared to images I found on the web (I don't understand how the ceiling can be as bright as the floor when the latter has an additional contribution from the direct lighting, or how it can be white when the photons on it are only red or green).
An image representing the problem is the following:
This has been rendered using 150k photons, with 4 bounces each, and the direct illumination has been divided by PI.
Also, if you know how I can remove those ugly artifacts from the corners, please tell me.
First, thanks a lot for all your help.
Second, I'm here to announce that after some trouble, and after a period in which I didn't touch the code, I finally got it working.
I still haven't understood what I was doing wrong: maybe the algorithm for picking a random direction within a hemisphere, maybe the photon-gathering pass...
The point is that after restructuring the code a bit (and implementing a final-gathering step and a 2.2 gamma correction), I was able to render the following with 200k photons, 10 diffuse bounces, 20 samples for direct lighting and 100 final-gathering samples (taken in random, cosine-weighted directions).
I'm very happy with this, since it looks almost the same as a reproduction of the scene in Cinema 4D path-traced with V-Ray.
I still don't clearly see the point of storing each photon's incident direction, ha, but it works, so it's OK.
Thanks again.
I want to obtain all the pixels in an image whose values are closest to certain reference pixels. For example, I have an image showing an ocean view (deep blue), a clear sky (light blue), a beach, and houses. I want to find all the pixels that are closest to deep blue in order to classify them as water. My problem is that the sky also gets classified as water. Someone suggested using the k-nearest-neighbor algorithm, but the few examples online use the old C-style API. Can anyone provide an example of k-NN using the OpenCV C++ API?
"Classify it as water" and "obtain all the pixels in an image with pixel values closest to certain pixels in an image" are not the same task. Color properties is not enough for classification you described. You will always have a number of same colored points on water and sky. So you have to use more detailed analysis. For instance if you know your object is self-connected you can use something like water-shred to fill this region and ignore distant and not connected regions in sky of the same color as water (suppose you will successfully detect by edge-detector horizon-line which split water and sky).
Also you can use more information about object you want to select like structure: calculate its entropy etc. Then you can use also K-nearest neighbor algorithm in multi-dimensional space where 1st 3 dimensions is color, 4th - entropy etc. But you can also simply check every image pixel if it is in epsilon-neighborhood of selected pixels area (I mean color-entropy 4D-space, 3 dimension from color + 1 dimension from entropy) using simple Euclidean metric -- it is pretty fast and could be accelerated by GPU .
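Since the question explicitly asks for one, here is a rough OpenCV C++ sketch of the k-NN idea using color features only (cv::ml, OpenCV 3+). The file names and the training BGR values are made up; in practice the training samples would come from a few hand-labelled pixels per class, and an extra entropy feature could be appended to each row in exactly the same way:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("beach.jpg");                // hypothetical input image
    if (img.empty()) return 1;

    // Hand-labelled training pixels: a few BGR samples per class.
    // Class 0 = water, class 1 = everything else (values here are invented).
    cv::Mat trainSamples = (cv::Mat_<float>(4, 3) <<
        120,  60,  10,     // deep blue (water)
        130,  70,  20,     // deep blue (water)
        230, 180, 140,     // light blue (sky)
        180, 200, 210);    // sand / houses
    cv::Mat trainLabels = (cv::Mat_<float>(4, 1) << 0, 0, 1, 1);

    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->setDefaultK(3);
    knn->train(trainSamples, cv::ml::ROW_SAMPLE, trainLabels);

    // One row of features (B, G, R) per image pixel.
    cv::Mat samples;
    img.reshape(1, static_cast<int>(img.total())).convertTo(samples, CV_32F);

    cv::Mat results;
    knn->findNearest(samples, 3, results);                // predicted class per pixel

    // Back to image shape: 255 where the pixel was classified as water.
    cv::Mat mask = (results.reshape(1, img.rows) == 0);
    cv::imwrite("water_mask.png", mask);
    return 0;
}
```

With only a handful of training rows this degenerates toward nearest-color matching, so in practice you would label many more pixels per class (and, as suggested above, add non-color features such as local entropy) to keep the sky from matching the water.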