XTK lesson 13 (AMI example) - Removing the boundary lines in slices obtained from 2D volume rendering

I've implemented XTK lesson 13 from the AMI examples (https://fnndsc.github.io/ami/#xtk_lesson13) by passing in a set of DICOM images.
The slices obtained show colored dotted lines that I assume are a boundary of some sort. I need to remove them, but I'm not sure what exactly they are. Can someone tell me how they are created or what they represent?
Edit:
I'm not at liberty to post a screenshot from my implementation, but here is a screenshot from the example URL. Sorry the image isn't very clear; the example loads very small images. The slice is the image on the bottom.
[Screenshot from the example: yellow dashed lines on the slice]
When you move the slice around in the actual example, the yellow dashed lines appear on the image. That's what I need to get rid of. I've also been referring to them as dotted lines; I suppose they are actually dashed lines. Sorry about that.

You should set stackHelper.slice.canvasWidth and stackHelper.slice.canvasHeight to 0 for the dotted lines to disappear.
Those dotted lines are meant to indicate that part of the slice is partially off-canvas.

Related

Finding lines from text image OpenCV C++

I have images that are noised with random lines, like the following one:
I want to apply some preprocessing to them in order to find the lines (the lines that distort the writing).
I have seen some approaches, but they are in Python, not C++:
Remove noisy lines from an image
Remove non straight lines from text image
I tried in C++, but these are my result images:
The result I want (done with Photoshop):
How can I find the lines in these images in C++ with OpenCV? Thanks.
I am not sure about this. Like @Chistoph Racwit said, you might need to use some sort of OCR.
But just to try it out, I think you can apply a horizontal filter that highlights any horizontal line in the image. It might not give the best-looking result, but with some clean-up you could end up with the locations of the lines in the image.
You can then use that image to detect the lines' locations and draw them on the original image in red.
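To illustrate, here is a minimal sketch of that horizontal-filter idea using OpenCV's Python bindings (the question asks for C++, but the equivalent calls exist there as cv::threshold, cv::getStructuringElement, and cv::morphologyEx). The file names and the 40-pixel kernel width are assumptions, not values from the question:

```python
import cv2

# Placeholder input file; any scan with dark text and long dark lines works.
img = cv2.imread("noisy_text.png", cv2.IMREAD_GRAYSCALE)

# Binarize with Otsu so ink becomes white on a black background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# A wide, flat structuring element keeps only long horizontal runs;
# letter strokes are too short to survive the opening.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Paint the detected line pixels red on top of the original image.
result = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
result[lines > 0] = (0, 0, 255)
cv2.imwrite("lines_highlighted.png", result)
```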

Inkscape: enlarge figure without creating distortions

[Figure: (a) what I have, (b) what I get, (c) what I want]
I have a simple vector graphic in Inkscape, which consists of a rectangle, filled points, and stars. Since the axis ranges are not very nice for a publication (the height is approximately 3 times the width of the picture), I want to rescale the figure. However, I do not have the raw data, so I cannot simply plot it again. How can I rescale my graphic (see figure (a)) so that the x-range is wider (see figure (c)) without getting distortions (see figure (b))? In the end I want to create a PDF file out of it.
Any ideas on that?
Thanks for your help.
You can try to do it in 2 steps, using the Object -> Transform tool (Shift-Ctrl-M).
First, select everything, and with the transform tool select the Scale tab, and scale horizontally by, say, 300%. All figures will be distorted.
Now, unselect the rectangle, and scale horizontally again by 33.3%, but first click on Apply to each object separately. This will undo the distortion (but not the translation) of each object.
Note that 300% followed by 33.3% should leave the individual objects with the same size.
Documentation here.

How to use OpenCV to process the example picture

I need some suggestion in processing the following image.
Basically, I want to remove the background, which is a dark red table, and keep the filter (a white background with a gray spot in the middle).
This picture was taken by a camera. I think in a real experiment the background may not be a single color, but the type of filter will always be the same.
Edit:
The desired object could be anywhere in the picture.
The background could be anything, not just dark red.
But the object is always white.
I am using OpenCV to process this image; any suggestion about which OpenCV methods should be used for this example would be helpful.
Thanks

Taking parts of one image to create another image

I'm working with images from which I would like to take parts out and make one new image. I can make use of ImageMagick or OpenCV. Here is a sample image:
From this image I would like to take out the title, the two annotated texts (one in a circle, one in a rectangle), and the text at the bottom.
So the final image would have: Image Title, Annotated Text1, Annotated Text, and This is some test. These parts of the image don't have to be in any particular order in the new image.
Questions
What kind of strategy can I use to do this?
Will Hough or Canny help?
I'm thinking that since the parts of the image I want back are all text, maybe Hough lines can detect the straight lines and then I can crop out those parts of the image...
My main goal is to extract text so I can send it to an OCR
I've tried to erode the image and came up with this:
My Strategy
Following is my strategy to keep only the parts of the image with a white background and text. However, I'm not sure if this is doable with OpenCV... (a rough sketch of the cropping steps follows the list)
There will be different ROIs in the image
there will always be a white background at the top of the image; let's call this space the title. So I crop out the rectangular part at the top of the image and save it as a separate image
there will always be a white background at the bottom of the image; let's call this the body. So I crop out the rectangular part at the bottom of the image and save it as a separate image
there will be some text overlaid on the image; let's call this annotated text. It will be in squares or circles. I can use the technique mentioned in this answer to crop out those parts of the image and save them as separate images.
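As a rough sketch of the cropping steps above, assuming OpenCV's Python bindings; the file name and the one-fifth band heights are placeholder guesses, since the real boundaries depend on the layout:

```python
import cv2

img = cv2.imread("sample.png")   # placeholder file name
h = img.shape[0]
band = h // 5                    # assumed height of the title/body bands

title = img[:band]               # white strip at the top of the image
body = img[h - band:]            # white strip at the bottom of the image

cv2.imwrite("title.png", title)
cv2.imwrite("body.png", body)
```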
If you are dealing with only similar-looking fonts, and you are not looking for something super efficient, you can simply perform correlation with each letter of the alphabet (26 upper and 26 lower). Threshold out the peaks and add them together. You can then just define your bounding boxes around the peaks.
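A minimal sketch of this correlation idea, assuming OpenCV's Python bindings, a placeholder input file, and a hypothetical glyphs/ directory holding one template image per letter; the 0.7 threshold and the dilation size are illustrative guesses:

```python
import glob
import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)
heat = np.zeros(img.shape, dtype=np.float32)

# Correlate the page with each letter template and keep only strong peaks.
for path in glob.glob("glyphs/*.png"):
    glyph = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    response = cv2.matchTemplate(img, glyph, cv2.TM_CCOEFF_NORMED)
    response[response < 0.7] = 0              # threshold out weak matches
    rh, rw = response.shape
    heat[:rh, :rw] += response                # add the peak maps together

# Dilate the peak map so neighboring letters merge, then box the blobs.
mask = (heat > 0).astype(np.uint8) * 255
mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]
print(boxes)                                  # (x, y, w, h) per text region
```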

How to detect Text Area from image?

I want to detect the text area of an image as a preprocessing step for the Tesseract OCR engine. The engine works well when the input is text only, but it fails when the input image contains non-text content, so I want to detect only the text content in the image. Any idea of how to do that would be helpful. Thanks.
Take a look at this bounding box technique demonstrated with OpenCV code:
[Images: input, eroded intermediate, result with bounding boxes]
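The OpenCV code behind those screenshots isn't reproduced here, so the following is only a hedged reconstruction of the erode-then-box idea in Python; the file names, kernel shape, and Otsu binarization are my assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Binarize so text is white, then smear the letters together so each
# block of text becomes one connected blob (dilating the inverted image
# plays the role of eroding the original dark-on-light image).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
blobs = cv2.dilate(binary, np.ones((5, 30), np.uint8))

# The bounding rectangle of each blob is a candidate text area.
contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(result, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("result.png", result)
```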
Well, I'm not very experienced in image processing, but I hope I can help you with my theoretical approach.
In most cases, text forms parallel, horizontal rows, where the space between rows contains lots of background pixels. This can be utilized to solve the problem.
So... if you collapse each pixel row of the image into a single value, you get a one-pixel-wide image as output. When the input image contains text, the output will very likely show a periodic pattern, where dark areas are followed by brighter areas repeatedly. These "groups" of darker pixels indicate the position of the text content, while the brighter "groups" indicate the gaps between the individual rows.
You'll probably find that the brighter areas are much smaller than the others. Text is much more regular than most other picture elements, so it should be easy to separate.
You have to implement a procedure to detect these periodic recurrences. Once the script can determine that the input picture has these characteristics, there's a high chance that it contains text. (However, this approach can't distinguish between actual text and simple horizontal stripes...)
For the next step, you must find a way to determine the borders of the paragraphs, using the above-mentioned method. I'm thinking of a pretty simple algorithm, which would divide the input image into smaller, narrow stripes (50-100 px) and check these areas separately. Then it would compare the results to build a map of the possible areas filled with text. This method wouldn't be very accurate, but it probably won't bother the OCR system.
And finally, you need to use the text-map to run the OCR on the desired locations only.
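As a rough sketch of that row-projection idea in Python with OpenCV (the input file name and the page-mean darkness cutoff are illustrative assumptions, not part of the original answer):

```python
import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Collapse each pixel row into a single value: dark rows suggest text.
profile = img.mean(axis=1)
dark_rows = profile < profile.mean()   # rows darker than the page average

# Group consecutive dark rows into bands; each band is a candidate text row.
bands, start = [], None
for y, is_dark in enumerate(dark_rows):
    if is_dark and start is None:
        start = y
    elif not is_dark and start is not None:
        bands.append((start, y))
        start = None
if start is not None:
    bands.append((start, len(dark_rows)))

print(bands)   # (top, bottom) ranges of rows likely to contain text
```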
On the other hand, this method would fail if the input text is rotated more than ~3-5 degrees. There's another drawback: if you have only a few rows, your pattern search will be very unreliable. More rows, more accuracy...
Regards, G.
I am new to stackoverflow.com, but I wrote an answer to a question similar to this one which may be useful to any readers who share this question. Whether or not the question is actually a duplicate, since this one was first, I'll leave up to others; if I should copy and paste that answer here, let me know. I also found this question first on Google rather than the one I answered, so linking it here may benefit more people, especially since it provides different ways of finding text areas. When I looked this question up, the other one did not fit my problem case.
Detect text area in an image using python and opencv
At the current time, the best way to detect text is by using EAST (An Efficient and Accurate Scene Text Detector).
The EAST pipeline is capable of predicting words and lines of text at arbitrary orientations on 720p images, and furthermore, can run at 13 FPS, according to the authors.
EAST quick start tutorial can be found here
EAST paper can be found here
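For completeness, here is a hedged sketch of loading EAST through OpenCV's dnn module, following the standard OpenCV text-detection sample; the frozen_east_text_detection.pb model must be downloaded separately, and the image name is a placeholder. Decoding the geometry map into rotated boxes (plus non-maximum suppression) is covered in the tutorial linked above:

```python
import cv2

net = cv2.dnn.readNet("frozen_east_text_detection.pb")
img = cv2.imread("scene.jpg")

# EAST expects input dimensions that are multiples of 32.
blob = cv2.dnn.blobFromImage(img, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)

# scores holds per-cell text confidences; geometry encodes box offsets/angles.
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])

confidences = scores[0, 0]
print((confidences > 0.5).sum(), "grid cells scored above 0.5 confidence")
```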