I am still a beginner in coding. I am currently working on a program in C/C++ that determines the pixel position of a defined mark (a black circle with white surroundings) in a photo.
I made a mask from the mark and a vector that contains every pixel value of the mask as its elements (using Magick++ I summed the values for Red, Green and Blue). The vector contains approx. 10 000 values since the mask is 100x100 px. I also used threshold functions to simplify the image.
Then I made a grid that does the same for the picture where I want to find the coordinates of the mark. It is basically a loop that goes through the image, and as soon as the program knows the pixel values in a grid position it compares them with the mask. The main idea is to find the lowest difference between the mask and one of the grid positions.
The problem, however, is that evaluating every grid position takes a huge amount of time (the image is 1920x1080 px, so more than 2 million vectors containing 10 000 values each). I decided to step the grid not by every pixel but, for example, by every 10th column and row, and then around the best correlation from that pass I selected an area where I looped over every pixel. But this still takes a lot of time.
I would like to ask if there is some way of improving this method for better (faster) results, or whether the whole idea is not time efficient and I should use a different approach.
Thanks for any advice!
Edit: The program will be used for processing multiple images, and the size will be the same on all of them. This is the picture after thresholding; the mark is the big black dot.
The idea that I find interesting is a pyramidal scheme, or progressive refinement: you find the spot in a lower-resolution image and then search only a small rectangle in the larger image.
If you reduce your image by 2 in each dimension you reduce the number of search positions by a factor of 4 (and, since the template shrinks too, the cost of each comparison by another factor of 4), plus some search effort in the larger image.
This has some problems: I expect the reduction will affect accuracy, so you might miss the spot.
You have to reduce the sample (template) by the same factor, so you create a half-size template in this case. As you keep halving, the template gets blurred into the surrounding objects and at some point it is no longer a valid template; after halving once I guess the dot still has a couple of pixels around it.
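To make the idea concrete, here is a rough sketch of the coarse-to-fine search in Python with OpenCV (the equivalent pyrDown / matchTemplate / minMaxLoc calls also exist in the C++ API); the file names and the 8-pixel search margin are just assumptions, not the asker's actual values:

import cv2

image = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("mark.png", cv2.IMREAD_GRAYSCALE)     # the ~100x100 mask

# Coarse pass: halve both the image and the template, then match everywhere.
small_img = cv2.pyrDown(image)
small_tpl = cv2.pyrDown(templ)
res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_SQDIFF)
_, _, min_loc, _ = cv2.minMaxLoc(res)        # TM_SQDIFF: minimum = best match
cx, cy = min_loc[0] * 2, min_loc[1] * 2      # map back to full resolution

# Fine pass: search only a small window around the coarse hit.
pad = 8
x0, y0 = max(cx - pad, 0), max(cy - pad, 0)
x1 = min(cx + templ.shape[1] + pad, image.shape[1])
y1 = min(cy + templ.shape[0] + pad, image.shape[0])
res = cv2.matchTemplate(image[y0:y1, x0:x1], templ, cv2.TM_SQDIFF)
_, _, min_loc, _ = cv2.minMaxLoc(res)
print("mark found at", (x0 + min_loc[0], y0 + min_loc[1]))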
As you haven't specified a tool or OS, I will choose ImageMagick which is installed on most Linux distros and is available for OSX and Windows. I am just using it at the command-line here but there are C, C++, Python, Perl, PHP, Ruby, Java and .Net bindings available.
I would use a "Connect Components Analysis" or "Blob Analysis" like this:
convert image.png -negate \
-define connected-components:area-threshold=1200 \
-define connected-components:verbose=true \
-connected-components 8 -auto-level result.png
I have inverted your image with -negate because in morphological operations, the foreground is usually white rather than black. I have excluded blobs smaller than 1200 pixels because your circles seem to have a radius of 22 pixels which makes for an area of 1520 pixels (Pi * 22^2).
That gives this output, which means 7 blobs - one per line - with the bounding box and area of each:
Objects (id: bounding-box centroid area mean-color):
0: 1358x1032+0+0 640.8,517.0 1296947 gray(0)
3: 341x350+1017+287 1206.5,468.9 90143 gray(255)
106: 64x424+848+608 892.2,829.3 6854 gray(255)
95: 38x101+44+565 61.5,619.1 2619 gray(255)
49: 17x145+1341+379 1350.3,446.7 2063 gray(0)
64: 43x43+843+443 864.2,464.1 1451 gray(255)
86: 225x11+358+546 484.7,551.9 1379 gray(255)
Note that, as your circle is 42x42 pixels, you will be looking for a blob that is square-ish and close to that size - so I am looking at the second to last line. I can draw that in red on your original image like this:
convert image.png -fill none -stroke red -draw "rectangle 843,443 886,486" result.png
Also, note that as you are looking for a circle, you would expect the area to be pi * r^2 or around 1500 pixels and you can check that in the penultimate column of the output.
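If you want to pick that blob programmatically rather than by eye, here is a rough Python sketch of the selection logic (my own illustration, not part of ImageMagick); it assumes the verbose listing above has been redirected into a file called blobs.txt, and the size/area tolerances are guesses you would tune:

import re

EXPECTED_SIZE = 42     # the blob should be roughly 42x42 and square-ish
EXPECTED_AREA = 1500   # roughly pi * r^2 for a circle of that size

best = None
with open("blobs.txt") as f:
    for line in f:
        # Each data line looks like: "64: 43x43+843+443 864.2,464.1 1451 gray(255)"
        m = re.match(r"\s*\d+: (\d+)x(\d+)\+(\d+)\+(\d+) [\d.,]+ (\d+)", line)
        if not m:
            continue                            # skip the header line
        w, h, x, y, area = map(int, m.groups())
        # Keep only square-ish blobs close to the expected size, then prefer
        # the one whose area is closest to that of the circle.
        if abs(w - h) <= 5 and abs(w - EXPECTED_SIZE) <= 10:
            score = abs(area - EXPECTED_AREA)
            if best is None or score < best[0]:
                best = (score, x, y, w, h)

if best:
    _, x, y, w, h = best
    print(f"mark bounding box: {w}x{h}+{x}+{y}")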
The connected-components command runs in about 0.4 seconds on a reasonably specced iMac. Note that you could divide the image into 4 and run each quarter in parallel to speed things up. So, if you do something like this:
#!/bin/bash
# Split image into 4 (maybe should allow 23 pixels overlap)
convert image.png -crop 1x4# tile-%02d.mpc
# Do Blob Analysis on 4 strips in parallel
for f in tile-*mpc; do
convert $f -negate \
-define connected-components:area-threshold=1200 \
-define connected-components:verbose=true \
-connected-components 8 info: &
done
# Wait for all 4 to finish
wait
That runs in around 0.14 seconds.
I am trying to automate image conversion using ImageMagick CLI. One of the biggest problems with my image set is with tiny artifacts that should be cut out.
My images are generally consistent, with big objects (c.a. 50% of image space) on a white background. Unfortunately, sometimes tiny artifacts may just look bad and make trimming less efficient.
E.g. something like that:
In reality, the big object is not a solid color, it's just a simplified example. It is not necessarily a circle either, it can be a square, rectangle, or something irregular.
I also cannot use any morphology like opening, closing, or erosion. Filters like gaussian or median are also out of the question. I need to keep the big object untouched, since the highest possible quality is required.
An ideal solution would be something similar to Contours known for example from OpenCV, where I could find all the uniform objects and if they don't meet certain rules (e.g. threshold of size greater than 5% of the whole image) - fill them with white color.
Is there any similar mechanism in ImageMagick CLI? I've gone through the docs and haven't found a suitable solution to my problem.
Thanks in advance!
EDIT (ImageMagick version):
Version: ImageMagick 7.1.0-47 Q16-HDRI x86_64 20393 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(5.0)
Delegates (built-in): bzlib fontconfig freetype gslib heic jng jp2 jpeg lcms lqr ltdl lzma openexr png ps raw tiff webp xml zlib
Compiler: gcc (4.2)
EDIT (Real-life example):
As requested, here is a real-life example. A picture of a coin on a white background, but with some artifacts:
noise under the coin (slightly on the left)
dot under the coin (slightly on the right)
gray irregular shape in the top right corner
The objects might not necessarily be circles like coins, but we may assume that there will always be one object with a strong border (no white spaces on the border, like here) and that the rest is noise.
Here is one way to do that in ImageMagick 7. First, threshold the image so the background is white and the object(s) is black; the exact threshold will likely be image dependent. NOTE: JPG is a lousy format for this, since solid colors are not really truly solid due to the compression. If you can save your images in some non-lossy compressed or uncompressed format, that would be better.
Then decide on the largest area you need to remove. Use that with connected-components processing so that you end up with only two regions: one white background and one black object. This will be the mask. If you have several objects, that is fine too, but they need to be black. I show the textual output listing the two regions. The mask is just the object with the noise removed.
Now use the original input, a white image and the mask to composite the first two images, so that where the mask is black the object is used and where the mask is white the white image is used. Note that I create the white image by making a copy (clone) of the input and colorizing it 100% with white. The following is in Unix syntax.
Input:
magick coin.jpg -negate -threshold 2% -negate -type bilevel \
-define connected-components:verbose=true \
-define connected-components:area-threshold=1000 \
-define connected-components:mean-color=true \
-connected-components 4 mask.png
Objects (id: bounding-box centroid area mean-color):
0: 1000x1000+0+0 525.8,555.7 594824 gray(255)
44: 722x720+101+58 460.9,417.0 405176 gray(0)
magick coin.jpg \
\( +clone -fill white -colorize 100 \) \
mask.png \
-compose over -composite \
coin_result.png
Mask
Result:
See https://imagemagick.org/script/connected-components.php
and https://imagemagick.org/Usage/compose/#compose and Composite Operator of Convert (-composite, -geometry) at https://imagemagick.org/Usage/layers/#convert
I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature matching algorithms or is there a simpler method, such as summing the pixel values and determining whether they are within some threshold, or perhaps converting the pixel values to binary strings where the values are either greater than or less than the center pixel?
I'm going to assume you're talking about Stereo Disparity, in which case you will likely want to use a simple Sum of Absolute Differences (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In python you might use a simple function like this per pixel, where x is a pixel that contains RGB.
def rgb_to_bw(x):
    # weighted sum of the R, G and B components of pixel x
    return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer Filter is. The Bayer Filter, which is typically RGGB, also explains the multiplication ratios of the Red, Green, and Blue portions of the pixel.
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly what you want to do. Let's say this window is n x n in size. You would also have some window in your left image WL and some window in your right image WR. The idea is to find the pair that has the smallest SAD.
So, for each left-window pixel pl at some location (x,y) in the window, you take the absolute value of its difference with the right-window pixel pr located at the same (x,y). You also keep a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
from x = 0 to n:
    from y = 0 to n:
        SAD = SAD + absolute_value(pl - pr)
After you calculate the SAD for this pair of windows, WL and WR, you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD - which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL you will "slide" WL and repeat.
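Putting the window loop together, here is a minimal (and deliberately unoptimised) NumPy sketch of the whole block-matching procedure; left and right are assumed to be rectified greyscale images as 2-D arrays, and the window size n and max_disp are made-up defaults you would tune:

import numpy as np

def window_sad(wl, wr):
    # Sum of Absolute Differences between two equal-size windows.
    return np.sum(np.abs(wl.astype(np.int32) - wr.astype(np.int32)))

def disparity_map(left, right, n=9, max_disp=64):
    half = n // 2
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            wl = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_sad = 0, None
            # Slide WR to the left, up to max_disp pixels, keeping the minimum SAD.
            for d in range(0, min(max_disp, x - half) + 1):
                wr = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                s = window_sad(wl, wr)
                if best_sad is None or s < best_sad:
                    best_sad, best_d = s, d
            disp[y, x] = best_d
    # Scale to 0-255 so the map can be written out as a grey image.
    return (disp * 255.0 / max_disp).astype(np.uint8)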
Disparity is calculated by the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0-255 and output that to another image. I posted 3 images below to show you this.
Typical Results:
Left Image:
Right Image:
Calculated Disparity (from the left image):
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/
I am working on a project to losslessly compress a specific style of BMP images that look like this
I have thought about doing pattern recognition to find repetitive blocks of N x N pixels, but I feel like the execution time won't be fast enough.
Any suggestions?
EDIT: I have access to the dataset that created these images too, I just use the image to visualize my data.
Optical illusions make it hard to tell for sure, but are the colors only black/blue/red/green? If so, the most straightforward compression would be to simply make more efficient use of pixels. Pixels use a fixed amount of space regardless of what color they are, so chances are you are using 12x as many pixels as you really need to, since a pixel can represent a lot more colors than just those four.
A simple way to do that would be to label the pixels with the following base 4 numbers:
Black = 0
Red = 1
Green = 2
Blue = 3
Example:
The first four colors of the image seem to be Blue-Green-Blue-Blue. This is equal to 3233 in base 4, which is simply EF in base 16 or 239 in base 10. This is enough to define what the red channel of the new pixel should be. The next 4 would define the green channel and the final 4 the blue channel, thus turning 12 pixels into a single pixel.
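As a rough illustration of that packing (my own sketch; the exact RGB values in the lookup table are assumptions about your palette):

COLOR_TO_CODE = {
    (0, 0, 0): 0,        # black
    (255, 0, 0): 1,      # red
    (0, 255, 0): 2,      # green
    (0, 0, 255): 3,      # blue
}

def pack_pixels(pixels):
    # pixels: a list of twelve (R, G, B) tuples drawn from the four colors.
    codes = [COLOR_TO_CODE[p] for p in pixels]
    channels = []
    for i in range(0, 12, 4):
        value = 0
        for c in codes[i:i + 4]:
            value = value * 4 + c    # interpret four codes as one base-4 number
        channels.append(value)       # one byte per group of four source pixels
    return tuple(channels)           # the (R, G, B) of the packed pixel

# Blue, Green, Blue, Blue -> 3233 in base 4 -> 239 (0xEF), as in the example above
print(pack_pixels([(0, 0, 255), (0, 255, 0), (0, 0, 255), (0, 0, 255)]
                  + [(0, 0, 0)] * 8))    # -> (239, 0, 0)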
Beyond that you'll probably want to look into more conventional compression software.
I want to merge 2 images. How can I remove the area that is the same in both images?
Can you tell me an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width and image 1 is always above image 2.
When two images have the same width and there is no X-offset at the left side this shouldn't be too difficult.
You should create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, you look up the CRC of the first line of the lower image in the first vector (the one for the upper image); its position is the offset in the upper picture. Then you check that all following CRCs from both pictures are identical. If not, you look up the next occurrence of that initial CRC in the upper image and try again.
After checking that the CRCs of both pictures are identical when you apply the offset, you can use the bitblt function of your graphics library to build the composite picture.
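Here is a minimal sketch of that row-CRC approach in Python with Pillow and zlib (my own illustration; the file names are placeholders and both screenshots are assumed to have the same width and pixel mode):

import zlib
from PIL import Image

def row_crcs(img):
    # One CRC-32 per pixel row.
    w, h = img.size
    raw = img.tobytes()
    stride = len(raw) // h
    return [zlib.crc32(raw[y * stride:(y + 1) * stride]) for y in range(h)]

def merge(top_path, bottom_path, out_path):
    top, bottom = Image.open(top_path), Image.open(bottom_path)
    top_crcs, bottom_crcs = row_crcs(top), row_crcs(bottom)
    # Find where the first row of the lower image appears in the upper image,
    # then verify that all the following rows match as well.
    for offset in range(len(top_crcs)):
        if top_crcs[offset] != bottom_crcs[0]:
            continue
        overlap = len(top_crcs) - offset
        if top_crcs[offset:] == bottom_crcs[:overlap]:
            break
    else:
        offset = len(top_crcs)       # no overlap found; just stack the images
    # Paste the non-overlapping part of the upper image, then the lower image.
    out = Image.new(top.mode, (top.width, offset + bottom.height))
    out.paste(top.crop((0, 0, top.width, offset)), (0, 0))
    out.paste(bottom, (0, offset))
    out.save(out_path)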
I haven't come across something similar before but I think the following might work:
Convert both to grey-scale.
Enhance the contrast, the grey box might become white for example and the text would become more black. (This is just to increase the confidence in the next step)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of overlap) with a good degree of confidence. To find the similar parts, you could use the row-matching method from the other answer (which is good, but I don't know how reliable it would be without the said filtering), or you could apply some DSP operation(s) like convolution.
Hope that helps.
If your images are the same width and image 1 is always on top, I don't see how hard it could be.
Just store the bytes of the last line of image 1.
Then, from the first line to the last line of image 2, make this test:
If the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
You have to define a new byte container for your new image:
Just store all the lines of image 1 + all the lines of image 2 that start at (the found line + 1).
What would make you sweat here is finding the libraries to manipulate all these data structures. But after a bit of linking and documentation digging, you should be able to implement that easily.
I have a freshly compiled libjpeg version 9 and tried running jpegtran.exe in command line with the arguments:
.\jpegtran.exe -rotate 180 -outfile test_output1.jpg testimg.jpg
testimg.jpg: test_output1.jpg:
As you can see it does rotate the image but it clips it and it's not put together correctly. The usage.txt file that comes with the package isn't totally up to date because I had to use the -outfile switch instead of what it says:
jpegtran uses a command line syntax similar to cjpeg or djpeg. On
Unix-like systems, you say:
jpegtran [switches] [inputfile] >outputfile
On most non-Unix systems, you say:
jpegtran [switches] inputfile outputfile
where both the input and output files are JPEG
files.
To specify the coded JPEG representation used in the output file,
jpegtran accepts a subset of the switches recognized by cjpeg:
-optimize Perform optimization of entropy encoding parameters.
-progressive Create progressive JPEG file.
-arithmetic Use arithmetic coding.
-restart N Emit a JPEG restart marker every N MCU rows, or every N MCU blocks if "B" is attached to the number.
-scans file Use the scan script given in the specified text file.
See the previous discussion of cjpeg for more details about these
switches. If you specify none of these switches, you get a plain
baseline-JPEG output file. The quality setting and so forth are
determined by the input file.
The image can be losslessly transformed by giving one of these
switches:
-flip horizontal Mirror image horizontally (left-right).
-flip vertical Mirror image vertically (top-bottom).
-rotate 90 Rotate image 90 degrees clockwise.
-rotate 180 Rotate image 180 degrees.
-rotate 270 Rotate image 270 degrees clockwise (or 90 ccw).
-transpose Transpose image (across UL-to-LR axis).
-transverse Transverse transpose (across UR-to-LL axis).
Oddly enough (or maybe not), if I execute .\jpegtran.exe -rotate 180 -outfile test_output2.jpg test_output1.jpg I get the original image back without any clipping issues. It's flipping the clipped parts but just not lining it up right with the rest of the image.
test_output2.jpg:
I get the same result by executing jpegtran.exe -rotate 90 twice.
Also, I tried it on a larger .jpg file which resulted in the same issue but the file size was 18KB smaller for the output. I imagine the issue is related to this.
Edit - I also found this blurb which seems to describe the problem:
jpegtran's default behavior when transforming an odd-size image is
designed to preserve exact reversibility and mathematical consistency
of the transformation set. As stated, transpose is able to flip the
entire image area. Horizontal mirroring leaves any partial iMCU
column at the right edge untouched, but is able to flip all rows of
the image. Similarly, vertical mirroring leaves any partial iMCU row
at the bottom edge untouched, but is able to flip all columns. The
other transforms can be built up as sequences of transpose and flip
operations; for consistency, their actions on edge pixels are defined
to be the same as the end result of the corresponding
transpose-and-flip sequence.
The -trim switch works, if you can call it that, and trims out the disorganized data, but the image is smaller and has lost data.
test_output5.jpg:
Adding the -perfect switch, which supposedly stops the above from happening, just results in the message "transformation is not perfect" and no output image.
So is it not possible to losslessly rotate a .jpg? I could, myself, go into paint and reconstruct the original image by simply moving the edge lines into their correct place. Is there a method to do this within libjpeg?
A lossless rotation works with whole DCT blocks contained within the JPEG file. These blocks are always 8x8 or 16x16 pixels (depending on the compression downsampling settings). The file contains a width and height so the extra pixels can be thrown away when the image is decoded, but there's no way to move the clipping from the right/bottom edge to the left/top edge. The software is doing the best it can with an impossible problem.
As you've discovered, the way around this problem is to make the width and height evenly divisible by 16. You'll find that images from cameras, for example, have this property.