I want to merge 2 images. How can I remove the area the two images share?
Can you tell me an algorithm to solve this problem? Thanks.
Both images are screenshots. They have the same width, and image 1 is always above image 2.
When two images have the same width and there is no X-offset at the left side, this shouldn't be too difficult.
Create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, search the first vector for the CRC of the first line of the lower image: its position is the offset into the upper picture. Then check that all following CRCs from both pictures are identical. If not, look up the next occurrence of the initial CRC in the upper image and repeat.
After verifying that the CRCs of both pictures match at that offset, you can use the blit function of your graphics library to build the composite picture.
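A minimal Python sketch of this CRC approach, assuming each image is given as a list of raw row bytes (`zlib.crc32` stands in for whatever CRC you prefer):

```python
import zlib

def row_crcs(rows):
    """CRC-32 of each pixel row; rows is a list of bytes objects."""
    return [zlib.crc32(r) for r in rows]

def find_overlap(top_rows, bottom_rows):
    """Return the row index in the top image where the bottom image
    starts overlapping, or None if there is no overlap."""
    top = row_crcs(top_rows)
    bottom = row_crcs(bottom_rows)
    first = bottom[0]
    start = 0
    while True:
        try:
            offset = top.index(first, start)
        except ValueError:
            return None
        # verify that all following CRCs match as well
        n = min(len(top) - offset, len(bottom))
        if top[offset:offset + n] == bottom[:n]:
            return offset
        start = offset + 1  # false match: look for the next occurrence

def merge(top_rows, bottom_rows):
    offset = find_overlap(top_rows, bottom_rows)
    if offset is None:
        return top_rows + bottom_rows  # no overlap: just stack them
    return top_rows[:offset] + bottom_rows
```

Note that comparing CRCs can produce a false positive on a collision, which is why the verification loop keeps scanning when the following rows disagree.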
I haven't come across something similar before but I think the following might work:
Convert both to grey-scale.
Enhance the contrast; the grey box might become white, for example, and the text would become darker. (This is just to increase the confidence in the next step.)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of overlap) with a good degree of confidence. To find the similar parts, you could use Harper's method (which is good, but I don't know how reliable it would be without the said filtering), or you could apply some DSP operation(s) like convolution.
Hope that helps.
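As a rough sketch of the grayscale-and-threshold steps above (assuming NumPy arrays; the threshold of 128 is an arbitrary starting point you would tune):

```python
import numpy as np

def to_binary(rgb, threshold=128):
    """Grayscale conversion followed by a hard threshold.
    rgb: H x W x 3 uint8 array; returns an H x W array of 0/1."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # standard luma weights
    return (gray >= threshold).astype(np.uint8)
```

A contrast stretch before thresholding (as suggested above) would make the choice of threshold less sensitive.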
If your images are the same width and image 1 is always on top, I don't see how it could be that hard.
Just store the bytes of the last line of image 1.
Then, from the first line of image 2 to its last, run this test:
if the current line of image 2 is not equal to the last line of image 1, continue;
otherwise, break the loop.
Then define a new byte container for your new image:
store all the lines of image 1, followed by all the lines of image 2 starting at (the found line + 1).
What would make you sweat here is finding the libraries to manipulate all these data structures. But after some linking and documentation digging, you should be able to implement it easily.
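A minimal sketch of the line-matching loop above, assuming each image is already available as a list of raw row bytes:

```python
def merge_rows(img1_rows, img2_rows):
    """img*_rows: list of bytes, one entry per pixel row.
    Scan image 2 for the row equal to the last row of image 1,
    then join the images at that line."""
    last = img1_rows[-1]
    for i, row in enumerate(img2_rows):
        if row == last:
            # all of image 1, then image 2 from (the found line + 1)
            return img1_rows + img2_rows[i + 1:]
    return img1_rows + img2_rows  # no shared line: just stack them
```

Note that matching only the single last line can misfire if the same row of pixels occurs more than once; comparing a few trailing lines makes the match more robust.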
I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature-matching algorithms, or is there a simpler method, such as summing the pixel values and checking whether they fall within some threshold, or converting the pixel values to binary strings based on whether each value is greater or less than the center pixel?
I'm going to assume you're talking about Stereo Disparity, in which case you will likely want to use a simple Sum of Absolute Differences (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In python you might use a simple function like this per pixel, where x is a pixel that contains RGB.
def rgb_to_bw(x):
    return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer Filter is. The Bayer Filter, which is typically RGGB, also explains the multiplication ratios of the Red, Green, and Blue portions of the pixel.
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly what you want to do. Let's say this window is n x n in size. You would also have some window in your left image WL and some window in your right image WR. The idea is to find the pair that has the smallest SAD.
So, for each left window pixel pl at some location (x,y) in the window, you would take the absolute value of its difference with the right window pixel pr also located at (x,y). You would also keep a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
for x = 0 to n:
    for y = 0 to n:
        SAD = SAD + abs(pl(x,y) - pr(x,y))
After you calculate the SAD for this pair of windows, WL and WR you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD - which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL you will "slide" WL and repeat.
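A small NumPy sketch of this window matching, assuming rectified grayscale images (so the search is only along the same row) and an assumed search limit `max_disp`:

```python
import numpy as np

def sad(wl, wr):
    """Sum of absolute differences between two equally sized windows."""
    return np.abs(wl.astype(int) - wr.astype(int)).sum()

def best_match(left, right, row, col, n, max_disp):
    """Slide an n x n window along the same row of the right image and
    return the disparity with the smallest SAD."""
    wl = left[row:row + n, col:col + n]
    best_d, best_sad = 0, None
    for d in range(0, min(max_disp, col) + 1):
        wr = right[row:row + n, col - d:col - d + n]
        s = sad(wl, wr)
        if best_sad is None or s < best_sad:
            best_d, best_sad = d, s
    return best_d
```

Repeating this for every pixel of the left image gives the disparity map; dividing the per-window work into NumPy slices as above keeps the inner loop short.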
Disparity is calculated by the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0-255 and output that to another image. I posted 3 images below to show you this.
Typical Results:
Left Image:
Right Image:
Calculated Disparity (from the left image):
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/
I want to compare two images of the same size with some text on them.
Let's say the two words are: 'google' and 'gooogle'.
Before measuring the image difference in PS, I blur the images with a Gaussian filter.
The neat thing in PS is that, no matter how you arrange the layers (gooogle on top or google on top), the difference of the layers stays the same.
You get a black background and the difference as (more or less) white pixels.
I am unable to reproduce this functionality in Python.
How did PS manage to get commutativity in there?
I was able to find the solution:
You need to take the absolute value. The problem is, you first need to convert the RGB image (uint8) to a larger datatype.
After that you can subtract the images and take the absolute value. In the end you convert back to RGB, which is uint8.
import numpy as np

def ps_like_diff(img1, img2):
    img1_ = img1.astype(int)
    img2_ = img2.astype(int)
    diff = img1_ - img2_
    return np.abs(diff).astype('uint8')
I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images, so I need to build a simple background-model image by processing 5 to 10 captured images of the same background.
Can someone help me, please?
Is your eventual goal to find foreground? Can you show some images?
If the animals move fast enough they will create a lot of intensity changes, while background pixels remain closely correlated across most of the frames. I won't write you real code, but here is pseudocode in OpenCV. The main idea is to average only correlated pixels:
Mat Iseq[10];                     // your sequence
Mat result, Iacc = 0, Icnt = 0;   // Iacc and Icnt are float types
loop through your sequence, i = 0; i < N-1; i++
    matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
    mask = 1 & (result > 0.9);                   // correlated part, probably background
    Iacc += Iseq[i] & mask + Iseq[i+1] & mask;   // accumulate background
    Icnt += 2*mask;                              // keep count
end of loop;
Mat Ibackground = Iacc.mul(1.0/Icnt);            // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply blur to enhance correlation. You can also clean every mask of small connected components, for example by erosion.
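For illustration, here is a simplified NumPy variant of the idea above: it replaces the `matchTemplate` correlation with a plain per-pixel frame difference (the tolerance `tol` is an assumed parameter), but keeps the same accumulate-only-correlated-pixels scheme:

```python
import numpy as np

def estimate_background(frames, tol=10):
    """frames: list of H x W uint8 arrays from a fixed camera.
    Average each pixel only over consecutive frame pairs in which it
    barely changes, i.e. where it is probably background."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    cnt = np.zeros(frames[0].shape, dtype=np.float64)
    for a, b in zip(frames, frames[1:]):
        mask = np.abs(a.astype(int) - b.astype(int)) < tol
        acc += np.where(mask, a.astype(float) + b.astype(float), 0.0)
        cnt += 2 * mask
    cnt[cnt == 0] = 1  # avoid division by zero where nothing correlated
    return (acc / cnt).astype(np.uint8)
```

A per-pixel difference is noisier than a windowed correlation, so in practice you would blur the frames first, exactly as suggested above.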
If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process will work for colour images too -- just treat each colour component separately). If a particular pixel appears as background in more than half the frames, then when you take the intensities of that pixel across all frames and sort them, a background-coloured pixel must appear at the half-way (median) position. (In the worst case, all background-coloured pixels get pushed to the very front or the very back in this order, but even then there are enough of them to cover the half-way point.)
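With NumPy, this per-pixel median is essentially a one-liner:

```python
import numpy as np

def median_background(frames):
    """frames: list of H x W (or H x W x 3) arrays of the same shape.
    Per-pixel median over all frames; works as long as each pixel is
    background in more than half the frames."""
    stack = np.stack(frames)  # shape: (num_frames, H, W[, 3])
    return np.median(stack, axis=0).astype(frames[0].dtype)
```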
If you only have 5 images it's going to be hard to identify background and most sophisticated techniques probably won't work. For general background identification methods, see Link
I am working on a project to losslessly compress a specific style of BMP images that look like this
I have thought about doing pattern recognition to find repetitive blocks of N x N pixels, but I suspect the execution time won't be fast enough.
Any suggestions?
EDIT: I have access to the dataset that created these images too, I just use the image to visualize my data.
Optical illusions make it hard to tell for sure, but are the colors only black/blue/red/green? If so, the most straightforward compression would be to make more efficient use of pixels: a pixel takes a fixed amount of space regardless of its color, so chances are you are using 12x as many pixels as you really need, since a pixel can represent far more colors than just those four.
A simple way to do that would be to label the pixels with the following base-4 numbers:
Black = 0
Red = 1
Green = 2
Blue = 3
Example:
The first four colors of the image seem to be Blue-Red-Blue-Blue. This is 3133 in base 4, which is simply DF in base 16 or 223 in base 10. This is enough to define what the red channel of the new pixel should be. The next 4 pixels define the green channel and the final 4 define the blue channel, thus turning 12 pixels into a single pixel.
Beyond that you'll probably want to look into more conventional compression software.
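A hypothetical sketch of this 2-bits-per-pixel packing (the exact RGB values in `PALETTE` are assumptions about the image's colors):

```python
# Map each of the four colors to a 2-bit code and pack 4 pixels per byte.
PALETTE = {(0, 0, 0): 0,      # black
           (255, 0, 0): 1,    # red
           (0, 255, 0): 2,    # green
           (0, 0, 255): 3}    # blue

def pack_pixels(pixels):
    """pixels: list of (R, G, B) tuples drawn from PALETTE.
    Returns a bytes object with 4 pixels packed into each byte."""
    out = bytearray()
    for i in range(0, len(pixels), 4):
        chunk = pixels[i:i + 4]
        byte = 0
        for rgb in chunk:
            byte = (byte << 2) | PALETTE[rgb]
        byte <<= 2 * (4 - len(chunk))  # pad a short final chunk
        out.append(byte)
    return bytes(out)
```

For example, Blue-Red-Blue-Blue packs to the codes 3,1,3,3, i.e. the single byte 0xDF. Running a general-purpose compressor over the packed bytes afterwards would shrink things further.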
I am fairly new to image processing using Visual C++. I am in search of a way to read black-and-white TIFF files and write the image as an array of hex values representing 0 or 1, then get the location information of either 0 (black) or 1 (white).
After a bit of research on Google and an article here https://www.ibm.com/developerworks/linux/library/l-libtiff/#resources I have the following trivial questions. Please do point me to relevant articles; there are so many that I couldn't wrap my head around them. Meanwhile I will keep on searching and reading.
Is it possible to extract pixel location information from TIFF using LIBTIFF?
Again, being new to all image formats, I can't help thinking of an image as a 2D array of hex values; is the bitmap format more appropriate? Thinking that way makes me wonder if I can just write the location of "white" or "black" mapped from its height onto a 2D array. I want the location of either "black" or "white"; is that possible? I don't really want an array of 0s and 1s that represents the image, and I'm not sure I even need it.
TIFF Example of a 4 by 2 BW image:
black white
white black
black black
white white
Output to something like ptLocArray[imageHeight][imageWidth] under a function named mappingNonPrint()
2 1
4 4
or under a function named mappingPrint()
1 2
3 3
With LIBTIFF, in what direction/order does it read TIFF files?
Ideally I would like the TIFF file to be read vertically from top to bottom, then shift to the next column and start again from the top. But my gut tells me that's a fairy tale. Being very new to image processing, I'd like to know how a TIFF is read, for example a single-strip one.
Additional info
I think I should add why I want the pixel location. I am trying to make a cylindrical roll printer, using an optical rotary encoder for location feedback. The height of the document represents the circumference of the roll surface; the width of the document represents the number of revolutions.
The following is the grossly untested logic of my ENC_reg(), pretty much unrelated to the image-processing part, but it may help readers see what I am trying to do with the processed array of a TIFF image. i is indexed from 0 to imageHeight; however, each element may hold an arbitrary number from 0 to imageHeight (112, 354, etc.) corresponding to whatever location contains black in the TIFF image. j is indexed from 0 to imageWidth. For example, ptLocArray[1][2] refers to the first location that has a black pixel in column 2; the value stored there could be 231, meaning the 231st pixel counting from the top of column 2.
I realize the array should be array[maxNumBlackPixelsPerColumn][] instead of array[imageHeight][], but I don't know how to count the number of 1s or 0s per column, because I don't know how to extract the 1s and 0s in the right order, as I mentioned earlier.
void ENC_reg()
{
    if (inputPort_ENC_reg == true) // true if an encoder signal was received
    {
        ENC_regTemp++; // increment the register each time the port sees a signal
        if (ENC_regTemp == ptLocArray[i][j])
        {
            Rollprint(); // print it
            i++; // advance to the next height location that contains "black"
            if (ENC_regTemp == imageHeight)
            {
                // end of column reached, end of a revolution
                j++;             // jump to the next column (next revolution)
                i = 0;           // reset i to the top of the next column
                ENC_regTemp = 0; // reset the encoder register for the new revolution
            }
        }
        ENC_reg(); // call itself again
    }
}
As I read your example, you want two arrays for each column of the image, one to contain the numbers of rows with a black pixel in that column, the other with indices of rows with a white pixel. Right so far?
This is certainly possible, but would require comparatively large amounts of memory. Even uncompressed, a single pixel of a grayscale image takes only one byte of memory, and with bit packing you can cram 8 pixels into a byte. On the other hand, unless you restrict yourself to images with no more than 255 rows, you'll need multiple bytes to represent each stored row index, plus extra logic to mark the end of each column.
My point is: try working with the “array of 0s and 1s”, as it is easier to accomplish, less demanding in terms of memory, and more efficient to run.
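For example, with the image as a NumPy array of 0s and 1s (0 = black, as in the question), collecting the black row indices per column is a one-liner:

```python
import numpy as np

def black_rows_per_column(bw):
    """bw: H x W array of 0s (black) and 1s (white).
    Returns, for each column, the row indices containing black."""
    return [np.flatnonzero(bw[:, j] == 0) for j in range(bw.shape[1])]
```

Using the 4-by-2 example from the question (black/white, white/black, black/black, white/white), this yields rows 0 and 2 for the first column and rows 1 and 2 for the second, matching the mappingPrint() output above in zero-based indexing.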