Swap two colors using a color matrix - C++

How can I swap two colors using a color matrix? For instance swapping red and blue is easy. The matrix would look like:
0 0 1 0 0
0 1 0 0 0
1 0 0 0 0
0 0 0 1 0
0 0 0 0 1
So how can I swap any two colors in general? For example, there is Color1 with R1, G1, B1 and Color2 with R2, G2, B2.
EDIT: By swap I mean Color1 will map to Color2 and Color2 will map to Color1. It looks like I need a reflection transformation. How do I calculate it?
GIMP reference removed. Sorry for confusion.

This appears to be the section of the color-exchange.c file in the GIMP source that cycles through all the pixels and, if a pixel meets the chosen criteria (which can be a range of colors), swaps it with the chosen color:
for (y = y1; y < y2; y++)
  {
    gimp_pixel_rgn_get_row (&srcPR, src_row, x1, y, width);

    for (x = 0; x < width; x++)
      {
        guchar pixel_red, pixel_green, pixel_blue;
        guchar new_red, new_green, new_blue;
        guint  idx;

        /* get current pixel-values */
        pixel_red   = src_row[x * bpp];
        pixel_green = src_row[x * bpp + 1];
        pixel_blue  = src_row[x * bpp + 2];

        idx = x * bpp;

        /* want this pixel? */
        if (pixel_red   >= min_red   &&
            pixel_red   <= max_red   &&
            pixel_green >= min_green &&
            pixel_green <= max_green &&
            pixel_blue  >= min_blue  &&
            pixel_blue  <= max_blue)
          {
            guchar red_delta, green_delta, blue_delta;

            red_delta   = pixel_red   > from_red   ?
                          pixel_red   - from_red   : from_red   - pixel_red;
            green_delta = pixel_green > from_green ?
                          pixel_green - from_green : from_green - pixel_green;
            blue_delta  = pixel_blue  > from_blue  ?
                          pixel_blue  - from_blue  : from_blue  - pixel_blue;

            new_red   = CLAMP (to_red   + red_delta,   0, 255);
            new_green = CLAMP (to_green + green_delta, 0, 255);
            new_blue  = CLAMP (to_blue  + blue_delta,  0, 255);
          }
        else
          {
            new_red   = pixel_red;
            new_green = pixel_green;
            new_blue  = pixel_blue;
          }

        /* fill buffer */
        dest_row[idx + 0] = new_red;
        dest_row[idx + 1] = new_green;
        dest_row[idx + 2] = new_blue;

        /* copy alpha-channel */
        if (has_alpha)
          dest_row[idx + 3] = src_row[x * bpp + 3];
      }

    /* store the dest */
    gimp_pixel_rgn_set_row (&destPR, dest_row, x1, y, width);

    /* and tell the user what we're doing */
    if (!preview && (y % 10) == 0)
      gimp_progress_update ((gdouble) y / (gdouble) height);
  }
EDIT/ADDITION
Another way you could have transformed red to blue would be with this matrix:
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
-1 0 1 0 1
The only values that really matter are the bottom ones in this matrix.
This would be the same as saying subtract 255 from red, keep green the same, and then add 255 to blue. You could also cut the alpha in half like so:
-1 0 1 -0.5 1
So (just like the GIMP source) you just need to find the difference between your current color and your target color for each channel, and then apply that difference. Instead of channel values from 0 to 255 you use values from 0 to 1.
You could have changed it from red to green like so:
-1 1 0 0 1
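To make the "apply the difference" idea concrete, here is a minimal C++ sketch; the names (Rgba, makeTranslateMatrix, applyColorMatrix) are mine, and channels are assumed to be normalized to 0..1. Note that, like the matrices above, this translates every pixel by the same offset, so it moves Color1 onto Color2 but is not yet the two-way swap the question ultimately asks for:
#include <algorithm>

struct Rgba { float r, g, b, a; };

// Build a 5x5 color matrix whose bottom row translates Color1 toward Color2.
void makeTranslateMatrix(const Rgba& c1, const Rgba& c2, float m[5][5])
{
    // start from the identity
    for (int i = 0; i < 5; ++i)
        for (int j = 0; j < 5; ++j)
            m[i][j] = (i == j) ? 1.0f : 0.0f;

    // bottom row holds the per-channel offsets (the "difference")
    m[4][0] = c2.r - c1.r;
    m[4][1] = c2.g - c1.g;
    m[4][2] = c2.b - c1.b;
}

// Apply the matrix to one pixel, treated as the row vector [r g b a 1].
Rgba applyColorMatrix(const Rgba& in, float m[5][5])
{
    float v[5]   = { in.r, in.g, in.b, in.a, 1.0f };
    float out[5] = { 0, 0, 0, 0, 0 };
    for (int j = 0; j < 5; ++j)
        for (int i = 0; i < 5; ++i)
            out[j] += v[i] * m[i][j];
    return { std::clamp(out[0], 0.0f, 1.0f), std::clamp(out[1], 0.0f, 1.0f),
             std::clamp(out[2], 0.0f, 1.0f), std::clamp(out[3], 0.0f, 1.0f) };
}
Calling makeTranslateMatrix(color2, color1, m) gives the opposite direction; a single translation matrix cannot do both directions at once, which is why the question ends up needing the reflection described in the next answer.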
See here for some good info:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms533875%28v=vs.85%29.aspx
Good luck.

I solved it by creating a reflection matrix via D3DXMatrixReflect, using a plane that's perpendicular to the vector AB and intersects the midpoint of AB.
D3DXVECTOR3 AB( colorA.r - colorB.r, colorA.g - colorB.g, colorA.b - colorB.b );
D3DXVECTOR3 midpoint( (colorA.r + colorB.r) / 2, (colorA.g + colorB.g) / 2, (colorA.b + colorB.b) / 2 );
D3DXPLANE plane( AB.x, AB.y, AB.z, -AB.x * midpoint.x - AB.y * midpoint.y - AB.z * midpoint.z );
D3DXPlaneNormalize( &plane, &plane );
D3DXMATRIX reflection;
D3DXMatrixReflect( &reflection, &plane );
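As a quick sanity check (my addition, not part of the original answer), you can transform the two colors by the resulting matrix with D3DXVec3TransformCoord and confirm each lands on the other:
// Treat the RGB triples as 3D points and check that the reflection
// maps each color onto the other.
D3DXVECTOR3 a( colorA.r, colorA.g, colorA.b );
D3DXVECTOR3 b( colorB.r, colorB.g, colorB.b );
D3DXVECTOR3 aReflected, bReflected;
D3DXVec3TransformCoord( &aReflected, &a, &reflection );   // expect ~colorB
D3DXVec3TransformCoord( &bReflected, &b, &reflection );   // expect ~colorA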

Related

How should I implement the Grassfire Algorithm in C++

So in my program, I generate a random grid using a 2D array where all indexes are initialized to 0. A certain percentage of random indexes are then filled with -1, which means they are impassable / act like a wall. The user also inputs a target index, say (i, j), from where he starts, and his goal is to reach index (0, 0) by taking the shortest path possible.
To find the shortest path, I have to check the neighbours of each cell, starting from the target location, and set each free neighbour's value to the current cell's value plus 1. Refer to my figure for more details. I have the code for calculating the shortest path, but I'm stuck on this incrementation part. I tried writing some code but it doesn't seem to work. Any help would be appreciated.
GRID is generated in the following way:
1 is the user-input location, and the goal is to reach X, i.e. (0, 0):
X 0 0 0 0 0 0 0 0 -1
0 0 0 -1 -1 0 0 0 0 0
0 0 0 0 -1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 -1
0 0 0 0 0 0 0 1 0 0
Starting by incrementing:
X 0 0 0 0 0 0 0 0 -1
0 0 0 -1 -1 0 0 0 0 0
0 0 0 0 -1 3 3 3 3 3
0 0 0 0 0 3 2 2 2 -1
0 0 0 0 0 3 2 1 2 3
I have only shown it up to 3, but it keeps going until index (0, 0) is reached.
void waveAlgorithm(int *array, int height, int width, int x, int y)
{
    int currX = x;
    int currY = y;

    while (array != NULL)
    {
        // Assume that index 0 0 is never 1
        if (currX == 0 && currY == 0) {
            break;
        }

        // Check South
        currX = x;
        currY = y + 1;
        if (currX < width && currX > 0 && currY < height && currY >= 0)
        {
            if (*(array + currX * width + currY) == 0)
            {
                (*(array + currX * width + currY))++;
            }
        }

        // Check North
        currX = x;
        currY = y - 1;
        if (currX < width && currX > 0 && currY < height && currY >= 0)
        {
            if (*(array + currX * width + currY) != -1)
            {
                (*(array + currX * width + currY))++;
            }
        }

        // Check West
        currX = x - 1;
        currY = y;
        if (currX < width && currX > 0 && currY < height && currY >= 0)
        {
            if (*(array + currX * width + currY) != -1)
            {
                (*(array + currX * width + currY))++;
            }
        }

        // Check East
        currX = x + 1;
        currY = y;
        if (currX < width && currX > 0 && currY < height && currY >= 0)
        {
            if (*(array + currX * width + currY) != -1)
            {
                (*(array + currX * width + currY))++;
            }
        }
    }
}
I am kind of stuck while implementing this, especially for the directions that are combinations, i.e. north-east, south-east, etc. I tried writing a recursive program but couldn't figure out how to increment the cells:
waveAlgorithm(int *arr)
{
    if (index is 0,0)
        return;
    waveAlgorithm(int[i+1][j]);
    waveAlgorithm(int[i][j+1]);
    waveAlgorithm(int[i-1][j]);
    waveAlgorithm(int[i][j-1]);
}
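For what it's worth, here is a minimal BFS ("wavefront") sketch of the incrementation step, assuming a row-major int grid where -1 is a wall and 0 is free; all names are mine. It labels the start cell 1 and every reachable free cell with its neighbour's label plus 1, which is the wave shown in the figure; the shortest path can then be read off by starting at (0, 0) and always stepping to a cell with a smaller label.
#include <queue>
#include <utility>
#include <vector>

// Label every reachable free cell with (distance from the start cell) + 1.
// Cells holding -1 are walls and are never touched.
void waveFill(std::vector<int>& grid, int height, int width, int startX, int startY)
{
    std::queue<std::pair<int, int>> q;      // pending cells: (column, row)
    grid[startY * width + startX] = 1;      // the start cell is labelled 1
    q.push({startX, startY});

    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    while (!q.empty()) {
        auto [x, y] = q.front();
        q.pop();
        if (x == 0 && y == 0)               // the goal cell has been labelled, done
            break;
        int label = grid[y * width + x];
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height)
                continue;                   // outside the grid
            if (grid[ny * width + nx] != 0)
                continue;                   // wall (-1) or already labelled
            grid[ny * width + nx] = label + 1;
            q.push({nx, ny});
        }
    }
}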

Image rotation by very small step size in OpenCV C++

I'm trying to rotate an image by a small angle, such as 1 degree or less.
Consider this as my source image, which I rotate by 1 degree in MATLAB and OpenCV (C++):
0 0 0 0
0 255 255 0
0 255 255 0
0 0 0 0
When I rotate it in MATLAB by 1 degree this is the result:
0 0 2.223835069639144 0
2.223835069639553 252.7942370468069 2.527942370468065 0
0 252.7942370468065 252.794237046807 2.223835069639648
0 2.223835069639184 0 0
This is the code in MATLAB:
sourceImage = [0 0 0 0; 0 255.0 255.0 0; 0 255.0 255.0 0; 0 0 0 0];
rotationDegree = 1.0;
shearInX_Y1 = (cosd(rotationDegree)-1)/sind(rotationDegree);
shearInX_Y2 = sind(rotationDegree);
transformationMatrix = [(1 + shearInX_Y1*shearInX_Y2), (2*shearInX_Y1 + ...
((shearInX_Y1)^2)*shearInX_Y2), 0; (shearInX_Y2), (1 + shearInX_Y1*shearInX_Y2), 0; 0, 0, 1];
tform = affine2d(transformationMatrix);
imref = imref2d(size(sourceImage));
imref.XWorldLimits = imref.XWorldLimits-mean(imref.XWorldLimits);
imref.YWorldLimits = imref.YWorldLimits-mean(imref.YWorldLimits);
transformedImg2 = imwarp(sourceImage , imref, tform, 'OutputView', imref, 'Interp', 'bilinear');
transformedImg2
The transformationMatrix in MATLAB (which is our rotation matrix) is:
transformationMatrix =
9.998476951563913e-01 -1.745240643728003e-02 0
1.745240643728351e-02 9.998476951563913e-01 0
0 0 1.000000000000000e+00
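(As a check of the shear-based construction: with s1 = (cos θ - 1)/sin θ and s2 = sin θ, the diagonal entries are 1 + s1*s2 = cos θ, and the off-diagonal entry 2*s1 + s1^2*s2 = s1*(1 + cos θ) = (cos²θ - 1)/sin θ = -sin θ. So the matrix is exactly [cos θ, -sin θ, 0; sin θ, cos θ, 0; 0, 0, 1], which matches the values above for θ = 1 degree.)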
And the content of "tform.T" is:
9.998476951563913e-01 -1.745240643728003e-02 0
1.745240643728351e-02 9.998476951563913e-01 0
0 0 1
And the content of "rotationMatrix" in OpenCV is:
9.998476951563913e-01 1.745240643728351e-02 -0.008650050796837391
-1.745240643728351e-02 9.998476951563913e-01 0.008802355640446121
But when I do the rotation by 1 degree in OpenCV (C++), this is the result, which is the same as the source image! It looks as if OpenCV has a problem with such small rotations:
0 0 0 0
0 255 255 0
0 255 255 0
0 0 0 0
This is the code I use for rotation in OpenCV (C++). (Rotation is done with respect to the image center.)
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>   // getRotationMatrix2D, warpAffine
#include <iostream>

using namespace cv;
using namespace std;

#define INTERPOLATION_METHOD INTER_CUBIC
// INTER_AREA
// INTER_LINEAR
// INTER_NEAREST

Mat rotateImage(Mat sourceImage, double rotationDegree);

int main(){
    Mat sourceImage = Mat::zeros(4, 4, CV_64F);
    sourceImage.at<double>(1, 1) = 255.0;
    sourceImage.at<double>(1, 2) = 255.0;
    sourceImage.at<double>(2, 1) = 255.0;
    sourceImage.at<double>(2, 2) = 255.0;

    double rotationDegree = 1.0;

    Mat rotatedImage = rotateImage(sourceImage, rotationDegree);
    cout << "sourceImage: \n" << sourceImage << endl << endl;
    cout << "rotatedImage : \n" << rotatedImage << endl << endl;

    return 0;
}

Mat rotateImage(Mat sourceImage, double rotationDegree){
    double rowOfImgCenter;
    double colOfImgCenter;

    // sub-pixel center of the image (0-based pixel coordinates)
    rowOfImgCenter = sourceImage.rows / 2.0 - 0.5;
    colOfImgCenter = sourceImage.cols / 2.0 - 0.5;
    Point2d imageCenter(colOfImgCenter, rowOfImgCenter);

    Mat rotationMatrix;
    rotationMatrix = getRotationMatrix2D(imageCenter, rotationDegree, 1.0);

    Mat rotatedImage;
    warpAffine(sourceImage, rotatedImage, rotationMatrix, sourceImage.size(), INTERPOLATION_METHOD);

    return rotatedImage;
}
Any idea would be appreciated.

Converting 1-bit bmp file to array in C/C++ [closed]

I'm looking to turn a 1-bit bmp file of variable height/width into a simple two-dimensional array with values of either 0 or 1. I don't have any experience with image editing in code and most libraries that I've found involve higher bit-depth than what I need. Any help regarding this would be great.
Here's the code to read a monochrome .bmp file
(See dmb's answer below for a small fix for odd-sized .bmps)
#include <stdio.h>
#include <string.h>
#include <malloc.h>

unsigned char *read_bmp(char *fname, int *_w, int *_h)
{
    unsigned char head[54];
    FILE *f = fopen(fname, "rb");

    // BMP header is 54 bytes
    fread(head, 1, 54, f);

    int w = head[18] + (((int)head[19]) << 8) + (((int)head[20]) << 16) + (((int)head[21]) << 24);
    int h = head[22] + (((int)head[23]) << 8) + (((int)head[24]) << 16) + (((int)head[25]) << 24);

    // lines are aligned on 4-byte boundary
    int lineSize = (w / 8 + (w / 8) % 4);
    int fileSize = lineSize * h;

    unsigned char *img = malloc(w * h), *data = malloc(fileSize);

    // skip the header
    fseek(f, 54, SEEK_SET);

    // skip palette - two rgb quads, 8 bytes
    fseek(f, 8, SEEK_CUR);

    // read data
    fread(data, 1, fileSize, f);

    // decode bits
    int i, j, k, rev_j;
    for (j = 0, rev_j = h - 1; j < h; j++, rev_j--) {
        for (i = 0; i < w / 8; i++) {
            int fpos = j * lineSize + i, pos = rev_j * w + i * 8;
            for (k = 0; k < 8; k++)
                img[pos + (7 - k)] = (data[fpos] >> k) & 1;
        }
    }

    free(data);
    *_w = w;
    *_h = h;
    return img;
}

int main()
{
    int w, h, i, j;
    unsigned char *img = read_bmp("test1.bmp", &w, &h);

    for (j = 0; j < h; j++)
    {
        for (i = 0; i < w; i++)
            printf("%c ", img[j * w + i] ? '0' : '1');
        printf("\n");
    }
    return 0;
}
It is plain C with no pointer casting, so beware when using it in C++ (the malloc results will need casts).
The biggest problem is that the lines in .bmp files are 4-byte aligned, which matters a lot with single-bit images. So we calculate the line size as "width / 8 + (width / 8) % 4". Each byte contains 8 pixels, not one, so we use the k-based loop.
I hope the rest of the code is obvious - much has been written about the .bmp header and palette data (the 8 bytes which we skip).
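As an aside (my own note, not part of the original answer): the expression width / 8 + (width / 8) % 4 is not right for every width. For example, a 24-pixel-wide row needs 3 data bytes padded to 4, but the expression gives 6. A line-size formula that works for any 1-bit width rounds the bits per row up to whole 32-bit words:
int lineSize = ((w + 31) / 32) * 4;   /* bits per row rounded up to whole 32-bit words, in bytes */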
Expected output:
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 1 1 1 1 0 0 1 1 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 1 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 1 0 0 0
0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 0 0 0 0 1 0
0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0
0 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0
I tried the solution of Viktor Lapyov on a 20x20 test image, but with his code the last 4 pixels of each row are not read (the i loop only covers whole bytes, so the last partial byte in a row is ignored). The problem is here:
// decode bits
int i, j, k, rev_j;
for (j = 0, rev_j = h - 1; j < h; j++, rev_j--) {
    for (i = 0; i < w / 8; i++) {
        int fpos = j * lineSize + i, pos = rev_j * w + i * 8;
        for (k = 0; k < 8; k++)
            img[pos + (7 - k)] = (data[fpos] >> k) & 1;
    }
}
I rewrote the inner loop like this:
// decode bits
int i, byte_ctr, j, rev_j;
for (j = 0, rev_j = h - 1; j < h; j++, rev_j--) {
    for (i = 0; i < w; i++) {
        byte_ctr = i / 8;
        unsigned char data_byte = data[j * lineSize + byte_ctr];
        int pos = rev_j * w + i;
        unsigned char mask = 0x80 >> i % 8;
        img[pos] = (data_byte & mask) ? 1 : 0;
    }
}
and all is well.
The following C code works with monochrome bitmaps of any size. I'll assume you've got your bitmap in a buffer, with the height and width already read from the file. So:
// allocate mem for global buffer
if (!(img = malloc(h * w)))
    return (0);

int i = 0, k, j, scanline;

// calc the scanline. Monochrome images are padded
// at every line end so that each line is divisible by 4.
scanline = (w + 7) >> 3;            // whole bytes per row (round up)

// account for the padding
if (scanline % 4)
    scanline += (4 - scanline % 4);

// loop and set the img values (k walks the file rows bottom-up)
for (i = 0, k = h - 1; i < h; i++, k--)
    for (j = 0; j < w; j++) {
        img[j + i * w] = (buffer[(j >> 3) + k * scanline]
                          & (0x80 >> (j % 8))) ? 1 : 0;
    }
Hope this helps. Converting it to 2D is now a trivial matter. If you get lost, here is the math to go from a 2D position to the 1D array: if r and c are the row and column and w is the width, then element (r, c) is at index c + r * w.
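For example, with w = 16, the pixel at row r = 2, column c = 5 sits at index 5 + 2 * 16 = 37 in the 1D array.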
If you have further remarks, hit me back.
Let's think of a 1x7 monochrome bitmap, i.e. a bitmap of a straight line 7 pixels wide. To store this image on Windows: since one row needs only a single byte of pixel data, and rows are padded to a multiple of 4 bytes, an extra 3 padding bytes are added.
So the biSizeImage of the BITMAPINFOHEADER structure will show a total of 4 bytes. Nonetheless the biHeight and biWidth members will correctly state the true bitmap dimensions.
The first code above will fail because 7 / 8 = 0 (integer division truncates in C). Hence the "i" loop will not execute, and neither will the "k" loop.
That means the vector "img" will contain garbage values that do not correspond to the pixels contained in "data", i.e. the result is incorrect.
And by inductive reasoning, if it does not satisfy the base case, then chances are it won't do much good for the general case.

Pixel color calculation 255 to 0

I have been using the algorithm from Microsoft here:
INT iWidth = bitmap.GetWidth();
INT iHeight = bitmap.GetHeight();
Color color, colorTemp;

for (INT iRow = 0; iRow < iHeight; iRow++)
{
    for (INT iColumn = 0; iColumn < iWidth; iColumn++)
    {
        bitmap.GetPixel(iColumn, iRow, &color);
        colorTemp.SetValue(color.MakeARGB(
            (BYTE)(255 * iColumn / iWidth),
            color.GetRed(),
            color.GetGreen(),
            color.GetBlue()));
        bitmap.SetPixel(iColumn, iRow, colorTemp);
    }
}
to create a gradient alpha blend. Theirs goes left to right; I need one going from bottom to top, so I changed their line
(BYTE)(255 * iColumn / iWidth)
to
(BYTE)(255 - ((iRow * 255) / iHeight))
This makes row 0 have alpha 255, through to the last row having alpha 8.
How can I alter the calculation to make the alpha go from 255 to 0 (instead of 255 to 8)?
f(x) = 255 * (x - 8) / (255 - 8)?
Where x is in [8, 255] and f(x) is in [0, 255]
The original problem is probably related to the fact that if you have a width of 100 and you iterate over horizontal pixels, you'll only get values 0 to 99. So, dividing 99 by 100 is never 1. What you need is something like 255*(column+1)/width.
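For example, with a width of 100, the last column gives 255 * 99 / 100 = 252 under integer division, whereas 255 * (99 + 1) / 100 = 255, so the gradient actually reaches its full value at the edge.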
(BYTE)( 255 - 255 * iRow / (iHeight-1) )
iRow is between 0 and (iHeight-1), so if we want a value between 0 and 1 we need to divide by (iHeight-1). We actually want a value between 0 and 255, so we just scale up by 255. Finally we want to start at the maximum and descend to the minimum, so we just subtract the value from 255.
At the endpoints:
iRow = 0
255 - 255 * 0 / (iHeight-1) = 255
iRow = (iHeight-1)
255 - 255 * (iHeight-1) / (iHeight-1) = 255 - 255 * 1 = 0
Note that iHeight must be greater than or equal to 2 for this to work (you'll get a divide by zero if it is 1).
Edit:
This will cause only the last row to have an alpha value of 0. You can get a more even distribution of alpha values with
(BYTE)( 255 - 256 * iRow / iHeight )
however, if iHeight is less than 256 the last row won't have an alpha value of 0.
Try using one of the following calculations (they give the same result):
(BYTE)(255 - (iRow * 256 - 1) / (iHeight - 1))
(BYTE)(((iHeight - 1 - iRow) * 256 - 1) / (iHeight - 1))
This will only work if using signed division (you use the type INT which seems to be the same as int, so it should work).
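Putting it together, a sketch of the bottom-to-top version using the first formula from this answer (based on the GDI+ loop in the question, and assuming iHeight >= 2) might look like this:
INT iWidth  = bitmap.GetWidth();
INT iHeight = bitmap.GetHeight();
Color color, colorTemp;

for (INT iRow = 0; iRow < iHeight; iRow++)
{
    // 255 at the top row, 0 at the bottom row (requires iHeight >= 2)
    BYTE alpha = (BYTE)(255 - 255 * iRow / (iHeight - 1));

    for (INT iColumn = 0; iColumn < iWidth; iColumn++)
    {
        bitmap.GetPixel(iColumn, iRow, &color);
        colorTemp.SetValue(color.MakeARGB(alpha,
            color.GetRed(), color.GetGreen(), color.GetBlue()));
        bitmap.SetPixel(iColumn, iRow, colorTemp);
    }
}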

Implementing erosion, dilation in C, C++

I have a theoretical understanding of how dilation of a binary image is done.
AFAIK, if my SE (structuring element) is this
0 1
1 1.
where . represents the centre, and my (binary) image is this
0 0 0 0 0
0 1 1 0 0
0 1 0 0 0
0 1 0 0 0
0 0 0 0 0
so the result of dilation is
0 1 1 0 0
1 1 1 0 0
1 1 0 0 0
1 1 0 0 0
0 0 0 0 0
I got the above result by shifting the image by 0 (no shift), +1 (up), and -1 (left), according to the SE, and taking the union of all three shifts.
Now, I need to figure out how to implement this in C, C++.
I am not sure how to begin and how to take the union of sets.
I thought of representing the original image, the three shifted images, and the final image obtained by taking the union, all as matrices.
Is there any place where I can get a sample solution to start with, or any ideas on how to proceed?
Thanks.
There are tons of sample implementations out there. Google is your friend :)
EDIT
The following is pseudo-code for the process (very similar to doing a convolution in 2D). I'm sure there are cleverer ways of doing it:
// grayscale image, binary mask
void morph(inImage, outImage, kernel, type) {
    // half size of the kernel, kernel size is n*n (easier if n is odd)
    sz = (kernel.n - 1) / 2;

    for X in inImage.rows {
        for Y in inImage.cols {
            if ( isOnBoundary(X, Y, inImage, sz) ) {
                // pixel (X,Y) is a boundary case, deal with it (copy pixel as is)
                // must consider half size of the kernel
                val = inImage(X, Y);      // quick fix
            }
            else {
                list = [];

                // get the neighborhood of this pixel (X,Y)
                for I in kernel.n {
                    for J in kernel.n {
                        if ( kernel(I, J) == 1 ) {
                            list.add( inImage(X + I - sz, Y + J - sz) );
                        }
                    }
                }

                if type == dilation {
                    // dilation: set to one if any 1 is present, zero otherwise
                    val = max(list);
                } else if type == erosion {
                    // erosion: set to zero if any 0 is present, one otherwise
                    val = min(list);
                }
            }

            // set output image pixel
            outImage(X, Y) = val;
        }
    }
}
The above code is based on this tutorial (check the source code at the end of the page).
EDIT2:
list.add( inImage(X+I-sz, Y+J-sz) );
The idea is that we want to superimpose the kernel mask (of size n x n), centered at sz (half the size of the mask), on the current image pixel located at (X,Y), and then collect the intensities of the pixels where the mask value is one (we add them to a list). Once all the neighbors of that pixel have been extracted, we set the output image pixel to the maximum of that list (max intensity) for dilation, and to the minimum for erosion. (Of course this only works for grayscale images and a binary mask.)
The indices of both X/Y and I/J in the statement above are assumed to start from 0.
If you prefer, you can always rewrite the indices of I/J in terms of half the size of the mask (from -sz to +sz) with a small change (the way the tutorial I linked to does).
Example:
Consider this 3x3 kernel mask placed and centered on pixel (X,Y), and see how we traverse the neighborhood around it:
--------------------
| | | | sz = 1;
-------------------- for (I=0 ; I<3 ; ++I)
| | (X,Y) | | for (J=0 ; J<3 ; ++J)
-------------------- vect.push_back( inImage.getPixel(X+I-sz, Y+J-sz) );
| | | |
--------------------
Perhaps a better way to look at it is how to produce an output pixel of the dilation. For the corresponding pixel in the image, align the structuring element such that the origin of the structuring element is at that image pixel. If there is any overlap, set the dilation output pixel at that location to 1, otherwise set it to 0.
So this can be done by simply looping over each pixel in the image and testing whether or not the properly shifted structuring element overlaps with the image. This means you'll probably have 4 nested loops: x img, y img, x se, y se. So for each image pixel, you loop over the pixels of the structuring element and see if there is any overlap. This may not be the most efficient algorithm, but it is probably the most straightforward.
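Here is a minimal C++ sketch of that 4-nested-loop idea for a binary image; the function name and the row-major std::vector<int> layout are my own choices, not taken from any particular library:
#include <vector>

// Binary dilation: out(y, x) = 1 if the SE, with its origin placed on (y, x),
// overlaps any 1-pixel of the input image.
std::vector<int> dilateBinary(const std::vector<int>& img, int ih, int iw,
                              const std::vector<int>& se, int sh, int sw,
                              int originY, int originX)
{
    std::vector<int> out(ih * iw, 0);
    for (int y = 0; y < ih; ++y) {
        for (int x = 0; x < iw; ++x) {
            int hit = 0;
            for (int j = 0; j < sh && !hit; ++j) {
                for (int i = 0; i < sw && !hit; ++i) {
                    if (!se[j * sw + i])
                        continue;                   // not part of the SE
                    int yy = y - (j - originY);     // image pixel probed by SE(j, i)
                    int xx = x - (i - originX);
                    if (yy >= 0 && yy < ih && xx >= 0 && xx < iw && img[yy * iw + xx])
                        hit = 1;                    // any overlap -> output pixel is 1
                }
            }
            out[y * iw + x] = hit;
        }
    }
    return out;
}
With the 2x2 SE from the question and the origin at the "." element (originY = originX = 1), this reproduces the shift-and-union result shown there.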
Also, I think your example is incorrect. The dilation depends on the origin of the structuring element. If the origin is...
at the top left zero: you need to shift the image (-1,-1), (-1,0), and (0,-1) giving:
1 1 1 0 0
1 1 0 0 0
1 1 0 0 0
1 0 0 0 0
0 0 0 0 0
at the bottom right: you need to shift the image (0,0), (1,0), and (0,1) giving:
0 0 0 0 0
0 1 1 1 0
0 1 1 0 0
0 1 1 0 0
0 1 0 0 0
MATLAB uses floor((size(SE)+1)/2) as the origin of the SE so in this case, it will use the top left pixel of the SE. You can verify this using the imdilate MATLAB function.
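For the 2x2 SE in this question, floor((size(SE)+1)/2) = floor((2+1)/2) = floor(1.5) = 1 in each dimension, i.e. element (1,1) in MATLAB's 1-based indexing, which is the top-left element of the SE.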
See also the OpenCV example: Erosion and Dilation.
#include <stdlib.h>   /* malloc/calloc */

/* structure of the image variable;
 * variable n stores the order of the square matrix */
#define MAX_ORDER 64               /* assumed maximum order of the image */

typedef struct image {
    int a[MAX_ORDER][MAX_ORDER];   /* pixel matrix */
    int n;                         /* order of the square matrix */
} image;

/* function receives image "to_dilate" and returns "dilated" *
 * structuring element predefined:
 * 0 1 0
 * 1 1 1
 * 0 1 0
 */
image* dilate(image* to_dilate)
{
    int i, j;
    int does_order_increase;
    image* dilated;

    dilated = (image*)calloc(1, sizeof(image));   /* zero-initialised output */
    does_order_increase = 0;

    /* checking whether there are any 1's on the border */
    for (i = 0; i < to_dilate->n; i++)
    {
        if ((to_dilate->a[0][i] == 1) || (to_dilate->a[i][0] == 1) ||
            (to_dilate->a[to_dilate->n - 1][i] == 1) || (to_dilate->a[i][to_dilate->n - 1] == 1))
        {
            does_order_increase = 1;
            break;
        }
    }

    /* size of dilated image initialized */
    if (does_order_increase == 1)
        dilated->n = to_dilate->n + 1;
    else
        dilated->n = to_dilate->n;

    /* dilating image by checking every element of to_dilate and filling dilated;    *
     * does_order_increase copes with the adjustment if dilated's order increases */
    for (i = 0; i < to_dilate->n; i++)
    {
        for (j = 0; j < to_dilate->n; j++)
        {
            if (to_dilate->a[i][j] == 1)
            {
                dilated->a[i + does_order_increase][j + does_order_increase] = 1;
                dilated->a[i + does_order_increase - 1][j + does_order_increase] = 1;
                dilated->a[i + does_order_increase][j + does_order_increase - 1] = 1;
                dilated->a[i + does_order_increase + 1][j + does_order_increase] = 1;
                dilated->a[i + does_order_increase][j + does_order_increase + 1] = 1;
            }
        }
    }

    /* dilated stores the dilated binary image */
    return dilated;
}
/* end of dilation */