Proper visualization of a warped image - C++

I am trying to implement image warping in C++ and OpenCV. My code is as follows:
Mat input = imread("Lena.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat out;
double xo, yo;
input.convertTo(input, CV_32FC1);
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
int height = input.rows;
int width = input.cols;
out = Mat(height, width, input.type());
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        xo = (8.0 * sin(2.0 * PI * j / 128.0));
        yo = (8.0 * sin(2.0 * PI * i / 128.0));
        out.at<float>(j, i) = (float)input.at<float>(((int)(j + yo + height) % height), ((int)(i + xo + width) % width));
    }
}
normalize(out, out, 0, 255, NORM_MINMAX, CV_8UC1);
imshow("output", out);
This produces the following image:
As is clearly visible, the values near the border are non-zero. Can anyone tell me how to get a black border, as shown in the following image, instead of the artifacts I get from my code?
Only the black border of this image should be considered, i.e. the image should be wavy (sinusoidal) but without artifacts.
Thanks...

Here:
xo = (8.0 * sin(2.0 * PI * j / 128.0));
yo = (8.0 * sin(2.0 * PI * i / 128.0));
out.at<float>(j, i) = (float)input.at<float>(((int)(j + yo + height) % height), ((int)(i + xo + width) % width));
You calculate the location of the source pixel, but you take the mod with width/height to ensure it's within the image. This results in pixels wrapping around at the edge. Instead you need to set any pixel outside of the image to black (or, if your source image has a black border, clamp to the edge).
As you have a border already, you could just clamp the coordinates, like this:
int ix = min(width-1, max(0, (int) (i + xo)));
int iy = min(height-1, max(0, (int) (j + yo)));
out.at<float>(j,i) = (float)input.at<float>(iy,ix);
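If you would rather have an explicit black border than rely on the padding, a minimal variant of the inner assignment (same loop variables as above) could be:

int ix = (int)(i + xo);
int iy = (int)(j + yo);
if (ix >= 0 && ix < width && iy >= 0 && iy < height)
    out.at<float>(j, i) = input.at<float>(iy, ix);
else
    out.at<float>(j, i) = 0.0f; // source falls outside the image: paint it black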

Related

How to implement a C++ function which creates a swirl on an image

imageData = new double*[imageHeight];
for (int i = 0; i < imageHeight; i++) {
    imageData[i] = new double[imageWidth];
    for (int j = 0; j < imageWidth; j++) {
        // compute the distance and angle from the swirl center:
        double pixelX = (double)i - swirlCenterX;
        double pixelY = (double)j - swirlCenterY;
        double pixelDistance = pow(pow(pixelX, 2) + pow(pixelY, 2), 0.5);
        double pixelAngle = atan2(pixelX, pixelY);
        //double swirlAmount = 1.0 - (pixelDistance/swirlRadius);
        //if (swirlAmount > 0.0) {
        //    double twistAngle = swirlTwists * swirlAmount * PI * 2.0;
        double twistAngle = swirlTwists * pixelDistance * PI * 2.0;
        // adjust the pixel angle and compute the adjusted pixel co-ordinates:
        pixelAngle += twistAngle;
        pixelX = cos(pixelAngle) * pixelDistance;
        pixelY = sin(pixelAngle) * pixelDistance;
        //}
        (this)->setPixel(i, j, tempMatrix[(int)(swirlCenterX + pixelX)][(int)(swirlCenterY + pixelY)]);
    }
}
I am trying to implement a C++ function (code above) based on the following pseudo-code,
which is supposed to create a swirl on an image, but I have some continuity problems at the borders.
The function I have for the moment is able to apply the swirl to a disk of a given size and to deform it almost as I wished, but its influence doesn't decrease as we get close to the borders. I tried to multiply the angle of rotation by a 1 - (r/R) factor (with r the distance between the current pixel and the center of the swirl, and R the radius of the swirl), but this doesn't give the effect I hoped for.
Moreover, I noticed that at some parts of the border a thin white line appears (which means that the pixel values there are equal to 1), and I can't exactly explain why.
Maybe some of the problems I have are linked to the atan2 C++ standard function.
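For what it's worth, a minimal sketch of the commented-out falloff idea, written with a self-consistent atan2/cos/sin convention and with the source coordinates clamped to the image (tempMatrix, setPixel, swirlCenterX/Y, swirlRadius, swirlTwists and PI as in the code above):

for (int i = 0; i < imageHeight; i++) {
    for (int j = 0; j < imageWidth; j++) {
        double dx = (double)i - swirlCenterX;
        double dy = (double)j - swirlCenterY;
        double distance = sqrt(dx * dx + dy * dy);
        double angle = atan2(dy, dx);              // note: argument order must match the cos/sin below
        double swirlAmount = 1.0 - (distance / swirlRadius);
        if (swirlAmount > 0.0) {                   // outside swirlRadius the image is left untouched
            angle += swirlTwists * swirlAmount * PI * 2.0;
            dx = cos(angle) * distance;
            dy = sin(angle) * distance;
        }
        // clamp the source coordinates so border pixels never read outside tempMatrix
        int srcI = std::min(imageHeight - 1, std::max(0, (int)(swirlCenterX + dx)));
        int srcJ = std::min(imageWidth - 1, std::max(0, (int)(swirlCenterY + dy)));
        this->setPixel(i, j, tempMatrix[srcI][srcJ]);
    }
}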

How to convert image storage order from channel-height-width to height-width-channel?

I would like to know how to convert an image stored as a 1D std::vector<float> from CHW format (Channel, Height, Width) to HWC format (Height, Width, Channel) in C++. The format change is needed due to requirements of a neural network.
I used OpenCV to read and show the image as below:
cv::namedWindow("Screenshot", cv::WINDOW_AUTOSIZE );
cv::imshow("Screenshot", rgbImage);
Then I converted the cv::Mat rgbImage to a 1D std::vector<float> in format CHW:
size_t channels = 3;
std::vector<float> data(channels * ROS_IMAGE_HEIGHT * ROS_IMAGE_WIDTH);
for (size_t j = 0; j < ROS_IMAGE_HEIGHT; j++) {
    for (size_t k = 0; k < ROS_IMAGE_WIDTH; k++) {
        cv::Vec3b intensity = rgbImage.at<cv::Vec3b>(j, k);
        for (size_t i = 0; i < channels; i++) {
            data[i*ROS_IMAGE_HEIGHT*ROS_IMAGE_WIDTH + j*ROS_IMAGE_HEIGHT + k] = (float) intensity[i];
        }
    }
}
Now I want to convert the format of std::vector<float> data to HWC. How can I do this?
I found some description of the "CHW" and "HWC" formats here.
If the storage order is HWC, it means that
Each sample is stored as a column-major matrix (height, width) of float[numChannels] (r00, g00, b00, r10, g10, b10, r01, g01, b01, r11, g11, b11).
Thus a pixel (x, y, c) is found using
xStride = channels;
yStride = channels * width;
cStride = 1;
data[x*xStride + y*yStride + c*cStride]
If the storage order is CHW, it means that each channel is a different plane. A pixel (x, y, c) is found using
xStride = 1;
yStride = width;
cStride = width * height;
data[x*xStride + y*yStride + c*cStride]
Note that the indexing in the question's code, data[i*ROS_IMAGE_HEIGHT*ROS_IMAGE_WIDTH + j*ROS_IMAGE_HEIGHT + k], is incorrect: j is the y-coordinate and should be multiplied by ROS_IMAGE_WIDTH.
The code in the question can be modified to yield a std::vector in the HWC format by replacing the line in the innermost loop by:
data[i + j*ROS_IMAGE_WIDTH*channels + k*channels] = (float) intensity[i];
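If you already have the CHW buffer and simply want to reorder it into a new vector, a standalone sketch could look like this (the function and parameter names are only illustrative):

#include <cstddef>
#include <vector>

// Reorder a CHW-packed buffer into a freshly allocated HWC buffer.
std::vector<float> chwToHwc(const std::vector<float>& chw,
                            size_t height, size_t width, size_t channels)
{
    std::vector<float> hwc(chw.size());
    for (size_t c = 0; c < channels; ++c)
        for (size_t y = 0; y < height; ++y)
            for (size_t x = 0; x < width; ++x)
                hwc[(y * width + x) * channels + c] =
                    chw[c * height * width + y * width + x];
    return hwc;
}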

Character recognition from an image C++

*Note: while this post is pretty much asking about bilinear interpolation, I kept the title more general and included extra information in case someone has any ideas on how I can possibly do this better.
I have been having trouble implementing a way to identify letters from an image in order to create a word-search-solving program. For mainly educational, but also portability, purposes I have been attempting this without the use of a library. It can be assumed that the image the characters are picked off of contains nothing but the puzzle. Although this page only recognizes a small set of characters, I have been using it to guide my efforts, along with this one as well.

As the article suggested, I have an image of each letter scaled down to 5x5 to compare each unknown letter to. I have had the best success by scaling the unknown down to 5x5 using bilinear resampling and summing the squares of the differences in intensity of each corresponding pixel in the known and unknown images. To try to get more accurate results I also added the square of the difference in width:height ratios, and of the white:black pixel ratios of the top and bottom halves of each image. The known image with the closest "difference score" to the unknown image is then considered the unknown letter. The problem is that this seems to have only about 50% accuracy. To improve this I tried using larger samples (15x15 instead of 5x5), but this proved even less effective.

I also tried to go through the known and unknown images, look for features and shapes, and determine a match based on two images having about the same amount of the same features. For example, shapes like the following were identified and counted up (where ■ represents a black pixel):

■ ■ ■ ■
■ ■

This proved less effective than the original method.
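For reference, the per-pixel part of the "difference score" described above could be sketched as follows (names and types are illustrative; the ratio terms are added on top of this):

// Sum of squared differences between two 5x5 intensity grids.
double differenceScore(const float unknown[5][5], const float known[5][5])
{
    double score = 0.0;
    for (int y = 0; y < 5; ++y)
        for (int x = 0; x < 5; ++x) {
            double d = unknown[y][x] - known[y][x];
            score += d * d;
        }
    return score;
}
// The known template with the lowest total score is taken as the match.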
So here is an example: the following image gets loaded:
The program then converts it to monochrome by determining whether each pixel's intensity is above or below the average intensity of an 11x11 square (computed with a summed-area table), fixes the skew, and picks out the letters by identifying areas of relatively equal spacing. I then use the intersecting horizontal and vertical spaces to get a general idea of where each character is. Next, I make sure that the entire letter is contained in each square picked out by going line by line, above, below, left and right of the original square, until the square's border detects no dark pixels on it.
Then I take each letter, resample it and compare it to the known images.
*Note: the known samples use Arial font, size 12, rescaled in Photoshop to 5x5 using bilinear interpolation.
Here is an example of a successful match:
The following letter is picked out:
scaled down to:
which looks like
from afar. This is successfully matched to the known N sample:
Here is a failed match:
is picked out and scaled down to:
which, to no real surprise, does not match the known R sample.
I changed how images are picked out so that the letter is not cut off, as you can see in the above images, so I believe the issue comes from scaling the images down. Currently I am using bilinear interpolation to resample the image. To understand how exactly this works with downsampling I referred to the second answer in this post and came up with the following code. I have previously tested that this code works (at least to a "this looks OK" point), so it could be a combination of factors causing problems.
void Image::scaleTo(int width, int height)
{
    int originalWidth = this->width;
    int originalHeight = this->height;
    Image * originalData = new Image(this->width, this->height, 0, 0);
    for (int i = 0; i < this->width * this->height; i++) {
        int x = i % this->width;
        int y = i / this->width;
        originalData->setPixel(x, y, this->getPixel(x, y));
    }
    this->resize(width, height); //simply resizes the image, after the resize it is just a black bmp.
    double factorX = (double)originalWidth / width;
    double factorY = (double)originalHeight / height;
    float * xCenters = new float[originalWidth]; //the following stores the "centers" of each pixel.
    float * yCenters = new float[originalHeight];
    float * newXCenters = new float[width];
    float * newYCenters = new float[height];
    //1 represents one of the originally sized pixel's side length
    for (int i = 0; i < originalWidth; i++)
        xCenters[i] = i + 0.5;
    for (int i = 0; i < width; i++)
        newXCenters[i] = (factorX * i) + (factorX / 2.0);
    for (int i = 0; i < height; i++)
        newYCenters[i] = (factorY * i) + (factorY / 2.0);
    for (int i = 0; i < originalHeight; i++)
        yCenters[i] = i + 0.5;
    /*  p[0]      p[1]
              p
        p[2]      p[3]  */
    //the following will find the closest points to the sampled pixel that still remain in this order
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            POINT p[4]; //POINT used is the Win32 struct POINT
            float pDists[4] = { FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX };
            float xDists[4];
            float yDists[4];
            for (int i = 0; i < originalWidth; i++) {
                for (int j = 0; j < originalHeight; j++) {
                    float xDist = abs(xCenters[i] - newXCenters[x]);
                    float yDist = abs(yCenters[j] - newYCenters[y]);
                    float dist = sqrt(xDist * xDist + yDist * yDist);
                    if (xCenters[i] < newXCenters[x] && yCenters[j] < newYCenters[y] && dist < pDists[0]) {
                        p[0] = { i, j };
                        pDists[0] = dist;
                        xDists[0] = xDist;
                        yDists[0] = yDist;
                    }
                    else if (xCenters[i] > newXCenters[x] && yCenters[j] < newYCenters[y] && dist < pDists[1]) {
                        p[1] = { i, j };
                        pDists[1] = dist;
                        xDists[1] = xDist;
                        yDists[1] = yDist;
                    }
                    else if (xCenters[i] < newXCenters[x] && yCenters[j] > newYCenters[y] && dist < pDists[2]) {
                        p[2] = { i, j };
                        pDists[2] = dist;
                        xDists[2] = xDist;
                        yDists[2] = yDist;
                    }
                    else if (xCenters[i] > newXCenters[x] && yCenters[j] > newYCenters[y] && dist < pDists[3]) {
                        p[3] = { i, j };
                        pDists[3] = dist;
                        xDists[3] = xDist;
                        yDists[3] = yDist;
                    }
                }
            }
            //channel is a typedef for unsigned char
            //getOPixel(point) is a macro for originalData->getPixel(point.x, point.y)
            float r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).r + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).r;
            float r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).r + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).r;
            float interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel r = (channel)round(interpolated);
            r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).g + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).g; //yDist[3]
            r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).g + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).g; //yDist[0]
            interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel g = (channel)round(interpolated);
            r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).b + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).b; //yDist[3]
            r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).b + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).b; //yDist[0]
            interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel b = (channel)round(interpolated);
            this->setPixel(x, y, { r, g, b });
        }
    }
    delete[] xCenters;
    delete[] yCenters;
    delete[] newXCenters;
    delete[] newYCenters;
    delete originalData;
}
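For what it's worth, the four neighbours can also be computed directly from the output pixel's source-space centre instead of searching every original pixel (the double loop over originalWidth x originalHeight is what makes the routine above slow). A minimal single-channel sketch, assuming getPixel(x, y) and the .r/.g/.b fields behave as in the code above:

#include <algorithm>
#include <cmath>

// sx/sy are the source-space centres of the output pixel (newXCenters[x], newYCenters[y]).
float sampleBilinear(Image& src, int srcW, int srcH, float sx, float sy)
{
    // shift from pixel-centre coordinates back to array indices
    float fx = sx - 0.5f, fy = sy - 0.5f;
    int x0 = std::max(0, std::min(srcW - 1, (int)std::floor(fx)));
    int y0 = std::max(0, std::min(srcH - 1, (int)std::floor(fy)));
    int x1 = std::min(srcW - 1, x0 + 1);
    int y1 = std::min(srcH - 1, y0 + 1);
    float tx = std::max(0.0f, std::min(1.0f, fx - x0)); // horizontal weight
    float ty = std::max(0.0f, std::min(1.0f, fy - y0)); // vertical weight
    float top    = (1 - tx) * src.getPixel(x0, y0).r + tx * src.getPixel(x1, y0).r;
    float bottom = (1 - tx) * src.getPixel(x0, y1).r + tx * src.getPixel(x1, y1).r;
    return (1 - ty) * top + ty * bottom;
}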
I have the utmost respect for anyone even remotely willing to sift through this to try and help. Any and all suggestions will be extremely appreciated.
UPDATE:
So, as suggested, I started augmenting the known data set with scaled-down letters from word searches. This greatly improved accuracy from about 50% to 70% (percentages calculated from a very small sample size, so take the numbers lightly). Basically I'm using the original set of chars as a base (this original set was actually the most accurate out of the other sets I've tried, e.g. a set calculated using the same resampling algorithm, a set using a different font, etc.), and I am just manually adding knowns to that set. I manually assign the first 20 or so images picked out in a search their corresponding letter and save them into the known-set folder. I am still choosing the closest match out of the entire known set. Would this still be a good method, or should some kind of change be made?

I also implemented a feature where, if a letter is about a 90% match with a known letter, I assume the match is correct and add the current "unknown" to the list of knowns. I could see this going both ways: I feel like it could either (a) make the program more accurate over time or (b) solidify the original guess and possibly make the program less accurate over time. I have actually not noticed this cause a change (either for the better or for the worse). Am I on the right track with this? I'm not going to call this solved just yet, until I get accuracy a little higher and test the program on more examples.

OpenCV: Random alpha channel artifacts when overlaying images with transparency in iOS

In my iOS project I am adding small PNG images, including an alpha channel, as overlays on a JPEG picture. The result on my device in DEBUG mode is as expected: the tears are drawn correctly.
When I run the same code on the Simulator, or when I archive and export the app in RELEASE mode, I get random artifacts in the alpha channel.
The underlying cv::Mats all contain header info and a valid data section. Even on a green background the error is reproducible.
The behaviour seems to be totally random, as from time to time no artifacts are drawn (image 3: right tear; image 4: left tear).
Ideas, anybody?
const char *cpath1 = [@"" cStringUsingEncoding:NSUTF8StringEncoding]; // overlay image path; pass your image path (an NSString) inside @""
const char *cpath  = [@"" cStringUsingEncoding:NSUTF8StringEncoding]; // underlay image path
cv::Mat overlay  = cv::imread(cpath1, -1); // -1 keeps the alpha channel of .png images
cv::Mat underlay = cv::imread(cpath, -1);
// extract the alpha channels
cv::Mat overlayAlpha;
std::vector<Mat> channels1;
split(overlay, channels1);
channels1[3].copyTo(overlayAlpha);
cv::Mat underlayAlpha;
std::vector<Mat> channels2;
split(underlay, channels2);
channels2[3].copyTo(underlayAlpha);
overlayImage(&underlay, &overlay, cv::Point(10, 10));
// convert the final image to RGB channel order
cv::split(underlay, channels1);
std::swap(channels1[0], channels1[2]); // swap B and R channels
cv::merge(channels1, underlay);        // merge channels
MatToUIImage(underlay); // convert the final cv::Mat to a UIImage for display
The overlay function is shown below (referenced from http://answers.opencv.org/question/73016/how-to-overlay-an-png-image-with-alpha-channel-to-another-png/):
void overlayImage(Mat* src, Mat* overlay, const cv::Point& location)
{
    for (int y = max(location.y, 0); y < src->rows; ++y)
    {
        int fY = y - location.y;
        if (fY >= overlay->rows)
            break;
        for (int x = max(location.x, 0); x < src->cols; ++x)
        {
            int fX = x - location.x;
            if (fX >= overlay->cols)
                break;
            double opacity = ((double)overlay->data[fY * overlay->step + fX * overlay->channels() + 3]) / 255;
            for (int c = 0; opacity > 0 && c < src->channels(); ++c)
            {
                unsigned char overlayPx = overlay->data[fY * overlay->step + fX * overlay->channels() + c];
                unsigned char srcPx = src->data[y * src->step + x * src->channels() + c];
                src->data[y * src->step + src->channels() * x + c] = srcPx * (1. - opacity) + overlayPx * opacity;
            }
        }
    }
}
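As a side note, the same blend can be written with typed cv::Mat accessors instead of raw data pointers, which avoids step/channel arithmetic mistakes; a minimal sketch assuming both Mats are CV_8UC4:

void overlayImageSafe(cv::Mat& dst, const cv::Mat& overlay, const cv::Point& location)
{
    CV_Assert(dst.type() == CV_8UC4 && overlay.type() == CV_8UC4);
    for (int y = std::max(location.y, 0); y < dst.rows; ++y) {
        int fY = y - location.y;
        if (fY >= overlay.rows) break;
        for (int x = std::max(location.x, 0); x < dst.cols; ++x) {
            int fX = x - location.x;
            if (fX >= overlay.cols) break;
            const cv::Vec4b& over = overlay.at<cv::Vec4b>(fY, fX);
            cv::Vec4b& under = dst.at<cv::Vec4b>(y, x);
            double opacity = over[3] / 255.0;
            for (int c = 0; c < 3; ++c) // blend the colour channels only
                under[c] = cv::saturate_cast<uchar>(under[c] * (1.0 - opacity) + over[c] * opacity);
        }
    }
}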

OpenCV Assertion Error mat.hpp line 570

When I try to implement this code to set up all the colors to be used later, it shows an error. Does anyone know what is wrong with this code? Thanks.
// Extracting pure colors to use in demo
const int ncolors = 16;
std::vector<Scalar> colors;
for (int n = 0; n < ncolors; ++n) {
    Mat color(Size(1, 1), CV_32FC3);
    color.at<float>(0) = (360) / ncolors * n;
    color.at<float>(1) = 1.0;
    color.at<float>(2) = 0.7;
    cvtColor(color, color, CV_HSV2BGR);
    color = color * 255;
    colors.push_back(Scalar(color.at<float>(0), color.at<float>(1), color.at<float>(2)));
}
The matrix color is a 1x1 matrix with 3 channels, so you should access as:
color.at<Vec3f>(0)[0] = 360.f / ncolors * n;
color.at<Vec3f>(0)[1] = 1.f;
color.at<Vec3f>(0)[2] = 0.7f;
You should access it similarly when you construct the Scalar.
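Put together, the loop could be sketched like this (same logic as the question, only the element access and the colour-conversion constant updated to the C++ API):

const int ncolors = 16;
std::vector<cv::Scalar> colors;
for (int n = 0; n < ncolors; ++n) {
    cv::Mat color(cv::Size(1, 1), CV_32FC3);
    // HSV for 32-bit float images: H in [0, 360], S and V in [0, 1]
    color.at<cv::Vec3f>(0) = cv::Vec3f(360.f / ncolors * n, 1.f, 0.7f);
    cv::cvtColor(color, color, cv::COLOR_HSV2BGR);
    color = color * 255;
    cv::Vec3f bgr = color.at<cv::Vec3f>(0);
    colors.push_back(cv::Scalar(bgr[0], bgr[1], bgr[2]));
}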