I have a 400x400 photo. I split it into 4 separate photos of 100x400 each and saved them as 1.jpg, 2.jpg, 3.jpg, and 4.jpg.
I now have to combine the 4 images cropped from the one photo and get back the original 400x400 photo. How can I do it?
I don't want to use a ready-made function; I need to do it with for loops.
This is not going to be easy: just decoding a JPEG without using a library could take a professional weeks or months.
You should use a library. I would recommend CImg as a good starting point.
Failing that, I would suggest using ImageMagick to convert your JPEGs to NetPBM PPM format; then you can read them much more easily:
magick 1.jpg -depth 8 1.ppm
When you have written the code to combine them, you can convert the combined PPM file back into a JPG with:
magick combined.ppm combined.jpg
If you don't want to use the built-in functions, you can create a 400x400 "mask" (an empty destination image) and assign the pixel values of each piece into it.
Here is the code:
#include <opencv2/highgui.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    // 4 pieces (IMREAD_GRAYSCALE replaces the removed CV_LOAD_IMAGE_GRAYSCALE)
    Mat piece_1 = imread("/ur/image/directory/1.jpg", IMREAD_GRAYSCALE);
    Mat piece_2 = imread("/ur/image/directory/2.jpg", IMREAD_GRAYSCALE);
    Mat piece_3 = imread("/ur/image/directory/3.jpg", IMREAD_GRAYSCALE);
    Mat piece_4 = imread("/ur/image/directory/4.jpg", IMREAD_GRAYSCALE);

    // Mask: empty 400x400 destination image
    Mat source_image = Mat::zeros(Size(400, 400), CV_8UC1);

    for (int i = 0; i < source_image.rows; i++)
    {
        for (int j = 0; j < source_image.cols; j++)
        {
            if (i <= 99)
                source_image.at<uchar>(i, j) = piece_1.at<uchar>(i, j);
            if (i > 99 && i <= 199)
                source_image.at<uchar>(i, j) = piece_2.at<uchar>(i - 100, j);
            if (i > 199 && i <= 299)
                source_image.at<uchar>(i, j) = piece_3.at<uchar>(i - 200, j);
            if (i > 299 && i <= 399)
                source_image.at<uchar>(i, j) = piece_4.at<uchar>(i - 300, j);
        }
    }

    imshow("Result", source_image);
    waitKey(0);
    return 0;
}
I've written OpenCV code to evaluate a neural network's classification capability for segmentation. It takes a long time to get a result even for a small portion of the code; given the high availability of Microsoft Azure, it seems like a chance to speed up the modeling.
I am interested in how to run this C++ OpenCV code on the Azure cloud, including how to transfer the C++ code and some training and test images to Azure. My code is long, so it is not included here, but I can send a link if needed.
I've googled how to use Microsoft Azure with C++ OpenCV, but since I'm not much into cloud services and the Azure documentation is confusing, I would really appreciate any help, ideally with screenshots or a screen recording showing the solution.
Based on the Azure documentation, I need a serverless framework that hosts code execution only.
Below is a simple OpenCV program I am seeking a way to execute: it reads an image, converts it to HSV, and saves some text in an XML file.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <fstream>

using namespace cv;
using namespace std;

Mat image;

int main()
{
    // read an image
    image = imread("water (8).png", 1);
    // check for existence of data
    if (!image.data)
    {
        printf("no image data.\n");
        return -1;
    }
    if (image.isContinuous()) {
        cout << "loaded image is continuous" << "\n";
    }

    // hold HSV conversion of the image
    Mat HSVimage;
    // convert to HSV - each channel stays in the range 0..255
    cv::cvtColor(image, HSVimage, cv::COLOR_BGR2HSV);

    FileStorage PrintInXMLFile("results.xml", FileStorage::WRITE);
    PrintInXMLFile << "some_text" << "Hello!";
    PrintInXMLFile.release(); // release the file after writing

    cv::waitKey(0);
    return 0;
} // end main
I'm using a Basler camera, and I'm trying to save the grabbed image with OpenCV. However, when I try to use imwrite(), I get this error:
imwrite_('C:/Users/Uporabnik/Desktop/slika.png'): can't write data: unknown exception
My conversion of the grabbed image:
openCvImage = Mat(image.GetHeight(), image.GetWidth(), CV_16U, (uint8_t *)image.GetBuffer());
Trying to save the image:
cv::imwrite("C:/Users/Uporabnik/Desktop/slika.png", openCvImage);
I am also using a Basler camera. You need to share the code that contains your Basler configuration as well. Here is how I use the Basler camera to grab a frame in OpenCV format:
#include <pylon/usb/BaslerUsbInstantCamera.h>
#include <pylon/PylonIncludes.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;
using namespace Pylon;
using namespace GenApi;
using namespace Basler_UsbCameraParams;

static const uint32_t c_countOfImagesToGrab = 1;

int main()
{
    Mat openCvImage;
    Pylon::PylonAutoInitTerm autoInitTerm;

    CBaslerUsbInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());
    CImageFormatConverter formatConverter;
    CPylonImage pylonImage;
    CGrabResultPtr ptrGrabResult;

    camera.MaxNumBuffer = 1;
    formatConverter.OutputPixelFormat = PixelType_BGR8packed;
    camera.StartGrabbing(c_countOfImagesToGrab);

    // wait for the grabbed frame, then convert it into a BGR OpenCV Mat
    camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
    formatConverter.Convert(pylonImage, ptrGrabResult);
    openCvImage = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(),
                          CV_8UC3, (uint8_t *) pylonImage.GetBuffer());

    imshow("Basler Frame", openCvImage);
    waitKey(0);
    return 0;
}
But first of all, use imshow to check that the grabbed image is valid. If imshow displays it correctly, the problem is in your directory path; otherwise you need to share the code that initializes the Basler camera.
Also pay attention to the dimensions of your image: simply print the shape and check that it is correct. In my case I had mistakenly included the batch-size dimension in the image shape, and I solved it with image = image[0].
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

int main()
{
    Mat src = imread("image.png", 1);

    namedWindow("src", 1);
    imshow("src", src);

    vector<Mat> rgbChannels(3);
    split(src, rgbChannels);

    namedWindow("R", 1);
    imshow("R", rgbChannels[2]);

    waitKey(0);
    return 0;
}
I was expecting the red channel to be displayed in red.
Why doesn't the above code show the red channel? Why does it show a grayscale image instead?
If the image is split into 3 channels, shouldn't each matrix show one of the colors r, g, and b?
Your code is correct; however, OpenCV shows each channel as grayscale. A Mat does not keep information about "where" its data came from: it does not know the plane was a red channel, so when you call imshow on it, it is displayed as a single-channel (grayscale) image.
What you can do is build up a 3-channel image from two zeroed channels plus the one you want to visualize.
I have seen some algorithms for removing a shadow from an image using OpenCV with C++. I have looked around but haven't found a way to not just erase the shadow, but store it in a new image of its own.
What I am doing in this code is converting the original image (which I obtained from the Internet) to the HSV color space, setting the value channel to V=180, which more or less removes the shadow, and then converting the image back to the BGR color space. I am clueless about how to 'extract' the removed shadow and save it to a different image...
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat srcImg;
    Mat hsvImg;
    Mat bgrImg;

    srcImg = imread("pcb-2008.jpg");

    cvtColor(srcImg, hsvImg, COLOR_BGR2HSV);
    imwrite("1.hsv.jpg", hsvImg);

    Mat channel[3];
    split(hsvImg, channel);

    // flatten the value channel to a constant V = 180
    channel[2] = Mat(hsvImg.rows, hsvImg.cols, CV_8UC1, Scalar(180));
    merge(channel, 3, hsvImg);
    imwrite("2.hsvNoShadow.jpg", hsvImg);

    cvtColor(hsvImg, bgrImg, COLOR_HSV2BGR);
    imwrite("3.backToBgr.jpg", bgrImg);

    return 0;
}
Sample image of a PCB
I need to stitch few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;

    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));

    cv::Stitcher stitcher = cv::Stitcher::createDefault();

    unsigned long AAtime = 0, BBtime = 0;
    AAtime = cv::getTickCount();

    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);

    BBtime = cv::getTickCount();
    printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());

    if (cv::Stitcher::OK == status)
        cv::imshow("Stitching Result", rImg);
    else
        std::printf("Stitching fail.");

    cv::waitKey(0);
    return 0;
}
Unfortunately, it always prints "Stitching fail." for these files -- http://imgur.com/a/32ZNS -- while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out where the images fit together. In the samples where stitching works, there is a lot of overlap: the blue roof, the features of the buildings across the road, etc.
In the set where it fails, there is no overlap, so the algorithm can't figure out how to fit the images together. It looks like you could 'stitch' these images by simply placing them next to each other; for that you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side: the hconcat function provided by OpenCV.
Mat image1, image2;
hconcat(image1, image2, image1); // syntax: hconcat(source1, source2, destination)
This function can also be used to copy a set of columns from one image into another:
Mat image;
Mat columns = image.colRange(20, 30);
hconcat(image, columns, image);
vconcat is the analogous function for stitching images vertically.