Writing very small characters on top of an image with OpenCV? - C++
So I'm trying to write a single character with putText() on top of an image so that it fits into a 25x25 box, but at that size the text doesn't render legibly; it just looks like a blob of whatever color I choose for the text. Is there any way to create small, readable text to overlay onto an image with OpenCV?
Here is an example using both putText() and loading a pre-drawn character from a file created in Photoshop or GIMP.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
    // Make a 3-channel BGR image, 300 px wide and 200 px tall
    cv::Mat img(200, 300, CV_8UC3);
    // Fill the entire image with magenta
    img = cv::Scalar(255, 0, 255);
    // Load a pre-drawn character "M" as a 3-channel image (so it matches
    // the destination type) and overlay it
    Mat txt = cv::imread("M.png", CV_LOAD_IMAGE_COLOR);
    txt.copyTo(img(cv::Rect(80, 120, txt.cols, txt.rows)));
    // Now use putText() to draw a white "S"
    int fontFace = FONT_HERSHEY_COMPLEX_SMALL;
    double fontScale = 1.5;
    string text = "S";
    putText(img, text, Point(60, 100), fontFace, fontScale, Scalar(255, 255, 255));
    // Save to disk
    imwrite("result.png", img);
}
Here's the M.png file:
Here's the result:
I also notice that the anti-aliased fonts (on the right side in the image below) look somewhat easier to read:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
    // Make a 3-channel BGR image, 800 px wide and 280 px tall
    cv::Mat img(280, 800, CV_8UC3);
    // Fill the entire image with magenta
    img = cv::Scalar(255, 0, 255);
    double fontScale = 1.5;
    int thickness = 1;
    int x = 10, y = 40;
    // Each Hershey font with the default 8-connected line type on the left,
    // and the anti-aliased (CV_AA) version on the right
    putText(img, "Simplex",    Point(x, y),       CV_FONT_HERSHEY_SIMPLEX,        fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Simplex AA", Point(x + 400, y), CV_FONT_HERSHEY_SIMPLEX,        fontScale, Scalar(255,255,255), thickness, CV_AA);
    y += 40;
    putText(img, "Plain",      Point(x, y),       CV_FONT_HERSHEY_PLAIN,          fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Plain AA",   Point(x + 400, y), CV_FONT_HERSHEY_PLAIN,          fontScale, Scalar(255,255,255), thickness, CV_AA);
    y += 40;
    putText(img, "Duplex",     Point(x, y),       CV_FONT_HERSHEY_DUPLEX,         fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Duplex AA",  Point(x + 400, y), CV_FONT_HERSHEY_DUPLEX,         fontScale, Scalar(255,255,255), thickness, CV_AA);
    y += 40;
    putText(img, "Complex",    Point(x, y),       CV_FONT_HERSHEY_COMPLEX,        fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Complex AA", Point(x + 400, y), CV_FONT_HERSHEY_COMPLEX,        fontScale, Scalar(255,255,255), thickness, CV_AA);
    y += 40;
    putText(img, "Triplex",    Point(x, y),       CV_FONT_HERSHEY_TRIPLEX,        fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Triplex AA", Point(x + 400, y), CV_FONT_HERSHEY_TRIPLEX,        fontScale, Scalar(255,255,255), thickness, CV_AA);
    y += 40;
    putText(img, "Script",     Point(x, y),       CV_FONT_HERSHEY_SCRIPT_SIMPLEX, fontScale, Scalar(255,255,255), thickness, 8);
    putText(img, "Script AA",  Point(x + 400, y), CV_FONT_HERSHEY_SCRIPT_SIMPLEX, fontScale, Scalar(255,255,255), thickness, CV_AA);
    // Save to disk
    imwrite("result.png", img);
}
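If you want to pick a fontScale that actually fits the 25x25 box from the question, you can measure the rendered text with cv::getTextSize() and shrink the scale until it fits. A minimal sketch, assuming a 25x25 target and an arbitrary starting scale and step size:
// Find a fontScale at which a single character fits inside a boxSize x boxSize box.
// Sketch only: the starting scale and step size are arbitrary choices.
#include <opencv2/opencv.hpp>
#include <string>
double scaleToFit(const std::string& text, int fontFace, int thickness, int boxSize)
{
    double scale = 2.0;                                // start deliberately large
    while (scale > 0.1)
    {
        int baseline = 0;
        cv::Size sz = cv::getTextSize(text, fontFace, scale, thickness, &baseline);
        if (sz.width <= boxSize && sz.height + baseline <= boxSize)
            return scale;                              // first scale that fits the box
        scale -= 0.1;
    }
    return 0.1;                                        // nothing smaller is readable anyway
}
Very small scales will still tend to produce the blob described in the question, which is why the pre-drawn character loaded from M.png usually stays more legible at that size.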
Related
How to display an image on any computer using OpenCV
I want to display an image on any computer that runs my program, but the program can only show the picture when the image file sits alongside the executable (I want the image to be inside the program itself). If it is not in the same place, it shows me this error: (image) https://i.imgur.com/bEtdaif.png
#include <iostream>
#include <Windows.h>
#include <opencv2/opencv.hpp>
#include "opencv2\highgui.hpp"
using namespace std;
using namespace cv;
int main()
{
    Mat img = imread("d.png");
    namedWindow("Image");
    imshow("Image", img);
    waitKey(0);
    cout << "h";
    int i;
    cin >> i;
}
One way is to write a program that converts the image file into a string of the form std::vector<uint8_t> image{ 0x01, 0x02, ... }; listing each byte, and save that string in a header file. Then #include that file into your program and read the image data from the image variable. This way the image is embedded in your executable by the compiler.
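A minimal sketch of that approach, assuming you have already generated a header named image_data.h defining the byte vector (the header name and variable are hypothetical; cv::imdecode() does the actual decoding from memory):
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdint>
// Hypothetical generated header: defines something like
//   const std::vector<uint8_t> image = { 0x89, 0x50, 0x4E, 0x47, /* ... rest of the PNG bytes ... */ };
#include "image_data.h"
int main()
{
    // Decode the in-memory PNG/JPEG bytes into a Mat; no file on disk is needed.
    cv::Mat img = cv::imdecode(image, cv::IMREAD_COLOR);
    if (img.empty())
        return 1;
    cv::imshow("Image", img);
    cv::waitKey(0);
}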
Why doesn't the following code show the red channel of an image?
#include "opencv2/objdetect/objdetect.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include <iostream> #include <stdio.h> using namespace std; using namespace cv; int main() { Mat src = imread("image.png", 1); namedWindow("src", 1); imshow("src", src); vector<Mat> rgbChannels(3); split(src, rgbChannels); namedWindow("R", 1); imshow("R", rgbChannels[2]); waitKey(0); return 0; } . I was expecting something like the following: why doesn't the above code show the Red channel? why does it show a grayscale image? if the image is split into 3 channels, each matrix should show one of the colors of r, g, and b. isn't that so?
Your code is correct; however, OpenCV shows the channel as grayscale. A Mat does not keep any information about where its data came from. In other words, it does not know it was a red channel, so when you call imshow, it displays it as a single-channel (grayscale) image. What you can do is build up an image with two zeroed channels plus the one you want to visualize.
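A minimal sketch of that idea, reusing the file name from the question: keep blue and green at zero and merge the extracted plane back into the red position, so imshow renders it in shades of red.
#include <opencv2/opencv.hpp>
#include <vector>
int main()
{
    cv::Mat src = cv::imread("image.png", cv::IMREAD_COLOR);
    if (src.empty())
        return 1;
    // Split into B, G, R planes (OpenCV stores color images as BGR).
    std::vector<cv::Mat> bgr(3);
    cv::split(src, bgr);
    // Zero out blue and green, keep the red plane in the red position (index 2).
    cv::Mat zeros = cv::Mat::zeros(src.size(), CV_8UC1);
    std::vector<cv::Mat> redOnly = { zeros, zeros, bgr[2] };
    cv::Mat redImage;
    cv::merge(redOnly, redImage);
    cv::imshow("R (as red)", redImage);
    cv::waitKey(0);
}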
Store monochrome bitmap image pixel values (0, 1) in a text file in OpenCV
I am using OpenCV to read a monochrome bitmap image and save its pixel values to a text file. To my knowledge, a monochrome bitmap has the values 0 and 1, not values between 0 and 255. When I save the values to the text file, 0 and 255 get stored. If I divide each pixel value by 255 I do get 0 and 1, but the output is still not acceptable, as it does not form any recognizable characters (the monochrome bitmap is a scanned page of text). I think there is a problem with the depth, type, or number of channels, but I am not able to solve it. Please help. Thanks in advance. Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
#include "math.h"
#include <fstream>
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
    ofstream fout("monochrome_file.txt");
    Mat img = imread("1_mono.bmp", CV_THRESH_BINARY);
    uchar val;
    int x;
    if (img.empty())
    {
        cout << "File Not Opened" << endl;
    }
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            val = img.at<uchar>(i, j);
            x = (int)val;
            x = x / 255;
            fout << x;
        }
    }
    waitKey();
    return 0;
}
Open monochrome_file.txt in Visual Studio instead of Notepad. The required patterns will be visible there.
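The pattern is also easier to see in any editor if you end each image row with a newline, so one image row becomes one text line. A sketch of that change to the question's inner loop (same img and fout as above):
// Same loop as in the question, but with one text line per image row,
// so the character shapes line up in any editor.
for (int i = 0; i < img.rows; i++)
{
    for (int j = 0; j < img.cols; j++)
    {
        int x = img.at<uchar>(i, j) / 255;   // 0 or 1
        fout << x;
    }
    fout << '\n';                            // end of one image row
}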
I get different results for the same image in a program
I made a small program that calculates the number of white pixels in a grayscale image. I get different results if I open the same image twice in the same program. The same happens if I display the intensity of the pixels; it changes even though it is the same image. If anyone sees where the problem is, please help.
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
    int i = 0, j, nbr = 0, nbr1 = 0;
    Mat image = imread("2_.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat image2 = imread("2_.png", CV_LOAD_IMAGE_GRAYSCALE);
    for (i = 0; i < image.rows; i++)
    {
        for (j = 0; j < image.cols; j++)
        {
            if (image.at<int>(i, j) != 0)
                nbr++;
            if (image2.at<int>(i, j) != 0)
                nbr1++;
        }
    }
    printf("%d\n %d\n", nbr, nbr1);
    return 0;
}
Thank you for your help.
It may be because you should use uchar, not int, for a grayscale image. CV_LOAD_IMAGE_GRAYSCALE gives you an 8-bit single-channel Mat, so reading it with at<int> steps four bytes at a time and reads outside the image memory.
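A corrected sketch of the counting loop, reading each pixel as uchar (file name taken from the question):
#include <opencv2/opencv.hpp>
#include <cstdio>
int main()
{
    // Load as an 8-bit, single-channel (grayscale) image.
    cv::Mat image = cv::imread("2_.png", cv::IMREAD_GRAYSCALE);
    if (image.empty())
        return 1;
    int nbr = 0;
    for (int i = 0; i < image.rows; i++)
        for (int j = 0; j < image.cols; j++)
            if (image.at<uchar>(i, j) != 0)   // uchar matches the 8-bit data
                nbr++;
    std::printf("%d non-zero pixels\n", nbr);
    return 0;
}
For this particular count, cv::countNonZero(image) returns the same number in a single call.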
Merging two images shows unwanted brightness
I am trying to blend two images, or you could say put one image on top of the other. When I apply an overlay blend to the images, or simply merge them, the result shows extra brightness. Here are my two images (the first, the vignette, is empty/hollow inside and does not contain any brightness in the centre) and the other is shown below. The code which I wrote is:
int main(int argc, char** argv)
{
    Mat img = imread("E:\\vig.png", -1);
    Mat ch[4];
    split(img, ch);
    Mat im2 = ch[3];   // here's the vignette
    im2 = 255 - im2;   // eventually cure the inversion
    Mat img2 = imread("E:\\ew.jpg");
    Mat out2;
    blending_overlay3(img2, im2, out2);
    imshow("image", out2);
    imwrite("E:\\image.jpg", out2);
    waitKey();
}
It shows me a result like this:
but I require a result like this:
EDIT: The first image (the vignette one) is hollow/empty in the centre, but when my program reads it, it becomes solid (bright) in the centre; the history behind its implementation is here. The only problem is with reading the first (vignette) image: if it is read as it is, hollow/empty in the centre, then whatever merge/blend/weight operation we apply should not affect the centre part of the other image, not even add brightness. That is what I want to do.
It's me again :) It seems you are writing a new Photoshop. The result I've got:
The code:
#include <iostream>
#include <vector>
#include <stdio.h>
#include <functional>
#include <algorithm>
#include <numeric>
#include <cstddef>
#include "opencv2/opencv.hpp"
#include <fstream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    namedWindow("Image");
    Mat Img1 = imread("Img1.png", -1);
    Mat Img2 = imread("Img2.png");
    cv::resize(Img1, Img1, Img2.size());
    Img1.convertTo(Img1, CV_32FC4, 1.0 / 255.0);
    Img2.convertTo(Img2, CV_32FC3, 1.0 / 255.0);
    vector<Mat> ch;
    split(Img1, ch);
    Mat mask = ch[3].clone();   // here's the vignette (the alpha channel)
    ch.resize(3);
    Mat I1, I2, result;
    // Scale the vignette image by its own alpha mask
    cv::multiply(mask, ch[0], ch[0]);
    cv::multiply(mask, ch[1], ch[1]);
    cv::multiply(mask, ch[2], ch[2]);
    merge(ch, I1);
    // Scale the background image by the inverse of the mask
    vector<Mat> ch2(3);
    split(Img2, ch2);
    cv::multiply(1.0 - mask, ch2[0], ch2[0]);
    cv::multiply(1.0 - mask, ch2[1], ch2[1]);
    cv::multiply(1.0 - mask, ch2[2], ch2[2]);
    merge(ch2, I2);
    result = I1 + I2;
    imshow("Image", result);
    waitKey(0);
}