ImageMagick C++ Version 7 Modify Pixel Value in Blank Image

I have the following code that creates a blank black image and then attempts to write to that image by modifying each pixel to red.
Magick::Image image(Magick::Geometry(1024, 1024),
                    Magick::Color(std::uint8_t(0), std::uint8_t(0), std::uint8_t(0)));
assert(image.channels() == 3 && "Created wrong image format.");
image.type(Magick::TrueColorType);
image.fillColor("black");

std::size_t w = image.columns();
std::size_t h = image.rows();
assert(image.columns() == 1024 && image.rows() == 1024);

Magick::Quantum *mpixels = image.setPixels(0, 0, w, h);
for (int row = 0; row < h - 1; ++row) {
    for (int col = 0; col < w - 1; ++col) {
        std::size_t offset = (w * row + col);
        std::size_t moffset = image.channels() * offset;
        mpixels[moffset + 0] = 255;
        mpixels[moffset + 1] = 0;
        mpixels[moffset + 2] = 0;
    }
}
image.syncPixels();
image.write(out.c_str());
However, after inspecting the image it is still all black after changing the pixel values. What do I need to change to modify the pixel values?

I suspect that you are using the Q16 version of ImageMagick, which means that each pixel channel value is in the range 0-65535; you are setting the red channel to 255, which is still very close to black. I think the following will fix your issue:
mpixels[moffset + 0] = 65535;
You could also decide to switch to the Q8 version of ImageMagick if channel values in the range 0-255 are sufficient for you.
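If you want the write to be correct regardless of which quantum depth ImageMagick was built with, here is a small sketch (assuming the QuantumRange constant exposed by the ImageMagick headers, which is the maximum channel value for the configured depth):

// QuantumRange is 255 on a Q8 build and 65535 on a Q16 build,
// so this writes full red either way.
mpixels[moffset + 0] = QuantumRange;
mpixels[moffset + 1] = 0;
mpixels[moffset + 2] = 0;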

Related

Why my bitmap image have another color overlay after converting 32-bit to 8-bit

I'm working on resizing a bitmap image and converting it to 8-bit (grayscale). The problem is that when I convert a 32-bit image to 8-bit, the result has another color overlay, while it works perfectly on 24-bit. I guess the cause is the alpha channel, but I don't know where exactly the problem is.
This is my code to generate the 8-bit palette colors and write them after the DIB header:
char* palette = new char[1024];
for (int i = 0; i < 256; i++) {
    palette[i * 4] = palette[i * 4 + 1] = palette[i * 4 + 2] = (char)i;
    palette[i * 4 + 3] = 255;
}
fout.write(palette, 1024);
delete[] palette;
As I said, my code works perfectly on 24-bit. With 32-bit the colors are still kept after resizing, but when converting to 8-bit it looks like this:
expected image (when converted from 24-bit)
unexpected image (when converted from 32-bit)
This is how I get the colors and save them to srcPixel[]:
int i = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = getIndex(width, x, y);
        srcPixel[index].A = srcBMP.pImageData[i];
        i += alpha;
        srcPixel[index].B = srcBMP.pImageData[i++];
        srcPixel[index].G = srcBMP.pImageData[i++];
        srcPixel[index].R = srcBMP.pImageData[i++];
    }
    i += padding;
}
And this is the code where I convert it by taking the average of the four channels A, B, G and R from srcPixel[]:
int i = 0;
for (int y = 0; y < dstHeight; y++) {
    for (int x = 0; x < dstWidth; x++) {
        int index = getIndex(dstWidth, x, y);
        dstBMP.pImageData[i++] = (srcPixel[index].A + srcPixel[index].B + srcPixel[index].G + srcPixel[index].R) / 4;
    }
    i += dstPadding;
}
If I remove and skip all the alpha bytes in my code, the converted image still looks like that, and I get another problem: when resizing, the image gets another color overlay, just like the problem when converting to 8-bit (resizing without the alpha channel).
If I skip the alpha channel while taking the average (changing the line to dstBMP.pImageData[i++] = (srcPixel[index].B + srcPixel[index].G + srcPixel[index].R) / 3), there is almost no difference; the overlay still exists.
If I remove palette[i * 4 + 3] = 255; or do anything else with it, the result is not affected.
Thank you very much.
You add the alpha channel to the color, and that's why it becomes brighter. From here I found that opaque is 255 and transparent is 0 - therefore you are adding another channel, effectively set to 'white', into your result.
Remove the alpha channel from your equation and see if I'm right.
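To see the effect numerically, take a hypothetical opaque mid-gray pixel with B = G = R = 100 and A = 255 (using the assumption above that opaque alpha is 255): averaging all four channels gives (100 + 100 + 100 + 255) / 4 = 138 (with integer division), which is noticeably brighter, while averaging only the three color channels gives (100 + 100 + 100) / 3 = 100, the gray you expect.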

OpenCV: lab color quantization to predefined colors

I am trying to reduce the colors in my image to some predefined colors using the following function:
void quantize_img(cv::Mat &lab_img, std::vector<cv::Scalar> &lab_colors) {
    float min_dist, dist;
    int min_idx;
    for (int i = 0; i < lab_img.rows * lab_img.cols * 3; i += lab_img.cols * 3) {
        for (int j = 0; j < lab_img.cols * 3; j += 3) {
            min_dist = FLT_MAX;
            uchar &l = *(lab_img.data + i + j + 0);
            uchar &a = *(lab_img.data + i + j + 1);
            uchar &b = *(lab_img.data + i + j + 2);
            for (int k = 0; k < lab_colors.size(); k++) {
                double &lc = lab_colors[k](0);
                double &ac = lab_colors[k](1);
                double &bc = lab_colors[k](2);
                dist = (l - lc) * (l - lc) + (a - ac) * (a - ac) + (b - bc) * (b - bc);
                if (min_dist > dist) {
                    min_dist = dist;
                    min_idx = k;
                }
            }
            l = lab_colors[min_idx](0);
            a = lab_colors[min_idx](1);
            b = lab_colors[min_idx](2);
        }
    }
}
However it does not seem to work properly! For example the output for the following input looks amazing!
if (!(src = imread("im0.png")).data)
    return -1;
cvtColor(src, lab, COLOR_BGR2Lab);
std::vector<cv::Scalar> lab_color_plate_({
    Scalar(100,    0,    0), // white
    Scalar( 50,    0,    0), // gray
    Scalar(  0,    0,    0), // black
    Scalar( 50,  127,  127), // red
    Scalar( 50, -128,  127), // green
    Scalar( 50,  127, -128), // violet
    Scalar( 50, -128, -128), // blue
    Scalar( 68,   46,   75), // orange
    Scalar(100,  -16,   93)  // yellow
});
// convert from conventional Lab to OpenCV Lab
for (int k = 0; k < lab_color_plate_.size(); k++) {
    lab_color_plate_[k](0) *= 255.0 / 100.0;
    lab_color_plate_[k](1) += 128;
    lab_color_plate_[k](2) += 128;
}
quantize_img(lab, lab_color_plate_);
cvtColor(lab, lab, CV_Lab2BGR);
imwrite("im0_lab.png", lab);
Input image:
Output image:
Can anyone explain where the problem is?
After checking your algorithm I noticed that the algorithm is 100% correct, and the problem is your color space. Let's take one of the colors that gets changed "wrongly", like the green from the trees.
Using a color picker tool in GIMP, it tells you that at least one of the greens used is RGB (111, 139, 80). When this is converted to Lab, you get (54.4, -20.7, 28.3). The distance to green is (by your formula) 21274.34, while the distance to grey is 1248.74... so it will choose grey over green, even though it is a green color.
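For reference, plugging those numbers into the squared-distance formula from quantize_img, with the conventional Lab definitions from the question (green = (50, -128, 127), grey = (50, 0, 0)):

dist(green) = (54.4 - 50)^2 + (-20.7 + 128)^2 + (28.3 - 127)^2 = 19.4 + 11513.3 + 9741.7 ≈ 21274.3
dist(grey)  = (54.4 - 50)^2 + (-20.7 - 0)^2   + (28.3 - 0)^2   = 19.4 + 428.5 + 800.9 ≈ 1248.7

So grey wins by a wide margin, even though the pixel is visually green.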
A lot of Lab values can produce a green color. You can test the color ranges on this webpage. I would suggest you use HSV or HSL and compare only the H value, which is the hue. The other values change only the tone of the green, but a small range in the hue determines that it is green. This will probably give you more accurate results.
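A minimal sketch of such a hue-only comparison (this is only an illustration of the suggestion, not code from the question; note that OpenCV stores 8-bit hue in the range [0, 180)):

#include <algorithm>
#include <cstdlib>

// Circular distance between two OpenCV 8-bit hue values (0..179).
// Pick the predefined color whose hue minimizes this distance.
int hue_distance(int h1, int h2) {
    int d = std::abs(h1 - h2);
    return std::min(d, 180 - d); // hue wraps around
}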
As a suggestion to improve your code, use Vec3b and the cv::Mat accessor functions, like this:
for (int i = 0; i < lab_img.rows; ++i) {
    for (int j = 0; j < lab_img.cols; ++j) {
        // at<> returns a reference, so you can read and modify the pixel in place
        Vec3b &pixel = lab_img.at<Vec3b>(i, j);
    }
}
This way the code is more readable, and some checks are done in debug mode.
The other way would be to use a single loop, since you don't care about the indices:
// This assumes the Mat data is continuous (lab_img.isContinuous()).
auto currentData = reinterpret_cast<Vec3b*>(lab_img.data);
for (size_t i = 0; i < static_cast<size_t>(lab_img.rows) * lab_img.cols; i++)
{
    auto &pixel = currentData[i];
}
This way is also better. This last part is just a suggestion; there is nothing wrong with your current code, it is just harder to read and understand for an outside viewer.

Incorrect Pattern Image Generation in OpenCV

Content: Image Processing in OpenCV C++.
The requirement is to tile a Mat pattern of size 256 x 256 across an outer Mat image. The user specifies the width and the height of the outer Mat image.
To do this, I created the below OpenCV C++ function:
Mat GenerateDiagonalFade(int width, int height)
{
    // Creating a Mat image with user-defined dimensions
    Mat image(height, width, CV_8UC1, Scalar(0));

    // Looping through all rows and columns of the outer image
    for (int row = 0; row < image.rows; row++)
    {
        for (int col = 0; col < image.cols; col++)
        {
            // Here, I am giving the condition to access the pixel values
            // The pattern should be 256 x 256 and the tiles must fill the entire image
            if ((row % 256 + col % 256) <= 255)
            {
                image.at<uchar>(row, col) = (row % 256 + col % 256);
            }
            else
            {
                // Here is where I get the error
                image.at<uchar>(row, col) = abs(row % 256 - col % 256);
            }
        }
    }
    return image;
}
As you can see in the else statement above, I tried to take the inverse of the first condition and make the value absolute.
The output I get is as seen below:
The expected output is the inverse of the first part of the diagonal: darker to lighter shades towards the diagonal.
I tried replacing abs(row % 256 - col % 256); with many other statements, but I am stuck with this output.
The changes should be made only in the else statement; the rest of my code is correct, as half of my output (the top diagonal) is right.
I appreciate any help from you in solving this. Trust me, it's quite interesting to work out all the graphical [X-Y axis] and mathematical [pixel access] calculations needed to get the desired output.
I would begin by splitting the problem into two parts:
* Generating a single tile containing the correct pattern
* Using that tile (or algorithm) to generate the whole image
Generating a Tile
The goal is to generate a 256x256 grayscale image containing a gradient such that:
* The top left corner is all black
* The bottom right corner is all black
* The diagonal going from bottom left to top right is all white
You got the part above the diagonal right, but let's inspect that anyway.
The coordinates of the top left corner are (0, 0) and we expect intensity of 0. --> row + col == 0
The coordinates of one end of the diagonal are (255, 0) and we expect intensity of 255. --> row + col == 255
The other end of the diagonal is at (0, 255) -> row + col == 255
Let's try another point on the diagonal, (254,1) --> again row + col == 255
OK, now a point just above the diagonal, (254,0) -> row + col == 254 -- slightly less white, as we would expect.
Next, let's try a point just below the diagonal, say (255, 1) --> row + col == 256. If we cast this to an 8 bit integer, we get a 0, yet we expect 254, just like in the previous case.
Finally, bottom right corner (255, 255) -> row + col == 510. If we cast this to an 8 bit integer, we get a 254, yet we expect 0.
Let's try something:
256 + 254 == 510
510 + 0 == 510
In both cases the sum (row + col) plus the intensity we expect adds up to 510. And we see an algorithm:
* If the sum of row + col is less than 256, then use the sum
* Otherwise subtract the sum from 510 and use the result
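Mapped back onto the code from the question, only the else branch needs to change; a sketch keeping the question's modulo structure (510 is 2 * (256 - 1)):

else
{
    image.at<uchar>(row, col) = 510 - (row % 256 + col % 256);
}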
Sample code:
cv::Mat make_tile()
{
    int32_t const TILE_SIZE(256);
    cv::Mat image(TILE_SIZE, TILE_SIZE, CV_8UC1);

    for (int32_t r(0); r < TILE_SIZE; ++r) {
        for (int32_t c(0); c < TILE_SIZE; ++c) {
            int32_t sum(r + c);
            if (sum < TILE_SIZE) {
                image.at<uint8_t>(r, c) = static_cast<uint8_t>(sum);
            } else {
                image.at<uint8_t>(r, c) = static_cast<uint8_t>(2 * (TILE_SIZE - 1) - sum);
            }
        }
    }

    return image;
}
Single tile:
Generating Image of Tiles
Now that we have a complete tile, we can simply generate the full image by iterating over tile-sized ROIs of the target image, and copying a tile ROI of identical size to them.
Sample code:
#include <opencv2/opencv.hpp>

#include <cstdint>

cv::Mat make_tile()
{
    int32_t const TILE_SIZE(256);
    cv::Mat image(TILE_SIZE, TILE_SIZE, CV_8UC1);

    for (int32_t r(0); r < TILE_SIZE; ++r) {
        for (int32_t c(0); c < TILE_SIZE; ++c) {
            int32_t sum(r + c);
            if (sum < TILE_SIZE) {
                image.at<uint8_t>(r, c) = static_cast<uint8_t>(sum);
            } else {
                image.at<uint8_t>(r, c) = static_cast<uint8_t>(2 * (TILE_SIZE - 1) - sum);
            }
        }
    }

    return image;
}

int main()
{
    cv::Mat tile(make_tile());

    cv::Mat result(600, 800, CV_8UC1);

    for (int32_t r(0); r < result.rows; r += tile.rows) {
        for (int32_t c(0); c < result.cols; c += tile.cols) {
            // Handle incomplete tiles
            int32_t end_r(std::min(r + tile.rows, result.rows));
            int32_t end_c(std::min(c + tile.cols, result.cols));

            // Get current target tile ROI and source ROI of same size
            cv::Mat target_roi(result(cv::Range(r, end_r), cv::Range(c, end_c)));
            cv::Mat source_roi(tile(cv::Range(0, target_roi.rows), cv::Range(0, target_roi.cols)));

            // Copy the tile
            source_roi.copyTo(target_roi);
        }
    }

    cv::imwrite("gradient.png", tile);
    cv::imwrite("gradient_big.png", result);
}
Complete image:

OpenCV VLFeat Slic function call

I am trying to use the vl_slic_segment function of the VLFeat library with an input image stored in an OpenCV Mat. My code compiles and runs, but the output superpixel values do not make sense. Here is my code so far:
Mat bgrUChar = imread("/pathtowherever/image.jpg");

Mat bgrFloat;
bgrUChar.convertTo(bgrFloat, CV_32FC3, 1.0/255);

cv::Mat labFloat;
cvtColor(bgrFloat, labFloat, CV_BGR2Lab);

Mat labels(labFloat.size(), CV_32SC1);
vl_slic_segment(labels.ptr<vl_uint32>(), labFloat.ptr<const float>(), labFloat.cols, labFloat.rows, labFloat.channels(), 30, 0.1, 25);
I have tried not converting it to the Lab colorspace and setting different regionSize/regularization values, but the output is always very glitchy. I am able to retrieve the label values correctly; the thing is that every label is usually scattered over small non-contiguous areas.
I think the problem is that the format of my input data is wrong, but I can't figure out how to pass it properly to the vl_slic_segment function.
Thank you in advance!
EDIT
Thank you David. As you helped me understand, vl_slic_segment wants the data ordered as [LLLLL...AAAAA...BBBBB] (planar, channel by channel), whereas OpenCV orders its data as [LAB LAB LAB ...] (interleaved) for the Lab color space.
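In index terms (with row i, column j and channel c, matching the conversion loop in the answer below): the interleaved OpenCV layout stores a channel value at (i * cols + j) * channels + c, while the planar layout VLFeat expects stores it at c * rows * cols + i * cols + j.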
In the course of my bachelor thesis I have to use VLFeat's SLIC implementation as well. You can find a short example applying VLFeat's SLIC on Lenna.png on GitHub: https://github.com/davidstutz/vlfeat-slic-example.
Maybe a look at main.cpp will help you figure out how to convert the images obtained by OpenCV to the right format:
// OpenCV can be used to read images.
#include <opencv2/opencv.hpp>

// The VLFeat header files need to be declared external.
extern "C" {
    #include "vl/generic.h"
    #include "vl/slic.h"
}

int main() {
    // Read the Lenna image. The matrix 'mat' will have 3 8 bit channels
    // corresponding to BGR color space.
    cv::Mat mat = cv::imread("Lenna.png", CV_LOAD_IMAGE_COLOR);

    // Convert image to one-dimensional array.
    float* image = new float[mat.rows*mat.cols*mat.channels()];
    for (int i = 0; i < mat.rows; ++i) {
        for (int j = 0; j < mat.cols; ++j) {
            // Assuming three channels ...
            image[j + mat.cols*i + mat.cols*mat.rows*0] = mat.at<cv::Vec3b>(i, j)[0];
            image[j + mat.cols*i + mat.cols*mat.rows*1] = mat.at<cv::Vec3b>(i, j)[1];
            image[j + mat.cols*i + mat.cols*mat.rows*2] = mat.at<cv::Vec3b>(i, j)[2];
        }
    }

    // The algorithm will store the final segmentation in a one-dimensional array.
    vl_uint32* segmentation = new vl_uint32[mat.rows*mat.cols];
    vl_size height = mat.rows;
    vl_size width = mat.cols;
    vl_size channels = mat.channels();

    // The region size defines the number of superpixels obtained.
    // Regularization describes a trade-off between the color term and the
    // spatial term.
    vl_size region = 30;
    float regularization = 1000.;
    vl_size minRegion = 10;

    vl_slic_segment(segmentation, image, width, height, channels, region, regularization, minRegion);

    // Convert segmentation.
    int** labels = new int*[mat.rows];
    for (int i = 0; i < mat.rows; ++i) {
        labels[i] = new int[mat.cols];

        for (int j = 0; j < mat.cols; ++j) {
            labels[i][j] = (int) segmentation[j + mat.cols*i];
        }
    }

    // Compute a contour image: this actually colors every border pixel
    // red such that we get relatively thick contours.
    int label = 0;
    int labelTop = -1;
    int labelBottom = -1;
    int labelLeft = -1;
    int labelRight = -1;

    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            label = labels[i][j];

            labelTop = label;
            if (i > 0) {
                labelTop = labels[i - 1][j];
            }

            labelBottom = label;
            if (i < mat.rows - 1) {
                labelBottom = labels[i + 1][j];
            }

            labelLeft = label;
            if (j > 0) {
                labelLeft = labels[i][j - 1];
            }

            labelRight = label;
            if (j < mat.cols - 1) {
                labelRight = labels[i][j + 1];
            }

            if (label != labelTop || label != labelBottom || label != labelLeft || label != labelRight) {
                mat.at<cv::Vec3b>(i, j)[0] = 0;
                mat.at<cv::Vec3b>(i, j)[1] = 0;
                mat.at<cv::Vec3b>(i, j)[2] = 255;
            }
        }
    }

    // Save the contour image.
    cv::imwrite("Lenna_contours.png", mat);

    return 0;
}
In addition, have a look at README.md within the GitHub repository. The following figures show some example outputs, with the regularization set to 1, 100 and 1000, and the region size set to 30, 20 and 40.
Figure 1: Superpixel segmentation with region size set to 30 and regularization set to 1.
Figure 2: Superpixel segmentation with region size set to 30 and regularization set to 100.
Figure 3: Superpixel segmentation with region size set to 30 and regularization set to 1000.
Figure 4: Superpixel segmentation with region size set to 20 and regularization set to 1000.
Figure 5: Superpixel segmentation with region size set to 40 and regularization set to 1000.

Accessing certain pixel RGB value in openCV

I have searched the internet and Stack Overflow thoroughly, but I haven't found the answer to my question:
How can I get and set the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use - as far as I know it comes from the C API.
Yes, I'm aware that there was already this "Pixel access in OpenCV 2.2" thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend - thanks, Benny! It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
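For example, to set that pixel to pure red (remembering the BGR order shown above):

p->x = 0;   // B
p->y = 0;   // G
p->z = 255; // R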
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x); gives you the RGB (it might be ordered as BGR) vector of type cv::Vec3b
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes there is no row padding, i.e. the stride of each row is exactly the image width times the number of channels (frame.isContinuous()).
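If the Mat can have padded rows (for example when it is a ROI of a larger image), a safer sketch indexes each row through the actual stride:

// frame.step is the number of bytes per row, including any padding.
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];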
A piece of code is easier for people who have this problem, so I am sharing my code; you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_;
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
        for (int j = 0; j < vWidth_; j++)
        {
            // Please note that OpenCV stores pixels as BGR.
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0);
            vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_;
            ++src_;
        }
}

if (!vImage_.data) // Check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // Show the image.
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
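For instance, a minimal sketch of creating a 3-dimensional Mat and using the three-index access (the sizes here are arbitrary placeholders):

int sizes[] = {10, 20, 3};
cv::Mat m(3, sizes, CV_8UC1, cv::Scalar(0)); // a 10 x 20 x 3 volume of bytes
m.at<uchar>(0, 0, 0) = 255;                  // first element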
// Walk over every byte of the image data and write the repeating
// byte pattern 255, 0, 0 (one channel value per iteration).
uchar *value = img2.data; // pointer to the first pixel's data; the whole image is one contiguous array of channel values
int r = 2;
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
const double pi = boost::math::constants::pi<double>();

// Sets the green channel to 255 for every pixel that lies inside the given rotated ellipse.
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width / 2;
    float minor_axis = ellipse.size.height / 2;
    cv::Point pixel;
    float a, b, c, d;

    for (int x = 0; x < image.cols; x++)
    {
        for (int y = 0; y < image.rows; y++)
        {
            // Rotate the pixel into the ellipse's coordinate frame.
            auto u = cos(angle*pi/180) * (x - ellipse_center.x) + sin(angle*pi/180) * (y - ellipse_center.y);
            auto v = -sin(angle*pi/180) * (x - ellipse_center.x) + cos(angle*pi/180) * (y - ellipse_center.y);

            // Normalized ellipse equation: a value <= 1 means the point is inside.
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if (distance <= 1)
            {
                image.at<cv::Vec3b>(y, x)[1] = 255; // green channel
            }
        }
    }
    return image;
}