I don't understand how to convert sRGB to CIELAB and back. Please help me; ideally with C++ code.
Convert from sRGB to linear RGB by applying the inverse gamma (transfer) function, convert linear RGB to XYZ using the 3x3 sRGB matrix, then convert XYZ to Lab using the standard formula and the D65 white point.
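A minimal C++ sketch of those steps, assuming per-channel input normalized to [0, 1] and the D65 reference white (function names are my own):
#include <cmath>

// sRGB -> linear RGB: inverse of the sRGB transfer function
double srgbToLinear(double c)
{
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

// Piecewise helper used by the standard XYZ -> Lab formula
double labF(double t)
{
    const double eps   = 216.0 / 24389.0;   // (6/29)^3
    const double kappa = 24389.0 / 27.0;
    return (t > eps) ? std::cbrt(t) : (kappa * t + 16.0) / 116.0;
}

// sRGB in [0, 1] per channel -> CIELAB (L in [0, 100])
void srgbToLab(double R, double G, double B, double &L, double &a, double &b)
{
    double r = srgbToLinear(R), g = srgbToLinear(G), bl = srgbToLinear(B);

    // linear RGB -> XYZ with the sRGB/D65 matrix
    double X = 0.4124564 * r + 0.3575761 * g + 0.1804375 * bl;
    double Y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * bl;
    double Z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * bl;

    // normalize by the D65 white point, then apply the Lab formula
    const double Xn = 0.95047, Yn = 1.0, Zn = 1.08883;
    double fx = labF(X / Xn), fy = labF(Y / Yn), fz = labF(Z / Zn);
    L = 116.0 * fy - 16.0;
    a = 500.0 * (fx - fy);
    b = 200.0 * (fy - fz);
}
The inverse direction applies the same steps backwards: Lab -> XYZ, multiply by the inverted matrix, then apply the forward sRGB gamma.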
References:
Lab http://en.wikipedia.org/wiki/Lab_color_space
sRGB http://en.wikipedia.org/wiki/SRGB_color_space
The rest... you can do on your own :-)
Just in case, I have prepared a list of links (in different programming languages) that can be helpful for the conversion process (sRGB to LAB and back) and also for the conversion of sRGB to linear RGB; a short decode sketch follows the list. Linear RGB can further be used for white balance and color calibration of an image (given a color patch, like a Macbeth color chart).
Interesting links:
(i) Understanding sRGB and linear RGB space: http://filmicgames.com/archives/299; http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
(ii) MATLAB tutorial: https://de.mathworks.com/help/vision/ref/colorspaceconversion.html
(iii) Python package: http://pydoc.net/Python/pwkit/0.2.1/pwkit.colormaps/
(iv) C code: http://svn.int64.org/viewvc/int64/colors/color.c?view=markup
(v) OpenCV does not expose an sRGB to linear RGB conversion, but it performs the conversion inside color.cpp (OpenCV_DIR\modules\imgproc\src\color.cpp). Check out the method called initLabTabs(); it contains the gamma encoding and decoding. OpenCV color conversion API: http://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
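As a small illustration of the table-based decoding mentioned in (v), here is a hedged C++ sketch; it mirrors the idea of OpenCV's internal gamma tables, not their exact code:
#include <array>
#include <cmath>
#include <cstdint>

// Build a 256-entry table mapping 8-bit sRGB values to linear-light [0, 1].
std::array<double, 256> makeSrgbDecodeTable()
{
    std::array<double, 256> table{};
    for (int i = 0; i < 256; ++i) {
        double c = i / 255.0;
        table[i] = (c <= 0.04045) ? c / 12.92
                                  : std::pow((c + 0.055) / 1.055, 2.4);
    }
    return table;
}

// linear [0, 1] -> 8-bit sRGB (the forward transfer function)
std::uint8_t linearToSrgb8(double c)
{
    double s = (c <= 0.0031308) ? 12.92 * c
                                : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
    return static_cast<std::uint8_t>(std::lround(s * 255.0));
}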
I exported a grayscale image as PGM using MATLAB (and OpenCV) and got this file as an output.
im = imread(src);
im = rgb2gray(im);
imwrite(im, dst);
According to the PGM Specification the header contains the "magic number", the width, the height, and the max value of the image.
But below the header there should be a matrix of grayscale intensity values written in plain text. Yet as you can see in the pasted file, I just get what looks like junk (although it is a completely valid, viewable image).
I want to be able to read in PGM files and access the individual intensity values as integers using a C/C++ program, but I don't know how to interpret this output, since it doesn't seem to follow the spec. Perhaps the text encoding is different?
Thanks for any assistance.
You're misreading the spec.
Each gray value is represented in pure binary by either 1 or 2 bytes. If the Maxval is less than 256, it is 1 byte. Otherwise, it is 2 bytes. The most significant byte is first.
So each pixel is either one or two bytes (depending on the Maxval) and in binary, not ASCII.
I think you're reading the definition of the "plain" format (magic number P2), but you have a "raw" PGM file (magic number P5). You might want to pipe it through pnmtopnm -plain to get the ASCII-format encoding.
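If you want to read the raw (P5) format directly, a minimal C++ sketch follows; it assumes an 8-bit maxval and a header without '#' comment lines, which a robust reader would also have to handle:
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Read an 8-bit binary PGM ("P5", maxval <= 255, no comment lines).
bool readPgmP5(const std::string &path, int &width, int &height,
               std::vector<std::uint8_t> &pixels)
{
    std::ifstream in(path, std::ios::binary);
    std::string magic;
    int maxval = 0;
    in >> magic >> width >> height >> maxval;
    if (!in || magic != "P5" || maxval > 255)
        return false;
    in.get(); // consume the single whitespace byte after maxval
    pixels.resize(static_cast<std::size_t>(width) * height);
    in.read(reinterpret_cast<char *>(pixels.data()),
            static_cast<std::streamsize>(pixels.size()));
    return static_cast<bool>(in);
}
Each entry of pixels is then the integer intensity of one pixel, row by row.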
So if I have a colour circle ranging from red to violet (0-360 degrees), can I get a colour from an angle? I have been searching, but I have only found code to convert between different formats and nothing really to do with angles. I would really like to know the math that goes behind this.
I'm just writing a C++ program for my Arduino with a joystick and an RGB LED. I've got the easy stuff done, but I don't even know where to begin with the colour.
The RGB color space is based on Cartesian coordinates. If you want an angle, that means you want something akin to polar coordinates; the color space you are looking for is either HSL or HSV.
https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSV
In HSV you can, for example, use maximum Saturation and maximum Value; then you only have to pick the Hue (which is an angle).
That being said, you can also make up your own mapping and use, for example:
(R, G, B) = (255*cos(x), 255*cos(x + 120), 255*cos(x - 120))
where cos works in degrees and negative results are clamped to 0.
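If you take the HSV route with maximum Saturation and Value, the hue-to-RGB conversion reduces to a small piecewise function. A minimal C++ sketch (names are my own):
#include <cmath>

// Map a hue angle in degrees (0..360) to RGB in [0, 1],
// using the standard HSV formula with S = V = 1.
void hueToRgb(float hue, float &r, float &g, float &b)
{
    float h = std::fmod(hue, 360.0f) / 60.0f;              // sector 0..6
    float x = 1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f); // rising/falling channel
    switch (static_cast<int>(h)) {
        case 0:  r = 1; g = x; b = 0; break;  // red     -> yellow
        case 1:  r = x; g = 1; b = 0; break;  // yellow  -> green
        case 2:  r = 0; g = 1; b = x; break;  // green   -> cyan
        case 3:  r = 0; g = x; b = 1; break;  // cyan    -> blue
        case 4:  r = x; g = 0; b = 1; break;  // blue    -> magenta
        default: r = 1; g = 0; b = x; break;  // magenta -> red
    }
}
On the Arduino you would scale each channel to 0..255 before writing it to the LED's PWM pins.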
So I'm starting a project about image processing with C++.
The thing is that everything I find online about this matter (blurring an image in C++) relies either on CUDA or on OpenCV.
Is there a way to blur an image with C++ only? (for starters)
If yes, can somebody please share the code or explain?
Thanks!
Firstly you need the image in memory.
Then you need a second buffer to use as a workspace.
Then you need a filter. A common filter would be
1 4 1
4 -20 4
1 4 1
For each pixel, we apply the filter: we set the output to a weighted sum of the pixels around it, with the weights summing to zero so that the overall image doesn't drift lighter or darker.
Applying a small filter is very simple.
for (int y = 0; y < height - 2; y++)
    for (int x = 0; x < width - 2; x++)
    {
        /* start from the centre pixel, then add the zero-sum filter response */
        int total = image[(y + 1) * width + (x + 1)];
        for (int fy = 0; fy < 3; fy++)
            for (int fx = 0; fx < 3; fx++)
                total += image[(y + fy) * width + (x + fx)] * filter[fy * 3 + fx];
        output[(y + 1) * width + (x + 1)] = clamp(total, 0, 255);
    }
You need to special-case the edges, which is just fiddly but doesn't add any theoretical complexity.
When we use faster algorithms than the naive one, it becomes important to set up the edges correctly. You then do the calculations in the frequency domain, and with a big filter that is a lot faster.
If you would like to implement the blurring on your own, you have to somehow store the image in memory. If you have a black and white image, an
unsigned char[width*height]
might be sufficient to store the image; if it is a colour image, you will need the same kind of array, but three or four times the size (one plane for each colour channel, plus one for the so-called alpha value, which describes the opacity).
For the black-and-white case, you would have to sum up the neighbours of each pixel and calculate their average; this approach transfers to colour images by applying the operation to each colour channel.
The operation described above is a special case of the so-called kernel filter, which can also be used to implement different operations.
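A minimal C++ sketch of that neighbour-averaging for a grayscale buffer, as a 3x3 box blur (border pixels are simply copied; names are my own):
#include <cstdint>
#include <vector>

// 3x3 box blur on a grayscale image; border pixels are left unchanged.
std::vector<std::uint8_t> boxBlur3x3(const std::vector<std::uint8_t> &src,
                                     int width, int height)
{
    std::vector<std::uint8_t> dst(src); // the copy keeps the borders as-is
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int sum = 0;
            for (int fy = -1; fy <= 1; ++fy)
                for (int fx = -1; fx <= 1; ++fx)
                    sum += src[(y + fy) * width + (x + fx)];
            dst[y * width + x] = static_cast<std::uint8_t>(sum / 9);
        }
    }
    return dst;
}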
I want to implement a function in C++/RealBasic to create a color gradient from the parameters:
Width and height of the image
2 colors of the gradient
Angle (direction) of the gradient
Strength of the gradient
The following links show some examples of the desired output image:
http://www.artima.com/articles/linear_gradients_in_flex_4.html, http://i.stack.imgur.com/4ssfj.png
I have found multiple examples, but they give me only vertical and horizontal gradients, while I want to specify the angle and strength too.
Can someone help me please?
P.S.: I know only a little about geometry!! :(
Your question is very broad, and as asked, this is a pretty complex exercise with a lot of code, including image rendering, image format handling, writing the file to disk, etc. These are not the matter of a single function. Because of this, I will focus on producing an arbitrary linear color gradient of 2 colors.
Linear color gradient
You can create a linear color gradient by linear interpolation between 2 colors. However, simple linear interpolation makes really harsh-looking transitions. For visually more appealing results, I recommend using some kind of S-shaped interpolation curve, like the Hermite-interpolation-based smoothstep.
Regarding the angle, you can define a line segment by the start (p0) and end (p1) points of the color gradient. Let's call the distance between them d01, so d01 = distance(p0, p1). Then, for each pixel point p of the image, you have to compute the closest point p2 on this segment. Here is an example of how to do that. Then compute t = distance(p0, p2) / d01. This will be the lerp parameter t in the range [0, 1].
Interpolate between the 2 gradient colors with this t, and you have the color for the given point p.
This can be implemented in multiple ways. You can use OpenGL to render the image, then read the pixel buffer back into RAM. If you are not familiar with OpenGL or the rendering process, you can write a function which takes a point (the 2D coordinates of a pixel) and returns an RGB color; that way you can compute all the pixels of the image. Finally, you can write the image to disk using an image format, but that's another story.
The following are example C++14 implementations of some functions mentioned above.
Simple linear interpolation:
template <typename T, typename U>
T lerp(const T &a, const T &b, const U &t)
{
return (U(1) - t)*a + t*b;
}
, where a and b are the two values (colors in this case) you want to interpolate between, and t is the interpolation parameter in the range [0, 1] representing the transition between a and b.
Of course the above function requires a type T which supports multiplication by a scalar. You can simply use any 3D vector type for this purpose, since colors are actually coordinates in color space.
Distance between two 2D points, with a minimal Point2 type added so the snippet is self-contained:
#include <cmath>

struct Point2 { double x, y; };

Point2 operator-(const Point2 &a, const Point2 &b)
{
    return { a.x - b.x, a.y - b.y };
}

auto length(const Point2 &p)
{
    return std::sqrt(p.x*p.x + p.y*p.y);
}

auto distance(const Point2 &a, const Point2 &b)
{
    Point2 delta = b - a;
    return length(delta);
}
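Putting the pieces together, here is a hedged sketch of the per-pixel gradient evaluation, assuming the lerp, Point2, and distance definitions above plus a minimal Color type (my own names):
#include <algorithm>

struct Color { float r, g, b; };

Color operator*(float s, const Color &c) { return { s * c.r, s * c.g, s * c.b }; }
Color operator+(const Color &a, const Color &b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }

// Hermite S-curve; clamping also handles points projecting outside the segment.
float smoothstep01(float t)
{
    t = std::min(std::max(t, 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Color at point p for a linear gradient running from c0 at p0 to c1 at p1.
Color gradientAt(const Point2 &p, const Point2 &p0, const Point2 &p1,
                 const Color &c0, const Color &c1)
{
    Point2 d = p1 - p0;
    double d01sq = d.x * d.x + d.y * d.y;
    // dot(p - p0, d) / |d|^2 equals distance(p0, p2) / d01 for the
    // projected closest point p2, so p2 never has to be computed explicitly.
    double t = ((p.x - p0.x) * d.x + (p.y - p0.y) * d.y) / d01sq;
    return lerp(c0, c1, smoothstep01(static_cast<float>(t)));
}
Calling gradientAt for every pixel coordinate of the image yields the full gradient.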
Image from https://developer.mozilla.org/en-US/docs/Web/CSS/linear-gradient
I am trying to understand how the marching cubes algorithm works.
Source:
http://paulbourke.net/geometry/polygonise/
What I don't understand is how you calculate the "GRIDCELL" values. To be exact, the
double val[8];
part; it is not clear to me what it is actually supposed to contain.
typedef struct {
    XYZ p[8];
    double val[8];
} GRIDCELL;
As I understand it, XYZ p[8]; holds the vertex coordinates of the cube. But what is val[8];?
The marching cubes algorithm is, as explained in the linked description, an algorithm to build a polygonal representation from sampled data. The
double val[8];
are the samples for the 8 vertices of the cube. So they are not computed; they are measurements from, e.g., MRI scans. The algorithm works the other way around: it takes a set of measured numbers and constructs a surface representation for visualization from them.
The val is the level of "charge" at each vertex of the cell; it depends on the type of shape that you want to create.
For example, if you want to make a ball, you can sample the values with the formula:
for (int l = 0; l < 8; ++l) {
    float distance = sqrtf(powf(cell.p[l].x - chargepos.x, 2.0f) +
                           powf(cell.p[l].y - chargepos.y, 2.0f) +
                           powf(cell.p[l].z - chargepos.z, 2.0f));
    cell.val[l] = chargevalue / powf(distance, 2.0f);
}
After further reading and research, the explanation is quite simple.
First of all:
A voxel represents a value on a regular grid in three-dimensional space.
This value is a sample of the scalar field from which the isosurface is extracted; in other words, it describes the density of the space at that point.
double val[8];
To simplify:
basically, this should be a value between -1.0f and 0.0f,
where -1.0f means solid and 0.0f means empty space.
Perlin/simplex noise, for example, can be used to generate the iso values.
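As a small illustration, a hedged sketch that fills one GRIDCELL with a sphere's signed distance field; corners with val < 0 lie inside the sphere, so the isolevel 0 is the sphere's surface (everything except GRIDCELL and XYZ is my own naming):
#include <cmath>

// Sample a sphere's signed distance at each of the cell's 8 corners.
void fillCellWithSphere(GRIDCELL &cell, const XYZ &center, double radius)
{
    for (int i = 0; i < 8; ++i) {
        double dx = cell.p[i].x - center.x;
        double dy = cell.p[i].y - center.y;
        double dz = cell.p[i].z - center.z;
        cell.val[i] = std::sqrt(dx * dx + dy * dy + dz * dz) - radius;
    }
}
Polygonising such cells at isolevel 0 reconstructs the sphere.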