Function parameters: input image, first color, second color
I take an image, look at its height and width, and iterate through every pixel. If a pixel's color is closest to the first color (color1), I change that pixel to color1; if it is closest to color2, I change it to color2. I believe my problem is in the expression abs(color2 - color1) / 2, where I try to compare the two parameter colors.
void Preprocessor(BMP pix, RGB color1, RGB color2) {
    int height = pix.GetHeight();
    int width = pix.GetWidth();
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (pix[i][j]->red + pix[i][j]->green + pix[i][j]->blue >
                abs(color2 - color1) / 2) { // pixel color closest to color1
                pix[i][j]->red = color1->red;
                pix[i][j]->green = color1->green;
                pix[i][j]->blue = color1->blue;
            } else { // pixel color closest to color2
                pix[i][j]->red = color2->red;
                pix[i][j]->green = color2->green;
                pix[i][j]->blue = color2->blue;
            }
        }
    }
}
Choosing the correct metric
Deciding which of two colors a given color is closer to is a non-trivial problem, and there are several ways to approach it. You might want colors with roughly the same luminosity, or the same hue, or the same vibrance, or something else entirely.

You chose abs(color2 - color1) / 2, which has no intuitive meaning; consider explaining the reasoning behind this exact expression.

I suggest you start with something like brightness: estimate the distance of the pixel's color from each reference color using the Taxicab (Manhattan) metric, and then pick whichever of the two is closer.
// Taxicab metric (Manhattan)
double distance(RGB c1, RGB c2) {
    return abs(c1->red - c2->red)
         + abs(c1->green - c2->green)
         + abs(c1->blue - c2->blue);
}
void Preprocessor(BMP pix, RGB color1, RGB color2) {
    int height = pix.GetHeight();
    int width = pix.GetWidth();
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            double d1 = distance(color1, pix[i][j]);
            double d2 = distance(color2, pix[i][j]);
            if (d1 < d2) { // pixel color closest to color1
                pix[i][j] = color1;
            } else { // pixel color closest to color2
                pix[i][j] = color2;
            }
        }
    }
}
Considerations
You might also want to experiment with other metrics (for example Euclidean distance) and with color spaces that are better suited to this kind of comparison, such as HSV or HSL.
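For reference, a Euclidean variant of the distance function could look roughly like the sketch below. It assumes the same RGB accessors used above; comparing squared distances preserves the ordering and avoids the square root.

// Euclidean metric (squared): same ordering as the true Euclidean distance, no sqrt needed
double distanceSquared(RGB c1, RGB c2) {
    double dr = c1->red - c2->red;
    double dg = c1->green - c2->green;
    double db = c1->blue - c2->blue;
    return dr * dr + dg * dg + db * db;
}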
Related
I'm making a small game engine in which I want to draw things using OpenGL. I abstracted all the OpenGL objects into classes (Buffers, VertexArrays, Shaders, Programs...). Everything worked fine until I got to 3D rendering. I implemented my own matrices and vectors (I didn't use something like glm), and when I multiply my vertex position in the shader by any matrix, the z coordinate flips (z = -z). I even tried it with the identity matrix. Here is the vertex shader:
#version 330 core
layout(location = 0) in vec4 i_pos;
layout(location = 1) in vec4 i_color;
out vec4 p_color;
uniform mat4 u_MVP;
uniform vec4 u_pos;
void main()
{
    gl_Position = u_MVP * (i_pos + u_pos);
    p_color = i_color;
}
I use the u_pos uniform just for debugging purposes. And here is where I set the uniforms:
void Frame() override
{
    deltaTime = timer.Reset();

    if (Input::GetKey(Key::W).value == KeyDown) pos.z += deltaTime;
    if (Input::GetKey(Key::S).value == KeyDown) pos.z -= deltaTime;

    // mat4f(1.0f) creates an identity matrix
    shaderSelection.SetUniform("u_MVP", mat4f(1.0f));
    shaderSelection.SetUniform("u_pos", vec4f(pos));

    ren.DrawTriangles(vertexArray, indexBuffer, shaderSelection);
}
Although I'm sure there's nothing wrong with the matrix struct, here it is:
template<typename T = float, int sizeX = 4, int sizeY = 4>
struct BLAZE_API mat
{
private:
    T v[sizeY][sizeX];

public:
    mat()
    {
        for (unsigned i = 0; i < sizeX * sizeY; i++)
            ((T*)v)[i] = 0;
    }

    mat(T* ptr, bool transpose = false)
    {
        if (transpose)
            for (unsigned i = 0; i < sizeX * sizeY; i++)
                ((T*)v)[i] = ptr[i];
        else
            for (unsigned i = 0; i < sizeX * sizeY; i++)
                ((T*)v)[i] = ptr[i % sizeY * sizeX + i / sizeY];
    }

    mat(T n)
    {
        for (int x = 0; x < sizeX; x++)
            for (int y = 0; y < sizeY; y++)
                if (x == y)
                    operator[](x)[y] = n;
                else
                    operator[](x)[y] = 0;
    }

    mat(const mat<T, sizeX, sizeY>& mat)
    {
        for (int x = 0; x < sizeX; x++)
            for (int y = 0; y < sizeY; y++)
                v[x][y] = mat[x][y];
    }

    inline T* operator[] (unsigned i) const { return (T*)(v[i]); }

    inline void operator= (const mat<T, sizeX, sizeY>& mat)
    {
        for (int x = 0; x < sizeX; x++)
            for (int y = 0; y < sizeY; y++)
                v[x][y] = mat[x][y];
    }
};
And the SetUniform does this:
glUniformMatrix4fv( ... , 1, GL_FALSE, m[0]);
I made the matrix struct in such a way that I don't have to pass GL_TRUE for the transpose parameter of glUniformMatrix4fv. I'm fairly sure it isn't my matrix implementation that is inverting the z coordinate.

It is as if the camera were looking in the -Z direction, yet when I move an object in the +X direction it also moves in +X on the screen (the same applies to the Y direction), which it shouldn't if the camera were facing -Z.

Is this supposed to happen, and if so, can I change it?
If you do not transform the vertex coordinates (or transform them by the identity matrix), then you are setting coordinates directly in normalized device space. Normalized device coordinates (NDC) form a cube whose left, bottom, near corner is at (-1, -1, -1) and whose right, top, far corner is at (1, 1, 1). That means the X-axis points to the right, the Y-axis points upwards, and the Z-axis points into the view.

In general, the OpenGL coordinate system is a right-handed system. In view space the X-axis points to the right and the Y-axis points up.

Since the Z-axis is the cross product of the X-axis and the Y-axis, it points out of the viewport and therefore appears to be inverted.

To compensate for this difference in the direction of the Z-axis between view space and normalized device space, the Z-axis has to be inverted.

A typical OpenGL projection matrix (e.g. glm::ortho, glm::perspective or glm::frustum) turns the right-handed system into a left-handed one and mirrors the Z-axis.

That means that if you use a (typical) projection matrix (and no other transformations), the vertex coordinates you supply are treated as view-space coordinates: the X-axis points to the right, the Y-axis points upwards, and the Z-axis points out of the view.

In simplified words: in normalized device space the camera points along +Z, while in view space (before the transformation by a typical projection matrix) the camera points along -Z.
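To illustrate where the mirroring happens, here is a rough sketch of a glm::perspective-style projection matrix written out by hand (the column-major float array is my own assumption for this sketch, not code from the question):

#include <cmath>

// Sketch of a typical perspective projection matrix (column-major, as glm stores it).
// The -1 written into column 2, row 3 is what mirrors the Z-axis and turns the
// right-handed view space into the left-handed normalized device space.
void perspective(float m[4][4], float fovY, float aspect, float zNear, float zFar)
{
    const float f = 1.0f / std::tan(fovY * 0.5f);

    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            m[c][r] = 0.0f;

    m[0][0] = f / aspect;
    m[1][1] = f;
    m[2][2] = (zFar + zNear) / (zNear - zFar);
    m[2][3] = -1.0f;                                // w_clip = -z_view: the Z mirror
    m[3][2] = (2.0f * zFar * zNear) / (zNear - zFar);
}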
Note that if you set up a viewing frustum, then 0 < near and near < far must both hold. The geometry has to lie between the near and the far plane, otherwise it is clipped. Usually a view matrix is used to look at the scene from a certain point of view, and the near and far planes of the viewing frustum are chosen so that the geometry lies in between.

Since depth is not linear (see How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?), the near plane should be placed as close to the geometry as possible.
I've been trying to draw a circle in C++ using OpenGL. So far I have a compressed circle, and there is a random line going across the screen.
This is the function I'm using to get this shape.
void Sprite::init(int x, int y, int width, int height, Type mode, float scale) {
    _x = x;
    _y = y;
    _width = width;
    _height = height;

    // generate the buffer if it hasn't been generated yet
    if (_vboID == 0) {
        glGenBuffers(1, &_vboID);
    }

    Vertex vertexData[360];

    if (mode == Type::CIRCLE) {
        float rad = 3.14159;
        for (int i = 0; i < 359; i++) {
            vertexData[i].setPosition((rad * scale) * cos(i), (rad * scale) * sin(i));
        }
    }

    // Tell OpenGL to bind our vertex buffer object
    glBindBuffer(GL_ARRAY_BUFFER, _vboID);
    // Upload the data to the GPU
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
    // Unbind the buffer
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
What is causing the line? Why is my circle being compressed?

Sorry if this is a dumb question or if it doesn't belong on this website; I'm very new to both C++ and this site.
It is difficult to be sure without testing the code myself, but I'll guess anyway.
Your weird line is probably caused by the buffer not being fully initialized. This is wrong:
Vertex vertexData[360];
for (int i = 0; i < 359; i++) {
It should be:
for (int i = 0; i < 360; i++) {
or else the position at vertexData[359] is left uninitialized and contains some far away point.
About the ellipse instead of a circle: that is probably caused by your viewport not having the same scale horizontally and vertically. If you configure the viewport plus the transformation matrices to give a viewing frustum of X = -10..10, Y = -10..10, but the actual viewport is X = 0..800 and Y = 0..600, for example, then the scales differ and the image gets distorted.
The solution would be one of:

- Create a square viewport instead of a rectangular one. Check your arguments to glViewport().
- Define a view matrix that accounts for the aspect ratio of your viewport. You don't show how you set the view/world matrix; maybe you are not using matrices at all, and if that is the case, you should probably use one (see the sketch below).
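For illustration, assuming an 800x600 window (the numbers here are made up), the two options could look roughly like this:

// Option 1: force a square viewport so one unit maps to the same number of
// pixels in X and Y (at the cost of leaving part of the window unused).
glViewport(0, 0, 600, 600);

// Option 2: keep the full viewport but divide the X coordinate by the aspect
// ratio when generating the circle vertices (or bake the ratio into a matrix).
float aspect = 800.0f / 600.0f;
// vertexData[i].setPosition((x + scale * cos(angle)) / aspect,
//                            y + scale * sin(angle));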
I don't understand exactly what you want to obtain, but... cos() and sin() take an argument in radians; so, instead of cos(i) and sin(i), I suppose you need cos((2 * rad * i) / 360.0) and sin((2 * rad * i) / 360.0) or, simplified, cos((rad * i) / 180.0) and sin((rad * i) / 180.0).
And what about the center and the radius of the circle?

Should (x, y) be the center of the circle? Is scale the radius?

In that case, I suppose you should write something like this (caution: not tested):
Vertex vertexData[360];
float rad = 3.14159;

if (mode == Type::CIRCLE) {
    for (int i = 0; i < 360; ++i) {
        float angle = (rad / 180) * i; // (thanks Rodrigo)
        vertexData[i].setPosition(x + scale * cos(angle), y + scale * sin(angle));
    }
}
or, losing a little precision but avoiding some multiplications,
Vertex vertexData[360];
float rad = 3.14159;
float angIncr = rad / 180.0;
float angle = 0.0;

if (mode == Type::CIRCLE) {
    for (int i = 0; i < 360; ++i, angle += angIncr) {
        vertexData[i].setPosition(x + scale * cos(angle), y + scale * sin(angle));
    }
}
But what about width and height?

P.S.: sorry for my bad English.

--- modified with a suggestion from Rodrigo ---
I am trying to implement a Gaussian blur with a convolution matrix in my shader.

This is the code I have:
float4 ppPS(float2 uv : TEXCOORD0, uniform sampler2D t1) : COLOR {
    // kernel matrix
    float3x3 kernel = {1*(1/16), 2*(1/16), 1*(1/16),
                       2*(1/16), 4*(1/16), 2*(1/16),
                       1*(1/16), 2*(1/16), 1*(1/16)
                      };

    int x, y;
    float2 sum = 0;

    for (x = -1; x <= 1; x++)
    {
        for (y = -1; y <= 1; y++)
        {
            float2 fl;
            fl.x = uv.x + x;
            fl.y = uv.y + y;
            sum += (fl) * (kernel[x+1][y+1]);
        }
    }

    return tex2D(t1, sum);
}
But for some reason I get a picture that is all one solid color.

Here is the image without the blur:

Here is the image with the so-called blur:

Any idea what I am doing wrong here?
Try changing the float3x3 initializer values to floating-point literals (.0f); otherwise each 1/16 is an integer division and all the values end up as 0.
// kernel matrix
static const float3x3 kernel = {1*(1.0f/16.0f), 2*(1.0f/16.0f), 1*(1.0f/16.0f),
                                2*(1.0f/16.0f), 4*(1.0f/16.0f), 2*(1.0f/16.0f),
                                1*(1.0f/16.0f), 2*(1.0f/16.0f), 1*(1.0f/16.0f)
                               };
After this change you should no longer get the blank output image.
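The underlying pitfall is ordinary integer division, which behaves the same way outside the shader; a tiny C++ illustration:

#include <iostream>

int main()
{
    float a = 1 / 16;        // integer division happens first: 0, then converted to 0.0f
    float b = 1.0f / 16.0f;  // floating-point division: 0.0625f
    std::cout << a << " " << b << "\n";  // prints: 0 0.0625
    return 0;
}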
I have to draw a conical gradient in Qt with C++, but I cannot use QConicalGradient. I managed to do a linear gradient, but I do not know how to make a conical one. I do not want finished code, just a simple algorithm.
for (int y = 0; y < image.height(); y++) {
    QRgb *line = (QRgb *)image.scanLine(y);
    for (int x = 0; x < image.width(); x++) {
        QPoint currentPoint(x, y);
        QPoint relativeToCenter = currentPoint - centerPoint;
        float angle = atan2(relativeToCenter.y(), relativeToCenter.x());
        // I have a problem in this line because I don't know how to set a color:
        float hue = map(-M_PI, angle, M_PI, 0, 255);
        line[x] = (red << 16) + (grn << 8) + blue;
    }
}
Can you help me?
Here is some pseudo code:
Given some area to paint on, and a defined center for your gradient...
For each point that you are painting on in the area, calculate the angle to the center of your gradient.
// QPoint currentPoint; // created/populated with an x, y value by two for loops
QPoint relativeToCenter = currentPoint - centerPoint;
angle = atan2(relativeToCenter.y(), relativeToCenter.x());
Then map that angle to a color using your linear gradient, or some sort of mapping function.
float hue = map(-PI, angle, PI, 0, 255); // convert angle in radians to value
// between 0 and 255
Paint that pixel, and repeat for every pixel in your area.
EDIT: Depending on the pattern of the gradient, you will want to create a different QColor pixel. For example if you had a "rainbow" gradient, just going from one hue to the next, you could use a linear mapping function like this:
float map(float x1, float x, float x2, float y1, float y2)
{
    // clamp x into [x1, x2]
    if (x < x1)
        x = x1;
    if (x > x2)
        x = x2;

    return y1 + (y2 - y1) / (x2 - x1) * (x - x1);
}
Then you create a QColor object using the outputted value:
float hue = map(-PI, angle, PI, 0, 255); // convert angle in radians to value
// between 0 and 255
QColor c;
c.setHsl( (int) hue, 255, 255);
Then use this QColor object with the QPainter, QBrush, or QPen you are using. Or, if you are writing a qRgb value back directly:
line[x] = c.rgb();
http://qt-project.org/doc/qt-4.8/qcolor.html
Hope that helps.
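Putting those pieces together, a rough, untested sketch of the whole loop might look like this (the center point and the hue mapping are just illustrative choices, not from the question):

#include <QImage>
#include <QColor>
#include <QPoint>
#include <cmath>

QImage conicalGradient(int w, int h)
{
    QImage image(w, h, QImage::Format_RGB32);
    const QPoint centerPoint(w / 2, h / 2);

    for (int y = 0; y < image.height(); ++y) {
        QRgb *line = reinterpret_cast<QRgb *>(image.scanLine(y));
        for (int x = 0; x < image.width(); ++x) {
            const QPoint relativeToCenter = QPoint(x, y) - centerPoint;
            // Angle around the center, in the range -pi..+pi.
            const float angle = std::atan2(float(relativeToCenter.y()),
                                           float(relativeToCenter.x()));
            // Map the angle to a hue in 0..359 and build a fully saturated color.
            const int hue = int((angle + M_PI) / (2.0 * M_PI) * 359.0);
            QColor c;
            c.setHsv(hue, 255, 255);
            line[x] = c.rgb();
        }
    }
    return image;
}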
I am after a smooth, texture-based outline effect in OpenGL. So far I have tried all kinds of edge detection algorithms, which mostly result in crude and jagged outlines. Then I read about distance fields. I found an example that produces a pretty nice distance field. Here is the GLSL code:
#version 420

layout(binding=0) uniform sampler2D colorMap;

flat in vec4 diffuseOut;
in vec2 uvsOut;

out vec4 outputColor;

const float ALPHA_THRESHOLD = 0.9;
const float NUM_SPOKES = 36.0;      // Number of radiating lines to check in.
const float ANGULAR_STEP = 360.0 / NUM_SPOKES;
const int ZERO_VALUE = 128;         // Color channel containing 0 => -128, 128 => 0, 255 => +127

int in_StepSize = 15;               // Distance to check each time (larger steps will be faster, but less accurate).
int in_MaxDistance = 30;            // Maximum distance to search out to. Cannot be more than 127!

vec4 distField() {
    vec2 pixel_size = 1.0 / vec2(textureSize(colorMap, 0));
    vec2 screenTexCoords = gl_FragCoord.xy * pixel_size;

    int distance;
    if (texture(colorMap, screenTexCoords).a == 0.0)
    {
        // Texel is transparent, search for nearest opaque.
        distance = ZERO_VALUE + 1;
        for (int i = in_StepSize; i < in_MaxDistance; i += in_StepSize)
        {
            if (find_alpha_at_distance(screenTexCoords, float(i) * pixel_size, 1.0))
            {
                i = in_MaxDistance + 1; // BREAK!
            }
            else
            {
                distance = ZERO_VALUE + 1 + i;
            }
        }
    }
    else
    {
        // Texel is opaque, search for nearest transparent.
        distance = ZERO_VALUE;
        for (int i = in_StepSize; i <= in_MaxDistance; i += in_StepSize)
        {
            if (find_alpha_at_distance(screenTexCoords, float(i) * pixel_size, 0.0))
            {
                i = in_MaxDistance + 1; // BREAK!
            }
            else
            {
                distance = ZERO_VALUE - i;
            }
        }
    }

    return vec4(vec3(float(distance) / 255.0) * diffuseOut.rgb, 1.0 - texture(colorMap, screenTexCoords).a);
}

void main()
{
    outputColor = distField();
}
The result of this shader covers the whole screen, using the diffuse color to fill the screen area outside the distance field outline. Here is how it looks:

What I need is for all the area with the solid red fill outside the distance field to be left transparent.
I came to the solution by using a grayscale 8-bit distance field alpha map. Stefan Gustavson describes in detail how to do it. Basically, one needs to generate a distance field version of the original texture. In the first pass that texture is rendered normally with the primitive into an FBO. In the second pass alpha blending is enabled and the texture from the first pass is used on a screen quad; at this stage the fragment shader samples the alpha from that texture. This results in both smooth edges and alpha transparency around the edges.
Here is the result:
Based on the screenshot, I'm assuming you're rendering a fullscreen quad? If that's the case, Tim just provided the answer; try:
glEnable( GL_BLEND );
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Before you render the quad. Obviously, if you're going to render non-transparent stuff too, I advise you to render that first so you won't get depth buffer problems. When you're done drawing the transparent stuff, call:
glDisable( GL_BLEND );
To turn alpha blending off again.
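In other words, a frame might be structured roughly like this (drawOpaque() and drawOutlineQuad() are hypothetical stand-ins for your own draw calls):

drawOpaque();                    // solid geometry first, writes depth as usual

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);           // optional: still depth-test, but don't write depth
drawOutlineQuad();               // the distance-field quad with alpha
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);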