I have a colour with RGB values 17, 30, 62.
I filled a rect with a linear gradient based on this colour.
On mouse over of the rect I want to show the colour value in a text box.
Is there a function or ratio to increase and decrease RGB values programmatically?
Then I could get the colour back with Raphael.rgb.
Raphael cannot help you get the color at a specified pixel, as it only deals with vector graphics (SVG/VML). The rendering is done by the browser.
In the color picker example, the color is obtained from the coordinates in the circle: the picker knows beforehand which color it will find at a specified point. It does not check the color of the pixel under the cursor.
If you have a linear gradient with known edge colors it is a matter of linear interpolation to figure out the color of the gradient at any point (unless there is some transparency involved). Project the point onto the axis between the two anchor points of the gradient, compute its relative distance to each anchor and combine the edge colors using these coefficients.
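A rough sketch of that interpolation (plain Python for illustration; the point and anchor parameter names are assumptions, and transparency is ignored):

```python
# Hypothetical sketch: colour of a linear gradient at a point, given the two
# end colours c1, c2 and the gradient's anchor points p1, p2.

def gradient_color(p, p1, p2, c1, c2):
    """Project point p onto the gradient axis p1->p2 and mix the end colours."""
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    axis_len2 = ax * ax + ay * ay
    # relative position along the axis, clamped to [0, 1]
    t = ((p[0] - p1[0]) * ax + (p[1] - p1[1]) * ay) / axis_len2
    t = max(0.0, min(1.0, t))
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))
```

For a horizontal gradient from (0, 0) to (100, 0), a point a quarter of the way along gets a 25/75 mix of the two edge colours.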
I want to make a shader that replaces a color, to be applied to a flat-colored character, but I can't just replace the color because the image contains pixels that are an average of two bordering colors.
For example the image looks like this:
Assuming that I want to change the color of the shirt, I want to replace the red color with a green one, but at the edges there are pixels that are not pure red:
Any ideas how to calculate the resultant color of one of those pixels?
Do you know which are the major colours in advance?
If not then a simple solution for finding them is to generate a histogram — scan the entire image and for each pixel that is the same as all four of its neighbours, add one to a count for the colour it contains. At the end, keep only those colours that fill at least a non-negligible portion of the display, e.g. at least 5% of those pixels that are not transparent.
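A CPU-side sketch of that histogram pass (pure Python; treating `img` as a 2D list of RGBA tuples is an assumption, and border pixels are compared only against the neighbours they actually have):

```python
# Count only pixels that match every neighbour they have, then drop colours
# under `threshold` of the opaque pixels (5% as suggested above).
from collections import Counter

def major_colors(img, threshold=0.05):
    h, w = len(img), len(img[0])
    counts, opaque = Counter(), 0
    for i in range(h):
        for j in range(w):
            px = img[i][j]
            if px[3] == 0:                 # ignore fully transparent pixels
                continue
            opaque += 1
            neighbours = [img[y][x]
                          for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                          if 0 <= y < h and 0 <= x < w]
            if all(n == px for n in neighbours):
                counts[px] += 1
    return [c for c, n in counts.items() if n >= threshold * opaque]
```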
Dealing with black borders is easy: use a luminance/chrominance colour space, and always leave luminance alone, remapping only chrominance. Factoring out brightness has a bonus: it collapses colour substitution from a 3d problem to a 2d problem.
If this weren't GLSL, then a solid solution for each pixel that is not one of the selected major colours might be: (i) find the nearest pixel that is a major colour; (ii) then find the nearest pixel that is a major colour but not the one found in (i). Use normal linear algebra to figure out where that pixel's colour falls on the 2d line from the one colour to the other. Substitute the colours, reinterpolate and output.
Since it is GLSL and "find the nearest" isn't especially realistic, and assuming the number of major colours is small, just compute the distance from those lines directly. E.g. suppose you have five colours. From each of the five colours there are four other options, suggesting twenty transitions, but half of them are exactly the same as the other half (red to blue is the same line as blue to red), so that's ten potential colour transitions in total.
Load those up as uniforms and just figure out which transition gradient the colour is closest to. Substitute the basis colours. Output.
So, in net:
transform (R, G, B) to (Y, x, y); whether that's YUV, YIQ or some other luma/chroma space doesn't matter, just pick one;
perform distance from a line for (x, y) and the colour transition gradients identified for this image;
having found the transition this pixel is closest to and its distance along that transition, substitute the end points, remap;
recombine with the original Y, convert back to RGB and output.
That's two dot products per colour transition gradient to establish the closest, then a single mix to generate the output (x, y).
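A CPU illustration of the whole recipe (Python rather than GLSL; YIQ is an arbitrary choice of luma/chroma space, and `transitions`, a list pairing each source colour edge with its replacement colours, is a hypothetical structure):

```python
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

def remap_pixel(rgb, transitions):
    """transitions: list of ((srcA, srcB), (dstA, dstB)) RGB tuples."""
    y, i, q = rgb_to_yiq(*rgb)
    best = None
    for (sa, sb), (da, db) in transitions:
        _, ai, aq = rgb_to_yiq(*sa)
        _, bi, bq = rgb_to_yiq(*sb)
        ex, ey = bi - ai, bq - aq
        # project the pixel's chroma onto the source edge, clamp to [0, 1]
        t = ((i - ai) * ex + (q - aq) * ey) / (ex * ex + ey * ey)
        t = max(0.0, min(1.0, t))
        d = (i - (ai + t * ex)) ** 2 + (q - (aq + t * ey)) ** 2
        if best is None or d < best[0]:
            best = (d, t, da, db)
    _, t, da, db = best
    # substitute the end points: mix the replacement chroma at the same t
    # and recombine with the original luminance
    _, dai, daq = rgb_to_yiq(*da)
    _, dbi, dbq = rgb_to_yiq(*db)
    return yiq_to_rgb(y, dai + t * (dbi - dai), daq + t * (dbq - daq))
```

With an identity transition (source edge equal to destination edge) a pixel should come back essentially unchanged, which is a quick sanity check.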
Let Rx, Gx, Bx = Pixel values of color X (Red in your case) to be removed/replaced.
Let Ry, Gy, By = Pixel values of color Y (Green in your case) to be used as new color.
Then you will iterate over all pixels and, using a clever condition (below), identify the pixels that need to be processed.
If Rc, Gc, Bc are the current values of the selected pixel's colour (no matter what blend of red and yellow it is), then the final values of the pixel are:
Rf = Rc - Rx + Ry
Gf = Gc - Gx + Gy
Bf = Bc - Bx + By
Of course, this processing should NOT happen for all pixels. A clever condition to identify only the relevant pixels could be: the pixel colour is Red, or at least one adjacent pixel is Red/Yellow.
UPDATE: Another clever condition using current pixel only:
This involves removing the border colours (YELLOW or BLACK) from the current colour and checking whether the remainder is RED.
Rc - R(yellow) == R(RED) AND
Gc - G(yellow) == G(RED) AND
Bc - B(yellow) == B(RED)
OR
Rc - R(black) == R(RED) AND
Gc - G(black) == G(RED) AND
Bc - B(black) == B(RED)
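The substitution formula above could be sketched like this (Python; the clamping to [0, 255] is an addition, since the raw subtraction can leave the byte range on blended pixels, and the adjacency condition from the answer decides which pixels this runs on):

```python
def recolor(pixel, src, dst):
    # Rf = Rc - Rx + Ry, and likewise for G and B, clamped to [0, 255]
    return tuple(min(255, max(0, c - s + d)) for c, s, d in zip(pixel, src, dst))
```

For example, recolor((255, 0, 0), (255, 0, 0), (0, 255, 0)) maps pure red to pure green.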
I am working on a project for my thesis and I am building my own path tracer. Afterwards, I have to modify it so that it implements the following paper:
https://mediatech.aalto.fi/publications/graphics/GPT/kettunen2015siggraph_paper.pdf
Of course I DO NOT want you to read the paper, but I link it anyway for those who are more curious. In brief, instead of rendering an image with just the normal path-tracing procedure, I have to calculate gradients for each pixel. Where before we shot rays only through each pixel, we now also shoot rays through the neighbouring pixels, 4 in total: left, right, top and bottom. In other words, I shoot one ray through a pixel and compute its final colour as in normal path tracing, but I additionally shoot rays through its neighbouring pixels, compute their final colours in the same way, and obtain the gradients by subtracting the main pixel's colour from each neighbour's. It means that for each pixel I will have 5 values in total:
colour of the pixel
gradient with right pixel = colour of the right pixel - colour of the pixel
gradient with left pixel = colour of the left pixel - colour of the pixel
gradient with top pixel = colour of the top pixel - colour of the pixel
gradient with bottom pixel = colour of the bottom pixel - colour of the pixel
The problem is that I don't know how to build the final image by both using the main colour and the gradients. What the paper says is that I have to use the screened Poisson reconstruction.
"Screened Poisson reconstruction combines the image and its
gradients using a parameter α that specifies the relative weights of
the sampled image and the gradients".
Everywhere I search for this Poisson reconstruction I find, of course, a lot of math, but it is hard to apply it to my project. Any ideas? Thanks in advance!
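For illustration only: the screened Poisson energy minimizes α²·Σ(I − I₀)² + Σ(∇I − g)², and one of the simplest ways to minimize it is a Jacobi iteration. A toy sketch under those definitions (pure Python; forward-difference gradients, so `gx` is one column narrower than the image and `gy` one row shorter, which is an assumption about storage, not the paper's layout):

```python
# Toy screened Poisson reconstruction by Jacobi iteration.
# I0: sampled image, gx/gy: forward-difference gradients, alpha: data weight.

def screened_poisson(I0, gx, gy, alpha=1.0, iters=200):
    h, w = len(I0), len(I0[0])
    a2 = alpha * alpha
    I = [row[:] for row in I0]            # start from the sampled image
    for _ in range(iters):
        J = [row[:] for row in I]
        for i in range(h):
            for j in range(w):
                num, den = a2 * I0[i][j], a2
                if j + 1 < w:             # right: I[i][j+1] - I[i][j] ~ gx[i][j]
                    num += I[i][j + 1] - gx[i][j]; den += 1
                if j - 1 >= 0:            # left: I[i][j] - I[i][j-1] ~ gx[i][j-1]
                    num += I[i][j - 1] + gx[i][j - 1]; den += 1
                if i + 1 < h:             # bottom: I[i+1][j] - I[i][j] ~ gy[i][j]
                    num += I[i + 1][j] - gy[i][j]; den += 1
                if i - 1 >= 0:            # top: I[i][j] - I[i-1][j] ~ gy[i-1][j]
                    num += I[i - 1][j] + gy[i - 1][j]; den += 1
                J[i][j] = num / den
        I = J
    return I
```

A sanity check: when the sampled image and the gradients are mutually consistent, the iteration should reproduce the image itself.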
I have greyscale images with objects darker than the background, where each object and the background have a uniform shade throughout. There are mainly 3-4 "groups of shades" in each picture. I want to group these pixels to find the approximate background shade (brightness) so I can extract it later.
And a side question: how can I calculate the angles on a contour produced by findContours, or maybe the minimum angle on a contour?
I think that you can set ranges to group pixels. For example, all pixels with an intensity value in the range (50-100) should get the intensity value 100. Similarly, all pixels with an intensity value in the range (100-150) should get the value 150. And so on.
After this procedure you will have only 3-4 fixed values across all pixels (as you mentioned that there are 3-4 groups in each image).
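A minimal sketch of that binning (Python; the bin width of 50 and the assumption that the most common bin is the background are taken from the answer's example):

```python
from collections import Counter

def quantize(v, step=50):
    # map an intensity to the top of its bin, e.g. [50, 100) -> 100
    return min(255, (v // step + 1) * step)

def background_shade(pixels, step=50):
    # assume the most common bin is the background shade
    return Counter(quantize(v, step) for v in pixels).most_common(1)[0][0]
```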
I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute a screen-space bounding box (bounding square), represented by the lower-left and upper-right coords of a rectangle, for every point light (sphere) in my scene (see the pic from my app). This, together with the min/max depth, will be used to check whether a light affects the actual tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
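Those two steps, sketched in Python (the 4x4 row-major matrix times a column vector is an assumed convention; corners with w <= 0 would need clipping against the near plane before the divide):

```python
def project(mvp, p):
    """Project a 3D point through a 4x4 matrix and return NDC x, y."""
    v = (p[0], p[1], p[2], 1.0)
    out = [sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4)]
    return out[0] / out[3], out[1] / out[3]

def screen_rect(mvp, bmin, bmax):
    """Bounding rectangle of the 8 projected corners of an AABB."""
    corners = [(x, y, z) for x in (bmin[0], bmax[0])
                         for y in (bmin[1], bmax[1])
                         for z in (bmin[2], bmax[2])]
    pts = [project(mvp, c) for c in corners]
    xs, ys = zip(*pts)
    return (min(xs), min(ys)), (max(xs), max(ys))
```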
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us 4 lines on the image plane. These lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/
What I want is to apply colours to the mesh linearly. It could be from Vertex[0] to Vertex[n], or from -Min.x to Max.x. Min.x should get a dark red colour; then, as the vertex positions increase, the red should blend into green, and it should end with blue.
The second option could be: if I specify any random Vertex[any] and start the colour mapping from that vertex, the colour map should still run through RGB in incremental order.
You could call it a colour map from XYZ to RGB. How can I do this? Any ideas?
First, how can I transform XYZ into RGB, and then how can I build the colour map, or gradient, or whatever you call it?
The attached figure can give you an idea. In this figure, the normalized vertex coordinates X=R, Y=G, Z=B are rendered as colours. It's just a test.
Give me an idea of how I can make it a perfect linear map.
First off, you must choose a range to map colour values from, since the RGB components are limited whilst your mesh may be huge. Consider using the bounding box of your mesh. Given that p1 and p2 are the min and max corners of your box, respectively, such a mapping could be:
color[i] = 255 * (position[i] - p1) / (p2 - p1);
You can perform this mapping either in your vertex shader or on the CPU, producing an array for the colour attribute of your vertices, which you can use directly with OpenGL with smooth interpolation. Note that smooth interpolation should be enabled by default; with OpenGL 2.1, use glShadeModel(GL_SMOOTH) to enable it.
If your mesh is UV mapped and you want your color mapping to get onto a texture, render your mesh using texture coordinates as position and then retrieve the rendered image using either glReadPixels() or FBOs.
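A CPU-side sketch of the suggested mapping (Python for illustration; the mesh's own bounding box serves as the range, and a flat axis falls back to 0 to avoid dividing by zero):

```python
# Normalise each vertex position into the bounding box and reuse the result
# as an 8-bit RGB colour: color[i] = 255 * (position[i] - p1) / (p2 - p1).

def position_colors(positions):
    p1 = [min(p[k] for p in positions) for k in range(3)]   # box min corner
    p2 = [max(p[k] for p in positions) for k in range(3)]   # box max corner
    def chan(v, lo, hi):
        return 0 if hi == lo else round(255 * (v - lo) / (hi - lo))
    return [tuple(chan(p[k], p1[k], p2[k]) for k in range(3)) for p in positions]
```

The vertex at the min corner comes out black and the one at the max corner white, with everything in between interpolated per axis.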