I have been trying to use the code below to set up a gradient, but it displays a gradient from black to white (which are obviously not the colors I specified).
cairo_t *cr;
cr = gdk_cairo_create(widget->window);
cairo_pattern_t *pat1;
pat1 = cairo_pattern_create_linear(0.0, 0.0, 50.0, 512);
cairo_pattern_add_color_stop_rgb(pat1, 0, 0, 0, 0);
cairo_pattern_add_color_stop_rgb(pat1, 1, 1, 254, 255);
cairo_pattern_add_color_stop_rgb(pat1, 2, 2, 253, 255);
cairo_pattern_add_color_stop_rgb(pat1, 3, 3, 252, 255);
cairo_pattern_add_color_stop_rgb(pat1, 4, 4, 251, 255);
cairo_pattern_add_color_stop_rgb(pat1, 5, 5, 250, 255);
cairo_pattern_add_color_stop_rgb(pat1, 6, 6, 249, 255);
cairo_pattern_add_color_stop_rgb(pat1, 7, 6, 249, 255);
cairo_pattern_add_color_stop_rgb(pat1, 8, 7, 248, 255);
cairo_pattern_add_color_stop_rgb(pat1, 9, 8, 247, 255);
cairo_rectangle(cr, 0, 0, 50, 512);
cairo_set_source(cr, pat1);
cairo_fill(cr);
cairo_pattern_destroy(pat1);
cairo_destroy(cr);
However, this code displays a gradient from red to purple to blue:
cairo_t *cr;
cr = gdk_cairo_create(widget->window);
cairo_pattern_t *pat1;
pat1 = cairo_pattern_create_linear(0.0, 0.0, 50.0, 512);
cairo_pattern_add_color_stop_rgb(pat1, 0, 256, 0, 0);
cairo_pattern_add_color_stop_rgb(pat1, 1, 0, 0, 256);
cairo_pattern_add_color_stop_rgb(pat1, 2, 0, 256, 256);
cairo_rectangle(cr, 0, 0, 50, 512);
cairo_set_source(cr, pat1);
cairo_fill(cr);
cairo_pattern_destroy(pat1);
cairo_destroy(cr);
Why does the first one display grayscale while the second one does not?
The first one uses non-grayscale colors, so I have no clue why it wouldn't work.
EDIT: An answer explained that values above 1 are clamped, so I changed my code to this:
cairo_pattern_add_color_stop_rgb(pat1, 0, 0, 0, 0);
cairo_pattern_add_color_stop_rgb(pat1, 1, (1/256), (254/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 2, (2/256), (253/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 3, (3/256), (252/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 4, (4/256), (251/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 5, (5/256), (250/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 6, (6/256), (249/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 7, (6/256), (249/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 8, (7/256), (248/256), (255/256));
cairo_pattern_add_color_stop_rgb(pat1, 9, (8/256), (247/256), (255/256));
The bar is now completely black.
https://www.cairographics.org/manual/cairo-cairo-t.html#cairo-set-source-rgb
The color and alpha components are floating point numbers in the range 0 to 1. If the values passed in are outside that range, they will be clamped.
The 256s in the one that works are clamped to 1, so you get (1,0,0) to (0,0,1) to (0,1,1).
In the one that doesn't work, the first stop remains (0,0,0) and every other stop is clamped to (1,1,1). In short, black and white really are the colors used in the first snippet, so cairo displays black to white.
I'm trying to render text using SDL. Obviously SDL does not support rendering text by itself, so I went with this approach:
load font file
raster glyphs in the font to a bitmap
pack all bitmaps in a large texture, forming a spritesheet of glyphs
render text as a sequence of glyph-sprites: copy rectangles from the texture to the target
The first two steps are handled by the FreeType library. It can generate bitmaps for many kinds of fonts and provides a lot of extra info about the glyphs. FreeType-generated bitmaps are (by default) alpha-channel only: for every glyph I basically get a 2D array of A values in the range 0 - 255. For simplicity, the MCVE below needs only SDL; I have already embedded a FreeType-generated bitmap in the source code.
Now, the question is: how should I manage the texture that consists of such bitmaps?
What blending mode should I use?
What modulation should I use?
What should the texture be filled with? FreeType provides alpha channel only, SDL generally wants a texture of RGBA pixels. What values should I use for RGB?
How do I draw text in specific color? I don't want to make a separate texture for each color.
FreeType documentation says: For optimal rendering on a screen the bitmap should be used as an alpha channel in linear blending with gamma correction. The SDL blend mode documentation doesn't list anything named linear blending, so I used a custom one, but I'm not sure if I got it right.
I'm also not sure if I got some of the SDL calls right, as some of them are poorly documented (I already know that locking with empty rectangles crashes on Direct3D), especially how to copy data using SDL_LockTexture.
#include <string>
#include <stdexcept>
#include <SDL.h>
constexpr unsigned char pixels[] = {
0, 0, 0, 0, 0, 0, 0, 30, 33, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 169, 255, 155, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 83, 255, 255, 229, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 189, 233, 255, 255, 60, 0, 0, 0, 0, 0,
0, 0, 0, 0, 33, 254, 83, 250, 255, 148, 0, 0, 0, 0, 0,
0, 0, 0, 0, 129, 227, 2, 181, 255, 232, 3, 0, 0, 0, 0,
0, 0, 0, 2, 224, 138, 0, 94, 255, 255, 66, 0, 0, 0, 0,
0, 0, 0, 68, 255, 48, 0, 15, 248, 255, 153, 0, 0, 0, 0,
0, 0, 0, 166, 213, 0, 0, 0, 175, 255, 235, 4, 0, 0, 0,
0, 0, 16, 247, 122, 0, 0, 0, 88, 255, 255, 71, 0, 0, 0,
0, 0, 105, 255, 192, 171, 171, 171, 182, 255, 255, 159, 0, 0, 0,
0, 0, 203, 215, 123, 123, 123, 123, 123, 196, 255, 239, 6, 0, 0,
0, 44, 255, 108, 0, 0, 0, 0, 0, 75, 255, 255, 77, 0, 0,
0, 142, 252, 22, 0, 0, 0, 0, 0, 5, 238, 255, 164, 0, 0,
5, 234, 184, 0, 0, 0, 0, 0, 0, 0, 156, 255, 242, 8, 0,
81, 255, 95, 0, 0, 0, 0, 0, 0, 0, 68, 255, 255, 86, 0,
179, 249, 14, 0, 0, 0, 0, 0, 0, 0, 3, 245, 255, 195, 0
};
[[noreturn]] inline void throw_error(const char* desc, const char* sdl_err)
{
throw std::runtime_error(std::string(desc) + sdl_err);
}
void update_pixels(
SDL_Texture& texture,
const SDL_Rect& texture_rect,
const unsigned char* src_alpha,
int src_size_x,
int src_size_y)
{
void* pixels;
int pitch;
if (SDL_LockTexture(&texture, &texture_rect, &pixels, &pitch))
throw_error("could not lock texture: ", SDL_GetError());
auto pixel_buffer = reinterpret_cast<unsigned char*>(pixels);
for (int y = 0; y < src_size_y; ++y) {
for (int x = 0; x < src_size_x; ++x) {
// this assumes SDL_PIXELFORMAT_RGBA8888
unsigned char* const rgba = pixel_buffer + x * 4;
unsigned char& r = rgba[0];
unsigned char& g = rgba[1];
unsigned char& b = rgba[2];
unsigned char& a = rgba[3];
r = 0xff;
g = 0xff;
b = 0xff;
a = src_alpha[x];
}
src_alpha += src_size_y;
pixel_buffer += pitch;
}
SDL_UnlockTexture(&texture);
}
int main(int /* argc */, char* /* argv */[])
{
if (SDL_Init(SDL_INIT_VIDEO) < 0)
throw_error("could not init SDL: ", SDL_GetError());
SDL_Window* window = SDL_CreateWindow("Hello World",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
1024, 768,
SDL_WINDOW_RESIZABLE);
if (!window)
throw_error("could not create window: ", SDL_GetError());
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
if (!renderer)
throw_error("could not create renderer: ", SDL_GetError());
SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, 512, 512);
if (!texture)
throw_error("could not create texture: ", SDL_GetError());
SDL_SetTextureColorMod(texture, 255, 0, 0);
SDL_Rect src_rect;
src_rect.x = 0;
src_rect.y = 0;
src_rect.w = 15;
src_rect.h = 17;
update_pixels(*texture, src_rect, pixels, src_rect.w, src_rect.h);
/*
* FreeType documentation: For optimal rendering on a screen the bitmap should be used as
* an alpha channel in linear blending with gamma correction.
*
* The blending used is therefore:
* dstRGB = (srcRGB * srcA) + (dstRGB * (1 - srcA))
* dstA = (srcA * 0) + (dstA * 1) = dstA
*/
SDL_BlendMode blend_mode = SDL_ComposeCustomBlendMode(
SDL_BLENDFACTOR_SRC_ALPHA, SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, SDL_BLENDOPERATION_ADD,
SDL_BLENDFACTOR_ZERO, SDL_BLENDFACTOR_ONE, SDL_BLENDOPERATION_ADD);
if (SDL_SetTextureBlendMode(texture, blend_mode))
throw_error("could not set texture blending: ", SDL_GetError());
while (true) {
SDL_SetRenderDrawColor(renderer, 255, 255, 0, 255);
SDL_RenderClear(renderer);
SDL_Rect dst_rect;
dst_rect.x = 100;
dst_rect.y = 100;
dst_rect.w = src_rect.w;
dst_rect.h = src_rect.h;
SDL_RenderCopy(renderer, texture, &src_rect, &dst_rect);
SDL_RenderPresent(renderer);
SDL_Delay(16);
SDL_Event event;
while (SDL_PollEvent(&event)) {
switch (event.type) {
case SDL_KEYUP:
switch (event.key.keysym.sym) {
case SDLK_ESCAPE:
return 0;
}
break;
case SDL_QUIT:
return 0;
}
}
}
return 0;
}
Expected result: red letter "A" on yellow background.
Actual result: malformed red lines inside black square on yellow background.
I suspect that the lines are broken because of a bug in the pointer arithmetic inside update_pixels, but I have no idea what's causing the black square.
First of all, part of this is already done in the SDL_ttf library. You could use it to rasterise glyphs to surfaces or to generate a whole multi-character text surface.
Your src_alpha += src_size_y; is incorrect: you copy row by row, but advance the source pointer by the column count rather than the row length. It should be src_size_x. This produces a wrong offset on every row, so only the first row of the copied image is correct.
Your colour packing when writing to the texture is backwards. See https://wiki.libsdl.org/SDL_PixelFormatEnum#order - packed component order (high bit -> low bit) for SDL_PACKEDORDER_RGBA means R occupies the highest bits and A the lowest. So, when accessing the packed 32-bit pixel through an unsigned char* on a little-endian machine, the first byte is A and the fourth is R:
unsigned char& r = rgba[3];
unsigned char& g = rgba[2];
unsigned char& b = rgba[1];
unsigned char& a = rgba[0];
You don't need custom blending; use SDL_BLENDMODE_BLEND, the 'standard' "src-alpha, one-minus-src-alpha" formula everyone uses (when rendering to the window, only the source alpha matters for the visible result).
Finally, one more approach: you could take the glyph's luminance value (alpha, whatever it is called; the point is it has only one channel) and put it into every channel. That way you could do additive blending without using alpha at all, and you don't even need an RGBA texture. The glyph colour could still be multiplied in with the colour mod. SDL_ttf implements just that.
I'm trying to pack 16-bit data to 8 bits using _mm256_shuffle_epi8, but the result I get is not what I'm expecting.
auto srcData = _mm256_setr_epi8(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32);
__m256i vperm = _mm256_setr_epi8( 0, 2, 4, 6, 8, 10, 12, 14,
16, 18, 20, 22, 24, 26, 28, 30,
-1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1);
auto result = _mm256_shuffle_epi8(srcData, vperm);
I expect result to contain:
1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
But instead I get:
1, 3, 5, 7, 9, 11, 13, 15, 1, 3, 5, 7, 9, 11, 13, 15,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
I have surely misunderstood how shuffle works.
If anyone can enlighten me, it would be much appreciated :)
Yeah, that's to be expected. Look at the docs for _mm_shuffle_epi8: the 256-bit AVX2 version simply duplicates the behaviour of that 128-bit instruction for each of the two 16-byte halves of the YMM register.
So you can shuffle within the first 16 bytes, or within the last 16 bytes, but you cannot move values across the 16-byte boundary. (You'll notice that every index of 16 or more behaves as that index minus 16, e.g. 19->3, 31->15, etc.)
You'll need an additional step. Use:
__m256i vperm = _mm256_setr_epi8( 0, 2, 4, 6, 8, 10, 12, 14,
-1, -1, -1, -1, -1, -1, -1, -1,
0, 2, 4, 6, 8, 10, 12, 14,
-1, -1, -1, -1, -1, -1, -1, -1);
and then use _mm256_permute4x64_epi64 to pull the 0th and 2nd 64-bit qwords down into the low 128 bits.
I am new to OpenCV. I have an image, and I want to change every pixel of it to a single color.
I found the code below, but when I run it an exception is thrown.
for (int i = 0; i < img1.rows; i++) {
    for (int j = 0; j < img1.cols; j++) {
img1.at<Vec3b>(i, j) = Vec3b(255, 105, 240);
}
}
Can anyone please tell me the solution?
Also, I found that this approach takes a lot of time, so if there is a faster approach, please tell me.
// Make a 3 channel image
cv::Mat img(480,640,CV_8UC3);
// Fill entire image with cyan (blue and green)
img = cv::Scalar(255,255,0);
You can use Mat::operator() to select an ROI and then assign a value to it.
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::Mat::ones(5, 5, CV_8UC3);
    img({2, 4}, {2, 4}) = cv::Vec3b(7, 8, 9);
    std::cout << img;
    return 0;
}
This will print
[ 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0;
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0;
1, 0, 0, 1, 0, 0, 7, 8, 9, 7, 8, 9, 1, 0, 0;
1, 0, 0, 1, 0, 0, 7, 8, 9, 7, 8, 9, 1, 0, 0;
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
To fill an image with a single color, use cv::rectangle with the CV_FILLED thickness argument.
(There might be several reasons for the exception - the image was not created, wrong pixel format, etc. - it is hard to diagnose the problem with the given information.)
I came across this:
cv::Mat Mat_out;
cv::Mat Mat2(openFingerCentroids.size(), CV_8UC1, cv::Scalar(2)); imshow("Mat2", Mat2);
cv::Mat Mat3(openFingerCentroids.size(), CV_8UC1, cv::Scalar(3)); imshow("Mat3", Mat3);
cv::bitwise_and(Mat2, Mat3, Mat_out); imshow("Mat_out", Mat_out);
Why does Mat_out contain all 2s? A bit-wise operation on a matrix of all 2s and one of all 3s should give me 0, right? Since 2 is not equal to 3?
Anyway, this is the simple thing I'm trying to implement (like MATLAB's find function):
Mat_A = {1, 1, 0, 9, 0, 5;
5, 0, 0, 0, 9, 0;
1, 2, 0, 0, 0, 0};
Output expected, if I'm searching for all 5s:
Mat_out = {0, 0, 0, 0, 0, 5;
5, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0 };
How can I do this in OpenCV using C++?
I am using Google's Annotated Time Line tool to plot data which spans over two years, for example 2010 and 2011.
The timeline on x-axis only shows entries on 2011. It skips all the values of 2010.
Take following data table for example:
var data = new google.visualization.DataTable();
data.addColumn('date', 'Date');
data.addColumn('number', 'Mac Client');
data.addColumn('number', 'Win Client');
data.addColumn('number', 'Total');
data.addRows(7)
data.setValue(0, 0, new Date(2010, 12, 16, 11, 0, 0, 0));
data.setValue(0, 1, 0);
data.setValue(0, 2, 1);
data.setValue(0, 3, 1);
data.setValue(1, 0, new Date(2010, 12, 24, 16, 0, 0, 0));
data.setValue(1, 1, 0);
data.setValue(1, 2, 5);
data.setValue(1, 3, 5);
data.setValue(2, 0, new Date(2010, 12, 16, 12, 0, 0, 0));
data.setValue(2, 1, 0);
data.setValue(2, 2, 19);
data.setValue(2, 3, 19);
data.setValue(3, 0, new Date(2011, 3, 30, 2, 0, 0, 0));
data.setValue(3, 1, 0);
data.setValue(3, 2, 17);
data.setValue(3, 3, 17);
data.setValue(4, 0, new Date(2011, 4, 11, 13, 0, 0, 0));
data.setValue(4, 1, 0);
data.setValue(4, 2, 37);
data.setValue(4, 3, 37);
data.setValue(5, 0, new Date(2011, 10, 2, 0, 0, 0, 0));
data.setValue(5, 1, 1);
data.setValue(5, 2, 21);
data.setValue(5, 3, 22);
data.setValue(6, 0, new Date(2011, 4, 19, 2, 0, 0, 0));
data.setValue(6, 1, 0);
data.setValue(6, 2, 6);
data.setValue(6, 3, 6);
The resulting graph starts from 2011 instead of 2010 (see Google Code Playground).
How can I make it to include data points for 2010 too?
The graph ends at November 2, 2011, although my last data point is October 2, 2011. How can I make the x-axis of the graph end at October 30?
The JavaScript Date object's month value is zero-indexed (0 = January, 1 = February), so right now everything is a month off. In particular, new Date(2010, 12, ...) rolls over to January 2011, which is why your 2010 data points show up in 2011 and why the chart runs to November instead of October.
Change
new Date(2011, 10, 2, 0, 0, 0, 0));
to
new Date(2011, 9, 2, 0, 0, 0, 0));
across the board and you should get what you want!