cairomm draw single pixel - c++

I have a simple boolean matrix that I want to view as an image. I am using cairomm. The documentation shows how to draw a line, a curve, or an arc, but I just want to set each pixel to black or white, and I can't find any docs on pixel access. This is what I copied from the examples, though I want a monochrome image, not FORMAT_ARGB32:
Cairo::RefPtr<Cairo::ImageSurface> surface = Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32, matrix.cols(), matrix.rows());
Cairo::RefPtr<Cairo::Context> context = Cairo::Context::create(surface);
Right now I am drawing a 1-pixel line:
context->set_antialias(Cairo::ANTIALIAS_NONE);
context->save(); // save the state of the context
context->set_source_rgb(1.0, 1.0, 1.0);
context->paint(); // fill image with the color
context->restore(); // color is back to black now
context->set_source_rgb(0.0, 0.0, 0.0);
context->set_line_width(1.0);
context->move_to(1.0, 1.0);
context->line_to(2.0, 2.0);
context->stroke();
Is this okay, or is there something like context->draw(row, col, color)?

An ImageSurface has a get_data() method. Together with flush() (call it before modifying the data directly) and mark_dirty() (call it afterwards), you can use this to access the pixel data directly and set individual pixels.
http://cairographics.org/documentation/cairomm/reference/classCairo_1_1ImageSurface.html#a94ba52fe4a201579c8a5541717822bdb
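A minimal sketch of that approach, assuming your matrix exposes rows(), cols(), and operator()(row, col) (names hypothetical). There is also Cairo::FORMAT_A8 (one byte per pixel) if you want a single-channel surface, but ARGB32 is the easiest to index, one 32-bit word per pixel, even though the content is monochrome:
// needs <cstdint> for uint32_t
Cairo::RefPtr<Cairo::ImageSurface> surface =
    Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32, matrix.cols(), matrix.rows());
surface->flush(); // finish any pending drawing before touching the bytes
unsigned char *data = surface->get_data();
const int stride = surface->get_stride(); // bytes per row, may exceed width * 4
for (int row = 0; row < matrix.rows(); ++row) {
    uint32_t *pixel = reinterpret_cast<uint32_t *>(data + row * stride);
    for (int col = 0; col < matrix.cols(); ++col) {
        // ARGB32 pixels are premultiplied, native-endian 32-bit words:
        // opaque black = 0xFF000000, opaque white = 0xFFFFFFFF.
        pixel[col] = matrix(row, col) ? 0xFF000000u : 0xFFFFFFFFu;
    }
}
surface->mark_dirty(); // tell cairo the pixel data changed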

Related

How to draw multiple images/bitmaps without the background color?

I have multiple images/bitmaps whose background is white, and I want to draw them to the render target one by one without copying the white background. I tried SetPrimitiveBlend(D2D1_PRIMITIVE_BLEND_MIN) (see the D2D1_PRIMITIVE_BLEND enumeration), but the non-white pixels of an image drawn later do not cover the earlier image; the intersection shows another color.
_deviceContext->SetPrimitiveBlend(D2D1_PRIMITIVE_BLEND_MIN);
_deviceContext->BeginDraw();
_deviceContext->Clear(D2D1::ColorF(D2D1::ColorF::White));
_deviceContext->DrawBitmap(Bkg, D2D1::RectF(0, 0, _width, _height), 1.0);
_deviceContext->DrawBitmap(Img1, D2D1::RectF(0, 0, _width, _height), 1.0);
_deviceContext->DrawBitmap(Img2, D2D1::RectF(0, 0, _width, _height), 1.0);
hr = _deviceContext->EndDraw();
Then I tried another method, following the Direct2D composite effect modes sample. It works fine for one foreground image and one background image, but I need to draw multiple foreground images:
_deviceContext->CreateEffect(CLSID_D2D1Blend, &_EffectBlend);
_deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &_EffectFore);
_deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &_EffectBack);
_EffectBlend->SetValue(D2D1_BLEND_PROP_MODE, D2D1_BLEND_MODE_DARKER_COLOR);
_EffectBlend->SetInputEffect(0, _EffectBack);
_EffectBlend->SetInputEffect(1, _EffectFore);
_EffectBack->SetInput(0, Bkg);
_EffectFore->SetInput(0, Img1);
_deviceContext->BeginDraw();
_deviceContext->Clear(D2D1::ColorF(D2D1::ColorF::White));
_deviceContext->DrawImage(_EffectBlend, D2D1_INTERPOLATION_MODE_LINEAR);
_deviceContext->EndDraw();
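One way to extend the sample to several foregrounds (a sketch only, untested): since each blend effect takes two inputs, chain one blend effect per foreground image, feeding the output of the previous stage into the next. Bkg, fg[], and fgCount are assumed to exist alongside the code above:
// needs <wrl/client.h> for Microsoft::WRL::ComPtr
Microsoft::WRL::ComPtr<ID2D1Effect> prev; // output of the previous stage
for (UINT i = 0; i < fgCount; ++i) {
    Microsoft::WRL::ComPtr<ID2D1Effect> blend;
    _deviceContext->CreateEffect(CLSID_D2D1Blend, &blend);
    blend->SetValue(D2D1_BLEND_PROP_MODE, D2D1_BLEND_MODE_DARKER_COLOR);
    if (i == 0)
        blend->SetInput(0, Bkg); // first stage blends over the background
    else
        blend->SetInputEffect(0, prev.Get()); // later stages blend over the running composite
    blend->SetInput(1, fg[i]);
    prev = blend;
}
_deviceContext->BeginDraw();
_deviceContext->Clear(D2D1::ColorF(D2D1::ColorF::White));
_deviceContext->DrawImage(prev.Get(), D2D1_INTERPOLATION_MODE_LINEAR);
_deviceContext->EndDraw();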

Why does a black source color with glBlendFunc(GL_SRC_ALPHA, GL_ONE); still cause a change in the destination color?

I am trying to develop a particle system in C++ using OpenGL, and I am confused about how blending works. I am trying to use additive blending, and from my understanding, calling the glBlendFunc function with the following arguments:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Will cause the following when you render something: the R,G,B color computed by the fragment shader (source color) is multiplied by the alpha value computed by the fragment shader (source alpha), and the result is added to the R,G,B color already in the framebuffer (destination color). If that is true, then a black output, (R,G,B,A) = (0,0,0,1), from the fragment shader should leave the existing framebuffer color unchanged: the source color of 0 multiplied by the source alpha of 1 is 0, and adding 0 to the destination color leaves it unchanged...right?
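To spell out the arithmetic I expect, per color channel (plain C++ mirroring the blend equation, not actual GL code):
float blendAdditive(float src, float srcAlpha, float dst)
{
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE): result = src * srcAlpha + dst * 1.0
    return src * srcAlpha + dst;
}
// blendAdditive(0.0f, 1.0f, dst) == dst for any dst, so black should be a no-op.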
However, when I do this, instead of leaving the color unchanged, it actually makes it lighter, as shown here:
In this picture, the environment and sword are rendered with normal blending, and the square particles are rendered around the sword with glBlendFunc(GL_SRC_ALPHA, GL_ONE), using a fragment shader that ALWAYS outputs (R,G,B,A) = (0,0,0,1). Many particles are rendered, and you can see that as more particles overlap, the image gets brighter. When I switch the alpha output of the shader from 1 to 0, the particles disappear, which makes sense. But why are they still visible when the source color is 0 and the source alpha is 1?
Here is the exact function I call to render the particles:
void ParticleController::Render()
{
    GraphicsController* gc = m_game->GetGraphicsController();
    ShaderController* sc = gc->GetShaderController();
    gc->BindAttributeBuffer(m_glBuffer);
    ShaderProgram* activeShaderProgram = sc->UseProgram(SHADER_PARTICLE);
    glDepthMask(false);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    gc->GetTextureController()->BindGLTexture(TEX_PARTICLES);
    activeShaderProgram->SetUniform(SU_PROJ_VIEW_MAT, gc->GetProjectionMatrix() * gc->GetViewMatrix() * gc->GetWorldScaleMatrix());
    activeShaderProgram->SetUniform(SU_LIGHT_AMBIENT, m_game->GetWorld()->GetAmbientLight());
    activeShaderProgram->SetUniform(SU_TIME, m_game->GetWorld()->GetWorldTime());
    CheckForOpenGLError();
    m_pa.GLDraw();
    CheckForOpenGLError();
    gc->BindAttributeBuffer_Default();
    CheckForOpenGLError();
    glDepthMask(true);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
And these are the last 4 lines of my fragment shader, ensuring that the particle color output (R,G,B,A) is always = (0,0,0,1):
fColor.r = 0.0;
fColor.g = 0.0;
fColor.b = 0.0;
fColor.a = 1.0;
Is there something I am missing?

Why is my cairo_surface_t drawing semi-transparent?

I am trying to draw a PNG image, the contents of which I have in memory in ARGB32 format, using Cairo and GTK from C++ on Ubuntu.
First, I create a GtkDrawingArea; then, in its expose-event handler, I draw a solid blue background with a red line from the top left to the bottom right, and finally I create a surface from my buffer and try to draw it onto the drawing area. Here's my expose-event callback:
unsigned char *pCairoBuf = ...
....
gboolean OnDrawingAreaExposeEvent(GtkWidget *pWidget, GdkEventExpose *pEvent, gpointer data)
{
    cairo_t *cr = gdk_cairo_create(pWidget->window);
    cairo_set_source_rgb(cr, 0.0, 0.0, 1.0);
    cairo_rectangle(cr, 0.0, 0.0, pEvent->area.width, pEvent->area.height);
    cairo_fill(cr);
    cairo_set_source_rgb(cr, 1.0, 0.0, 0.0);
    cairo_set_line_width(cr, 10.0);
    cairo_move_to(cr, 0.0, 0.0);
    cairo_line_to(cr, pEvent->area.width, pEvent->area.height);
    cairo_stroke(cr);
    // Use the existing buffer pCairoBuf that has (iRenderWidth * iRenderHeight * 4) bytes,
    // 4 bytes per pixel for ARGB32 and zero row padding
    cairo_surface_t *pSurface = cairo_image_surface_create_for_data(pCairoBuf, CAIRO_FORMAT_ARGB32, iRenderWidth, iRenderHeight, (iRenderWidth * 4));
    cairo_set_source_surface(cr, pSurface, 0.0, 0.0);
    cairo_paint(cr);
    cairo_surface_destroy(pSurface); // drop the surface reference once painted
    cairo_destroy(cr);
    return TRUE;
}
The drawing area and the image are both 244x278 pixels. The image shows Smokey the Bear's head and is transparent around it:
And I expect the final result to look like this:
But it ends up looking like this:
I did not include the code that fills the data buffer pCairoBuf because I figured it would only cloud the issue, but perhaps I'm wrong. My guess is that there is something else I'm doing wrong with cairo surfaces that would explain the difference between what I'm expecting and what I'm getting.
Thanks in advance for any help!
As a guess, not having used any of those libraries: your alpha channel is uniform across the image, and the rendering is applying it to the parts of the image you'd like opaque as well. Try applying the alpha channel only to those pixels you'd like to be transparent.
I figured it out. When I was filling in the ARGB32 data in the buffer that I pass to cairo_image_surface_create_for_data(), I was filling in the bytes in that order: A, R, G, B. As an experiment I reversed the order to B, G, R, A, and it worked perfectly.
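That is consistent with how cairo defines the format: CAIRO_FORMAT_ARGB32 stores each pixel as one native-endian 32-bit word, so on a little-endian machine (x86) the bytes land in memory as B, G, R, A. A sketch of a writer that sidesteps byte order by packing through a uint32_t (put_pixel is a hypothetical helper; note ARGB32 also expects premultiplied alpha):
#include <cstdint>
#include <cstring>

void put_pixel(unsigned char *buf, int stride, int x, int y,
               uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    // Premultiply the color channels by alpha, as CAIRO_FORMAT_ARGB32 requires.
    uint32_t pixel = (uint32_t(a) << 24)
                   | ((uint32_t(r) * a / 255) << 16)
                   | ((uint32_t(g) * a / 255) << 8)
                   |  (uint32_t(b) * a / 255);
    std::memcpy(buf + y * stride + x * 4, &pixel, 4); // stored as a native-endian word
}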

LWJGL 3D picking

So I have been trying to understand the concept of 3D picking, but since I can't find any video guides or any concrete guides that actually speak English, it is proving very difficult. If anyone is experienced with 3D picking in LWJGL, could you give me an example with a line-by-line explanation of what everything means? I should mention that all I am trying to do is shoot the ray out of the center of the screen (not where the mouse is) and have it detect a normal cube (rendered as 6 QUADS).
Though I am not an expert with 3D picking, I have done it before, so I will try to explain.
You mentioned that you want to shoot a ray rather than go by the mouse position; as long as that ray goes straight into the screen (through a fixed screen coordinate such as the center), this method still works, just the same as it does for any screen coordinate. If you actually want to shoot a ray angled off in some other direction, things get a little more complicated, but I will not go into that (yet).
Now how about some code?
Object* picking3D(int screenX, int screenY){
    //Disable any lighting or textures
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    //Render the scene with each object in a unique flat color
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    orientateCamera();
    for(int i = 0; i < objectListSize; i++){
        GLubyte blue = i % 256;
        GLubyte green = min((int)((float)i/256), 255);
        GLubyte red = min((int)((float)i/256/256), 255);
        glColor3ub(red, green, blue);
        orientateObject(i);
        renderObject(i);
    }
    //Get the pixel
    GLubyte pixelColors[3];
    glReadPixels(screenX, screenY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelColors);
    //Calculate the index back from the color
    int index = pixelColors[0]*256*256 + pixelColors[1]*256 + pixelColors[2];
    //Return the object
    return getObject(index);
}
Code Notes:
screenX is the x location of the pixel, and screenY is the y location of the pixel (in screen coordinates)
orientateCamera() simply calls any glTranslate, glRotate, glMultMatrix, etc. needed to position (and rotate) the camera in your scene
orientateObject(i) does the same as orientateCamera, except for object 'i' in your scene
when I 'calculate the index', I am really just undoing the math I performed during the rendering to get the index back
The idea behind this method is that each object is rendered exactly where the user sees it, except that each model is drawn in a single solid colour. Then you check the colour of the pixel at the requested screen coordinate, and whichever model that colour is indexed to: that's your object!
I do recommend, however, adding a check for the background color (or your glClearColor), just in case you don't actually hit any objects.
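A sketch of that check: offset each index by one when rendering, so that pure black (a black glClearColor) is reserved for "no hit" (getObject and the null return are assumed from the surrounding code):
// When rendering, encode object i as color index i + 1:
glColor3ub((GLubyte)(((i + 1) >> 16) & 0xFF),
           (GLubyte)(((i + 1) >> 8) & 0xFF),
           (GLubyte)((i + 1) & 0xFF));
// When reading back:
int index = (pixelColors[0] << 16) | (pixelColors[1] << 8) | pixelColors[2];
if (index == 0)
    return nullptr; // hit the background, not an object
return getObject(index - 1);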
Please ask for further explanation if necessary.

CGContextStrokeRectWithWidth with inner rounding?

I'm drawing a rounded rectangle on iOS.
This rectangle has an image background which I am clipping with a bezier path and I am then drawing a stroke with a width of two pixels inside it.
My issue is that the stroke has sharp corners on the inside instead of following the rounded-rect path.
Here's a screenshot where I have increased the white colour to show the issue I'm dealing with.
Here's my code:
CGContextRef context = UIGraphicsGetCurrentContext();
CGPathRef strokeRect = [UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:2].CGPath;
CGContextAddPath(context, strokeRect);
CGContextClip(context);
// draw background
[[UIImage imageNamed:@"background.png"] drawInRect:rect];
// draw stroke
CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 0.1);
CGContextStrokeRectWithWidth(context, rect, 2);
I think what you're going to have to do is clip the context on the inside with the corner radius and fill the context with 10% white.
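A sketch of that suggestion using the C-level CoreGraphics calls (untested; the 2-point width and radius are taken from the question, and the context is assumed to already be clipped to the outer rounded rect as in the code above):
CGContextSaveGState(context);
// Build an inner rounded rect inset by the stroke width.
CGRect inner = CGRectInset(rect, 2.0, 2.0);
CGPathRef innerPath = CGPathCreateWithRoundedRect(inner, 2.0, 2.0, NULL);
// Even-odd clip to the ring between the outer rect and the inner path.
CGContextAddRect(context, rect);
CGContextAddPath(context, innerPath);
CGContextEOClip(context);
// Fill the ring with 10% white; the earlier rounded clip rounds the outside.
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 0.1);
CGContextFillRect(context, rect);
CGPathRelease(innerPath);
CGContextRestoreGState(context);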