Rendering a transparent QLabel - C++

I am trying to use QLabel as a powerful tool to render some text with CSS into a texture. After I obtain an image, a texture is formed in OpenGL and the original Qt object is discarded. Afterwards I use the texture as any other texture in plain OpenGL (without Qt) rendering pipeline.
However, I am having problems handling the transparency of the background. Something - probably some Qt setting that I am not aware of - seems to be messing up my settings.
After a bit of simplification, my texture is produced like this:
QLabel label;
label.setAttribute(Qt::WA_TranslucentBackground, true);
const QString styleSheet{"color : #00bbbb; background-color : rgba(0,0,0,0);"
"font-family: 'Calibri'; text-decoration: none; margin: 5px;"};
label.setWordWrap(true);
label.setAutoFillBackground(false);
label.setStyleSheet(styleSheet);
QFont font = label.font();
font.setPointSizeF(12.0f);
label.setFont(font);
// set text
label.setText(QString("The quick brown fox jumps over the lazy dog"));
// render
label.adjustSize();
label.updateGeometry();
std::unique_ptr<QImage> imgTexture = std::make_unique<QImage>(label.size(), QImage::Format_RGBA8888);
QPainter painter(imgTexture.get());
label.render(&painter);
uint8_t *bits = imgTexture->bits();
std::cout << int(bits[0]) << " " << int(bits[1]) << " " << int(bits[2]) << " " << int(bits[3]) << std::endl;
The output - the value of the top-left pixel of the produced image - is:
205 205 205 205
and not 0 0 0 0 as I expected. Thus, the problem is already at this point and not later in my OpenGL handling of the texture. Ultimately my output is:
As seen, the background is not entirely transparent as expected.
Update
As per G.M.'s suggestion I tried setCompositionMode (with no effect), as well as other painter settings. Apparently, setting QPainter::setBackgroundMode to Qt::OpaqueMode gets me one step further:
The Qt manual says:
Qt::TransparentMode (the default) draws stippled lines and text without setting the background pixels. Qt::OpaqueMode fills these spaces with the current background color.
So, it seems the default is to leave the output image's original pixels untouched wherever letters are not present: the transparent (0,0,0,0) is not drawn, and the previous color (apparently 205, 205, 205, 205 for some reason) remains unchanged.
Forcing the background to be drawn updates pixels, but only in the neighbourhood of the letters. I now need to figure out how to force clearing all pixels to the color specified in the CSS.
Update
Apparently it is not as simple as it seems. I tried painter.eraseRect(0, 0, width, height); but this clears the rectangle to white, ignoring the CSS settings.

To combine the results of the experiments and some of the comments: the observed behavior is a combination of the following.
When QImage is constructed with some image area, that area is uninitialized.
QImage::QImage(int width, int height, QImage::Format format)
Constructs an image with the given width, height and format.
A null image will be returned if memory cannot be allocated.
Warning: This will create a QImage with uninitialized data. Call fill() to fill the image with an appropriate pixel value before drawing onto it with QPainter.
On Visual Studio in Debug mode, the uninitialized area is filled with the pattern 0xCD, which is 205 in decimal: exactly the value observed above for every channel of the top-left pixel. In Release mode it would be arbitrary garbage.
By default, when drawing text with QPainter, the background remains unchanged. The painter just adds the glyphs to whatever was previously present on the target image (in our case: the uninitialized area).
void QPainter::setBackgroundMode(Qt::BGMode mode)
Sets the background mode of the painter to the given mode
Qt::TransparentMode (the default) draws stippled lines and text without setting the background pixels. Qt::OpaqueMode fills these spaces with the current background color.
Note that in order to draw a bitmap or pixmap transparently, you must use QPixmap::setMask().
So, in order for the background pixels to be drawn as set in the QLabel's CSS, the mode needs to be changed to Qt::OpaqueMode. Unfortunately, this draws the background only around the glyphs, not over the whole area.
We need to manually clear the whole area first.
Image can be cleared via QPainter::fillRect:
void QPainter::fillRect(int x, int y, int width, int height, const QColor &color)
This is an overloaded function.
Fills the rectangle beginning at (x, y) with the given width and height, using the given color.
This function was introduced in Qt 4.5.
but there is a caveat - all QPainter operations are blended with the underlying image. By default it takes the alpha channel of the drawing color into account. Thus, if you paint with (0,0,0,0) you get... no change. The blending operation is controlled by:
void QPainter::setCompositionMode(QPainter::CompositionMode mode)
Sets the composition mode to the given mode.
Warning: Only a QPainter operating on a QImage fully supports all composition modes. The RasterOp modes are supported for X11 as described in compositionMode().
See also compositionMode().
enum QPainter::CompositionMode
Defines the modes supported for digital image compositing. Composition modes are used to specify how the pixels in one image, the source, are merged with the pixel in another image, the destination.
[...]
When a composition mode is set it applies to all painting operators, pens, brushes, gradients and pixmap/image drawing.
QPainter::CompositionMode_SourceOver (value 0) - This is the default mode. The alpha of the source is used to blend the pixel on top of the destination.
QPainter::CompositionMode_DestinationOver (value 1) - The alpha of the destination is used to blend it on top of the source pixels. This mode is the inverse of CompositionMode_SourceOver.
QPainter::CompositionMode_Clear (value 2) - The pixels in the destination are cleared (set to fully transparent) independent of the source.
QPainter::CompositionMode_Source (value 3) - The output is the source pixel. (This means a basic copy operation and is identical to SourceOver when the source pixel is opaque.)
QPainter::CompositionMode_Destination (value 4) - The output is the destination pixel. This means that the blending has no effect. This mode is the inverse of CompositionMode_Source.
[...] (30 more modes...)
So, the default SourceOver needs to be replaced by Source before calling fillRect.
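For illustration, a minimal sketch of that approach, reusing the imgTexture and label from the question:

QPainter painter(imgTexture.get());
// Overwrite the uninitialized pixels instead of blending with them:
painter.setCompositionMode(QPainter::CompositionMode_Source);
painter.fillRect(imgTexture->rect(), QColor(0, 0, 0, 0));
// Back to normal blending before the label draws its glyphs:
painter.setCompositionMode(QPainter::CompositionMode_SourceOver);
label.render(&painter);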
Clearing can also be done with QImage::fill. Much easier, and no messing with draw modes!
Unfortunately, either solution (QImage::fill or QPainter::fillRect) requires specifying the background color explicitly; it cannot simply be read from the QLabel's CSS.
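Putting it all together, a minimal sketch of the simpler fill-based variant; note the transparent background from the style sheet has to be repeated here by hand:

QImage imgTexture(label.size(), QImage::Format_RGBA8888);
imgTexture.fill(Qt::transparent); // initialize every pixel to (0,0,0,0)
QPainter painter(&imgTexture);
label.render(&painter); // glyphs are now blended onto a properly cleared image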
P.S. I don't know how to blockquote a table :(

Related

SFML cursor seems broken in linux

So, I have a cursor sprite:
I load this sprite with:
sf::Cursor cur;
sf::Image cur_default;
if(!cur_default.loadFromFile("assets/sprites/cursors/cursor_default.png"))
{
    std::cerr << "ERROR: unable to load file 'assets/sprites/cursors/cursor_default.png'\n";
    return -1;
}
cur.loadFromPixels(cur_default.getPixelsPtr(), cur_default.getSize(), cur_default.getSize() / 2U);
When I was testing my code, I realized that the left side of the sprite was white, and that the colors didn't show up (replaced the white/gray with blue). After resizing the image to 32x32 pixels, it seems as if there are random white lines in my cursor. Unfortunately, I could not take a screenshot of the behavior because it wasn't showing up. I'm aware that according to the SFML wiki:
On Unix, the pixels are mapped into a monochrome bitmap: pixels with an alpha channel of 0 are transparent, black if the RGB channels are close to zero, and white otherwise.
However, it says Unix, not Unix-like, so I assume that is not the case for Linux, but feel free to correct me if I'm wrong. I'm confused about why there are random white lines in my sprite and how to fix them (and also why the colors are not working correctly).

Speeding up drawing bitmap magnification within second bitmap with blend

The following code stretches a bitmap, blends it with an existing background, maintains the transparent area of the primary graphic, and then displays the blend within a window (imgScreen). This works fine when the level of stretch is not large, or when it is actually shrinking the initial bitmap. However, when stretching the graphic it is very slow.
I have limited experience with C++ and this kind of graphics, so perhaps there is another, more efficient way to do this. The primary bitmap to be sized is always square. Any ideas are much appreciated..!
I was going to try not displaying the clipped area, but from tests it seems the initial stretch is causing the slowdown... I am also having trouble seeing how to calculate the non-clipped area... Drawing to controls seems a waste, but it seems to be the only way to use built-in functions like StretchDraw and the alpha draw option.
std::auto_ptr<Graphics::TBitmap> bmap(new Graphics::TBitmap);
std::auto_ptr<Graphics::TBitmap> bmap1(new Graphics::TBitmap);
int s = newsize;
TRect sR = Rect(X,Y,X+s,Y+s);
TRect tR = Rect(0,0,s,s);
bmap->SetSize(s,s);
bmap->Canvas->StretchDraw(Rect(0, 0, s, s), Form1->Image4->Picture->Bitmap); // scale
bmap1->SetSize(s,s);
bmap1->Canvas->CopyRect(tR, Form1->imgScreen->Canvas, sR); //background
bmap1->Canvas->Draw(0,0,bmap.get()); // combine
Form1->imgTemp->Picture->Assign(bmap1.get());
Form1->imgScreen->Canvas->Draw(X, Y, Form1->imgTemp->Picture->Bitmap, alpha);
It displays correctly, but as the graphic gets larger the draw rate slows down quickly...

Do I need to gamma correct the final color output on a modern computer/monitor

I've been under the assumption that my gamma correction pipeline should be as follows:
Use sRGB format for all textures loaded in (GL_SRGB8_ALPHA8) as all art programs pre-gamma correct their files. When sampling from a GL_SRGB8_ALPHA8 texture in a shader OpenGL will automatically convert to linear space.
Do all lighting calculations, post processing, etc. in linear space.
Convert back to sRGB space when writing final color that will be displayed on the screen.
Note that in my case the final color write involves writing from an FBO (which is a linear RGB texture) to the back buffer.
My assumption has been challenged: if I gamma correct in the final stage, my colors are brighter than they should be. I set up a solid color to be drawn by my lights with value { 255, 106, 0 }, but when I render I get { 255, 171, 0 } (as determined by print-screening and color picking). Instead of orange I get yellow. If I don't gamma correct at the final step I get exactly the right value of { 255, 106, 0 }.
According to some resources modern LCD screens mimic CRT gamma. Do they always? If not, how can I tell if I should gamma correct? Am I going wrong somewhere else?
Edit 1
I've now noticed that even though the color I write with the light is correct, places where I use colors from textures are not correct (but rather far darker as I would expect without gamma correction). I don't know where this disparity is coming from.
Edit 2
After trying GL_RGBA8 for my textures instead of GL_SRGB8_ALPHA8, everything looks perfect, even when using the texture values in lighting computations (if I halve the intensity of the light, the output color values are halved).
My code is no longer taking gamma correction into account anywhere, and my output looks correct.
This confuses me even more, is gamma correction no longer needed/used?
Edit 3 - In response to datenwolf's answer
After some more experimenting, I'm confused about a couple of points here.
1 - Most image formats are stored non-linearly (in sRGB space)
I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space: if I compare the pixel values shown by an image editing program with the byte array I get in my program, they match up perfectly. Since my image editor is giving me RGB values, this would indicate the image is stored in RGB.
I'm using stb_image.h/.c to load my images and followed it all the way through loading a .png and did not see anywhere that it gamma corrected the image while loading. I also examined the .bmps in a hex editor and the values on disk matched up for them.
If these images are actually stored on disk in linear RGB space, how am I supposed to (programmatically) know when to specify that an image is in sRGB space? Is there some way to query for this that a more featured image loader might provide? Or is it up to the image creators to save their image as gamma corrected (or not), meaning establishing a convention and following it for a given project? I've asked a couple of artists and neither of them knew what gamma correction is.
If I specify my images are sRGB, they are too dark unless I gamma correct in the end (which would be understandable if the monitor output using sRGB, but see point #2).
2 - "On most computers the effective scanout LUT is linear! What does this mean though?"
I'm not sure I can find where this thought is finished in your response.
From what I can tell, having experimented, all monitors I've tested on output linear values. If I draw a full screen quad and color it with a hard-coded value in a shader with no gamma correction the monitor displays the correct value that I specified.
What the sentence I quoted above from your answer and my results would lead me to believe is that modern monitors output linear values (i.e. do not emulate CRT gamma).
The target platform for our application is the PC. For this platform (excluding people with CRTs or really old monitors), would it be reasonable to do whatever your response to #1 is, then for #2 to not gamma correct (i.e. not perform the final RGB->sRGB transformation - either manually or using GL_FRAMEBUFFER_SRGB)?
If this is so, what are the platforms for which GL_FRAMEBUFFER_SRGB is meant (or where it would be valid to use it today)? Or are monitors that use linear RGB really that new (given that GL_FRAMEBUFFER_SRGB was introduced in 2008)?
--
I've talked to a few other graphics devs at my school and from the sounds of it, none of them have taken gamma correction into account and they have not noticed anything incorrect (some were not even aware of it). One dev in particular said that he got incorrect results when taking gamma into account so he then decided to not worry about gamma. I'm unsure what to do in my project for my target platform given the conflicting information I'm getting online/seeing with my project.
Edit 4 - In response to datenwolf's updated answer
Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values go unmodified from the image to the display, then that nonlinearity has already been pre-applied on the image's pixel values. Which means, that the image is already in a nonlinear color space.
Your response would make sense to me if I was examining the image on my display. To be sure I was clear, when I said I was examining the byte array for the image I mean I was examining the numerical value in memory for the texture, not the image output on the screen (which I did do for point #2). To me the only way I could see what you're saying to be true then is if the image editor was giving me values in sRGB space.
Also note that I did try examining the output on monitor, as well as modifying the texture color (for example, dividing by half or doubling it) and the output appeared correct (measured using the method I describe below).
How did you measure the signal response?
Unfortunately my methods of measurement are far cruder than yours. When I said I experimented on my monitors, what I meant was that I output a solid-color full-screen quad, whose color was hard-coded in a shader, to a plain OpenGL framebuffer (which does not do any color space conversion when written to). When I output white, 75% gray, 50% gray, 25% gray and black, the correct colors are displayed. Now, my interpretation of correct colors here could most certainly be wrong. I take a screenshot and then use an image editing program to see what the values of the pixels are (as well as a visual appraisal to make sure the values make sense). If I understand correctly, if my monitors were non-linear I would need to perform an RGB->sRGB transformation before presenting them to the display device for them to be correct.
I'm not going to lie, I feel I'm getting a bit out of my depth here. The solution I might pursue for my second point of confusion (the final RGB->sRGB transformation) is a tweakable brightness setting, defaulting to what looks correct on my devices (no gamma correction).
First of all you must understand that the nonlinear mapping applied to the color channels is often more than just a simple power function. sRGB nonlinearity can be approximated by about x^2.4, but that's not really the real deal. Anyway your primary assumptions are more or less correct.
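For reference, the exact sRGB encoding defined by IEC 61966-2-1 is a piecewise function, not a pure power law:

C_sRGB = 12.92 * C_lin                     if C_lin <= 0.0031308
C_sRGB = 1.055 * C_lin^(1/2.4) - 0.055     otherwise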
If your textures are stored in the more common image file formats, they will contain the values as they are presented to the graphics scanout. Now there are two common hardware scenarios:
The scanout interface outputs a linear signal and the display device will then internally apply a nonlinear mapping. Old CRT monitors were nonlinear due to their physics: The amplifiers could put only so much current into the electron beam, the phosphor saturating and so on – that's why the whole gamma thing was introduced in the first place, to model the nonlinearities of CRT displays.
Modern LCD and OLED displays either use resistor ladders in their driver amplifiers, or they have gamma ramp lookup tables in their image processors.
Some devices however are linear, and ask the image producing device to supply a proper matching LUT for the desired output color profile on the scanout.
On most computers the effective scanout LUT is linear! What does this mean though? A little detour:
For illustration I quickly hooked up my laptop's analogue display output (VGA connector) to my analogue oscilloscope: blue channel onto scope channel 1, green channel onto scope channel 2, external triggering on the line synchronization signal (HSync). A quick and dirty OpenGL program, deliberately written in immediate mode, was used to generate a linear color ramp:
#include <GL/glut.h>

void display()
{
    GLuint win_width  = glutGet(GLUT_WINDOW_WIDTH);
    GLuint win_height = glutGet(GLUT_WINDOW_HEIGHT);
    glViewport(0, 0, win_width, win_height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_QUAD_STRIP);
    glColor3f(0., 0., 0.);
    glVertex2f(0., 0.);
    glVertex2f(0., 1.);
    glColor3f(1., 1., 1.);
    glVertex2f(1., 0.);
    glVertex2f(1., 1.);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutCreateWindow("linear");
    glutFullScreen();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
The graphics output was configured with the Modeline
"1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -HSync +VSync
(because that's the same mode the flat panel runs in, and I was using cloning mode). The gamma LUTs were configured per channel:
a gamma=2 LUT on the green channel,
a linear (gamma=1) LUT on the blue channel.
This is how the signals of a single scanout line look (upper curve: Ch2 = green, lower curve: Ch1 = blue):
You can clearly see the x⟼x² and x⟼x mappings (parabola and linear shapes of the curves).
Now, after this little detour, we know that the pixel values that go to the main framebuffer go there as they are: the OpenGL linear ramp underwent no further changes, and only when a nonlinear scanout LUT was applied did it alter the signal sent to the display.
Either way the values you present to the scanout (which means the on-screen framebuffers) will undergo a nonlinear mapping at some point in the signal chain. And for all standard consumer devices this mapping will be according to the sRGB standard, because it's the smallest common factor (i.e. images represented in the sRGB color space can be reproduced on most output devices).
Since most programs, like web browsers, assume the output to undergo an sRGB-to-display color space mapping, they simply copy the pixel values of the standard image file formats to the on-screen framebuffer as they are, without performing a color space conversion, thereby implying that the color values within those images are in the sRGB color space (or they will often merely convert to sRGB if the image's color profile is not sRGB). The correct thing to do (if, and only if, the color values written to the framebuffer are scanned out to the display unaltered, assuming the scanout LUT is part of the display) would be a conversion to the specific color profile the display expects.
But this implies that the on-screen framebuffer itself is in sRGB color space (I don't want to split hairs about how idiotic that is, let's just accept this fact).
How to bring this together with OpenGL? First of all, OpenGL does all its color operations linearly. However, since the scanout is expected to be in some nonlinear color space, the end result of OpenGL's rendering operations must somehow be brought into the on-screen framebuffer's color space.
This is where the ARB_framebuffer_sRGB extension (which went core with OpenGL-3) enters the picture, which introduced new flags used for the configuration of window pixelformats:
New Tokens
Accepted by the <attribList> parameter of glXChooseVisual, and by
the <attrib> parameter of glXGetConfig:
GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB 0x20B2
Accepted by the <piAttributes> parameter of
wglGetPixelFormatAttribivEXT, wglGetPixelFormatAttribfvEXT, and
the <piAttribIList> and <pfAttribIList> of wglChoosePixelFormatEXT:
WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB 0x20A9
Accepted by the <cap> parameter of Enable, Disable, and IsEnabled,
and by the <pname> parameter of GetBooleanv, GetIntegerv, GetFloatv,
and GetDoublev:
FRAMEBUFFER_SRGB 0x8DB9
So if you have a window configured with such an sRGB pixelformat and enable sRGB rasterization mode in OpenGL with glEnable(GL_FRAMEBUFFER_SRGB);, the result of the linear colorspace rendering operations will be transformed into the sRGB color space.
Another way would be to render everything into an off-screen FBO and do the color conversion in a postprocessing shader.
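As a sketch, the per-channel conversion such a postprocessing pass applies is just the sRGB encoding function; here is a C++ rendition (in practice this would live in the shader):

#include <cmath>

// Encode one linear-light channel value (in [0, 1]) to sRGB.
float linearToSrgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}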
But that's only the output side of the rendering signal chain. You also have input signals, in the form of textures. And those are usually images, with their pixel values stored nonlinearly. So before they can be used in linear image operations, such images must first be brought into a linear color space. Let's ignore for the time being that mapping nonlinear color spaces into linear color spaces opens several cans of worms of its own; that is why the sRGB color space is so ridiculously small, namely to avoid those problems.
So, to address this, the extension EXT_texture_sRGB was introduced, which turned out to be so vital that it never went through ARB status but went straight into the OpenGL specification itself: behold the GL_SRGB… internal texture formats.
A texture loaded with such a format undergoes an sRGB-to-linear-RGB colorspace transformation before being used to source samples. This gives linear pixel values, suitable for linear rendering operations, and the result can then be validly transformed to sRGB when going to the main on-screen framebuffer.
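For example, uploading an 8-bit image with an sRGB internal format looks like this (width, height and pixels stand for the already-loaded image data):

// The stored bytes stay untouched; the sRGB-to-linear conversion
// is applied when the texture is sampled.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);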
A personal note on the whole issue: Presenting images on the on-screen framebuffer in the target device color space IMHO is a huge design flaw. There's no way to do everything right in such a setup without going insane.
What one really wants is to have the on-screen framebuffer in a linear, contact color space; the natural choice would be CIEXYZ. Rendering operations would naturally take place in the same contact color space. Doing all graphics operations in contact color spaces avoids opening the aforementioned cans of worms involved in trying to push a square peg named linear RGB through a nonlinear, round hole named sRGB.
And although I don't like the design of Weston/Wayland very much, at least it offers the opportunity to actually implement such a display system, by having the clients render and the compositor operate in contact color space and apply the output device's color profiles in a last postprocessing step.
The only drawback of contact color spaces is that it's imperative to use deep color (i.e. more than 12 bits per color channel). In fact, 8 bits are completely insufficient, even with nonlinear RGB (the nonlinearity helps a bit to cover up the lack of perceptible resolution).
Update
I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space: if I compare the pixel values shown by an image editing program with the byte array I get in my program, they match up perfectly. Since my image editor is giving me RGB values, this would indicate the image is stored in RGB.
Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values go unmodified from the image to the display, then that nonlinearity has already been pre-applied on the image's pixel values. Which means, that the image is already in a nonlinear color space.
2 - "On most computers the effective scanout LUT is linear! What does this mean though?
I'm not sure I can find where this thought is finished in your response.
This thought is elaborated in the section that immediately follows, where I show how the values you put into a plain (OpenGL) framebuffer go directly to the monitor, unmodified. The idea of sRGB is "put the values into the images exactly as they are sent to the monitor and build consumer displays to follow that sRGB color space".
From what I can tell, having experimented, all monitors I've tested on output linear values.
How did you measure the signal response? Did you use a calibrated power meter or similar device to measure the light intensity emitted from the monitor in response to the signal? You can't trust your eyes with that, because like all our senses our eyes have a logarithmic signal response.
Update 2
To me the only way I could see what you're saying to be true then is if the image editor was giving me values in sRGB space.
That's indeed the case. Because color management was added to all the widespread graphics systems as an afterthought, most image editors edit pixel values in their destination color space. Note that one particular design parameter of sRGB was that it should merely retroactively specify the unmanaged, direct value transfer color operations as they were (and mostly still are) done on consumer devices. Since no color management happens at all, the values contained in the images and manipulated in the editors must already be in sRGB. This works as long as images are not synthetically created in a linear rendering process; in the case of the latter, the rendering system has to take the destination color space into account.
I take a screenshot and then use an image editing program to see what the values of the pixels are
Which gives you of course only the raw values in the scanout buffer without the gamma LUT and the display nonlinearity applied.
I wanted to give a simple explanation of what went wrong in the initial attempt, because although the accepted answer goes in-depth on colorspace theory, it doesn't really answer that.
The setup of the pipeline was exactly right: use GL_SRGB8_ALPHA8 for textures, GL_FRAMEBUFFER_SRGB (or custom shader code) to convert back to sRGB at the end, and all your intermediate calculations will be using linear light.
The last bit is where you ran into trouble. You wanted a light with a color of (255, 106, 0), but that's an sRGB color, and you're working with linear light. To get the color you want, you need to convert that color to linear space, the same way GL_SRGB8_ALPHA8 does for your textures. For your case, this would be a vec3 light with intensity (1, .1441, 0): the sRGB value decoded into linear space, i.e. with the gamma-compression removed.
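A sketch of that conversion, using the exact sRGB decoding function (the inverse of the encoding given in the accepted answer):

#include <cmath>

// Decode one sRGB channel value (in [0, 1]) to linear light.
float srgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// (255, 106, 0) / 255 = (1.0, 0.4157, 0.0) decodes to roughly (1.0, 0.1441, 0.0).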

C++ and Qt: Paint Program - Rendering Transparent Lines Without Alpha Joint Overlap

I have started to create a paint program that interacts with drawing tablets. Depending on the pressure of the pen on the tablet I change the alpha value of the line being drawn. That mechanism works.
Thin lines look decent, and the result looks like a real sketch. But since I am drawing lines between two points (like in the Qt scribble tutorial), there is an alpha overlap at the line joints, and it is very noticeable for thick strokes.
This is the effect with line-to-line conjunction:
As you can see, there is an ugly alpha blend between the line segments.
In order to solve this I decided to use a QPainterPath to render lines.
Two problems with this:
A long, continuous, thick path quickly lags the program.
Since the path is connected, it acts as one unit, so any change to the alpha value affects the entire path (which I don't want, since I want to preserve a blending effect).
The following images use a QPainterPath.
The blend effect I want to keep.
The following image shows the 2nd problem, where the alpha and thickness of the entire path change:
The red text should read: "if more pressure is added without removing the pen from the tablet surface the line thickens" (and alpha becomes opaque)
Another thing is that with this approach I can only get a blending trail from dark to light (or thick to thin path width), but not light to dark. I am not sure why this effect occurs, but my best guess is that it has to do with the line segments of the path updating as a whole.
I did make the program increase/decrease alpha and line thickness based on the pressure of the pen on the tablet.
The problem is that I want to render lines without the alpha overlap and QPainterPath updates the entire path's alpha and thickness which I don't want.
This is the code that creates the path:
switch(event->type()){
case QEvent::TabletPress:
    if(!onTablet){
        onTablet = true;
        // empty for new segment
        freePainterPath();
        path = new QPainterPath(event->pos());
    }
    break;
case QEvent::TabletRelease:
    if(onTablet)
        onTablet = false;
    break;
case QEvent::TabletMove:
    if(path != NULL)
        path->lineTo(event->pos());
    if(onTablet){
        // checks the pressure of the pen on the tablet to change alpha/line thickness
        brushEffect(event);
        QPainter painter(&pixmap);
        // renders the path
        paintPixmap(painter, event);
    }
    break;
default:
    ;
}
update();
The desired effect that I want as a single path (image created with Krita paint program):
To emulate the Krita paint program:
Keep a backup of the original target surface.
Paint with your brush onto a scratch surface that starts out completely transparent.
On that surface, your compositing rule is "take maximum opacity".
Keep track of the dirty regions of that surface, and do a traditional composite of (scratch surface) onto (original target surface) and display the result. Make sure this operation doesn't damage the original target surface.
Now, you don't have to keep the entire original target surface -- just the parts you have drawn on with this tool. (A good tile based lazy-write imaging system will make this easy).
Depending on the segment size you are drawing with, you may want to interpolate between segments to make the strength of the brush be a bit less sharp. The shape of your brush may also need work. But these are independent of the transparency problem.
As for the Qt strangeness, I don't know enough Qt to tell you how to deal with the quirks of Qt's brush code. But the above "key-mask" strategy should solve your alpha overlap problem.
I do not know how to do this in Qt. Glancing at the Qt compositing modes I don't see an obvious way to say "take maximum" as the resulting alpha. Maybe something involving both color and alpha channels in some clever way.
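That said, here is a hedged Qt sketch of the scratch-surface strategy (canvas, path, thickness and pressureAlpha are illustrative names): draw the stroke fully opaque onto a transparent scratch image, then apply the stroke's alpha exactly once when compositing, so overlapping segments cannot double-blend. Note this simplification applies one alpha per whole stroke; per-segment pressure variation would still need the max-opacity rule described above.

QImage scratch(canvas.size(), QImage::Format_ARGB32_Premultiplied);
scratch.fill(Qt::transparent);

QPainter strokePainter(&scratch);
strokePainter.setRenderHint(QPainter::Antialiasing);
strokePainter.setPen(QPen(Qt::black, thickness, Qt::SolidLine,
                          Qt::RoundCap, Qt::RoundJoin));
strokePainter.drawPath(*path); // fully opaque stroke, no joint overlap
strokePainter.end();

QPainter canvasPainter(&canvas);
canvasPainter.setOpacity(pressureAlpha); // stroke alpha applied exactly once
canvasPainter.drawImage(0, 0, scratch);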
I know this question is very old, and has an accepted answer, but in case someone else needs the answer, here it is:
You need to set the composition mode of the painter to Source. Right now it draws both source and destination.
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through the underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw it over the destination.
I don't know if you still look for an answer, but I hope this helps someone.

Transparent colour being shown some of the time

I am using a LPDIRECT3DTEXTURE9 to hold my image.
This is the function used to display my picture.
int drawcharacter(SPRITE& person, LPDIRECT3DTEXTURE9& image)
{
    position.x = (float)person.x;
    position.y = (float)person.y;
    sprite_handler->Draw(
        image,
        &srcRect,
        NULL,
        &position,
        D3DCOLOR_XRGB(255,255,255));
    return 0;
}
According to the book I have, the RGB colour given as the last parameter will not be displayed on screen; this is how you create transparency.
This works for the most part, but it leaves a pink line around my image and at the edge of the picture. After trial and error I have found that if I go back into Photoshop I can eliminate the pink box by drawing over it with the pink colour. This can be seen with the ships on the left.
I am starting to think that photoshop is blending the edges of the image so that background is not all the same shade of pink though I have no proof.
Can anyone help fix this programmatically, or is the error in the image?
If anyone is good at Photoshop, can they tell me how to fix the image? I use PNG mostly but am willing to change if necessary.
Edit: texture creation code as requested
character_image = LoadTexture("character.bmp", D3DCOLOR_XRGB(255,0,255));
if (character_image == NULL)
    return 0;
You are loading a BMP image, which does not support transparency natively - the last parameter D3DCOLOR_XRGB(255,0,255) is being used to add transparency to an image which doesn't have any. The problem is that the color must match exactly; if it is off by even one, it will not be converted to transparent and you will see the near-magenta showing through.
Save your images as 24-bit PNG with transparency, and if you load them correctly there will be no problems. Also don't add the magenta background before you save them.
As you already use PNG, you can just store the alpha value there directly from Photoshop. PNG supports transparency out of the box, and it can give better appearance than what you get with transparent colour.
It's described in http://www.toymaker.info/Games/html/textures.html (for example).
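A hedged sketch of such loading via D3DX (device stands for the application's LPDIRECT3DDEVICE9; passing 0 as the color key keeps the PNG's own alpha instead of adding a keyed one):

LPDIRECT3DTEXTURE9 character_image = NULL;
D3DXCreateTextureFromFileEx(
    device, "character.png",
    D3DX_DEFAULT, D3DX_DEFAULT, // take width/height from the file
    D3DX_DEFAULT, 0,            // full mip chain, no special usage
    D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, // default filters
    0,                          // color key 0: keep the file's alpha
    NULL, NULL, &character_image);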
Photoshop is anti-aliasing the edge of the image. If it determines that 30% of a pixel is inside the image and 70% is outside, it sets the alpha value for that pixel to 70%. This gives a much smoother result than using a pixel-based transparency mask. You seem to be throwing these alpha values away, is that right? The pink presumably comes from the way that Photoshop displays partially transparent pixels.