Gradient color text - OpenGL

What I actually want to achieve:
I'd like to draw text with a vertical color gradient. I found this solution, but it doesn't quite fit my case: it leaves a black square around the gradient font, and I don't know how to get rid of it. So I started with a simpler (and, for my goal, irrelevant) question to better understand how blending and the frame buffer work in OpenGL and libGDX.
What I was trying to understand (irrelevant to my goal):
I have a texture with a white square on it, which I draw on top of a red background. I am trying to draw a green square on top of the white one; the green square partially covers the white square and partially overlaps the red background (see picture below).
My intention is that the white area behind the green square should be painted green, while the red background should remain unaffected and stay unchanged (red as it is).
How can I do this?
package com.mygdx.game;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;

public class Game extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    private int height;
    private int width;
    private ShapeRenderer shapeRenderer;

    @Override
    public void create() {
        batch = new SpriteBatch();
        img = new Texture("white.png");
        width = Gdx.graphics.getWidth();
        height = Gdx.graphics.getHeight();
        shapeRenderer = new ShapeRenderer();
        shapeRenderer.setAutoShapeType(true);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        batch.begin();
        batch.draw(img, width / 7, height / 4);
        batch.end();

        Gdx.gl.glEnable(GL20.GL_BLEND);
        Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_SRC_COLOR);
        shapeRenderer.begin();
        shapeRenderer.set(ShapeRenderer.ShapeType.Filled);
        shapeRenderer.setColor(Color.GREEN);
        shapeRenderer.rect(width / 2 - 100, height / 4 - 50, 200, 200);
        shapeRenderer.end();
        Gdx.gl.glDisable(GL20.GL_BLEND);
    }

    @Override
    public void dispose() {
        batch.dispose();
        img.dispose();
    }
}
Ideally, the green square should not be transparent at all; it should simply replace the white wherever it covers the white area.
The output I'm getting:
Update:
I marked @Xoppa's answer as correct, as it solves my original question with the following result:

You could indeed use some kind of mask to blend it using a square. For that you can first render the text to the stencil buffer using a custom shader that discards fragments with an alpha value below a certain threshold. After that you can render the square using the stencil function to only affect the fragments "touched" by the text. Note that this does involve multiple render calls though and therefore adds complexity to your calling code as well.
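For illustration only, a rough libGDX sketch of that stencil-based approach might look like the snippet below. It assumes the application was launched with a stencil buffer configured, and that maskShader is a hypothetical ShaderProgram whose fragment shader discards fragments with alpha below a threshold; batch, font, shapeRenderer, topColor and bottomColor are placeholders from the surrounding code:
// Sketch only: requires a stencil buffer (e.g. stencil bits requested in the launcher config)
// and a hypothetical "maskShader" whose fragment shader ends with something like:
//   if (color.a < 0.5) discard;
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT);

// Pass 1: mark the stencil buffer where the text is opaque, without touching the color buffer.
Gdx.gl.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl.glColorMask(false, false, false, false);
Gdx.gl.glStencilFunc(GL20.GL_ALWAYS, 1, 0xFF);
Gdx.gl.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_REPLACE);
batch.setShader(maskShader);
batch.begin();
font.draw(batch, "Hello world", 0, 100);
batch.end();
batch.setShader(null);

// Pass 2: draw the gradient square, but only where the stencil was marked by the text.
Gdx.gl.glColorMask(true, true, true, true);
Gdx.gl.glStencilFunc(GL20.GL_EQUAL, 1, 0xFF);
Gdx.gl.glStencilOp(GL20.GL_KEEP, GL20.GL_KEEP, GL20.GL_KEEP);
shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
shapeRenderer.rect(0, 60, 200, 60, bottomColor, bottomColor, topColor, topColor);
shapeRenderer.end();
Gdx.gl.glDisable(GL20.GL_STENCIL_TEST);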
However, you say that you actually just want to render text using gradient. For that you don't need such complex approach and can simply apply the gradient within the same render call.
When you draw text, you actually render many little squares, one for each character in the text. Each of these squares has a texture region applied that contains the character on a transparent background. If you open the font image (e.g. this is the default), you'll see this source image.
Just like you can apply a gradient to a normal square, you can also apply a gradient to each of the individual squares that make up the text. There are multiple ways to do that; which fits best depends on the use case. For example, if you need a horizontal gradient or have multiline text, then you need some additional steps. Since you didn't specify this, I'm going to assume that you want to apply a vertical gradient to a single line of text:
public class MyGdxGame extends ApplicationAdapter {

    public static class GradientFont extends BitmapFont {
        public static void applyGradient(float[] vertices, int vertexCount, float color1, float color2, float color3, float color4) {
            // Each glyph is a quad of 20 floats (4 vertices * 5 floats); C1..C4 are the offsets
            // of the packed color float for the four corners of that quad.
            for (int index = 0; index < vertexCount; index += 20) {
                vertices[index + SpriteBatch.C1] = color1;
                vertices[index + SpriteBatch.C2] = color2;
                vertices[index + SpriteBatch.C3] = color3;
                vertices[index + SpriteBatch.C4] = color4;
            }
        }

        public GlyphLayout drawGradient(Batch batch, CharSequence str, float x, float y, Color topColor, Color bottomColor) {
            BitmapFontCache cache = getCache();
            float tc = topColor.toFloatBits();
            float bc = bottomColor.toFloatBits();
            cache.clear();
            GlyphLayout layout = cache.addText(str, x, y);
            // Overwrite the per-vertex colors on every font page:
            // bottom corners get the bottom color, top corners the top color.
            for (int page = 0; page < cache.getFont().getRegions().size; page++) {
                applyGradient(cache.getVertices(page), cache.getVertexCount(page), bc, tc, tc, bc);
            }
            cache.draw(batch);
            return layout;
        }
    }

    SpriteBatch batch;
    GradientFont font;
    float topColor;
    float bottomColor;

    @Override
    public void create () {
        batch = new SpriteBatch();
        font = new GradientFont();
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        font.drawGradient(batch, "Hello world", 0, 100, Color.GREEN, Color.BLUE);
        batch.end();
    }

    @Override
    public void dispose () {
        batch.dispose();
        font.dispose();
    }
}
Btw, to get better answers you should include the actual problem you are trying to solve, instead of focusing on what you think is the solution. See also: https://stackoverflow.com/help/asking.

You can fake blending by doing some math; here's what I came up with:
import com.badlogic.gdx.Game;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.math.Rectangle;

public class CalculatedMask extends Game {

    private SpriteBatch batch;      // The SpriteBatch to draw the white image
    private ShapeRenderer renderer; // The ShapeRenderer to draw the green rectangle
    private Texture img;            // The texture of the image
    private Rectangle imgBounds;    // The bounds of the image
    private Rectangle squareBounds; // The bounds of the square
    private float width;            // The width of the screen
    private float height;           // The height of the screen
    private float squareX;          // The x position of the green square
    private float squareY;          // The y position of the green square
    private float squareWidth;      // The width of the green square
    private float squareHeight;     // The height of the green square

    @Override
    public void create() {
        width = Gdx.graphics.getWidth();
        height = Gdx.graphics.getHeight();
        batch = new SpriteBatch();
        renderer = new ShapeRenderer();
        renderer.setAutoShapeType(true);
        img = new Texture("pixel.png"); // A 1x1 white pixel png
        imgBounds = new Rectangle(); // The white image bounds
        imgBounds.setPosition(width / 7f, height / 4f); // Position the white image bounds
        imgBounds.setSize(400f, 300f); // Scale the white image bounds
        calculateRectangle();
    }

    private void calculateRectangle() {
        // Here we define the green rectangle's original position and size
        squareBounds = new Rectangle();
        squareX = width / 2f - 300f;
        squareY = height / 4f - 50f;
        squareWidth = 200f;
        squareHeight = 200f;
        // Adjust green square x position
        squareBounds.x = MathUtils.clamp(squareX, imgBounds.x, imgBounds.x + imgBounds.width);
        // Adjust green square y position
        squareBounds.y = MathUtils.clamp(squareY, imgBounds.y, imgBounds.y + imgBounds.height);
        // Adjust green square width
        if (squareX < imgBounds.x) {
            squareBounds.width = Math.max(squareWidth + squareX - imgBounds.x, 0f);
        } else if (squareX + squareWidth > imgBounds.x + imgBounds.width) {
            squareBounds.width = Math.max(imgBounds.width - squareX + imgBounds.x, 0f);
        } else {
            squareBounds.width = squareWidth;
        }
        // Adjust green square height
        if (squareY < imgBounds.y) {
            squareBounds.height = Math.max(squareHeight + squareY - imgBounds.y, 0f);
        } else if (squareY + squareHeight > imgBounds.y + imgBounds.height) {
            squareBounds.height = Math.max(imgBounds.height - squareY + imgBounds.y, 0f);
        } else {
            squareBounds.height = squareHeight;
        }
    }

    @Override
    public void render() {
        // Clear previous frame
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        // Draw the white image
        batch.begin();
        batch.draw(img, imgBounds.x, imgBounds.y, imgBounds.width, imgBounds.height);
        batch.end();
        // Draw the green rectangle without affecting the background
        renderer.begin();
        renderer.setColor(Color.GREEN);
        // Debug so we can see the real green rectangle
        renderer.set(ShapeRenderer.ShapeType.Line);
        renderer.rect(squareX, squareY, squareWidth, squareHeight);
        // Draw the modified green rectangle
        renderer.set(ShapeRenderer.ShapeType.Filled);
        renderer.rect(squareBounds.x, squareBounds.y, squareBounds.width, squareBounds.height);
        renderer.end();
    }
}
And the results are:
And with:
squareX = width / 2f + 100f;
squareY = height / 4f + 150f;

Related

Qt5 - C++ - QImage - NOT scaling right

I'm getting confused by the way drawImage and scaledToHeight (or any kind of scaling) work. Could any of you help me understand what's going on here?
So I have the following code:
auto cellX = this->parentWidget()->width() / 100;
auto cellY = this->parentWidget()->height() / 100;
QImage icon(dir.absoluteFilePath(m_viewModel.icon));
QImage scaled = icon.scaledToHeight(cellY * 40, Qt::SmoothTransformation);
painter.drawImage(cellX * 20, cellY * 20, scaled);
Now, if I understand it correctly, this should work as follows:
QImage QImage::scaledToHeight(int height, Qt::TransformationMode mode = Qt::FastTransformation) const
Returns a scaled copy of the image. The returned image is scaled to the given height using the specified transformation mode. This function automatically calculates the width of the image so that the ratio of the image is preserved.
If the given height is 0 or negative, a null image is returned.
and also
void QPainter::drawImage(int x, int y, const QImage &image, int sx = 0, int sy = 0, int sw = -1, int sh = -1, Qt::ImageConversionFlags flags = Qt::AutoColor)
This is an overloaded function.
Draws an image at (x, y) by copying a part of image into the paint device.
(x, y) specifies the top-left point in the paint device that is to be drawn onto. (sx, sy) specifies the top-left point in image that is to be drawn. The default is (0, 0).
(sw, sh) specifies the size of the image that is to be drawn. The default, (0, 0) (and negative) means all the way to the bottom-right of the image.
So in other words, scaledToHeight returns a new image scaled to that specific height, and drawImage draws that image starting from the given point (x, y) all the way to the end of the image (because sw and sh default to -1).
QUESTION:
As you can already see, my scaled image strictly depends on the position passed to drawImage. Why is that? How can I scale and draw my image properly? In other words, why does positioning my image at (0, 0) or elsewhere affect how the image looks?
UPDATE:
I have my Widget Class which looks something like this:
class AC_SpeedLevelController : public QWidget {
    Q_OBJECT

protected:
    void paintEvent(QPaintEvent *event) override;

private:
    AC_ButtonViewModel m_viewModel{};

public:
    explicit AC_SpeedLevelController(QWidget *parent);
    void setupStyle(const AC_ButtonViewModel &model) override;
};
My paintEvent is going to look like:
void AC_SpeedLevelController::paintEvent(QPaintEvent *event) {
    QWidget::paintEvent(event);
    QPainter painter(this);
    QDir dir(qApp->applicationDirPath());
    dir.cd("icons");
    auto cellX = this->parentWidget()->width() / 100;
    auto cellY = this->parentWidget()->height() / 100;
    QImage icon(dir.absoluteFilePath(m_viewModel.icon));
    QImage scaled = icon.scaledToHeight(cellY * 20, Qt::SmoothTransformation);
    painter.drawImage(0, 0, scaled);
}

Q3DSurface: Semi-transparent QSurface3DSeries

I tried to render my surface using the alpha channel, but when I set an alpha value, it renders with random colors and is not semi-transparent.
// Init memory
Q3DSurface *poSurface = new Q3DSurface();
QSurface3DSeries *poSeries = new QSurface3DSeries();
QSurfaceDataArray *poDataArray = new QSurfaceDataArray();
// Generating test surface series
for ( int i = 0, k = 0; i < 10; ++i)
{
    QSurfaceDataRow *poRow = new QSurfaceDataRow();
    for ( int j = 0; j < 10; ++j )
    {
        float x = j;
        float y = i;
        float z = k;
        poRow->append( QSurfaceDataItem( QVector3D( x, y, z ) ) );
    }
    poDataArray->append( poRow );
    if ( i % 2 == 0 )
    {
        ++k;
    }
}
//
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
// Setting color with alpha value
poSeries->setBaseColor( QColor( 100, 100, 100, 100 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
What am I doing wrong?
I'm not sure what you mean by "random colours", but at a guess, are you accounting for the default lighting? The effect of the 3D lighting can make colours look different from what they are explicitly set to.
With regard to your transparency setting, I think this code looks fine. You are setting the RGBA values as R=100, G=100, B=100, A=100 which will produce a grey colour. Under the default light this may look like light/dark patches because of the function you have graphed and the way the light "bounces" off the edges.
Try changing your code slightly to see if this is really what is happening:
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
//PICK A DARK THEME THAT WILL HELP TO ILLUSTRATE THE EFFECT
poSurface->activeTheme()->setType(Q3DTheme::ThemeEbony);
//TURN THE AMBIENT LIGHTING UP TO FULL
poSurface->activeTheme()->setAmbientLightStrength(1.0f);
// Setting color with alpha value
//SET IT TO RED WITH A FULL ALPHA CHANNEL
poSeries->setBaseColor( QColor( 100, 0, 0, 255 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
This should produce a dark red image of your graph with a dark background (just to make things clearer). Now put the alpha value back to what you wanted originally and you will see what effect this has on the colouring:
// Setting color with alpha value: "washed out" red colour
poSeries->setBaseColor( QColor( 100, 0, 0, 100 ));
You can probably see that it is the colour (rather than the mesh) that is being rendered at the transparency setting set through "setBaseColor".
Unfortunately I cannot tell you how to render transparently the Q3DSurface itself, but I hope that helps a little.

Light shader moved while resizing window

I've been working on a little light shader.
It works perfectly; I mean, the light fades as it's supposed to, and it's a circle around my character that moves with it.
It would be perfect if it weren't for the resize event.
When SFML resizes the window, it enlarges everything, but in a strange way: it enlarges everything except the shader.
I tried resizing my window (I love resizable pixel-art games, I find them most beautiful, so I don't want to prevent the resize event).
Here's my shader:
uniform vec3 light;

void main(void) {
    float distance = sqrt(pow(gl_FragCoord.x - light.x, 2) + pow(gl_FragCoord.y - light.y, 2));
    float alpha = 1.;
    if (distance <= light.z) {
        alpha = (1.0 / light.z) * distance;
    }
    gl_FragColor = vec4(0., 0., 0., alpha);
}
So, the problem is, my window is shown at 1280 x 736 (to fit 32x32 textures), and I have a 1920 x 1080 monitor. When I enlarge the window to fit 1920 x 1080 (title bar included), the whole thing resizes correctly and everything's fine, but the shader now works on 1920 x 1080 (minus the title bar). So the shader needs different coordinates (what's supposed to be at x = 32, y = 0 is, for the shader, at x = 48, y = 0).
So I was wondering, is it possible to scale the shader along with the whole window? Should I use events or something like that?
Thanks for your answers ^^
EDIT: Here are some pics:
This is the light shader before resizing (it's dark everywhere but around the player, as it's supposed to be).
Then I resize the window; the player doesn't move and the textures fill the entire window, but the light has moved.
So, to explain it properly: when I resize the window, I want everything to fit the window, so it's full of textures. But when I do that, the coordinates given to my shader are still the ones from before the resize, and if I move, the light moves as if I hadn't resized the window, so the light is never on my player again.
I'm not sure it's clearer, but I tried my best.
EDIT2: Here's the code which calls the shader:
void Graphics::UpdateLight() {
    short radius = 65; // 265 on the pictures
    // Center on the middle of the player sprite (CASE_LEN is a const holding the tile size, here 32)
    int x = m_game->GetPlayer()->GetSprite()->getPosition().x + CASE_LEN / 2;
    // ("HEIGHT -" because it seems that y = 0 is at the bottom of the texture for GLSL)
    int y = HEIGHT - (m_game->GetPlayer()->GetSprite()->getPosition().y + CASE_LEN / 2);

    sf::Vector3f shaderLight;
    shaderLight.x = x;
    shaderLight.y = y;
    shaderLight.z = radius;
    m_lightShader.setParameter("light", shaderLight);
}
The code snippet you're showing really only updates the shader coordinates (and from a quick glimpse it looks fine). The bug most likely happens somewhere where you're actually drawing things.
I'd use a completely different approach, because your shader approach might get rather tedious once you're rendering multiple things, other light sources, etc.
As such I'd suggest you render a light map to a render texture (which would essentially be like "black = no light, color = light of that color").
Rather than trying to explain everything in text, I've written a quick commented example program which will draw a window on screen and move some light sources over a background image (I've used the one that comes with SFML's shader example):
There are no requirements other than having a file called "background.jpg" in your startup path.
Feel free to copy this code or use it for inspiration. Just keep in mind this isn't optimized and really just a quick edit to show the general idea.
#include <SFML/Graphics.hpp>
#include <vector>
#include <cmath>

const float PI = 3.1415f;

struct Light
{
    sf::Vector2f position;
    sf::Color color;
    float radius;
};

int main()
{
    // Let's setup a window
    sf::RenderWindow window(sf::VideoMode(640, 480), "SFML Lights");
    window.setVerticalSyncEnabled(false);
    window.setFramerateLimit(60);

    // Create something simple to draw
    sf::Texture texture;
    texture.loadFromFile("background.jpg");
    sf::Sprite background(texture);

    // Setup everything for the lightmap
    sf::RenderTexture lightmapTex;
    // We're using a 512x512 render texture for max. compatibility
    // On modern hardware it could match the window resolution of course
    lightmapTex.create(512, 512);
    sf::Sprite lightmap(lightmapTex.getTexture());
    // Scale the sprite to fill the window
    lightmap.setScale(640 / 512.f, 480 / 512.f);
    // Set the lightmap's view to the same as the window
    lightmapTex.setView(window.getDefaultView());

    // Drawable helper to draw lights
    // We'll just have to adjust the first vertex's color to tint it
    sf::VertexArray light(sf::PrimitiveType::TriangleFan);
    light.append({sf::Vector2f(0, 0), sf::Color::White});
    // This is inaccurate, but for demo purposes…
    // This could be more elaborate to allow better graduation etc.
    for (float i = 0; i <= 2 * PI; i += PI * .125f)
        light.append({sf::Vector2f(std::sin(i), std::cos(i)), sf::Color::Transparent});

    // Setup some lights
    std::vector<Light> lights;
    lights.push_back({sf::Vector2f(50.f, 50.f), sf::Color::White, 100.f });
    lights.push_back({sf::Vector2f(350.f, 150.f), sf::Color::Red, 150.f });
    lights.push_back({sf::Vector2f(150.f, 250.f), sf::Color::Yellow, 200.f });
    lights.push_back({sf::Vector2f(250.f, 450.f), sf::Color::Cyan, 100.f });

    // RenderStates helper to transform and draw lights
    sf::RenderStates rs(sf::BlendAdd);

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            switch (event.type) {
            case sf::Event::Closed:
                window.close();
                break;
            }
        }

        bool flip = false; // simple toggle to animate differently

        // Draw the light map
        lightmapTex.clear(sf::Color::Black);
        for (Light &l : lights)
        {
            // Apply all light attributes and render it
            // Reset the transformation
            rs.transform = sf::Transform::Identity;
            // Move the light
            rs.transform.translate(l.position);
            // And scale it (this could be animated to create flicker)
            rs.transform.scale(l.radius, l.radius);
            // Adjust the light color (first vertex)
            light[0].color = l.color;
            // Draw the light
            lightmapTex.draw(light, rs);

            // To make things a bit more interesting
            // We're moving the lights
            l.position.x += flip ? 2 : -2;
            flip = !flip;
            if (l.position.x > 640)
                l.position.x -= 640;
            else if (l.position.x < 0)
                l.position.x += 640;
        }
        lightmapTex.display();

        window.clear(sf::Color::White);
        // Draw the background / game
        window.draw(background);
        // Draw the lightmap
        window.draw(lightmap, sf::BlendMultiply);
        window.display();
    }
}

Why does the drawString method not always start at the given coordinates?

In my code I cannot draw a String at precise coordinates. Its upper left corner does not start at the given coordinates but somewhere else. However, if I draw a rectangle from the same given coordinates, it is placed correctly. How on earth is this behaviour possible?
Here is the code I call in the beforeShow() method:
Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
Image watermark = fetchResourceFile().getImage("Watermark.png");

f.setLayout(new LayeredLayout());
final Label drawing = new Label();
f.addComponent(drawing);

// Mutable image we will draw into (white background by default)
Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
// Paint all the stuff
paintAll(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
drawing.getUnselectedStyle().setBgImage(mutableImage);
drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);

// Save the graphics
// Save the image with the ImageIO class
long time = new Date().getTime();
OutputStream os;
try {
    os = Storage.getInstance().createOutputStream("screenshot_" + Long.toString(time) + ".png");
    ImageIO.getImageIO().save(mutableImage, os, ImageIO.FORMAT_PNG, 1.0f);
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
And the paintAll method
public void paintAll(Graphics g, Image background, Image watermark, int width, int height) {
    // Full quality
    float saveQuality = 1.0f;
    // Create image as buffer
    Image imageBuffer = Image.createImage(width, height, 0xffffff);
    // Create graphics out of image object
    Graphics imageGraphics = imageBuffer.getGraphics();
    // Do your drawing operations on the graphics from the image
    imageGraphics.drawImage(background, 0, 0);
    imageGraphics.drawImage(watermark, 0, 0);
    imageGraphics.setColor(0xFF0000);
    // Upper left corner
    imageGraphics.fillRect(0, 0, 10, 10);
    // Lower right corner
    imageGraphics.setColor(0x00FF00);
    imageGraphics.fillRect(width - 10, height - 10, 10, 10);
    imageGraphics.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    imageGraphics.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    int w = 0, h = 0;
    imageGraphics.drawString("HelloWorld", w, h);
    // Top corner of the string
    imageGraphics.setColor(0x0000FF);
    imageGraphics.fillRect(w, h, 20, 20);
    // Draw the complete image on your Graphics object g (the screen I guess)
    g.drawImage(imageBuffer, 0, 0);
}
Result for w = 0, h = 0 (no apparent offset):
Result for w = 841, h = 610 (an offset appears on both axes: there is an offset between the blue point near the Mercedes M on the windscreen and the Hello World string)
EDIT1:
I also read this SO question for Android where it is advised to convert dpi into pixels. Does it also apply in Codename One? If so, how can I do that? I tried
Display.getInstance().convertToPixel(measureInMillimeterFromGimp)
without success (I used mm because the javadoc says that dpi is roughly 1 mm).
Any help would be appreciated,
Cheers
Both g and imageGraphics are the same graphics created twice, which might have some implications (not really sure)...
You also set the mutable image as the background of a style before you finished drawing it. I don't know if this is the reason for the oddities you are seeing, but I would suspect that code.
Inspired by Gabriel Hass' answer, I finally made it work using another intermediate Image: I write the String at (0; 0) on it and then draw this image onto the imageBuffer Image at the right coordinates. It works, but to my mind drawString(Image, Coordinates) should directly draw at the given coordinates, shouldn't it @Shai?
Here is the paintAll method I used to solve my problem (the beforeShow code hasn't changed):
// Full quality
float saveQuality = 1.0f;
String mess = "HelloWorld";
// Create image as buffer
Image imageBuffer = Image.createImage(width, height, 0xffffff);
// Create graphics out of image object
Graphics imageGraphics = imageBuffer.getGraphics();
// Do your drawing operations on the graphics from the image
imageGraphics.drawImage(background, 0, 0);
imageGraphics.drawImage(watermark, 0, 0);
imageGraphics.setColor(0xFF0000);
// Upper left corner
imageGraphics.fillRect(0, 0, 10, 10);
// Lower right corner
imageGraphics.setColor(0x00FF00);
imageGraphics.fillRect(width - 10, height - 10, 10, 10);
// Create an intermediate image just with the message string (will be moved to the right coordinates later)
Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(150, Font.STYLE_BOLD);
// Get the message dimensions
int messWidth = f.stringWidth(mess);
int messHeight = f.getHeight();
Image messageImageBuffer = Image.createImage(messWidth, messHeight, 0xffffff);
Graphics messageImageGraphics = messageImageBuffer.getGraphics();
messageImageGraphics.setColor(0xFF0000);
messageImageGraphics.setFont(f);
// Write the string at (0; 0)
messageImageGraphics.drawString(mess, 0, 0);
// Move the string to its final location right below the M from Mercedes on the car windscreen (measured in Gimp)
int w = 841, h = 610;
imageGraphics.drawImage(messageImageBuffer, w, h);
// This "point" is expected to be on the lower left corner of the M letter from Mercedes and on the upper left corner of the message string
imageGraphics.setColor(0x0000FF);
imageGraphics.fillRect(w, h, 20, 20);
// Draw the complete image on your Graphics object g
g.drawImage(imageBuffer, 0, 0);

How to update Geometry properly

I am trying to display a point cloud, consisting of vertices and colors, with OSG. Displaying a static point cloud is rather easy with this guide.
But I am not capable of updating such a point cloud. My intention is to create a geometry and attach it to my viewer class once.
This is the mentioned method which is called once in the beginning.
The OSGWidget strongly depends on this OpenGLWidget based approach.
void OSGWidget::attachGeometry(osg::ref_ptr<osg::Geometry> geom)
{
    osg::Geode* geode = new osg::Geode;
    geom->setDataVariance(osg::Object::DYNAMIC);
    geom->setUseDisplayList(false);
    geom->setUseVertexBufferObjects(true);
    bool addDrawSuccess = geode->addDrawable(geom.get()); // Adding Drawable Shape to the geometry node
    if (!addDrawSuccess)
    {
        throw "Adding Drawable failed!";
    }

    {
        osg::StateSet* stateSet = geode->getOrCreateStateSet();
        stateSet->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
    }

    float aspectRatio = static_cast<float>(this->width()) / static_cast<float>(this->height());

    // Setting up the camera
    osg::Camera* camera = new osg::Camera;
    camera->setViewport(0, 0, this->width(), this->height());
    camera->setClearColor(osg::Vec4(0.f, 0.f, 0.f, 1.f)); // Kind of Backgroundcolor, clears the buffer and sets the default color (RGBA)
    camera->setProjectionMatrixAsPerspective(30.f, aspectRatio, 1.f, 1000.f); // Create perspective projection
    camera->setGraphicsContext(graphicsWindow_); // embed

    osgViewer::View* view = new osgViewer::View;
    view->setCamera(camera); // Set the defined camera
    view->setSceneData(geode); // Set the geometry
    view->addEventHandler(new osgViewer::StatsHandler);

    osgGA::TrackballManipulator* manipulator = new osgGA::TrackballManipulator;
    manipulator->setAllowThrow(false);
    view->setCameraManipulator(manipulator);

    ///////////////////////////////////////////////////
    // Set the viewer
    //////////////////////////////////////////////////
    viewer_->addView(view);
    viewer_->setThreadingModel(osgViewer::CompositeViewer::SingleThreaded);
    viewer_->realize();

    this->setFocusPolicy(Qt::StrongFocus);
    this->setMinimumSize(100, 100);
    this->setMouseTracking(true);
}
After I have 'attached' the geometry, I try to update it like this:
void PointCloudViewOSG::processData(DepthDataSet depthData)
{
    if (depthData.points()->empty())
    {
        return; // empty cloud, cannot do anything
    }

    const DepthDataSet::IndexPtr::element_type& index = *depthData.index();
    const size_t nPixel = depthData.points().get()->points.size();

    if (depthData.intensity().isValid() && !index.empty() )
    {
        for (int i = 0; i < nPixel; i++)
        {
            float x = depthData.points().get()->points[i].x;
            float y = depthData.points().get()->points[i].y;
            float z = depthData.points().get()->points[i].z;
            m_vertices->push_back(osg::Vec3(x, y, z));

            // 32 bit integer variable containing the rgb (8 bit per channel) value
            uint32_t rgb_val_;
            memcpy(&rgb_val_, &(depthData.points().get()->points[i].rgb), sizeof(uint32_t));

            uint32_t red, green, blue;
            blue = rgb_val_ & 0x000000ff;
            rgb_val_ = rgb_val_ >> 8;
            green = rgb_val_ & 0x000000ff;
            rgb_val_ = rgb_val_ >> 8;
            red = rgb_val_ & 0x000000ff;

            m_colors->push_back(
                osg::Vec4f((float)red / 255.0f,
                           (float)green / 255.0f,
                           (float)blue / 255.0f,
                           1.0f)
            );
        }

        m_geometry->setVertexArray(m_vertices.get());
        m_geometry->setColorArray(m_colors.get());
        m_geometry->setColorBinding(osg::Geometry::BIND_PER_VERTEX);
        m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
    }
}
My guess is that
addPrimitiveSet(...)
should not be called every time I update the geometry.
Or could it be the attachment of the geometry, so that I have to reattach it every time?
The Point Cloud Library (PCL) is unfortunately not an alternative, because of some incompatibilities with my application.
Update: When I reattach the geometry to the OSGWidget class, calling
this->attachGeometry(m_geometry)
after
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
my point cloud becomes visible, but this procedure is definitely wrong, since I lose way too much performance and the display driver crashes.
You need to set the array and add the primitive set only once; after that, you can update the vertices like this:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
for (int i = 0; i < nPixel; i++)
{
    float x, y, z;
    // fill with your data...
    (*vx)[i].set(x, y, z);
}
m_vertices->dirty();
The same goes for colors and other arrays.
As you're using VBOs, you don't need to call dirtyDisplayList().
If you instead need to recompute the bounding box of the geometry, call
m_geometry->dirtyBound()
In case the number of points changes between updates, you can push new vertices into the array if its size is too small, and update the PrimitiveSet count like this:
osg::DrawArrays* drawArrays = static_cast<osg::DrawArrays*>(m_geometry->getPrimitiveSet(0));
drawArrays->setCount(nPixel);
drawArrays->dirty();
rickvikings' solution works - I only had one issue... (OSG 3.6.1 on OSX)
I had to modify the m_vertices array directly; OSG would crash if I used the static_cast method above to modify the vertices array:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
For some reason OSG would not create a buffer object in the vertices array class if using the static_cast approach.