Q3DSurface: Semi-transparent QSurface3DSeries - c++

I tried to render my surface using the alpha channel, but when I set an alpha value the surface renders with random colors and is not semi-transparent.
// Init memory
Q3DSurface *poSurface = new Q3DSurface();
QSurface3DSeries *poSeries = new QSurface3DSeries();
QSurfaceDataArray *poDataArray = new QSurfaceDataArray();
// Generating test surface series
for ( int i = 0, k = 0; i < 10; ++i)
{
QSurfaceDataRow *poRow = new QSurfaceDataRow();
for ( int j = 0; j < 10; ++j )
{
float x = j;
float y = i;
float z = k;
poRow->append( QSurfaceDataItem( QVector3D( x, y, z ) ) );
}
poDataArray->append( poRow );
if ( i % 2 == 0 )
{
++k;
}
}
//
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
// Setting color with alpha value
poSeries->setBaseColor( QColor( 100, 100, 100, 100 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
What am I doing wrong?

I'm not sure what you mean by "random colours", but at a guess, are you accounting for the default lighting? The effect of the 3D lighting can make colours look different from what they are explicitly set to.
With regard to your transparency setting, I think this code looks fine. You are setting the RGBA values as R=100, G=100, B=100, A=100, which will produce a grey colour. Under the default light this may look like light/dark patches because of the function you have graphed and the way the light "bounces" off the edges.
Try changing your code slightly to see if this is really what is happening:
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
//PICK A DARK THEME THAT WILL HELP TO ILLUSTRATE THE EFFECT
poSurface->activeTheme()->setType(Q3DTheme::ThemeEbony);
//TURN THE AMBIENT LIGHTING UP TO FULL
poSurface->activeTheme()->setAmbientLightStrength(1.0f);
// Setting color with alpha value
//SET IT TO RED WITH A FULL ALPHA CHANNEL
poSeries->setBaseColor( QColor( 100, 0, 0, 255 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
This should produce a dark red image of your graph with a dark background (just to make things clearer). Now put the alpha value back to what you wanted originally and you will see what effect this has on the colouring:
// Setting color with alpha value: "washed out" red colour
poSeries->setBaseColor( QColor( 100, 0, 0, 100 ));
You can probably see that it is the colour (rather than the mesh itself) that is rendered with the transparency you set through setBaseColor().
Unfortunately I cannot tell you how to render transparently the Q3DSurface itself, but I hope that helps a little.

Gradient color text

What I actually want to achieve:
I'd like to draw text with a vertical gradient color. I found this solution, but it doesn't quite fit my case: it leaves a black square around the gradient font and I don't know how to get rid of it. So I started a simpler (and, for my goal, irrelevant) question to better understand how blending and the frame buffer work in OpenGL and libgdx.
What I was trying to understand, irrelevant to my goal:
I have a texture with a white square on it, which I draw on top of a red background. I am trying to draw a green square on top of the white one; the green square partially covers the white square and partially overlaps the red background (see picture below).
My intention is: the white area behind the green square should be painted green, but the red background should not be affected and should stay red.
How can I do this?
package com.mygdx.game;
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
public class Game extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
private int height;
private int width;
private ShapeRenderer shapeRenderer;
@Override
public void create() {
batch = new SpriteBatch();
img = new Texture("white.png");
width = Gdx.graphics.getWidth();
height = Gdx.graphics.getHeight();
shapeRenderer = new ShapeRenderer();
shapeRenderer.setAutoShapeType(true);
}
@Override
public void render() {
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw(img, width / 7, height / 4);
batch.end();
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_SRC_COLOR);
shapeRenderer.begin();
shapeRenderer.set(ShapeRenderer.ShapeType.Filled);
shapeRenderer.setColor(Color.GREEN);
shapeRenderer.rect(width / 2 - 100, height / 4 - 50, 200, 200);
shapeRenderer.end();
Gdx.gl.glDisable(GL20.GL_BLEND);
}
@Override
public void dispose() {
batch.dispose();
img.dispose();
}
}
Ideally, the green square should not be transparent at all; it should simply block out the white image wherever it covers it.
The output I'm getting:
Update:
I marked @Xoppa's answer as correct, as it solves my original question with the following result:
You could indeed use some kind of mask to blend it using a square. For that you can first render the text to the stencil buffer using a custom shader that discards fragments with an alpha value below a certain threshold. After that you can render the square using the stencil function to only affect the fragments "touched" by the text. Note that this does involve multiple render calls though and therefore adds complexity to your calling code as well.
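For reference, here is a minimal sketch of that two-pass stencil approach at the raw OpenGL level (the level that libgdx's Gdx.gl calls wrap). drawTextWithAlphaDiscardShader() and drawGreenSquare() are hypothetical placeholders for your own draw calls, and the window must have been created with a stencil buffer:
#include <GL/gl.h>
// Hypothetical helpers, assumed to exist in your own code:
void drawTextWithAlphaDiscardShader(); // fragment shader discards fragments whose alpha is below a threshold
void drawGreenSquare();                // the square (or gradient quad) you want masked by the text
void drawSquareMaskedByText()
{
    // Pass 1: write the text silhouette into the stencil buffer only.
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // don't touch the colour buffer
    glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // always pass the stencil test
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           // write 1 wherever a text fragment survives the discard
    drawTextWithAlphaDiscardShader();
    // Pass 2: draw the square, but only where the stencil value is 1 (i.e. "touched" by the text).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawGreenSquare();
    glDisable(GL_STENCIL_TEST);
}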
However, you say that you actually just want to render text using gradient. For that you don't need such complex approach and can simply apply the gradient within the same render call.
When you draw text, you actually render many little squares, one for each character in the text. Each of these squares has a TextureRegion applied that contains the character on a transparent background. If you open the font image (e.g. this is the default), then you'll see this source image.
Just like you can apply a gradient to a normal square, you can also apply a gradient to each of those individual squares that make up the text. There are multiple ways to do that; which one suits best depends on the use case. For example, if you need a horizontal gradient or have multiline text, then you need some additional steps. Since you didn't specify this, I'm going to assume that you want to apply a vertical gradient on a single line of text:
public class MyGdxGame extends ApplicationAdapter {
public static class GradientFont extends BitmapFont {
public static void applyGradient(float[] vertices, int vertexCount, float color1, float color2, float color3, float color4) {
for (int index = 0; index < vertexCount; index += 20) {
vertices[index + SpriteBatch.C1] = color1;
vertices[index + SpriteBatch.C2] = color2;
vertices[index + SpriteBatch.C3] = color3;
vertices[index + SpriteBatch.C4] = color4;
}
}
public GlyphLayout drawGradient(Batch batch, CharSequence str, float x, float y, Color topColor, Color bottomColor) {
BitmapFontCache cache = getCache();
float tc = topColor.toFloatBits();
float bc = bottomColor.toFloatBits();
cache.clear();
GlyphLayout layout = cache.addText(str, x, y);
for (int page = 0; page < cache.getFont().getRegions().size; page++) {
applyGradient(cache.getVertices(page), cache.getVertexCount(page), bc, tc, tc, bc);
}
cache.draw(batch);
return layout;
}
}
SpriteBatch batch;
GradientFont font;
float topColor;
float bottomColor;
@Override
public void create () {
batch = new SpriteBatch();
font = new GradientFont();
}
@Override
public void render () {
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
font.drawGradient(batch, "Hello world", 0, 100, Color.GREEN, Color.BLUE);
batch.end();
}
@Override
public void dispose () {
batch.dispose();
font.dispose();
}
}
Btw, to get better answers you should include the actual problem you are trying to solve, instead of focusing on what you think is the solution. See also: https://stackoverflow.com/help/asking.
You can fake the blending by doing some math. Here's what I came up with:
import com.badlogic.gdx.Game;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.math.Rectangle;
public class CalculatedMask extends Game {
private SpriteBatch batch; // The SpriteBatch to draw the white image
private ShapeRenderer renderer; // The ShapeRenderer to draw the green rectangle
private Texture img; // The texture of the image
private Rectangle imgBounds; // The bounds of the image
private Rectangle squareBounds; // The bounds of the square
private float width; // The width of the screen
private float height; // The height of the screen
private float squareX; // The x position of the green square
private float squareY; // The y position of the green square
private float squareWidth; // The width of the green square
private float squareHeight; // The height of the green square
@Override
public void create() {
width = Gdx.graphics.getWidth();
height = Gdx.graphics.getHeight();
batch = new SpriteBatch();
renderer = new ShapeRenderer();
renderer.setAutoShapeType(true);
img = new Texture("pixel.png"); // A 1x1 white pixel png
imgBounds = new Rectangle(); // The white image bounds
imgBounds.setPosition(width / 7f, height / 4f); // Position the white image bounds
imgBounds.setSize(400f, 300f); // Scale the white image bounds
calculateRectangle();
}
private void calculateRectangle() {
// Here we define the green rectangle's original position and size
squareBounds = new Rectangle();
squareX = width / 2f - 300f;
squareY = height / 4f - 50f;
squareWidth = 200f;
squareHeight = 200f;
// Adjust green square x position
squareBounds.x = MathUtils.clamp(squareX, imgBounds.x, imgBounds.x + imgBounds.width);
// Adjust green square y position
squareBounds.y = MathUtils.clamp(squareY, imgBounds.y, imgBounds.y + imgBounds.height);
// Adjust green square width
if (squareX < imgBounds.x) {
squareBounds.width = Math.max(squareWidth + squareX - imgBounds.x, 0f);
} else if (squareX + squareWidth > imgBounds.x + imgBounds.width) {
squareBounds.width = Math.max(imgBounds.width - squareX + imgBounds.x, 0f);
} else {
squareBounds.width = squareWidth;
}
// Adjust green square height
if (squareY < imgBounds.y) {
squareBounds.height = Math.max(squareHeight + squareY - imgBounds.y, 0f);
} else if (squareY + squareHeight > imgBounds.y + imgBounds.height) {
squareBounds.height = Math.max(imgBounds.height - squareY + imgBounds.y, 0f);
} else {
squareBounds.height = squareHeight;
}
}
@Override
public void render() {
// Clear previous frame
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
// Draw the white image
batch.begin();
batch.draw(img, imgBounds.x, imgBounds.y, imgBounds.width, imgBounds.height);
batch.end();
// Draw the green rectangle without affecting background
renderer.begin();
renderer.setColor(Color.GREEN);
// Debug so we can see the real green rectangle
renderer.set(ShapeRenderer.ShapeType.Line);
renderer.rect(squareX, squareY, squareWidth, squareHeight);
// Draw the modified green rectangle
renderer.set(ShapeRenderer.ShapeType.Filled);
renderer.rect(squareBounds.x, squareBounds.y, squareBounds.width, squareBounds.height);
renderer.end();
}
}
And the results are:
And with:
squareX = width / 2f + 100f;
squareY = height / 4f + 150f;

Why does the drawString method not always start at the given coordinates?

In my code I cannot draw a String at precise coordinates. Its upper left corner does not start at the given coordinates but somewhere else. However, if I draw a rectangle from the same coordinates it is placed correctly. How on earth is this behaviour possible?
Here is the code I call in the beforeShow() method:
Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
Image watermark = fetchResourceFile().getImage("Watermark.png");
f.setLayout(new LayeredLayout());
final Label drawing = new Label();
f.addComponent(drawing);
// Mutable image we will draw into (white background by default)
Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
// Paint all the stuff
paintAll(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
drawing.getUnselectedStyle().setBgImage(mutableImage);
drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);
// Save the graphics
// Save the image with the ImageIO class
long time = new Date().getTime();
OutputStream os;
try {
os = Storage.getInstance().createOutputStream("screenshot_" + Long.toString(time) + ".png");
ImageIO.getImageIO().save(mutableImage, os, ImageIO.FORMAT_PNG, 1.0f);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
And the paintAll method
public void paintAll(Graphics g, Image background, Image watermark, int width, int height) {
// Full quality
float saveQuality = 1.0f;
// Create image as buffer
Image imageBuffer = Image.createImage(width, height, 0xffffff);
// Create graphics out of image object
Graphics imageGraphics = imageBuffer.getGraphics();
// Do your drawing operations on the graphics from the image
imageGraphics.drawImage(background, 0, 0);
imageGraphics.drawImage(watermark, 0, 0);
imageGraphics.setColor(0xFF0000);
// Upper left corner
imageGraphics.fillRect(0, 0, 10, 10);
// Lower right corner
imageGraphics.setColor(0x00FF00);
imageGraphics.fillRect(width - 10, height - 10, 10, 10);
imageGraphics.setColor(0xFF0000);
Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
imageGraphics.setFont(f);
// Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
int w = 0, h = 0;
imageGraphics.drawString("HelloWorld", w, h);
// Top-right corner of the string
imageGraphics.setColor(0x0000FF);
imageGraphics.fillRect(w, h, 20, 20);
// Draw the complete image on your Graphics object g (the screen I guess)
g.drawImage(imageBuffer, 0, 0);
}
Result for w = 0, h = 0 (no apparent offset):
Result for w = 841, h = 610 (the offset appears on both axes: there is an offset between the blue point near the Mercedes M on the windscreen and the Hello World string)
EDIT1:
I also read this SO question for Android, where it is advised to convert dpi into pixels. Does this also apply in Codename One? If so, how can I do that? I tried
Display.getInstance().convertToPixel(measureInMillimeterFromGimp)
without success (I used mm because the javadoc says that a dpi is roughly 1 mm).
Any help would be appreciated,
Cheers
Both g and imageGraphics are the same graphics created twice, which might have some implications (not really sure)...
You also set the mutable image to the background of a style before you finished drawing it. I don't know if this will be the reason for the oddities you are seeing but I would suspect that code.
Inspired by Gabriel Hass' answer, I finally made it work using another intermediate Image, writing the String at (0; 0) into it and then drawing that image onto the imageBuffer Image at the right coordinates. It works, but to my mind drawString(Image, coordinates) should directly draw at the given coordinates, shouldn't it @Shai?
Here is the paintAll method I used to solve my problem (the beforeShow code hasn't changed):
// Full quality
float saveQuality = 1.0f;
String mess = "HelloWorld";
// Create image as buffer
Image imageBuffer = Image.createImage(width, height, 0xffffff);
// Create graphics out of image object
Graphics imageGraphics = imageBuffer.getGraphics();
// Do your drawing operations on the graphics from the image
imageGraphics.drawImage(background, 0, 0);
imageGraphics.drawImage(watermark, 0, 0);
imageGraphics.setColor(0xFF0000);
// Upper left corner
imageGraphics.fillRect(0, 0, 10, 10);
// Lower right corner
imageGraphics.setColor(0x00FF00);
imageGraphics.fillRect(width - 10, height - 10, 10, 10);
// Create an intermediate image just with the message string (will be moved to the right coordinates later)
Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(150, Font.STYLE_BOLD);
// Get the message dimensions
int messWidth = f.stringWidth(mess);
int messHeight = f.getHeight();
Image messageImageBuffer = Image.createImage(messWidth, messHeight, 0xffffff);
Graphics messageImageGraphics = messageImageBuffer.getGraphics();
messageImageGraphics.setColor(0xFF0000);
messageImageGraphics.setFont(f);
// Write the string at (0; 0)
messageImageGraphics.drawString(mess, 0, 0);
// Move the string to its final location right below the M from Mercedes on the car windscreen (measured in Gimp)
int w = 841, h = 610;
imageGraphics.drawImage(messageImageBuffer, w, h);
// This "point" is expected to be on the lower left corner of the M letter from Mercedes and on the upper left corner of the message string
imageGraphics.setColor(0x0000FF);
imageGraphics.fillRect(w, h, 20, 20);
// Draw the complete image on your Graphics object g
g.drawImage(imageBuffer, 0, 0);

How to update Geometry properly

I am trying to display a point cloud, consisting of vertices and colors, with OSG. Displaying a static point cloud is rather easy with this guide.
But I am not able to update such a point cloud. My intention is to create a geometry and attach it to my viewer class once.
This is the mentioned method which is called once in the beginning.
The OSGWidget strongly depends on this OpenGLWidget based approach.
void OSGWidget::attachGeometry(osg::ref_ptr<osg::Geometry> geom)
{
osg::Geode* geode = new osg::Geode;
geom->setDataVariance(osg::Object::DYNAMIC);
geom->setUseDisplayList(false);
geom->setUseVertexBufferObjects(true);
bool addDrawSuccess = geode->addDrawable(geom.get()); // Adding Drawable Shape to the geometry node
if (!addDrawSuccess)
{
throw "Adding Drawable failed!";
}
{
osg::StateSet* stateSet = geode->getOrCreateStateSet();
stateSet->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
}
float aspectRatio = static_cast<float>(this->width()) / static_cast<float>(this->height());
// Setting up the camera
osg::Camera* camera = new osg::Camera;
camera->setViewport(0, 0, this->width(), this->height());
camera->setClearColor(osg::Vec4(0.f, 0.f, 0.f, 1.f)); // Kind of Backgroundcolor, clears the buffer and sets the default color (RGBA)
camera->setProjectionMatrixAsPerspective(30.f, aspectRatio, 1.f, 1000.f); // Create perspective projection
camera->setGraphicsContext(graphicsWindow_); // embed
osgViewer::View* view = new osgViewer::View;
view->setCamera(camera); // Set the defined camera
view->setSceneData(geode); // Set the geometry
view->addEventHandler(new osgViewer::StatsHandler);
osgGA::TrackballManipulator* manipulator = new osgGA::TrackballManipulator;
manipulator->setAllowThrow(false);
view->setCameraManipulator(manipulator);
///////////////////////////////////////////////////
// Set the viewer
//////////////////////////////////////////////////
viewer_->addView(view);
viewer_->setThreadingModel(osgViewer::CompositeViewer::SingleThreaded);
viewer_->realize();
this->setFocusPolicy(Qt::StrongFocus);
this->setMinimumSize(100, 100);
this->setMouseTracking(true);
}
After I have 'attached' the geometry, I am trying to update the geometry like this
void PointCloudViewOSG::processData(DepthDataSet depthData)
{
if (depthData.points()->empty())
{
return; // empty cloud, cannot do anything
}
const DepthDataSet::IndexPtr::element_type& index = *depthData.index();
const size_t nPixel = depthData.points().get()->points.size();
if (depthData.intensity().isValid() && !index.empty() )
{
for (int i = 0; i < nPixel; i++)
{
float x = depthData.points().get()->points[i].x;
float y = depthData.points().get()->points[i].y;
float z = depthData.points().get()->points[i].z;
m_vertices->push_back(osg::Vec3(x
, y
, z));
// 32 bit integer variable containing the rgb (8 bit per channel) value
uint32_t rgb_val_;
memcpy(&rgb_val_, &(depthData.points().get()->points[i].rgb), sizeof(uint32_t));
uint32_t red, green, blue;
blue = rgb_val_ & 0x000000ff;
rgb_val_ = rgb_val_ >> 8;
green = rgb_val_ & 0x000000ff;
rgb_val_ = rgb_val_ >> 8;
red = rgb_val_ & 0x000000ff;
m_colors->push_back(
osg::Vec4f((float)red / 255.0f,
(float)green / 255.0f,
(float)blue / 255.0f,
1.0f)
);
}
m_geometry->setVertexArray(m_vertices.get());
m_geometry->setColorArray(m_colors.get());
m_geometry->setColorBinding(osg::Geometry::BIND_PER_VERTEX);
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
}
}
My guess is that addPrimitiveSet(...) should not be called every time I update the geometry.
Or could the problem be the attachment of the geometry, so that I have to reattach it every time?
The Point Cloud Library (PCL) is unfortunately not an alternative because of some incompatibilities with my application.
Update: When I reattach the geometry to the OSGWidget class by calling
this->attachGeometry(m_geometry)
after
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
the point cloud becomes visible, but this procedure is definitely wrong, since I lose way too much performance and the display driver crashes.
You need to set the array and add the primitive set only once; after that you can update the vertices like this:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
for (int i = 0; i < nPixel; i++)
{
float x, y, z;
// fill with your data...
(*vx)[i].set(x, y, z);
}
m_vertices->dirty();
The same goes for colors and other arrays.
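For example, a sketch for the colour array, assuming m_colors is the osg::ref_ptr<osg::Vec4Array> you already attached with setColorArray():
osg::Vec4Array* cols = static_cast<osg::Vec4Array*>(m_colors.get());
for (int i = 0; i < nPixel; i++)
{
    float r, g, b;
    // fill with your data...
    (*cols)[i].set(r, g, b, 1.0f);
}
m_colors->dirty();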
As you're using VBOs, you don't need to call dirtyDisplayList().
If you instead need to recompute the bounding box of the geometry, call
m_geometry->dirtyBound()
In case the number of points changes between updates, you can push new vertices into the array if its size is too small, and update the PrimitiveSet count like this:
osg::DrawArrays* drawArrays = static_cast<osg::DrawArrays*>(m_geometry->getPrimitiveSet(0));
drawArrays->setCount(nPixel);
drawArrays->dirty();
rickvikings' solution works - I only had one issue... (OSG 3.6.1 on OSX)
I had to modify the m_vertices array directly; it would cause OSG to crash if I used the static_cast method above to modify the vertices array:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
For some reason OSG would not create a buffer object in the vertices array class if using the static_cast approach.
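As a sketch of what "modifying the array directly" means here (assuming m_vertices is declared as osg::ref_ptr<osg::Vec3Array>, so no cast is needed):
for (int i = 0; i < nPixel; i++)
{
    float x, y, z;
    // fill with your data...
    (*m_vertices)[i].set(x, y, z);
}
m_vertices->dirty();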

How do I create a color bar (TV test pattern)

I need to create a colorbar like THIS
I use a scaled array of floats between 0 and 1.
Now I want to compute the RGB color from this float. How do I do it? I want to write it in C/C++, so I think I need two functions.
The first function builds the colorbar with one parameter like STEPSIZE, and the second function takes the value and just returns the corresponding array index of the colorbar.
I couldn't find it on Google, so please help me.
What you are referring to here is the 100% EBU Color Bars (named after the standards body, the European Broadcasting Union). This is not the same as the full SMPTE RP 219-2002 color bars, which have other features including gradients and the PLUGE (Picture Line-Up Generation Equipment), described in the Wikipedia article on Color Bars.
The EBU Color Bars consist of 8 vertical bars of equal width. They are defined in the same way for both SD and HD formats. In the RGB color space, they alternate each of the red, green and blue channels at different rates (much like counting in binary) from 0 to 100% intensity. Counting down from white, in normalised RGB form (appearing left to right):
1, 1, 1: White
1, 1, 0: Yellow
0, 1, 1: Cyan
0, 1, 0: Green
1, 0, 1: Magenta
1, 0, 0: Red
0, 0, 1: Blue
0, 0, 0: Black
So the blue channel alternates every column, the red channel every two columns, and the green channel every four columns. This arrangement has the useful property that the luminance (Y in the YCb'Cr' colour space) steps down from left to right: with the Rec. 601 weights 0.299 R + 0.587 G + 0.114 B, the normalised values run 1.0, 0.886, 0.701, 0.587, 0.413, 0.299, 0.114, 0.
To render using 8-bit RGB (most commonly found in desktop systems), simply multiply the above values by 255. The EBU bars come in 75% and 100% variants, based on the intensity of the white column. The SMPTE color bars typically use 75% levels as a reference.
Here is some simple C code to generate 100% EBU color bars and save the result as a PPM file:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
// PAL dimensions
static const unsigned kWidth = 720;
static const unsigned kHeight = 576;
typedef struct
{
uint8_t r;
uint8_t g;
uint8_t b;
} RGB;
int main(int argc, char* argv[])
{
const RGB BAR_COLOUR[8] =
{
{ 255, 255, 255 }, // 100% White
{ 255, 255, 0 }, // Yellow
{ 0, 255, 255 }, // Cyan
{ 0, 255, 0 }, // Green
{ 255, 0, 255 }, // Magenta
{ 255, 0, 0 }, // Red
{ 0, 0, 255 }, // Blue
{ 0, 0, 0 }, // Black
};
// Allocate frame buffer
size_t frameBytes = kWidth*kHeight*sizeof(RGB);
RGB* frame = malloc(frameBytes);
unsigned columnWidth = kWidth / 8;
// Generate complete frame
for (unsigned y = 0; y < kHeight; y++)
{
for (unsigned x = 0; x < kWidth; x++)
{
unsigned col_idx = x / columnWidth;
frame[y*kWidth+x] = BAR_COLOUR[col_idx];
}
}
// Save as PPM
FILE* fout = fopen("ebu_bars.ppm", "wb");
fprintf(fout, "P6\n%u %u\n255\n", kWidth, kHeight);
fwrite(frame, frameBytes, 1, fout);
fclose(fout);
free(frame);
return 0;
}
This should be readily adaptable to any other language. There's probably no need for using float unless you're implementing this on a GPU (in which case the algorithm would be quite different). There is much scope for optimization here; the code is written for clarity, not speed.
Note that while it is possible to generate a perfect digital representation of the color bars in a computer, this will not be safe for broadcast. The transitions between "perfect" color bars would require infinitely high bandwidth to accurately represent. So if the test image is to be transmitted via analog broadcast equipment, it must be bandwidth-limited by a low-pass filter (eg. ~4.3MHz for PAL). This is why you notice the "fuzzy" boundaries in between each column; these contain intermediate values between the pure colors.
Also note that it is not possible to accurately represent the SMPTE color bars in the RGB color space. This is because certain critical values are specified in the YCb'Cr' color space (notably in the PLUGE region) which are outside the gamut of RGB (either SD or HD). You can create something that approximates the values (eg. a very dark blue) but they are not correct. So unless you are representing the test frame in YCb'Cr', stick to EBU bars only (the upper 2/3).
RGB uses bytes, so assuming your array of floats is something like
float scaledColor[3]; // 0 = R, etc., all 0.0 < scaledColor[x] < 1.0
then you can do:
unsigned char r = (unsigned char)(255 * scaledColor[0]);
unsigned char g = (unsigned char)(255 * scaledColor[1]);
unsigned char b = (unsigned char)(255 * scaledColor[2]);
This will of course only work if the values in the floats are really in the range from 0.0 to 1.0.
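If you can't guarantee that range, a defensive variant clamps first; a small sketch (the +0.5f rounding is optional):
static unsigned char toByte(float v)
{
    if (v < 0.0f) v = 0.0f;                     /* clamp below */
    if (v > 1.0f) v = 1.0f;                     /* clamp above */
    return (unsigned char)(v * 255.0f + 0.5f);  /* scale and round to nearest */
}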
The simplest solution:
const unsigned char* getColour(float x) /* 0 <= x < 1 */
{
static const unsigned char bar[][3] = {
{255,255,255},
{255,255,0},
// ... fill in all the colours ...
{0,0,0}
};
return bar[(int)(x * (sizeof(bar) / sizeof(bar[0])))];
}
Then you can use it to generate bars of any width.
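For instance, a sketch of how it could be used to fill one scanline of a packed 3-bytes-per-pixel RGB buffer (fillScanline is just an illustrative name):
void fillScanline(unsigned char* scanline, int width)
{
    for (int x = 0; x < width; ++x)
    {
        const unsigned char* c = getColour((float)x / width); /* 0 <= x/width < 1 */
        scanline[3 * x + 0] = c[0]; /* R */
        scanline[3 * x + 1] = c[1]; /* G */
        scanline[3 * x + 2] = c[2]; /* B */
    }
}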
My google-fu turned up that you want the upper third of a SMPTE color bar pattern.
Wikipedia says:
In order from left to right, the colors are gray, yellow, cyan, green,
magenta, red, and blue.
So the easiest way is to simply hard-code the respective RGB color codes if you only need those. The article also mentions how those colors can be generated, but this seems a lot more difficult and isn't really worth the effort for seven colors.
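For illustration, such a hard-coded table could look like the sketch below, assuming 8-bit RGB and the usual 75% level (0.75 * 255, roughly 191); double-check the exact values against the standard you are targeting:
static const unsigned char SMPTE_BARS_75[7][3] = {
    { 191, 191, 191 }, /* gray    */
    { 191, 191,   0 }, /* yellow  */
    {   0, 191, 191 }, /* cyan    */
    {   0, 191,   0 }, /* green   */
    { 191,   0, 191 }, /* magenta */
    { 191,   0,   0 }, /* red     */
    {   0,   0, 191 }  /* blue    */
};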

CPU Ray Casting

I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check if the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is: what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results; there are gaps in the rendering, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray-casting code as I'm not sure it's needed, but I will post it if it'll help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
// Calculate the right Vector
Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));
// Set up the screen plane starting X & Y positions
float screenPlaneX, screenPlaneY;
screenPlaneX = cameraPosition.x() - ( ( WINDOWWIDTH / 2) * rightVector.x());
screenPlaneY = cameraPosition.y() + ( (float)WINDOWHEIGHT / 2);
float deltaX, deltaY;
deltaX = 1;
deltaY = 1;
int currentX, currentY, index = 0;
Vector origin, direction;
origin = cameraPosition;
vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);
currentY = screenPlaneY;
Vector4<int> colour;
for (int y = 0; y < WINDOWHEIGHT; y++)
{
// Set the current pixel along x to be the left most pixel
// on the image plane
currentX = screenPlaneX;
for (int x = 0; x < WINDOWWIDTH; x++)
{
// default colour is black
colour = Vector4<int>(0, 0, 0, 0);
// Cast the ray into the current pixel. Set the length of the ray to be 200
direction = Vector(currentX, currentY, cameraPosition.z() + ( cameraLookAt.z() * 200 ) ) - origin;
direction.normalize();
// Cast the ray against the octree and store the resultant colour in the array
colours[index] = RayCast(origin, direction, rootNode, colour);
// Move to next pixel in the plane
currentX += deltaX;
// increase colour array index position
index++;
}
// Move to next row in the image plane
currentY -= deltaY;
}
// Set the colours for the array
SetFinalImage(colours);
// Load array to 0 0 0 to set the raster position to (0, 0, 0)
GLfloat *v = new GLfloat[3];
v[0] = 0.0f;
v[1] = 0.0f;
v[2] = 0.0f;
// Set the raster position and pass the array of colours to drawPixels
glRasterPos3fv(v);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}
void SetFinalImage(vector<Vector4<int>> colours)
{
// The array is a 2D array, with the first dimension
// set to the size of the window (WINDOW_WIDTH * WINDOW_HEIGHT)
// Second dimension stores the rgba values for each pixel
for (int i = 0; i < colours.size(); i++)
{
finalImage[i][0] = (float)colours[i].r;
finalImage[i][1] = (float)colours[i].g;
finalImage[i][2] = (float)colours[i].b;
finalImage[i][3] = (float)colours[i].a;
}
}
Your pixel drawing code looks okay, but I'm not sure that your RayCasting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, but it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colors are all red, then render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
Here's a question though: why are you using Vector4<int> when later on you write the image as GL_FLOAT? I'm not seeing any int-to-float conversion here...
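A rough sketch of that divide-and-conquer test, reusing the question's own WINDOWWIDTH, WINDOWHEIGHT, Vector4, SetFinalImage and finalImage; note that with GL_FLOAT the components glDrawPixels expects are in the 0..1 range, so 1 stands for full intensity here:
// Fill every pixel with opaque red and push it through the existing path.
// If a solid red window appears, the glRasterPos/glDrawPixels side is fine
// and the bug is in RayCast itself.
vector<Vector4<int>> debugColours(WINDOWWIDTH * WINDOWHEIGHT, Vector4<int>(1, 0, 0, 1));
SetFinalImage(debugColours);
glRasterPos3f(0.0f, 0.0f, 0.0f);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);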
Your problem may be in your 3DDDA (octree raycaster), and specifically in its adaptive termination. It results from the quantisation of rays into grid-cell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. at a higher z depth), and which should therefore be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem -- comment out the adaptive termination line(s) in your 3DDDA and see if you still get the same gap artifacts.