Using point sprites with DirectX: what steps need to be taken? - C++

This is still an outstanding issue.
I am trying to get a point sprite system working to render a sun in my world. I noticed another user asking a similar question (with the same code, presumably from my class :) ), but they were not able to complete this. My current code for this is as follows:
float fPointSize = 10.0f,fPointScaleB = 100.0f;
IDirect3DDevice9 *m_Device = LudoRenderer::GetInstance()->GetDevice();
m_Device->SetRenderState(D3DRS_POINTSPRITEENABLE,true);
m_Device->SetRenderState(D3DRS_POINTSCALEENABLE,true);
m_Device->SetRenderState(D3DRS_POINTSIZE, *((DWORD*)&fPointSize));
m_Device->SetRenderState(D3DRS_POINTSCALE_B, *((DWORD*)&fPointScaleB));
m_Device->SetRenderState(D3DRS_ALPHABLENDENABLE,true);
m_Device->SetRenderState(D3DRS_SRCBLEND,D3DBLEND_ONE);
m_Device->SetRenderState(D3DRS_DESTBLEND,D3DBLEND_ONE);
std::wstring hardcoded = L"..\\Data\\sun.png";
m_SunTexture = LudoTextureManager::GetInstance()->GetTextureData(hardcoded.c_str()).m_Texture;
m_Device->SetTexture(0,m_SunTexture);
m_Device->DrawPrimitive(D3DPT_POINTLIST,0,12);
I do not see my sun on the screen, and it seems to be doing the alpha blend on the rest of my world rather than on any sun I'm trying to load. Could this be because of the devices I'm using? Any help would be greatly appreciated :)

You call DrawPrimitive(), but there's no vertex data behind it: you never bind a vertex buffer with SetStreamSource(), and you never set a vertex format. Is this perhaps your problem?
If so, DrawPrimitiveUP() is likely the call you want for a handful of points. You'll also need to set the stream format to match your vertex structure (SetFVF()). Something along these lines:
#define D3DFVF_SUNVERTEX (D3DFVF_XYZ|D3DFVF_DIFFUSE)
struct SUNVERTEX
{
    float x, y, z;
    DWORD colour;
};
D3DVECTOR pos = { 0.f, 0.f, 0.f }; // Set this to the position you want
SUNVERTEX sunVert = { pos.x, pos.y, pos.z, D3DCOLOR_RGBA( 255, 255, 0, 255 ) };
IDirect3DDevice9 *device = LudoRenderer::GetInstance()->GetDevice();
device->SetFVF( D3DFVF_SUNVERTEX );
device->DrawPrimitiveUP( D3DPT_POINTLIST, 1, &sunVert, sizeof(SUNVERTEX) ); // one point primitive, drawn straight from user memory
If you have time I'd strongly recommend reading the "Programming Guide" from the DirectX SDK documentation (you should have it installed). It's not much of a tutorial, but it does cover the basics of DirectX architecture.

Related

How can I achieve noSmooth() with the P3D renderer?

I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In OF I remember calling setTextureMinMagFilter(GL_NEAREST,GL_NEAREST); on a texture.
What would be the equivalent in Processing?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment out PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;
void setup() {
  size(320, 240, P3D);
  noSmooth();
  //hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)g).textureSampling(0);
  //PGL pgl = beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
  //PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  //endPGL();
  buffer = createGraphics(width/8, height/8, P3D);
  buffer.noSmooth();
  buffer.beginDraw();
  //buffer.hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)buffer).textureSampling(0);
  PGL bpgl = buffer.beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; // commenting this back in results in a blank buffer
  PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  buffer.endPGL();
  buffer.background(0);
  buffer.stroke(255);
  buffer.line(0, 0, buffer.width, buffer.height);
  buffer.endDraw();
}

void draw() {
  image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)
You were actually on the right track. You were just passing the wrong value to textureSampling().
Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it using a decompiler, which led me to Texture::usingMipmaps(). There I was able to see the values and what they reflected (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
Where PGraphicsOpenGL's default textureSampling is 5 (TRILINEAR).
I also later found an old comment on an issue confirming this.
So to get point/nearest filtering, you only need to call noSmooth() on the application itself and call textureSampling() on your PGraphics:
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
So, considering the above, and including only the code you used to draw the line and to draw the buffer to the application, you get the desired result.
I needed to combine both GL_LINEAR and GL_NEAREST in one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
// assumes: import processing.opengl.Texture; ascii_map is a PImage defined elsewhere
PGL pgl = beginPGL();
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();

DirectX 9 not rendering after adding transforms

So far I have a cube rendered without any transforms (thus in an orthographic projection), and I am working on that code to get it into a perspective view, with all the matrices involved. I changed the Flexible Vertex Format so as not to have RHW (thus only having XYZ coordinates and color; I tried ARGB and XRGB, but I don't think it matters), and I added a function that sets all the matrices.
Debugging showed that the matrices are being created correctly, functions return correctly (as far as I could see), there are no crashes (DirectX will never complain if something goes wrong, it just doesn't render), and in general, step-by-step debugging shows no paranormal activity.
Existing project (which I modify and eventually prevent from working):
As I advance, I also write tutorials of sorts so I can go back and see what I did last time to get it working. This time I've kept versions, so you can get the code here along with the VS2010 solution; all the DirectX work is done in 3Dheader.h and D3DLoader.h.
Changes:
- the custom vertex format FVF_CUSTOMVERTEX has been changed so as not to include RHW; as I understand it, RHW has to be removed so that the position is computed through the transformations
- In Render() I add a call to the function setMatrices() which does all the matrix and transform work, and is as follows:
void setMatrices()
{
    //--------------transformation code----------------//
    D3DXMATRIX objectM, translationM, rotationM, projectionM, lookAtM, finalM;
    HRESULT hr;
    D3DXMatrixIdentity(&objectM);
    D3DXMatrixRotationY(&rotationM, D3DX_PI/4);
    //D3DXMatrixMultiply(&finalM, &objectM, &rotationM);
    D3DXMatrixPerspectiveFovLH(&projectionM, D3DX_PI/4, (float)yRes/xRes, 1, 100);
    D3DXVECTOR3 camera;
    camera.x = -10;
    camera.y = 0;
    camera.z = 0;
    D3DXVECTOR3 cameraTarget;
    cameraTarget.x = 0;
    cameraTarget.y = 0;
    cameraTarget.z = 0;
    D3DXVECTOR3 cameraUp;
    cameraUp.x = 0;
    cameraUp.y = 1;
    cameraUp.z = 0;
    D3DXMatrixLookAtLH(&lookAtM, &camera, &cameraTarget, &cameraUp);
    hr = pd3dDevice->SetTransform(D3DTS_WORLD, &objectM);
    hr = pd3dDevice->SetTransform(D3DTS_PROJECTION, &projectionM);
    hr = pd3dDevice->SetTransform(D3DTS_VIEW, &lookAtM);
    D3DVIEWPORT9 view_port;
    view_port.X = 0;
    view_port.Y = 0;
    view_port.Width = xRes;
    view_port.Height = yRes;
    view_port.MinZ = 0.0f;
    view_port.MaxZ = 1.0f;
    pd3dDevice->SetViewport(&view_port);
}
Note that some elements may not be needed; they were placed there just in case during my attempts. This is the code I currently have, so that we have a common reference.
Thanks in advance for any answers and/or attempts to answer.
In your code (downloaded), xRes and yRes are ints. Due to integer division, yRes/xRes will be zero, because xRes > yRes. You are passing this into the D3DXMatrixPerspectiveFovLH function as the aspect ratio, which will produce an invalid matrix. Instead, cast them to floats first, before doing the division, and pass the result in.
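A sketch of the corrected call, reusing the variables from the posted setMatrices() (note that the conventional aspect-ratio argument is width/height):
// Cast before dividing so integer division can't truncate the aspect ratio to zero.
// Direct3D's usual convention is width / height, i.e. (float)xRes / (float)yRes.
D3DXMatrixPerspectiveFovLH(&projectionM,
                           D3DX_PI/4,                  // vertical field of view
                           (float)xRes / (float)yRes,  // aspect ratio as a float
                           1.0f, 100.0f);              // near and far planes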

JOGL GL_SELECT picking fails

I am using the GL_SELECT method to achieve mouse selection in OpenGL using JOGL Java Library.
I know the method is deprecated and such, but it is a simple school assignment and this should do it.
However, I am having some trouble: even though something is rendered in GL_SELECT mode, glRenderMode(GL_RENDER) returns zero hits. The problem is deterministic, but I can't see any pattern; for example, if I have a sphere in the center, it works if I click on its upper part but not on its lower part. For a cube, it fails only on one specific face. For a rectangle it works alright.
I have tested commenting out the glRenderMode(GL_SELECT) to check if something was indeed being rendered and yes, I could see the shape, but even so glRenderMode(GL_RENDER) gave me zero.
EDIT: I have also tested removing the call to gluPickMatrix() and the glRenderMode(GL_SELECT), which gave me exactly the same as the normal (non-picking) render, so I think the projection and modelview matrices are set up correctly.
So, I don't think I am rendering incorrectly in select mode. What can be going on?
EDIT: maybe this could be a hardware problem, as the method is deprecated. Is that possible?
Thanks in advance.
// Get required information
point.y = getHeight() - point.y;
gl.glGetIntegerv(GL2.GL_VIEWPORT, view, 0);
// Setup OpenGL for selection
gl.glSelectBuffer(64, buffer);
gl.glRenderMode(GL2.GL_SELECT);
gl.glInitNames();
// Setup projection matrix
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
Util.glu.gluPickMatrix( point.x, point.y, 5.0, 5.0, view, 0 );
Util.glu.gluPerspective(camera.getFieldOfView(), getWidth() * 1.0 / getHeight(),
camera.getCloseDistance(), camera.getFarDistance() );
// Setup model view matrix for rendering
gl.glMatrixMode(GL2.GL_MODELVIEW);
camera.setView(gl); // Set to model view and use glLookAt
gl.glClear( GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT );
// Render objects
for(int i = 0; i < shapeList.size(); i++) {
    gl.glPushName(i);
    // Execute transformations for translation/rotation/scale and render shape
    shapeList.get(i).display(gl, false);
    gl.glPopName();
}
// Process hits
hits = gl.glRenderMode(GL2.GL_RENDER);
System.out.println("Hits = " + hits);
// ... Process hits here ...
// Reset matrixes
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
camera.setView function:
public void setView( GL2 gl ) {
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    Util.glu.gluLookAt( eye[Axis.X], eye[Axis.Y], eye[Axis.Z],
                        target[Axis.X], target[Axis.Y], target[Axis.Z],
                        up[Axis.X], up[Axis.Y], up[Axis.Z] );
}

Projection Mapping with Kinect and OpenGL

I'm currently using a JavaCV application called procamcalib to calibrate a Kinect-projector setup, which has the Kinect RGB camera as its origin. The setup consists solely of a Kinect RGB camera (I'm roughly using the Kinect as an ordinary camera at the moment) and one projector. This calibration software uses LibFreenect (OpenKinect) as the Kinect driver.
Once the software completes its process, it gives me the intrinsic and extrinsic parameters of both the camera and the projector, which are fed to an OpenGL program to validate the calibration, and this is where a few problems begin. Once the projection and modelview matrices are correctly set, I should be able to fit what the Kinect sees to what is being projected, but to achieve this I have to do a manual translation on all 3 axes, and this last part doesn't make any sense to me. Could you please help me sort this out?
The SDK used to retrieve Kinect data is OpenNI (not the latest 2.x version; it should be 1.5.x).
I'll explain exactly what I'm doing to reproduce this error. The calibration parameters are used as follows:
The projection matrix is set as follows (based on http://sightations.wordpress.com/2010/08/03/simulating-calibrated-cameras-in-opengl/):
r = width/2.0f; l = -width/2.0f;
t = height/2.0f; b = -height/2.0f;
alpha = fx; beta = fy;
xo = cx; yo = cy;
X = kinectCalibration.c_near + kinectCalibration.c_far;
Y = kinectCalibration.c_near*kinectCalibration.c_far;
d = kinectCalibration.c_near - kinectCalibration.c_far;
float* glOrthoMatrix = (float*)malloc(16*sizeof(float));
glOrthoMatrix[0] = 2/(r-l); glOrthoMatrix[4] = 0.0f; glOrthoMatrix[8] = 0.0f; glOrthoMatrix[12] = (r+l)/(l-r);
glOrthoMatrix[1] = 0.0f; glOrthoMatrix[5] = 2/(t-b); glOrthoMatrix[9] = 0.0f; glOrthoMatrix[13] = (t+b)/(b-t);
glOrthoMatrix[2] = 0.0f; glOrthoMatrix[6] = 0.0f; glOrthoMatrix[10] = 2/d; glOrthoMatrix[14] = X/d;
glOrthoMatrix[3] = 0.0f; glOrthoMatrix[7] = 0.0f; glOrthoMatrix[11] = 0.0f; glOrthoMatrix[15] = 1;
printM( glOrthoMatrix, 4, 4, true, "glOrthoMatrix" );
float* glCameraMatrix = (float*)malloc(16*sizeof(float));
glCameraMatrix[0] = alpha; glCameraMatrix[4] = skew; glCameraMatrix[8] = -xo; glCameraMatrix[12] = 0.0f;
glCameraMatrix[1] = 0.0f; glCameraMatrix[5] = beta; glCameraMatrix[9] = -yo; glCameraMatrix[13] = 0.0f;
glCameraMatrix[2] = 0.0f; glCameraMatrix[6] = 0.0f; glCameraMatrix[10] = X; glCameraMatrix[14] = Y;
glCameraMatrix[3] = 0.0f; glCameraMatrix[7] = 0.0f; glCameraMatrix[11] = -1; glCameraMatrix[15] = 0.0f;
float* glProjectionMatrix = algMult( glOrthoMatrix, glCameraMatrix );
And the Modelview matrix is set as:
proj_loc = new Vec3f( proj_RT[12], proj_RT[13], proj_RT[14] );
proj_fwd = new Vec3f( proj_RT[8], proj_RT[9], proj_RT[10] );
proj_up = new Vec3f( proj_RT[4], proj_RT[5], proj_RT[6] );
proj_trg = new Vec3f( proj_RT[12] + proj_RT[8],
proj_RT[13] + proj_RT[9],
proj_RT[14] + proj_RT[10] );
gluLookAt( proj_loc[0], proj_loc[1], proj_loc[2],
proj_trg[0], proj_trg[1], proj_trg[2],
proj_up[0], proj_up[1], proj_up[2] );
And finally the camera is displayed and moved around with:
glPushMatrix();
glTranslatef(translateX, translateY, translateZ);
drawRGBCamera();
glPopMatrix();
where the translation values are manually adjusted with the keyboard until I have a visual match (I'm projecting onto the calibration board what the Kinect RGB camera is seeing, so I manually adjust the OpenGL camera until the projected pattern matches the printed pattern).
My question here is WHY do I have to make this manual adjustment? The modelview and projection setup should take care of it.
I was also wondering if there are any problems when switching drivers like that, since OpenKinect is used for calibration and OpenNI for validation. This came to mind when researching another popular calibration tool called RGBDemo, which says that a Kinect calibration is needed if using the LibFreenect backend.
So, will a calibration go wrong if made with one driver and displayed with another?
Does anyone think it'll be easier to achieve success if this is done with OpenCV rather than OpenGL?
JavaCV Reference: https://code.google.com/p/javacv/
Procamcalib "short paper": http://www.ok.ctrl.titech.ac.jp/~saudet/research/procamcalib/
Procamcalib source code: https://code.google.com/p/javacv/source/browse?repo=procamcalib
RGBDemo calibration Reference: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration
I can upload more things if necessary, just let me know what you guys need to be able to help me out :)
I'm the author of the article you linked to, and I think I can help.
The problem is in how you're setting your modelview matrix. You're using the translation column of proj_RT as the camera's position when you call gluLookAt(), but it isn't the camera's position: it's the position of the world origin in camera coordinates. I wrote an article for my new blog that might help clear this up. It describes three different (equivalent) ways of interpreting the extrinsic matrix, with WebGL demos of each:
http://ksimek.github.io/2012/08/22/extrinsic/
If you must use gluLookAt(), this article will show you how, but it's much simpler to just call glLoadMatrixf(proj_RT).
tl;dr: replace gluLookAt() with glLoadMatrixf(proj_RT)
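As a rough sketch of that replacement (assuming proj_RT is the 4x4 extrinsic matrix stored column-major in a float[16], which is the layout OpenGL expects):
// Load the extrinsic matrix directly instead of decomposing it for gluLookAt().
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(proj_RT); // world-to-camera transform, column-major
// Any per-object transforms (glTranslatef, glRotatef, ...) would follow here.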
For Kinect calibration, take a look at the latest 0.7 release of RGBDemo (http://labs.manctl.com/rgbdemo) and the corresponding Freenect calibration source.
From the v0.7.0 ChangeLogs:
New features since v0.6.1:
New demo to acquire object models using markers
Simple calibration mode for rgbd-multikinect
Much faster grabbing in rgbd-multikinect
Add timestamps and camera serials when saving to disk
Compatibility with PCL 1.4
Various bug fixes
A very good book to follow is Jason McKesson's Learning Modern 3D Graphics Programming. You may also read the Kinect's ROS page and Nicolas' Kinect Calibration Page.

Is it possible to speed up MATLAB plotting by calling C/C++ code in MATLAB?

It is generally very easy to call MEX files (written in C/C++) from MATLAB to speed up certain calculations. In my experience, however, the true bottleneck in MATLAB is data plotting. Creating handles is extremely expensive, and even if you only update handle data (e.g., XData, YData, ZData), this might take ages. Even worse, since MATLAB is a single-threaded program, it is impossible to update multiple plots at the same time.
Therefore my question: is it possible to write a MATLAB GUI and call C++ (or some other parallelizable code) to take care of the plotting/visualization? I'm looking for a cross-platform solution that will work on Windows, Mac, and Linux, but any solution that gets me started on any OS is greatly appreciated!
I found a C++ library that seems to use Matlab's plot() syntax but I'm not sure whether this would speed things up, since I'm afraid that if I plot into Matlab's figure() window, things might get slowed down again.
I would appreciate any comments and feedback from people who have dealt with this kind of situation before!
EDIT: obviously, I've already profiled my code, and the bottleneck is the plotting (dozens of panels with lots of data).
EDIT2: for you to get the bounty, I need a real-life, minimal working example of how to do this - suggestive answers won't help me.
EDIT3: regarding the data to plot: in the most simplistic case, think of 20 line plots that need to be updated each second with something like 1000000 data points.
EDIT4: I know that this is a huge amount of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them (the data is sampled at sub-ms time resolution). As a matter of fact, my data is acquired using a commercial data acquisition system which comes with a data viewer (written in C++). That program has no problem visualizing up to 60 line plots with even more than 1000000 data points.
EDIT5: I don't like where the current discussion is going. I'm aware that sub-sampling my data might speed things up; however, this is not the question. The question here is how to get a C/C++/Python/Java interface to work with MATLAB in order to (hopefully) speed up plotting by talking directly to the hardware (or using any other trick/way).
Did you try the trivial solution of changing the renderer to OpenGL?
opengl hardware;
set(gcf,'Renderer','OpenGL');
Warning!
There will be some things that disappear in this mode, and it will look a bit different, but generally plots will run much faster, especially if you have a hardware accelerator.
By the way, are you sure that you will actually gain a performance increase? For example, in my experience, WPF graphics in C# are considerably slower than MATLAB's, especially scatter plots and circles.
Edit: I thought about the fact that the number of points actually drawn to the screen can't be that large. Basically, it means you need to interpolate at the places where there is a pixel on the screen. Check out this object:
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end
    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
        end
    end
    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); % Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1(x,y,subSampleX);
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And here is an example of how to use it:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Another possible improvement: if your x data is sorted, you can use interp1q instead of interp1, which will be much faster:
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end
    % properties(Access=public)
    %     XData;
    %     YData;
    % end
    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
            % this.XData = x;
            % this.YData = y;
        end
    end
    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); % Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1q(x,y,transpose(subSampleX));
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And the use case:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    x = sort(x);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Since you want maximum performance, you should consider writing a minimal OpenGL viewer. Dump all the points to a file and launch the viewer using the system() command in MATLAB. The viewer can be really simple; here is one implemented using GLUT and compiled for Mac OS X. The code is cross-platform, so you should be able to compile it for all the platforms you mention, and it should be easy to tweak for your needs.
If you are able to integrate this viewer more closely with MATLAB, you might be able to get away with not having to write to and read from a file (= much faster updates). However, I'm not experienced in the matter. Perhaps you can put this code in a MEX file?
EDIT: I've updated the code to draw a line strip from a CPU memory pointer.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
// The file "input" is assumed to contain a line for each point:
// 0.1 1.0
// 5.2 3.0
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
static vector<float2> points;
static float2 minPoint, maxPoint;
typedef vector<float2>::iterator point_iter;
static void render() {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(minPoint.x, maxPoint.x, minPoint.y, maxPoint.y, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(points[0]), &points[0].x);
    glDrawArrays(GL_LINE_STRIP, 0, points.size());
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    ifstream file("input");
    string line;
    while (getline(file, line)) {
        istringstream ss(line);
        float2 p;
        ss >> p.x;
        ss >> p.y;
        if (ss)
            points.push_back(p);
    }
    if (!points.size())
        return 1;
    minPoint = maxPoint = points[0];
    for (point_iter i = points.begin(); i != points.end(); ++i) {
        float2 p = *i;
        minPoint = float2(minPoint.x < p.x ? minPoint.x : p.x, minPoint.y < p.y ? minPoint.y : p.y);
        maxPoint = float2(maxPoint.x > p.x ? maxPoint.x : p.x, maxPoint.y > p.y ? maxPoint.y : p.y);
    }
    float dx = maxPoint.x - minPoint.x;
    float dy = maxPoint.y - minPoint.y;
    maxPoint.x += dx*0.1f; minPoint.x -= dx*0.1f;
    maxPoint.y += dy*0.1f; minPoint.y -= dy*0.1f;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
EDIT: Here is new code based on the discussion below. It renders a sine function using 20 VBOs, each containing 100k points, with 10k new points added each rendered frame. This makes a total of 2M points. The performance is real-time on my laptop.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <cmath>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
struct Vbo {
    GLuint i;
    Vbo(int size) {
        glGenBuffersARB(1, &i);
        glBindBufferARB(GL_ARRAY_BUFFER, i);
        glBufferDataARB(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW); // could try GL_STATIC_DRAW
    }
    void set(const void* data, size_t size, size_t offset) {
        glBindBufferARB(GL_ARRAY_BUFFER, i);
        glBufferSubData(GL_ARRAY_BUFFER, offset, size, data);
    }
    ~Vbo() { glDeleteBuffers(1, &i); }
};

static const int vboCount = 20;
static const int vboSize = 100000;
static const int pointCount = vboCount*vboSize;
static float endTime = 0.0f;
static const float deltaTime = 1e-3f;
static std::vector<Vbo*> vbos;
static int vboStart = 0;

static void addPoints(float2* points, int pointCount) {
    while (pointCount) {
        if (vboStart == vboSize || vbos.empty()) {
            if (vbos.size() >= vboCount+2) { // remove and reuse vbo
                Vbo* first = *vbos.begin();
                vbos.erase(vbos.begin());
                vbos.push_back(first);
            }
            else { // create new vbo
                vbos.push_back(new Vbo(sizeof(float2)*vboSize));
            }
            vboStart = 0;
        }
        int pointsAdded = pointCount;
        if (pointsAdded + vboStart > vboSize)
            pointsAdded = vboSize - vboStart;
        Vbo* vbo = *vbos.rbegin();
        vbo->set(points, pointsAdded*sizeof(float2), vboStart*sizeof(float2));
        pointCount -= pointsAdded;
        points += pointsAdded;
        vboStart += pointsAdded;
    }
}

static void render() {
    // generate and add 10000 points
    const int count = 10000;
    float2 points[count];
    for (int i = 0; i < count; ++i) {
        float2 p(endTime, std::sin(endTime*1e-2f));
        endTime += deltaTime;
        points[i] = p;
    }
    addPoints(points, count);
    // render
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(endTime-deltaTime*pointCount, endTime, -1.0f, 1.0f, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    for (size_t i = 0; i < vbos.size(); ++i) {
        glBindBufferARB(GL_ARRAY_BUFFER, vbos[i]->i);
        glVertexPointer(2, GL_FLOAT, sizeof(float2), 0);
        if (i == vbos.size()-1)
            glDrawArrays(GL_LINE_STRIP, 0, vboStart);
        else
            glDrawArrays(GL_LINE_STRIP, 0, vboSize);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
    glutPostRedisplay();
}

int main(int argc, char* argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
As a number of people have mentioned in their answers, you do not need to plot that many points. I think it is important to repeat Andrey's comment:
that is a HUGE amount of points! There isn't enough pixels on the screen to plot that amount.
Rewriting plotting routines in different languages is a waste of your time. A huge number of hours have gone into writing MATLAB; what makes you think you can write a significantly faster plotting routine (in a reasonable amount of time)? While your routine may be less general, and would therefore skip some of the checks that the MATLAB code performs, your "bottleneck" is that you are trying to plot so much data.
I strongly recommend one of two courses of action:
Sample your data: you do not need 20 x 1000000 points on a figure. The human eye won't be able to distinguish between all the points, so plotting them all is a waste of time. Try binning your data, for example.
If you maintain that you need all those points on the screen, I would suggest using a different tool. VisIt and ParaView are two examples that come to mind. They are parallel visualisation programs designed to handle extremely large datasets (I have seen VisIt handle datasets containing petabytes of data).
There is no way you can fit 1000000 data points on a small plot. How about you choose one in every 10000 points and plot those?
You can consider calling imresize on the large vector to shrink it, but manually building a vector by omitting 99% of the points may be faster.
@memyself The sampling operations are already occurring. MATLAB is choosing what data to include in the graph. Why do you trust MATLAB? It looks to me like the graph you showed significantly misrepresents the data. The dense regions should indicate that the signal is at a constant value, but in your graph they could mean that the signal is at that value half the time, or was at that value at least once during the interval corresponding to that pixel.
Would it be possible to use an alternate architecture? For example, use MATLAB to generate the data and a fast library or application (GNUplot?) to handle the plotting.
It might even be possible to have MATLAB write the data to a stream as the plotter consumes it. Then the plot would be updated as MATLAB generates the data.
This approach would avoid MATLAB's ridiculously slow plotting and divide the work between two separate processes. The OS/CPU would probably assign the processes to different cores as a matter of course.
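As a rough sketch of the consumer side of that idea, assuming MATLAB pipes "x y" lines into the plotter's stdin (the program name and the hand-off to the renderer are illustrative):
// Hypothetical stdin-reading loop for a streaming plotter. MATLAB could start it
// with something like: system('mygenerator | ./streamview &')  -- names made up.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Point { float x, y; };

int main() {
    std::vector<Point> points;
    std::string line;
    while (std::getline(std::cin, line)) { // blocks until the producer writes a line
        std::istringstream ss(line);
        Point p;
        if (ss >> p.x >> p.y) {
            points.push_back(p);
            // ...notify the render loop here (e.g. glutPostRedisplay from a timer)...
        }
    }
    return 0;
}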
I think it's possible, but likely to require writing the plotting code (at least the parts you use) from scratch, since anything you could reuse is exactly what's slowing you down.
To test feasibility, I'd start by verifying that any Win32 GUI works from MEX (call MessageBox), then proceed to creating your own window and checking that window messages arrive at your WndProc. Once all that's going, you can bind an OpenGL context to it (or just use GDI), and start plotting.
However, the savings are likely to come from simpler plotting code and the use of newer OpenGL features such as VBOs, rather than from threading. Everything is already parallel on the GPU, and more threads don't transfer commands/data to the GPU any faster.
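A minimal sketch of that first feasibility step, assuming a Windows build of MATLAB (the file name and message text are illustrative):
// plotwin.cpp -- minimal MEX gateway to confirm Win32 GUI calls work from MATLAB.
// Build from the MATLAB prompt with something like: mex plotwin.cpp user32.lib
#include <windows.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    // If this dialog appears, Win32 GUI calls work from the MEX environment;
    // the next step would be RegisterClass/CreateWindow and a custom WndProc.
    MessageBoxA(NULL, "Hello from MEX", "Feasibility test", MB_OK);
}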
I did a very similar thing many many years ago (2004?). I needed an oscilloscope-like display for kilohertz sampled biological signals displayed in real time. Not quite as many points as the original question has, but still too many for MATLAB to handle on its own. IIRC I ended up writing a Java component to display the graph.
As other people have suggested, I also ended up down-sampling the data. For each pixel on the x-axis, I calculated the minimum and maximum values taken by the data, then drew a short vertical line between those values. The entire graph consisted of a sequence of short vertical lines, each immediately adjacent to the next.
Actually, I think that the implementation ended up writing the graph to a bitmap that scrolled continuously using bitblt, with only new points being drawn ... or maybe the bitmap was static and the viewport scrolled along it ... anyway it was a long time ago and I might not be remembering it right.
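For illustration, here is a small sketch of that min/max decimation in C++, assuming evenly spaced samples and more samples than pixel columns (all names are made up):
#include <algorithm>
#include <vector>

struct MinMax { float lo, hi; };

// Reduce a long sample vector to one (min, max) pair per pixel column; drawing a
// short vertical line from lo to hi in each column reproduces the waveform's envelope.
std::vector<MinMax> decimate(const std::vector<float>& samples, int pixelWidth) {
    std::vector<MinMax> columns(pixelWidth);
    const size_t perColumn = samples.size() / pixelWidth; // assumes size >= pixelWidth
    for (int px = 0; px < pixelWidth; ++px) {
        const size_t begin = px * perColumn;
        const size_t end = std::min(begin + perColumn, samples.size());
        MinMax mm = { samples[begin], samples[begin] };
        for (size_t i = begin + 1; i < end; ++i) {
            mm.lo = std::min(mm.lo, samples[i]);
            mm.hi = std::max(mm.hi, samples[i]);
        }
        columns[px] = mm;
    }
    return columns;
}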
"EDIT4: I know that this is a huge amount of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them."
This is incorrect. There is a way to know which points to leave out: MATLAB is already doing it. Something is going to have to do it at some point, no matter how you solve this. I think you need to redirect your problem to "how do I determine which points I should plot?".
Based on the screenshot, the data looks like a waveform. You might want to look at the code of Audacity, an open-source audio editing program. It displays plots representing the waveform in real time, and they look identical in style to the one in your lowest screenshot. You could borrow some sampling techniques from them.
What you are looking for is the creation of a MEX file.
Rather than me explaining it, you would probably benefit more from reading this: Creating C/C++ and Fortran Programs to be Callable from MATLAB (MEX-Files) (a documentation article from MathWorks).
Hope this helps.