I would be very grateful for any kind of help. I develop an application using wxWidgets via wxDevC++ on Windows 7. I have a function that does some calculations and is supposed to produce a 2D colour plot of the acquired data. It looks like this:
void draw2DColourPlot (wxDC *dc)
{
wxBitmap bmp;
bmp.Create (800, 400);
wxMemoryDC *memDC = new wxMemoryDC ( bmp );
std::ofstream dbgStr ( "interpolating.txt", std::ofstream::out );
int progress = 0;
for ( int i = 0; i < 800; ++i )
{
for ( int j = 0; j < 400; ++j )
{
unsigned char r, g, b;
// calculate values of r, g, b
memDC -> SetPen ( wxPen ( wxColor (r,g,b), 1 ) );
memDC -> DrawPoint ( i,j );
dbgStr << "Point ( " << i << ", " << j << " ) calculated" << '\n';
++progress;
updateProgressBar ( progress );
}
}
dc -> SetPen ( wxPen ( wxColor ( 255, 255, 255 ), 1 ) );
dc -> Clear ();
dc -> Blit ( 0, 0, 800, 400, memDC, 0, 0 );
return;
}
The problem is that sometimes it does not work: the progress bar reaches some value (between 10 and 90 percent, as far as I've observed), then everything freezes for a couple of seconds, and then the DC goes blank (any previous content disappears). After a few tries the proper result may get drawn, but not reliably. In the "interpolating.txt" file the last line is "Point ( 799, 399 ) calculated".
Previously I didn't use wxMemoryDC: I called dc->DrawPoint() directly and observed the same behaviour (points were drawn as expected, but at some point everything disappeared).
It happens more often when executing on my laptop (also Windows 7), but sometimes on my PC, too.
Do you have any solution to this? Is there a chance that I'm using wxWidgets incorrectly and it should be done a different way?
Thanks for any help.
I think your problem is due to memory/resource leaks: you allocate the wxMemoryDC on the heap but never delete it. Leaking DCs is particularly bad because they are a limited resource under Windows, so if you leak too many of them, you won't be able to create any more. To fix this, just allocate it on the stack instead.
Secondly, while what you do is not wrong, it's horribly inefficient. For a simple improvement, set your pixels in a wxImage, then convert it to a wxBitmap all at once. For a yet more efficient approach, use wxPixelData to set pixels directly in the bitmap. Either will work much faster than what you do now.
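For illustration, here is a minimal sketch of the wxImage variant under the question's 800x400 size; the r/g/b assignment is a placeholder for the OP's interpolation, and wxPixelData would look similar but writes into the bitmap directly:
void draw2DColourPlot ( wxDC *dc )
{
    wxImage img ( 800, 400 );
    unsigned char *data = img.GetData (); // packed RGB triplets, row-major
    for ( int j = 0; j < 400; ++j )
    {
        for ( int i = 0; i < 800; ++i )
        {
            unsigned char r = 0, g = 0, b = 0;
            // calculate values of r, g, b here
            unsigned char *p = data + 3 * ( j * 800 + i );
            p[0] = r; p[1] = g; p[2] = b;
        }
    }
    wxBitmap bmp ( img );     // convert to a bitmap once, after all pixels are set
    wxMemoryDC memDC ( bmp ); // stack allocation: the DC is released automatically
    dc->Blit ( 0, 0, 800, 400, &memDC, 0, 0 );
}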
I am using SDL_SetWindowPosition to position my window. Can I use this function to position my window on another monitor?
UPDATE
Using SDL_GetDisplayBounds will not return the correct monitor positions when the text size is changed in Windows 10. Any ideas how to fix this?
SDL2 uses a global screen space coordinate system. Each display device has its own bounds inside this coordinate space. The following example places a window on a second display device:
// enumerate displays
int displays = SDL_GetNumVideoDisplays();
assert( displays > 1 ); // assume we have secondary monitor
// get display bounds for all displays
std::vector< SDL_Rect > displayBounds;
for( int i = 0; i < displays; i++ ) {
displayBounds.push_back( SDL_Rect() );
SDL_GetDisplayBounds( i, &displayBounds.back() );
}
// window of dimensions 500 * 500 offset 100 pixels on secondary monitor
int x = displayBounds[ 1 ].x + 100;
int y = displayBounds[ 1 ].y + 100;
int w = 500;
int h = 500;
// so now x and y are on secondary display
SDL_Window * window = SDL_CreateWindow( "title", x, y, w, h, FLAGS... );
Looking at the definition of SDL_WINDOWPOS_CENTERED in SDL_video.h we see it is defined as
#define SDL_WINDOWPOS_CENTERED SDL_WINDOWPOS_CENTERED_DISPLAY(0)
so we could also use the macro SDL_WINDOWPOS_CENTERED_DISPLAY( n ) where n is the display index.
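For example, to center the window on the second display (SDL_WINDOW_SHOWN here is just a placeholder for whatever window flags you need):
SDL_Window * window = SDL_CreateWindow( "title",
    SDL_WINDOWPOS_CENTERED_DISPLAY( 1 ),
    SDL_WINDOWPOS_CENTERED_DISPLAY( 1 ),
    500, 500, SDL_WINDOW_SHOWN );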
Update for Windows 10 - DPI scaling issue
It seems like there is indeed a bug with SDL2 and changing the DPI scale in Windows (i.e. text scale).
Here are two bug reports relevant to the problem. They are both still apparently unresolved.
https://bugzilla.libsdl.org/show_bug.cgi?id=3433
https://bugzilla.libsdl.org/show_bug.cgi?id=2713
Potential Solution
I am sure the OP could use the Win32 API to determine the DPI scale when it is not 100%, and then correct the bounds accordingly.
DPI scaling issue ("will not return the correct monitor positions when the text size is changed")
It's a known issue with SDL2 (I encountered it in versions 2.0.6, 2.0.7 and 2.0.8; older versions probably have it as well).
Solutions:
1) Use a manifest file and set:
<dpiAware>True/PM</dpiAware>
(you need to include the manifest file in your app distribution)
2) Try SetProcessDPIAware().
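A minimal sketch of the second option; the key point is that the call must happen before any window is created (this assumes Windows Vista or later):
#include <windows.h>
#include <SDL.h>

int main( int argc, char *argv[] ) {
    SetProcessDPIAware();          // opt in to DPI awareness before creating windows
    SDL_Init( SDL_INIT_VIDEO );
    // ... SDL_GetDisplayBounds should now report unscaled pixel coordinates
    SDL_Quit();
    return 0;
}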
Yes, you can use SetWindowPosition if you know the boundaries of the second monitor.
You can use the function SDL_GetDisplayBounds(int displayIndex,SDL_Rect* rect) to get them.
So I recently went through and converted a simple test app I wrote to use the new version of Direct2D, which means I basically copied the relevant parts of the Direct2D Quickstart for Windows 8. That worked, in that my application behaved as before (it just draws a bunch of pixels).
Previously, to update the bitmap, I was doing the following:
for(int i = 0; i < 1000; ++i )
{
int x = rand()%600;
int y = rand()%600;
int index = 4 * ( x + ( y * 600 ) );
imageData[index] = rand()%256;
imageData[index+2] = 0;
}
D2D1_RECT_U rect2 = RectU(0,0,600,600);
pBitmap->CopyFromMemory(&rect2, imageData, 600*4);
where imageData is just:
imageData = new byte[600*600*4];
That still worked, but I thought that since I've got this nifty Map method on my shiny new ID2D1Bitmap1 interface, I could get rid of that CPU-side array and do something like:
D2D1_MAPPED_RECT* mapped = NULL;
ThrowIfFailed( pBitmap->Map( D2D1_MAP_OPTIONS_WRITE, mapped ) );
for(int i = 0; i < 1000; ++i)
{
int x = rand()%600;
int y = rand()%600;
int index = 4 * ( x + ( y * 600 ) );
mapped->bits[index] = rand()%256;
mapped->bits[index+2] = 0;
}
ThrowIfFailed(pBitmap->Unmap());
This failed at the call to Map with E_INVALIDARG, every time, using various combinations of D2D1_BITMAP_OPTIONS in the D2D1_BITMAP_PROPERTIES1 passed to CreateBitmap and D2D1_MAP_OPTIONS in the call to Map.
Looking at the description of the D2D1_MAP_OPTIONS enumeration it appears that none of the 3 options (READ, WRITE, DISCARD) can actually be used on bitmaps I create with the Direct2D context...
How do I get, in Direct2D, a bitmap which I can map, write to, unmap and draw?
Your problem is that the mapped pointer mustn't be null: Map expects the address of a D2D1_MAPPED_RECT that it can fill in, and passing NULL makes it fail with E_INVALIDARG. I suggest changing your code as follows:
D2D1_MAPPED_RECT mapped;
ThrowIfFailed( pBitmap->Map( D2D1_MAP_OPTIONS_WRITE, &mapped ) );
Recently I dug into this and hit the same problem. As far as I understand, a D2D bitmap cannot be locked for writing by the CPU; more than that, you can't create a bitmap that is both a D2D render target and CPU-readable.
I'd like to read and write a byte array with both the CPU and the D2D API, but unfortunately I have to use two bitmaps created with different bitmapOptions. The first is suitable for the D2D API and can be a render target for the context:
props.bitmapOptions = D2D1_BITMAP_OPTIONS_CANNOT_DRAW | D2D1_BITMAP_OPTIONS_TARGET;
and the second can be read by the CPU:
props.bitmapOptions = D2D1_BITMAP_OPTIONS_CANNOT_DRAW | D2D1_BITMAP_OPTIONS_CPU_READ;
The usage scenario is: render primitives into the first bitmap, copy them into the second with ID2D1Bitmap1::CopyFromBitmap, and then read the data out of the second with Map/Unmap.
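A minimal sketch of that scenario, assuming pContext is an ID2D1DeviceContext, a B8G8R8A8 pixel format, the question's 600x600 size, and elided error handling:
#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <wrl/client.h>

// bitmap the context can render into (GPU only)
D2D1_BITMAP_PROPERTIES1 targetProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    D2D1::PixelFormat( DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED ) );

// staging bitmap the CPU can map
D2D1_BITMAP_PROPERTIES1 stagingProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    D2D1::PixelFormat( DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED ) );

Microsoft::WRL::ComPtr<ID2D1Bitmap1> pTarget, pStaging;
pContext->CreateBitmap( D2D1::SizeU( 600, 600 ), nullptr, 0, &targetProps, &pTarget );
pContext->CreateBitmap( D2D1::SizeU( 600, 600 ), nullptr, 0, &stagingProps, &pStaging );

// render into the target bitmap
pContext->SetTarget( pTarget.Get() );
pContext->BeginDraw();
// ... drawing calls ...
pContext->EndDraw();

// copy the rendered pixels into the CPU-readable bitmap, then map it
pStaging->CopyFromBitmap( nullptr, pTarget.Get(), nullptr );
D2D1_MAPPED_RECT mapped;
pStaging->Map( D2D1_MAP_OPTIONS_READ, &mapped );
// read mapped.bits row by row, using mapped.pitch as the stride
pStaging->Unmap();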
I'm trying to implement a Windows progress bar showing download progress in bytes with big numbers, but I am unable to do it correctly.
For a 2.5 GB download, if I do the following, it ends short of the full range when the download is complete.
double dlSize = getDlSize();
unsigned int pbRange = (unsigned int)( dlSize / 3000 );
SendMessage( hProgressbar, PBM_SETRANGE, 0, MAKELPARAM( 0, pbRange ) );
and then set new position at each download callback as :
double dlBytes = bytesDownloaded();
unsigned int newIncrement = (unsigned int)( dlBytes / 3000 );
SendMessage( hProgressbar, PBM_DELTAPOS, (WPARAM)newIncrement, 0 );
This is a very noobish implementation, and I don't want to fall into the XY-problem trap, so my question is: what is the correct way to implement a progress bar, in bytes, for downloads on the order of 2-5 GB?
I tried both approaches suggested below by @msandiford and @NikBougalis: taking the width of the progress bar into account, and using a percentage instead of the actual number. I even combined both, but in all cases newIncrement always comes out as 0; maybe that's because dlSize is always smaller (as a double, newIncrement comes out as something like 1.15743e+007, but after the type cast it's 0).
What else can I do?
New Code combining both approaches:
EDIT 2:
Added a few checks to the code, as I was constantly getting 0 for newIncrement; it looks like it's working now, though I'm not sure how well:
GetClientRect(hProgressbar, &pbRCClient);
pbWidth = pbRCClient.right - pbRCClient.left; // (pbWidth is a global variable)
unsigned int pbRange = pbRCClient.right - pbRCClient.left;
SendMessage( hProgressbar, PBM_SETRANGE, 0, MAKELPARAM( 0, pbRange ) );
and while updating :
double dlSize = getDlSize();
double doubleIncrement = ( ( dlSize * pbWidth ) / totalSize );
unsigned int newIncrement;
if ( (unsigned int)doubleIncrement < 1 )
{
blockFill += doubleIncrement;
if ( (unsigned int)blockFill > 1 )
{
newIncrement = ( unsigned int )blockFill;
SendMessage( hProgressbar, PBM_DELTAPOS, (WPARAM)newIncrement, 0 );
blockFill = 0;
}
}
else
{
newIncrement = ( unsigned int )( doubleIncrement );
SendMessage( hProgressbar, PBM_DELTAPOS, (WPARAM)newIncrement, 0 );
//blockFill = 0;
}
EDIT 3: Looks like it's still finishing early.
The big issue that you have is a limitation in the progress bar control itself. PBM_SETRANGE packs both ends of the range into a single LPARAM, limiting each to 16 bits (0-65535); and although you could use PBM_SETRANGE32 for larger values, its range is a signed 32-bit integer, so counting individual bytes past 2 GB will still overflow.
Incidentally, why use a double at all? Use a UINT64, which maxes out at approximately 16,384 petabytes (if you're downloading something that will overflow that... well, skip the progress bar; it will only depress you and your customers). Integers work really well for counting things like bytes.
Provided that you know the full size of the file you are downloading, one way to work around the progress bar's limited maximum range is to make it run from 0 to 100. Then you can convert the bytes received into a percentage using the simple rule of three:
percent = (bytes_received * 100) / max_bytes;
If you want more "granularity" you can change the scale of the progress bar to 1000 and adjust the calculation accordingly; you could even go to 10000, but at that point, depending on the width (or height) of the control, you will probably bump up against the resolution of the monitor.
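A minimal sketch of that idea, reusing the question's getDlSize/bytesDownloaded helpers with a 0-1000 scale; setting an absolute position via PBM_SETPOS also avoids the accumulated rounding error that PBM_DELTAPOS suffers from:
UINT64 totalSize = (UINT64)getDlSize();
SendMessage( hProgressbar, PBM_SETRANGE, 0, MAKELPARAM( 0, 1000 ) ); // 0.1% steps

// in each download callback:
UINT64 received = (UINT64)bytesDownloaded();
UINT newPos = (UINT)( ( received * 1000 ) / totalSize ); // rule of three
SendMessage( hProgressbar, PBM_SETPOS, (WPARAM)newPos, 0 );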
There probably isn't much point making the progress bar more accurate than the number of pixels in the progress bar window itself. Based on this, you should be able to scale the target to the number of pixels pretty easily.
RECT rcClient;
GetClientRect(hProgressBar, &rcClient);
unsigned int pbRange = rcClient.right - rcClient.left;
// Need to either keep pixelsPerUnit, or recalculate it later
double pixelsPerUnit = pbRange / dlSize;
SendMessage(hProgressBar, PBM_SETRANGE, 0, MAKELPARAM(0, pbRange));
Then updating progress would be something like:
double dlBytes = totalBytesDownloaded();
unsigned int newProgress = (unsigned int)(dlBytes * pixelsPerUnit);
SendMessage(hProgressBar, PBM_SETPOS, (WPARAM)newProgress, 0);
If the progress window were somehow resizeable, you would need to recalculate pbRange, reset the progress bar range, and recalculate pixelsPerUnit in response to a WM_SIZE message.
So I'm simply trying to make a red 10 x 10 box move vertically back and forth. I compile and run my program; the red box appears and starts moving down, then just disappears after it hits the edge of the screen. I used some cout statements that tell me when the functions are being called, and they are all called when they are supposed to be. Even when the box can't be seen, the functions are still being called properly.
My main loop
while(running)
{
myScreen->Clear();
boxes.Move();
boxes.Draw();
myScreen->Flip();
........
My draw() function
SDL_Color red;
red.r = 255;
red.g = 0;
red.b = 0;
if( SDL_FillRect( my_screen->Get_screen(), &start_dest, SDL_MapRGB(
my_screen->Get_pixel_format(), red.r, red.g, red.b ) ) == -1 )
cout << "Fill rect in Draw(); failed\n";
My Move() function
start_dest.y += y_step;
if ( start_dest.y >= my_screen->Get_height() )
{
cout << "start_dest.y >= screen height\n";
start_dest.y = my_screen->Get_height();
y_step = -y_step;
}
if ( start_dest.y <= 0 )
{
cout << "start_dest.y <= 0\n";
start_dest.y = 0;
y_step = -y_step;
}
I have been trying to find this bug forever. Just leave a comment if anyone wants to see more code. Thanks
There isn't enough information to give a conclusive answer, but here's a hint.
From my experience with SDL, SDL functions can modify your rect structure when called, especially when the rect is partly off-screen. Make sure you set all its properties (x, y, w, h) before each SDL call that uses the rectangle.
(I'm assuming here that start_dest is an SDL_Rect, so its height member is named h, and that screen coordinates have (0,0) in the top-left corner.)
I think perhaps the first 'if' statement in Move() should be
if ( start_dest.y >= my_screen->Get_height() - start_dest.h )
so that the rectangle will bounce when its bottom hits the bottom of the screen rather than waiting until the top of the rectangle gets there. Something to that effect, anyways.
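Putting that together, a sketch of Move() with the bottom-edge bounce fixed (again assuming start_dest is an SDL_Rect, so the height member is h):
start_dest.y += y_step;
int bottomLimit = my_screen->Get_height() - start_dest.h;
if ( start_dest.y >= bottomLimit )
{
    start_dest.y = bottomLimit; // clamp so the whole box stays on screen
    y_step = -y_step;
}
if ( start_dest.y <= 0 )
{
    start_dest.y = 0;
    y_step = -y_step;
}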
I am making an application that does some custom image processing. The program will be driven by a simple menu in the console. The user will input the filename of an image, and that image will be displayed using openGL in a window. When the user selects some processing to be done to the image, the processing is done, and the openGL window should redraw the image.
My problem is that my image is never drawn to the window, instead the window is always black. I think it may have to do with the way I am organizing the threads in my program. The main execution thread handles the menu input/output and the image processing and makes calls to the Display method, while a second thread runs the openGL mainloop.
Here is my main code:
#include <iostream>
#include <string>    // std::string for the path handling below
#include <cstring>   // strcpy in InitGL
#include <windows.h> // CreateThread, DWORD
#include <GL/glut.h>
#include "ImageProcessor.h"
#include "BitmapImage.h"
using namespace std;
DWORD WINAPI openglThread( LPVOID param );
void InitGL();
void Reshape( GLint newWidth, GLint newHeight );
void Display( void );
BitmapImage* b;
ImageProcessor ip;
int main( int argc, char *argv[] ) {
DWORD threadID;
b = new BitmapImage();
CreateThread( 0, 0, openglThread, NULL, 0, &threadID );
while( true ) {
char choice;
string path = "TestImages\\";
string filename;
cout << "Enter filename: ";
cin >> filename;
path += filename;
b = new BitmapImage( path );
Display();
cout << "1) Invert" << endl;
cout << "2) Line Thin" << endl;
cout << "Enter choice: ";
cin >> choice;
if( choice == '1' ) {
ip.InvertColour( *b );
}
else {
ip.LineThinning( *b );
}
Display();
}
return 0;
}
void InitGL() {
int argc = 1;
char* argv[1];
argv[0] = new char[20];
strcpy( argv[0], "main" );
glutInit( &argc, argv );
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowPosition( 0, 0 );
glutInitWindowSize( 800, 600 );
glutCreateWindow( "ICIP Program - Character recognition using line thinning, Hilbert curve, and wavelet approximation" );
glutDisplayFunc( Display );
glutReshapeFunc( Reshape );
glClearColor(0.0,0.0,0.0,1.0);
glEnable(GL_DEPTH_TEST);
}
void Reshape( GLint newWidth, GLint newHeight ) {
/* Reset viewport and projection parameters */
glViewport( 0, 0, newWidth, newHeight );
}
void Display( void ) {
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
b->Draw();
glutSwapBuffers();
}
DWORD WINAPI openglThread( LPVOID param ) {
InitGL();
glutMainLoop();
return 0;
}
Here is my draw method for BitmapImage:
void BitmapImage::Draw() {
cout << "Drawing" << endl;
if( _loaded ) {
glBegin( GL_POINTS );
for( unsigned int i = 0; i < _height * _width; i++ ) {
glColor3f( _bitmap_image[i*3] / 255.0, _bitmap_image[i*3+1] / 255.0, _bitmap_image[i*3+2] / 255.0 );
// invert the y-axis while drawing
glVertex2i( i % _width, _height - (i / _width) );
}
glEnd();
}
}
Any ideas as to the problem?
Edit: The problem was technically solved by starting a glutTimer from the openglThread that calls glutPostRedisplay() every 500 ms. This is OK for now, but I would prefer a solution in which I only redisplay when I actually change the bitmap (to save processing time), and one in which I don't have to run another thread (the timer is another thread, I'm assuming). This is mainly because the main processing thread is going to be doing a lot of intensive work, and I would like to dedicate most of the resources to it rather than to anything else.
I've had this problem before - it's pretty annoying. The problem is that all of your OpenGL calls must be done in the thread where you started the OpenGL context. So when you want your main (input) thread to change something in the OpenGL thread, you need to somehow signal to the thread that it needs to do stuff (set a flag or something).
Note: I don't know what your BitmapImage loading function (here, your constructor) does, but it probably has some OpenGL calls in it. The above applies to that too! So you'll need to signal to the other thread to create a BitmapImage for you, or at least to do the OpenGL-related part of creating the bitmap.
A few points:
Generally, if you're going the multithreaded route, it's preferable for your main thread to be the GUI thread, i.e. the one doing minimal work to keep the GUI responsive. In your case, I would recommend moving the intensive image-processing tasks into a worker thread and doing the OpenGL rendering in your main thread.
For drawing your image, you're using vertices instead of a textured quad. Unless you have a very good reason, it's much faster to use a single textured quad (the processed image being the texture). Check out glTexImage2D and glTexSubImage2D.
Rendering at a framerate of 2fps (500ms, as you mentioned) will have negligible impact on resources if you're using an OpenGL implementation that is accelerated, which is almost guaranteed on any modern system, and if you use a textured quad instead of a vertex per pixel.
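To illustrate the textured-quad point, a minimal sketch; width, height and pixels stand in for the question's _width, _height and _bitmap_image, and it assumes a GL version that accepts non-power-of-two textures:
// once, at startup (on the OpenGL thread):
GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 ); // RGB rows are not 4-byte aligned in general
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
              GL_RGB, GL_UNSIGNED_BYTE, pixels );

// whenever the image data changes, update the existing texture in place:
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, width, height,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels );

// in Display(): one quad instead of width*height points
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, tex );
glBegin( GL_QUADS );
glTexCoord2f( 0, 0 ); glVertex2i( 0, height );    // flip y, as the question does
glTexCoord2f( 1, 0 ); glVertex2i( width, height );
glTexCoord2f( 1, 1 ); glVertex2i( width, 0 );
glTexCoord2f( 0, 1 ); glVertex2i( 0, 0 );
glEnd();
glDisable( GL_TEXTURE_2D );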
Your problem may be in Display() at the line
b->Draw();
I don't see where b is passed into the scope of Display().
You need to make OpenGL calls on the thread in which the context was created (the thread that ran glutInit and glutCreateWindow). Hence the gl* calls inside the Display method, when it is invoked from a different thread, have no current context to operate on. You can verify this easily by dumping the current context (wglGetCurrentContext() on Windows); on the wrong thread it will be NULL.
It sounds like the 500 ms timer is calling Display() regularly; after two calls it has filled both the back buffer and the front buffer with the same rendering. Display() continues to be called until the user enters something, which the OpenGL thread never knows about; but since the global variable b is now different, the thread blindly uses it in Display().
So how about doing what Jesse Beder says and using a global int, call it flag, to mark when the user has entered something. For example:
set flag = 1; after you do the b = new BitmapImage( path );
then set flag = 0; after you call Display() from the OpenGL thread.
You loop on the timer, but now check whether flag == 1. You only need to call glutPostRedisplay() when flag == 1, i.e. when the user has entered something.
This seems like a good way to go without a sleep/wake mechanism. Accessing global variables from more than one thread can also be unsafe; I think the worst that can happen here is that the OpenGL thread misreads flag as 0 when it should read 1, and it should then catch up after no more than a few iterations. If you get strange behavior, move to proper synchronization.
With the code you show, you call Display() twice in main(). Actually, main() doesn't even need to call Display(); the OpenGL thread does it.
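A minimal sketch of that flag scheme, using std::atomic rather than a plain int to sidestep the misread concern (the names mirror the question's globals, and the 100 ms period is an arbitrary choice):
#include <atomic>

std::atomic<int> flag( 0 );

// main (input) thread, after loading or processing the image:
//   b = new BitmapImage( path );
//   flag = 1;

// OpenGL thread: re-arming timer callback registered with glutTimerFunc
void OnTimer( int /*value*/ )
{
    if ( flag.exchange( 0 ) )         // consume the flag atomically
        glutPostRedisplay();          // redraw only when the image changed
    glutTimerFunc( 100, OnTimer, 0 ); // check again in 100 ms
}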