Story:
I have been creating a font renderer for DirectX 9 to draw text. The actual problem was caused by another problem: I was wondering why the texture didn't draw anything (my bitmap was wrong), so I tried to copy the bitmap into a file and realized the current problem. Yay.
Question:
What exactly am I doing wrong? I mean, I just copy my current pixel array from my bitmap wrapper to a file along with some other content (the bitmap info); I have seen in a hex editor that there are colors after the bitmap headers.
Pictures:
This is the result of the bitmap which I have written to the file system.
Code:
CFont::DrawGlyphToBitmap
This code copies from the bitmap of a FreeType glyph (which, by the way, has a pixel format of FT_PIXEL_MODE_BGRA) to the font bitmap wrapper class instance.
void CFont::DrawGlyphToBitmap ( unsigned char * buffer, int rows, int pitch, int destx, int desty, int format )
{
CColor color = CColor ( 0 );
for ( int row = 0; row < rows; row++ )
{
int x = 0;
for ( int left = 0; left < pitch * 3; left += 3,x++ )
{
int y = row;
unsigned char* cursor = &buffer [ ( row*pitch ) + left ];
color.SetAlphab ( 255 );
color.SetBlueb ( cursor [ 0 ] );
color.SetGreenb ( cursor [ 1 ] );
color.SetRedb ( cursor [ 2 ] );
m_pBitmap->SetPixelColor ( color, destx + x, desty + y );
}
}
}
CBitmap::SetPixelColor
This code sets a single "pixel"/color in its local pixel storage.
void CBitmap::SetPixelColor ( const CColor & color, int left, int top )
{
unsigned char* cursor = &m_pBuffer [ ( m_iPitch * top ) + ( left * bytes_per_px ) ];
cursor [ px_red ] = color.GetRedb ( );
cursor [ px_green ] = color.GetGreenb ( );
cursor [ px_blue ] = color.GetBlueb ( );
if ( px_alpha != 0xFFFFFFFF )
cursor [ px_alpha ] = color.GetAlphab ( );
}
CBitmap::Save
Here's an excerpt of the function which writes the bitmap to the file system; it shows how I initialize the bitmap info container (file header & "DIB" header).
void CBitmap::Save ( const std::wstring & path )
{
BITMAPFILEHEADER bitmap_header;
BITMAPV5HEADER bitmap_info;
memset ( &bitmap_header, 0, sizeof ( BITMAPFILEHEADER ) );
memset ( &bitmap_info, 0, /**/sizeof ( BITMAPV5HEADER ) );
bitmap_header.bfType = 'B' + ( 'M' << 8 );//0x4D42, "BM" in little-endian
bitmap_header.bfSize = bitmap_header.bfOffBits + ( m_iRows * m_iPitch ) * 3;
bitmap_header.bfOffBits = sizeof ( BITMAPFILEHEADER ) + sizeof ( BITMAPV5HEADER );
double _1px_p_m = 0.0002645833333333f;
bitmap_info.bV5Size = sizeof ( BITMAPV5HEADER );
bitmap_info.bV5Width = m_iPitch;
bitmap_info.bV5Height = m_iRows;
bitmap_info.bV5Planes = 1;
bitmap_info.bV5BitCount = bytes_per_px * 8;
bitmap_info.bV5Compression = BI_BITFIELDS;
bitmap_info.bV5SizeImage = ( m_iPitch * m_iRows ) * 3;
bitmap_info.bV5XPelsPerMeter = m_iPitch * _1px_p_m;
bitmap_info.bV5YPelsPerMeter = m_iRows * _1px_p_m;
bitmap_info.bV5ClrUsed = 0;
bitmap_info.bV5ClrImportant = 0;
bitmap_info.bV5RedMask = 0xFF000000;
bitmap_info.bV5GreenMask = 0x00FF0000;
bitmap_info.bV5BlueMask = 0x0000FF00;
bitmap_info.bV5AlphaMask = 0x000000FF;
bitmap_info.bV5CSType = LCS_WINDOWS_COLOR_SPACE;
...
-> the other part just writes those structures & my pixel array to the file
}
CBitmap "useful" macros
I made macros for the pixel array because I've "changed" the pixel "format" many times -> to make that easier, I made these macros.
#define bytes_per_px 4
#define px_red 0
#define px_green 1
#define px_blue 2
#define px_alpha 3
Notes
My bitmap has a color order of RGBA
This calculation is wrong:
&m_pBuffer [ ( m_iPitch * top ) + ( left * bytes_per_px ) ]
It should be:
&m_pBuffer [ ( ( m_iPitch * top ) + left ) * bytes_per_px ]
Each row is m_iPitch * bytes_per_px bytes wide. If you just multiply by m_iPitch, then your "rows" overlap each other.
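To make that concrete, here is a standalone sketch of the corrected indexing (a plain buffer and a free function instead of the CBitmap class, so the names are stand-ins; the channel offsets follow the px_* macros above):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

constexpr int bytes_per_px = 4;  // RGBA order, matching the px_* macros

// width is in pixels, so the row stride in BYTES is width * bytes_per_px;
// that factor must multiply the whole (row * width + column) term.
void set_pixel(std::vector<uint8_t>& buf, int width, int left, int top,
               uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    uint8_t* cursor = &buf[(size_t)(top * width + left) * bytes_per_px];
    cursor[0] = r;  // px_red
    cursor[1] = g;  // px_green
    cursor[2] = b;  // px_blue
    cursor[3] = a;  // px_alpha
}
```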
Related
I'm using Pango to layout my text and NV Path to render glyphs.
I'm having difficulty finding the correct methods for getting per-glyph positions. As you can see, at the moment I'm calculating these values according to line and glyph indexes. But Pango has better methods for this, like per-glyph, per-line, and extent queries. My problem is that these methods have no documentation and I wasn't able to find any samples. How can I get correct glyph positions from Pango for this type of application?
std::vector<uint32_t> glyphs;
std::vector<GLfloat> positions;
int lineCount = pango_layout_get_line_count( pangoLayout );
for ( int l = 0; l < lineCount; ++l )
{
PangoLayoutLine* line = pango_layout_get_line_readonly( pangoLayout, l );
GSList* runs = line->runs;
float xOffset = 0.0f;
while( runs )
{
PangoLayoutRun* run = static_cast<PangoLayoutRun*>( runs->data );
glyphs.resize( run->glyphs->num_glyphs, 0 );
positions.resize( run->glyphs->num_glyphs * 2, 0 );
for( int g = 0; g < run->glyphs->num_glyphs; ++g )
{
glyphs[g] = run->glyphs->glyphs[g].glyph;
// Need Correct Values Here
positions[ g * 2 + 0 ] = xOffset * NVPATH_DEFUALT_EMSCALE;
positions[ g * 2 + 1 ] = (float)l * NVPATH_DEFUALT_EMSCALE;
xOffset += PANGO_PIXELS( run->glyphs->glyphs[g].geometry.width ) / getFontSize();
}
const Font::RefT font = getFont( pango_font_description_get_family( pango_font_describe( run->item->analysis.font ) ) );
glEnable( GL_STENCIL_TEST );
glStencilFillPathInstancedNV( run->glyphs->num_glyphs,
GL_UNSIGNED_INT,
&glyphs[0],
font->nvPath,
GL_PATH_FILL_MODE_NV,
0xFF,
GL_TRANSLATE_2D_NV,
&positions[0]
);
glStencilFunc( GL_NOTEQUAL, 0, 0xFF );
glStencilOp( GL_KEEP, GL_KEEP, GL_ZERO );
glColor3f( 0.0, 0.0, 0.0 );
glCoverFillPathInstancedNV( run->glyphs->num_glyphs,
GL_UNSIGNED_INT,
&glyphs[0],
font->nvPath,
GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
GL_TRANSLATE_2D_NV,
&positions[0]
);
glDisable( GL_STENCIL_TEST );
runs = runs->next;
}
}
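One thing the loop above drops is each glyph's own geometry.x_offset/y_offset, which Pango reports in Pango units (1/PANGO_SCALE of a pixel) alongside the advance in geometry.width. Here is a minimal sketch of accumulating pen positions from those fields, using stand-in structs so it compiles without Pango (the struct mirrors PangoGlyphGeometry; the helper itself is hypothetical):

```cpp
#include <vector>

// Stand-ins for Pango's types: PANGO_SCALE is 1024, and PangoGlyphGeometry
// stores width (the advance) plus x_offset/y_offset, all in Pango units.
#define PANGO_PIXELS(d) (((d) + 512) >> 10)

struct GlyphGeometry { int width, x_offset, y_offset; };

// Accumulate per-glyph pen positions in pixels: each glyph is drawn at the
// running advance plus its own offsets, then the pen moves by its width.
std::vector<float> glyph_positions(const std::vector<GlyphGeometry>& glyphs,
                                   float baseline_y_px) {
    std::vector<float> pos;
    int pen_x = 0;  // running advance, in Pango units
    for (const auto& g : glyphs) {
        pos.push_back((float)PANGO_PIXELS(pen_x + g.x_offset));
        pos.push_back(baseline_y_px + (float)PANGO_PIXELS(g.y_offset));
        pen_x += g.width;
    }
    return pos;
}
```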
How can I set an RGB color on any component? FireMonkey, C++Builder XE8.
I have used this code but it's useless...
Rectangle1->Fill->Color = RGB(255, 50, 103);
Rectangle1->Fill->Color = (TColor)RGB(255, 50, 103);
Maybe I must use RGBA? But I don't know how to do it.
I did it.
UnicodeString s ;
s = "0xFF" ;
s += IntToHex ( 255 , 2 );
s += IntToHex ( 50 , 2 );
s += IntToHex ( 103 , 2 );
Rectangle1 -> Fill -> Color = StringToColor ( s );
This function will allow you to convert int-specified RGB values to a TAlphaColor, which is what FireMonkey uses.
TAlphaColor GetAlphaColor (int R, int G, int B)
{
TAlphaColorRec acr;
acr.R = R;
acr.G = G;
acr.B = B;
acr.A = 255;
return acr.Color;
}
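For what it's worth, TAlphaColorRec is just a view over a 32-bit ARGB value, so the same result can be had with plain bit shifts. A sketch assuming FireMonkey's 0xAARRGGBB layout (make_alpha_color is a stand-in name, not a VCL/FMX function):

```cpp
#include <cstdint>

// Pack R, G, B (and alpha, defaulting to opaque) into the 0xAARRGGBB
// layout that a TAlphaColor holds, alpha in the top byte.
uint32_t make_alpha_color(uint8_t r, uint8_t g, uint8_t b, uint8_t a = 255) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
}
```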
I have been having problems with a hard-to-track-down segmentation fault lately. The strange thing is, it lets me access the array fine, but for some reason it doesn't allow me to free it without causing the fault. I tested everything to make sure it wasn't anything else, so I can say with 100% certainty that it only occurs at the free line. The confusing thing is that the crash is very selective: it only occurs after I press the help button on the menu in my game, and it has no problems otherwise. After accessing the help section, it still allows access, which makes the error even stranger. I may have the wrong idea about segmentation faults, but I'm pretty sure they don't usually allow access even if it isn't a significant error; they usually haven't for me. I would post the code here, but my project is very large, so I'll try to post the parts of importance.
Here is how I allocate the array :
static void zero_array( void * memory, int elements, size_t element_size )
{
memset( memory, 0, elements * element_size );
return;
}
/* ******************************************************** */
static void fill_array_with_ones( void * memory, int elements, size_t element_size )
{
memset( memory, 1, elements * element_size );
return;
}
/* ******************************************************** */
static void * create_array( int elements, size_t element_size, t_setter setter )
{
void * temp = NULL;
if ( !( temp = malloc( elements * element_size ) ) )
{
perror( "malloc" );
exit( 1 );
}
if ( setter )
setter( temp, elements, element_size );
return temp;
}
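A side note on fill_array_with_ones: memset fills bytes, not elements, so it only yields the value 1 for single-byte types like the bool arrays here. A minimal demonstration of what happens with a wider element type:

```cpp
#include <cstring>

// memset writes its value into every BYTE, so filling with 1 only produces
// the value 1 for one-byte element types such as bool or char. For wider
// element types every element becomes 0x0101... instead.
int first_int_after_memset_one() {
    int a[4];
    std::memset(a, 1, sizeof a);  // every byte of every int becomes 0x01
    return a[0];                  // 0x01010101, not 1
}
```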
I have tested this many times without problems. I use it mainly for my menu API, which makes it easier to track button events in SDL on bitmap images.
My setup code for the problem area looks like this :
extern void setup_mdata( game_data * game, menu * tmenu, const char * menu_image_path, int max_buttons, bool default_handling, t_handle_var_mdata handle_mdata )
{
int width = 0;
int height = 0;
tmenu->max_buttons = max_buttons;
tmenu->allocation_size = sizeof( bool ) * game->game_menu.max_buttons;
tmenu->hover_over_button = ( bool * )create_array( max_buttons, sizeof( bool ), zero_array );
tmenu->is_clicked = ( bool * )create_array( max_buttons, sizeof( bool ), zero_array );
if ( !default_handling )
tmenu->is_enabled = ( bool * )create_array( max_buttons, sizeof( bool ), fill_array_with_ones );
tmenu->button_shape = ( SDL_Rect * )create_array( max_buttons, sizeof( SDL_Rect ), zero_array );
tmenu->menu_image = load_texture( game->renderer, ( char * )menu_image_path, true, 255, 0, 255 );
SDL_QueryTexture( game->game_menu.menu_image, NULL, NULL, &width, &height );
handle_mdata( game, tmenu, width, height );
return;
}
Yet again, I have tested this code many times and it hasn't ever failed on any other menu. Which makes the problem very strange to me.
Here is how I call this function for both the main and help menu for the title screen :
setup_mdata( game, &game->title_menu, "bitmaps//title_menu.bmp", 3, true, setup_title_menu );
setup_mdata( game, &game->title_help_menu, "bitmaps//title_help_menu.bmp", 3, true, setup_title_help_menu );
I don't think it matters, but if you're wondering, here is the code that actually sets up the menus with additional data.
static void setup_title_menu( game_data * game, menu * tmenu, int width, int height )
{
int nbutton = 0;
tmenu->button_highlight = ( SDL_Texture ** )create_array( 1, sizeof( SDL_Texture * ), zero_array );
*tmenu->button_highlight = load_texture( game->renderer, ( char * )"bitmaps//button4_highlight.bmp", true, 255, 255, 255 );
for ( ; nbutton < tmenu->max_buttons; nbutton++ )
{
MB_SHAPE.h = TITLE_BUTTON_HEIGHT;
MB_SHAPE.w = TITLE_BUTTON_WIDTH;
MB_SHAPE.x = menu_shape.x + FIRST_TITLE_BUTTON_X;
MB_SHAPE.y = menu_shape.y + FIRST_TITLE_BUTTON_Y + ( ( TITLE_BUTTON_HEIGHT + TITLE_BUTTON_DIVIDER_HEIGHT ) * nbutton );
}
return;
}
/* ******************************************************** */
static void setup_title_help_page_shape( game_data * game )
{
game->title_screen_data.help_page_shape.h = HELP_PAGE_HEIGHT;
game->title_screen_data.help_page_shape.w = HELP_PAGE_WIDTH;
game->title_screen_data.help_page_shape.x = menu_shape.x + 59;
game->title_screen_data.help_page_shape.y = menu_shape.y + 192;
return;
}
/* ******************************************************** */
static void setup_title_help_menu( game_data * game, menu * tmenu, int width, int height )
{
int nbutton = 0;
tmenu->button_highlight = ( SDL_Texture ** )create_array( 2, sizeof( SDL_Texture * ), zero_array );
tmenu->button_highlight[0] = load_texture( game->renderer, ( char * )"bitmaps//button2_highlight.bmp", true, 255, 255, 255 );
tmenu->button_highlight[1] = load_texture( game->renderer, ( char * )"bitmaps//triangle_highlight.bmp", true, 255, 255, 255 );
setup_title_help_page_shape( game );
for ( ; nbutton < tmenu->max_buttons; nbutton++ )
{
switch ( nbutton )
{
case GO_BACK :
tmenu->button_shape[GO_BACK].h = 50;
tmenu->button_shape[GO_BACK].w = 120;
tmenu->button_shape[GO_BACK].x = menu_shape.x + 329;
tmenu->button_shape[GO_BACK].y = menu_shape.y + 61;
break;
case ARROW_RIGHT :
tmenu->button_shape[ARROW_RIGHT].h = 34;
tmenu->button_shape[ARROW_RIGHT].w = 17;
tmenu->button_shape[ARROW_RIGHT].x = menu_shape.x + 432;
tmenu->button_shape[ARROW_RIGHT].y = menu_shape.y + 473;
break;
case ARROW_LEFT :
tmenu->button_shape[ARROW_LEFT].h = 34;
tmenu->button_shape[ARROW_LEFT].w = 17;
tmenu->button_shape[ARROW_LEFT].x = menu_shape.x + 390;
tmenu->button_shape[ARROW_LEFT].y = menu_shape.y + 473;
break;
}
}
return;
}
The part where it actually breaks is when I try to do this :
free( game->title_help_menu.is_clicked );
free( game->title_help_menu.hover_over_button );
If you're wondering, it doesn't break on the help menu arrays, for whatever reason, which confuses me even more. I made sure it wasn't anything else by removing these functions, and it didn't throw a SIGSEGV. I figure there is some subtle problem causing this, but I have no idea what it is. I'd be glad if any of you know more about debugging than I do and can at least make some suggestions.
The strange thing is, it lets me access the array fine, but for some reason it doesn't allow me to free it without causing the fault.
That isn't strange at all.
Most UNIX malloc implementations do not release freed memory to the OS immediately. Instead, they keep it on a free list, in the hope that the application will ask for it again and it will be readily available without executing system calls (which are expensive).
The confusing thing is that the crash is very selective.
That is also very typical of heap corruption problems. When you corrupt the heap (e.g. by overflowing an allocated buffer, or freeing something twice), the crash may come much later, and may go away when you change something unrelated.
Good news: tools like Valgrind and Address Sanitizer usually help you find heap corruption problems very quickly and efficiently.
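For example, here is a sketch of how those tools are typically invoked (assuming a gcc/clang toolchain; game.c and ./game are placeholder names):

```shell
# Rebuild with AddressSanitizer and run normally; it aborts at the first
# out-of-bounds write or double free, with a stack trace:
gcc -g -fsanitize=address -fno-omit-frame-pointer game.c -o game
./game

# Or run the unmodified binary under Valgrind's memcheck tool:
valgrind --tool=memcheck ./game
```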
I am using videoInput to get a live stream from my webcam, but I've run into a problem: videoInput's documentation implies that I should always be getting a BGR/RGB image; however, the "verbose" output tells me the pixel format is YUY2.
***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****
SETUP: Setting up device 0
SETUP: 1.3M WebCam
SETUP: Couldn't find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480
SETUP: trying format RGB24 @ 640 by 480
SETUP: trying format RGB32 @ 640 by 480
SETUP: trying format RGB555 @ 640 by 480
SETUP: trying format RGB565 @ 640 by 480
SETUP: trying format YUY2 @ 640 by 480
SETUP: Capture callback set
SETUP: Device is setup and ready to capture.
My first thought was to try converting to RGB (assuming I was really getting YUY2 data), and I ended up getting a highly distorted blue image.
Here is my code for converting YUY2 to BGR (note: this is part of a much larger program, and this is borrowed code - I can get the URL at anyone's request):
#define CLAMP_MIN( in, min ) ((in) < (min))?(min):(in)
#define CLAMP_MAX( in, max ) ((in) > (max))?(max):(in)
#define FIXNUM 16
#define FIX(a, b) ((int)((a)*(1<<(b))))
#define UNFIX(a, b) ((a+(1<<(b-1)))>>(b))
#define ICCIRUV(x) (((x)<<8)/224)
#define ICCIRY(x) ((((x)-16)<<8)/219)
#define CLIP(t) CLAMP_MIN( CLAMP_MAX( (t), 255 ), 0 )
#define GET_R_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.402, FIXNUM)*(v)), FIXNUM)
#define GET_G_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(-0.344, FIXNUM)*(u) + FIX(-0.714, FIXNUM)*(v)), FIXNUM)
#define GET_B_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.772, FIXNUM)*(u)), FIXNUM)
bool yuy2_to_rgb24(int streamid) {
int i;
unsigned char y1, u, y2, v;
int Y1, Y2, U, V;
unsigned char r, g, b;
int size = stream[streamid]->config.g_h * (stream[streamid]->config.g_w / 2);
unsigned long srcIndex = 0;
unsigned long dstIndex = 0;
try {
for(i = 0 ; i < size ; i++) {
y1 = stream[streamid]->vi_buffer[srcIndex];
u = stream[streamid]->vi_buffer[srcIndex+ 1];
y2 = stream[streamid]->vi_buffer[srcIndex+ 2];
v = stream[streamid]->vi_buffer[srcIndex+ 3];
Y1 = ICCIRY(y1);
U = ICCIRUV(u - 128);
Y2 = ICCIRY(y2);
V = ICCIRUV(v - 128);
r = CLIP(GET_R_FROM_YUV(Y1, U, V));
//r = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (1.596f * (float(V) - 128)) );
g = CLIP(GET_G_FROM_YUV(Y1, U, V));
//g = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
b = CLIP(GET_B_FROM_YUV(Y1, U, V));
//b = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );
stream[streamid]->rgb_buffer[dstIndex] = b;
stream[streamid]->rgb_buffer[dstIndex + 1] = g;
stream[streamid]->rgb_buffer[dstIndex + 2] = r;
dstIndex += 3;
r = CLIP(GET_R_FROM_YUV(Y2, U, V));
//r = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (1.596f * (float(V) - 128)) );
g = CLIP(GET_G_FROM_YUV(Y2, U, V));
//g = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
b = CLIP(GET_B_FROM_YUV(Y2, U, V));
//b = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );
stream[streamid]->rgb_buffer[dstIndex] = b;
stream[streamid]->rgb_buffer[dstIndex + 1] = g;
stream[streamid]->rgb_buffer[dstIndex + 2] = r;
dstIndex += 3;
srcIndex += 4;
}
return true;
} catch(...) {
return false;
}
}
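As a quick sanity check on the fixed-point macros themselves: a neutral YUY2 sample (Y = 128, U = V = 128) should decode to mid-gray, so if that holds, the blue cast is more likely a layout/packing issue than a math error. A standalone sketch reproducing just the red path (macros copied from above; gray_red is a stand-in name):

```cpp
// Fixed-point macros copied from the conversion code above.
#define FIXNUM 16
#define FIX(a, b) ((int)((a)*(1<<(b))))
#define UNFIX(a, b) ((a+(1<<(b-1)))>>(b))
#define ICCIRUV(x) (((x)<<8)/224)
#define ICCIRY(x) ((((x)-16)<<8)/219)
#define GET_R_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.402, FIXNUM)*(v)), FIXNUM)

// With zero chroma the red channel is just the range-expanded luma, so a
// neutral sample comes out as mid-gray rather than tinted.
int gray_red() {
    int Y = ICCIRY(128);        // CCIR 601 range expansion: 128 -> 130
    int V = ICCIRUV(128 - 128); // 0: no chroma
    return GET_R_FROM_YUV(Y, 0, V);
}
```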
After this didn't work, I assumed either a) my color space conversion function is wrong, or b) videoInput is lying to me.
Well, I wanted to double-check that videoInput was indeed telling me the truth, and it turns out there is absolutely no way to see the pixel format you're getting from the videoInput::getPixels() function, outside of the verbose text (unless I'm extremely crazy and just can't see it). This makes me assume that videoInput possibly does some sort of color space conversion behind the scenes so you always get a consistent image, regardless of the webcam. With this in mind, and following some of the documentation in videoInput.h:96, it sort of appears that it just gives out RGB or BGR images.
The utility I'm using to display the image takes RGB images (Java BufferedImage), so I figured I could just feed it the raw data directly from videoInput, and it should be fine.
Here is how I've got my image setup in Java:
BufferedImage buffer = new BufferedImage(directShow.device_stream_width(stream),directShow.device_stream_height(stream), BufferedImage.TYPE_INT_RGB );
int rgbdata[] = directShow.grab_frame_stream(stream);
if( rgbdata.length > 0 ) {
buffer.setRGB(
0, 0,
directShow.device_stream_width(stream),
directShow.device_stream_height(stream),
rgbdata,
0, directShow.device_stream_width(stream)
);
}
And here is how I send it to Java (C++/JNI):
JNIEXPORT jintArray JNICALL Java_directshowcamera_dsInterface_grab_1frame_1stream(JNIEnv *env, jobject obj, jint streamid)
{
//jclass bbclass = env->FindClass( "java/nio/IntBuffer" );
//jmethodID putMethod = env->GetMethodID(bbclass, "put", "(B)Ljava/nio/IntBuffer;");
int buffer_size;
jintArray ia;
jint *intbuffer = NULL;
unsigned char *buffer = NULL;
append_stream( streamid );
buffer_size = stream_device_rgb24_size(streamid);
ia = env->NewIntArray( buffer_size );
intbuffer = (jint *)calloc( buffer_size, sizeof(jint) );
buffer = stream_device_buffer_rgb( streamid );
if( buffer == NULL ) {
env->DeleteLocalRef( ia );
return env->NewIntArray( 0 );
}
for(int i=0; i < buffer_size; i++ ) {
intbuffer[i] = (jint)buffer[i];
}
env->SetIntArrayRegion( ia, 0, buffer_size, intbuffer );
free( intbuffer );
return ia;
}
This has been driving me absolutely nuts for the past two weeks, and I've tried variations of everything suggested to me as well, with absolutely no success.
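One thing worth double-checking on the Java side: with TYPE_INT_RGB, BufferedImage.setRGB expects one packed 0x00RRGGBB int per pixel, while the JNI loop above copies one byte per int. A sketch of the packing step (pack_bgr24 is a hypothetical helper; the 3-byte BGR layout is assumed from the conversion code above):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Pack a BGR byte buffer (3 bytes per pixel, as produced by the YUY2
// conversion above) into the 0x00RRGGBB ints that TYPE_INT_RGB expects.
std::vector<int32_t> pack_bgr24(const uint8_t* bgr, size_t pixel_count) {
    std::vector<int32_t> out(pixel_count);
    for (size_t i = 0; i < pixel_count; ++i) {
        const uint8_t b = bgr[i * 3 + 0];
        const uint8_t g = bgr[i * 3 + 1];
        const uint8_t r = bgr[i * 3 + 2];
        out[i] = (int32_t(r) << 16) | (int32_t(g) << 8) | b;
    }
    return out;
}
```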
I've got a dialog that I'd basically like to implement as a texture viewer using DirectX. The source texture can either come from a file on-disk or from an arbitrary D3D texture/surface in memory. The window will be resizable, so I'll need to be able to scale its contents accordingly (preserving aspect ratio, while not necessary, would be useful to know).
What would be the best way to go about implementing the above?
IMHO the easiest way to do this is to create a quad (or two triangles) whose vertices contain the correct UV coordinates. Set the XYZ coordinates to the viewing-cube coordinates. This only works if the identity matrix is set as the projection. You can use -1 to 1 on both the X and Y axes.
EDIT: Here's an example tutorial:
http://www.mvps.org/directx/articles/splash_screen.htm
This is the code I use to preserve size and scaling for a resizable dialog. My texture is held in a memory bitmap; I am sure you can adapt it if you do not have one. The important bit is the way I determine the right scaling factor to preserve the aspect ratio for any client-area size:
CRect destRect( 0, 0, frameRect.Width(), frameRect.Height() );
if( txBitmapInfo.bmWidth <= frameRect.Width() && txBitmapInfo.bmHeight <= frameRect.Height() )
{
destRect.left = ( frameRect.Width() - txBitmapInfo.bmWidth ) / 2;
destRect.right = destRect.left + txBitmapInfo.bmWidth;
destRect.top = ( frameRect.Height() - txBitmapInfo.bmHeight ) / 2;
destRect.bottom = destRect.top + txBitmapInfo.bmHeight;
}
else
{
double hScale = static_cast<double>( frameRect.Width() ) / txBitmapInfo.bmWidth;
double vScale = static_cast<double>( frameRect.Height() ) / txBitmapInfo.bmHeight;
if( hScale < vScale )
{
int height = static_cast<int>( frameRect.Width() * ( static_cast<double>(txBitmapInfo.bmHeight) / txBitmapInfo.bmWidth ) );
destRect.top = ( frameRect.Height() - height ) / 2;
destRect.bottom = destRect.top + height;
}
else
{
int width = static_cast<int>( frameRect.Height() * ( static_cast<double>(txBitmapInfo.bmWidth) / txBitmapInfo.bmHeight ) );
destRect.left = ( frameRect.Width() - width ) / 2;
destRect.right = destRect.left + width;
}
}
Hope this helps!
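The branching above can be condensed into a small helper that computes one scale factor and centers the result (fit_centered and Rect here are stand-in names, not MFC types):

```cpp
#include <algorithm>

struct Rect { int left, top, right, bottom; };

// Shrink (never enlarge) the image to fit the frame while preserving its
// aspect ratio, then center it -- the same effect as the branches above.
Rect fit_centered(int frameW, int frameH, int imgW, int imgH) {
    double scale = std::min(1.0, std::min(double(frameW) / imgW,
                                          double(frameH) / imgH));
    int w = int(imgW * scale);
    int h = int(imgH * scale);
    int left = (frameW - w) / 2;
    int top  = (frameH - h) / 2;
    return { left, top, left + w, top + h };
}
```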