Dynamic texture loading in SDL - C++

I've got problems with loading textures in SDL. I've got a function that reads BMP files, optimizes them, and adds a color key:
SDL_Surface* SDLStuff::LoadImage( char* FileName ) {
    printf( "Loading texture: \"%s\"\n", FileName );
    SDL_Surface* loadedImage = 0;
    SDL_Surface* optimizedImage = 0;
    loadedImage = SDL_LoadBMP( FileName );
    optimizedImage = SDL_DisplayFormat( loadedImage );
    SDL_FreeSurface( loadedImage );
    Uint32 colorkey = SDL_MapRGB( optimizedImage->format, 255, 0, 255 );
    SDL_SetColorKey( optimizedImage, SDL_RLEACCEL | SDL_SRCCOLORKEY, colorkey );
    //SDL_SetColorKey(Tiles[0].Texture, SDL_SRCCOLORKEY | SDL_RLEACCEL, SDL_MapRGB(Tiles[0].Texture->format, 255, 0 ,255));
    Cache.push_back( optimizedImage );
    return optimizedImage;
}
This works great. I then load all my textures like this, which also works:
Objects[0].Texture = SDLS.LoadImage( "data/mods/default/sprites/house.bmp" );
Objects[1].Texture = SDLS.LoadImage( "data/mods/default/sprites/wall0.bmp" );
Objects[2].Texture = SDLS.LoadImage( "data/mods/default/sprites/wall1.bmp" );
Selector.Texture = SDLS.LoadImage( "data/mods/default/selector.bmp" );
Tiles[0].Texture = SDLS.LoadImage( "data/mods/default/tiles/grass.bmp" );
Tiles[1].Texture = SDLS.LoadImage( "data/mods/default/tiles/dirt.bmp" );
Tiles[2].Texture = SDLS.LoadImage( "data/mods/default/tiles/black.bmp" );
But I want to be able to control this through some kind of data file, so I wrote a function to parse a CSV file. Then I take the values and try to load the BMP files, like this:
void DataFile( std::string Mod, std::string FileName, std::string Separator = "\t" ) {
    ini dataf;
    dataf.Init();
    dataf.LoadFile( "data/mods/" + Mod + "/" + FileName );
    std::vector< std::vector< std::string > > MData = dataf.LoopCSV( Separator );
    for ( unsigned int Row = 0; Row < MData.size(); Row++ ) {
        if ( MData.at( Row ).size() > 0 ) {
            if ( MData.at( Row )[0] == "TILE" ) {
                if ( MData.at( Row ).size() == 4 ) {
                    std::string a = "data/mods/" + Mod + "/" + MData.at( Row )[3];
                    WriteLog( a.c_str() );
                    Tileset TTile;
                    TTile.WalkCost = String2Int( MData.at( Row )[2] );
                    TTile.Texture = SDLS.LoadImage( a.c_str() );
                    Tiles[String2Int(MData.at( Row )[1])] = TTile;
                } else {
                    WriteLog( "Wrong number of arguments passed to TILE\n" );
                }
            }
        }
    }
    dataf.Destroy();
}
This works perfectly well and it logs paths to files that actually exist; I've double-checked every file. BUT the SDLS.LoadImage() call fails anyway and the program crashes. If I comment out that line everything works fine, except that nothing is rendered where the tiles should be. The files are there and load fine when I load them manually, and SDL is initialized before I call SDL_DisplayFormat(), so I don't know what can be wrong with this :(
EDIT:
Just a note so as not to confuse people: the SDLStuff class keeps a cache of pointers to the loaded textures. That way I can loop through the cache and free all loaded textures with a single call to a function in SDLStuff.
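A likely culprit here is SDL_LoadBMP() returning NULL (for example because a CSV field still carries a trailing '\r' or stray spaces) and that NULL being passed straight into SDL_DisplayFormat(). A minimal, illustrative variant of LoadImage with error checks would make such a failure visible instead of crashing; note it takes const char*, since a.c_str() cannot be passed to a plain char* parameter:

SDL_Surface* SDLStuff::LoadImage( const char* FileName ) {
    printf( "Loading texture: \"%s\"\n", FileName );

    SDL_Surface* loadedImage = SDL_LoadBMP( FileName );
    if ( loadedImage == 0 ) {
        // Typical cause: the path built from the CSV does not exist as spelled.
        printf( "SDL_LoadBMP failed for \"%s\": %s\n", FileName, SDL_GetError() );
        return 0;
    }

    SDL_Surface* optimizedImage = SDL_DisplayFormat( loadedImage );
    SDL_FreeSurface( loadedImage );
    if ( optimizedImage == 0 ) {
        printf( "SDL_DisplayFormat failed: %s\n", SDL_GetError() );
        return 0;
    }

    Uint32 colorkey = SDL_MapRGB( optimizedImage->format, 255, 0, 255 );
    SDL_SetColorKey( optimizedImage, SDL_RLEACCEL | SDL_SRCCOLORKEY, colorkey );

    Cache.push_back( optimizedImage );
    return optimizedImage;
}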

You may want to use the official SDL_image library to load JPG, PNG, TIFF, etc.:
http://www.libsdl.org/projects/SDL_image/
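With SDL_image the loader above stays almost identical; a minimal sketch, assuming the library is installed and linked (e.g. -lSDL_image):

#include <cstdio>
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>

SDL_Surface* LoadAnyImage( const char* fileName ) {
    // IMG_Load picks the decoder from the file contents (BMP, PNG, JPG, ...).
    SDL_Surface* loaded = IMG_Load( fileName );
    if ( loaded == 0 ) {
        printf( "IMG_Load failed for \"%s\": %s\n", fileName, IMG_GetError() );
        return 0;
    }
    SDL_Surface* optimized = SDL_DisplayFormat( loaded );
    SDL_FreeSurface( loaded );
    return optimized;
}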

Maybe a better solution is to create an archive file with your resources and iterate over the files in it.
Benefits:
1. You do not need to create a CSV file.
2. Your archived BMPs will take up less space.
3. You can set a password on the archive file to protect your resources from users.
Additional links:
http://www.zlib.net/
What libraries should I use to manipulate archives from C++?
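As a rough illustration of the archive idea, assuming the resources were simply gzip-compressed with zlib (LoadGzippedBMP is a hypothetical helper, not an existing API), the decompressed bytes can be handed to SDL through SDL_RWFromMem:

#include <vector>
#include <zlib.h>
#include <SDL/SDL.h>

// Hypothetical helper: inflate e.g. "house.bmp.gz" and load it as an SDL surface.
SDL_Surface* LoadGzippedBMP( const char* gzPath ) {
    gzFile gz = gzopen( gzPath, "rb" );
    if ( gz == NULL ) return NULL;

    std::vector<char> data;
    char chunk[4096];
    int  got;
    while ( ( got = gzread( gz, chunk, sizeof( chunk ) ) ) > 0 )
        data.insert( data.end(), chunk, chunk + got );
    gzclose( gz );
    if ( data.empty() ) return NULL;

    // SDL_LoadBMP_RW parses the decompressed bytes in place; 1 = free the RWops for us.
    SDL_RWops* rw = SDL_RWFromMem( &data[0], (int)data.size() );
    return SDL_LoadBMP_RW( rw, 1 );
}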

Related

SDL 1.2: How to create a text wrapping function with SDL_ttf like in SDL 2.0?

I am trying to create a text wrapping function using SDL 1.2 + SDL_ttf. I am aware that SDL 2.0 already has a readily available text wrapping function (TTF_RenderText_Blended_Wrapped), but I am using SDL 1.2, so I am trying to implement the wrapping manually.
Basically I have two surfaces: the wrapper surface, and a temporary surface which contains the rendered string. The rendered string is the text between two white spaces taken from the entire string that is to be wrapped. Each temporary surface (string part) is rendered and blitted onto the wrapper surface, automatically jumping to the next line if the wrapper width is exceeded.
The string is eventually wrapped properly; the problem, however, comes with the wrapper surface. Initially I create a colored surface with SDL_CreateRGBSurface(), because to my understanding I cannot blit the string parts onto an empty surface. After the wrapping, I want to remove the unused space on the wrapper surface so that the surface is transparent apart from the text.
For this I used color keying, but noticed that it is a poor solution since it also removes some of the text color. My other attempt was to set the alpha values on both surfaces with SDL_SetAlpha(), but this somehow ended with the same result: the text color was all wrong.
Below is an example of the code I wrote, I would really appreciate some alternative ideas or solutions.
TTF_Font* textFont = nullptr;
SDL_Color textColor = {0, 0, 0};
std::string wrapperText = "This text is to be wrapped with transparency";
SDL_Surface* wrapperSurface = nullptr;
SDL_Rect wrapperRect = {0, 0, 160, 50};
std::string tempString = "";
SDL_Surface* tempSurface = nullptr;
SDL_Rect tempRect = {0, 0, 0, 0};
int lineWidthPx = 0;
int lineHeightPx = 0;
int textWidthPx = 0;
int textHeightPx = 0;
std::size_t textBegin = 0;
std::size_t textEnd = 0;
textFont = TTF_OpenFont("res/fonts/Nunito-Black.ttf", 14);
wrapperSurface = SDL_CreateRGBSurface(SDL_HWSURFACE, wrapperRect.w, wrapperRect.h, 32, 255, 255, 255, 255);
SDL_SetAlpha(wrapperSurface, SDL_SRCALPHA | SDL_RLEACCEL, SDL_ALPHA_TRANSPARENT);
while (textEnd < wrapperText.size())
{
    textBegin = textEnd + 1;
    textEnd = wrapperText.find(" ", textBegin);
    if (textEnd == std::string::npos)
        textEnd = wrapperText.size(); // Reached end of text
    tempString = wrapperText.substr(textBegin - 1, textEnd - textBegin + 1);
    TTF_SizeText(textFont, tempString.c_str(), &textWidthPx, &textHeightPx);
    lineWidthPx += textWidthPx;
    if (lineWidthPx > wrapperRect.w)
    {
        tempString = wrapperText.substr(textBegin, textEnd - textBegin);
        lineWidthPx = 0 + textWidthPx;
        lineHeightPx += textHeightPx; // Next line
        if (lineHeightPx > wrapperRect.h)
            break; // Text is too large for wrapper
    }
    tempSurface = TTF_RenderUTF8_Solid(textFont, tempString.c_str(), textColor);
    SDL_SetAlpha(tempSurface, SDL_SRCALPHA | SDL_RLEACCEL, SDL_ALPHA_OPAQUE);
    tempRect = { static_cast<Sint16>(lineWidthPx-textWidthPx), static_cast<Sint16>(lineHeightPx), static_cast<Uint16>(textWidthPx), static_cast<Uint16>(textHeightPx) };
    SDL_BlitSurface(tempSurface, NULL, wrapperSurface, &tempRect);
}
TTF_CloseFont(textFont);
textFont = nullptr;
// Remove temporary surface
SDL_FreeSurface(tempSurface);
tempSurface = nullptr;
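One alternative worth trying (a sketch only, building on the code above): render each piece with TTF_RenderUTF8_Blended so the glyphs carry per-pixel alpha, give the wrapper real RGBA masks, and clear SDL_SRCALPHA on the source surface before blitting. In SDL 1.2 that makes SDL_BlitSurface copy the alpha channel instead of blending it away:

// The last four SDL_CreateRGBSurface() arguments are channel masks, not a colour,
// so give the wrapper a proper 32-bit RGBA layout and clear it to fully transparent.
SDL_Surface* wrapper = SDL_CreateRGBSurface( SDL_SWSURFACE, wrapperRect.w, wrapperRect.h, 32,
                                             0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000 );
SDL_FillRect( wrapper, NULL, SDL_MapRGBA( wrapper->format, 0, 0, 0, 0 ) );

// Per word/part, inside the wrapping loop:
SDL_Surface* piece = TTF_RenderUTF8_Blended( textFont, tempString.c_str(), textColor );
SDL_SetAlpha( piece, 0, 0 );                      // disable blending: copy RGBA as-is
SDL_BlitSurface( piece, NULL, wrapper, &tempRect );
SDL_FreeSurface( piece );

// When drawing the finished wrapper to the screen, turn per-pixel alpha back on:
SDL_SetAlpha( wrapper, SDL_SRCALPHA, 0 );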

FastFeatureDetector OpenCV C++ filtering results

I am developing a game bot using OpenCV and I am trying to make it detect spikes.
The spikes look like this:
What I tried was using a FastFeatureDetector to highlight keypoints; the result was the following:
The spikes are horizontal and change colors. The operation runs on a full 1920x1080 screen.
My thinking was to take one of the points and compare its X coordinate to that of every other point, but since I have no way of filtering the results, with 6094 keypoints the operation took too long (37,136,836 iterations).
Is there a way to filter FastFeatureDetector results, or should I approach this in another way?
My code:
Point * findSpikes( Mat frame , int * num_spikes )
{
    Point * ret = NULL;
    int spikes_counter = 0;
    Mat frame2;
    cvtColor( frame , frame2 , CV_BGR2GRAY );
    Ptr<FastFeatureDetector> myBlobDetector = FastFeatureDetector::create( );
    vector<KeyPoint> myBlobs;
    myBlobDetector->detect( frame2 , myBlobs );
    HWND wnd = FindWindow( NULL , TEXT( "Andy" ) );
    RECT andyRect;
    GetWindowRect( wnd , &andyRect );
    /*Mat blobimg;
    drawKeypoints( frame2 , myBlobs , blobimg );*/
    //imshow( "Blobs" , blobimg );
    //waitKey( 1 );
    printf( "Size of vectors : %d\n" , myBlobs.size( ) );
    for ( vector<KeyPoint>::iterator blobIterator = myBlobs.begin( ); blobIterator != myBlobs.end( ); blobIterator++ )
    {
#pragma region FilteringArea
        //filtering keypoints
        if ( blobIterator->pt.x > andyRect.right || blobIterator->pt.x < andyRect.left
            || blobIterator->pt.y > andyRect.bottom || blobIterator->pt.y < andyRect.top )
        {
            printf( "Filtered\n" );
            continue;
        }
#pragma endregion
        for ( vector<KeyPoint>::iterator comparsion = myBlobs.begin( ); comparsion != myBlobs.end( ); comparsion++ )
        {
            //filtering keypoints
#pragma region FilteringRegion
            if ( comparsion->pt.x > andyRect.right || comparsion->pt.x < andyRect.left
                || comparsion->pt.y > andyRect.bottom || comparsion->pt.y < andyRect.top )
            {
                printf( "Filtered\n" );
                continue;
            }
            printf( "Processing\n" );
            double diffX = abs( blobIterator->pt.x - comparsion->pt.x );
            if ( diffX <= 5 )
            {
                spikes_counter++;
                printf( "Spike added\n" );
                ret = ( Point * ) realloc( ret , sizeof( Point ) * spikes_counter );
                if ( !ret )
                {
                    printf( "Memory error\n" );
                    ret = NULL;
                }
                ret[spikes_counter - 1].y = ( ( blobIterator->pt.y + comparsion->pt.y ) / 2 );
                ret[spikes_counter - 1].x = blobIterator->pt.x;
                break;
            }
#pragma endregion
        }
    }
    ( *( num_spikes ) ) = spikes_counter;
    return ret;//Modify later
}
I'm aware of the usage of realloc and printf in C++; I just don't like cout and new.
Are the spikes actually different sizes and irregularly spaced in real life? In your image they are regularly spaced and identically sized, so once you know the coordinates of one point you can calculate all of the rest by simply adding a fixed increment to the X coordinate.
If the spikes are irregularly spaced and potentially of different heights, I'd suggest you try the following (a rough sketch follows below):
1. Use the Canny edge detector to find the boundary between the spikes and the background.
2. For each X coordinate in the edge image, search a single column using minMaxIdx to find the brightest point in that column.
3. If the Y coordinate of that point is higher up the screen than the Y coordinate of the brightest point in the previous column, then the previous column was a spike; save its (X, Y) coordinates.
4. If a spike was found in step 3, keep skipping across columns until the brightest Y coordinate in a column is the same as in the previous column, then repeat the spike detection; otherwise keep searching for the next spike.
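A rough sketch of those steps, using a plain per-column scan instead of minMaxIdx and treating a spike tip as a column whose top-most edge pixel sits higher than its neighbours (the Canny thresholds are placeholders to tune):

#include <vector>
#include <opencv2/opencv.hpp>

std::vector<cv::Point> findSpikeTips( const cv::Mat& frame )
{
    cv::Mat gray, edges;
    cv::cvtColor( frame, gray, cv::COLOR_BGR2GRAY );
    cv::Canny( gray, edges, 50, 150 );

    // Top-most edge row of every column (edges.rows means "no edge in this column").
    std::vector<int> topRow( edges.cols, edges.rows );
    for ( int x = 0; x < edges.cols; x++ )
        for ( int y = 0; y < edges.rows; y++ )
            if ( edges.at<uchar>( y, x ) > 0 ) { topRow[x] = y; break; }

    // Keep columns whose top edge is higher (smaller y) than both neighbours.
    std::vector<cv::Point> tips;
    for ( int x = 1; x + 1 < edges.cols; x++ )
        if ( topRow[x] < edges.rows && topRow[x] < topRow[x - 1] && topRow[x] <= topRow[x + 1] )
            tips.push_back( cv::Point( x, topRow[x] ) );

    return tips;
}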
Considering the form of your spikes, I'd suggest template matching; keypoints seem a rather indirect approach.
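A rough sketch of the template route; spikeTemplate would be a small crop of one spike, the 0.8 threshold is a placeholder, and overlapping hits would still need to be merged (e.g. by non-maximum suppression):

#include <vector>
#include <opencv2/opencv.hpp>

std::vector<cv::Point> findSpikesByTemplate( const cv::Mat& frame, const cv::Mat& spikeTemplate )
{
    cv::Mat result;
    cv::matchTemplate( frame, spikeTemplate, result, cv::TM_CCOEFF_NORMED );

    std::vector<cv::Point> hits;
    for ( int y = 0; y < result.rows; y++ )
        for ( int x = 0; x < result.cols; x++ )
            if ( result.at<float>( y, x ) > 0.8f )
                hits.push_back( cv::Point( x, y ) );   // top-left corner of a match
    return hits;
}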

GDAL GeoTiff Corrupt on Write (C++)

I'm getting corrupt output when writing a GeoTiff using the GDAL API (v1.10, C++). The raster geotransform is correct and the block is written in the correct position, but the pixels within the block are written at seemingly random positions and with random values.
Example: http://i.imgur.com/mntnAfK.png
Method: Open a GDAL Raster --> copy projection info & size --> create output GeoTiff --> write a block from array at offset.
Code:
//Open the input DEM
const char* demFName = "/Users/mount.dem";
const char* outFName = "/Users/geodata/out_test.tif";
auto poDataset = ioUtils::openDem(demFName);
double adfGeoTransform[6];
poDataset->GetGeoTransform( adfGeoTransform );
//Setup driver
const char *pszFormat = "GTiff";
GDALDriver *poDriver;
poDriver = GetGDALDriverManager()->GetDriverByName(pszFormat);
char *pszSRS_WKT = NULL;
GDALRasterBand *poBand;
//Get size from input Raster
int xSize = poDataset->GetRasterXSize();
int ySize = poDataset->GetRasterYSize();
//Set output Dataset
GDALDataset *poDstDS;
char **papszOptions = NULL;
//Create output geotiff
poDstDS = poDriver->Create( outFName, xSize, ySize, 1, GDT_Byte, papszOptions );
//Get the geotrans from the input geotrans
poDataset->GetGeoTransform( adfGeoTransform );
poDstDS->SetGeoTransform( adfGeoTransform );
poDstDS->SetProjection( poDataset->GetProjectionRef() );
//Create some data to write
unsigned char rData[512*512];
//Assign some values other than 0
for (int col=0; col < 512; col++){
    for (int row=0; row < 512; row++){
        rData[col*row] = 50;
    }
}
//Write some data
poBand = poDstDS->GetRasterBand(1);
poBand->RasterIO( GF_Write, 200, 200, 512, 512,
rData, 512, 512, GDT_Byte, 0, 0 );
//Close
GDALClose( (GDALDatasetH) poDstDS );
std::cout << "Done" << std::endl;
Any ideas / pointers on where I'm going wrong are much appreciated.
Always something trivial...
rData[row*512+col] = 50;
Kudos to Even Rouault on osgeo.
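In other words, rData[col*row] only ever touches a fraction of the buffer (and many cells repeatedly), so most of the block keeps whatever garbage was in memory. Indexing the 512x512 block in row-major order fills it as intended:

// Corrected fill loop: one row after another, 512 bytes per row.
for ( int row = 0; row < 512; row++ ) {
    for ( int col = 0; col < 512; col++ ) {
        rData[row * 512 + col] = 50;
    }
}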

FreeImage: Pixel data accessed by FreeImage_GetBits is not correct (data + size)

I'm using the FreeImage 3.15.4 library to analyze PNG images. I'm basically trying to build a simple data structure consisting of a palette of all the colors, plus an array version of the per-pixel image data holding indexes into that palette.
The thing is that FreeImage_GetBits seems to be returning a pointer to invalid data and I'm not sure why. I am able to read the width and height of the PNG file correctly, but the data pointed to by FreeImage_GetBits is just garbage and appears to be of an odd size. No matter how many times I run the program, it consistently dies in the same place, when iPix in the code below is equal to 131740: I get a C0000005 error accessing bits[131740] in the std::find call. The actual and reported PNG image size is 524288.
Furthermore, I've tried this code with smaller images that I built myself and they work fine. The PNG I'm using is provided by a third party and does not appear to be corrupt in any way (Photoshop opens it, and DirectX can process and use it normally).
Any ideas?
Here are the data declarations:
struct Color
{
    char b; // Blue
    char g; // Green
    char r; // Red
    char a; // Alpha value
    bool operator==( const Color& comp )
    {
        if ( a == comp.a &&
             r == comp.r &&
             g == comp.g &&
             b == comp.b )
            return TRUE;
        else
            return FALSE;
    }
};
typedef std::vector<Color> ColorPalette; // Array of colors forming a palette
And here's the code that does the color indexing:
// Read image data with FreeImage
unsigned int imageSize = FreeImage_GetWidth( hImage ) * FreeImage_GetHeight( hImage );
unsigned char* pData = new unsigned char[imageSize];
// Access bits via FreeImage
FREE_IMAGE_FORMAT fif;
FIBITMAP* hImage;
fif = FreeImage_GetFIFFromFilename( fileEntry.name.c_str() );
if( fif == FIF_UNKNOWN )
{
    return false;
}
hImage = FreeImage_Load( fif, filename );
BYTE* pPixelData = NULL;
pPixelData = FreeImage_GetBits( hImage );
if ( pPixelData == NULL )
{
    return false;
}
Color* bits = (Color*)pPixelData;
ColorPalette palette;
for ( unsigned int iPix = 0; iPix < imageSize; ++iPix )
{
    ColorPalette::iterator it;
    if( ( it = std::find( palette.begin(), palette.end(), bits[iPix] ) ) == palette.end() )
    {
        pData[iPix] = palette.size();
        palette.push_back( bits[iPix] );
    }
    else
    {
        unsigned int index = it - palette.begin();
        pData[iPix] = index;
    }
}
The problematic PNG images were using indexed color modes, and the raw pixel data was indeed being returned as 8 bpp. The correct course of action was to treat this data as 8 bits per pixel, with each 8-bit value an index into a palette of colors that can be retrieved using FreeImage_GetPalette. The alternative, which is the choice I ultimately made, was to call FreeImage_ConvertTo32Bits on these indexed-color PNG images and then pass everything through the same code path as any other 32-bit image.
Pretty simple conversion, but here it is:
// Convert non-32 bit images
if ( FreeImage_GetBPP( hImage ) != 32 )
{
    FIBITMAP* hOldImage = hImage;
    hImage = FreeImage_ConvertTo32Bits( hOldImage );
    FreeImage_Unload( hOldImage );
}
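For reference, the first option mentioned above (keeping the 8-bpp data and resolving each byte through the file's own palette) would look roughly like this; the loop body is a sketch and assumes hImage really is palettized:

if ( FreeImage_GetBPP( hImage ) == 8 )
{
    RGBQUAD* filePalette = FreeImage_GetPalette( hImage );   // up to 256 entries
    unsigned width  = FreeImage_GetWidth( hImage );
    unsigned height = FreeImage_GetHeight( hImage );

    for ( unsigned y = 0; y < height; ++y )
    {
        // Scan lines may be padded, so fetch each row instead of walking GetBits() linearly.
        BYTE* line = FreeImage_GetScanLine( hImage, y );
        for ( unsigned x = 0; x < width; ++x )
        {
            BYTE index = line[x];                 // palette index of this pixel
            RGBQUAD c  = filePalette[index];      // c.rgbRed / c.rgbGreen / c.rgbBlue
            // ... map c into the custom Color palette / pData here ...
        }
    }
}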

Qt and image processing [closed]

How can someone use Qt for image processing? Are there libraries or plugins for Qt for this purpose?
Thanks.
Qt is primarily meant for developing graphical user interfaces (GUIs), but it comes with a number of supplementary classes, including some dedicated to image handling. If you want to get serious, though, I would recommend a dedicated library such as OpenCV.
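For reference, QImage alone already covers simple per-pixel work; a tiny, illustrative example (Qt 4/5 API) that converts an image to grayscale:

#include <QImage>
#include <QColor>

// Illustrative only: per-pixel grayscale using nothing but QImage.
QImage toGrayscale( const QImage& src )
{
    QImage out = src.convertToFormat( QImage::Format_RGB32 );
    for ( int y = 0; y < out.height(); ++y )
        for ( int x = 0; x < out.width(); ++x )
        {
            int g = qGray( out.pixel( x, y ) );   // luminance of this pixel
            out.setPixel( x, y, qRgb( g, g, g ) );
        }
    return out;
}

(pixel()/setPixel() are slow; for real workloads scanLine() or a dedicated library is the better fit.)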
I used Qt for the GUI plus LTIlib for the image processing.
Qt itself won't be very helpful for processing images, but there are a couple of independent libraries that you can pick to best fit your needs. Bear in mind that Qt is essentially meant to be a GUI framework. It is very good, if not the best, at making windows, buttons, tree views, etc., but don't expect it to be so comprehensive that it can do everything.
Please let us know more precisely what you mean by "image processing".
It is a vast field with hundreds or thousands of possible goals and approaches...
EDIT:
Here is a small excerpt of what I used to do with Qt+LTI.
See the LTI documentation for all available operators. I used to do convolutions, self-correlations, basic erosion/dilation, and a lot more.
#include <ltiDilation.h>
#include <ltiErosion.h>
#include <ltiBinaryKernels.h>
#include <ltiFastRelabeling.h>
#include <ltiLabelAdjacencyMap.h>
void QLTIDialog::init()
{
viewLayout = new QGridLayout( frmView, 1, 1, 4, 4, "viewLayout" );
view= new QImageLabel( frmView, "view" );
viewLayout->addWidget( view, 0, 0 );
frmView->setUpdatesEnabled( false );
view->image( &qimg );
}
void QLTIDialog::btnOpen_clicked()
{
    QString fn= QFileDialog::getOpenFileName(
        "",
        tr( "All files (*.*)" ),
        this,
        tr( "Open image" ),
        tr( "Select image file" ) );
    if ( !fn.isEmpty( ) )
    {
        if ( !qimg.load( fn ) )
        {
            QMessageBox::critical( this, tr( "Fatal error" ),
                QString( tr( "Unable to open %1" ) ).arg( fn ),
                tr( "Exit" ) );
            return;
        }
        view->update( );
        setCaption( fn );
    }
}
void QLTIDialog::btnProcess_clicked()
{
lti::image img;
lti::channel8 tmp0,
h, s, v;
// Taking QImage data, as in the wiki.
img.useExternData( qimg.width( ), qimg.height( ), ( lti::rgbPixel * )qimg.bits( ) );
// Converting to HSV gives-me best results, but it can be left out.
lti::splitImageToHSV hsv;
hsv.apply( img, h, s, v );
// I do some manipulation over the channels to achieve my objects positions.
lti::maskFunctor< lti::channel8::value_type > masker;
masker.invert( v, tmp0 );
masker.algebraicSum( s, tmp0 );
// Show the resulting processed image (ilustrative)...
QLTIDialog *dh= new QLTIDialog;
dh->showImage( tmp0 );
// Apply relabeling (example). Any other operator can be used.
lti::fastRelabeling::parameters flPar;
flPar.sortSize= true;
flPar.minimumObjectSize= 25;
flPar.fourNeighborhood= true;
flPar.minThreshold= 40;
lti::fastRelabeling fr( flPar );
fr.apply( tmp0 );
lti::image imgLam;
lti::labelAdjacencyMap lam;
lam.apply( tmp0, imgLam );
// By hand copy to QImage.
lti::image::iterator iit= imgLam.begin( );
lti::rgbPixel *pix= ( lti::rgbPixel * )qimg.bits( );
for ( ; iit != imgLam.end( ); ++iit, ++pix )
*pix= *iit;
view->update( );
}
void QLTIDialog::showImage( lti::image &img )
{
qimg= QImage( reinterpret_cast< uchar * >( &( *img.begin( ) ) ),
img.rows( ), img.columns( ), 32, ( QRgb * )NULL,
0, QImage::LittleEndian ).copy( );
QDialog::show( );
}
void QLTIDialog::showImage( lti::channel8 &ch )
{
lti::image img;
img.castFrom( ch );
qimg= QImage( reinterpret_cast< uchar * >( &( *img.begin( ) ) ),
img.rows( ), img.columns( ), 32, ( QRgb * )NULL,
0, QImage::LittleEndian ).copy( );
QDialog::show( );
}
EDIT Again:
I found another sample that may be more interesting to you...
lti::image img;
lti::channel8 chnl8( false, imgH, imgW ), h, s, v;
// Pass image data to LTI.
img.useExternData( imgH, imgW, ( lti::rgbPixel * )pixels );
// I got better results in HSV for my images.
lti::splitImageToHSV hsv;
hsv.apply( img, h, s, v );
// Segmentation.
lti::channel8::iterator it= chnl8.begin( );
lti::channel8::iterator hit= h.begin( ),
                        sit= s.begin( ),
                        vit= v.begin( );
for ( ; it != chnl8.end( ); ++it, ++hit, ++sit, ++vit )
{
    int tmp= *sit * 2;
    tmp-= *hit - 320 + *vit;
    *it= ( *hit > 40 && tmp > 460 ? 1 : 0 );
}
// Distinguish connected objects.
lti::imatrix objs;
std::vector< lti::geometricFeatureGroup0 > objF;
lti::geometricFeaturesFromMask::parameters gfPar;
gfPar.merge= true; // Join close objects.
gfPar.minimumDistance= lti::point( 24, 24 );
gfPar.minimumMergedObjectSize= 2; // Exclude small ones.
gfPar.nBest= 800; // Limit no. of objects.
lti::geometricFeaturesFromMask gf( gfPar );
gf.apply( chnl8, objs, objF );
points.clear( );
for( std::vector< lti::geometricFeatureGroup0 >::const_iterator gfg0= objF.begin( );
     gfg0 != objF.end( ); ++gfg0 )
    points.push_back( Point( gfg0->cog.x, gfg0->cog.y ) );
The rest is like the first example.
Hope it helps.
Image processing is a rather generic term. Have a look at VTK and ITK from Kitware. Also FreeMat (a MATLAB clone) is based on Qt. Qt is popular among quantitative scientists, so I expect there are quite a few Qt-based imaging libraries and products.
I use Qt for image processing. I use OpenCV, then I convert the OpenCV Mat into a QImage, and then I display it in a label on the UI.
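A minimal sketch of that Mat -> QImage -> QLabel route, assuming an 8-bit, 3-channel BGR Mat (the function name is just for illustration):

#include <QImage>
#include <QLabel>
#include <QPixmap>
#include <opencv2/opencv.hpp>

void showMatInLabel( const cv::Mat& bgr, QLabel* label )
{
    cv::Mat rgb;
    cv::cvtColor( bgr, rgb, cv::COLOR_BGR2RGB );          // QImage expects RGB order

    // copy() detaches the QImage from the Mat's buffer before the Mat goes out of scope.
    QImage img( rgb.data, rgb.cols, rgb.rows,
                static_cast<int>( rgb.step ), QImage::Format_RGB888 );
    label->setPixmap( QPixmap::fromImage( img.copy() ) );
}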
Thank you