We want to create an SDL surface by loading an image with SDL_Image and, if the dimensions exceed a limit, resize the surface.
We need this because on Raspbian SDL throws an error when creating a texture from a large surface ('Texture dimensions are limited to 2048x2048'). Whilst that's a very large image, we don't want users to be concerned about image size; we want to resize it for them. We haven't encountered this limit on Windows, but we're trying to develop the solution on Windows and are having issues resizing the surface.
Looking for a solution, there have been similar questions:
2008: not SDL2, custom blitting
2010: use SDL_gfx
2008: can't be done, use SDL_gfx
2015: use SDL_BlitScaled
2015: use SDL_RenderCopyEx
Is a custom blitter or SDL_gfx necessary with current SDL2 (those answers pre-date SDL2's 2013 release)? SDL_RenderCopyEx doesn't help, as you still need to create the texture first, which is where our problem occurs.
So we tried some of the available blitting functions, like SDL_BlitScaled. Below is a simple program that renders a 2500x2500 PNG with no scaling:
#include <SDL.h>
#include <SDL_image.h>
#include <sstream>
#include <string>

SDL_Texture * get_texture(
    SDL_Renderer * pRenderer,
    std::string image_filename) {
    SDL_Texture * result = NULL;
    SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
    if (pSurface == NULL) {
        printf("Error image load: %s\n", IMG_GetError());
    }
    else {
        SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
        if (pTexture == NULL) {
            printf("Error creating texture: %s\n", SDL_GetError());
        }
        else {
            SDL_SetTextureBlendMode(
                pTexture,
                SDL_BLENDMODE_BLEND);
            result = pTexture;
        }
        SDL_FreeSurface(pSurface);
        pSurface = NULL;
    }
    return result;
}

int main(int argc, char* args[]) {
    SDL_Window * pWindow = NULL;
    SDL_Renderer * pRenderer = NULL;

    // set up
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");
    SDL_Rect screenDimensions;
    screenDimensions.x = 0;
    screenDimensions.y = 0;
    screenDimensions.w = 640;
    screenDimensions.h = 480;
    pWindow = SDL_CreateWindow("Resize Test",
        SDL_WINDOWPOS_UNDEFINED,
        SDL_WINDOWPOS_UNDEFINED,
        screenDimensions.w,
        screenDimensions.h,
        SDL_WINDOW_SHOWN);
    pRenderer = SDL_CreateRenderer(pWindow,
        -1,
        SDL_RENDERER_ACCELERATED);
    IMG_Init(IMG_INIT_PNG);

    // render
    SDL_SetRenderDrawColor(
        pRenderer,
        0,
        0,
        0,
        0);
    SDL_RenderClear(pRenderer);
    SDL_Texture * pTexture = get_texture(
        pRenderer,
        "2500x2500.png");
    if (pTexture != NULL) {
        SDL_RenderCopy(
            pRenderer,
            pTexture,
            NULL,
            &screenDimensions);
        SDL_DestroyTexture(pTexture);
        pTexture = NULL;
    }
    SDL_RenderPresent(pRenderer);

    // wait for quit
    bool quit = false;
    while (!quit)
    {
        // poll input for quit
        SDL_Event inputEvent;
        while (SDL_PollEvent(&inputEvent) != 0) {
            if ((inputEvent.type == SDL_KEYDOWN) &&
                (inputEvent.key.keysym.sym == SDLK_q)) { // 113 == 'q'
                quit = true;
            }
        }
    }

    // tear down
    IMG_Quit();
    SDL_DestroyRenderer(pRenderer);
    pRenderer = NULL;
    SDL_DestroyWindow(pWindow);
    pWindow = NULL;
    SDL_Quit();
    return 0;
}
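As an aside, rather than assuming the 2048x2048 limit, the renderer can report its actual maximum texture size. A minimal sketch, reusing the pRenderer from the program above:

SDL_RendererInfo info;
if (SDL_GetRendererInfo(pRenderer, &info) == 0) {
    printf("Max texture size: %dx%d\n",
        info.max_texture_width, info.max_texture_height);
}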
Changing the get_texture function so it identifies a limit and tries to create a new surface:
SDL_Texture * get_texture(
    SDL_Renderer * pRenderer,
    std::string image_filename) {
    SDL_Texture * result = NULL;
    SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
    if (pSurface == NULL) {
        printf("Error image load: %s\n", IMG_GetError());
    }
    else {
        const int limit = 1024;
        int width = pSurface->w;
        int height = pSurface->h;
        if ((width > limit) ||
            (height > limit)) {
            SDL_Rect sourceDimensions;
            sourceDimensions.x = 0;
            sourceDimensions.y = 0;
            sourceDimensions.w = width;
            sourceDimensions.h = height;

            // scale to fit the limit, preserving aspect ratio
            float scale = (float)limit / (float)width;
            float scaleH = (float)limit / (float)height;
            if (scaleH < scale) {
                scale = scaleH;
            }
            SDL_Rect targetDimensions;
            targetDimensions.x = 0;
            targetDimensions.y = 0;
            targetDimensions.w = (int)(width * scale);
            targetDimensions.h = (int)(height * scale);

            // create a surface with the source's format at the target size
            SDL_Surface *pScaleSurface = SDL_CreateRGBSurface(
                pSurface->flags,
                targetDimensions.w,
                targetDimensions.h,
                pSurface->format->BitsPerPixel,
                pSurface->format->Rmask,
                pSurface->format->Gmask,
                pSurface->format->Bmask,
                pSurface->format->Amask);
            if (SDL_BlitScaled(pSurface, NULL, pScaleSurface, &targetDimensions) < 0) {
                printf("Error did not scale surface: %s\n", SDL_GetError());
                SDL_FreeSurface(pScaleSurface);
                pScaleSurface = NULL;
            }
            else {
                SDL_FreeSurface(pSurface);
                pSurface = pScaleSurface;
                width = pSurface->w;
                height = pSurface->h;
            }
        }
        SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
        if (pTexture == NULL) {
            printf("Error creating texture: %s\n", SDL_GetError());
        }
        else {
            SDL_SetTextureBlendMode(
                pTexture,
                SDL_BLENDMODE_BLEND);
            result = pTexture;
        }
        SDL_FreeSurface(pSurface);
        pSurface = NULL;
    }
    return result;
}
SDL_BlitScaled fails with the error 'Blit combination not supported'; other variations produce a similar error:
SDL_BlitScaled(pSurface, NULL, pScaleSurface, NULL)
SDL_BlitScaled(pSurface, &sourceDimensions, pScaleSurface, &targetDimensions)
SDL_LowerBlitScaled(pSurface, &sourceDimensions, pScaleSurface, &targetDimensions) // from the wiki this is the call SDL_BlitScaled makes internally
Then we tried a non-scaled blit, which didn't throw an error but just shows white (not the clear colour or any colour from the image):
SDL_BlitSurface(pSurface, &targetDimensions, pScaleSurface, &targetDimensions)
With those blitting functions not working, we then tried the same image as a bitmap (just exporting the .png as .bmp), still loading the file with SDL_Image, and both functions work, with SDL_BlitScaled scaling as expected 😐
Not sure what's going wrong here (we expect and need support for major image file formats like .png), or whether this is even the recommended approach. Any help appreciated!
TL;DR The comment from @kelter pointed me in the right direction, and another Stack Overflow question gave me a solution: it works if you first Blit to a 32bpp surface and then BlitScaled to another 32bpp surface. That worked for 8 and 24 bit depth PNGs; 32 bit ones were invisible again, and another Stack Overflow question suggested filling the surface before blitting.
An updated get_texture function:
SDL_Texture * get_texture(
    SDL_Renderer * pRenderer,
    std::string image_filename) {
    SDL_Texture * result = NULL;
    SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
    if (pSurface == NULL) {
        printf("Error image load: %s\n", IMG_GetError());
    }
    else {
        const int limit = 1024;
        int width = pSurface->w;
        int height = pSurface->h;
        if ((width > limit) ||
            (height > limit)) {
            SDL_Rect sourceDimensions;
            sourceDimensions.x = 0;
            sourceDimensions.y = 0;
            sourceDimensions.w = width;
            sourceDimensions.h = height;

            // scale to fit the limit, preserving aspect ratio
            float scale = (float)limit / (float)width;
            float scaleH = (float)limit / (float)height;
            if (scaleH < scale) {
                scale = scaleH;
            }
            SDL_Rect targetDimensions;
            targetDimensions.x = 0;
            targetDimensions.y = 0;
            targetDimensions.w = (int)(width * scale);
            targetDimensions.h = (int)(height * scale);

            // create a 32 bits per pixel surface to Blit the image to first, before BlitScaled
            // https://stackoverflow.com/questions/33850453/sdl2-blit-scaled-from-a-palettized-8bpp-surface-gives-error-blit-combination/33944312
            SDL_Surface *p32BPPSurface = SDL_CreateRGBSurface(
                pSurface->flags,
                sourceDimensions.w,
                sourceDimensions.h,
                32,
                pSurface->format->Rmask,
                pSurface->format->Gmask,
                pSurface->format->Bmask,
                pSurface->format->Amask);
            if (SDL_BlitSurface(pSurface, NULL, p32BPPSurface, NULL) < 0) {
                printf("Error did not blit surface: %s\n", SDL_GetError());
            }
            else {
                // create another 32 bits per pixel surface at the desired scale
                SDL_Surface *pScaleSurface = SDL_CreateRGBSurface(
                    p32BPPSurface->flags,
                    targetDimensions.w,
                    targetDimensions.h,
                    p32BPPSurface->format->BitsPerPixel,
                    p32BPPSurface->format->Rmask,
                    p32BPPSurface->format->Gmask,
                    p32BPPSurface->format->Bmask,
                    p32BPPSurface->format->Amask);

                // 32 bit per pixel surfaces (loaded from the original file) won't scale down
                // with BlitScaled; the suggestion is to first fill the surface
                // (8 and 24 bit depth pngs did not require this)
                // https://stackoverflow.com/questions/20587999/sdl-blitscaled-doesnt-work
                SDL_FillRect(pScaleSurface, &targetDimensions, SDL_MapRGBA(pScaleSurface->format, 255, 0, 0, 255));

                if (SDL_BlitScaled(p32BPPSurface, NULL, pScaleSurface, NULL) < 0) {
                    printf("Error did not scale surface: %s\n", SDL_GetError());
                    SDL_FreeSurface(pScaleSurface);
                    pScaleSurface = NULL;
                }
                else {
                    SDL_FreeSurface(pSurface);
                    pSurface = pScaleSurface;
                    width = pSurface->w;
                    height = pSurface->h;
                }
            }
            SDL_FreeSurface(p32BPPSurface);
            p32BPPSurface = NULL;
        }

        SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
        if (pTexture == NULL) {
            printf("Error creating texture: %s\n", SDL_GetError());
        }
        else {
            SDL_SetTextureBlendMode(
                pTexture,
                SDL_BLENDMODE_BLEND);
            result = pTexture;
        }
        SDL_FreeSurface(pSurface);
        pSurface = NULL;
    }
    return result;
}
The comment from @kelter had me look more closely at the surface pixel formats: bitmaps were working at 24bpp, while PNGs were being loaded at 8bpp and not working. Changing the target surface to 24 or 32 bpp didn't help. We had generated the PNG with auto-detected bit depth; setting it to 8 or 24 and performing the BlitScaled on a surface with the same bits-per-pixel worked, although it didn't work for 32. Googling the blit conversion error led to the question and answer from @Petruza.
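For anyone debugging something similar, a quick way to see what IMG_Load actually returned is to print the surface's pixel format. A sketch, reusing the filename from the question:

SDL_Surface *pSurface = IMG_Load("2500x2500.png");
if (pSurface != NULL) {
    printf("%dx%d, %d bpp, %s\n",
        pSurface->w, pSurface->h,
        pSurface->format->BitsPerPixel,
        SDL_GetPixelFormatName(pSurface->format->format));
}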
Update I was a bit quick writing up this answer: the original solution handled .bmp and 8 and 24 bit PNGs, but 32 bit PNGs weren't rendering. @Retired Ninja's answer to another question about BlitScaled suggested filling the surface before calling the function, and that sorts it. There's another question about setting alpha on new surfaces that may be relevant (particularly if you need transparency), but filling with a solid colour is enough for me... for now.
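An untested variant of the same idea, which skips the manual mask bookkeeping, would be to let SDL convert the loaded surface to a known 32bpp format before scaling (this assumes SDL 2.0.5+ for SDL_PIXELFORMAT_RGBA32 and SDL_CreateRGBSurfaceWithFormat):

// convert whatever IMG_Load returned (palettized 8bpp, 24bpp, ...) to RGBA32,
// then scale between two surfaces of that same format
SDL_Surface *p32 = SDL_ConvertSurfaceFormat(pSurface, SDL_PIXELFORMAT_RGBA32, 0);
SDL_Surface *pScaled = SDL_CreateRGBSurfaceWithFormat(
    0, targetDimensions.w, targetDimensions.h, 32, SDL_PIXELFORMAT_RGBA32);
SDL_SetSurfaceBlendMode(p32, SDL_BLENDMODE_NONE); // copy alpha rather than composite it
SDL_BlitScaled(p32, NULL, pScaled, NULL);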
I wrote a solution to draw GIFs, based on the Direct2D GIF sample:
// somewhere in render loop
IWICBitmapFrameDecode* frame = nullptr;
IWICFormatConverter* converter = nullptr;
gifDecoder->GetFrame(current_frame, &frame);
if (frame)
{
    d2dWICFactory->CreateFormatConverter(&converter);
    if (converter)
    {
        ID2D1Bitmap* temp = nullptr;
        converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
            WICBitmapDitherTypeNone, NULL, 0.f, WICBitmapPaletteType::WICBitmapPaletteTypeCustom);
        tar->CreateBitmapFromWicBitmap(converter, NULL, &temp);
        scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(1, 1));
        scale->SetInput(0, temp);
        target->DrawImage(scale);
        SafeRelease(&temp);
    }
}
SafeRelease(&frame);
SafeRelease(&converter);
Where gifDecoder is obtained this way:
d2dWICFactory->CreateDecoderFromFilename(pathToGif, NULL, GENERIC_READ, WICDecodeMetadataCacheOnLoad, &gifDecoder);
But in my program, with almost all GIFs, every frame except the first has horizontal transparent lines for some reason. Obviously, when I view the same GIF in a browser or another program, there are no such holes.
I tried changing the pixel format and the palette type, but the glitches are still there.
What am I doing wrong?
Basic steps:
On image load, in the case of a GIF, for caching purposes store not all frames as ready-to-draw ID2D1Bitmaps but only the image's decoder. Decode the frame, get its bitmap and draw it every time; at least the frame bitmaps and metadata (again, per frame) can be cached during the first animation loop.
For each GIF, create a compatible render target with the size of the GIF screen (obtained from metadata) for composing.
Compose the animation on render: get the metadata for the current frame -> if the current frame has disposal set (disposal == 2 in the code below), clear the compose target -> draw the current frame to the compose target (even if disposal is set) -> draw the compose target to the render target.
Note that frames may have different sizes and even offsets.
Approximate code:
ID2D1DeviceContext *renderTarget;

class GIF
{
    ID2D1BitmapRenderTarget *compose;
    IWICBitmapDecoder *decoder; // precaching all frames costs too much time; it is faster to obtain each frame on demand (maybe cache during the first loop)
    float naturalWidth;  // gif screen width
    float naturalHeight; // gif screen height
    int framesCount;
    int currentFrame;
    unsigned int lastFrameEpoch;
    int x;
    int y;
    int width;  // image width needed to be drawn
    int height; // image height
    float fw;   // scale factor width
    float fh;   // scale factor height

public:
    GIF(const wchar_t *path, int x, int y, int width, int height)
    {
        // Init: using WIC, obtain here the image's decoder, naturalWidth, naturalHeight,
        // framesCount and other optional stuff like loop information,
        // using an IWICMetadataQueryReader obtained from the whole image,
        // not from frame zero
        compose = nullptr;
        currentFrame = 0;
        lastFrameEpoch = epoch(); // implement your millisecond function
        Resize(width, height);
    }

    void Resize(int width, int height)
    {
        this->width = width;
        this->height = height;
        // calculate scale factors
        fw = width / naturalWidth;
        fh = height / naturalHeight;
        if (compose == nullptr)
        {
            renderTarget->CreateCompatibleRenderTarget(D2D1::SizeF(naturalWidth, naturalHeight), &compose);
            compose->Clear(D2D1::ColorF(0, 0, 0, 0));
        }
    }

    void Render()
    {
        IWICBitmapFrameDecode* frame = nullptr;
        decoder->GetFrame(currentFrame, &frame);
        IWICFormatConverter* converter = nullptr;
        d2dWICFactory->CreateFormatConverter(&converter);
        converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
            WICBitmapDitherTypeNone, NULL, 0.f, WICBitmapPaletteType::WICBitmapPaletteTypeCustom);

        IWICMetadataQueryReader* reader = nullptr;
        frame->GetMetadataQueryReader(&reader);

        PROPVARIANT propValue;
        PropVariantInit(&propValue);
        char disposal = 0;
        float l = 0, t = 0; // frame offsets (frames may have them)
        HRESULT hr = S_OK;

        // honour the frame delay before advancing
        reader->GetMetadataByName(L"/grctlext/Delay", &propValue);
        if (SUCCEEDED((propValue.vt == VT_UI2 ? S_OK : E_FAIL)))
        {
            UINT fr = 0;
            UINT frameDelay = 0;
            UIntMult(propValue.uiVal, 10, &fr); // the delay is stored in 10ms units
            frameDelay = fr;
            if (frameDelay < 16)
            {
                frameDelay = 16;
            }
            if (epoch() - lastFrameEpoch < frameDelay)
            {
                SafeRelease(&reader);
                SafeRelease(&frame);
                SafeRelease(&converter);
                return;
            }
        }

        if (SUCCEEDED(reader->GetMetadataByName(
            L"/grctlext/Disposal",
            &propValue)))
        {
            hr = (propValue.vt == VT_UI1) ? S_OK : E_FAIL;
            if (SUCCEEDED(hr))
            {
                disposal = propValue.bVal;
            }
        }
        PropVariantClear(&propValue);

        {
            hr = reader->GetMetadataByName(L"/imgdesc/Left", &propValue);
            if (SUCCEEDED(hr))
            {
                hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
                if (SUCCEEDED(hr))
                {
                    l = static_cast<FLOAT>(propValue.uiVal);
                }
                PropVariantClear(&propValue);
            }
        }
        {
            hr = reader->GetMetadataByName(L"/imgdesc/Top", &propValue);
            if (SUCCEEDED(hr))
            {
                hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
                if (SUCCEEDED(hr))
                {
                    t = static_cast<FLOAT>(propValue.uiVal);
                }
                PropVariantClear(&propValue);
            }
        }

        ID2D1Bitmap* temp = nullptr;
        renderTarget->CreateBitmapFromWicBitmap(converter, NULL, &temp);

        // compose the frame onto the gif screen
        compose->BeginDraw();
        if (disposal == 2)
            compose->Clear(D2D1::ColorF(0, 0, 0, 0));
        auto ss = temp->GetSize();
        compose->DrawBitmap(temp, D2D1::RectF(l, t, l + ss.width, t + ss.height));
        compose->EndDraw();

        ID2D1Bitmap* composition = nullptr;
        compose->GetBitmap(&composition);

        // you need to create a scale effect to have antialiased resizing
        scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(fw, fh));
        scale->SetInput(0, composition);
        auto p = D2D1::Point2F(x, y);
        renderTarget->DrawImage(scale, p);

        SafeRelease(&temp);
        SafeRelease(&composition);
        SafeRelease(&reader);
        SafeRelease(&frame);
        SafeRelease(&converter);
    }
} *gif[100] = { nullptr };

inline void Render()
{
    renderTarget->BeginDraw();
    for (int i = 0; i < 100 && gif[i] != nullptr; i++)
        gif[i]->Render();
    renderTarget->EndDraw();
}
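The scale object used by Render() is never declared in the snippet; presumably it is the built-in Direct2D scale effect. A sketch of the one-time setup that would imply (d2d1effects.h, assuming the same renderTarget):

ID2D1Effect *scale = nullptr;
renderTarget->CreateEffect(CLSID_D2D1Scale, &scale);
scale->SetValue(D2D1_SCALE_PROP_INTERPOLATION_MODE,
    D2D1_SCALE_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC);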
SDL_ttf 2.20.0 (with HarfBuzz enabled) introduced TTF_SetFontDirection() and TTF_SetFontScriptName() "for additional control over fonts using HarfBuzz".
I am trying to render some Arabic text, and I have set the font direction to RTL and the script name to "arSA\0", but the glyphs aren't joining up correctly.
Both TTF_SetFontDirection(ttf_font, TTF_DIRECTION_RTL) and TTF_SetFontScriptName(ttf_font, "arSA\0") return 0, which indicates success.
This is how I expected the text to appear:
And this is how it is appearing instead:
Can anybody tell me what I'm doing wrong?
Edit: Minimal reproducible example:
(Adapted from https://github.com/aminosbh/sdl2-ttf-sample/blob/master/src/main.c)
#include <SDL.h>
#include <SDL_ttf.h>
#include <string>
#include <windows.h> // for WideCharToMultiByte

// assumed definitions (the original sample provides its own; adjust to your setup)
#define SCREEN_WIDTH 800
#define SCREEN_HEIGHT 600
#define FONT_PATH "font.ttf" // hypothetical path to an Arabic-capable font
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(int argc, char* argv[])
{
    // Unused argc, argv
    (void)argc;
    (void)argv;

    // Initialize SDL2
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
    {
        printf("SDL2 could not be initialized!\n"
               "SDL2 Error: %s\n", SDL_GetError());
        return 0;
    }

    // Initialize SDL2_ttf
    TTF_Init();

    // Create window
    SDL_Window* window = SDL_CreateWindow("SDL2_ttf sample",
                                          SDL_WINDOWPOS_UNDEFINED,
                                          SDL_WINDOWPOS_UNDEFINED,
                                          SCREEN_WIDTH, SCREEN_HEIGHT,
                                          SDL_WINDOW_SHOWN);
    if (!window)
    {
        printf("Window could not be created!\n"
               "SDL_Error: %s\n", SDL_GetError());
    }
    else
    {
        // Create renderer
        SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
        if (!renderer)
        {
            printf("Renderer could not be created!\n"
                   "SDL_Error: %s\n", SDL_GetError());
        }
        else
        {
            // Declare rect of square
            SDL_Rect squareRect;

            // Square dimensions: Half of the min(SCREEN_WIDTH, SCREEN_HEIGHT)
            squareRect.w = MIN(SCREEN_WIDTH, SCREEN_HEIGHT) / 2;
            squareRect.h = MIN(SCREEN_WIDTH, SCREEN_HEIGHT) / 2;

            // Square position: In the middle of the screen
            squareRect.x = SCREEN_WIDTH / 2 - squareRect.w / 2;
            squareRect.y = SCREEN_HEIGHT / 2 - squareRect.h / 2;

            TTF_Font* font = TTF_OpenFont(FONT_PATH, 40);
            if (!font) {
                printf("Unable to load font: '%s'!\n"
                       "SDL2_ttf Error: %s\n", FONT_PATH, TTF_GetError());
                return 0;
            }

            const auto fontDirectionSuccess = TTF_SetFontDirection(font, TTF_DIRECTION_RTL);
            const auto fontScriptNameSuccess = TTF_SetFontScriptName(font, "arSA\0");

            SDL_Color textColor = { 0x00, 0x00, 0x00, 0xFF };
            SDL_Color textBackgroundColor = { 0xFF, 0xFF, 0xFF, 0xFF };

            SDL_Texture* text = NULL;
            SDL_Rect textRect;

            // Arabic text to display
            const wchar_t * wstr = L"كسول الزنجبيل القط";

            // Convert unicode to multibyte (UTF-8)
            int wstr_len = (int)wcslen(wstr);
            int num_chars = WideCharToMultiByte(CP_UTF8, 0, wstr, wstr_len, NULL, 0, NULL, NULL);
            CHAR* strTo = (CHAR*)malloc((num_chars + 1) * sizeof(CHAR));
            if (strTo)
            {
                WideCharToMultiByte(CP_UTF8, 0, wstr, wstr_len, strTo, num_chars, NULL, NULL);
                strTo[num_chars] = '\0';
            }

            // Render text to surface
            SDL_Surface* textSurface = TTF_RenderUTF8_Blended(font, strTo, textColor);
            if (!textSurface) {
                printf("Unable to render text surface!\n"
                       "SDL2_ttf Error: %s\n", TTF_GetError());
            }
            else {
                // Create texture from surface pixels
                text = SDL_CreateTextureFromSurface(renderer, textSurface);
                if (!text) {
                    printf("Unable to create texture from rendered text!\n"
                           "SDL2 Error: %s\n", SDL_GetError());
                    return 0;
                }

                // Get text dimensions
                textRect.w = textSurface->w;
                textRect.h = textSurface->h;
                SDL_FreeSurface(textSurface);
            }

            textRect.x = (SCREEN_WIDTH - textRect.w) / 2;
            textRect.y = squareRect.y - textRect.h - 10;

            // Event loop exit flag
            bool quit = false;

            // Event loop
            while (!quit)
            {
                SDL_Event e;

                // Wait indefinitely for the next available event
                SDL_WaitEvent(&e);

                // User requests quit
                if (e.type == SDL_QUIT)
                {
                    quit = true;
                }

                // Initialize renderer color white for the background
                SDL_SetRenderDrawColor(renderer, 0xFF, 0xFF, 0xFF, 0xFF);

                // Clear screen
                SDL_RenderClear(renderer);

                // Draw text
                SDL_RenderCopy(renderer, text, NULL, &textRect);

                // Update screen
                SDL_RenderPresent(renderer);
            }

            // Destroy renderer
            SDL_DestroyRenderer(renderer);
        }

        // Destroy window
        SDL_DestroyWindow(window);
    }

    // Quit SDL2_ttf
    TTF_Quit();

    // Quit SDL
    SDL_Quit();

    return 0;
}
Edit 2:
I've replaced
TTF_SetFontDirection(font, TTF_DIRECTION_RTL);
TTF_SetFontScriptName(font, "arSA\0");
with the deprecated:
TTF_SetDirection(HB_DIRECTION_RTL);
TTF_SetScript(HB_SCRIPT_ARABIC);
And everything works fine. I suspect "arSA\0" might be wrong, but the only documentation I can find says it should be a "null-terminated string of exactly 4 characters", with no indication or example of which 4 characters.
Instead of "arSA", you should use "Arab".
See https://github.com/harfbuzz/harfbuzz/blob/7a219ca9f0f8140906cb7fd3b879b5bf5259badc/src/hb-common.h#L514
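Applied to the calls from the question, only the script tag changes ("Arab" is the ISO 15924 script tag HarfBuzz expects, not a locale code like "arSA"):

TTF_SetFontDirection(font, TTF_DIRECTION_RTL);
TTF_SetFontScriptName(font, "Arab");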
Credits to commenters "1.8e9-where's-my-share m" & "skink".
I'm adding the answer here so you can accept it as answered.
I have tried looking all over the place for an answer, but to no avail.
In debug mode it worked fine, but when I put it in release mode it gives me this:
Unhandled exception at 0x00DC1814 in sdl project.exe: 0xC0000005: Access violation reading location 0x00000000.
My debug and release include and library settings are the same, and even my subsystems are the same. Here is my code; I have shortened it a bit:
SDL_Surface *loadimage();

struct picture
{
    int maxframe;
    SDL_Surface *surface = NULL;
    SDL_Rect rect;
    std::string filepath;
};

SDL_Surface *background = NULL;
SDL_Surface *backbuffer = NULL;
SDL_Surface *holder = NULL;
std::vector<picture *> veck;

int main(int argc, char* argu[]){
    background = SDL_LoadBMP("pics/bac.bmp");
    if (background == NULL){
        return false;
    }
    int height = background->h;
    int width = background->w;
    init_testprogram();
    backbuffer = SDL_SetVideoMode(width, height, 32, SDL_SWSURFACE);
    SDL_WM_SetCaption("Yamada's first window", NULL);

    // this was here to test if i could format a surface more than once in
    // the same format, kept in just in case
    holder = SDL_DisplayFormat(background);
    background = SDL_DisplayFormat(holder);
    SDL_FreeSurface(holder);

    // this is where i get the error
    veck[0]->surface = loadimage();
    veck[0]->rect.w = veck[0]->surface->w;
    veck[0]->rect.h = veck[0]->surface->h;

    // if commented out, this is where i get my second error
    veck.push_back(new picture);
    veck[1]->rect.x = 39;
    veck[1]->rect.y = 49;
    veck[1]->surface = veck[0]->surface;
    veck[1]->rect.w = veck[0]->surface->w;
    veck[1]->rect.h = veck[0]->surface->h;
    veck[0]->rect.x = 500;
    veck[0]->rect.y = 200;

    // printing to screen
    TTF_Font *font = NULL;
    Mix_Chunk *sound = NULL;
    picture *picture1;

    // if commented out again, this is where i get my third error, in sound
    sound = Mix_LoadWAV("sound/walking in grass.wav");
    font = TTF_OpenFont("fonts/CaviarDreams.ttf", 100);

    while (programisrunning()){
        // do SDL stuff here
    }

    SDL_Delay(3000);
    SDL_Quit();
    TTF_Quit();
    Mix_CloseAudio();
    int t;
    std::cin >> t;
    return 0;
}

///// definitions
SDL_Surface *loadimage(){
    veck.push_back(new picture);
    SDL_Surface* rawimage = NULL;
    SDL_Surface* processedimage = NULL;
    veck[0]->filepath = "pics/walk 3.png";
    rawimage = IMG_Load(veck[0]->filepath.c_str());
    if (rawimage == NULL){
        errorreport("image 'walk 3.png' failed to load\n");
        return false;
    }
    processedimage = SDL_DisplayFormat(rawimage);
    SDL_FreeSurface(rawimage);
    if (processedimage == NULL){
        errorreport("image 'walk 3.png' failed to process\n");
        return false;
    }
    Uint32 colorkey = SDL_MapRGB(processedimage->format, 255, 255, 255);
    SDL_SetColorKey(processedimage, SDL_SRCCOLORKEY, colorkey);
    // EDIT
    if (processedimage == NULL)
        errorreport("ERRRORORROROOR BUT WHY\n");
    return processedimage;
}
I know it's not the best way of doing things, but this is my test project for learning SDL. If I comment out the whole veck[0] thing, then I get the same error further down, at veck.push_back(), and if I comment out all the vecks, then I get the error at sound. I am using Visual Studio Express 2013, if that helps at all. I don't understand what I am doing wrong.
I did cut out the stuff I thought was pointless to include here.
You have undefined behavior in veck[0]->surface = loadimage();: veck[0] doesn't refer to an object until loadimage() has executed (its push_back creates the element), and there's no guarantee that loadimage() is called before veck[0] is evaluated.
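A minimal fix along those lines: sequence the call so it completes before the vector is indexed.

// force loadimage() (and its push_back) to finish first
SDL_Surface *loaded = loadimage();
veck[0]->surface = loaded; // veck[0] is now guaranteed to exist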
First of all, let me make some things clear:
My monitor is at 60 hertz
I cap my FPS to 60, and it seems to be working correctly
I have the double buffering flag active
I made a backbuffer myself, and made sure to draw to it, and afterwards to the screen
This problem happens both in fullscreen and windowed mode
This is my main function (it contains all the code):
SDL_Init(SDL_INIT_EVERYTHING);

SDL_Surface * backbuffer = NULL;
SDL_Surface * screen = NULL;
SDL_Surface * box = NULL;
SDL_Surface * background = NULL;
SDL_Rect * rect = new SDL_Rect();
double FPS = 60;
double next_time;
bool drawn = false;

rect->x = 0;
rect->y = 0;

screen = SDL_SetVideoMode(1920, 1080, 32, SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
if(screen == NULL) {
    return 0;
}

background = SDL_LoadBMP("background.bmp");
box = SDL_LoadBMP("box.bmp");
if((background == NULL) || (box == NULL)) {
    return 0;
}

background = SDL_DisplayFormat(background);
box = SDL_DisplayFormat(box);

backbuffer = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN,
                                  1920,
                                  1080,
                                  32,
                                  0,
                                  0,
                                  0,
                                  0);
if(backbuffer == NULL) {
    return 0;
}

next_time = (double)SDL_GetTicks() + (1000.0 / FPS);
while(true) {
    if(!drawn) {
        SDL_BlitSurface(background, NULL, backbuffer, NULL);
        SDL_BlitSurface(box, NULL, backbuffer, rect);
        rect->x += 3;
        drawn = true;
    }
    if((Uint32)next_time <= SDL_GetTicks()) {
        SDL_BlitSurface(backbuffer, NULL, screen, NULL);
        SDL_Flip(screen);
        next_time += 1000.0 / FPS;
        drawn = false;
    }
}

SDL_FreeSurface(backbuffer);
SDL_FreeSurface(background);
SDL_FreeSurface(box);
SDL_FreeSurface(screen);
SDL_Quit();
return 0;
I know this code isn't great; it was just a test to see why this happens to me whenever I write anything in SDL.
Please ignore the ugly code, and let me know if you have any idea what might be causing the image of the moving white square over the black background to show weird artifacts and to seem to tear / stutter.
If you need any more information, let me know and I'll update what's needed.
EDIT:
If I don't cap my FPS, it runs at 200-400 FPS, which probably means SDL_Flip isn't waiting for the screen refresh.
I don't know if the flags I pass are actually used.
I checked my flags, and it seems like I can't get the SDL_HWSURFACE and SDL_DOUBLEBUF flags. Might that be causing the problem?
SDL Double buffering on HW_SURFACE
This is the solution to my problem.
All I had to do was add this line:
SDL_putenv("SDL_VIDEODRIVER=directx");
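One assumption worth stating, since the answer doesn't show placement: the environment variable only affects driver selection if it is set before SDL initializes the video subsystem.

// select the DirectX backend before SDL picks a video driver
SDL_putenv("SDL_VIDEODRIVER=directx");
SDL_Init(SDL_INIT_EVERYTHING);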
Thanks for the comments, they helped me find the solution. :D
I guess I should check SDL 2.0 out too...
I've been following the Lazy Foo SDL tutorials and have already hit a roadblock in lesson 2. My code is exactly what he wants, yet I keep getting the same error whenever I try to blit the images:
Unhandled exception at 0x68126030 in SDLtest.exe: 0xC0000005: Access violation reading location 0x00000004.
Here is the code that keeps producing the error:
#include "SDL.h"
#include <string>
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
const int SCREEN_BPP = 32;
SDL_Surface *message = NULL;
SDL_Surface *background = NULL;
SDL_Surface *screen = NULL;
SDL_Surface *load_image(std::string filename)
{
SDL_Surface* loadedImage = NULL;
SDL_Surface* optimizedImage = NULL;
//load the image
loadedImage = SDL_LoadBMP( filename.c_str() );
if (loadedImage != NULL)
{
optimizedImage = SDL_DisplayFormat(loadedImage);
SDL_FreeSurface(loadedImage);
}
return optimizedImage;
}
void apply_surface(int x, int y, SDL_Surface* source, SDL_Surface* destination)
{
SDL_Rect offset;
offset.x = x;
offset.y = y;
//blit the surface
SDL_BlitSurface(source, NULL, destination, &offset);
}
int main(int argc, char* args[])
{
if (SDL_Init(SDL_INIT_EVERYTHING) == -1)
{
return 1;
}
screen = SDL_SetVideoMode(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BPP, SDL_SWSURFACE);
if (screen = NULL)
{
return 1;
}
SDL_WM_SetCaption("Hello World",NULL);
//loading images
message = load_image("hello.bmp");
background = load_image("background.bmp");
//image blitting
apply_surface(0,0,background,screen);
apply_surface(320,0,background,screen);
apply_surface(0,240,background,screen);
apply_surface(320,240,background,screen);
apply_surface(180,140,message,screen);
if (SDL_Flip(screen) == -1)
{
return 1;
}
SDL_Delay(2000);
SDL_FreeSurface(message);
SDL_FreeSurface(background);
SDL_Quit();
return 0;
}
The error Access violation reading location 0x00000004 says that you are dereferencing a pointer whose value is 4 instead of a real address.
The easiest way to track this down is to run under a debugger and see which line causes the problem. Then you can backtrack to figure out where the pointer's value got messed up, and you may find an error like the one Bert pointed out.
Replace the line
if(screen = NULL)
with
if(screen == NULL)
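As a side note, with the comparison fixed the blits can still crash if either bitmap fails to load, since load_image returns NULL on failure; a guard like this sketch avoids that:

// hypothetical guard after the load_image calls
if (message == NULL || background == NULL)
{
    return 1; // a bitmap is missing; SDL_GetError() may say why
}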