This is driving me up the wall.
I've got a very simple SDL2 program.
It has an array of SDL_Texture pointers (declared with room for 15, though only 3 are used here).
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';          // produces "0.bmp", "1.bmp", "2.bmp"
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);       // surface no longer needed once the texture exists
}
This works, no errors.
In the main loop (a standard event loop waiting for SDL_QUIT, keystrokes, and a user event that an SDL_Timer pushes onto the event queue every second) I do the following for the timer-triggered event:
idx = (idx + 1) % 3;    // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
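(For context, the timer/user-event plumbing mentioned above looks roughly like this - a sketch with assumed names, using SDL_AddTimer and SDL_PushEvent:)

Uint32 timer_cb(Uint32 interval, void *param)
{
    SDL_Event ev;
    SDL_zero(ev);
    ev.type = SDL_USEREVENT;    // picked up by the main event loop
    SDL_PushEvent(&ev);         // pushing an event is the thread-safe way to signal from a timer
    return interval;            // keep firing every second
}
// ... during setup, after SDL_Init(SDL_INIT_TIMER | ...):
SDL_AddTimer(1000, timer_cb, NULL);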
The rendering works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
The problem is structural:
- If I alternate the first 2 images, they are both fine.
- If I alternate the 2nd and 3rd images, the 3rd image doesn't show.
- If I use more than 3 images, images 3 and upwards show as black.
- Loading order doesn't matter; it starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name with SDL_SaveBMP() after loading it, to make sure it was loaded into memory OK. The new file is bit-for-bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry PI.
I have placed explicit error checks on every call that can return a result/error code (not shown in the samples above for brevity), but all of them report "no error".
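For reference, the checks on the loading calls look roughly like this (a sketch; SDL_GetError() returns the last error string):

surface = SDL_LoadBMP(filename);
if (!surface)
    printf("SDL_LoadBMP failed: %s\n", SDL_GetError());
myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
if (!myarray[i])
    printf("SDL_CreateTextureFromSurface failed: %s\n", SDL_GetError());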
I used the latest stable source release from www.libsdl.org and compiled as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on?
P.S.
Keyboard input doesn't seem to work on my Pi either, but I haven't delved into that yet.
Answering my own question, as I finally figured it out...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory split is 64 MB, which is not enough for large displays with double buffering. As suggested in the README, I increased this to 128 MB, and that solved the black-image problem.
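For reference, on Raspbian the GPU memory split is typically raised by editing /boot/config.txt (or via raspi-config) and rebooting; a minimal sketch:

# /boot/config.txt
gpu_mem=128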
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
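For anyone hitting the same thing, adding a user to the input group is a one-liner (replace myuser with the actual account name; it takes effect at the next login):

sudo usermod -a -G input myuser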
In short: Caught by my own (too) quick skimming of the documentation.
Related
I am using the OpenCV library to open and display multiple images, creating one window per image. To display all the windows at the same time, I call waitKey() only after the last image.
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
As can be seen from the code, my goal is to give the user 1 s to press any key; otherwise I want to destroy one of the windows (for the purposes of this question it can be either one). I want to achieve this using OpenCV's destroyWindow() function.
My entire code can be seen below:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2");
The goal of this snippet is that only "Window1" should remain displayed if 1 s goes by without the user pressing any key.
However, this does not happen: the end result is that neither window is destroyed.
I have tested the following code snippet, which results in both windows being closed:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window1");
cv::destroyWindow("Window2");
I get the same result when I use the destroyAllWindows() function (which makes sense).
My question now is, why can't I destroy only one of the windows?
Additional info:
Using Ubuntu 20.04.
OpenCV version is 4.2.
Working in C++
Changing the order of which window I want to destroy changes nothing.
I tried to replicate this and hit the same issue in Python on Ubuntu as well. If you are still stuck, a stopgap is to store the result of waitKey() in a variable to tell whether the user pressed a key: if it is -1, no key was pressed, and you can destroy all windows and re-show only the one you want to keep.
I have provided a sample solution in Python, which you shouldn't have any difficulty converting to C++.
import cv2

img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')

cv2.namedWindow('img1')
cv2.imshow('img1', img1)
cv2.namedWindow('img2')
cv2.imshow('img2', img2)

key = cv2.waitKey(5000)
if key == -1:
    # no key pressed: destroy everything, then re-show only the window to keep
    cv2.destroyAllWindows()
    cv2.imshow('img1', img1)
    cv2.waitKey(0)
else:
    # a key was pressed: destroy both, or keep showing both using cv2.waitKey(0)
    cv2.destroyAllWindows()
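For convenience, here is a rough C++ equivalent of the stopgap above (an untested sketch; same window names and 5 s timeout as the Python version):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img1 = cv::imread("img1.png");
    cv::Mat img2 = cv::imread("img2.png");

    cv::imshow("img1", img1);
    cv::imshow("img2", img2);

    int key = cv::waitKey(5000);
    if (key == -1) {
        // no key pressed: destroy everything, then re-show only the window to keep
        cv::destroyAllWindows();
        cv::imshow("img1", img1);
        cv::waitKey(0);
    } else {
        // a key was pressed: destroy both, or keep showing both with cv::waitKey(0)
        cv::destroyAllWindows();
    }
    return 0;
}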
I reached a solution by calling cv::startWindowThread() before creating each of the windows.
An important thing to note is that I built OpenCV with the GTK option, so my solution is tested only on GTK, not on other backends.
startWindowThread() is used only with GTK, as noted here: https://github.com/opencv/opencv/issues/7562 - for other backends the function is empty.
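Applied to the snippet from the question, the fix looks roughly like this (a sketch based on the description above; per the linked issue, startWindowThread() is a no-op on non-GTK backends):

cv::startWindowThread();        // start the GTK window event thread
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::startWindowThread();        // called before each window, as described above
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2");   // now takes effect while "Window1" stays open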
This is a direct follow-up to the last question I asked, which was aptly named "C++: OpenCV2.3.1(!) access to webcam parameters" and where I was told to install OpenCV2.4.11 instead (OpenCV3.0 did not work)... which I did. And yes, most of this text is an exact copy&paste of the last thread, since my problem hasn't actually vanished...
Again, I've searched here and on other forums (Google, OpenCV, etc.), looked at the code of the videoInput library, the different header files, and especially OpenCV's highgui_c.h, and still seem unable to find an answer to this very simple question:
How do I change exposure and gain (or, to be general, any webcam property) in my Logitech C310 webcam with OpenCV2.4.11 the same way I was able to with OpenCV2.1.0? (using Win7 64-bit, Visual Studio 10)
EDIT: This has been solved. I do not know how but when I tested my code this morning it was able to report and set the exposure using VideoCapture and the set/get method.
There's the nice and easy VideoCapture get and set method, I know, similar to videoInput's [Set/Get]VideoSetting[Camera/Filter] functions. Here's my short example in OpenCV2.4.11 that doesn't work:
EDIT: It does work now. What I don't understand is that the values of several properties are reported as -8.58993E+008 (namely hue, monochrome, gamma, temperature, zoom, focus, pan, tilt, roll and iris) and that property 6 (fourcc) is -4.66163E+008. I know I don't have these features on my webcam, but all other unimplemented features report -1.
#include <windows.h>
#include <sstream>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char* CmdArgs, int CmdShow)
{
    int device0 = 0;
    VideoCapture VC(device0);
    if (!VC.isOpened())             // check if we succeeded
        return -1;

    ostringstream oss;
    double CamProp;
    for (int i = -4; i < 27; i++) {
        CamProp = VC.get(i);        // query every property ID in this range
        Sleep(5);
        oss << "Item " << i << ": " << CamProp << "\n";
    }
    MessageBox(NULL, oss.str().c_str(), "Webcam Values", MB_OK);
    return 0;
}
It compiles, it runs, it accesses the webcam all right (and even shows a picture with imshow if I add that to the code), but it only opens a window saying this:
Item -4: 0
Item -3: 0
Item -2: 0
...
Item 2: 0
Item 3: 640
Item 4: 480
Item 5: 0
...
Item 25: 0
Item 26: 0
EDIT: See above, this works now. I get values for all supported parameters like exposure, gain, sharpness, brightness, contrast and so on. Perhaps I was still linking to the 2.3.1 libraries or whatever.
The point is: this was all perfectly settable with this camera under OpenCV 2.1.0 using videoInput. I had a running application doing its own lighting instead of using the Logitech functions (RightLight, Auto Exposure, Auto Whitebalance). Setting and getting the parameters has been integrated into OpenCV's highgui for quite a while now, but with a strongly reduced feature list (no querying of parameter ranges - Min/Max/Stepwidth - and no setting of auto exposure, RightLight and similar), and for some reason it's incompatible with my Logitech webcam. I can report the resolution but nothing else.
EDIT: I still miss the Min, Max, Step, Auto/Manual features of videoInput. I can set a value but I don't know whether it's allowed.
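One workaround for the missing range queries is to set a value and read it back to see whether the driver accepted it; a minimal sketch (property IDs from OpenCV 2.4's highgui; value ranges are driver-specific):

VideoCapture cam(0);
double wanted = -5.0;
cam.set(CV_CAP_PROP_EXPOSURE, wanted);
double actual = cam.get(CV_CAP_PROP_EXPOSURE);  // read back what the driver actually applied
if (actual != wanted) {
    // the driver clamped or rejected the requested value
}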
The videoInput code is now merged into OpenCV in the file cap_dshow.cpp, but I can't find a header file that declares the videoInput class, and simply using my old code doesn't work. So I have a cpp file containing all the functions I need, which I know did the job for me a while back, but which I can't access now. Any clues on how to do that? Has anyone accessed and changed camera parameters in OpenCV2.4.11 using the videoInput/DirectShow interface?
EDIT: Seems this now works, unlike in 2.3.1. No direct interaction with videoInput seems to be needed. However, it would be nice to have it for the aforementioned reasons.
There's also the funny problem that using e.g.
VideoCapture cam(0)
addresses exactly the same camera as
VideoCapture cam(1)
or
VideoCapture cam(any integer value)
which seems odd to me and hints in the same direction - that CV's VideoCapture does not work properly for me. A similar problem is described here, but I also tried the code with a Sleep(1000) after opening the capture - without success.
EDIT: This is also working correctly now. I get my webcam with (0) and an error with (1), which is absolutely OK.
I've used the following code to draw on an image using a wxMemoryDC.
To do so I used a wxPen and changed its settings as in the following code. The code compiles and runs perfectly in a Windows environment. On Ubuntu it draws the lines, but the pen size stays correct only for a very short time and then becomes very small (as shown in the image). It is not an error in the m_penSize variable, because it always prints the correct value. Why does this behave so strangely on Ubuntu when it works correctly on Windows?
(m_graphics is the wxMemoryDC here)
if (x < m_backgroundImage.GetWidth() && y < m_backgroundImage.GetHeight()) {
    m_graphics.SelectObject(m_maskImage);
    wxPen* pen;
    if (m_isDrawing) {
        pen = wxThePenList->FindOrCreatePen(*wxRED, m_penSize);
        printf("Pen size is %d", m_penSize);
    }
    else {
        pen = wxThePenList->FindOrCreatePen(*wxBLACK, m_penSize);
    }
    if (m_pentype != Circle) {
        pen->SetCap(wxCAP_PROJECTING);
    }
    m_graphics.SetPen(*pen);
    m_graphics.DrawLine(m_lastX, m_lastY, x, y);
    m_graphics.SelectObject(wxNullBitmap);
}
In Windows it is shown correctly.
In Linux the pen size changes unexpectedly.
Your help is greatly appreciated.
If the same code behaves differently in wxMSW and wxGTK, then it's probably a bug in wxWidgets itself. However, to fix it, it needs to be reproduced in some simple-to-test way, ideally by making the smallest possible change to the wxWidgets drawing sample and opening a ticket with this change attached as a patch.
To simplify the code as much as possible, I'd recommend:
- Get rid of wxThePenList and just create the pen directly (as sketched below). It's unlikely that the bug is here, but who knows.
- Check whether it's due to the SetCap() call; this is the most likely candidate IMHO.
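A minimal sketch of the first suggestion, creating the pen directly instead of going through wxThePenList (variable names taken from the question's code):

wxPen pen(m_isDrawing ? *wxRED : *wxBLACK, m_penSize);  // stack-allocated, no shared pen list
if (m_pentype != Circle) {
    pen.SetCap(wxCAP_PROJECTING);
}
m_graphics.SetPen(pen);
m_graphics.DrawLine(m_lastX, m_lastY, x, y);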
EDIT: Even though the problem still exists, I haven't been able to reproduce it frequently enough to examine it more closely. See more info at the end of the question.
I started to develop a game, and I am currently writing a basic library for it. I'm using the D programming language with SDL2 and OpenGL 3 (via Derelict3 bindings), on Linux Mint 13 (Maya). The compiler is DMD64 D Compiler v2.067.1, and I rebuild the binary each time with 'rdmd'.
To render (changing) text, I create glyphs on-demand. The piece of code I use for this is:
class Font {
    ...
    Texture render(char c) {
        if (!(c in rendered)) rendered[c] = texture(to!string(c));
        return rendered[c];
    }

    Texture texture(string text) {
        SDL_Color color = {255, 255, 255, 255};
        auto bitmap = TTF_RenderText_Blended(
            font,
            std.string.toStringz(text),
            color
        );
        if (!bitmap) {
            throw new TTFError(
                "TTF_RenderText_Blended: " ~
                to!string(TTF_GetError()) ~ ": '" ~ text ~ "'"
            );
        }
        auto texture = new Texture(bitmap);
        SDL_FreeSurface(bitmap);
        return texture;
    }
The problem is that this fails purely randomly. Sometimes it works without any problems. Interestingly, when it fails to render a glyph, it will fail to render that same glyph over and over again. Here is an example of catching the exception I throw:
...
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
...
(I'm printing the score to the screen; the other numbers show fine except those few.) The numbers TTF_RenderText_Blended fails to render vary from run to run and, as mentioned, from time to time it renders all the numbers.
One detail is that the static strings I render before entering the game loop have never failed to render, only the single letters I use for changing text.
I'm pretty much out of ideas and haven't found anything related to this problem by searching the web. Any ideas on where to look for solutions are very much appreciated.
CURRENT SITUATION: I updated the compiler to DMD 2.067.1 and the problem remains (compilers used so far: 2.066.1, 2.067.1). The whole - let's say project family - is on GitHub at the moment:
https://github.com/mkoskim/games
The text glyph rendering function is located in this file:
https://github.com/mkoskim/games/blob/master/engine/ext/font.d
...and it is used from here:
https://github.com/mkoskim/games/blob/master/engine/ext/gui/label.d
The problem occurs mainly/most frequently in the pacman game (although only very seldom just now):
https://github.com/mkoskim/games/tree/master/testbench/pacman
If you want to try it out, first read the (hopefully complete enough) installation instructions:
https://github.com/mkoskim/games/blob/master/INSTALL
The project is made for 64-bit Linux Mint Maya and is currently not very user friendly or portable from a build perspective. Pacman is the only demo that (hopefully) works without a game controller. After successful installation of the required libraries and tools, you can build it with this command:
games/testbench/pacman$ make default
I ran into the exact same issue, and for me it was fixed by keeping the SDL_RWops structure used to create the font (with TTF_OpenFontRW) alive for the whole lifetime of the TTF_Font created from it. I saw you're creating the font with TTF_OpenFontRW as well, so I assume this will fix it for you too. It looks like SDL_ttf relies on this being kept alive and reads freed memory otherwise.
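In C++ terms (the same idea should carry over to the D bindings), the ownership looks roughly like this - a sketch with assumed names:

struct LoadedFont {
    SDL_RWops *rw;      // must outlive the TTF_Font that reads from it
    TTF_Font  *font;
};

LoadedFont openFont(const char *path, int ptsize)
{
    LoadedFont f;
    f.rw   = SDL_RWFromFile(path, "rb");
    f.font = TTF_OpenFontRW(f.rw, 0, ptsize);   // freesrc = 0: we manage rw ourselves
    return f;
}

void closeFont(LoadedFont &f)
{
    TTF_CloseFont(f.font);   // close the font first ...
    SDL_RWclose(f.rw);       // ... then release the RWops it was reading from
}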
I know this question is a little bit outdated, but I may have had a similar problem.
I fixed it by simply calling SDL_DestroyTexture() every time before I used TTF_RenderText_Blended() :)
I have this short C++ program which takes snapshot images from a camera in a loop and displays them:
void GenericPGRTest::execute()
{
    // connect camera
    Camera *cam = Camera::Connect();
    // query resolution and create view window
    const Resolution res = cam->GetResolution();
    cv::namedWindow("Camera");
    int c = 0;
    // keep taking snapshots until escape hit
    while (c != 27)
    {
        const uchar *buf = cam->SnapshotMono();
        // create image from buffer and display it
        cv::Mat image(res.height, res.width, CV_8UC1, (void*)buf);
        cv::imshow("Camera", image);
        c = cv::waitKey(1000);
    }
}
This uses a class (Camera) for camera control that I created with the Point Grey SDK, plus functions from the OpenCV library to display the images. I'm not necessarily looking for answers relating to the usage of either of these libraries, but rather for some insight into how to debug a bizarre problem in general. The problem is that the application freezes (not crashes) on the cam->SnapshotMono() line. Of course, I ran through the function with a debugger. Here are its contents:
const uchar* Camera::SnapshotMono()
{
    cam_.StartCapture();
    // get a frame
    Image image;
    cam_.RetrieveBuffer(&image);
    cam_.StopCapture();
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}
Now, every time I step through the function in the debugger, everything works OK. But the first time I do a "step over" instead of "step into" on SnapshotMono(), bam, the program freezes. When I pause it then, I see that it's stuck inside SnapshotMono() at the RetrieveBuffer() line. I know it's a blocking call, so it can theoretically freeze (no idea why, but it's possible), but why does it block when running normally and not when being debugged? This is one of the weirdest kinds of behaviour under debugging I've seen so far. Any idea why this could happen?
For those familiar with FlyCapture, the code above doesn't break as is, but rather only when I use StartCapture() in callback mode, then terminate it with StopCapture() before it.
Compiled with MSVC2010, OpenCV 2.4.5 and PGR FlyCapture 2.4R10.
Wild guess ... but could it be that StartCapture already starts the process that ends up with the buffer in image, and if you step through you leave it some time before you get to RetrieveBuffer? That's not the case if you run it all at once ...
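If that's the cause, one possible restructuring (a sketch using only the calls already shown in the question; Begin/End are hypothetical helpers) is to start the capture once and keep it running between snapshots instead of starting and stopping per frame:

// start streaming once at startup, stop once at shutdown
void Camera::Begin() { cam_.StartCapture(); }
void Camera::End()   { cam_.StopCapture(); }

const uchar* Camera::SnapshotMono()
{
    Image image;
    cam_.RetrieveBuffer(&image);   // blocks until the next frame arrives
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}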