Cout + Float, no debug error, crash on release run - c++

I saw a post a couple of hours ago about a problem I ran into a few days ago, and I'm also looking for an explanation:
https://stackoverflow.com/questions/25281294/c-cout-float-fail-while-printf-do-the-job
I've encountered exactly the same behavior with cout, which didn't like float at all...
I have a separate threaded user interface, made with OpenGL, with a callback function attached to a button in my main thread.
class ObjectA // My class
{
private:
    float light;
    float surface;
    float height;
public:
    ObjectA();
    ~ObjectA();
    float r_light();
    float r_surface();
    float r_height();
    void render(int x, int y); // Not implemented when the bug occurred
};

ObjectA::ObjectA(void)
{
    light = 0;
    surface = 0;
    height = 0;
}

float ObjectA::r_light()
{
    return this->light;
}

void displayResult(ObjectA *a) // The callback function attached to the UserInterface button
{
    cout << a->r_light() << endl;
}
When I ran it, the program crashed hard and I had to kill the process manually...
The only workaround I found was to replace cout with printf, but I didn't really like that.
Does anyone know why I couldn't cout this value?

Got it: a friend of mine has had the same issue.
_setmode(_fileno(stdout), _O_TEXT);
And cout goes back to normal behavior. He told me something about a flaky VC++ compiler and an old OpenGL version (initializing the OpenGL world apparently messed with stdout's mode in some way, if I understood correctly).
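For reference, here is a minimal, self-contained sketch of that workaround, assuming a Windows/VC++ build (the fixStdout name is mine, not from the original post): restore stdout to text mode before streaming a float again.

#include <io.h>      // _setmode, _fileno
#include <fcntl.h>   // _O_TEXT
#include <cstdio>    // stdout
#include <iostream>

void fixStdout()
{
    // Put stdout back into text mode; after this, cout handles floats normally.
    _setmode(_fileno(stdout), _O_TEXT);
    std::cout << 1.5f << std::endl;
}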

Related

Timer takes longer to reach 10.f seconds if std::cout is commented

I have a weird problem with the value returned by my class called Timer (that uses std::chrono).
If I keep the std::cout commented out, the delta returned by timer.restart() seems to get a very low value (it takes 3 or 4 times longer for _delta to reach 10.f). I tried to display the value, but as I said above, uncommenting the std::cout makes the problem go away.
My timer does its job well in other parts of the application, so I don't think the problem is there.
void Party::gameOver(float delta)
{
    _delta += delta;
    // std::cout << _delta << std::endl; // if I uncomment this the problem is solved
    if (_delta > 10.0000f) {
        // ...
        _state = GameStatusType::Waiting;
        _delta = 0;
    }
}
This method is called here:
void Party::loop(void)
{
    Timer timer;
    while (!isFinished())
    {
        float delta = timer.restart(); // returns seconds
        switch (_state)
        {
        // ...
        case GameStatusType::GameOver:
            gameOver(delta);
            break;
        }
    }
}
The method "loop" is called in a thread like below:
void Party::run(void)
{
    _party = std::thread(&Party::loop, shared_from_this());
}
I don't know if this can help, but I execute this code on Visual Studio 2015 on Windows 10. If you need further information, just ask.
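For context, the Timer class itself was not posted; a minimal sketch of what a std::chrono-based restart() returning seconds presumably looks like (an assumption, not the asker's actual code) is:

#include <chrono>

class Timer {
    std::chrono::steady_clock::time_point _start = std::chrono::steady_clock::now();
public:
    // Returns the seconds elapsed since construction or the previous restart().
    float restart() {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<float> elapsed = now - _start;
        _start = now;
        return elapsed.count();
    }
};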
One possibility is that the Windows console is really slow; there have been many topics about this. You could try using overlapped I/O to write to the console and see if it improves things.
The problem was just that the loop was spinning too fast. I just added a Sleep.
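A minimal sketch of that fix, reusing the Party class from the question (the asker used the Windows Sleep() call; std::this_thread::sleep_for is the portable equivalent): yield a little time at the end of each iteration so the thread no longer spins flat out.

#include <chrono>
#include <thread>

void Party::loop(void)
{
    Timer timer;
    while (!isFinished())
    {
        float delta = timer.restart(); // returns seconds
        switch (_state)
        {
        // ...
        case GameStatusType::GameOver:
            gameOver(delta);
            break;
        }
        // Give the scheduler a breather instead of busy-spinning.
        std::this_thread::sleep_for(std::chrono::milliseconds(1)); // or Sleep(1) on Windows
    }
}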

My program checking for mouse click can be run just once

I have made this program in Turbo C++ in which, when the user clicks inside the square that appears on screen, the program should exit. The program works fine the first time I run it. But when I run it again, it exits as soon as the mouse is inside the square; it does not wait for the click. I think it has something to do with resetting the mouse.
#include<process.h>
#include<conio.h>
#include<graphics.h>
#include<dos.h>

union REGS in, out;

void main()
{
    int gdriver = DETECT, gmode;
    int xp, yp, cl = 0;
    int x, y;
    initgraph(&gdriver, &gmode, "C:\\Turboc3\\BGI");
    x = getmaxx()/2;
    y = getmaxy()/2;
    in.x.ax = 4;            /* INT 33h function 4: set mouse position to (CX, DX) */
    in.x.cx = 10;
    in.x.dx = 10;
    int86(51, &in, &out);   /* 51 decimal == 0x33 */
    in.x.ax = 1;            /* function 1: show mouse cursor */
    int86(51, &in, &out);
    setcolor(RED);
    rectangle((x-100), (y-100), x, y);
    in.x.ax = 3;            /* function 3: get position and button status */
    while(1)
    {
        int86(51, &in, &out);
        cl = out.x.bx;
        xp = out.x.cx;
        yp = out.x.dx;
        if(((xp>=x-100)&&(xp<=x))&&((yp>=y-100)&&(yp<=y)))
            if(cl==1)
            {
                cl = 0;
                exit(1);
            }
    }
}
P.S. I already know that Turbo C++ is an "ancient compiler" and I am well aware of the existence of other modern compilers, but I am forced to use this compiler.
OK, I have figured out the problem. When I start the program again, if I first click outside the square and then move towards it, instead of dragging the mouse straight into the square, the problem doesn't happen.
Basically, when the program starts for the 2nd time, the mouse starts with click=1 instead of click=0. I can't figure out how to fix this, though.
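A hedged sketch of one possible fix (untested on the asker's setup), using the same in/out REGS unions as the question: reset the mouse driver and do one throwaway status read before entering the polling loop, so any button state left over from the previous run is discarded.

in.x.ax = 0;            /* INT 33h function 0: reset mouse driver and get status */
int86(51, &in, &out);   /* 51 decimal == 0x33 */

in.x.ax = 3;            /* function 3: one dummy position/button read...         */
int86(51, &in, &out);   /* ...whose out.x.bx result is ignored, flushing old state */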
I've found this; I don't know if it will help. It depends on which OS you're running... or is that DOSBox? It uses BGI to set the graphics mode, which may not work if you run it from 64-bit Windows; it should work from DOSBox (at least, Turbo Pascal's version does). Curiously, that program does one dummy read of the mouse status after making the cursor visible, to flush the registers. Is that the gotcha you're hitting?
#include<stdio.h>     /* printf, sprintf */
#include<graphics.h>
#include<conio.h>
#include<dos.h>

union REGS i, o;

int initmouse()
{
    i.x.ax = 0;                 /* reset mouse driver */
    int86(0X33, &i, &o);
    return ( o.x.ax );
}

void showmouseptr()
{
    i.x.ax = 1;                 /* show mouse cursor */
    int86(0X33, &i, &o);
}

void getmousepos(int *button, int *x, int *y)
{
    i.x.ax = 3;                 /* get position and button status */
    int86(0X33, &i, &o);
    *button = o.x.bx;
    *x = o.x.cx;
    *y = o.x.dx;
}

int main()
{
    int gd = DETECT, gm, status, button, x, y;
    char array[50];
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    settextstyle(DEFAULT_FONT, 0, 2);
    status = initmouse();
    if ( status == 0 )
        printf("Mouse support not available.\n");
    else
    {
        showmouseptr();
        getmousepos(&button, &x, &y);   /* dummy read to flush any stale state */
        while(!kbhit())
        {
            getmousepos(&button, &x, &y);
            if( button == 1 )
            {
                button = -1;
                cleardevice();
                sprintf(array, "Left Button clicked x = %d y = %d", x, y);
                outtext(array);
            }
            else if( button == 2 )
            {
                button = -1;
                cleardevice();
                sprintf(array, "Right Button clicked x = %d y = %d", x, y);
                outtext(array);
            }
        }
    }
    getch();
    return 0;
}
You're doing what my boss jokingly called "computer necrophilia". Those old systems had all kinds of quirks. There were reasons why programmers of old were maniacal about initializing variables. You could run into the issue that if you declared a long int variable, assigned a long value to it, and then a short value, only the lower word would be set in the second case - all because the compiler wasn't "casting" short to long implicitly; it was just copying the binary image to the same address.
I faced the same problem recently, and the cause is DOSBox - more precisely, the Turbo C++ IDE running in DOSBox. Try exiting the IDE and running your compiled program from the command line; it will work fine. Or try a VirtualBox MS-DOS machine; it will work fine even from the IDE.

Animations producing intel_drm errors

I'm working on implementing animations within my model loader, which uses Assimp and C++/OpenGL for rendering. I've been following this tutorial extensively: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html. Suffice it to say that I did not follow the tutorial completely, as there were some bits I disagreed with code-wise, so I adapted it. Mind you, I don't use any of the maths components the author uses there; I used glm instead. At any rate, the problem is that sometimes my program runs and other times it doesn't: sometimes it crashes instantly on startup, and other times it simply runs as normal.
A few things to take into account:
Before animations/bone loading were added, the model loader worked completely fine and models loaded without any crash whatsoever;
Models with NO bones still load just fine; it only becomes a problem when models with bones are being loaded.
Please note that NOTHING from the bones is being rendered. I haven't even started allocating the bones to vertex attributes; not even the shaders are modified for this.
Everything is being run on a single thread; there is no multi-threading... yet.
So naturally I turned to the bit of code that actually loads the bones. I've debugged the application and found that the problems lie mostly around here:
Mesh* processMesh(uint meshIndex, aiMesh *mesh)
{
    vector<VertexBoneData> bones;
    bones.resize(mesh->mNumVertices);
    // .. getting other mesh data
    if (pAnimate)
    {
        for (uint i = 0; i < mesh->mNumBones; i++)
        {
            uint boneIndex = 0;
            string boneName(mesh->mBones[i]->mName.data);
            auto it = pBoneMap.find(boneName);
            if (it == pBoneMap.end())
            {
                boneIndex = pNumBones;
                ++pNumBones;
                BoneInfo bi;
                pBoneInfo.push_back(bi);
                auto tempMat = mesh->mBones[i]->mOffsetMatrix;
                pBoneInfo[boneIndex].boneOffset = to_glm_mat4(tempMat);
                pBoneMap[boneName] = boneIndex;
            }
            else boneIndex = pBoneMap[boneName];
            for (uint j = 0; j < mesh->mBones[i]->mNumWeights; j++)
            {
                uint vertexID = mesh->mBones[i]->mWeights[j].mVertexId;
                float weight = mesh->mBones[i]->mWeights[j].mWeight;
                bones.at(vertexID).addBoneData(boneIndex, weight);
            }
        }
    }
    // ... (building and returning the Mesh is elided in the question)
}
On the last line the author used the [] operator to access elements, but I decided to use .at() for range checking. The function to_glm_mat4 is defined thus:
glm::mat4 to_glm_mat4(const aiMatrix4x4 &m)
{
    glm::mat4 to;
    to[0][0] = m.a1; to[1][0] = m.a2;
    to[2][0] = m.a3; to[3][0] = m.a4;
    to[0][1] = m.b1; to[1][1] = m.b2;
    to[2][1] = m.b3; to[3][1] = m.b4;
    to[0][2] = m.c1; to[1][2] = m.c2;
    to[2][2] = m.c3; to[3][2] = m.c4;
    to[0][3] = m.d1; to[1][3] = m.d2;
    to[2][3] = m.d3; to[3][3] = m.d4;
    return to;
}
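As a side note, the same conversion can be written more compactly with glm's pointer helpers, assuming Assimp's ai_real is single-precision float (the default build); this is an equivalent sketch, not the tutorial's code. aiMatrix4x4 is row-major and glm::mat4 is column-major, so the conversion is just a transpose:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <assimp/matrix4x4.h>

glm::mat4 to_glm_mat4_alt(const aiMatrix4x4 &m)
{
    // make_mat4 reads 16 contiguous floats in column-major order, so transpose afterwards.
    return glm::transpose(glm::make_mat4(&m.a1));
}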
I also had to change VertexBoneData, since it used raw arrays, which I thought was flawed:
struct VertexBoneData
{
    vector<uint> boneIDs;
    vector<float> weights;

    VertexBoneData()
    {
        reset();
        boneIDs.resize(NUM_BONES_PER_VERTEX);
        weights.resize(NUM_BONES_PER_VERTEX);
    }

    void reset()
    {
        boneIDs.clear();
        weights.clear();
    }

    void addBoneData(unsigned int boneID, float weight)
    {
        for (uint i = 0; i < boneIDs.size(); i++)
        {
            if (weights.at(i) == 0.0) // SEG FAULT HERE
            {
                boneIDs.at(i) = boneID;
                weights.at(i) = weight;
                return;
            }
        }
        assert(0);
    }
};
Now, I'm not entirely sure what is causing the crash, but what baffles me most is that sometimes the program runs (implying that the code isn't necessarily the culprit). So I decided to do a debug-smashdown, which involved inspecting each bone (I skipped some; there are loads of bones!), and I found that AFTER all the bones have been loaded I would get this very strange error:
No source available for "drm_intel_bo_unreference() at 0x7fffec369ed9"
and sometimes I would get this error:
Error in '/home/.../: corrupted double-linked list (not small): 0x00000 etc ***
and sometimes I would get a seg fault from glm regarding a vec4 instantiation;
and sometimes... my program runs without ever crashing!
To be fair, implementing animations may just be harsh on my laptop, so maybe it's a CPU/GPU problem, as in it's unable to process so much data in one gulp, which results in this crash. My theory is that since it's unable to process that much data, the data never gets allocated into the vectors.
I'm not using any multi-threading whatsoever, but it has crossed my mind. I figure it may be the CPU being unable to process so much data, hence the hit-and-miss runs. I could implement threading so that the bone loading is done on another thread, or better, use a mutex, because I found that when I step through the application slowly in the debugger it runs fine, which makes sense to me because each task gets broken down into chunks; and that is roughly what I imagine a mutex does.
For the sake of argument (and please, no mockery), my technical specs:
Ubuntu 15.04 64-bit
Intel i5 dual-core
Intel HD 5500
Mesa 10.5.9 (OpenGL 3.3)
Programming on Eclipse Mars
I thus ask, what the hell is causing these intel_drm errors?
I've reproduced this issue and found it may have been a problem with the lack of multi-threading when it comes to loading bones. I decided to move the bone-loading code into its own function, as prescribed in the aforementioned tutorial. What I did next was:
if (pAnimate)
{
    std::thread t1([&] {
        loadBones(meshIndex, mesh, bones);
    });
    t1.join();
}
The lambda above has the [&] to indicate we're capturing everything by reference, ensuring no copies are created. To prevent any external forces from 'touching' the data within the loadBones(..) function, I've installed a mutex within the function, like so:
void ModelLoader::loadBones(uint meshIndex, const aiMesh *mesh, std::vector<VertexBoneData> &bones)
{
    std::mutex mut;
    std::lock_guard<std::mutex> lock(mut);
    // load bones
}
This is only a quick and dirty fix. It might not work for everyone, and there's no guarantee the program will run crash-free.
Here are some testing results:
Sans threading & mutex: program runs 0 out of 3 times in a row
With threading; sans mutex: program runs 2 out of 3 times in a row
With threading & mutex: program runs 3 out of 3 times in a row
If you're using Linux, remember to link against pthread as well as including <thread> and <mutex>. Suggestions on thread optimisation are welcome!

automated mouse clicks make screen go blank

I am writing a program that will do a few mouse clicks for me in a loop. I created an INPUT struct, set its type to INPUT_MOUSE to replicate the clicks, and used SendInput() to send the info. Everything compiles and could be called a "working" program, but I ran into a rather funny glitch. I wrote the program on my laptop (Windows Vista), tried it, and it worked fine. When I rewrote the exact same code and used it on my desktop (Windows 7), the screen goes black as soon as the automation part of the program starts, just like it does when the machine goes into sleep mode. The program keeps running in the background just fine, but it's kind of a pain that the automator blacks my screen out. What is going on here?
I am adding my code:
#include "stdafx.h"
#include <windows.h>
#include <iostream>
#include <string>
#include <time.h>
using namespace std;
void clicky(int x, int y)
{
// 5 sec wait
clock_t run;
run = clock()+5*CLOCKS_PER_SEC;
while (clock() < run) {}
//plug in cursor coords and click down and up
SetCursorPos(x,y);
INPUT mouse;
mouse.type = INPUT_MOUSE;
mouse.mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
SendInput(1,&mouse,sizeof(INPUT));
mouse.type = INPUT_MOUSE;
mouse.mi.dwFlags= MOUSEEVENTF_LEFTUP;
SendInput(1,&mouse,sizeof(INPUT));
}
void main()
{
int coords=0;
string h;
//find out how many clicks are needed
cout << "How many clicks will you need?";
cin >> coords;
//extra getline here without it when return is hit
//from entering the click amount it would also enter
//a cursor coordinate
getline(cin,h);
POINT p[21];
for (int count = 1;count<=coords;count++)
{
cout << "Place mouse cursor where you want a click and press return"<<endl;
//each time return is hit the cursor coordinates
//will be stored in the corresponding spot in
// the p array
string key = "r";
getline(cin,key);
GetCursorPos(&p[count]);
break;
}
string go;
cout << "Hit Return to initialize your click loop";
getline(cin,go);
while (true)
//infinite loop cycling through the users
//cursor coordinates and clicking
{
for(int click=1;click<=coords;click++)
{
int x = p[click].x;
int y = p[click].y;
clicky(x,y);
}
}
}
Try initializing the INPUT structure to all zeroes before calling SendInput(), like
INPUT i;
ZeroMemory(&i, sizeof(i));
In addition to that, make sure that the coordinates you specify are not too large.
I had the screen go blank (in fact, the screensaver kicked in) when I got either of these two wrong.
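Putting those two suggestions together, a minimal sketch of the corrected click helper (the clickAt name is mine; it mirrors the asker's clicky()) zero-initializes the INPUT structure so fields such as mouseData and dwExtraInfo don't hold garbage:

#include <windows.h>

void clickAt(int x, int y)
{
    SetCursorPos(x, y);

    INPUT mouse;
    ZeroMemory(&mouse, sizeof(mouse)); // no stray values left in unused fields
    mouse.type = INPUT_MOUSE;

    mouse.mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    SendInput(1, &mouse, sizeof(INPUT));

    mouse.mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(1, &mouse, sizeof(INPUT));
}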

DOS ASCII Animation Lagging without constant input, Turbo C compiled

Here's an oddity from the past!
I'm writing an ASCII Pong game for the command prompt (yes yes, old school) and I'm writing to video memory directly (address 0xB8000000), so I know I'm rendering quickly (as opposed to gotoxy-then-printf rendering).
My code works fine and compiles fine under Turbo C++ V1.01, BUT the animation lags... now hold on, hold on, there's a caveat! Under my super fast boosted turbo Dell Core 2 Duo this seems logical; however, when I hold a key on the keyboard the animation becomes as smooth as a newly compiled baby's bottom.
I thought maybe it was because I was slowing the computer down by overloading the keyboard buffer (wtf, really? come on...), but then I quickly smartened up and tried compiling with DJGPP and the Tiny C Compiler to test whether the results were the same. With the Tiny C Compiler I found I couldn't compile 'far' pointer types... still confused about that one, but I was able to compile with DJGPP and the animation ran smoothly!
I want to compile this and have it work under Turbo C++, but this problem has been plaguing me for the past 3 days with no resolution. Does anyone know why the constant calls to my rendering method (code below) lag in the command prompt when compiled with Turbo C++ but not with DJGPP? I don't know whether I'm compiling as debug or not; I don't even know how to check. I did convert the code to ASM and saw what looked like debugging data at the header of the source, so I don't know...
Any and all comments and help will be greatly appreciated!
Here is a quick example of what I'm up against, simple to compile so please check it out:
#include<stdio.h>
#include<conio.h>
#include<dos.h>
#include<time.h>
#include<stdlib.h>   /* malloc */

#define bX 80
#define bY 24
#define halfX bX/2
#define halfY bY/2
#define resolution bX*bY
#define LEFT 1
#define RIGHT 2

void GameLoop();
void render();
void clearBoard();
void printBoard();
void ballLogic();

typedef struct {
    int x, y;
} vertex;

vertex vertexWith(int x, int y) {
    vertex retVal;
    retVal.x = x;
    retVal.y = y;
    return retVal;
}

vertex vertexFrom(vertex from) {
    vertex retVal;
    retVal.x = from.x;
    retVal.y = from.y;
    return retVal;
}

int direction;
char far *Screen_base;
char *board;
vertex ballPos;

void main() {
    Screen_base = (char far*)0xB8000000;
    ballPos = vertexWith(halfX, halfY);
    direction = LEFT;
    board = (char *)malloc(resolution*sizeof(char));
    GameLoop();
}

void GameLoop() {
    char input;
    clrscr();
    clearBoard();
    do {
        if(kbhit())
            input = getch();
        render();
        ballLogic();
        delay(50);
    } while(input != 'p');
    clrscr();
}

void render() {
    clearBoard();
    board[ballPos.y*bX+ballPos.x] = 'X';
    printBoard();
}

void clearBoard() {
    int d;
    for(d=0; d<resolution; d++)
        board[d] = ' ';
}

void printBoard() {
    int d;
    char far *target = Screen_base+d;
    for(d=0; d<resolution; d++) {
        *target = board[d];
        *(target+1) = LIGHTGRAY;
        ++target;
        ++target;
    }
}

void ballLogic() {
    vertex newPos = vertexFrom(ballPos);
    if(direction == LEFT)
        newPos.x--;
    if(direction == RIGHT)
        newPos.x++;
    if(newPos.x == 0)
        direction = RIGHT;
    else if(newPos.x == bX)
        direction = LEFT;
    else
        ballPos = vertexFrom(newPos);
}
First, in the code:
void printBoard() {
    int d;
    char far *target = Screen_base+d; // <-- right there
    for(d=0;d<resolution;d++) {
you are using the variable d before it is initialized.
My assumption is that, if you are running this in a DOS window rather than booting into DOS and running it, kbhit has to do more work (indirectly, within the DOS box's provided environment) when there isn't already a keypress queued up.
This shouldn't affect your run time very much, but I suggest that, in the event that there is no keypress, you explicitly set input to some constant. Also, input should really be an int, not a char.
Other suggestions:
vertexFrom doesn't really do anything.
A = vertexFrom(B);
should be able to be replaced with:
A = B;
Your macro constants that have operators in them should have parentheses around them.
#define Foo x/2
should be:
#define Foo (x/2)
so that you never ever have to worry about operator precedence no matter what code surrounds uses of Foo.
On 16-bit x86 PCs there are actually 4 text display pages that can be switched between. If you can swap between 2 of those for your animation, your updates should appear to happen instantaneously. It's called double buffering: you have one buffer that acts as the current display buffer and one that is the working buffer, and when you are satisfied with the working buffer (and the time is right, if you are trying to update the screen at a certain rate), you swap them. I don't remember exactly how to do this, but the particulars shouldn't be too difficult to find. I'd suggest leaving the initial page alone and restoring it on exit, so that the program leaves the screen in roughly the state it started in. You could also use the other page to hold debug output, and then display that page while you hold down the space bar or some other key.
If you don't want to go that route and the 'X' is the only thing changing, then you could forgo clearing the whole screen and just clear the last location of the 'X', as sketched below.
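A rough sketch of that idea in the question's own style (lastPos is a new global; everything else comes from the question's code): write only the two changed cells directly to video memory instead of rewriting the whole board each frame.

vertex lastPos;   /* previous ball position, updated every frame */

void renderBallOnly() {
    char far *scr = Screen_base;
    scr[(lastPos.y*bX + lastPos.x)*2] = ' ';   /* erase the old ball cell (2 bytes per cell) */
    scr[(ballPos.y*bX + ballPos.x)*2] = 'X';   /* draw the ball at its new position          */
    lastPos = ballPos;
}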
Isn't the screen buffer an array of 2-byte units, one byte for the display character and the other for the attribute? I think so, so I would represent it as an array of:
struct screen_unit {
    char ch;
    unsigned char attr;
}; /* or reverse those if I've got them backwards */
This would make it less likely for you to make mistakes based on offsets.
I'd also probably read and write them to the buffer as 16-bit values rather than individual bytes, though this shouldn't make a big difference.
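For illustration, a hedged rewrite of the question's printBoard() on top of that struct (assuming the character byte comes first and the attribute byte second, as in standard colour text mode):

void printBoard(void) {
    int d;
    struct screen_unit far *target = (struct screen_unit far *)Screen_base;
    for (d = 0; d < resolution; d++) {
        target[d].ch   = board[d];     /* character cell      */
        target[d].attr = LIGHTGRAY;    /* attribute (colour)  */
    }
}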
I figured out why it wasn't rendering right away. The timer that I created is fine; the problem is that clock_t is only accurate to about 0.054547 seconds, so I could only render at roughly 18 fps. The way to fix this is to use a more accurate clock... which is a whole other story.