I'm building a browser plugin that draws pictures as a slideshow inside browser windows. However, the plugin I created only draws on the first plugin instance: if I open multiple instances of the plugin, it keeps drawing in the first plugin's window, overlapping each picture.
I'm using OpenGL to draw the picture from the URL.
Following is code which draws dummy OpenGL triangles in a loop using a thread:
FB::PluginWindowWin *pluginWindowWin = dynamic_cast<FB::PluginWindowWin*>(pluginWindow);
EnableOpenGL(pluginWindowWin->getHWND(), &hDC, &hRC);
SetFocus(pluginWindowWin->getHWND());
//FB::
static int fps = 1;
GLfloat rotate = 0;
static double start = 0, diff, wait;
wait = 1 / fps;
//return 0;
while (true)
{
    //lets check for keyboard input
    try
    {
        FB::Rect pos = pluginWindow->getWindowPosition();
        PAINTSTRUCT ps;
        if (pluginWindowWin){
            hDC = BeginPaint(pluginWindowWin->getHWND(), &ps);
            pos.right -= pos.left;
            pos.left = 0;
            pos.bottom -= pos.top;
            pos.top = 0;
            rotate += 0.1f;
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glPushMatrix();
            glRotatef(rotate, 0.0f, 1.0f, 0.0f);
            glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(0.0f, 1.0f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.87f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(-0.87f, -0.5f);
            glEnd();
            glBegin(GL_QUADS); // Draw A Quad
            glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, 0.5f, 0.0f);   // Top Left
            glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(0.5f, 0.5f, 0.0f);    // Top Right
            glColor3f(0.0f, 0.0f, 1.0f); glVertex3f(0.5f, -0.5f, 0.0f);   // Bottom Right
            glColor3f(0.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);  // Bottom Left
            glEnd(); // Done Drawing The Quad
            glPopMatrix();
            glRotatef(rotate, 0.0f, 1.0f, 0.0f);
            SwapBuffers(hDC);
        }
        //rtri+=0.1f;
        ::SetTextAlign(hDC, TA_CENTER | TA_BASELINE);
        LPCTSTR pszText = _T("FireBreath Plugin!\n:-)");
        ::TextOut(hDC, pos.left + (pos.right - pos.left) / 2, pos.top + (pos.bottom - pos.top) / 2, pszText, lstrlen(pszText));
        if (pluginWindowWin) {
            // Release the device context
            EndPaint(pluginWindowWin->getHWND(), &ps);
        }
    }
    catch (...)
    {
        return 0;
    }
    Sleep(10);
} //end of while run
Anything I'm doing wrong here?
From what you've told me in the comments, your primary issue is that you're starting with a flawed example. Remember that every instance of the plugin starts up in the same process; the example you're using is a simplified one which does not use good practices for plugins. Most specifically, it uses several global variables.
In addition to that, you are using a thread but don't seem to be taking any locks to make sure that you are completely thread-safe. You live in someone else's process; you don't own it -- the browser does. You need to be very careful with a lot of things.
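For illustration only, here is a hedged sketch of the direction to move in: keep the window handle and GL context as per-instance members instead of globals, and make that instance's context current before drawing, so two plugin instances can no longer end up rendering into the same window. The class and member names below are made up for the sketch, not FireBreath's API:

// Illustrative sketch: per-instance GL state instead of file-scope globals.
class SlideshowInstance
{
public:
    HWND  hWnd = nullptr;   // this instance's plugin window
    HDC   hDC  = nullptr;   // this instance's device context
    HGLRC hRC  = nullptr;   // this instance's GL rendering context

    void renderFrame()
    {
        // Bind *this* instance's context before issuing any GL calls;
        // otherwise every instance draws into whichever context was made
        // current last (typically the first instance's window).
        wglMakeCurrent(hDC, hRC);
        // ... glClear / draw / SwapBuffers(hDC) ...
        wglMakeCurrent(nullptr, nullptr);
    }
};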
Most likely your crash has to do with not shutting down cleanly, or perhaps with a race condition in your threading code. The best way to troubleshoot that is to attach a debugger and find out where it's crashing, rather than just running around in circles asking "Why?? why???!?" (exaggerating for effect, obviously). You'd be shocked how few people do that simple step -- attaching a debugger -- until I tell them to, but it should always be your first step in troubleshooting a crash.
Finally, it bears asking: do you realize that you are building this on a technology which won't be available in 6 months? Firefox is removing support for NPAPI at the end of the year. I expect ActiveX to work a bit longer than that, but Edge doesn't support it.
FireBreath 2 (in the 2.0 branch) is a major change from FireBreath 1, but it supports Chrome via native messaging and will support Firefox as well. There are many people trying to convince MS to add native messaging support to Edge, but we'll see how that goes. Feel free to follow that link and vote, since I suspect it would help you as well.
The thing is, you don't get SDL or SDL2 with native messaging; you'd have to use WebGL and do the drawing on the JavaScript side, then pull data over native messaging. Alternatively, you could look into NaCl, which does have some OpenGL / drawing support (maybe even SDL? Not sure), but it is sandboxed and may or may not have the networking facilities you need. Also, of course, it only works in Chrome.
Food for thought. Good luck.
Related
I want to test the spring constraint of Bullet Physics. So I created a static box hovering above the ground and a second, dynamic box hanging down from it. But activating the spring behavior does nothing! The box is indeed hanging freely; I know because it rotates freely. But it does not oscillate or anything.
btCollisionShape *boxShape = createBoxShape(0.2f, 0.2f, 0.2f);
btRigidBody *box1 = createStatic(boxShape);
btRigidBody *box2 = createDynamic(1.0f /*mass*/, boxShape);
box1->setWorldTransform(btTransform(btQuaternion::getIdentity(), { 0.0f, 2.0f, 1.0f }));
box2->setWorldTransform(btTransform(btQuaternion::getIdentity(), { 0.0f, 1.0f, 1.0f }));
btGeneric6DofSpring2Constraint *spring = new btGeneric6DofSpring2Constraint(
    *box1, *box2,
    btTransform(btQuaternion::getIdentity(), { 0.0f, -1.0f, 0.0f }),
    btTransform(btQuaternion::getIdentity(), { 0.0f, 0.0f, 0.0f })
);
// I thought maybe the linear movement was locked, but even these lines do not help.
// spring->setLinearUpperLimit(btVector3(0.0f, 0.1, 0.0f));
// spring->setLinearLowerLimit(btVector3(0.0f, -0.1, 0.0f));
// Enabling the spring behavior for the y-coordinate (index = 1)
spring->enableSpring(1, true);
spring->setStiffness(1, 0.01f);
spring->setDamping (1, 0.00f);
spring->setEquilibriumPoint();
What is wrong? I played a lot with the stiffness and damping parameters, but it changed nothing. Setting linear lower and upper limits makes the box movable in the y-direction, but it still does not oscillate. And yes, gravity is activated.
OK, I found a solution by checking out Bullet's provided example projects (I could have come up with that idea earlier). Three things I have learned:
The spring constraint will not violate the linear limits. The problem with my former approach was that the linear movement was either locked, or limited to too small a range for the assigned spring stiffness. Now there are no more limits (achieved by setting the lower limit above the upper one).
The stiffness was far too small, so the joined objects were acting as if they were freely movable inside the linear limits. You can check out the values in my code below; I got them from the example project.
There is a small difference in behavior between btGeneric6DofSpringConstraint and btGeneric6DofSpring2Constraint. The former seems to violate the non-spring axes less (the x- and z-axes in my case). The latter seems to apply stronger damping. But these are just first observations.
btGeneric6DofSpringConstraint *spring = new btGeneric6DofSpringConstraint(
    *box1, *box2,
    btTransform(btQuaternion::getIdentity(), { 0.0f, -1.0f, 0.0f }),
    btTransform(btQuaternion::getIdentity(), { 0.0f, 0.0f, 0.0f }),
    true
);
// Removing any restrictions on the y-coordinate of the hanging box
// by setting the lower limit above the upper one.
spring->setLinearLowerLimit(btVector3(0.0f, 1.0f, 0.0f));
spring->setLinearUpperLimit(btVector3(0.0f, 0.0f, 0.0f));
// Enabling the spring behavior for the y-coordinate (index = 1)
spring->enableSpring(1, true);
spring->setStiffness(1, 35.0f);
spring->setDamping (1, 0.5f);
spring->setEquilibriumPoint();
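For completeness, here is a minimal sketch of how the bodies and the constraint might be added to a dynamics world and stepped. The world variable and the step loop are assumptions for illustration, not part of the original code:

// Assumed setup: 'world' is an already-configured btDiscreteDynamicsWorld
// with gravity enabled; box1, box2 and spring are created as above.
world->addRigidBody(box1);
world->addRigidBody(box2);
// 'true' disables collisions between the two linked bodies.
world->addConstraint(spring, true);

// Step the simulation; the hanging box should now oscillate around
// the equilibrium point set by setEquilibriumPoint().
for (int i = 0; i < 300; ++i) {
    world->stepSimulation(1.0f / 60.0f);
}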
I'm trying to rotate a red triangle and a green triangle, one after the other, at a constant time interval. I tried the following code, but the time interval is not constant. I can't figure out the problem.
static void display(void)
{
    now = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = now - interval;
    if (flag)
    {
        if (now % 3000 == 0)
        {
            flag = false;
        }
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColor3f(1, 0, 0);
        glRotatef(0.1, 0.0, 0.0, 1.0);
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, 0.5f, -5.0f);
        glVertex3f(-1.0f, 1.5f, -5.0f);
        glVertex3f(-1.5f, 0.5f, -5.0f);
        glEnd();
        glutSwapBuffers();
    }
    else
    {
        if (now % 3000 == 0)
        {
            flag = true;
        }
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColor3f(0, 1, 0);
        glRotatef(-0.1, 0.0, 0.0, 1.0);
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, 0.5f, -5.0f);
        glVertex3f(-1.0f, 1.5f, -5.0f);
        glVertex3f(-1.5f, 0.5f, -5.0f);
        glEnd();
        glutSwapBuffers();
    }
}
Before anything, have you tried debugging 101: printing key variables? I am pretty confident that if you do cout << now % 3000 << endl you can find the source of the problem yourself.
For the answer:
now % 3000 == 0
does not seem like a good idea.
How can you be sure that glutGet(GLUT_ELAPSED_TIME) will increment as 1, 2, 3, ...? What if rendering one frame takes 2 ms and the following happens:
2998, 2999, 3001? You just lost one switch time.
Due to the unpredictable time taken to redraw, it is difficult to be perfectly precise about rendering times.
In your case, a good approximation could be:
now % 6000 < 3000
This should work well because 3 s is much larger than the interval at which display will be called.
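For instance, a sketch of what the display callback could look like built around that test (a hedged illustration, reusing the names from the question):

static void display(void)
{
    int now = glutGet(GLUT_ELAPSED_TIME);
    bool redPhase = (now % 6000) < 3000;   // first 3 s of every 6 s cycle

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    if (redPhase) {
        glColor3f(1, 0, 0);
        glRotatef(0.1f, 0.0f, 0.0f, 1.0f);
    } else {
        glColor3f(0, 1, 0);
        glRotatef(-0.1f, 0.0f, 0.0f, 1.0f);
    }
    // ... draw the triangle exactly as in the question ...
    glutSwapBuffers();
}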
In most applications, however, we want continuous movement, and the best option is to make the movement proportional to the actual elapsed time, with code like:
// oldT must persist between frames (e.g. static or global) and start at 0.
static int oldT = 0;
int t = glutGet(GLUT_ELAPSED_TIME);
float dt = (t - oldT) / 1000.0f;   // seconds since the previous frame
oldT = t;
drawTriangle(rotationSpeed * dt);
I want to code a little Minecraft clone. Now I have tried to insert some simple lighting, but my results are very bad. I read a lot about it and tried different solutions without any result.
That's what I've got.
Initializing:
GL11.glViewport(0, 0, Config.GAME_WIDTH, Config.GAME_HEIGHT);
GL11.glMatrixMode(GL11.GL_PROJECTION); // Select The Projection Matrix
GL11.glLoadIdentity(); // Reset The Projection Matrix
GL11.glMatrixMode(GL11.GL_MODELVIEW); // Select The Modelview Matrix
GL11.glLoadIdentity(); // Reset The Modelview Matrix
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glEnable(GL11.GL_CULL_FACE);
GL11.glFrontFace(GL11.GL_CCW);
GL11.glLightModeli(GL11.GL_LIGHT_MODEL_LOCAL_VIEWER, GL11.GL_TRUE);
GL11.glEnable(GL11.GL_LIGHTING);
GL11.glEnable(GL11.GL_LIGHT0);
FloatBuffer qaAmbientLight = floatBuffer(0.0f, 0.0f, 0.0f, 1.0f);
FloatBuffer qaDiffuseLight = floatBuffer(1.0f, 1.0f, 1.0f, 1.0f);
FloatBuffer qaSpecularLight = floatBuffer(1.0f, 1.0f, 1.0f, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_AMBIENT, qaAmbientLight);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_DIFFUSE, qaDiffuseLight);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_SPECULAR, qaSpecularLight);
FloatBuffer qaLightPosition = floatBuffer(lightPosition.x, lightPosition.y, lightPosition.z, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, qaLightPosition);
So now, before each render, I tried this:
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT | GL11.GL_STENCIL_BUFFER_BIT);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glClearColor(0.0f, 100.0f, 100.0f, 1.0f);
GL11.glShadeModel(GL11.GL_FLAT);
GL11.glLoadIdentity();
FloatBuffer qaLightPosition = floatBuffer(lightPosition.x, lightPosition.y, lightPosition.z, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, qaLightPosition);
FloatBuffer ambientMaterial = floatBuffer(0.2f, 0.2f, 0.2f, 1.0f);
FloatBuffer diffuseMaterial = floatBuffer(0.8f, 0.8f, 0.8f, 1.0f);
FloatBuffer specularMaterial = floatBuffer(0.0f, 0.0f, 0.0f, 1.0f);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_AMBIENT, ambientMaterial);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, diffuseMaterial);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SPECULAR, specularMaterial);
GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, 50.0f);
Of course this is not much code, but it is all of the lighting-related code. Did I make a mistake? I read that OpenGL is not as good as DirectX for lighting and shadowing.
That's what it looks like:
http://img199.imageshack.us/img199/7014/testrender.png
Can someone give me tips to make it look better?
I found one post with an awesome block landscape.
http://i.imgur.com/zIocp.jpg
That's how it should look :)
Can someone give me tips to make it look better?
Neither OpenGL nor DirectX has anything to do with lighting and shadowing if you use the programmable pipeline. The normals become just another vertex attribute, which can be used for the lighting computation. The fixed-function pipeline is old and deprecated, and thus not recommended unless you are forced to use it.
Changing to shaders isn't really that hard, and you won't be limited by the fixed pipeline anymore; you then have complete control over how the lighting is computed, and you can easily output more debug information (such as coloring surfaces based on their normals).
That's how it should look :)
The screenshot you posted also has visible ambient occlusion. Achieving this effect without shaders would be extremely hard and simply not worth the effort.
I happen to be doing a similar project myself; I wouldn't mention it if it weren't open source and publicly available. Here's a sample result:
You can find the lighting shader code here.
I'll post an excerpt to prevent links from rotting:
float CalcDirectionalLightFactor(vec3 lightDirection, vec3 normal) {
    float DiffuseFactor = dot(normalize(normal), -lightDirection);
    if (DiffuseFactor > 0) {
        return DiffuseFactor;
    }
    else {
        return 0.0;
    }
}
vec3 DiffuseColor = Light0.Color * Light0.DiffuseIntensity * CalcDirectionalLightFactor(Light0.Direction, normal);
Bartek's answer is a good one. You will want to go down the path of writing your own shaders, understanding what OpenGL provides for shadowing and lighting, and not relying on older, deprecated lighting models. It is a lot more complex than glEnable(LIGHTING_AND_SHADOWING).
But if you just want to play with your code to see the colors change from binary black/white, one potential idea is turning off the qaSpecularLight (which creates "glossy" all-white highlights that don't help you get to a "matte" look) and changing the glShadeModel setting to GL_SMOOTH shading.
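Here is a minimal sketch of those two tweaks, written as C-style OpenGL calls; the LWJGL equivalents are the corresponding GL11.* calls already used in the question. Treat it as an illustration rather than drop-in code:

// Use smooth (Gouraud) shading instead of flat shading.
glShadeModel(GL_SMOOTH);

// Zero out the specular terms so the light contributes no glossy highlights.
GLfloat noSpecular[] = { 0.0f, 0.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_SPECULAR, noSpecular);
glMaterialfv(GL_FRONT, GL_SPECULAR, noSpecular);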
That should help somewhat, but will not get you all the way to your goal. Follow Bartek's suggested path (or google for similar ideas).
So I have begun learning OpenGL, reading from the book "OpenGL SuperBible, 5th ed.". It explains things really well, and I have been able to create my first GL program myself! Just something simple: a rotating 3D pyramid.
Now, for some reason, one of the faces is not rendering. I checked the vertices (plotted them on paper first) and they seemed to be right. I found out that if I changed the batch to draw a line loop, it would render; however, it would not render as a triangle. Can anyone explain why?
void setupRC()
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    shaderManager.InitializeStockShaders();

    M3DVector3f vVerts1[] = {-0.5f, 0.0f, -0.5f,  0.0f, 0.5f, 0.0f,  0.5f, 0.0f, -0.5f};
    M3DVector3f vVerts2[] = {-0.5f, 0.0f, -0.5f,  0.0f, 0.5f, 0.0f, -0.5f, 0.0f,  0.5f};
    M3DVector3f vVerts3[] = {-0.5f, 0.0f,  0.5f,  0.0f, 0.5f, 0.0f,  0.5f, 0.0f,  0.5f};
    M3DVector3f vVerts4[] = { 0.5f, 0.0f,  0.5f,  0.0f, 0.5f, 0.0f,  0.5f, 0.0f, -0.5f};

    triangleBatch1.Begin(GL_LINE_LOOP, 3);
    triangleBatch1.CopyVertexData3f(vVerts1);
    triangleBatch1.End();

    triangleBatch2.Begin(GL_TRIANGLES, 3);
    triangleBatch2.CopyVertexData3f(vVerts2);
    triangleBatch2.End();

    triangleBatch3.Begin(GL_TRIANGLES, 3);
    triangleBatch3.CopyVertexData3f(vVerts3);
    triangleBatch3.End();

    triangleBatch4.Begin(GL_TRIANGLES, 3);
    triangleBatch4.CopyVertexData3f(vVerts4);
    triangleBatch4.End();

    glEnable(GL_CULL_FACE);
}
float rot = 1;
void renderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    GLfloat vRed[]   = {1.0f, 0.0f, 0.0f, 0.5f};
    GLfloat vBlue[]  = {0.0f, 1.0f, 0.0f, 0.5f};
    GLfloat vGreen[] = {0.0f, 0.0f, 1.0f, 0.5f};
    GLfloat vWhite[] = {1.0f, 1.0f, 1.0f, 0.5f};
    M3DMatrix44f transformMatrix;

    if (rot >= 360)
        rot = 0;
    else
        rot = rot + 1;
    m3dRotationMatrix44(transformMatrix, m3dDegToRad(rot), 0.0f, 1.0f, 0.0f);

    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vRed);
    triangleBatch1.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vBlue);
    triangleBatch2.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vGreen);
    triangleBatch3.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vWhite);
    triangleBatch4.Draw();

    glutSwapBuffers();
    glutPostRedisplay();
    Sleep(10);
}
You've most likely defined the vertices in clockwise order for the triangle that isn't showing, and in counterclockwise order (normally the default) for those that are. Clockwise winding essentially creates an inward facing normal and thus OpenGL won't bother to render it when culling is enabled.
The easiest way to check this is to set glCullFace(GL_FRONT)--that should toggle it so you see the missing triangle and no longer see the other three.
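For example, a quick debugging sketch in setupRC (not a permanent change):

glEnable(GL_CULL_FACE);
// Temporarily cull front faces instead of the default back faces.
// If the previously missing triangle now shows up (and the other three
// disappear), its vertices are wound clockwise and their order just
// needs to be reversed in the vertex array.
glCullFace(GL_FRONT);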
The only thing I see that affects polygons here is glEnable(GL_CULL_FACE);.
You shouldn't have that, because if you plot your vertices backwards, the polygon won't render.
Remove it or actually call glDisable(GL_CULL_FACE); to be sure.
In your case, it's not likely that you want to draw a polygon that you can see from one side only.
For my project I needed to rotate a rectangle. I thought that would be easy, but I'm getting unpredictable behavior when running it.
Here is the code:
glPushMatrix();
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0, 0);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(width_sprite_, 0);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(width_sprite_, height_sprite_);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(0, height_sprite_);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
The problem is that my rectangle translates somewhere else in the window while rotating. In other words, the rectangle doesn't keep the position vec_vehicle_position_.x, vec_vehicle_position_.y.
What's the problem ?
Thanks
You need to flip the order of your transformations:
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
becomes
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
To elaborate on the previous answers:
Transformations in OpenGL are performed via matrix multiplication. In your example you have:
M_r - the rotation transform
M_t - the translation transform
v - a vertex
and you had applied them in the order:
M_r * M_t * v
Using parentheses to clarify:
( M_r * ( M_t * v ) )
We see that the vertex is transformed by the closer matrix first, which in this case is the translation. It can be a bit counterintuitive, because it requires you to specify the transformations in the opposite order from the one in which you want them applied. But if you think of how the transforms are placed on the matrix stack, it should hopefully make a bit more sense (the stack is pre-multiplied together).
That is why, in order to get your desired result, you needed to specify the transforms in the opposite order.
Inertiatic provided a very good response. From a code perspective, your transformations will happen in the reverse order they appear. In other words, transforms closer to the actual drawing code will be applied first.
For example:
glRotate();
glTranslate();
glScale();
drawMyThing();
This will first scale your thing, then translate it, then rotate it. You effectively need to "read your code backwards" to figure out which transforms are being applied in which order. Also keep in mind what the state of these transforms is when you push and pop the model-view stack.
Make sure the rotation is applied before the translation.