I am tasked with making a sun/moon object move across the screen over a time span (as it would in a regular day). One of the options available to me is to use a "billboard", which is a quad that always faces the camera.
I have yet to use many DirectX libraries or techniques, and this is my first graphics project. How does a billboard work, and how can you use it to move a sun object across the screen?
Thanks :) This will be run on Windows machines only, and I only have the option of DirectX (9).
I have gotten this half working. I have a sun image displaying, but it sits at the front of my screen, covering about 80% of it, no matter which way my camera is pointing. I'm looking down towards the ground? Still a huge sun there. Why is this? Here is the code I used to create it...
void Sun::DrawSun()
{
    std::wstring hardcoded = L"..\\Data\\sun.png";
    m_SunTexture = MyTextureManager::GetInstance()->GetTextureData(hardcoded.c_str()).m_Texture;

    LPD3DXSPRITE sprite = NULL;
    if (FAILED(D3DXCreateSprite(MyRenderer::GetInstance()->GetDevice(), &sprite)))
    {
        return; // bail out rather than dereference a NULL sprite
    }

    sprite->Begin(D3DXSPRITE_ALPHABLEND);

    D3DXVECTOR3 pos;
    pos.x = 40.0f;
    pos.y = 20.0f;
    pos.z = 20.0f;

    HRESULT someHr = sprite->Draw(m_SunTexture, NULL, NULL, &pos, 0xFFFFFFFF);
    sprite->End();
    sprite->Release(); // the sprite is recreated on every call, so release it here
}
Obviously, my position vector is hardcoded. Is this what I need to be changing? I have noticed in the documentation the possibility of D3DXSPRITE_BILLBOARD rather than D3DXSPRITE_ALPHABLEND; will this work? Is it possible to use both?
As per the tutorial mentioned in an earlier post, D3DXSPRITE is a 2D object and probably will not work for displaying within the 3D world? What is a smarter alternative?
The easiest way to do a screen-aligned quad is by using point sprites with texture coordinate replacement.
I never did that with DirectX, but in OpenGL, enabling point sprites is a matter of two API calls that look like this:
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
With this mode enabled, you draw a single vertex and, instead of a point, a screen-aligned quad is rendered. The coordinate-replace setting means the rendered quad is rasterised with texture coordinates running across it, so you can place any texture on the quad. Usually you'll want something with an alpha channel to blend into the background seamlessly.
There should be an equivalently easy way to do it in D3D; a sketch follows. In addition, if you write a shader, it may allow you to do some additional stuff, like discarding some of the pixels of the texture.
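In D3D9 the rough equivalent appears to be a pair of point-sprite render states; a hedged, untested sketch, assuming `device` is a valid LPDIRECT3DDEVICE9:
device->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);  // rasterise each point as a textured quad
device->SetRenderState(D3DRS_POINTSCALEENABLE, TRUE);   // scale point size with distance

float pointSize = 32.0f;  // size in pixels, illustrative
device->SetRenderState(D3DRS_POINTSIZE, *(DWORD*)&pointSize);
// ...then draw a single vertex at the sun's world position.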
This tutorial might help
Also, google.
--Edit
To trim a quad down to any other shape, use the alpha channel of the texture. If the alpha is 0, the pixel is not visible. You can't add an alpha channel to a JPEG, but you can to a PNG. Here's an example:
sun with alpha channel http://www.shiny.co.il/shooshx/sun.png
If you open this image in Photoshop you'll see that the background is invisible.
Using alpha blending might cause some problems if you have other things going on in the scene, so if that happens, you can write a simple fragment shader which discards the pixels with alpha == 0.
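If a shader isn't an option, a hedged fixed-function D3D9 alternative is the alpha-test render states, which reject low-alpha pixels outright; `device` is again assumed to be a valid LPDIRECT3DDEVICE9:
device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHAREF, 0x08);                  // threshold, illustrative value
device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);  // keep pixels with alpha >= ref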
Okay, I'm not a DirectX expert, so I am going to assume a few things.
First, I am assuming you have some sort of DrawQuad() function inside your rendering class that takes the 4 corners and a texture.
To begin, we want to get the current view matrix:
D3DMATRIX mat;
HRESULT hr = m_Renderer->GetRenderDevice()->GetTransform(D3DTS_VIEW, &mat);
Let's just set some sort of arbitrary size:
float size = 20.0f;
Now we need to calculate two vectors: the camera's right unit vector and its up unit vector. These are the first and second columns of the view matrix.
D3DXVECTOR3 rightVect;
// first column of the view matrix = camera's right axis in world space
D3DXVECTOR3 viewMatrixA(mat._11, mat._21, mat._31);
// second column of the view matrix = camera's up axis in world space
D3DXVECTOR3 viewMatrixB(mat._12, mat._22, mat._32);
D3DXVec3Normalize(&rightVect, &viewMatrixA);
rightVect = rightVect * size * 0.5f; // half-extent along the right axis
D3DXVECTOR3 upVect;
D3DXVec3Normalize(&upVect, &viewMatrixB);
upVect = upVect * size * 0.5f; // half-extent along the up axis
Now we need to define a location for our object; I am just going with the origin.
D3DXVECTOR3 loc(0.0f, 0.0f, 0.0f);
Let's load the sun texture:
m_SunTexture = <insert magic texture load>
Now let's figure out the 4 corners, offsetting the centre by the right and up half-extents:
D3DXVECTOR3 upperLeft  = loc - rightVect + upVect;
D3DXVECTOR3 upperRight = loc + rightVect + upVect;
D3DXVECTOR3 lowerRight = loc + rightVect - upVect;
D3DXVECTOR3 lowerLeft  = loc - rightVect - upVect;
Let's draw our quad. I am assuming this function exists; otherwise you'll need to do
some vertex drawing yourself (a sketch of what that could look like follows the call below).
m_Renderer->DrawQuad(
upperLeft.x, upperLeft.y, upperLeft.z,
upperRight.x, upperRight.y, upperRight.z,
lowerRight.x, lowerRight.y, lowerRight.z,
lowerLeft.x, lowerLeft.y, lowerLeft.z, m_SunTexture);
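In case DrawQuad() doesn't exist yet, here is a hedged sketch of what it might look like in D3D9, using DrawPrimitiveUP with a textured triangle strip; `device` is assumed to be a valid LPDIRECT3DDEVICE9 and all names are illustrative:
struct QuadVertex { float x, y, z; float u, v; };
static const DWORD QUAD_FVF = D3DFVF_XYZ | D3DFVF_TEX1;

void DrawQuad(LPDIRECT3DDEVICE9 device,
              const D3DXVECTOR3& ul, const D3DXVECTOR3& ur,
              const D3DXVECTOR3& lr, const D3DXVECTOR3& ll,
              LPDIRECT3DTEXTURE9 texture)
{
    // Triangle-strip order: upper-left, upper-right, lower-left, lower-right.
    QuadVertex verts[4] = {
        { ul.x, ul.y, ul.z, 0.0f, 0.0f },
        { ur.x, ur.y, ur.z, 1.0f, 0.0f },
        { ll.x, ll.y, ll.z, 0.0f, 1.0f },
        { lr.x, lr.y, lr.z, 1.0f, 1.0f },
    };
    device->SetTexture(0, texture);
    device->SetFVF(QUAD_FVF);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, verts, sizeof(QuadVertex));
}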
Enjoy :)!
Yes, a billboard would typically be used for this. It's pretty straightforward to calculate the coordinates of the corners of the billboard and the texture parameters. The texture itself could be rendered using some other technique (in the application in system RAM, for instance).
It is simple enough to take the right and up vectors of the camera and add/subtract those (scaled appropriately) from the centre point of the world coordinates you want to render the object at.
I want to be able to add one texture to another in the fragment shader. Right now I have projective texturing working and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My fragment shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now
if (ProjTexCoord[0] > 0.0 ||
ProjTexCoord[1] > 0.0 ||
ProjTexCoord[0] < ProjTexCoord[2] ||
ProjTexCoord[1] < ProjTexCoord[2]){
diffuse = shaded;
}else if(dot(n, projector_aim) < 0 ){
diffuse = projTexColor;
}else{
diffuse = shaded;
}
What I want to achieve:
When for example - the user presses a button, I want the blue/gray texture to be written to the gray texture on the sphere and rotate with it. Imagine it as sort of "taking a picture" or painting on top of the sphere so that the blue/gray texture spins with the sphere after a button is pressed.
As the fragment shader operates on each pixel, it should be possible to copy pixel-by-pixel from one texture to the other, but I have no clue how; I might be googling for the wrong stuff.
How can I achieve this technically? What method is most versatile? Suggestions are very much appreciated, please let me know If more code is necessary.
Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that the mapping isn't one-to-one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive since you already have the coordinates of everything in one place, but I wouldn't do this.
I'd start by creating a texture containing the object-space position of each texel in your grey texture. This is key: when you click, you can render to your grey texture (using an FBO) and know where each texel is in your current view or your projective texture's view. There may be edge cases where the same bit of texture appears on multiple triangles. You could create this position texture by rendering your sphere into the grey texture's layout, using the texture coordinates as your vertex positions. You probably need a floating-point texture for this.
So when you click, you render a full screen quad to your grey texture with alpha blending enabled. Using the grey texture object space positions, each fragment computes the image space position within the blue texture's projection. Discard the fragments that are outside the texture and sample/blend in those that are inside.
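A hedged sketch of that click-time pass; `greyTexture` is the sphere's texture, and `texWidth`, `texHeight` and drawFullScreenQuad() are illustrative names:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, greyTexture, 0);

glViewport(0, 0, texWidth, texHeight);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad();  // the fragment shader projects and blends the decal in

glBindFramebuffer(GL_FRAMEBUFFER, 0);  // restore the default framebuffer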
I think you are overcomplicating things.
Writes to textures inside classic shaders (i.e. not compute shaders) are only implemented on recent hardware and require very recent OpenGL versions and extensions.
It could be terribly slow if used wrong; it's easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader could become a terribly slow, unmaintainable mess of branches and texture fetches.
And all this mess would be executed for every single pixel, every single frame.
Solution: KISS.
Just update your texture on the CPU side:
Write to the texture, replacing parts of it with the desired content.
The update only needs to be done once, and only when you need it; the data persists until you rewrite it (not once per frame, but once per change request).
The pixel shader stays dead simple: no branching, one texture fetch.
To get the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program). A sketch of the CPU-side update follows.
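A hedged sketch of that update; `sphereTexture`, the destination rectangle and `decalPixels` (RGBA8 data) are illustrative names, and the rectangle is whatever your ray picking reports:
glBindTexture(GL_TEXTURE_2D, sphereTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                destX, destY,    // where the decal lands in the texture
                decalW, decalH,  // decal size in texels
                GL_RGBA, GL_UNSIGNED_BYTE, decalPixels);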
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.
I want to know if there is a way to tell whether a specific pixel is shown
(e.g. I want to get all positions that are hidden behind the scene, meaning all background objects that are not shown at the near plane).
It's probably faster and easier to do some simple calculations yourself on the CPU side. For example...
clipSpacePoint = projectionMatrix * viewMatrix * modelMatrix * vec4f(0, 0, 0, 1); //using the model's origin and transform matrix
//or
clipSpacePoint = projectionMatrix * viewMatrix * vec4f(x, y, z, 1); //using the model's position
clipSpacePoint.xyz /= clipSpacePoint.w; //possible division by zero
//point is visible if clipSpacePoint.w is positive and clipSpacePoint.xyz are between -1 and 1
Alternatively, and to better answer your question, you can use an occlusion query to check how many fragments were rendered during a draw call. An example application might be checking whether a bright light is visible and drawing a lens flare if it is. Occlusion queries, like many OpenGL calls, run asynchronously, so if you need the results right away you will stall the rendering pipeline; if you don't mind waiting a frame or two for the result, it may run a bit faster.
http://www.opengl.org/wiki/Query_Object#Occlusion_queries
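A hedged sketch of such a query; drawLightQuad() stands in for whatever geometry you want to test:
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawLightQuad();                 // draw the object whose visibility we test
glEndQuery(GL_SAMPLES_PASSED);

// Fetching GL_QUERY_RESULT immediately stalls until the GPU catches up;
// poll GL_QUERY_RESULT_AVAILABLE on a later frame to avoid that.
GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
bool lightVisible = (samples > 0);  // e.g. draw the lens flare if true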
This won't tell you where the pixels were drawn though. Two ways to find the pixel locations come to mind...
You could write pixel coordinates in your fragment shader to a texture, use the histopyramid/stream compaction method to move them to the start of the texture, and read them back from the GPU.
If you don't mind using recent GL features, ARB_shader_atomic_counters can be used in the fragment shader to create a unique index, which you could then use to write the fragment's coordinates to a buffer or texture with ARB_shader_image_load_store. You'll probably also want to enable the early depth test for this.
These are both far more complex than doing the point in box check or an occlusion query.
I'm writing a 2D game in OpenGL. I have already set up an orthographic projection, so I can easily tell where a quad will end up on screen. The problem is, I also want to be able to map pixels directly to texture coords, so I applied an orthographic transformation (using gluOrtho2D) to the texture matrix as well. Now I can map pixels directly using integers and glTexCoord2i. The thing is, after googling/reading/asking, I found out that (apparently) no one really knows the behavior of glTexCoord2i, but it works just fine the way I'm using it. Some sample test code I wrote follows:
glBegin(GL_QUADS);
    glTexCoord2i(16, 0);
    glVertex2f(X, Y);
    glTexCoord2i(16, 16);
    glVertex2f(X, Y + 32);
    glTexCoord2i(32, 16);
    glVertex2f(X + 32, Y + 32);
    glTexCoord2i(32, 0);
    glVertex2f(X + 32, Y);
glEnd();
So, is there any problem with what I'm doing, or is what I'm doing correct?
There's nothing special about glTexCoord*i; it's just one variant of glTexCoord that takes integer arguments for convenience. The values will be transformed by the texture transform in the same way as all other tex coords.
If you want to express your texture coordinates in pixels your code looks totally ok.
EDIT: If you want to take your OpenGL coding to the "next level", you should consider dropping immediate-mode rendering (glBegin/glEnd) and instead building vertex buffers and drawing them with glDrawArrays; this is way faster. A sketch of the same quad drawn that way follows.
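For illustration, a hedged sketch of the quad above as a vertex buffer drawn with glDrawArrays, still using the legacy fixed-function pointer API to match the surrounding code (X, Y and the buffer handling are illustrative):
struct Vertex { GLfloat x, y; GLint s, t; };
GLfloat X = 0.0f, Y = 0.0f;
Vertex quad[4] = {
    { X,      Y,      16, 0  },
    { X,      Y + 32, 16, 16 },
    { X + 32, Y + 32, 32, 16 },
    { X + 32, Y,      32, 0  },
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)0);
glTexCoordPointer(2, GL_INT, sizeof(Vertex), (const void*)(2 * sizeof(GLfloat)));
glDrawArrays(GL_QUADS, 0, 4);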
How can I scale sprites in SDL?
SDL doesn't provide scaling functionality directly, but there's an additional library called SDL_gfx which provides rotation and zooming capabilities. There's also another library called Sprig which provides similar features.
You can do scaling if you are getting sprites from a texture with SDL_RenderCopy(), but I cannot guarantee antialiasing.
With the function SDL_RenderCopy() you pass 4 params:
a pointer to a renderer (where you are going to render),
a pointer to a texture (where you are going to get the sprite from),
a pointer to the source rect (the area and position where you get the sprite on the texture),
and a pointer to the dest rect (the area and position on the renderer where you are going to draw it).
You only need to modify your dest rect. For example, if you are going to render a 300 x 300 image and you want it scaled, make your dest rect 150 x 150 or 72 x 72 or whatever size you want; a sketch follows.
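A hedged sketch of that call, assuming `renderer` and `texture` already exist and the sprite occupies a 300 x 300 region of the texture:
SDL_Rect src  = { 0, 0, 300, 300 };      // where the sprite sits in the texture
SDL_Rect dest = { 100, 100, 150, 150 };  // draw at (100, 100), scaled to half size
SDL_RenderCopy(renderer, texture, &src, &dest);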
You haven't provided any code, so I'm going to assume you are using textures and an SDL_Renderer:
When using SDL_RenderCopy() the texture will be stretched to fit the destination SDL_Rect, so if you make the destination SDL_Rect larger or smaller you can perform a simple scaling of the texture.
https://wiki.libsdl.org/SDL_RenderCopy
The solution from Ibrahim CS works.
Let me expand on top of it by providing the code.
One more thing to note: you need to calculate the new position (x, y), with the top-left as origin, so the scaled texture stays centred on the same point.
I do it like this:
// SDL_Texture is opaque, so query its size rather than reading w/h fields
int tex_w, tex_h;
SDL_QueryTexture(texture, NULL, NULL, &tex_w, &tex_h);

// calculate new x and y so the scaled texture stays centred on the same point
int new_x = (int)((x + tex_w / 2) - (tex_w / 2 * scale));
int new_y = (int)((y + tex_h / 2) - (tex_h / 2 * scale));

// form new destination rect
SDL_Rect dest_rect = { new_x, new_y, (int)(tex_w * scale), (int)(tex_h * scale) };

// render
SDL_RenderCopy(renderer, texture, NULL, &dest_rect);
This assumes that texture is an SDL_Texture*, renderer is an SDL_Renderer*, scale is a float, and you render the full input texture to the destination.
If you use SFML instead then you get a very similar set of cross-platform capabilities but the graphics are hardware accelerated and features such as scaling and rotation come for free, both in terms of needing no additional dependencies and in terms of taking no noticeable CPU time to operate.
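For instance, a hedged SFML 2.x sketch; the file name and `window` (an sf::RenderWindow) are illustrative:
sf::Texture texture;
texture.loadFromFile("sprite.png");

sf::Sprite sprite(texture);
sprite.setScale(0.5f, 0.5f);   // half size, applied in hardware
sprite.setRotation(45.0f);     // rotation comes for free as well
window.draw(sprite);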
I've been working on a game engine for a while. I started out with 2D graphics in plain SDL, but I've slowly been moving towards 3D capabilities by using OpenGL. Most of the documentation I've seen about "how to get things done" uses GLUT, which I am not using.
The question is how do I create a "camera" in OpenGL that I can move around a 3D environment while properly displaying both 3D models and sprites (for example, a sprite that has a fixed position and rotation)? What functions should I be concerned with in order to set up a camera in OpenGL, and in what order should they be called?
Here is some background information leading up to why I want an actual camera.
To draw a simple sprite, I create a GL texture from an SDL surface and draw it onto the screen at the coordinates (SpriteX - CameraX, SpriteY - CameraY). This works fine, but when moving towards actual 3D models it doesn't work quite right. The camera's location is a custom vector class (i.e. not using the standard libraries for it) with X, Y, Z integer components.
I have a 3D cube made up of triangles, and by itself I can draw it and rotate it, and I can actually move the cube around (although in an awkward way) by passing the camera location in when I draw the model and using the components of the location vector to calculate the model's position. Problems become evident with this approach when I go to rotate the model, though. The origin of the rotation isn't the model itself but seems to be the origin of the screen. Some googling tells me I need to save the location of the model, rotate it about the origin, then restore the model to its original location.
Instead of passing in the location of my camera and calculating where things should be drawn in the viewport by computing new vertices, I figured I would create an OpenGL "camera" to do this for me, so all I would need to do is pass the coordinates of my Camera object into the OpenGL camera and it would translate the view automatically. This task seems extremely easy if you use GLUT, but I'm not sure how to set up a camera using just OpenGL.
EDIT #1 (after some comments):
Following the suggestions, here is the update method that gets called throughout my program. It has been updated to create perspective and view matrices. All drawing happens before this is called, and a similar set of methods is executed when OpenGL initializes (minus the buffer swap). The x, y, z coordinates come straight from an instance of Camera and its location vector. If the camera were at (256, 32, 0), then 256, 32 and 0 would be passed into the Update method. Currently z is set to 0, as there is no way to change that value at the moment. The 3D model being drawn is a set of vertices/triangles + normals at location X=320, Y=240, Z=-128. When the program is run, the cube draws incorrectly in FILL mode, then in LINE mode, and again in FILL mode after I move the camera a little to the right. It looks like my normals may be the cause, but I think it has more to do with me missing something extremely important or not completely understanding what the NEAR and FAR parameters of glFrustum actually do.
Before I implemented these changes, I was using glOrtho and the cube rendered correctly. Now if I switch back to glOrtho, only one face renders (the green one) and the rotation is quite weird - probably due to the translation. The cube has 6 different colors, one for each side: red, blue, green, cyan, white and purple.
int VideoWindow::Update(double x, double y, double z)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(0.0f, GetWidth(), GetHeight(), 0.0f, 32.0f, 192.0f);

    glMatrixMode(GL_MODELVIEW);
    SDL_GL_SwapBuffers();
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glRotatef(0, 1.0f, 0.0f, 0.0f);
    glRotatef(0, 0.0f, 1.0f, 0.0f);
    glRotatef(0, 0.0f, 0.0f, 1.0f);
    glTranslated(-x, -y, 0);

    return 0;
}
EDIT FINAL:
The problem turned out to be an issue with the Near and Far arguments of glFrustum and the Z value of glTranslated. While changing the values has fixed it, I'll probably have to learn more about the relationship between the two functions.
You need a view matrix, and a projection matrix. You can do it one of two ways:
Load the matrix yourself, using glMatrixMode() and glLoadMatrixf(), after you use your own library to calculate the matrices.
Use combinations of glMatrixMode(GL_MODELVIEW) and glTranslate() / glRotate() to create your view matrix, and glMatrixMode(GL_PROJECTION) with glFrustum() to create your projection matrix. Remember: your view matrix is the negative translation of your camera's position (as you move the world relative to the camera, which stays at the origin), combined with the inverse of any rotations applied (pitch/yaw). A rough sketch follows.
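A minimal sketch of that second option, assuming fixed-function GL plus GLU, with illustrative window size and camera variables:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)width / (double)height, 1.0, 1000.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-pitch, 1.0f, 0.0f, 0.0f);  // inverse of the camera's rotation first...
glRotatef(-yaw,   0.0f, 1.0f, 0.0f);
glTranslatef(-camX, -camY, -camZ);    // ...then the inverse of its translation
// ...draw the scene here...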
Hope this helps, if I had more time I'd write you a proper example!
You have to do it using the matrix stack, just as for an object hierarchy,
but the camera is inside that hierarchy, so you have to put the inverse of its transform on the stack before drawing the objects, as OpenGL only uses the matrix that maps the 3D world to the camera.
If you have not checked it already, looking at the following project may explain in detail what "tsalter" wrote in his post.
Camera from OGL SDK (CodeColony)
Also look at the Red Book for an explanation of viewing and how the model-view and projection matrices help you create a camera. It starts with a good comparison between an actual camera and the corresponding parts of the OpenGL API. Chapter 3 - Viewing