Render a texture to a window - C++

I've got a dialog that I'd basically like to implement as a texture viewer using DirectX. The source texture can either come from a file on-disk or from an arbitrary D3D texture/surface in memory. The window will be resizable, so I'll need to be able to scale its contents accordingly (preserving aspect ratio, while not necessary, would be useful to know).
What would be the best way to go about implementing the above?

IMHO the easiest way to do this is to create a quad (or two triangles) whose vertices contain the correct UV coordinates. Set the XYZ coordinates to the view-cube coordinates. This only works if the identity matrix is set as the projection; you can then use -1 to 1 on both the X and Y axes.
EDIT: Here's an example tutorial:
http://www.mvps.org/directx/articles/splash_screen.htm
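As a sketch of that idea, the quad's vertex data could look like this (plain C++; the vertex struct and triangle-strip ordering are assumptions - adapt the layout to your FVF / input layout before feeding it to a vertex buffer):

```cpp
#include <array>

// A textured quad spanning the full view cube: XYZ in [-1, 1], UV in [0, 1].
// The vertex layout is a placeholder, not a specific D3D declaration.
struct QuadVertex { float x, y, z, u, v; };

std::array<QuadVertex, 4> makeFullscreenQuad()
{
    return {{
        { -1.f,  1.f, 0.f, 0.f, 0.f },  // top-left
        {  1.f,  1.f, 0.f, 1.f, 0.f },  // top-right
        { -1.f, -1.f, 0.f, 0.f, 1.f },  // bottom-left
        {  1.f, -1.f, 0.f, 1.f, 1.f },  // bottom-right (triangle-strip order)
    }};
}
```

With an identity projection these positions map directly to the window, so resizing the window rescales the texture for free.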

This is the code I use to preserve size and scaling for a resizable dialog. My texture is held in a memory bitmap; I am sure you can adapt it if you do not have one. The important bit is the way I determine the right scaling factor to preserve the aspect ratio for any client area size.
CRect destRect( 0, 0, frameRect.Width(), frameRect.Height() );
if( txBitmapInfo.bmWidth <= frameRect.Width() && txBitmapInfo.bmHeight <= frameRect.Height() )
{
    destRect.left = ( frameRect.Width() - txBitmapInfo.bmWidth ) / 2;
    destRect.right = destRect.left + txBitmapInfo.bmWidth;
    destRect.top = ( frameRect.Height() - txBitmapInfo.bmHeight ) / 2;
    destRect.bottom = destRect.top + txBitmapInfo.bmHeight;
}
else
{
    double hScale = static_cast<double>( frameRect.Width() ) / txBitmapInfo.bmWidth;
    double vScale = static_cast<double>( frameRect.Height() ) / txBitmapInfo.bmHeight;
    if( hScale < vScale )
    {
        int height = static_cast<int>( frameRect.Width() * ( static_cast<double>(txBitmapInfo.bmHeight) / txBitmapInfo.bmWidth ) );
        destRect.top = ( frameRect.Height() - height ) / 2;
        destRect.bottom = destRect.top + height;
    }
    else
    {
        int width = static_cast<int>( frameRect.Height() * ( static_cast<double>(txBitmapInfo.bmWidth) / txBitmapInfo.bmHeight ) );
        destRect.left = ( frameRect.Width() - width ) / 2;
        destRect.right = destRect.left + width;
    }
}
Hope this helps!
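For reference, the same scaling logic can be condensed into a small standalone helper (a sketch in plain C++ without MFC; the Rect struct here is a hypothetical stand-in for CRect):

```cpp
#include <algorithm>

struct Rect { int left, top, right, bottom; };

// Centre an imgW x imgH image in a frameW x frameH client area,
// shrinking (but never enlarging) it while preserving aspect ratio.
Rect letterbox(int frameW, int frameH, int imgW, int imgH)
{
    int w = imgW, h = imgH;
    if (imgW > frameW || imgH > frameH) {
        double hScale = static_cast<double>(frameW) / imgW;
        double vScale = static_cast<double>(frameH) / imgH;
        double scale = std::min(hScale, vScale);   // the tighter axis wins
        w = static_cast<int>(imgW * scale);
        h = static_cast<int>(imgH * scale);
    }
    int left = (frameW - w) / 2, top = (frameH - h) / 2;
    return { left, top, left + w, top + h };
}
```

Taking the minimum of the two scale factors is equivalent to the hScale/vScale comparison above, just in one expression.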

Related

How to downsample a non-power-of-2 texture in Unreal Engine?

I am rendering the viewport at a resolution of something like 1920x1080 multiplied by an oversampling value like 4. Now I need to downsample from the rendered resolution of 7680x4320 back to 1920x1080.
Are there any functions in Unreal I could use for that? Or any library (Windows only) which handles this nicely?
Or what would be a proper way of writing this myself?
We tried to implement downsampling, but it only works if SnapshotScale is 2; when it's higher than 2 it doesn't seem to have any effect on image quality.
UTexture2D* AAVESnapShotManager::DownsampleTexture(UTexture2D* Texture)
{
    UTexture2D* Result = UTexture2D::CreateTransient(RenderSettings.imageWidth, RenderSettings.imageHeight, PF_B8G8R8A8);
    void* TextureDataVoid = Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY);
    void* ResultDataVoid = Result->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FColor* TextureData = (FColor*)TextureDataVoid;
    FColor* ResultData = (FColor*)ResultDataVoid;
    int32 WindowSize = RenderSettings.resolutionScale / 2;
    for (int x = 0; x < Result->GetSizeX(); ++x)
    {
        for (int y = 0; y < Result->GetSizeY(); ++y)
        {
            const uint32 ResultIndex = y * Result->GetSizeX() + x;
            uint32_t R = 0, G = 0, B = 0, A = 0;
            int32 Samples = 0;
            for (int32 dx = -WindowSize; dx < WindowSize; ++dx)
            {
                for (int32 dy = -WindowSize; dy < WindowSize; ++dy)
                {
                    int32 PosX = (x * RenderSettings.resolutionScale + dx);
                    int32 PosY = (y * RenderSettings.resolutionScale + dy);
                    if (PosX < 0 || PosX >= Texture->GetSizeX() || PosY < 0 || PosY >= Texture->GetSizeY())
                    {
                        continue;
                    }
                    size_t TextureIndex = PosY * Texture->GetSizeX() + PosX;
                    FColor& Color = TextureData[TextureIndex];
                    R += Color.R;
                    G += Color.G;
                    B += Color.B;
                    A += Color.A;
                    ++Samples;
                }
            }
            ResultData[ResultIndex] = FColor(R / Samples, G / Samples, B / Samples, A / Samples);
        }
    }
    Texture->PlatformData->Mips[0].BulkData.Unlock();
    Result->PlatformData->Mips[0].BulkData.Unlock();
    Result->UpdateResource();
    return Result;
}
I expect a high-quality oversampled texture as output, working with any positive int value of SnapshotScale.
I have a suggestion. It's not really direct, but it involves no writing of image filtering or importing of libraries.
Make an unlit Material with the nodes TextureObject -> TextureSample -> Emissive.
Use the texture you start with in your function to populate the TextureObject on a Material Instance Dynamic of that material.
Use the "Draw Material to Render Target" function to draw the Material Instance Dynamic to a Render Target that is pre-set to your target resolution.
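If you do stay on the CPU instead, the averaging can be made scale-agnostic by covering the full scale x scale source block for every output pixel. A minimal sketch in plain C++ (the Pixel struct and flat pixel vector are stand-ins for the locked mip data, not Unreal types):

```cpp
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b, a; };

// Box-filter downsample by an integer factor `scale`: each output pixel is
// the average of the corresponding scale x scale block of source pixels,
// so every source pixel is counted exactly once for any positive scale.
std::vector<Pixel> downsample(const std::vector<Pixel>& src, int srcW, int srcH, int scale)
{
    int dstW = srcW / scale, dstH = srcH / scale;
    std::vector<Pixel> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            uint32_t r = 0, g = 0, b = 0, a = 0;
            for (int dy = 0; dy < scale; ++dy)
                for (int dx = 0; dx < scale; ++dx) {
                    const Pixel& p = src[(y * scale + dy) * srcW + (x * scale + dx)];
                    r += p.r; g += p.g; b += p.b; a += p.a;
                }
            uint32_t n = scale * scale;
            dst[y * dstW + x] = { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n), uint8_t(a / n) };
        }
    return dst;
}
```

Note the window runs over [0, scale) rather than [-scale/2, scale/2), which avoids the integer-division truncation that makes a half-window approach degenerate for odd scales.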

Get point around rounded rectangle

I'm using this to get a point around a circle.
constexpr int quality = 20;
static Vertex_t verts[quality];
for ( int i = 0; i < quality; i++ ) {
    float angle = ((float)i / -quality) * MATH_TAU;
    verts[i].x = cir.pos.x + (cir.radius * sin( angle ));
    verts[i].y = cir.pos.y + (cir.radius * cos( angle ));
}
Now I need to get a point around a rounded rectangle, given its position, size, and radius.
You'll have to split the code into four parts - one arc for each corner. As these are vertices you're dealing with, the straight edges will be filled in automatically.
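A sketch of that idea in plain C++ (the Vec2 type and a Y-down screen coordinate system are assumptions; each corner contributes a quarter arc, and consecutive arcs are ordered so the straight edges between them fill in automatically):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Generate (quality + 1) points per corner arc around a rounded rectangle.
// pos is the top-left corner, size the full extent, radius the corner radius.
std::vector<Vec2> roundedRectPoints(Vec2 pos, Vec2 size, float radius, int quality)
{
    const float TAU = 6.28318530718f;
    // Arc centres, ordered clockwise so consecutive arcs connect.
    const Vec2 centers[4] = {
        { pos.x + size.x - radius, pos.y + radius },           // top-right
        { pos.x + size.x - radius, pos.y + size.y - radius },  // bottom-right
        { pos.x + radius,          pos.y + size.y - radius },  // bottom-left
        { pos.x + radius,          pos.y + radius },           // top-left
    };
    std::vector<Vec2> verts;
    for (int corner = 0; corner < 4; ++corner) {
        // Each corner sweeps a quarter turn, starting where the previous edge ends.
        float start = (corner - 1) * (TAU / 4.0f);
        for (int i = 0; i <= quality; ++i) {
            float angle = start + (TAU / 4.0f) * i / quality;
            verts.push_back({ centers[corner].x + radius * std::cos(angle),
                              centers[corner].y + radius * std::sin(angle) });
        }
    }
    return verts;
}
```

With radius equal to half the smaller side this degenerates to the circle case; with radius 0 all four arcs collapse onto the rectangle's corners.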

Realtime object painting

I am trying to do realtime painting on an object's texture. I'm using Irrlicht for now, but that does not really matter.
So far, I've got the right UV coordinates using this algorithm:
1. find out which object's triangle the user selected (raycasting, nothing really difficult)
2. find out the UV (barycentric) coordinates of the intersection point on that triangle
3. find out the UV (texture) coordinates of each triangle vertex
4. find out the UV (texture) coordinates of the intersection point
5. calculate the texture image coordinates for the intersection point
But somehow, when I draw at the point I got in the 5th step on the texture image, I get totally wrong results: when drawing a rectangle at the cursor point, its X (or Z) coordinate is inverted.
Here's the code I am using to fetch texture coordinates:
core::vector2df getPointUV(core::triangle3df tri, core::vector3df p)
{
    core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;
    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);
    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;
    scene::IMesh* m = Mesh->getMesh(((scene::IAnimatedMeshSceneNode*)Model)->getFrameNr());
    core::array<video::S3DVertex> VA, VB, VC;
    video::SMaterial Material;
    for (unsigned int i = 0; i < m->getMeshBufferCount(); i++)
    {
        scene::IMeshBuffer* mb = m->getMeshBuffer(i);
        video::S3DVertex* vertices = (video::S3DVertex*) mb->getVertices();
        for (unsigned long long v = 0; v < mb->getVertexCount(); v++)
        {
            if (vertices[v].Pos == tri.pointA)
                VA.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointB)
                VB.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointC)
                VC.push_back(vertices[v]);
            if (vertices[v].Pos == tri.pointA || vertices[v].Pos == tri.pointB || vertices[v].Pos == tri.pointC)
                Material = mb->getMaterial();
            if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
                break;
        }
        if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
            break;
    }
    core::vector2df
        A = VA[0].TCoords,
        B = VB[0].TCoords,
        C = VC[0].TCoords;
    core::vector2df P(A + (u * (C - A)) + (v * (B - A)));
    core::dimension2du Size = Material.getTexture(0)->getSize();
    CursorOnModel = core::vector2di(Size.Width * P.X, Size.Height * P.Y);
    int X = Size.Width * P.X, Y = Size.Height * P.Y;
    // DRAWING SOME RECTANGLE
    Material.getTexture(0)->lock(true);
    Device->getVideoDriver()->setRenderTarget(Material.getTexture(0), true, true, 0);
    Device->getVideoDriver()->draw2DRectangle(video::SColor(255, 0, 100, 75),
        core::rect<s32>((X - 10), (Y - 10), (X + 10), (Y + 10)));
    Device->getVideoDriver()->setRenderTarget(0, true, true, 0);
    Material.getTexture(0)->unlock();
    return core::vector2df(X, Y);
}
I just want to make my object paintable in realtime. My current problems are: wrong texture coordinate calculation, and non-unique vertex UV coordinates (so drawing something on one side of the dwarf's axe draws the same thing on the other side of the axe).
How should I do this?
I was able to use your codebase and get it to work for me.
Re your second problem, "non-unique vertex UV coordinates":
You are absolutely right - you need unique vertex UVs to get this working, which means you have to unwrap your models and avoid shared UV space for mirrored elements and the like (e.g. a left/right boot - if they use the same UV space, you'll automatically paint on both when you want one to be red and the other green). You can check out "uvlayout" (tool) or the UV-unwrap modifier in 3ds Max.
Re the first and more important problem, "wrong texture coordinate calculation":
The calculation of your barycentric coordinates is correct, but I suppose your input data is wrong. I assume you get the triangle and the collision point using Irrlicht's CollisionManager and TriangleSelector. The problem is that the positions of the triangle's vertices (which you get as a return value from the collision test) are in world coordinates, but you need them in model coordinates for the calculation. Here's what you need to do:
pseudocode:
- add the node which contains the mesh of the hit triangle as a parameter to getPointUV()
- get the inverse absolute-transformation matrix by calling node->getAbsoluteTransformation() and inverting it
- transform the vertices of the triangle by this inverse matrix and use those values for the rest of the method
Below you'll find my optimized method, which does it for a very simple mesh (one mesh, only one mesh buffer).
Code:
irr::core::vector2df getPointUV(irr::core::triangle3df tri, irr::core::vector3df p, irr::scene::IMeshSceneNode* pMeshNode, irr::video::IVideoDriver* pDriver)
{
    irr::core::matrix4 inverseTransform(
        pMeshNode->getAbsoluteTransformation(),
        irr::core::matrix4::EM4CONST_INVERSE);
    inverseTransform.transformVect(tri.pointA);
    inverseTransform.transformVect(tri.pointB);
    inverseTransform.transformVect(tri.pointC);
    irr::core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;
    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);
    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;
    irr::video::S3DVertex A, B, C;
    irr::video::S3DVertex* vertices = static_cast<irr::video::S3DVertex*>(
        pMeshNode->getMesh()->getMeshBuffer(0)->getVertices());
    for (unsigned int i = 0; i < pMeshNode->getMesh()->getMeshBuffer(0)->getVertexCount(); ++i)
    {
        if (vertices[i].Pos == tri.pointA)
        {
            A = vertices[i];
        }
        else if (vertices[i].Pos == tri.pointB)
        {
            B = vertices[i];
        }
        else if (vertices[i].Pos == tri.pointC)
        {
            C = vertices[i];
        }
    }
    irr::core::vector2df t2 = B.TCoords - A.TCoords;
    irr::core::vector2df t1 = C.TCoords - A.TCoords;
    irr::core::vector2df uvCoords = A.TCoords + t1*u + t2*v;
    return uvCoords;
}

Quick code to resize DIB image and maintain good img quality

There are many algorithms for image resizing - Lanczos, bicubic, bilinear, etc. - but most of them are fairly complex and therefore consume too much CPU.
What I need is fast, relatively simple C++ code to resize images with acceptable quality.
Here is an example of what I'm currently doing:
for (int y = 0; y < height; y ++)
{
    int srcY1Coord = int((double)(y * srcHeight) / height);
    int srcY2Coord = min(srcHeight - 1, max(srcY1Coord, int((double)((y + 1) * srcHeight) / height) - 1));
    for (int x = 0; x < width; x ++)
    {
        int srcX1Coord = int((double)(x * srcWidth) / width);
        int srcX2Coord = min(srcWidth - 1, max(srcX1Coord, int((double)((x + 1) * srcWidth) / width) - 1));
        int srcPixelsCount = (srcX2Coord - srcX1Coord + 1) * (srcY2Coord - srcY1Coord + 1);
        RGB32 color32;
        UINT32 r(0), g(0), b(0), a(0);
        for (int xSrc = srcX1Coord; xSrc <= srcX2Coord; xSrc ++)
            for (int ySrc = srcY1Coord; ySrc <= srcY2Coord; ySrc ++)
            {
                RGB32 curSrcColor32 = pSrcDIB->GetDIBPixel(xSrc, ySrc);
                r += curSrcColor32.r; g += curSrcColor32.g; b += curSrcColor32.b; a += curSrcColor32.alpha;
            }
        color32.r = BYTE(r / srcPixelsCount); color32.g = BYTE(g / srcPixelsCount); color32.b = BYTE(b / srcPixelsCount); color32.alpha = BYTE(a / srcPixelsCount);
        SetDIBPixel(x, y, color32);
    }
}
The code above is fast enough, but the quality is not OK when scaling pictures up.
Therefore, does someone perhaps already have fast and good C++ code for scaling DIBs?
Note: I was using StretchDIBits before - it was super slow when I needed to downsize a 10000x10000 picture to 100x100; my code is much, much faster. I just want a bit higher quality.
P.S. I'm using my own SetPixel/GetPixel functions to work directly with the data array for speed - this is not a device context!
Why are you doing it on the CPU? Using GDI, there's a good chance of some hardware acceleration. Use StretchBlt and SetStretchBltMode.
In pseudocode:
create source dc and destination dc using CreateCompatibleDC
create source and destination bitmaps
SelectObject source bitmap into source DC and dest bitmap into dest DC
SetStretchBltMode
StretchBlt
release DCs
Alright, here is the answer; I had to do it myself... It works perfectly well for scaling pictures up (for scaling down, my initial code works perfectly well too). Hope someone will find a good use for it; it's fast enough and produces very good picture quality.
for (int y = 0; y < height; y ++)
{
    double srcY1Coord = (y * srcHeight) / (double)height;
    int srcY1CoordInt = (int)(srcY1Coord);
    double srcY2Coord = ((y + 1) * srcHeight) / (double)height - 0.00000000001;
    int srcY2CoordInt = min(maxSrcYcoord, (int)(srcY2Coord));
    double yMultiplierForFirstCoord = (0.5 * (1 - (srcY1Coord - srcY1CoordInt)));
    double yMultiplierForLastCoord = (0.5 * (srcY2Coord - srcY2CoordInt));
    for (int x = 0; x < width; x ++)
    {
        double srcX1Coord = (x * srcWidth) / (double)width;
        int srcX1CoordInt = (int)(srcX1Coord);
        double srcX2Coord = ((x + 1) * srcWidth) / (double)width - 0.00000000001;
        int srcX2CoordInt = min(maxSrcXcoord, (int)(srcX2Coord));
        RGB32 color32;
        ASSERT(srcX1Coord < srcWidth && srcY1Coord < srcHeight);
        double r(0), g(0), b(0), a(0), multiplier(0);
        for (int xSrc = srcX1CoordInt; xSrc <= srcX2CoordInt; xSrc ++)
            for (int ySrc = srcY1CoordInt; ySrc <= srcY2CoordInt; ySrc ++)
            {
                RGB32 curSrcColor32 = pSrcDIB->GetDIBPixel(xSrc, ySrc);
                double xMultiplier = xSrc < srcX1Coord ? (0.5 * (1 - (srcX1Coord - srcX1CoordInt))) : (xSrc >= srcX2Coord ? (0.5 * (srcX2Coord - srcX2CoordInt)) : 0.5);
                double yMultiplier = ySrc < srcY1Coord ? yMultiplierForFirstCoord : (ySrc >= srcY2Coord ? yMultiplierForLastCoord : 0.5);
                double curPixelMultiplier = xMultiplier + yMultiplier;
                if (curPixelMultiplier > 0)
                {
                    r += (curSrcColor32.r * curPixelMultiplier); g += (curSrcColor32.g * curPixelMultiplier); b += (curSrcColor32.b * curPixelMultiplier); a += (curSrcColor32.alpha * curPixelMultiplier);
                    multiplier += curPixelMultiplier;
                }
            }
        color32.r = BYTE(r / multiplier); color32.g = BYTE(g / multiplier); color32.b = BYTE(b / multiplier); color32.alpha = BYTE(a / multiplier);
        SetDIBPixel(x, y, color32);
    }
}
P.S. Please don't ask why I'm not using StretchDIBits - leave comments for those who understand that the system API is not always available or acceptable.
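For upscaling specifically, a plain bilinear fetch is another cheap option that slots into the same per-pixel loop. A minimal sketch (the RGBA struct and flat pixel vector are assumptions standing in for the DIB accessors):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RGBA { uint8_t r, g, b, a; };

// Bilinear sample from a w x h row-major image at normalized coords u,v in [0,1]:
// blend the four neighbouring texels by the fractional part of the position.
RGBA bilinearSample(const std::vector<RGBA>& src, int w, int h, double u, double v)
{
    double fx = u * (w - 1), fy = v * (h - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double tx = fx - x0, ty = fy - y0;
    auto lerp = [](double a, double b, double t) { return a + (b - a) * t; };
    auto at = [&](int x, int y) { return src[y * w + x]; };
    RGBA c00 = at(x0, y0), c10 = at(x1, y0), c01 = at(x0, y1), c11 = at(x1, y1);
    auto blend = [&](uint8_t a00, uint8_t a10, uint8_t a01, uint8_t a11) {
        return (uint8_t)(lerp(lerp(a00, a10, tx), lerp(a01, a11, tx), ty) + 0.5);
    };
    return { blend(c00.r, c10.r, c01.r, c11.r),
             blend(c00.g, c10.g, c01.g, c11.g),
             blend(c00.b, c10.b, c01.b, c11.b),
             blend(c00.a, c10.a, c01.a, c11.a) };
}
```

For each destination pixel you'd call this with u = x / (width - 1.0), v = y / (height - 1.0); it is only suitable for enlarging, since when shrinking it skips source pixels (which is exactly what the averaging code above avoids).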
Again, why do it on the CPU? Why not use OpenGL / DirectX and fragment shaders? In pseudocode:
upload source texture (cache it if it's to be reused)
create destination texture
use shader program
render quad
download output texture
where shader program is the filtering method you're using. The GPU is much better at processing pixels than CPU/GetPixel/SetPixel.
You could probably find fragment shaders for lots of different filtering methods on the web - GPU Gems is a good place to start.

simple 2d collision problem

I want to find when a collision between a static and a moving ball occurs, but the algorithm I came up with sometimes doesn't detect a collision, and the moving ball passes through the static one. The moving ball is affected by gravity; the static one is not.
Here's my collision detection code:
GLfloat whenSpheresCollide(const sphere2d &firstSphere, const sphere2d &secondSphere)
{
    Vector2f relativePosition = subtractVectors(firstSphere.vPosition, secondSphere.vPosition);
    Vector2f relativeVelocity = subtractVectors(firstSphere.vVelocity, secondSphere.vVelocity);
    GLfloat radiusSum = firstSphere.radius + secondSphere.radius;

    //We'll find the time when objects collide if a collision takes place
    //r(t) = P[0] + t * V[0]
    //
    //d^2(t) = P[0]^2 + 2 * t * P[0] * V[0] + t^2 * V[0]^2
    //
    //d^2(t) = V[0]^2 * t^2 + 2t( P[0] . V[0] ) + P[0]^2
    //
    //d(t) = R
    //
    //d(t)^2 = R^2
    //
    //V[0]^2 * t^2 + 2t( P[0] . V[0] ) + P[0]^2 - R^2 = 0
    //
    //delta = ( P[0] . V[0] )^2 - V[0]^2 * (P[0]^2 - R^2)
    //
    //We are interested in the lowest t:
    //
    //t = ( -( P[0] . V[0] ) - sqrt(delta) ) / V[0]^2

    GLfloat equationDelta = squaref( dotProduct(relativePosition, relativeVelocity) ) - squarev( relativeVelocity ) * ( squarev( relativePosition ) - squaref(radiusSum) );
    if (equationDelta >= 0.0f)
    {
        GLfloat collisionTime = ( - dotProduct(relativePosition, relativeVelocity) - sqrtf(equationDelta) ) / squarev(relativeVelocity);
        if (collisionTime >= 0.0f && collisionTime <= 1.0f / GAME_FPS)
        {
            return collisionTime;
        }
    }
    return -1.0f;
}
And here is the updating function that calls collision detection:
void GamePhysicsManager::updateBallPhysics()
{
    //Update velocity
    vVelocity->y -= constG / GAME_FPS; //v = a * t = g * 1 sec / (updates per second)
    shouldApplyForcesToBall = TRUE;
    vPosition->x += vVelocity->x / GAME_FPS;
    vPosition->y += vVelocity->y / GAME_FPS;
    if ( distanceBetweenVectors( *pBall->getPositionVector(), *pBasket->getPositionVector() ) <= pBasket->getRadius() + vectorLength(*vVelocity) / GAME_FPS )
    {
        //Ball sphere
        sphere2d ballSphere;
        ballSphere.radius = pBall->getRadius();
        ballSphere.mass = 1.0f;
        ballSphere.vPosition = *( pBall->getPositionVector() );
        ballSphere.vVelocity = *( pBall->getVelocityVector() );
        sphere2d ringSphereRight;
        ringSphereRight.radius = 0.05f;
        ringSphereRight.mass = -1.0f;
        ringSphereRight.vPosition = *( pBasket->getPositionVector() );
        //ringSphereRight.vPosition.x += pBasket->getRadius();
        ringSphereRight.vPosition.x += (pBasket->getRadius() - ringSphereRight.radius);
        ringSphereRight.vVelocity = zeroVector();
        GLfloat collisionTime = whenSpheresCollide(ballSphere, ringSphereRight);
        if ( collisionTime >= 0.0f )
        {
            DebugLog("collision");
            respondToCollision(&ballSphere, &ringSphereRight, collisionTime, pBall->getRestitution() * 0.75f );
        }
        //TODO: select the result that is first to collide
        vVelocity->x = ballSphere.vVelocity.x;
        vVelocity->y = ballSphere.vVelocity.y;
        vPosition->x = ballSphere.vPosition.x;
        vPosition->y = ballSphere.vPosition.y;
    }
}
Why isn't the collision being detected in 100% of cases? It's only detected in about 70% of cases.
Thanks.
UPDATE: The problem seems to be solved when I change FPS from 30 to 10. How does FPS affect my collision detection?
delta = ( P[0] . V[0] )^2 - V[0]^2 * (P[0]^2 - R^2)
Shouldn't that be delta = b^2 - 4ac?
[Edit] Oh I see, you factored the 4 out. In that case, are you sure you're considering both solutions for t?
t = ( -( P[0] . V[0] ) - sqrt(delta) ) / V[0]^2
and
t = ( -( P[0] . V[0] ) + sqrt(delta) ) / V[0]^2
How large are the spheres and how fast are they moving? Can a sphere "jump" over the second one during a frame (i.e., is its velocity vector longer than its width)?
Along those lines, what happens if you remove the upper limit here:
if (collisionTime >= 0.0f && collisionTime <= 1.0f / GAME_FPS)
{
    return collisionTime;
}
If the sphere was moving too fast, maybe your algorithm is detecting a collision that happened more than one frame ago?
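Putting those two points together - consider both roots, and don't discard contacts outside the current frame while debugging - a standalone sweep test might look like this sketch (plain C++; the names are illustrative, not from the question's codebase):

```cpp
#include <cmath>
#include <optional>

// Earliest time t >= 0 at which two circles come within radiusSum of each
// other, given relative position (px, py) and relative velocity (vx, vy).
// Same quadratic as in the question, but with both roots examined.
std::optional<float> sweepCircles(float px, float py, float vx, float vy, float radiusSum)
{
    float a = vx * vx + vy * vy;               // |V|^2
    float b = px * vx + py * vy;               // P . V (half the usual b, so the 4 cancels)
    float c = px * px + py * py - radiusSum * radiusSum;
    if (a == 0.0f) return std::nullopt;        // no relative motion
    float delta = b * b - a * c;               // reduced discriminant
    if (delta < 0.0f) return std::nullopt;     // paths never come that close
    float sq = std::sqrt(delta);
    float t0 = (-b - sq) / a;                  // first contact
    float t1 = (-b + sq) / a;                  // last contact
    if (t1 < 0.0f) return std::nullopt;        // collision entirely in the past
    return t0 >= 0.0f ? t0 : 0.0f;             // already overlapping: collide now
}
```

Clamping against the frame duration (t <= 1/FPS) should then happen at the call site, which also explains the observed FPS dependence: the larger the timestep, the further a ball travels per frame, and the more a one-frame window can miss.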