Possible to blit images with alpha mask onto transparent surface? - c++

I am trying to do just that. I have a sheet of tiles for an explosion in my game, and I want to preprocess the explosion tiles into a single image and then blit it onto the screen.
Here is the tile sheet with the alpha mask: Explosion
Now, I want to blit these tiles onto a surface so that they keep their alpha transparency, and then render that surface.
Here is my code:
SDL_Surface* SpriteManager::buildExplosion(int id, SDL_Surface* image, int size)
{
    // Create the surface that will hold the explosion image
    SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32, size * 32, 32, 0, 0, 0, 255);

    // Our source and destination rectangles
    SDL_Rect srcrect;
    SDL_Rect dstrect;

    int parentX = sprites[id].x;
    int parentY = sprites[id].y;
    int middle = size / 2;

    // Create the first image
    srcrect.x = sprites[id].imgBlockX * 32; // default for now
    srcrect.y = sprites[id].imgBlockY * 32; // default for now
    srcrect.w = 32;
    srcrect.h = 32;

    // Get the location it should be applied to
    dstrect.x = middle * 32;
    dstrect.y = middle * 32;
    dstrect.w = 32;
    dstrect.h = 32;

    // Apply the texture
    SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    errorLog.writeError("Applying surface from x: %i y: %i to x: %i y:%i", srcrect.x, srcrect.y, dstrect.x, dstrect.y);

    // Iterate through each explosion
    for(int i = 0; i < sprites[id].children.size(); i++)
    {
        // Get the texture source
        srcrect.x = 0; // default for now
        srcrect.y = 0; // default for now
        srcrect.w = 32;
        srcrect.h = 32;

        // Get the location it should be applied to
        dstrect.x = sprites[id].children[i].x - parentX * 32;
        dstrect.y = sprites[id].children[i].y - parentY * 32;
        dstrect.w = 32;
        dstrect.h = 32;

        // Apply the texture
        SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    }

    //return img;
    return explosion;
}
I suspect it has to do with this line but I am really at a loss:
SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32 , size * 32, 32, 0, 0, 0, 255 );
Just to be clear, the SDL_Surface called image is the image I linked above. If anyone sees the error of my ways, many thanks!
My problem: the code above blits either a completely invisible surface or a black surface with the images on it.
I guess I am curious whether it is possible to do what I described above, and whether I can modify this code to make it work.
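For what it's worth, here is a minimal sketch (assuming SDL 1.2, which the SDL_HWSURFACE flag suggests) of creating a 32-bit surface with explicit RGBA masks rather than 0, 0, 0, 255, and of copying the source's alpha channel verbatim instead of blending it away:
// Sketch only: explicit RGBA masks so the new surface really has an alpha channel.
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
Uint32 rmask = 0xFF000000, gmask = 0x00FF0000, bmask = 0x0000FF00, amask = 0x000000FF;
#else
Uint32 rmask = 0x000000FF, gmask = 0x0000FF00, bmask = 0x00FF0000, amask = 0xFF000000;
#endif
SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_SWSURFACE, size * 32, size * 32,
                                              32, rmask, gmask, bmask, amask);

// While SDL_SRCALPHA is set on the source, SDL blends during the blit instead of
// copying the alpha channel; clearing it first makes the blit copy RGBA verbatim.
SDL_SetAlpha(image, 0, 0);
SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
Whether the alpha should be copied or blended here depends on what the finished explosion surface is later blitted onto, so treat this as a starting point rather than a known fix.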

Related

OpenGL: Textures are not aligned correctly

I have a problem with OpenGL (in LWJGL) and Texture Mapping. I'm loading an ARGB image using
public static ByteBuffer toByteArray(BufferedImage image) {
    int[] pixels = new int[image.getWidth() * image.getHeight()];
    image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());
    ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * 4);
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            int pixel = pixels[y * image.getWidth() + x];
            buffer.put((byte) ((pixel >> 16) & 0xFF)); // Red component
            buffer.put((byte) ((pixel >> 8) & 0xFF));  // Green component
            buffer.put((byte) ((pixel >> 0) & 0xFF));  // Blue component
            buffer.put((byte) ((pixel >> 24) & 0xFF)); // Alpha component.
        }
    }
    buffer.flip();
    return buffer;
}
I'm uploading the textures using
int[] textureIds = new int[textures.size()];
GL11.glGenTextures(textureIds);
int i = 0;
for (Texture texture : textures.values()) {
    int textureId = textureIds[i++];
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_REPEAT);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_REPEAT);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 4);
    BufferedImage data = texture.load();
    ByteBuffer bytes = Texture.toByteArray(data);
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, data.getWidth(), data.getHeight(), 0, GL11.GL_RGBA,
            GL11.GL_UNSIGNED_BYTE, bytes);
    // GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);
    texture.setTextureId(textureId);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
}
Unfortunately I'm ending up with something like this:
But actually it should look like this (in Blender):
The texture can be found here: Here is the texture
So everything is crooked and somewhat following diagonals. The model is made with Blender and therefore has proper texture coordinates. I also managed to load the model plus texture in another engine, but not in mine. Does anyone have an idea how to fix this?

DirectX: Why does drawing a bitmap image scaled up in the viewport give low quality?

I'm using DirectX to draw images from RGB data in a buffer. The following is a summary of the code:
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC; // write access by CPU and GPU
bd.ByteWidth = sizeOfOurVertices; // size is the VERTEX struct * pW*pH
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
//Create Sample for texture
D3D11_SAMPLER_DESC desc;
desc.Filter = D3D11_FILTER_ANISOTROPIC;
desc.MaxAnisotropy = 16;
ID3D11SamplerState *ppSamplerState = NULL;
dev->CreateSamplerState(&desc, &ppSamplerState);
devcon->PSSetSamplers(0, 1, &ppSamplerState);
//Create list vertices from RGB data buffer
pW = bitmapSource->PixelWidth;
pH = bitmapSource->PixelHeight;
OurVertices = new VERTEX[pW*pH];
vIndex = 0;
unsigned char* curP = rgbPixelsBuff;
for (y = 0; y < pH; y++)
{
    for (x = 0; x < pW; x++)
    {
        OurVertices[vIndex].Color.b = *curP++;
        OurVertices[vIndex].Color.g = *curP++;
        OurVertices[vIndex].Color.r = *curP++;
        OurVertices[vIndex].Color.a = *curP++;
        OurVertices[vIndex].X = x;
        OurVertices[vIndex].Y = y;
        OurVertices[vIndex].Z = 0.0f;
        vIndex++;
    }
}
sizeOfOurVertices = sizeof(VERTEX)* pW*pH;
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeOfOurVertices); // copy the data
devcon->Unmap(pVBuffer, NULL); // unmap the buffer
// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
// select which vertex buffer to display
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
// select which primtive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
// draw the vertex buffer to the back buffer
devcon->Draw(pW*pH, 0);
// switch the back buffer and the front buffer
swapchain->Present(0, 0);
When the viewport's size is smaller than or equal to the image's size, everything is OK. But when the viewport's size is larger than the image's size, the image quality is very bad.
I've searched and tried desc.Filter = D3D11_FILTER_ANISOTROPIC; as in the code above (I've also tried D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR and D3D11_FILTER_MIN_LINEAR_MAG_MIP_POINT), but the result is no better. The following images show the result:
Can someone tell me how to fix it? Many thanks!
You are drawing each pixel as a point using DirectX. It is normal that when the screen size gets bigger, your points will move apart and the quality will be bad. You should draw a textured quad instead, using a texture that you fill with your RGB data and a pixel shader.
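As a rough sketch of that suggestion in D3D11 (the names pImageTex, pImageSRV and the quad layout below are illustrative assumptions, not code from the question), the pixel buffer becomes a texture and the geometry shrinks to six vertices:
// Upload the BGRA buffer once as a texture instead of one vertex per pixel.
D3D11_TEXTURE2D_DESC td = {};
td.Width = pW;
td.Height = pH;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // matches the b, g, r, a byte order above
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = rgbPixelsBuff;
init.SysMemPitch = pW * 4;                 // bytes per row: 4 bytes per pixel

ID3D11Texture2D* pImageTex = NULL;
ID3D11ShaderResourceView* pImageSRV = NULL;
dev->CreateTexture2D(&td, &init, &pImageTex);
dev->CreateShaderResourceView(pImageTex, NULL, &pImageSRV);
devcon->PSSetShaderResources(0, 1, &pImageSRV);

// Two triangles covering the viewport in clip space; the sampler then filters the
// texture when it is stretched, instead of the points drifting apart.
struct QuadVertex { float x, y, z, u, v; };
QuadVertex quad[6] = {
    { -1.f,  1.f, 0.f, 0.f, 0.f }, { 1.f,  1.f, 0.f, 1.f, 0.f }, { -1.f, -1.f, 0.f, 0.f, 1.f },
    { -1.f, -1.f, 0.f, 0.f, 1.f }, { 1.f,  1.f, 0.f, 1.f, 0.f }, {  1.f, -1.f, 0.f, 1.f, 1.f },
};
// Put quad[] in a vertex buffer, draw it with D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST,
// and sample the texture in the pixel shader, e.g.:
//   Texture2D tex : register(t0);
//   SamplerState smp : register(s0);
//   float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
//   { return tex.Sample(smp, uv); }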

DirectX Texture Not Drawing Correctly

I'm trying to render a texture to the screen using DirectX without DirectXTK.
This is the texture that I am trying to render on screen (512x512px):
The texture loads correctly, but when it is put on the screen it comes up like this:
I noticed that the rendered image seems to be the texture tiled four times in the x-direction and many times in the y-direction. The tiles seem to increase in height as the texture is rendered farther down the screen.
I have two thoughts as to why the texture is rendered incorrectly:
I could have initialized the texture incorrectly.
I could have improperly set up my texture sampler.
Regarding improper texture initialization, here is the code that I used to initialize the texture.
Texture2D & Shader Resource View Creation Code
Load Texture Data
This loads the texture from a PNG file into a vector of unsigned chars and sets the width and height of the texture.
std::vector<unsigned char> fileData;
if (!loadFileToBuffer(fileName, fileData))
    return nullptr;
std::vector<unsigned char> imageData;
unsigned long width;
unsigned long height;
decodePNG(imageData, width, height, fileData.data(), fileData.size());
Create Texture Description
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
Assign Texture Subresource Data
D3D11_SUBRESOURCE_DATA texData;
ZeroMemory(&texData, sizeof(D3D11_SUBRESOURCE_DATA));
texData.pSysMem = (void*)imageData.data();
texData.SysMemPitch = sizeof(unsigned char) * width;
//Create DirectX Texture In The Cache
HR(m_pDevice->CreateTexture2D(&texDesc, &texData, &m_textures[fileName]));
Create Shader Resource View for Texture
D3D11_SHADER_RESOURCE_VIEW_DESC srDesc;
ZeroMemory(&srDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srDesc.Texture2D.MipLevels = 1;
HR(m_pDevice->CreateShaderResourceView(m_textures[fileName], &srDesc,
                                       &m_resourceViews[fileName]));
return m_resourceViews[fileName]; // This return value is used as "texture" in the next line
Use The Texture Resource
m_pDeviceContext->PSSetShaderResources(0, 1, &texture);
I have messed around with the MipLevels and SampleDesc.Quality variables to see if they changed anything about the texture, but changing them either made the texture black or did nothing.
I also looked into the SysMemPitch variable and made sure that it aligned with MSDN.
Regarding setting up my sampler incorrectly, here is the code that I used to initialize my sampler.
//Setup Sampler
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(D3D11_SAMPLER_DESC));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.BorderColor[0] = 1.0f;
samplerDesc.BorderColor[1] = 1.0f;
samplerDesc.BorderColor[2] = 1.0f;
samplerDesc.BorderColor[3] = 1.0f;
samplerDesc.MinLOD = -FLT_MAX;
samplerDesc.MaxLOD = FLT_MAX;
HR(m_pDevice->CreateSamplerState(&samplerDesc, &m_pSamplerState));
//Use the sampler
m_pDeviceContext->PSSetSamplers(0, 1, &m_pSamplerState);
I have tried different AddressU/V/W types to see if the texture was loaded with an incorrect width/height and was thus shrunk, but changing these did nothing.
My VertexShader passes the texture coordinates through using TEXCOORD0 and my PixelShader uses texture.Sample(samplerState, input.texCoord); to get the color of the pixel.
In summary, I am trying to render a texture, but the texture gets tiled and I am not able to figure out why. What do I need to change to render just one copy of my texture?
I think you assign the wrong pitch:
texData.SysMemPitch = sizeof(unsigned char) * width;
should be
texData.SysMemPitch = 4 * sizeof(unsigned char) * width;
because each pixel has the DXGI_FORMAT_R8G8B8A8_UNORM format and occupies 4 bytes.

glDrawPixels puts each RGB component in a different pixel

As the title says, the red, green, and blue values end up in different pixels, making the screen show stripes of red, green, and blue.
The code is pretty much this (w and h are the width and height):
unsigned int pixels[w * h * 3];
for (unsigned int i = 0; i < w * h * 3; i += 3) {
    pixels[i + 0] = 0xff; // Red
    pixels[i + 1] = 0xff; // Green
    pixels[i + 2] = 0xff; // Blue
}
while (windowIsOpen()) {
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glSwapBuffers();
}
But this produces an image like this when it should be all white:
Any help would be amazing! I know glDrawPixels is deprecated, but I need an easy way to draw pixels on the screen, and performance isn't an issue for this project.
Please note that you are using a buffer of ints (sizeof(int) is 4) while you are telling OpenGL that you are sending GL_UNSIGNED_BYTE data (sizeof(char) is 1).
Change your buffer from int to char and see if everything works.
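A minimal sketch of that change, keeping the loop from the question but switching the buffer to one byte per component so it matches GL_UNSIGNED_BYTE (std::vector is used here instead of the variable-length array):
#include <vector>

std::vector<unsigned char> pixels(w * h * 3); // one byte per component
for (unsigned int i = 0; i < w * h * 3; i += 3) {
    pixels[i + 0] = 0xff; // Red
    pixels[i + 1] = 0xff; // Green
    pixels[i + 2] = 0xff; // Blue
}
glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());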

SDL_Texture opacity

How can I change the opacity of an SDL_Texture? I don't know how to apply the opacity value in my function.
My code:
void drawTexture(SDL_Texture *img, int x, int y, int width, int height, double opacity)
{
    SDL_Rect SrcR;
    SDL_Rect DestR;

    SrcR.x = 0;
    SrcR.y = 0;
    SrcR.w = width;
    SrcR.h = height;

    DestR.x = x;
    DestR.y = y;
    DestR.w = width;
    DestR.h = height;

    SDL_RenderCopy(_main::_main_renderer, img, &SrcR, &DestR);
}
Use SDL_SetTextureAlphaMod:
SDL_SetTextureAlphaMod(img, opacity);
This will set the opacity (alpha) of the texture; the alpha value must be a Uint8 from 0 (totally transparent, i.e. invisible) to 255 (fully opaque).
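In the drawTexture() function from the question, that could look roughly like this. One caveat: depending on how the texture was created, its blend mode may default to SDL_BLENDMODE_NONE, in which case the alpha mod has no visible effect, so this sketch sets it explicitly (and assumes opacity is already a 0-255 value):
SDL_SetTextureBlendMode(img, SDL_BLENDMODE_BLEND);        // enable alpha blending
SDL_SetTextureAlphaMod(img, static_cast<Uint8>(opacity)); // apply the opacity
SDL_RenderCopy(_main::_main_renderer, img, &SrcR, &DestR);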
The opening question doesn't give any info on the origin of the img texture, so the accepted answer is not correct if the texture is created from raw pixel data that doesn't have alpha, e.g. using this:
SDL_UpdateTexture(img, NULL, pixels, pitch);
If pixels contains raw pixel data without alpha, e.g. ARGB with A = 0x00, then even if you do this:
SDL_UpdateTexture(img, NULL, pixels, pitch);
SDL_SetTextureAlphaMod(img, opacity);
you will not see the texture (in this case the alpha is 0x00), or you'll see garbage.
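A minimal sketch of the situation described here (the renderer, w, h, pixels and pitch names are assumptions for illustration):
// Streaming texture with an alpha channel, filled from a buffer that never sets A.
SDL_Texture* img = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING, w, h);
// Every pixel in 'pixels' is 0x00RRGGBB, i.e. the alpha byte is 0x00.
SDL_UpdateTexture(img, NULL, pixels, pitch);
SDL_SetTextureBlendMode(img, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(img, 128);
// Nothing shows up: the per-pixel alpha of 0x00 is multiplied with the alpha mod,
// so the texture stays fully transparent.
Filling in the alpha bytes (e.g. with 0xFF) before calling SDL_UpdateTexture(), or choosing a pixel format without alpha such as SDL_PIXELFORMAT_RGB888, avoids this.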