Hi, I have a problem while trying to create a dynamic texture to use as the background of my Ogre window. I want to assign a value to each pixel of the texture dynamically and then use that texture as the background.
I use this code to create the dynamic texture:
Ogre::TexturePtr texture = Ogre::TextureManager::getSingleton().createManual("BackgroundTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, Ogre::TEX_TYPE_2D, 800, 600, 0, Ogre::PF_R8G8B8, Ogre::TU_DYNAMIC);
Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().create("BackgroundMat",Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
material->getTechnique(0)->getPass(0)->createTextureUnitState("BackgroundTex");
material->getTechnique(0)->getPass(0)->setSceneBlending(Ogre::SBT_TRANSPARENT_COLOUR);
Ogre::Rectangle2D* rect = new Ogre::Rectangle2D(true);
rect->setCorners(-1.0, 1.0, 1.0, -1.0);
rect->setRenderQueueGroup(Ogre::RENDER_QUEUE_BACKGROUND);
rect->setBoundingBox(Ogre::AxisAlignedBox(-100000.0 * Ogre::Vector3::UNIT_SCALE, 100000.0 * Ogre::Vector3::UNIT_SCALE));
Ogre::SceneNode* node = sceneManager->getRootSceneNode()->createChildSceneNode("BackgroundMat");
node->attachObject(rect);
node->setVisible(true);
rect->setMaterial("BackgroundMat");
Ogre::HardwarePixelBufferSharedPtr pixelBuffer = texture->getBuffer();
pixelBuffer->lock(Ogre::HardwareBuffer::HBL_DISCARD);
const Ogre::PixelBox& pixelBox = pixelBuffer->getCurrentLock();
Ogre::uint8* pDest = static_cast<Ogre::uint8*>(pixelBox.data);
for(size_t i = 0; i < 600; i++)
{
    for(size_t j = 0; j < 800; j++)
    {
        *pDest++ = 0;   // R
        *pDest++ = 0;   // G
        *pDest++ = 255; // B
    }
}
pixelBuffer->unlock();
In this piece of code I assign blue (R:0, G:0, B:255) to every pixel. I expect to get a completely blue window, but instead of a blue background I get the background seen in the picture.
Instead of a blue background, the texture I obtain contains three different colors that always repeat in sequence. The blue pixels are correct, but the other two colors should be blue as well. I can't find the cause of this problem. What can I do? Which part is wrong?
According to Christopher's comment:
I have no experience with Ogre3D, but could it be that it actually gives you the image data as RGBA (or BGRA, or ARGB) instead of just RGB? You would then be missing an additional pDest++ (or maybe *pDest++ = 255), so in the first loop iteration you get blue, then green, then red, and then blue again, and so on, which would coincide with the image you show.
EDIT: In your comment you say (if I understood correctly) that you get a completely red image when you add an additional ++pDest in the loop. This at least tells us that you indeed get a 4-component image from Ogre3D, since we now are not out of sync with the colors anymore and have only a single color. But since this color is red, it seems Ogre3D gives you the image data as BGRA. So just set the first component to 255 instead of the third (and of course keep this additional ++pDest in there).
You may have specified the texture as PF_R8G8B8, but it seems Ogre3D has some freedom regarding the layout of the image data in the buffer and actually the graphics driver also has some freedom regarding the memory layout of the textures and often a 32-bit RGBA or BGRA image has some advantages over 24-bit RGB.
It may also depend on which underlying graphics API (D3D or GL) Ogre3D uses and what the standard is there. In GL, for example, you cannot map texture memory directly and need to use a PBO for this, whose memory layout can in turn be chosen differently from the texture's. I don't know about D3D, but I think D3D especially likes the BGRA layout.
EDIT: You can also check pixelBox.format to see what format the data has.
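For example (a minimal sketch of such a check; Ogre::PixelUtil::getFormatName returns a readable name for a pixel format):

Ogre::LogManager::getSingleton().logMessage(
    "Locked buffer format: " + Ogre::PixelUtil::getFormatName(pixelBox.format));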
I ran into the same problem.
The texture you created is in the format Ogre::PF_R8G8B8, but the actual hardware buffer Ogre uses is still Ogre::PF_R8G8B8A8, which means each pixel is 4 bytes rather than 3.
You can add another line in your loop:
pDest++;
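A minimal sketch of the adjusted loop (note that, as discussed in the answer above, the buffer may be laid out as BGRA rather than RGBA, in which case the 255 belongs in the first write instead of the third):

for(size_t i = 0; i < 600; i++)
{
    for(size_t j = 0; j < 800; j++)
    {
        *pDest++ = 0;
        *pDest++ = 0;
        *pDest++ = 255;
        pDest++; // skip the fourth (alpha) byte of each pixel
    }
}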
Related
MATLAB offers the ability to set colour limits for the current axis using CAXIS. OpenCV has applyColorMap, which can be used to highlight differences in pixel intensity in a greyscale image, and which I believe maps pixels from 0 to 255.
I am new to MATLAB and image processing and have been asked to port a simple program from MATLAB which uses the CAXIS function to change the "brightness" of a colour map. I have no experience in MATLAB, but it appears the function is used to "lower" the intensity required for a pixel to be mapped to a more intense colour on the map.
i.e. Colour map using "JET"
When brightness = 1, red = 255
When brightness = 10, red >= 25
The MATLAB program allows 16-bit images to be read in and displayed, which obviously gives higher pixel values, whereas everything I've read and done indicates OpenCV only supports 8-bit images (for colour maps).
Therefore my question is: is it possible to provide similar functionality in OpenCV? How do you set the axis limit for a colour map, or scale the colour map lookup table so that "less" intense pixels are mapped to the more intense regions?
A similar question was asked with a reply stating the array needs to be "normalised", but unfortunately I don't quite know how to achieve this and can't reply to the answer as I don't have enough rep!
I have gone ahead and used cv::normalize to set the max value in the array to maxPixelValue/brightness, but that doesn't work at all.
I have also experimented with converting my 16-bit image into CV_8UC1 with a scale factor, to no avail. Any help would be greatly appreciated!
In my opinion you can use cv::normalize to "crop" the values in the source image to the range of the colour map you are interested in. Say you want your image mapped to the blue-ish region of the Jet colormap; then you should do something like:
int minVal = 0, maxVal = 80;
cv::normalize(src,dst, minVal, maxVal, cv::NORM_MINMAX);
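Then, assuming dst is (or has been converted to) an 8-bit image, you could apply the built-in map mentioned in the question:

cv::Mat colored;
cv::applyColorMap(dst, colored, cv::COLORMAP_JET);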
If you plan to apply some kind of custom map, it's fairly easy for a 1- or 3-channel 8-bit image: you only need to create a LUT with 256 values (with the proper number of channels) and apply it using cv::LUT; more about it in this blog, and also see the docs about LUT.
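A minimal sketch of such a LUT (the halving transform is just a placeholder):

cv::Mat lut(1, 256, CV_8U);
for (int i = 0; i < 256; ++i)
    lut.at<uchar>(0, i) = cv::saturate_cast<uchar>(i / 2); // darken each value
cv::Mat mapped;
cv::LUT(src, lut, mapped); // src must be 8-bit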
If the image you are working with is of a different depth, 16-bit or even floating-point data, I guess all you need to do is write a function like:
template<class T>
T customColorMapper(T input_pixel)
{
    T output_pixel = 0;
    // do something with output_pixel based on input_pixel
    return output_pixel;
}
and apply it to each source image pixel like:
cv::Mat dst_image = src_image.clone(); // copy data
dst_image.forEach<TYPE>([](TYPE& input_pixel, const int* pos_row_col) -> void {
    input_pixel = customColorMapper<TYPE>(input_pixel);
});
Of course, TYPE needs to be a valid type. A specialized version of this function taking cv::Scalar or one of the cv::Vec3 types would be nice if you need to work with multiple channels.
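For instance, a hypothetical specialization for 3-channel 8-bit pixels might look like:

template<>
cv::Vec3b customColorMapper<cv::Vec3b>(cv::Vec3b input_pixel)
{
    // placeholder transform: swap the first and third channels
    return cv::Vec3b(input_pixel[2], input_pixel[1], input_pixel[0]);
}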
Hope this helps!
I managed to replicate the MATLAB behaviour, but I had to resort to manually iterating over each pixel and either setting the value to the maximum for the image depth or scaling it where needed.
My code looked something like this:
double min, max;
cv::minMaxLoc(dst, &min, &max);
double axisThreshold = floor(max / contrastLevel);

for (int i = 0; i < dst.rows; i++)
{
    for (int j = 0; j < dst.cols; j++)
    {
        // assuming a 16-bit unsigned image (CV_16UC1)
        ushort pixel = dst.at<ushort>(i, j);
        if (pixel >= axisThreshold)
        {
            pixel = USHRT_MAX;
        }
        else
        {
            pixel = cv::saturate_cast<ushort>(pixel * (USHRT_MAX / axisThreshold));
        }
        dst.at<ushort>(i, j) = pixel;
    }
}
In my example I had a slider which adjusted the contrast/brightness (we called it contrast, the original implementation called it brightness).
When the contrast/brightness was changed, the program would retrieve the maximum pixel value and then compute the axis limit by doing
calculatedThreshold = Max pixel value / contrast
Each pixel more than the threshold gets set to MAX, each pixel lower than the threshold gets multiplied by a scale factor calculated by
scale = MAX Pixel Value / calculatedThreshold.
To be honest, I can't say I fully understand the maths behind it. I just used trial and error until it worked; any help in that department would be appreciated, but it seems to do what I want!
My understanding is that the original MATLAB implementation's "brightness" is in fact an attempt to scale the colourmap so that the "brighter" the image, the less intense each pixel has to be to map to a particular colour in the colourmap.
Since applyColorMap only works on 8-bit images, when the brightness increases and the colourmap axis values decrease, we need to ensure the pixel values scale accordingly so that they match up with the "higher" intensity values in the map.
I have seen numerous OpenCV tutorials which use this approach to changing the contrast/brightness, but they often promote the use of the optimised convertTo (especially if you're trying to use the GPU). However, as far as I can see, convertTo applies the alpha/beta values uniformly, not on a pixel-by-pixel basis, so I can't use that approach.
I will update this question if I find more suitable OpenCV functions to achieve what I want.
I want to draw frames with Direct2D whose color channels are shifted along the x-axis. I know I could set the composition mode to D2D1_COMPOSITE_MODE_PLUS and draw each color channel separately so I can shift them manually, but I want to know whether there is another (maybe more efficient) way of drawing shapes with shifted color channels.
I attached an image which shows what I mean.
(I suggest opening this image in a new tab and zooming in to see the effect better.)
The way this is typically done is to sample 3 pixels from the input image at a time, each separated by some amount in the x direction, and combine the red from one, the green from another, and the blue from the third. Unfortunately, I don't know Direct2D at all, so I don't know the specifics of how it works there. But if you have a bitmap and a pointer to the pixels, you can simply subtract one (or more) pixels from that pointer, add one or more pixels to it, and read from those memory locations (being careful to account for image edges). Then pull the channels from the values you've read. For example:
struct RGBA8 { unsigned char red, green, blue, alpha; };

// width, height, baseAddressOfImage and output are assumed to come from your bitmap.
// Edge columns are skipped here for brevity; clamp or wrap them in real code.
for (int y = 0; y < height; ++y)
{
    for (int x = 1; x < width - 1; ++x)
    {
        const RGBA8* pixel = baseAddressOfImage + y * width + x;
        const RGBA8* pixelMinus1 = pixel - 1;
        const RGBA8* pixelPlus1 = pixel + 1;

        RGBA8& result = output[y * width + x];
        result.red = pixelMinus1->red;
        result.green = pixel->green;
        result.blue = pixelPlus1->blue;
        result.alpha = pixel->alpha;
    }
}
Note that you can add or subtract more than 1, but as mentioned above, you have to handle what happens at the edges in those cases.
In my OpenGL program, I'm loading a 24 BPP image with a width of 501. The GL_UNPACK_ALIGNMENT parameter is set to 4. People write that this shouldn't work, because the size of each row being uploaded (501*3 = 1503) is not divisible by 4. However, I can see a normal texture without artifacts when displaying it.
So my code works. I'm wondering why, so that I understand this fully and prevent the whole project from getting bugged.
Maybe (?) it works because I'm not just calling glTexImage2D. Instead, I first create a proper blank texture (with dimensions that are powers of two), then upload the pixels with glTexSubImage2D.
EDIT:
But do you think it makes sense to write code like this?
// w - the width of the image
// depth - the depth of the image in bytes per pixel
bool change_alignment = false;

if (depth != 4 && !is_divisible(w * depth)) // *
{
    change_alignment = true;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}

// ... now use glTexImage2D

if (change_alignment) glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // set back to default

// * - is_divisible checks divisibility by 4; of course we don't even need such a function,
//     but I wanted to make the code as clear as possible
I hope this would prevent the application from crashing or malfunctioning?
It depends on where your image data is coming from.
The Windows BMP format, for example, enforces a 4-byte row alignment. Indeed, formats like this are exactly why OpenGL has a row-alignment field: because some image formats enforce a row alignment.
So how correct it is to use a 4-byte row alignment on your data depends entirely on how your data is aligned in memory. Some image loaders will automatically align to 4 bytes. And some will not.
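If you cannot guarantee how the loader aligned the data, one common pattern (a sketch, not an official API; width and bytesPerPixel are whatever your image loader reports) is to derive the alignment from the actual row stride in bytes:

// Pick the largest legal unpack alignment (8, 4, 2 or 1) that divides the row stride.
int alignmentFor(int rowBytes)
{
    if (rowBytes % 8 == 0) return 8;
    if (rowBytes % 4 == 0) return 4;
    if (rowBytes % 2 == 0) return 2;
    return 1;
}

// ...
glPixelStorei(GL_UNPACK_ALIGNMENT, alignmentFor(width * bytesPerPixel));

Note this assumes the rows are tightly packed in memory; if the loader pads each row, use the padded stride instead.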
So I've set up my framework as a neat little system that wraps SDL, OpenGL and Box2D together for a 2D game.
The way it works is that I create an object of the "GameObject" class, specify a "source PNG", and it automatically creates an OpenGL texture and a Box2D body of the same dimensions.
Now I am worried about what happens when I start needing to render many different textures on screen.
Is it possible to load in all my sprite sheets at run time, and then group them all together into one texture? If so, how? And what would be a good way to implement it (so that I wouldn't have to manually specify any parameters or anything)?
The reason I want to do it at run time and not pre-done is so that I can easily load together all (or most) of the tiles, enemies etc. of a certain level into this one texture, because every level won't have the same enemies. It'd also make the whole art-creation process easier.
There are likely some libraries that already exist for creating texture atlases (optimal packing is a nontrivial problem) and converting old texture coordinates to the new ones.
However, if you want to do it yourself, you probably would do something like this:
Load all textures from disk (your "source PNG") and retrieve the raw pixel data buffer,
If necessary, convert all source textures into the same pixel format,
Create a new texture big enough to hold all the existing textures, along with a corresponding buffer to hold the pixel data
"Blit" the pixel data from the source images into the new buffer at a given offset (see below)
Create a texture as normal using the new buffer's data.
While doing this, determine the mapping from "old" texture coordinates into the "new" texture coordinates (should be a simple matter of recording the offsets for each element of the texture atlas and doing a quick transform). It would probably also be pretty easy to do it inside a pixel shader, but some profiling would be required to see if the overhead of passing the extra parameters is worth it.
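As a sketch of that transform (all names here are illustrative: offsetX/offsetY are the sub-texture's pixel position recorded while blitting, subWidth/subHeight its size, and atlasWidth/atlasHeight the atlas dimensions):

// Map a sub-texture's local UV into the atlas's UV space.
float atlasU = (offsetX + localU * subWidth) / float(atlasWidth);
float atlasV = (offsetY + localV * subHeight) / float(atlasHeight);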
Obviously you also want to check to make sure you are not doing something silly like loading the same texture into the atlas twice, but that's a concern that's outside this procedure.
To "blit" (copy) from the source image to the target image you'd do something like this (assuming you're copying a 128x128 texture into a 512x512 atlas texture, starting at (128, 0) on the target):
unsigned char* source = new unsigned char[ 128 * 128 * 4 ]; // in reality, comes from your texture loader
unsigned char* target = new unsigned char[ 512 * 512 * 4 ];

int targetX = 128;
int targetY = 0;

for(int sourceY = 0; sourceY < 128; ++sourceY) {
    for(int sourceX = 0; sourceX < 128; ++sourceX) {
        int from = (sourceY * 128 * 4) + (sourceX * 4); // 4 bytes per pixel (assuming RGBA)
        int to = ((targetY + sourceY) * 512 * 4) + ((targetX + sourceX) * 4); // same format as source

        for(int channel = 0; channel < 4; ++channel) {
            target[to + channel] = source[from + channel];
        }
    }
}
This is a very simple brute force implementation: there are much faster, more succinct and more clever ways to copy an array, but the idea is that you are basically copying the contents of the source texture into the target texture at a given X and Y offset. In the end, you will have created a new texture which contains the old textures in it.
If the indexing math doesn't make sense to you, think about how a 2D array is actually indexed inside a 1D space (such as computer memory).
Please forgive any bugs. This isn't production code but instead something I wrote without checking if it compiles or runs.
Since you're using SDL, I should mention that it has a nice function that might be able to help you: SDL_BlitSurface. You can create an SDL_Surface entirely within SDL and simply use SDL_BlitSurface to copy your source surfaces into it, then convert the atlas surface into a GL texture.
It will take care of all the math, and can also do a format conversion for you on the fly.
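A rough sketch of that approach (assuming the source surfaces are already loaded, e.g. with SDL_image; the masks here assume a little-endian RGBA layout):

// Create a 32-bit surface to serve as the atlas.
SDL_Surface* atlas = SDL_CreateRGBSurface(0, 512, 512, 32,
    0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000);

// Copy a source surface into the atlas at (128, 0); SDL converts formats on the fly.
SDL_Rect dstRect = { 128, 0, source->w, source->h };
SDL_BlitSurface(source, NULL, atlas, &dstRect);

// Upload the finished atlas as a GL texture (texture object assumed bound).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, atlas->w, atlas->h, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, atlas->pixels);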
I am using an LPDIRECT3DTEXTURE9 to hold my image.
This is the function used to display my picture:
int drawcharacter(SPRITE& person, LPDIRECT3DTEXTURE9& image)
{
    position.x = (float)person.x;
    position.y = (float)person.y;

    sprite_handler->Draw(
        image,
        &srcRect,
        NULL,
        &position,
        D3DCOLOR_XRGB(255,255,255));

    return 0;
}
According to the book I have, the RGB colour given as the last parameter will not be displayed on screen; this is how you create transparency.
This works for the most part but leaves a pink line around my image and at the edge of the picture. After trial and error I have found that if I go back into Photoshop I can eliminate the pink box by drawing over it with the pink colour. This can be seen with the ships on the left.
I am starting to think that Photoshop is blending the edges of the image so that the background is not all the same shade of pink, though I have no proof.
Can anyone help fix this in code, or is the error in the image?
If anyone is good at Photoshop, can they tell me how to fix the image? I use PNG mostly but am willing to change if necessary.
EDIT: texture creation code, as requested:
character_image = LoadTexture("character.bmp", D3DCOLOR_XRGB(255,0,255));
if (character_image == NULL)
return 0;
You are loading a BMP image, which does not support transparency natively; the last parameter, D3DCOLOR_XRGB(255,0,255), is being used to add transparency to an image which doesn't have any. The problem is that the color must match exactly: if it is off by even one, it will not be converted to transparent and you will see the near-magenta showing through.
Save your images as PNG with transparency (32-bit RGBA; Photoshop's "PNG-24" with the transparency option), and if you load them correctly there will be no problems. Also don't add the magenta background before you save them.
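For example, with D3DX you might load the PNG so that its own alpha channel is used and no color key is needed (a sketch; device is your IDirect3DDevice9):

LPDIRECT3DTEXTURE9 character_image = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    device, "character.png",
    D3DX_DEFAULT, D3DX_DEFAULT,   // width/height taken from the file
    D3DX_DEFAULT, 0,              // mip levels, usage
    D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT,   // filters
    0,                            // color key: 0 disables it, the PNG's alpha is used instead
    NULL, NULL, &character_image);

Remember to begin the sprite batch with D3DXSPRITE_ALPHABLEND so the alpha channel is actually blended.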
As you already use PNG, you can just store the alpha value there directly from Photoshop. PNG supports transparency out of the box, and it can give better appearance than what you get with transparent colour.
It's described in http://www.toymaker.info/Games/html/textures.html (for example).
Photoshop is anti-aliasing the edge of the image. If it determines that 30% of a pixel is inside the image and 70% is outside, it sets the alpha value for that pixel to 70%. This gives a much smoother result than using a pixel-based transparency mask. You seem to be throwing these alpha values away, is that right? The pink presumably comes from the way that Photoshop displays partially transparent pixels.