Is it possible to create a new tif by iterating pixel by pixel and setting the RGB values for each pixel?
Let me explain what I'm attempting to do. I'm trying to open an existing tif, read it using TIFFReadRGBAImage, take the RGB values given by TIFFGetR/TIFFGetG/TIFFGetB, subtract them from 255, take those new values and use them to write each pixel one by one. In the end I'd like to end up with the original image and a new "complement" image that would be like a negative of the original.
Is there a way to do this using LibTiff? I've gone over the documentation and searched around Google, but I've only seen very short examples of TIFFWriteScanline that provide so few lines of code/context/comments that I cannot figure out how to implement it in the way that I'd like it to work.
I'm still fairly new to programming so if someone could please either point me to a thorough example with plenty of explanatory comments or help me out directly with my code, I would appreciate it greatly. Thank you for taking the time to read this and help me learn.
What I have so far:
// Other unrelated code here...
// Invert color values and write to new image file
for (e = height - 1; e != -1; e--)
{
    for (c = 0; c < width; c++)
    {
        // raster is a flat, bottom-up array, so index by row and column
        red = TIFFGetR(raster[e * width + c]);
        newRed = 255 - red;
        green = TIFFGetG(raster[e * width + c]);
        newGreen = 255 - green;
        blue = TIFFGetB(raster[e * width + c]);
        newBlue = 255 - blue;
        // What to do next? Is this feasible?
    }
}
// Other unrelated code here...
Full code if you need it.
I went back and looked at my old code. It turns out that I didn't use libtiff. Nevertheless, you are on the right track. You want something like:
unsigned char *lineBuffer = (unsigned char *)malloc(width * 3); // 3 bytes per pixel
unsigned char *ptr;
uint32 row, col;
for (row = 0; row < height; row++)
{
    ptr = lineBuffer;
    // modify your inversion loop above so that it fills one output line at a time;
    // raster from TIFFReadRGBAImage is bottom-up, so flip the row index
    for (col = 0; col < width; col++)
    {
        uint32 pixel = raster[(height - row - 1) * width + col];
        *ptr++ = 255 - TIFFGetR(pixel);
        *ptr++ = 255 - TIFFGetG(pixel);
        *ptr++ = 255 - TIFFGetB(pixel);
    }
    // write the line using libtiff's scanline write
    if (TIFFWriteScanline(out, lineBuffer, row, 0) < 0)
        break;
}
free(lineBuffer);
Remember to set the tags appropriately. This example assumes 3-byte pixels; TIFF also allows for separate planes of 1 byte per pixel in each plane.
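For example, a minimal sketch of the tag setup for the chunky 3-bytes-per-pixel case might look like this (the output file name is made up here, and out is the handle the scanline writes above would use):

#include <tiffio.h>

TIFF *out = TIFFOpen("complement.tif", "w");
TIFFSetField(out, TIFFTAG_IMAGEWIDTH, width);
TIFFSetField(out, TIFFTAG_IMAGELENGTH, height);
TIFFSetField(out, TIFFTAG_SAMPLESPERPIXEL, 3);                // R, G, B
TIFFSetField(out, TIFFTAG_BITSPERSAMPLE, 8);                  // one byte per sample
TIFFSetField(out, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT);  // first scanline is the top row
TIFFSetField(out, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG); // chunky, not separate planes
TIFFSetField(out, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);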
Alternatively, you can write the whole image into a new buffer and write it out in one go, instead of one line at a time, as sketched below.
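A sketch of that whole-buffer variant, assuming the same tags as above plus TIFFTAG_ROWSPERSTRIP set so the entire image fits in a single strip:

unsigned char *buf = (unsigned char *)malloc((size_t)width * height * 3);
// ... fill buf with the inverted RGB triplets, top row first ...
TIFFSetField(out, TIFFTAG_ROWSPERSTRIP, height);        // whole image as one strip
TIFFWriteEncodedStrip(out, 0, buf, width * height * 3);
free(buf);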
Related
I am creating a program that allows you to view fractals like the Mandelbrot or Julia set. I would like to render them as quickly as possible. I would love a way to put an array of uint8_t pixel values onto the screen. The array is formatted like this...
{r0,g0,b0,r1,g1,b1,...}
(a one-dimensional array of RGB color values)
I know I have the proper data because before I just set individual points and it worked...
for (int i = 0; i < height * width; ++i) {
    // setStroke and point are functions that I made that together just draw a colored point
    r.setStroke(data[i*3], data[i*3+1], data[i*3+2]);
    r.point(i % r.window.w, i / r.window.w);
}
This is a pretty slow operation, especially if the screen is big (which I would like it to be).
Is there any faster way to just put all the data onto the screen?
I tried doing something like this:
void* pixels;
int pitch;
SDL_Texture* img = SDL_CreateTexture(ren, SDL_GetWindowPixelFormat(win),
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_LockTexture(img, NULL, &pixels, &pitch);
memcpy(pixels, data, window.w * 3 * window.h);
SDL_UnlockTexture(img);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
I have no idea what I'm doing so please have mercy
Edit (thank you for comments :))
So here is what I do now:
SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB888,SDL_TEXTUREACCESS_STREAMING,window.w,window.h);
SDL_UpdateTexture(img,NULL,&data[0],window.w * 3);
SDL_RenderCopy(ren,img,NULL,NULL);
SDL_DestroyTexture(img);
But I get this image... which is not what it should look like.
I am thinking that my data is just formatted wrong; right now it is formatted as an array of uint8_t in RGB order. Is there another way I should be formatting it? (Note: I do not need an alpha channel.)
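For what it's worth, the SDL2 format whose memory layout matches tightly packed 8-bit RGB triplets with no alpha is SDL_PIXELFORMAT_RGB24; SDL_PIXELFORMAT_RGB888 is a 4-bytes-per-pixel format, so a pitch of window.w * 3 will not line up with it. A minimal sketch using the same ren, window and data as above:

SDL_Texture* img = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STREAMING, window.w, window.h);
SDL_UpdateTexture(img, NULL, &data[0], window.w * 3); // pitch: 3 bytes per pixel, no padding
SDL_RenderCopy(ren, img, NULL, NULL);
SDL_DestroyTexture(img);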
Code is here:
void readOIIOImage( const char* fname, float* img)
{
int xres, yres;
ImageInput *in = ImageInput::create (fname);
if (! in) {return;}
ImageSpec spec;
in->open (fname, spec);
xres = spec.width;
yres = spec.height;
iwidth = spec.width;
iheight = spec.height;
channels = spec.nchannels;
cout << "\n";
pixels = new float[xres*yres*channels];
in->read_image (TypeDesc::FLOAT, pixels);
// interleave the decoded pixels into img, flipping vertically as we go
long index = 0;
for (int j = 0; j < yres; j++)
{
    for (int i = 0; i < xres; i++)
    {
        for (int c = 0; c < channels; c++)
        {
            img[(i + xres * (yres - j - 1)) * channels + c] = pixels[index++];
        }
    }
}
in->close ();
delete in;
}
Currently, my code handles JPG files fine. It has the ability to read the file's information and display it fine. However, when I try reading in a PNG file, it doesn't display correctly at all. Usually, it kind of displays the same distorted version of the image in three separate columns on the display. It's very strange. Any idea why this is happening with the given code?
Additionally, the JPG files all have 3 channels; the PNG has 2.
fname is simply a filename, and img is `new float[3*size]`.
Any help would be great. Thanks.
Usually, it kind of displays the same distorted version of the image in three separate columns on the display. It's very strange. Any idea why this is happening with the given code?
This reads a lot like the output you get from the decoder is in row-planar format. Planar means that you get individual rows, one for every channel, one after another. The distortion and the discrepancy between the number of channels in the PNG and the apparent channel count are likely due to an alignment mismatch. Now, you didn't specify exactly which image decoder library you're using, so I can't look up how it communicates the layout of the pixel buffer. I suppose you can read the necessary information from ImageSpec.
Anyway, you'll have to adjust the indexing in your pixel-buffer rearrangement loop a bit so that consecutive row-planes are interleaved into channel tuples.
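A hypothetical sketch of that rearrangement, assuming the decoder really does hand back row-planar data (each scanline stored as channels consecutive planes of xres samples), using the same variables as the code above:

// source: row j holds xres samples of channel 0, then channel 1, and so on
long src = 0;
for (int j = 0; j < yres; j++)
{
    for (int c = 0; c < channels; c++)
    {
        for (int i = 0; i < xres; i++)
        {
            img[(i + xres * (yres - j - 1)) * channels + c] = pixels[src++];
        }
    }
}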
Of course, you could just as well use a ready-made image-file-to-OpenGL reader library. DevIL is thrown around a lot, but it's not very well maintained; SOIL seems to be a popular choice these days.
I knew this was going to come back and bite me one day. I'm reading an image, doing a resize to 48 pixels tall (by whatever the width is), then grabbing the total image columns and reading each individual pixel to get the color values. All of this information gets written out to a file. The concise version of the code is this:
unsigned char cols, rows;
unsigned char red, green, blue;
short int myCol, myRow;
cols = processedImage.columns();
rows = processedImage.rows();
myFile.write(reinterpret_cast<const char *>(&cols), sizeof(cols));
for (myCol = cols - 1; myCol >= 0; myCol--) {
    for (myRow = rows - 1; myRow >= 0; myRow--) {
        Magick::ColorRGB rgb(processedImage.pixelColor(myCol, myRow));
        red = rgb.red() * 255;
        green = rgb.green() * 255;
        blue = rgb.blue() * 255;
        myFile.write(reinterpret_cast<const char*>(&red), sizeof(red));
        myFile.write(reinterpret_cast<const char*>(&green), sizeof(green));
        myFile.write(reinterpret_cast<const char*>(&blue), sizeof(blue));
    }
}
The problem here is when the image is wider than the 255 that a single unsigned char can hold. For example, I'm processing a file that's 494x48 pixels.
When I look at the (binary) file created, the first line, which holds the column count, says it's '238' (494 wraps around to 494 - 256 = 238). The next line starts the RGB data:
0: 238 // Column count
1: 255 // Red
2: 0 // Green
3: 0 // Blue
4: 255 // Red
5: 0 // Green
6: 0 // Blue
So I'm stuck. How can I store the actual columns value as a single line in the resulting file?
What about using more than one character instead of a single one? Say you use 4 characters to store cols, rows, etc.; since one character can store 0-255, 4 characters can store 256x256x256x256 values, i.e. 32 bits, which is plenty.
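A minimal sketch of that idea, writing the count low byte first so the reader knows the byte order (the 4-byte width is an assumption; 2 bytes would already cover this case):

uint32_t cols32 = processedImage.columns();
for (int shift = 0; shift < 32; shift += 8) {
    unsigned char b = static_cast<unsigned char>((cols32 >> shift) & 0xFF);
    myFile.write(reinterpret_cast<const char*>(&b), sizeof(b)); // low byte first
}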
Answering my own question. Thanks to everyone who responded and helped figure out what I was doing wrong. The issue here stems from months of making assumptions based on Arduino code. Arduino has a single INT/UINT type, and I was using that to read in values from the generated files. I assumed that data type was a uint8_t, when in reality I discovered it's a uint16_t. As it was messing up other parts of the code (namely which position to seek to in a file), I had switched to a char data type, as that only takes up 1 byte. But in doing so I hit the roll-over issue mentioned above. So the solution, now that I know more about how the data types work within Arduino code:
- change the image file processing to use uint16_t for both rows and columns (see the sketch below)
- (since I have access to it) change the reading on the Arduino side to also use uint16_t
- change the file seek command to move one more byte after the "header" so the data being read doesn't get mangled
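A minimal sketch of the first change (assuming, as holds for AVR Arduinos and x86, that both sides are little-endian, so the raw two-byte write matches what the reader expects):

uint16_t cols16 = processedImage.columns(); // holds up to 65535 columns
uint16_t rows16 = processedImage.rows();
myFile.write(reinterpret_cast<const char*>(&cols16), sizeof(cols16)); // 2 bytes
myFile.write(reinterpret_cast<const char*>(&rows16), sizeof(rows16)); // 2 bytes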
And ultimately, I've now stopped using Arduino's built-in data types and switched to platform-independent data types that are actually what they say they are.
Chalk this up to another learning experience (in my entire process of actually learning C++)...
Suppose we have a 32-bit PNG file of some ghostly/incorporeal character, which is drawn in a semi-transparent fashion. It is not equally transparent in every place, so we need the per-pixel alpha information when loading it to a surface.
For fading in/out, setting the alpha value of an entire surface is a good way; but not in this case, as the surface already has the per-pixel information and SDL doesn't combine the two.
What would be an efficient workaround (instead of asking the artist to provide some awesome fade in/out animation for the character)?
I think the easiest way to achieve the result you want is to start by loading the source surface containing your character sprites and then, for every instance of your ghost, create a working copy of the surface. Every time the alpha value of an instance changes, SDL_BlitSurface (doc) your source into your working copy, and then apply your transparency (which you should probably keep as a float between 0 and 1) to every pixel's alpha channel.
In the case of a 32-bit surface, assuming that you initially loaded source and allocated working SDL_Surfaces, you can probably do something along the lines of:
SDL_BlitSurface(source, NULL, working, NULL);
if(SDL_MUSTLOCK(working))
{
if(SDL_LockSurface(working) < 0)
{
return -1;
}
}
Uint8 *pixels = (Uint8 *)working->pixels;
int pitch_padding = working->pitch - (4 * working->w); // bytes of padding at the end of each row
pixels += 3; // Big Endian will have an offset of 0, otherwise it's 3 (R, G and B come first)
for (int row = 0; row < working->h; ++row)
{
    for (int col = 0; col < working->w; ++col)
    {
        *pixels = (Uint8)(*pixels * character_transparency); // scale the alpha byte; could be optimized but probably not worth it
        pixels += 4; // advance to the next pixel's alpha byte
    }
    pixels += pitch_padding; // skip the row padding, if any
}
if(SDL_MUSTLOCK(working))
{
SDL_UnlockSurface(working);
}
This code was inspired by SDL_gfx (here), but if this is all you need, I wouldn't bother linking against a library just for that.
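A hypothetical usage sketch for a fade-out, assuming the loop above is wrapped in a function named apply_fade (a made-up name) and that screen and destination come from your own code:

// each frame: step the fade, rebuild the working copy, draw it
character_transparency -= 0.02f;
if (character_transparency < 0.0f)
    character_transparency = 0.0f;
apply_fade(source, working, character_transparency); // blit + alpha scaling, as above
SDL_BlitSurface(working, NULL, screen, &destination);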
I've been working for a while on image processing and I've noticed weird things.
I'm reading a BMP file, using simple methods like ReadFile and stuff, and using Microsoft's BMP structures.
Here is the code:
ReadFile(_bmpFile, &bfh, sizeof(bfh), &data, NULL);   // BITMAPFILEHEADER
ReadFile(_bmpFile, &bih, sizeof(bih), &data, NULL);   // BITMAPINFOHEADER
imagesize = bih.biWidth * bih.biHeight;
image = new RGBQUAD[imagesize];
ReadFile(_bmpFile, image, imagesize * sizeof(RGBQUAD), &written, NULL);
That is how I read the file and then I'm turning it into gray scale using a simple for-loop.
for (int i = 0; i < imagesize; i++)
{
    RED = image[i].rgbRed;
    GREEN = image[i].rgbGreen;
    BLUE = image[i].rgbBlue;
    avg = (RED + GREEN + BLUE) / 3;
    image[i].rgbRed = avg;
    image[i].rgbGreen = avg;
    image[i].rgbBlue = avg;
}
Now when I write the file using this code:
#pragma pack(push, 1)
WriteFile(_bmpFile, &bfh, sizeof(bfh), &data, NULL);
WriteFile(_bmpFile, &bih, sizeof(bih), &data, NULL);
WriteFile(_bmpFile, image, imagesize * sizeof(RGBQUAD), &written, NULL);
#pragma pack(pop)
The file is getting much bigger (30 MB -> 40 MB). The reason this happens is that I'm using RGBQUAD instead of RGBTRIPLE, but if I use RGBTRIPLE I have a problem converting small pictures to grayscale: I can't open the picture after creating it (it says it's not in the right structure). Also, the file size is missing one byte (1174 KB before, 1173 KB after).
Has anybody seen this before (it only occurs with small pictures)?
In a BMP file, every scan line has to be padded out so the next scan line starts on a 32-bit boundary. If you do 32 bits per pixel, that happens automatically, but if you use 24 bits per pixel, you'll need to add code to do it explicitly.
You are ignoring stride (Jerry's comment) and the pixel format of the bitmap, which is 24bpp judging by the file size increase; you are writing it as though it were 32bpp. Your grayscale conversion is also wrong: the human eye isn't equally sensitive to red, green and blue.
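For reference, a commonly used perceptual weighting (the Rec. 601 luma coefficients) would replace the plain average in the loop above with something like:

avg = (unsigned char)(0.299 * RED + 0.587 * GREEN + 0.114 * BLUE);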
Consider using GDI+: #include <gdiplus.h> in your code to use the Bitmap class. Its LockBits() method gives you access to the bitmap bits. The ColorMatrixEffect class lets you apply a color transformation in a single operation. Check this answer for the color matrix you need to get a grayscale image. The MSDN docs start here.
Each horizontal row in a BMP must be a multiple of 4 bytes long.
If the pixel data does not take up a multiple of 4 bytes, then 0x00 bytes are added at the end of the row. For a 24-bpp image, the number of bytes per row is (imageWidth*3 + 3) & ~3. The number of padding bytes is ((imageWidth*3 + 3) & ~3) - (imageWidth*3).
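A sketch of what that means on the writing side, assuming the pixels are held in a hypothetical pixels24 array of RGBTRIPLEs (the Windows 3-byte pixel struct) and reusing _bmpFile and written from the question:

int rowBytes = (bih.biWidth * 3 + 3) & ~3;  // padded bytes per scan line
int padBytes = rowBytes - bih.biWidth * 3;  // 0 to 3 zero bytes of padding
unsigned char pad[3] = {0, 0, 0};
for (int y = 0; y < bih.biHeight; y++) {
    // one row of 24-bpp pixels, then its padding
    WriteFile(_bmpFile, &pixels24[y * bih.biWidth], bih.biWidth * sizeof(RGBTRIPLE), &written, NULL);
    if (padBytes)
        WriteFile(_bmpFile, pad, padBytes, &written, NULL);
}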
This was answered by immibis. I would just like to add that the size of the whole pixel array is ((imageWidth*3 + 3) & ~3) * imageHeight.
I hope this helps.