Reading the width and height from a PNG header - C++

I am experimenting with reading the width and height of a PNG file.
This is my code:
struct TImageSize {
    int width;
    int height;
};

bool getPngSize(const char *fileName, TImageSize &is) {
    std::ifstream file(fileName, std::ios_base::binary | std::ios_base::in);
    if (!file.is_open() || !file) {
        file.close();
        return false;
    }
    // Skip PNG file signature
    file.seekg(9, std::ios_base::cur);
    // First chunk: IHDR image header
    // Skip Chunk Length
    file.seekg(4, std::ios_base::cur);
    // Skip Chunk Type
    file.seekg(4, std::ios_base::cur);
    __int32 width, height;
    file.read((char*)&width, 4);
    file.read((char*)&height, 4);
    std::cout << file.tellg();
    is.width = width;
    is.height = height;
    file.close();
    return true;
}
If I try to read, for example, this image from Wikipedia, I get these wrong values:
252097920 (should be 800)
139985408 (should be 600)
Note that the function is not returning false, so the contents of the width and height variables must come from the file.

It looks like you're off by a byte:
// Skip PNG file signature
file.seekg(9, std::ios_base::cur);
The PNG Specification says the header is 8 bytes long, so you want that "9" to be an "8" instead. Positions start at 0.
Also note that the spec says that integers are in network (big-endian) order, so you may want or need to use ntohl() or otherwise convert byte order if you're on a little-endian system.
It's probably worth using libpng or stb_image or something similar rather than attempting to parse the png yourself, though -- unless you're doing this to learn.

When you look at Portable Network Graphics Technical details, it says the signature is 8 bytes, not 9.
Plus, are you sure your system uses the same byte order as the PNG standard? ntohl(3) will ensure the correct byte order; it is available on Windows as well.
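Putting both fixes together (the 8-byte signature skip and the big-endian conversion), a corrected sketch of getPngSize might look like this; it swaps the bytes by hand rather than calling ntohl so it stays self-contained:

#include <cstdint>
#include <fstream>

struct TImageSize {
    int width;
    int height;
};

bool getPngSize(const char *fileName, TImageSize &is) {
    std::ifstream file(fileName, std::ios_base::binary | std::ios_base::in);
    if (!file)
        return false;
    // 8-byte PNG signature, then the IHDR chunk's 4-byte length
    // and 4-byte type fields: 8 + 4 + 4 = 16 bytes to skip.
    file.seekg(16, std::ios_base::beg);
    unsigned char buf[8];
    if (!file.read(reinterpret_cast<char*>(buf), 8))
        return false;
    // Width and height are stored as 4-byte big-endian integers.
    auto be32 = [](const unsigned char *p) {
        return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16) |
               (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
    };
    is.width  = static_cast<int>(be32(buf));
    is.height = static_cast<int>(be32(buf + 4));
    return true;
}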

Related

Loading a WAV file, but there is random nonsense at the end of the data rather than the expected samples

I've got a simple WAV header reader I found online a long time ago. I've come back to using it, but it seems to replace around 1200 samples towards the end of the data chunk with a single random repeated number, e.g. -126800. The end of the sample is expected to be silence, so the number should be zero.
Here is the simple program:
void main() {
    WAV_HEADER* wav = loadWav(".\\audio\\test.wav");
    double sample_count = wav->SubChunk2Size * 8 / wav->BitsPerSample;
    printf("Sample count: %i\n", (int)sample_count);
    vector<int16_t> samples = vector<int16_t>();
    for (int i = 0; i < wav->SubChunk2Size; i++)
    {
        int val = ((wav->data[i] & 0xff) << 8) | (wav->data[i + 1] & 0xff);
        samples.push_back(val);
    }
    printf("done\n");
}
And here is the Wav reader:
typedef struct
{
    //riff
    uint32_t Chunk_ID;
    uint32_t ChunkSize;
    uint32_t Format;
    //fmt
    uint32_t SubChunk1ID;
    uint32_t SubChunk1Size;
    uint16_t AudioFormat;
    uint16_t NumberOfChanels;
    uint32_t SampleRate;
    uint32_t ByteRate;
    uint16_t BlockAlignment;
    uint16_t BitsPerSample;
    //data
    uint32_t SubChunk2ID;
    uint32_t SubChunk2Size;
    //Everything else is data. We note its offset
    char data[];
} WAV_HEADER;
#pragma pack()

inline WAV_HEADER* loadWav(const char* filePath)
{
    long size;
    WAV_HEADER* header;
    void* buffer;
    FILE* file;
    fopen_s(&file, filePath, "r");
    assert(file);
    fseek(file, 0, SEEK_END);
    size = ftell(file);
    rewind(file);
    std::cout << "Size of file: " << size << std::endl;
    buffer = malloc(sizeof(char) * size);
    fread(buffer, 1, size, file);
    header = (WAV_HEADER*)buffer;
    //Assert that data is in correct memory location
    assert((header->data - (char*)header) == sizeof(WAV_HEADER));
    //Extra assert to make sure that the size of our header is actually 44 bytes
    assert((header->data - (char*)header) == 44);
    fclose(file);
    return header;
}
I'm not sure what the problem is. I've confirmed that there is no metadata, nor is there a mismatch between the numbers read from the header and the actual file. I'm assuming it's a size/offset misalignment on my side, but I cannot see it.
Any help welcome.
WAV is just a container for different audio sample formats.
You're making assumptions about the WAV file that would have been OK on Windows 3.11 :) They don't hold in 2021.
Instead of rolling your own WAV file reader, simply use one of the available libraries. I personally have good experiences with libsndfile, which has been around roughly forever, is very slim, can deal with all prevalent WAV file formats, and with a lot of other file formats as well, unless you disable that.
This looks like a Windows program (the very WIN32API-style all-caps struct names give it away – that's a bit old-school), so you can download libsndfile's installer from the GitHub releases and use it directly in your Visual Studio project (another blind guess).
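To give a sense of how little code that takes, here is a minimal, untested libsndfile sketch; the path comes from the question, and reading everything as 16-bit shorts is an assumption (libsndfile converts from whatever sample format the file actually stores):

#include <sndfile.h>
#include <cstdio>
#include <vector>

int main()
{
    SF_INFO info{};   // must be zeroed before sf_open in read mode
    SNDFILE* snd = sf_open(".\\audio\\test.wav", SFM_READ, &info);
    if (!snd) {
        std::puts(sf_strerror(nullptr));
        return 1;
    }
    // Read every frame as interleaved 16-bit samples.
    std::vector<short> samples(static_cast<std::size_t>(info.frames) * info.channels);
    sf_count_t framesRead = sf_readf_short(snd, samples.data(), info.frames);
    std::printf("Frames read: %lld, channels: %d, sample rate: %d\n",
                static_cast<long long>(framesRead), info.channels, info.samplerate);
    sf_close(snd);
    return 0;
}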
Apple (macOS and iOS) software often does not create WAVE/RIFF files with just the canonical 44-byte Microsoft header at the beginning. Those WAVE files can instead use a longer header followed by a padding block.
So you need to parse the full WAVE RIFF format instead of just reading into a fixed-size 44-byte struct.
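A minimal sketch of what that chunk walking can look like (readWavData is just an illustrative helper; error handling is kept short, it assumes a little-endian host, which holds on Windows, and chunk payloads are padded to an even length per the RIFF spec):

#include <cstdint>
#include <cstring>
#include <fstream>
#include <vector>

// Returns the raw bytes of the "data" chunk, or an empty vector on failure.
std::vector<char> readWavData(const char* path)
{
    std::ifstream f(path, std::ios::binary);
    char riff[4], wave[4];
    std::uint32_t riffSize = 0;
    f.read(riff, 4);
    f.read(reinterpret_cast<char*>(&riffSize), 4);
    f.read(wave, 4);
    if (!f || std::memcmp(riff, "RIFF", 4) != 0 || std::memcmp(wave, "WAVE", 4) != 0)
        return {};
    // Walk the chunks; skip anything that isn't "data" ("fmt ", "LIST", "JUNK", ...).
    char id[4];
    std::uint32_t size = 0;
    while (f.read(id, 4) && f.read(reinterpret_cast<char*>(&size), 4)) {
        if (std::memcmp(id, "data", 4) == 0) {
            std::vector<char> data(size);
            f.read(data.data(), size);
            return f ? data : std::vector<char>{};
        }
        f.seekg(size + (size & 1), std::ios::cur);   // skip payload plus pad byte
    }
    return {};
}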

C++ bitmap editing

I am trying to open a bitmap file, edit it, and then save the edited version as a new file; this is eventually for experimenting with steganography. I am trying to save the bitmap information now, but the saved file will not open. There are no errors at compile time or run time. The original file opens fine and the rest of the functions work.
void cBitmap::SaveBitmap(char * filename)
{
    // attempt to open the file specified
    ofstream fout;
    // attempt to open the file using binary access
    fout.open(filename, ios::binary);
    unsigned int number_of_bytes(m_info.biWidth * m_info.biHeight * 4);
    BYTE red(0), green(0), blue(0);
    if (fout.is_open())
    {
        // same as before, only outputting now
        fout.write((char *)(&m_header), sizeof(BITMAPFILEHEADER));
        fout.write((char *)(&m_info), sizeof(BITMAPINFOHEADER));
        // read off the color data in the bass ackwards MS way
        for (unsigned int index(0); index < number_of_bytes; index += 4)
        {
            red = m_rgba_data[index];
            green = m_rgba_data[index + 1];
            blue = m_rgba_data[index + 2];
            fout.write((const char *)(&blue), sizeof(blue));
            fout.write((const char *)(&green), sizeof(green));
            fout.write((const char *)(&red), sizeof(red));
        }
    }
    else
    {
        // post file not found message
        cout << filename << " not found";
    }
    // close the file
    fout.close();
}
You're missing the padding bytes after each RGB row. The rows have to be a multiple of 4 bytes each.
Also, are you supposed to be writing a 24-bit or a 32-bit BMP file? If you're writing 24-bit, you're just missing the padding. If you're writing 32-bit, then you're missing the extra (alpha) byte per pixel. There isn't enough information to fix your code sample short of writing a complete BMP writer that supports all possible options.
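For the 24-bit case, here is a sketch of the write loop with the missing row padding; it reuses m_info, m_rgba_data and fout from the question and assumes biHeight is positive and that the headers already describe a 24-bit image whose bfSize and biSizeImage account for the padding:

// Each BMP row must be padded out to a multiple of 4 bytes.
// For 24-bit output that is 3 bytes per pixel plus 0-3 pad bytes per row.
const unsigned int row_bytes = m_info.biWidth * 3;
const unsigned int padding   = (4 - (row_bytes % 4)) % 4;
const char pad[3] = {0, 0, 0};

for (int y = 0; y < m_info.biHeight; ++y)
{
    for (int x = 0; x < m_info.biWidth; ++x)
    {
        const unsigned int index = (y * m_info.biWidth + x) * 4;   // source buffer is RGBA
        BYTE red   = m_rgba_data[index];
        BYTE green = m_rgba_data[index + 1];
        BYTE blue  = m_rgba_data[index + 2];
        fout.write((const char *)(&blue),  sizeof(blue));
        fout.write((const char *)(&green), sizeof(green));
        fout.write((const char *)(&red),   sizeof(red));
    }
    // pad the row out to a 4-byte boundary
    fout.write(pad, padding);
}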

What is the best solution for writing numbers into a file and then reading them?

I have 640*480 numbers that I need to write into a file and read back later. What is the best solution? The numbers are between 0 and 255.
For me the best solution is to write them in binary (8 bits each). I wrote the numbers into a txt file and it now looks like 1011111010111110....., so there is no question where each number starts and ends.
How am I supposed to read them back from the file?
Using c++
It's not a good idea to write bit values like 1 and 0 to a text file: the file will be eight times bigger, since every character in a text file takes at least one byte and 1 byte = 8 bits. Store bytes instead; a value in the range 0-255 is exactly one byte, so your file will be 640*480 bytes instead of 640*480*8. If you need the individual bits, use your programming language's bitwise operators. Reading bytes back is also much easier. Use a binary file for saving your data.
Presumably you have some sort of data structure representing your image, which somewhere inside holds the actual data:
class pixmap
{
public:
    // stuff...
private:
    std::unique_ptr<std::uint8_t[]> data;
};
So you can add a new constructor which takes a filename and reads bytes from that file:
pixmap(const std::string& filename)
{
    constexpr int SIZE = 640 * 480;
    // Open an input file stream in binary mode and set it to throw exceptions:
    std::ifstream file;
    file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
    file.open(filename.c_str(), std::ios_base::binary);
    // Create a unique ptr to hold the data: this will be cleaned up
    // automatically if file reading throws
    std::unique_ptr<std::uint8_t[]> temp(new std::uint8_t[SIZE]);
    // Read SIZE bytes from the file
    file.read(reinterpret_cast<char*>(temp.get()), SIZE);
    // If we get to here, the read worked, so we move the temp data we've just read
    // into where we'd like it
    data = std::move(temp); // or std::swap(data, temp) if you prefer
}
I realise I've assumed some implementation details here (you might not be using a std::unique_ptr to store the underlying image data, though you probably should be) but hopefully this is enough to get you started.
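For the writing half of the question, the matching member function could look something like this (same assumptions: a 640*480 buffer of bytes held in data; save is just an illustrative name):

void save(const std::string& filename) const
{
    constexpr int SIZE = 640 * 480;
    // Open an output file stream in binary mode and set it to throw exceptions:
    std::ofstream file;
    file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
    file.open(filename.c_str(), std::ios_base::binary);
    // One byte per pixel, no separators needed: each value is exactly 8 bits.
    file.write(reinterpret_cast<const char*>(data.get()), SIZE);
}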
You can write a number between 0 and 255 as a char value in the file.
See the code below: in this example I am printing the integer 70 as a char,
so it prints as 'F' on the console.
Similarly, you can read it back as a char and then convert that char to an integer.
#include <stdio.h>

int main()
{
    int i = 70;
    char dig = (char)i;
    printf("%c", dig);
    return 0;
}
This way you keep the file size to a minimum.

C++ replacement for BYTE C macro

I'm trying to port the C openGL texture loading code found here:
http://www.nullterminator.net/gltexture.html
to C++. In particular I'm trying to deal with reading some textures in from a file, what is the best way of rewriting the following code in an idiomatic and portable manner:
GLuint texture;
int width = 256, height = 256;
BYTE * data;
FILE * file;
// open texture data
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;
// allocate buffer
data = malloc( width * height * 3 );
// read texture data
fread( data, width * height * 3, 1, file );
fclose( file );
In particular what is the best way of replacing the BYTE macro in a c++ way that is portable?
EDIT: BYTE macro is not defined in the current environment I am working in. I was trying to figure out what the underlying type of this is on other systems so that I can typedef for the correct type.
Assuming the original code is portable, you can just leave it. Just make sure you pull in the definition of BYTE as-is. C++ compilers are backwards compatible with C, so the corresponding headers are still there.
(If BYTE is really a macro, I'd perhaps typedef it instead.)
The C code should work just fine when compiled as C++.
Rather than use the BYTE type, just use the OpenGL-defined type GLbyte, which is the actual type the APIs take anyway. It is defined in gl.h thus:
typedef signed char GLbyte;
A very quick (untested!) translation of the above code into C++ would be something like:
GLuint texture;
unsigned width = 256, height = 256;
unsigned buffer_size = width * height * 3;
GLbyte * data;
std::ifstream file;

// open texture data
file.open(filename, std::ios_base::in | std::ios_base::binary);
if (!file) return 0;

// allocate buffer
data = new GLbyte[buffer_size];

// read texture data
file.read(reinterpret_cast<char*>(data), buffer_size);
file.close();

// Process data...
// ...

// Don't forget to release it when you're done!
delete [] data;
BYTE* in this case seems to be just a typedef or macro for char* or unsigned char*. I could be wrong, but I doubt it. So using char* or unsigned char* in your program would be equivalent. However, if you are porting from C to C++ you might want to consider using ifstream (in binary mode) from the C++ standard library.
Use unsigned char instead of BYTE - it should work as expected (though you might have to cast the return value of malloc()).
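If you'd rather avoid the manual new/delete entirely, a std::vector keeps the ownership automatic. A rough sketch under the same assumptions (loadTexture is a hypothetical helper; since GLubyte is a typedef for unsigned char, data.data() can be handed straight to glTexImage2D with GL_UNSIGNED_BYTE):

#include <cstddef>
#include <fstream>
#include <vector>

std::vector<unsigned char> loadTexture(const char* filename, int width, int height)
{
    std::ifstream file(filename, std::ios_base::in | std::ios_base::binary);
    if (!file)
        return {};
    // 3 bytes per pixel (RGB), as in the original code
    std::vector<unsigned char> data(static_cast<std::size_t>(width) * height * 3);
    file.read(reinterpret_cast<char*>(data.data()), data.size());
    return file ? data : std::vector<unsigned char>{};
}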

Libtiff: How can I get pixel values OR how can I convert TIFF-files to text files

I'm trying to get libtiff to read out TIFF files that consist of one strip of about 500x500 32-bit pixels, using the method TIFFReadScanline(tif, buf, row). This gives me tdata_t (??) rows.
How can I write out this buffer as a text file, or access the pixel values (which should be doubles)?
My code looks like this:
TIFF* tif = TIFFOpen(c_str2, "r");
uint32 imagelength;
tdata_t buf;
uint32 row;

TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &imagelength);
buf = _TIFFmalloc(TIFFScanlineSize(tif));
for (row = 0; row < imagelength; row++) {
    TIFFReadScanline(tif, buf, row);
    myfile << buf << endl;
}
In the last line I try to write out the whole buffer into a text file, but there are no double values, only hex values. When I replace the tdata_t buffer with a char buffer, there is ASCII-symbol gibberish. I think I should convert the tdata_t buffer to a double or char buffer, but how?
It shouldn't be byte order, since libtiff handles that automatically, I think.
Any suggestions welcome! Thanks for helping, and I wish you all a nice weekend!
The << operator sees that you are outputting buf as a tdata_t, which is just a void pointer, so it prints the buffer's address in hex rather than the pixel values.
Cast buf to a pointer to the actual sample type, then loop over all the elements in the row (in buf) and output each one with <<.
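A sketch of that, assuming the samples really are 32-bit IEEE floats with one sample per pixel (check TIFFTAG_BITSPERSAMPLE and TIFFTAG_SAMPLEFORMAT if unsure), reusing tif, buf, row, imagelength and myfile from the question:

uint32 imagewidth = 0;
TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &imagewidth);
// Reinterpret the raw scanline buffer as an array of floats.
float* scanline = static_cast<float*>(buf);
for (row = 0; row < imagelength; row++) {
    TIFFReadScanline(tif, buf, row);
    for (uint32 col = 0; col < imagewidth; col++)
        myfile << scanline[col] << ' ';
    myfile << '\n';
}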