So, I have an XImage and I was able to store it in the filesystem, but the image didn't have the cursor in it. On further research, I found that X.Org has a fix for this via the XFixes extension (Xfixes.h).
The function XFixesGetCursorImage(display) returns a pointer to an XFixesCursorImage structure:
typedef struct {
short x, y;
unsigned short width, height;
unsigned short xhot, yhot;
unsigned long cursor_serial;
unsigned long *pixels;
#if XFIXES_MAJOR >= 2
Atom atom; /* Version >= 2 only */
const char *name; /* Version >= 2 only */
#endif
} XFixesCursorImage;
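For reference, here is roughly how I obtain the structure (a minimal sketch of my call; error handling is mostly omitted):
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

Display *display = XOpenDisplay(NULL);
int event_base, error_base;
if (!XFixesQueryExtension(display, &event_base, &error_base)) {
    /* XFixes is not available on this display */
}
XFixesCursorImage *cursor = XFixesGetCursorImage(display);
/* ... inspect cursor->width, cursor->height and cursor->pixels here ... */
XFree(cursor);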
I believed that unsigned long *pixels is an array that contains pixel-by-pixel information for the entire image (the cursor, with the rest of the background valued 0).
Then, using the steps given in this article, I would merge my original XImage with that of the cursor (I hope I have the right idea).
My problem:
To do the masking properly, I first need all the pixel values from XFixesCursorImage. But the pixels array seems to contain far too few values: my screen size is 1366 x 768, so I expected 1366 * 768 elements in the array (each a long holding an ARGB pixel value), but when I used GDB and tried to find the last accessible element, it was at index 21272 (21273 elements in total).
Using GDB
(gdb) print cursor[0]
$22 = {x = 475, y = 381, width = 24, height = 24, xhot = 11, yhot = 11,
cursor_serial = 92, pixels = 0x807f39c, atom = 388, name = 0x807fc9c "xterm"}
(gdb) print cursor->pixels[21273]
Cannot access memory at address 0x8094000
A few more data points:
(gdb) print cursor[0]
$5 = {x = 1028, y = 402, width = 1, height = 1, xhot = 1, yhot = 1,
cursor_serial = 120, pixels = 0x807e854, atom = 0, name = 0x807e858 ""}
(gdb) print cursor[0]->pixels[21994]
$8 = 0
(gdb) print cursor[0]->pixels[21995]
Cannot access memory at address 0x8094000
Am I missing something? The number of elements doesn't make sense to me.
Which brings me to a very important question:
How is the data structured in both XImage->data and XFixesCursorImage->pixels?
XFixesCursorImage stores ONLY the cursor image, NOT the entire screen. So, as Andrey says, you can only access width x height (here 24x24) unsigned longs.
You can place the Cursor Image on your XImage by using the fields x,y on the XFixesCursorImage, but remember that the pixel format of the XImage might differ from that of the XFixesCursorImage, which is always 32 bits per pixel ARGB.
That said, please note that unsigned long can actually be 64 bits when compiling for x86_64, so your conversions should use unsigned long to stay portable and not assume it will be 32 bits.
Placing example (with some functions missing, but good enough for the explanation):
unsigned char r, g, b, a;
unsigned short row, col, pos;

for (pos = row = 0; row < img->height; row++)
{
    for (col = 0; col < img->width; col++, pos++)
    {
        /* each cursor pixel is 32-bit ARGB packed into an unsigned long */
        a = (unsigned char)((img->pixels[pos] >> 24) & 0xff);
        r = (unsigned char)((img->pixels[pos] >> 16) & 0xff);
        g = (unsigned char)((img->pixels[pos] >> 8) & 0xff);
        b = (unsigned char)((img->pixels[pos] >> 0) & 0xff);
        put_pixel_in_ximage(img->x + col, img->y + row,
                            convert_to_ximage_pixel(r, g, b, a));
    }
}
Notes: img in the code is the XFixesCursorImage. Also, do not trust the field 'cursor_serial' to determine whether two cursors differ, because sometimes this field is just 0. Not sure why.
pixels contains a 32-bit-per-pixel pixmap; in your case 24 * 24 * 4 = 2304 bytes.
From protocol docs:
The cursor image itself is returned as a single image at 32 bits per
pixel with 8 bits of alpha in the most significant 8 bits of the pixel
followed by 8 bits each of red, green and finally 8 bits of blue in
the least significant 8 bits. The color components are pre-multiplied
with the alpha component.
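Since the components are already pre-multiplied with alpha, compositing the cursor onto the screenshot is a plain "source over" blend: out = src + dst * (1 - alpha). Here is a minimal sketch of that idea (overlay_cursor is a hypothetical helper; it assumes the XImage holds 32-bit 0xRRGGBB pixels, e.g. from XGetImage on a 24-bit TrueColor visual, and that x/y in XFixesCursorImage is the hotspot position on screen):
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/Xfixes.h>

static void overlay_cursor(XImage *img, XFixesCursorImage *cur)
{
    int row, col;
    for (row = 0; row < cur->height; row++) {
        for (col = 0; col < cur->width; col++) {
            /* each element holds 32 bits of ARGB, even if unsigned long is 64-bit */
            unsigned long p = cur->pixels[row * cur->width + col] & 0xffffffffUL;
            unsigned char a = (p >> 24) & 0xff;
            unsigned char r = (p >> 16) & 0xff;
            unsigned char g = (p >> 8) & 0xff;
            unsigned char b = p & 0xff;

            int x = cur->x - cur->xhot + col;
            int y = cur->y - cur->yhot + row;
            if (x < 0 || y < 0 || x >= img->width || y >= img->height)
                continue;

            unsigned long bg = XGetPixel(img, x, y);
            unsigned char bg_r = (bg >> 16) & 0xff;
            unsigned char bg_g = (bg >> 8) & 0xff;
            unsigned char bg_b = bg & 0xff;

            /* cursor components are premultiplied, so just add the attenuated background */
            unsigned long out =
                ((unsigned long)(r + (bg_r * (255 - a)) / 255) << 16) |
                ((unsigned long)(g + (bg_g * (255 - a)) / 255) << 8) |
                 (unsigned long)(b + (bg_b * (255 - a)) / 255);
            XPutPixel(img, x, y, out);
        }
    }
}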
I'm working on a streaming prototype using UE4.
My goal here (in this post) is solely about capturing frames and saving one as a bitmap, just to visually ensure frames are correctly captured.
I'm currently capturing frames converting the backbuffer to a ID3D11Texture2D then mapping it.
Note: I tried the ReadSurfaceData approach in the render thread, but it performed very poorly (FPS went down to 15, and I'd like to capture at 60 FPS), whereas mapping the DirectX texture from the backbuffer currently takes 1 to 3 milliseconds.
When debugging, I can see the D3D11_TEXTURE2D_DESC's format is DXGI_FORMAT_R10G10B10A2_UNORM, so red/green/blue are stored on 10 bits each, and alpha on 2 bits.
My questions:
How do I convert the texture's data (using the D3D11_MAPPED_SUBRESOURCE pData pointer) to R8G8B8(A8), that is, 8 bits per color (R8G8B8 without the alpha would also be fine for me)?
Also, am I doing anything wrong when capturing the frame?
What I've tried:
All the following code is executed in a callback function registered to OnBackBufferReadyToPresent (code below).
void* NativeResource = BackBuffer->GetNativeResource();
if (NativeResource == nullptr)
{
UE_LOG(LogTemp, Error, TEXT("Couldn't retrieve native resource"));
return;
}
ID3D11Texture2D* BackBufferTexture = static_cast<ID3D11Texture2D*>(NativeResource);
D3D11_TEXTURE2D_DESC BackBufferTextureDesc;
BackBufferTexture->GetDesc(&BackBufferTextureDesc);
// Get the device context
ID3D11Device* d3dDevice;
BackBufferTexture->GetDevice(&d3dDevice);
ID3D11DeviceContext* d3dContext;
d3dDevice->GetImmediateContext(&d3dContext);
// Staging resource
ID3D11Texture2D* StagingTexture;
D3D11_TEXTURE2D_DESC StagingTextureDesc = BackBufferTextureDesc;
StagingTextureDesc.Usage = D3D11_USAGE_STAGING;
StagingTextureDesc.BindFlags = 0;
StagingTextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
StagingTextureDesc.MiscFlags = 0;
HRESULT hr = d3dDevice->CreateTexture2D(&StagingTextureDesc, nullptr, &StagingTexture);
if (FAILED(hr))
{
UE_LOG(LogTemp, Error, TEXT("CreateTexture failed"));
}
// Copy the texture to the staging resource
d3dContext->CopyResource(StagingTexture, BackBufferTexture);
// Map the staging resource
D3D11_MAPPED_SUBRESOURCE mapInfo;
hr = d3dContext->Map(
StagingTexture,
0,
D3D11_MAP_READ,
0,
&mapInfo);
if (FAILED(hr))
{
UE_LOG(LogTemp, Error, TEXT("Map failed"));
}
// See https://dev.to/muiz6/c-how-to-write-a-bitmap-image-from-scratch-1k6m for the struct definitions & the initialization of bmpHeader and bmpInfoHeader
// I didn't copy that code here to avoid overloading this post, as it's identical to the article's code
// Just making clear the reassigned values below
bmpHeader.sizeOfBitmapFile = 54 + StagingTextureDesc.Width * StagingTextureDesc.Height * 4;
bmpInfoHeader.width = StagingTextureDesc.Width;
bmpInfoHeader.height = StagingTextureDesc.Height;
std::ofstream fout("output.bmp", std::ios::binary);
fout.write((char*)&bmpHeader, 14);
fout.write((char*)&bmpInfoHeader, 40);
// TODO : convert to R8G8B8 (see below for my attempt at this)
fout.close();
// Unmap before releasing the staging texture
d3dContext->Unmap(StagingTexture, 0);
StagingTexture->Release();
d3dContext->Release();
d3dDevice->Release();
BackBufferTexture->Release();
(As mentioned in the code comments, I followed this article about the BMP headers for saving the bitmap to a file)
Texture data
One thing I'm concerned about is the retrieved data with this method.
I used a temporary array to check with the debugger what's inside.
// Just noted the texture's width and height and hardcoded them here to allocate the right size
uint32_t data[1936 * 1056];
// Multiply by 4 as there are 4 bytes (32 bits) per pixel
memcpy(data, mapInfo.pData, StagingTextureDesc.Width * StagingTextureDesc.Height * 4);
It turns out the first 1935 uint32 values in this array all contain the same value: 3595933029. And after that, the same values are often seen hundreds of times in a row.
This makes me think the frame isn't captured as it should be, because the UE4 editor's window doesn't have the exact same color all along its first row (whether that's the top or the bottom).
R10G10B10A2 to R8G8B8(A8)
So I tried to guess how to convert from R10G10B10A2 to R8G8B8. I started from this value that appears 1935 times in a row at the beginning of the data buffer: 3595933029.
When I color pick an editor's window screenshot (using the Windows tool, which gets me an image with the exact same dimensions as the DirectX texture, that is 1936x1056), I get the following different colors:
R=56, G=57, B=52 (top left & bottom left)
R=0, G=0, B=0 (top right)
R=46, G=40, B=72 (bottom right - it overlaps the task bar, thus the color)
So I tried to manually convert the color to check if it matches any of those I color picked.
I thought about bit shifting to simply compare the values
3595933029 (the value in the retrieved buffer) in binary: 11010110010101011001010101100101
You can already see the pattern: 11 followed three times by the 10-bit value 0101100101, and none of the picked colors follow this (except the black corner, which would be made only of zeros).
Anyway, assuming RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB AA order (discarded bits are marked with an x):
11010110xx01010110xx01010110xxxx
R=214, G=86, B=86: doesn't match
Assuming AA RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB:
xx01011001xx01011001xx01011001xx
R=89, G=89, B=89: doesn't match
If it helps, here's the editor window that should be captured (it really is a Third Person template; I didn't add anything to it except this capture code).
Here's the generated bitmap when shifting bits:
Code to generate the bitmap's pixel data:
struct Pixel {
uint8_t blue = 0;
uint8_t green = 0;
uint8_t red = 0;
} pixel;
uint32_t* pointer = (uint32_t*)mapInfo.pData;
size_t numberOfPixels = bmpInfoHeader.width * bmpInfoHeader.height;
for (int i = 0; i < numberOfPixels; i++) {
uint32_t value = *pointer;
// Ditch the color's 2 last bits, keep the 8 first
pixel.blue = value >> 2;
pixel.green = value >> 12;
pixel.red = value >> 22;
++pointer;
fout.write((char*)&pixel, 3);
}
The colors present seem somewhat similar, but it doesn't look at all like the editor.
What am I missing?
First of all, you are assuming that mapInfo.RowPitch is exactly StagingTextureDesc.Width * 4. This is often not true. When copying to/from Direct3D resources, you need to do 'row-by-row' copies. Also, allocating roughly 8 MB (1936 * 1056 * 4 bytes) on the stack is not good practice.
#include <cstdint>
#include <memory>
// Assumes our staging texture is 4 bytes-per-pixel
// Allocate temporary memory
auto data = std::unique_ptr<uint32_t[]>(
new uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]);
auto src = static_cast<uint8_t*>(mapInfo.pData);
uint32_t* dest = data.get();
for(UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
// Multiply by 4 as there are 4 bytes (32 bits) per pixel
memcpy(dest, src, StagingTextureDesc.Width * sizeof(uint32_t));
src += mapInfo.RowPitch;
dest += StagingTextureDesc.Width;
}
For C++11, using std::unique_ptr ensures the memory is eventually released automatically. You can transfer ownership of the memory to something else with uint32_t* ptr = data.release(). See cppreference.
With C++14, the better way to write the allocation is: auto data = std::make_unique<uint32_t[]>(StagingTextureDesc.Width * StagingTextureDesc.Height);. This assumes you are fine with a C++ exception being thrown for out-of-memory.
If you want to return an error code for out-of-memory instead of a C++ exception, use: auto data = std::unique_ptr<uint32_t[]>(new (std::nothrow) uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]); if (!data) // return error
Converting 10:10:10:2 content to 8:8:8:8 content can be done efficiently on the CPU with bit-shifting.
The tricky bit is dealing with the up-scaling of the 2-bit alpha to 8-bits. For example, you want the Alpha of 11 to map to 255, not 192.
Here's a replacement for the loop above
// Assumes our staging texture is DXGI_FORMAT_R10G10B10A2_UNORM
for(UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
auto sptr = reinterpret_cast<uint32_t*>(src);
for(UINT x = 0; x < StagingTextureDesc.Width; ++x)
{
uint32_t t = *(sptr++);
uint32_t r = (t & 0x000003ff) >> 2;
uint32_t g = (t & 0x000ffc00) >> 12;
uint32_t b = (t & 0x3ff00000) >> 22;
// Upscale alpha
// 11xxxxxx -> 11111111 (255)
// 10xxxxxx -> 10101010 (170)
// 01xxxxxx -> 01010101 (85)
// 00xxxxxx -> 00000000 (0)
t &= 0xc0000000;
uint32_t a = (t >> 24) | (t >> 26) | (t >> 28) | (t >> 30);
// Convert to DXGI_FORMAT_R8G8B8A8_UNORM
*(dest++) = r | (g << 8) | (b << 16) | (a << 24);
}
src += mapInfo.RowPitch;
}
Of course we can combine the shifting operations, since in the previous loop we shift the bits down and then back up. We do need to update the masks to remove the bits that would normally be shifted off by the full shifts. This replaces the inner body of the loop above:
// Convert from 10:10:10:2 to 8:8:8:8
uint32_t t = *(sptr++);
uint32_t r = (t & 0x000003fc) >> 2;
uint32_t g = (t & 0x000ff000) >> 4;
uint32_t b = (t & 0x3fc00000) >> 6;
t &= 0xc0000000;
uint32_t a = t | (t >> 2) | (t >> 4) | (t >> 6);
*(dest++) = r | g | b | a;
Any time you reduce the bit-depth you will introduce error. Techniques like ordered dithering and error-diffusion dithering are commonly used in pixel conversions of this nature. These introduce a bit of noise to the image to reduce the visual impact of the lost low bits.
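Purely as an illustration (this helper is made up for this example, not part of any library), a 4x4 ordered dither for this particular 10-bit to 8-bit reduction could look something like this:
// Bias each 10-bit value by a position-dependent amount (0..3, the size of
// the truncated range) before dropping the low two bits.
static const uint32_t bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

inline uint32_t dither10to8(uint32_t v10, UINT x, UINT y)
{
    uint32_t biased = v10 + (bayer4[y & 3][x & 3] >> 2); // bias in [0, 3]
    if (biased > 1023) biased = 1023;                     // clamp to 10 bits
    return biased >> 2;
}
You would apply it to each 10-bit channel, passing the pixel's x and y, before packing the 8-bit results.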
For examples of conversions for all DXGI_FORMAT types, see DirectXTex which makes use of DirectXMath for all the various packed vector types. DirectXTex also implements both 4x4 ordered dithering and Floyd-Steinberg error-diffusion dithering when reducing bit-depth.
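To close the loop on the original goal of saving a bitmap, here is a rough sketch (not part of the answer above) of writing the converted buffer out as a 24-bit BMP, reusing fout, data and StagingTextureDesc from the earlier snippets. It assumes the BMP headers from the linked article have been adjusted for 24 bits per pixel (including the padded row size); BMP rows are stored bottom-up, while row 0 of the texture is the top of the frame.
const uint32_t width  = StagingTextureDesc.Width;
const uint32_t height = StagingTextureDesc.Height;
const uint32_t rowPadding = (4 - (width * 3) % 4) % 4;
const char pad[3] = { 0, 0, 0 };

for (int32_t y = static_cast<int32_t>(height) - 1; y >= 0; --y)
{
    const uint32_t* row = data.get() + static_cast<size_t>(y) * width;
    for (uint32_t x = 0; x < width; ++x)
    {
        // Each element was packed as r | (g << 8) | (b << 16) | (a << 24)
        const uint32_t rgba = row[x];
        const char bgr[3] = {
            static_cast<char>((rgba >> 16) & 0xff), // B
            static_cast<char>((rgba >> 8) & 0xff),  // G
            static_cast<char>(rgba & 0xff)          // R
        };
        fout.write(bgr, 3);
    }
    fout.write(pad, rowPadding);
}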
I am working with depth images retrieved from the Kinect, which are 16 bits. I ran into some difficulties making my own filters because of the indexing and the size of the images.
I am working with textures because they allow working with images of any bit depth.
So I am trying to compute a simple gradient to understand what is wrong, or why it doesn't work as I expected.
You can see that there is something wrong when I use the y direction.
For x:
For y:
That's my code:
typedef concurrency::graphics::texture<unsigned int, 2> TextureData;
typedef concurrency::graphics::texture_view<unsigned int, 2> Texture;
cv::Mat image = cv::imread("Depth247.tiff", CV_LOAD_IMAGE_ANYDEPTH);
//just a copy from another image
cv::Mat image2(image.clone() );
concurrency::extent<2> imageSize(640, 480);
int bits = 16;
const unsigned int nBytes = imageSize.size() * 2; // 614400
{
uchar* data = image.data;
// Result data
TextureData texDataD(imageSize, bits);
Texture texR(texDataD);
parallel_for_each(
imageSize,
[=](concurrency::index<2> idx) restrict(amp)
{
int x = idx[0];
int y = idx[1];
// 65535 is the maximum value a pixel can take with 16 bits (2^16 - 1)
int valX = (x / (float)imageSize[0]) * 65535;
int valY = (y / (float)imageSize[1]) * 65535;
texR.set(idx, valX);
});
//concurrency::graphics::copy(texR, image2.data, imageSize.size() *(bits / 8u));
concurrency::graphics::copy_async(texR, image2.data, imageSize.size() *(bits) );
cv::imshow("result", image2);
cv::waitKey(50);
}
Any help will be very appreciated.
Your indexes are swapped in two places.
int x = idx[0];
int y = idx[1];
Remember that C++ AMP uses row-major indices for arrays. Thus idx[0] refers to the row, i.e. the y axis. This is why the picture you have for "For x" looks like what I would expect for texR.set(idx, valY).
Similarly, the extent of the image also uses swapped values.
int valX = (x / (float)imageSize[0]) * 65535;
int valY = (y / (float)imageSize[1]) * 65535;
Here imageSize[0] is 640, the number of columns (the image width), not the number of rows, yet it is being paired with idx[0], which is the row index.
I'm not familiar with OpenCV, but I'm assuming it also uses a row-major format for cv::Mat. It might invert the y axis, with (0, 0) at the top-left rather than the bottom-left. The Kinect data may do similar things, but again, it's row-major.
There may be other places in your code that have the same issue but I think if you double check how you are using index and extent you should be able to fix this.
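Here is a minimal sketch of how the kernel might look with the row/column roles made explicit, reusing the texR texture_view from the question and assuming both the texture and its extent are created as rows x columns (480 x 640 for a 640-wide, 480-tall image):
concurrency::extent<2> imageSize(480, 640); // rows (height), then columns (width)

parallel_for_each(
    imageSize,
    [=](concurrency::index<2> idx) restrict(amp)
{
    int y = idx[0]; // row
    int x = idx[1]; // column
    // a gradient along x scales by the column count, along y by the row count
    int valX = (x / (float)imageSize[1]) * 65535;
    int valY = (y / (float)imageSize[0]) * 65535;
    texR.set(idx, valX); // or valY for the vertical gradient
});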
I create an image using
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
// more code - not relevant - removed for debugging
image = UIGraphicsGetImageFromCurrentImageContext(); // the image is now ARGB
UIGraphicsEndImageContext();
Then I try to find the color of a pixel (using the code by Minas Petterson from here: Get Pixel color of UIImage).
But since the image is now in ARGB format, I had to modify the code like this:
alpha = data[pixelInfo];
red = data[(pixelInfo + 1)];
green = data[pixelInfo + 2];
blue = data[pixelInfo + 3];
However this did not work.
The problem is that (for example) a red pixel, which in RGBA would be represented as 1001 (actually 255 0 0 255, but for simplicity I use 0-to-1 values), is represented in the image as 0011 and not (as I thought) 1100.
Any ideas why? Am I doing something wrong?
P.S. It looks like the code I actually have to use is this:
alpha = 255-data[pixelInfo];
red = 255-data[(pixelInfo + 1)];
green = 255-data[pixelInfo + 2];
blue = 255-data[pixelInfo + 3];
There are some problems that arise here:
"In some contexts, primarily OpenGL, the term "RGBA" actually means the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. OpenGL describes the above format as "BGRA" on a little-endian machine and "ARGB" on a big-endian machine." (wiki)
Graphics hardware is backed by OpenGL on OS X/iOS, so I assume we are dealing with little-endian data (Intel/ARM processors). So, when the format is kCGImageAlphaPremultipliedFirst (ARGB) on a little-endian machine, the bytes in memory are effectively BGRA. But don't worry, there is an easy way to fix that.
Assuming it's ARGB, kCGImageAlphaPremultipliedFirst, 8 bits per component, 4 components per pixel (that's what UIGraphicsGetImageFromCurrentImageContext() returns), and we don't care about endianness:
- (void)parsePixelValuesFromPixel:(const uint8_t *)pixel
intoBuffer:(out uint8_t[4])buffer {
static NSInteger const kRedIndex = 0;
static NSInteger const kGreenIndex = 1;
static NSInteger const kBlueIndex = 2;
static NSInteger const kAlphaIndex = 3;
int32_t *wholePixel = (int32_t *)pixel;
int32_t value = OSSwapHostToBigConstInt32(*wholePixel);
// Now we have value in big-endian format, regardless of our machine endianness (ARGB now).
buffer[kAlphaIndex] = value & 0xFF;
buffer[kRedIndex] = (value >> 8) & 0xFF;
buffer[kGreenIndex] = (value >> 16) & 0xFF;
buffer[kBlueIndex] = (value >> 24) & 0xFF;
}
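A hypothetical usage example (assuming data and pixelInfo come from the pixel-reading code in the linked answer, with 4 bytes per pixel):
uint8_t rgba[4];
[self parsePixelValuesFromPixel:&data[pixelInfo] intoBuffer:rgba];

uint8_t red   = rgba[0];
uint8_t green = rgba[1];
uint8_t blue  = rgba[2];
uint8_t alpha = rgba[3];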
I'm loading a PNG file in SDL2 and I'm trying to find 'special' pixel colours to track during a spritesheet animation. I've put these pixels into my image but my code isn't finding them.
I'm using this code to read the pixels (taken from internet, wrapped into my own Texture class):
Uint32 getpixel(SDL_Surface *surface, int x, int y)
{
int bpp = surface->format->BytesPerPixel;
/* Here p is the address to the pixel we want to retrieve */
Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;
switch(bpp) {
case 1:
return *p;
break;
case 2:
return *(Uint16 *)p;
break;
case 3:
if(SDL_BYTEORDER == SDL_BIG_ENDIAN)
return p[0] << 16 | p[1] << 8 | p[2];
else
return p[0] | p[1] << 8 | p[2] << 16;
break;
case 4:
return *(Uint32 *)p;
break;
default:
return 0; /* shouldn't happen, but avoids warnings */
}
}
And these are the important bits of code I'm using to compare pixels to the 'special' values I've set before:
// convert special SDL_Color to Uint32
Uint32 spec1 = SDL_MapRGBA(_texture->GetSDLSurface()->format, _spec1.r, _spec1.g, _spec1.b, 255);
Uint32 spec2 = SDL_MapRGBA(_texture->GetSDLSurface()->format, _spec2.r, _spec2.g, _spec2.b, 255);
...and, while looping through all pixels in each sprite frame...
// get pixel at (x, y)
Uint32 pix = _texture->GetPixel(x, y);
// if pixel is a special value, store it in animation
if (pix == spec1)
{
SDL_Point pt = {x, y};
anim->Special1.push_back(pt);
found1 = true;
}
else if (pix == spec2)
{
SDL_Point pt = {x, y};
anim->Special2.push_back(pt);
found2 = true;
}
Now, I'm setting a breakpoint in these if-statements to check if the colour has been found, but the breakpoint is never reached. Does anyone know what the problem is?
P.S. I've also tried using SDL_MapRGB(), but that doesn't work either.
[edit]
Okay, so I tried putting a pixel at (0, 0) of the whole image with RGB values 66, 77 and 88. It read them back as 84, 96 and 107, so obviously the colours are either being changed or not read in properly. However, when I try it with a specific alpha value, it reads it in perfectly. I would change my system to only use alpha values, but it seems the pixel editor I'm using removes the alpha value once the pixel is placed and blends it in with the rest of the image.
Your offset formula is not correct. It should be:
Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x
(x does not need to be multiplied by bpp)
From the docs
pitch
The length of a surface scanline in bytes
The pitch, also called stride, is computed as follows:
pitch = width * bytes per pixel
bytes per pixel = (bits per pixel + 7) / 8
When you are at the correct byte offset, read a Uint32 from it (for a 32bpp image) and do your comparison.
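Independent of the offset question, here is a small sketch (reusing the names from the question) that decomposes the raw Uint32 with SDL_GetRGBA, so you can compare, or at least log, the actual channel values while debugging:
SDL_Surface *surf = _texture->GetSDLSurface();
Uint32 pix = _texture->GetPixel(x, y);

Uint8 r, g, b, a;
SDL_GetRGBA(pix, surf->format, &r, &g, &b, &a);

if (r == _spec1.r && g == _spec1.g && b == _spec1.b)
{
    SDL_Point pt = { x, y };
    anim->Special1.push_back(pt);
}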
I have a binary file of image data where each pixel is exactly 4 bits. Image data is laid out as follow:
There are N images, where the first image is 1x1, the second image is 2x2, the third is 4x4, and so on (they are mipmaps, if you care to know).
Given a pointer to the start of the data buffer, I want to skip to the biggest image.
Now I know how many bytes I want to skip, but there is this annoying 1x1 image at the start, which is only 4 bits. I am not aware of any way to increment a pointer by bits.
How can I successfully retrieve the data without everything being off by 4 bits?
Assuming you can change your file format you can do either of the following:
Add padding to the 1x1 image
Store the images in reverse order (effectively the same as above, but not ideal for mip-maps because you don't necessarily know how many images you will have)
If you can't change your format, you have these choices:
Convert the data
Accept that the buffer is offset by half a byte and work with it accordingly
You said:
How can I successfully retrieve the data without everything being off
by 4 bits?
So that means you need to convert. When you calculate your offset in bytes, you will find that the first byte contains half a byte of the previous image. So in a pinch you can shuffle them like this:
for( i = start; i < end; i++ ) {
p[i] = (p[i] << 4) | (p[i+1] >> 4);
}
That's assuming the first pixel is bits 4-7 and the second pixel is bits 0-3, and so on... If it's the other way around, just invert those two shifts.
// this assumes pixels points to bytes(unsigned chars)
index = ?;// your index to the pixel
byte_t b = pixels[index / 2];
if (index % 2) pixel = b >> 4;
else pixel = b & 15;
// Or you can use
byte_t b = pixels[index >> 1];
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
Either way, just compute the logical pixel index into the file. Dividing by two takes you to the byte where the pixel is, and then you just read the correct half of that byte.
So make a function
byte_t GetMyPixel(unsigned char* pixels, unsigned index) {
byte_t b = pixels[index >> 1];
byte_t pixel;
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
return pixel;
}
To read the first images:
Image1x1 = GetMyPixel(pixels,0);
Image2x2_1 = GetMyPixel(pixels,1);// Top left pixel of second image
Image2x2_2 = GetMyPixel(pixels,2);// Top Right pixel of second image
Image2x2_3 = GetMyPixel(pixels,3);// Bottom left pixel of second image
... etc
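To actually skip to the biggest image, you can compute the starting pixel index of a mip level and pass it to GetMyPixel. A small sketch (first_pixel_of_level is a made-up helper, and N is your image count):
// Level 0 is the 1x1 image, level 1 is 2x2, ... level k has 4^k pixels, so
// the levels before level k contribute 1 + 4 + ... + 4^(k-1) = (4^k - 1) / 3 pixels.
unsigned first_pixel_of_level(unsigned k) {
    return ((1u << (2 * k)) - 1) / 3;
}

// Pixel (x, y) of the biggest image (level N - 1, whose side is 1 << (N - 1)):
// GetMyPixel(pixels, first_pixel_of_level(N - 1) + y * side + x);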
So that is one way to go about it. You might need to take into account the endianness (nibble order) you are using, so if it seems wrong, switch the logic for the pixel read like this:
byte_t GetMyPixel(unsigned char* pixels, unsigned index) {
byte_t b = pixels[index >> 1];
byte_t pixel;
#if OTHER_ENDIAN
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
#else
if (index & 1) pixel = b & 15;
else pixel = b >> 4;
#endif
return pixel;
}