Loading non-power-of-two textures in Vulkan - c++

My 2D texture loader works fine when the texture dimensions are a power of two, but when they are not, the texture data is displayed skewed. How do I fix this? I assume the issue has something to do with memory alignment and row pitch. Here are the relevant parts of my loader code:
VkMemoryRequirements memReqs;
vkGetImageMemoryRequirements( GfxDeviceGlobal::device, mappableImage, &memReqs );
VkMemoryAllocateInfo memAllocInfo = {};
memAllocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memAllocInfo.pNext = nullptr;
memAllocInfo.memoryTypeIndex = 0;
memAllocInfo.allocationSize = memReqs.size;
GetMemoryType( memReqs.memoryTypeBits, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, &memAllocInfo.memoryTypeIndex );
VkDeviceMemory mappableMemory;
err = vkAllocateMemory( GfxDeviceGlobal::device, &memAllocInfo, nullptr, &mappableMemory );
CheckVulkanResult( err, "vkAllocateMemory in Texture2D" );
err = vkBindImageMemory( GfxDeviceGlobal::device, mappableImage, mappableMemory, 0 );
CheckVulkanResult( err, "vkBindImageMemory in Texture2D" );
VkImageSubresource subRes = {};
subRes.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
subRes.mipLevel = 0;
subRes.arrayLayer = 0;
VkSubresourceLayout subResLayout;
vkGetImageSubresourceLayout( GfxDeviceGlobal::device, mappableImage, &subRes, &subResLayout );
void* mapped;
err = vkMapMemory( GfxDeviceGlobal::device, mappableMemory, 0, memReqs.size, 0, &mapped );
CheckVulkanResult( err, "vkMapMemory in Texture2D" );
const int bytesPerPixel = 4;
std::size_t dataSize = bytesPerPixel * width * height;
std::memcpy( mapped, data, dataSize );
vkUnmapMemory( GfxDeviceGlobal::device, mappableMemory );

The VkSubresourceLayout you obtained from vkGetImageSubresourceLayout contains the pitch of the texture in its rowPitch member. That pitch is more than likely not equal to width * bytesPerPixel, so when you memcpy the whole data block in one go, part of each row of pixel data lands in the padding at the end of the previous row, which is what shows up as skewing.
Instead, you need to memcpy row by row, skipping the padding bytes in the mapped texture:
const int bytesPerPixel = 4;
std::size_t dataRowSize = bytesPerPixel * width;
char* mappedBytes = (char*)mapped;
for (int i = 0; i < height; ++i)
{
    std::memcpy( mappedBytes, data, dataRowSize );
    mappedBytes += subResLayout.rowPitch;
    data += dataRowSize;
}
(this code assumes data is a char* as well - its declaration wasn't given)
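As a small aside (a sketch, not part of the original answer, reusing the subResLayout and dataSize variables above): if the reported row pitch happens to equal the packed row size, the original single memcpy is still valid, so you can keep it as a fast path.
if (subResLayout.rowPitch == static_cast<VkDeviceSize>(bytesPerPixel) * width)
{
    // Rows are tightly packed, so one bulk copy is equivalent to the row-by-row loop.
    std::memcpy( mapped, data, dataSize );
}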

Related

ffmpeg hevc encoding failure

I use ffmpeg to encode YUV data with H.265 (HEVC), but the image after encoding is always incorrect, as shown below:
However, the following command encodes correctly: ffmpeg -f rawvideo -s 480x256 -pix_fmt yuv420p -i origin.yuv -c:v hevc -f hevc -x265-params keyint=1:crf=18 out.h265 (image below):
Here is my code:
void H265ImageCodec::InitCPUEncoder() {
  avcodec_register_all();
  AVCodec* encoder = avcodec_find_encoder(AV_CODEC_ID_H265);
  CHECK(encoder) << "Can not find encoder with h265.";
  // context
  encode_context_ = avcodec_alloc_context3(encoder);
  CHECK(encode_context_) << "Could not allocate video codec context.";
  encode_context_->codec_id = AV_CODEC_ID_H265;
  encode_context_->profile = FF_PROFILE_HEVC_MAIN;
  encode_context_->codec_type = AVMEDIA_TYPE_VIDEO;
  encode_context_->width = width_;    // it's 480
  encode_context_->height = height_;  // it's 256
  encode_context_->bit_rate = 384 * 1024;
  encode_context_->pix_fmt = AVPixelFormat::AV_PIX_FMT_YUV420P;
  encode_context_->time_base = (AVRational){1, 25};
  encode_context_->framerate = (AVRational){25, 1};
  AVDictionary* options = NULL;
  av_dict_set(&options, "preset", "ultrafast", 0);
  av_dict_set(&options, "tune", "zero-latency", 0);
  av_opt_set(encode_context_->priv_data, "x265-params", "keyint=1:crf=18",
             0);  // crf: Quality-controlled variable bitrate
  avcodec_open2(encode_context_, encoder, &options);
  encode_frame_ = av_frame_alloc();
  encode_frame_->format = encode_context_->pix_fmt;
  encode_frame_->width = encode_context_->width;
  encode_frame_->height = encode_context_->height;
  av_frame_get_buffer(encode_frame_, 0);
  // packet init
  encode_packet_ = av_packet_alloc();
}

std::string H265ImageCodec::EncodeImage(std::string_view raw_image) {
  av_packet_unref(encode_packet_);
  av_frame_make_writable(encode_frame_);
  const int64 y_size = width_ * height_;
  int64 offset = 0;
  memcpy(encode_frame_->data[0], raw_image.data() + offset, y_size);
  offset += y_size;
  memcpy(encode_frame_->data[1], raw_image.data() + offset, y_size / 4);
  offset += y_size / 4;
  memcpy(encode_frame_->data[2], raw_image.data() + offset, y_size / 4);
  avcodec_send_frame(encode_context_, encode_frame_);
  int ret = avcodec_receive_packet(encode_context_, encode_packet_);
  CHECK_EQ(ret, 0) << "receive encode packet ret: " << ret;
  std::string h265_frame(reinterpret_cast<char*>(encode_packet_->data),
                         encode_packet_->size);
  return h265_frame;
}
Any idea what might cause this?
As commented, the issue is that the rows of the U and V planes in encode_frame_ are not contiguous in memory.
When av_frame_get_buffer(encode_frame_, 0) allocates the frame's buffers, the resulting line sizes are as follows:
encode_frame_->linesize[0] = 480
The value equals the width, so the Y plane is contiguous in memory.
encode_frame_->linesize[1] = 256 (not 240 = 480/2).
encode_frame_->linesize[2] = 256 (not 240 = 480/2).
So the rows of the U and V planes are not contiguous in memory.
Illustration for destination U channel in memory:
<----------- 256 bytes ----------->
<------- 240 elements ------->
^ uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
| uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
128 rows uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
| uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
V uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
To check, we can print the line sizes:
printf("encode_frame_->linesize[0] = %d\n", encode_frame_->linesize[0]); //480
printf("encode_frame_->linesize[1] = %d\n", encode_frame_->linesize[1]); //256 (not 240)
printf("encode_frame_->linesize[2] = %d\n", encode_frame_->linesize[2]); //256 (not 240)
Inspired by cudaMemcpy2D, we may implement the function memcpy2D:
//memcpy from src to dst with optional source "pitch" and destination "pitch".
//The "pitch" is the step in bytes between two rows.
//The function interface is based on cudaMemcpy2D.
static void memcpy2D(void* dst,
                     size_t dpitch,
                     const void* src,
                     size_t spitch,
                     size_t width,
                     size_t height)
{
    const unsigned char* I = (unsigned char*)src;
    unsigned char* J = (unsigned char*)dst;
    for (size_t y = 0; y < height; y++)
    {
        const unsigned char* I0 = I + y*spitch; //Pointer to the beginning of the source row
        unsigned char* J0 = J + y*dpitch;       //Pointer to the beginning of the destination row
        memcpy(J0, I0, width);                  //Copy width bytes from row I0 to row J0
    }
}
Use memcpy2D instead of memcpy to copy the data to the destination frame, whose planes may not be contiguous in memory:
//Copy Y channel:
memcpy2D(encode_frame_->data[0], //void* dst,
encode_frame_->linesize[0], //size_t dpitch,
raw_image.data() + offset, //const void* src,
width_, //size_t spitch,
width_, //size_t width,
height_); //size_t height)
offset += y_size;
//Copy U channel:
memcpy2D(encode_frame_->data[1], //void* dst,
encode_frame_->linesize[1], //size_t dpitch,
raw_image.data() + offset, //const void* src,
width_/2, //size_t spitch,
width_/2, //size_t width,
height_/2); //size_t height)
offset += y_size / 4;
//Copy V channel:
memcpy2D(encode_frame_->data[2], //void* dst,
encode_frame_->linesize[2], //size_t dpitch,
raw_image.data() + offset, //const void* src,
width_/2, //size_t spitch,
width_/2, //size_t width,
height_/2); //size_t height)
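As an aside (not part of the original answer), libavutil ships a helper that performs the same row-by-row copy, so it can be used instead of a hand-rolled memcpy2D. A sketch for the U plane, reusing the variables above and assuming offset already points at the start of the U data (as in the //Copy U channel block):
#include <libavutil/imgutils.h>

// av_image_copy_plane(dst, dst_linesize, src, src_linesize, bytewidth, height)
av_image_copy_plane(encode_frame_->data[1],
                    encode_frame_->linesize[1],
                    reinterpret_cast<const uint8_t*>(raw_image.data() + offset),
                    width_ / 2,   // source "pitch" in bytes
                    width_ / 2,   // bytes to copy per row
                    height_ / 2); // number of rows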

GDAL GeoTiff Corrupt on Write (C++)

I'm getting corrupt output when writing a GeoTiff using the GDAL API (v1.10, C++). The raster geotransform is correct and the block is written at the correct position, but within the block the pixels end up at seemingly random positions with random values.
Example: http://i.imgur.com/mntnAfK.png
Method: Open a GDAL Raster --> copy projection info & size --> create output GeoTiff --> write a block from array at offset.
Code:
//Open the input DEM
const char* demFName = "/Users/mount.dem";
const char* outFName = "/Users/geodata/out_test.tif";
auto poDataset = ioUtils::openDem(demFName);
double adfGeoTransform[6];
poDataset->GetGeoTransform( adfGeoTransform );
//Setup driver
const char *pszFormat = "GTiff";
GDALDriver *poDriver;
poDriver = GetGDALDriverManager()->GetDriverByName(pszFormat);
char *pszSRS_WKT = NULL;
GDALRasterBand *poBand;
//Get size from input Raster
int xSize = poDataset->GetRasterXSize();
int ySize = poDataset->GetRasterYSize();
//Set output Dataset
GDALDataset *poDstDS;
char **papszOptions = NULL;
//Create output geotiff
poDstDS = poDriver->Create( outFName, xSize, ySize, 1, GDT_Byte, papszOptions );
//Get the geotrans from the input geotrans
poDataset->GetGeoTransform( adfGeoTransform );
poDstDS->SetGeoTransform( adfGeoTransform );
poDstDS->SetProjection( poDataset->GetProjectionRef() );
//Create some data to write
unsigned char rData[512*512];
//Assign some values other than 0
for (int col=0; col < 512; col++){
    for (int row=0; row < 512; row++){
        rData[col*row] = 50;
    }
}
//Write some data
poBand = poDstDS->GetRasterBand(1);
poBand->RasterIO( GF_Write, 200, 200, 512, 512,
rData, 512, 512, GDT_Byte, 0, 0 );
//Close
GDALClose( (GDALDatasetH) poDstDS );
std::cout << "Done" << std::endl;
Any ideas / pointers where I'm going wrong much appreciated.
Always something trivial...
rData[row*512+col] = 50;
Kudos to Even Rouault on osgeo.
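For clarity, here is the fill loop from the question with only the indexing corrected (a sketch; 512 is the block width used above):
//Assign some values other than 0, using row-major indexing
for (int row = 0; row < 512; row++){
    for (int col = 0; col < 512; col++){
        rData[row*512 + col] = 50;
    }
}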

How do I create a cudaTextureObject_t from linear memory?

I cannot get bindless textures referencing linear memory to work -- the result is always a zero/black read. My initialization code:
The buffer:
int const num = 4 * 16;
int const size = num * sizeof(float);
cudaMalloc(buffer, size);
auto b = new float[num];
for (int i = 0; i < num; ++i)
{
    b[i] = i % 4 == 0 ? 1 : 1;
}
cudaMemcpy(*buffer, b, size, cudaMemcpyHostToDevice);
The texture object:
cudaTextureDesc td;
memset(&td, 0, sizeof(td));
td.normalizedCoords = 0;
td.addressMode[0] = cudaAddressModeClamp;
td.addressMode[1] = cudaAddressModeClamp;
td.addressMode[2] = cudaAddressModeClamp;
td.readMode = cudaReadModeElementType;
td.sRGB = 0;
td.filterMode = cudaFilterModePoint;
td.maxAnisotropy = 16;
td.mipmapFilterMode = cudaFilterModePoint;
td.minMipmapLevelClamp = 0;
td.maxMipmapLevelClamp = 0;
td.mipmapLevelBias = 0;
struct cudaResourceDesc resDesc;
memset(&resDesc, 0, sizeof(resDesc));
resDesc.resType = cudaResourceTypeLinear;
resDesc.res.linear.devPtr = *buffer;
resDesc.res.linear.sizeInBytes = size;
resDesc.res.linear.desc.f = cudaChannelFormatKindFloat;
resDesc.res.linear.desc.x = 32;
resDesc.res.linear.desc.y = 32;
resDesc.res.linear.desc.z = 32;
resDesc.res.linear.desc.w = 32;
checkCudaErrors(cudaCreateTextureObject(texture, &resDesc, &td, nullptr));
The kernel:
__global__ void
d_render(uchar4 *d_output, uint imageW, uint imageH, float* buffer, cudaTextureObject_t texture)
{
    uint x = blockIdx.x * blockDim.x + threadIdx.x;
    uint y = blockIdx.y * blockDim.y + threadIdx.y;
    if ((x < imageW) && (y < imageH))
    {
        // write output color
        uint i = y * imageW + x;
        //auto f = make_float4(buffer[0], buffer[1], buffer[2], buffer[3]);
        auto f = tex1D<float4>(texture, 0);
        d_output[i] = to_uchar4(f * 255);
    }
}
The texture object is initialized to something sensible (4099) by the time it reaches the kernel. The buffer version (the commented-out line) works flawlessly.
Why does the texture object return zero/black?
As per the CUDA Programming Guide, you need to use tex1Dfetch() to read from one-dimensional textures bound to linear memory, and tex1D() to read from one-dimensional textures bound to CUDA arrays. This applies both to CUDA texture references and to CUDA textures passed by object.
The difference between the two APIs is the coordinate argument. Textures bound to linear memory can only be addressed by integer texel index (hence the int coordinate argument of tex1Dfetch()), whereas textures bound to CUDA arrays support both unnormalized and normalized floating-point coordinates (hence the float coordinate argument of tex1D()).
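A minimal sketch of the fix, keeping the kernel signature and the to_uchar4 helper from the question and only swapping the lookup call:
__global__ void
d_render(uchar4 *d_output, uint imageW, uint imageH, float* buffer, cudaTextureObject_t texture)
{
    uint x = blockIdx.x * blockDim.x + threadIdx.x;
    uint y = blockIdx.y * blockDim.y + threadIdx.y;
    if ((x < imageW) && (y < imageH))
    {
        uint i = y * imageW + x;
        // tex1Dfetch() addresses textures bound to linear memory by integer texel index
        float4 f = tex1Dfetch<float4>(texture, 0);
        d_output[i] = to_uchar4(f * 255);
    }
}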

Bitmap in C# into C++

I think this must be an easy question for somebody who uses bitmaps in C++. I have working code in C# - how do I do something similar in C++? Thanks for your help :-)
public Bitmap Visualize ()
{
    PixelFormat fmt = System.Drawing.Imaging.PixelFormat.Format24bppRgb;
    Bitmap result = new Bitmap( Width, Height, fmt );
    BitmapData data = result.LockBits( new Rectangle( 0, 0, Width, Height ), ImageLockMode.ReadOnly, fmt );
    unsafe
    {
        byte* ptr;
        for ( int y = 0; y < Height; y++ )
        {
            ptr = (byte*)data.Scan0 + y * data.Stride;
            for ( int x = 0; x < Width; x++ )
            {
                float num = 0.44f;
                byte c = (byte)(255.0f * num);
                ptr[0] = ptr[1] = ptr[2] = c;
                ptr += 3;
            }
        }
    }
    result.UnlockBits( data );
    return result;
}
Here is a raw translation to C++/CLI. I didn't run the example, so it may contain some typos. There are also different ways to get the same result in C++ (for example by using the standard CRT API).
Bitmap^ Visualize ()
{
    PixelFormat fmt = System::Drawing::Imaging::PixelFormat::Format24bppRgb;
    Bitmap^ result = gcnew Bitmap( Width, Height, fmt );
    BitmapData^ data = result->LockBits( Rectangle( 0, 0, Width, Height ), ImageLockMode::ReadOnly, fmt );
    for ( int y = 0; y < Height; y++ )
    {
        unsigned char* ptr = reinterpret_cast<unsigned char*>((data->Scan0 + y * data->Stride).ToPointer());
        for ( int x = 0; x < Width; x++ )
        {
            float num = 0.44f;
            unsigned char c = static_cast<unsigned char>(255.0f * num);
            ptr[0] = ptr[1] = ptr[2] = c;
            ptr += 3;
        }
    }
    result->UnlockBits( data );
    return result;
}
You can write very similar loops using the EasyBMP library.
C++ itself contains nothing related to images or image processing. Many libraries are available for this, and the way you operate on the data may differ for each.
At its most basic level, an image is just a bunch of bytes. If you can extract the pixel data alone (i.e., without headers or other metadata) into an unsigned char[] (or some other type appropriate to your image format), then you can iterate through each pixel much as you did in your C# example.
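A minimal sketch of that idea in plain C++, assuming a tightly packed 24-bit RGB buffer (the function name and parameters are placeholders, not from the question):
#include <cstddef>

// Fill every pixel of a packed RGB buffer with the same grey value,
// mirroring the loop in the C# example.
void FillGrey(unsigned char* pixels, int width, int height)
{
    const unsigned char c = static_cast<unsigned char>(255.0f * 0.44f);
    for (int y = 0; y < height; ++y)
    {
        unsigned char* row = pixels + static_cast<std::size_t>(y) * width * 3;
        for (int x = 0; x < width; ++x)
        {
            row[3*x + 0] = c; // R
            row[3*x + 1] = c; // G
            row[3*x + 2] = c; // B
        }
    }
}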

Create CImage from Byte array

I need to create a CImage from a byte array (actually, it's an array of unsigned char, but I can cast it to whatever form is necessary). The byte array is in the form "RGBRGBRGB...". The new image needs to contain a copy of the image bytes, rather than using the memory of the byte array itself.
I have tried many different ways of achieving this -- including going through various HBITMAP creation functions, trying to use BitBlt -- and nothing so far has worked.
To test whether the function works, it should pass this test:
BYTE* imgBits;
int width;
int height;
int Bpp; // BYTES per pixel (e.g. 3)
getImage(&imgBits, &width, &height, &Bpp); // get the image bits
// This is the magic function I need!!!
CImage img = createCImage(imgBits, width, height, Bpp);
// Test the image
BYTE* data = img.GetBits(); // data should now have the same data as imgBits
All implementations of createCImage() so far have ended up with data pointing to an empty (zero filled) array.
CImage supports DIBs quite neatly and has a SetPixel() method so you could presumably do something like this (uncompiled, untested code ahead!):
CImage img;
img.Create(width, height, 24 /* bpp */, 0 /* No alpha channel */);
int nPixel = 0;
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        BYTE r = imgBits[nPixel++];
        BYTE g = imgBits[nPixel++];
        BYTE b = imgBits[nPixel++];
        img.SetPixel(col, row, RGB(r, g, b)); // SetPixel takes (x, y), i.e. (col, row)
    }
}
Maybe not the most efficient method but I should think it is the simplest approach.
Use memcpy to copy the data, then SetDIBits or SetDIBitsToDevice depending on what you need to do. Take care though, the scanlines of the raw image data are aligned on 4-byte boundaries (IIRC, it's been a few years since I did this) so the data you get back from GetDIBits will never be exactly the same as the original data (well it might, depending on the image size).
So most likely you will need to memcpy scanline by scanline.
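A minimal sketch of that scanline-by-scanline copy using CImage's GetBits()/GetPitch(), assuming a 24 bpp image and a tightly packed RGB source like imgBits from the question. GetPitch() already accounts for the 4-byte row alignment and is negative for bottom-up DIBs; note this does not swap RGB to BGR.
CImage img;
img.Create(width, height, 24);

const int srcStride = width * 3;               // packed source rows
BYTE* dst = static_cast<BYTE*>(img.GetBits()); // address of pixel (0, 0)
const int dstPitch = img.GetPitch();           // bytes between rows, may be negative

for (int y = 0; y < height; ++y)
{
    memcpy(dst + static_cast<ptrdiff_t>(y) * dstPitch,
           imgBits + static_cast<ptrdiff_t>(y) * srcStride,
           srcStride);
}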
Thanks everyone, I managed to solve it in the end with your help. It mainly involved @tinman's and @Roel's suggestion to use SetDIBitsToDevice(), but it needed a bit of extra bit-twiddling and memory management, so I thought I'd share my end result here.
In the code below, I assume that width, height and Bpp (Bytes per pixel) are set, and that data is a pointer to the array of RGB pixel values.
// Create the header info
BITMAPINFOHEADER bmInfohdr;
bmInfohdr.biSize = sizeof(BITMAPINFOHEADER);
bmInfohdr.biWidth = width;
bmInfohdr.biHeight = -height;
bmInfohdr.biPlanes = 1;
bmInfohdr.biBitCount = Bpp*8;
bmInfohdr.biCompression = BI_RGB;
bmInfohdr.biSizeImage = width*height*Bpp;
bmInfohdr.biXPelsPerMeter = 0;
bmInfohdr.biYPelsPerMeter = 0;
bmInfohdr.biClrUsed = 0;
bmInfohdr.biClrImportant = 0;
BITMAPINFO bmInfo;
bmInfo.bmiHeader = bmInfohdr;
bmInfo.bmiColors[0].rgbBlue=255;
// Allocate some memory and some pointers
unsigned char * p24Img = new unsigned char[width*height*3];
BYTE *pTemp,*ptr;
pTemp=(BYTE*)data;
ptr=p24Img;
// Convert image from RGB to BGR
for (DWORD index = 0; index < width*height ; index++)
{
    unsigned char r = *(pTemp++);
    unsigned char g = *(pTemp++);
    unsigned char b = *(pTemp++);
    *(ptr++) = b;
    *(ptr++) = g;
    *(ptr++) = r;
}
// Create the CImage
CImage im;
im.Create(width, height, 24, NULL);
HDC dc = im.GetDC();
SetDIBitsToDevice(dc, 0,0,width,height,0,0, 0, height, p24Img, &bmInfo, DIB_RGB_COLORS);
im.ReleaseDC();
delete[] p24Img;
Here is a simpler solution. You can use GetPixelAddress(...) instead of all the BITMAPINFOHEADER and SetDIBitsToDevice work. Another problem I solved here is with 8-bit images, which need a color table defined.
CImage outImage;
outImage.Create(width, height, channelCount * 8);
int lineSize = width * channelCount;
if (channelCount == 1)
{
    // Define the color table
    RGBQUAD* tab = new RGBQUAD[256];
    for (int i = 0; i < 256; ++i)
    {
        tab[i].rgbRed = i;
        tab[i].rgbGreen = i;
        tab[i].rgbBlue = i;
        tab[i].rgbReserved = 0;
    }
    outImage.SetColorTable(0, 256, tab);
    delete[] tab;
}
// Copy pixel values
// Warning: does not convert from RGB to BGR
for ( int i = 0; i < height; i++ )
{
    void* dst = outImage.GetPixelAddress(0, i);
    const void* src = /* put the pointer to the i'th source row here */;
    memcpy(dst, src, lineSize);
}