BGFX setWindowSize issue - C++

I'm trying to set the window size via entry::setWindowSize (which goes through the native/common entry functions), but the window still comes up at 1280x720.
The buffer size was successfully set to the preferred size (800x600), though.
I'm testing this on Windows 10. Am I missing something?
m_width = 800;
m_height = 600;
init.platformData.nwh = entry::getNativeWindowHandle(entry::kDefaultWindowHandle);
init.platformData.ndt = entry::getNativeDisplayHandle();
entry::setWindowSize(entry::kDefaultWindowHandle, m_width, m_height);
init.resolution.width = m_width;
init.resolution.height = m_height;
init.resolution.reset = m_reset;
bgfx::init(init);

Solved. It was a bug and the author just fixed this issue:
https://github.com/bkaradzic/bgfx/commit/4f3ce6abcb05f0bf154ef447136793bfe2a7da92
https://github.com/bkaradzic/bgfx/discussions/3022#discussioncomment-4678807
For those who have an older version of bgfx: in
examples/common/entry/entry.cpp, entry::setWindowSize should be called before _app->init():
int runApp(AppI* _app, int _argc, const char* const* _argv)
{
    setWindowSize(kDefaultWindowHandle, s_width, s_height); // moved up, before init()
    _app->init(_argc, _argv, s_width, s_height);
    bgfx::frame();

Related

Loading a BMP image at a specific index in OpenGL

I have to load a 24-bit BMP file and draw it at a certain (x, y) position of a GLUT window using OpenGL. I found a function that uses the glaux library to do so. The color passed as ignoreColor is treated as transparent during rendering.
void iShowBMP(int x, int y, char filename[], int ignoreColor)
{
    AUX_RGBImageRec *TextureImage;
    TextureImage = auxDIBImageLoad(filename);

    int width = TextureImage->sizeX;
    int height = TextureImage->sizeY;
    int nPixels = width * height;
    int *rgPixels = new int[nPixels];

    for (int i = 0, j = 0; i < nPixels; i++, j += 3)
    {
        // Pack the 24-bit pixel into the low three bytes of rgb.
        int rgb = 0;
        for (int k = 2; k >= 0; k--)
        {
            rgb = ((rgb << 8) | TextureImage->data[j + k]);
        }
        // Alpha is 0 for the ignored color, 255 otherwise.
        rgPixels[i] = (rgb == ignoreColor) ? 0 : 255;
        rgPixels[i] = ((rgPixels[i] << 24) | rgb);
    }

    glRasterPos2f(x, y);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);

    delete[] rgPixels;
    free(TextureImage->data);
    free(TextureImage);
}
But the problem is that glaux is now obsolete. If I call this function, the image is rendered and shown for a minute; then an error pops up (without any error message) and the GLUT window disappears. Judging from the value returned to the console, it looks like a runtime error.
Is there any alternative to this function that doesn't use glaux? I have looked at CImg, DevIL, etc., but none of them seems to work like this iShowBMP function. I am doing my project in Code::Blocks.
I have to load the image every frame to keep the implementation consistent with other parts of the program. Also, the BMP file whose name is passed to the function has a width and height that are both powers of 2.
It turned out the last two free() calls were not being executed for some unknown reason, so memory consumption kept increasing; that's why the program was crashing after a moment. I later solved it by loading the image with stb_image.h instead.
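For reference, a rough sketch of the replacement using stb_image.h (the function name iShowBMP_stb is mine, and I haven't re-tested this exact snippet; STB_IMAGE_IMPLEMENTATION must be defined in exactly one source file):
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <GL/gl.h>

void iShowBMP_stb(int x, int y, const char *filename, int ignoreColor)
{
    int width = 0, height = 0, channels = 0;
    // Ask stb_image for 3 channels (RGB), matching the 24-bit BMP assumption.
    unsigned char *data = stbi_load(filename, &width, &height, &channels, 3);
    if (data == NULL)
        return; // stbi_failure_reason() tells you what went wrong

    int nPixels = width * height;
    unsigned int *rgPixels = new unsigned int[nPixels];
    for (int i = 0, j = 0; i < nPixels; i++, j += 3)
    {
        // Pack the pixel into the low 24 bits like the original, alpha into the top byte.
        unsigned int rgb = (data[j + 2] << 16) | (data[j + 1] << 8) | data[j];
        unsigned int alpha = (rgb == (unsigned int)ignoreColor) ? 0 : 255;
        rgPixels[i] = (alpha << 24) | rgb;
    }

    // glaux delivered rows bottom-up, while stb_image loads them top-down; call
    // stbi_set_flip_vertically_on_load(1) once at startup if the image shows up flipped.
    glRasterPos2f(x, y);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);

    delete[] rgPixels;
    stbi_image_free(data);
}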

How to feed cudaArray to Windows-Machine-Learning inference engine?

I am trying to develop an ML-powered plugin for a real-time image processing application that provides image data as cudaArray_t on the GPU. Because the application locks me into an older CUDA version, I would like to do the inference with DirectML (the application is Windows-only anyway).
For latency reasons I want to avoid unnecessary GPU-CPU-GPU round trips. My idea is to map the CUDA data to D3D12 resources, which can then be used to create the input and output tensors bound to the model. I found a sample that uses the CUDA External Resource Interoperability API to map a cudaArray_t to an ID3D12Resource, and I am basing my code on it. Since I don't need to render anything, I thought I could simply create the heap and resource and then copy the incoming cudaArray_t into the interop cudaArray_t as shown below, without creating any sort of command queue. Note that the missing code is the same as in the linked GitHub repo above, so I left it out for conciseness.
This approach does not work, but I am not sure how to debug it, as I am new to Direct3D programming and GPU programming in general. I am using the official Direct3D 12 docs as a reference, but they are a bit overwhelming, so some direction on what needs to be fixed here would be greatly appreciated :) I suspect I need a semaphore of some kind for synchronization, but I am not sure whether that works without creating a command queue.
bool initD3d12() {
    // Set up the D3D12 device
    UINT dxgiFactoryFlags = 0;
    winrt::com_ptr<IDXGIFactory4> factory;
    winrt::check_hresult(CreateDXGIFactory2(dxgiFactoryFlags, IID_PPV_ARGS(factory.put())));

    winrt::com_ptr<IDXGIAdapter1> hardwareAdapter;
    GetHardwareAdapter(factory.get(), hardwareAdapter.put());
    winrt::check_hresult(D3D12CreateDevice(hardwareAdapter.get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(m_d3d12Device.put())));

    DXGI_ADAPTER_DESC1 desc;
    hardwareAdapter->GetDesc1(&desc);
    m_dx12deviceluid = desc.AdapterLuid;
    return true;
}
void initCuda() {
    // Set up the CUDA device that matches the D3D12 adapter LUID
    int num_cuda_devices = 0;
    checkCudaErrors(cudaGetDeviceCount(&num_cuda_devices));
    if (!num_cuda_devices) {
        throw std::exception("No CUDA Devices found");
    }
    for (int devId = 0; devId < num_cuda_devices; devId++) {
        cudaDeviceProp devProp;
        checkCudaErrors(cudaGetDeviceProperties(&devProp, devId));
        if ((memcmp(&m_dx12deviceluid.LowPart, devProp.luid,
                    sizeof(m_dx12deviceluid.LowPart)) == 0) &&
            (memcmp(&m_dx12deviceluid.HighPart,
                    devProp.luid + sizeof(m_dx12deviceluid.LowPart),
                    sizeof(m_dx12deviceluid.HighPart)) == 0)) {
            checkCudaErrors(cudaSetDevice(devId));
            m_cudaDeviceID = devId;
            m_nodeMask = devProp.luidDeviceNodeMask;
            checkCudaErrors(cudaStreamCreate(&m_streamToRun));
            printf("CUDA Device Used [%d] %s\n", devId, devProp.name);
            break;
        }
    }
}
void copyArrayToResource(cudaArray_t cudaArray) {
    // Copy the incoming cudaArray into m_cudaArray, which is the mapped form of the D3D12 texture
    cudaMemcpy2DArrayToArray(
        m_cudaArray,                           // dst array (mapped D3D12 texture)
        0, 0,                                  // dst offset
        cudaArray, 0, 0,                       // src array and offset
        m_width * 4 * sizeof(float), m_height, // width in bytes, height in rows
        cudaMemcpyDeviceToDevice);             // kind
}
void createResource(size_t width, size_t height, ID3D12Resource** d3d12Resource) {
    // Create a D3D12 resource of the desired size and map it to a cudaArray
    m_width = width;
    m_height = height;

    // Create a D3D12 2D texture; assume a 32-bit float RGBA image
    const auto channels = 4;
    const auto textureSurface = width * height;
    const auto texturePixels = textureSurface * channels;
    const auto textureSizeBytes = sizeof(float) * texturePixels;
    const auto texFormat = channels == 4 ? DXGI_FORMAT_R32G32B32A32_FLOAT : DXGI_FORMAT_R32G32B32_FLOAT;
    const auto texDesc = CD3DX12_RESOURCE_DESC::Tex2D(texFormat, width, height, 1, 1, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS);
    D3D12_HEAP_PROPERTIES heapProperties = {
        D3D12_HEAP_TYPE_DEFAULT,
        D3D12_CPU_PAGE_PROPERTY_UNKNOWN,
        D3D12_MEMORY_POOL_UNKNOWN,
        0,
        0};
    winrt::check_hresult(m_d3d12Device->CreateCommittedResource(
        &heapProperties,
        D3D12_HEAP_FLAG_SHARED,
        &texDesc,
        D3D12_RESOURCE_STATE_COMMON,
        nullptr,
        IID_PPV_ARGS(d3d12Resource)));

    // Share the resource and import it as CUDA external memory
    HANDLE sharedHandle;
    WindowsSecurityAttributes windowsSecurityAttributes{};
    LPCWSTR name = NULL;
    winrt::check_hresult(m_d3d12Device->CreateSharedHandle(
        *d3d12Resource, &windowsSecurityAttributes, GENERIC_ALL, 0,
        &sharedHandle));

    D3D12_RESOURCE_ALLOCATION_INFO d3d12ResourceAllocationInfo;
    d3d12ResourceAllocationInfo = m_d3d12Device->GetResourceAllocationInfo(
        m_nodeMask, 1, &texDesc);
    size_t actualSize = d3d12ResourceAllocationInfo.SizeInBytes;
    size_t alignment = d3d12ResourceAllocationInfo.Alignment;

    cudaExternalMemoryHandleDesc externalMemoryHandleDesc;
    memset(&externalMemoryHandleDesc, 0, sizeof(externalMemoryHandleDesc));
    externalMemoryHandleDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    externalMemoryHandleDesc.handle.win32.handle = sharedHandle;
    externalMemoryHandleDesc.size = actualSize;
    externalMemoryHandleDesc.flags = cudaExternalMemoryDedicated;
    checkCudaErrors(
        cudaImportExternalMemory(&m_externalMemory, &externalMemoryHandleDesc));

    // Map the external memory as a mipmapped array and keep level 0
    cudaExternalMemoryMipmappedArrayDesc cuExtmemMipDesc{};
    cuExtmemMipDesc.extent = make_cudaExtent(width, height, 0);
    cuExtmemMipDesc.formatDesc = cudaCreateChannelDesc<float4>();
    cuExtmemMipDesc.numLevels = 1;
    cuExtmemMipDesc.flags = cudaArrayDefault;
    cudaMipmappedArray_t cuMipArray{};
    checkCudaErrors(cudaExternalMemoryGetMappedMipmappedArray(&cuMipArray, m_externalMemory, &cuExtmemMipDesc));
    checkCudaErrors(cudaGetMipmappedArrayLevel(&m_cudaArray, cuMipArray, 0));
}
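For the synchronization part, my current understanding is that I could create a shared D3D12 fence, import it into CUDA as an external semaphore, signal it from the CUDA stream after the copy, and have the D3D12 command queue that runs the DirectML work wait on it. A rough, untested sketch of what I have in mind (the members m_d3d12Fence, m_externalSemaphore and m_fenceValue are new, everything else is from the code above):
void createSyncObjects() {
    // Create a shared D3D12 fence and import it into CUDA as an external semaphore.
    winrt::check_hresult(m_d3d12Device->CreateFence(
        0, D3D12_FENCE_FLAG_SHARED, IID_PPV_ARGS(m_d3d12Fence.put())));

    HANDLE fenceHandle = nullptr;
    winrt::check_hresult(m_d3d12Device->CreateSharedHandle(
        m_d3d12Fence.get(), nullptr, GENERIC_ALL, nullptr, &fenceHandle));

    cudaExternalSemaphoreHandleDesc semDesc{};
    semDesc.type = cudaExternalSemaphoreHandleTypeD3D12Fence;
    semDesc.handle.win32.handle = fenceHandle;
    checkCudaErrors(cudaImportExternalSemaphore(&m_externalSemaphore, &semDesc));
}

void copyAndSignal(cudaArray_t cudaArray, ID3D12CommandQueue* queue) {
    // Copy as before, then signal the shared fence from the CUDA side.
    copyArrayToResource(cudaArray);

    cudaExternalSemaphoreSignalParams signalParams{};
    signalParams.params.fence.value = ++m_fenceValue;
    checkCudaErrors(cudaSignalExternalSemaphoresAsync(
        &m_externalSemaphore, &signalParams, 1, m_streamToRun));

    // The queue that will execute the DirectML/WinML work waits on the fence
    // before it consumes the texture.
    winrt::check_hresult(queue->Wait(m_d3d12Fence.get(), m_fenceValue));
}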
In the end, if the mapping to an ID3D12Resource works, I assume one could use the ITensorStaticsNative interface to create a tensor from the resource and bind it to an input or output of a LearningModel.
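If it does work, my plan for that last step is roughly the following (untested; the CreateFromD3D12Resource signature is what I see in the WinML native interop header windows.ai.machinelearning.native.h, and the tensor shape is a placeholder for whatever the model expects):
#include <winrt/Windows.AI.MachineLearning.h>
#include <windows.ai.machinelearning.native.h> // ITensorStaticsNative

using winrt::Windows::AI::MachineLearning::TensorFloat;

TensorFloat tensorFromResource(ID3D12Resource* resource, int64_t height, int64_t width)
{
    // NCHW shape for a single 4-channel float image; adjust to the model's input.
    int64_t shape[4] = { 1, 4, height, width };

    // The native factory sits behind the TensorFloat activation factory.
    winrt::com_ptr<ITensorStaticsNative> factory =
        winrt::get_activation_factory<TensorFloat, ITensorStaticsNative>();

    winrt::com_ptr<::IUnknown> spUnknown;
    winrt::check_hresult(factory->CreateFromD3D12Resource(
        resource, shape, 4, spUnknown.put()));

    return spUnknown.as<TensorFloat>();
}

// Then, with a LearningModelBinding named 'binding':
//   binding.Bind(L"input", tensorFromResource(d3d12Resource, m_height, m_width));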

How to get an SDL_PixelFormat from an SDL_PixelFormatEnum or SDL_Texture?

I've been trying to wrap my head around the basics of SDL, and I'm stumped by what should seem simple.
SDL_MapRGB() requires a const SDL_PixelFormat*, but I use an SDL_PixelFormatEnum value (a Uint32) for creating textures in my project, and I can't find any way of converting the enum for use with SDL_MapRGB().
There's probably an easier way than using SDL_MapRGB(), but this would still confuse me, since you can easily convert in the other direction.
Irrelevant, but if you wish to see the rest of the code, here you go.
#include <SDL.h>

SDL_Window *sdlWindow;
SDL_Renderer *sdlRenderer;

int main( int argc, char *args[] )
{
    int w = 640;
    int h = 480;
    Uint32 format = SDL_PIXELFORMAT_RGB888;

    SDL_CreateWindowAndRenderer(w, h, 0, &sdlWindow, &sdlRenderer);
    SDL_Texture *sdlTexture = SDL_CreateTexture(sdlRenderer, format, SDL_TEXTUREACCESS_STREAMING, w, h);

    extern uint32_t *pixels;
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            pixels[x + y * w] = SDL_MapRGB(format, 255, 255, 255);
        }
    }

    SDL_UpdateTexture(sdlTexture, NULL, pixels, 640 * sizeof (Uint32));
    SDL_RenderClear(sdlRenderer);
    SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, NULL);
    SDL_RenderPresent(sdlRenderer);
    SDL_Delay(5000);
    SDL_Quit();
    return 0;
}
Before you say it, I know this just makes a white screen.
So, SDL_PixelFormat and SDL_PixelFormatEnum are simply completely different types; you don't cast between them. You can, however, ask SDL to look up the SDL_PixelFormat corresponding to the Uint32 you mentioned:
/**
* Create an SDL_PixelFormat structure corresponding to a pixel format.
*
* Returned structure may come from a shared global cache (i.e. not newly
* allocated), and hence should not be modified, especially the palette. Weird
* errors such as `Blit combination not supported` may occur.
*
* \param pixel_format one of the SDL_PixelFormatEnum values
* \returns the new SDL_PixelFormat structure or NULL on failure; call
* SDL_GetError() for more information.
*
* \since This function is available since SDL 2.0.0.
*
* \sa SDL_FreeFormat
*/
extern DECLSPEC SDL_PixelFormat * SDLCALL SDL_AllocFormat(Uint32 pixel_format);
Source: SDL2 header
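A minimal usage sketch (again, not compiled): allocate the format once, map your colours through it, and free it when you're done:
SDL_PixelFormat *fmt = SDL_AllocFormat(format); // format is your SDL_PixelFormatEnum Uint32
if (fmt == NULL) {
    SDL_Log("SDL_AllocFormat failed: %s", SDL_GetError());
} else {
    Uint32 white = SDL_MapRGB(fmt, 255, 255, 255);
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            pixels[x + y * w] = white;
        }
    }
    SDL_FreeFormat(fmt);
}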
SDL docs are often a bit spotty, but my go-to places when I'm not sure about some SDL thing are these pages first, then the SDL2 headers themselves, and then maybe a web search in the hope it's mentioned in a forum post or something.
Hope this helped. (Note that I didn't try to compile anything here)

How to convert Windows Bitmap into Actionscript Bitmap in C++

To circumvent some (a lot) of the problems with the ActionScript Camera API on Windows 8 systems,
I decided to create a native extension to deal with the camera.
Right now the camera part and all the glue to communicate with the AIR runtime is working: clicking a button in AIR opens a new Windows window that returns a System::Drawing::Bitmap.
My task now would be to
a) create a FREBitmapData object and
b) fill in the BitmapData from the Windows Bitmap.
"Should be easy", I thought many days ago... As I'm not really familiar with C++, I didn't get this to work at all.
Here's what I tried so far:
bmp = form1->bitmap; // bmp is a handle to the System::Drawing::Bitmap returned from the external window
// Lock the bitmap's bits.
Rectangle rect = Rectangle(0, 0, bmp->Width, bmp->Height);
System::Drawing::Imaging::BitmapData^ bmpData = bmp->LockBits(rect, System::Drawing::Imaging::ImageLockMode::ReadWrite, bmp->PixelFormat);
// Get the address of the first line.
IntPtr ptr = bmpData->Scan0;
// Declare an array to hold the bytes of the bitmap.
// This code is specific to a bitmap with 24 bits per pixels.
int inputLength = Math::Abs(bmpData->Stride) * bmp->Height;
array<Byte>^ input = gcnew array<Byte>(inputLength);
// Copy the RGB values into the array.
System::Runtime::InteropServices::Marshal::Copy(ptr, input, 0, inputLength);
// Unlock the bits.
bmp->UnlockBits(bmpData);
// Create a FREByteArray to hold the data.
// Don't know, if this is necessary
FREObject* outputObject;
FREByteArray* outputBytes = new FREByteArray;
outputBytes->length = inputLength;
outputBytes->bytes = (uint8_t *) &input;
FREAcquireByteArray(outputObject, outputBytes);
// now copy it over
memcpy(outputBytes->bytes, &input, inputLength);
FREReleaseByteArray(outputObject);
// we create a new instance of BitmapData here,
// as we cannot simply pass it over in the args,
// because we don't know it's correct size at extension creation
FREObject* width;
FRENewObjectFromUint32(bmp->Width, width);
FREObject* height;
FRENewObjectFromUint32(bmp->Height, height);
FREObject* transparent;
FRENewObjectFromBool(uint32_t(0), transparent);
FREObject* fillColor;
FRENewObjectFromUint32(uint32_t(0xFFFFFF), fillColor);
FREObject obs[4] = { width, height, transparent, fillColor };
// we create some ActionScript instances here that we want to send back
FREObject* asBmpObj;
FRENewObject("BitmapData", 4, obs, asBmpObj, NULL);
// Now we have our AS bitmap data, copy bytes over
FREBitmapData* asData;
FREAcquireBitmapData(asBmpObj, asData);
// Now what? asData->bits32 won't accept array<Bytes> or any other value I've tried.
return asBmpObj;
The basic idea was:
a) find out the size and bit depth of the original Windows bitmap (the size is determined by the cam resolution picked in the camera window),
b) write its bytes to an array, converting to 32 bits as necessary (still missing any idea how),
c) create an AS bitmap of the same size, whose bit depth must always be 32,
d) copy the array over to the AS bitmap.
But I just can't achieve this.
Any advice? Thank you!
I don't think the following straight copy will work:
// Copy the RGB values into the array.
System::Runtime::InteropServices::Marshal::Copy(ptr, input, 0, inputLength);
You have to convert pixel by pixel. I don't know how to convert it to FREBitmapData, but here are examples you can follow on MSDN.
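For illustration, the per-pixel expansion from 24bpp to 32bpp could look like this plain C++ sketch (not FRE-specific; the helper name is mine, and it assumes BGR byte order and opaque alpha):
// Hypothetical helper: expand one row of 24bpp BGR pixels into 32bpp BGRA memory order.
void convertRow24to32(const unsigned char* srcRow, unsigned char* dstRow, int width)
{
    for (int x = 0; x < width; x++)
    {
        dstRow[x * 4]     = srcRow[x * 3];     // B
        dstRow[x * 4 + 1] = srcRow[x * 3 + 1]; // G
        dstRow[x * 4 + 2] = srcRow[x * 3 + 2]; // R
        dstRow[x * 4 + 3] = 0xFF;              // A (opaque)
    }
}
Note that GDI+ can also do this conversion for you: LockBits accepts a requested PixelFormat (e.g. Format32bppArgb) that differs from the bitmap's own format.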
I finally figured it out.
The code below doesn't deal with the 24-to-32-bit conversion, but it works quite well in my application, so I thought I'd share it:
FREObject launch(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[])
{
    System::Drawing::Bitmap^ windowsBitmap;
    SKILLCamControl::CamControlForm^ form1;
    form1 = gcnew SKILLCamControl::CamControlForm();
    DialogResult dr;

    // Show the camera form as a modal dialog and determine if DialogResult = OK.
    dr = form1->ShowDialog();
    if (dr == DialogResult::OK) {
        windowsBitmap = form1->bitmap;
        int bmpW = windowsBitmap->Width;
        int bmpH = windowsBitmap->Height;

        // we create a new instance of BitmapData here,
        // as we cannot simply pass it over in the args,
        // because we don't know its correct size at extension creation
        FREObject width;
        FRENewObjectFromUint32(uint32_t(bmpW), &width);
        FREObject height;
        FRENewObjectFromUint32(uint32_t(bmpH), &height);
        FREObject transparent;
        FRENewObjectFromBool(uint32_t(0), &transparent);
        FREObject fillColor;
        FRENewObjectFromUint32(uint32_t(0xFF0000), &fillColor);
        FREObject obs[4] = { width, height, transparent, fillColor };

        FREObject freBitmap;
        FRENewObject((uint8_t *)"flash.display.BitmapData", 4, obs, &freBitmap, NULL);
        FREBitmapData2 freBitmapData;
        FREAcquireBitmapData2(freBitmap, &freBitmapData);

        // Flip the source if the AS bitmap rows are stored bottom-up.
        // (Check the value of the flag, not its address.)
        if (freBitmapData.isInvertedY != 0) windowsBitmap->RotateFlip(RotateFlipType::RotateNoneFlipY);

        int pixelSize = 4;
        System::Drawing::Rectangle rect(0, 0, bmpW, bmpH);
        BitmapData^ windowsBitmapData = windowsBitmap->LockBits(rect, ImageLockMode::ReadOnly, PixelFormat::Format32bppArgb);
        for (int y = 0; y < bmpH; y++)
        {
            // Get a pointer to the start of the current row in each bitmap
            // (ToPointer() instead of ToInt32() so this also works in 64-bit builds).
            byte* oRow = (byte*)windowsBitmapData->Scan0.ToPointer() + (y * windowsBitmapData->Stride);
            byte* nRow = (byte*)freBitmapData.bits32 + (y * freBitmapData.lineStride32 * 4);
            for (int x = 0; x < bmpW; x++)
            {
                // copy the pixel channels
                nRow[x * pixelSize]     = oRow[x * pixelSize];     // B
                nRow[x * pixelSize + 1] = oRow[x * pixelSize + 1]; // G
                nRow[x * pixelSize + 2] = oRow[x * pixelSize + 2]; // R
            }
        }

        // Free resources
        FREReleaseBitmapData(freBitmap);
        FREInvalidateBitmapDataRect(freBitmap, 0, 0, bmpW, bmpH);
        windowsBitmap->UnlockBits(windowsBitmapData);
        delete windowsBitmapData;
        delete windowsBitmap;

        return freBitmap;
    }
    else if (dr == DialogResult::Cancel)
    {
        return NULL;
    }
    return NULL;
}
I don't use C++ myself, so this is not a full answer, just something to consider...
Bitmap data is universal raw pixel data; it should be passable between different pieces of software, unless you are actually creating .BMP files with a header etc.?
"...that will return a System::Drawing::Bitmap" - does this mean you have the bitmap's data held by C++ (as raw uncompressed RGBA pixels)? If so, then either put it inside a byte array and send it to AS3, or, if you can copy that bitmap to the Windows clipboard, use AS3 to read from the clipboard into a new AS3 Bitmap.
These might help you:
AS3: Copy image from clipboard
AS3: Serialize Bitmaps: scroll down to the section "ByteArray to BitmapData" (for this to work you must first store the C++ bitmap bytes as a file; call it what you want, e.g. tempIMG.dat or myPic.bin, since the file extension does not really matter, you just need a loadable URL).

Program crashes when calling new operator (C++)

I'm working my way through some tutorials I found on creating an ASCII game engine in C, writing my program in C++ for practice. I'm currently working on allocating image data on the heap in the form of an Image struct (containing an int width, an int height, and two pointers to heap arrays of chars, each [width * height] in size). However, I'm having some problems when calling the new operator. The function where I allocate the memory for the struct itself, as well as its character and colour data, looks like this:
Image *allocateImage(int width, int height) {
    Image *image;
    image = new Image;
    if (image == NULL)
        return NULL;

    image->width = width;
    image->height = height;
    image->chars = new CHAR[width * height];
    image->colours = new COL[width * height];
    //image->colours = (CHAR*) PtrAdd(image->chars, sizeof(CHAR) + width * height);

    for (int i = 0; i < width * height; ++i) { //initializes transparent image
        *(&image->chars + i) = 0;
        *(&image->colours + i) = 0;
    }
    return image;
}
The main function itself (where this function is called twice) looks like this:
int main() {
    int x, y, offsetx, offsety;
    DWORD i;
    srand(time(0));

    bool write = FALSE;
    INPUT_RECORD *eventBuffer;

    COLORREF palette[16] =
    {
        0x00000000, 0x00800000, 0x00008000, 0x00808000,
        0x00000080, 0x00800080, 0x00008080, 0x00c0c0c0,
        0x00808080, 0x00ff0000, 0x0000ff00, 0x00ffff00,
        0x000000ff, 0x00ff00ff, 0x0000ffff, 0x00ffffff
    };

    COORD bufferSize = {WIDTH, HEIGHT};
    DWORD num_events_read = 0;
    SMALL_RECT windowSize = {0, 0, WIDTH - 1, HEIGHT - 1};
    COORD characterBufferSize = {WIDTH, HEIGHT};
    COORD characterPosition = {0, 0};
    SMALL_RECT consoleWriteArea = {0, 0, WIDTH - 1, HEIGHT - 1};

    wHnd = GetStdHandle(STD_OUTPUT_HANDLE);
    rHnd = GetStdHandle(STD_INPUT_HANDLE);

    SetConsoleTitle("Title!");
    SetConsolePalette(palette, 8, 8, L"Sunkure Font");
    SetConsoleScreenBufferSize(wHnd, bufferSize);
    SetConsoleWindowInfo(wHnd, TRUE, &windowSize);

    for (y = 0; y < HEIGHT; ++y) {
        for (x = 0; x < WIDTH; ++x) {
            consoleBuffer[x + WIDTH * y].Char.AsciiChar = (unsigned char)219;
            consoleBuffer[x + WIDTH * y].Attributes = FOREGROUND_BLUE;
        }
    }
    write = TRUE;

    Image *sun_image = allocateImage(SUNW, SUNH);
    Image *cloud_image = allocateImage(CLOUDW, CLOUDH);
    setImage(sun_image, SUN.chars, SUN.colors);
    setImage(cloud_image, Cloud.chars, Cloud.colours);
I can post more code if anyone feels it's necessary, but the program only reaches this point. In fact it crashes a little before, on the second call to allocateImage, at the point in the function where the new operator is called. The program had been working fine until now; the only recent additions are the functions for allocating image data on the heap (for creating images of variable size) and for deallocating it (which this program never reaches). Since the program I'm learning from is written in C, this is one place where looking at its source code won't help me, and Google has not been much help either. Can anyone point me to what's going wrong?
These lines
*(&image->chars + i) = 0;
*(&image->colours + i) = 0;
are dubious because image->chars and image->colours are already pointers; taking their address with & gives a pointer to a pointer, which doesn't make sense here. Simply remove the &.
Since your actual code writes to essentially random addresses, anything can happen, so it is not unusual that you corrupt the heap and break the next call to new.
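For completeness, the corrected loop (the same as the original with the & removed):
for (int i = 0; i < width * height; ++i) { // initializes a transparent image
    image->chars[i] = 0;
    image->colours[i] = 0;
}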