How can I achieve noSmooth() with the P3D renderer? - opengl

I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In openFrameworks (OF) I remember calling setTextureMinMagFilter(GL_NEAREST, GL_NEAREST); on a texture.
What would be the equivalent in Processing?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;
void setup() {
  size(320, 240, P3D);
  noSmooth();
  //hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)g).textureSampling(0);
  //PGL pgl = beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
  //PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  //endPGL();
  buffer = createGraphics(width/8, height/8, P3D);
  buffer.noSmooth();
  buffer.beginDraw();
  //buffer.hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)buffer).textureSampling(0);
  PGL bpgl = buffer.beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; // commenting this back in results in a blank buffer
  PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  buffer.endPGL();
  buffer.background(0);
  buffer.stroke(255);
  buffer.line(0, 0, buffer.width, buffer.height);
  buffer.endDraw();
}
void draw() {
  image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)

You were actually on the right track. You were just passing the wrong value to textureSampling().
Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it with a decompiler, which led me to Texture::usingMipmaps().
There I was able to see the values and what they correspond to (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
PGraphicsOpenGL's default textureSampling is 5 (TRILINEAR).
I also later found this old comment on an issue, which confirms it.
So to get point/nearest filtering you only need to call noSmooth() on the application itself and call textureSampling(2) on your PGraphics:
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
Combining the above with the code you used to draw the line and to draw the buffer to the application gives the desired result.

I needed to combine GL_LINEAR and GL_NEAREST in one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
PGL pgl = beginPGL();
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();

Related

SDL_SetRenderTarget doesn't set the target

I am trying to write a C++ lambda that is registered and to be used in Lua using the Sol2 binding. The callback below should create an SDL_Texture, and clear it to a color. A Lua_Texture is just a wrapper for an SDL_Texture, and l_txt.texture is of type SDL_Texture*.
lua.set_function("init_texture",
[render](Lua_Texture &l_txt, int w, int h)
{
// free any previous texture
l_txt.deleteTexture();
l_txt.texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, w, h);
SDL_SetRenderTarget(render, l_txt.texture);
SDL_Texture *target = SDL_GetRenderTarget(render);
assert(l_txt.texture == target);
assert(target == nullptr);
SDL_SetRenderDrawColor(render, 0xFF, 0x22, 0x22, 0xFF);
SDL_RenderClear(render);
});
My problem is that SDL_SetRenderTarget isn't functioning as I'd expect. I try to set the texture as the target so I can clear its color, but when I try to draw the texture to the screen it is still blank. The asserts in the above code both fail, and show that the current target texture is not set to the texture I am trying to clear and later use, nor is it null (which is the expected value if there is no current target texture).
I have used this snippet of code before in plain C++ (not as a Lua callback) and it works as intended. Somehow, embedding it in Lua causes the behavior to change. Any help is very much appreciated, as I've been pulling my hair out over this for a while. Thanks!
I may have an answer for you, but you're not going to like it.
It looks like SDL_GetRenderTarget doesn't work as expected.
I got the exact same problem you have (that's how I found your question), and I could reproduce it reliably using this simple program:
int rendererIndex;
[snipped code: rendererIndex is set to the index of the DX11 renderer]
SDL_Renderer * renderer = SDL_CreateRenderer(pWindow->pWindow, rendererIndex, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
SDL_Texture* rtTexture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 200, 200);
SDL_SetRenderTarget(renderer, rtTexture);
if(SDL_GetRenderTarget(renderer) != rtTexture)
printf("ERROR.");
This always produces:
ERROR.
The workaround I used is that I save the pointer to the render target texture that I set on the renderer, and I don't use SDL_GetRenderTarget.
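A minimal sketch of that workaround, using a small wrapper of my own (none of these names are SDL API): set the target as usual, but keep your own pointer instead of asking SDL for it back.
#include <SDL.h>

static SDL_Texture *g_currentTarget = nullptr; // our own record of the active render target

void SetRenderTargetTracked(SDL_Renderer *renderer, SDL_Texture *texture)
{
    SDL_SetRenderTarget(renderer, texture); // set it as usual...
    g_currentTarget = texture;              // ...but remember what we asked for
}

SDL_Texture *GetRenderTargetTracked()
{
    // never call SDL_GetRenderTarget, so the native-texture substitution described below can't surprise us
    return g_currentTarget;
}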
EDIT:
I was curious why I didn't get the correct render target back, so I looked through SDL2's source code and found out why (code snipped for clarity):
int
SDL_SetRenderTarget(SDL_Renderer *renderer, SDL_Texture *texture)
{
// CODE SNIPPED
/* texture == NULL is valid and means reset the target to the window */
if (texture) {
CHECK_TEXTURE_MAGIC(texture, -1);
if (renderer != texture->renderer) {
return SDL_SetError("Texture was not created with this renderer");
}
if (texture->access != SDL_TEXTUREACCESS_TARGET) {
return SDL_SetError("Texture not created with SDL_TEXTUREACCESS_TARGET");
}
// *** EMPHASIS MINE : This is the problem.
if (texture->native) {
/* Always render to the native texture */
texture = texture->native;
}
}
// CODE SNIPPED
renderer->target = texture;
// CODE SNIPPED
}
SDL_Texture *
SDL_GetRenderTarget(SDL_Renderer *renderer)
{
return renderer->target;
}
In short, the renderer saves the current render target in renderer->target, but not before converting the current texture to its native form. When we use SDL_GetRenderTarget, we're getting that native texture, which may or may not be different.

Direct2D: Texture atlas creation

My goal is to create a texture atlas in my DirectX application. What I have is a vector of ID2D1PathGeometries which need to be put on a texture atlas. So I create an ID2D1Bitmap1, but I have no clue what my next step is. In other words, how exactly do I lay an ID2D1PathGeometry onto an ID2D1Bitmap1 at the spot I need?
P.S. It's worth mentioning that I'm kind of a newbie in DirectX, and when I try to look for an answer on MSDN I just keep getting lost in everything Direct2D provides.
Thank you.
P.P.S. Code requested:
There is not much to show, as I mentioned already.
std::vector<Microsoft::WRL::ComPtr<ID2D1PathGeometry>> atlasGeometries; // so I have my geometries
//// then I fill the vector
{
....
}
//// creating the bitmap for the font sheet
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_SIZE_U dimensions;
dimensions.height = 1024;
dimensions.width = 1024;
D2D1_BITMAP_PROPERTIES1 d2dbp;
D2D1_PIXEL_FORMAT d2dpf;
FLOAT dpiX;
FLOAT dpiY;
d2dpf.format = DXGI_FORMAT_A8_UNORM;
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
this->dxDevMt.GetD2DFactory()->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.pixelFormat = d2dpf;
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
d2dbp.colorContext = nullptr;
newCtx->CreateBitmap(dimensions, nullptr, 0, d2dbp, bitmap.GetAddressOf());
But what to do next is a quest for me. I kind of figured out that I should use a render target for this kind of thing, but I failed to figure out how exactly.
The problem was solved by using the bitmap as a render target.
The idea is to:
create a new D2D device,
create a new device context,
set the bitmap as the render target,
and render everything that needs to be rendered (a rough sketch follows).
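A rough sketch of what that looks like, reusing newCtx and bitmap from the code above (the answer creates a fresh device and device context, but the drawing calls are the same). The solid brush and the cellX/cellY placement helpers are placeholders I made up for whatever packing scheme you use:
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> brush;
newCtx->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), brush.GetAddressOf());
newCtx->SetTarget(bitmap.Get()); // the ID2D1Bitmap1 created with D2D1_BITMAP_OPTIONS_TARGET
newCtx->BeginDraw();
newCtx->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f)); // start from an empty atlas
for (size_t i = 0; i < atlasGeometries.size(); ++i)
{
    // move each geometry into its own cell of the atlas before filling it
    newCtx->SetTransform(D2D1::Matrix3x2F::Translation(cellX(i), cellY(i))); // cellX/cellY: hypothetical packing helpers
    newCtx->FillGeometry(atlasGeometries[i].Get(), brush.Get());
}
newCtx->SetTransform(D2D1::Matrix3x2F::Identity());
newCtx->EndDraw();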

Can't get texture.Sample to work, although I can get texture.Load to work fine in a Direct3D 11 shader

In my HLSL for a Direct3D 11 app, I'm having a problem where the texture.Sample intrinsic always returns 0. I know my data and parameters are correct because if I use texture.Load instead of Sample, the value returned is correct.
Here are my declarations:
extern Texture2D<float> texMask;
SamplerState TextureSampler : register (s2);
Here is the code in my pixel shader that works; this confirms that my texture is correctly available to the shader and that my texcoord values are correct:
float maskColor = texMask.Load(int3(8192*texcoord.x, 4096*texcoord.y, 0));
If I instead substitute the following line, maskColor is always 0, and I can't figure out why.
float maskColor = texMask.Sample(TextureSampler, texcoord);
TextureSampler has the default state values; texMask is defined with 1 mip level.
I've also tried:
float maskColor = texMask.SampleLevel(TextureSampler, texcoord, 0);
and that also always returns 0.
C++ code for setting up sampler:
D3D11_SAMPLER_DESC sd;
ZeroMemory(&sd, sizeof(D3D11_SAMPLER_DESC));
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
ID3D11SamplerState* pSampler;
dev->CreateSamplerState(&sd, &pSampler);
devcon->PSSetSamplers(2, 1, &pSampler);
Forgive me for reviving such an old post, but I figured it was important to add another possible cause for this sort of issue, and this post is the most relevant place I could find to post in.
I, too, had an issue where the HLSL Sample function would always return 0, but only on specific textures and not on others. I checked that the texture was properly bound and that the color values should not have been 0, and was still left wondering why I always got 0 back for this one specific texture but not for others used in the same shader pass. The Load function worked fine, but then I lost the nice features that samplers give us.
As it turns out, in my case, I had accidentally created this texture's description as:
D3D11_TEXTURE2D_DESC desc;
desc.Width = _width;
desc.Height = _height;
desc.MipLevels = 0; // <- Bad!
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
This worked and created a texture that was visible and renderable. However, when MipLevels is set to 0, DirectX generates an entire mip chain for that texture. Me being me, I forgot this while working further on my project, and while DirectX may generate the textures for the mip chain, drawing to the texture does not cascade through all the levels of the chain (which does make sense, I suppose).
Now, I suppose it's important to note that I'm still new to the whole graphics programming thing, if that wasn't already obvious enough. I have absolutely no idea what mip level, or combination of mip levels, the regular Sample function uses. But I can say that in my case, it didn't happen to be level 0. Maybe it would be for a smaller mip chain, but this texture in particular had 12 levels in total, of which only level 0 had any actual color information drawn to it. Using the Load function, or SampleLevel to explicitly access mip level 0, worked fine. As I do not need, nor want, the texture I'm trying to sample to have a mip chain, I simply changed its description to fix it.
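In other words (as I read this answer), the fix is simply to not ask for a mip chain you never fill:
desc.MipLevels = 1; // a single mip level, so the level you draw to is the level Sample() reads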
I found my problem -- I needed to specify a register for the texture as well as the sampler in my HLSL. I can't find any documentation anywhere that describes why this is necessary, but it did fix my problem.
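For anyone else landing here, a minimal sketch of what that change looks like; the t0 slot for the texture is an assumption on my part, while s2 matches the PSSetSamplers(2, ...) call in the question:
// HLSL declarations with explicit registers (assumed slots):
//   Texture2D<float> texMask        : register(t0);
//   SamplerState     TextureSampler : register(s2);
// C++ side: bind the SRV and the sampler to the same slots the registers name.
extern ID3D11ShaderResourceView *pMaskSRV;     // the SRV created for texMask elsewhere
devcon->PSSetShaderResources(0, 1, &pMaskSRV); // slot 0 <-> register(t0)
devcon->PSSetSamplers(2, 1, &pSampler);        // slot 2 <-> register(s2)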

How to efficiently render a small sprite in Direct3D / C++ on a large Window (DWM)?

I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of executing Direct3D on the DWM is based on this article on Code Project
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it will give up to 40% GPU Load according to GPU-Z. Even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU Load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while(true) {
g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, NULL);
Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not, am I right? Can I tell Direct3D to only re-render specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render a specific part by providing RGNDATA Dirty region information to PresentEx. It is now 1% GPU Load instead of 20-40%.
std::vector<RECT> dirtyRects;
//Fill dirtyRects with previous and new cursor boundaries
DWORD size = dirtyRects.size() * sizeof(RECT)+sizeof(RGNDATAHEADER);
RGNDATA *rgndata = NULL;
rgndata = (RGNDATA *)HeapAlloc(GetProcessHeap(), 0, size);
RECT* pRectInitial = (RECT*)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (int i = 0; i < dirtyRects.size(); i++)
{
RECT rectCurrent = dirtyRects[i];
rectBounding.left = min(rectBounding.left, rectCurrent.left);
rectBounding.right = max(rectBounding.right, rectCurrent.right);
rectBounding.top = min(rectBounding.top, rectCurrent.top);
rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);
*pRectInitial = dirtyRects[i];
pRectInitial++;
}
//preparing rgndata header
RGNDATAHEADER header;
header.dwSize = sizeof(RGNDATAHEADER);
header.iType = RDH_RECTANGLES;
header.nCount = dirtyRects.size();
header.nRgnSize = dirtyRects.size() * sizeof(RECT);
header.rcBound.left = rectBounding.left;
header.rcBound.top = rectBounding.top;
header.rcBound.right = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;
// Update display
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);
But there is something I do not understand. It will only give 1% GPU load if I add the following:
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway, so that's fine, but instead I get some weird tearing effects after a while. It is more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The above tearing seems to differ from computer to computer.

Text rendering terribly slow

I'm using the FTGL library to render text in my C++/OpenGL application, but I find it terribly slow, even though it is said to be a fast and efficient library for this.
Even for small amounts of text the performance drop is visible, but when I try to render a few lines of text, FPS drops from ~350 to ~30.
Yes, I already know that FPS isn't a good way to measure efficiency, yet in this case there shouldn't be such a big difference.
I found a function which lets me make FTGL use display lists internally in order to increase speed, but it appears to be turned on by default. I tried using it anyway, but it gave me nothing. So I thought that maybe it's somehow broken, or that I don't understand it quite well, so I decided to put the text rendering into my own display lists, but the difference is either so slight that I can't see it, or there is no difference at all.
bool TFontManager::renderWrappedText(font_ptr font, int lineLength, const TPoint& position, const std::string& text) {
if(font == nullptr) {
return false;
}
string key = sizeToString(font->FaceSize()); // key to look for it in map
key.append(TUtil::intToString(lineLength));
key.append(text);
GLuint displayListId = getDisplayListId(key); // get display list id from internal map
if(displayListId != 0) { // if display list id was found in map, i can call it
glCallList(displayListId);
return true;
}
// if id was not found, i'm creating new display list
FTSimpleLayout simpleLayout;
simpleLayout.SetLineLength((float)lineLength);
simpleLayout.SetFont(font.get());
displayListId = glGenLists(1);
glNewList(displayListId, GL_COMPILE);
glPushMatrix();
glTranslatef(position.x, position.y, 0.0f);
simpleLayout.Render(TUtil::stringToWString(text).c_str(), -1, FTPoint(), FTGL::RENDER_FRONT | FTGL::RENDER_BACK); // according to visual studio's profiler, bottleneck is inside this function. more exactly in drawing textured quads when i looked into FTGL code.
glPopMatrix();
glEndList();
m_textDisplayLists[key] = displayListId;
glCallList(displayListId);
return true;
}
I checked with breakpoints in debug mode - it creates display list only once, later it only calls previously created one.
What might be the reason for such slow rendering? How may I speed it up?
Edit:
I'm using FTTextureFont (which uses one texture per glyph). According to this FTGL tutorial, I should rather use FTBufferFont, because it uses only one texture per line. The buffer font should be faster, but when I tried it, it was uglier and even slower (6 fps, whereas the texture font gave me 30 fps).
Edit2:
This is how I create my fonts:
font_ptr TFontManager::getFont(const std::string& filename, int size) {
string fontKey = filename;
fontKey.append(sizeToString(size));
FontIter result = fonts.find(fontKey);
if(result != fonts.end()) {
return result->second; // Found font in list
}
// If font wasn't found, create a new one and store it in list of fonts
font_ptr font(new FTTextureFont(filename.c_str()));
font->UseDisplayList(true);
if(font->Error()) {
string message = "Failed to open font";
message.append(filename);
TError::showMessage(message);
return nullptr;
}
if(!font->FaceSize(size)) {
string message = "Failed to set font size";
TError::showMessage(message);
return nullptr;
}
fonts[fontKey] = font;
return font;
}
Edit3:
This is the function, taken from the FTGL library source code, that renders a glyph in FTTextureFont. It uses the same texture for separate glyphs, just with different coordinates, so this shouldn't be the problem.
const FTPoint& FTTextureGlyphImpl::RenderImpl(const FTPoint& pen,
int renderMode)
{
float dx, dy;
if(activeTextureID != glTextureID)
{
glBindTexture(GL_TEXTURE_2D, (GLuint)glTextureID);
activeTextureID = glTextureID;
}
dx = floor(pen.Xf() + corner.Xf());
dy = floor(pen.Yf() + corner.Yf());
glBegin(GL_QUADS);
glTexCoord2f(uv[0].Xf(), uv[0].Yf());
glVertex2f(dx, dy);
glTexCoord2f(uv[0].Xf(), uv[1].Yf());
glVertex2f(dx, dy - destHeight);
glTexCoord2f(uv[1].Xf(), uv[1].Yf());
glVertex2f(dx + destWidth, dy - destHeight);
glTexCoord2f(uv[1].Xf(), uv[0].Yf());
glVertex2f(dx + destWidth, dy);
glEnd();
return advance;
}
Rendering typography from normal typeface files is a pretty computationally intensive operation. The font glyphs are read as a set of splines that are used to generate character boundaries which are tessellated and fed into the graphics pipeline. I'm not highly familiar with FreeType2 but I have used FTGL. You should be using a FontAtlas to render type. A FontAtlas is a regular texture atlas (much like a sprite sheet) that is rendered once for each font size and then stored for future glyph renders.
Check out this link for more information on the process:
http://antongerdelan.net/opengl4/freetypefonts.html
This should greatly improve performance, although you may lose out on some font-rendering flexibility.
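To make the atlas idea concrete, here is a very rough sketch of the approach (the data layout and names are mine, not FTGL's): bake every glyph of a face/size into one texture up front, then draw a whole string as textured quads that all reference that single texture.
#include <GL/gl.h>

struct AtlasGlyph {
    float u0, v0, u1, v1; // texture coordinates of the glyph inside the atlas
    float width, height;  // quad size in pixels
    float advance;        // how far to move the pen after this glyph
};

// Baked once per font/size (e.g. from FreeType bitmaps), then reused every frame.
GLuint atlasTexture;
AtlasGlyph glyphs[128]; // ASCII only, for the sketch

void drawText(const char *text, float x, float y) {
    glBindTexture(GL_TEXTURE_2D, atlasTexture); // one texture bind for the whole string
    glBegin(GL_QUADS);
    for (const char *c = text; *c; ++c) {
        const AtlasGlyph &g = glyphs[(unsigned char)*c & 0x7F];
        glTexCoord2f(g.u0, g.v0); glVertex2f(x,           y);
        glTexCoord2f(g.u0, g.v1); glVertex2f(x,           y - g.height);
        glTexCoord2f(g.u1, g.v1); glVertex2f(x + g.width, y - g.height);
        glTexCoord2f(g.u1, g.v0); glVertex2f(x + g.width, y);
        x += g.advance;
    }
    glEnd();
}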