I am trying to make a path tracer using OpenTK and a compute shader, but I have been struggling with the texture repeating at the edges of my skybox. I followed the cubemap tutorial from LearnOpenGL and adapted it to work with my compute shader, but I have not been able to get rid of these artifacts.
This is the snippet that loads the skybox texture:
private TextureHandle _skyboxTexture;
...
protected override void OnLoad() {
    base.OnLoad();
    GL.Enable(EnableCap.TextureCubeMapSeamless);
    ...
    _skyboxTexture = GL.CreateTexture(TextureTarget.TextureCubeMap);
    GL.BindTexture(TextureTarget.TextureCubeMap, _skyboxTexture);
    foreach (var file in Directory.GetFiles(@"Images\Skybox")) {
        using (var image = SixLabors.ImageSharp.Image.Load(file)) {
            image.Mutate(img => img.Rotate(180)); // without this the textures don't line up
            using (var ms = new MemoryStream()) {
                image.Save(ms, new BmpEncoder());
                GL.TexImage2D(Texture.CubeMapTextureTargetFromString(file), 0, (int)InternalFormat.Rgb, 2048, 2048, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgr, PixelType.UnsignedByte, ms.ToArray());
            }
        }
    }
    GL.TexParameteri(TextureTarget.TextureCubeMap, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
    GL.TexParameteri(TextureTarget.TextureCubeMap, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
    GL.TexParameteri(TextureTarget.TextureCubeMap, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
    GL.TexParameteri(TextureTarget.TextureCubeMap, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);
    GL.TexParameteri(TextureTarget.TextureCubeMap, TextureParameterName.TextureWrapR, (int)TextureWrapMode.ClampToEdge);
    ...
}
These are a couple of screenshots from RenderDoc; you can clearly see the artifact in the skybox texture. In the other picture you can see that the clamping and seamless settings are loaded correctly.
I don't think it is an issue with the sampling logic in my compute shader, because in RenderDoc I can see the artifact in the texture itself. I also tried saving the image from the MemoryStream to a .bmp to check whether something goes wrong while loading the image, but the exported image looks fine. It's also not a problem with these particular skybox textures; it happens with every texture I try.
Thanks to derhass, I got it working. The problem was that the BMP encoder writes a file header before the pixel data, and those header bytes were uploaded as pixels, misaligning the textures. I am using ImageSharp for image processing, so I created a new class that implements IImageEncoder and just writes the raw RGB values to the stream. I don't need the alpha channel, so I omitted it, but it could easily be added.
public class RawBytesEncoder : IImageEncoder {
    public void Encode<TPixel>(Image<TPixel> image, Stream stream) where TPixel : unmanaged, IPixel<TPixel> {
        // Write the pixels row by row as tightly packed RGB, with no header and no row padding.
        for (var y = 0; y < image.Height; y++)
        for (var x = 0; x < image.Width; x++) {
            var target = new Rgba32();
            image[x, y].ToRgba32(ref target);
            stream.WriteByte(target.R);
            stream.WriteByte(target.G);
            stream.WriteByte(target.B);
        }
    }

    public Task EncodeAsync<TPixel>(Image<TPixel> image, Stream stream, CancellationToken cancellationToken) where TPixel : unmanaged, IPixel<TPixel> {
        throw new NotImplementedException();
    }
}
I also had to change the loading code. The image no longer needs to be rotated 180 degrees but has to be flipped horizontally instead, and the PixelFormat passed to GL.TexImage2D is Rgb again:
foreach (var file in Directory.GetFiles(@"Images\Skybox")) {
    using (var image = Image.Load(file)) {
        image.Mutate(img => img.Flip(FlipMode.Horizontal));
        using (var ms = new MemoryStream()) {
            image.Save(ms, new RawBytesEncoder());
            GL.TexImage2D(Texture.CubeMapTextureTargetFromString(file), 0, (int)InternalFormat.Rgb, 2048, 2048, 0, PixelFormat.Rgb, PixelType.UnsignedByte, ms.ToArray());
        }
    }
}
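For what it's worth, a shorter route may work on a recent ImageSharp (2.x or newer), where Image<TPixel>.CopyPixelDataTo lets you skip the custom encoder entirely. A minimal sketch, assuming that API exists in your version:

// Minimal sketch, assuming ImageSharp 2.x+ where CopyPixelDataTo is available:
// load as packed 24-bit RGB and hand the raw bytes to glTexImage2D directly.
foreach (var file in Directory.GetFiles(@"Images\Skybox")) {
    using (var image = SixLabors.ImageSharp.Image.Load<Rgb24>(file)) {
        image.Mutate(img => img.Flip(FlipMode.Horizontal));
        var pixels = new byte[image.Width * image.Height * 3]; // tightly packed, no header
        image.CopyPixelDataTo(pixels);
        GL.TexImage2D(Texture.CubeMapTextureTargetFromString(file), 0,
            (int)InternalFormat.Rgb, image.Width, image.Height, 0,
            PixelFormat.Rgb, PixelType.UnsignedByte, pixels);
    }
}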
A little background: I'm attempting to make a Windows (10) application which makes the screen look like an old CRT monitor, scanlines, blur, and all. I'm using the official Microsoft screen capture demo as a starting point. At this stage I can capture a window and display it back in a new mouse-through window as if it were the original window.
I am attempting to use the CRT-Royale CRT shaders, which are generally considered the best CRT shaders; these are available in .cg format. I transpile them to HLSL with cgc, then compile the HLSL files to shader bytecode with fxc. I am able to successfully load the compiled shaders and create the pixel shader, which I then set on the D3D context. I then attempt to copy the capture surface frame to a pixel shader resource and set the created shader resource. All of this builds and runs, but I do not see any difference in the output image and am not sure how to proceed. Below is the relevant code. I am not a C++ developer and am making this as a personal project which I plan on open sourcing once I have a primitive working version. Any advice is appreciated, thanks.
SimpleCapture::SimpleCapture(
    IDirect3DDevice const& device,
    GraphicsCaptureItem const& item)
{
    m_item = item;
    m_device = device;

    // Set up
    auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
    d3dDevice->GetImmediateContext(m_d3dContext.put());
    auto size = m_item.Size();

    m_swapChain = CreateDXGISwapChain(
        d3dDevice,
        static_cast<uint32_t>(size.Width),
        static_cast<uint32_t>(size.Height),
        static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
        2);

    // ADDED THIS
    HRESULT hr1 = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
    HRESULT hr = d3dDevice->CreatePixelShader(
        ps_1_buffer->GetBufferPointer(),
        ps_1_buffer->GetBufferSize(),
        nullptr,
        &ps_1
    );
    m_d3dContext->PSSetShader(
        ps_1,
        nullptr,
        0
    );
    // END OF ADDED CHANGES

    // Create framepool, define pixel format (DXGI_FORMAT_B8G8R8A8_UNORM), and frame size.
    m_framePool = Direct3D11CaptureFramePool::Create(
        m_device,
        DirectXPixelFormat::B8G8R8A8UIntNormalized,
        2,
        size);
    m_session = m_framePool.CreateCaptureSession(m_item);
    m_lastSize = size;
    m_frameArrived = m_framePool.FrameArrived(auto_revoke, { this, &SimpleCapture::OnFrameArrived });
}
void SimpleCapture::OnFrameArrived(
    Direct3D11CaptureFramePool const& sender,
    winrt::Windows::Foundation::IInspectable const&)
{
    auto newSize = false;
    {
        auto frame = sender.TryGetNextFrame();
        auto frameContentSize = frame.ContentSize();

        if (frameContentSize.Width != m_lastSize.Width ||
            frameContentSize.Height != m_lastSize.Height)
        {
            // The thing we have been capturing has changed size.
            // We need to resize our swap chain first, then blit the pixels.
            // After we do that, retire the frame and then recreate our frame pool.
            newSize = true;
            m_lastSize = frameContentSize;
            m_swapChain->ResizeBuffers(
                2,
                static_cast<uint32_t>(m_lastSize.Width),
                static_cast<uint32_t>(m_lastSize.Height),
                static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
                0);
        }

        {
            auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
            com_ptr<ID3D11Texture2D> backBuffer;
            check_hresult(m_swapChain->GetBuffer(0, guid_of<ID3D11Texture2D>(), backBuffer.put_void()));

            // ADDED THIS
            D3D11_TEXTURE2D_DESC txtDesc = {};
            txtDesc.MipLevels = txtDesc.ArraySize = 1;
            txtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
            txtDesc.SampleDesc.Count = 1;
            txtDesc.Usage = D3D11_USAGE_IMMUTABLE;
            txtDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
            auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
            ID3D11Texture2D *tex;
            d3dDevice->CreateTexture2D(&txtDesc, NULL, &tex);
            frameSurface.copy_to(&tex);
            d3dDevice->CreateShaderResourceView(
                tex,
                nullptr,
                srv_1
            );
            auto texture = srv_1;
            m_d3dContext->PSSetShaderResources(0, 1, texture);
            // END OF ADDED CHANGES

            m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
        }
    }

    DXGI_PRESENT_PARAMETERS presentParameters = { 0 };
    m_swapChain->Present1(1, 0, &presentParameters);
    ... // Truncated
Shaders define how things are drawn. However, you don't draw anything - you just copy, which is why the shader doesn't do anything.
What you should do is remove the CopyResource call and instead draw a full-screen quad to the back buffer. That requires you to create a vertex buffer that you can bind, then set the back buffer as the render target, and finally call Draw/DrawIndexed to actually render something, which will invoke the shader.
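To illustrate, here is a rough sketch of that draw path. It assumes hypothetical members m_fullscreenVS (a vertex shader that emits a full-screen triangle from SV_VertexID, a common variant that avoids the vertex buffer entirely) and m_sampler, plus the srv_1 created in your code; error handling is omitted:

// Sketch only: replace the CopyResource call with an actual draw.
// 1. Bind the back buffer as the render target.
com_ptr<ID3D11RenderTargetView> rtv;
d3dDevice->CreateRenderTargetView(backBuffer.get(), nullptr, rtv.put());
ID3D11RenderTargetView* rtvs[] = { rtv.get() };
m_d3dContext->OMSetRenderTargets(1, rtvs, nullptr);

D3D11_VIEWPORT vp = {};
vp.Width = static_cast<float>(m_lastSize.Width);
vp.Height = static_cast<float>(m_lastSize.Height);
vp.MaxDepth = 1.0f;
m_d3dContext->RSSetViewports(1, &vp);

// 2. Bind the shaders and the captured frame as pixel shader input.
m_d3dContext->VSSetShader(m_fullscreenVS.get(), nullptr, 0); // hypothetical VS
m_d3dContext->PSSetShader(ps_1, nullptr, 0);
ID3D11ShaderResourceView* srvs[] = { srv_1 };                // SRV from your code
m_d3dContext->PSSetShaderResources(0, 1, srvs);
ID3D11SamplerState* samplers[] = { m_sampler.get() };        // hypothetical sampler
m_d3dContext->PSSetSamplers(0, 1, samplers);

// 3. Draw one oversized triangle that covers the screen; with the vertex
//    positions generated from SV_VertexID, no vertex/index buffer is needed.
m_d3dContext->IASetInputLayout(nullptr);
m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_d3dContext->Draw(3, 0);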
Also - since I'm not sure whether you already do this and just stripped it from the shown code - functions like CreatePixelShader don't return HRESULTs just for the fun of it: you should check what is actually returned, because Direct3D reports most errors through those return values and expects you to handle them, instead of crashing your program.
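For example, a minimal check around the call shown above could look like this:

// Check the HRESULT instead of discarding it (names taken from the code above).
HRESULT hr = d3dDevice->CreatePixelShader(
    ps_1_buffer->GetBufferPointer(),
    ps_1_buffer->GetBufferSize(),
    nullptr,
    &ps_1);
if (FAILED(hr))
{
    // Log or inspect hr here; since this project already uses C++/WinRT,
    // winrt::check_hresult(hr) is a convenient way to turn it into an exception.
    winrt::check_hresult(hr);
}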
I have a weird issue that only happens on my Intel HD Graphics 530: when rendering an image, some pixels' colors randomly change.
See the following image: Bug
For the problematic pixels, the graphics debugger shows that the color output by the graphics pipeline is the correct one, but the color actually displayed is wrong.
See: Debugger
From my investigation I have found that these pixels seem to use material information belonging to other objects. My material data is handled by descriptor heaps, so the switching between graphics root descriptor tables in my rendering loop seems to be the problem (when I draw only one object, everything is fine).
Here is the code snippet I use:
void ForwardLighningEffect::pushCommands(ForwardLigthningPushArgs data, ID3D12GraphicsCommandList* commandList, int frameIndex) {
    // set PSO
    commandList->SetPipelineState(m_mainPipelineStateObject);
    // set root signature
    commandList->SetGraphicsRootSignature(m_rootSignature);
    // set constant buffer view
    commandList->SetGraphicsRootConstantBufferView(0, m_constantBufferUploadHeaps[frameIndex]->GetGPUVirtualAddress());

    const auto& meshes = data.model->getMeshes();
    for (auto mesh : meshes)
    {
        if (auto materialHandle = mesh->material.lock()) // get material handle from weak ptr.
        {
            ID3D12DescriptorHeap* matDescriptorHeap = materialHandle->material.descriptorHeap;
            // set the material descriptor heap
            ID3D12DescriptorHeap* descriptorHeaps[] = { matDescriptorHeap };
            commandList->SetDescriptorHeaps(_countof(descriptorHeaps), descriptorHeaps);
            // HERE! set the descriptor table to the descriptor heap (parameter 1, as constant buffer root descriptor is parameter index 0)
            commandList->SetGraphicsRootDescriptorTable(1, matDescriptorHeap->GetGPUDescriptorHandleForHeapStart());
        }

        commandList->IASetVertexBuffers(0, 1, &mesh->vertexBuffer.bufferView);
        commandList->IASetIndexBuffer(&mesh->indexBuffer.bufferView);

        for (auto camera : data.cameras)
        {
            updateConstantBuffer(camera, frameIndex);
            // Draw mesh.
            commandList->DrawIndexedInstanced(mesh->nbIndices, 1, 0, 0, 0);
        }
    }
}
What's wrong?
Found a solution: updating the textures' resource states around their use fixed it.
So when binding a texture:
commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(/*textureResource*/, D3D12_RESOURCE_STATE_COMMON, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
After use then reset state:
commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(/*textureResource*/, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, D3D12_RESOURCE_STATE_COMMON));
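Put together, here is a sketch of where the two barriers go in the mesh loop from the question; materialHandle->material.textureResource is a hypothetical member standing in for the actual texture resource:

// Hypothetical sketch: transition the material's texture into the shader-
// resource state before binding its descriptor table, and back afterwards.
ID3D12Resource* tex = materialHandle->material.textureResource; // hypothetical
commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(
    tex, D3D12_RESOURCE_STATE_COMMON, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
commandList->SetGraphicsRootDescriptorTable(1, matDescriptorHeap->GetGPUDescriptorHandleForHeapStart());

// ... IASetVertexBuffers / IASetIndexBuffer / DrawIndexedInstanced ...

commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(
    tex, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, D3D12_RESOURCE_STATE_COMMON));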
I created a Unity sphere and applied a standard material with an albedo texture. Now I'm trying to rotate the mesh UVs (it looks like this is the simplest way to rotate the texture).
Here is the code:
using UnityEngine;

public class GameController : MonoBehaviour
{
    public GameObject player;
    public float rotationValue;

    void Start()
    {
        Mesh mesh = player.GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        mesh.uv = changeUvs(uvs);
    }

    private Vector2[] changeUvs(Vector2[] originalUvs)
    {
        for (int i = 0; i < originalUvs.Length; i++)
        {
            originalUvs[i].x = originalUvs[i].x + rotationValue;
            if (originalUvs[i].x > 1)
            {
                originalUvs[i].x = originalUvs[i].x - 1f;
            }
        }
        return originalUvs;
    }
}
This gives me this strange artifact. What am I doing wrong?
It can't be done the way you're trying to do it. Even if you go outside the [0, 1] range, as pleluron suggests, there will be a line on the sphere where the texture interpolates from high to low, and you get the entire texture squeezed into a single band, as you see now.
The original sphere solves the problem by having a duplicated seam: one copy of each seam vertex has an x of 0 and the other an x of 1. You don't see it because the duplicated vertices are at the same position. If you want to solve the problem with UV trickery, the only option is to move the seam, which involves creating a new mesh.
Actually, the simplest way to rotate the planet is to leave the texture alone and just rotate the object! If for some reason that is not an option, go into the material and find the Tiling and Offset fields. If you're using the Standard shader, make sure you use the top pair, just below the Emission checkbox. If you modify that X offset you get the effect you're trying to create with the script you posted.
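For reference, a minimal sketch of both alternatives as a script (a hypothetical helper component; the offset route assumes the texture's wrap mode is Repeat, which is the default):

using UnityEngine;

public class PlanetSpin : MonoBehaviour // hypothetical helper component
{
    public float degreesPerSecond = 10f;
    public bool rotateObject = true;

    Renderer rend;
    float offset;

    void Start() { rend = GetComponent<Renderer>(); }

    void Update()
    {
        if (rotateObject)
        {
            // Option 1: just spin the transform and leave the UVs alone.
            transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
        }
        else
        {
            // Option 2: scroll the material's main texture offset; the sampler
            // wraps around (wrap mode Repeat), so no seam appears.
            offset = Mathf.Repeat(offset + degreesPerSecond / 360f * Time.deltaTime, 1f);
            rend.material.mainTextureOffset = new Vector2(offset, 0f);
        }
    }
}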
I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In openFrameworks I remember calling setTextureMinMagFilter(GL_NEAREST, GL_NEAREST); on a texture.
What would be the equivalent in Processing ?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;

void setup() {
  size(320, 240, P3D);
  noSmooth();
  //hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)g).textureSampling(0);
  //PGL pgl = beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
  //PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  //endPGL();
  buffer = createGraphics(width/8, height/8, P3D);
  buffer.noSmooth();
  buffer.beginDraw();
  //buffer.hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)buffer).textureSampling(0);
  PGL bpgl = buffer.beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;//commenting this back in results in a blank buffer
  PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  buffer.endPGL();
  buffer.background(0);
  buffer.stroke(255);
  buffer.line(0, 0, buffer.width, buffer.height);
  buffer.endDraw();
}

void draw() {
  image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)
You were actually on the right track; you were just passing the wrong value to textureSampling().
Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it using a decompiler, which led me to Texture::usingMipmaps(). There I was able to see the values and what they reflect (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
PGraphicsOpenGL's default textureSampling is 5 (TRILINEAR). I also later found this old comment on an issue, which confirms it as well.
So to get point/nearest filtering you only need to call noSmooth() on the application itself, and call textureSampling() on your PGraphics.
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
So considering the above, and including only the code you used to draw the line and to draw the buffer to the application, that gives the desired result.
I needed to combine both GL_LINEAR and GL_NEAREST in one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
PGL pgl = beginPGL();
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();
I am getting the following weird results with the FrameBuffer class in libGDX.
Here is the code that is producing this result:
// This is the rendering code
@Override
public void render(float delta) {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    stage.act();
    stage.draw();

    fbo.begin();
    batch.begin();
    batch.draw(heart, 0, 0);
    batch.end();
    fbo.end();

    test = new Image(fbo.getColorBufferTexture());
    test.setPosition(256, 256);
    stage.addActor(test);
}
// This is the initialization code
@Override
public void show() {
    stage = new Stage(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
    atlas = Assets.getAtlas();
    batch = new SpriteBatch();
    background = new Image(atlas.findRegion("background"));
    background.setFillParent(true);
    heart = atlas.findRegion("fluttering");
    fbo = new FrameBuffer(Pixmap.Format.RGBA8888, heart.getRegionWidth(), heart.getRegionHeight(), false);
    stage.addActor(background);
    Image temp = new Image(new TextureRegion(heart));
    stage.addActor(temp);
}
Why is the heart that I drew on the frame buffer flipped and smaller than the original, even though the frame buffer's width and height are the same as those of the image (71 x 72)?
Your SpriteBatch is using the wrong projection matrix. Since you are rendering to a custom-sized FrameBuffer, you will have to set one manually:
projectionMatrix = new Matrix4();
projectionMatrix.setToOrtho2D(0, 0, heart.getRegionWidth(), heart.getRegionHeight());
batch.setProjectionMatrix(projectionMatrix);
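As for the flipped result: an FBO's color texture has a bottom-left origin in OpenGL, so it comes out upside down when drawn directly. A common fix, sketched here with the fields from the question, is to wrap it in a TextureRegion and flip it vertically:

// Wrap the FBO's color texture and flip it on the y axis before drawing it,
// since OpenGL frame buffer textures have a bottom-left origin.
TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
region.flip(false, true); // flip vertically only
test = new Image(region);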
Alternatively, to solve this the frame buffer has to have a width and height equal to those of the stage, like this:
fbo = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);