I'm following vulkan-tutorial and I've successfully rendered a spinning square. I'm at the point in the lessons right before applying textures. Before moving on within the lessons, I've been modularizing the code into a framework of interfaces one piece at a time. I have successfully managed to extract various Vulkan objects out of the main engine class into their own classes. Each of these class objects has an interface with an initialize, create, and cleanup function at a minimum.
I've done this with a Buffer class that is an abstract base class that my IndexBuffer, VertexBuffer, and UniformBuffer all derived from. I've done this with my CommandPool class, SyncObjects(VkSemaphore and VkFence) classes, my Pipelines(only MainGraphicsPipeline for now), and my SwapChain.
With all of these in their own classes, I now store them as either std::shared_ptr<Class> or std::vector<std::shared_ptr<Class>> inside whichever class owns them. This is the design flow I've been sticking with.
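To give an idea of the pattern, the interfaces look roughly like this (a simplified sketch, not the full classes):
class Buffer {
public:
    virtual ~Buffer() = default;
    virtual void initialize(VkDevice* device) = 0;
    virtual void create() = 0;
    virtual void cleanup() = 0;
};
// IndexBuffer, VertexBuffer and UniformBuffer derive from Buffer and implement these,
// and the owning class stores them as, for example:
//   std::shared_ptr<CommandPool> commandPool_;
//   std::vector<std::shared_ptr<Buffer>> uniformBuffers_;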
Everything was working perfectly fine until I started to wrap VkImageView in its own class. My SwapChain class previously contained the private member:
std::vector<VkImageView> imageViews_;
and these member functions that work on them:
void SwapChain::cleanupImageViews() {
for (auto imageView : imageViews_) {
vkDestroyImageView(*device_, imageView, nullptr);
}
}
void SwapChain::createImageViews() {
imageViews_.resize(images_.size());
for (size_t i = 0; i < images_.size(); i++) {
VkImageViewCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
createInfo.image = images_[i];
createInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
createInfo.format = imageFormat_;
createInfo.components.r = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.g = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.b = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.a = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
createInfo.subresourceRange.baseMipLevel = 0;
createInfo.subresourceRange.levelCount = 1;
createInfo.subresourceRange.baseArrayLayer = 0;
createInfo.subresourceRange.layerCount = 1;
if (vkCreateImageView(*device_, &createInfo, nullptr, &imageViews_[i]) != VK_SUCCESS) {
throw std::runtime_error("failed to create image views!");
}
}
}
With my code in this state, everything works fine: I can render a spinning colored square, resize the window, and close the window with zero errors from Visual Studio and from the Vulkan validation layers.
When I change to the pattern I've used before:
std::vector<std::shared_ptr<ImageView>> imageViews_;
and my functions become:
void SwapChain::cleanupImageViews() {
for (auto& imageView : imageViews_ ) {
imageView->cleanup();
}
}
void SwapChain::createImageViews() {
imageViews_.resize(images_.size());
for(auto& i : imageViews_) {
i = std::shared_ptr<ImageView>();
i->initialize(device_);
}
for (size_t i = 0; i < images_.size(); i++ ) {
imageViews_[i]->create(images_[i], imageFormat_);
}
}
It fails when it calls create() on the ImageViews, giving me an unhandled exception (an access read/write violation) stating that the "this" pointer within my ImageView class is nullptr.
Here is what my ImageView class looks like:
ImageView.h
#pragma once
#include "utility.h"
namespace ForceEngine {
namespace vk {
class ImageView {
private:
VkImageView imageView_;
VkDevice* device_;
VkImage image_;
VkFormat format_;
public:
ImageView() = default;
~ImageView() = default;
void initialize(VkDevice* device);
void create(VkImage image, VkFormat format);
void cleanup();
VkImageView* get() { return &imageView_; }
private:
void createImageView();
};
} // namespace vk
} // namespace ForceEngine
ImageView.cpp
#include "ImageView.h"
namespace ForceEngine {
namespace vk {
void ImageView::initialize(VkDevice* device) {
if (device == nullptr) {
throw std::runtime_error("failed to initialize ImageView: device was nullptr!");
device_ = device;
}
}
void ImageView::create(VkImage image, VkFormat format) {
//if (image == nullptr) throw std::runtime_error("failed to create Image View: image was nullptr!");
//if (format == nullptr) throw std::runtime_error("failed to create Image View: format was nullptr!");
image_ = image; // This is where it is throwing the exception.
format_ = format;
createImageView();
}
void ImageView::createImageView() {
VkImageViewCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
createInfo.image = image_;
createInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
createInfo.format = format_;
createInfo.components.r = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.g = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.b = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.components.a = VK_COMPONENT_SWIZZLE_IDENTITY;
createInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
createInfo.subresourceRange.baseMipLevel = 0;
createInfo.subresourceRange.levelCount = 1;
createInfo.subresourceRange.baseArrayLayer = 0;
createInfo.subresourceRange.layerCount = 1;
if (vkCreateImageView(*device_, &createInfo, nullptr, &imageView_) != VK_SUCCESS) {
throw std::runtime_error("failed to create image views!");
}
}
void ImageView::cleanup() {
vkDestroyImageView(*device_, imageView_, nullptr);
}
} // namespace vk
} // namespace ForceEngine
Within my class I've tried having image_ as a pointer and as a non-pointer, and for the signature of create() I've tried passing the parameters by copy, reference, pointer, const reference, const pointer, etc., all to no avail; everything keeps causing an exception. I don't know what's causing the access violation. For some reason the ImageView class doesn't seem to be set up properly for the image_ member: the debugger reports that this is nullptr for the object, or that it can't write to the memory. Yet I've followed this same pattern for all of my other Vulkan objects and didn't have this issue until now.
What is the proper setup and usage of VkImageView and VkImage within the context of the swap chain? Currently the VkImages are still stored within the SwapChain as std::vector<VkImage>. I can create the image views successfully within that class directly, but when I try to extract the VkImageView into its own class object I start to run into this problem. I'm learning Vulkan through this tutorial and starting to get a grasp of how the API is designed, but I'm still no expert. Right now any help would be appreciated. And yes, I've stepped through the debugger, watched my variables, and watched the call stack, and for the life of me I'm completely stumped. If you need more information, please don't hesitate to ask.
User Nicol Bolas pointed out in the comments that I had i = std::shared_ptr<ImageView>() where I should have had i = std::make_shared<ImageView>(); that is correct, and it is how my other independent classes are created. This was an overlooked typo, so credit to Nicol Bolas for that. However, fixing that bug allowed me to find and fix the real bug.
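For reference, with that fix applied the creation loop now reads (same members as above):
void SwapChain::createImageViews() {
    imageViews_.resize(images_.size());
    for (auto& i : imageViews_) {
        i = std::make_shared<ImageView>(); // actually allocates an ImageView instead of leaving a null shared_ptr
        i->initialize(device_);
    }
    for (size_t i = 0; i < images_.size(); i++) {
        imageViews_[i]->create(images_[i], imageFormat_);
    }
}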
The actual bug that was causing the unhandled exception had nothing to do with the ImageView, VkImageView, or VkImage objects. The real culprit was the ImageView class's own initialize() method: I was assigning the device_ member inside the scope of the if statement that checks whether the device pointer being passed in is nullptr.
I originally had this:
void ImageView::initialize(VkDevice* device) {
    if (device == nullptr) {
        throw std::runtime_error("failed to initialize Image View: device was nullptr!");
        device_ = device;
    }
}
And the device_ was never being set. It should have been:
void ImageView::initialize(VkDevice* device) {
    if (device == nullptr) throw std::runtime_error("failed to initialize Image View: device was nullptr!");
    device_ = device;
}
Now the code works again and I have a spinning colored square. The program exits with zero errors and no messages from the Vulkan validation layers.
I had simply overlooked that the assignment was still inside the if statement's braces.
Related
I'm creating a system that allows the manipulation of constant buffer variables by name, using their byte offsets and byte sizes obtained via shader reflection. I can bind the buffers to the device context just fine, but none of the cubes in my 3D scene are showing up; I believe something is wrong with how I'm mapping data to the constant buffers. I cast the struct, be it a float4 or a matrix, to a void pointer, and then copy that data into another structure that carries the variable's name. Once the shader needs to have its buffers updated before a draw call, I map the data of every structure in the list during the Map/Unmap call with a pointer iterator. Also, there seems to be a crash whenever the program calls the destructor on one of the shader constant structures. Below is the code I've written so far:
I'm mapping the buffer data through this algorithm here:
void DynamicConstantBuffer::UpdateChanges(ID3D11DeviceContext* pDeviceContext)
{
D3D11_MAPPED_SUBRESOURCE mappedResource;
HRESULT hr = pDeviceContext->Map(m_Buffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(hr)) return;
// Set mapped data
for (const auto& constant : m_BufferConstants)
{
char* startPosition = static_cast<char*>(mappedResource.pData) + constant.desc.StartOffset;
memcpy(startPosition, constant.pData, sizeof(constant.pData));
}
// Copy memory and unmap
pDeviceContext->Unmap(m_Buffer.Get(), 0);
}
I'm initializing the Constant Buffer here:
BOOL DynamicConstantBuffer::Initialize(UINT nBufferSlot, ID3D11Device* pDevice, ID3D11ShaderReflection* pShaderReflection)
{
ID3D11ShaderReflectionConstantBuffer* pReflectionBuffer = NULL;
D3D11_SHADER_BUFFER_DESC cbShaderDesc = {};
// Fetch constant buffer description
if (!(pReflectionBuffer = pShaderReflection->GetConstantBufferByIndex(nBufferSlot)))
return FALSE;
// Get description
pReflectionBuffer->GetDesc(&cbShaderDesc);
m_BufferSize = cbShaderDesc.Size;
// Create constant buffer on gpu end
D3D11_BUFFER_DESC cbDescription = {};
cbDescription.Usage = D3D11_USAGE_DYNAMIC;
cbDescription.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDescription.ByteWidth = cbShaderDesc.Size;
if (FAILED(pDevice->CreateBuffer(&cbDescription, NULL, m_Buffer.GetAddressOf())))
return FALSE;
// Poll shader variables
for (UINT i = 0; i < cbShaderDesc.Variables; i++)
{
ID3D11ShaderReflectionVariable* pVariable = NULL;
pVariable = pReflectionBuffer->GetVariableByIndex(i);
// Get variable description
D3D11_SHADER_VARIABLE_DESC variableDesc = {};
pVariable->GetDesc(&variableDesc);
// Push variable back into list of variables
m_BufferConstants.push_back(ShaderConstant(variableDesc));
}
return TRUE;
}
Here's the method that sets a variable within the constant buffer:
BOOL DynamicConstantBuffer::SetConstantVariable(const std::string& varName, const void* varData)
{
for (auto& v : m_BufferConstants)
{
if (v.desc.Name == varName)
{
memcpy(v.pData, varData, sizeof(varData));
bBufferDirty = TRUE;
return TRUE;
}
}
// No variable to assign :(
return FALSE;
}
Here's the class layout for ShaderConstant:
class ShaderConstant
{
public:
ShaderConstant(D3D11_SHADER_VARIABLE_DESC& desc)
{
this->desc = desc;
pData = new char[desc.Size];
_size = desc.Size;
};
~ShaderConstant()
{
if (!pData)
return;
delete[] pData;
pData = NULL;
}
D3D11_SHADER_VARIABLE_DESC desc;
void* pData;
size_t _size;
};
Any help at all would be appreciated. Thank you.
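Two things in the posted code are worth double-checking. sizeof(constant.pData) and sizeof(varData) evaluate to the size of a pointer (4 or 8 bytes), not the size of the variable's data, so only the first few bytes of each constant ever reach the buffer. And ShaderConstant owns raw memory but is copied into m_BufferConstants by push_back, so the compiler-generated copy constructor leaves two objects deleting the same allocation, which would explain the crash in the destructor. A rough sketch of those spots, reusing the names from the code above:
// ShaderConstant with copy semantics so vector copies don't double-delete pData
// (needs <cstring> for memcpy and <utility> for std::swap)
class ShaderConstant
{
public:
    ShaderConstant(const D3D11_SHADER_VARIABLE_DESC& d)
        : desc(d), pData(new char[d.Size]), _size(d.Size) {}
    ShaderConstant(const ShaderConstant& other)
        : desc(other.desc), pData(new char[other._size]), _size(other._size)
    {
        memcpy(pData, other.pData, _size);
    }
    ShaderConstant& operator=(ShaderConstant other) // copy-and-swap
    {
        std::swap(desc, other.desc);
        std::swap(pData, other.pData);
        std::swap(_size, other._size);
        return *this;
    }
    ~ShaderConstant() { delete[] static_cast<char*>(pData); }
    D3D11_SHADER_VARIABLE_DESC desc;
    void* pData;
    size_t _size;
};
// ...and copy the stored size rather than sizeof(a pointer):
//   memcpy(startPosition, constant.pData, constant._size);   // in UpdateChanges()
//   memcpy(v.pData, varData, v._size);                       // in SetConstantVariable()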
I'm trying to make a simple frame counter for DirectX 12 games using Dear ImGui. I simply want to overlay a small transparent window that displays the sequential order of frames during gameplay. To do so, I hook Present() so I can get the swap chain and count the number of times the method is called (frame counting). THIS IS NOT FOR A CHEAT. I am not writing cheats for games; I simply want to record frame numbers for analytical purposes.
I have successfully done this for DirectX 11 using the ShowExampleAppSimpleOverlay() example provided here: https://github.com/ocornut/imgui/blob/master/imgui_demo.cpp
Here is an image sample showing the frame counter in a DX 11 game.
I'm now trying to do the same with DirectX 12. Hooking the Present() is not an issue.
Using example code provided here: https://github.com/ocornut/imgui/blob/master/examples/example_win32_directx12/main.cpp
I attempt to use the ShowExampleAppSimpleOverlay() method again; however, the call to d3d12CommandQueue->ExecuteCommandLists(1, (ID3D12CommandList* const*)&d3d12CommandList); (which renders the overlay) fails with 0x887A0001: DXGI_ERROR_INVALID_CALL. It is the last line of the code sample provided below.
I'm not sure how to proceed. Any thoughts?
Edit: I forgot to mention that I'm also hooking and acquiring the game's command queue, so d3d12CommandQueue is acquired directly from the game. It doesn't return NULL, so I'm assuming it is the correct object. I could be wrong though...
For each call to Present() do the following:
//iterate frame
Frame_Number = Frame_Number + 1;
//Get Device, using IDXGISwapChain3
ID3D12Device* device;
HRESULT gd = pSwapChain->GetDevice(__uuidof(ID3D12Device), (void**)&device);
assert(gd == S_OK);
//Get window handle from swapchain for IMGUI
DXGI_SWAP_CHAIN_DESC sd;
pSwapChain->GetDesc(&sd);
window = sd.OutputWindow;
//Get backbuffers
buffersCounts = sd.BufferCount;
frameContext = new FrameContext[buffersCounts];
D3D12_DESCRIPTOR_HEAP_DESC descriptorImGuiRender = {};
descriptorImGuiRender.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
descriptorImGuiRender.NumDescriptors = buffersCounts;
descriptorImGuiRender.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
// Create Descriptor Heap IMGUI render
if (device->CreateDescriptorHeap(&descriptorImGuiRender, IID_PPV_ARGS(&d3d12DescriptorHeapImGuiRender)) != S_OK)
return false;
//Create Command Allocator
ID3D12CommandAllocator* allocator;
if (device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator)) != S_OK)
return false;
for (size_t i = 0; i < buffersCounts; i++) {
frameContext[i].commandAllocator = allocator;
}
if (device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator, NULL, IID_PPV_ARGS(&d3d12CommandList)) != S_OK ||
d3d12CommandList->Close() != S_OK)
return false;
//create descriptor heap, describe and create a render target view (RTV) descriptor heap.
D3D12_DESCRIPTOR_HEAP_DESC descriptorBackBuffers;
descriptorBackBuffers.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
descriptorBackBuffers.NumDescriptors = buffersCounts;
descriptorBackBuffers.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
descriptorBackBuffers.NodeMask = 1;
if (device->CreateDescriptorHeap(&descriptorBackBuffers, IID_PPV_ARGS(&d3d12DescriptorHeapBackBuffers)) != S_OK)
return false;
const auto rtvDescriptorSize = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_RTV);
// Create frame resources.
D3D12_CPU_DESCRIPTOR_HANDLE rtvHandle = d3d12DescriptorHeapBackBuffers->GetCPUDescriptorHandleForHeapStart();
// Create a RTV for each frame.
for (size_t i = 0; i < buffersCounts; i++) {
ID3D12Resource* pBackBuffer = nullptr;
frameContext[i].main_render_target_descriptor = rtvHandle;
pSwapChain->GetBuffer(i, IID_PPV_ARGS(&pBackBuffer));
device->CreateRenderTargetView(pBackBuffer, nullptr, rtvHandle);
frameContext[i].main_render_target_resource = pBackBuffer;
rtvHandle.ptr += rtvDescriptorSize;
}
// Setup Platform/Renderer bindings for ImGui
ImGui_ImplWin32_Init(window);
ImGui_ImplDX12_Init(device, buffersCounts,
DXGI_FORMAT_R8G8B8A8_UNORM, d3d12DescriptorHeapImGuiRender,
d3d12DescriptorHeapImGuiRender->GetCPUDescriptorHandleForHeapStart(),
d3d12DescriptorHeapImGuiRender->GetGPUDescriptorHandleForHeapStart());
ImGui::GetIO().ImeWindowHandle = window;
// Start the Dear ImGui frame
ImGui_ImplDX12_NewFrame();
ImGui_ImplWin32_NewFrame();
ImGui::NewFrame();
//call imgui menues here
bool bShow = true;
ShowExampleAppSimpleOverlay(&bShow);
// Rendering (imgui)
FrameContext& currentFrameContext = frameContext[pSwapChain->GetCurrentBackBufferIndex()];
currentFrameContext.commandAllocator->Reset();
D3D12_RESOURCE_BARRIER barrier;
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
barrier.Transition.pResource = currentFrameContext.main_render_target_resource;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_RENDER_TARGET;
d3d12CommandList->Reset(currentFrameContext.commandAllocator, nullptr);
d3d12CommandList->ResourceBarrier(1, &barrier);
d3d12CommandList->OMSetRenderTargets(1, &currentFrameContext.main_render_target_descriptor, FALSE, nullptr);
d3d12CommandList->SetDescriptorHeaps(1, &d3d12DescriptorHeapImGuiRender);
ImGui::Render();
ImGui_ImplDX12_RenderDrawData(ImGui::GetDrawData(), d3d12CommandList);
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PRESENT;
d3d12CommandList->ResourceBarrier(1, &barrier);
d3d12CommandList->Close();
d3d12CommandQueue->ExecuteCommandLists(1, (ID3D12CommandList* const*)&d3d12CommandList);
DXGI_ERROR_INVALID_CALL tells you that one command in the list is invalid, but not which one.
You need to use the D3D12 debug layer to get runtime checks at command list creation.
The debug layer also tells you the reason why the call is invalid.
See MSDN for more info.
You can activate it with the following code, but it needs to be called before device creation:
ID3D12Debug* debugInterface;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugInterface)))) {
debugInterface->EnableDebugLayer();
}
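If the debug layer is active, you can also ask the runtime to break into the debugger on the offending call, which points you at the exact command. A sketch, assuming you can query the device the game created:
ID3D12InfoQueue* infoQueue = nullptr;
if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&infoQueue)))) {
    infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_CORRUPTION, TRUE);
    infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->Release();
}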
I've been getting a weird error when calling CreateGraphicsPipelineState().
The function returns E_INVALIDARG even though the description is all set up.
The description worked before; then I tried to add index buffers to my pipeline, and even though I didn't touch any of the code for the PSO or shaders, the PSO creation is now all messed up.
The issue is that I don't get any DX error messages from the driver when enabling the debug layer. I only get "Microsoft C++ exception: _com_error at memory location ..." when I step through the function.
It feels like some pointer error or similar, but I can't figure out what it is. Perhaps one of you can see an obvious mistake that I made?
Here's my code:
CGraphicsPSO* pso = new CGraphicsPSO();
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
// Input Layout
std::vector<D3D12_INPUT_ELEMENT_DESC> elements;
if (aPSODesc.inputLayout != nullptr)
{
auto& ilData = aPSODesc.inputLayout->desc;
for (auto& element : ilData)
{
// All Data here is correct when breaking
D3D12_INPUT_ELEMENT_DESC elementDesc;
elementDesc.SemanticName = element.mySemanticName;
elementDesc.SemanticIndex = element.mySemanticIndex;
elementDesc.InputSlot = element.myInputSlot;
elementDesc.AlignedByteOffset = element.myAlignedByteOffset;
elementDesc.InputSlotClass = _ConvertInputClassificationDX12(element.myInputSlotClass);
elementDesc.Format = _ConvertFormatDX12(element.myFormat);
elementDesc.InstanceDataStepRate = element.myInstanceDataStepRate;
elements.push_back(elementDesc);
}
D3D12_INPUT_LAYOUT_DESC inputLayout = {};
inputLayout.NumElements = (UINT)elements.size();
inputLayout.pInputElementDescs = elements.data();
psoDesc.InputLayout = inputLayout;
}
// TOPOLOGY
switch (aPSODesc.topology)
{
default:
case EPrimitiveTopology::TriangleList:
psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; // <--- Always this option
break;
case EPrimitiveTopology::PointList:
psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_POINT;
break;
case EPrimitiveTopology::LineList:
psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_LINE;
break;
//case EPrimitiveTopology::Patch:
// psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_PATCH;
// break;
}
// Shaders
if (aPSODesc.vs != nullptr)
{
D3D12_SHADER_BYTECODE vertexShaderBytecode = {};
vertexShaderBytecode.BytecodeLength = aPSODesc.vs->myByteCodeSize;
vertexShaderBytecode.pShaderBytecode = aPSODesc.vs->myByteCode;
psoDesc.VS = vertexShaderBytecode;
}
if (aPSODesc.ps != nullptr)
{
D3D12_SHADER_BYTECODE pixelShaderBytecode = {};
pixelShaderBytecode.BytecodeLength = aPSODesc.ps->myByteCodeSize;
pixelShaderBytecode.pShaderBytecode = aPSODesc.ps->myByteCode;
psoDesc.PS = pixelShaderBytecode;
}
psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM; // format of the render target
DXGI_SAMPLE_DESC sampleDesc = {};
sampleDesc.Count = 1;
sampleDesc.Quality = 0;
psoDesc.DepthStencilState.DepthEnable = FALSE;
psoDesc.DepthStencilState.StencilEnable = FALSE;
psoDesc.SampleDesc = sampleDesc; // must be the same sample description as the swapchain and depth/stencil buffer
psoDesc.SampleMask = UINT_MAX; // sample mask has to do with multi-sampling. 0xffffffff means point sampling is done
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT); // a default rasterizer state.
psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT); // a default blend state.
psoDesc.NumRenderTargets = 1; // we are only binding one render target
psoDesc.pRootSignature = myGraphicsRootSignature;
psoDesc.Flags = D3D12_PIPELINE_STATE_FLAG_NONE;
ID3D12PipelineState* pipelineState;
HRESULT hr = myDevice->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pipelineState));
pso->myPipelineState = pipelineState;
if (FAILED(hr))
{
delete pso;
return nullptr;
}
return pso;
So I just found the error.
It seems the way I parsed the semantics for my input layout gave me an invalid pointer, so the memory at that address was invalid and the DX12 device was handed incorrect descriptions.
What I did was store the semantic names locally within my CreatePSO function until the PSO was created, and now it all works.
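In code, the fix amounts to something like this (a sketch using the same names as the snippet above, assuming ilData stores the semantic names as std::string):
std::vector<std::string> semanticNames;            // must outlive CreateGraphicsPipelineState()
std::vector<D3D12_INPUT_ELEMENT_DESC> elements;
semanticNames.reserve(ilData.size());              // avoid reallocation invalidating c_str() pointers
for (auto& element : ilData)
{
    semanticNames.push_back(element.mySemanticName);
    D3D12_INPUT_ELEMENT_DESC elementDesc = {};
    elementDesc.SemanticName = semanticNames.back().c_str(); // stays valid while semanticNames is alive
    // ...fill in the remaining fields as before...
    elements.push_back(elementDesc);
}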
Looks to me like the pointers to storage you promised are going out of scope.
..
D3D12_INPUT_LAYOUT_DESC inputLayout = {};
..
psoDesc.InputLayout = inputLayout;
}
I am new to C++. I've written code in C# and PHP. Since I am using Unreal Engine, I am trying to learn C++. For my project I need to take a screenshot in-game and show it immediately, so I want to get it as a texture.
I made a blueprint node which calls this function i've made:
void UMyBlueprintFunctionLibrary::TakeScreenshot()
{
FScreenshotRequest::RequestScreenshot(true);
if (GEngine)
GEngine->AddOnScreenDebugMessage(-1, 15.0f, FColor::Red, "Tried to take screenshot");
}
When I hover my mouse over RequestScreenshot I see the following pop-up:
"Screenshot can be read from memory by subscribing to the viewport's OnScreenshotCaptured delegate"
So that is what I'm trying to do, but I have no idea how. I looked up this:
https://docs.unrealengine.com/latest/INT/API/Runtime/Engine/Engine/UGameViewportClient/OnScreenshotCaptured/
Can someone tell me how to implement this and how you see/know how to implement it?
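For reference, subscribing to that delegate looks roughly like this (a sketch, assuming the engine's width/height/colors signature for FOnScreenshotCaptured; check its declaration in GameViewportClient.h for your engine version):
// Somewhere with access to the game viewport, before requesting the screenshot:
if (GEngine && GEngine->GameViewport)
{
    GEngine->GameViewport->OnScreenshotCaptured().AddLambda(
        [](int32 Width, int32 Height, const TArray<FColor>& Colors)
        {
            // Colors holds Width * Height pixels of the captured frame;
            // build a UTexture2D from them or hand them off here.
        });
}
FScreenshotRequest::RequestScreenshot(true);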
I have an alternative that doesn't use the delegate: read the pixels into a buffer you allocate with FRenderTarget::ReadPixels(), by implementing your own UGameViewportClient (inherit from it) and overriding the Draw() function.
I'll show the essential code; it is not complete.
void UMyGameViewportClient::Draw(FViewport* Viewport, FCanvas* SceneCanvas)
{
Super::Draw(Viewport, SceneCanvas);
if (any_condition_you_need) {
CaptureFrame();
}
}
void UMyGameViewportClient::CaptureFrame()
{
if (!Viewport) {
return;
}
if (ViewportSize.X == 0 || ViewportSize.Y == 0) {
return;
}
ColorBuffer.Empty(); // Declare this in header as TArray<FColor>
if (!Viewport->ReadPixels(ColorBuffer, FReadSurfaceDataFlags(),
FIntRect(0, 0, ViewportSize.X, ViewportSize.Y)))
{
return;
}
SaveThumbnailImage();
}
void UMyGameViewportClient::SaveThumbnailImage()
{
IImageWrapperModule& wrappermodule = FModuleManager::LoadModuleChecked<IImageWrapperModule>(FName("ImageWrapper"));
auto wrapper_ptr = wrappermodule.CreateImageWrapper(EImageFormat::PNG);
for (int i = 0; i < ColorBuffer.Num(); i++)
{
auto ptr = &ColorBuffer[i];
auto r = ptr->R;
auto b = ptr->B;
ptr->R = b;
ptr->B = r;
ptr->A = 255;
} // not necessary, if you like bgra, just change the following function argument to ERGBFormat::BGRA
wrapper_ptr->SetRaw(&ColorBuffer[0], ColorBuffer.Num() * 4,
ViewportSize.X, ViewportSize.Y, ERGBFormat::RGBA, 8);
FFileHelper::SaveArrayToFile(wrapper_ptr->GetCompressed(), *ThumbnailFile);
}
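For completeness: the custom viewport client needs a header declaring the members used above, and it has to be registered as the game's viewport client class (the GameViewportClientClassName setting in DefaultEngine.ini). A rough sketch of the header, with illustrative names:
// MyGameViewportClient.h (illustrative; module export/API macro omitted)
#pragma once
#include "CoreMinimal.h"
#include "Engine/GameViewportClient.h"
#include "MyGameViewportClient.generated.h"

UCLASS()
class UMyGameViewportClient : public UGameViewportClient
{
    GENERATED_BODY()
public:
    virtual void Draw(FViewport* Viewport, FCanvas* SceneCanvas) override;
private:
    void CaptureFrame();
    void SaveThumbnailImage();
    TArray<FColor> ColorBuffer;   // filled by Viewport->ReadPixels()
    FIntPoint ViewportSize;       // e.g. taken from Viewport->GetSizeXY() before capturing
    FString ThumbnailFile;        // output path for the saved PNG
};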
I'm using custom classes to manage a vending machine, and I can't figure out why it keeps throwing a stack overflow error. There are two versions of my program: the first is a basic test to check whether the classes work by pre-defining certain variables; the second is what it should be like, where the variables in question can change each time the program is run (depending on user input).
If anyone can suggest ways of avoiding this recursion, or the stack overflow, I'd be grateful. Below is the code for the three classes involved:
class Filling
{
protected:
vector<Filling*> selection;
string fillingChosen;
public:
virtual float cost()
{
return 0;
}
virtual ~Filling(void)
{
//needs to be virtual in order to ensure the derived (condiment) destructor is called via a Filling pointer
}
};
class CondimentDecorator : public Filling
{
public:
Filling* filling;
void addToPancake(Filling* customerFilling)
{
filling = customerFilling;
}
~CondimentDecorator(void)
{
delete filling;
}
};
class Frosted : public CondimentDecorator
{
float cost()
{ //ERROR IS HERE//
return (.3 + filling->cost());
}
};
Below is the code used to call the above 'cost' function:
void displayCost(Filling* selectedFilling)
{
cout << selectedFilling->cost() << endl;
}
Below is part of the code that initiates it all (main method):
Filling* currentPancake = NULL;
bool invalid = true;
do
{
int selection = makeSelectionScreen(money, currentStock, thisState);
invalid = false;
if (selection == 1)
{
currentPancake = new ChocolateFilling;
}
else if...
.
.
.
.
else
invalid = true;
} while (invalid);
bool makingSelection = true;
CondimentDecorator* currentCondiment = NULL;
do
{
int coatingSelection = makeCoatingSelectionScreen(money, currentStock, thisState);
if (coatingSelection == 1)
currentCondiment = new Frosted;
else if (coatingSelection == 2)...
.
.
.
else if (coatingSelection == 0)
makingSelection = false;
currentCondiment = thisSelection;
currentCondiment->addToPancake(currentPancake);
currentPancake = currentCondiment;
displayCost(currentPancake);
//Below is the code that DOES work, however it is merely meant to be a test. The
//above code is what is needed to work, however keeps causing stack overflows
//and I'm uncertain as to why one version works fine and the other doesn't
/*currentCondiment = new Frosted;
currentCondiment->addToPancake(currentPancake);
currentPancake = currentCondiment;
displayCost(currentPancake);
currentCondiment = new Wildlicious;
currentCondiment->addToPancake(currentPancake);
currentPancake = currentCondiment;
displayCost(currentPancake);*/
} while (makingSelection);
displayCost(currentPancake);
delete currentPancake;
The infinite recursion happens when you call displayCost with a Frosted whose filling is a Frosted as well. And that happens right here:
currentCondiment->addToPancake(currentPancake);
currentPancake = currentCondiment;
displayCost(currentPancake);
You set the filling of currentCondiment to currentPancake, then call displayCost with currentCondiment.
In the process you also leak the memory that was originally assigned to currentPancake.
Btw currentCondiment = thisSelection; also leaks memory.
Idea: Use smart pointers like std::unique_ptr to get rid of the leaks.
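A rough sketch of that suggestion, keeping the class names from the question (std::make_unique needs C++14):
#include <memory>

class CondimentDecorator : public Filling
{
public:
    std::unique_ptr<Filling> filling;
    void addToPancake(std::unique_ptr<Filling> customerFilling)
    {
        filling = std::move(customerFilling); // take ownership; no manual delete needed
    }
};

class Frosted : public CondimentDecorator
{
    float cost() override
    {
        return 0.3f + filling->cost();
    }
};

// In the selection loop, ownership moves up the decorator chain:
//   std::unique_ptr<Filling> currentPancake = std::make_unique<ChocolateFilling>();
//   auto frosted = std::make_unique<Frosted>();
//   frosted->addToPancake(std::move(currentPancake));
//   currentPancake = std::move(frosted);
//   displayCost(currentPancake.get());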