Use DataWriter::DetachBuffer: an error occurred (WinRT C++)

auto featureReport = hidDevice->CreateFeatureReport(6);
auto dataWriter = ref new DataWriter();
Array<UINT8>^ buff = ref new Array<UINT8>(6);
buff[0] = (uint8)featureReport->Id;
buff[1] = 0xe; // update mode
buff[2] = 0;
buff[3] = 0;
buff[4] = 0;
buff[5] = 0;
dataWriter->WriteBytes(buff);
featureReport->Data = dataWriter->DetachBuffer();
create_task(hidDevice->SendFeatureReportAsync(featureReport))
    .then([this](task<uint32> bytesWrittenTask)
{
    auto x = bytesWrittenTask.get(); // If an exception occurred, let it flow down the task chain so it can be caught
    //MessageDialog^ msg = ref new MessageDialog(x.ToString());
});
This code sends a command to a HID device after the device has been opened successfully, but the line featureReport->Data = dataWriter->DetachBuffer(); fails with the error below.
Error message: HRESULT 0x80070057 (E_INVALIDARG: the parameter is incorrect)

You are probably hitting an invalid buffer length: the buffer you attach must match the report length the device expects. Query that length before you write to the buffer.
(pseudo code)
FeatureReport report = hidDevice->GetFeatureReport(reportId);
Array<UINT8>^ buff = ref new Array<UINT8>(report.Data.Length);
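A minimal C++/CX sketch of that idea, assuming the report object created by CreateFeatureReport is pre-sized (via its Data buffer) to the length the device expects:
auto featureReport = hidDevice->CreateFeatureReport(6);
auto reportLength = featureReport->Data->Length; // expected length, including the report-ID byte

auto buff = ref new Platform::Array<UINT8>(reportLength);
buff[0] = (uint8)featureReport->Id; // byte 0 carries the report ID
buff[1] = 0xe;                      // update mode; the remaining bytes stay zero

auto dataWriter = ref new DataWriter();
dataWriter->WriteBytes(buff);
featureReport->Data = dataWriter->DetachBuffer(); // lengths now match, so no 0x80070057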

Related

Vulkan queue waiting on semaphore that can't be signaled

It seems I have had invalid code for a while, but the validation layers were silent. After updating my SDK to the latest version I started getting this error:
Message ID name: VUID-vkQueuePresentKHR-pWaitSemaphores-03268
Message: [ VUID-vkQueuePresentKHR-pWaitSemaphores-03268 ] Object: 0x55b4b87478f0 (Name = Selected logical device : Type = 3) | VkQueue 0x55b4b8224020[Main queue] is waiting on VkSemaphore 0x110000000011[Render Finished Semaphore: 0] that has no way to be signaled. The Vulkan spec states: All elements of the pWaitSemaphores member of pPresentInfo must reference a semaphore signal operation that has been submitted for execution and any semaphore signal operations on which it depends (if any) must have also been submitted for execution. (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkQueuePresentKHR-pWaitSemaphores-03268)
Severity: VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT
This happens inside my main draw loop, and it's the only validation layer error in my code. If I never call the code responsible for surface presentation, I get no errors.
I am sure the place where I am doing things wrong is here:
void DisplayTarget::StartPass(uint target_num, bool should_clear,
                              VulkanImage* external_depth)
{
    auto device = h_interface->GetDevice();
    auto result = device.acquireNextImageKHR(
        *swap_chain,
        std::numeric_limits<uint64_t>::max(),
        *img_available_sems[current_frame],
        nullptr,
        &active_image_index);
    if (result != vk::Result::eSuccess)
        Log::RecordLog("Failed to acquire image");
}
vk::Result DisplayTarget::EndPass()
{
    auto device = h_interface->GetDevice();
    auto cmd_buff = h_interface->GetCmdBuffer();
    auto graphics_queue = h_interface->GetQueue();

    device.waitForFences(
        1,
        &*in_flight_fences[current_frame],
        VK_TRUE,
        std::numeric_limits<uint64_t>::max());

    vk::Semaphore wait_semaphores[] = {*img_available_sems[current_frame]};
    vk::PipelineStageFlags wait_stages[] = {
        vk::PipelineStageFlagBits::eColorAttachmentOutput};
    vk::Semaphore signal_semaphores[] = {*render_finished_sems[current_frame]};
    vk::SubmitInfo submit_info(
        1, wait_semaphores, wait_stages, 1, &cmd_buff, 1, signal_semaphores);

    device.resetFences(1, &*in_flight_fences[current_frame]);
    auto result =
        graphics_queue.submit(1, &submit_info, *in_flight_fences[current_frame]);
    if (result != vk::Result::eSuccess)
        Log::RecordLog("Failed to submit draw command buffer!");

    graphics_queue.waitIdle();
    device.waitIdle();

    vk::SwapchainKHR swap_chains[] = {*swap_chain};
    vk::PresentInfoKHR present_info = {};
    present_info.waitSemaphoreCount = 1;
    present_info.pWaitSemaphores = signal_semaphores;
    present_info.swapchainCount = 1;
    present_info.pSwapchains = swap_chains;
    present_info.pImageIndices = &active_image_index;

    result = graphics_queue.presentKHR(&present_info);
    current_frame = (current_frame + 1) % MAX_FRAMES_IN_FLIGHT;
    return result;
}
Currently they are called consecutively:
display.StartPass();
display.EndPass();
To get things working I have tried commenting out parts of these two functions and changing the order in which things are called, but either the error persists or I get different validation errors.
I also tried signaling the semaphore directly:
vk::SemaphoreSignalInfo semaphore_info = {};
semaphore_info.semaphore = *render_finished_sems[current_frame];
semaphore_info.value = 0;
device.signalSemaphore(semaphore_info);
But all I managed to do was cause a segmentation fault.
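For reference, vkSignalSemaphore is only defined for timeline semaphores, so host-signaling a binary swapchain semaphore this way is invalid usage. A minimal sketch of what host signaling looks like with a timeline semaphore (assuming Vulkan 1.2 and vulkan.hpp; this is not the fix for the presentation error above):
// Binary semaphores cannot be signaled from the host; this requires a timeline semaphore.
vk::SemaphoreTypeCreateInfo type_info(vk::SemaphoreType::eTimeline, 0 /*initial value*/);
vk::SemaphoreCreateInfo create_info;
create_info.pNext = &type_info;
vk::Semaphore timeline_sem = device.createSemaphore(create_info);

// Host-side signal: the new value must be greater than the semaphore's current counter value.
vk::SemaphoreSignalInfo signal_info(timeline_sem, 1 /*value*/);
device.signalSemaphore(signal_info);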
The error was the order of operations. This is wrong:
graphics_queue.waitIdle();
device.waitIdle();
vk::SwapchainKHR swap_chains[] = {*swap_chain};
vk::PresentInfoKHR present_info = {};
present_info.waitSemaphoreCount = 1;
present_info.pWaitSemaphores = signal_semaphores;
present_info.swapchainCount = 1;
present_info.pSwapchains = swap_chains;
present_info.pImageIndices = &active_image_index;
result = graphics_queue.presentKHR(&present_info);
current_frame = (current_frame + 1) % MAX_FRAMES_IN_FLIGHT;
This is the correct use:
vk::SwapchainKHR swap_chains[] = {*swap_chain};
vk::PresentInfoKHR present_info = {};
present_info.waitSemaphoreCount = 1;
present_info.pWaitSemaphores = signal_semaphores;
present_info.swapchainCount = 1;
present_info.pSwapchains = swap_chains;
present_info.pImageIndices = &active_image_index;
result = graphics_queue.presentKHR(&present_info);
current_frame = (current_frame + 1) % MAX_FRAMES_IN_FLIGHT;
graphics_queue.waitIdle();
device.waitIdle();

E_INVALIDARG when calling CreateGraphicsPipelineState

I've been getting a weird error when calling CreateGraphicsPipelineState().
The function returns E_INVALIDARG even though the description is all set up.
The description worked before; then I tried to add index buffers to my pipeline. I didn't touch any of the code for the PSO or shaders, and now the PSO creation is all messed up.
The issue is that I don't get any DX error messages from the driver even with the debug layer enabled. I only get
"Microsoft C++ exception: _com_error at memory location
when I step through the function.
It feels like some pointer error or similar, but I can't figure out what it is. Perhaps one of you can see an obvious mistake that I made?
Here's my code:
CGraphicsPSO* pso = new CGraphicsPSO();
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};

// Input layout
std::vector<D3D12_INPUT_ELEMENT_DESC> elements;
if (aPSODesc.inputLayout != nullptr)
{
    auto& ilData = aPSODesc.inputLayout->desc;
    for (auto& element : ilData)
    {
        // All data here is correct when breaking
        D3D12_INPUT_ELEMENT_DESC elementDesc;
        elementDesc.SemanticName = element.mySemanticName;
        elementDesc.SemanticIndex = element.mySemanticIndex;
        elementDesc.InputSlot = element.myInputSlot;
        elementDesc.AlignedByteOffset = element.myAlignedByteOffset;
        elementDesc.InputSlotClass = _ConvertInputClassificationDX12(element.myInputSlotClass);
        elementDesc.Format = _ConvertFormatDX12(element.myFormat);
        elementDesc.InstanceDataStepRate = element.myInstanceDataStepRate;
        elements.push_back(elementDesc);
    }
    D3D12_INPUT_LAYOUT_DESC inputLayout = {};
    inputLayout.NumElements = (UINT)elements.size();
    inputLayout.pInputElementDescs = elements.data();
    psoDesc.InputLayout = inputLayout;
}

// Topology
switch (aPSODesc.topology)
{
default:
case EPrimitiveTopology::TriangleList:
    psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; // <--- Always this option
    break;
case EPrimitiveTopology::PointList:
    psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_POINT;
    break;
case EPrimitiveTopology::LineList:
    psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_LINE;
    break;
//case EPrimitiveTopology::Patch:
//    psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_PATCH;
//    break;
}

// Shaders
if (aPSODesc.vs != nullptr)
{
    D3D12_SHADER_BYTECODE vertexShaderBytecode = {};
    vertexShaderBytecode.BytecodeLength = aPSODesc.vs->myByteCodeSize;
    vertexShaderBytecode.pShaderBytecode = aPSODesc.vs->myByteCode;
    psoDesc.VS = vertexShaderBytecode;
}
if (aPSODesc.ps != nullptr)
{
    D3D12_SHADER_BYTECODE pixelShaderBytecode = {};
    pixelShaderBytecode.BytecodeLength = aPSODesc.ps->myByteCodeSize;
    pixelShaderBytecode.pShaderBytecode = aPSODesc.ps->myByteCode;
    psoDesc.PS = pixelShaderBytecode;
}

psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM; // format of the render target

DXGI_SAMPLE_DESC sampleDesc = {};
sampleDesc.Count = 1;
sampleDesc.Quality = 0;

psoDesc.DepthStencilState.DepthEnable = FALSE;
psoDesc.DepthStencilState.StencilEnable = FALSE;
psoDesc.SampleDesc = sampleDesc; // must match the sample description of the swap chain and depth/stencil buffer
psoDesc.SampleMask = UINT_MAX; // sample mask for multi-sampling; 0xffffffff enables all samples
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT); // a default rasterizer state
psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT); // a default blend state
psoDesc.NumRenderTargets = 1; // we are only binding one render target
psoDesc.pRootSignature = myGraphicsRootSignature;
psoDesc.Flags = D3D12_PIPELINE_STATE_FLAG_NONE;

ID3D12PipelineState* pipelineState;
HRESULT hr = myDevice->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pipelineState));
pso->myPipelineState = pipelineState;
if (FAILED(hr))
{
    delete pso;
    return nullptr;
}
return pso;
So I just found the error.
The way I parsed the semantic names for my input layout gave me an invalid pointer, so the memory at that address was garbage and the DX12 device received an incorrect description.
What I did was store the semantic names locally within my CreatePSO function until the PSO was created, and now it all works.
Looks to me like the pointers to storage you promised are going out of scope.
..
D3D12_INPUT_LAYOUT_DESC inputLayout = {};
..
psoDesc.InputLayout = inputLayout;
}
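A minimal sketch of the fix described above, assuming element.mySemanticName is a temporary or short-lived string: copy the semantic names into storage that outlives the PSO creation call, and point the element descriptions at those copies.
// Own the semantic-name strings for as long as psoDesc needs them.
std::vector<std::string> semanticNames;
semanticNames.reserve(ilData.size()); // no reallocation, so the c_str() pointers stay valid

std::vector<D3D12_INPUT_ELEMENT_DESC> elements;
for (auto& element : ilData)
{
    semanticNames.push_back(element.mySemanticName);

    D3D12_INPUT_ELEMENT_DESC elementDesc = {};
    elementDesc.SemanticName = semanticNames.back().c_str(); // points into semanticNames
    // ... fill in the remaining fields as before ...
    elements.push_back(elementDesc);
}
// Both semanticNames and elements must stay alive until CreateGraphicsPipelineState returns.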

unable to form command using protobuf

I have the following .proto file:
package DS3DExcite.Cpp.ExternalVariantSwitcher.ProtocolBuffers;

message DGCommand
{
    extensions 100 to max;
    enum Type
    {
        AttachModel = 1;
        AttachModelReply = 2;
        ....
        ...
        SwitchVariants = 4;
    }
    required Type type = 1;
    required uint32 id = 2;
    optional uint32 replyTo = 3 [default = 0];
    optional string message = 4;
}

message SwitchVariants
{
    extend DGCommand
    {
        required SwitchVariants command = 103;
    }
    message VariantState
    {
        required string variant = 1;
        required string state = 2;
    }
    repeated VariantState variants = 1;
    repeated string variantNames = 2;
    optional string state = 3;
}
I compiled the proto file with protobuf 2.4.1 to generate the .pb.h and .pb.cc files.
Now I form the command:
DS3DExcite::Net::PVCConnector::ProtocolBuffers::DGCommand commandObj;
commandObj.set_type(DS3DExcite::Net::PVCConnector::ProtocolBuffers::DGCommand_Type_SwitchVariants);
commandObj.set_id(3);
DS3DExcite::Net::PVCConnector::ProtocolBuffers::SwitchVariants *objVarState;
objVarState = commandObj.MutableExtension(DS3DExcite::Net::PVCConnector::ProtocolBuffers::SwitchVariants::command);
DS3DExcite::Net::PVCConnector::ProtocolBuffers::SwitchVariants_VariantState *state = objVarState->add_variants();
state->set_state("OFF");
state->set_variant("M_Carpaint_3");
I serialise the message:
int size = commandObj.ByteSize();
int sizeSize = sizeof(int);
std::vector<char> data(size, 0);
memcpy(data.data(), &size, sizeSize);
data.resize(size + sizeSize);
commandObj.SerializeToArray(static_cast<void*>(&data[0] + sizeSize), size);
QByteArray byteArray = QByteArray::fromRawData(static_cast<const char*>(data.data()), data.size());
I send this message over a QTcpSocket to a server, which deserializes it and extracts the information.
At the server end, this is the code that reads it:
uint32 pendingData = 0;
rcvSocket->HasPendingData(pendingData); // rcvSocket is the server-side socket
if (pendingData == 0)
{
    UE_LOG(PVCConnector, Warning, TEXT("Lost connection to client."));
    break;
}
TArray<char> newData; // customized Array template
newData.InsertZeroed(0, pendingData);
int32 bytesRead = 0;
rcvSocket->Recv(reinterpret_cast<uint8*>(newData.GetData()), pendingData, bytesRead);
data += newData;
However, at the server end the desired information ends up in the unknown fields of ::google::protobuf::Message. What could be the reason?
I had a similar problem when sending large enough messages. We decided that it happens when a message is split across several network packets. We use length-prefix framing ("blobs") to prevent that, and it works: the idea is to send the message length before the message itself.
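A minimal sketch of that framing idea on the sending side (frameMessage is a hypothetical helper; the 4-byte prefix is written in host byte order here, and a real protocol should pick a fixed endianness):
#include <cstdint>
#include <cstring>
#include <vector>
#include <google/protobuf/message.h>

// Prepend a 4-byte length prefix so the receiver knows how many bytes belong
// to one message, even when TCP splits it across several packets.
std::vector<char> frameMessage(const google::protobuf::Message& msg)
{
    const uint32_t size = msg.ByteSize();
    std::vector<char> out(sizeof(size) + size);
    std::memcpy(out.data(), &size, sizeof(size));          // length prefix
    msg.SerializeToArray(out.data() + sizeof(size), size); // payload
    return out;
}
// Receiver side: read exactly 4 bytes, decode the length, then read exactly
// that many more bytes before calling ParseFromArray.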
I was able to solve the problem. There were two issues:
1) The way I was converting to a QByteArray. I replaced
QByteArray byteArray = QByteArray::fromRawData(static_cast<const char*>(data.data()), data.size());
with
QByteArray *byteArray = new QByteArray(reinterpret_cast<const char*>(data.data()), data.size());
2) The way I was sending the message on the socket. I just used
const int nbBytes = itM->second->write(qByteArray);
instead of using QTextStream.
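The QByteArray replacement likely matters because of ownership: QByteArray::fromRawData wraps the caller's buffer without copying it, while the QByteArray(const char*, int) constructor makes a deep copy. A short illustration:
#include <QByteArray>
#include <vector>

void illustrate(const std::vector<char>& data)
{
    // Shallow: wraps the vector's storage without copying; the QByteArray
    // dangles if the vector is resized or destroyed while still in use.
    QByteArray shallow = QByteArray::fromRawData(data.data(), static_cast<int>(data.size()));

    // Deep: copies the bytes, so it stays valid independently of the vector.
    QByteArray deep(data.data(), static_cast<int>(data.size()));
}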

vkGetSwapchainImagesKHR crashes after first call

I started learning Vulkan and everything went quite well, but somehow the function vkGetSwapchainImagesKHR() wants to ruin my life.
Basically, this is how I create the swap chain. desiredFormat, desiredExtent, desiredUsage and desiredTransform are all set correctly.
VkSwapchainCreateInfoKHR swapChainCreateInfo = { };
swapChainCreateInfo.sType = VkStructureType::VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
swapChainCreateInfo.flags = 0;
swapChainCreateInfo.pNext = nullptr;
swapChainCreateInfo.compositeAlpha = VkCompositeAlphaFlagBitsKHR::VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
swapChainCreateInfo.imageColorSpace = desiredFormat.colorSpace;
swapChainCreateInfo.imageFormat = desiredFormat.format;
swapChainCreateInfo.imageExtent = desiredExtent;
swapChainCreateInfo.clipped = VK_TRUE;
swapChainCreateInfo.imageArrayLayers = 1;
swapChainCreateInfo.imageSharingMode = VkSharingMode::VK_SHARING_MODE_EXCLUSIVE;
swapChainCreateInfo.surface = mRenderingSurface;
swapChainCreateInfo.imageUsage = desiredUsage;
swapChainCreateInfo.minImageCount = desiredImageCount;
swapChainCreateInfo.presentMode = desiredMode;
swapChainCreateInfo.oldSwapchain = oldSwapChain;
swapChainCreateInfo.pQueueFamilyIndices = nullptr;
swapChainCreateInfo.queueFamilyIndexCount = 0;
swapChainCreateInfo.preTransform = desiredTransform;
if (vkCreateSwapchainKHR(mLogicalDevice, &swapChainCreateInfo, nullptr, &mSwapChain) != VK_SUCCESS)
    return false;
If I call vkGetSwapchainImagesKHR once, nothing bad happens. But if I call it a second time, it doesn't work and I get an exception like this:
Exception thrown at 0x0433A209 (VkLayer_core_validation.dll) in Project1.exe: 0xC0000005: Access violation reading location 0x000000A8.
And I don't understand why this happens. I tried saving the results from the first call and using them, but I still get an error, so I think there's something I'm doing wrong here.
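For reference, and not a confirmed diagnosis: the usual two-call idiom queries the image count first, then fetches exactly that many handles into storage of that size. A sketch, assuming the device and swapchain handles are still valid (i.e. the swapchain has not been destroyed or recreated between calls):
std::vector<VkImage> GetSwapchainImages(VkDevice device, VkSwapchainKHR swapChain)
{
    uint32_t imageCount = 0;
    vkGetSwapchainImagesKHR(device, swapChain, &imageCount, nullptr);       // query the count
    std::vector<VkImage> images(imageCount);
    vkGetSwapchainImagesKHR(device, swapChain, &imageCount, images.data()); // fetch the handles
    return images;
}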

OSX AUGraph recreation causes badComponentType error

On OSX I'm creating an AUGraph for my audio system like so:
OSStatus result = NewAUGraph(&mGraph);
AUNode outputNode;
AudioComponentDescription outputDesc;
outputDesc.componentType = kAudioUnitType_Output;
outputDesc.componentSubType = kAudioUnitSubType_DefaultOutput;
outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputDesc.componentFlags = 0;
outputDesc.componentFlagsMask = 0;
result = AUGraphAddNode(mGraph, &outputDesc, &outputNode);
AUNode converterNode;
AudioComponentDescription converterDesc;
converterDesc.componentType = kAudioUnitType_FormatConverter;
converterDesc.componentSubType = kAudioUnitSubType_AUConverter;
converterDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
converterDesc.componentFlags = 0;
converterDesc.componentFlagsMask = 0;
result = AUGraphAddNode(mGraph, &converterDesc, &converterNode);
result = AUGraphConnectNodeInput(mGraph, converterNode, 0, outputNode, 0);
result = AUGraphOpen(mGraph);
...initialize graph, start graph, etc...
This all works fine, I can hear sound, etc. Later the system is shut down:
unsigned char isRunning = false;
AUGraphIsRunning(mGraph, &isRunning);
if (isRunning)
    AUGraphStop(mGraph);

OSStatus result;
unsigned char isInitialized = false;
AUGraphIsInitialized(mGraph, &isInitialized);
if (isInitialized)
{
    result = AUGraphUninitialize(mGraph);
}
result = DisposeAUGraph(mGraph);
Again, no problems here. However, a short while later, when the system is restarted, the first code block is executed again. On:
result = AUGraphOpen(mGraph);
"result" comes out as -2005 (badComponentType). Anyone know what causes this?
Calling AUGraphClose during shutdown fixed this. I guess you can't have two open graphs containing the same output unit?
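A minimal sketch of the shutdown order implied by that answer, assuming mGraph is the graph from above: uninitialize, close, then dispose, so the opened AudioUnit instances are released before the graph is built again.
unsigned char isRunning = false;
AUGraphIsRunning(mGraph, &isRunning);
if (isRunning)
    AUGraphStop(mGraph);

unsigned char isInitialized = false;
AUGraphIsInitialized(mGraph, &isInitialized);
if (isInitialized)
    AUGraphUninitialize(mGraph);

AUGraphClose(mGraph);   // the missing step: closes the nodes' opened component instances
DisposeAUGraph(mGraph);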