Consider the following code that copies srcImage on the GPU to dstImage, which will then be read back on the CPU:
vkCmdCopyImage(command_buffer, srcImage,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, dstImage,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &copy_region);
VkImageMemoryBarrier visible_barrier = {
VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER, // VkStructureType sType
nullptr, // const void* pNext
VK_ACCESS_TRANSFER_WRITE_BIT, // VkAccessFlags srcAccessMask
VK_ACCESS_HOST_READ_BIT, // VkAccessFlags dstAccessMask
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, // VkImageLayout oldLayout
VK_IMAGE_LAYOUT_GENERAL, // VkImageLayout newLayout
VK_QUEUE_FAMILY_IGNORED, // uint32_t srcQueueFamilyIndex
VK_QUEUE_FAMILY_IGNORED, // uint32_t dstQueueFamilyIndex
dstImage, // VkImage image
{VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1} // VkImageSubresourceRange subresourceRange
};
vkCmdPipelineBarrier(command_buffer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_HOST_BIT,
0, 0, nullptr, 0, nullptr, 1, &visible_barrier);
vkMapMemory()
vkInvalidateMappedMemoryRanges() // Do we need this line at all?
vkUnmapMemory()
The barrier (i.e. memory dependency) above already generates an availability operation that makes the write to dstImage available in the device domain, a memory domain operation that makes the write in the device domain available to the host domain, and a visibility operation that makes the write visible to the host.
Is there still a need to call vkInvalidateMappedMemoryRanges, after the barrier, to make the write visible to the host?
vkInvalidateMappedMemoryRanges is used to make sure the data you see through your mapped memory is valid, if the device memory is not host-coherent.
vkCmdPipelineBarrier is used to make sure your GPU won't run commands out of order; it is a synchronization mechanism inside the GPU.
They are two totally different things.
To your question, whether to use vkInvalidateMappedMemoryRanges depends on two things:
Do you want to read the content of the target image back on the CPU side? If not, I don't see any reason to call vkInvalidateMappedMemoryRanges.
If you do want to read the content back, is the device memory your target image is bound to host-coherent? If yes, you don't need vkInvalidateMappedMemoryRanges (a quick way to check is sketched below).
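If you are unsure whether the allocation is host-coherent, you can query the memory type's property flags. A minimal sketch, assuming memoryTypeIndex is the index you picked when calling vkAllocateMemory for the image:

// Sketch only: returns true if the given memory type is host-coherent.
// memoryTypeIndex is assumed to be the index used for the dstImage allocation.
bool isHostCoherent(VkPhysicalDevice physicalDevice, uint32_t memoryTypeIndex)
{
    VkPhysicalDeviceMemoryProperties memProps{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);
    return (memProps.memoryTypes[memoryTypeIndex].propertyFlags &
            VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0;
}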
Yes, the invalidate is still needed. In addition to the rules in the memory dependency section, you also have to apply the rules in the access flags section.
From section "7.1.3 Access Types" in the Vulkan 1.3 specification:
If a memory object does not have the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT property, then [...] vkInvalidateMappedMemoryRanges must be called to guarantee that writes which are available to the host domain are made visible to host operations.
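In practice, for a non-host-coherent allocation the host-side readback would look roughly like this (a sketch only; device, memory, and a prior fence wait on the submission are assumed):

// Sketch: host readback after the submission containing the barrier has
// completed (e.g. after vkWaitForFences on the submit's fence).
void* mapped = nullptr;
vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &mapped);

VkMappedMemoryRange range = {};
range.sType  = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
range.memory = memory;
range.offset = 0;
range.size   = VK_WHOLE_SIZE;
// Needed because the memory type lacks VK_MEMORY_PROPERTY_HOST_COHERENT_BIT:
vkInvalidateMappedMemoryRanges(device, 1, &range);

// ... read the image data through `mapped` ...

vkUnmapMemory(device, memory);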
Related
I have a function that copies data from one buffer to another, and I need to synchronize its execution.
I have this bad option:
void MainWindow::copyBuffer(VkBuffer srcBuffer, VkBuffer dstBuffer, VkDeviceSize size)
{
VkCommandBuffer commandBuffer;
vkAllocateCommandBuffers(logicalDevice, &allocInfo, &commandBuffer);
//Start recording
vkBeginCommandBuffer(commandBuffer, &beginInfo);
vkCmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, 1, &copyRegion);
vkEndCommandBuffer(commandBuffer);
//Run command buffer
vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);
//Waiting for completion
vkQueueWaitIdle(graphicsQueue);
vkFreeCommandBuffers(logicalDevice, commandPool, 1, &commandBuffer);
}
This option is bad because if I want to execute the copyBuffer() function several times, then all the buffers will be copied strictly one at a time.
I want to use a fence for each function call so that multiple calls can run in parallel.
So far, I have only come up with this solution:
void MainWindow::copyBuffer(VkBuffer srcBuffer, VkBuffer dstBuffer, VkDeviceSize size)
{
VkCommandBuffer commandBuffer;
vkAllocateCommandBuffers(logicalDevice, &allocInfo, &commandBuffer);
//Create fence
VkFenceCreateInfo fenceInfo{};
fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
fenceInfo.flags = VK_FENCE_CREATE_SIGNALED_BIT;
VkFence executionCompleteFence = VK_NULL_HANDLE;
if (vkCreateFence(logicalDevice, &fenceInfo, VK_NULL_HANDLE, &executionCompleteFence) != VK_SUCCESS) {
throw MakeErrorInfo("Failed to create fence");
}
//Start recording
vkBeginCommandBuffer(commandBuffer, &beginInfo);
vkCmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, 1, &copyRegion);
vkEndCommandBuffer(commandBuffer);
//Run command buffer
vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);
vkWaitForFences(logicalDevice, 1, &executionCompleteFence, VK_TRUE, UINT64_MAX);
vkResetFences(logicalDevice, 1, &executionCompleteFence);
vkFreeCommandBuffers(logicalDevice, commandPool, 1, &commandBuffer);
vkDestroyFence(logicalDevice, executionCompleteFence, VK_NULL_HANDLE);
}
Which of these options is better?
Is the second option written correctly?
Both functions are bad in the same way. They both block the CPU from doing anything until the transfer is done. And they both could be used to potentially submit multiple CBs to the same queue in the same frame, but with different submit commands.
Neither is desirable if performance is something you care about.
Ultimately, what you need to do is have your copyBuffer function not actually perform the copy. You should have a function which builds a command buffer to do a copy. That CB is then stored in a place to be submitted later with other copying CBs. Or better yet, you can have just one copying CB that each command adds to (the first one called in a frame will create the CB).
At some point in the future, before you've submitted the work that will use this data, you need to submit the transfer operations. And the way this works depends on if you're submitting the transfer operations on the same queue as the work that will consume them or not.
If they're on the same queue, then all you need to do is have an event in a command buffer at the end of your batch that synchronizes the transfer operations with their receivers. If you want to be more clever, each transfer operation can have its own event, which the receiving operations will wait on.
And in same-queue transfers, you also want to make sure that you submit the transfers in the same vkQueueSubmit call as the rest of your work. Or to put it another way, you should never make more than one call to vkQueueSubmit for a particular queue in a particular frame.
If you're dealing with separate queues, then things change. A bit. If timeline semaphores aren't an option, you'll need to submit your transfer work before you submit the receiving operations. This is because the transfer batch will need to signal a semaphore that the receiving operation will wait on. And a binary semaphore cannot be waited on until the operation that signals it has been submitted to a queue.
But otherwise, everything else stays the same. Of course, you don't need events since you're synchronizing by semaphore.
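As a rough sketch of the separate-queue case (the queues, command buffers, and wait stage are placeholders, and the queue-family ownership transfer of the copied resource is not shown):

// The transfer batch signals a binary semaphore; the consuming batch waits on it.
VkSemaphoreCreateInfo semInfo = {VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO};
VkSemaphore transferDone;
vkCreateSemaphore(device, &semInfo, nullptr, &transferDone);

VkSubmitInfo transferSubmit = {VK_STRUCTURE_TYPE_SUBMIT_INFO};
transferSubmit.commandBufferCount   = 1;
transferSubmit.pCommandBuffers      = &transferCmdBuffer;   // all of this frame's copies, batched
transferSubmit.signalSemaphoreCount = 1;
transferSubmit.pSignalSemaphores    = &transferDone;
vkQueueSubmit(transferQueue, 1, &transferSubmit, VK_NULL_HANDLE);

// Wait only at the stage where the copied data is first consumed (placeholder here).
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT;
VkSubmitInfo drawSubmit = {VK_STRUCTURE_TYPE_SUBMIT_INFO};
drawSubmit.waitSemaphoreCount = 1;
drawSubmit.pWaitSemaphores    = &transferDone;
drawSubmit.pWaitDstStageMask  = &waitStage;
drawSubmit.commandBufferCount = 1;
drawSubmit.pCommandBuffers    = &drawCmdBuffer;
vkQueueSubmit(graphicsQueue, 1, &drawSubmit, VK_NULL_HANDLE);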
The two functions are semantically identical and exhibit exactly the same blocking behavior.
The second is slightly better. vkQueueWaitIdle is kind of a debug and out-of-hotspot feature. It might incur a hidden second submit to signal the implicit fence.
You don't need to reset a fence that you subsequently destroy anyway. And you are creating it pre-signaled, which is a bug. Also, you forgot to pass it to vkQueueSubmit.
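For reference, a minimal corrected sketch of the fence path (still blocking, so the batching advice above applies; allocInfo, beginInfo, submitInfo, and the command buffer are as in the question):

//Create fence (NOT pre-signaled, otherwise the wait returns immediately)
VkFenceCreateInfo fenceInfo{};
fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
VkFence executionCompleteFence = VK_NULL_HANDLE;
vkCreateFence(logicalDevice, &fenceInfo, nullptr, &executionCompleteFence);
//Run command buffer, passing the fence so it gets signaled on completion
vkQueueSubmit(graphicsQueue, 1, &submitInfo, executionCompleteFence);
vkWaitForFences(logicalDevice, 1, &executionCompleteFence, VK_TRUE, UINT64_MAX);
//No vkResetFences needed, the fence is destroyed right away
vkFreeCommandBuffers(logicalDevice, commandPool, 1, &commandBuffer);
vkDestroyFence(logicalDevice, executionCompleteFence, nullptr);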
I think I do understand how Vulkan synchronization works, but I do have a problem of understanding the synchronization with the WSI.
In the Synchronization Examples, we can find this code:
/* Only need a dependency coming in to ensure that the first
layout transition happens at the right time.
Second external dependency is implied by having a different
finalLayout and subpass layout. */
VkSubpassDependency dependency = {
.srcSubpass = VK_SUBPASS_EXTERNAL,
.dstSubpass = 0,
// .srcStageMask needs to be a part of pWaitDstStageMask in the WSI semaphore.
.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
.dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
.srcAccessMask = 0,
.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
.dependencyFlags = 0};
According to me, it should be something like that :
VkSubpassDependency dependency = {
.srcSubpass = VK_SUBPASS_EXTERNAL,
.dstSubpass = 0,
// .srcStageMask needs to be a part of pWaitDstStageMask in the WSI semaphore.
.srcStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
.dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
.srcAccessMask = 0,
.dstAccessMask = 0,
.dependencyFlags = 0};
Indeed, since we are going to WRITE into the attachment, there is no need to use a WRITE_BIT (meaning make writes available) in the dstAccessMask.
But the true issue is in the srcStageMask.
I understand why the dstStageMask is COLOR_ATTACHMENT_OUTPUT_BIT. It is because it is okay to have the prior stages working since we don't touch the attachment.
However, for the srcStageMask, I did not see anything about the link between WSI and the COLOR_ATTACHMENT_OUTPUT_BIT. To me, the layout transition must appear at the end of the presentation, and just before the beginning of the COLOR_ATTACHMENT_OUTPUT stage.
And the end of presentation, for me, should be represented by BOTTOM_OF_PIPE and not COLOR_ATTACHMENT_OUTPUT
Where am I mistaken?
Indeed, since we are going to WRITE into the attachment, there is no need to use a WRITE_BIT (meaning make writes available) in the dstAccessMask.
I am not exactly sure of your logic here. It should be WRITE exactly because we are going to write to the attachment.
It would perhaps help to describe what is happening here (from krOoze/Hello_Triangle/doc):
The VkSubpassDependency chains off of the semaphore (via pWaitDstStageMask). Then it performs its layout transition. Then it syncs with the Load Op. (Then the Load Op happens in the subpass. And the subpass vkCmdDraws some stuff into the swapchain image.)
Now presumably (as is typical for first use of the swapchain image) our Load Op is VK_ATTACHMENT_LOAD_OP_CLEAR. That means VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT + VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT access.
So:
Presentation is gonna read the swapchain image, and submit a semaphore signal.
We chain our VkSubpassDependency to the signal (via pWaitDstStageMask). Semaphore signal already covers all memory accesses, therefore our srcAccessMask = 0.
The Dependency performs its implicit dependency to the Layout Transition (takes your src, and invents some internal dst), then Layout Transition happens, which reads the image and writes it back, then the dependency performs another implicit Dependency (invents some src that covers the layout transition, and uses your dst).
The Load Op happens in the subpass, and you have to make sure explicitly it happens-after everything above. So your dst in your Dependency must be VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT + VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT to cover the Load Op.
Now, there are three types of memory hazards: read→write, write→write, and write→read (read→read is a non-hazard). In a write→write hazard, something writes a resource while some other write also writes it; it is not clear which write happens last, so the memory contents become garbage. In read→write and write→read hazards, the read and the write may happen at the same time; it is not clear whether the read sees the unmodified memory or the written one (i.e. it reads garbage).
Without an explicit dependency, the Layout Transition and the subsequent Load Op form a write→write hazard. Therefore dstAccessMask must not be 0 to resolve the hazard and make sure one write happens-before the second one.
(It is perhaps worth noting we introduce the VkSubpassDependency solely for the sake of the layout transition. Otherwise the semaphore wait would already be all that is needed. The default is srcStageMask = TOP_OF_PIPE, which means without explicit Dependency the layout transition could happen before the semaphore wait, i.e. before presentation finishes reading it; that would be a read→write hazard).
To me, the layout transition must appear at the end of the presentation, and just before the beginning of the COLOR_ATTACHMENT_OUTPUT stage. And the end of presentation, for me, should be represented by BOTTOM_OF_PIPE and not COLOR_ATTACHMENT_OUTPUT
We have a little bit of choice here: pWaitDstStageMask = dependency.srcStageMask = ?
Now our situation is this:
vkBeginCommandBuffer();
[possibly vkCmdDispatch()?]
vkBeginRenderPass();
vkCmdDraw(); // does vertex shading + fragment shading, then color writes
vkEndRenderPass();
vkEndCommandBuffer();
vkQueueSubmit(.pWaitDstStageMask = ?);
If we use VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, then the semaphore wait does not block starting the hypothetical vkCmdDispatch() (VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT). And the Subpass Dependency src does not force it to finish either. Great!
The stage used should not be any earlier than VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT. E.g. with VK_PIPELINE_STAGE_ALL_COMMANDS_BIT the semaphore wait would unnecessarily block the vkCmdDispatch() as well as the vertex and fragment shaders of the vkCmdDraw(). Meanwhile, we need the swapchain image only when we actually need to write it at the Load Op (which you can imagine happens when the vkCmdDraw() gets to the point where it needs to perform color writes).
Now, we could choose VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT. The semaphore blocks nothing (dstStageMask / pWaitDstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT means the same as "nothing"). Great! But the Subpass Dependency now blocks everything (srcStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT means the same as VK_PIPELINE_STAGE_ALL_COMMANDS_BIT). That means our vkCmdDispatch has to finish before our subpass starts. Not so great...
So, the best practice is to use pWaitDstStageMask = dependency.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT.
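In code, that choice shows up in the submit (a sketch; the semaphore and command buffer names are placeholders):

// Wait on the WSI acquire semaphore only at the color attachment output stage,
// matching dependency.srcStageMask above.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = {VK_STRUCTURE_TYPE_SUBMIT_INFO};
submitInfo.waitSemaphoreCount = 1;
submitInfo.pWaitSemaphores    = &imageAvailableSemaphore; // signaled by vkAcquireNextImageKHR
submitInfo.pWaitDstStageMask  = &waitStage;
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers    = &commandBuffer;
vkQueueSubmit(graphicsQueue, 1, &submitInfo, inFlightFence);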
For CUDA, I know kernels are executed asynchronously after issuing the launch commands to the default stream (null stream); how about in OpenCL? Sample code is as follows:
cl_context context;
cl_device_id device_id;
cl_int err;
...
cl_kernel kernel1;
cl_kernel kernel2;
cl_command_queue Q = clCreateCommandQueue(context, device_id, 0, &err);
...
size_t global_w_offset[3] = {0,0,0};
size_t global_w_size[3] = {16,16,1};
size_t local_w_size[3] = {16,16,1};
err = clEnqueueNDRangeKernel(Q, kernel1, 3, global_w_offset, global_w_size,
local_w_size, 0, nullptr, nullptr);
err = clEnqueueNDRangeKernel(Q, kernel2, 3, global_w_offset, global_w_size,
local_w_size, 0, nullptr, nullptr);
clFinish(Q);
Will kernel1 and kernel2 be executed asynchronously after the commands are enqueued (i.e. will their executions overlap)?
Update
According to the OpenCL Reference, it seems that setting the properties to CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE in clCreateCommandQueue can meet my need. But does out-of-order mean asynchronous execution?
Does out-of-order mean asynchronous execution?
An "out of order" queue means kernels may execute in a different order than they were enqueued (if their event/data dependencies allow it). They may also execute concurrently, but not necessarily.
Also, asynchronous execution means something other than execution overlap (that is called parallel execution or concurrency). Asynchronous execution means that kernel code on the device executes independently of host code, which is always true in OpenCL.
The simple way to get concurrency (execution overlap) is by using more than one queue on the same device. This works even on implementations which don't have out-of-order queue capability. It does not guarantee execution overlap (because OpenCL can be used on many more devices than CUDA, and on some devices you simply can't execute more than one kernel at a time), but in my experience with most GPUs you should get at least some overlap. You need to be careful about buffers used by kernels in separate queues, though.
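A sketch of that two-queue approach, reusing the question's context, device_id, kernels, and sizes (error checking omitted):

// Two in-order queues on the same device; kernels enqueued to different queues
// may overlap on devices that can run more than one kernel at a time.
cl_command_queue q1 = clCreateCommandQueue(context, device_id, 0, &err);
cl_command_queue q2 = clCreateCommandQueue(context, device_id, 0, &err);

err = clEnqueueNDRangeKernel(q1, kernel1, 3, global_w_offset, global_w_size,
                             local_w_size, 0, nullptr, nullptr);
err = clEnqueueNDRangeKernel(q2, kernel2, 3, global_w_offset, global_w_size,
                             local_w_size, 0, nullptr, nullptr);

clFinish(q1);
clFinish(q2);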
In your current code:
err = clEnqueueNDRangeKernel(Q, kernel1, 3, global_w_offset, global_w_size,
local_w_size, 0, nullptr, nullptr);
err = clEnqueueNDRangeKernel(Q, kernel2, 3, global_w_offset, global_w_size,
local_w_size, 0, nullptr, nullptr);
kernel1 finishes first and then kernel2 is executed
Using
clCreateCommandQueue(context, device_id, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);
you can execute multiple different kernels concurrently, though it isn't guaranteed.
Beware though, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE is not supported in all OpenCL implementations. This also means that you have no guarantee that kernel1 will finish execution before kernel2. If any objects that are output by kernel1 are required as input in kernel2, it may fail.
Also, multiple command queues can be created and enqueued with commands; they exist because the problem you wish to solve might involve some, if not all, of the heterogeneous devices in the host. The queues can represent independent streams of computation where no data is shared, or dependent streams of computation where each subsequent task depends on the previous one (often, data is shared). These command queues will execute on the device without synchronization, provided that no data is shared. If data is shared, the programmer needs to ensure synchronization of the data using the synchronization commands in the OpenCL specification, for example with events, as sketched below.
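For example, if kernel2 consumes kernel1's output, that dependency has to be stated explicitly with an event. A sketch on an out-of-order queue, reusing the question's variables (error checking omitted; the same event mechanism also works across two separate queues in one context):

// Out-of-order queue: execution order is only what the event wait lists say.
cl_command_queue ooq = clCreateCommandQueue(context, device_id,
                                            CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);
cl_event kernel1_done;
err = clEnqueueNDRangeKernel(ooq, kernel1, 3, global_w_offset, global_w_size,
                             local_w_size, 0, nullptr, &kernel1_done);
// kernel2 must not start before kernel1 has finished:
err = clEnqueueNDRangeKernel(ooq, kernel2, 3, global_w_offset, global_w_size,
                             local_w_size, 1, &kernel1_done, nullptr);
clFinish(ooq);
clReleaseEvent(kernel1_done);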
I have an NVIDIA Tegra TK1 processor module on a carrier board with a PCI-e slot connecting to it. In that PCIe slot is an FPGA board which exposes some registers and a 64K memory area via PCIe.
On the ARM CPU of the Tegra board, a minimal Linux installation is running.
I am using /dev/mem and the mmap function to obtain user-space pointers to the register structs and the 64K memory area.
The distinct register files and the memory block are all assigned addresses which are aligned and do not overlap with regards to 4KB memory pages.
I explicitly map whole pages with mmap, using the result of getpagesize(), which also is 4096.
I can read/write from/to those exposed registers just fine.
I can read from the memory area (64KB), doing uint32 word-by-word reads in a for loop, just fine. I.e. read contents are correct.
But if I use std::memcpy on the same address range, the Tegra CPU freezes, always. I do not see any error message, and if GDB is attached I also don't see a thing in Eclipse when trying to step over the memcpy line; it just stops hard. And I have to reset the CPU using the hardware reset button, as the remote console is frozen.
This is a debug build with no optimization (-O0), using gcc-linaro-6.3.1-2017.05-i686-mingw32_arm-linux-gnueabihf. I was told the 64K region is accessible byte-wise; I did not try that explicitly.
Is there an actual (potential) problem that I need to worry about, or is there a specific reason why memcpy does not work and maybe should not be used in the first place in this scenario - and I can just carry on using my for loops and think nothing of it?
EDIT: Another effect has been observed: the original code snippet was missing a "vital" printf in the copying for loop, which came before the memory read. With that removed, I don't get valid data back. I have now updated the code snippet to have an extra read from the same address instead of the printf, which also yields correct data. The confusion intensifies.
Here the (I think) important excerpts of what's going on. With minor modifications, to make sense as shown, in this "de-fluffed" form.
// void* physicalAddr: PCIe "BAR0" address as reported by dmesg, added to the physical address offset of FPGA memory region
// long size: size of the physical region to be mapped
//--------------------------------
// doing the memory mapping
//
const uint32_t pageSize = getpagesize();
assert( IsPowerOfTwo( pageSize ) );
const uint32_t physAddrNum = (uint32_t) physicalAddr;
const uint32_t offsetInPage = physAddrNum & (pageSize - 1);
const uint32_t firstMappedPageIdx = physAddrNum / pageSize;
const uint32_t lastMappedPageIdx = (physAddrNum + size - 1) / pageSize;
const uint32_t mappedPagesCount = 1 + lastMappedPageIdx - firstMappedPageIdx;
const uint32_t mappedSize = mappedPagesCount * pageSize;
const off_t targetOffset = physAddrNum & ~(off_t)(pageSize - 1);
m_fileID = open( "/dev/mem", O_RDWR | O_SYNC );
// addr passed as null means: we supply pages to map. Supplying non-null addr would mean, Linux takes it as a "hint" where to place.
void* mapAtPageStart = mmap( 0, mappedSize, PROT_READ | PROT_WRITE, MAP_SHARED, m_fileID, targetOffset );
if (MAP_FAILED != mapAtPageStart)
{
m_userSpaceMappedAddr = (volatile void*) ( uint32_t(mapAtPageStart) + offsetInPage );
}
//--------------------------------
// Accessing the mapped memory
//
//void* m_rawData: <== m_userSpaceMappedAddr
//uint32_t* destination: points to a stack object
//int length: size in 32bit words of the stack object (a struct with only U32's in it)
// this crashes:
std::memcpy( destination, m_rawData, length * sizeof(uint32_t) );
// this does not, AND does yield correct memory contents - but only with a preceding extra read
for (int i=0; i<length; ++i)
{
// This extra read makes the data gotten in the 2nd read below valid.
// Commented out, the data read into destination will not be valid.
uint32_t tmp = ((const volatile uint32_t*)m_rawData)[i];
(void)tmp; //pacify compiler
destination[i] = ((const volatile uint32_t*)m_rawData)[i];
}
Based on the description, it looks like your FPGA code is not responding correctly to load instructions that are reading from locations on your FPGA, and it is causing the CPU to lock up. It's not crashing; it is permanently stalled, hence the need for the hard reset. I had this problem too when debugging my PCIe logic on an FPGA.
Another indication that your logic is not responding correctly is that you need an extra read in order to get the right responses.
Your loop is doing 32-bit loads, but memcpy is doing at least 64-bit loads, which changes how your logic responds. For example, it will need to use two TLPs, with 32 bits of the response in the first 128 bits of the completion and the next 32 bits in the second 128-bit TLP of the completion.
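If the FPGA logic only handles 32-bit reads correctly, one workaround is to keep the access size under your control with a dedicated word-copy helper instead of memcpy. A sketch, assuming the region length is a whole number of 32-bit words:

// Copy from a memory-mapped device region using only 32-bit loads.
// `words` is the length in 32-bit words, as in the question's loop.
static void copyFromDevice32(uint32_t* destination,
                             const volatile uint32_t* source,
                             size_t words)
{
    for (size_t i = 0; i < words; ++i)
        destination[i] = source[i]; // volatile access: one 32-bit load per word, never widened
}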
What I found super-useful was to add logic to log all the PCIE transactions into an SRAM and to be able to dump the SRAM out to see how the logic was behaving or misbehaving. We have a nifty utility, pcieflat, that prints one PCIE TLP per line. It even has documentation.
When the PCIE interface is not working well enough, I stream the log to a UART in hex which can be decoded by pcieflat.
This tool is also useful for debugging performance problems -- you can look at how well your DMA reads and writes are pipelined.
Alternatively, if you have integrated logic analyzer or similar on the FPGA, you can trace the activity that way. But it's nicer to have the TLPs parsed according to PCIE protocol.
I am implementing a reader of huge compressed raster files. Decompression is performed partially on the fly. Only requested regions of the raster are decompressed and stored in memory cache. Reader works similarly as memory mapping of a file but the data is not mapped to memory 1:1, it is decompressed.
It is implemented using anonymous memory mapping:
char* raster_cache = static_cast<char*>(mmap(0, UNCOMPRESSED_RASTER_SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
Reading an area which is not cached yet raises a segmentation violation signal, which is caught and handled using libsigsegv (see my previous question):
struct CacheHandlerData
{
std::mutex mutex;
// other data needed for decompression
};
int cache_sigsegv_handler(void* fault_address, void* user_data)
{
void* page_address = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(fault_address) & ~(PAGE_SIZE - 1));
CacheHandlerData* data = static_cast<CacheHandlerData*>(user_data);
std::lock_guard<std::mutex> lock(data->mutex);
unsigned char cached = 0;
mincore(page_address, 1, &cached);
if (!cached)
{
mprotect(page_address, PAGE_SIZE, PROT_WRITE);
// decompress whole page
mprotect(page_address, PAGE_SIZE, PROT_READ);
}
return 1;
}
The problem is that cached pages stay in memory forever. Because I write to the pages, they are marked as dirty and never invalidated.
QUESTION: Is there some possibility to mark pages as not dirty?
If the system were running out of memory, the pages would then be removed from memory similarly to a normal disk cache. It would also be necessary to call mprotect(page_address, PAGE_SIZE, PROT_NONE) on the removed pages in order to cause a segmentation violation when a page is accessed again.
Thank you.
EDIT: I could use a temporary file-backed mapping instead of an anonymous one. Pages would be swapped to disk in case the system runs out of memory. But this solution loses the benefits of using compressed data (smaller disk footprint, probably faster reading).
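A sketch of that file-backed variant, in case it helps (the path is illustrative; error handling omitted):

// Temporary file-backed mapping: under memory pressure, dirty pages can be
// written back to the file instead of staying resident forever.
char path[] = "/tmp/raster_cache_XXXXXX";   // illustrative location
int fd = mkstemp(path);
unlink(path);                               // the file disappears once fd is closed
ftruncate(fd, UNCOMPRESSED_RASTER_SIZE);
char* raster_cache = static_cast<char*>(
    mmap(0, UNCOMPRESSED_RASTER_SIZE, PROT_NONE, MAP_SHARED, fd, 0));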