I have a 3D application that uses 4 threads, each with its own deferred context. The problem is that when I map a resource through a deferred context (ID3D11DeviceContext::Map), the RowPitch and DepthPitch fields come back as 0. I do get a valid pointer to the mapped resource, and in the memory inspector I can see that memory has been reserved for it (as with a calloc).
This problem only occurs on ATI graphics cards.
The following code shows where the problem occurs (it is part of the Hieroglyph 3 engine):
D3D11_MAPPED_SUBRESOURCE Data;
Data.pData = NULL;
Data.DepthPitch = Data.RowPitch = 0;

if ( nullptr == pGlyphResource ) {
    Log::Get().Write( L"Trying to map a subresource that doesn't exist!!!" );
    return( Data );
}

// TODO: Update this to use a ComPtr!
// Acquire the native resource pointer.
ID3D11Resource* pResource = 0;
pResource = pGlyphResource->GetResource();

if ( nullptr == pResource ) {
    Log::Get().Write( L"Trying to map a subresource that has no native resource in it!!!" );
    return( Data );
}

// Perform the mapping of the resource.
// This call should fill Data completely, but it only fills the pointer
// to the mapped resource.
HRESULT hr = m_pContext->Map( pResource, subresource, actions, flags, &Data );

if ( FAILED( hr ) ) {
    Log::Get().Write( L"Failed to map resource!" );
}

return( Data );
You can download Hieroglyph 3 and test it yourself. The code is in PipeLineManagerDX11.cpp at line 688, where you can also find the class.
The problem was that on ATI GPUs the driver does not fill those fields when mapping through a deferred context; this does not happen on NVIDIA GPUs.
The solution I ended up with was to use NVIDIA GPUs, whose drivers do fill the fields.
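If the application has to run on those cards anyway, one defensive option is to compute fallback pitches from the resource description and prefer the driver-supplied values whenever they are non-zero. This is only a minimal sketch, assuming a 2D texture in a known 4-byte-per-pixel format with tightly packed rows (which the driver does not guarantee); pTexture is a hypothetical ID3D11Texture2D, and m_pContext is the deferred context from the code above:

// Sketch only: derive fallback pitches from the texture description when
// the driver leaves them at 0. Assumes a 4-byte-per-pixel format and
// tightly packed rows, neither of which is guaranteed.
D3D11_TEXTURE2D_DESC desc;
pTexture->GetDesc( &desc );
const UINT bytesPerPixel = 4; // assumption: e.g. DXGI_FORMAT_R8G8B8A8_UNORM

D3D11_MAPPED_SUBRESOURCE Mapped = {};
HRESULT hr = m_pContext->Map( pTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &Mapped );
if ( SUCCEEDED( hr ) )
{
    UINT rowPitch   = Mapped.RowPitch   ? Mapped.RowPitch   : desc.Width * bytesPerPixel;
    UINT depthPitch = Mapped.DepthPitch ? Mapped.DepthPitch : rowPitch * desc.Height;
    // ... copy the data row by row using rowPitch / depthPitch ...
    m_pContext->Unmap( pTexture, 0 );
}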
I am trying to create a service that will be used for monitoring client machines and providing updates; part of the service reports the hardware that is in use. I currently have the following code to get the GPU information. It works when run as a command-line application, but when it runs as a service no information is returned.
I believe this is due to services not having access to the display, but I cannot find any other way to get the GPU information that would work from a service.
QByteArrayList GetGpuNames()
{
    QByteArrayList list;
    IDirect3D9* d3dobject = Direct3DCreate9( D3D_SDK_VERSION );
    if ( !d3dobject )
        return list;
    D3DPRESENT_PARAMETERS d3dpresent;
    memset( &d3dpresent, 0, sizeof( D3DPRESENT_PARAMETERS ) );
    d3dpresent.Windowed = TRUE;
    d3dpresent.SwapEffect = D3DSWAPEFFECT_DISCARD; // prepared but not used below
    UINT adaptercount = d3dobject->GetAdapterCount();
    D3DADAPTER_IDENTIFIER9* adapters = (D3DADAPTER_IDENTIFIER9*) malloc( sizeof( D3DADAPTER_IDENTIFIER9 ) * adaptercount );
    for ( UINT i = 0; i < adaptercount; i++ ) // UINT to match adaptercount
    {
        d3dobject->GetAdapterIdentifier( i, 0, &( adapters[ i ] ) );
        list << QByteArray( adapters[ i ].Description );
    }
    free( adapters );     // release the adapter array
    d3dobject->Release(); // release the D3D9 object
    return list;
}
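One avenue that may be worth testing is enumerating adapters through DXGI instead of Direct3D 9, since DXGI enumeration does not require creating a device or presenting to a display; whether it works under a service session I cannot confirm. A minimal sketch (GetGpuNamesDxgi is an illustrative name, not from the code above):

// Sketch: enumerate adapters via DXGI; no device or window is needed.
#include <dxgi.h>
#include <QByteArrayList>
#include <QString>
#pragma comment(lib, "dxgi.lib")

QByteArrayList GetGpuNamesDxgi()
{
    QByteArrayList list;
    IDXGIFactory1* factory = nullptr;
    if ( FAILED( CreateDXGIFactory1( __uuidof(IDXGIFactory1),
                                     reinterpret_cast<void**>( &factory ) ) ) )
        return list;
    IDXGIAdapter1* adapter = nullptr;
    for ( UINT i = 0; factory->EnumAdapters1( i, &adapter ) != DXGI_ERROR_NOT_FOUND; ++i )
    {
        DXGI_ADAPTER_DESC1 desc;
        if ( SUCCEEDED( adapter->GetDesc1( &desc ) ) )
            list << QString::fromWCharArray( desc.Description ).toUtf8();
        adapter->Release();
    }
    factory->Release();
    return list;
}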
I am following this Vulkan YouTube video tutorial by Joshua Shucker. I'm currently on his 14th video, where he is working on finding a secondary queue family for the vertex buffer; it focuses on the staging process for vertex buffers. My code matches his except for a cout statement that I added for testing. Here are the function and structure for the queue families:
struct QueueFamilyIndices {
    int graphicsFamily = -1;
    int transferFamily = -1;

    bool isComplete() {
        return (graphicsFamily >= 0 && transferFamily >= 0);
    }
};

QueueFamilyIndices FindQueueFamilies( const VkPhysicalDevice* device, const VkSurfaceKHR* surface ) {
    QueueFamilyIndices indices;

    uint32_t queueFamilyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties( *device, &queueFamilyCount, nullptr );
    std::vector<VkQueueFamilyProperties> queueFamilies( queueFamilyCount );
    vkGetPhysicalDeviceQueueFamilyProperties( *device, &queueFamilyCount, queueFamilies.data() );

    int i = 0;
    for( const auto &queueFamily : queueFamilies ) {
        VkBool32 presentSupport = false;
        vkGetPhysicalDeviceSurfaceSupportKHR( *device, i, *surface, &presentSupport );

        if( queueFamily.queueCount > 0 && (queueFamily.queueFlags & VK_QUEUE_GRAPHICS_BIT) && presentSupport ) {
            indices.graphicsFamily = i;
        }
        if( queueFamily.queueCount > 0 && (queueFamily.queueFlags & VK_QUEUE_TRANSFER_BIT) &&
            !(queueFamily.queueFlags & VK_QUEUE_GRAPHICS_BIT) && presentSupport ) {
            indices.transferFamily = i;
        }
        if( indices.isComplete() ) {
            break;
        }
        i++;
    }

    if( indices.graphicsFamily >= 0 && indices.transferFamily == -1 ) {
        std::cout << "Graphics family found, transfer family missing: using graphics family" << std::endl;
        indices.transferFamily = indices.graphicsFamily;
    }

    return indices;
}
Within this function, vkGetPhysicalDeviceSurfaceSupportKHR(...) is called twice, since two queue families are found after vkGetPhysicalDeviceQueueFamilyProperties(...) populates the vector of VkQueueFamilyProperties structures.
Here are the specs for my NVidia GeForce GTX 750 Ti card, based on the Vulkan report of its queue families: Vulkan:Report. In case the link changes over time, here is the information directly:
Queue family 0
queueCount 16
flags GRAPHICS_BIT
COMPUTE_BIT
TRANSFER_BIT
SPARSE_BINDING_BIT
timestampValidBits 64
minImageTransferGranularity.width 1
minImageTransferGranularity.height 1
minImageTransferGranularity.depth 1
supportsPresent 1
Queue family 1
queueCount 1
flags TRANSFER_BIT
timestampValidBits 64
minImageTransferGranularity.width 1
minImageTransferGranularity.height 1
minImageTransferGranularity.depth 1
supportsPresent 0
According to these specs, which coincide with the values in my vector of structs while I'm stepping through the debugger, my structures are populated with the following values:
queueFamilies[0].queueFlags = 15;
queueFamilies[0].queueCount = 16;
queueFamilies[0].timestampValidBits = 64;
queueFamilies[0].minImageTransferGranularity = { width = 1, height = 1, depth = 1 };
queueFamilies[1].queueFlags = 4;
queueFamilies[1].queueCount = 1;
queueFamilies[1].timestampValidBits = 64;
queueFamilies[1].minImageTransferGranularity = { width = 1, height = 1, depth = 1 };
So it appears that my card does support a separate queue family, specifically a dedicated transfer family.
Based on that assumption I stepped through the function. It has two if statements inside the for loop that check the conditions for each indexed queueFamily object, and they evaluate exactly as they should. My code compiles and builds without errors or warnings, it still renders the triangle when I'm not running it through the debugger, and it exits with code 0. So the code appears to be fine, yet I'm not getting the results I would expect.
I'm not sure if there is a bug in his code that he happened to miss, if I'm misinterpreting my video card's support for this Vulkan functionality, or if this could be a Vulkan API bug or an NVidia driver bug.
However, as I was stepping through this function to find out why indices.transferFamily was not being set to i, I noticed that on the second iteration of the loop the failure has nothing to do with the presence of the transfer queue family, its parameter values, or its flags. What causes the if statement to evaluate false is the presentSupport variable, which is set to 0 on the second call; that matches the data sheet above, so the output is as expected.
My question then becomes: Is there an actual implementation problem with the condition checking in the second if statement?
This is where I'm stuck and a bit confused: we check whether a transfer queue family is available and, if so, use it with a staging buffer to copy the contents from the CPU to the GPU for the vertex buffer(s). From what I can see, my card does have this transfer family, but that family has no present support. Thinking about it, though, if you are using a transfer family and transfer queue you wouldn't want to present from it directly; you are just copying data from a temporary vertex buffer on the CPU to the vertex buffer that will be used on the GPU. So I'm wondering whether the final check in this if statement is correct. If my assumptions about how Vulkan works here are incorrect, please don't hesitate to correct me, as this is my first attempt at getting a Vulkan rendering application working.
There's no Vulkan API or NVidia driver bug. It's right there in your report sheet:
supportsPresent 0
AMD, for example, does seem to support present on VK_QUEUE_TRANSFER_BIT queue families, but it is purely optional (31.4. Querying for WSI Support):
Not all physical devices will include WSI support. Within a physical device, not all queue families will support presentation.
There's no good reason to check presentSupport when searching for a transfer-specific queue; this is likely a copy-and-paste error somewhere. Typically you don't care whether anything other than the graphics queue supports presentation.
You do want to use a transfer queue that does not have the graphics bit set, as such a queue is likely to correspond to dedicated transfer hardware that will not impact the performance of work being done on the graphics queue.
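For illustration, here is a minimal sketch of a one-shot copy submitted on such a dedicated transfer queue; device, transferCmdPool (a command pool created on the transfer family), transferQueue, src, dst and size are all assumed to exist:

// Sketch: record and submit a single vkCmdCopyBuffer on the transfer queue.
VkCommandBufferAllocateInfo alloc = {};
alloc.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
alloc.commandPool = transferCmdPool;
alloc.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
alloc.commandBufferCount = 1;
VkCommandBuffer cmd;
vkAllocateCommandBuffers( device, &alloc, &cmd );

VkCommandBufferBeginInfo begin = {};
begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
vkBeginCommandBuffer( cmd, &begin );
VkBufferCopy region = { 0, 0, size }; // srcOffset, dstOffset, size
vkCmdCopyBuffer( cmd, src, dst, 1, &region );
vkEndCommandBuffer( cmd );

VkSubmitInfo submit = {};
submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.commandBufferCount = 1;
submit.pCommandBuffers = &cmd;
vkQueueSubmit( transferQueue, 1, &submit, VK_NULL_HANDLE );
vkQueueWaitIdle( transferQueue ); // acceptable for setup-time copies
vkFreeCommandBuffers( device, transferCmdPool, 1, &cmd );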
After reading a few good answers here and doing some more testing on my end, I think I have found an appropriate solution for the application's design. This function is called about 4 or 5 times throughout the application by other functions: when Vulkan is being initialized, again when rating device suitability to choose the best available device, when creating the logical device, and so forth.
All of these initial calls typically only need the queue family count and/or index values to make sure a suitable graphics device with a queue family is available for graphics processing and rendering.
However, when this function is called to create a buffer that will be used as a staging buffer for an existing dedicated transfer queue, we actually do need the queue family and all of its properties. So to fix the problem, I left the presentSupport condition in place when checking for the graphics queue, but when the loop moves on to the next index to check for the dedicated transfer queue, I omitted the presentSupport check altogether.
QueueFamilyIndices FindQueueFamilies( const VkPhysicalDevice* device, const VkSurfaceKHR* surface ) {
    QueueFamilyIndices indices;

    uint32_t queueFamilyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties( *device, &queueFamilyCount, nullptr );
    std::vector<VkQueueFamilyProperties> queueFamilies( queueFamilyCount );
    vkGetPhysicalDeviceQueueFamilyProperties( *device, &queueFamilyCount, queueFamilies.data() );

    int i = 0;
    for( const auto &queueFamily : queueFamilies ) {
        VkBool32 presentSupport = false;
        vkGetPhysicalDeviceSurfaceSupportKHR( *device, i, *surface, &presentSupport );

        if( queueFamily.queueCount > 0 && (queueFamily.queueFlags & VK_QUEUE_GRAPHICS_BIT) && presentSupport ) {
            indices.graphicsFamily = i;
        }
        if( queueFamily.queueCount > 0 && (queueFamily.queueFlags & VK_QUEUE_TRANSFER_BIT) &&
            !(queueFamily.queueFlags & VK_QUEUE_GRAPHICS_BIT) /*&& presentSupport*/ ) {
            indices.transferFamily = i;
        }
        if( indices.isComplete() ) {
            break;
        }
        i++;
    }

    if( indices.graphicsFamily >= 0 && indices.transferFamily == -1 ) {
        std::cout << "Graphics family found, transfer family missing: using graphics family" << std::endl;
        indices.transferFamily = indices.graphicsFamily;
    }

    return indices;
}
Now not only is indices.transferFamily being set to i on the second iteration; the indices.isComplete() check also returns true, and the final fallback if statement now evaluates false. Everything seems to be rendering properly without issue, and the staging buffers now appear to be copied to the GPU instead of the CPU being used for the vertex buffer objects.
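For reference, here is a minimal sketch of how the two indices found above can be turned into queue requests when creating the logical device; everything outside indices is an assumption:

// Sketch: request one queue from each distinct family found above.
// queueInfos then feeds VkDeviceCreateInfo::pQueueCreateInfos.
#include <set>
#include <vector>

std::set<int> uniqueFamilies = { indices.graphicsFamily, indices.transferFamily };
float priority = 1.0f;
std::vector<VkDeviceQueueCreateInfo> queueInfos;
for( int family : uniqueFamilies ) {
    VkDeviceQueueCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    info.queueFamilyIndex = static_cast<uint32_t>( family );
    info.queueCount = 1;
    info.pQueuePriorities = &priority;
    queueInfos.push_back( info );
}

Using a std::set means the two entries collapse to one when the fallback sets transferFamily equal to graphicsFamily, which Vulkan requires (each family may appear only once in pQueueCreateInfos).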
If I have a datasource containing only polygons, and each polygon has a string field, say its name: this name is unique for each geometry, and the names are assigned dynamically. The more similar the names of two geometries are, the closer the geometries are.
What I want to do is fetch all the geometries whose name begins with a prefix given by the user.
I looked at GDAL's algorithms, but it seems that nothing fits my problem (maybe I didn't search enough), and I don't want to scan all the geometries to find the ones that match.
Thanks,
EDIT:
As mentioned in the comments, I didn't give an example of what I'm trying to do.
I'm using the GDAL and OGR libraries (v1.11.0) in C++ because I have to work with raster and vector data at the same time.
In my process I use the function GDALPolygonize() to extract the polygons; here is a sample:
GDALDataset* dataset; // a dataset of Int32
size_t width, height;

// Do something with the dataset

const char* pszDriverName = "ESRI Shapefile";
OGRSFDriver* poDriver;
OGRRegisterAll();
poDriver = OGRSFDriverRegistrar::GetRegistrar()->GetDriverByName( pszDriverName );
if( poDriver == NULL )
{
    printf( "%s driver not available.\n", pszDriverName );
    exit( 1 );
}

OGRDataSource* poDS;
poDS = poDriver->CreateDataSource( "name.shp", NULL );
if( poDS == NULL )
{
    printf( "Creation of output file failed.\n" );
    exit( 1 );
}

OGRLayer* poLayer;
poLayer = poDS->CreateLayer( "region", NULL, wkbPolygon, NULL );
if( poLayer == NULL )
{
    printf( "Layer creation failed.\n" );
    exit( 1 );
}

OGRFieldDefn oField( "id", OFTString ); // I use a string field here, but another type would work
oField.SetWidth( 32 );
if( poLayer->CreateField( &oField ) != OGRERR_NONE )
{
    printf( "Creating Name field failed.\n" );
    exit( 1 );
}

GDALPolygonize( dataset->GetRasterBand( 1 ), nullptr, poLayer, 0, nullptr, nullptr, nullptr );

OGRDataSource::DestroyDataSource( poDS );
GDALClose( dataset );
This code is mostly taken from the GDAL website.
So I work with the polygons given as OGRPolygon objects. Later the user selects a polygon, and the goal is to find the polygons whose names share a certain number of leading characters with it.
Thanks to GDAL I can use OGRFeature and OGRField, but my only idea is to iterate over every polygon, and I'm sure there is a better way to accomplish this task.
Hopefully this is clearer.
An efficient way is to implement this as a lookup in a radix tree (prefix tree): index each name once, and a prefix query then only visits the matching subtree instead of every geometry.
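For illustration, here is a minimal sketch of that idea in C++ (C++14); TrieNode, insert, collect and findByPrefix are illustrative names, not GDAL API. The returned FIDs can then be fetched with OGRLayer::GetFeature():

#include <map>
#include <memory>
#include <string>
#include <vector>

struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> children;
    std::vector<long> featureIds; // OGR FIDs whose name ends at this node
};

// Index one feature name; call once per feature after GDALPolygonize.
void insert( TrieNode& root, const std::string& name, long fid ) {
    TrieNode* node = &root;
    for ( char c : name ) {
        auto& child = node->children[c];
        if ( !child ) child = std::make_unique<TrieNode>();
        node = child.get();
    }
    node->featureIds.push_back( fid );
}

// Gather every FID stored in a subtree.
void collect( const TrieNode& node, std::vector<long>& out ) {
    out.insert( out.end(), node.featureIds.begin(), node.featureIds.end() );
    for ( const auto& kv : node.children ) collect( *kv.second, out );
}

// All FIDs whose name starts with `prefix`: descend once, then walk the subtree.
std::vector<long> findByPrefix( const TrieNode& root, const std::string& prefix ) {
    const TrieNode* node = &root;
    for ( char c : prefix ) {
        auto it = node->children.find( c );
        if ( it == node->children.end() ) return {};
        node = it->second.get();
    }
    std::vector<long> out;
    collect( *node, out );
    return out;
}

For small layers, OGR's attribute filter (for example poLayer->SetAttributeFilter("id LIKE 'abc%'")) is a simpler alternative, but it scans every feature rather than descending an index.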
I'm trying to write a template method to create shaders for Direct3D. The API functions that create each type of shader, as well as the shader interface types themselves, have different names. So I wrote the following code:
class Shader final
{
public:
explicit Shader( _In_ ID3DBlob *const pBlob );
template <class T>
void Create
( std::weak_ptr<ID3D11Device>& pDevice
, CComPtr<T>& pResource )
{
auto p_Device = pDevice.lock();
if ( mp_Blob && p_Device )
{
HRESULT hr = E_FAIL;
ID3D11ClassLinkage* pClassLinkage = nullptr; // unsupported for now
pResource.Release();
CComPtr<ID3D11DeviceChild> pRes;
if ( std::is_same<T, ID3D11VertexShader>() )
{
hr = p_Device->CreateVertexShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11VertexShader**>( &pRes ) );
}
else if ( std::is_same<T, ID3D11HullShader>() )
{
hr = p_Device->CreateHullShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11HullShader**>( &pRes ) );
}
else if ( std::is_same<T, ID3D11DomainShader>() )
{
hr = p_Device->CreateDomainShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11DomainShader**>( &pRes ) );
}
else if ( std::is_same<T, ID3D11GeometryShader>() )
{
hr = p_Device->CreateGeometryShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11GeometryShader**>( &pRes ) );
}
else if ( std::is_same<T, ID3D11ComputeShader>() )
{
hr = p_Device->CreateComputeShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11ComputeShader**>( &pRes ) );
}
else if ( std::is_same<T, ID3D11PixelShader>() )
{
hr = p_Device->CreatePixelShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11PixelShader**>( &pRes ) );
}
else
{
assert( false
&& "Need a pointer to an ID3D11 shader interface" );
}
//TODO: log hr's error code.
assert( SUCCEEDED( hr ) && "Error: shader creation failed!" );
if ( FAILED( hr ) )
{
pResource.Release();
}
else
{
hr = pRes->QueryInterface( IID_PPV_ARGS( &pResource ) );
assert( SUCCEEDED( hr ) );
}
}
}
private:
CComPtr<ID3DBlob> mp_Blob;
};
It should work, although I have not tested it yet. But the issue is that the compiler doesn't throw away the branching paths that will certainly not be taken. So for example:
CComPtr<ID3D11DomainShader> pDS;
// pShader is an instance of the Shader class
pShader->Create( pDevice, pDS );
will create a domain shader. But the compiler keeps all the paths in the generated function instead of generating just
void Create
( std::weak_ptr<ID3D11Device>& pDevice
, CComPtr<ID3D11DomainShader>& pResource )
{
auto p_Device = pDevice.lock();
if ( mp_Blob && p_Device )
{
HRESULT hr = E_FAIL;
ID3D11ClassLinkage* pClassLinkage = nullptr; // unsupported for now
pResource.Release();
CComPtr<ID3D11DeviceChild> pRes;
if ( true ) // this is the evaluation of std::is_same<ID3D11DomainShader, ID3D11DomainShader>()
{
hr = p_Device->CreateDomainShader
( mp_Blob->GetBufferPointer()
, mp_Blob->GetBufferSize()
, pClassLinkage
, reinterpret_cast<ID3D11DomainShader**>( &pRes ) );
}
//TODO: log hr's error code.
assert( SUCCEEDED( hr ) && "Error: shader creation failed!" );
if ( FAILED( hr ) )
{
pResource.Release();
}
else
{
hr = pRes->QueryInterface( IID_PPV_ARGS( &pResource ) );
assert( SUCCEEDED( hr ) );
}
}
}
I think there should be a way to do this because the type of the shader is known at compile time, but I don't really know how (my metaprogramming skills still need to grow).
P.S. I compiled in both debug and release configurations, and in both the untaken paths are kept.
The following may help (the createShader overloads are meant to be members of Shader, alongside mp_Blob):
HRESULT createShader(
    ID3D11Device& device,
    CComPtr<ID3D11VertexShader>& pResource,
    CComPtr<ID3D11DeviceChild>& pRes)
{
    // pResource is unused here; its type only selects this overload.
    return device.CreateVertexShader(
        mp_Blob->GetBufferPointer(),
        mp_Blob->GetBufferSize(),
        nullptr, // class linkage, unsupported for now
        reinterpret_cast<ID3D11VertexShader**>(&pRes));
}
// ... similar overloads for the other shader types ...

template <class T>
void Create(
    std::weak_ptr<ID3D11Device>& pDevice,
    CComPtr<T>& pResource)
{
    auto p_Device = pDevice.lock();
    if (!mp_Blob || !p_Device) {
        return;
    }
    pResource.Release();
    CComPtr<ID3D11DeviceChild> pRes;
    // ---------------- 8< --------------------
    // Here is the change: no more `if` to check the type,
    // let the compiler choose the correct overload.
    HRESULT hr = createShader(*p_Device, pResource, pRes);
    // ---------------- >8 --------------------
    assert( SUCCEEDED( hr ) && "Error: shader creation failed!" );
    if ( FAILED( hr ) ) {
        pResource.Release();
    } else {
        hr = pRes->QueryInterface( IID_PPV_ARGS( &pResource ) );
        assert( SUCCEEDED( hr ) );
    }
}
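With C++17 the same collapse can also be had inside a single function via if constexpr, which discards the untaken branches at instantiation time. A minimal sketch of the Create member from the question, requiring <type_traits> and showing only two shader types:

template <class T>
void Create( std::weak_ptr<ID3D11Device>& pDevice, CComPtr<T>& pResource )
{
    auto p_Device = pDevice.lock();
    if ( !mp_Blob || !p_Device )
        return;
    pResource.Release();
    HRESULT hr = E_FAIL;
    // Each branch is compiled only for the matching T, so the generated
    // function contains exactly one Create* call.
    if constexpr ( std::is_same_v<T, ID3D11VertexShader> )
        hr = p_Device->CreateVertexShader( mp_Blob->GetBufferPointer(),
                                           mp_Blob->GetBufferSize(),
                                           nullptr, &pResource );
    else if constexpr ( std::is_same_v<T, ID3D11DomainShader> )
        hr = p_Device->CreateDomainShader( mp_Blob->GetBufferPointer(),
                                           mp_Blob->GetBufferSize(),
                                           nullptr, &pResource );
    // ... remaining shader types follow the same pattern ...
    assert( SUCCEEDED( hr ) && "Error: shader creation failed!" );
}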
Regarding your optimisations:
I think your concern is that code is generated to handle every case regardless of the template type. You need to shift your is_same logic into the enable_if of my metaprogramming solution below; the function instantiated for the type you want will then contain ONLY the code you want.
HOWEVER, I still read your question as a problem of too much abstraction: you cannot have an Animal class accept only a Banana when the underlying animal is a Monkey. (In this classic example, Monkey derives from Animal and Banana from Food, and Animal has a method void eat(Food).)
How to do what you want well
(Your question is a bit long, so I skimmed it.)
Remember that metaprogramming won't always save the day; there are many cases where you know the types but the program doesn't, for example columns in database result sets.
High performance
Don't let unknown types in, in the first place. Here's a common pattern:
class unverified_thing: public base_class {
public:
    unverified_thing(base_class* data): data(data), type_code(-1) { }
    void set_type_code(int to) { /* throw if type_code != -1 */ type_code = to; }
    derived_A* get_as_derived_A() const { /* throw if type_code is not derived_A's code */
        return (derived_A*)data;
    }
    derived_B* get_as_derived_B() const { /* throw if type_code is not derived_B's code */
        return (derived_B*)data;
    }
    // now forward the base class methods
    whatever base_class_method() {
        return data->base_class_method();
    }
private:
    int type_code;
    base_class* data;
};
Now you can pretend unverified_thing is your data, and you have introduced a form of type checking. You can afford to throw in the getter because you won't be calling it every frame; you only deal with it during setup.
Say shader is the base class of fragment_shader and vertex_shader: you can work with plain shaders, having set the type code, right up until you compile the shader; then you cast to the correct derived type, with a runtime error if it is wrong. This avoids C++ RTTI, which can be quite heavy.
Remember you can afford setup time; you want to make sure every bit of data you send into the engine is correct.
This pattern comes from allowing only validated input through (which stops SO many bugs): you have an unverified_thing that does not derive from the data type, and you can only extract the data without error once the type has been marked verified.
An even better way to do this (but it can get messy quickly) is to have:
template<bool VERIFIED = true>
class user_input { };

/* somewhere in your dialog class (or whatever) */
user_input<false> get_user_input() const { /* whatever */ }

/* then have somewhere */
user_input<> verify_input( const user_input<false>& some_input ) { /* which will throw as needed */ }
For large data classes of user_input it can be good to hide a large_data* inside the user_input class, but you get the idea.
To use metaprogramming (this limits how flexible the end result can be regarding user input):
template<class U>
typename ::std::enable_if<my_funky_criteria<U>::value,funky_shader>::type
Create(::std::istream& input) { /*blah*/ }
with
template<class U>
struct my_funky_criteria: ::std::conditional</*what you want*/, ::std::true_type, ::std::false_type>::type { };
This must be a compiler-settings issue, even though you stated that you use release mode. Have you checked which optimization level your release configuration actually uses? On MSVC, /O1 optimizes for size and /O2 for speed (there is no /O3); an optimize-for-size build could perhaps keep one shared body instead of pruning the branches in each instantiation (though I'm not sure the standard says anything about this).
Also check the disassembly window to see whether the IDE is simply fooling you, and rebuild your project; sometimes Visual Studio fails to notice changed header files.
There is simply no other answer than build settings in this particular case...
I'm currently creating a sound system for my project. Every call to PlayAsync creates a sound instance inside a std::thread callback, where the sound data is processed in a loop. While the thread runs, the sound instance is stored in a static vector. When the thread ends (the sound is complete), it deletes the sound instance and decrements the instance count. When the application ends, it must stop all sounds immediately by sending an interrupt to every sound loop.
The problem is the array keeping these sounds. I am not sure, but I think a vector isn't the right choice for this purpose. Here is the code:
void gSound::PlayAsync()
{
    std::thread t( gSound::Play, mp_Audio, std::ref( *this ) );
    t.detach();
}

HRESULT gSound::Play( IXAudio2* s_XAudio, gSound& sound )
{
    gSound* pSound = new gSound( sound );
    pSound->m_Disposed = false;

    HRESULT hr;

    // Create the source voice
    IXAudio2SourceVoice* pSourceVoice;
    if( FAILED( hr = s_XAudio->CreateSourceVoice( &pSourceVoice, pSound->pwfx ) ) )
    {
        gDebug::ShowMessage( L"Error creating source voice" );
        return hr;
    }

    // Submit the wave sample data using an XAUDIO2_BUFFER structure
    XAUDIO2_BUFFER buffer = { 0 };
    buffer.pAudioData = pSound->pbWaveData;
    buffer.Flags = XAUDIO2_END_OF_STREAM; // tell the source voice not to expect any data after this buffer
    buffer.AudioBytes = pSound->cbWaveSize;

    if( FAILED( hr = pSourceVoice->SubmitSourceBuffer( &buffer ) ) )
    {
        gDebug::ShowMessage( L"Error submitting source buffer" );
        pSourceVoice->DestroyVoice();
        return hr;
    }

    hr = pSourceVoice->Start( 0 );

    // Let the sound play
    BOOL isRunning = TRUE;
    m_soundInstanceCount++;
    mp_SoundInstances.push_back( pSound ); // #MARK2

    while( SUCCEEDED( hr ) && isRunning && pSourceVoice != nullptr && !pSound->m_Interrupted )
    {
        XAUDIO2_VOICE_STATE state;
        pSourceVoice->GetState( &state );
        isRunning = ( state.BuffersQueued > 0 ) != 0;
        Sleep( 10 );
    }

    pSourceVoice->DestroyVoice();
    delete pSound;
    pSound = nullptr; // is this correct?
    m_soundInstanceCount--;
    return 0;
}

void gSound::InterrupAllSoundInstances()
{
    for( auto Iter = mp_SoundInstances.begin(); Iter != mp_SoundInstances.end(); Iter++ )
    {
        if( *Iter != nullptr ) // #MARK1
        {
            (*Iter)->m_Interrupted = true;
        }
    }
}
I call this in the application class immediately after the main application loop, before the sound objects are disposed:
gSound::InterrupAllSoundInstances();
while ( gSound::m_soundInstanceCount > 0 ) // wait until all sound instances in threads have been deleted
{
}
Questions:
So #MARK1 - How can I check whether a pointer in the vector is still valid? I don't have experience with this, and I get errors when I try to inspect the freed memory (it is not equal to null).
And #MARK2 - How do I use the vector correctly? Or is a vector the wrong choice here? Every time I create a sound instance the vector grows, which is not good.
A typical issue:
delete pSound;
pSound = nullptr; // issue
This does not do what you think.
It will effectively set pSound to null, but there are other copies of the same pointer too (at least one in the vector) which do not get nullified. This is why you do not find nullptr in your vector.
Instead you could register the index into the vector and nullify that: mp_SoundInstances[index] = nullptr;.
However, I am afraid that you simply do not understand memory handling well yet, and that the code lacks structure. For the memory handling it is hard to tell without more details, and your system seems complicated enough that it would take too long to explain here. For the structure, you should read a bit about the Observer pattern.
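To make that concrete, here is a minimal hedged sketch of such a registry built on std::shared_ptr and a mutex; the names and the locking discipline are assumptions, not code from the question:

#include <atomic>
#include <memory>
#include <mutex>
#include <vector>

// Stand-in for the question's class; atomic<bool> also avoids the data race
// that a plain bool interrupt flag has across threads.
struct gSound { std::atomic<bool> m_Interrupted{ false }; };

std::mutex g_instancesMutex;
std::vector<std::shared_ptr<gSound>> g_instances;

// Register a new playing instance and remember its slot index.
size_t RegisterInstance( std::shared_ptr<gSound> sound )
{
    std::lock_guard<std::mutex> lock( g_instancesMutex );
    g_instances.push_back( std::move( sound ) );
    return g_instances.size() - 1;
}

// Called by the playback thread when the sound finishes: resetting the
// shared_ptr releases the instance, so there is no raw delete and no
// dangling pointer left in the vector.
void UnregisterInstance( size_t index )
{
    std::lock_guard<std::mutex> lock( g_instancesMutex );
    g_instances[index].reset();
}

// Ask every live instance to stop; the playback loops poll m_Interrupted.
void InterruptAllInstances()
{
    std::lock_guard<std::mutex> lock( g_instancesMutex );
    for ( auto& s : g_instances )
        if ( s ) s->m_Interrupted = true;
}

Clearing the slot instead of erasing it keeps the indices stable; if the vector's growth bothers you, reuse empty slots on registration.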