Decoding flags is not working correctly - C++

I have designed some flags:
enum ImportAssignment
{
OCLMChairman = 0x00000001,
OCLMOpenPrayer = 0x00000002,
OCLMClosePrayer = 0x00000004,
OCLMConductorCBS = 0x00000008,
OCLMReaderCBS = 0x00000016,
PTChairman = 0x00000032,
PTHospitality = 0x00000064,
WTConductor = 0x00000128,
WTReader = 0x00000256
};
In my dialog I read/write the flags from/to the registry:
void CImportOCLMAssignmentHistoryDlg::ReadSettings()
{
m_dwImportFlags = theApp.GetNumberSetting(theApp.GetActiveScheduleSection(_T("Options")), _T("ImportFlags"), 0);
}
void CImportOCLMAssignmentHistoryDlg::SaveSettings()
{
theApp.SetNumberSetting(theApp.GetActiveScheduleSection(_T("Options")), _T("ImportFlags"), m_dwImportFlags);
}
SetNumberSetting is basically a wrapper for GetProfileInt, etc.
I then have two methods for encoding and decoding the flags into a series of BOOL variables (check boxes):
void CImportOCLMAssignmentHistoryDlg::DecodeImportFlags()
{
m_bImportOCLMChairman = (m_dwImportFlags & ImportAssignment::OCLMChairman);
m_bImportOCLMOpenPrayer = (m_dwImportFlags & ImportAssignment::OCLMOpenPrayer);
m_bImportOCLMClosePrayer = (m_dwImportFlags & ImportAssignment::OCLMClosePrayer);
m_bImportOCLMConductorCBS = (m_dwImportFlags & ImportAssignment::OCLMConductorCBS);
m_bImportOCLMReaderCBS = (m_dwImportFlags & ImportAssignment::OCLMReaderCBS);
m_bImportPTChairman = (m_dwImportFlags & ImportAssignment::PTChairman);
m_bImportPTHospitality = (m_dwImportFlags & ImportAssignment::PTHospitality);
m_bImportWTConductor = (m_dwImportFlags & ImportAssignment::WTConductor);
m_bImportWTReader = (m_dwImportFlags & ImportAssignment::WTReader);
}
void CImportOCLMAssignmentHistoryDlg::EncodeImportFlags()
{
m_dwImportFlags = 0; // Reset
if (m_bImportOCLMChairman) m_dwImportFlags |= ImportAssignment::OCLMChairman;
if (m_bImportOCLMOpenPrayer) m_dwImportFlags |= ImportAssignment::OCLMOpenPrayer;
if (m_bImportOCLMClosePrayer) m_dwImportFlags |= ImportAssignment::OCLMClosePrayer;
if (m_bImportOCLMConductorCBS) m_dwImportFlags |= ImportAssignment::OCLMConductorCBS;
if (m_bImportOCLMReaderCBS) m_dwImportFlags |= ImportAssignment::OCLMReaderCBS;
if (m_bImportPTChairman) m_dwImportFlags |= ImportAssignment::PTChairman;
if (m_bImportPTHospitality) m_dwImportFlags |= ImportAssignment::PTHospitality;
if (m_bImportWTConductor) m_dwImportFlags |= ImportAssignment::WTConductor;
if (m_bImportWTReader) m_dwImportFlags |= ImportAssignment::WTReader;
}
When I first run the application the checkboxes are unticked. I then tick a couple, close the dialog and re-open it. The first two checkboxes are always the ones that come back ticked.
I am supporting x64 and x86 builds.
What am I doing wrong?

On further debugging I found the solution to my problem.
I had to adjust my DecodeImportFlags method:
void CImportOCLMAssignmentHistoryDlg::DecodeImportFlags()
{
m_bImportOCLMChairman = (m_iImportFlags & ImportAssignment::OCLMChairman) ? TRUE : FALSE;
m_bImportOCLMOpenPrayer = (m_iImportFlags & ImportAssignment::OCLMOpenPrayer) ? TRUE : FALSE;
m_bImportOCLMClosePrayer = (m_iImportFlags & ImportAssignment::OCLMClosePrayer) ? TRUE : FALSE;
m_bImportOCLMConductorCBS = (m_iImportFlags & ImportAssignment::OCLMConductorCBS) ? TRUE : FALSE;
m_bImportOCLMReaderCBS = (m_iImportFlags & ImportAssignment::OCLMReaderCBS) ? TRUE : FALSE;
m_bImportPTChairman = (m_iImportFlags & ImportAssignment::PTChairman) ? TRUE : FALSE;
m_bImportPTHospitality = (m_iImportFlags & ImportAssignment::PTHospitality) ? TRUE : FALSE;
m_bImportWTConductor = (m_iImportFlags & ImportAssignment::WTConductor) ? TRUE : FALSE;
m_bImportWTReader = (m_iImportFlags & ImportAssignment::WTReader) ? TRUE : FALSE;
}
The expression (m_iImportFlags & ImportAssignment::XXXXX) returns the actual flag value rather than TRUE or FALSE, so I needed the conditional test so that each BOOL could be set correctly.
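Equivalently, the ternary can be written as a comparison against zero; this is just an alternative spelling of the same test, assuming the same member names as above:
m_bImportOCLMChairman = (m_iImportFlags & ImportAssignment::OCLMChairman) != 0;
m_bImportWTReader = (m_iImportFlags & ImportAssignment::WTReader) != 0;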
Update: This is how I now declare the flags:
enum ImportAssignment
{
/*
OCLMChairman = 1,
OCLMOpenPrayer = 2,
OCLMClosePrayer = 4,
OCLMConductorCBS = 8,
OCLMReaderCBS = 16,
PTChairman = 32,
PTHospitality = 64,
WTConductor = 128,
WTReader = 256*/
None = 0,
OCLMChairman = 1 << 0,
OCLMOpenPrayer = 1 << 1,
OCLMClosePrayer = 1 << 2,
OCLMConductorCBS = 1 << 3,
OCLMReaderCBS = 1 << 4,
PTChairman = 1 << 5,
PTHospitality = 1 << 6,
WTConductor = 1 << 7,
WTReader = 1 << 8
};
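For reference, the hexadecimal constants in the original enum are not the decimal powers of two they resemble: 0x00000016 is decimal 22 and 0x00000032 is decimal 50, so those values span several bits and overlap each other, which the 1 << n form above cannot do. A small standalone sketch (plain C++, separate from the dialog code) illustrates this:

#include <cstdio>

int main()
{
    // 0x16 is 22 (binary 10110) and 0x32 is 50 (binary 110010); they share bits.
    std::printf("%d %d %d\n", 0x16, 0x32, 0x16 & 0x32); // prints 22 50 18

    // With the 1 << n form each enumerator is a distinct single bit, so masks never overlap.
    static_assert(((1 << 4) & (1 << 5)) == 0, "single-bit flags cannot overlap");
    return 0;
}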

Related

Nvidia NVEnc output corrupt when enableSubFrameWrite = 1

My preset:
m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceMode = 3u;
m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceModeData = (uint32_t)m_stEncodeStreamInfo.nMaxSliceNum; //4
m_stCreateEncodeParams.reportSliceOffsets = 1;
m_stCreateEncodeParams.enableSubFrameWrite = 1;
Code for processing the output:
NV_ENC_LOCK_BITSTREAM lockBitstreamData;
memset(&lockBitstreamData, 0, sizeof(lockBitstreamData));
lockBitstreamData.version = NV_ENC_LOCK_BITSTREAM_VER;
lockBitstreamData.outputBitstream = pEncodeBuffer->stOutputBfr.hBitstreamBuffer;
lockBitstreamData.doNotWait = 1u;
std::vector<uint32_t> arrSliceOffset(m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceModeData);
lockBitstreamData.sliceOffsets = arrSliceOffset.data();
while (true)
{
NVENCSTATUS status = m_pEncodeAPI->nvEncLockBitstream(m_hEncoder, &lockBitstreamData);
auto tick = int(std::chrono::steady_clock::now().time_since_epoch().count() / 1000000);
if (status == NVENCSTATUS::NV_ENC_SUCCESS)
{
if (lockBitstreamData.hwEncodeStatus == 2)
{
static std::ofstream of("slice.h265", std::ios::trunc | std::ios::binary);
of.write((char*)lockBitstreamData.bitstreamBufferPtr, lockBitstreamData.bitstreamSizeInBytes);
of.flush();
break;
}
NVENCAPI_CALL_CHECK(m_pEncodeAPI->nvEncUnlockBitstream(m_hEncoder, lockBitstreamData.outputBitstream));
}
else
{
break;
}
}
Playing the bitstream:
ffplay -i slice.h265
Output: Packet corrupt
arrSliceOffset[0] is always 255.
I watched the memory in the VS debugger and compared it with the enableSubFrameWrite = 0 case: bitstreamSizeInBytes is smaller than the amount of valid data. Is this a bug, or am I missing some detail? Can anybody tell me how to use enableSubFrameWrite correctly?

Trying to implement NTP Client using Managed C++ but getting date from 1899 using time.windows.com

I am trying to implement code to get the time from time.windows.com, but it returns a weird date (the year of the date I get is 1899). Since the same server works for my unmanaged C++ code using WinSock, I imagine something must be wrong with my code itself. Can someone look at my code below and tell me what I am doing wrong?
typedef unsigned int uint;
typedef unsigned long ulong;
long GetTimestampFromServer()
{
System::String^ server = L"time.windows.com";
array<unsigned char>^ ntpData = gcnew array<unsigned char>(48);
ntpData[0] = 0x1B;
array<System::Net::IPAddress^>^ addresses = System::Net::Dns::GetHostEntry(server)->AddressList;
System::Net::IPEndPoint^ ipEndPoint = gcnew System::Net::IPEndPoint(addresses[0], 123);
System::Net::Sockets::Socket^ socket = gcnew System::Net::Sockets::Socket
(
System::Net::Sockets::AddressFamily::InterNetwork,
System::Net::Sockets::SocketType::Dgram,
System::Net::Sockets::ProtocolType::Udp
);
try
{
socket->Connect(ipEndPoint);
socket->ReceiveTimeout = 3000;
socket->Send(ntpData);
socket->Receive(ntpData);
socket->Close();
}
catch (System::Exception^ e)
{
System::Console::WriteLine(e->Message);
return 0;
}
const System::Byte serverReplyTime = 40;
ulong intPart = System::BitConverter::ToUInt32(ntpData, serverReplyTime);
ulong fractPart = System::BitConverter::ToUInt32(ntpData, serverReplyTime + 4);
intPart = SwapEndianness(intPart);
fractPart = SwapEndianness(fractPart);
long long milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);
System::DateTime networkDateTime = (gcnew System::DateTime(1900, 1, 1, 0, 0, 0, System::DateTimeKind::Utc))->AddMilliseconds((long)milliseconds);
std::cout << ConvertToTimestamp(networkDateTime);
return 0;
}
static uint SwapEndianness(ulong x)
{
return (uint)(((x & 0x000000ff) << 24) +
((x & 0x0000ff00) << 8) +
((x & 0x00ff0000) >> 8) +
((x & 0xff000000) >> 24));
}
long ConvertToTimestamp(System::DateTime value)
{
System::TimeZoneInfo^ NYTimeZone = System::TimeZoneInfo::FindSystemTimeZoneById(L"Eastern Standard Time");
System::DateTime NyTime = System::TimeZoneInfo::ConvertTime(value, NYTimeZone);
System::TimeZone^ localZone = System::TimeZone::CurrentTimeZone;
System::Globalization::DaylightTime^ dst = localZone->GetDaylightChanges(NyTime.Year);
NyTime = NyTime.AddHours(-1);
System::DateTime epoch = System::DateTime(1970, 1, 1, 0, 0, 0, 0).ToLocalTime();
System::TimeSpan span = (NyTime - epoch);
return (long)System::Convert::ToDouble(span.TotalSeconds);
}
This is largely a guess, but:
intPart * 1000
It has been about 3.8 billion seconds since 1900. In order to store that number, all 32 bits of the ulong are used. Then you multiply by 1000, without up-casting first. This keeps the result as a 32-bit integer, but it overflows, so your data is destroyed.
To fix, cast to a 64-bit integer first, before converting to milliseconds. Or don't bother converting, just call AddSeconds directly.
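To see the overflow concretely, here is a small standalone C++ sketch (separate from the managed code above) with a representative seconds value; on Windows, the ulong typedef in the question is a 32-bit unsigned long:

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t seconds = 3800000000u;               // roughly the NTP seconds elapsed since 1900
    uint32_t wrong = seconds * 1000;              // 32-bit multiply wraps around, top bits are lost
    uint64_t right = (uint64_t)seconds * 1000;    // widen to 64 bits first, then multiply

    std::printf("%u\n", wrong);                   // the wrapped-around value, not the real product
    std::printf("%llu\n", (unsigned long long)right); // 3800000000000
    return 0;
}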
That said, you could have a networking problem and are getting all zeros as your result, which would be January 1 1900, but then you're converting to Eastern Standard Time, which is a 5 hour offset, which would be 7:00 PM on December 31, 1899.
David Yaw pushed me in the right direction; big thanks to him, I finally got my code to work. There were several incorrect statements throughout the method. I have fixed them and am posting the new code below.
ref class SNTP
{
public: static DateTime GetNetworkTime()
{
System::String^ ntpServer = "time.windows.com";
auto ntpData = gcnew cli::array<unsigned char>(48);
ntpData[0] = 0x1B; // LI = 0 (no warning), VN = 3 (NTP version 3), Mode = 3 (client mode)
auto addresses = System::Net::Dns::GetHostEntry(ntpServer)->AddressList;
auto ipEndPoint = gcnew System::Net::IPEndPoint(addresses[0], 123);
auto socket = gcnew System::Net::Sockets::Socket
(
System::Net::Sockets::AddressFamily::InterNetwork,
System::Net::Sockets::SocketType::Dgram,
System::Net::Sockets::ProtocolType::Udp
);
socket->Connect(ipEndPoint);
socket->ReceiveTimeout = 3000;
socket->Send(ntpData);
socket->Receive(ntpData);
socket->Close();
const System::Byte serverReplyTime = 40;
System::UInt64 intPart = System::BitConverter::ToUInt32(ntpData, serverReplyTime);
System::UInt64 fractPart = System::BitConverter::ToUInt32(ntpData, serverReplyTime + 4);
intPart = SwapEndianness(intPart);
fractPart = SwapEndianness(fractPart);
auto milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);
auto networkDateTime = (System::DateTime(1900, 1, 1, 0, 0, 0, System::DateTimeKind::Utc)).AddMilliseconds((System::Int64)milliseconds);
return networkDateTime.ToLocalTime();
}
private: static System::UInt32 SwapEndianness(System::UInt64 x)
{
return (System::UInt32)(((x & 0x000000ff) << 24) +
((x & 0x0000ff00) << 8) +
((x & 0x00ff0000) >> 8) +
((x & 0xff000000) >> 24));
}
};

Micropython User Module: Class Prints Initialized Data Even If Attributes Have Changed

Here's the problem as seen from the front end: the output from printing the object does not change even when the attributes do, yet I get the proper values when printing the attributes directly.
import bitbanglab as bbl
class Entity(bbl.Entity):
def __init__(self):
super().__init__(bytearray([0x0F]), 1, 1, 2, 2, 8, 2)
print(self) #Entity(x:1, y:1, width:2, height:2, scale:8, len:1, pal:2)
self.x = 50
self.y = 20
self.width = 50
self.height= 60
self.scale = 10
self.len = 200
self.pal = 14
print(self) #Entity(x:1, y:1, width:2, height:2, scale:8, len:1, pal:2)
print(self.x) #50
print(self.y) #20
Entity()
Print Method
STATIC void Entity_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
(void)kind;
bitbanglab_Entity_obj_t *self = MP_OBJ_TO_PTR(self_in);
mp_printf(print, "Entity(x:%u, y:%u, width:%u, height:%u, scale:%u, len:%u, pal:%u)", self->x, self->y, self->width, self->height, self->scale, self->len, self->pal);
}
get/set attributes
STATIC void Entity_attr(mp_obj_t self_in, qstr attr, mp_obj_t *dest) {
bitbanglab_Entity_obj_t *self = MP_OBJ_TO_PTR(self_in);
if (dest[0] == MP_OBJ_NULL) {
if (attr == MP_QSTR_x)
dest[0] = mp_obj_new_int(self->x);
else if(attr == MP_QSTR_y)
dest[0] = mp_obj_new_int(self->y);
else if(attr == MP_QSTR_width)
dest[0] = mp_obj_new_int(self->width);
else if(attr == MP_QSTR_height)
dest[0] = mp_obj_new_int(self->height);
else if(attr == MP_QSTR_scale)
dest[0] = mp_obj_new_int(self->scale);
else if(attr == MP_QSTR_len)
dest[0] = mp_obj_new_int(self->len);
else if(attr == MP_QSTR_pal)
dest[0] = mp_obj_new_int(self->pal);
} else {
if (attr == MP_QSTR_x)
self->x = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_y)
self->y = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_width)
self->width = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_height)
self->height = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_scale)
self->scale = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_len)
self->len = mp_obj_get_int(dest[1]);
else if (attr == MP_QSTR_pal)
self->pal = mp_obj_get_int(dest[1]);
dest[0] = MP_OBJ_NULL;
}
}
Class Type
const mp_obj_type_t bitbanglab_Entity_type = {
{ &mp_type_type },
.name = MP_QSTR_bitbanglab,
.print = Entity_print,
.make_new = Entity_make_new,
.locals_dict = (mp_obj_dict_t*)&Entity_locals_dict,
.attr = Entity_attr,
};
Class Object
typedef struct _bitbanglab_Entity_obj_t {
mp_obj_base_t base;
uint8_t *bitmap;
uint16_t len;
uint16_t x;
uint16_t y;
uint16_t width;
uint16_t height;
uint8_t scale;
uint8_t pal;
} bitbanglab_Entity_obj_t;
Class
The class doesn't have any methods yet. The locals_dict_table for it is empty.
mp_obj_t Entity_make_new(const mp_obj_type_t *type, size_t n_args, size_t n_kw, const mp_obj_t *args) {
mp_arg_check_num(n_args, n_kw, 7, 7, true);
//make self
bitbanglab_Entity_obj_t *self = m_new_obj(bitbanglab_Entity_obj_t);
self->base.type = &bitbanglab_Entity_type;
//arguments
enum { ARG_bitmap, ARG_x, ARG_y, ARG_width, ARG_height, ARG_scale, ARG_pal};
//get buffer
mp_buffer_info_t bufinfo;
mp_get_buffer_raise(args[ARG_bitmap], &bufinfo, MP_BUFFER_RW);
//properties
self->bitmap = (uint8_t *)bufinfo.buf;
self->len = bufinfo.len;
self->x = mp_obj_get_int(args[ARG_x]);
self->y = mp_obj_get_int(args[ARG_y]);
self->width = mp_obj_get_int(args[ARG_width]);
self->height = mp_obj_get_int(args[ARG_height]);
self->scale = mp_obj_get_int(args[ARG_scale]);
self->pal = mp_obj_get_int(args[ARG_pal]);
return MP_OBJ_FROM_PTR(self);
}
I realized the problem: I was subclassing my user-module-defined class, and the values I was setting were not the ones on the super class. If I instantiate the Entity class directly, without a subclass, and change the attributes, printing the object does reflect those changes. My new problem is figuring out how to reach the super's attributes from a subclass. I'll update this answer to be stronger when I figure it all out.

How is it possible to get only the capture or playback devices using pjsua2

I am trying to get the audio devices from pjsua2. I can enumerate all of the devices, but I have not managed to split them into capture devices and playback devices.
void AudioController::load(){
Endpoint ep;
ep.libCreate();
// Initialize endpoint
EpConfig ep_cfg;
ep.libInit( ep_cfg );
AudDevManager &manager = ep.audDevManager();
manager.refreshDevs();
this->input.clear();
const AudioDevInfoVector &list = manager.enumDev();
for(unsigned int i = 0;list.size() != i;i++){
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
qDebug() << info->name.c_str();
qDebug() << info->driver.c_str();
qDebug() << info->caps;
this->input.append(a);
}
ep.libDestroy();
}
This is my output:
Wave mapper
WMME
23
Microfone (Dispositivo de High
WMME
3
Alto-falantes (Dispositivo de H
WMME
21
You can check the fields inputCount and outputCount inside AudioDevInfo.
According to the documentation:
unsigned inputCount
Maximum number of input channels supported by this device. If the value is zero, the device does not support input operation (i.e. it is a playback-only device).
And
unsigned outputCount
Maximum number of output channels supported by this device. If the value is zero, the device does not support output operation (i.e. it is an input-only device).
So you could do something like this:
for(unsigned int i = 0;list.size() != i;i++){
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
if (info->inputCount > 0) {
a->captureDevice = true;
}
if (info->outputCount > 0) {
a->playbackDevice = true;
}
this->input.append(a);
}
Reference: http://www.pjsip.org/pjsip/docs/html/structpj_1_1AudioDevInfo.htm
Another way: you can check the caps (capabilities) field. Something like this:
for (unsigned int i = 0; i < list.size(); i++)
{
AudioDevInfo * info = list[i];
if ((info->caps & PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY) != 0)
{
// Playback devices come here
}
if ((info->caps & PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY) != 0)
{
// Capture devices come here
}
}
caps is a bitmask combined from these possible values:
enum pjmedia_aud_dev_cap {
PJMEDIA_AUD_DEV_CAP_EXT_FORMAT = 1,
PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY = 2,
PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY = 4,
PJMEDIA_AUD_DEV_CAP_INPUT_VOLUME_SETTING = 8,
PJMEDIA_AUD_DEV_CAP_OUTPUT_VOLUME_SETTING = 16,
PJMEDIA_AUD_DEV_CAP_INPUT_SIGNAL_METER = 32,
PJMEDIA_AUD_DEV_CAP_OUTPUT_SIGNAL_METER = 64,
PJMEDIA_AUD_DEV_CAP_INPUT_ROUTE = 128,
PJMEDIA_AUD_DEV_CAP_INPUT_SOURCE = 128,
PJMEDIA_AUD_DEV_CAP_OUTPUT_ROUTE = 256,
PJMEDIA_AUD_DEV_CAP_EC = 512,
PJMEDIA_AUD_DEV_CAP_EC_TAIL = 1024,
PJMEDIA_AUD_DEV_CAP_VAD = 2048,
PJMEDIA_AUD_DEV_CAP_CNG = 4096,
PJMEDIA_AUD_DEV_CAP_PLC = 8192,
PJMEDIA_AUD_DEV_CAP_MAX = 16384
}

Collision filtering with layers (PhysX 3.4)

I would like to filter my collisions with layers like in Unity, but I really don't understand how to do it. I'm following this tutorial: http://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/guide/Manual/RigidBodyCollision.html#collision-filtering
All I want to do is disable collisions between objects that have the Cube layer and objects that have the Plane layer...
Graph.cpp :
bool Graph::Init()
{
/*...*/
cubeCollider->SetLayer(Physics::PhysicLayer::Cube);
planeCollider->SetLayer(Physics::PhysicLayer::Plane);
sphereCollider->SetLayer(Physics::PhysicLayer::Sphere);
capsuleCollider->SetLayer(Physics::PhysicLayer::Capsule);
_physX->SetCollisionFiltering(Physics::PhysicLayer::Cube, Physics::PhysicLayer::Plane);
/*...*/
}
And here is how I set the filter shader :
PhysX.cpp :
void PhysX::SetCollisionFiltering(PhysicLayer p_one, PhysicLayer p_two)
{
// I don't really know what to do here...
PxFilterData filterData;
filterData.word0 = p_one;
filterData.word1 = p_two;
// no collision between objects with layer ONE and objects with layer TWO ?
for (unsigned int i = 0; i < _colliders.size(); ++i)
{
if (_colliders[i]->GetLayer() == p_one || _colliders[i]->GetLayer() == p_two)
_colliders[i]->GetShape()->setSimulationFilterData(filterData);
}
}
physx::PxFilterFlags CreateFilterShader(PxFilterObjectAttributes p_attributes0, PxFilterData p_filterData0,
PxFilterObjectAttributes p_attributes1, PxFilterData p_filterData1,
PxPairFlags& p_pairFlags, const void* p_constantBlock, PxU32 constantBlockSize)
{
// Trigger
if (PxFilterObjectIsTrigger(p_attributes0) || PxFilterObjectIsTrigger(p_attributes1))
{
p_pairFlags = PxPairFlag::eDETECT_DISCRETE_CONTACT
| PxPairFlag::eSOLVE_CONTACT
| PxPairFlag::eNOTIFY_TOUCH_FOUND
| PxPairFlag::eNOTIFY_TOUCH_LOST;
}
// Normal Collision
else
{
// Not sure
if ((p_filterData0.word0 & p_filterData1.word1) && (p_filterData1.word0 & p_filterData0.word1))
{
p_pairFlags = PxPairFlag::eDETECT_DISCRETE_CONTACT
| PxPairFlag::eSOLVE_CONTACT
| PxPairFlag::eNOTIFY_CONTACT_POINTS
| PxPairFlag::eNOTIFY_THRESHOLD_FORCE_FOUND
| PxPairFlag::eNOTIFY_THRESHOLD_FORCE_LOST
| PxPairFlag::eNOTIFY_THRESHOLD_FORCE_PERSISTS
| PxPairFlag::eNOTIFY_TOUCH_FOUND
| PxPairFlag::eNOTIFY_TOUCH_LOST
| PxPairFlag::eNOTIFY_TOUCH_PERSISTS;
}
}
return PxFilterFlag::eDEFAULT;
}
With this code, none of my objects collide... And by the way, I don't really understand what PxFilterData's word0, word1, word2 and word3 are...
Thanks in advance!
This is a very late answer, just a reference for others.
Based on the NVIDIA documentation (https://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/apireference/files/structPxFilterFlag.html), you should return the eSUPPRESS (or eKILL) flag for pairs that should not collide:
PxFilterFlags PhysicsWorldFilterShader(
PxFilterObjectAttributes attributes0, PxFilterData filterData0,
PxFilterObjectAttributes attributes1, PxFilterData filterData1,
PxPairFlags& pairFlags, const void* constantBlock, PxU32 constantBlockSize
)
{
// Checking if layers should be ignored
auto const layerMaskA = filterData0.word0;
auto const layerA = filterData0.word1;
auto const layerMaskB = filterData1.word0;
auto const layerB = filterData1.word1;
auto const aCollision = layerMaskA & layerB;
auto const bCollision = layerMaskB & layerA;
if (aCollision == 0 || bCollision == 0)
{
return PxFilterFlag::eSUPPRESS;
}
// all initial and persisting reports for everything, with per-point data
pairFlags = PxPairFlag::eSOLVE_CONTACT | PxPairFlag::eDETECT_DISCRETE_CONTACT | PxPairFlag::eTRIGGER_DEFAULT;
return PxFilterFlag::eDEFAULT;
}
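For completeness, the shader above expects each shape's filter data to carry the mask of layers it may collide with in word0 and its own layer bit in word1. Here is a minimal sketch of how that data might be assigned; the enum and helper below use illustrative names of mine, not anything from the question's engine:

#include <PxPhysicsAPI.h>

enum PhysicLayerBits : physx::PxU32
{
    CubeBit    = 1 << 0,
    PlaneBit   = 1 << 1,
    SphereBit  = 1 << 2,
    CapsuleBit = 1 << 3
};

void SetupFilterData(physx::PxShape* shape, physx::PxU32 ownLayer, physx::PxU32 collidesWith)
{
    physx::PxFilterData filterData;
    filterData.word0 = collidesWith; // mask of layers this shape is allowed to collide with
    filterData.word1 = ownLayer;     // this shape's own layer bit
    shape->setSimulationFilterData(filterData);
}

// Example: disable Cube/Plane collisions but keep everything else.
// SetupFilterData(cubeShape,  CubeBit,  ~PlaneBit);
// SetupFilterData(planeShape, PlaneBit, ~CubeBit);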