How can I implement a decay for a matrix transformation with react-native-gesture-handler and reanimated?

I'm trying to learn react-native-gesture-handler and react-native-reanimated. I like the idea of using matrices for transformations, but I have a hard time understanding where to fit in physics functions, such as withDecay.
I want to use the new Gestures API (https://docs.swmansion.com/react-native-gesture-handler/docs/api/gestures/gesture) instead of the old gestureHandler API (https://docs.swmansion.com/react-native-gesture-handler/docs/gesture-handlers/api/common-gh)
Here is a nice example of using matrices for transformation, simply multiplying the change matrix with the current matrix:
https://github.com/wcandillon/can-it-be-done-in-react-native/blob/master/bonuses/sticker-app/src/GestureHandler.tsx
This concept works great, but I also want to have a decay effect after ending a pan, i.e. the translation should continue to glide a bit.
Here is my first attempt. It does not work and might be a bit naive, but I think the code shows what I want to do; I just don't know how to do it correctly. My attempt was to add the .onEnd handler, compared to William Candillon's original code.
const pan = Gesture.Pan()
  .onChange(e => {
    matrix.value = multiply4(
      Matrix4.translate(e.changeX, e.changeY, 0),
      matrix.value,
    );
  })
  .onEnd(({velocityX, velocityY}) => {
    matrix.value = multiply4(
      Matrix4.translate(
        withDecay({velocity: velocityX}),
        withDecay({velocity: velocityY}),
        0,
      ),
      matrix.value,
    );
  });

The problem here is that withDecay returns an animation (it auto-updates the shared value it is assigned to), so it can't be used as a plain number inside Matrix4.translate. The idea would be to do trX.value = withDecay(...), the same for y, and then use useDerivedValue to compute the total Matrix4 from those values. I hope this helps.
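Something along these lines, as a rough sketch (offsetX, offsetY and origin are names I'm making up here, Matrix4.identity() is assumed to exist next to Matrix4.translate, and Matrix4/multiply4 are the same helpers as in your snippet):

import {Gesture} from 'react-native-gesture-handler';
import {useSharedValue, useDerivedValue, withDecay} from 'react-native-reanimated';

// Matrix4 and multiply4 come from the same helper module as in your code.
const offsetX = useSharedValue(0);
const offsetY = useSharedValue(0);
const origin = useSharedValue(Matrix4.identity());

// Recomputed whenever offsetX/offsetY change, including while withDecay
// is still animating them after the finger has lifted.
const matrix = useDerivedValue(() =>
  multiply4(Matrix4.translate(offsetX.value, offsetY.value, 0), origin.value),
);

const pan = Gesture.Pan()
  .onChange(e => {
    offsetX.value += e.changeX;
    offsetY.value += e.changeY;
  })
  .onEnd(({velocityX, velocityY}) => {
    offsetX.value = withDecay({velocity: velocityX});
    offsetY.value = withDecay({velocity: velocityY});
  });

Because the derived value re-runs whenever offsetX or offsetY changes, the matrix keeps updating while withDecay animates them, which produces the glide after the pan ends.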

Related

GLSL dynamic looping not working on Intel UHD Graphics [duplicate]

I asked for help about an OpenGL ES 2.0 Problem in this question.
What seems to be the answer is very odd to me.
Therefore I decided to ask this question in hope of being able to understand what is going on.
Here is the piece of faulty vertex-shader code:
// a bunch of uniforms and stuff...
uniform int u_lights_active;
void main()
{
    // some code...
    for (int i = 0; i < u_lights_active; ++i)
    {
        // do some stuff using u_lights_active
    }
    // some other code...
}
I know this looks odd but this is really all code that is needed to explain the problem / faulty behavior.
My question is: Why is the loop not getting executed when I pass in some value greater than 0 for u_lights_active?
When I hardcode some integer, e.g. 4, instead of using the uniform u_lights_active, it works just fine.
One more thing, this only appears on Android but not on the Desktop. I use LibGDX to run the same code on both platforms.
If more information is needed you can look at the original question but I didn't want to copy and paste all the stuff here.
I hope that this approach of keeping it short is appreciated, otherwise I will copy all the stuff over.
Basically GLSL specifies that implementations may restrict loops to have "constant" bounds on them. This is to make it simpler to optimize the code to run in parallel (different loop counts for different pixels would be complex). I believe on some implementations the constants even have to be small. Note that the spec just specifies the "minimum" behavior, so some devices might support more complex loop controls than the spec requires.
Here's a nice summary of the constraints:
http://www.khronos.org/webgl/public-mailing-list/archives/1012/msg00063.html
Here's the GLSL spec (look at section 4 of Appendix A):
http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf
http://www.opengl.org/discussion_boards/showthread.php/171437-can-for-loops-terminate-with-a-uniform
http://www.opengl.org/discussion_boards/showthread.php/177051-GLSL-loop-problem-on-Radeon-HD-cards
http://www.opengl.org/discussion_boards/showthread.php/162722-Problem-when-using-uniform-variable-as-a-loop-count-in-fragment-shader
https://www.opengl.org/discussion_boards/showthread.php/162535-variable-controlled-for-loops
If you have a static loop it can be unrolled and made into static constant lookups. If you absolutely need to make it dynamic, you'll need to store indexed data into a 1D texture and sample that instead.
I'm guessing that the hardware on the desktop is more advanced than on the tablet. Hope this helps!
Kind of a fun half-answer, or rather the solution to the underlying problem that I chose.
The following function is called with 'id' set to the ID of the shader's script block and 'swaps' filled with an array of two-component arrays of strings in the format [[ThingToReplace, ReplaceWith], ...]. It is called before the shader is created.
In the JavaScript:
var ReplaceWith = 6;

// Replace each placeholder string inside the shader's script block,
// before the shader source is read and compiled.
function replaceinID(id, swaps) {
    var thingy = document.getElementById(id);
    for (var i = 0; i < swaps.length; i++) {
        // note: String.replace with a string argument only replaces the
        // first occurrence of the placeholder
        thingy.innerHTML = thingy.innerHTML.replace(swaps[i][0], swaps[i][1]);
    }
}

replaceinID("My_Shader", [['ThingToReplace', ReplaceWith]]);
Coming from C, this is a very macro-like approach, in that it simulates a preprocessor.
In the GLSL:
for (int i = 0; i < ThingToReplace; i++) {
    // whatever goes here
}
Or:
const int val = ThingToReplace;
for (int i = 0; i < val; i++) {
    // whatever goes here
}

Detecting orientation of iPhone using C++

Embarcadero C++Builder 10.3.2 Enterprise
Searching the internet, I could not find any FMX code for this. Based on Delphi code, this should have worked, but the compiler does not like it:
if (Application->FormFactor->Orientations == Fmx::Types::TScreenOrientations::Landscape) {
    // Landscape
}
Also, the value of Application->FormFactor->Orientations is the same whatever the orientation of the iPhone: {System::SetBase = {Data = {[0] = 11 '\v'}}}
How does one determine the orientation?
The Orientations property is a TFormOrientations, which is a System::Set of TFormOrientation values. You can't use Set::operator== to compare it to a single value, which is why you are getting a compiler error. However, you can use the Set::Contains() method to check whether it has a given value, e.g.:
if (Application->FormFactor->Orientations.Contains(Fmx::Forms::TFormOrientation::Landscape)) {
    //...
}
In any case, the Orientations property specifies which orientation(s) the application's Forms are allowed to take (a value of 11 has its 1st, 2nd, and 4th bits set to 1, which correspond to the Portrait, Landscape, and InvertedLandscape orientations being enabled). It does not report what the device's current orientation is. For that, use the IFMXScreenService::GetScreenOrientation() method instead, e.g.:
_di_IFMXScreenService ScreenService;
if (TPlatformServices::Current->SupportsPlatformService(__uuidof(IFMXScreenService), &ScreenService)) {
    if (ScreenService->GetScreenOrientation() == Fmx::Types::TScreenOrientation::Landscape) {
        //...
    }
}

Reverse engineering the checksum algorithm

I have an IP camera that receives commands using POST HTTP requests (for example to call PTZ commands or set various camera settings). The standard way of controlling it is through its own web interface, which is partially an ActiveX plugin and partially standard HTML+JS. Of course, because of the ActiveX part it only works in IE under Windows.
I'm attempting to change that by figuring out all the commands and writing a small Python or JavaScript program to do the same, so that it is more cross-platform.
I have one major problem. Each POST request contains a calculated "cc" field, which I assume is a checksum. The JS code in the cam interface shows that it is calculated by calling a function inside the plugin:
tt = new Date().Format("yyyyMMddhhmmss");
jo_header["tt"] = tt;
if (getCpPlugin() != null && getCpPlugin().valid) {
    jo_header["cc"] = getCpPlugin().nsstpGetCC(tt, session_id);
}
The nsstpGetCC function obviously calculates the checksum from two parameters, the timestamp and the session_id. A real example (captured with Wireshark):
tt = "20171018231918"
session_id = "30303532646561302D623434612D3131"
cc = "849e586524385e1071caa4023a3df75401e5bb82"
The checksum seems to be 160-bit. I tried both SHA-1 and RIPEMD-160 and all combinations of concatenating tt and session_id I could think of, but I can't seem to get the same hash as the one the original plugin gets. The plugin DLL seems to be written in C++, and I have almost no experience with decompilation to dive into this problem from that angle.
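For reference, this is roughly the kind of brute force I have been trying (a Node.js sketch; the concatenation orders are just guesses on my part, and ripemd160 availability depends on the OpenSSL build Node was compiled against):

import {createHash} from "crypto";

const tt = "20171018231918";
const sessionId = "30303532646561302D623434612D3131";
const expected = "849e586524385e1071caa4023a3df75401e5bb82";

// Hash a few plausible inputs with both 160-bit algorithms and compare
// each digest against the cc value captured with Wireshark.
const inputs = [tt + sessionId, sessionId + tt, tt, sessionId];
for (const algo of ["sha1", "ripemd160"]) {
  for (const input of inputs) {
    const digest = createHash(algo).update(input).digest("hex");
    console.log(algo, input, digest, digest === expected ? "<-- match" : "");
  }
}

None of the combinations I tried this way match the captured value.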
So my question basically is: can someone figure out how they calculated that cc, or at least give me an idea of which direction to research further? Maybe I'm looking at the wrong hash algorithms or something... Or give me some idea of how I could figure out what the original ActiveX function nsstpGetCC is doing, for example by decompilation or by monitoring its operation in memory while it runs. What tools should I use?

vtkResliceCursorWidget rotate both axes

I am using a vtkResliceCursorWidget in a VTK app, and I want to set up a useful behavior: when I move one side (axis), I want both axes to be moved.
See the images below for the actual behavior vs. the desired behavior.
I have found, inside the vtkResliceCursorWidget representation, a method that fits my needs:
SetManipulationMode(vtkResliceCursorRepresentation::RotateBothAxes)
but the issue is that even though I have used it, it simply does nothing:
vtkResliceCursorRepresentation* pRep = reinterpret_cast<vtkResliceCursorRepresentation*>(resliceCursorWidget[1]->GetRepresentation());
pRep->SetManipulationMode(vtkResliceCursorRepresentation::RotateBothAxes);
where resliceCursorWidget is a vtkResliceCursorWidget, taken from here:
Example
Somehow I expected this, because the remark on the SetManipulationMode method states quite clearly: "INTERNAL - Do not use. Set the manipulation mode. This is done by the widget", even though the method is "public".
Could you guide me on how to move both axes (of vtkResliceCursorWidget) at the same time?
Thank you.
With the CTRL key as modifier, the behaviour will be as desired.

OPENGL ARB_occlusion_query Occlusion Culling

for (int i = 0; i < Number_Of_queries; i++)
{
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, queries[i]);
    Box[i] // render the bounding box of object i
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);
}
I'm curious about the method suggested in GPU Gems 1 for occlusion culling, where a certain number of queries are performed. Using the method described you can't test individual boxes against each other, so are you supposed to do the following?
Test Box A -> Render Box A
Test Box B -> Render Box B
Test Box C -> Render Box C
and so on...
I'm not sure if I understand you correctly, but isn't this one of the drawbacks of the naive implementation of first rendering all boxes (and not writing to depth buffer) and then using the query results to check every object? But your suggestion to use the query result of a single box immediately is an even more naive approach as this stalls the pipeline. If you read this chapter (assuming you refer to chapter 29) further, they present a simple technique to overcome the disadvantages of both naive approaches (that is, just render everything normally and use the query results of the previous frame).
I think (it would have been good to link the GPU Gems article...) you are confused about the somewhat asynchronous nature of queries, as described in extensions like this:
http://developer.download.nvidia.com/opengl/specs/GL_NV_conditional_render.txt
If I recall correctly there were other extensions to check for the availability of a result without blocking also.
As Christian Rau points out, doing just "query, wait for result, do stuff based on result" might stall and might not be any gain because of that, depending on how much work is in "do stuff". In fact, doing the query and waiting for it to round trip just to save a single draw call is most likely not going to help at all.
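Just to illustrate the non-blocking idea from the chapter (this is a sketch against the WebGL2 query API rather than ARB_occlusion_query, and Occludee, drawObject and drawBoundingBox are made-up placeholders): issue the query when drawing the bounding box, and only consume the result in a later frame once it is available, instead of waiting for it.

interface Occludee {
  query: WebGLQuery;
  queryInFlight: boolean;
  visible: boolean; // assume visible until a finished query says otherwise
}

declare function drawObject(gl: WebGL2RenderingContext, obj: Occludee): void;
declare function drawBoundingBox(gl: WebGL2RenderingContext, obj: Occludee): void;

function renderFrame(gl: WebGL2RenderingContext, objects: Occludee[]) {
  for (const obj of objects) {
    // 1. If a query issued in an earlier frame has finished, pick up its
    //    result without blocking; otherwise keep the previous visibility.
    if (obj.queryInFlight &&
        gl.getQueryParameter(obj.query, gl.QUERY_RESULT_AVAILABLE)) {
      obj.visible = gl.getQueryParameter(obj.query, gl.QUERY_RESULT) !== 0;
      obj.queryInFlight = false;
    }

    // 2. Render the object normally if it was visible the last time we knew.
    if (obj.visible) {
      drawObject(gl, obj);
    }

    // 3. Issue a new query on the bounding box for a later frame
    //    (color and depth writes would typically be disabled for this draw).
    if (!obj.queryInFlight) {
      gl.beginQuery(gl.ANY_SAMPLES_PASSED, obj.query);
      drawBoundingBox(gl, obj);
      gl.endQuery(gl.ANY_SAMPLES_PASSED);
      obj.queryInFlight = true;
    }
  }
}

The visibility information is then always a frame or so old, which is the price of avoiding the stall.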