What is a frameset? - computer-vision

I just started working with RealSense depth and tracking cameras. In various code samples I've seen the term 'frameset' (probably referring to objects), but unfortunately I could not be sure what it meant. Is it the set of frames obtained at the same time from a single device that we are receiving frames from? What if we are running multiple devices at the same time and still want to use framesets? Should we define separate frameset objects, since two devices will most probably not acquire frames at exactly the same time? I might even be wrong in my guess at what a 'frameset' is. I would really appreciate it if someone could help.

Related

Making GL textures uploading async and figuring out when it's done uploading

I have run into an issue where my application requires loading images dynamically at runtime. This is a problem because it's not possible to load them all up front since I can't know which ones will be used... otherwise I would have to upload everything. Some people do not have good PCs and have been complaining that uploading all the images to the GPU takes a long time for them due to weak hardware.
My workaround for the latter group was to upload textures only as needed, and this worked for the most part. The problem is that there are times during the application when a series of images needs to be uploaded, and the uploading causes a noticeable delay.
I was researching how to get around this, and I have an idea: users want a smooth experience and are okay if the textures are not immediately loaded but simply absent at first. This makes it easy, as I can upload in the background, draw nothing where the object should be, and bring it into existence once the upload is done. This is acceptable because the uploads are usually pretty fast anyway, but they're slow enough to dip under 60 fps for some people, which causes stutter. On average it causes anywhere from 1-3 frames of stutter, so the uploads do resolve quickly, on average in less than 50 ms.
My solution was to attempt something using a PBO to get some async-like uploading. The problem is that I cannot find anything online about how to tell when the upload is done. Is there a way to do this?
I figure there are four options:
There's a way to do what I want with OpenGL 3.1 onwards and that will be that
It is not possible to do (1), but I could place a fence and then check whether the fence has been signaled; however, I've never had to do this before, so I'm not sure if it would work in this case
It's not possible, but I could make the assumption that everything would be uploaded in < 50ms and use some kind of timestamp to tell if it's drawable or not and just hope that it is the case (and if the time is < 50ms since issuing an upload, then draw nothing)
It's not possible to do this for texture uploading and I'm stuck
This leads me to my question: Can I tell when an asynchronous upload of pixels to a texture is done?
Fence sync objects tell you when all previously issued commands have completed their execution. This includes asynchronous pixel transfer operations, so you can issue a fence after your transfers and use the sync object tools to check when it is done.
The annoying issue you'll have here is that it's very coarse-grained. Testing the fence also tests whether all non-transfer commands issued before it have completed, even though the two kinds of operation are probably handled by independent hardware. So if the transfer completes before rendering commands issued earlier have finished, the fence still won't be signaled. However, if you fire off a lot of texture uploading all at once, the transfer operations will dominate the results.
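As a rough illustration, the pattern looks like the sketch below. The GL entry points are replaced with trivial stand-ins so the control flow compiles standalone; in a real program they come from your loader and an active 3.2+ context, and `glClientWaitSync` may keep returning `GL_TIMEOUT_EXPIRED` for a few frames before the upload lands.

```cpp
// Stand-ins for the GL sync API, just so this sketch is self-contained.
// In a real program these come from your OpenGL loader.
using GLsync = const void*;
enum : unsigned {
    GL_SYNC_GPU_COMMANDS_COMPLETE = 0x9117,
    GL_ALREADY_SIGNALED           = 0x911A,
    GL_TIMEOUT_EXPIRED            = 0x911B
};
static GLsync glFenceSync(unsigned, unsigned) { static int s; return &s; }
static unsigned glClientWaitSync(GLsync, unsigned, unsigned long long) {
    return GL_ALREADY_SIGNALED;  // stub: pretend the GPU finished instantly
}
static void glDeleteSync(GLsync) {}

struct PendingUpload {
    unsigned texture = 0;        // the texture being filled from the PBO
    GLsync   fence   = nullptr;
};

// Call right after issuing the glTexSubImage2D sourced from the bound PBO.
PendingUpload beginUpload(unsigned texture) {
    return { texture, glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0) };
}

// Poll once per frame with a zero timeout so the render loop never blocks.
// Returns true once the transfer (and everything issued before it) is done;
// note glClientWaitSync can also return GL_CONDITION_SATISFIED, which the
// check below treats as done too.
bool uploadFinished(PendingUpload& p) {
    unsigned r = glClientWaitSync(p.fence, 0, 0 /* timeout: don't wait */);
    if (r == GL_TIMEOUT_EXPIRED) return false;  // still in flight: draw nothing
    glDeleteSync(p.fence);
    p.fence = nullptr;
    return true;  // texture is now safe to draw
}
```

Until `uploadFinished` returns true you simply skip drawing the object, which matches the "absent until ready" behavior described above.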

How do you know what you've displayed is completely drawn on screen?

Displaying images on a computer monitor involves a graphics API, which dispatches a series of asynchronous calls... and, at some given time, puts the wanted stuff on the computer screen.
But, what if you are interested in knowing the exact CPU time at the point where the required image is fully drawn (and visible to the user)?
I really need to grab a CPU timestamp when everything is displayed to relate this point in time to other measurements I take.
Without even taking into account the asynchronous behavior of the graphics stack, many things can cause the duration of the graphics calls to jitter:
multi-threading;
Sync to V-BLANK (unfortunately required to avoid some tearing);
what else have I forgotten? :P
I'm targeting a solution on Linux, but I'm open to any other OS. I've already studied parts of the XVideo extension for the X.org server and the OpenGL API, but I haven't found an effective solution yet.
I only hope the solution doesn't involve hacking into video drivers / hardware!
Note: I won't be able to use the recent NVIDIA G-SYNC technology on the required hardware. Although it would get rid of some of the unpredictable jitter, I don't think it would completely solve this issue.
OpenGL Wiki suggests the following: "If GPU<->CPU synchronization is desired, you should use a high-precision/multimedia timer rather than glFinish after a buffer swap."
Does somebody know how to properly grab such a high-precision/multimedia timer value just after the swapBuffers call has completed in the GPU queue?
Recent OpenGL provides sync/fence objects. You can place sync objects in the OpenGL command stream and later wait for them to get passed. See http://www.opengl.org/wiki/Sync_Object
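One way to combine the fence suggestion with the wiki's timer advice is sketched below. The blocking GL wait is stubbed so the sketch compiles standalone; in a real program it would be a glFenceSync placed right after swapBuffers followed by a blocking glClientWaitSync. Note the caveat that the compositor and the monitor's scanout still add latency the fence cannot see.

```cpp
#include <chrono>

// Stub for the blocking sync wait; in a real program this is a
// glFenceSync + glClientWaitSync(..., GL_SYNC_FLUSH_COMMANDS_BIT, timeout)
// pair issued immediately after swapBuffers.
static void waitForSwapToRetire() { /* blocks until the fence signals */ }

// Returns a CPU timestamp (nanoseconds since an arbitrary epoch) taken as
// soon as the GPU reports the swap has retired. steady_clock is monotonic
// and typically high resolution, so it fits the wiki's recommendation.
long long timestampAfterSwap() {
    waitForSwapToRetire();
    auto t = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
               t.time_since_epoch()).count();
}
```

On Windows the equivalent clock would be QueryPerformanceCounter; `std::chrono::steady_clock` usually wraps the platform's high-resolution timer either way.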

DirectCompute writing to buffer speed

I'm working on a particle sim and have run into a bit of a bottleneck: using a UAV to write to an RWStructuredBuffer of single floats is around 10 times too slow. From experimentation it seems there is no shortage of bandwidth; the access time itself bogs it down. Append writing is out of the question since the outgoing data needs to be in a specific order. This is on DX10/SM4 hardware, so here are a few questions: Is there any way at all to speed things up (other than writing larger chunks of data, since the output from the shaders is non-consecutive)? If not, is DX11-grade hardware any quicker with UAVs?
The first thing (if you haven't done so already) is to profile your shader code by adding GPU queries to your system. Here is a link that explains it:
http://mynameismjp.wordpress.com/2011/10/13/profiling-in-dx11-with-queries/
It's written for DX11, but the features exist in DX10 too, so it should be really simple to port over.
After that, there are different aspects of compute to tune, but the first would be to play with:
[numthreads(TGX, 1, 1)]
Try values like 8, 16, 32, and 64 to find the sweet spot (and don't forget to divide accordingly in your Dispatch call).
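The "divide on your dispatch" part just means rounding the element count up to a whole number of thread groups. A minimal helper on the C++ side (`tgx` here being whatever value you compiled into `[numthreads(TGX, 1, 1)]`):

```cpp
// Number of thread groups to pass to Dispatch() on the X axis, given the
// total element count and the TGX value baked into [numthreads(TGX, 1, 1)].
// Ceiling division: the last group may be only partially filled, so the
// shader should bounds-check its thread id against the element count.
unsigned groupCountX(unsigned elementCount, unsigned tgx) {
    return (elementCount + tgx - 1) / tgx;
}
```

For example, 1000 particles with TGX = 64 needs 16 groups; the final group's last 24 threads should early-out in the shader.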

Matching images based on timestamps acquired from two different threads

For a project I am working on I am using two different threads to acquire images from different hardware on a Windows system using OpenCV and C++. For the purposes of this post we will call them camera1 and camera2. Due to varying frame rates I cannot capture from both camera1 and camera2 on the same thread, nor try to capture one image at a time. Instead, I have buffers set up for both cameras. Camera1 runs at over double the frame rate of camera2.
A third thread gets a frame from camera2 and tries to match that to an image taken by camera1 at the nearest point in time. The problem I am having is that I can't find a good way to match an image from one source to an image that was acquired at roughly the same time in the other source.
Is there a method of assigning timestamps to these images that is accurate when used in separate threads that are not high priority? If not, is there another design that would be better suited to a system like this?
I have had a hard time finding any information on how far off Windows timekeeping methods like clock() and QueryPerformanceCounter are when used in separate threads on different CPUs. I have read that they are not accurate across threads, but I am acquiring frames at roughly 20 frames per second, so even if they are off, they may be close enough to still work well.
Thank you in advance! Let me know if I should clear anything up or explain better.
Edit:
Unfortunately I don't think there is any way for me to get timestamps from the drivers or the hardware. I think I have figured out a way to determine fairly accurately when an image is actually captured relative to each thread, though. The part I don't really know how to deal with is this: if I use clock() or something similar in two threads executing on separate cores, how far off could the timestamps be? If I could count on timestamps from two different cores being within about 25 ms of each other, that would be ideal. I just haven't been able to figure out a good way to measure the difference between the two. I have done some more research, and it appears that timeGetTime in the Windows API is not affected much by being called from separate CPU cores. I am going to try that out.
To get a satisfying answer, you need to answer the following questions first:
How closely do the images need to match?
How much jitter / error in timestamping can you live with?
How much latency do you expect between the moment the image is captured and the moment your thread receives it and can timestamp it?
Is the latency expected to be about the same for both cameras?
If you have a mostly static scene, then you can live with higher timestamping jitter and latency than if you have a very dynamic scene.
The precision of clock() is not going to be the main factor in your timestamping error -- after all, you should care about relative error (how close the timestamps are between the two threads), not absolute error (how close a timestamp is to the true atomic time).
On the other hand, latencies, and especially jitter in those latencies, between the hardware and your thread will drive most of the error in your timestamps. If you can find a way to get a timestamp in hardware, or at least in the driver, it will go a long way toward improving their accuracy.
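Once every buffered frame carries a timestamp, the pairing itself is simple. A sketch of the matching logic (the type and function names are made up for illustration): scan camera1's buffer for the entry closest in time to a camera2 frame, and reject the match if nothing falls within a tolerance such as the ~25 ms mentioned in the question.

```cpp
#include <cstdlib>
#include <deque>

// Hypothetical buffered frame: just a timestamp in milliseconds here;
// a real one would also carry the cv::Mat image data.
struct StampedFrame {
    long long stampMs;
};

// Given camera1's buffer (oldest first) and a camera2 timestamp, return the
// index of the camera1 frame captured closest in time, or -1 if the buffer
// is empty or no frame lies within maxSkewMs of the target.
int nearestFrameIndex(const std::deque<StampedFrame>& cam1Buffer,
                      long long cam2StampMs, long long maxSkewMs) {
    int best = -1;
    long long bestDiff = maxSkewMs + 1;  // anything larger is a non-match
    for (size_t i = 0; i < cam1Buffer.size(); ++i) {
        long long diff = std::llabs(cam1Buffer[i].stampMs - cam2StampMs);
        if (diff < bestDiff) { bestDiff = diff; best = (int)i; }
    }
    return best;
}
```

Since camera1 runs at over double camera2's frame rate, its buffer should normally bracket each camera2 frame, so the nearest entry is usually well inside the tolerance.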

Programs causing static noise in speakers?

Does anyone know a reason why my programs could be causing my speakers to output some soft static? The programs themselves don't have a single element that outputs sound to anything, yet when I run a few of my programs I can hear a static coming from my speakers. It even gets louder when I run certain programs. Moving the speakers around doesn't help, so it must be coming from inside the computer.
I'm not sure what other details to put down since this seems very odd. They are OpenGL programs written in C++ with MS Visual C++.
Edit: It seems to be that swapping the framebuffers inside an infinite loop is making the noise, as when I stop swapping I get silence...
:)
You will be surprised to know that the speaker input is picking up static from the hard disk. When you do something memory/disk intensive (like swapping framebuffers) so that the hard disk has to rotate fast, the sound will appear.
I had the same problem some years back, I solved it too. But I am sorry that I don't remember how I did it.
Hope the diagnosis helps in remedying the problem.
UPDATE: I remembered. If you are using Windows, go to volume control and mute all the external inputs/outputs like CD input etc. Just keep the two basic ones.
Computers consume different amounts of power when executing code. This fluctuation in current acts like an RF transmitter and can be picked up by audio equipment, where it is essentially "decoded" much like an AM-modulated signal. As the execution usually does not produce a recognizable signal, it sounds like white noise. A good example of audio equipment picking up an RF signal is holding a (GSM) cell phone close to an audio amplifier when receiving a call: you will most likely hear a characteristic pumping buzz from the phone's transmitter.
Go here to learn more about electromagnetic compatibility. There are multiple ways a signal can couple into your audio. If, as you mentioned, a power cord is the source, it was most likely magnetic inductive coupling.
Since you say you don't touch sound in your programs, I doubt it's your code doing this. Does it occur if you run any other graphics-intensive programs? Also, what happens if you mute various channels in the mixer (sndvol32.exe on 32-bit Windows)?
Not knowing anything else I'd venture a guess that it could be related to the fan on your graphics card. If your programs cause the fan to turn on and it's either close to your sound card or the fan's power line crosses an audio cable, it could cause some static. Try moving any audio cables as far as possible from the fan and power cables and see what happens.
It could also be picking up static from a number of other sources, and I wouldn't say it's necessarily unusual. If non-graphics-intensive programs cause this as well, it could be hard-disk access, or even certain frequencies of CPU/power usage being picked up on an audio line like an antenna. You can also try to reduce the number of loops in your audio wires and see if it helps, but no guarantees.
This is typical of the crappy audio hardware on motherboards, especially the ones that end up in office PCs. The interior of a PC case is full of electrical noise, and if that couples into the audio hardware, you'll hear it.
Solution: Get a pair of headphones with a volume control on the cord. Turn the volume on the headphones down, and turn the volume on the PC up full. This will increase the signal level relative to the noise level in most cases.
Most electronic devices give off some kind of electromagnetic interference. Your speakers or sound hardware may be picking up something as simple as the signaling on your video cable or the graphics card itself. Cheap speakers and poorly-protected audio devices tend to be fairly sensitive to this kind of radiation, in my experience.
There is interference on your motherboard that is leaking onto your sound bus.
This is usually down to the quality or age of your motherboard. Also, the layout of the equipment inside your computer (components close together or overlapping) often creates interesting EM fields. My old laptop used to do this more readily as it got older.
So as things wind up or down, you'll hear it.
Try to see if it happens on a different computer. Try computers of different ages and different configurations (an external sound card, a dedicated physical sound card, etc.).
Hope that helps.