Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
A video can be edited by deleting frames from it, either consecutive frames or frames chosen at random.
We need to detect tampered videos, and we hope to do this by detecting which frames have been altered; put simply, we need an algorithm to identify deleted frames. We are building a tamper-detection tool for video surveillance footage that could be used by law enforcement and in court.
Is there any method to identify that frames of a video have been deleted by a malicious attacker? Methods such as watermarking and digital signatures already exist, but we need a concrete algorithm to detect the deleted frames.
In general, whatever image sequence I am handed, some or all of it could have come from a real camera, from Photoshop, or from a 3D renderer. There is no general-purpose image-processing technique that can tell the difference based on analysis of the image content alone, either well enough to stand up in court or, indeed, at all.
You'll need to embed a signal into the video at recording time that is easy to recognise yet difficult to fake, relocate, or tamper with. You've tagged the question "opencv", but the fields you want to investigate are cryptography and watermarking, not computer vision.
Have the video surveillance equipment use public key crypto to visibly watermark each frame with an identifier unique to the piece of equipment, a timestamp, a frame sequence number and a hash or other suitable function of the frame image, using a scheme robust to compression.
Tampering with the video sequence will then require either knowledge of the device's private key, or removal of the watermark. This isn't great (keeping the private key secret will be a logistical headache) but is probably the best solution you can get.
This can't be done in general. However, some approaches may be possible:
- the video format used may support frame-wise metadata that stores an index or time index and that is not touched during editing
- the image sensor itself may be configured to write some metadata into a specific region of the image
- you may have some external reference that was imaged by the camera and can help identify missing frames, such as:
  - a precise clock
  - a fast-blinking indicator
  - some uniform motion
Closed 3 years ago.
In my current app I need to share screens, à la Skype or Discord. I'd prefer not to use external libs, but will if I have to.
So far I have been sending the screenshots as down-scaled bitmaps over TCP sockets and repainting the window every few milliseconds. This is, of course, an effort I knew was doomed from the start. Is there any API that could save me?
Any help appreciated.
While I haven't implemented it myself, I believe that what's usually done is the screen is broken into 16x16 pixel blocks. You can keep the previous screenshot, take a new one, compare which blocks have changed and send only the 16x16 blocks that have changes in them.
You can further improve performance by having a change threshold. If fewer than x pixels have changed in a block, don't send yet. Or if the cumulative sum of the changes in a block (the difference between corresponding pixels) is below some threshold, don't send that block.
The blocks are also often compressed with a lossy compression scheme that really shrinks down the size you need to send per block. The blocks are often also sent with 4:2:2 chroma subsampling, meaning the colour-difference channels are stored at half the resolution of the luma channel. This matches how the human visual system works, but it explains why things that are pure red or pure blue sometimes get blockiness or fringing around them when screen sharing.
Closed 7 years ago.
I want to start writing an application that can capture screen content, or capture specific full screen app content, but I am not sure where to start.
Ideally this would be written using OpenGL but I don't know the capabilities for OpenGL to capture application screen content. If I could use OpenGL to capture, let's say World of Warcraft, that would be perfect.
the capabilities for OpenGL to capture application screen content
are nonexistent. OpenGL is an API for getting things onto the screen. There's exactly one function to retrieve pixels back from OpenGL (glReadPixels), and it is only specified to work for things that have been drawn by the very OpenGL context from which glReadPixels is called. Even that is highly unreliable for anything but off-screen FBOs, since the operating system is at liberty to clobber, clear, or otherwise alter the main window's framebuffer contents at any time.
Note that you can find several tutorials on how to take screenshots with OpenGL scattered around the internet. None of them works on modern computer systems, because the undefined behaviour they rely on (that all windows on a screen share one large contiguous region of the GPU's scanout framebuffer) no longer holds: in modern graphics systems, every window owns its own independent set of framebuffers, and the on-screen image is composited from those.
Capturing screen content is a highly operating system dependent task and there's no silver bullet on how to approach it. Some systems provide ready to use screen capture APIs; however depending on the performance requirements those screen capture APIs may not be the best choice. Some capture programs inject a DLL into each and every process to tap into the rendering process right at the source of the generated images. And some screen capture systems install a custom kernel driver to get access to the graphics cards video scanout buffer (which is usually mapped into system address space), bypassing the graphics card's driver to copy out the contents.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I am a student currently working on a computer science project that will soon require computer vision, and more specifically stereoscopy (for depth detection). I am now looking for a good camera to do the job, and I found several interesting options:
1- A custom built set of two cheap cameras (i.e. webcam);
2- The old, classic, economic and proven Kinect;
3- Specialized stereo sensors.
I found a couple months ago this sensor: https://duo3d.com/product/duo-mini-lv1
I thought it was interesting because it is small, stereoscopic, and brand new (encouraging a fresh USA company). However, setting aside the additional APIs that come with it, I just don't understand why you would buy this when a Kinect (or cheap cameras) is at least 4-5 times less expensive and has comparable, if not better, specifications.
Same applies for this one: http://www.ptgrey.com/bumblebee2-firewire-stereo-vision-camera-systems
Can someone please explain why I would need such a device, and if I don't, why others do?
The reason you want a "real" stereo camera as opposed to a pair of usb webcams is synchronization. Cameras like the bumblebee have hardware triggering, which takes the two images with virtually no delay in between. With the webcams you will always have a noticeable delay between the two shots. This may be ok if you are looking at a static scene or at a scene where things are moving slowly. But if you want to have a stereo camera on a mobile robot, you will need good synchronization.
Kinect is great. However, a good stereo pair of cameras has a couple of serious advantages. One is that Kinect will not work outdoors during the day. The bright sun will interfere with the infra-red camera that Kinect uses for depth estimation. Also Kinect has a fairly short range of a few meters. You can get 3D information at much larger distances with a stereo pair by increasing the baseline.
In computer vision, we always want an ideal stereo camera: no pixel skew, and perfectly matched, aligned, identical cameras. The cameras must also supply enough frames per second, because some algorithms use temporal information and need a high frame rate to approximate continuous motion. The lens is another important part, and can be very expensive. Additionally, hardware suppliers generally provide an SDK. Providing an SDK adds value, because we want to spend our attention on what matters to us, namely the algorithms; writing the software needed just to connect the cameras to a PC or some other board can waste our time.
Kinect lets researchers get depth information easily at a really good price; however, I agree with Dima that it is only for indoor applications, and Kinect depth data has some holes that generally need to be filled.
In addition to what Dima already pointed out: the cameras that you have listed both give you only the raw image data. If you are interested in the depth data, you will have to compute it yourself, which can be very resource-demanding and hard to do in real time.
I currently know of one hardware system which does that for you in real-time:
http://nerian.com/products/sp1-stereo-vision/
But I don't think this would be cheap either, and you would have to buy it on top of the industrial cameras. So if you are looking for a cheap solution, you should go with the Kinect.
Closed 4 years ago.
I am writing a program that displays a random natural note and waits for the user to play that note on the guitar. The audio input is processed to see if the correct pitch was played, and if it was, the next note is shown and the score of the user is updated. The idea is to teach basic guitar notes.
I intend to use SFML for audio processing and QT4 for the gui. I will have a widget derived from the relevant QObject and SFML classes.
Question: How do I detect the pitch of microphone input using SFML? Is it possible to simply store a part of the input in an sf::Sound object and call its getPitch() method?
Is it possible to simply store a part of the input in an sf::Sound object and call its getPitch() method?
GetPitch() from sf::SoundSource will return the value you set with SetPitch(pitch), or the default 1.0f. It exists to modify the sound, not to get information about it. I think the only way to do this is to get the array of sound samples and process it with some kind of pitch-detection algorithm. You can get that array like this:
sf::SoundBufferRecorder recorder;
recorder.Start();            // begin capturing from the default input device
// ... record while the user plays ...
recorder.Stop();
const sf::SoundBuffer& buffer = recorder.GetBuffer();
size_t sample_count = buffer.GetSamplesCount();            // total number of samples
const sf::Int16* samples = buffer.GetSamples();            // raw 16-bit PCM data
unsigned int samples_per_second = buffer.GetSampleRate();  // e.g. 44100 Hz
As it turns out, SFML does not have any algorithms for detecting pitch built in. Thanks to LBg for getting my mind working in the right direction. SFML only provides the tools needed to record the sounds and store them in a buffer.
I found out that I can use a Fast Fourier transform to evaluate the buffer for a frequency. This frequency can then be compared to a list of known pitch frequencies, together with a pitch threshold.
While SFML doesn't have an FFT algorithm built in, it does have the tools needed to get a sound buffer. I will have to check whether this is the most cross-platform way of doing things.
Closed 4 years ago.
I'm interested in learning to use OpenGL and I had the idea of writing a music visualizer. Can anyone give me some pointers of what elements I'll need and how I should go about learning to do this?
If you use C++/CLI, here's an example that uses WPF for display (four... as in Fourier ;).
He references this site (archived), which has considerable information about what you're asking. Here's an outline from the relevant page:
How do we split sound into frequencies? Our ears do it by mechanical means, mathematicians do it using Fourier transforms, and computers do it using FFT.
- The Physics of Sound
- Harmonic Oscillator
- Sampling Sounds
- Fourier Analysis
- Complex Numbers
- Digital Fourier Transform
- FFT
Ahhh, I found this (archived) a few minutes later; it's a native C++ analyzer, code included. That should get you off and running.
My approach for creating BeatHarness (http://www.beatharness.com):
- record audio in real time
- have a thread that runs an FFT on the audio to get the frequency intensities
- calculate audio volume for the left and right channels
- filter the frequencies into bands (bass, mid-tones, treble)
Now you have some nice variables to use in your graphics display.
For example, show a picture where the size is multiplied by the bass - this will give you a picture that'll zoom in on the beat.
From there on it's your own imagination ! :)
Are you trying to write your own audio/music player? Perhaps you should try writing a plugin for an existing player, so you can focus on graphics rather than the minutiae of codecs, DSP, and audio output devices.
I know WinAMP and Foobar have APIs for visualization plugins. I'm sure Windows Media Player and iTunes also have them. Just pick a media player and start reading. Some of them may even have existing OpenGL plugins from which you can start so you can focus on pure OpenGL.
If you're just after some basic 3D or accelerated 2D then I'd recommend purchasing a copy of Dave Astle's "Beginning OpenGL Game Programming" which covers the basics of OpenGL in C++.
For the music analysis part, you should study the basics of Fourier series, then pick a free implementation of an FFT (fast Fourier transform) algorithm.
You can find implementations of FFT algorithms and other useful information in the Numerical Recipes in C book. The book is free, AFAIK. There is also a Numerical Recipes in C++ book.
You may want to consider using libvisual's FFT/DCT functions over FFTW; they're much simpler to work with and provide data that's similarly easy to work with for the sake of generating visuals. Several media players and visualization plugins use libvisual to some extent for their visuals. Examples: Totem (player), GOOM (plugin for Totem and other players), PsyMP3 2.x (player)
From my point of view, check this site:
http://nehe.gamedev.net/
It has really good information and tutorials on using OpenGL.
edit:
http://www.opengl.org/code/