Reading JPEG files based on mirror effect, brightness and zoom level - C++

I am working on a gateway simulator that will stream images/video to a data center.
I have JPEG files covering 30 minutes (a lot of individual JPEG images).
The data center can request video/images with varying values of the following parameters.
Image options
1. Mirror effect (None, Column, Row, Row/Column)
2. Brightness (Normal, Intense Light, Low Light, MAX)
3. Zoom level (1X, 2X, 4X, 8X)
Capture modes
Single Snapshot - requests one image from the camera.
Burst Number - gathers N (1-65535) images from the camera.
Burst Second - produces a stream of images that continues until a CancelImageRequest command is sent.
Continuous - produces a stream of images that continues until a CancelImageRequest command is sent.
Round-Robin - a mode that allows the user to get a single snapshot from each active and selected sensor.
Schedule Continuous - similar to Continuous except for the timing.
Now I need to read the JPEG files according to the options above and send them to the data center.
I would like to know how I can apply these image options while reading the data.
Is there any API that allows reading JPEG images with these image options applied?
Any suggestions are welcome.

GDI+ has an Image class that can load JPEGs and manipulate them:
http://msdn.microsoft.com/en-us/library/ms534462%28VS.85%29.aspx
If you don't find the manipulation you're looking for, you can use the Bitmap class, which inherits from Image, and the BitmapData class, which gives you direct access to the pixels:
http://msdn.microsoft.com/en-us/library/ms534420%28VS.85%29.aspx
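As a starting point, here is a minimal GDI+ sketch of how the three image options could be applied to each decoded JPEG before it is streamed. The function name, the brightness factor, the mirror flags and the centre-crop zoom are illustrative choices rather than part of any existing API; error handling and the GdiplusStartup/GdiplusShutdown calls are assumed to be done elsewhere.

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

Bitmap* LoadAndProcess(const wchar_t* path, int zoom /*1,2,4,8*/,
                       float brightness /*1.0 = Normal, e.g. 1.5 = Intense*/,
                       bool mirrorColumns, bool mirrorRows)
{
    Bitmap src(path);                                  // decode the JPEG from disk

    // 1. Mirror effect: flip columns and/or rows in place.
    if (mirrorColumns && mirrorRows) src.RotateFlip(RotateNoneFlipXY);
    else if (mirrorColumns)          src.RotateFlip(RotateNoneFlipX);
    else if (mirrorRows)             src.RotateFlip(RotateNoneFlipY);

    // 2. Brightness: scale R, G and B through a color matrix.
    ColorMatrix cm = {
        brightness, 0, 0, 0, 0,
        0, brightness, 0, 0, 0,
        0, 0, brightness, 0, 0,
        0, 0, 0,          1, 0,
        0, 0, 0,          0, 1 };
    ImageAttributes attr;
    attr.SetColorMatrix(&cm);

    // 3. Zoom: crop the centre 1/zoom portion and stretch it back to full size.
    int w = (int)src.GetWidth(), h = (int)src.GetHeight();
    int cw = w / zoom, ch = h / zoom;
    Bitmap* out = new Bitmap(w, h, src.GetPixelFormat());
    Graphics g(out);
    g.DrawImage(&src, Rect(0, 0, w, h),
                (w - cw) / 2, (h - ch) / 2, cw, ch,
                UnitPixel, &attr);
    return out;
}

The processed bitmap can then be re-encoded to JPEG with Image::Save (using the JPEG encoder CLSID found via GetImageEncoders) before it is sent to the data center.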

Related

Add multiple images to a video using AWS MediaConvert

I'm trying to add images to a video using MediaConvert. I used the MediaConvert graphic overlay / image inserter to perform this task. However, the image overrides the given video in the output for the given duration. I want the image to be shown still at first and then have the video start from the beginning, and similarly for the rest of the images. Can this be done with AWS MediaConvert?
Overlays are normally used for things like watermarks, logos, or simple sports scores/news tickers, i.e. images you want to appear over the top of the video.
You could create a clip of blank video to insert into your output, then apply the overlay to just that?
Another option is to convert the image to a video yourself with ffmpeg and insert that into your output?
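For the second option, an illustrative ffmpeg command (file names, frame rate and duration are placeholders) that turns a still image into a short clip you can then stitch into your MediaConvert output:

ffmpeg -loop 1 -framerate 25 -i image1.jpg -t 5 -c:v libx264 -pix_fmt yuv420p image1_clip.mp4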

Handling Image data from IMFSourceReader and IMFSample

I am attempting to use the IMFSourceReader to read and decode a .mp4 file. I have configured the source reader to decode to MFVideoFormat_NV12 by setting a partial media type and calling IMFSourceReader::SetCurrentMediaType and loaded a video with dimensions of 1266x544.
While processing I receive the MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED flag with a new dimension of 1280x544 and a MF_MT_MINIMUM_DISPLAY_APERTURE of 1266x544.
I believe the expectation is to then use either the Video Resizer DSP or the Video Processor MFT. However, it is my understanding that the Video Processor MFT requires Windows 8.1 while I am on Windows 7, and the Video Resizer DSP does not support MFVideoFormat_NV12.
What is the correct way to crop out the extra data added by the source reader to display only the data within the minimum display aperture for MFVideoFormat_NV12?
The new media type says this: "the video is 1266x544 as you expected/requested, but I have to carry it in 1280x544 textures because that is how the GPU wants to work".
Generally speaking this does not require further scaling or cropping; you already have the frames you need. If you are reading them out of sample objects - which is what I believe you are trying to do - just use the increased stride (1280 bytes between consecutive rows).
If you are using this as a texture, presenting it somewhere or using it as part of rendering, you would just use the adjusted coordinates (0, 0) - (1266, 544) and ignore the remainder, as opposed to using the full texture.
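If it helps, here is a small sketch of copying only the 1266x544 display aperture out of an NV12 buffer whose rows are 1280 bytes apart. The function name and the assumption that the UV plane starts right after height * stride bytes are mine, not something Media Foundation guarantees for every buffer, so verify the actual stride/offset (e.g. via IMF2DBuffer) if you go this route.

#include <cstdint>
#include <cstring>
#include <vector>

// Copies the visible width x height region of an NV12 frame whose rows are
// srcStride bytes apart into a tightly packed buffer (Y plane, then interleaved UV).
std::vector<uint8_t> CropNV12(const uint8_t* src, int srcStride, int width, int height)
{
    std::vector<uint8_t> dst(width * height * 3 / 2);

    // Y plane: copy row by row, skipping the padding at the end of each source row.
    for (int y = 0; y < height; ++y)
        std::memcpy(&dst[y * width], src + y * srcStride, width);

    // UV plane (half height, interleaved): assumed to start at srcStride * height.
    const uint8_t* srcUV = src + srcStride * height;
    uint8_t* dstUV = dst.data() + width * height;
    for (int y = 0; y < height / 2; ++y)
        std::memcpy(&dstUV[y * width], srcUV + y * srcStride, width);

    return dst;
}
// For the dimensions in the question: CropNV12(pData, 1280, 1266, 544).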

Modifying Cisco OpenH264 to take image frames and output compressed frames

Has anyone tried to modify the Cisco OpenH264 library to take JPEG images as input and compress them into P and I frames (output as frames, NOT video), and similarly to modify the decoder to take compressed P and I frames and generate uncompressed frames?
I have a camera looking at a static scene and taking pictures (1280x720p) every 30 seconds. The scene is almost static. Currently I am using JPEG compression to compress each frame individually, which results in an image size of ~270KB. This compressed frame is transferred via the internet to a storage server. Since there is very little motion in the scene, the 'I' frame size will be very small (I think it should be ~20-50KB). So it will be very cost effective to transmit I frames over the internet instead of JPEG images.
Can anyone point me to a project, or explain how to proceed with this task?
You are describing exactly what a codec does. It takes images and compresses them. Their relationship in time is irrelevant to the compression step. The decoder then decides how to display them, or just writes them to disk. You don't need to modify OpenH264; what you want to do is exactly what it is designed to do.
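For reference, a rough sketch of what that looks like with the stock OpenH264 encoder API, assuming each JPEG has already been decoded to I420 planes. The helper names and parameter values are illustrative, and error handling is trimmed.

#include <wels/codec_api.h>
#include <cstring>

// Create and configure the encoder once (values are illustrative).
ISVCEncoder* CreateEncoder(int width, int height)
{
    ISVCEncoder* encoder = nullptr;
    if (WelsCreateSVCEncoder(&encoder) != 0) return nullptr;

    SEncParamBase param;
    std::memset(&param, 0, sizeof(param));
    param.iUsageType     = CAMERA_VIDEO_REAL_TIME;
    param.iPicWidth      = width;
    param.iPicHeight     = height;
    param.fMaxFrameRate  = 1.0f;       // nominal; mainly affects rate control
    param.iTargetBitrate = 200000;     // tune for your scene
    encoder->Initialize(&param);
    return encoder;
}

// Feed one decoded frame (I420 planes) and collect the compressed I/P frame.
void EncodeOneFrame(ISVCEncoder* encoder,
                    unsigned char* y, unsigned char* u, unsigned char* v,
                    int width, int height)
{
    SSourcePicture pic;
    std::memset(&pic, 0, sizeof(pic));
    pic.iColorFormat = videoFormatI420;
    pic.iPicWidth    = width;
    pic.iPicHeight   = height;
    pic.iStride[0]   = width;
    pic.iStride[1]   = width / 2;
    pic.iStride[2]   = width / 2;
    pic.pData[0]     = y;
    pic.pData[1]     = u;
    pic.pData[2]     = v;

    SFrameBSInfo info;
    std::memset(&info, 0, sizeof(info));
    if (encoder->EncodeFrame(&pic, &info) == cmResultSuccess &&
        info.eFrameType != videoFrameTypeSkip)
    {
        // info.sLayerInfo[i].pBsBuf holds the NAL units of the compressed
        // frame; store or transmit these bytes instead of the JPEG.
    }
}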

OpenCV: Open Mobotix Camera Feed

I have a Mobotix camera. It is an IP camera. The API offers the possibility to get the feed via
http://[user]:[password]@[ip_address]:[port]/cgi-bin/faststream.jpg?[options]
What I've tried is to open it like a normal webcam feed:
cv::VideoCapture capture("http://...");
cv::Mat frame;

if (capture.isOpened())   // always false anyway
{
    while (true)
    {
        capture.read(frame);
        cv::imshow("Hi there", frame);
        cv::waitKey(10);
    }
}
FYI: Developer Mobotix API Docs
EDIT: Thanks to berak, I just had to add &data=v.mjpg to the options:
?stream=full&fps=5.0&noaudio&data=v.mjpg
Note that in v.mjpg only the .mjpg extension matters; you could just as well use myfile.mjpg.
Now the problem is the speed at which the feed updates. I get a two-second delay, and the feed is very slow.
And when I change the stream option to MxJPG or mxg I get a corrupted image where the bytes aren't ordered properly.
EDIT: I tried to change the camera parameters directly with the Mobotix control center, but only the resolution affected my OpenCV program; it did not change the speed at which I access the images.
For max speed use fps=0; it's in the API docs.
Something like:
http://cameraip/cgi-bin/faststream.jpg?stream=full&fps=0
See http://developer.mobotix.com/paks/help_cgi-image.html
faststream is the MJPEG stream (for image capture). Make sure MxPEG is turned off and pick the smallest image that gives you enough resolution, i.e. get it working at 640x480 (set it in the camera's web GUI), then increase the image size.
Note this is for image capture, not video, so you need to detect the beginning and end of each JPEG and then copy it from the receive buffer into memory, as in the sketch below.
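A minimal sketch of that boundary detection (buffer handling and names are illustrative): each JPEG begins with the SOI marker 0xFF 0xD8 and ends with the EOI marker 0xFF 0xD9.

#include <cstdint>
#include <vector>

// Returns the first complete JPEG found in the receive buffer,
// or an empty vector if no full frame has arrived yet.
std::vector<uint8_t> ExtractJpeg(const std::vector<uint8_t>& buf)
{
    size_t start = SIZE_MAX;
    for (size_t i = 0; i + 1 < buf.size(); ++i)
    {
        if (buf[i] == 0xFF && buf[i + 1] == 0xD8 && start == SIZE_MAX)
            start = i;                                            // SOI marker
        else if (buf[i] == 0xFF && buf[i + 1] == 0xD9 && start != SIZE_MAX)
            return { buf.begin() + start, buf.begin() + i + 2 };  // include EOI
    }
    return {};
}
// The returned bytes can be decoded with cv::imdecode() to obtain a cv::Mat.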
VLC can handle MxPEG, but you need to either start it from the command line with vlc --ffmpeg-format=mxg or set the option ffmpeg-format=mxg in the GUI.
See https://wiki.videolan.org/MxPEG
I know this post is quite old, but I thought I would answer for anyone else who comes across this issue. To get a stream without frame rate limitations you need to use a different CGI command:
http://<camera_IP>/control/faststream.jpg?stream=full&fps=0
As per the camera's on-line help:
http://<camera_IP>/cgi-bin/faststream.jpg (guest access)
http://<camera_IP>/control/faststream.jpg (user access)
The default limitation of the "guest" access is indeed 2 fps, but it can be modified from the Admin Menu > Language and Start Page page.
A detailed description of how to retrieve a live stream from a MOBOTIX camera is available at the following link: https://community.mobotix.com/t/how-to-access-a-live-stream-with-a-video-client-e-g-vlc/202
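Tying this back to the OpenCV code in the question, the kind of URL that should work with cv::VideoCapture looks like the following (user, password and IP are placeholders; fps=0 and data=v.mjpg come from the answers above):

// Illustrative only: substitute your own credentials and camera IP.
cv::VideoCapture capture(
    "http://user:password@192.168.1.10/control/faststream.jpg"
    "?stream=full&fps=0&noaudio&data=v.mjpg");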

Using Async_reader and Wave Parser in DirectShow filter graph results in video seeking issues

Some background:
I am attempting to create a DirectShow source filter based on the pushsource example from the DirectShow SDK. This essentially outputs a set of bitmaps, each of which can last for a long time (for example 30 seconds), to a video. I have set up a filter graph which uses Async_reader with a Wave Parser for audio and my new filter to push the video (the filter is a CSourceStream and I populate my frames in the FillBuffer function). These are both connected to a WMASFWriter to output a WMV.
The problem:
When I attempt to seek through the resulting video, I have to wait until a bitmap's start time occurs before it is displayed. For example, if I'm currently seeing bitmap 4 and skip back to the time at which bitmap 2 is displayed, the video output will not change until the third bitmap starts. Initially I wondered whether I wasn't allowing FillBuffer to be called often enough (at the moment it's only called once per bitmap); however, I have since noticed that when the audio track is very short (just a second long, perhaps), I can seek through the video as expected. Is there another way I should be introducing audio into the filter graph? Do I need to perform some kind of indexing once the WMV has been rendered? I'm at a bit of a loss...
You may need to do indexing as a post-processing step. Try indexing the file with Windows Media File Editor from the Windows Media Encoder SDK and see if this improves seeking.
Reducing the key frame interval in the encoder profile may also improve seeking. This can be done in Windows Media Profile Editor from the same SDK. Note that this will increase the file size.
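If you end up adjusting the profile programmatically rather than in Profile Editor, a hedged sketch of lowering the key frame spacing on the IWMProfile you hand to the WM ASF Writer could look like this (the helper name is mine, error handling is trimmed, and IConfigAsfWriter comes from dshowasf.h):

#include <wmsdk.h>

// Lower the key-frame spacing (in 100-ns units) on every video stream of a profile.
HRESULT SetKeyFrameSpacing(IWMProfile* pProfile, LONGLONG hundredNs)
{
    DWORD streamCount = 0;
    HRESULT hr = pProfile->GetStreamCount(&streamCount);
    if (FAILED(hr)) return hr;

    for (DWORD i = 0; i < streamCount; ++i)
    {
        IWMStreamConfig* pConfig = nullptr;
        if (FAILED(pProfile->GetStream(i, &pConfig))) continue;

        IWMVideoMediaProps* pVideoProps = nullptr;
        if (SUCCEEDED(pConfig->QueryInterface(IID_IWMVideoMediaProps,
                                              (void**)&pVideoProps)))
        {
            pVideoProps->SetMaxKeyFrameSpacing(hundredNs);  // e.g. 10000000 = 1 second
            pVideoProps->Release();
            pProfile->ReconfigStream(pConfig);              // commit the change
        }
        pConfig->Release();
    }
    return S_OK;
}

// Usage: adjust the profile, then pass it to the writer filter via
// IConfigAsfWriter::ConfigureFilterUsingProfile(pProfile).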