I'm writing an application that handles metadata for images and all kinds of animations, so I'm looking for a way to find basic info about an animation file, e.g.:
length (in minutes/seconds/frames)
aspect ratio of pixels
resolution of individual frames
framerate
Right now, I let my program execute
mplayer -identify animfile.avi
and parse its console output, which contains all the info I need in a machine-readable format. This works fine, but I know that some potential users of the program prefer vlc as a media player, so I'd rather avoid a hard dependency on mplayer being installed.
I've tried
vlc -vv animfile.avi
which prints an ungodly amount of junk to the console, sometimes containing the stuff I'm looking for. The formatting and what data gets printed seem to vary depending on the file format of the animation, though.
Is there an easier way to extract basic info from an animation of any format one has a decoder for (especially the length of the animation), using vlc or some other app/library that is usually available on a typical Linux installation?
Edit: I'd rather use another program to do the dirty work, as this is supposed to work for any animation format, e.g. avi, mpg, mov, wmv, vob, etc.
Edit: totem-video-indexer seems more promising, and was also included with the standard installation. Enough codecs to make it useful, however, were not. That could be fixed by installing the "non-free-codecs" package from medibuntu.
The output of totem-video-indexer is very easy to parse:
TOTEM_INFO_DURATION=5217
TOTEM_INFO_HAS_VIDEO=True
TOTEM_INFO_VIDEO_WIDTH=720
TOTEM_INFO_VIDEO_HEIGHT=480
TOTEM_INFO_VIDEO_CODEC=XVID MPEG-4
TOTEM_INFO_FPS=30
TOTEM_INFO_HAS_AUDIO=True
TOTEM_INFO_AUDIO_BITRATE=50
TOTEM_INFO_AUDIO_CODEC=MPEG 1 Audio, Layer 3 (MP3)
TOTEM_INFO_AUDIO_SAMPLE_RATE=48000
TOTEM_INFO_AUDIO_CHANNELS=Stereo
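For what it's worth, here is a minimal C++ sketch of parsing that KEY=VALUE output. The indexFile helper and its use of popen() are my own illustration, not anything shipped with totem-video-indexer:

    #include <cstdio>
    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical helper: run totem-video-indexer on a file and collect
    // its KEY=VALUE output lines into a map. Error handling kept minimal.
    std::map<std::string, std::string> indexFile(const std::string& path) {
        std::map<std::string, std::string> info;
        std::string cmd = "totem-video-indexer \"" + path + "\"";
        FILE* pipe = popen(cmd.c_str(), "r");
        if (!pipe) return info;

        char line[512];
        while (std::fgets(line, sizeof line, pipe)) {
            std::string s(line);
            if (!s.empty() && s.back() == '\n') s.pop_back();  // drop newline
            std::size_t eq = s.find('=');                      // split KEY=VALUE
            if (eq != std::string::npos)
                info[s.substr(0, eq)] = s.substr(eq + 1);
        }
        pclose(pipe);
        return info;
    }

    int main() {
        auto info = indexFile("animfile.avi");
        std::cout << "duration: " << info["TOTEM_INFO_DURATION"] << "\n"
                  << "size: " << info["TOTEM_INFO_VIDEO_WIDTH"] << "x"
                  << info["TOTEM_INFO_VIDEO_HEIGHT"] << "\n";
    }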
mediainfo is a pretty useful program. It's LGPL, and is just a frontend for libmediainfo, which should be exactly what you want.
http://mediainfo.sf.net/
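If memory serves, the mediainfo command-line tool can also emit just the fields you need in one machine-readable line via its --Inform template option, something along these lines:

mediainfo --Inform="Video;%Width%x%Height% %FrameRate%fps %Duration%ms" animfile.avi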
This is a little more difficult question than you may realize. The AVI file format grew over time, and often has nearly the same information in two or three different places. In some cases those are really supposed to agree (but sometimes don't) and in other cases they're subtly different.
Just for example, you asked about the width and height. There are actually four different width/height specs for a single frame: the screen width/height, the pixel width/height (from which you derive the pixel aspect ratio), the active width/height, and the compressed width/height. The screen width/height is the (theoretical) size of the screen. The active width/height excludes the overscan area, so it tells you if (for example) some pixels at the border should be ignored. The compressed width/height takes rounding into account -- for example, JPEG compresses in blocks of 8x8 pixels, so the compressed width and height have to be multiples of 8 for a motion JPEG file.
In any case, since your question is tagged C++, I'm going to guess you'd rather read the file and get the data directly than depend on spawning something else to do the dirty work. If so, you probably want to look at the OpenDML AVI file spec. You can get at least some idea of the length, resolution, and framerate just from reading the basic AVI header, which is in a fixed spot at the beginning of the file, so that much is trivial to get. It'll take a bit more work to get to the pixel aspect ratio though...
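To make that concrete, here is a minimal sketch of pulling the resolution, frame count, and framerate out of the basic AVI header. It assumes a little-endian host and a well-formed file where the 'avih' chunk sits right after the RIFF and LIST headers; a robust reader would walk the chunk tree as the OpenDML spec describes:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Body of the 'avih' chunk (see the OpenDML AVI file spec).
    struct MainAVIHeader {
        uint32_t microSecPerFrame;   // frame period in microseconds
        uint32_t maxBytesPerSec;
        uint32_t paddingGranularity;
        uint32_t flags;
        uint32_t totalFrames;
        uint32_t initialFrames;
        uint32_t streams;
        uint32_t suggestedBufferSize;
        uint32_t width;
        uint32_t height;
        uint32_t reserved[4];
    };

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        FILE* f = std::fopen(argv[1], "rb");
        if (!f) return 1;

        // Bytes 0-11 are "RIFF" <size> "AVI ", bytes 12-23 "LIST" <size> "hdrl",
        // bytes 24-31 "avih" <size>; the header body follows at offset 32.
        char magic[12];
        if (std::fread(magic, 1, 12, f) != 12 ||
            std::memcmp(magic, "RIFF", 4) != 0 ||
            std::memcmp(magic + 8, "AVI ", 4) != 0) {
            std::fclose(f);
            return 1;
        }

        MainAVIHeader h;
        std::fseek(f, 32, SEEK_SET);
        if (std::fread(&h, 1, sizeof h, f) == sizeof h) {
            double fps = h.microSecPerFrame ? 1e6 / h.microSecPerFrame : 0.0;
            std::printf("%ux%u, %u frames, %.2f fps\n",
                        h.width, h.height, h.totalFrames, fps);
        }
        std::fclose(f);
        return 0;
    }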
I was reading an article explaining How to Hide Files in JPEG Pictures.
I am wondering how it's possible for a file to contain both jpeg data and a rar file without any visible distortion either to the image or to the compressed file.
My guess is that it has something to do with how either the compressed file or the jpeg file is represented in binary form, but I have no idea how this works.
Can someone elaborate on that?
All that method does is append the archive to the end of the JPEG stream. You are then relying on the JPEG decoder stopping at the EOI marker, rather than reading past it, finding extra data there, and complaining that something is wrong.
A JPEG image is a stream of bytes starting with an SOI marker and ending with an EOI marker.
ZIP and RAR are streams of bytes. A ZIP stream starts with 50 4B. A RAR stream starts with 52 61 72 21 1A 07.
The method described in the link above takes a binary copy of a JPEG stream and appends a ZIP or RAR stream to it.
The RAR/ZIP decoders then scan the stream until they find the RAR or ZIP signature, ignoring the JPEG bytes that precede it, as sketched below.
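A minimal C++ sketch of the concatenation itself (file names are placeholders); it does the same thing as the copy /b style concatenation the article uses:

    #include <fstream>

    int main() {
        // Write the JPEG bytes, then the archive bytes, into one output file.
        // A JPEG decoder stops at EOI; a RAR/ZIP decoder scans forward until
        // it finds its own signature, so both tools stay happy.
        std::ifstream jpg("picture.jpg", std::ios::binary);
        std::ifstream rar("archive.rar", std::ios::binary);
        std::ofstream out("hidden.jpg", std::ios::binary);
        out << jpg.rdbuf() << rar.rdbuf();
    }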
This answer does not address the exact case in the link you gave, but it provides another way of hiding data:
It would also be theoretically possible to hide a file within the JPEG picture itself, but you would need a complicated program to write the encoded data and then read it again.
Basically, a JPEG photograph contains a lot of information which, if it changed, would not be noticeable to the human eye. Imagine you have a photo of a person in a blue shirt. If you zoom in on that shirt you will see that it is not an even blue colour, but made up of a multitude of flecks of colour, most of which are a bluish tone (but some could be other colours as well). You could easily change some of those flecks to a slightly different tone and it would make no obvious visible difference to the picture.
A clever program could embed a code in the photo by subtly changing pixels to a pattern that represents data. A very simple example: if the "hue" (i.e. colour tone) is represented by a number between 0 and 255, pixels of an even hue could represent a "0" bit and pixels of odd hue a "1" bit. It would be hard for the human eye to detect such a difference in the picture.
It is an old idea and this article discusses how much data could be hidden in this way: High capacity data hiding in JPEG-compressed images (2004)
In general, hiding a file within another file is a practice known as Steganography. The method described in the link you provided simply concatenates the .rar to the end of the .jpg using the + operator, taking advantage of the different headers of each file type. #user3344003 does an excellent job of explaining why this works in his/her answer. This doesn't distort the image because the image data is left unaltered.
Another common method of hiding a file within an image is to use the Least Significant Bit (LSB) of each byte. The way this is performed is to replace every 8th bit in the image's bitstream with the next bit of the file you wish to hide. This works because the image's colors can be distorted slightly without being easily perceived by the human eye. In this approach, the image's size on disk will not grow as it would in the method from your link. This makes evidence of the hidden file much harder to detect. For a detailed look at this and other Steganographic methods, see this paper by Bret Dunbar.
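A toy sketch of that LSB idea, operating on a raw buffer of decoded pixel bytes (the function names are mine, and real steganography tools for JPEG embed in the DCT coefficients instead, since LSBs of decoded pixels would not survive recompression):

    #include <cstdint>
    #include <vector>

    // Hide each bit of `secret` in the least significant bit of one carrier
    // byte. The carrier needs at least 8 bytes per secret byte.
    void embedLSB(std::vector<uint8_t>& carrier,
                  const std::vector<uint8_t>& secret) {
        for (std::size_t i = 0; i < secret.size(); ++i)
            for (int b = 0; b < 8; ++b) {
                uint8_t bit = (secret[i] >> b) & 1;
                uint8_t& c = carrier[i * 8 + b];
                c = (c & 0xFE) | bit;          // overwrite only the LSB
            }
    }

    // Recover n bytes by reading the LSBs back in the same order.
    std::vector<uint8_t> extractLSB(const std::vector<uint8_t>& carrier,
                                    std::size_t n) {
        std::vector<uint8_t> out(n, 0);
        for (std::size_t i = 0; i < n; ++i)
            for (int b = 0; b < 8; ++b)
                out[i] |= (carrier[i * 8 + b] & 1) << b;
        return out;
    }

The even/odd parity trick described two answers up is exactly what setting the LSB amounts to.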
There is a simple algorithm that I implemented in MATLAB. If you split your image into its 8 bit planes, the most significant bits carry the most valuable information, and you can remove bit planes 0 and 1 without any visible change to the original image. So you can put your file into bit planes 0 and 1 instead. I saw this algorithm in Anil K. Jain's book.
In order to accomplish some specific editing on some .avi files, I'd like to create an application (in C++) that is able to load, edit, and save those .avi files. But what is the most efficient way? When first thinking about it, a simple 3D array containing a 2D array of pixels for every frame seems the simplest solution; but then its size would be ENORMOUS. I mean, let's assume that a pixel only needs a color. One color would mean 3 bytes (1 char r, 1 char g, 1 char b). If I now have a 1920x1080 video format, this would mean about 6 MEGABYTES (1920 x 1080 x 3 bytes) for only one frame! This data may or may not be smaller if using pointers to the colors, so that already-used colors won't take more space - I don't really know, since I'm pretty new to C++ and the whole low-level stuff. (As a comparison: one of my AVI files recorded with the Xvid codec is 40 seconds long, 30fps, and only takes 2MB.)
So how would you actually store the video data (not even the audio, just the video) efficiently, while still being easily able to perform per-frame changes on it?
As you have realised, uncompressed video is enormous and it is not practical to store an entire video in this way.
Video compression is an extremely complex topic, but more-or-less, it works as follows: certain "key-frames" are compressed using fairly standard compression techniques similar or identical to still-photo compression such as JPEG. Frames following key-frames are compressed by comparing the frame with the previous one and looking for changes (such as moving blocks). Every now and again, a new key-frame is used.
You don't really have to worry much about that as you are not going to write your own video coder/decoder (codec). There are standard ones.
What will happen is that your program will decode the compressed video frame-by-frame and keep a certain number of frames in memory while you are working on them and then re-encode them when it is finished. In the uncompressed form, you will have access to the individual pixels and can work on them how you want.
You are probably not going to do that either by yourself - it is very hard. You probably need to use a framework, such as OpenCV. There are a huge number of standard filters and tools built in to these frameworks, and it may be that what you want to do is already implemented somewhere.
The OpenCV framework can return individual frames in a Mat object and you can then access the pixels. See this post Get Pixels from Mat
OpenCV
Tutorial page: Open CV Tutorial
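A minimal sketch of that decode/edit/re-encode loop with OpenCV (the codec choice and file names are illustrative only):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture in("input.avi");
        if (!in.isOpened()) return 1;

        double fps = in.get(cv::CAP_PROP_FPS);
        cv::Size size((int)in.get(cv::CAP_PROP_FRAME_WIDTH),
                      (int)in.get(cv::CAP_PROP_FRAME_HEIGHT));

        // Re-encode with motion-JPEG purely as an example codec.
        cv::VideoWriter out("output.avi",
                            cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                            fps, size);

        cv::Mat frame;
        while (in.read(frame)) {      // decode one compressed frame to BGR pixels
            // Per-frame edit goes here; frame.at<cv::Vec3b>(y, x) reads or
            // writes a single pixel. As a placeholder, invert every pixel:
            cv::bitwise_not(frame, frame);
            out.write(frame);         // re-encode the edited frame
        }
        return 0;
    }

Only one decoded frame is held in memory at a time, which is how the size problem from the question is avoided.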
Given that FFmpeg is the leading multimedia framework and most video/audio players use it, I'm wondering about some things concerning audio/video players that use FFmpeg as an intermediate layer.
I'm studying and I want to know how audio/video players works and I have some questions.
I was reading the ffplay source code and I saw that ffplay handles the subtitle stream. I tried to use an mkv file with a subtitle in it and it doesn't work. I tried using arguments such as -sst but nothing happened. - I was reading about subtitles and how video files (or should I say containers?) use them. I saw that there are two ways of carrying a subtitle: hardsubs and softsubs - roughly speaking, hardsubs are burned in and become part of the video, while softsubs are carried as a separate stream of subtitles (I might be wrong - please correct me).
The question is: how do they handle this? I mean, when the subtitle is part of the video there's nothing to do - the video stream itself shows the subtitle - but what about softsubs? How are they handled? (I heard something about text subs as well.) - How does the subtitle appear on the screen, and how can fonts, size, and colors be configured without encoding everything again?
I was studying some video players' source code, and some or most of them use OpenGL to render the frames while others use some kind of canvas (such as Qt's QWidget). - Which is the most used, and which one is fastest and better? OpenGL with shaders and such? Handling YUV or RGB and so on? How does that work?
It might be a dumb question, but what is the format that AVFrame returns? For example, when we want to save frames as images, first we need the frame and then we convert it - from which format are we converting? Does it change according to the video codec or is it always the same?
Most of the videos I've been trying to handle use YUV420P, and I need to convert to RGB before saving the frames as png. I did a test with several players: I paused them all at the same frame, took screenshots, and compared. The video players show the frames more colorfully. I tried the same with ffplay, which uses SDL (OpenGL), and the colors (quality) of the frames seem to be really low. What might that be? What do they do? Is it shaders (or a kind of magic? haha).
Well, I think that is it for now. I hope you can help me with that.
If this isn't the correct place, please let me know where. I haven't found another place in Stack Exchange communities.
There are a lot of questions in one post:
How are 'soft subtitles' handled?
The same way as any other stream:
Read packets for that stream from the container.
Give each packet to a decoder.
Use the decoded frame as you wish. With most containers that support subtitles, the presentation time will be present; all you need to do then is take the text and burn it onto the video image at that same presentation time. There are a lot of ways to draw the text onto the video, with ffmpeg or another library - see the sketch below.
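For instance, here is a rough libavformat/libavcodec sketch of pulling soft (text) subtitles out of a container. Error handling is trimmed, and note that subtitles still go through the dedicated avcodec_decode_subtitle2 call rather than the newer send/receive API:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;

        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        avformat_find_stream_info(fmt, nullptr);

        // Locate the first subtitle stream and open its decoder.
        int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_SUBTITLE, -1, -1, nullptr, 0);
        if (idx < 0) return 1;
        const AVCodec* dec = avcodec_find_decoder(fmt->streams[idx]->codecpar->codec_id);
        AVCodecContext* ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[idx]->codecpar);
        avcodec_open2(ctx, dec, nullptr);

        AVPacket* pkt = av_packet_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == idx) {
                AVSubtitle sub;
                int got = 0;
                if (avcodec_decode_subtitle2(ctx, &sub, &got, pkt) >= 0 && got) {
                    // Each rect holds text (or a bitmap) plus timing; a player
                    // would render it over the video frames at this PTS.
                    for (unsigned i = 0; i < sub.num_rects; ++i)
                        if (sub.rects[i]->text)
                            std::printf("%s\n", sub.rects[i]->text);
                    avsubtitle_free(&sub);
                }
            }
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }

Because softsubs stay a separate stream, the player rasterizes them at render time, which is why fonts, sizes, and colors can be changed without re-encoding the video.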
What is the most used renderer and which one is fastest and better?
Most used depends on the underlying system. For instance, Qt only wraps native renderers, and even has an OpenGL version.
You can only be as fast as the underlying system allows. Does it support double-buffering? Can it render your decoded pixel format directly, or do you have to perform color conversion first? This topic is too broad.
Better depends entirely on the use case; this is also too broad.
what is the format that AVFrame returns?
It is a raw format (enum AVPixelFormat), and it depends on the codec. There is a list of YUV and RGB FOURCCs which covers most formats in ffmpeg. Programmatically, you can access the table AVCodec::pix_fmts to obtain the pixel formats a specific codec supports.
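As an illustration, here is roughly how a decoded frame, in whatever pixel format the codec produced, gets converted to packed RGB with libswscale before being saved as a png (a minimal sketch, no error handling):

    extern "C" {
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>
    }

    // Convert a decoded frame (e.g. AV_PIX_FMT_YUV420P) to RGB24.
    AVFrame* toRGB24(const AVFrame* src) {
        AVFrame* dst = av_frame_alloc();
        dst->format = AV_PIX_FMT_RGB24;
        dst->width  = src->width;
        dst->height = src->height;
        av_frame_get_buffer(dst, 0);

        SwsContext* sws = sws_getContext(
            src->width, src->height, (AVPixelFormat)src->format,
            dst->width, dst->height, AV_PIX_FMT_RGB24,
            SWS_BILINEAR, nullptr, nullptr, nullptr);

        sws_scale(sws, src->data, src->linesize, 0, src->height,
                  dst->data, dst->linesize);
        sws_freeContext(sws);
        return dst;   // caller frees with av_frame_free(&dst)
    }

The coefficients and range chosen during this YUV-to-RGB step (BT.601 versus BT.709, full versus limited range) differ between players and are a common cause of the color differences you noticed.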
I am making a game with a large number of sprite sheets in cocos2d-x. There are too many characters and effects, and each of them uses a sequence of frames. The apk file is larger than 400mb, so I have to compress those images.
In fact, each frame in a sequence differs only a little from the others. So I wonder: is there a tool to compress a sequence of frames instead of just putting them into a sprite sheet? (Armature animation can help, but the effects cannot be regarded as an armature.)
For example, there is an effect including 10 png files and the size of each file is 1mb. If I use TexturePacker to make them into a sprite sheet, I will have a big png file of 8mb and a plist file of 100kb. The total size is 8.1mb. But if I can compress them using the differences between frames, maybe I will get a png file of 1mb and 9 files of 100kb for reproducing the other 9 png files during loading. This method only requires 1.9mb size in disk. And if I can convert them to pvrtc format, the memory required in runtime can also be reduced.
By the way, I am now trying to convert .bmp to .pvr during game loading. Is there any lib for converting to pvr?
Thanks! :)
If you have lots of textures to convert to pvr, I suggest you get the PowerVR tools from www.imgtec.com. They come in GUI and CLI variants. PVRTexToolCLI did the job for me; I scripted a massive conversion job. Free to download, free to use; you must register on their site.
I just tested it, and it converts many formats to pvr (bmp and png included).
Before you go there (the massive batch job), I suggest you experiment with some variants. PVR is (generally) fat on disk, fast to load, and equivalent to other formats in RAM ... RAM requirements are essentially dictated by the number of pixels and the number of bits you encode for each pixel. You can get some interesting disk sizes with pvr, depending on the output format and number of bits you use ... but it may be lossy, and you could get visible artefacts. So experiment with a limited sample before deciding to go full bore.
The first place I would look, even before any conversion, is your animations. Since you are using TP, it can detect duplicate frames and alias N frames to a single frame on the texture. For example, my design team provides me all 'walk/stance' animations with 5 pictures, but 8 frames! The plist contains frame aliases for the missing textures. In all my stances, frame 8 is the same as frame 2, so the texture only contains frame 2, but the plist artificially produces a frame 8 that crops the image of frame 2.
The other place I would look is using 16-bit textures. This will favour bundle size, memory requirements at runtime, and load speed. Use RGB565 for textures with no transparency, or RGBA5551 for animations, for example. Once again, try a few to make certain you get acceptable rendering.
have fun :)
There is a task: to write a program that will crop JPEG files. But the problem is that some JPEG files have large sizes - hundreds of megabytes. So the question: is it possible to crop a JPEG file without loading the whole file into RAM, using something like fseek() and decoding only the parts that are needed?
Is that possible? If yes, maybe there are libraries that already do this.
Upd. All this will be used for deep zoom technology, so when deep zoom asks for a file, this program should serve it in real time.
There are two ways to accomplish this.
The first is lossless cropping, where you don't decode the file all the way but work with the 8x8 DCT blocks. You'll need to use a library that has this capability, and it places some restrictions on the cropping ability. You can't crop to a boundary that isn't on the DCT square, which limits you to multiples of 8 or 16 depending on the subsampling in the file.
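If I remember right, the jpegtran tool that ships with libjpeg/libjpeg-turbo exposes exactly this kind of lossless crop; the crop origin gets snapped to the iMCU grid, which is the multiples-of-8-or-16 restriction just mentioned. Something along these lines:

jpegtran -crop 512x512+1024+2048 -outfile tile.jpg huge.jpg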
The second way is to use a library that allows you to read and write one line at a time. I know that the IJG library can do this, and probably others as well. This is the easy way, but the downside is that the image goes through a decompression/recompression pass and will lose quality and/or be larger.
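To illustrate the second way, here is a rough IJG libjpeg sketch (error handling omitted, an RGB source assumed) that decodes one scanline at a time and recompresses only the rows inside the crop, so memory use stays at a single row no matter how large the image is:

    #include <cstdio>
    #include <vector>
    #include <jpeglib.h>

    // Crop a horizontal band [top, top+height) out of src while holding only
    // one decoded scanline in memory. Horizontal cropping would additionally
    // slice a sub-range out of each row before writing it.
    void cropBand(const char* srcPath, const char* dstPath,
                  JDIMENSION top, JDIMENSION height) {
        FILE* in  = std::fopen(srcPath, "rb");
        FILE* out = std::fopen(dstPath, "wb");

        jpeg_decompress_struct dec; jpeg_error_mgr derr;
        dec.err = jpeg_std_error(&derr);
        jpeg_create_decompress(&dec);
        jpeg_stdio_src(&dec, in);
        jpeg_read_header(&dec, TRUE);
        jpeg_start_decompress(&dec);

        jpeg_compress_struct enc; jpeg_error_mgr eerr;
        enc.err = jpeg_std_error(&eerr);
        jpeg_create_compress(&enc);
        jpeg_stdio_dest(&enc, out);
        enc.image_width      = dec.output_width;
        enc.image_height     = height;
        enc.input_components = dec.output_components;
        enc.in_color_space   = JCS_RGB;   // assumes an RGB-decoded source
        jpeg_set_defaults(&enc);
        jpeg_start_compress(&enc, TRUE);

        std::vector<JSAMPLE> row(dec.output_width * dec.output_components);
        JSAMPROW rp = row.data();
        while (dec.output_scanline < dec.output_height) {
            jpeg_read_scanlines(&dec, &rp, 1);         // decode one line
            JDIMENSION y = dec.output_scanline - 1;    // index of that line
            if (y >= top && y < top + height)
                jpeg_write_scanlines(&enc, &rp, 1);    // recompress the band
        }

        jpeg_finish_compress(&enc);   jpeg_destroy_compress(&enc);
        jpeg_finish_decompress(&dec); jpeg_destroy_decompress(&dec);
        std::fclose(in); std::fclose(out);
    }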