I need to extract audio from video and save it. FFmpeg has a command for exactly this purpose. I wonder whether it is right to execute ffmpeg from my code instead of writing the code against its API functions.
The drawback of this approach is that I am using the Qt framework and need a cross-platform application. Sometimes (especially on Windows, where the PATH variable is not set up automatically, so just calling ffmpeg won't work) the user would have to specify the path to the executable to be run on the command line.
So both variants are feasible, but which one is the better and more correct approach?
I don't really want to use their API, because it is not easy to understand and writing my own code on top of it would take time.
Thanks for any advice!
Using standalone ffmpeg seems to be preferable in your case. You will have to bundle ffmpeg and its dependencies along with your application. However, there is no need to set or use PATH or other environment variables to launch ffmpeg: you should launch it by supplying the full path to the ffmpeg executable.
Using the libav API is indeed rather tricky. I would also mention that, in general (depending on the codec), ffmpeg and libav should not be considered stable, and you would want to spawn a separate process to protect the main executable from a potential crash in that case as well. So the complexity of this approach is much higher than that of the first one.
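A minimal sketch of that approach with Qt's QProcess, assuming the ffmpeg binary is bundled with the application or its location has been chosen by the user (the function name, paths, and arguments below are placeholders, not a fixed recipe):

#include <QProcess>
#include <QString>
#include <QStringList>
#include <QDebug>

// Extract the audio track from a video file by running a bundled or
// user-chosen ffmpeg binary in a separate process.
bool extractAudio(const QString &ffmpegPath,   // full path chosen by the user or bundled with the app
                  const QString &videoFile,
                  const QString &audioFile)
{
    QStringList args;
    args << "-i" << videoFile      // input video
         << "-vn"                  // drop the video stream
         << "-acodec" << "copy"    // copy the audio stream without re-encoding
         << audioFile;

    QProcess ffmpeg;
    ffmpeg.start(ffmpegPath, args);            // full path, so PATH does not matter
    if (!ffmpeg.waitForStarted())
        return false;
    if (!ffmpeg.waitForFinished(-1))           // block until ffmpeg exits
        return false;

    qDebug() << ffmpeg.readAllStandardError(); // ffmpeg writes its log to stderr
    return ffmpeg.exitStatus() == QProcess::NormalExit && ffmpeg.exitCode() == 0;
}

Note that "-vn -acodec copy" just copies the existing audio stream, so the output container has to be compatible with that codec; re-encoding (for example with "-acodec libmp3lame") is an alternative if you need a specific output format.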
Disclaimer: I have never used Qt together with ffmpeg myself, but I do have a lot of experience with Qt in particular.
Qt tends to try to have everything in its own library, wrapping many other libraries for convenience. Most of the time (in all the cases I tested), it is still quite easy to use the original library directly without trouble, but Qt facilitates the integration.
As an example, QOpenGLWidget wraps OpenGL into Qt's widget system, adding signals and slots, etc. I made some tests using plain OpenGL and it worked fine.
In another project, we (my team, not me specifically) used ffmpeg to display video on a Qt widget. It works with only limited problems (due to other architectural requirements).
Considering your use case, and especially the fact that you are using ffmpeg for background processing and not for displaying video, you may, in my opinion, go ahead with a high probability of success.
Related
Currently I'm developing an extension for Premiere Pro that should use an .exe file to do the actual work.
The main algorithm is written in Python and works with the actual video files: it reads them frame by frame and processes the information at the pixel level... I'm thinking of making an executable from it and calling that from ExtendScript to make things happen...
Is this the right way to go? Or should I consider other techniques, like writing a plugin using the PPro SDK and calling it from ExtendScript? Is it possible to access the actual video files, read them, and get pixel values with the SDK?
Thanks.
I'm not sure how good the Premiere SDK is, but the drawback I can see to using the SDK is that you would have to port everything to C++. If that's less work than creating the ExtendScript bridge, it would be the way to go.
The PPro ExtendScript documentation is notoriously non-existent.
I am developing a project using the QCanBusDevice class provided by Qt 5.10. However, the readFrame() API, which reads CAN frames from a QVector input buffer, is too slow: it can't keep up with the number of CAN messages the processor is producing.
I want to be able to flush the buffer and obtain the latest data on the wire; however, the library doesn't provide any API to flush the input buffer (please let me know if I missed it).
Is it possible to modify this library and add a flush API?
Thanks!
"Is it possible to modify this library and add a flush API?"
Of course it is. Qt is open source: you can download the source code, make the modifications you need, recompile the library, and then use your modified version with your application.
At the place where I work, we do exactly this to add a few bits and pieces we need and to fix some bugs.
But if you do this, please submit your changes back to the project so a) others can benefit, and b) you don't have to maintain your own fork indefinitely (at my company we, of course, also do this).
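If patching and rebuilding Qt is too heavy for your situation, a stopgap that stays entirely within the existing public API is to drain the queue in your own code and keep only the newest frame; a rough sketch (the function name and how you obtain the device pointer are placeholders):

#include <QCanBusDevice>
#include <QCanBusFrame>

// Drain everything queued in the device's input buffer and return only the
// most recent frame. This is not a real flush inside QCanBusDevice, just a
// user-side approximation built on the public framesAvailable()/readFrame() API.
QCanBusFrame latestFrame(QCanBusDevice *device)
{
    QCanBusFrame latest;
    while (device->framesAvailable() > 0)
        latest = device->readFrame();   // older frames are read and discarded
    return latest;
}

This does not make readFrame() itself any faster, it only discards the backlog, so a sustained overload still needs a fix on the library side.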
I would like to write a console app with a text UI in D. I looked at curses, but it seems to work only on Unix. Are there any cross-platform alternatives?
My terminal.d could be used as a foundation for a TUI library.
https://github.com/adamdruppe/arsd/blob/master/terminal.d
It has no required dependencies, so you can simply download that one file and start building with dmd yourfile.d terminal.d. Here's an example program getting input: http://arsdnet.net/dcode/book/chapter_12/07/input.d
You can also use terminal.moveTo(x, y); terminal.color(Color.green, Color.black); terminal.writef("something"); terminal.flush(); and such to move and draw.
Look for version(Demo) in terminal.d itself for a main which handles all kinds of input events, including mouse events.
While terminal.d mostly offers lower-level functions (its main high level function is terminal.getline, great for line based apps but not TUIs), it should give all the foundation needed to write a little text widget library.
I think someone may have done that once, but I don't recall where.
terminal.d works on Windows and POSIX systems, for the most common terminals like xterm. ncurses is more comprehensive and probably has fewer bugs on more obscure targets, but terminal.d, being a single file, is easier to build.
That was true long ago. However, ncurses is known to work nicely on Windows as well; the easiest way to build it there is inside the MSYS2 shell. There is really no other cross-platform alternative to curses (find out why they named the project "curses" and you will see why there is no good alternative).
For fun I'd like to write code that runs on OS-less hardware. I think writing code that runs in a VM (like VMware or VirtualBox) would be a good start. However, I don't want to start completely from scratch: I'd like the C++ runtime to be available, plus something that allows me to read/write files (maybe FAT32 filesystem code), graphics for text and, if possible, graphics for drawing on screen pixel by pixel (SDL support would be a bonus but is not essential).
I'll write my own threads if I want them, and I'll write everything else (that I want to use) needed for an OS. I just want basic filesystem, graphics, and keyboard/mouse support.
Take a look at the list of projects on osdev.org (http://wiki.osdev.org/Projects); most of these are hobbyist and open source, and they range from just-a-bootsector through to proper threads/graphics/terminal support.
Minix3 targets your desires pretty well.
You should definitely take a look at OSKit (the links to the source code on its site are dead, but there is a mirror here). Unfortunately, OSKit has no support for C++, but using this information you may be able to use the GCC libraries.
I have some basic effect algorithms (e.g. chorus, low-pass filtering) and I would like to build a GUI application so that I can use them.
For example, I want to be able to open an audio file, process it in some way with my algorithms, and play back the processed file.
Later on, if possible, I would like to be able to see the waveforms of the original file and the processed file in the GUI application. This is my objective for now.
In the future I want to create a user interface through which users can apply my audio processing algorithms to files of their own.
Is it possible to design such a GUI with the Qt framework? If so, could someone point me in the right direction to get started? Right now I have the Qt SDK 1.1 beta running on Windows 7 and am using Qt Creator. I would really appreciate some guidance.
Qt is a very powerful application framework, but do not expect any extra help with DSP tasks from it. It contains APIs for some basic and common tasks, like playing an audio/video file, working with audio devices, and creating audio effects (search for QAudio and Phonon in Qt's help). You can use some ready-made widgets and create your own multimedia player in a few moments.
But for the DSP you are mostly on your own. There is, for instance, only limited audio file format support, so if you want to work with more formats than .wav and .aiff, use a specialized library; I recommend libsndfile (http://www.mega-nerd.com/libsndfile/), which is the most powerful free audio file library available. And if you want your effects to be more universal, consider the VST technology by Steinberg, today's audio plug-in standard, but it is relatively complicated and not suitable for beginners.
There is no built-in widget that can show a waveform; you have to create it yourself, but it is not very complicated. Qt has really nice drawing functions: brushes, text, gradients, transformations, antialiasing, even an OpenGL wrapper, everything ready and very simple to use.
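A minimal sketch of such a custom widget, assuming the samples have already been decoded (for example with libsndfile) and normalized to the range [-1, 1]; the class and member names are just placeholders:

#include <QWidget>
#include <QPainter>
#include <QPaintEvent>
#include <QVector>
#include <QtGlobal>

// A small custom widget that paints a waveform from a buffer of samples.
// The samples are assumed to be already decoded and normalized to [-1, 1].
class WaveformWidget : public QWidget
{
public:
    explicit WaveformWidget(QWidget *parent = 0) : QWidget(parent) {}

    void setSamples(const QVector<float> &samples)
    {
        m_samples = samples;
        update();                                  // schedule a repaint
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        p.fillRect(rect(), Qt::black);
        if (m_samples.isEmpty())
            return;

        p.setPen(Qt::green);
        const int w = width();
        const int h = height();
        const int step = qMax(1, int(m_samples.size()) / w);

        // One vertical line per pixel column, spanning the min/max of the
        // samples that fall into that column.
        for (int x = 0; x < w; ++x) {
            if (x * step >= m_samples.size())
                break;
            float lo = 1.0f, hi = -1.0f;
            for (int i = x * step; i < (x + 1) * step && i < m_samples.size(); ++i) {
                lo = qMin(lo, m_samples[i]);
                hi = qMax(hi, m_samples[i]);
            }
            const int yTop    = int((1.0f - (hi + 1.0f) / 2.0f) * h);
            const int yBottom = int((1.0f - (lo + 1.0f) / 2.0f) * h);
            p.drawLine(x, yTop, x, yBottom);
        }
    }

private:
    QVector<float> m_samples;
};

You could use one instance of this widget for the original file and a second one for the processed file, feeding each with setSamples().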
So the answer is definitely yes. I have been using Qt in my multimedia applications for three years and now I can't see how I could live without it (I used VST GUI and the Windows APIs before).
Sure, it's possible. Qt is a framework for writing applications; you can write any application you want with it, though you'll probably end up needing to write some custom controls. As an example, here's an open-source Qt-based application that does pretty much everything you are talking about and much more:
http://qtractor.sourceforge.net/qtractor-index.html