Currently I'm developing an extension for Premiere Pro that needs to call an .exe file to do its work.
The main algorithm is written in Python and works with the actual video files: it reads them frame by frame and processes the information at the pixel level... I'm thinking of making an executable from it and calling it from ExtendScript to make things happen...
Is this the right way to go? Or should I consider other techniques, like writing a plugin using the PPro SDK and calling it from ExtendScript? Is it possible to access the actual video files, read them, and get pixel values with the SDK?
Thanks.
I'm not sure how good the Premiere SDK is, but the drawback I can see is that to use the SDK you would have to port everything to C++. If that's less work than creating the ExtendScript bridge, it would be the way to go.
The PPro ExtendScript documentation is notoriously non-existent.
I've been going through the Windows documentation for the DISM API with the goal of writing an exe in C++ (or whatever language can accomplish this) that can create a WIM image while running in Windows PE. I found a .NET wrapper for the DISM API that seems like it might be useful for this purpose, but I'm unsure whether a .NET app will run successfully in Windows PE. Overall, my problem is that I don't see a function that can create--and not simply modify--a WIM file.
If I didn't care about encapsulating this in an .exe file, the DISM documentation does show how to initially create a WIM--which makes me curious why a similar function wouldn't exist within the API. Please advise if the simplest solution is to have my code call a function such as system().
To summarize, I'm looking for a way to create a WIM file programmatically (from an executing .exe file) from within Windows PE.
As always, thank you for the help and advice.
I work on a project that works with DISM in WinPE quite a bit. We configure WinPE with all the .NET packages as described here. Then WinPE can be configured to start an application.
I use C#, but you can do managed apps in C++ as I'm sure you know. I find putting C# code into WinPE substantially easier, but that's more a function of my experience, I suppose.
The main way we interact with DISM is to run a command using System.Diagnostics.Process. The process runs in a separate thread, but the API is simple, and you can wait on (and/or time out) your process for synchronization purposes. This just uses the DISM command-line interface, although you can also use PowerShell cmdlets if you've added that package to your WinPE image. It may seem like a hacky "interface" from your app to DISM, but it works reliably, and you can keep the process window from showing up on the screen. This makes for a decent asynchronous platform for running bunches of Windows imaging utilities, such as DISKPART, DISM, and BCDEDIT.
The principal way you'd capture a new image is with DISM /Capture-Image. It sounds like you've already discovered this. There are lots of options that are somewhat beyond the scope of this Q&A, but I hope this gets you on a useful path.
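Since the question mentions C++, here is a minimal Win32 sketch of that pattern: spawn dism.exe as a hidden child process and wait for it. The image path, capture directory, and image name are placeholders to replace with your own, and dism.exe is assumed to be on the PATH of the WinPE image (the C# version with System.Diagnostics.Process is analogous):

```cpp
// Minimal sketch: run DISM /Capture-Image as a hidden child process and wait
// for it to finish. Paths and the image name are placeholders for illustration.
#include <windows.h>
#include <iostream>

int main() {
    // CreateProcessW may modify the command line in place, so use a mutable buffer.
    wchar_t cmd[] =
        L"dism.exe /Capture-Image /ImageFile:D:\\capture.wim "
        L"/CaptureDir:C:\\ /Name:\"Drive C\"";

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CREATE_NO_WINDOW keeps the DISM console window off the screen.
    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_NO_WINDOW, nullptr, nullptr, &si, &pi)) {
        std::wcerr << L"CreateProcess failed: " << GetLastError() << L"\n";
        return 1;
    }

    // Block until DISM finishes (use a finite timeout in real code).
    WaitForSingleObject(pi.hProcess, INFINITE);

    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    std::wcout << L"DISM exit code: " << exitCode << L"\n";
    return exitCode == 0 ? 0 : 1;
}
```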
Even though this post is a bit older, here is a possibly still relevant resource that may help.
I've written a small GUI-based tool called WIM-Backup that uses the Windows Imaging Format (WIM) to create full backups of computer systems (operating system images) within WinPE and then restore them.
The application is hosted on GitHub, is open source, and is offered under the Apache 2.0 license.
In addition, the repository includes an illustrated step-by-step guide to help get it up and running.
Brief summary:
WIM-Backup always requires an external bootable media such as a USB flash drive.
From this drive WinPE is booted to perform a backup or restore to or from an external medium (e.g. a USB hard drive).
WinPE must be set up on the bootable USB flash drive beforehand (documented with illustrations in the README).
After completion of the respective operation, a status message is displayed indicating whether the operation succeeded or failed.
After restoring a backup, you can boot normally from the destination drive.
Both the backup and restore process are relatively simple (not "rocket science").
To set up the solution, you need about 30 minutes in the best case, owing to the necessary downloads (e.g. the ADK).
Last but not least: it has a permissive license (non-proprietary) and is open source.
The project can be found here: WIM-Backup
I need to extract audio from video and save it. FFmpeg has a command for this purpose. I wonder whether it is right to execute ffmpeg from my code rather than writing code against its API functions.
The drawback of this approach is that I use the Qt framework and need a cross-platform application. Sometimes (especially on Windows, where the PATH variable isn't set up automatically, so calling ffmpeg won't work) a user will need to specify the path to the executable file to run it from the command line.
So both variants are feasible, but which is the better and more correct one?
I don't really want to use their API because it is not easy to understand, and writing my own code will take time.
Thanks for any advice!
Using standalone ffmpeg seems to be preferred in your case. You will have to bundle ffmpeg and its dependencies along with your application. However, there is no need to set or use PATH or other environment variables to launch ffmpeg. You should do it by supplying the full path to the ffmpeg executable.
Using the libav API is indeed rather tricky. And I would like to mention that in general (depending on the codec) ffmpeg and libav should not be considered stable, so you should spawn a separate process to protect the main executable from a potential crash in this case as well. So the complexity of this approach is much higher compared to the first one.
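For illustration, a minimal Qt sketch of that approach could look like this; the input/output file names and the bundled ffmpeg location are placeholders, and the .exe suffix assumes Windows:

```cpp
// Minimal sketch: extract the audio track from a video with a bundled ffmpeg,
// launched by absolute path so PATH does not matter.
#include <QProcess>
#include <QCoreApplication>
#include <QDir>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Resolve ffmpeg relative to the application directory (bundled binary).
    const QString ffmpeg = QDir(QCoreApplication::applicationDirPath())
                               .filePath("ffmpeg/ffmpeg.exe");

    // -vn drops the video stream, -acodec copy keeps the audio stream as-is.
    QStringList args{"-i", "input.mp4", "-vn", "-acodec", "copy", "output.aac"};

    QProcess proc;
    proc.start(ffmpeg, args);       // runs in a separate process, as advised above
    if (!proc.waitForFinished(-1))  // -1 = no timeout; use a real one in production
        qWarning() << "ffmpeg failed to run:" << proc.errorString();
    else
        qDebug() << "ffmpeg exit code:" << proc.exitCode();

    return 0;
}
```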
Disclaimer: I have never used Qt together with ffmpeg myself, but I have a lot of experience with Qt in particular.
Qt tends to try to have everything in its library, wrapping much other content for convenience. Most of the time (in all cases I tested), it is still quite easy to use the original library without trouble, but Qt facilitates integration.
As an example: QOpenGLWidget is a wrapper for OpenGL within their widget system, adding signals and slots, etc. I made some tests using plain OpenGL and it worked fine.
In another project, we (my team, not me personally) used ffmpeg to display video on a Qt widget. It works with limited problems (due to other architectural requirements).
Considering your use case, and especially that you are using ffmpeg for background processing and not for displaying video, IMO you may go ahead with a high probability of success.
The problem
I would like to use C++ to create an application that uses the new MacBook Pro Touch Bar. However, I am not able to find any really good resources, and Apple does not have any docs on using C++ to program the Touch Bar.
What I have done
I found this article on C++ and the Touch Bar. However, I cannot find either of the header files the script needs, GLFW/glfw3.h and GLFW/glfw3native.h. These both seem critical to the script working.
More on the issue
Even if the above article's script works, there are no official docs for programming the Touch Bar with C++ (that I know of). I think this is an important thing to have, given that many, if not most, applications are written in C/C++.
Thank you in advance for the help!
So the article that you link to basically does not need the GLFW/glfw3.h and GLFW/glfw3native.h files if you are not using GLFW.
What UI framework are you using for your C++ app?
Unless it is still using Carbon, at the lowest level the framework will be creating NSWindows to actually have windows in the UI. You need to get access to the NSWindow that your framework is using to host the UI. If it is still using Carbon, I think you are probably not going to be able to accomplish this.
If the framework provides some mechanism to get the native platform window (which will be an NSWindow), you would replace the author's call to glfwGetCocoaWindow(window); with the correct call from your framework.
If the framework does not provide access to the NSWindow, then you will need to use the code that is commented out at the bottom of the article to attach your touchbar to the windows in your app.
Please note that all of that code is Obj-C code; you'll need to have at least one .m or .mm file in your project to provide the Obj-C glue code that gets access to the touch bar. Basically, that code is a C-callable wrapper around the Cocoa API.
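To make that concrete, a hypothetical C-callable interface could look something like the header below; all of the names here are made up for illustration, and the implementation behind it would be the Obj-C code from the article, compiled in a .m/.mm file:

```cpp
// touchbar_bridge.h -- hypothetical C-callable interface to the Obj-C glue.
// The implementation lives in a .mm file and uses the NSTouchBar API.
#ifdef __cplusplus
extern "C" {
#endif

// Opaque handle to the native window (an NSWindow* on macOS).
typedef void* NativeWindowHandle;

// Invoked from the Obj-C action handler when the button with the given
// identifier is tapped.
typedef void (*TouchBarButtonCallback)(const char* identifier, void* userData);

// Attach a touch bar to the given window.
void touchbar_attach(NativeWindowHandle window);

// Register a button and the C++ callback it should fire.
void touchbar_add_button(const char* identifier, const char* label,
                         TouchBarButtonCallback callback, void* userData);

#ifdef __cplusplus
}
#endif
```

On the C++ side you would pass in whatever native window handle your framework exposes, in place of the article's glfwGetCocoaWindow(window) call.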
Also note that you'll need to expand the list of buttons and actions for all the different things you want to put in the touch bar. You could add your own wrapping API so that the construction of the touch bar is done from C++ and registers actions that call back into your C++ app to handle the events.
Fundamentally, though, the touch bar is not available on any other platform, so there is no great benefit in trying to avoid writing Obj-C to implement your touch bar, as that code will only run on macOS anyway. If you use .mm files to implement Obj-C++ for this code, you can still call into your C++ objects from your touch bar code.
I want to try some C and C++ programming with audio processing, such as synthesizers, chorus, delay, etc., but I only know how to work with a console as output. Instead of a console application, I wish to have a window capable of sending an audio signal to the speakers, running the code in the background, and working in a way similar to printf: every time I call the "output function", it would send a sample value, indicating the current oscillator position, to the speakers (or sound card). This output operation could be executed every time it is requested, or at the end of a built-in loop. Doing all this at a high sample rate would be just great.
I think I could do all this using an AudioWorker in the Web Audio API, plus a flexible GUI on an HTML5 canvas, but I'm new to this API and I'm not sure whether its resulting sound quality is good enough.
Thanks in advance.
Edit: I use Windows 8.1, but any answer for other platform is welcome.
Edit2: Any programming languages other than C, C++ or JavaScript suggestions are also welcome.
I do lots of sound synthesis with the Web Audio API and I think it sounds great. JavaScript is really all you need. Well, if you want to use audio files, you need a web server to serve them, but the audio synthesis all happens in JavaScript.
It doesn't matter so much which OS you use, but different browsers have different levels of support for the Web Audio API. Chrome tends to have the best support, and Internet Explorer definitely has the worst.
I have some basic effect algorithms (e.g. chorus, low-pass filtering) and I would like to build a GUI application that makes it possible to use them.
For example, I want to be able to open an audio file, process it in some way with my algorithms, and play back the processed file.
Later on, if possible, I would like to be able to see the waveforms of the original file and the processed file in the GUI application. That is my objective for now.
In the future I want to create a user interface through which users can apply my audio processing algorithms to files of their own.
Is it possible to design such a GUI with the Qt framework? If so, could someone point me in the right direction to get started? Right now I have the Qt SDK 1.1 beta running on Windows 7 and am using Qt Creator. I would really appreciate some guidance.
Qt is a very powerful application framework, but do not expect any extra help with DSP tasks from it. It contains APIs for some basic and common tasks, like playing an audio/video file, working with audio devices, and creating audio effects (search for QAudio and Phonon in Qt's help). You can use some ready-made widgets and create your own multimedia player in a few moments.
But in DSP you are, mostly, on your own. There is, for instance, only limited audio file format support, so if you want to work with more formats than .wav and .aiff, use a specialized library. I recommend libsndfile (http://www.mega-nerd.com/libsndfile/), which is the most powerful free audio file library available. And if you plan for your effects to be more universal, rather use the VST technology by Steinberg, today's audio plug-in standard; but it is relatively complicated and not suitable for beginners.
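As a small illustration of the libsndfile route (a sketch only, assuming the library is installed and linked; the file name is a placeholder):

```cpp
// Minimal sketch: read all samples of an audio file into memory with libsndfile.
#include <sndfile.h>
#include <vector>
#include <cstdio>

int main() {
    SF_INFO info = {};  // filled in by sf_open
    SNDFILE* file = sf_open("input.wav", SFM_READ, &info);
    if (!file) {
        std::fprintf(stderr, "open failed: %s\n", sf_strerror(nullptr));
        return 1;
    }

    // Interleaved float samples: frames * channels values in total.
    std::vector<float> samples(static_cast<size_t>(info.frames) * info.channels);
    sf_count_t read = sf_readf_float(file, samples.data(), info.frames);
    sf_close(file);

    std::printf("read %lld frames, %d channels, %d Hz\n",
                (long long)read, info.channels, info.samplerate);
    // ...run your chorus / low-pass algorithm over `samples` here...
    return 0;
}
```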
There is no built-in widget that can show a waveform; you have to create it yourself, but that is not very complicated. Qt has really cool drawing functions, brushes, text, gradients, transformations, antialiasing, even an OpenGL wrapper; everything is ready and very simple to use.
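For example, a bare-bones custom waveform widget could start like this (a sketch only; the samples, normalized to [-1, 1], would come from your file-reading code):

```cpp
// Minimal sketch of a custom Qt widget that paints a waveform with QPainter.
#include <QWidget>
#include <QPainter>
#include <QVector>
#include <utility>

class WaveformWidget : public QWidget {
public:
    explicit WaveformWidget(QVector<float> samples, QWidget* parent = nullptr)
        : QWidget(parent), m_samples(std::move(samples)) {}

protected:
    void paintEvent(QPaintEvent*) override {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing);
        p.fillRect(rect(), Qt::black);
        p.setPen(Qt::green);

        const int n = int(m_samples.size());
        if (n == 0)
            return;

        // Map one (decimated) sample to each horizontal pixel.
        const int w = width(), h = height();
        const int step = qMax(1, n / qMax(1, w));
        QPointF prev(0, h / 2.0);
        for (int x = 0; x < w; ++x) {
            const int i = qMin(x * step, n - 1);
            QPointF cur(x, h / 2.0 * (1.0 - m_samples[i]));
            p.drawLine(prev, cur);
            prev = cur;
        }
    }

private:
    QVector<float> m_samples;  // sample values normalized to [-1, 1]
};
```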
So the answer is definitely yes. I have been using Qt in my multimedia applications for three years, and now I can't see how I could live without it (I was using VST GUI and the Windows API before).
Sure it's possible. Qt is a framework for writing applications; you can write any application you want using it, though you'll probably end up needing to write some custom controls. As an example, here's an open-source Qt-based application that does pretty much everything you are talking about and much more:
http://qtractor.sourceforge.net/qtractor-index.html