spleeter separate: output vocals and accompaniment are shorter than the input video

I want to use spleeter to separate a video file into a vocals track and an accompaniment track, but the output length differs from the input.
This is the command:
spleeter separate "[file path].mp4" -p spleeter:2stems -o "[folder path]" 
Input file (mp4) duration: about 13 minutes
Output file (wav) duration: about 10 minutes
How can I keep the full length of the media file when using the spleeter separate command?

This is because of spleeter's default duration limitation:
$ spleeter separate --help
Usage: spleeter separate [OPTIONS] FILES...
Separate audio file(s)
Arguments:
FILES... List of input audio file path [required]
Options:
-d, --duration FLOAT Set a maximum duration for processing audio
(only separate offset + duration first
seconds of the input file) [default: 600.0]
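Given that 600-second default, passing a --duration longer than the input should keep the full 13 minutes; for example (the 3600 value is just an illustrative upper bound, and the bracketed paths are the same placeholders as above):

```shell
spleeter separate "[file path].mp4" -p spleeter:2stems -o "[folder path]" -d 3600
```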

GStreamer: seek request problem in a pipeline with a mixer with pad offset

I want to mix multiple wav files. The files can have different start times.
For that, I set an offset on the pad of the mixer.
I am using gstreamer-java.
This is an example of the timeline with two files; there is a 10-second offset for file 2.
It works fine: file 2 starts as expected.
But if I do a seek request, I won't hear file 2 for the duration of its offset (here: 10 seconds).
When I hear file 2 again, it is in sync with the expected timeline.
Is it possible to do a seek request when the mixer has a pad with an offset?

Fluentd S3 output plugin not recognizing index

I am facing problems using the S3 output plugin with fluentd.
s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
Using %{index} at the end never resolves to _0, _1, etc. I always end up with log file names like
sflow_.log
while I need
sflow_0.log
Can you paste your fluent.conf? It's hard to find the problem without the full configuration. File creation is mainly controlled by the time slice format and the buffer configuration:
<buffer>
#type file or memory
path /fluentd/buffer/s3
timekey_wait 1m
timekey 1m
chunk_limit_size 64m
</buffer>
time_slice_format %Y%m%d%H%M
With the above, you create a file every minute; if your buffer limit is reached within that minute (or for any other reason), another file is created with index 1 under the same minute.
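For reference, a minimal fluent-plugin-s3 match section along those lines might look like this (the bucket name, tag pattern, and paths are placeholders, not taken from the question):

```
<match sflow.**>
  @type s3
  s3_bucket my-bucket
  path sflow_
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
  time_slice_format %Y%m%d%H%M
  <buffer time>
    @type file
    path /fluentd/buffer/s3
    timekey 1m
    timekey_wait 1m
    chunk_limit_size 64m
  </buffer>
</match>
```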

Concatenating text and binary data into one file

I am developing an application, and I have several pieces of data that I want to be able to save to and open from the same file. The first is several lines of essentially human-readable text that store simple data about certain attributes. The data is stored in AttributeList objects that support operator<< and operator>>. The rest are .png images which I have loaded into memory.
How can I save all this data to one file in such a way that I can then easily read it back into memory? Is there a way to store the image data in memory that will make this easier?
Yes.
In an embedded system I once worked on, the requirement was to capture the system configuration into a RAM file system (1 MB).
We used zlib to compress and 'merge' multiple files into a single storage file.
Perhaps any compression system would work for you. On Linux, I would use popen() to run gzip, gunzip, etc.
update 2017-08-07
In my popen demo (for this question), I build the command string with standard shell commands:
std::string cmd;
cmd += "tar -cvf dumy514DEMO.tar dumy*V?.cc ; gzip dumy514DEMO.tar ; ls -lsa *.tar.gz";
// tar without compression ; next do compress
Then I construct my popen-wrapped-in-a-class instance and invoke the popen read action. There is normally very little feedback to the user (in the style of the UNIX philosophy, i.e. no success messages), so for this demo I included the -v (verbose) option. The resulting feedback lists the 4 files tar'd together, and I list the resulting .gz file.
dumy514V0.cc
dumy514V1.cc
dumy514V2.cc
dumy514V3.cc
8 -rw-rw-r-- 1 dmoen dmoen 7983 Aug 7 17:23 dumy514DEMO.tar.gz
And a snippet from the dir listing shows my executable, my source code, and the newly created tar.gz.
-rwxrwxr-x 1 dmoen dmoen 86416 Aug 7 17:18 dumy514DEMO
-rw-rw-r-- 1 dmoen dmoen 13576 Aug 7 17:18 dumy514DEMO.cc
-rw-rw-r-- 1 dmoen dmoen 7983 Aug 7 17:23 dumy514DEMO.tar.gz
As you can see, the tar.gz is about 8,000 bytes. The 4 files add up to about 70,000 bytes.
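The same tar/gzip merge-and-restore round trip can be sketched directly in the shell (the file names here are hypothetical stand-ins for the attribute text and image data):

```shell
# Round trip: merge a text file and a (fake) binary file into one
# compressed archive, then restore them. File names are hypothetical.
mkdir -p bundle_demo
cd bundle_demo
echo "attribute data" > attrs.txt          # human-readable attribute lines
printf 'fake-png-bytes' > img1.png         # stand-in for binary image data
tar -czf bundle.tar.gz attrs.txt img1.png  # merge + compress in one step
rm attrs.txt img1.png
tar -xzf bundle.tar.gz                     # read everything back
cat attrs.txt
cd ..
```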

How to handle FTE queued transfers

I have an fte monitor with '*.txt' as the trigger condition; whenever a text file lands at the source, fte transfers the file to the destination. But when 10 files land at the source at once, fte triggers 10 transfer requests simultaneously, and all the transfers get queued and stuck.
Please suggest how to handle this scenario.
Ok, I have just tested this case:
I want to transfer four *.xml files from a directory right when they appear in that directory. So I have the monitor set to *.xml and the transfer pattern set to *.xml (see the commands below).
Created with following commands:
fteCreateTransfer -sa AGENT1 -sm QM.FTE -da AGENT2 -dm QM.FTE -dd c:\\workspace\\FTE_tests\\OUT -de overwrite -sd delete -gt /var/IBM/WMQFTE/config/QM.FTE/FTE_TEST_TRANSFER.xml c:\\workspace\\FTE_tests\\IN\\*.xml
fteCreateMonitor -ma AGENT1 -mn FTE_TEST_TRANSFER -md c:\\workspace\\FTE_tests\\IN -mt /var/IBM/WMQFTE/config/TQM.FTE/FTE_TEST_TRANSFER.xml -tr match,*.xml
I got three different results depending on configuration changes:
1) just as commands are, default agent.properties:
in the transfer log, 4 transfers appeared
all 4 transfers tried to transfer all four XML files
3 of them ended with partial success because the agent couldn't delete the source files
1 succeeded, transferring all files and deleting all source files
Well, with transfer type File to File, the final state is in fact OK: four files in the destination directory, because the previous files are overwritten. But with File to Queue, I got 16 messages in the destination queue.
2) fteCreateMonitor command modified with parameter "-bs 100", default agent.properties:
in the transfer log, there is only one transfer
this transfer ended with a partial-success result
this transfer tried to transfer 16 files (each XML four times)
the agent was not able to delete any file, so the source files remained in the source directory
So in sum, I got the same total number of files transferred (16) as in the first result, and the source files were not even deleted.
3) just as commands are, agent.properties modified with parameter "monitorMaxResourcesInPoll=1":
in the transfer log, there is only one transfer
this transfer ended with a success result
this transfer tried to transfer four files and succeeded
the agent was able to delete all source files
So I was able to get the expected result only with this setting. But I am still not sure about the appropriateness of setting monitorMaxResourcesInPoll to "1".
Therefore for me the answer is: add
monitorMaxResourcesInPoll=1
to agent.properties. But this conflicts with the other answers posted here, so I am a little confused now.
Tested on version 7.0.4.4.
Check the box that says "Batch together the file transfers when multiple trigger files are found in one poll interval" (screen three).
Make sure that you set the maxFilesForTransfer in the agent.properties file to a value that is large enough for you, but be careful as this will affect all transfers.
You can also set monitorMaxResourcesInPoll=1 in the agent.properties file. I don't recommend this for 2 reasons: 1) it will affect all monitors 2) it may make it so that you can never catch up on all the files you have to transfer depending on your volume and poll interval.
Set your "Batch together the file transfers..." to a value more than 10:
Max Batch Size = 100
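Putting the agent.properties options mentioned above in one place (the values are illustrative, and both settings are agent-wide, affecting every transfer and monitor on that agent):

```
# agent.properties (illustrative values)
maxFilesForTransfer=500
# use with caution: affects all monitors on this agent, and may prevent
# the monitor from catching up under high volume
monitorMaxResourcesInPoll=1
```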

How to make .avi, .mp4 file with jpeg frames?

I'm working with an IP camera, and I get Jpeg frames and audio data (PCM) from the camera.
Now I want to create a video file (with both audio and video) in .avi or .mp4 format from the above data.
I searched and learned that the ffmpeg library can do it, but I don't know how to use ffmpeg for this.
Can you suggest some sample code or the ffmpeg functions to do it?
If your objective is to write a C++ app to do this for you, please disregard this answer; I'll just leave it here for future reference. If not, here's how you can do it in bash:
First, make sure your images are in a nice format, easy to handle by ffmpeg. You can copy the images to a different directory:
mkdir tmp
x=1; for i in *jpg; do counter=$(printf %03d $x); cp "$i" tmp/img"$counter".jpg; x=$(($x+1)); done
Copy your audio data to the tmp directory and encode the video. Let's say your camera took a picture every ten seconds:
cd tmp
ffmpeg -i audio.wav -f image2 -i img%03d.jpg -vcodec msmpeg4v2 -r 0.1 -intra out.avi
Where -r 0.1 indicates a framerate of 0.1 which is one frame every 10 seconds.
The possible issues here are:
Your audio/video might go slightly out of sync unless you calculate your desired framerate carefully in advance. You should be able to get the length of the audio (or video) using ffmpeg and some grep magic. Even so, sync might be an issue with longer clips.
if you have more than 999 images, the %03d format will not be enough; change the 3 to the desired length of the index
The video will inherit its length from the longer of the streams, you can restrict it using the -t switch:
-t duration: restrict the transcoded/captured video sequence to the specified duration in seconds. "hh:mm:ss[.xxx]" syntax is also supported.
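With current ffmpeg builds, a roughly equivalent command (hypothetical file names, assuming an H.264 encoder such as libx264 is available) would set -framerate on the image input and use -shortest to stop at the end of the shorter stream instead of calculating -t by hand:

```shell
# one frame every 10 seconds; stop when the shorter input ends
ffmpeg -framerate 0.1 -i img%03d.jpg -i audio.wav \
       -c:v libx264 -pix_fmt yuv420p -shortest out.mp4
```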