youtube-dl error: Cannot download a video and extract audio into the same file

I used the exact same youtube-dl command without the playlist option to download individual audio files, and it worked. But when I use it for this playlist, I get an error: Cannot download a video and extract audio into the same file! Use "(ext)s.%(ext)s" instead of "(ext)s" as the output template
Running on Windows 10. Any help would be greatly appreciated!
PS C:\xxx\FFMPEG> .\YouTubeBatchAudioPlaylistIndexes.bat
C:\xxx\FFMPEG>call bin\youtube-dl.exe -x --audio-format "mp3" --audio-quality 3 --batch-file="songs.txt" --playlist-items 4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50 -o"C:\Users\xxx\Downloads\%(title)s.%(ext)s" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-x', '--audio-format', 'mp3', '--audio-quality', '3', '--batch-file=songs.txt', '--playlist-items', '4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50', '-oC:\\Users\\xxx\\Downloads\\(ext)s', '--verbose']
[debug] Batch file urls: ['https://www.youtube.com/watch?v=anurOHpo0aY&index=4&list=PLlRluznmnq9f7OMI4avwFyV2xMVxlV3_w&t=0s']
Usage: youtube-dl.exe [OPTIONS] URL [URL...]
youtube-dl.exe: error: Cannot download a video and extract audio into the same file! Use "C:\Users\xxx\Downloads\(ext)s.%(ext)s" instead of "C:\Users\xxx\Downloads\(ext)s" as the output template

If you look at the output, you see that the percent signs in your output template were gobbled up:
(...) '-oC:\\Users\\xxx\\Downloads\\(ext)s', '--verbose']
That is because in a batch file you need to write %% to get a literal percent sign, and you must double that again when the line is run via call, like this:
call bin\youtube-dl.exe -x --audio-format "mp3" --audio-quality 3 ^
--batch-file="songs.txt" --playlist-items ^
4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50 ^
-o "C:\Users\xxx\Downloads\%%%%(title)s.%%%%(ext)s" --verbose

Related

How to parse an HTTP JSON response and fail or pass a job based on that?

I have a GitLab CI YAML file with 2 jobs. My .gitlab-ci.yml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - trigger_IT_service

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job report:
Running on DIGITALIZATION...
00:00
Fetching changes with git depth set to 50...
00:05
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer
00:02
StatusCode : 200
StatusDescription : 200
Content : {"status":200,"message":"SAP transfer started. Please
check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
.722+0300","responseObject":null}
RawContent : HTTP/1.1 200 200
Keep-Alive: timeout=10
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Mar 2020 10:53:05 GMT
Server: Apache
I have to check the "Content" part of this report in the GitLab CI YAML. If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
Actually my question is: how to parse an HTTP JSON response and fail or pass the job based on that?
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are different examples here.
Given the JSON example from the comment:
{
  "status": 200,
  "message": "SAP transfer started. Please check in db",
  "errorCode": 0,
  "timestamp": "2020-03-25T17:06:43.430+0300",
  "responseObject": null
}
If you can install Python 3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional `pip install requests`

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service stage in your YAML would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"

Why can I only download the first episode of a video on Bilibili with youtube-dl?

I can download the first episode of a series.
youtube-dl https://www.bilibili.com/video/av90163846?p=1
Now I want to download all episodes of the series.
for i in $(seq 1 55)
do
  youtube-dl https://www.bilibili.com/video/av90163846?p=$i
done
All episodes except the first can't be downloaded; each attempt produces the same output, such as below:
[BiliBili] 90163846: Downloading webpage
[BiliBili] 90163846: Downloading video info page
[download] 【合集300集全】地道美音 美国中小学教学 自然科学 社会常识-90163846.flv has already been downloaded
Please have a try and check what happens. How can I fix this?
@Christos Lytras, a strange thing happens with your code:
for i in $(seq 1 55)
do
  youtube-dl https://www.bilibili.com/video/av90163846?p=$i -o "%(title)s-%(id)s-$i.%(ext)s"
done
It surely can download videos from Bilibili, but all of the downloaded files have different names and the same content; everything matches the first episode. Have a try and check, and you will find that fact.
This error occurs because youtube-dl ignores URI parameters after ? when building the filename, so the next file it tries to download gets the same name as the previous one, and it fails because a file with that name already exists. The solution is to use the -o/--output template filesystem option to set a filename that includes an index, using the variable i.
Filesystem Options
-o, --output TEMPLATE Output filename template, see the "OUTPUT
TEMPLATE" for all the info
OUTPUT TEMPLATE
The -o option allows users to indicate a
template for the output file names.
The basic usage is not to set any template arguments when downloading
a single file, like in youtube-dl -o funny_video.flv "https://some/video". However, it may contain special sequences that
will be replaced when downloading each video. The special sequences
may be formatted according to python string formatting operations. For
example, %(NAME)s or %(NAME)05d. To clarify, that is a percent symbol
followed by a name in parentheses, followed by formatting operations.
Allowed names along with sequence type are:
id (string): Video identifier
title (string): Video title
url (string): Video URL
ext (string): Video filename extension
...
For your case, to use the i in the output filename, you can use something like this:
for i in $(seq 1 55)
do
  youtube-dl https://www.bilibili.com/video/av90163846?p=$i -o "%(title)s-%(id)s-$i.%(ext)s"
done
which will use the title, the id, the i variable for indexing, and ext for the video extension.
You can check the Output Template variables for more options defining the filename.
UPDATE
Apparently, bilibili.com has some JavaScript involved in setting up the video player and fetching the video files. There is no way to extract the whole playlist using youtube-dl. I suggest you use Lux, which supports Bilibili playlists out of the box. It has installers for all major operating systems, and you can use it like this to download the whole playlist:
lux -p https://www.bilibili.com/video/av90163846
or if you want to download only up to video 55, you can use the -end 55 CLI option like this:
lux -end 55 -p https://www.bilibili.com/video/av90163846
You can use the -start, -end or -items option to specify the download
range of the list:
-start
Playlist video to start at (default 1)
-end
Playlist video to end at
-items
Playlist video items to download. Separated by commas like: 1,5,6,8-10
For bilibili playlists only:
-eto
File name of each bilibili episode doesn't include the playlist title
If you only want to get information about a playlist without downloading files, use the -i command line option like this:
lux -i -p https://www.bilibili.com/video/av90163846
will output something like this:
Site: 哔哩哔哩 bilibili.com
Title: 【合集300集全】地道美音 美国中小学教学 自然科学 社会常识 P1 【001】Parts of Plants
Type: video
Streams: # All available quality
[64] -------------------
Quality: 高清 720P
Size: 308.24 MiB (323215935 Bytes)
# download with: lux -f 64 ...
[32] -------------------
Quality: 清晰 480P
Size: 201.57 MiB (211361230 Bytes)
# download with: lux -f 32 ...
[16] -------------------
Quality: 流畅 360P
Size: 124.75 MiB (130809508 Bytes)
# download with: lux -f 16 ...
Site: 哔哩哔哩 bilibili.com
Title: 【合集300集全】地道美音 美国中小学教学 自然科学 社会常识 P2 【002】Life Cycle of a Plant
Type: video
Streams: # All available quality
[64] -------------------
Quality: 高清 720P
Size: 227.75 MiB (238809781 Bytes)
# download with: lux -f 64 ...
[32] -------------------
Quality: 清晰 480P
Size: 148.96 MiB (156191413 Bytes)
# download with: lux -f 32 ...
[16] -------------------
Quality: 流畅 360P
Size: 94.82 MiB (99425641 Bytes)
# download with: lux -f 16 ...

FFmpeg ignores some HTTP options when using the PUT method

I am using FFmpeg to create a CMAF stream, and I upload it to an AWS resource (AWS MediaStore) using FFmpeg's PUT method.
I need to pass the Content-Type header when uploading manifests & segments.
I have 3 types of files:
application/x-mpegURL : m3u8 manifest
application/dash+xml : mpd manifest
video/mp4 : video segments
Currently, all the types are set to Binary - octet-stream in the AWS resource (AWS MediaStore).
As I will upload a huge number of files, I can't use AWS Lambda functions to set the correct content type after a file has been uploaded.
FFmpeg upload logs
[https @ 0x555fe7a7d1c0] Opening 'https://XXXX.YYYY.amazonaws.com/chunk-stream0-00001.mp4' for writing
[https @ 0x555fe7a7d0c0] request: PUT /chunk-stream0-00001.mp4 HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/58.28.100
Accept: */*
Connection: keep-alive
Host: XXXXX.YYYY.amazonaws.com
Icy-MetaData: 1
My tries
I tried static builds and the master branch of FFmpeg.
I tried different ways to pass the content type, without success:
-mime_type 1 -headers "Content-type: video/mp4\r\n"
-mime_type "video/mp4,application/dash+xml,application/x-mpegURL"
-content_type application/dash+xml
-multiple_requests 1 -headers "a:b" -icy 0
Upload command:
./ffmpeg -re -i ~/videos/BigBuckBunny.mp4 -loglevel debug \
-map 0 -map 0 -map 0 -c:a aac -c:v libx264 -tune zerolatency \
-b:v:0 2000k -s:v:0 1280x720 -profile:v:0 high -b:v:1 1500k -s:v:1 640x340 -profile:v:1 main -b:v:2 500k -s:v:2 320x170 -profile:v:2 baseline -bf 1 \
-keyint_min 24 -g 24 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 -window_size 5 \
-adaptation_sets "id=0,streams=v id=1,streams=a" -hls_playlist 1 -seg_duration 3 -streaming 1 \
-strict experimental -lhls 1 -remove_at_exit 0 -master_m3u8_publish_rate 3 \
-f dash -method PUT -http_persistent 1 https://example.com/manifest.mpd
Any help would be highly appreciated.
Reference:
https://www.ffmpeg.org/ffmpeg-protocols.html#http

telegraf - exec plugin - aws ec2 ebs volume info - metric parsing error, reason: [missing fields] or Errors encountered: [ invalid number]

Machine - CentOS 7.2 or Ubuntu 14.04/16.xx
Telegraf version: 1.0.1
Python version: 2.7.5
Telegraf supports an input plugin named exec. First, please see EXAMPLE 2 in the README doc there. I can't use the JSON format as it only consumes numeric values for metrics. As per the docs:
If using JSON, only numeric values are parsed and turned into floats. Booleans and strings will be ignored.
So, the idea is simple: you specify a script in the exec plugin section which should emit some meaningful info (in either JSON or Influx data format; Influx in my case, as I have some metrics which contain non-numeric values) that you would want to catch/show somewhere in a cool dashboard, for example a Wavefront dashboard.
Basically, one can use these metrics, tags, and the sources they come from to find out various info about memory, CPU, disk, networking, and other meaningful things, and also to create alerts if something unwanted happens.
OK, I came up with this Python script:
#!/usr/bin/python
# sudo pip install boto3 if you don't have it on your machine.
import boto3

def generate(key, value):
    """
    Creates a nicely formatted Key(Value) item for output
    """
    return '{}="{}"'.format(key, value)
    #return '{}={}'.format(key, value)

def main():
    ec2 = boto3.resource('ec2', region_name="us-west-2")
    volumes = ec2.volumes.all()
    for vol in volumes:
        # You don't need to wrap everything in `str` unless it is not a string.
        # By default most things will come back as a string unless they are
        # very obviously not (complex, date time, etc), but since we are
        # printing these (and formatting them into strings) the cast to string
        # will be implicit and we don't need to make it explicit.
        # vol is already a fully returned volume; you are essentially DOUBLING
        # your API calls when you do this:
        #iv = ec2.Volume(vol.id)
        output_parts = [
            # Volume level details
            generate('create_time', vol.create_time),
            generate('availability_zone', vol.availability_zone),
            generate('volume_id', vol.volume_id),
            generate('volume_type', vol.volume_type),
            generate('state', vol.state),
            generate('size', vol.size),
            generate('iops', vol.iops),
            generate('encrypted', vol.encrypted),
            generate('snapshot_id', vol.snapshot_id),
            generate('kms_key_id', vol.kms_key_id),
        ]
        for _ in vol.attachments:
            # Will get any attachments; since it is a list,
            # we should write this to handle MULTIPLE attachments
            output_parts.extend([
                generate('InstanceId', _.get('InstanceId')),
                generate('InstanceVolumeState', _.get('State')),
                generate('DeleteOnTermination', _.get('DeleteOnTermination')),
                generate('Device', _.get('Device')),
            ])
        # only process when there are tags to process
        if vol.tags:
            for _ in vol.tags:
                # Get all of the tags
                output_parts.extend([
                    generate(_.get('Key'), _.get('Value')),
                ])
        # output everything at once..
        print ','.join(output_parts)

if __name__ == '__main__':
    main()
This script talks to AWS EC2 EBS volumes, outputs all the values it can find (usually what you see in the AWS EC2 EBS volume console), and formats that info into a meaningful CSV which I'm redirecting to a .csv log file.
We don't want to run the Python script all the time (AWS API limits / cost factor).
So, once the .csv file is created, I wrote this small shell script, which I'll set in Telegraf's exec plugin section.
Shell script /tmp/aws-vol-info.sh set in Telegraf exec plugin is:
#!/bin/bash
cat /tmp/aws-vol-info.csv
Telegraf configuration file created using exec plugin (/etc/telegraf/telegraf.d/exec-plugin-aws-info.conf):
#--- https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
[[inputs.exec]]
commands = ["/tmp/aws-vol-info.sh"]
## Timeout for each command to complete.
timeout = "5s"
# Data format to consume.
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "influx"
name_suffix = "_telegraf_execplugin"
I tweaked the .py script (its generate function) to produce the following three types of output formats (.csv file) and wanted to test how telegraf would handle this data before enabling the config file (/etc/telegraf/telegraf.d/catch-aws-ebs-info.conf) and restarting the telegraf service.
Format 1 (with double quotes " wrapped around every value):
create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_id="vol-058e1d47dgh721121",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"
Testing the telegraf configuration against the config directory gives me the following error.
Command: $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
[vagrant@myvagrant ~]$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 00:37:48 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:37:48Z E! Errors encountered: [ metric parsing error, reason: [invalid field format], buffer: [create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_id="vol-058e1d47dgh721121",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"], index: [372]]
[vagrant@myvagrant ~]$
Format 2 (without any double quotes):
create_time=2017-01-09 23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90] secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app
I get a similar error while testing Telegraf's configuration for the exec plugin:
2017/03/10 00:45:01 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:45:01Z E! Errors encountered: [ metric parsing error, reason: [invalid value], buffer: [create_time=2017-01-09 23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90] secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app], index: [63]]
Format 3 (this format doesn't have any double quotes or space characters in the values; spaces are substituted with the _ character):
create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app
Still didn't work; I get the same kind of error:
[vagrant@myvagrant ~]$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 00:50:30 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:50:30Z E! Errors encountered: [ metric parsing error, reason: [missing fields], buffer: [create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app], index: [476]]
Format 4: if I follow the Influx line protocol as per this page: https://docs.influxdata.com/influxdb/v1.2/write_protocols/line_protocol_tutorial/
awsebs,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1
I'm getting this error:
[vagrant@myvagrant ~]$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 02:34:30 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T02:34:30Z E! Errors encountered: [ invalid number]
How can I get rid of this error and get telegraf to work with the exec plugin (which runs the .sh script)?
Other Info:
The Python script will run once or twice per day (via cron), and telegraf will run every minute (to run the exec plugin, which runs the .sh script, which cats the .csv file so that telegraf can consume it in Influx data format).
https://galaxy.ansible.com/wavefrontHQ/wavefront-ansible/
https://github.com/influxdata/telegraf/issues/2525
It seems like the rules are very strict; I should have looked more closely.
The syntax of the output of any program that you want Telegraf to consume MUST follow the Influx line protocol format shown below, along with all the RULES that come with it.
For example:
weather,location=us-midwest temperature=82 1465839830100400200
+-----------+--------+-+---------+-+---------+
|measurement|,tag_set| |field_set| |timestamp|
+-----------+--------+-+---------+-+---------+
You can read more about measurements, tags, fields and the optional timestamp here: https://docs.influxdata.com/influxdb/v1.2/write_protocols/line_protocol_tutorial/
Important rules are:
1) There must be a , and no space between measurement and tag set.
2) There must be a space between tag set and field set.
3) For tag keys, tag values, and field keys, always use a backslash character \ to escape any character that needs escaping in measurement names, tag or field keys, and their values!
4) You can't escape \ with \
5) Line Protocol handles emojis with no problem :)
6) TAG / TAG set (tags, comma separated) is OPTIONAL.
7) FIELD / FIELD set (fields, comma separated) - At least ONE is required per line.
8) TIMESTAMP (last value shown in the format) is OPTIONAL.
9) VERY IMPORTANT QUOTING rules are below:
a) Never double or single quote the timestamp. It's not valid line protocol. '123123131312313' or "1231313213131" won't work even if that number is valid.
b) Never single quote field values (even if they’re strings!). It’s also not valid Line Protocol. i.e. fieldname='giga' won't work.
c) Do not double or single quote measurement names, tag keys, tag values, and field keys. NOTE: this does say tag values too, so be careful.
d) Do not double quote field values that are only floats, integers, or booleans, otherwise InfluxDB will assume those values are strings.
e) Do double quote field values that are strings.
f) AND the MOST IMPORTANT one (which will save you from going BALD): if a FIELD value is set without double quotes on one line because you think it's an integer or float (for example, the fields size or iops), and anywhere else in the file that telegraf reads/parses via the exec plugin that same field has a non-numeric value set (i.e. a string), then you'll get the error message Errors encountered: [ invalid number.
So the RULE to fix it is: if any possible FIELD value for a FIELD key is a string, then you MUST wrap it in " on EVERY line. It doesn't matter that the value is 1, 200, or 1.5 on some lines (for example, iops can be 1 or 5) when on some other line that value can be None.
Error message: Errors encountered: [ invalid number
[vagrant@myvagrant ~]$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 11:13:18 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T11:13:18Z E! Errors encountered: [ invalid number metric parsing error, reason: [invalid field format], buffer: [awsebsvol,host=myvagrant ], index: [25]]
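To make rule f) concrete, here is a minimal sketch of a quoting helper (the function field() and the ALWAYS_QUOTED set are hypothetical, separate from the final script below) that keeps quoting consistent per field key:

ALWAYS_QUOTED = {'iops', 'kms_key_id', 'snapshot_id'}

def field(key, value):
    # Fields that can ever hold a non-numeric value (e.g. iops may be None)
    # must be double-quoted on EVERY line, or Telegraf reports "invalid number".
    if key in ALWAYS_QUOTED or not isinstance(value, (int, float)):
        return '{}="{}"'.format(key, value)
    return '{}={}'.format(key, value)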
So, after all this learning, it's clear that first I was missing the Influx Line protocol format and ALSO the RULES!!
Now, the output that I want my Python script to generate should look like this (according to the Influx line protocol). You can just change the .sh file to use sed "s/^/awsec2ebs,/" or sed "s/^/awsec2ebs,sourcehost=$(hostname) /" (note the space before the closing sed / character), and then you can have " around any key=value pair. I changed the .py file to not use " for the size field (iops must stay quoted, since it can be None).
Anyways, if the output is something like this:
awsec2ebs,volume_id=vol-058e1d47dgh721121 create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"
In the above final working solution, I created a measurement named awsec2ebs, then put a , between the measurement and the tag key volume_id; for the tag value I did NOT use any ' or " quotes, and then I put a space character between the tag set and the field set (I only wanted one tag for now, but you can have more tags, comma separated, following the rules).
Finally I ran the command, and it worked like a shenzi!
$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 03:33:54 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
> awsec2ebs_telegraf_execplugin,volume_id=vol-058e1d47dgh721121,host=myvagrant volume_type="gp2",iops="100",kms_key_id="None",role="app",size="8",encrypted="False",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",Name="[company-2b-app90] secondary",snapshot_id="snap-06h1h1b91bh662avn",DeleteOnTermination="True",mirror="secondary",cluster="company",autoscale="true",high_availability="1",create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",state="in-use",Device="/dev/sda1",hostname="company-2b-app90-i-0jjb1boop26f42f50" 1489116835000000000
[vagrant@myvagrant ~]$ echo $?
0
In the above example, size is the only field which will always be a number/numeric value, so we don't need to wrap it with ", but it's up to you. Recall the MOST IMPORTANT rule f) above and the error it generates.
So the final Python file is:
#!/usr/bin/python
# Do `sudo pip install boto3` first
import boto3

def generate(key, value, qs, qe):
    """
    Creates a nicely formatted Key(Value) item for output
    """
    return '{}={}{}{}'.format(key, qs, value, qe)

def main():
    ec2 = boto3.resource('ec2', region_name="us-west-2")
    volumes = ec2.volumes.all()
    for vol in volumes:
        # You don't need to wrap everything in `str` unless it is not a string.
        # By default most things will come back as a string unless they are
        # very obviously not (complex, date time, etc), but since we are
        # printing these (and formatting them into strings) the cast to string
        # will be implicit and we don't need to make it explicit.
        # vol is already a fully returned volume; you are essentially DOUBLING
        # your API calls when you do this:
        #iv = ec2.Volume(vol.id)
        output_parts = [
            # Volume level details
            generate('volume_id', vol.volume_id, '"', '"'),
            generate('create_time', vol.create_time, '"', '"'),
            generate('availability_zone', vol.availability_zone, '"', '"'),
            generate('volume_type', vol.volume_type, '"', '"'),
            generate('state', vol.state, '"', '"'),
            generate('size', vol.size, '', ''),
            # The following vol.iops variable can be a number or None, so you
            # must wrap it with double quotes, otherwise the "invalid number"
            # error will come.
            generate('iops', vol.iops, '"', '"'),
            generate('encrypted', vol.encrypted, '"', '"'),
            generate('snapshot_id', vol.snapshot_id, '"', '"'),
            generate('kms_key_id', vol.kms_key_id, '"', '"'),
        ]
        for _ in vol.attachments:
            # Will get any attachments; since it is a list,
            # we should write this to handle MULTIPLE attachments
            output_parts.extend([
                generate('InstanceId', _.get('InstanceId'), '"', '"'),
                generate('InstanceVolumeState', _.get('State'), '"', '"'),
                generate('DeleteOnTermination', _.get('DeleteOnTermination'), '"', '"'),
                generate('Device', _.get('Device'), '"', '"'),
            ])
        # only process when there are tags to process
        if vol.tags:
            for _ in vol.tags:
                # Get all of the tags
                output_parts.extend([
                    generate(_.get('Key'), _.get('Value'), '"', '"'),
                ])
        # output everything at once..
        print ','.join(output_parts)

if __name__ == '__main__':
    main()
Final aws-vol-info.sh is:
#!/bin/bash
cat aws-vol-info.csv | sed "s/^/awsebsvol,host=`hostname|head -1|sed "s/[ \t][ \t]*/_/g"` /"
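For reference, here is a Python equivalent of that sed pipeline (just an illustration of what the prefixing does, not part of the original setup; the helper emit_lines is hypothetical): each CSV line becomes measurement,tag_set, then a space, then the field set:

import socket

def emit_lines(path='aws-vol-info.csv'):
    # Same effect as: sed "s/^/awsebsvol,host=<hostname> /"
    host = socket.gethostname().replace(' ', '_')
    with open(path) as f:
        for line in f:
            print('awsebsvol,host={} {}'.format(host, line.rstrip('\n')))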
The final Telegraf exec plugin config file (/etc/telegraf/telegraf.d/exec-plugin-aws-info.conf; you can give it any name ending in .conf) is:
#--- https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
[[inputs.exec]]
commands = ["/some/valid/path/where/csvfileexists/aws-vol-info.sh"]
## Timeout for each command to complete.
timeout = "5s"
# Data format to consume.
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "influx"
name_suffix = "_telegraf_exec"
Run the following, and everything will work now:
$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec

How to execute the ffmpeg thumbnail extraction command using subprocess in Django?

We are using the following command to extract thumbnail images from a video:
ffmpeg -i low.mkv -vf thumbnail=10,setpts=N/TB -r 1 -vframes 10 inputframes%03d.png
This command works absolutely fine in the terminal, but it gives an error when we run the same thing with subprocess in Django.
Our aim is to generate 10 thumbnails from a video of any length.
Here is the code:
vaild_fps = "'thumbnail=10,setpts=N/TB -r 1 -vframes 10'"
subprocess.call([settings.FFMPEG_PATH,
                 '-i',
                 input_file_path,
                 '-vf',
                 vaild_fps,
                 thumbnail_output_file_path,
                 ])
Error No such filter: 'thumbnail=10,setpts=N/TB -r 1 -vframes 10'
Error opening filters!
Fortunately, I was able to crack it. Here is the solution:
subprocess.call([settings.FFMPEG_PATH,
                 '-i',
                 input_file_path,
                 '-vf',
                 'thumbnail=10,setpts=N/TB',
                 '-r',
                 '1',
                 '-vframes',
                 '10',
                 thumbnail_output_file_path,
                 ])
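Why the first attempt failed: subprocess passes each list element to ffmpeg as exactly one argument, so the whole quoted string 'thumbnail=10,setpts=N/TB -r 1 -vframes 10' reached -vf as a single (invalid) filter name; -r and -vframes are separate output options, not part of the filtergraph. If you prefer to keep the known-good terminal command in one string, an alternative sketch (paths are illustrative) is to split it the way a shell would, using shlex:

import shlex
import subprocess

# The exact command that works in the terminal, split into an argv list.
cmd = 'ffmpeg -i low.mkv -vf thumbnail=10,setpts=N/TB -r 1 -vframes 10 inputframes%03d.png'
subprocess.call(shlex.split(cmd))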