"YYYYMMDD": Invalid identifier error while trying through SQOOP - hdfs

Please help me out with the error below. The query works fine when run directly in Oracle, but fails when run through a Sqoop import.
version : Hadoop 0.20.2-cdh3u4 and Sqoop 1.3.0-cdh3u5
sqoop import $SQOOP_CONNECTION_STRING
--query 'SELECT st.reference,u.unit,st.reading,st.code,st.read_id,st.avg FROM reading st, tunit tu, unit u
WHERE st.reference=tu.reference and st.number IN ('218730','123456') and tu.unit_id = u.unit_id
and u.enrolled='Y' AND st.reading <= latest_off and st.reading >= To_Date('20120701','yyyymmdd')
and st.type_id is null and $CONDITIONS'
--split-by u.unit
--target-dir /sample/input
Error:
12/10/10 09:33:21 ERROR manager.SqlManager: Error executing statement:
java.sql.SQLSyntaxErrorException: ORA-00904: "YYYYMMDD": invalid identifier
followed by....
12/10/10 09:33:21 ERROR sqoop.Sqoop: Got exception running Sqoop:
java.lang.NullPointerException
Thanks & Regards,
Tamil

I believe that the problem is actually on the Bash side (or your command line interpreter). Your query contains, for example, the following fragment: u.enrolled='Y'. Please notice that you're escaping character constants with single quotes. You also seem to be putting the entire query into additional single quotes: --query 'YOUR QUERY', which results in something like --query '...u.enrolled='Y'...'. However, such a string is stripped by bash to '...u.enrolled=Y...'. You can verify that by using "echo" to see what exactly bash will do with your string before it is passed to Sqoop.
jarcec@jarcec-thinkpad ~ % echo '...u.enrolled='Y'...'
...u.enrolled=Y...
I would recommend either escaping all single quotes (\') inside your query or choosing double quotes for the entire query. Please note that the latter option will require escaping $ characters with a backslash (\$).
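For example, a minimal sketch of the double-quote variant (your original query, with line continuations added and \$ keeping bash from expanding the $CONDITIONS placeholder):
sqoop import $SQOOP_CONNECTION_STRING \
--query "SELECT st.reference,u.unit,st.reading,st.code,st.read_id,st.avg FROM reading st, tunit tu, unit u WHERE st.reference=tu.reference AND st.number IN ('218730','123456') AND tu.unit_id = u.unit_id AND u.enrolled='Y' AND st.reading <= latest_off AND st.reading >= To_Date('20120701','yyyymmdd') AND st.type_id IS NULL AND \$CONDITIONS" \
--split-by u.unit \
--target-dir /sample/input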

Related

Gradle copy task expand yaml file escape whole string

I have the following gradle task:
processResources {
    inputs.properties(project.properties.findAll { it.value instanceof String })
    filesMatching("**/*.yaml") {
        filteringCharset = 'UTF-8'
        expand project.properties
    }
}
that I use to process a spring boot application.yaml file that contains variable placeholders.
How can I escape a whole log pattern without escaping every single special character, thus keeping the pattern clean?
application.yaml:
logging:
  pattern:
    console: %clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
Tried using slashy strings with no success:
1. Tried:
/%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}/
Error:
Caused by: groovy.lang.GroovyRuntimeException: Failed to parse template script (your template may contain an error or be trying to use expressions not currently supported): startup failed:
SimpleTemplateScript8.groovy: 138: expecting '}', found 'HH' @ line 138, column 60.
ATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.S
^
2. Tried:
$/%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}/$
Error:
Caused by: groovy.lang.GroovyRuntimeException: Failed to parse template script (your template may contain an error or be trying to use expressions not currently supported): startup failed:
SimpleTemplateScript10.groovy: 138: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" @ line 138, column 15.
console: $/%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}/$
^
Given that your YAML file contains characters which are interpreted as tokens by the expand method (which relies on Groovy's SimpleTemplateEngine), you should use an alternative such as the filter method.
The documentation shows a number of examples of filter. By using the underlying Ant ReplaceTokens you might have better luck, as it uses @token@ as the notation.

Issue with filter syntax in AWS Tools for PowerShell Core

I wrote a PowerShell script that gets a filtered list of cognito-idp identities using the AWS CLI. However, I wanted to make this a Lambda script and realized that I could not use the AWS CLI and instead needed to use the AWS Tools for PowerShell Core module.
When I use the AWS CLI command
aws cognito-idp list-users --user-pool-id $user_pool_id --filter 'email=\"foo@bar.com\"'
I get the expected result.
When I use the equivalent cmdlet from the module
Get-CGIPUserList -UserPoolId $user_pool_id -Region $region -Filter 'email=\"foo@bar.com\"'
I get a filter parsing error
Get-CGIPUserList : One or more errors occurred. (Error while parsing filter.)
At line:1 char:9
+ Get-CGIPUserList -UserPoolId "****" -Region "u ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Amazon.PowerShe...PUserListCmdlet:GetCGIPUserListCmdlet) [Get-CGIPUserList], InvalidOperationException
+ FullyQualifiedErrorId : System.AggregateException,Amazon.PowerShell.Cmdlets.CGIP.GetCGIPUserListCmdlet
According to the module reference here:
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-CGIPUserList.html the syntax for the filter parameter should be the same. What am I doing wrong?
The PowerShell module is failing to parse your filter string 'email=\"foo@bar.com\"' because of the escaped double quotes.
Simply remove them and you should get past this error, as the single quote ' in PowerShell expresses content as a string literal:
'email="foo@bar.com"'
You could also wrap your filter string in double quotes ". You would generally only need to do this if your string contained a PowerShell variable that you would like to interpolate. In that case you would need to replace the \ escape character with PowerShell's escape character ` like so:
"email=`"foo@bar.com`""

How to use Regular Expression in AWS CLI Filter

I am using AWS Command Line Interface (CLI) to list some AMI Images from AWS.
The Name of an Image is like:
XY_XYZ_Docker_1.13_XYZ_XXYY
When using
aws ec2 describe-images --filters 'Name=name,Values="*_Docker_1.13_*"'
it works as expected.
Now I want to use a regular expression for the name filter instead of a static value.
In the AWS docs I read that filtering by regex is possible.
My approach is:
1:
aws ec2 describe-images --filters 'Name=name,Values="[_]Docker[_][0-9][.][0-9]{2}[_]"'
The result is always null for this. I tried different ways of quoting the RegEx.
2:
[_]Docker[_][0-9][.][0-9]{2}[_]
(without quotes) leads to
Error parsing parameter '--filters': Expected: ',', received: 'D' for input:
Name=name,Values=[]Docker[][0-9][.][0-9]{2}[_]
3:
*[_]Docker[_][0-9][.][0-9]{2}[_]*
(with Asterisk) leads to
Error parsing parameter '--filters': Expected: ',', received: ']' for input:
Name=name,Values=[_]Docker[_][0-9][.][0-9]{2}[_]
I wasn't able to find whether JMESPath or the --filters flag supports regex, so instead I just piped the output to Python to run it through regex.
aws ec2 describe-images --filters 'Name=name,Values="*Docker*"' | \
python -c '
import json, sys, re

obj = json.load(sys.stdin)
matched_images = {"Images": []}
for image in obj["Images"]:
    if len(re.findall(r"[Dd]ocker\s?[0-9][.][0-9]{2}", image["Name"])) > 0:
        matched_images["Images"].append(image)
print(json.dumps(matched_images))
'
You can pipe the output (which is just a JSON string) to your next bash command if needed with a pipe character following the closing quote. Maybe this can address concerns with using grep, since it returns a JSON string instead of regular text.
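Alternatively, a hedged sketch of the same filtering with jq instead of Python (assuming jq is available; its test() function supports regex), which likewise keeps the output as JSON:
aws ec2 describe-images --filters 'Name=name,Values="*Docker*"' | \
jq '{Images: [.Images[] | select(.Name | test("[Dd]ocker\\s?[0-9][.][0-9]{2}"))]}'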
See the gist below.
It covers:
search ECR images sorted descending by imagePushDate
selecting the tag that meets a Regex criteria
using that to replace a key/value pair in a yaml
https://gist.github.com/pprogrammingg/69e7c85abede9822f2480e9b5e1e66fd

telegraf - exec plugin - aws ec2 ebs volume info - metric parsing error, reason: [missing fields] or Errors encountered: [ invalid number]

Machine - CentOS 7.2 or Ubuntu 14.04/16.xx
Telegraf version: 1.0.1
Python version: 2.7.5
Telegraf supports an input plugin named exec. First, please see Example 2 in the README doc there. I can't use the JSON format as it only consumes numeric values for metrics. As per the docs:
If using JSON, only numeric values are parsed and turned into floats. Booleans and strings will be ignored.
So, the idea is simple: you specify a script in the exec plugin section, which should emit some meaningful info (in either JSON or, in my case, influx data format, since I have some metrics which contain non-numeric values) that you'd want to catch/show somewhere in a cool dashboard, for example a Wavefront dashboard.
Basically, one can use these metrics, tags, and the sources these metrics come from to find out various info about memory, CPU, disk, networking, and other meaningful things, and also to create alerts if something unwanted happens.
OK, I came up with this python script available here:
#!/usr/bin/python
# sudo pip install boto3 if you don't have it on your machine.
import boto3


def generate(key, value):
    """
    Creates a nicely formatted Key(Value) item for output
    """
    return '{}="{}"'.format(key, value)
    #return '{}={}'.format(key, value)


def main():
    ec2 = boto3.resource('ec2', region_name="us-west-2")
    volumes = ec2.volumes.all()
    for vol in volumes:
        # You don't need to wrap everything in `str` unless it is not a string.
        # By default most things will come back as a string unless they are
        # very obviously not (complex, datetime, etc.), but since we are
        # printing these (and formatting them into strings) the cast to string
        # will be implicit and we don't need to make it explicit.
        # vol is already a fully returned volume; you are essentially DOUBLING
        # your API calls when you do this:
        #iv = ec2.Volume(vol.id)
        output_parts = [
            # Volume level details
            generate('create_time', vol.create_time),
            generate('availability_zone', vol.availability_zone),
            generate('volume_id', vol.volume_id),
            generate('volume_type', vol.volume_type),
            generate('state', vol.state),
            generate('size', vol.size),
            generate('iops', vol.iops),
            generate('encrypted', vol.encrypted),
            generate('snapshot_id', vol.snapshot_id),
            generate('kms_key_id', vol.kms_key_id),
        ]
        for _ in vol.attachments:
            # Will get any attachments, and since it is a list
            # we should write this to handle MULTIPLE attachments
            output_parts.extend([
                generate('InstanceId', _.get('InstanceId')),
                generate('InstanceVolumeState', _.get('State')),
                generate('DeleteOnTermination', _.get('DeleteOnTermination')),
                generate('Device', _.get('Device')),
            ])
        # only process when there are tags to process
        if vol.tags:
            for _ in vol.tags:
                # Get all of the tags
                output_parts.extend([
                    generate(_.get('Key'), _.get('Value')),
                ])
        # output everything at once..
        print ','.join(output_parts)


if __name__ == '__main__':
    main()
This script talks to AWS EC2 EBS volumes, outputs all the values it can find (usually what you see in the AWS EC2 EBS volume console), and formats that info into a meaningful CSV which I'm redirecting to a .csv log file.
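For instance, a hypothetical invocation (the .py file name is not given in the post; the .csv path matches the shell script below):
python aws-vol-info.py > /tmp/aws-vol-info.csv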
We don't want to run the python script all the time (AWS API limits / cost factor).
So, once the .csv file is created, I created this small shell script which I'll set in Telegraf's exec plugin's section.
Shell script /tmp/aws-vol-info.sh set in Telegraf exec plugin is:
#!/bin/bash
cat /tmp/aws-vol-info.csv
Telegraf configuration file created using exec plugin (/etc/telegraf/telegraf.d/exec-plugin-aws-info.conf):
#--- https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
[[inputs.exec]]
  commands = ["/tmp/aws-vol-info.sh"]
  ## Timeout for each command to complete.
  timeout = "5s"
  # Data format to consume.
  # NOTE json only reads numerical measurements, strings and booleans are ignored.
  data_format = "influx"
  name_suffix = "_telegraf_execplugin"
I tweaked the .py file (the generate function in the Python script) to produce the following output formats (.csv file) and wanted to test how telegraf would handle this data before enabling the config file (/etc/telegraf/telegraf.d/catch-aws-ebs-info.conf) and restarting the telegraf service.
Format 1: (with every value wrapped in double quotes ")
create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_id="vol-058e1d47dgh721121",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"
Testing telegraf configuration on the telegraf directory gives me the following error.
Command: $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
[vagrant@myvagrant ~] $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 00:37:48 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:37:48Z E! Errors encountered: [ metric parsing error, reason: [invalid field format], buffer: [create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_id="vol-058e1d47dgh721121",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"], index: [372]]
[vagrant@myvagrant ~] $
Format 2: (without any " double quotes)
create_time=2017-01-09 23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90] secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app
Getting an error again while testing Telegraf's configuration for the exec plugin:
2017/03/10 00:45:01 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:45:01Z E! Errors encountered: [ metric parsing error, reason: [invalid value], buffer: [create_time=2017-01-09 23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90] secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app], index: [63]]
Format 3: (this format has no double quotes " and no space characters in the values; spaces are substituted with the _ character)
create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app
Still didn't work; getting the same error:
[vagrant@myvagrant ~] $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 00:50:30 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T00:50:30Z E! Errors encountered: [ metric parsing error, reason: [missing fields], buffer: [create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app], index: [476]]
Format 4: If I follow influx line protocol as per this page: https://docs.influxdata.com/influxdb/v1.2/write_protocols/line_protocol_tutorial/
awsebs,Name=[company-2b-app90]_secondary,hostname=company-2b-app90-i-0jjb1boop26f42f50,high_availability=1,mirror=secondary,cluster=company,autoscale=true,role=app create_time=2017-01-09_23:24:29.428000+00:00,availability_zone=us-east-2b,volume_id=vol-058e1d47dgh721121,volume_type=gp2,state=in-use,size=8,iops=100,encrypted=False,snapshot_id=snap-06h1h1b91bh662avn,kms_key_id=None,InstanceId=i-0jjb1boop26f42f50,InstanceVolumeState=attached,DeleteOnTermination=True,Device=/dev/sda1
I'm getting this error:
[vagrant@myvagrant ~] $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 02:34:30 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T02:34:30Z E! Errors encountered: [ invalid number]
HOW can I get rid of this error and get telegraf to work with exec plugin (which runs the .sh script)?
Other Info:
Python script will run once/twice per day (via cron) and telegraf will run every 1 minute (to run exec plugin - which runs .sh script - which will cat the .csv file so that telegraf can consume it in influx data format).
https://galaxy.ansible.com/wavefrontHQ/wavefront-ansible/
https://github.com/influxdata/telegraf/issues/2525
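For example, a hypothetical crontab entry for that twice-per-day schedule (the .py file name and times are placeholders):
# regenerate the CSV at 00:05 and 12:05; telegraf's exec plugin cats it every minute
5 0,12 * * * /usr/bin/python /some/valid/path/aws-vol-info.py > /tmp/aws-vol-info.csv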
It seems like the rules are very strict; I should have looked more closely.
The syntax of the output of any program that you want telegraf to consume MUST follow the INFLUX LINE PROTOCOL format shown below, and also all the RULES which come with it.
For ex:
weather,location=us-midwest temperature=82 1465839830100400200
   |    -------------------- --------------  |
   |             |                 |         |
   |             |                 |         |
+-----------+--------+-+---------+-+---------+
|measurement|,tag_set| |field_set| |timestamp|
+-----------+--------+-+---------+-+---------+
You can read more about what's measurement, tag, field and optional(timestamp) here: https://docs.influxdata.com/influxdb/v1.2/write_protocols/line_protocol_tutorial/
Important rules are:
1) There must be a , and no space between measurement and tag set.
2) There must be a space between tag set and field set.
3) Always use a backslash character \ to escape characters in the measurement name, tag keys, tag values, field keys, and their values!
4) You can't escape \ with \
5) Line Protocol handles emojis with no problem :)
6) TAG / TAG set (tags, comma separated) is OPTIONAL.
7) FIELD / FIELD set (fields, comma separated) - At least ONE is required per line.
8) TIMESTAMP (last value shown in the format) is OPTIONAL.
9) VERY IMPORTANT QUOTING rules are below:
a) Never double or single quote the timestamp. It’s not valid Line Protocol. '123123131312313' or "1231313213131" won't work even if the number itself is valid.
b) Never single quote field values (even if they’re strings!). It’s also not valid Line Protocol. i.e. fieldname='giga' won't work.
c) Do not double or single quote measurement names, tag keys, tag values, and field keys. NOTE: THIS does say !!! tag values !!!! so careful.
d) Do not double quote field values that are ONLY in floats, integers, or booleans format, otherwise InfluxDB will assume that those values are strings.
e) Do double quote field values that are strings.
f) AND the MOST IMPORTANT one (which will save you from going BALD): if a FIELD value is set without double quotes on one line, i.e. you think it's an integer or float value (for example, the size or iops fields), but anywhere else in the file that telegraf will read/parse using the exec plugin that same field has a non-numeric value (i.e. a string), then you'll get the error message Errors encountered: [ invalid number]. See the sketch after the error output below.
So to fix it, the RULE is: if any possible FIELD value for a FIELD key is a string, then you MUST wrap it in " on every line. It doesn't matter that the value is 1, 200, or 1.5 on some lines (for example, iops can be 1 or 5) when on other lines that value can be None.
Error message: Errors encountered: [ invalid number
[vagrant@myvagrant ~] $ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 11:13:18 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
2017-03-10T11:13:18Z E! Errors encountered: [ invalid number metric parsing error, reason: [invalid field format], buffer: [awsebsvol,host=myvagrant ], index: [25]]
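To make rule f concrete, here is a minimal sketch (hypothetical measurement, tag, and field values) of lines the parser would accept and the mix that triggers the error:
# OK on every line: iops can be "100" or "None", so it is ALWAYS double-quoted;
# size is always numeric, so it can stay bare
awsdemo,volume_id=vol-111 size=8,iops="100",state="in-use"
awsdemo,volume_id=vol-222 size=16,iops="None",state="available"
# Asking for trouble (rule f): iops is a bare number here, but on another line
# it would be the string None, and telegraf then reports [ invalid number]
awsdemo,volume_id=vol-333 size=8,iops=100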
So, after all this learning, it's clear that first I was missing the Influx Line protocol format and ALSO the RULES!!
Now, the output that I want my python script to generate should look like this (according to the INFLUX LINE PROTOCOL). You can just change the .sh file and use sed "s/^/awsec2ebs,/", or do sed "s/^/awsec2ebs,sourcehost=$(hostname) /" (note the space before the closing sed / character), and then you can have " around any key=value pair. I did change the .py file to not use " for the size field (iops stays quoted, since it can be None).
Anyways, if the output is something like this:
awsec2ebs,volume_id=vol-058e1d47dgh721121 create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",volume_type="gp2",state="in-use",size="8",iops="100",encrypted="False",snapshot_id="snap-06h1h1b91bh662avn",kms_key_id="None",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",DeleteOnTermination="True",Device="/dev/sda1",Name="[company-2b-app90] secondary",hostname="company-2b-app90-i-0jjb1boop26f42f50",high_availability="1",mirror="secondary",cluster="company",autoscale="true",role="app"
In the above final working solution, I created a measurement named awsec2ebs, then put a , between the measurement and the tag key volume_id. For the tag value I did NOT use any ' or " quotes, and then I put a space character between the tag set and the field set (I just wanted one tag for now; otherwise you can have more tags, comma separated, following the rules).
Finally I ran the command, which worked like a shenzi:
$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec
2017/03/10 03:33:54 I! Using config file: /etc/telegraf/telegraf.conf
* Plugin: inputs.exec, Collection 1
> awsec2ebs_telegraf_execplugin,volume_id=vol-058e1d47dgh721121,host=myvagrant volume_type="gp2",iops="100",kms_key_id="None",role="app",size="8",encrypted="False",InstanceId="i-0jjb1boop26f42f50",InstanceVolumeState="attached",Name="[company-2b-app90] secondary",snapshot_id="snap-06h1h1b91bh662avn",DeleteOnTermination="True",mirror="secondary",cluster="company",autoscale="true",high_availability="1",create_time="2017-01-09 23:24:29.428000+00:00",availability_zone="us-east-2b",state="in-use",Device="/dev/sda1",hostname="company-2b-app90-i-0jjb1boop26f42f50" 1489116835000000000
[vagrant@myvagrant ~] $ echo $?
0
In the above example, size is the only field which will always be a number/numeric value, so we don't need to wrap it with ", but it's up to you. Recall the MOST IMPORTANT rule f above and the error it generates.
So final python file is:
#!/usr/bin/python
#Do `sudo pip install boto3` first
import boto3
def generate(key, value, qs, qe):
"""
Creates a nicely formatted Key(Value) item for output
"""
return '{}={}{}{}'.format(key, qs, value, qe)
def main():
ec2 = boto3.resource('ec2', region_name="us-west-2")
volumes = ec2.volumes.all()
for vol in volumes:
# You don't need to wrap everything in `str` unless it is not a string
# By default most things will come back as a string
# unless they are very obviously not (complex, date time, etc)
# but since we are printing these (and formatting them into strings)
# the cast to string will be implicit and we don't need to make it
# explicit
# vol is already a fully returned volume you are essentially DOUBLING
# your API calls when you do this
#iv = ec2.Volume(vol.id)
output_parts = [
# Volume level details
generate('volume_id', vol.volume_id, '"', '"'),
generate('create_time', vol.create_time, '"', '"'),
generate('availability_zone', vol.availability_zone, '"', '"'),
generate('volume_type', vol.volume_type, '"', '"'),
generate('state', vol.state, '"', '"'),
generate('size', vol.size, '', ''),
#The following vol.iops variable can be a number or None so you must wrap it with double quotes otherwise "invalid number" error will come.
generate('iops', vol.iops, '"', '"'),
generate('encrypted', vol.encrypted, '"', '"'),
generate('snapshot_id', vol.snapshot_id, '"', '"'),
generate('kms_key_id', vol.kms_key_id, '"', '"'),
]
for _ in vol.attachments:
# Will get any attachments and since it is a list
# we should write this to handle MULTIPLE attachments
output_parts.extend([
generate('InstanceId', _.get('InstanceId'), '"', '"'),
generate('InstanceVolumeState', _.get('State'), '"', '"'),
generate('DeleteOnTermination', _.get('DeleteOnTermination'), '"', '"'),
generate('Device', _.get('Device'), '"', '"'),
])
# only process when there are tags to process
if vol.tags:
for _ in vol.tags:
# Get all of the tags
output_parts.extend([
generate(_.get('Key'), _.get('Value'), '"', '"'),
])
# output everything at once..
print ','.join(output_parts)
if __name__ == '__main__':
main()
Final aws-vol-info.sh is:
#!/bin/bash
cat aws-vol-info.csv | sed "s/^/awsebsvol,host=`hostname|head -1|sed "s/[ \t][ \t]*/_/g"` /"
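For example (reusing the myvagrant hostname from the prompts above, with a shortened field set), the sed prefix turns each CSV line into a complete line-protocol line:
$ echo 'size=8,state="in-use"' | sed "s/^/awsebsvol,host=$(hostname) /"
awsebsvol,host=myvagrant size=8,state="in-use"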
Final telegraf exec plugin config file (/etc/telegraf/telegraf.d/exec-plugin-aws-info.conf; give it any name ending in .conf) is:
#--- https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec
[[inputs.exec]]
  commands = ["/some/valid/path/where/csvfileexists/aws-vol-info.sh"]
  ## Timeout for each command to complete.
  timeout = "5s"
  # Data format to consume.
  # NOTE json only reads numerical measurements, strings and booleans are ignored.
  data_format = "influx"
  name_suffix = "_telegraf_exec"
Run the following, and everything will work now:
$ telegraf --config-directory=/etc/telegraf --test --input-filter=exec

mongoexport regex unknown option

mongoexport --db ucc_prod /host:myserver /port:27017 --username user1 --password password1 /query:'{copysheet: {$regex: "/^.*pdf/"}}' /out:copysheets.csv --type=csv --fields svOrderId,svItemId --collection copies
gives me the error
2016-09-02T08:17:34.632-0500 error parsing command line options: unknown option "^.*pdf/}}'"
What syntax am I missing here?
You may use
--query "{ 'copysheet': { '$regex': '^.*pdf', '$options':'' }}"
The point is that you should pass the data to the query argument as JSON.
See reference:
--query <JSON>, -q <JSON>
Provides a JSON document as a query that optionally limits the documents returned in the export. Specify JSON in strict format.
Note: on different systems, you might need to swap single with double quotes.
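For example, a sketch of the full command with the rewritten query (keeping the /option: syntax from your original command; note that in bash you would need to protect the $ in $regex and $options, e.g. by putting single quotes on the outside):
mongoexport --db ucc_prod /host:myserver /port:27017 --username user1 --password password1 /query:"{ 'copysheet': { '$regex': '^.*pdf', '$options':'' }}" /out:copysheets.csv --type=csv --fields svOrderId,svItemId --collection copies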