I was trying to add auto-complete behaviour in Sublime Text for all brackets, inverted commas, etc., and to escape them using Tab, but I am getting a parsing error with the following code:
// Auto-complete behaviour for ), ], '', ""
// Bind Enter and Tab to step over the auto-completed character(s)
// -------------------------------------------------------------------------
{ "keys": ["enter"], "command": "move", "args": {"by": "characters", "forward": true}, "context":
    [
        { "key": "following_text", "operator": "regex_contains", "operand": "^[)\\]'\"]", "match_all": true },
        { "key": "preceding_text", "operator": "regex_contains", "operand": "[(['\"]", "match_all": true },
        { "key": "auto_complete_visible", "operator": "equal", "operand": false }
    ]
},
{ "keys": ["tab"], "command": "move", "args": {"by": "characters", "forward": true}, "context":
    [
        { "key": "following_text", "operator": "regex_contains", "operand": "^[)\\]'\"]", "match_all": true },
        { "key": "preceding_text", "operator": "regex_contains", "operand": "[(['\"]", "match_all": true },
        { "key": "auto_complete_visible", "operator": "equal", "operand": false }
    ]
}
but after that I am getting a parsing error as the result.
If you install the PackageDev package for Sublime Text 3, it'll also show you where your errors are with a coloured cursor.
The error is at line 10, character 3, so some line numbers would help with the code you've posted.
I've pasted it into my keymap file (in ST3) and can detect no errors with the code you've posted; is that the entire contents of your .sublime-keymap file?
I can also recommend posting your question on the Sublime Text Forum; they may be able to help you too.
You could be missing a comma after your final curly bracket if there are more keybindings in that file, i.e. the last line should be:
},
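For reference, a complete .sublime-keymap file must be a single top-level JSON array containing all bindings; a minimal sketch wrapping the bindings above (contexts abbreviated here) would look like:

```json
[
    { "keys": ["enter"], "command": "move", "args": {"by": "characters", "forward": true},
      "context": [ /* following_text, preceding_text, auto_complete_visible contexts as above */ ] },
    { "keys": ["tab"], "command": "move", "args": {"by": "characters", "forward": true},
      "context": [ /* same contexts */ ] }
]
```

If the snippet in the question was pasted into the file without that enclosing array, or with a trailing comma after the last binding, Sublime Text would report exactly this kind of parsing error.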
My ListInputSecurityGroup task returns this JSON:
{
    "output": [
        {
            "Arn": "arn:aws:medialive:eu-north-1:xxx:inputSecurityGroup:1977625",
            "Id": "1977625",
            "Inputs": [],
            "State": "IDLE",
            "Tags": {},
            "WhitelistRules": [
                { "Cidr": "5.5.5.5/32" }
            ]
        },
        {
            "Arn": "arn:aws:medialive:eu-north-1:xxx:inputSecurityGroup:5411101",
            "Id": "5411101",
            "Inputs": [],
            "State": "IDLE",
            "Tags": { "use": "some_other_use" },
            "WhitelistRules": [
                { "Cidr": "1.1.1.1/0" }
            ]
        },
        {
            "Arn": "arn:aws:medialive:eu-north-1:xxx:inputSecurityGroup:825926",
            "Id": "825926",
            "Inputs": [ "4011716" ],
            "State": "IN_USE",
            "Tags": { "use": "for_rtmp_pipeline" },
            "WhitelistRules": [
                { "Cidr": "0.0.0.0/0" }
            ]
        }
    ]
}
I want to use OutputPath to extract the InputSecurityGroup with the tag {use: for_rtmp_pipeline}. According to this JSONPath tester, the expression $.output[?(@.Tags.use == for_rtmp_pipeline)] works and returns the 3rd element of the array. But when used in the Step Function itself, or in the Data Flow Simulator, it doesn't return anything. Is this a limitation of the JSONPath engine in AWS, or is there a different syntax? How can I extract the one element I want?
Note that in the tester the searched string has to be in quotes, while in AWS there's no need for quotes.
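If the Step Functions path engine won't accept the filter expression, a common workaround is to do the filtering in a small Lambda task instead and let the state machine invoke that. A minimal sketch of the filter logic in Python (the function name and the abbreviated sample data are illustrative, not part of the original pipeline):

```python
def find_by_tag(groups, key, value):
    """Return the input security groups whose Tags[key] equals value."""
    return [g for g in groups if g.get("Tags", {}).get(key) == value]

# Abbreviated version of the ListInputSecurityGroup output above.
sample = {
    "output": [
        {"Id": "1977625", "Tags": {}},
        {"Id": "5411101", "Tags": {"use": "some_other_use"}},
        {"Id": "825926", "Tags": {"use": "for_rtmp_pipeline"}},
    ]
}

matches = find_by_tag(sample["output"], "use", "for_rtmp_pipeline")
print([g["Id"] for g in matches])  # -> ['825926']
```

Inside a Lambda handler you would apply the same list comprehension to event["output"] and return the single matching element.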
I am new to CloudWatch metric math. I'm trying to plot the percentage of 5xx errors. The following is the widget I constructed in Ruby with a math expression:
{:title=>"5xx Errors", :view=>"timeSeries", :stacked=>false, :start=>"-P7D", :period=>300, :yAxis=>{"left"=>{:min=>0, :max=>100}}, :annotations=>{:horizontal=>[{:color=>"#ff7f00", :label=>"10", :value=>10}, {:color=>"#ff0000", :label=>"50", :value=>50}]}, :metrics=>[[{"id"=>"percent_5xx_error", "expression"=>"100*(5xx/(2xx+3xx+4xx+5xx))", "label"=>"IAD", "accountId"=>"967992492170", "region"=>"us-east-1"}], ["TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", {"accountId"=>"967992492170", "region"=>"us-east-1", "label"=>"IAD", "id"=>"2xx", "stat"=>"Sum", "visible"=>false}], ["TangerineBox", "StatusCode3xx", "ConsoleName", "fsx-console", {"accountId"=>"967992492170", "region"=>"us-east-1", "label"=>"IAD", "id"=>"3xx", "stat"=>"Sum", "visible"=>false}], ["TangerineBox", "StatusCode4xx", "ConsoleName", "fsx-console", {"accountId"=>"967992492170", "region"=>"us-east-1", "label"=>"IAD", "id"=>"4xx", "stat"=>"Sum", "visible"=>false}], ["TangerineBox", "StatusCode5xx", "ConsoleName", "fsx-console", {"accountId"=>"967992492170", "region"=>"us-east-1", "label"=>"IAD", "id"=>"5xx", "stat"=>"Sum", "visible"=>false}]]}
But it's giving me the error MetricWidget/metrics/1 should not have more than 1 item when I try to embed the graph in a wiki.
I opened up the graph in the AWS CloudWatch console too, and this is what I get:
{
    "title": "5xx Errors",
    "view": "timeSeries",
    "stacked": false,
    "period": 300,
    "yAxis": {
        "left": { "min": 0, "max": 100 }
    },
    "annotations": {
        "horizontal": [
            { "color": "#ff7f00", "label": "10", "value": 10 },
            { "color": "#ff0000", "label": "50", "value": 50 }
        ]
    },
    "metrics": [
        [ { "id": "percent_5xx_error", "expression": "100*(5xx/(2xx+3xx+4xx+5xx))", "label": "IAD", "accountId": "967992492170", "region": "us-east-1" } ],
        [ "TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", { "accountId": "967992492170", "label": "IAD", "id": "2xx", "stat": "Sum", "visible": false } ],
        [ ".", "StatusCode3xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "3xx", "stat": "Sum", "visible": false } ],
        [ ".", "StatusCode4xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "4xx", "stat": "Sum", "visible": false } ],
        [ ".", "StatusCode5xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "5xx", "stat": "Sum", "visible": false } ]
    ],
    "width": 1401,
    "height": 754,
    "region": "us-east-1"
}
Can someone please help me debug this issue?
The first thing to look at is the IDs. IDs need to start with a lowercase letter. Try changing 2xx, 3xx, ... to something like m2xx, m3xx, ..., and update the expression to match.
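Applying that rename, the metrics array would look something like this (other widget fields unchanged; the m-prefixed IDs are just one valid choice):

```json
"metrics": [
    [ { "id": "percent_5xx_error", "expression": "100*(m5xx/(m2xx+m3xx+m4xx+m5xx))", "label": "IAD", "accountId": "967992492170", "region": "us-east-1" } ],
    [ "TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", { "accountId": "967992492170", "label": "IAD", "id": "m2xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode3xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m3xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode4xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m4xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode5xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m5xx", "stat": "Sum", "visible": false } ]
]
```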
I am trying to get the percentage of memory used when running a Lambda to display in a graph on CloudWatch. I know there are other ways I can pull the data, but for reasons outside the scope of this question, I would like to stick to using SEARCH to pull the metrics.
I have the following graph
{
    "metrics": [
        [ { "expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MemorySize\"', 'Average', 300)", "id": "m1", "visible": true } ],
        [ { "expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MaxMemoryUsed\"', 'Average', 300)", "id": "m2", "visible": true } ],
        [ { "expression": "m2/m1*100", "label": "percentage memory used", "id": "e1", "stat": "Average" } ]
    ],
    "view": "timeSeries",
    "stacked": false,
    "region": "us-west-2",
    "stat": "Average",
    "period": 300,
    "title": "Memory",
    "yAxis": {
        "left": { "label": "Percentage Usage", "showUnits": false }
    },
    "liveData": false
}
The error I am getting is:
Error in expression e1 [Unsupported operand type(s) for /: '[Array[TimeSeries], Array[TimeSeries]]']
Is there a way to combine the first 2 expressions to give me the percentage memory used?
The results of the SEARCH expressions are arrays of time series, so you cannot apply the operations (+ - * / ^) to them directly. As a workaround, you could transform each expression's array into a single time series (an average across the array, via AVG) and then calculate the percentage from those.
The source should be similar to this:
{
    "metrics": [
        [ { "expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MemorySize\"', 'Average', 300)", "id": "m1", "visible": false } ],
        [ { "expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MaxMemoryUsed\"', 'Average', 300)", "id": "m2", "visible": false } ],
        [ { "expression": "AVG(m1)", "label": "AVGMemorySize", "id": "e1", "visible": false } ],
        [ { "expression": "AVG(m2)", "label": "AVGMaxMemoryUsed", "id": "e2", "visible": false } ],
        [ { "expression": "e2/e1*100", "label": "percentage memory used", "id": "e3", "stat": "Average" } ]
    ],
    "view": "timeSeries",
    "stacked": false,
    "region": "us-west-2",
    "stat": "Average",
    "period": 300,
    "title": "Memory",
    "yAxis": {
        "left": { "label": "Percentage Usage", "showUnits": false }
    },
    "liveData": false
}
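If you want to manage such a widget programmatically rather than through the console, the corrected body can be assembled and pushed with boto3's put_dashboard. A sketch under assumptions (the dashboard name is illustrative; the actual AWS call is left commented out):

```python
import json

# Widget properties mirroring the corrected source above.
widget = {
    "type": "metric",
    "properties": {
        "metrics": [
            [{"expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MemorySize\"', 'Average', 300)", "id": "m1", "visible": False}],
            [{"expression": "SEARCH('{SomeMetricNamespace} MetricName=\"MaxMemoryUsed\"', 'Average', 300)", "id": "m2", "visible": False}],
            [{"expression": "AVG(m1)", "id": "e1", "visible": False}],
            [{"expression": "AVG(m2)", "id": "e2", "visible": False}],
            [{"expression": "e2/e1*100", "label": "percentage memory used", "id": "e3"}],
        ],
        "view": "timeSeries",
        "region": "us-west-2",
        "period": 300,
        "title": "Memory",
    },
}

# put_dashboard takes the dashboard body as a JSON string, not a dict.
body = json.dumps({"widgets": [widget]})

# import boto3
# boto3.client("cloudwatch", region_name="us-west-2").put_dashboard(
#     DashboardName="memory-dashboard", DashboardBody=body)
```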
I would like to upgrade my AWS data pipeline definition to EMR 4.x or 5.x, so I can take advantage of Hive's latest features (version 2.0+), such as CURRENT_DATE and CURRENT_TIMESTAMP, etc.
The change from EMR 3.x to 4.x/5.x requires the use of releaseLabel in EmrCluster, versus amiVersion.
When I use a "releaseLabel": "emr-4.1.0", I get the following error: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Below is my data pipeline definition for EMR 3.x. It works well, so I hope others find it useful (along with the answer for EMR 4.x/5.x), since the common recommendation for importing data into DynamoDB from a file is to use Data Pipeline, yet hardly anyone has put forward a solid, simple working example (say, for a custom data format).
{
    "objects": [
        {
            "type": "DynamoDBDataNode",
            "id": "DynamoDBDataNode1",
            "name": "OutputDynamoDBTable",
            "dataFormat": { "ref": "DynamoDBDataFormat1" },
            "region": "us-east-1",
            "tableName": "testImport"
        },
        {
            "type": "Custom",
            "id": "Custom1",
            "name": "InputCustomFormat",
            "column": [ "firstName", "lastName" ],
            "columnSeparator": "|",
            "recordSeparator": "\n"
        },
        {
            "type": "S3DataNode",
            "id": "S3DataNode1",
            "name": "InputS3Data",
            "directoryPath": "s3://data.domain.com",
            "dataFormat": { "ref": "Custom1" }
        },
        {
            "id": "Default",
            "name": "Default",
            "scheduleType": "ondemand",
            "failureAndRerunMode": "CASCADE",
            "resourceRole": "DataPipelineDefaultResourceRole",
            "role": "DataPipelineDefaultRole",
            "pipelineLogUri": "s3://logs.data.domain.com"
        },
        {
            "type": "HiveActivity",
            "id": "HiveActivity1",
            "name": "S3ToDynamoDBImportActivity",
            "output": { "ref": "DynamoDBDataNode1" },
            "input": { "ref": "S3DataNode1" },
            "hiveScript": "INSERT OVERWRITE TABLE ${output1} SELECT reflect('java.util.UUID', 'randomUUID') as uuid, TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())) as loadDate, firstName, lastName FROM ${input1};",
            "runsOn": { "ref": "EmrCluster1" }
        },
        {
            "type": "EmrCluster",
            "name": "EmrClusterForImport",
            "id": "EmrCluster1",
            "coreInstanceType": "m1.medium",
            "coreInstanceCount": "1",
            "masterInstanceType": "m1.medium",
            "amiVersion": "3.11.0",
            "region": "us-east-1",
            "terminateAfter": "1 Hours"
        },
        {
            "type": "DynamoDBDataFormat",
            "id": "DynamoDBDataFormat1",
            "name": "OutputDynamoDBDataFormat",
            "column": [ "uuid", "loadDate", "firstName", "lastName" ]
        }
    ],
    "parameters": []
}
A sample input file could look like this:
John|Doe
Jane|Doe
Carl|Doe
Bonus: rather than setting CURRENT_DATE in a column, how can I set it as a variable in the hiveScript section? I tried SET loadDate = CURRENT_DATE;\n\n INSERT OVERWRITE..." to no avail. Not shown in my example are other dynamic fields I would like to set before the query clause.
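On the bonus question: Hive's SET stores literal text only, it does not evaluate expressions, so SET loadDate = CURRENT_DATE assigns the string "CURRENT_DATE" rather than today's date. Two common workarounds are sketched below; this is untested against this specific pipeline, and note that ${...} inside a HiveActivity hiveScript is also used for Data Pipeline substitutions, so variable references may need escaping there.

```sql
-- Option 1: pass the value in from outside as a Hive variable
-- (when invoking Hive directly: hive -hivevar loadDate="2024-01-01" -f script.hql)
INSERT OVERWRITE TABLE ${output1}
SELECT reflect('java.util.UUID', 'randomUUID'), '${hivevar:loadDate}', firstName, lastName
FROM ${input1};

-- Option 2: keep computing the value inline in the query
INSERT OVERWRITE TABLE ${output1}
SELECT reflect('java.util.UUID', 'randomUUID'),
       TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())),
       firstName, lastName
FROM ${input1};
```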
I am rather new to Sublime Text, but I wanted to make a key binding that moves the cursor to a specific, constant column, regardless of any previously entered text. This would be used to add comments to code on the far right, or just to improve uniformity when coding Verilog (or another language).
Basically, after any entered text, hit Ctrl+1 (or any other key binding) and the cursor goes to a specified column, adding spaces as filler.
Here is what I have so far.
[{ "keys": ["ctrl+1"],
   "command": "run_multiple_commands",
   "args": {
       "commands": [
           { "command": "move_to", "args": {"to": "eol", "extend": false} },
           { "command": "insert", "args": {"characters": " "} },
           { "command": "insert_snippet", "args": {"contents": "$TM_LINE_INDEX"} },
           { "command": "move", "args": {"by": "words", "forward": false} },
           { "command": "insert", "args": {"characters": "\t\t\t\t"}, "context":
               [
                   { "key": "following_text", "operator": "regex_match", "operand": "40", "match_all": true }
               ]
           },
           { "command": "insert", "args": {"characters": "\t\t\t"}, "context":
               [
                   { "key": "following_text", "operator": "regex_match", "operand": "41", "match_all": true }
               ]
           },
           { "command": "delete_word", "args": { "forward": true } }
       ]
   }
}]
The way I have the code set up is (step 3 is not working):
1. I have a .py script to run multiple commands for a single key binding. Link
2. Get the current column number through a snippet command ($TM_LINE_INDEX) and move the cursor to the left of the number.
3. ATTEMPT to use regex to add the corresponding number of tabs/spaces. I've only added two cases, but this would be repeated for various cursor column locations, not just columns 40 and 41. Below is the non-working block of code:
{ "command": "insert", "args": {"characters": "\t\t\t\t"}, "context":
    [
        { "key": "following_text", "operator": "regex_match", "operand": "40", "match_all": true }
    ]
},
{ "command": "insert", "args": {"characters": "\t\t\t"}, "context":
    [
        { "key": "following_text", "operator": "regex_match", "operand": "41", "match_all": true }
    ]
},
4. Delete the trailing column number snippet (from step 2).
Thanks in advance for any help or comments.
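As an alternative to enumerating one context per column, the padding arithmetic can be done once in a small Sublime plugin (plugins are written in Python). The core logic, sketched here with an assumed target column of 40 and plain spaces as filler (the function name and default are illustrative):

```python
def pad_to_column(line_text, target_col=40):
    """Return line_text padded with spaces so a comment starts at target_col.

    If the line already reaches target_col, a single space is appended so
    the comment still separates from the code.
    """
    pad = target_col - len(line_text)
    return line_text + " " * max(pad, 1)

print(pad_to_column("assign y = a & b;"))
```

Inside a sublime_plugin.TextCommand you would compute the cursor's column with view.rowcol(), then view.insert() the same " " * max(pad, 1) string, which avoids the snippet/regex round trip entirely.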