Extending the Calcite parser to add custom relational operators - apache-calcite

I want to support a FOO_GREATER_THAN clause instead of > in my SQL parser.
For example:
select * from people where age FOO_GREATER_THAN 10
I followed https://calcite.apache.org/docs/adapter.html#extending-the-parser and added custom fmpp and ftl files, but I cannot get it to work.
parserImpls.ftl
SqlKind regRealtionalOperator() :
{
}
{
<FOO_EQUAL> { return SqlKind.EQUALS; }
|
<FOO_GREATER_THAN> { return SqlKind.GREATER_THAN; }
}
config.fmpp
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to you under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is an FMPP (http://fmpp.sourceforge.net/) configuration file to
# allow clients to extend Calcite's SQL parser to support application specific
# SQL statements, literals or data types.
#
# Calcite's parser grammar file (Parser.jj) is written in javacc
# (http://javacc.java.net/) with Freemarker (http://freemarker.org/) variables
# to allow clients to:
# 1. have custom parser implementation class and package name.
# 2. insert new parser method implementations written in javacc to parse
# custom:
# a) SQL statements.
# b) literals.
# c) data types.
# 3. add new keywords to support custom SQL constructs added as part of (2).
# 4. add import statements needed by inserted custom parser implementations.
#
# Parser template file (Parser.jj) along with this file are packaged as
# part of the calcite-core-<version>.jar under "codegen" directory.
data: {
parser: {
# Generated parser implementation package and class name.
package: "org.apache.calcite.sql.parser.impl",
class: "SqlParserImpl",
# List of additional classes and packages to import.
# Example. "org.apache.calcite.sql.*", "java.util.List".
imports: [
]
# List of new keywords. Example: "DATABASES", "TABLES". If the keyword is not a reserved
# keyword add it to 'nonReservedKeywords' section.
keywords: [
"FOO_EQUAL"
"FOO_GREATER_THAN"
]
# List of keywords from "keywords" section that are not reserved.
nonReservedKeywords: [
"A"
"ABSENT"
"ABSOLUTE"
"ACTION"
"ADA"
"ADD"
"ADMIN"
"AFTER"
"ALWAYS"
"APPLY"
"ASC"
"ASSERTION"
"ASSIGNMENT"
"ATTRIBUTE"
"ATTRIBUTES"
"BEFORE"
"BERNOULLI"
"BREADTH"
"C"
"CASCADE"
"CATALOG"
"CATALOG_NAME"
"CENTURY"
"CHAIN"
"CHARACTER_SET_CATALOG"
"CHARACTER_SET_NAME"
"CHARACTER_SET_SCHEMA"
"CHARACTERISTICS"
"CHARACTERS"
"CLASS_ORIGIN"
"COBOL"
"COLLATION"
"COLLATION_CATALOG"
"COLLATION_NAME"
"COLLATION_SCHEMA"
"COLUMN_NAME"
"COMMAND_FUNCTION"
"COMMAND_FUNCTION_CODE"
"COMMITTED"
"CONDITION_NUMBER"
"CONDITIONAL"
"CONNECTION"
"CONNECTION_NAME"
"CONSTRAINT_CATALOG"
"CONSTRAINT_NAME"
"CONSTRAINT_SCHEMA"
"CONSTRAINTS"
"CONSTRUCTOR"
"CONTINUE"
"CURSOR_NAME"
"DATA"
"DATABASE"
"DATETIME_INTERVAL_CODE"
"DATETIME_INTERVAL_PRECISION"
"DECADE"
"DEFAULTS"
"DEFERRABLE"
"DEFERRED"
"DEFINED"
"DEFINER"
"DEGREE"
"DEPTH"
"DERIVED"
"DESC"
"DESCRIPTION"
"DESCRIPTOR"
"DIAGNOSTICS"
"DISPATCH"
"DOMAIN"
"DOW"
"DOY"
"DYNAMIC_FUNCTION"
"DYNAMIC_FUNCTION_CODE"
"ENCODING"
"EPOCH"
"ERROR"
"EXCEPTION"
"EXCLUDE"
"EXCLUDING"
"FINAL"
"FIRST"
"FOLLOWING"
"FORMAT"
"FORTRAN"
"FOUND"
"FRAC_SECOND"
"G"
"GENERAL"
"GENERATED"
"GEOMETRY"
"GO"
"GOTO"
"GRANTED"
"HIERARCHY"
"IGNORE"
"IMMEDIATE"
"IMMEDIATELY"
"IMPLEMENTATION"
"INCLUDING"
"INCREMENT"
"INITIALLY"
"INPUT"
"INSTANCE"
"INSTANTIABLE"
"INVOKER"
"ISODOW"
"ISOYEAR"
"ISOLATION"
"JAVA"
"JSON"
"K"
"KEY"
"KEY_MEMBER"
"KEY_TYPE"
"LABEL"
"LAST"
"LENGTH"
"LEVEL"
"LIBRARY"
"LOCATOR"
"M"
"MAP"
"MATCHED"
"MAXVALUE"
"MICROSECOND"
"MESSAGE_LENGTH"
"MESSAGE_OCTET_LENGTH"
"MESSAGE_TEXT"
"MILLISECOND"
"MILLENNIUM"
"MINVALUE"
"MORE_"
"MUMPS"
"NAME"
"NAMES"
"NANOSECOND"
"NESTING"
"NORMALIZED"
"NULLABLE"
"NULLS"
"NUMBER"
"OBJECT"
"OCTETS"
"OPTION"
"OPTIONS"
"ORDERING"
"ORDINALITY"
"OTHERS"
"OUTPUT"
"OVERRIDING"
"PAD"
"PARAMETER_MODE"
"PARAMETER_NAME"
"PARAMETER_ORDINAL_POSITION"
"PARAMETER_SPECIFIC_CATALOG"
"PARAMETER_SPECIFIC_NAME"
"PARAMETER_SPECIFIC_SCHEMA"
"PARTIAL"
"PASCAL"
"PASSING"
"PASSTHROUGH"
"PAST"
"PATH"
"PLACING"
"PLAN"
"PLI"
"PRECEDING"
"PRESERVE"
"PRIOR"
"PRIVILEGES"
"PUBLIC"
"QUARTER"
"READ"
"RELATIVE"
"REPEATABLE"
"REPLACE"
"RESPECT"
"RESTART"
"RESTRICT"
"RETURNED_CARDINALITY"
"RETURNED_LENGTH"
"RETURNED_OCTET_LENGTH"
"RETURNED_SQLSTATE"
"RETURNING"
"ROLE"
"ROUTINE"
"ROUTINE_CATALOG"
"ROUTINE_NAME"
"ROUTINE_SCHEMA"
"ROW_COUNT"
"SCALAR"
"SCALE"
"SCHEMA"
"SCHEMA_NAME"
"SCOPE_CATALOGS"
"SCOPE_NAME"
"SCOPE_SCHEMA"
"SECTION"
"SECURITY"
"SELF"
"SEQUENCE"
"SERIALIZABLE"
"SERVER"
"SERVER_NAME"
"SESSION"
"SETS"
"SIMPLE"
"SIZE"
"SOURCE"
"SPACE"
"SPECIFIC_NAME"
"SQL_BIGINT"
"SQL_BINARY"
"SQL_BIT"
"SQL_BLOB"
"SQL_BOOLEAN"
"SQL_CHAR"
"SQL_CLOB"
"SQL_DATE"
"SQL_DECIMAL"
"SQL_DOUBLE"
"SQL_FLOAT"
"SQL_INTEGER"
"SQL_INTERVAL_DAY"
"SQL_INTERVAL_DAY_TO_HOUR"
"SQL_INTERVAL_DAY_TO_MINUTE"
"SQL_INTERVAL_DAY_TO_SECOND"
"SQL_INTERVAL_HOUR"
"SQL_INTERVAL_HOUR_TO_MINUTE"
"SQL_INTERVAL_HOUR_TO_SECOND"
"SQL_INTERVAL_MINUTE"
"SQL_INTERVAL_MINUTE_TO_SECOND"
"SQL_INTERVAL_MONTH"
"SQL_INTERVAL_SECOND"
"SQL_INTERVAL_YEAR"
"SQL_INTERVAL_YEAR_TO_MONTH"
"SQL_LONGVARBINARY"
"SQL_LONGVARNCHAR"
"SQL_LONGVARCHAR"
"SQL_NCHAR"
"SQL_NCLOB"
"SQL_NUMERIC"
"SQL_NVARCHAR"
"SQL_REAL"
"SQL_SMALLINT"
"SQL_TIME"
"SQL_TIMESTAMP"
"SQL_TINYINT"
"SQL_TSI_DAY"
"SQL_TSI_FRAC_SECOND"
"SQL_TSI_HOUR"
"SQL_TSI_MICROSECOND"
"SQL_TSI_MINUTE"
"SQL_TSI_MONTH"
"SQL_TSI_QUARTER"
"SQL_TSI_SECOND"
"SQL_TSI_WEEK"
"SQL_TSI_YEAR"
"SQL_VARBINARY"
"SQL_VARCHAR"
"STATE"
"STATEMENT"
"STRUCTURE"
"STYLE"
"SUBCLASS_ORIGIN"
"SUBSTITUTE"
"TABLE_NAME"
"TEMPORARY"
"TIES"
"TIMESTAMPADD"
"TIMESTAMPDIFF"
"TOP_LEVEL_COUNT"
"TRANSACTION"
"TRANSACTIONS_ACTIVE"
"TRANSACTIONS_COMMITTED"
"TRANSACTIONS_ROLLED_BACK"
"TRANSFORM"
"TRANSFORMS"
"TRIGGER_CATALOG"
"TRIGGER_NAME"
"TRIGGER_SCHEMA"
"TYPE"
"UNBOUNDED"
"UNCOMMITTED"
"UNCONDITIONAL"
"UNDER"
"UNNAMED"
"USAGE"
"USER_DEFINED_TYPE_CATALOG"
"USER_DEFINED_TYPE_CODE"
"USER_DEFINED_TYPE_NAME"
"USER_DEFINED_TYPE_SCHEMA"
"UTF8"
"UTF16"
"UTF32"
"VERSION"
"VIEW"
"WEEK"
"WRAPPER"
"WORK"
"WRITE"
"XML"
"ZONE"
]
# List of additional join types. Each is a method with no arguments.
# Example: LeftSemiJoin()
joinTypes: [
]
# List of methods for parsing custom SQL statements.
# Return type of method implementation should be 'SqlNode'.
# Example: SqlShowDatabases(), SqlShowTables().
statementParserMethods: [
]
# List of methods for parsing custom literals.
# Return type of method implementation should be "SqlNode".
# Example: ParseJsonLiteral().
literalParserMethods: [
]
# List of methods for parsing custom data types.
# Return type of method implementation should be "SqlIdentifier".
# Example: SqlParseTimeStampZ().
dataTypeParserMethods: [
]
# List of methods for parsing extensions to "ALTER <scope>" calls.
# Each must accept arguments "(SqlParserPos pos, String scope)".
# Example: "SqlUploadJarNode"
alterStatementParserMethods: [
]
# List of methods for parsing extensions to "CREATE [OR REPLACE]" calls.
# Each must accept arguments "(SqlParserPos pos, boolean replace)".
createStatementParserMethods: [
]
# List of methods for parsing extensions to "DROP" calls.
# Each must accept arguments "(SqlParserPos pos)".
dropStatementParserMethods: [
]
# List of files in #includes directory that have parser method
# implementations for parsing custom SQL statements, literals or types
# given as part of "statementParserMethods", "literalParserMethods" or
# "dataTypeParserMethods".
implementationFiles: [
"parserImpls.ftl"
]
includeCompoundIdentifier: true
includeBraces: true
includeAdditionalDeclarations: false
}
}
freemarkerLinks: {
includes: includes/
}
I can see the method regRealtionalOperator in the generated SqlParserImpl class in the javacc package of the target directory. The Parser.jj generated under fmpp in generated-sources also contains the method.
Still, upon running the above query, a SqlParseException: Encountered "FOO_GREATER_THAN" error is thrown.

You've added it as a keyword, but you still need to indicate where this operator can be used in a query. For example, in Parser.jj you'll see the comp() production, which is used wherever a comparison operator is required. Based on where your operator is valid in a query, you'll have to decide how to modify the grammar.
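For illustration only, one way to hook the new production in is to add an alternative wherever comparison operators are consumed, mapping the returned SqlKind to the corresponding SqlOperator. A rough, untested sketch in the parserImpls.ftl style (the production name FooBinaryExpression and its wiring are assumptions, not the actual Calcite grammar):

```
// Sketch only: wire the custom tokens into expression parsing so that
// "a FOO_GREATER_THAN b" builds the same SqlCall as "a > b".
SqlNode FooBinaryExpression(SqlNode left) :
{
    SqlKind kind;
    SqlNode right;
}
{
    kind = regRealtionalOperator()
    right = Expression(ExprContext.ACCEPT_NON_QUERY)
    {
        SqlOperator op = (kind == SqlKind.EQUALS)
            ? SqlStdOperatorTable.EQUALS
            : SqlStdOperatorTable.GREATER_THAN;
        return op.createCall(getPos(), left, right);
    }
}
```

Where exactly this alternative has to be referenced depends on the Calcite version's Parser.jj, so treat it as a starting point rather than a drop-in fix.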

Related

How to use RegExp grouping in JavaScript to parse strings

I need to create a RegExp that will allow me to use groups to properly parse a string for some comparison logic.
consider the following list of strings:
const testSet: string[] = [
"alpha-4181a",
"alpha-4181a-2",
"alpha-4181a_3",
"example",
"smokeTest"
]
Note the -2 and _3, which are valid versioning suffixes in this naming convention; we wish to keep supporting them.
If we loop through the above set, I am expecting the entire string, WITHOUT versioning if it exists (as shown below)...
const returnSet: string[] = [
"alpha-4181a",
"alpha-4181a",
"alpha-4181a",
"example",
"smokeTest"
]
so far I have the following regex
/([-_]\d?)$/gi
which does properly identify the versioning at the end of the string. From here, I would like to create an additional group that matches everything that is NOT the versioning convention, but I can't seem to figure it out...
You just need to match everything before the versioning at the end. But you also need lazy matching, which is what +? does - see this question for more.
const testSet = [
"alpha-4181a",
"alpha-4181a-2",
"alpha-4181a_3",
"example",
"smokeTest"
];
const resultSet = testSet.map((x) => x.match(/^(.+?)(?:[_-]\d)?$/)?.[1] ?? x);
// ^^^^^^^^^^ versioning here
// ^^^^^ match everything before
console.log(resultSet);

Change default behavior of callout blocks in Quarto

In Quarto, I'd like to change the default behavior of a single callout block type so that it will
always automatically have the same caption (e.g. "Additional Resources")
always be folded (collapse="true")
Let's say I want this for the tip callout block type while the others (note, warning, caution, and important) should not be affected.
In other words, I want the behavior/output of this:
:::{.callout-tip collapse="true"}
## Additional Resources
- Resource 1
- Resource 2
:::
by only having to write this:
:::{.callout-tip}
- Resource 1
- Resource 2
:::
Update:
I have actually converted the following lua filter into a quarto filter extension collapse-callout, which allows specifying default options for specific callout blocks more easily. See the github readme for detailed instructions on installation and usage.
As @stefan mentioned, you can use a pandoc Lua filter to do this more neatly.
quarto_doc.qmd
---
title: "Callout Tip"
format: html
filters:
- custom-callout.lua
---
## Resources
:::{.custom-callout-tip}
- Resource 1
- Resource 2
:::
## More Resources
:::{.custom-callout-tip}
- Resource 3
- Resource 4
:::
custom-callout.lua
local h2 = pandoc.Header(2, "Additional Resources")
function Div(el)
if quarto.doc.isFormat("html") then
if el.classes:includes('custom-callout-tip') then
local content = el.content
table.insert(content, 1, h2)
return pandoc.Div(
content,
{class="callout-tip", collapse='true'}
)
end
end
end
Just make sure that quarto_doc.qmd and custom-callout.lua files are in the same directory (i.e. folder).
After a look at the docs, and based on my experience with customizing Rmarkdown, I would guess that this requires creating a custom template and/or the use of pandoc Lua filters.
A more lightweight approach I used in the past would be to use a small custom function to add the code for your custom callout block to your Rmd or Qmd. One drawback is that this requires a code chunk. However, to make your life a bit easier you could e.g. create a RStudio snippet to add a code chunk template to your document.
---
title: "Custom Callout"
format: html
---
```{r}
my_call_out <- function(...) {
cat(":::{.callout-tip collapse='true'}\n")
cat("## Additional Resources\n")
cat(paste0("- ", ..., collapse = "\n\n"))
cat("\n:::\n")
}
```
```{r results="asis"}
my_call_out(paste("Resource", 1:2))
```
Blah blah
```{r results="asis"}
my_call_out("Resource 3", "Resource 4")
```
Blah blah

Understanding the GN build system in Fuchsia OS, what is `build_api_module`?

GN stands for Generate Ninja. It generates Ninja files, which build things. The main file is BUILD.gn at the root of the Fuchsia source tree.
It contains a lot of build_api_module calls:
build_api_module("images") {
testonly = true
data_keys = [ "images" ]
deps = [
# XXX(46415): as the build is specialized by board (bootfs_only)
# for bringup, it is not possible for this to be complete. As this
# is used in the formation of the build API with infrastructure,
# and infrastructure assumes that the board configuration modulates
# the definition of `zircon-a` between bringup/non-bringup, we can
# not in fact have a complete description. See the associated
# conditional at this group also.
"build/images",
# This has the images referred to by $qemu_kernel_label entries.
"//build/zircon/zbi_tests",
]
}
however, it's unclear to me what this does exactly. Looking at its definition in build/config/build_api_module.gn, for example:
template("build_api_module") {
if (current_toolchain == default_toolchain) {
generated_file(target_name) {
outputs = [ "$root_build_dir/$target_name.json" ]
forward_variables_from(invoker,
[
"contents",
"data_keys",
"deps",
"metadata",
"testonly",
"visibility",
"walk_keys",
"rebase",
])
output_conversion = "json"
metadata = {
build_api_modules = [ target_name ]
if (defined(invoker.metadata)) {
forward_variables_from(invoker.metadata, "*", [ "build_api_modules" ])
}
}
}
} else {
not_needed([ "target_name" ])
not_needed(invoker, "*")
}
}
it looks like it simply generates a file.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
The build_api_module() targets generate JSON files that describe something about the current build system configuration. These files are typically consumed by other tools (in some cases dependencies to other build rules) that need to know about the current build.
One example is the tests target which generates the tests.json file. This file is used by fx test to determine which tests are available and match the test name you provide to the component URL to invoke.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
It doesn't. These targets are descriptive of the current build configuration, they are not prescriptive of what artifacts the build generates. In this specific case, the images.json file is typically used by tools like FEMU and ffx to determine what system images to use on a target device.
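As an illustration of the consuming side, a tool can simply read the generated $root_build_dir/images.json and look up entries. A minimal Python sketch (the entry fields used here are illustrative samples, not the exact schema, which varies by build):

```python
import json

# Illustrative sample of what images.json entries might look like;
# real entries and field names vary by build configuration.
sample_images_json = """
[
  {"name": "zircon-a", "type": "zbi", "path": "obj/build/images/fuchsia.zbi"},
  {"name": "qemu-kernel", "type": "kernel", "path": "multiboot.bin"}
]
"""

def image_paths(images_json_text, wanted_type):
    """Return the paths of all image entries with the given type."""
    entries = json.loads(images_json_text)
    return [e["path"] for e in entries if e.get("type") == wanted_type]

print(image_paths(sample_images_json, "zbi"))
# → ['obj/build/images/fuchsia.zbi']
```

This is roughly what tools like FEMU and ffx do: consume the descriptive JSON rather than drive the build.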

Nifi - Extracting Key Value pairs into new fields

With NiFi, I am trying to use the ReplaceText processor to extract key-value pairs.
The relevant part of the JSON file is the 'RuleName':
"winlog": {
"channel": "Microsoft-Windows-Sysmon/Operational",
"event_id": 3,
"api": "wineventlog",
"process": {
"pid": 1640,
"thread": {
"id": 4452
}
},
"version": 5,
"record_id": 521564887,
"computer_name": "SERVER001",
"event_data": {
"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"
},
"provider_guid": "{5790385F-C22A-43E0-BF4C-06F5698FFBD9}",
"opcode": "Info",
"provider_name": "Microsoft-Windows-Sysmon",
"task": "Network connection detected (rule: NetworkConnect)",
"user": {
"identifier": "S-1-5-18",
"name": "SYSTEM",
"domain": "NT AUTHORITY",
"type": "Well Known Group"
}
},
Within the ReplaceText processor I have this configuration
ReplaceText
"winlog.event_data.RuleName":"MitreRef=(.*),Technique=(.*),Tactic=(.*),Alert=(.*)"
"MitreRef":"$1","Technique":"$2","Tactic":"$3","Alert":"$4"
The first problem is that the new fields MitreRef etc. are not created.
The second thing is that the fields may appear in any order in the original JSON, e.g.
"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"
or,
MitreRef=1043,Tactic=Command and Control,Technique=Commonly Used Port
Any ideas on how to proceed?
Welcome to StackOverflow!
As your question is quite ambiguous, I'll try to guess what you aimed for.
Replacing string value of "RuleName" with JSON representation
I assume that you want to replace the entry
"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"
with something along the lines of
"RuleName": {
"Technique": "Commonly Used Port",
"Tactic": "Command and Control",
"MitreRef": "1043"
}
In this case you can grab basically the whole line and assume you have three groups of characters, each consisting of
A number of characters that are not the equals sign: ([^=]+)
The equals sign =
A number of characters that are not the comma sign: ([^,]+)
These groups in turn are separated by a comma: ,
Based on these assumptions you can write the following RegEx inside the Search Value property of the ReplaceText processor:
"RuleName"\s*:\s*"([^=]+)=([^,]+),([^=]+)=([^,]+),([^=]+)=([^,]+)"
With this, you grab the whole line and build a group for every important data point.
Based on the groups you may set the Replacement Value to:
"RuleName": {
"${'$1'}": "${'$2'}",
"${'$3'}": "${'$4'}",
"${'$5'}": "${'$6'}"
}
Resulting in the above mentioned JSON object.
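The search pattern and group numbering can be sanity-checked outside NiFi. NiFi uses Java regex, but this particular pattern behaves identically in Python's re module, so a quick sketch (not the processor itself) is:

```python
import re

line = '"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"'

# Same pattern as the ReplaceText "Search Value" above.
pattern = r'"RuleName"\s*:\s*"([^=]+)=([^,]+),([^=]+)=([^,]+),([^=]+)=([^,]+)"'

# Python-style equivalent of the NiFi "Replacement Value":
# each key/value group becomes a JSON member.
replacement = r'"RuleName": {"\1": "\2", "\3": "\4", "\5": "\6"}'

result = re.sub(pattern, replacement, line)
print(result)
# → "RuleName": {"Technique": "Commonly Used Port", "Tactic": "Command and Control", "MitreRef": "1043"}
```

Because the keys are captured rather than hard-coded, this also handles the pairs appearing in any order.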
Some remarks
The RegEx assumes that the entry is on a single line and does NOT work when it is split onto multiple lines, e.g.
"RuleName":
"Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"
The RegEx assumes there are exactly three "items" inside the value of RuleName and does NOT work with a different number of "items".
In case your JSON file can grow larger, you may try to avoid the Entire text evaluation mode, as this loads the content into a buffer and routes the FlowFile to the failure output in case the file is too large. In that case I recommend using the Line-by-Line mode as seen in the attached image.
Allowing a fourth additional value
In case there might be a fourth additional value, you may adjust the RegEx in the Search Value property.
You can add (,([^=]+)=([^,]+))? to the previous expression, which roughly translates to:
( )? - match what is in the bracket zero or one times
, - match the character comma
([^=]+)=([^,]+) - followed by the group of characters as explained above
The whole RegEx will look like this:
"RuleName"\s*:\s*"([^=]+)=([^,]+),([^=]+)=([^,]+),([^=]+)=([^,]+)(,([^=]+)=([^,]+))?"
To allow the new value to be used you have to adjust the replacement value as well.
You can use the Expression Language available in most NiFi processor properties to decide whether to add another item to the JSON object or not.
${'$7':isEmpty():ifElse(
'',
${literal(', "'):append(${'$8'}):append('": '):append('"'):append(${'$9'}):append('"')}
)}
This expression will look if the seventh RegEx group exists or not and either append an empty string or the found values.
With this modification included the whole replacement value will look like the following:
"RuleName": {
"${'$1'}": "${'$2'}",
"${'$3'}": "${'$4'}",
"${'$5'}": "${'$6'}"
${'$7':isEmpty():ifElse(
'',
${literal(', "'):append(${'$8'}):append('": '):append('"'):append(${'$9'}):append('"')}
)}
}
regarding multiple occurrences
The ReplaceText processor replaces all occurrences it finds where the RegEx matches. Using the settings provided in the last paragraph given the following example input
{
"event_data": {
"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043,Foo=Bar"
},
"RuleName": "Technique=Commonly Used Port,Tactic=Command and Control,MitreRef=1043"
}
will result in the following:
{
"event_data": {
"RuleName": {
"Technique": "Commonly Used Port",
"Tactic": "Command and Control",
"MitreRef": "1043",
"Foo": "Bar"
}
},
"RuleName": {
"Technique": "Commonly Used Port",
"Tactic": "Command and Control",
"MitreRef": "1043"
}
}
example template
You may download a template I created that includes the above processor from gist.

Commands Not Showing in Command Palette with RegReplace (Sublime Text 3)

I'm trying to run a series of commands with the RegReplace plugin in Sublime Text 3 but I cannot get the command to load and I cannot get the keybindings to work either. I have no clue what's wrong.
Steps Taken:
Installed RegReplace
Opened the Command Palette
Searched for "RegReplace: Create New Regular Expression"
Modified the Rule to the following
"""
If you don't need a setting, just leave it as None.
When the rule is parsed, the default will be used.
Each variable is evaluated separately, so you cannot substitute variables in other variables.
"""
# name (str): Rule name. Required.
name = "extract_variables"
# find (str): Regular expression pattern or literal string.
# Use (?i) for case insensitive. Use (?s) for dotall.
# See https://docs.python.org/3.4/library/re.html for more info on regex flags.
# Required unless "scope" is defined.
find = r".*\[(.*[^(<|>)]*?)\].*"
# replace (str - default=r'\g<0>'): Replace pattern.
replace = r"\1"
# literal (bool - default=False): Perform a non-regex, literal search and replace.
literal = None
# literal_ignorecase (bool - default=False): Ignore case when "literal" is true.
literal_ignorecase = None
# scope (str): Scope to search for and to apply optional regex to.
# Required unless "find" is defined.
scope = None
# scope_filter ([str] - default=[]): An array of scope qualifiers for the match.
# Only used when "scope" is not defined.
#
# - Any instance of scope qualifies match: scope.name
# - Entire match of scope qualifies match: !scope.name
# - Any instance of scope disqualifies match: -scope.name
# - Entire match of scope disqualifies match: -!scope.name
scope_filter = None
# greedy (bool - default=True): Apply action to all instances (find all).
# Used when "find" is defined.
greedy = None
# greedy_scope (bool - default=True): Find all the scopes specified by "scope."
greedy_scope = None
# format_replace (bool - default=False): Use format string style replace templates.
# Works only for Regex (with and without Backrefs) and Re (with Backrefs).
# See http://facelessuser.github.io/backrefs/#format-replacements for more info.
format_replace = None
# selection_inputs (bool -default=False): Use selection for inputs into find pattern.
# Global setting "selection_only" must be disabled for this to work.
selection_inputs = None
# multi_pass (bool - default=False): Perform multiple sweeps on the scope region to find
# and replace all instances of the regex when regex cannot be formatted to find
# all instances. Since a replace can change a scope, this can be useful.
multi_pass = None
# plugin (str): Define replace plugin for more advanced replace logic.
plugin = None
# args (dict): Arguments for 'plugin'.
args = None
# ----------------------------------------------------------------------------------------
# test: Here you can setup a test command. This is not saved and is just used for this session.
# - replacements ([str]): A list of regex rules to sequence together.
# - find_only (bool): Highlight current find results and prompt for action.
# - action (str): Apply the given action (fold|unfold|mark|unmark|select).
# This overrides the default replace action.
# - options (dict): optional parameters for actions (see documentation for more info).
# - key (str): Unique name for highlighted region.
# - scope (str - default="invalid"): Scope name to use as the color.
# - style (str - default="outline"): Highlight style (solid|underline|outline).
# - multi_pass (bool): Repeatedly sweep with sequence to find all instances.
# - no_selection (bool): Overrides the "selection_only" setting and forces no selections.
# - regex_full_file_with_selections (bool): Apply regex search to full file then apply
# action to results under selections.
test = {
"replacements": ["extract_variables"],
"find_only": True,
"action": None,
"options": {},
"multi_pass": False,
"no_selection": False,
"regex_full_file_with_selections": False
}
This code Generates the following in AppData\Roaming\Sublime Text 3\Packages\User\reg_replace_rules.sublime-settings
{
"replacements":
{
"extract_variables":
{
"find": ".*\\[(.*[^(<|>)]*?)\\].*",
"name": "extract_variables",
"replace": "\\1"
}
}
}
And then I created the following command under the same directory with filename Default.sublime-commands
[
{
"caption": "Reg Replace: Extract ERS Variables",
"command": "extract_ers_variables",
"args": {
"replacements": [
"extract_variables"
]
}
}
]
After saving all of this, I still do not see the command in the command palette and it didn't show when I tried to save it as a keymap either.
Any help is much appreciated
Came here with my own troubles and may as well document my dumb mistakes. I know nothing of JSON.
When adding two replacements used together going by the examples at the developer's site, I could not get the command to show up in the Command Palette. I could get a keybinding to work, but it gave error messages that the first replacement could not be found…after having successfully used it. The culprit was a malformed reg_replace_rules.sublime-settings file:
//Wrong
{
"replacements":
{
"rep_one":
//stuff
},
"replacements":
{
"rep_two":
//other stuff
}
}
//Correct
{
"replacements":
{
"rep_one":
//stuff, comma
"rep_two":
//other stuff
}
}
Fixing that cleared up the error message, but the command still would not appear in the Command Palette. The problem there was more bad JSON, this time in Default.sublime-commands.
//Wrong
{
"caption": "My Command",
"command": "reg_replace",
"args": {"replacements": ["rep_one", "rep_two"]}
}
//Correct
[
{
"caption": "My Command",
"command": "reg_replace",
"args": {"replacements": ["rep_one", "rep_two"]}
}
]
This is probably obvious to people who have learned JSON properly and use it regularly, and perhaps one day I will be one of those.
The reason this doesn't work for you is that you have the command wrong in your Default.sublime-commands file. In particular, the command extract_ers_variables does not exist, so the entry for it in the command palette is hidden because selecting it wouldn't do anything. Visually speaking, if this command was in a sublime-menu file, the entry in the menu would appear disabled.
If you select Preferences > Package Settings > RegReplace > Quick Start Guide from the menu and follow through the example that's displayed, note that when it comes to the part about creating the command entry in Default.sublime-commands, it tells you to use reg_replace as the command, and the name of the replacements argument is what tells the command which replacement to do.
As such, your entry should look more like:
[
{
"caption": "Reg Replace: Extract ERS Variables",
"command": "reg_replace",
"args": {
"replacements": [
"extract_variables"
]
}
}
]