Sometimes we want to implement validation dynamically while taking input from users. That is, we want to store the regex somewhere in the DB, add or update the validation there, and receive it in the API response. But right now, this is not possible:
final regex = new RegExp(r'${regex_value}'); // will raise error
So what can be the solution for working with a dynamic regex?
To work with a dynamic regex, we can get the base64-encoded value of the regex from the server/backend/DB and decode it like this:
import 'dart:convert';

void main() {
  // Base64-encoded pattern received from the server/backend/DB.
  const encodedPattern = 'XlxkezAsM30oPzpbLixdXGR7MSwyfSk/JA==';

  // Decode it back into the original pattern string.
  final decodedBytes = base64.decoder.convert(encodedPattern);
  final pattern = utf8.decode(decodedBytes);
  print(pattern); // ^\d{0,3}(?:[.,]\d{1,2})?$

  final regex = RegExp(pattern);
  print(regex.hasMatch('111.2'));   // true
  print(regex.hasMatch('111.22'));  // true
  print(regex.hasMatch('111.222')); // false
  print(regex.hasMatch('111.'));    // false
}
I'm curious whether this is the correct way to use the LIKE operator with queryExecute() in a cfscript function.
if( len(arguments?.lastName) ){
local.sqlWhere &= " AND t_lastname LIKE :lName";
local.sqlParams.lName = { value : arguments.lastName & '%', cfsqltype:'cf_sql_varchar'};
};
Is it just appended like a string with & '%'?
I've just gone through your issue. In ColdFusion, the & symbol always concatenates the two strings, so it can't be used like that. I've written some sample code for you below; please check it. I hope it helps you write a script-based query.
local.MyQry = "SELECT * FROM Users WHERE 1=1 ";
I've used the same condition as yours (I'm not sure about your exact conditions):
if( len(arguments?.lastName) ){
local.MyQry &= " AND Email like :email"
}
This concatenates the extra clause onto the previous query string when the condition is true. The colon (:email) marks the placeholder we are going to bind as a query parameter.
local.qry = new Query( datasource = 'your DB name' , sql = Local.MyQry);
if( len(arguments?.lastName) ){
local.qry.addParam( name="email", value="%#Arguments.email#%", cfsqltype="cf_sql_varchar");
}
return local.qry.execute();
You can place the % symbol based on your scenario, e.g. %#Arguments.email# (ends with) or %#Arguments.email#% (contains).
I hope this helps. Thanks.
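As a side note, the same principle applies in any parameterized-query API, not just ColdFusion: the wildcard belongs in the bound value, never concatenated into the SQL text. Here is a rough Java/JDBC sketch of that idea, for comparison only (the class and method names are made up for illustration, and the table/column names are borrowed from the snippets above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LikeQueryExample {
    // The % wildcard goes into the bound value, not into the SQL string itself.
    public static ResultSet findByLastName(Connection conn, String lastName) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM Users WHERE t_lastname LIKE ?");
        ps.setString(1, lastName + "%"); // 'starts with'; use "%" + lastName + "%" for 'contains'
        return ps.executeQuery();
    }
}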
I need to substitute characters of a tuple using a Pig UDF. For example, a line in the file such as "hello world, Hello WORLD, hello\WORLD" needs to be transformed into "hello_world,hello_world,hello_world". To accomplish this, I tried the UDF below:
package myUDF;
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
public class ReplaceValues extends EvalFunc<Tuple>
{
    public Tuple exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0)
            return null;
        try {
            // Clean up the incoming field: spaces become underscores,
            // forward and back slashes are removed.
            String str = (String) input.get(0);
            str = str.replace(" ", "_");
            str = str.replace("/", "");
            str = str.replace("\\", "");

            // Wrap the result in a new single-field tuple.
            TupleFactory tf = TupleFactory.getInstance();
            Tuple t = tf.newTuple();
            t.append(str);
            return t;
        } catch (Exception e) {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
But when calling this UDF from my Pig script I am facing issues; please help me resolve them:
A = load '/user/cloudera/Stage/ActualDataSet.csv' using PigStorage(',') AS (Rank:chararray,NCTNumber:chararray,Title:chararray,Recruitment:chararray);
B = FILTER A by Rank == 'Rank';
C = FOREACH B GENERATE PigUDF.ReplaceValues(B);
Error: Pig script failed to parse:
Invalid scalar projection: B : A column needs to be projected from a relation for it to be used as a scalar
You have to pass the field that you are trying to modify, not the relation B. Assuming the field you want to transform is Title, you would call the UDF like below:
C = FOREACH B GENERATE B.Rank,B.NCTNumber,PigUDF.ReplaceValues(B.Title),B.Recruitment;
Note that if you are trying to do the replacement across the entire record, then your load statement is incorrect. You will have to load the entire record as one line:chararray and then pass that line to your UDF.
Also, instead of a UDF you can use a regex to match and replace the string of your choice.
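For illustration, the cleanup in your UDF boils down to a couple of regex substitutions. Here is a rough, standalone Java sketch of that idea (the class name RegexCleanup and the hard-coded example string are only for illustration):

public class RegexCleanup {
    public static void main(String[] args) {
        String str = "hello world, Hello WORLD, hello\\WORLD";
        String cleaned = str.replaceAll(" ", "_")       // spaces -> underscores
                            .replaceAll("[/\\\\]", ""); // drop '/' and '\' characters
        System.out.println(cleaned); // hello_world,_Hello_WORLD,_helloWORLD
    }
}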
In your Pig script you are passing the entire relation "B" to the UDF, while it accepts a tuple as an argument.
Instead, pass the field, e.g. B.Title, as given below:
C = FOREACH B GENERATE PigUDF.ReplaceValues(B.Title);
You can also call this UDF on other fields in the same line:
C = FOREACH B GENERATE PigUDF.ReplaceValues(B.Title), PigUDF.ReplaceValues(B.Rank);
I am trying to get the following working code in JavaScript also working in Dart.
https://jsfiddle.net/8xyxy8jp/1/
var s = "We live, on the # planet earth";
var results = s.replace(/[^\w]+/g, '-');
document.getElementById("output").innerHTML = results;
Which gives the output
We-live-on-the-planet-earth
I have tried this Dart code
void main() {
print( "We live, on the # planet earth".replaceAll("[^\w]+","-"));
}
But the output is just the same as the input.
What am I missing here?
If you want replaceAll() to treat the argument as a regular expression, you need to pass a RegExp instance. I usually use the r prefix for the regex string to make it a raw string, where no interpolation ($, \, ...) takes place.
main() {
  var s = "We live, on the # planet earth";
  var result = s.replaceAll(new RegExp(r'[^\w]+'), '-');
  print(result); // We-live-on-the-planet-earth
}
Try it in DartPad
My IDE, PhpStorm, allows search and replace using regex. One of the things I often find myself doing is switching the direction of an action: in function a I set a value on items from list a using list b as the values, but then in function b I want to invert it, so I set a value on items from list b using list a as the values.
A proper example is this:
var $clipDetailsGame = $('#clipDetailsGame');
var $clipDetailsTitle = $('#clipDetailsTitle');
var $clipDetailsByline = $('#clipDetailsByline');
var $clipDetailsTeamOne = $('#clipDetailsTeamOne');
var $clipDetailsTeamTwo = $('#clipDetailsTeamTwo');
var $clipDetailsReferee = $('#clipDetailsReferee');
var $clipDetailsDescription = $('#clipDetailsDescription');
var $clipDetailsCompetition = $('#clipDetailsCompetition');
function a(clip){
clip.data('gameId' , $clipDetailsGame.val());
clip.data('title' , $clipDetailsTitle.val());
clip.data('byline' , $clipDetailsByline.val());
clip.data('team1' , $clipDetailsTeamOne.val());
clip.data('team2' , $clipDetailsTeamTwo.val());
clip.data('refereeId' , $clipDetailsReferee.val());
clip.data('description' , $clipDetailsDescription.val());
clip.data('competitionId', $clipDetailsCompetition.val());
}
function b (clip){
$clipDetailsGame .val(clip.data('gameId'));
$clipDetailsTitle .val(clip.data('title'));
$clipDetailsByline .val(clip.data('byline'));
$clipDetailsTeamOne .val(clip.data('team1'));
$clipDetailsTeamTwo .val(clip.data('team2'));
$clipDetailsReferee .val(clip.data('refereeId'));
$clipDetailsDescription.val(clip.data('description'));
$clipDetailsCompetition.val(clip.data('competitionId'));
}
Excluding the formatting (It's just there to make my point clearer), what kind of regex could I use to do the replacement for me?
Basic regex -- nothing fancy or complex at all
Search for: (clip\.data\('[a-zA-Z0-9]+')\s*, (\$[a-zA-Z0-9]+\.val\()(\)\);)
Replace with: \$2\$1\$3
The only PhpStorm-related thing here is the replacement string format -- you have to "escape" $ for it to work (i.e. it has to be \$2 to use the 2nd backreference, instead of just $2 or \2 as used in other engines).
This will transform this:
clip.data('gameId' , $clipDetailsGame.val());
clip.data('title' , $clipDetailsTitle.val());
clip.data('byline' , $clipDetailsByline.val());
clip.data('team1' , $clipDetailsTeamOne.val());
clip.data('team2' , $clipDetailsTeamTwo.val());
clip.data('refereeId' , $clipDetailsReferee.val());
clip.data('description' , $clipDetailsDescription.val());
clip.data('competitionId', $clipDetailsCompetition.val());
into this:
$clipDetailsGame.val(clip.data('gameId'));
$clipDetailsTitle.val(clip.data('title'));
$clipDetailsByline.val(clip.data('byline'));
$clipDetailsTeamOne.val(clip.data('team1'));
$clipDetailsTeamTwo.val(clip.data('team2'));
$clipDetailsReferee.val(clip.data('refereeId'));
$clipDetailsDescription.val(clip.data('description'));
$clipDetailsCompetition.val(clip.data('competitionId'));
Useful link: http://www.jetbrains.com/phpstorm/webhelp/regular-expression-syntax-reference.html
Mopping up (not quite the answer to this question, but another way of organizing the code to make search and replace unnecessary):
var $details = {};
var fields = [
  'Game', 'Title', 'Byline', 'TeamOne', 'TeamTwo', 'Referee', 'Description',
  'Competition'
];

// Build the jQuery lookups once, keyed by field name.
fields.forEach(function(field) {
  $details[field] = $('#clipDetails' + field);
});

function a(clip) {
  fields.forEach(function(field) {
    clip.data(field, $details[field].val());
  });
}

function b(clip) {
  fields.forEach(function(field) {
    $details[field].val(clip.data(field));
  });
}
Yes, I know there are small naming inconsistencies that mean this doesn't work out of the box, such as Game versus gameId. This is an excellent occasion to clean that up too :). If you still want to keep the title case for the ids (such as #clipDetailsGame instead of #clipDetailsgame), keep the entries in title case in the fields array and use toLowerCase where you need lower case.
By the way, there is an interesting read on what makes DRY a good thing here: https://softwareengineering.stackexchange.com/questions/103233/why-is-dry-important
I'm starting to play with Cascading on Amazon EMR and have managed to get it running, but I'm falling at a fairly simple hurdle and was hoping someone could shed some light on it.
My code:
import java.util.Properties;
import cascading.flow.Flow;
import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.property.AppProps;
import cascading.scheme.hadoop.TextLine;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tuple.Fields;
import cascading.operation.regex.RegexParser;
import cascading.pipe.Each;
import cascading.tap.SinkMode;
public class Main
{
    public static void main( String[] args )
    {
        String inPath = args[ 0 ];
        String outPath = args[ 1 ];

        Properties properties = new Properties();
        AppProps.setApplicationJarClass( properties, Main.class );
        HadoopFlowConnector flowConnector = new HadoopFlowConnector( properties );

        // create the source tap
        TextLine sourceScheme = new TextLine( new Fields( "line" ) );
        Tap inTap = new Hfs( sourceScheme, inPath );

        // create the sink tap
        TextLine sinkScheme = new TextLine( new Fields( "custid", "movieids" ) );
        Tap outTap = new Hfs( sinkScheme, outPath, SinkMode.REPLACE );

        Fields filmFields = new Fields( "custid", "movieids" );
        String filmRegex = "([0-9]:*[,.]*)";
        RegexParser parser = new RegexParser( filmFields, filmRegex );
        Pipe importPipe = new Each( "import", new Fields( "line" ), parser, Fields.RESULTS );

        // connect the taps, pipes, etc., into a flow
        Flow parsedFlow = new HadoopFlowConnector( properties ).connect( inTap, outTap, importPipe );

        // run the flow
        parsedFlow.start();
        parsedFlow.complete();
    }
}
My input (no empty lines):
1:2
2:4
5:1
3:9
My output:
Task TASKID="task_201305241444_0003_m_000000" TASK_TYPE="MAP" TASK_STATUS="FAILED" FINISH_TIME="1369408133954" ERROR="cascading.tuple.TupleException: operation added the wrong number of fields, expected: ['custid', 'movieids'], got result size: 1
    at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:82)
    at cascading.operation.regex.RegexParser.onFoundGroups(RegexParser.java:168)
    at cascading.operation.regex.RegexParser.operate(RegexParser.java:151)
    at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
    at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
    at cascading.flow.stream.SourceStage.map(SourceStage.java:102)
    at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
    at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:127)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:441)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:377)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
The regex checks out fine at http://regexpal.com/
Thanks a lot
Duncan
You get an exception because your regular expression yields one result where two result fields are expected (namely "custid" and "movieids"), since the regular expression contains just a single group (...).
If you just want to split at the colon, either use an expression with 2 groups, for example:
String filmRegex = "(\\d):(\\d)";
or \d+ for each group, respectively, if your numbers can have more than one digit.
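Plugged into the code from the question, the parser setup would then look roughly like this (an untested sketch using the multi-digit variant):

Fields filmFields = new Fields( "custid", "movieids" );
String filmRegex = "(\\d+):(\\d+)"; // two capture groups -> two output fields
RegexParser parser = new RegexParser( filmFields, filmRegex );
Pipe importPipe = new Each( "import", new Fields( "line" ), parser, Fields.RESULTS );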
Or, more easily, just split the input data into its fields automatically when reading from the file by using a TextDelimited input scheme:
Scheme sourceScheme = new TextDelimited(new Fields("custid", "movieids"), ":");
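That scheme replaces the TextLine source scheme and makes the RegexParser/Each step unnecessary. A rough sketch of the relevant lines, assuming the same inPath as in your code:

import cascading.scheme.Scheme;
import cascading.scheme.hadoop.TextDelimited;

// The source tap now reads and splits the fields on ':' directly.
Scheme sourceScheme = new TextDelimited( new Fields( "custid", "movieids" ), ":" );
Tap inTap = new Hfs( sourceScheme, inPath );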