I'm trying to parse a bunch of JavaScript files and pull out certain objects. An example of such a file would be:
import { foo } from "blah";
import { bar, baz } from "../module";
const myobject = {
  name: NAME,
  title: {
    name: `${NAME}.title`,
    defaultMessage: "title",
  },
  description: {
    name: `${NAME}.description`,
    defaultMessage: "description",
  },
  property: 'stringvalue',
};
const anotherObject = {
  name: `${NAME}.other`,
  defaultMessage: "other",
}
I need to pull out all the objects that have the property "defaultMessage". For the matcher I have:
/\{([\s\S]*?)defaultMessage([\s\S]*?)\}/g
This is matching anotherObject and myobject.description correctly, but for myobject.title it's matching everything from the first { before foo, e.g.:
{ foo } from "blah";
import { bar, baz } from "../module";
const myobject = {
  name: NAME,
  title: {
    name: `${NAME}.title`,
    defaultMessage: "title",
  }
How can I get this to lazily match from a later { so that I only get:
{
  name: `${NAME}.title`,
  defaultMessage: "title",
}
Update: I'll be using Node to parse the JavaScript files, so I have access to negative lookbehinds. I tried the following with no luck:
(?<!\{[\s\S]+?)\{([\s\S]+?)defaultMessage([\s\S]*?)\}
I would not use this in any production code but this would work for your case:
([a-zA-Z]*?)(?:\s=|:)\s(\{(?:(?!([a-zA-Z]*?)(\s=|:)\s\{)[\s\S])*?defaultMessage(?:[\s\S]*?)\})
I added ([a-zA-Z]*?)(\s=|:)\s, so that we only capture the opening bracket if it is prefixed by something like xyZ = or abC:
Also added a negative lookahead so that the pattern doesn't repeat itself:
(?!([a-zA-Z]*?)(\s=|:)\s\{)
Furthermore, I added some non-capturing groups to reduce regex group clutter if you intend to use this in code. For example, this regex captures:
title: {
  name: `${NAME}.title`,
  defaultMessage: "title",
}
With title as subgroup 1
and:
{
  name: `${NAME}.title`,
  defaultMessage: "title",
}
As subgroup 2; read subgroup 2 to get your desired capture.
Alternatively use:
(?=[a-zA-Z]*?)\{(?:(?!([a-zA-Z]*?)(\s=|:)\s\{)[\s\S])*?defaultMessage(?:[\s\S]*?)\}
To only capture:
{
  name: `${NAME}.title`,
  defaultMessage: "title",
}
With no subgroups
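For reference, a minimal Node sketch of how the alternative pattern could be applied with matchAll to collect only the objects containing defaultMessage (the file path is a placeholder, and the /g flag has been added so matchAll works):
const fs = require("fs");

// the alternative pattern from above, with the /g flag added for matchAll
const pattern = /(?=[a-zA-Z]*?)\{(?:(?!([a-zA-Z]*?)(\s=|:)\s\{)[\s\S])*?defaultMessage(?:[\s\S]*?)\}/g;

const source = fs.readFileSync("example.js", "utf8"); // placeholder path
const objects = [...source.matchAll(pattern)].map((match) => match[0]);

console.log(objects); // each entry is one "{ ... defaultMessage ... }" block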
I'm trying to write an Elasticsearch regexp that excludes elements that have a key containing a given substring, let's say in the title of books.
The elasticsearch docs suggest that a substring can be excluded with the following snippet:
@&~(foo.+) # anything except string beginning with "foo"
However, in my case, I've tried to create such a filter and failed.
{
  query: {
    constant_score: {
      filter: {
        bool: {
          filter: query_filters,
        },
      },
    },
  },
  size: 1_000,
}
def query_filters
  [
    { regexp: { title: "@&~(red)" } },
    # goal: exclude titles that start with "Red"
  ]
end
I've used other regexp in the same query filter that have worked, so I don't think there's a bug in the way the regexp is being passed to ES.
Any ideas? Thanks in advance!
Update:
I found a workaround: I can add a must_not clause to the filter.
{
  query: {
    constant_score: {
      filter: {
        bool: {
          filter: query_filters,
          must_not: must_not_filters,
        },
      },
    },
  },
  size: 1_000,
}
def must_not_filters
  [ { regexp: { title: "red.*" } } ]
end
Still curious if there's another idea for the original regex, though.
How can I pick all the dates with a time value of 00:00:00, regardless of the date value? Regex doesn't work for me.
{
  "_id" : ObjectId("59115a92bbf6401d4455eb21"),
  "name" : "sfdfsdfsf",
  "create_date" : ISODate("2013-05-13T02:34:23.000Z"),
}
Something like:
db.myCollection.find({"create_date": /*T00:00:00.000Z/ })
You first need to convert the create date into a time string, and if the time is 00:00:00:000, include the document.
db.test.aggregate([
  // Part 1: project all fields and add a timeCriteria field that contains only the time
  // (it will be used to match the 00:00:00:000 time)
  {
    $project: {
      _id: 1,
      name: "$name",
      create_date: "$create_date",
      timeCriteria: {
        $dateToString: {
          format: "%H:%M:%S:%L",
          date: "$create_date"
        }
      }
    }
  },
  // Part 2: match the time
  {
    $match: {
      timeCriteria: {
        $eq: "00:00:00:000"
      }
    }
  },
  // Part 3: re-project the document to exclude the timeCriteria field
  {
    $project: {
      _id: 1,
      name: "$name",
      create_date: "$create_date"
    }
  }
]);
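If you are on MongoDB 3.6 or newer, a similar check can be done in a plain find() via $expr; a minimal sketch, untested against your data:
db.test.find({
  $expr: {
    $eq: [
      { $dateToString: { format: "%H:%M:%S:%L", date: "$create_date" } },
      "00:00:00:000"
    ]
  }
});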
From MongoDB version 4.4 onwards we can write custom filters using the $function operator.
Note: don't forget to change the timezone to match your requirements. The timezone is not mandatory.
let timeRegex = /.*T00:00:00.000Z$/i;
db.myCollection.find({
  $expr: {
    $function: {
      body: function (createDate, timeRegex) {
        return timeRegex.test(createDate);
      },
      args: [{ $dateToString: { date: "$create_date", timezone: "+0530" } }, timeRegex],
      lang: "js"
    }
  }
});
I am so sorry, but after one day of researching and trying all the different combinations and npm packages, I am still not sure how to deal with the following task.
Setup:
MongoDB 2.6
Node.JS with Mongoose 4
I have a schema like so:
var trackingSchema = mongoose.Schema({
  tracking_number: String,
  zip_code: String,
  courier: String,
  user_id: Number,
  created: { type: Date, default: Date.now },
  international_shipment: { type: Boolean, default: false },
  delivery_info: {
    recipient: String,
    street: String,
    city: String
  }
});
Now the user gives me a search string, or rather an array of strings, which will be substrings of what I want to search for:
var search = ['15323', 'julian', 'administ'];
Now I want to find those documents where any of the fields tracking_number, zip_code, or the fields inside delivery_info contain any of my search elements.
How should I do that? I get that there are indexes, but I probably need a compound index, or maybe a text index? And for the search, can I then use a RegEx, or the $text/$search syntax?
The problem is that I have several strings to look for (my search), and several fields to look in. And due to one of those aspects, every approach failed for me at some point.
Your use case is a good fit for text search.
Define a text index on your schema over the searchable fields:
trackingSchema.index({
  tracking_number: 'text',
  zip_code: 'text',
  'delivery_info.recipient': 'text',
  'delivery_info.street': 'text',
  'delivery_info.city': 'text'
}, {name: 'search'});
Join your search terms into a single string and execute the search using the $text query operator:
var search = ['15232', 'julian'];
Test.find({$text: {$search: search.join(' ')}}, function(err, docs) {...});
Even though this passes all your search values as a single string, this still performs a logical OR search of the values.
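If you ever need every term to match instead of any, one option (per $text's phrase semantics, sketched here without testing against your data) is to wrap each term in double quotes so the server treats it as a phrase and ANDs the phrases together:
var search = ['15232', 'julian'];
var query = search.map(function (s) { return '"' + s + '"'; }).join(' ');
Test.find({ $text: { $search: query } }, function (err, docs) { /* ... */ });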
Why don't you just try:
var trackingSchema = mongoose.Schema({
  tracking_number: String,
  zip_code: String,
  courier: String,
  user_id: Number,
  created: { type: Date, default: Date.now },
  international_shipment: { type: Boolean, default: false },
  delivery_info: {
    recipient: String,
    street: String,
    city: String
  }
});

var Tracking = mongoose.model('Tracking', trackingSchema);

var search = ["word1", "word2" /* ... */];
var results = [];

for (var i = 0; i < search.length; i++) {
  Tracking.find({
    $or: [
      { tracking_number: search[i] },
      { zip_code: search[i] },
      { courier: search[i] },
      { 'delivery_info.recipient': search[i] },
      { 'delivery_info.street': search[i] },
      { 'delivery_info.city': search[i] }
    ]
  }).exec(function(err, trackings) {
    trackings.forEach(function(tracking) {
      // it will push every unique result into the results variable
      if (results.indexOf(tracking) < 0) results.push(tracking);
    });
  });
}
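As an aside, a sketch of collapsing the loop into a single query with $in, assuming exact whole-value matches are what you want:
Tracking.find({
  $or: [
    { tracking_number: { $in: search } },
    { zip_code: { $in: search } },
    { courier: { $in: search } },
    { 'delivery_info.recipient': { $in: search } },
    { 'delivery_info.street': { $in: search } },
    { 'delivery_info.city': { $in: search } }
  ]
}).exec(function (err, results) {
  // each matching document appears once in results
});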
Okay, I came up with this.
My schema now has an extra field search with an array of all my searchable fields:
var trackingSchema = mongoose.Schema({
  ...
  search: [String]
});
With a pre-save hook, I populate this field:
trackingSchema.pre('save', function(next) {
  this.search = [ this.tracking_number ];
  var searchIfAvailable = [
    this.zip_code,
    this.delivery_info.recipient,
    this.delivery_info.street,
    this.delivery_info.city
  ];
  for (var i = 0; i < searchIfAvailable.length; i++) {
    if (!validator.isNull(searchIfAvailable[i])) {
      this.search.push(searchIfAvailable[i].toLowerCase());
    }
  }
  next();
});
In the hope of improving performance, I also index that field (also the user_id as I limit search results by that):
trackingSchema.index({ search: 1 });
trackingSchema.index({ user_id: 1 });
Now, when searching I first list all substrings I want to look for in an array:
var andArray = [];
var searchTerms = searchRequest.split(" ");
searchTerms.forEach(function(searchTerm) {
  andArray.push({
    search: { $regex: searchTerm, $options: 'i' }
  });
});
I use this array in my find() and chain it with an $and:
Tracking.
  find({ $and: andArray }).
  where('user_id').equals(userId).
  limit(pageSize).
  skip(pageSize * page).
  exec(function(err, docs) {
    // hooray!
  });
This works.
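One hedged aside on this approach: the search terms come straight from user input, so you may want to escape regex metacharacters before handing them to $regex; a small sketch (escapeRegex is just an illustrative helper):
function escapeRegex(s) {
  // escape characters that have a special meaning inside a regular expression
  return s.replace(/[-\/\\^$*+?.()|[\]{}]/g, '\\$&');
}

var andArray = searchRequest.split(" ").map(function (term) {
  return { search: { $regex: escapeRegex(term), $options: 'i' } };
});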
I was wondering if it is possible to use capturing groups with MongoDB.
For example, assuming I have a collection of users with only their full name, and I want to get their first and last name.
Here's what I was thinking of, using capturing groups:
bulk.find( { full_name: /<first_name>(.*) <last_name>(.*)/i } ).upsert().replaceOne(
  {
    first_name: <first_name>,
    last_name: <last_name>
  }
);
bulk.execute();
Is it possible using only MongoDB ? How would you do that ?
Maybe using JavaScript:
Doc here: http://docs.mongodb.org/manual/reference/method/cursor.forEach/
Example:
db.collection.find().forEach(function(e) {
  var fullName = e.full_name;
  e.firstname = fullName.substring(/* something */);
  e.lastname = fullName.substring(/* something */);
  db.collection.save(e);
});
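For a concrete version of the same idea, a minimal sketch that assumes full_name is always "First Last" with a single space:
db.collection.find().forEach(function (e) {
  var parts = e.full_name.split(' ');
  e.first_name = parts[0];                // everything before the first space
  e.last_name = parts.slice(1).join(' '); // everything after it
  db.collection.save(e);
});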
MongoDB version 4.2 (released in August 2019) provides the $regexFind operator. From the documentation:
Provides regular expression (regex) pattern matching capability in
aggregation expressions. If a match is found, returns a document that
contains information on the first match... If your regex pattern
contains capture groups and the pattern finds a match in the input,
the captures array in the results corresponds to the groups captured
by the matching string.
Syntax:
{ $regexFind: { input: <expression> , regex: <expression>, options: <expression> } }
E.g. (I didn't verify that your regex does what you want):
db.collection.aggregate([
  {
    $project: {
      names: {
        $regexFind: { input: "$full_name", regex: /(.*) (.*)/i }
      }
    }
  }
])
and the output would be
{ "names" : { "match" : "John Doe", "idx" : 0, "captures" : [ "John", "Doe" ] } }
var thename = 'Andrew';
db.collection.find({'name':thename});
How do I query case-insensitively? I want to find the result even if it is stored as "andrew".
Chris Fulstow's solution will work (+1); however, it may not be efficient, especially if your collection is very large. Non-rooted regular expressions (those not beginning with ^, which anchors the regular expression to the start of the string), and those using the i flag for case insensitivity, will not use indexes, even if they exist.
An alternative option you might consider is to denormalize your data to store a lower-case version of the name field, for instance as name_lower. You can then query that efficiently (especially if it is indexed) for case-insensitive exact matches like:
db.collection.find({"name_lower": thename.toLowerCase()})
Or with a prefix match (a rooted regular expression) as:
db.collection.find( {"name_lower":
{ $regex: new RegExp("^" + thename.toLowerCase(), "i") } }
);
Both of these queries will use an index on name_lower.
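If you go the denormalized route, existing documents need the extra field backfilled as well; a minimal shell sketch, assuming the field is called name_lower:
db.collection.find({ name_lower: { $exists: false } }).forEach(function (doc) {
  db.collection.update(
    { _id: doc._id },
    { $set: { name_lower: doc.name.toLowerCase() } }
  );
});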
You'd need to use a case-insensitive regular expression for this one, e.g.
db.collection.find( { "name" : { $regex : /Andrew/i } } );
To use the regex pattern from your thename variable, construct a new RegExp object:
var thename = "Andrew";
db.collection.find( { "name" : { $regex : new RegExp(thename, "i") } } );
Update: For exact match, you should use the regex "name": /^Andrew$/i. Thanks to Yannick L.
I have solved it like this.
var thename = 'Andrew';
db.collection.find({'name': {'$regex': thename,$options:'i'}});
If you want to query for case-insensitive and exact, then you can go like this.
var thename = '^Andrew$';
db.collection.find({'name': {'$regex': thename,$options:'i'}});
With Mongoose (and Node), this worked:
User.find({ email: /^name@company.com$/i })
User.find({ email: new RegExp(`^${emailVariable}$`, 'i') })
In MongoDB, this worked:
db.users.find({ email: { $regex: /^name@company.com$/i }})
Both lines are case-insensitive. The email in the DB could be NaMe@CompanY.Com and both lines will still find the object in the DB.
Likewise, we could use /^NaMe@CompanY.Com$/i and it would still find email: name@company.com in the DB.
MongoDB 3.4 now includes the ability to make a true case-insensitive index, which will dramatically increase the speed of case-insensitive lookups on large datasets. It is made by specifying a collation with a strength of 2.
Probably the easiest way to do it is to set a default collation on the collection when you create it. Then all queries and indexes on it inherit that collation and will use it:
db.createCollection("cities", { collation: { locale: 'en_US', strength: 2 } } )
db.cities.createIndex( { city: 1 } ) // inherits the default collation
You can also do it like this:
db.myCollection.createIndex({city: 1}, {collation: {locale: "en", strength: 2}});
And use it like this:
db.myCollection.find({city: "new york"}).collation({locale: "en", strength: 2});
This will return cities named "new york", "New York", "New york", etc.
For more info: https://jira.mongodb.org/browse/SERVER-90
... with Mongoose on Node.js, that query would be:
const countryName = req.params.country;
{ 'country': new RegExp(`^${countryName}$`, 'i') };
or
const countryName = req.params.country;
{ 'country': { $regex: new RegExp(`^${countryName}$`), $options: 'i' } };
// ^australia$
or
const countryName = req.params.country;
{ 'country': { $regex: new RegExp(`^${countryName}$`, 'i') } };
// ^turkey$
A full code example in JavaScript, on Node.js with the Mongoose ORM and MongoDB:
// get all customers with the given country name
app.get('/customers/country/:countryName', (req, res) => {
  //res.send(`Got a GET request at /customer/country/${req.params.countryName}`);
  const countryName = req.params.countryName;

  // using a regular expression (case insensitive and exact): ^australia$
  // const query = { 'country': new RegExp(`^${countryName}$`, 'i') };
  // const query = { 'country': { $regex: new RegExp(`^${countryName}$`, 'i') } };
  const query = { 'country': { $regex: new RegExp(`^${countryName}$`), $options: 'i' } };

  Customer.find(query).sort({ name: 'asc' })
    .then(customers => {
      res.json(customers);
    })
    .catch(error => {
      // error..
      res.send(error.message);
    });
});
To find a case-insensitive string, use this:
var thename = "Andrew";
db.collection.find({ "name": new RegExp("^" + thename + "$", "i") });
I just solved this problem a few hours ago.
var thename = 'Andrew'
db.collection.find({ $text: { $search: thename } });
Case sensitivity and diacritic sensitivity are set to false by default when doing queries this way.
You can even expand upon this by selecting on the fields you need from Andrew's user object by doing it this way:
db.collection.find({ $text: { $search: thename } }).select('age height weight');
Reference: https://docs.mongodb.org/manual/reference/operator/query/text/#text
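Note that $text only works if a text index exists on the searched field; a minimal sketch of creating one for the name field used in this question:
db.collection.createIndex({ name: 'text' });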
You can use case-insensitive indexes:
The following example creates a collection with no default collation, then adds an index on the name field with a case-insensitive collation (collation locales follow the International Components for Unicode conventions).
/*
 * strength: CollationStrength.Secondary
 * Secondary level of comparison. Collation performs comparisons up to secondary
 * differences, such as diacritics. That is, collation performs comparisons of
 * base characters (primary differences) and diacritics (secondary differences).
 * Differences between base characters take precedence over secondary
 * differences.
 */
db.users.createIndex( { name: 1 }, { collation: { locale: 'tr', strength: 2 } } )
To use the index, queries must specify the same collation.
db.users.insert( [ { name: "Oğuz" },
{ name: "oğuz" },
{ name: "OĞUZ" } ] )
// does not use index, finds one result
db.users.find( { name: "oğuz" } )
// uses the index, finds three results
db.users.find( { name: "oğuz" } ).collation( { locale: 'tr', strength: 2 } )
// does not use the index, finds three results (different strength)
db.users.find( { name: "oğuz" } ).collation( { locale: 'tr', strength: 1 } )
or you can create a collection with default collation:
db.createCollection("users", { collation: { locale: 'tr', strength: 2 } } )
db.users.createIndex( { name : 1 } ) // inherits the default collation
This will work perfectly
db.collection.find({ song_Name: { '$regex': searchParam, $options: 'i' } })
You just have to add $options: 'i' to your regex query, where 'i' means case-insensitive.
To find a case-insensitive literal string:
Using regex (recommended)
db.collection.find({
  name: {
    $regex: new RegExp('^' + name.replace(/[-\/\\^$*+?.()|[\]{}]/g, '\\$&') + '$', 'i')
  }
});
Using lower-case index (faster)
db.collection.find({
  name_lower: name.toLowerCase()
});
Regular expressions are slower than literal string matching. However, an additional lowercase field will increase your code complexity. When in doubt, use regular expressions. I would suggest only using an explicitly lower-cased field if it can replace your field, that is, if you don't care about the case in the first place.
Note that you will need to escape the name before building the regex. If you want user-input wildcards, prefer appending .replace(/%/g, '.*') after escaping so that you can match "a%" to find all names starting with 'a'; see the sketch below.
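A small helper sketch combining the escaping and the optional "%" wildcard described above (the helper name is just for illustration):
function toSearchRegex(input) {
  // escape regex metacharacters, then turn a user-supplied "%" into ".*"
  var escaped = input.replace(/[-\/\\^$*+?.()|[\]{}]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/%/g, '.*') + '$', 'i');
}

db.collection.find({ name: toSearchRegex('a%') }); // all names starting with "a", any case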
Regex queries will be slower than index-based queries.
You can create an index with a specific collation as below:
db.collection.createIndex({ field: 1 }, { collation: { locale: 'en', strength: 2 }, background: true });
The above query will create an index that ignores the case of the string. The collation needs to be specified with each query so it uses the case insensitive index.
Query
db.collection.find({field:'value'}).collation({locale:'en',strength:2});
Note - if you don't specify the collation with each query, the query will not use the new index.
Refer to the mongodb doc here for more info - https://docs.mongodb.com/manual/core/index-case-insensitive/
The following query will find the documents that contain the required string, matching case-insensitively:
db.collection.find({
  name: {
    $regex: new RegExp(thename, "ig")
  }
}, function(err, doc) {
  //Your code here...
});
An easy way would be to use $toLower as below.
db.users.aggregate([
  {
    $project: {
      name: { $toLower: "$name" }
    }
  },
  {
    $match: {
      name: the_name_to_search
    }
  }
])
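Note that the search value itself must be lower-cased for the $match above to hit. A related sketch that keeps the original document shape, assuming MongoDB 3.6+ where $expr is available in find():
db.users.find({
  $expr: { $eq: [{ $toLower: "$name" }, "andrew"] }
});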