OpenLDAP regex search with shell script

For an OpenLDAP database I need to find all users that have a telephone number matching a regex pattern, and are in a given Organizational Unit.
According to this: LDAP search using regular expression, it is not possible with ldapsearch (which would otherwise have been my first choice).
I would like to do as little work as possible on the client side, and querying all users from an organizational unit and then filtering them with grep or something similar seems too resource-consuming. Is there a better way to do it?
Also, I'm not very familiar with shell, so I'm a little afraid of sed, but I hear it's powerful and performs well at regex filtering. If I need to do the filtering client-side, what would be the easiest way (without compromising performance)?
And about batched input: if I get a lot of partial phone numbers in a CSV file, and each partial number can have the type "prefix"/"postfix"/"regex" (so there are two columns: type and partial number), what would be best performance-wise?
Should I just get all the users in the organizational unit and filter them with the shell script (iterating through all the users and trying to match any of the numbers)?
Or should I make a query for every number (this is only a viable option if regex filtering on attributes is possible in an LDAP query)?
At my level of knowledge the first one is the way to go, but is there a better solution?
I'm using OpenLDAP 2.4.23, if that matters in any way.

The results of using regular expressions with LDAP data might not be what you expect. LDAP data is not strings, but specific types of data defined by the schema, and applications must always retrieve the schema to learn how to deal with attribute values. The telephoneNumber attribute has a specific syntax, and regular expressions may not work. In general, matching rules must be used by LDAP clients to compare and match data stored in a directory server. In fact, best practice is that applications always use matching rules, not native-language comparison operators or regular expressions. For more information, please see LDAP: Programming Practices and LDAP: Using Matching Rules.
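That said, if you do end up filtering client-side, a single search over the OU followed by one combined regex pass is usually cheaper than one query per CSV row. Here is a minimal Python sketch using the third-party ldap3 module; the host, credentials, base DN, and the two-column CSV layout (type, partial number) are assumptions to adapt:

import csv
import re
from ldap3 import Connection, Server, SUBTREE

# Turn one CSV row into a regex fragment; types are "prefix", "postfix" or "regex".
def row_to_regex(ptype, partial):
    if ptype == "prefix":
        return "^" + re.escape(partial)
    if ptype == "postfix":
        return re.escape(partial) + "$"
    return partial  # "regex": use the pattern as-is

# Merge all rows into one alternation so every number is scanned only once.
with open("numbers.csv", newline="") as f:
    combined = re.compile("|".join(
        "(?:%s)" % row_to_regex(t, p) for t, p in csv.reader(f)))

# One search for the whole OU; the regex filtering happens client-side.
conn = Connection(Server("ldap://localhost"),
                  "cn=admin,dc=example,dc=com", "secret", auto_bind=True)
conn.search("ou=people,dc=example,dc=com", "(telephoneNumber=*)",
            SUBTREE, attributes=["uid", "telephoneNumber"])
for entry in conn.entries:
    if any(combined.search(str(n)) for n in entry.telephoneNumber.values):
        print(entry.entry_dn, entry.telephoneNumber.values)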

Related

Working with strings in GCP-Workflows and GCP-Admin

I'm integrating a project in GCP-Workflows with GCP-Admin, but I'm having trouble working with some data: an extracted date is delivered in the format 2020-12-28T11:20:05.000Z, so I can't turn the string into an int, and apparently there is no function like substring() in GCP either. I need to use the date in an IF, checking whether it is greater or less than a reference.
How can I do this?
There is some lack of function implementations in Workflows for now. New ones are coming very soon, but I don't know if they will solve your problem.
Anyway, with Workflows, the correct pattern when a built-in function isn't implemented is to call an endpoint, for example a Cloud Function or a Cloud Run service, which performs the transformation for you and returns the expected result.
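For example, a tiny HTTP Cloud Function (Python runtime) could do the parsing and comparison; this is only a sketch, and the JSON field names "date" and "reference" are made up for illustration:

from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%fZ"  # e.g. 2020-12-28T11:20:05.000Z

def compare_date(request):
    # Workflows would call this endpoint with http.post and a JSON body.
    body = request.get_json()
    value = datetime.strptime(body["date"], FMT)
    reference = datetime.strptime(body["reference"], FMT)
    return {"is_after": value > reference}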
Quite boring to do, but don't hesitate to open a feature request on the issue tracker; the product team is very reactive and loves user feedback!
The Workflows standard library now includes a text module with functions for searching (including regular expressions), splitting, substrings and case transformations.

How to do an efficient search for dynamically defined regexes in Elasticsearch?

I am working on a file system project (like Dropbox). For the file system, I have data indexed for full-text search in Elasticsearch. I have lots of large documents and searching works really well. But now my requirement is to use this data to query for some regexes. We have an admin panel for the customer, and the regexes will be defined dynamically by the customer in the admin panel.
I know I can do regex searches in Elasticsearch, but here the problem is the tokenizer. For instance, let's assume the user wants to create a regex pattern for 3 letters, '-', and 2 digits, matching "ABC-12" or "ASD-34". The problem here is my tokenizer. The defined tokenizer omits the character '-' and indexes "ABC" and "12" separately. You may say not to omit the '-' character, but the user may also want to search for a pattern with 3 letters, a white space, and 2 digits to retrieve "ABC 12"; here the white space is the problem. Somehow I have to use a tokenizer, and no tokenizer can cover all dynamic regexes. So searching the index does not solve my problem.
Actually, for this type of search I have another option, which is to query all data with match_all. With the search scroll API I can retrieve all the original documents in batches, and after each response from the scroll API I can run my regex finder in a separate thread, preparing the desired data as the scrolling proceeds. Do you think this option is good for big data? I think I will need a lot of CPU power and RAM. I know it is not an elegant solution, but I cannot find any effective solution for my requirement. I am open to better solutions. Thanks.
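Roughly, what I have in mind would look like this with the official Python client (8.x-style keyword arguments; the index and field names are placeholders):

import re
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
pattern = re.compile(r"[A-Z]{3}[- ][0-9]{2}")  # matches "ABC-12" or "ABC 12"

resp = es.search(index="files", scroll="2m", size=500,
                 query={"match_all": {}}, source=["content"])
while resp["hits"]["hits"]:
    for hit in resp["hits"]["hits"]:
        # The regex runs client-side over the original document text.
        for m in pattern.finditer(hit["_source"]["content"]):
            print(hit["_id"], m.group())
    resp = es.scroll(scroll_id=resp["_scroll_id"], scroll="2m")
es.clear_scroll(scroll_id=resp["_scroll_id"])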
I believe ES allows you to analyse the same field multiple times. The documentation states that new analysers can be added to existing fields later:
New multi-fields can be added to existing fields using the PUT mapping API.
This opens up the possibility of dynamically adding new analysers (and tokenisers, for that matter) as you find out what sort of regexes your users are after. I am not sure how trivial it will be for your particular use case, but this seems like an avenue to explore.
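A minimal sketch of that avenue with the Python client; the index name, field name, and the choice of a whitespace analyzer (which keeps "ABC-12" as a single token) are assumptions:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Add a multi-field with a different analyzer to the existing field,
# via the PUT mapping API quoted above.
es.indices.put_mapping(index="files", properties={
    "content": {
        "type": "text",
        "fields": {
            "ws": {"type": "text", "analyzer": "whitespace"}
        }
    }
})

# After reindexing existing documents, a regexp query can target the
# new sub-field, where "ABC-12" survives as one token.
resp = es.search(index="files",
                 query={"regexp": {"content.ws": "[A-Z]{3}-[0-9]{2}"}})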

Reorder list of numbered items using regular expressions

I've got this list of items (it's in an SQL script) and I would like to reorder it by number:
from this:
,user_1
,user_2
,user_3
,name_1
,name_2
,name_3
to this:
,user_1
,name_1
,user_2
,name_2
,user_3
,name_3
I use SQL Server Management Studio 2008, so I have the ability to replace using regex, but I don't know if that kind of manipulation is even possible with regular expressions.
Just copy-paste them into Excel, then sort, and then copy-paste back into SSMS.
It's that simple :)
I think you need to add a bit more description for this to really make sense.
Perhaps post the SQL script?
Is this data stored in a single varchar field, and is that the reason you are looking for a regex solution?
You can easily parse the comma-separated values using a regex, but you would need some other function to sort the result, and it can fairly quickly get messy to do this in SQL.
In general I would say this problem is better handled outside of the SQL statement, e.g. by processing it in your favorite programming/scripting language after getting the result back from SQL.
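For this particular list, a stable sort on the numeric suffix is enough, because it preserves the original user-before-name order within each number. A few lines of Python as a sketch:

import re

lines = [",user_1", ",user_2", ",user_3", ",name_1", ",name_2", ",name_3"]

# Python's sort is stable: sorting on the number alone keeps user_N
# ahead of name_N, since user_N appears first in the input.
lines.sort(key=lambda s: int(re.search(r"_(\d+)$", s).group(1)))
print("\n".join(lines))  # ,user_1 ,name_1 ,user_2 ,name_2 ,user_3 ,name_3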
Also, this problem indicates a design problem with the database layout; if at all possible, the preferred way to solve this would probably be to restructure it.

File system regular expression search tool

What is the best tool for making complex (multi-line) regular-expression searches over file contents, with good reporting capabilities?
I need to make a report over a large Java/JSP code base and I have to make some charts afterward.
Eclipse is rather good at searches, but it does not provide a good report of what is found. It just shows the tree of files, whereas I would like to see a table with columns corresponding to the full match, each group, file name, file path, file date, maybe some version control information, etc. Then I can transfer this table to Excel and make the graphs that I want.
Is there some generic file system search tool that has such capabilities? Or maybe there is some Eclipse plugin that can give better reports (note that I'm stuck on Eclipse 3.1.2)?
Agent Ransack, TextPad, and UltraEdit allow you to perform regular expression searches against the file system. My favorite is Agent Ransack as you can specify regular expressions for the file names and for the content.
PowerGREP (on Windows) can be used to do (most of) that. You can define the format of your search results quite freely. I haven't yet tried adding file meta information to the search results, but that should work. Not sure if you can add version control information (where would that come from?); perhaps if you could be a bit more specific, I could check.
Other than that, why not write a small Python/Ruby/Perl script like JasonTrue suggested?
For searches over code bases with queries that understand the language structure, look at the SD Search Engine. This tool indexes large source bases to provide very fast query responses.
Queries are stated in terms of language elements (identifiers, operators, strings, ...) with constraints over the language elements (including wildcards and regexps on identifiers, strings, and comments, as well as range constraints on numbers). Language whitespace and line breaks (and comments, unless you insist) are ignored.
If you want to do a plain regexp search on file character content, you can do that too, but you don't get the speed advantage of the index; it runs more like regular grep.
The interactive query result is shown in a hit window with other hits; by clicking, you can go to a window containing the full source code of a hit.
In logging mode, all hits found are written to a log file with N lines of context, where you configure N. That's probably the report you want.
um... grep -r ?
Or ruby/perl/python, if you want to have more control over the final output; it sounds like what you're after would only be a few lines.
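As a sketch of those few lines (Python here; the file extensions and CSV columns mirror the question, everything else is an assumption): walk the tree, apply a multi-line regex, and emit a CSV table that Excel can chart:

import csv, os, re, sys, time

# Hypothetical usage: python report.py /path/to/src "some(regex)here"
root_dir, regex = sys.argv[1], sys.argv[2]
pattern = re.compile(regex, re.MULTILINE | re.DOTALL)

writer = csv.writer(sys.stdout)
writer.writerow(["match", "groups", "file", "path", "modified"])
for root, _, files in os.walk(root_dir):
    for name in files:
        if not name.endswith((".java", ".jsp")):
            continue
        path = os.path.join(root, name)
        with open(path, errors="replace") as f:
            text = f.read()
        for m in pattern.finditer(text):
            mtime = time.strftime("%Y-%m-%d",
                                  time.localtime(os.path.getmtime(path)))
            writer.writerow([m.group(), "|".join(m.groups("")),
                             name, path, mtime])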

Allowing code snippets in form input while preventing XSS and SQL injection attacks

How can one allow code snippets to be entered into an editor (as Stack Overflow does), such as FCKeditor or any other editor, while preventing XSS, SQL injection, and related attacks?
Part of the problem here is that you want to allow certain kinds of HTML, right? Links for example. But you need to sanitize out just those HTML tags that might contain XSS attacks like script tags or for that matter even event handler attributes or an href or other attribute starting with "javascript:". And so a complete answer to your question needs to be something more sophisticated than "replace special characters" because that won't allow links.
Preventing SQL injection may be somewhat dependent upon your platform choice. My preferred web platform has a built-in syntax for parameterizing queries that mostly prevents SQL injection (called cfqueryparam). If you're using PHP and MySQL, there is the similar native mysql_real_escape_string() function. (I'm not sure the PHP function technically creates a parameterized query, but it has worked well for me in preventing SQL injection attempts thus far; I've seen a few that were safely stored in the db.)
On the XSS protection, I used to use regular expressions to sanitize input for this kind of thing, but have since moved away from that method because of the difficulty involved in allowing things like links while also removing the dangerous code. What I've moved to as an alternative is XSLT. Again, how you execute an XSL transformation may vary depending on your platform. I wrote an article for the ColdFusion Developer's Journal a while ago about how to do this, which includes both a boilerplate XSL sheet you can use and shows how to make it work with CF using the native XmlTransform() function.
The reason why I've chosen to move to XSLT for this is twofold.
First, validating that the input is well-formed XML eliminates the possibility of an XSS attack using certain string-concatenation tricks.
Second, it's then easier to manipulate the XHTML packet using XSL and XPath selectors than it is with regular expressions, because they're designed specifically to work with a structured XML document, whereas regular expressions were designed for raw string manipulation. So it's a lot cleaner and easier, I'm less likely to make mistakes, and if I do make one, it's easier to fix.
Also, when I tested them I found that WYSIWYG editors like CKEditor (he removed the F) preserve well-formed XML, so you shouldn't have to worry about that as a potential issue.
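To illustrate the idea (this is not the article's stylesheet): a short whitelist transform, shown with Python's lxml since the XSLT itself is platform-independent; the allowed tag set is an assumption:

from lxml import etree

# Hypothetical whitelist: allowed elements are copied without attributes
# (so no onclick etc.); links keep only http(s) hrefs; everything else
# is dropped, though its text content still passes through.
XSL = b"""<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" omit-xml-declaration="yes"/>
  <xsl:template match="p|b|i|em|strong|ul|ol|li|pre|code|br">
    <xsl:element name="{local-name()}"><xsl:apply-templates/></xsl:element>
  </xsl:template>
  <xsl:template match="a[starts-with(@href,'http://') or starts-with(@href,'https://')]">
    <a href="{@href}"><xsl:apply-templates/></a>
  </xsl:template>
  <xsl:template match="*"><xsl:apply-templates/></xsl:template>
</xsl:stylesheet>"""
transform = etree.XSLT(etree.XML(XSL))

def sanitize(fragment):
    # Parsing enforces well-formedness first, as described above.
    doc = etree.fromstring("<root>%s</root>" % fragment)
    return str(transform(doc))

print(sanitize('<p>ok <script>alert(1)</script>'
               '<a href="javascript:x">bad</a>'
               '<a href="https://example.com">good</a></p>'))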
The same rules apply for protection: filter input, escape output.
In the case of input containing code, filtering just means that the string must contain printable characters, and maybe you have a length limit.
When storing text into the database, either use query parameters, or else escape the string to ensure you don't have characters that create SQL injection vulnerabilities. Code may contain more symbols and non-alpha characters, but the ones you have to watch out for with respect to SQL injection are the same as for normal text.
Don't try to duplicate the correct escaping function. Most database libraries already contain a function that correctly escapes all the characters that need it (which may be database-specific). It should also handle special issues with character sets. Just use the function provided by your library.
I don't understand why people say "use stored procedures!" Stored procs give no special protection against SQL injection. If you interpolate unescaped values into SQL strings and execute the result, that is vulnerable to SQL injection, whether you do it in application code or in a stored proc.
When outputting to the web presentation, escape HTML-special characters, just as you would with any text.
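A compact Python sketch of both halves (query parameters on the way in, HTML escaping on the way out), using the standard sqlite3 and html modules purely for illustration:

import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snippets (body TEXT)")

user_code = "#include <stdio.h> // '); DROP TABLE snippets;--"

# Storing: the ? placeholder keeps quotes and comment markers as data,
# never as SQL, so no hand-rolled escaping is needed.
conn.execute("INSERT INTO snippets (body) VALUES (?)", (user_code,))

# Output: HTML-escape so <, >, & and quotes render as literal text
# instead of executing in the browser.
(body,) = conn.execute("SELECT body FROM snippets").fetchone()
print(html.escape(body))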
The best thing that you can do to prevent SQL injection attacks is to make sure that you use parameterized queries or stored procedures when making database calls. Normally, I would also recommend performing some basic input sanitization as well, but since you need to accept code from the user, that might not be an option.
On the other end (when rendering the user's input to the browser), HTML encoding the data will cause any malicious JavaScript or the like to be rendered as literal text rather than executed in the client's browser. Any decent web application server framework should have the capability.
I'd say one could replace all < by &lt;, etc. (using htmlentities in PHP, for example), and then pick out the safe tags with some sort of whitelist. The problem is that the whitelist may be a little too strict.
Here is a PHP example:
$code = getTheCodeSnippet();
$code = htmlentities($code);
$code = str_ireplace("&lt;br&gt;", "<br>", $code); // example: whitelist <br> tags
// One could also use regular expressions for these tags
To prevent SQL injections, you could replace all ' and \ chars with an "inoffensive" escaped equivalent, like \' and \\, so that the following C line
#include <stdio.h>//'); Some SQL command--
wouldn't have any negative results in the database.