I run a Django-based forum (the framework is probably not important to the question, but still), and it has increasingly been spammed with posts that link to one specific website (www.solidwoodkitchen.co.uk - these people are apparently the worst).
I've implemented a string-blocking system that stops them from posting to the forum if the URL of the website is included in the post, but as spam bots usually do, it has figured out a way around that by breaking up the URL with other characters (e.g. w_w_w.s*olid_wood*kit_ch*en._*co.*uk .). So a couple of questions:
Is it even possible to build a regex capable of finding the specific URL within a block of text even when it has been modified like that?
If it is, would this cause a performance hit?
You could break the URL into its individual characters, then join them back together with [^a-z0-9]*?. So in this case, with www.solidwoodkitchen.co.uk, the resulting regex would look like:
w[^a-z0-9]*?w[^a-z0-9]*?w[^a-z0-9]*?[.][^a-z0-9]*?s[^a-z0-9]*?o[^a-z0-9]*?l[^a-z0-9]*?i[^a-z0-9]*?d[^a-z0-9]*?w[^a-z0-9]*?o[^a-z0-9]*?o[^a-z0-9]*?d[^a-z0-9]*?k[^a-z0-9]*?i[^a-z0-9]*?t[^a-z0-9]*?c[^a-z0-9]*?h[^a-z0-9]*?e[^a-z0-9]*?n[^a-z0-9]*?[.][^a-z0-9]*?c[^a-z0-9]*?o[^a-z0-9]*?[.][^a-z0-9]*?u[^a-z0-9]*?k
This would basically search for the entire string of characters separated by zero or more non-alphanumeric characters.
Or you could take the input text, strip out all punctuation, and then simply search for wwwsolidwoodkitchencouk.
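Here is a minimal Python sketch of both ideas (Django implies Python; the sample post text is made up for illustration):

import re

# First approach: join the characters of the blocked URL with [^a-z0-9]*?
# so that any non-alphanumeric padding between them is skipped over.
blocked = "www.solidwoodkitchen.co.uk"
pattern = re.compile("[^a-z0-9]*?".join(re.escape(c) for c in blocked), re.IGNORECASE)

post = "Buy kitchens at w_w_w.s*olid_wood*kit_ch*en._*co.*uk ."
print(bool(pattern.search(post)))  # True

# Second approach: strip everything non-alphanumeric from the post and
# look for the collapsed form of the URL.
collapsed = re.sub(r"[^a-z0-9]", "", post.lower())
print("wwwsolidwoodkitchencouk" in collapsed)  # True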
I have no knowledge of regex, but I've just downloaded a Google Chrome extension that lets me automatically direct downloads to specific folders on my computer.
I want JPGs from a stock photo website to be downloaded into a specific folder, but part of the URL changes for every single file. How do I write out the file URL so that it ignores the random section of the URL?
https://website.com/photos/IGNORE THIS PART with azAZ01 RANDOM CODE/download?force=true
Your question was a little confusing to me, but this should be the regex you need:
https:\/\/website.com\/photos\/(\w|\d)*
To break it down: the basic text (e.g. https, website.com, photos) just matches the raw text. The \/ is an escape character '\' followed by the slash we want. Then for the random part, assuming it's made up of letters and numbers, the last part translates roughly to "any word character (\w) or any digit (\d)", and the * means any number of those.
Also, Regex101.com is a helpful tool when building regexes.
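As a quick sanity check, this is how the pattern behaves from Python's re module (just a sketch; website.com is the placeholder host from the question, and Python does not require the slashes to be escaped):

import re

# Sketch only: the host and path are the placeholders from the question.
pattern = re.compile(r"https://website\.com/photos/(\w|\d)*")

url = "https://website.com/photos/aZ09randomCode/download?force=true"
match = pattern.match(url)
print(bool(match))     # True - the random segment is consumed by (\w|\d)*
print(match.group(0))  # https://website.com/photos/aZ09randomCode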
I'm trying to create a regex for validating URLs. I know there are many advanced ones out there, but I want to create my own for learning purposes.
So far I have a regex that works quite well, however I want to improve the validation for the TLD part of the URI because I feel it's not quite there yet.
Here's my regex (or find it on regexr):
/^[(http(s)?):\/\/(www\.)?a-zA-Z0-9#:._\+~#=]{2,256}\.[a-zA-Z]{2,6}\b([/#?]{0,1}([A-Za-z0-9-._~:?#[\]#!$&''()*+,;=]|(%[A-Fa-f0-9]{2}))*)$/
It works well for links such as foo.com or http://foo.com or foo.co.uk
The problem appears when you introduce subdomains or second-level domains such as co.uk because the regex will accept foo.co.u or foo.co..
I did try using the following to select the substring after the last .:
/[(http(s)?):\/\/(www\.)?a-zA-Z0-9#:._\+~#=]{2,256}[^.]{2,}$/
but this prevents me from defining the path rules of the URI.
How can I ensure that the substring after the last . but before the first /, ? or # is at least 2 characters long?
From what I can see, you're almost there. I made some modifications and it seems to work.
^(http(s)?:\/\/)?(www\.)?[a-zA-Z0-9#:._\+~#=]{2,256}\.[a-zA-Z]{2,6}([/#?;]([A-Za-z0-9-._~:?#[\]#!$&''()*+,;=]|(%[A-Fa-f0-9]{2}))*)?$
It can be shortened somewhat:
^(http(s)?:\/\/)?(www\.)?[\w#:.\+~#=]{2,256}\.[a-zA-Z]{2,6}([/#?;]([-\w.~:?#[\]#!$&''()*+,;=]|(%[A-Fa-f0-9]{2}))*)?$
(basically just tweaked your regex)
The main difference is that the parameter part is optional, but if it is there it has to start with one of /#?;. That part could probably be simplified as well.
Check it out here.
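As a rough check, here is the shortened pattern exercised from Python's re module (a sketch; the JavaScript-style \/ escapes are dropped because Python doesn't need them). It accepts the examples from the question and rejects the truncated TLDs:

import re

url_re = re.compile(
    r"^(http(s)?://)?(www\.)?[\w#:.\+~#=]{2,256}\.[a-zA-Z]{2,6}"
    r"([/#?;]([-\w.~:?#\[\]#!$&'()*+,;=]|(%[A-Fa-f0-9]{2}))*)?$"
)

for candidate in ["foo.com", "http://foo.com", "foo.co.uk", "foo.co.u", "foo.co."]:
    print(candidate, bool(url_re.match(candidate)))
# The first three match; foo.co.u and foo.co. are rejected.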
Edit:
After some experimenting, I think this one is about as simple as it'll get:
^(http(?:s)?:\/\/)?([-.~\w]+\.[a-zA-Z]{2,6})(:\d+)?(\/[-.~\w]*)?([#/#?;].*)?$
It also captures the separate parts - scheme, host, port, path and query/params.
Example here.
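For illustration, here is how the separate parts come out when that last pattern is used from Python (a sketch; the host example.co.uk is made up, and the pattern is re-spelled without the \/ escapes):

import re

url_re = re.compile(
    r"^(http(?:s)?://)?([-.~\w]+\.[a-zA-Z]{2,6})(:\d+)?(/[-.~\w]*)?([#/?;].*)?$"
)

m = url_re.match("https://www.example.co.uk:8080/path?q=regex#frag")
if m:
    scheme, host, port, path, query = m.groups()
    print(scheme, host, port, path, query)
    # https:// www.example.co.uk :8080 /path ?q=regex#frag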
There are tons of solutions for finding and/or parsing normal URLs, but none of them deals with arbitrary text, i.e. URLs that are split over several lines. How would you find a URL that can have a line break after any character?
Note: I'm not interested in the individual parts of the URL. I just want to find all URLs in a given text to convert them to links (e.g. like in plain e-mail text).
Example:
Text text text text text. Look at this:
http://stackoverfl
ow.com/
questions/15252042/
find-urls-in-text
Question question question.
Several approaches are possible:
1) Write a regex with whitespace rules after each regular character. This will certainly blow up the regex pattern, but it is the most flexible approach. For catching line breaks, use DOT_ALL mode. DOT_ALL will, however, produce the same problems as the next approach.
2) (Temporarily) remove the line breaks and use normal regex pattern matching (a sketch of this appears below, after the list of approaches). This approach has problems, though: you may include more text than necessary at the end of the URL, or fail to find a URL at all if the line break falls at the start and breaks up the protocol string.
2a) A modification of 2) could be to make several match attempts, removing only certain line breaks, e.g. after looking for an initial URL part (www, http, etc.). This is only feasible if recognition time is secondary.
3) Ease your task with domain-specific knowledge. For instance, if you know where line breaks can occur (or if they occur only at specific positions), then look for these specific cases and handle them first. Then return to the usual regex search.
3a) A variation of 3) could be to look specifically for the protocol and the page extension, using a regex with full whitespace rules to find the start and end of a URL. This obviously works only if there is always a protocol/filename_with_extension. Transform the found tokens into regular ones without whitespace (but include a space before the protocol and after the extension), then remove all line breaks in the text. Now you can match the URL with a regular regex.
There are certainly more variations possible, but the general idea is the same.
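Here is a minimal Python sketch of approach 2), using the example text from the question. Note how the result also illustrates the caveat: once the line breaks are gone, the deliberately simple URL pattern swallows the word that follows the URL:

import re

text = """Text text text text text. Look at this:
http://stackoverfl
ow.com/
questions/15252042/
find-urls-in-text
Question question question."""

# Remove the line breaks, then apply an ordinary (deliberately simple) URL pattern.
joined = text.replace("\n", "")
url_re = re.compile(r"https?://\S+")
print(url_re.findall(joined))
# ['http://stackoverflow.com/questions/15252042/find-urls-in-textQuestion']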
I have a long HTML file that contains the names of organizations and their URLs. Each organization's "section" in the code is demarcated by the word "organization", followed by a lot of code (with the URL located somewhere inside), and ends with the word "organization" again.
For example:
organization -- a lot of code (with the URL located somewhere inside) -- organization
I have tried to use regex to search and extract the URL, but to no avail.
organization(?<Protocol>\w+):\/\/(?<Domain>[\w#][\w.:#]+)\/?[\w\.?=%&=\ #/$,]*organization
I suspect my problem lies in trying to demarcate the search for URLs by just using the word "organization", but I am not sure.
Try group 1 from this:
organization.*\b(\w+://[\w.?%&=#/$,-]+).*?organization
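For example, in Python (the HTML below is a made-up one-line stand-in for the file described in the question):

import re

html = 'organization <div class="entry"> visit http://www.example.org/page?id=1 </div> organization'

m = re.search(r"organization.*\b(\w+://[\w.?%&=#/$,-]+).*?organization", html)
if m:
    print(m.group(1))  # http://www.example.org/page?id=1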
Your current regex is searching for something sandwiched immediately between two instances of "organization". If there's any chance of characters existing between "organization" and your URL, you'll need to introduce a non-greedy match for any instances of anything (.*?), and if there are newlines in the mix you'll need to use (?:.|\n)*?.
So your regex becomes:
organization(?:.|\n)*?(?<Protocol>\w+):\/\/(?<Domain>[\w#][\w.:#]+)\/?[\w\.?=%&=\ #/$,]*(?:.|\n)*?organization
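A quick sketch of the same idea in Python, with the pattern re-spelled for Python's named-group syntax ((?P<name>...)) and the made-up HTML now spread over several lines:

import re

html = 'organization\n<div class="entry">\nvisit http://www.example.org/page?id=1\n</div>\norganization'

pattern = (r"organization(?:.|\n)*?(?P<Protocol>\w+)://"
           r"(?P<Domain>[\w#][\w.:#]+)/?[\w.?=%&= #/$,]*(?:.|\n)*?organization")
m = re.search(pattern, html)
if m:
    print(m.group("Protocol"), m.group("Domain"))  # http www.example.org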
First of all I created my own regex to find all URLs in a text, because:
When I searched SO and Google, I only found regexes for specific URL constructions, like images, etc.
I found a pretty complete regex in the PHP manual itself (see the "splattermania at freenet dot de 01-Oct-2009 12:01" post at http://php.net/manual/en/function.preg-match.php) that can find almost anything that resembles a URL, even something as short as "bit.ly".
This pattern has a few errors and constraints, so I'm fixing and enhancing it.
Now the pattern structure seems right, but I'm not sure all valid characters are present. Please post sample URLs to test my pattern against. It might be laziness, but I don't want to read pages and pages of references to find all of them; I need to focus on the development. If you have a summary of the valid characters for username, password, path, query and anchor that you can share, that would be very helpful.
Best Regards!
The pattern you linked to does indeed match a lot of URLs, both valid and invalid. It's not really a surprise since nearly everything in that regex is optional; as you wrote yourself, it even matches bit.ly, so it's easy to see how it would match lots of non-URL stuff.
It doesn't take new Unicode domain names into account, for one (e.g., http://www.müller.de).
It doesn't match valid URLs like
http://msdn.microsoft.com/en-us/library/aa752574(VS.85).aspx
It doesn't match relative paths (might not be necessary, though) like /cgi-bin/version.pl.
It doesn't match mailto: links.
It doesn't match URLs like http://1.2.3.4. Don't even ask about IPv6 :)
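To make those misses concrete, here is a small Python sketch. The pattern below is a deliberately naive stand-in, not the one from the PHP manual, but it trips over the same kinds of input:

import re

# Naive stand-in pattern, only for illustrating the failure modes above.
naive = re.compile(r"https?://[A-Za-z0-9.-]+(?:/[A-Za-z0-9./?=&%_-]*)?")

samples = [
    "http://www.müller.de",                                          # Unicode host: match is cut short
    "http://msdn.microsoft.com/en-us/library/aa752574(VS.85).aspx",  # parentheses: path is cut short
    "mailto:user@example.com",                                       # mailto: no match at all
]
for s in samples:
    m = naive.search(s)
    print(s, "->", m.group(0) if m else None)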
All in all, regular expressions are NOT the right tool to reliably match or validate URLs. This is a job for a parser. If you can live with many false positive and false negative matches, then regexes are fine.
Please read Jan Goyvaerts' excellent essay on this subject: Detecting URLs in a block of text.