Regexp to grab protocol from URL

Let's say I have a variable called URL and it's assigned a value of http://www.google.com. I can also receive the URL via FTP, in which case it'll be ftp://ftp.google.com. How can I grab everything before the :? I'll have an if/else condition afterwards to test the logic.

/^[^:]+/
If you want to prevent 'www.foobar.com' (which has no protocol specified) from matching as a protocol:
/^[^:]+(?=:\/\/)/

You mean like this?
/^(.*?):/
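For a quick sanity check of all three patterns, here's a Python sketch (any PCRE-style engine behaves the same way):
import re
for url in ("http://www.google.com", "ftp://ftp.google.com", "www.foobar.com"):
    plain = re.match(r"^[^:]+", url)          # everything before the first :
    lazy = re.match(r"^(.*?):", url)          # lazy variant; needs a : to match
    strict = re.match(r"^[^:]+(?=://)", url)  # only matches if :// follows
    print(url,
          plain.group(0) if plain else None,
          lazy.group(1) if lazy else None,
          strict.group(0) if strict else None)
# www.foobar.com matches the first pattern in full, and the other two not at all.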

Regex to differentiate APIs

I need to create a regex to help determine the number of times an API is called. We have multiple APIs and this API is of the following format:
/foo/bar/{barId}/id/{id}
The above endpoint also supports query parameters so the following requests would be valid:
/foo/bar/{barId}/id/{id}?start=0&limit=10
The following requests are also valid:
/foo/bar/{barId}/id/{id}/
/foo/bar/{barId}/id/{id}
We also have the following endpoints:
/foo/bar/{barId}/id/type/
/foo/bar/{barId}/id/name/
/foo/bar/{barId}/id/{id}/price
My current regex to extract calls made only to /foo/bar/{barId}/id/{id} looks something like this:
\/foo\/bar\/(.+)\/id\/(?!type|name)(.+)
But the above regex also includes calls made to /foo/bar/{barId}/id/{id}/price endpoint.
I could check that the string after {id}/ isn't price and exclude those calls, but that isn't a long-term solution: if we add another endpoint, we'd need to update the regex again.
Is there a way to filter calls made only to:
/foo/bar/{barId}/id/{id}
/foo/bar/{barId}/id/{id}/
/foo/bar/{barId}/id/{id}?start=0&limit=10
Such that /foo/bar/{barId}/id/{id}/price isn't also pulled in?
\/foo\/bar\/(.+)\/id\/(?!type|name)(.+)
There is something in your regex which is the cause of your problem: (.+) greedily matches every character after it, including slashes. Replace it with [^/]+ and add /?(?!.+) at the end so that nothing may follow an optional trailing slash. This is working for me:
/foo/bar/([^/]+)/id/(?!type|name)([^/]+)/?(?!.+)
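To verify against the question's examples, a quick Python sketch (the concrete {barId}/{id} values are made up):
import re
pattern = re.compile(r"/foo/bar/([^/]+)/id/(?!type|name)([^/]+)/?(?!.+)")
paths = [
    "/foo/bar/42/id/7",                   # match
    "/foo/bar/42/id/7/",                  # match
    "/foo/bar/42/id/7?start=0&limit=10",  # match
    "/foo/bar/42/id/type/",               # no match
    "/foo/bar/42/id/name/",               # no match
    "/foo/bar/42/id/7/price",             # no match
]
for p in paths:
    print(p, bool(pattern.match(p)))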

Write a url path parameter to a query string with haProxy

I'm trying to re-write a URL such as
http://ourdomain.com/hotels/vegas?cf=0
to
http://ourdomain.com?d=vegas&cf=0
using haProxy.
We used to do it with Apache using
RewriteRule ^hotels/([^/]+)/?\??(.*)$ ?d=$1&$2 [QSA]
I've tried
reqrep ^([^\ :]*)\ /hotels/(.*) \1\ /?d=\2
But that gives me http://ourdomain.com?d=vegas?cf=0
And
reqrep ^([^\ :]*)\ /hotels/([^/]+)/?\??(.*) \1\ /?d=\2&\3
Just gives me a 400 error.
It would be nice to do it with acl's but I can't see how that would work.
reqrep ^([^\ :]*)\ /hotels/([^/]+)/?\??(.*) \1\ /?d=\2&\3
Just gives me a 400 error.
([^/]+) is too greedy when everything following it /?\??(.*) is optional. It's mangling the last part of the request, leading to the 400.
Remember what sort of data you're working with:
GET /path?query HTTP/1.(0|1)
Replace ([^/]+) with ([^/\ ]+) so that anything after and including the space will be captured by \3, not \2.
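The difference is easy to see outside haproxy; here's a Python sketch of the same substitution (in haproxy configs the space must be escaped as \ , in Python it's a literal space):
import re
request_line = "GET /hotels/vegas?cf=0 HTTP/1.1"
# ([^/]+) greedily runs past the query string into "HTTP/1.1",
# mangling the request line -- hence the 400:
print(re.sub(r"^([^ :]*) /hotels/([^/]+)/?\??(.*)", r"\1 /?d=\2&\3", request_line))
# GET /?d=vegas?cf=0 HTTP&1.1
# Excluding the space keeps \2 inside the path and query:
print(re.sub(r"^([^ :]*) /hotels/([^/ ]+)/?\??(.*)", r"\1 /?d=\2&\3", request_line))
# GET /?d=vegas?cf=0& HTTP/1.1 (closer; the ? alignment issue remains, see the update below)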
Update: it seems that the above is not quite perfect, since the alignment of the ? still doesn't work out. This -- and the original 400 error -- highlight some of the pitfalls with req[i]rep -- it's very low level request munging.
HAProxy 1.6 introduced several new capabilities that make request tweaking much cleaner, and this is actually a good case to illustrate several of them together. Note that these examples also use anonymous ACLs, wrapped in { }. The documentation discourages these a little, but only because they're unwieldy to maintain when the same set of conditions has to be tested in several places (named ACLs can of course be reused more easily); they're perfect for a case like this. Note that the braces must be surrounded by at least 1 whitespace character due to configuration parser limitations.
Variables can be used to stash values. They come in four scopes:
request: goes out of scope as soon as a back-end is selected;
response: comes into scope only after the back-end responds;
transaction: persists from request to response, so it can be set before the trip to the back-end and is still in scope when the response comes back;
session: in scope across multiple requests from the same browser during this connection, if the browser reuses the connection.
The regsub() converter takes the preceding value as its input and returns that value passed through a simple regex replacement.
If the path starts with /hotels/, capture the path, scrub out ^/hotels/ (replacing it with the empty string that appears after the next comma), and stash it in a request variable called req.hotel.
http-request set-var(req.hotel) path,regsub(^/hotels/,) if { path_beg /hotels/ }
Processing of most http-request steps is done in configuration file order, so, at the next instruction, if (and only if) that variable has a value, we use http-request set-path with an argument of / to reset the path to just /. Testing the variable is needed so that we don't do this with every request -- only the ones for /hotels/. It might be that you actually need something more like if { path_reg /hotels/.+ } since /hotels/ by itself might be a valid path we should leave alone.
http-request set-path / if { var(req.hotel) -m found }
Then, we use http-request set-query to set the query string to a value created by concatenating the value of the req.hotel variable with & and the original query string, which we obtain using the query fetch.
http-request set-query d=%[var(req.hotel)]&%[query] if { var(req.hotel) -m found }
Note that the query fetch and http-request set-query both have some magical behavior -- they take care of the ? for you. The query fetch does not return it, and http-request set-query does not expect you to provide it. This is helpful because we may need to be able to handle requests correctly whether or not the ? is present in the original request, without having to manage it ourselves.
With the above configuration, GET /hotels/vegas?cf=0 HTTP/1.1 becomes GET /?d=vegas&cf=0 HTTP/1.1.
If the initial query string is completely empty, GET /hotels/vegas HTTP/1.1 becomes GET /?d=vegas& HTTP/1.1. That looks a little strange, but it should be completely valid. A slightly more convoluted configuration to test for the presence of an initial query string could prevent that, but I don't see it being an issue.
So, we've turned 1 line of configuration into 3, but I would argue that those three lines are much more intuitive about what they are accomplishing and it's certainly a less delicate operation than massaging the entire start line of the request with a regex. Here they are, together, with some optional whitespace:
http-request set-var(req.hotel) path,regsub(^/hotels/,) if { path_beg /hotels/ }
http-request set-path / if { var(req.hotel) -m found }
http-request set-query d=%[var(req.hotel)]&%[query] if { var(req.hotel) -m found }
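To trace the combined effect, here's a rough Python sketch of the same logic (a hypothetical helper for illustration only; haproxy itself works on the raw request):
import re
def rewrite(path_and_query):
    path, _, query = path_and_query.partition("?")
    if not path.startswith("/hotels/"):
        return path_and_query
    hotel = re.sub(r"^/hotels/", "", path)  # set-var(req.hotel) path,regsub(^/hotels/,)
    return "/?d=%s&%s" % (hotel, query)     # set-path / plus set-query d=...&...
print(rewrite("/hotels/vegas?cf=0"))  # /?d=vegas&cf=0
print(rewrite("/hotels/vegas"))       # /?d=vegas&  (trailing &, as noted above)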
This is a working solution using reqrep
acl is_destination path_beg /hotels/
reqrep ^([^\ :]*)\ /hotels/([^/\ \?]+)/?\??([^\ ]*)(.*)$ \1\ /?d=\2&\3\4 if is_destination
I'm hoping that the acl will remove the need to run regex on everything (hence lightening the load a bit), but I'm not sure that's the case.

nginx - URL encode query string

I have an nginx reverse-proxy which needs to pass on the query string it receives. However, this query string is not well formatted and can contain JSON that is not URL encoded, i.e. it contains curly brackets ({}), commas, colons and double quotes! Unfortunately, I have no control over this, and it causes the downstream server to barf when parsing the string.
Is there a way to correctly URL encode this string before proxying it?
I can replace the curly brackets as I know there will only be one instance of each using the config:
if ($args ~* '(.*){(.*)}(.*)') {
    set $args $1%7B$2%7D$3;
    rewrite (.*)$ $1;
}
proxy_pass http://127.0.0.1:8080;
However, I don't know in advance how many fields the JSON will have so it's difficult to use the same logic as above for the rest of the object.
I should also mention that I don't think this is related to nginx url-decoding parameters as I am not using a URI in the proxy_pass.
Thanks!
UPDATE: For the time being, the JSON object seems to be sending the same properties so this is what I've used as a workaround. It's pretty hideous and will break if the number of properties changes but does the job for now.
if ($args ~* '(.*){"(.*)":"(.*)","(.*)":"(.*)","(.*)":"(.*)","(.*)":"(.*)","(?<group10>.*)":"(?<group11>.*)"}(?<group12>.*)') {
set $args $1%7B%22$2%22%3A%22$3%22%2C%22$4%22%3A%22$5%22%2C%22$6%22%3A%22$7%22%2C%22$8%22%3A%22$9%22%2C%22${group10}%22%3A%22${group11}%22%7D${group12};
rewrite (.*)$ $1;
}
proxy_pass http://127.0.0.1:8080;
Note that since this returns more than 9 regex groups, I had to name groups 10, 11 and 12 otherwise they get interpreted as $1 + the digit 0, 1 or 2.
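For reference, that escape mapping ({ becomes %7B, " becomes %22, : becomes %3A, , becomes %2C, } becomes %7D) is exactly what a standard percent-encoder produces; a quick Python check with a made-up JSON value:
from urllib.parse import quote
raw = '{"city":"vegas","limit":"10"}'  # hypothetical example value
print(quote(raw, safe=''))
# %7B%22city%22%3A%22vegas%22%2C%22limit%22%3A%2210%22%7D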
Is there a more robust way of doing this?
Personally, I don't like a solution with a single if statement, because it doesn't look very readable, flexible or maintainable. You could see whether a combination of location or rewrite statements, where each one handles a specific encoding case, would work; see http://mdoc.su/ for a fun project that's very heavy on internal redirects, although I believe nginx may have a limit on the total number of indirections.
Otherwise, provided that you cannot modify the backend, another option is to automatically redirect misbehaving clients and/or requests to an auxiliary backend, whose only purpose is to re-encode the string correctly, providing an X-Accel-Redirect HTTP Response Header as its output (as per http://nginx.org/r/proxy_ignore_headers), which nginx will use to make a subsequent internal redirect / request to the actual backend.

URL general format

I have written a C++ program that allows URLs to be posted onto YouTube. It works by taking the URL as input, either typed into the program or from direct input, and then replacing every '/' and '.' in the string with '*'. This modified string is then put on your clipboard (this is solely for Windows users).
Of course, before I can even call the program usable, the transformation has to be reversible: I will need to know where '.' and '/' are used in URLs. I have looked at this article: http://en.wikipedia.org/wiki/Uniform_Resource_Locator , and know that '.' is used when dealing with the "master website" (in the case of this URL, "en.wikipedia.org") and that '/' is used afterwards, but I have been to other websites, such as http://msdn.microsoft.com/en-us/library/windows/desktop/ms649048%28v=vs.85%29.aspx , where this simply isn't the case (it even replaced '(' and ')' with "%28" and "%29", respectively!).
I also seem to have requested a .aspx file, whatever that is. Also, there is a '.' inside the parentheses in that URL. I have even tried to look at regular expressions (I don't quite fully understand those yet...) regarding URLs. Could someone tell me (or link me to) the rules regarding the use of '.' and '/' in URLs?
Can you explain why you are doing this convoluted thing? What are you trying to achieve? It may be that you don't need to know as much as you think, once you answer that question.
In the meantime, here is some information. A URL really comprises a number of sections (there's a small parsing example after this breakdown):
http: - the "scheme" or protocol used to access the resource. "HTTP", "HTTPS",
"FTP", etc are all examples of a scheme. There are many others
// - separates the protocol from the host (server) address
myserver.org - the host. The host name is looked up against a DNS (Domain Name System)
service and resolved to an IP address - the "phone number" of the machine
which can serve up the resource (like "98.139.183.24" for www.yahoo.com)
www.myserver.org - the host with a prefix. Sometimes the same domain (`myserver.org`)
connects multiple servers (or ports) and you can be sent straight to the
right server with the prefix (mail., www., ftp., ... up to the
administrators of the domain). Conventionally, a server that serves content
intended for viewing with a browser has a `www.` prefix, but there's no rule
that says this must be the case.
:8080/ - sometimes, you see a colon followed by up to five digits after the domain.
this indicates the PORT on the server where you are accessing data
some servers allow certain specific services on just a particular port
they might have a "public access" website on port 80, and another one on 8080
the https:// protocol defaults to port 443, there are ports for telnet, ftp,
etc. Add these things only if you REALLY know what you are doing.
/the/pa.th/ this is the path relative to DOCUMENTROOT on the server where the
resource is located. `.` characters are legal here, just as they are in
directory structures.
file.html
file.php
file.asp
etc - usually the resource being fetched is a file. The file may have
any of a great number of extensions; some of these indicate to the server that
instead of sending the file straight to the requester,
it has to execute a program or other instructions in this file,
and send the result instead
Examples of extensions that indicate "active" pages include
(this is not nearly exhaustive - just "for instance"):
.php = contains a php program
.py = contains a python program
.js = contains a javascript program
(usually called from inside an .htm or .html)
.asp = "active server page" associated with a
Microsoft Internet Information Server
?something=value&somethingElse=%23othervalue%23
parameters that are passed to the server can be shown in the URL.
This can be used to pass parameters, entries in a form, etc.
Any character might be passed here - including '.', '&', '/', ...
But you can't just write those characters in your string...
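If you just need the pieces, a standard URL parser will split them out for you; for example, in Python (reusing the names from the breakdown above):
from urllib.parse import urlsplit
parts = urlsplit("http://www.myserver.org:8080/the/pa.th/file.html?something=value")
print(parts.scheme)    # http
print(parts.hostname)  # www.myserver.org
print(parts.port)      # 8080
print(parts.path)      # /the/pa.th/file.html
print(parts.query)     # something=value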
Now comes the fun part.
URLs cannot contain certain characters (quite a few, actually). In order to get around this, there exists a mechanism called "escaping" a character. Typically this means replacing the character with its hexadecimal value, prefixed with a % sign. Thus, you frequently see a space character represented as %20, for example. You can find a handy list in the RFCs linked below.
There are many functions available for converting "illegal" characters in a URL automatically to a "legal" value.
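For example, Python's standard library converts both ways (the path is the MSDN example from the question):
from urllib.parse import quote, unquote
path = "/en-us/library/windows/desktop/ms649048(v=vs.85).aspx"
escaped = quote(path)  # '(', ')' and '=' become %28, %29 and %3D; '.' and '/' stay
print(escaped)         # /en-us/library/windows/desktop/ms649048%28v%3Dvs.85%29.aspx
print(unquote(escaped) == path)  # True: the escaping is reversible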
To learn about exactly what is and isn't allowed, you really need to go back to the original specifications. See for example
http://www.ietf.org/rfc/rfc1738.txt
http://www.ietf.org/rfc/rfc2396.txt
http://www.ietf.org/rfc/rfc3986.txt
I list them in chronological order - the last one being the most recent.
But I repeat my question -- what are you really trying to do here, and why?

How to create a reg exp to parse such url?

So we have http://127.0.0.1:4773/robot10382.flv?action=read and we need to get out of it the protocol, IP/address, port, actual URL (robot10382.flv here) and actions (action=read here). How do we parse all that into string vars with one reg exp?
I'm surprised that AS3 does not include proper URL parsing facilities. To put it simply, it is not easy to safely parse a URL using an RE. Here's an example of doing it though.
/(\w+)\:\/\/(\d+\.\d+\.\d+\.\d+)\:(\d+)\/([\w.]+)\?(.+)/ : $1 - protocol, $2 - ip, $3 - port, $4 - actual url, $5 - actions
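In Python terms (same pattern, shown with the URL from the question):
import re
url = "http://127.0.0.1:4773/robot10382.flv?action=read"
m = re.match(r"(\w+)://(\d+\.\d+\.\d+\.\d+):(\d+)/([\w.]+)\?(.+)", url)
protocol, ip, port, actual_url, actions = m.groups()
print(protocol, ip, port, actual_url, actions)
# http 127.0.0.1 4773 robot10382.flv action=read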
there's also another way:
protocol : url.split('://')[0]
ip/domain name : url.split('://')[1].split(':')[0] (or, if no port is specified, url.split('://')[1].split('/')[0])
port : url.split('://')[1].split(':')[1].split('/')[0]
actual url : url.split('?')[0].split('/').reverse()[0]
actions : url.split('?')[1].split('&') /* the most likely separator imho */; elements of this array can also be split('=') to separate variable names and values.
I know there's an opinion that split shouldn't be used, but I think it's just beautiful when used properly.
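The same splits, sketched in Python for comparison:
url = "http://127.0.0.1:4773/robot10382.flv?action=read"
protocol = url.split('://')[0]                          # 'http'
host = url.split('://')[1].split(':')[0]                # '127.0.0.1'
port = url.split('://')[1].split(':')[1].split('/')[0]  # '4773'
actual_url = url.split('?')[0].split('/')[-1]           # 'robot10382.flv'
actions = url.split('?')[1].split('&')                  # ['action=read']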
Sometimes when passing a file path to a SWF you would like to perform a file-existence check before passing the file to an AS3 class. To do so, you want to know if a URI is a file, an http URL, or any other URI with a specific protocol (moniker).
The following code will tell you if you are dealing with a local full or relative path.
http://younsi.blogspot.com/2009/08/as3-uri-parser-and-code-sequence-to.html