How to make my web service url case insensitive? - web-services

I have generated a web service from an existing WSDL file using Axis2, and now my service is reachable at the URL
http://something/Service?wsdl
The problem is, there are some applications which call this URL with an upper-case "WSDL" at the end (please don't ask why..), so they call it as
http://something/Service?WSDL
and they can't access it at that URL.
Is it possible to solve this problem, maybe by setting some parameter or making this URL case insensitive somehow?

I've taken a quick look at the Axis2 code and it seems the ?wsdl extension comparison is case sensitive. These things happen.
You could have a look at the code yourself and see if there is some switch somewhere to make this case insensitive (in case I missed something when looking at the code).
What you could do is have a filter in your application that looks at the query string and, if it finds ?WSDL in any casing, redirects to the same URL with a lower-case ?wsdl. This of course assumes that the clients trying to access the WSDL can follow redirects.
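For illustration, a minimal sketch of that check in Java (the class and method names are made up; in a real deployment this logic would live in a javax.servlet.Filter registered in front of the Axis2 servlet):

```java
// Sketch, assuming a standard servlet container. This helper normalizes
// an upper/mixed-case "WSDL" query string; a Filter would compare the
// result against the original query string and redirect when they differ.
public class WsdlQueryNormalizer {

    /**
     * Returns "wsdl" when the query string is any casing of "wsdl",
     * otherwise returns the query string unchanged.
     */
    public static String normalize(String query) {
        if (query != null && query.equalsIgnoreCase("wsdl")) {
            return "wsdl";
        }
        return query;
    }

    // Inside javax.servlet.Filter#doFilter you would do something like:
    //   String q = request.getQueryString();
    //   if (q != null && !q.equals(normalize(q))) {
    //       response.sendRedirect(request.getRequestURL() + "?wsdl");
    //       return;
    //   }
    //   chain.doFilter(req, res);
}
```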
The problem is, there are some applications which call this url, adding an upper case word "WSDL" at the end of url (please don't ask why..)
Sorry, but why? The simplest way is to tell the clients to use a lower-case parameter instead of an upper-case one. If they can make a call with ?WSDL, why is it so hard to make one with ?wsdl?

Related

Case Insensitive Search parameters for API endpoint

I am working on a project that involves integrating the PUBG API. From my site, the player can look up stats using their player name, platform and season. One issue I am facing is that the player name has to be exact and is case sensitive. I assumed this was the case at the beginning. However, after searching for the name on this site, I found that they don't need the name to match case. Also, referring to this post from the PUBG Dev community here, I saw that it confirmed my initial assumption. So my question is: if the PUBG API requires the names to be case sensitive, how can the linked site search for the player even when the name provided is not in exact, matching case? For example:
I looked up the player name MyCholula. From the PUBG API page for player lookup, it returns the proper value. When I tried mycholula, it doesn't, and sends a 404. On the linked site above, both combinations seem to work. Now, if spaces or other separators were involved in the name, it would be easy to convert it, assuming that separated words are all capitalized (a somewhat naive assumption, though). For this name, I don't see any way of converting mycholula to MyCholula. I also tried many other combinations on the linked site above (as well as different user names I got from my friends) to confirm that it actually returns the data as expected for any casing of the user name. I also tried it on other sites like this one, and it didn't work, just as it doesn't work from the PUBG Dev API page or from my page.
I am really confused as to how they are doing it. The only possible explanation I can come up with is that they have the player records stored in their own database, where they can perform an advanced search to recover the actual name. However, this sounds far-fetched, since there are millions of players and it would require them to know all the player names and associated IDs. Also, as far as I know, it is not possible to use regex or other string manipulation to convert to the actual name, because there can be many combinations (I'm not an expert on regex, so I can't be definitive on this).
Any help or suggestions will be greatly appreciated. Thanks.
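For illustration, the database hypothesis above wouldn't need any regex at all: a site with its own store of player records could simply key them by a lowercased copy of the name (the names and table below are made up, not PUBG data):

```python
# Hypothetical lookup table: lowercased name -> exact, canonical name.
# A site with its own player database could maintain this as players are
# first seen, then resolve any casing before calling the PUBG API.
players_by_lower = {
    "mycholula": "MyCholula",
    "shroud": "shroud",
}

def canonical_name(query):
    """Resolve any casing of a known player name to its exact form, or None."""
    return players_by_lower.get(query.lower())
```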

Ignoring cookies list efficiently in NGINX reverse proxy setup

I am currently working on/testing the microcache feature in an NGINX reverse-proxy setup for dynamic content.
One big issue is that sessions/cookies need to be ignored, otherwise people will end up logged in to random accounts on the site(s).
Currently I am ignoring popular CMS cookies like this:
if ($http_cookie ~* "(joomla_[a-zA-Z0-9_]+|userID|wordpress_(?!test_)[a-zA-Z0-9_]+|wp-postpass|wordpress_logged_in_[a-zA-Z0-9]+|comment_author_[a-zA-Z0-9_]+|woocommerce_cart_hash|woocommerce_items_in_cart|wp_woocommerce_session_[a-zA-Z0-9]+|sid_customer_|sid_admin_|PrestaShop-[a-zA-Z0-9]+") {
    # set ignore variable to 1
    # later used in:
    #   proxy_no_cache $IGNORE_VARIABLE;
    #   proxy_cache_bypass $IGNORE_VARIABLE;
    # makes sense?
}
However, this becomes a problem if I want to add more cookies to the ignore list. Not to mention that using too many "if" statements in NGINX is not recommended, as per the docs.
My question is: could this be done using a map? I saw that regex in map is different (or maybe I am wrong).
Or is there another way to efficiently ignore/bypass cookies?
I have searched a lot on Stack Overflow, and whilst there are many different examples, I could not find something specific to my needs.
Thank you
Update:
After a lot of reading and "digging" on the internet (we might as well just say Google), I found quite a few interesting examples.
However, I am very confused by these, as I do not fully understand the regex usage, and I am afraid to implement something like this without understanding it.
Example 1:
map $http_cookie $cache_uid {
    default nil;
    ~SESS[[:alnum:]]+=(?<session_id>[[:alnum:]]+) $session_id;
}
In this example I can see that the regex is very different from the ones used in "if" blocks. I don't understand why the pattern starts without any quotes, directly with just a ~ sign.
I don't understand what [[:alnum:]]+ means. I searched for this but was unable to find documentation (or maybe I missed it).
I can see that the author set "nil" as the default; this will not apply in my case.
Example 2:
map $http_cookie $cache_uid {
    default '';
    ~SESS[[:alnum:]]+=(?<session_id>[[:graph:]]+) $session_id;
}
Same points as in Example 1, but this time I can see [[:graph:]]+. What is that?
My Example (not tested):
map $http_cookie $bypass_cache {
    "~*wordpress_(?!test_)[a-zA-Z0-9_]+" 1;
    "~*wp-postpass|wordpress_logged_in_[a-zA-Z0-9]+" 1;
    "~*comment_author_[a-zA-Z0-9_]+" 1;
    "~*[a-zA-Z0-9]+_session)" 1;
    default 0;
}
In my pseudo-example the regex must be wrong, since I did not find any map cookie examples with such regex.
So, once again, my goal is to have a map-style list of cookies that I can bypass the cache for, with proper regex.
Any advice/examples much appreciated.
What exactly are you trying to do?
The way you're doing it, by trying to blacklist only certain cookies from being cached, through if ($http_cookie …, is a wrong approach — this means that one day, someone will find a cookie that is not blacklisted, and which your backend would nonetheless accept, and cause you cache poisoning or other security issues down the line.
There's also no reason to use the http://nginx.org/r/map approach to get the values of the individual cookies, either — all of this is already available through the http://nginx.org/r/$cookie_ paradigm, making the map code for parsing out $http_cookie rather redundant and unnecessary.
Are there any cookies which you actually want to cache? If not, why not just use proxy_no_cache $http_cookie; to disallow caching when any cookies are present?
What you'd probably want to do is first have a spec of what must be cached and under what circumstances, only then resorting to expressing such logic in a programming language like nginx.conf.
For example, a better approach would be to see which URLs should always be cached, clearing out the Cookie header to ensure that cache poisoning isn't possible (proxy_set_header Cookie "";). Else, if any cookies are present, it may either make sense to not cache anything at all (proxy_no_cache $http_cookie;), or to structure the cache such that certain combination of authentication credentials are used for http://nginx.org/r/proxy_cache_key; in this case, it might also make sense to reconstruct the Cookie request header manually through a whitelist-based approach to avoid cache-poisoning issues.
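As a sketch of that whitelist-first idea (the location paths, cache zone name and backend upstream are illustrative, not a drop-in config):

```nginx
# Sketch only — paths, zone and upstream names are made up.
# Always-cacheable URLs: strip the Cookie header entirely so client
# cookies can never poison the cache.
location /assets/ {
    proxy_cache microcache;
    proxy_set_header Cookie "";
    proxy_pass http://backend;
}

# Everything else: skip the cache whenever any cookie is present.
location / {
    proxy_cache microcache;
    proxy_no_cache $http_cookie;       # don't store a response for cookie-bearing requests
    proxy_cache_bypass $http_cookie;   # don't serve a cached response to them either
    proxy_pass http://backend;
}
```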
Your 2nd example is what you actually need:
map $http_cookie $bypass_cache {
    "~*wordpress_(?!test_)[a-zA-Z0-9_]+" 1;
    "~*wp-postpass|wordpress_logged_in_[a-zA-Z0-9]+" 1;
    "~*comment_author_[a-zA-Z0-9_]+" 1;
    "~*[a-zA-Z0-9]+_session" 1;
    default 0;
}
Basically, what you are saying here is that the bypass_cache value will be 1 if the regex matches, else 0.
So as long as you get the pattern right, it will work. And only you can build that list, since only you know which cookies to bypass the cache on.

CFWheels: Redirect to URL with Params Hidden

I am using the redirectTo() function with params to redirect to other pages with a query string in the URL. From a security standpoint this does not look appealing, because the user can change the parameters in the URL, thus altering what is inserted into the database.
My code is:
redirectTo(action="checklist", params="r=#r#&i=#insp#&d=#d#");
Is there any way around this? I am not using a form; I just wish to redirect, and I want the destination action/controller to know what I am passing without displaying it in the URL.
You can obfuscate the variables in the URL. CfWheels makes this really easy.
All you have to do is call set(obfuscateURLs=true) in the config/settings.cfm file to turn on URL obfuscation.
I am sure this works with the linkTo() function. I hope it works with the redirectTo() function as well; I do not have a setup to check it right now. But if it doesn't work for redirectTo(), you can use the obfuscateParam() and deObfuscateParam() functions to do the job for you.
Caution: this will only make it harder for the user to guess the value. It doesn't encrypt the value.
To learn more about this, please read the documentation on configuration and defaults and on obfuscating URLs.
A much better approach in this particular situation is to write the params to the flash. The flash is exactly the same thing as it is in Ruby on Rails, or the ViewBag in ASP.NET. It stores the data in a session or cookie variable and is deleted at the end of the next page's load. This prevents you from posting back long query strings like someone who has been coding for less than a year. obfuscateParam() only works with numbers and is incredibly insecure. Any power user can easily deobfuscate it, even more so someone who actually makes a living stealing data.

Django url patterns - how to get absolute url to the page?

I'm trying to get the full path of the requested URL in Django. I use a URL pattern like this:
('^', myawesomeview),
It works fine for domain.com/hello, domain.com/hello/sdfsdfsd and even for domain.com/hello.php/sd""^some!bullshit.index.aspx (although "^" is replaced with "%5E").
But when I try to use # in a request (e.g. http://127.0.0.1:8000/solid#url) it returns only "/solid". Is there any way to get the full path without ANY changes or replacements?
BTW, I'm getting the URL with return HttpResponse(request.path)
Thanks in advance.
The part of the URI after the '#' sign is called a fragment identifier. It is meant to be processed on the client side only, and is not passed to the server. So if you really need it, you have to process it with JS, for example, and pass it as a usual parameter. Otherwise, this information will never be sent to Django.
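For example, a small client-side helper could copy the fragment into a query parameter before making the request (the helper name and the "fragment" parameter name are made up):

```javascript
// Sketch, assuming client-side JavaScript: the fragment never reaches
// Django, so append it to the URL as an ordinary query parameter.
function buildUrlWithFragment(path, hash) {
  // hash comes from window.location.hash and includes the leading "#"
  const fragment = hash.startsWith('#') ? hash.slice(1) : hash;
  if (!fragment) return path;
  const sep = path.includes('?') ? '&' : '?';
  return path + sep + 'fragment=' + encodeURIComponent(fragment);
}

// In the browser you would then do something like:
//   fetch(buildUrlWithFragment(window.location.pathname, window.location.hash));
```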

Can you construct an xpath for an input element that will be case insensitive regarding the type being matched?

So this issue stems from Watir-Webdriver, which uses the WebDriver Ruby bindings.
The challenge to us is a situation where someone is testing against HTML where the web developer has specified an input element with the type of RADIO (in upper case). While uncommon, that's valid HTML as far as I can tell.
When the user tries to work with the element via a Watir statement such as
browser.radio(:id => "RadioM").set
then the .radio method in watir-webdriver converts this selector to the XPath
//input[@type='radio' and @id='RadioM']
which of course ends up being case sensitive, and thus WebDriver won't find their element.
Is there some way we could convert this to an XPath that would be case insensitive with regard to the @type value it's looking for, and still work with WebDriver to locate/manipulate the element?
edit: An additional complication is that, as I understand it, WebDriver tries to delegate to the browser's XPath engine, and right now none of the browsers support XPath 2.0. So it would seem we would have to make WebDriver handle all the XPath internally, and I do not know whether WebDriver's implementation supports XPath 2.0 either (and if it does, what effect this would have on test performance, etc.).
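For what it's worth, plain XPath 1.0 can emulate a case-insensitive comparison with the translate() function, which avoids XPath 2.0's lower-case() entirely; whether watir-webdriver's locator plumbing could be made to emit it is another question, so treat this as a sketch:

```
//input[translate(@type,
                  'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                  'abcdefghijklmnopqrstuvwxyz') = 'radio'
        and @id='RadioM']
```

translate() maps each character of @type found in the second argument to the character at the same position in the third, so any casing of RADIO compares equal to 'radio'.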