I've got a webservice which is executed through JavaScript (jQuery) to retrieve data from the database. I would like to make sure that only my web pages can execute those web methods (i.e. I don't want people to execute those web methods directly - they could find out the URL by looking at the source code of the JavaScript, for example).
What I'm planning to do is add a 'Key' parameter to all the webmethods. The key will be stored in the web pages in a hidden field and the value will be set dynamically by the web server when the web page is requested. The key value will only be valid for, say, 5 minutes. This way, when a webmethod needs to be executed, javascript will pass the key to the webmethod and the webmethod will check that the key is valid before doing whatever it needs to do.
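Roughly, this is the sort of thing I have in mind on the server side (sketched in Python rather than my actual stack, just to illustrate; the secret and function names are placeholders):

    import hashlib
    import hmac
    import time

    SECRET = b"server-side secret"      # placeholder: known only to the web server
    KEY_LIFETIME_SECONDS = 5 * 60       # the 5-minute window mentioned above

    def make_key() -> str:
        """Generate the value dropped into the hidden field when the page is rendered."""
        issued_at = str(int(time.time()))
        signature = hmac.new(SECRET, issued_at.encode(), hashlib.sha256).hexdigest()
        return issued_at + ":" + signature

    def validate_key(key: str) -> bool:
        """Called by each web method before doing any real work."""
        try:
            issued_at, signature = key.split(":")
        except ValueError:
            return False
        expected = hmac.new(SECRET, issued_at.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return False
        return (time.time() - int(issued_at)) <= KEY_LIFETIME_SECONDS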
If someone wants to execute the webmethods directly, they won't have the key which will make them unable to execute them.
What are your views on this? Is there a better solution? Do you foresee any problems with my solution?
MORE INFO: for what I'm doing, the visitors are not logged in, so I can't use a session. I understand that if someone really wants to break this, they can parse the HTML and get the value of the hidden field, but they would have to do this regularly as the key will change every x minutes... which is of course possible, but hopefully will be a pain for them.
EDIT: what I'm doing is a web application (as opposed to a web site). The data is retrieved through web methods (+jquery). I would like to prevent anyone from building their own web application using my data (which they could if they can execute the web methods). Obviously it would be a risk for them as I could change the web methods at any time.
I will probably just go for the referrer option. It's not perfect but it's easy to implement. I don't want to spend too much time on this as some of you said if someone really wants to break it, they'll find a solution anyway.
Thanks.
Well, there's nothing technically wrong with it, but your assumption that "they won't have the key which will make them unable to execute them" is incorrect, and thus the security of the whole thing is flawed.
It's very trivial to retrieve the value of a hidden field and use it to execute the method.
I'll save you a lot of time and frustration: If the user's browser can execute the method, a determined user can. You're not going to be able to stop that.
With that said, any more information on why you're attempting to do this? What's the context? Perhaps there's something else that would accomplish your goal here that we could suggest if we knew more :)
EDIT: Not a whole lot more info there, but I'll run with it. Your solution isn't really going to increase the security at all and is going to create a headache for you in maintenance and bugs. It will also create a headache for your users in that they would then have an 'invisible' time limit in which to perform actions on pages. With what you've told us so far, I'd say you're better off just doing nothing.
What kind of methods are you trying to protect here? Why are you trying to protect them?
ND
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session.
If you are sending a client a key that they will send back every time they want to use a service, you are in effect creating a session. The key you are passing back and forth is functionally no different from a cookie (except that it will be passed back only on certain requests). Might as well save yourself the trouble and set a temporary cookie that will expire in 5 minutes. Add a little server-side check for expired cookies and you'll have probably the best you can get.
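Something like this is all I mean (a rough Python sketch; the cookie name is a placeholder and the framework wiring for setting/reading headers is omitted):

    import time
    from http.cookies import SimpleCookie

    COOKIE_NAME = "svc_token"   # placeholder name
    LIFETIME = 5 * 60           # five minutes

    def build_set_cookie_header() -> str:
        """Header sent with the page so the browser returns the cookie on Ajax calls."""
        cookie = SimpleCookie()
        cookie[COOKIE_NAME] = str(int(time.time()))   # store the issue time
        cookie[COOKIE_NAME]["max-age"] = LIFETIME
        cookie[COOKIE_NAME]["httponly"] = True
        return cookie.output()

    def cookie_is_fresh(value: str) -> bool:
        """Server-side check: reject cookies older than the allowed window."""
        try:
            issued_at = int(value)
        except ValueError:
            return False
        return (time.time() - issued_at) <= LIFETIME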
You may already have such a key, if you're using a language or framework that sets a session id. Send that with the Ajax call. (Note that such a session lasts a bit longer than five minutes, but note also it's what you're using to keep state for the user's regular HTTP gets and posts.)
What's to stop someone requesting a webpage, parsing the results to pull out the key and then calling the webservice with that?
You could check the referrer header to check the call is coming from one of your pages, but that is also easy to spoof.
The only way I can see to solve this is to require authentication. If your webpages that call the webservice require the user to be logged in, then you can check that they're logged in when they call the webservice. This doesn't stop other pages from using your webservice, but it does let you track usage more, and with some rate limiting you should be able to prevent abuse of your service.
If you really don't want to risk your webservice being abused then don't make it public. That's the only failsafe solution.
Let's say that you generate a key valid from 12.00 to 12.05. At 12.04 I open the page and read it at my leisure, and at 12.06 I trigger an action which uses your web service. I'll be blocked from doing so even though I'm a legit visitor.
I would suggest restricting access to the web services by HTTP referrer (allow only requests from your domain and null referrers) and/or requiring user authentication for calling the methods.
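A rough sketch of the referrer check, assuming the service can read the Referer header (Python just to illustrate; the host list is a placeholder, and remember the header is easy to spoof):

    from typing import Optional
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"www.example.com", "example.com"}   # placeholder: your own domains

    def referrer_is_allowed(referer_header: Optional[str]) -> bool:
        """Allow same-site referrers and (as suggested) null/absent referrers."""
        if not referer_header:
            return True
        return urlparse(referer_header).hostname in ALLOWED_HOSTS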
Currently I set up a RESTful API backend using Django and I can list a set of articles by the following GET:
api/articles/
Also, I can get a single article by:
api/article/1/
Each article is owned by a certain user, and one user could have multiple articles of course.
On the front-end side, I present all the articles at loading of the page, and I hope the user who is currently logged in could see the articles that they own in a different style, e.g., outlined by a box, with an associated "delete" or "edit" button.
This requires me to tell programmatically, after the retrieval of the articles, which ones are owned by the current user. One way of doing this is to compare the current user id with the owner id. However, I feel this is not a good choice as the check is done fully on the client side and may not be consistent with the actual server-side judgement.
Therefore, is there a way to tell, by looking at the response of the GET (say, let the server return a property "editable=true/false"), whether the current user could edit (PUT) the resource?
I understand that this could be done at the server side, by attaching such a property manually. However, I am just asking whether there is better/common practice.
I just started learning web development and I am sorry if the question sounds trivial. Thank you!
You can attach the property manually as you suggested. The advantage of this approach is that you don't need any other HTTP request.
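For example, with Django REST Framework you could expose it as a SerializerMethodField (a sketch; the model and field names are assumptions based on your description):

    from rest_framework import serializers
    from myapp.models import Article   # assumption: your existing Article model

    class ArticleSerializer(serializers.ModelSerializer):
        editable = serializers.SerializerMethodField()

        class Meta:
            model = Article
            fields = ("id", "title", "body", "owner", "editable")   # illustrative fields

        def get_editable(self, obj):
            # DRF's generic views put the request into the serializer context.
            request = self.context.get("request")
            return bool(request and request.user.is_authenticated
                        and obj.owner == request.user)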
A second possibility is that your client intentionally requests information about endpoint permissions. In this case I would suggest using the OPTIONS HTTP method: you send an OPTIONS request to api/articles/1 and the backend returns the wanted info. This might be exactly what the OPTIONS method and DRF metadata were made for.
http://www.django-rest-framework.org/api-guide/metadata/
I think that this is a very interesting question.
Several options that come to me:
You can add to the GET api/article/1 response an HTTP header with this information, i.e. HTTP_METHODS_ALLOWED=PUT,PATCH,DELETE. Doing it this way helps the API client because it does not need to know anything else. I think that this is not a good approach when more than one entity is returned.
Call OPTIONS api/article/1. The allowed methods for that user on that resource can be returned, but notice that, in my opinion, this approach is not very good in terms of performance, because it doubles the number of requests to the server.
But what if the entity returned also contains information about its owner? Can the client, in that case, know which policy applies and figure it out by itself? Notice that the policy can be obtained from another endpoint (just one call would be needed) or even with the login response. If your entities do not contain that kind of information, it could also be returned as an HTTP header (like the first option above).
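As a sketch of the header idea (the first option above) in DRF terms - the header name and the ownership check are illustrative, not any standard:

    from rest_framework import viewsets
    from myapp.models import Article            # assumption: existing model
    from myapp.serializers import ArticleSerializer

    class ArticleViewSet(viewsets.ModelViewSet):
        queryset = Article.objects.all()
        serializer_class = ArticleSerializer

        def retrieve(self, request, *args, **kwargs):
            response = super().retrieve(request, *args, **kwargs)
            article = self.get_object()
            if request.user.is_authenticated and article.owner == request.user:
                response["X-Methods-Allowed"] = "PUT, PATCH, DELETE"   # illustrative header name
            else:
                response["X-Methods-Allowed"] = "GET"
            return response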
I am creating a mobile app that will interact with a Django back-end API. I want to add the ability for an app user to change some "critical" account attributes via the app, with a call to the back end - e.g., change the username, which is currently used to authenticate with the back end on each call. The success path is simple, but I'm concerned about some failure along the way that leaves the app and back end out of sync. E.g., the user invokes a username change in the app, the username is successfully updated on the back end, but something fails and the app never gets a response. The app is now left not knowing whether the old username is still intact or the new username is in place on the back end. Just wondering if there is any standard pattern for making this type of thing bulletproof. The same scenario holds for a password change via the app.
The only thing I can think of now is to keep both usernames in the app until the app can confirm the current state of the back end...
There is a pattern called "post exactly once" semantics that may be what you're looking for. (See Mark Nottingham's draft for a proposed implementation of this pattern.) It incurs overhead, but for cases when you want to guarantee that a request was processed exactly once, it's appropriate.
The problem is that there isn't, as of yet, a standard way to do this, so using it will (somewhat) couple your client and server together with out-of-band knowledge.
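The gist of it, stripped down to the server side (Python, with an in-memory dict standing in for durable storage and made-up function names): the app generates a unique id for the change and retries with that same id until it gets a response, and the server remembers ids it has already applied.

    # request_id -> result returned the first time (in-memory stand-in for durable storage)
    processed_requests: dict = {}

    def change_username(request_id: str, account_id: str, new_username: str) -> dict:
        if request_id in processed_requests:
            # Retry of a request we already applied: return the original result.
            return processed_requests[request_id]
        result = apply_username_change(account_id, new_username)
        processed_requests[request_id] = result
        return result

    def apply_username_change(account_id: str, new_username: str) -> dict:
        # placeholder for the real back-end update
        return {"account_id": account_id, "username": new_username, "status": "ok"}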
I am wondering where the best place to put XSS protection in our website. Our team is split up into a front end and back end teams and are using REST as an API between our two groups since we use different platforms. We have a field that could hold a subset of HTML that should be protected and I was wondering at what layer this should be done?
Should it not be allowed into the database by the webservice or should it be validated by the consumer on the way out, ensuring safety? For fields that cannot contain HTML, we are just saving as the raw input, and having the front end escape them before presentation.
My viewpoint is that the webservice should respond that the data is invalid (we have been using 422 to indicate invalid updates) if someone tries to use disallowed tags. I am just wondering what other people think.
It's probably not an either/or. The web service is potentially callable from many UIs, and UIs change over time, so it should not assume that all its callers are careful/trusted. Indeed, couldn't someone invoke your service directly by hand-crafting a request?
However, for the sake of usability we often choose to do friendly validation and error reporting in the UI. I've just finished filling in an online form at a web site that barfs in the service layer if any field contains a non-alphanumeric character. It would have been so much nicer if the UI had validated at the point of entry rather than rejecting my request after 3 pages of input.
(Not to mention that if the web site asks you for an employer's name, and the name actually contains an apostrophe, you're a bit stymied!)
You should be using both. The typical pattern is to attempt to sanitize scary data on the way in (and you should really be rejecting the request if sanitization was necessary for a given value) and encoding on the way out.
The reason for the former is that encoding sometimes gets missed. The reason for the latter is that your database cannot be trusted as a source of data (people can access it without hitting your client, or your client might have missed something).
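A stripped-down illustration of both halves (Python; the tag whitelist is an assumption, and a real implementation should use a proper parser-based HTML sanitizer rather than a regex):

    import html
    import re

    ALLOWED_TAGS = {"b", "i", "em", "strong", "p"}      # assumption: your allowed subset
    TAG_RE = re.compile(r"</?\s*([a-zA-Z0-9]+)[^>]*>")

    def input_is_acceptable(value: str) -> bool:
        """Reject (e.g. with a 422) input that uses any tag outside the whitelist."""
        return all(tag.lower() in ALLOWED_TAGS for tag in TAG_RE.findall(value))

    def encode_plain_field(value: str) -> str:
        """For fields that must never contain HTML: escape on the way out."""
        return html.escape(value)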
I have an internal website that users log into. This data is saved as a cookie. From there the users go on their merry way. Every so often the application(s) will query the authentication record to determine what permissions the user has.
My question is this: Is it more efficient to just query the cookie for the user data when it is needed, or to save the user information in viewstate?
[Edit] As mentioned below, Session is also an option.
Viewstate is specific to the page they are viewing, so it's gone once they go along their merry way. Not a good way to persist data.
Your best bet is to use Forms Authentication; it's built into ASP.NET, and you can also shove any user-specific information into the Forms Authentication ticket's value. You can get 4000 bytes in there (after encrypting), which should hold whatever you need. It will also take care of allowing and denying users access to pages on the site, and you can set it to expire whenever you need.
Storing in the session is a no-no because it scales VERY poorly (eats up resources on the server), and it can be annoying to users with multiple browser connections to the same server. It is sometimes unavoidable, but you should take great pains to avoid it if you can.
Personally, I prefer using a session to store things, although the other developers here seem to think that's a no-no.
There is one caveat: You may want to store the user's IP in the session and compare it to the user's current IP to help avoid session hijacking. Possibly someone else here has a better idea on how to prevent session hijacking.
You can use session data - that way you know that once you have stored it there, users can't fool around with it by changing the query string.
I would use the cookie method. Session is okay but gets disposed by ASP.NET on recompile, and you have to use a non-session cookie if you want to persist it after the session anyway. Also, if you ever use a state server it's essentially doing the same thing (stores session in the db). Session is like a quick and dirty fix; real men use cookies.
You're building a web application. You need to store the state for a shopping cart like object during a user's session.
Some notes:
This is not exactly a shopping cart, but more like an itinerary that the user is building... but we'll use the word cart for now because people relate to it.
You do not care about "abandoned" carts
Once a cart is completed we will persist it to some server-side data store for later retrieval.
Where do you store that stateful object? And how?
server (session, db, etc?)
client (cookie key-vals, cookie JSON object, hidden form-field, etc?)
other...
Update: It was suggested that I list the platform we're targeting - though I'm not sure it's totally necessary... but let's say the front end is built with ASP.NET MVC.
It's been my experience with the Commerce Starter Kit and MVC Storefront (and other sites I've built) that no matter what you think now, information about user interactions with your "products" is paramount to the business guys. There are so many metrics to capture - it's nuts.
I'll save you all the stuff I've been through - what's by far been the most successful for me is just creating an Order object with a "NotCheckedOut" status and then adding items to it as the user adds items. This lets users have more than one cart and allows you to mine the tar out of the Orders table. It also makes it quite easy to transact the order - just change the status.
Persisting "as they go" also allows the user to come back and finish the cart off if they can't, for some reason. Forgiveness is massive with eCommerce.
Cookies suck, session sucks, Profile is attached to the notion of a user and it hits the DB so you might as well use the DB.
You might think you don't want to do this - but you need to trust me and know that you WILL indeed need to feed the stats wonks some data later. I promise you.
I have considered what you are suggesting but have not had a client project yet to try it. The closest actually is a shopping list that you can find here...
http://www.scottcommonsense.com/toolbox.aspx
Click on Grocery Checklist to open the window. It does use ASPX, but only to manage the JS references placed on the page. The rest is done via AJAX using web services.
Previously I built an ASP.NET 2.0 site for a commerce site which used anon/auth cookies automatically. Each provides you with a GUID value which you can use to identify a user which is then associated with data in your database. I wanted the auth cookies so a user could move to different computers; work, home, etc. I avoided using the Profile fields to hold onto a complex ShoppingBasket object which was popular during the time in all the ASP.NET 2.0 books. I did not want to deal with "magic" serialization issues as the data structure changed over time. I prefer to manage db schema changes with update/alter scripts synced with software changes.
With the anon/auth cookies identifying the user on the client you can use the ASP.NET AJAX client-side to call the authentication web services using the JS proxies that are provided for you as a part of ASP.NET. You need to implement the Membership API to at least authenticate the user. The rest of the provider implementation can throw a NotImplementedException safely. You can then use your own custom ASMX web services via AJAX (see ScriptReference attribute) and update the pages with server-side data. You can completely do away with ASPX pages and just use static HTML/CSS/JS if you like.
The one big caveat is memory leaks in JS. Staying on the same page a long time increases your potential issue with memory leaks. It is a risk you can minimize by testing for long sessions and using tools like Firebug and others to look for memory leaks. Use the JS Lint tool as well as it will help identify major problems as you go.
I'd be inclined to store it as a session object. This is because you're not concerned with abandoned carts, and can therefore remove the overhead of storing it in the database as it's not necessary (not to mention that you'd also need some kind of cleanup routine to remove abandoned carts from the database).
However, if you'd like users to be able to persist their carts, then the database option is better. This way, a user who is logged in will have their cart saved across sessions (so when they come back to the site and login, their cart will be restored).
You could also use a combination of the two. Users who come to the site use the session-based cart by default. When they log in, all items are moved from the session-based cart to a database-based cart, and any subsequent cart activity is applied directly to the database.
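In rough Python (the db_* helper and the in-memory dict are stand-ins for whatever persistence layer you actually use):

    PERSISTENT_CARTS: dict = {}   # in-memory stand-in for the database table

    def db_add_cart_item(user_id, item_id) -> None:
        # assumption: replace with the real persistence layer (ORM insert, SQL, ...)
        PERSISTENT_CARTS.setdefault(user_id, []).append(item_id)

    def add_item(session: dict, user_id, item_id) -> None:
        if user_id is None:
            session.setdefault("cart", []).append(item_id)   # anonymous: session only
        else:
            db_add_cart_item(user_id, item_id)               # logged in: straight to the DB

    def merge_cart_on_login(session: dict, user_id) -> None:
        """On login, move anything collected before login into the persistent cart."""
        for item_id in session.pop("cart", []):
            db_add_cart_item(user_id, item_id)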
In the DB tied to whatever you're using for sessions (db/memcache sessions, signed cookies) or to an authenticated user.
Store it in the database.
Do you envision folks needing to be able to start on one machine (e.g. their work PC) but continue/finish from a different machine (e.g. home PC)? If so, the answer is obvious.
If you don't care about abandoned carts and have things in place for someone messing with the data on the client side... I think a cookie would be good -- especially if it's just a cookie of JSON data.
I'd use an (encrypted) cookie on the client which holds the ID of the user's basket. Unless it's a really busy site, abandoned baskets won't fill up the database by much, and you can run a regular admin task to clear the abandoned orders down if you care that much. Also, doing it this way the user will keep their order if they close their browser and go away; a basket in the session would be cleared at that point.
Finally, this means that you don't have to worry about writing code to de/serialise the data from a client-side cookie, and then about actually putting that data into the database when it gets converted into an order (too many points of failure for my liking).
Without knowing the platform I can't give a direct answer. However, since you don't care about abandoned carts, I would differ from my colleagues here and suggest storing it on the client. Why store it in the database if you don't care if it's abandoned?
Then again, it does depend on the size of the object you're storing -- cookies have their limits after all.
Edit: Ahh, ASP.NET MVC? Why not use the Profile system? You can enable an anonymous profile if you don't want to bother making them log in.
I'd say store the state somewhere on the server and correlate it to the user's session. While a cookie could ostensibly be an equal place to store things, if you consider security and data size, keeping as much data on the server as possible becomes a good thing.
For example, in a public terminal setting, would it be OK for someone to look at the contents of the cookie and see the list? If so, cookie's fine; if not, you'll just want an ID that links the user to the data. Doing that would also allow you to ensure the user is authenticated to the site in order to get to that data rather than storing everything on the machine - they'd need some form of credentials as well as the session identifier.
From a size perspective, sure, you're not going to be too concerned about a 4K cookie or something for a browser/broadband user, but if one of your targets is to allow a mobile phone or BlackBerry (not on 3G) to connect and have a snappy experience (and not get billed for the data), minimizing the amount of data getting passed to the client will be key.
The server storage also gives you some flexibility mentioned in some of the other answers - the user can save their cart on one machine and resume working with it on another; you can tie the cart to some form of credentials (rather than a transient session) and persist the cart long after the user has cleared their cookies; you get a little more in the way of fault tolerance - if the user's browser crashes, the site still has the data safe and sound.
If fault tolerance is important, you'll need some sort of persistent store like a database. If not, in application memory is probably fine, but you'll lose data if the app restarts. If you're in a farm environment, the store has to be centrally accessible, so you're again looking at a database.
Whether you choose to key by transient session or by credentials is going to depend on whether the users can save their data and come back later to get it. Transient session will eventually get cleaned up as "abandoned," and maybe that's OK. Tying to a user profile will let the user keep their data and explicitly abandon it. Either way, I'd make use of some sort of backing store like a database for fault tolerance and central accessibility. (Or maybe I'm overengineering the solution?)
If you care about supporting users without Javascript enabled, then the server side sessions will let you use URL rewriting.
If a relatively short time-out (around 2 hours, depending on your server config) is OK for the cart, then I'd say the server-side session. It's faster and more efficient than accessing the DB.
If you need a longer persistence (say some users like to leave and come back the next day), then store it in a cookie that is tamper-evident (use encryption or hashes).