I have heard of people being able to access other sites' cookies using XSS. Is this a legitimate option, and how do you achieve it?
It's not a legitimate option, and will probably get you flagged as malware.
If you're trying to do something useful (i.e. non-evil), there's probably a legitimate way of doing it.
It's definitely not a legitimate option. It's considered a security hole anywhere it exists, and if you rely on it in your application, it will fail when those holes are fixed.
I want to add some custom rules in order to eliminate certain false positives and to add certain rules of my own (say, 3-level locks should be shown as warnings, uninitialized variables should not be shown as warnings, etc.).
How can I add my custom rules to Coverity?
It sounds like you're asking how to write custom checkers using the Coverity Extend SDK, but you may actually just need to change the behavior of the existing built-in checkers. The former should be well documented behind the paywall (an onsite course is even included in some corporate deals, which is how I took it), but in my experience it should be the last thing you get around to; there's a far faster return from the existing checkers.
Changing the behavior of individual checkers is covered in the documentation for their configuration options (also paywalled), though it's not clear whether the existing options will cover what you want; if they don't, you may need to file an enhancement ticket and wait in hope. I cover this, probably in more generality than you care about, in my Dr. Dobb's article: http://pobox.com/~flash/Deploying_Static_Analysis.pdf
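For the "tune the existing checkers" case, the changes usually end up as extra flags on the analysis run rather than new code. Very roughly, and with the caveat that the flag spellings below are from memory, vary between Coverity versions, and the checker option name is invented purely for illustration (check the paywalled docs for the real ones):

cov-analyze --dir idir --disable UNINIT
cov-analyze --dir idir --checker-option LOCK:some_option_name:true

The first line turns a built-in checker off entirely; the second passes a per-checker option, which is where the kind of tuning you describe would live if an option for it exists.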
I know this is a big one. In fact, it might be better suited to a community wiki.
Anyway, I am running a website that DOES NOT use explicit user authentication. It's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out for misbehavior.
I am currently blocking IP addresses, but I am aware of the claim that many people deliberately release and renew their DHCP lease to have their ISP assign them a new address. Is that a fact? It certainly seems like an attractive option for someone who wants to get around being denied access.
So IPs turn out to be a suboptimal way of dealing with this. But is there really nothing else?
MAC addresses don't survive beyond the local network (they are replaced at each hop), and even if they did, they can also be spoofed, although I think less easily than renewing an IP.
Cookies and even Flash cookies are out of the question, because there are tons of "tutorials" on how to wipe them, and those intent on wreaking havoc on the Internet are well aware of, and well equipped against, such rudimentary measures as I could employ.
Is there anything else to lean on? I was thinking of heuristic profiling - collecting available data from the client side and forming some key from it - but I have not gotten as far as implementing it. Is it an option?
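For concreteness, this is roughly the kind of key I had in mind; a rough sketch only, and the choice of signals is arbitrary:

import hashlib

def client_key(ip, user_agent, accept_language):
    # Heuristic "fingerprint" key: trivially evaded, so purely a nuisance filter.
    # Dropping the last octet means a DHCP renewal inside the same subnet still matches.
    raw = "|".join([ip.rsplit(".", 1)[0], user_agent, accept_language])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# e.g. client_key("203.0.113.7", "Mozilla/5.0 ...", "en-US,en;q=0.9")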
Due to the nature of the internet, this isn't practically possible. Yes, you can block specific IPs, but as you've said, it's easy enough for the average "misbehaver" to simply change their IP. Even MAC addresses can be spoofed. This is why sites with these problems use authentication. It's the only real solution.
You are not going to be able to completely block a user who is determined to access your site. You can, however, make it difficult enough for them that it isn't worth their time.
As others have said, this is an impossible problem. Anyone determined enough can always find another way in. The canonical example of this problem is with Wikipedia, and you can read about the various blocking steps they take here: http://en.wikipedia.org/wiki/Blocking_policy
The simple answer is that this is impossible. As others (including yourself) have already said, anyone determined will find another way.
You can block IPs or use cookies to deter the casual troublemaker. Someone who just wants to post rude words in blog comments will probably go elsewhere, but it won't scare off someone who wants to cause trouble on your site specifically.
If this misbehaviour is a serious problem for you, then I think your only recourse is to require authentication for any kind of access that could be subject to such abuse.
You can minimise the annoyance to your users by using OAuth, and accepting many different providers, much as SO does, rather than forcing all your users to sign up and memorise yet another set of login credentials.
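If you do keep an IP blocklist around as a first filter for the casual cases, it only takes a few lines. A sketch assuming Flask (the blocklist itself would live in whatever storage you already have):

from flask import Flask, request, abort

app = Flask(__name__)
BLOCKED_IPS = {"203.0.113.7"}  # example entry only

@app.before_request
def reject_blocked_clients():
    # Crude first filter: a determined user can change IPs or use a proxy.
    if request.remote_addr in BLOCKED_IPS:
        abort(403)

Just don't mistake it for a real defence against anyone determined.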
I'm looking to create a new site, and in order to encourage myself to create a powerful API for others to use, I'm tempted to write the API and use it myself to build the actual site. The idea being, if it is capable of running the primary site, then it will give other users plenty of options to put their own spin on things. It will also encourage me to keep the API up to date.
What I'd like to know is whether this idea is worth going with, or whether it's just plain nuts.
Is this common practice? Will it likely result in over complicated code? Will it cause performance issues if (by some chance) the site was to take off?
Thanks in advance.
It's a great idea, as long as you are doing it for yourself and not using up someone else's time/money.
Writing your own framework from scratch is a great way to teach yourself about planning and writing code. It may take a long time and be a long adventure, but I can personally attest that it forces you to become an expert in everything.
For anything that is being developed on someone else's dime, or which is mission-critical (security or performance), however, I would recommend reusing an existing framework where it is logical to do so.
It is common practice to build a public API and consume it internally, and in my experience it results in cleaner code (rather than maintaining two sets, one internal, one external). There may be a performance hit, but I would not worry about that too much until you see some real demand. Otherwise you can get wrapped up in solving problems that don't exist.
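One way to keep that single code path honest is to have both the public API and your own pages call the same underlying function. A sketch (Flask, with made-up names):

from flask import Flask, jsonify

app = Flask(__name__)

# The one place the logic lives.
def get_article(article_id):
    return {"id": article_id, "title": "Example article"}

# Public API route...
@app.route("/api/articles/<int:article_id>")
def api_article(article_id):
    return jsonify(get_article(article_id))

# ...and the site's own page consume the same function.
@app.route("/articles/<int:article_id>")
def article_page(article_id):
    article = get_article(article_id)
    return "<h1>%s</h1>" % article["title"]

Whether you consume the HTTP API itself or just the same internal layer is a trade-off between purity and performance.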
Definitely a good idea. Always program to an interface, not an implementation. Consuming your own API makes a lot of sense, and not doing so probably means maintaining redundant code paths.
The one thing to watch out for is premature optimisation. Do you really need all that functionality?
There are some really great APIs already. Why reinvent the wheel? (I'm assuming this is what you want to do.)
Our whole system is being designed around REST, and we are now considering how processes which are quite clearly RPC in intent can be mapped to RESTful resources without using verbs in the URL. Our remote procedure call is used to rebuild our search index when a content listing has been modified elsewhere.
What we are thinking about doing is this:
POST /index_updates
<indexUpdate><contentId>123</contentId></indexUpdate>
Nothing wrong with that in itself, but the smell is that creating this resource does not return the URL of the newly created resource, e.g. /index_updates/1234, which we could then access with a GET.
The indexing engine we are using does have a log mechanism, so in theory we could return a URL to an index_update resource so that a GET could retrieve it, but to be honest we're not interested in the resource, as this is nothing more than an RPC in disguise.
So my question is whether RESTfulness is expressed in structure or in intent. I feel the structure of what I have outlined is RESTful, but the intent is not.
Does anyone have any comments or advice?
Thanks,
Chris
Use the right tool for the job. In this case, it definitely seems like the right tool is a pure remote procedure call, and there's no reason to pretend it's REST.
One reason you might return a new resource identifier from your POST /index_updates call is to monitor the status of the operation.
POST /index_updates
<contentId>123</contentId>
201 Created
Location: /index_updates/a9283b734e
GET /index_updates/a9283b734e
<index_update><percent_complete>89</percent_complete></index_update>
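Server-side, that pattern is only a handful of lines. A sketch in Flask with an in-memory job table standing in for the indexer's log (and JSON rather than the XML above, purely for brevity):

import uuid
from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job id -> percent complete

@app.route("/index_updates", methods=["POST"])
def create_index_update():
    job_id = uuid.uuid4().hex
    jobs[job_id] = 0  # real code would kick off the reindex here
    response = jsonify({"id": job_id})
    response.status_code = 201
    response.headers["Location"] = "/index_updates/%s" % job_id
    return response

@app.route("/index_updates/<job_id>")
def get_index_update(job_id):
    if job_id not in jobs:
        return jsonify({"error": "unknown job"}), 404
    return jsonify({"percent_complete": jobs[job_id]})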
This is obviously a subjective field, but GET PUT POST DELETE is a rich enough vocabulary to describe anything. And when I go to non-English-speaking Asian countries I just point and they know what I mean since I don't speak the language... but it's hard to really get into a nice conversation with someone...
It's not a bad idea to disguise RPC as REST, since that's the whole exercise. Personally, I think SOAP has been bashed and hated while in fact it has many strengths (and with HTTP compression, HTTP/SSL, and cookies, many more strengths)... and your app is really exposing methods for the client to call. Why would you want to translate that to REST? I've never been convinced. SOAP lets you use a language that we know and love, that of the programming interface.
But to answer your question, is it a bad idea to disguise RPC as REST? No. Disguising RPC as REST and translating to the four basic operations is what the thing is about. Whether you think that's cool or not is a different story.
We have a lot of open discussions with potential clients, and they frequently ask about our level of technical expertise, including the scope of work on our current projects. The first thing I do to gauge the level of expertise of the staff they have now, or have previously used, is to check their site for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder: would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them"? Non-technical folks get scared pretty easily by this stuff, so I'm wondering: is this a show of good faith, or a poor business practice?
I would say that surprising people by suddenly penetration-testing their software may bother them, if only because they didn't know ahead of time. If you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time. If they seem a little distraught by this, explain the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most security-minded make mistakes: the Debian PRNG vulnerability is a good example of this.
I think this is a fairly subjective decision and different prospects would react differently if you told them.
I think an idea might be to let them know after they have given business to someone else.
At least this way, the ex-prospect will not think that you are trying to pressure them into giving you the business.
I think the problem with this would be that it is quite hard to check for XSS without messing up their site. Also, things like SQL injection could be quite dangerous. If you stuck to appending SELECTs, you might not have too much of a problem, but then the question is, how do you know it's even executing the injected SQL?
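On the "how do you know it executed" point, the least destructive check is a boolean-based probe: send two requests that should be identical except for an always-true versus an always-false condition and compare the responses. A sketch (hypothetical URL, and only ever with written permission; see the point about authorization below):

import requests  # third-party: pip install requests

BASE = "https://example.com/item"  # hypothetical parameterized page

def looks_injectable(param="id", value="1"):
    # SELECT-level probe only: it reads, it never modifies data.
    true_page = requests.get(BASE, params={param: value + "' AND '1'='1"}).text
    false_page = requests.get(BASE, params={param: value + "' AND '1'='2"}).text
    # If flipping the injected condition changes the page, the input reaches SQL.
    return true_page != false_page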
From the way you described it, it seems like a poor business practice, though it could become a beneficial one with some modification.
First off, any vulnerability assessment or penetration test you conduct on a customer should be agreed upon in writing by that customer, period. This covers your actions legally. Without a written agreement, if you inadvertently cause damage (application crash, denial-of-service, data leak, etc) during your inspection, you are liable and could be charged (under US law; other countries have different standards).
Even if you do not cause damage, a clueless or potentially malicious customer could take you to court claiming damages; a clueless judge might just award them.
If you have written authorization to do so, then a free vulnerability assessment to attract potential customers sounds like a show of good faith and demonstrates what you want -- your skills.