I'm having a small IT issue - cookies

I was recently doing some work in my browser and looking at my cookies. In the URL listed for one of them, it was written as https://www.examplesite[*.].com/
What are the brackets and asterisk for? I've never run across that before. I'd appreciate any help I could get.

Matt, sometimes programmers, when they are setting up their code, put in placeholder strings that they intend to come back to and expand, but never get around to it. The code or comment passes the normal checks and ends up in your log files or cookies or wherever. In this case the programmer evidently has a domain with a number of subdomains (or the same subdomain on a number of different domains) and intends to use the same code for each. It is lazy programming, but it might make some sense if there are many subdomains to consider and they cannot find the time to programme something more explicit, relying on the pattern to make sense to anyone who needs to read it.
It might also have been generated by your browser (which one are you using?), although scanning through my cookie list in FF I see linkedin.com has left cookies from the uk. and ca. subdomains for me, so evidently FF does not try to shorten the list in this way.
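To make the subdomain idea concrete: a single cookie can be scoped to a whole family of subdomains by giving it a leading-dot domain, which is the kind of thing a [*.] placeholder would stand for. A minimal PHP sketch, using a hypothetical cookie name and value:
<?php
// Hypothetical example: one cookie that every subdomain of examplesite.com
// (www., uk., ca., ...) will receive, rather than one cookie per subdomain.
setcookie(
    'site_pref',            // hypothetical cookie name
    'dark-mode',            // hypothetical value
    time() + 60 * 60 * 24,  // expires in one day
    '/',                    // path: whole site
    '.examplesite.com'      // leading dot scopes the cookie to all subdomains
);
How a given browser then displays that entry in its cookie manager is browser-specific; the snippet is only meant to illustrate the one-cookie-for-many-subdomains pattern described above.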

Related

Can the _ga cookie value be used to exclude self traffic in Google Analytics Universal?

Google Analytics stores a unique user id in a cookie named _ga. Some self traffic has already been counted, and I was wondering if there's a way to filter it out by providing the _ga cookie value to some exclusion filter.
Any ideas?
Firstly, I'm gonna put it out there that there is no solution for excluding or removing historical data, except to make a filter or segment for your reports, which doesn't remove or prevent that data from showing up; it simply hides it. So if you're looking for something that gets rid of the data that is already there, sorry, not happening. Now on to making sure more data doesn't show up.
GA does not offer a way to exclude traffic by its visitor cookie (or any cookie in general). In order to do this, you will need to read the cookie yourself and expose it as something that GA can exclude by. For example, you can pop a custom variable or override/append the page name.
But this isn't really that convenient, for lots of reasons, such as having to burn a custom variable slot, or having to write some server-side or client-side code to read the cookie and act on its value, etc.
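To sketch what that could look like (this is only an illustrative outline, and the id list and variable names are hypothetical): read the _ga cookie server-side and, if it matches one of your own ids, expose a marker that a GA filter can key on, for example by rewriting the tracked page name.
<?php
// Hypothetical sketch: flag hits from known "self" _ga cookie values so a
// view filter (e.g. exclude pages starting with /internal) can drop them.
$selfGaIds = array('GA1.2.1234567890.1234567890'); // example value, substitute your own

$gaCookie = isset($_COOKIE['_ga']) ? $_COOKIE['_ga'] : '';
$isSelf   = in_array($gaCookie, $selfGaIds, true);

// Pass this to the page-name override (or a custom variable) in your tracking snippet.
$trackedPage = ($isSelf ? '/internal' : '') . $_SERVER['REQUEST_URI'];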
And even if you do decide to do this, you're going to have to consider at least two scenarios in which this won't work or will break:
1) This won't work if you go to your site from a different browser, since browsers can't read each other's cookies.
2) It will break as soon as you clear your cookies.
An alternative you should consider is to make an exclusion filter on your IP address. This has the following benefits:
it works regardless of which browser you are on
you don't have to write any code or burn/overwrite any variables
you don't have to care about the cookie at all
General Notes
I don't presume to know your situation or level of knowledge, so take this for what it's worth; I'm simply throwing out general advice because nothing goes without saying.
If you were on your own site to QA something and want to remove data that came from development or QA efforts, a better solution in general is to have a separate environment for developing and QAing. For example, a separate dev.yoursite.com subdomain that mirrors live. Then you can easily make an exclusion on that subdomain (or have a separate view or property, or any number of other ways to keep that dev/QA traffic out).
Another thing is: how much traffic are we talking about here anyway? You really shouldn't be worrying about a few hits that you've personally made on your live site. Again, I don't know your full situation or what your normal numbers look like, but in the grand scheme of things a few extra hits are a drop in the bucket, and in general it's more useful to look at trends in the data over time, not exact numbers at a given point in time.

FB.XFBML.parse loops and gives thousands of javascript errors

I have a number of dynamically generated Like buttons on a site (http://www.thepropaganda.com), and so I use FB.XFBML.parse to generate them all. For some reason the parser always gets into a loop and repeatedly generates "domain and protocol must match" errors as per Stack Overflow question 3577947. All the Facebook social plugins are created correctly.
I understand what the errors are, and they're not really a problem other than that there are thousands of them. Funnily enough, this doesn't happen at all in incognito.
I'd really like to know what's going on here, as it's a live site for a paying client.
No idea why that's occurring, but if it doesn't happen in incognito I'm assuming it's something to do with a cookie or session variable.
I would have added this as a comment but I need 50 rep to do it :(

What are my options for white-listing HTML in ColdFusion?

I want to allow my users to input HTML.
Requirements
Allow a specific set of HTML tags.
Preserve characters (do not encode ã into &atilde;, for example)
Existing options
AntiSamy. Unfortunately AntiSamy encodes special characters and breaks requirement 2.
Native ColdFusion functions (HTMLCodeFormat() etc.) don't work, as they encode HTML into entities and thus fail requirement 1.
I found this set of functions somewhere, but I have no way of telling how secure this is: http://pastie.org/2072867
So what are my options? Are there existing libraries for this?
Portcullis works well for ColdFusion for attack-specific issues. I've used a couple of other regex solutions I found on the web over time that have worked well, though they haven't been nearly as fleshed out. In 15 years (10 as a CMS developer) nothing I've built has been hacked... knock on wood.
When developing input fields of any type, it's good to look at the problem from different angles. You've got the UI side, which includes both usability and client-side validation. Yes, it can be bypassed, but JavaScript-based validation is quicker, more responsive, and rates higher on the magical UI scale than the backend-interruption method or simply making things "disappear" without warning. It also speeds up the back-end validation because it does the initial screening. So it's not an "instead of" but an "in addition to" type of solution that can't be ignored.
Also on the UI front, giving your users a good-quality editor can make a huge difference in the process. My personal favorite is CKEditor, simply because it's the only one that can handle Microsoft Word markup on the front side, keeping it far away from my DB. It seems silly, but Word HTML is valid, so it won't set off any red flags... yet on a moderately sized document it will quickly blow past a DB field's maximum insert size, believe it or not. Not only will a good editor reduce the amount of silly HTML that comes in, it will also just make things faster for the user... win/win.
I personally encode and decode my characters... it's always just worked well, so I've never changed that practice.

Web Application Cross Site Scripting

My website http://www.imayne.com seems to have this issue, verified by McAfee. Can someone show me how to fix this?
It says this:
General Solution:
When accepting user input ensure that you are HTML encoding potentially malicious characters if you ever display the data back to the client.
Ensure that parameters and user input are sanitized by doing the following:
Remove < input and replace with &lt;
Remove > input and replace with &gt;
Remove ' input and replace with &apos;
Remove " input and replace with &#x22;
Remove ) input and replace with &#x29;
Remove ( input and replace with &#x28;
I cannot seem to show the actual code. This website is showing something else.
I'm not a web dev, but I can do a little. I'm trying to be PCI compliant.
Let me both answer your question and give you some advice. Preventing XSS properly needs to be done by defining a white-list of acceptable values at the point of user input, not a black-list of disallowed values. This needs to happen first and foremost, before you even begin thinking about encoding.
Once you get to encoding, use a library from your chosen framework; don't attempt character substitution yourself. There's more information about this in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS) (don't worry about it being .NET orientated, the concepts are consistent across all frameworks).
Now for some friendly advice: get some expert support ASAP. You've got a fundamentally obvious reflective XSS flaw in an e-commerce site and based on your comments on this page, this is not something you want to tackle on your own. The obvious nature of this flaw suggests you've quite likely got more obscure problems in the site as well. By your own admission, "you're a noob here" and you're not going to gain the competence required to sufficiently secure a website such as this overnight.
The type of changes you are describing is often accomplished in several languages via an HTML encoding function. What is the site written in? If this is an ASP.NET site, this article may help:
http://weblogs.asp.net/scottgu/archive/2010/04/06/new-lt-gt-syntax-for-html-encoding-output-in-asp-net-4-and-asp-net-mvc-2.aspx
In PHP use this function to wrap all text being output:
http://ch2.php.net/manual/en/function.htmlentities.php
Anyplace you see echo(...) or print(...) you can replace it with:
echo(htmlentities( $whateverWasHereOriginally, ENT_COMPAT));
Take a look at the examples section in the middle of the page for other guidance.
Follow those steps exactly, and you're good to go. The main thing is to ensure that you don't treat anything the user submits to you as code (HTML, SQL, JavaScript, or otherwise). If you fail to properly clean up the inputs, you run the risk of script injection.
If you want to see a trivial example of this problem in action, search for
<span style="color:red">red</span>
on your site, and you'll see that the echoed search term is red.
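To tie that example back to the PHP fix above, here is a hedged sketch of the before/after for the search page; the parameter name q is an assumption about your code:
<?php
// Hypothetical search page; the 'q' parameter name is an assumption.
$term = isset($_GET['q']) ? $_GET['q'] : '';

// Vulnerable pattern: the raw term is echoed back as live HTML.
// echo 'You searched for: ' . $term;

// Safer pattern: encode it, so a submitted <span>...</span> shows up as text
// instead of being rendered (or executed, in the case of <script>).
echo 'You searched for: ' . htmlentities($term, ENT_COMPAT, 'UTF-8');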

An open-ended .htaccess rewrite allows for anything to be put on the end of a URL; how bad is this?

Basically I'm working on a client's site and I've just realised that many of their rewrite regex rules don't check the end of the URL, and in pretty much every case you can sling any junk on the end of a URL and it still returns OK. For example:
/article_23.html
/article_23.htmlaijdasduahds
/article_23.html.jpg
etc
This actually happens in about four different areas of the site, meaning that most of the site's pages are susceptible to it.
AFAIK everything is sanitised OK when it's being read for the ID etc. I pretty much know how I am going to fix it, but what I want to know is: what are the main problems that are going to arise from this?
Additionally, what HTTP status should be returned? On one hand you'd think it should be a straight 404, but is it worth 301'ing to the right page if we can?
A 301 to the correct page will not hurt performance much, but it might lead a lot of users "to the right place". I have a client who is obsessed with that sort of thing: never leave any old valid URL without 301'ing it to the new one (if there is a new one, of course). He claims that this alone has allowed him to keep very good rankings in search engines and has saved a lot of users the trouble of finding the right URL themselves. I believe this helps a lot. Maybe if the site is relatively new it's not worth the effort and the overhead, but if it's not that new, I'd do it.
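For what it's worth, here is a hedged sketch of how that 301-to-canonical idea could look inside the PHP handler the rewrite rule points at; the id parameter and the article_NN.html layout are assumptions based on the URLs in the question:
<?php
// Hypothetical article handler reached via a rewrite rule that captures the id.
$articleId = isset($_GET['id']) ? (int) $_GET['id'] : 0;

if ($articleId <= 0 /* || the article does not exist */) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// If junk was appended after the canonical path, 301 to the clean URL.
$canonical   = '/article_' . $articleId . '.html';
$requestPath = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if ($requestPath !== $canonical) {
    header('Location: ' . $canonical, true, 301);
    exit;
}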