I want to know: what is the maximum length of a cookie name?
Is a cookie name unique per domain and/or path?
All of this is specified in RFC 2965 - HTTP State Management Mechanism.
A cookie name must be, like Jay said, unique within a path.
The RFC also specifies that there should be no fixed maximum length for a cookie's name or value:
From chapter 5.3 - Implementation Limits
Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents SHOULD provide each of the following minimum capabilities individually, although not necessarily simultaneously:
at least 300 cookies
at least 4096 bytes per cookie (as measured by the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie2 header, and as received in the Set-Cookie2 header)
at least 20 cookies per unique host or domain name
User agents created for specific purposes or for limited-capacity devices SHOULD provide at least 20 cookies of 4096 bytes, to ensure that the user can interact with a session-based origin server...
In practice, each browser defines its own maximum length. For more concrete data on the subject, you can consult the following Stack Overflow question: What is the maximum size of a web browser's cookie's key?
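As a rough illustration of the 4096-byte guideline above, here is a minimal Python sketch (the function name and the default limit are just for this example) that checks whether a name=value pair stays within it:

def cookie_within_rfc_limit(name: str, value: str, limit: int = 4096) -> bool:
    # The RFC measures the limit over the whole cookie, so check the
    # combined name=value pair rather than the name or value alone.
    pair = f"{name}={value}"
    return len(pair.encode("utf-8")) <= limit

# Example: a small session cookie is comfortably under the limit
print(cookie_within_rfc_limit("SESSIONID", "abc123"))  # True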
It must be unique within a path.
I don't know the exact maximum size, but each cookie should not be more than 4,000 characters, and in all practicality it should not be more than 2,000 characters.
In our setup, we are receiving a token for the primary account number (PAN), the expiration date, and some other information. We need to save this information into our database. Is it necessary/recommended to encrypt the expiration date if we are not getting the actual PAN? Does the token itself need to be encrypted?
The token is already meaningless, so encrypting it seems like overkill.
Edit
Just looked up the PCI DSS 3.0 Quick Ref. and here's what it says:
Merchants and any other service providers involved with payment card processing must never store sensitive authentication data after authorization. This includes sensitive data that is printed on a card, or stored on a card’s magnetic stripe or chip – and personal identification numbers entered by the cardholder.
Looks like you need to discard the info rather than encrypt it.
According to the PCI/DSS standards, encryption actually has almost no value, as it doesn't take anything out of scope. That's not to say you shouldn't be encrypting data, but doing so really has virtually no effect on the PCI/DSS question.
Is there any limit on the number of presigned URLs per object in AWS S3? Say I want to create 1,000 presigned URLs per object within 2 minutes. Is that a valid scenario?
You can create as many signed URLs as you wish. Depending on your motivation and strategy, however, there is a practical limitation on the number of unique presigned URLs for the exact same object.
S3 (in S3 regions that were first deployed before 2014) supports two authentication algorithms, V2 and V4, and the signed urls look very different since the algorithms are very different.
In V2, the signed URL for a given expiration time will always look the same, if signed by the same AWS key.
If you sign the url for an object, set to expire one minute in the future... and immediately repeat the process, the two signed URLs will be identical.
Next, exactly one second later, sign a url for the same object to expire 59 seconds in the future, and that new signed URL will also be identical.
Why? Because in V2, the expiration time is an absolute wall clock time in UTC, and the particular time in history when you actually generated the signed URL doesn't change anything.
V4 is different. In the scenario above, the first two signed URLs would still be identical, but the third one would not, because V4 auth includes the date and time when you created the signed URL, or when you say you did. The expiration time is relative to the signing time, instead of absolute.
Note that both forms of signed URL are tamper-resistant -- the expiration time is embedded in the url, but attempting to tweak it after signing will invalidate the signing and make it useless.
If you need to generate a large number of signed urls for the same object, you'll need to increment the expiration time for each individual signing attempt in order to get unique values. (Edit: or not, if you're feeling clever... see below).
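For instance, a minimal boto3 sketch of the increment-the-expiry idea might look like the following (the bucket and object names are placeholders; note that under V4 signing the embedded signing timestamp can already differ between calls made in different seconds):

import boto3

s3 = boto3.client("s3")

urls = []
for i in range(5):
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "my-object"},
        # Bumping the expiry by one second per iteration forces a different
        # expiration value, and therefore a different signature.
        ExpiresIn=120 + i,
    )
    urls.append(url)

print(len(set(urls)))  # expect 5 distinct URLs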
It also occurs to me that you may be under the impression that S3 has an active role in the signing process, but it doesn't. That's all done in your local code.
S3 isn't aware, in any sense, of the signed urls you generate unless or until they are used. When a signed request arrives, S3 does exactly the same thing your code will do -- it canonicalizes certain attributes of the request, and generates a signature. Then it compares what it generated with what your code should have generated, given precisely the same parameters. If their generated signature matches your provided signature (and the key you used has permission to perform the requested action) then the request succeeds.
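To make the "purely local" point concrete, here is a rough sketch of the legacy V2 query-string signing scheme (the credentials, bucket, and key below are placeholders, not real values); S3 performs the same HMAC computation on its side and compares the result to the Signature parameter you supplied:

import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

# Placeholder credentials and object location, for illustration only.
ACCESS_KEY = "AKIAEXAMPLE"
SECRET_KEY = "example-secret-key"
BUCKET = "my-bucket"
KEY = "my-object"

def presign_v2_get(expires_in: int = 120) -> str:
    # Nothing here talks to S3: the URL is built entirely from local string
    # manipulation plus an HMAC-SHA1 over the string-to-sign.
    expires = int(time.time()) + expires_in  # absolute UTC epoch time (V2)
    string_to_sign = f"GET\n\n\n{expires}\n/{BUCKET}/{KEY}"
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = quote_plus(base64.b64encode(digest).decode())
    return (
        f"https://{BUCKET}.s3.amazonaws.com/{KEY}"
        f"?AWSAccessKeyId={ACCESS_KEY}&Expires={expires}&Signature={signature}"
    )

print(presign_v2_get())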
Update: it turns out, there is an unofficial mechanism that allows you to embed additional "entropy" into the signing process, generating unique, per-user (for example) signed URLs for the same object and expiration time.
Under V2 authentication, which doesn't normally want you to include non-S3-specific parameters in your signing logic, it looks suspiciously like a bug as well as a feature... add &x-amz-meta-{anything-here}={unique-value-here} query string parameters to your URL. These are used as headers in a PUT request but are meaningless in a GET request, and yet, if present, S3 still requires them to be included in the signature calculation, even though the parameter keys and values will ultimately be discarded by S3... but the added values are tamper-resistant and can't be maliciously removed or altered without invalidating the signature.
The same mechanism works in V4, even though it's for a different reason.
Credit for this technique: http://www.bennadel.com/blog/2488-generating-pre-signed-query-string-authentication-amazon-s3-urls-with-user-specific-data.htm
The accepted answer is now outdated. For future viewers, there is no need to include anything as an extra header, as AWS now includes a Signature field in every signed URL, which is different every time you make a request.
Yes. In fact, I believe AWS can't even limit that, as there is no such API call on S3. URL signing is done purely by the SDK.
Whether creating so many URLs is a good idea is completely context-dependent, though...
AWS' query parameter ordering code can be seen on their Github repository.
I have thought about why they might require API clients to sign requests:
intermediate proxies might canonicalize URLs and mess up the original query string order
The URI RFC specifies absolutely nothing about the order of the query string parameters, or that it should be preserved
My best guess is that, because of the RFC, Amazon reckoned they'd play it safe and require both sides to sign the ORDERED request.
I would, however, like the final/official word on this. Surely the implementors had a good reason for this requirement.
The request signature ensures that the sender and receiver can agree on exactly what was sent in the request and that no intermediate parties tampered with it.
Many parts of an HTTP request can change without changing the semantics of the request. For example the HTTP headers can be re-ordered, as can the query parameters as you rightly point out.
So the request must be canonicalized into a form that removes these ambiguities and that both parties will use to sign the request. Otherwise each party could generate different signatures for the same request. Ordering the query parameters is just part of this process. Amazon describes their canonicalization process and their motivation in the docs for the AWS V4 signature format.
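As a small illustration of the ordering step (a simplified take, not AWS's exact encoding rules), both sides might canonicalize the query string like this before signing:

from urllib.parse import quote

def canonical_query_string(params: dict) -> str:
    # Percent-encode names and values, then sort, so that a reordering
    # proxy can no longer change the bytes that get signed.
    encoded = [
        (quote(str(k), safe="-_.~"), quote(str(v), safe="-_.~"))
        for k, v in params.items()
    ]
    return "&".join(f"{k}={v}" for k, v in sorted(encoded))

# The same logical request, with parameters supplied in different orders:
a = canonical_query_string({"prefix": "logs/", "max-keys": 10})
b = canonical_query_string({"max-keys": 10, "prefix": "logs/"})
print(a == b)  # True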
I've asked a question related to this one here:
Securely Passing UserID from ASP.Net to Javascript
However, now I have a more detailed/specific question. I have the service and I have the application that is going to consume the service. My plan to secure it is to generate a hash based on some values, a nonce, and a secret key. My only issue is that it seems that, in order to verify the hash, I will have to send all of the values plus the nonce, everything except the secret key. Is this a flaw in my design, or is this how such things are done? I have googled around and haven't been able to find out if this is the right and secure way to do this.
For example, let's say I need to pass values 1, 2, and 3 to my REST service. I use the user's phone number, the nonce, and the secret key to generate a hash; now, in order to generate the hash again, I would need to pass all of the above except the key (which I can retrieve based on the user's phone number).
Am I totally leaving my service open to attack, securing it properly, or somewhere in between?
EDIT: made a spelling and grammar correction
EDIT 2: Finally came to a satisfactory solution by using MVC 4 with forms authentication, identical cookie names between two projects, and making use of a globally applied [Authorize] attribute
There is nothing inherently wrong with this plan. If the client sends:
data . nonce . hash(data . nonce . shared-secret)
and the server verifies the message by checking that hash(data . nonce . shared-secret) matches the hash provided by the client, then you are safe against both tampering and replay (assuming, of course, that you're using a reasonable cryptographic hashing algorithm).
Under this design, the client can even generate its own nonces, provided there is no risk that two clients will generate the same nonce.
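A minimal Python sketch of that flow (the field names and secret are illustrative; it uses HMAC rather than a bare hash, a constant-time comparison, and a seen-nonce set for replay protection):

import hashlib
import hmac

SHARED_SECRET = b"example-shared-secret"  # placeholder

def sign(data: str, nonce: str) -> str:
    # Client side: tag the data and nonce with the shared secret.
    return hmac.new(SHARED_SECRET, f"{data}.{nonce}".encode(), hashlib.sha256).hexdigest()

def verify(data: str, nonce: str, tag: str, seen_nonces: set) -> bool:
    # Server side: reject replayed nonces, then recompute and compare the tag.
    if nonce in seen_nonces:
        return False
    if not hmac.compare_digest(sign(data, nonce), tag):
        return False
    seen_nonces.add(nonce)
    return True

seen = set()
tag = sign("phone=5551234", "nonce-001")
print(verify("phone=5551234", "nonce-001", tag, seen))  # True
print(verify("phone=5551234", "nonce-001", tag, seen))  # False (replay)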
However, eavesdroppers will still be able to see all the data you send… So, unless there is a very good reason not to, I would simply use HTTPS (which, unless there are other requirements I'm unaware of, would be entirely sufficient).
I am looking for a protocol/algorithm that will allow me to use a shared secret between my App & an HTML page.
The shared secret is designed to ensure only people who have the app can access the webpage.
My problem: I do not know what algorithm (my methodology to validate access to the HTML page) & what encryption protocol I should use for this.
People have suggested to me that I use HMAC-SHAxxx, DES, or AES; I am unsure which I should use - do you have any suggestions?
My algorithm is like so:
I create a shared secret that the App & the HTML page know of (let's call it "MySecret"). To ensure that the shared secret is always unique, I will add the current date & minute to the end of the secret, then hash it using the XXX algorithm/protocol (HMAC/AES/DES). So the unencrypted secret will be "MySecret08/17/2011-11-11", & let's say the hash of that is "xyz".
I then add this hash to the URL as a CGI parameter: http://mysite.com/comp.py?sharedSecret=xyz
The comp.py script then uses the same shared secret & date combination, hashes it, and checks that the resulting hash is the same as the CGI variable sharedSecret ("xyz"). If it is, then I know a valid user is accessing the webpage.
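A rough sketch of that flow in Python (HMAC-SHA256 stands in here for the unspecified "XXX" algorithm, with the secret used as the HMAC key rather than simple concatenation, which is the more standard construction):

import hashlib
import hmac
from datetime import datetime, timezone

SECRET = b"MySecret"  # the shared secret known to the App and comp.py

def current_token() -> str:
    # App side: combine the current date and minute with the secret.
    stamp = datetime.now(timezone.utc).strftime("%m/%d/%Y-%H-%M")  # e.g. 08/17/2011-11-11
    return hmac.new(SECRET, stamp.encode(), hashlib.sha256).hexdigest()

url = f"http://mysite.com/comp.py?sharedSecret={current_token()}"
print(url)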
Can you think of a better methodology to ensure only valid people can access my webpage (the webpage allows the user to enter a competition)?
I think I am on the correct track using a shared secret, but my methodology for validating the secret seems flawed, especially if the hash algorithm doesn't produce the same result for the same input all the time.
especially if the hash algorithm doesn't produce the same result for the same input all the time.
Then the hash is broken. Why wouldn't it?
You want HMAC in the simple case. You are "signing" your request using the shared secret, and the signature is verified by the server. Note that the HMAC should include more data to prevent replay attacks - in fact it should include all query parameters (in a specified order), along with a serial number to prevent the replay of the same message by an eavesdropper. If all you are verifying is the shared secret, anyone overhearing the message can continue to use this shared secret until it expires. By including a serial number, or a short validity range, you can configure the server to flag that.
Note that this is still imperfect. TLS supports client- and server-side certificates - why not use that?
That looks like it would work. Clock drift could be a problem; you may need to validate a range of, say, +/- 3 minutes if it fails for the exact time.
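Continuing the hypothetical sketch above, the server-side check with a +/- 3 minute tolerance could look roughly like this:

import hashlib
import hmac
from datetime import datetime, timedelta, timezone

SECRET = b"MySecret"  # same placeholder secret as the App uses

def expected_token(at: datetime) -> str:
    stamp = at.strftime("%m/%d/%Y-%H-%M")
    return hmac.new(SECRET, stamp.encode(), hashlib.sha256).hexdigest()

def verify_token(token: str, drift_minutes: int = 3) -> bool:
    # comp.py side: accept a token generated for any minute in the window.
    now = datetime.now(timezone.utc)
    return any(
        hmac.compare_digest(expected_token(now + timedelta(minutes=offset)), token)
        for offset in range(-drift_minutes, drift_minutes + 1)
    )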
flawed especially if the hash algorithm doesn't produce the same result for the same input all the time
Well, that would be a broken hash algorithm then. A hash reliably produces the same output for the same input every time (and almost always a different output for a different input).
Try using some sort of network encryption. Your web server should be able to handle that type of authentication automatically. All that remains is for you to write it into your app (which you have to do anyway). Depending on your app platform, you may be able to do that automatically as well.
Google these: Kerberos, SPNEGO and HTTP 401 Authorization Required. You may be able to get away with simple hard-coded user name and password HTTP headers and run your connections over HTTPS. That way you have less custom code on your server and your server takes care of authenticating your requests for you. Not to mention you are taking advantage of some additional features of HTTP.