Maximum key length in AppFabric?

Currently I am working on an ASP.NET MVC application. It uses AppFabric for session and cache management. I am using the API methods (Put, Get) to add and retrieve key/value pairs. I have a number of keys that are created based on a number of conditions, i.e., the keys are variable in length.
What is the maximum length/size of a key in AppFabric?

From the AppFabric MSDN forum:
Q. What is the maximum length of the cache key?
A. There is no limit on the key as such, but it is subject to the maximum
message size. The maximum message size by default is 8 MB.

Related

Can page size be set with dynamodb.ScanPages?

The documentation for working with DynamoDB scans, found here, makes reference to a page-size parameter for the AWS CLI.
Looking at the documentation for the Go AWS SDK, found here, there is a function ScanPages. There is an example of how to use the function, but nowhere in the documentation is there a way to specify something like page-size as the AWS CLI has. I can't determine how the paging occurs, other than assuming that if the results exceed 1 MB, that would be considered a page, based on the Go documentation and the general scan documentation.
I'm also aware of the Limit value that can be set on the ScanInput, but the documentation indicates that value would function as a page size only if every item processed matched the filter expression of the scan:
The maximum number of items to evaluate (not necessarily the number of matching items)
Is there a way to set something equivalent to page-size with the go SDK?
How does pagination work in AWS?
DynamoDB paginates the results from Scan operations. With pagination,
the Scan results are divided into "pages" of data that are 1 MB in
size (or less). An application can process the first page of results,
then the second page, and so on.
So for each request, if there are more items in the result, you will always get a LastEvaluatedKey, and you will have to re-issue the scan request using this LastEvaluatedKey to get the complete result.
For example, if a query has 400 results and each request fetches at most the upper limit of 100 results, you will have to re-issue the scan request until LastEvaluatedKey comes back empty. You will do something like below. documentation
var lastEvaluatedKey map[string]*dynamodb.AttributeValue
for {
    input := &dynamodb.ScanInput{
        // ... copy all the parameters of the original ScanInput here
        ExclusiveStartKey: lastEvaluatedKey,
    }
    output, err := dynamoClient.Scan(input)
    if err != nil {
        break // handle the error
    }
    // process output.Items here
    lastEvaluatedKey = output.LastEvaluatedKey
    if len(lastEvaluatedKey) == 0 {
        break // no LastEvaluatedKey: the scan is complete
    }
}
What does page-size do in the AWS CLI?
The scan operation scans the whole DynamoDB table and returns results according to the filter. Ordinarily, the AWS CLI handles pagination automatically: it keeps re-issuing the scan request for us, and this request-and-response pattern continues until the final response.
The page-size option tells the CLI to scan only page-size rows of the table at a time and filter on those. If the complete table has not been scanned, or the result is more than 1 MB, the response will contain a LastEvaluatedKey and the CLI will re-issue the request.
Here is a sample request and response from the documentation.
aws dynamodb scan \
--table-name Movies \
--projection-expression "title" \
--filter-expression 'contains(info.genres,:gen)' \
--expression-attribute-values '{":gen":{"S":"Sci-Fi"}}' \
--page-size 100 \
--debug
b'{"Count":7,"Items":[{"title":{"S":"Monster on the Campus"}},{"title":{"S":"+1"}},
{"title":{"S":"100 Degrees Below Zero"}},{"title":{"S":"About Time"}},{"title":{"S":"After Earth"}},
{"title":{"S":"Age of Dinosaurs"}},{"title":{"S":"Cloudy with a Chance of Meatballs 2"}}],
"LastEvaluatedKey":{"year":{"N":"2013"},"title":{"S":"Curse of Chucky"}},"ScannedCount":100}'
We can clearly see that ScannedCount is 100 and the filtered count, Count, is 7, so out of 100 items scanned only 7 items passed the filter. documentation
From Limit's Documentation
// The maximum number of items to evaluate (not necessarily the number of matching
// items). If DynamoDB processes the number of items up to the limit while processing
// the results, it stops the operation and returns the matching values up to
// that point, and a key in LastEvaluatedKey to apply in a subsequent operation,
// so that you can pick up where you left off.
So basically, page-size and Limit are the same thing: Limit caps the number of rows scanned in one Scan request.
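Put together, a Limit-bounded scan evaluates at most Limit items per request, returns only those that pass the filter, and hands back a LastEvaluatedKey until the table is exhausted. The following self-contained sketch simulates that contract with a toy in-memory table; item, scanPage, and the integer marker are illustrative stand-ins, not SDK types.

```go
package main

import "fmt"

// A toy item: its title and whether it passes the filter expression.
type item struct {
	title   string
	matches bool
}

// scanPage mimics one Scan call with a Limit: it evaluates at most
// `limit` items starting at `start`, returns the matching ones, and
// returns the next start offset as a stand-in for LastEvaluatedKey
// (-1 means no more pages).
func scanPage(table []item, start, limit int) (matched []string, next int) {
	end := start + limit
	if end >= len(table) {
		end = len(table)
		next = -1 // no LastEvaluatedKey: the scan is complete
	} else {
		next = end
	}
	for _, it := range table[start:end] {
		if it.matches {
			matched = append(matched, it.title)
		}
	}
	return matched, next
}

func main() {
	// 250 items; every 50th one matches the filter.
	var table []item
	for i := 0; i < 250; i++ {
		table = append(table, item{fmt.Sprintf("movie-%d", i), i%50 == 0})
	}

	var results []string
	pages := 0
	for start := 0; start != -1; {
		var matched []string
		matched, start = scanPage(table, start, 100) // Limit = 100
		results = append(results, matched...)
		pages++
	}
	fmt.Println(pages, len(results)) // 3 requests, 5 matches out of 250 scanned
}
```

Note how ScannedCount (up to 100 per page) and the matched Count diverge, exactly as in the CLI output above.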

Why does the S3 manager delete in batches of 100? Can I increase this limit?

As per the S3 Manager docs
AWS SDK for Go API Reference
const (
// DefaultBatchSize is the batch size we initialize when constructing a batch delete client.
// This value is used when calling DeleteObjects. This represents how many objects to delete
// per DeleteObjects call.
DefaultBatchSize = 100
)
The default batch size is 100. What's the maximum size I can use for this variable? Or is it not meant to be changed? What would be the repercussions of making it a very large number, like 1000000?
If you check the API docs for DeleteObjects, which is what the Go SDK uses when batching, the maximum number of keys you can provide per request is 1000: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
The S3 manager's NewBatchDelete client can take as many objects as you provide it, and will automatically batch them into the specified size to make the API calls. You can find the relevant code here: https://github.com/aws/aws-sdk-go/blob/v1.25.45/service/s3/s3manager/batch.go#L301
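The batching behavior is simple to picture: the client chunks whatever list of keys you give it into groups no larger than the batch size, one DeleteObjects call per group. A minimal self-contained sketch (batchKeys is an illustrative helper, not the SDK's actual code):

```go
package main

import "fmt"

// batchKeys splits object keys into batches of at most n keys each,
// mirroring how a batch-delete client would group keys before calling
// DeleteObjects (which accepts at most 1000 keys per request).
func batchKeys(keys []string, n int) [][]string {
	var batches [][]string
	for len(keys) > 0 {
		m := n
		if len(keys) < m {
			m = len(keys)
		}
		batches = append(batches, keys[:m])
		keys = keys[m:]
	}
	return batches
}

func main() {
	keys := make([]string, 2500)
	for i := range keys {
		keys[i] = fmt.Sprintf("object-%d", i)
	}
	batches := batchKeys(keys, 1000) // 1000 is the API's hard maximum per call
	fmt.Println(len(batches))        // 2500 keys -> 1000 + 1000 + 500
}
```

So setting the batch size above 1000 buys nothing: each DeleteObjects request would simply be rejected by the API, which is why a value between 100 and 1000 is the practical range.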

How to change or get all keys using the maxkeys returned by the listbucket xml?

I am trying to list all my files in my public bucket using the URL http://gameexperiencesurvey.s3.amazonaws.com/
You can visit the URL to see the XML.
The XML contains an element called MaxKeys with a value of 1000, which is the maximum number of keys returned in the response body. What if I want to list all the keys that I have? How do I do that?
Also, what is the maximum limit for the number of keys and their size on a free AWS S3 account?
It is called S3 pagination. See: Iterating Through Multi-Page Results
Iterating Through Multi-Page Results
As buckets can contain a virtually unlimited number of keys, the
complete results of a list query can be extremely large. To manage
large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up
to 1,000 keys with an indicator indicating if the response is
truncated. You send a series of list keys requests until you have
received all the keys. AWS SDK wrapper libraries provide the same
pagination.
You need to have sufficient privileges to list the object keys.
AWS Free Tier for S3
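The loop the quoted documentation describes can be sketched as follows. listPage below is a toy stand-in for an S3 list call: it returns one page of at most maxKeys keys plus a truncation flag, and the caller keeps requesting until the flag is false. The names and the integer marker (S3 really uses the last key or a continuation token) are assumptions for illustration.

```go
package main

import "fmt"

// listPage mimics one S3 list-keys call: it returns up to maxKeys keys
// starting at `marker`, the marker for the next request, and whether
// the response was truncated (i.e., more keys remain).
func listPage(allKeys []string, marker, maxKeys int) (page []string, nextMarker int, truncated bool) {
	end := marker + maxKeys
	if end > len(allKeys) {
		end = len(allKeys)
	}
	page = allKeys[marker:end]
	return page, end, end < len(allKeys)
}

func main() {
	// A bucket with 2300 keys; each response carries at most 1000.
	var keys []string
	for i := 0; i < 2300; i++ {
		keys = append(keys, fmt.Sprintf("file-%d.txt", i))
	}

	var collected []string
	marker, truncated := 0, true
	for truncated {
		var page []string
		page, marker, truncated = listPage(keys, marker, 1000)
		collected = append(collected, page...)
	}
	fmt.Println(len(collected)) // all 2300 keys, fetched across 3 requests
}
```

The AWS SDK paginators wrap exactly this loop for you, so in practice you rarely write it by hand.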

Generating a new Google service account credential P12 key greater than 1024 bits

I've got an existing service account and the P12 key that was generated at the time has a key length of 1024 bits. We've used this in production for a couple of years.
We've now got a requirement (imposed by the System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler in the https://www.nuget.org/packages/System.IdentityModel.Tokens.Jwt/ package) to sign JWTs using a key with a minimum key length of 2048 bits.
If I create a new service account in the Google Developers Console and generate a P12 key it's got a key length of 2048 bits - all good. However, if I 'Generate new P12 key' for my existing service account the new key has a key length of only 1024 bits (just like the existing one).
I need a way to create a new P12 key for my existing service account that has a 2048 bit key length.
The short answer is that it is currently not possible without creating a new client ID. This restriction may be relaxed at some point, but for the near future you'd need to create a new client to move to 2048-bit keys.

Cookie name length, uniqueness

I want to know: what is the maximum length of a cookie name?
Is the cookie name unique per domain and/or path?
All of this information is specified in RFC 2965 - HTTP State Management Mechanism.
A cookie name must be, like Jay said, unique within a path.
The RFC also specifies that there should be no maximum length for a cookie's name or value:
From chapter 5.3 - Implementation Limits
Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents SHOULD provide each of the following minimum capabilities individually, although not necessarily simultaneously:
at least 300 cookies
at least 4096 bytes per cookie (as measured by the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie2 header, and as received in the Set-Cookie2 header)
at least 20 cookies per unique host or domain name
User agents created for specific purposes or for limited-capacity devices SHOULD provide at least 20 cookies of 4096 bytes, to ensure that the user can interact with a session-based origin server...
In practice, each browser defines its own maximum length. For more concrete data on the subject, you can consult the following Stack Overflow question: What is the maximum size of a web browser's cookie's key?
It must be unique within a path.
I don't know about a maximum size, but each cookie should not be more than 4,000 characters and, in all practicality, should not be more than 2,000 characters.
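For the 4096-byte figure quoted from the RFC, the measure is the serialized name=value pair, so the budget is shared between name and value. A minimal sketch (cookieSize and fitsCommonLimit are hypothetical helpers; 4096 is the RFC's minimum capability, not a guaranteed limit in any particular browser):

```go
package main

import "fmt"

// cookieSize returns the byte count of the serialized "name=value"
// pair, which is what browsers typically measure against their
// per-cookie storage limit.
func cookieSize(name, value string) int {
	return len(name) + len("=") + len(value)
}

// fitsCommonLimit checks the pair against the 4096-byte minimum
// capability the RFC asks user agents to support.
func fitsCommonLimit(name, value string) bool {
	return cookieSize(name, value) <= 4096
}

func main() {
	fmt.Println(fitsCommonLimit("session", "abc123")) // a small cookie easily fits
}
```

Staying well under this limit (hence the 2,000-character rule of thumb above) leaves headroom for attributes like Path and Expires that travel with the cookie.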