I was wondering whether the AWSAccessKeyId in presigned URLs is static. Does it ever change over time, or is it uniquely linked to the user that generated it?
It depends on the credentials used to sign the URL.
If it begins with AKIA, then it's a long-lived user access key. These exist as long as the user chooses to let them exist, although it's a good practice to rotate your keys regularly.
If it begins with ASIA, then it's an assumed role key. These typically expire in an hour (although they can live longer).
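If you're not sure which kind you're looking at, you can pull the key ID out of the URL's query string and check the prefix. A minimal sketch in Python (the sample URL is a placeholder; it handles both the V2-style AWSAccessKeyId parameter and the V4-style X-Amz-Credential parameter):

```python
from urllib.parse import urlparse, parse_qs

presigned_url = ('https://example-bucket.s3.amazonaws.com/some-key'
                 '?AWSAccessKeyId=AKIAEXAMPLE&Expires=1700000000&Signature=abc')

def key_id_from_presigned_url(url):
    qs = parse_qs(urlparse(url).query)
    if 'AWSAccessKeyId' in qs:                          # V2-style URL
        return qs['AWSAccessKeyId'][0]
    if 'X-Amz-Credential' in qs:                        # V4-style URL: key id is
        return qs['X-Amz-Credential'][0].split('/')[0]  # the first '/' field
    raise ValueError('not a presigned URL?')

key_id = key_id_from_presigned_url(presigned_url)
if key_id.startswith('AKIA'):
    print('long-lived IAM user key')
elif key_id.startswith('ASIA'):
    print('temporary (assumed-role) key')
```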
What's your real question?
Related
I have some functionality that uploads Documents to an S3 Bucket.
The key names are programmatically generated via some proprietary logic for the layout/naming convention needed.
The result of my S3 upload command is the actual URL itself, so it's in the format of
REGION/BUCKET/KEY
I was planning on storing that full url into my DB so that users can access their uploads.
Given that REGION and BUCKET probably wouldn't change, does it make sense to just store the KEY, and then dynamically generate the full URL when the client needs it?
Just want to know what the desired pattern here is and what others do. Thanks!
Storing the full URL is a bad idea. As you said in the question, the region and bucket are already known, so storing the full URL is a waste of disk space. Also, if in the future you want to migrate your assets to a different bucket, possibly in a different region, having full URLs stored in the DB just makes things harder.
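A minimal sketch of the "store only the key" pattern, assuming boto3; the bucket name, table layout, and db handle are placeholders:

```python
import boto3

s3 = boto3.client('s3')
DOCUMENTS_BUCKET = 'my-app-documents'   # config, not data

def store_upload(db, user_id, key):
    # persist only the object key; bucket and region live in app config
    db.execute('INSERT INTO uploads (user_id, s3_key) VALUES (?, ?)',
               (user_id, key))

def url_for(key, expires=3600):
    # build the URL on demand; presigning here also lets you change expiry
    # rules later without touching anything stored in the DB
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': DOCUMENTS_BUCKET, 'Key': key},
        ExpiresIn=expires,
    )
```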
We include employees' birthdays in our S3 object names. That was a mistake: we want to avoid putting sensitive data in object names. Is it safe to store the sensitive data in S3 user-defined metadata instead, or to add an S3 bucket policy that denies the action s3:GetObject? Which will work?
As you mentioned, it's not a good idea to put sensitive data in object names, but it's not too bad either. I'd suggest removing the list permission (s3:ListBucket) from the S3 policy. The policy should only allow s3:GetObject, which means a caller can get an object only when it already knows the object name, i.e., the calling API already knows the user's DOB.
With list permission, a caller can enumerate all the objects in the bucket and harvest users' DOBs.
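A sketch of a policy along those lines, applied with boto3; the bucket name and principal are placeholders for your own setup:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "GetByExactKeyOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
        "Action": "s3:GetObject",                 # fetch by exact key only
        "Resource": "arn:aws:s3:::my-bucket/*"
        # deliberately no s3:ListBucket on arn:aws:s3:::my-bucket, so the
        # keys (and any DOBs embedded in them) can't be enumerated
    }]
}

boto3.client('s3').put_bucket_policy(Bucket='my-bucket',
                                     Policy=json.dumps(policy))
```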
Object keys and user metadata should not be used for sensitive data. The reasoning for object keys is readily apparent, but metadata may be less obvious:
metadata is returned in the HTTP headers every time an object is fetched. This can't be disabled, but it can be worked around with CloudFront and Lambda@Edge response triggers, which can redact the metadata when the object is downloaded through CloudFront; and
metadata is not stored encrypted in S3, even if the object itself is encrypted.
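To see the first point concretely, here's a small boto3 sketch (bucket and key are placeholders) showing user-defined metadata coming straight back in the response headers of a plain HEAD request:

```python
import boto3

s3 = boto3.client('s3')
s3.put_object(Bucket='my-bucket', Key='example.txt', Body=b'hello',
              Metadata={'date-of-birth': '1990-01-01'})   # don't do this

resp = s3.head_object(Bucket='my-bucket', Key='example.txt')
print(resp['Metadata'])   # {'date-of-birth': '1990-01-01'}
# the same value travels as an x-amz-meta-* HTTP header to anyone
# who can fetch or HEAD the object:
print(resp['ResponseMetadata']['HTTPHeaders']['x-amz-meta-date-of-birth'])
```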
Object tags are also not appropriate for sensitive data, because they are also not stored encrypted. Object tags are useful for flagging objects that contain sensitive data, because tags can be used in policies to control access permissions on the object, but this is only relevant when the object itself contains the sensitive data.
In the case where "sensitive" means "proprietary" rather than "personal," tags can be an acceptable place for data... this might be data that is considered sensitive from a business perspective but that does not need to be stored encrypted, such as the identification of a specific software version that created the object. (I use this strategy so that if a version of code is determined later to have a bug, I can identify which objects might have been impacted because they were generated by that version). You might want to keep this information proprietary but it would not be "sensitive" in this context.
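A sketch of that tagging strategy, assuming boto3; the bucket, key, and code-version tag are illustrative:

```python
import boto3

s3 = boto3.client('s3')
s3.put_object(
    Bucket='my-bucket',
    Key='reports/2024/output.csv',
    Body=b'...',
    Tagging='code-version=1.4.2',   # URL-encoded key=value pairs
)

# later, when a version turns out to be buggy, identify suspect objects:
tags = s3.get_object_tagging(Bucket='my-bucket', Key='reports/2024/output.csv')
print(tags['TagSet'])   # [{'Key': 'code-version', 'Value': '1.4.2'}]
```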
If your S3 bucket is used to store private data and you're allowing public access to the bucket, that's always a bad idea; it's basically security by obscurity.
Instead of changing your existing S3 structure, you could lock down the bucket to just your app and then serve the data via CloudFront signed URLs.
Basically, in your code where you currently inject the S3 URL, you can instead call the AWS API to create a signed URL from the S3 URL and a policy, and send this new URL to the end user. This masks the S3 URL, and you can enforce other restrictions, such as how long the link is valid, requiring a specific header, or limiting access to a specific IP. You also get CDN edge caching and reduced costs as side benefits.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
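A sketch of generating such a URL with botocore's CloudFrontSigner, following the pattern in that doc; the key-pair ID, private key file, and distribution domain are placeholders for your own setup:

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # sign with the private key that matches your CloudFront key pair
    with open('cloudfront_private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner('KEYPAIRID1234', rsa_signer)

signed_url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/private/doc.pdf',
    date_less_than=datetime.utcnow() + timedelta(minutes=15),  # link lifetime
)
print(signed_url)
```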
In Drive.Files.List I can, using the 'q' parameter, get all files a user can read/write or own. I would like to be able to use a regular expression in the query value, for example setting q to "not '.+#my-org.com' in writers".
Is such a query already supported?
Do I have another way (other than invoking Drive.Permissions.List for each and every file in my Drive) to get this information?
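For what it's worth, the closest I can express with the documented query grammar is one exact address per term, no patterns, e.g. with google-api-python-client (creds being an already-obtained credential object):

```python
from googleapiclient.discovery import build

service = build('drive', 'v3', credentials=creds)
resp = service.files().list(
    q="not 'someone@my-org.com' in writers",   # exact address only, no regex
    fields='files(id, name)',
).execute()
print(resp.get('files', []))
```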
It seems the only account-level Drive API is part of the Reports API (the activities list). This API (and the admin console's audit > Drive section) is only supported on the unlimited license. I still haven't found a proper API to get the Drive state (list all files' metadata in the account, permissions, etc.); it seems the state can only be inferred by analyzing the relevant activity events, assuming the activity hasn't been evicted after some predefined period of time.
My conclusion, at the moment, is that there is no "root" directory at the account level. "root" is only with respect to the logged in user.
I would be more than happy to be proved wrong.
Uri
All the documentation about AWS keys seems to always tell you to have both the key ID and the secret key. Are there any practical uses for having only the key ID without the secret key? If not, why aren't the two combined into one ever-so-slightly more manageable single setting?
Seems to me that if you must ask the user to produce the secret, you might just as well ask for their key ID as well in the process.
More generally: https://en.wikipedia.org/wiki/Public-key_cryptography
All Amazon APIs work with the access key + signature only. The signature is how you prove you also have the secret key. The secret key never goes over the wire.
If you "combined" them into the same key, you would not know which account the request is for. You would also have to send the secret key over the wire, which, in general, is a very bad thing.
So basically the public (access) key serves as an account selector and the private (secret) key serves to prove you actually have access to the account.
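A toy sketch of the idea in Python (vastly simplified, not the real SigV4 algorithm): the request carries the key ID plus an HMAC digest computed with the secret, and the secret itself never leaves the client.

```python
import hashlib
import hmac

ACCESS_KEY_ID = 'AKIAEXAMPLE'       # public: selects the account
SECRET_KEY = b'the-secret-key'      # private: used locally, never sent

def sign_request(message: bytes) -> str:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

request = b'GET /my-bucket/my-key'
# what actually goes over the wire: key id, request, and signature
wire = (ACCESS_KEY_ID, request, sign_request(request))
print(wire)
```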
Is there any limit on the number of presigned URLs per object in AWS S3? Say I want to create 1000 presigned URLs per object in 2 minutes. Is that a valid scenario?
You can create as many signed URLs as you wish. Depending on your motivation and strategy, however, there is a practical limitation on the number of unique presigned URLs for the exact same object.
S3 (in S3 regions that were first deployed before 2014) supports two authentication algorithms, V2 and V4, and the signed URLs look very different, since the algorithms are very different.
In V2, the signed URL for a given expiration time will always look the same, if signed by the same AWS key.
If you sign the URL for an object, set to expire one minute in the future... and immediately repeat the process, the two signed URLs will be identical.
Next, exactly one second later, sign a URL for the same object to expire 59 seconds in the future, and that new signed URL will also be identical.
Why? Because in V2, the expiration time is an absolute wall clock time in UTC, and the particular time in history when you actually generated the signed URL doesn't change anything.
V4 is different. In the scenario above, the first two URLs would still be identical, but the third would not, because V4 auth includes the date and time when you created the signed URL (or when you say you did). The expiration time is relative to the signing time, instead of absolute.
Note that both forms of signed URL are tamper-resistant -- the expiration time is embedded in the URL, but attempting to tweak it after signing will invalidate the signature and make the URL useless.
If you need to generate a large number of signed URLs for the same object, you'll need to increment the expiration time for each individual signing attempt in order to get unique values. (Edit: or not, if you're feeling clever... see below.)
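You can observe the V4 behavior with a quick boto3 experiment (bucket and key are placeholders; recent boto3 signs S3 URLs with V4 by default):

```python
import time
import boto3

s3 = boto3.client('s3')
params = {'Bucket': 'my-bucket', 'Key': 'my-key'}

url_a = s3.generate_presigned_url('get_object', Params=params, ExpiresIn=60)
url_b = s3.generate_presigned_url('get_object', Params=params, ExpiresIn=60)
print(url_a == url_b)   # usually True: same signing second, same expiry

time.sleep(2)
url_c = s3.generate_presigned_url('get_object', Params=params, ExpiresIn=60)
print(url_a == url_c)   # False: X-Amz-Date, and thus the signature, changed
```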
It also occurs to me that you may be under the impression that S3 has an active role in the signing process, but it doesn't. That's all done in your local code.
S3 isn't aware, in any sense, of the signed urls you generate unless or until they are used. When a signed request arrives, S3 does exactly the same thing your code will do -- it canonicalizes certain attributes of the request, and generates a signature. Then it compares what it generated with what your code should have generated, given precisely the same parameters. If their generated signature matches your provided signature (and the key you used has permission to perform the requested action) then the request succeeds.
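A toy model of that comparison (vastly simplified; real V4 signing canonicalizes many request attributes first):

```python
import hashlib
import hmac

SECRETS = {'AKIAEXAMPLE': b'the-secret-key'}   # S3's copy of your secret

def s3_side_check(key_id, canonical_request, signature_from_url):
    expected = hmac.new(SECRETS[key_id], canonical_request,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_from_url)

# your code produced this pair locally, without any call to S3:
req = b'GET /my-bucket/my-key?Expires=1700000000'
sig = hmac.new(b'the-secret-key', req, hashlib.sha256).hexdigest()
print(s3_side_check('AKIAEXAMPLE', req, sig))   # True -> request succeeds
```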
Update: it turns out, there is an unofficial mechanism that allows you to embed additional "entropy" into the signing process, generating unique, per-user (for example) signed URLs for the same object and expiration time.
Under V2 authentication, which doesn't normally want you to include non-S3-specific parameters in your signing logic, it looks suspiciously like a bug as well as a feature... add &x-amz-meta-{anything-here}={unique-value-here} query string parameters to your URL. These are used as headers in a PUT request but are meaningless in a GET request; yet, if present, S3 still requires them to be included in the signature calculation, even though the parameter keys and values will ultimately be discarded by S3. The added values are tamper-resistant and can't be maliciously removed or altered without invalidating the signature.
The same mechanism works in V4, even though it's for a different reason.
Credit for this technique: http://www.bennadel.com/blog/2488-generating-pre-signed-query-string-authentication-amazon-s3-urls-with-user-specific-data.htm
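Here's a rough Python sketch of the V2 flavor of the trick; the keys and names are placeholders, the canonicalization is paraphrased from the recipe above, and the linked post has the authoritative details:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

ACCESS_KEY, SECRET_KEY = 'AKIAEXAMPLE', b'the-secret-key'
bucket, key = 'my-bucket', 'my-key'
expires = int(time.time()) + 60
user = 'user-42'                                   # the per-user "entropy"

string_to_sign = (f'GET\n\n\n{expires}\n'
                  f'x-amz-meta-user:{user}\n'      # folded into the signature
                  f'/{bucket}/{key}')
sig = base64.b64encode(
    hmac.new(SECRET_KEY, string_to_sign.encode(), hashlib.sha1).digest()
).decode()

url = (f'https://{bucket}.s3.amazonaws.com/{key}'
       f'?AWSAccessKeyId={ACCESS_KEY}&Expires={expires}'
       f'&x-amz-meta-user={quote_plus(user)}&Signature={quote_plus(sig)}')
print(url)   # two users get two distinct, tamper-resistant URLs
```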
The accepted answer is now outdated. For future viewers: there is no need to include anything as an extra header, as AWS now includes a Signature field in every signed URL, which is different every time you generate one.
Yes. In fact, I believe AWS can't even limit that, as there is no such API call to S3; URL signing is done purely by the SDK.
But whether creating so many URLs is a good idea is completely context-dependent...