I know there is likely to be documentation out there somewhere but I have been drowning in Google searches trying to get my head around this!
I am working on my first Symfony project and I have a requirement to store files on AWS S3. There are three categories of file I am storing:
Type 1 - This should be accessible to anyone (although only on a request from my website).
Type 2 - This should be accessible to certain users. The list of users will change from time to time (friends list).
Type 3 - This should be accessible to the creating user and at times other users when accessed from a specific page.
I use FOSUserBundle to handle user authentication in my project.
At this time I'm lost in a sea of "IAM" users and "ACL" policies, and I really don't know how to set something like this up - or if it's even possible. I also have the Gaufrette and liip/imagine-bundle bundles installed in Symfony (so I can add watermarks and resize images).
I would be grateful for any help or resources that point me in the right direction.
t2t
Edit (21st Feb 2017):
OK, so based on my further reading and the comment below I believe I can simplify what I need to do:
I want to have a bucket on AWS S3 which is restricted so that:
Files can only be read by a request from my domain that provides a security token of some sort.
That way, even if the HTTP referer is spoofed, a request for the S3 file will be declined because the token was not sent...
So, the question is - is this possible? If so, how should I proceed?
Thanks,
t2t
Please do not mix your project users with IAM users. Those are completely separate things. You need only one IAM user, which will upload the files of all the users of your PHP app. Any access logic should be written in Symfony.
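To make that concrete: a pre-signed URL is probably the "security token" you're describing. A minimal sketch in Python/boto3 for brevity (in Symfony you'd use the AWS SDK for PHP's equivalent, createPresignedRequest); the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

# The signed query string *is* the token: it can't be forged, and it stops
# working after expiry, so a spoofed Referer header gets an attacker nothing.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-uploads", "Key": "type1/some_image.jpg"},
    ExpiresIn=600,  # link dies after 10 minutes
)

Your Symfony controller would run whatever access check applies (Type 1/2/3, friends list, etc.) and only then hand the short-lived URL to the browser.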
Related
Ok, I hope I don't get beaten up too much here for this question, as it is kind of complex - at least in my view, with what I know so far. So, the details first:
I built a nice app with django that brings in event data for users and uses that data for many things (not relevant to this question), but one of those things is syncing these events to the user's Google calendar. I made the Google app within the developer console, and it uses the provided credentials.json file to let users authenticate the app, creating an individual per-user token.json file. I then have another script (not within django, just a custom Python file) that runs from a cron job to automatically sync/update the calendar info from the database to the Google calendars.
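For context, the token flow I'm describing is roughly the standard Google API client pattern - a sketch, where the file paths and scopes are assumptions:

import os
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar"]

def calendar_service(token_path="token.json"):
    creds = None
    if os.path.exists(token_path):
        creds = Credentials.from_authorized_user_file(token_path, SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())  # silent refresh, no human needed
        else:
            # This is the step that needs a human (and a browser) present.
            flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
            creds = flow.run_local_server(port=0)
        with open(token_path, "w") as f:
            f.write(creds.to_json())
    return build("calendar", "v3", credentials=creds)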
Now, the new problem is having this work without my help. I.e.: a new user logs in and creates a profile, and then, if they choose to sync to their Google calendar, I currently have to be there, running the authentication process from my personal server. I have since moved the whole app to a hosted platform and brought it up to speed in production mode.
Users can create a profile: django-allauth creates the initial user account, and they fill in the rest of the profile. It does populate the token string for their account, but here is where I'm stuck.
What process is there to make the token.json file, OR to use the existing token string (the one it saves now on the server version), to allow the system to sync the calendars? Once the token files are created, the rest of this works. I just can't find the right answer as to how django-allauth will handshake with Google and do this.
Thanks for any help!
Update: I ultimately wound up using a service account with the Google API, directing my users to add the service account email as a shared user on the specific calendar, and they copy/paste the shared calendar ID into their profile on my app. All the logic now just uses this share function to sync the calendars, and it works great.
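For anyone landing here, the shape of that service-account approach is roughly this (a sketch; the key file path and the calendar ID are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/calendar"],
)
service = build("calendar", "v3", credentials=creds)

# Works because the user shared their calendar with the service account's
# email address, so no per-user OAuth consent or token.json is needed.
events = service.events().list(
    calendarId="user-shared-calendar-id@group.calendar.google.com"
).execute()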
"The Facebook SDK obtains an OAuth token that Amazon Cognito uses to generate AWS credentials for your authenticated end user. Amazon Cognito also uses the token to check against your user database for the existence of a user matching this particular Facebook identity. If the user already exists, the API returns the existing identifier. Otherwise a new identifier is returned." - AWS Docs
Is Amazon Cognito only checking for the same Facebook user already in the database, or is it checking all users for matching fields, such as email? I need to allow a user to sign in with email, Facebook, or Google and get the same data regardless. Basically, I'm asking if Amazon Cognito links the users together automatically by email, or if this isn't the way to do it.
So, if I understand your question correctly, and in line with what I've said above - I can't currently see an update to the features that I'm talking about - I'm happy to be wrong, but....
Federated identities are very separate things if they're not part of the same session; for an identity to be linked, the user has to already be logged in with the old one and then log in with something else. In effect, although on paper Cognito looks awesome, this is why we hadn't used it. Our problem was this:
marc@marc.com signs into the user pool.
marc@marc.com signs in via Facebook.
marc@marc.com signs in via anything else.
You'll probably end up with 3 users if those are three separate sessions. Despite what AWS says - if you read here, it looks like you can link IDs - you can, but only if they logged in with the original ID first.
Yes, this is silly; yes, it really isn't fit for purpose. While at a major hotel chain I talked about this in AWS offices, and the upgrades were on the roadmap - I haven't revisited it since, so hopefully I'm wrong!
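For completeness, the manual linking I mean is the AdminLinkProviderForUser call - a sketch with boto3, where the pool ID, username, and Facebook user ID are all placeholders:

import boto3

cognito = boto3.client("cognito-idp")

# Link a Facebook login onto an existing user-pool user. This runs
# server-side with admin credentials, and should happen before the
# federated identity ever signs in on its own (or you get the 3-user mess).
cognito.admin_link_provider_for_user(
    UserPoolId="eu-west-1_XXXXXXXXX",
    DestinationUser={
        "ProviderName": "Cognito",         # the native user-pool account
        "ProviderAttributeValue": "marc",  # its username
    },
    SourceUser={
        "ProviderName": "Facebook",
        "ProviderAttributeName": "Cognito_Subject",
        "ProviderAttributeValue": "1234567890",  # the Facebook user id
    },
)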
NOTE
Please do not accept this answer for about 24 hours, as I'd like to give anyone else the opportunity to chip in. I think I'm 100% correct, but I haven't looked at Cognito for a while (and I'm a bit scared to, because the docs aren't great (sorry, AWS) and it nearly drove me off a cliff last time I tried to do exactly what you're trying to do).
I have been trying to find an answer to this question for a couple of hours now, but have not managed to come up with a conclusive answer. I am hoping someone here will be able to shed some light on my question. Consider the following Example AWS S3 URL:
https://some-bucket.s3-eu-west-2.amazonaws.com/uploads/images/some_image.jpg?X-Amz-Expires=600&X-Amz-Date=20170920T124015Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI6CJYFYSSWMXXXXX/20170920/eu-west-2/s3/aws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=0481296b70633de9efb2fce6e20751df2f55fd79b5ff9570c02ff8f587dce825
In my specific example, the above URL is a request to view an image on S3 which I am exposing directly in an HTML img tag, and the user in X-Amz-Credential has both read and write permissions. The URL is also set to expire in 10 minutes.
Is it safe to link to the image directly via this URL, or is there any possibility that, within these 10 minutes, the signature from this URL could be used in a maliciously crafted REST request to delete or modify the image instead of viewing it?
I do suspect a different action will have a different signature to make this impossible, but given my very limited understanding of AWS auth, I thought it better to ask just in case.
I know I could create a read-only user (extra complexity) or hide the S3 URL behind a controller action on my own web app (requires 2 total requests to load each image, making it inefficient), but I would rather learn whether my current approach is safe or not before resorting to either of these.
Thank you kindly for your time. :)
If your pre-signed URL had PUT or DELETE permission, someone could take the Signature + AccessKeyId and overwrite or delete your object.
Just make sure that you are signing the URL with read-only permission and I guess you're good.
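If you want to convince yourself: the HTTP method is part of what gets signed, so a URL signed for get_object can't be replayed as a PUT or DELETE. A quick sanity check (boto3 + requests; bucket and key are placeholders):

import boto3
import requests

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "some-bucket", "Key": "uploads/images/some_image.jpg"},
    ExpiresIn=600,
)

print(requests.get(url).status_code)                      # 200 - reading works
print(requests.put(url, data=b"overwrite!").status_code)  # 403 SignatureDoesNotMatch
print(requests.delete(url).status_code)                   # 403 - same reason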
I manage a domain of users and would like to be able to transfer all the documents of one user to another user. As far as I understand, the best way to achieve that is to find the fileIDs of all files belonging to one user and transfer them to another user. However, I am having problems constructing the query.
UPDATE:
So the correct query to retrieve the list of files would be:
response = drive_service.files().list(q="'user@company.com' in owners").execute()
However, it only works for me as an admin. If I try to retrieve the list of files for any other user in my domain it returns an empty list.
Files.list will retrieve all of the authenticated user's files - in this case, it will get all your own files. The only way that query would work is if that user is also the owner of one (or more) of your files.
Even as an admin, you cannot access users' files directly.
To access another user's files, as an admin you need to impersonate the user and then perform actions on their behalf.
This is achieved by using a service account with domain wide delegation of authority.
Here you can find more information on that, as well as a Python example.
Hope it helps.
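A sketch of what that impersonation looks like with the Python client (the key file path and the target user are assumptions):

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/drive"],
)
# Act as the user whose files you want to list (requires domain-wide
# delegation to be enabled for this service account in the Admin console).
delegated = creds.with_subject("user@company.com")
drive = build("drive", "v3", credentials=delegated)

# The same query now runs in *their* context, so their files come back.
response = drive.files().list(q="'user@company.com' in owners").execute()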
If you want to transfer all the files of one user into another user's Drive, the easiest way would be to use the Data Transfer API provided by Google. This way you don't have to list the files and transfer them one by one. You also only need the admin access token and wouldn't need domain-wide delegation either. You can find the official documentation here.
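Roughly, the Data Transfer API call looks like this (a sketch: admin_creds stands in for your admin credentials, and note that oldOwnerUserId/newOwnerUserId are numeric Directory user IDs, not email addresses):

from googleapiclient.discovery import build

datatransfer = build("admin", "datatransfer_v1", credentials=admin_creds)

# Look up the Drive "application" ID, then request the bulk transfer.
apps = datatransfer.applications().list(customerId="my_customer").execute()
drive_app = next(a for a in apps["applications"] if a["name"] == "Drive and Docs")

datatransfer.transfers().insert(body={
    "oldOwnerUserId": "1111111111111111111",
    "newOwnerUserId": "2222222222222222222",
    "applicationDataTransfers": [{"applicationId": drive_app["id"]}],
}).execute()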
Good Day Everybody,
I'm fairly new to AWS, and I have a problem right now. I'm not even sure if this is something that is possible with S3. I tried googling it, but couldn't find any proper response (probably because the keywords I searched for don't make much sense ;) ).
So my problem is this: I have a Node application which uploads user images to S3. I want to know how to properly access these images later in the front-end (some sort of direct link). But at the same time, I should be able to restrict which users can access the image. For example, if user xyz uploads an image, only that user should be able to see it. If another user, say abc, tries to open the direct link, it should say access restricted or something similar.
Or, if that is not possible, at least I should be able to put an encrypted timestamp on the GET URL, so that the image will be accessible through that particular URL for only a limited amount of time.
Thanks in advance.
This is the typical use case for S3 Pre-signed URLs.
In S3, you are able to specify some query strings on the URL of your object that include an Access Key, an expiration timestamp and a signature. S3 validates the signature and checks if the request has been made before the expiration timestamp. If that's the case, it will serve the object. Otherwise, it will return an error.
The AWS SDK for JavaScript (Node.js) includes an example on how to generate pre-signed URLs: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html#Amazon_S3__Getting_a_pre-signed_URL_for_a_getObject_operation__getSignedUrl_
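Since the question is also about restricting access per user, the usual pattern is to keep the bucket private and let your app decide who gets a link. A sketch in Python/boto3 (the linked page shows the same getSignedUrl idea in Node); the bucket, key scheme, and ownership check are placeholders:

import boto3

s3 = boto3.client("s3")

def image_url_for(requesting_user, image_owner, key):
    # App-side rule: only the uploader may view the image (the xyz/abc case).
    if requesting_user != image_owner:
        raise PermissionError("access restricted")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "user-uploads", "Key": key},
        ExpiresIn=900,  # the "encrypted timestamp" the question asks about
    )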