JupyterHub access to AWS S3

I want to give JupyterHub users access to data in AWS S3. I would appreciate it if anyone could explain how to set this up.
Also, I would prefer a way to avoid handing AWS credentials to the JupyterHub users directly; they should simply be allowed to access the data in AWS S3.
Thank you!

You would definitely need an IAM user configured with the right permissions to access the S3 bucket you need (or, if you need full rights on all S3 buckets, you could attach the AmazonS3FullAccess policy to your IAM user).
Then, on JupyterHub, you would need the AWS CLI installed so that you can run the aws configure command, which stores the credentials of this IAM user in the ~/.aws/credentials file on the JupyterHub host.
Once this is all done, you can use either the CLI or the boto3 library to interact with your S3 bucket from a JupyterHub notebook.
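For illustration, here is a minimal boto3 sketch of reading an object from a notebook once aws configure has been run; the bucket and key names are placeholders, not anything from your setup:

    import boto3

    # Assumes the IAM user's keys were stored via `aws configure`
    # (i.e. they live in ~/.aws/credentials on the JupyterHub host).
    s3 = boto3.client("s3")

    # "my-bucket" and "data/train.csv" are hypothetical names.
    obj = s3.get_object(Bucket="my-bucket", Key="data/train.csv")
    data = obj["Body"].read()
    print(len(data), "bytes downloaded")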

You can use S3 as a file system in Jupyter notebooks with this extension:
https://github.com/danielfrg/s3contents
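As a rough sketch based on that project's README, the extension is configured in jupyter_notebook_config.py; exact option names may differ between versions of s3contents and Jupyter (newer versions configure ServerApp instead of NotebookApp), so treat this as an assumption to verify against the repository:

    from s3contents import S3ContentsManager

    c = get_config()

    # Use S3 as the backing store for notebooks instead of the local disk.
    c.NotebookApp.contents_manager_class = S3ContentsManager
    c.S3ContentsManager.bucket = "my-bucket"  # placeholder bucket name
    # If the host has an instance profile or ~/.aws/credentials configured,
    # explicit keys can typically be omitted here.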

Related

AWS Boto3/Botocore assume IAM role in ECS task

I'm trying to create a botocore session (one that does not use my local AWS credentials in ~/.aws/credentials). In other words, I want to create a "burner AWS account". With those burner credentials/session, I want to set up an STS client and, with that client, assume a role in order to access a DynamoDB database. Can someone provide example code that accomplishes exactly this?
If I want my system to go into a production environment, I CANNOT store the AWS credentials on GitHub because AWS will scan for them. I'm trying to implement a workaround so that we don't have to store the ~/.aws/credentials file on GitHub.
When running a task in Amazon ECS, simply assign an IAM role to the task.
Amazon ECS will then generate temporary credentials for that IAM Role. Any code that uses an AWS SDK (such as boto3 for Python) knows how to access those credentials via the metadata service.
The result is that your code using boto3 will automatically receive credentials that have the permissions associated with the IAM Role assigned to the task.
See: IAM roles for tasks - Amazon Elastic Container Service
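As a minimal sketch of what that looks like in practice (table name and region are assumptions, not from the question): no keys are passed to boto3 at all, and inside an ECS task with a task role attached the SDK fetches temporary credentials from the container metadata endpoint on its own.

    import boto3

    # No access key or secret is supplied; in ECS, boto3 resolves temporary
    # credentials for the task's IAM role via the container metadata service.
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")  # region is an assumption
    table = dynamodb.Table("my-table")  # hypothetical table name

    response = table.get_item(Key={"id": "123"})
    print(response.get("Item"))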

Accessing S3 bucket data from EC2 instance through IAM

So I have created an IAM user and added a permission to access S3, then I created an EC2 instance and SSH'ed into it.
After running the "aws s3 ls" command, the reply was:
"Unable to locate credentials. You can configure credentials by running "aws configure"."
So what's the difference between supplying IAM credentials (key and key ID) via "aws configure" and editing the bucket policy to allow S3 access from my instance's public IP?
Even after editing the bucket policy (JSON) to allow S3 access from my instance's public IP, why am I not able to access the S3 bucket unless I use "aws configure" (key and key ID)?
Please help! Thanks.
Since you are using EC2, you should really use EC2 instance profiles instead of running aws configure and hard-coding credentials in the file system.
As for your question of S3 bucket policies versus IAM roles, here is the official documentation on that. They are two separate tools you would use in securing your AWS account.
As for your specific command that failed, note that the AWS CLI tool will always try to look for credentials by default. If you want it to skip looking for credentials you can pass the --no-sign-request argument.
However, if you were just running aws s3 ls then that was trying to list all the buckets in your account, which you would have to have IAM credentials for. Individual bucket policies would not be taken into account in that scenario.
If you were running aws s3 ls s3://bucketname then that may have worked as aws s3 ls s3://bucketname --no-sign-request.
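For reference, the boto3 equivalent of --no-sign-request is an unsigned client, sketched below with a placeholder bucket name; anonymous requests like this only succeed against buckets whose policy allows public access.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Equivalent of `aws s3 ls s3://bucketname --no-sign-request`:
    # the client sends anonymous (unsigned) requests.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    for obj in s3.list_objects_v2(Bucket="bucketname").get("Contents", []):
        print(obj["Key"])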
When you create an IAM user, there are two parts:
1. Policies
2. Roles
Policies are attached to the user and define which services the user can or can't access.
Roles are attached to an application and define what access that application can have.
So you have to permit EC2 to access S3. There are two ways to do that:
1. aws configure
2. Attach a role to the EC2 instance
While 1 is tricky and lengthy, 2 is easy:
Go to EC2 instance -> Actions -> Security -> Modify IAM role -> then select the role (the EC2 + S3 access role).
That's it; you can simply run aws s3 ls from the EC2 instance.
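The same applies to code running on the instance. A minimal boto3 sketch, assuming the role has been attached as described above: no keys appear anywhere, and the SDK picks up temporary credentials from the instance metadata service.

    import boto3

    # With the role attached to the instance, no credentials are configured here;
    # boto3 obtains temporary credentials from the instance metadata service.
    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])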

Accessing S3 bucket from a script using IAM Role

We are trying to upload a file to, and display a file from, an S3 bucket through our .NET script.
We are currently using the user's access key and secret key in our code, which is a bad practice.
Could anyone let me know if there is a way we can use roles in place of these keys directly? If so, how?
As you're going to run this on EC2, the answer is yes: you can attach an IAM role to an EC2 host.
This is indeed the best practice for running your scripts on your EC2 host. Once the role is attached to the EC2 instance, your script will have access to all the permissions that the instance has, as long as you do not provide an IAM key/secret in the SDK credentials or have any of the relevant environment variables set, as these will override the IAM role.
More information is available in the IAM roles for Amazon EC2 documentation.
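The pattern is the same across SDKs: construct the client without any keys and let the credential chain fall through to the instance role. A sketch in Python with boto3 for illustration (the question uses the AWS SDK for .NET, which behaves the same way; bucket, file, and key names below are placeholders):

    import boto3

    # No access key or secret is passed, so the SDK falls back to the
    # credential chain, which includes the EC2 instance role.
    s3 = boto3.client("s3")

    # Upload a file, then generate a temporary URL for displaying it.
    s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "reports/report.pdf"},
        ExpiresIn=3600,  # link valid for one hour
    )
    print(url)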
If you run your application on EC2, attach the role to the EC2 instance directly.
If you run it on your local server, save your credentials on that server by using the aws configure command.

How to collect logs from EC2 instance and store it in S3 bucket?

Step 1: Created an Amazon S3 Bucket
Step 2: Created an IAM User with Full Access to Amazon S3 and CloudWatch Logs
Step 3: Granted Permissions on an Amazon S3 Bucket
What should I do next?
A few things.
You're probably better off using an IAM instance profile. That way, your credentials are not static IAM user credentials.
If you only want to copy the logs to S3, I'd suggest setting up a scheduled job that uses the AWS CLI to copy the directory containing your logs to S3.
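A minimal sketch of such a scheduled copy job, assuming an instance profile is attached and using placeholder paths and bucket names (it could equally be a one-line aws s3 sync in cron):

    import boto3
    from pathlib import Path

    # Upload every .log file under /var/log/myapp to S3.
    # Credentials come from the instance profile; names below are hypothetical.
    s3 = boto3.client("s3")
    log_dir = Path("/var/log/myapp")

    for path in log_dir.rglob("*.log"):
        key = f"logs/{path.relative_to(log_dir)}"
        s3.upload_file(str(path), "my-log-bucket", key)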
Alternatively, I'd suggest you install and configure the CloudWatch agent on your instance. From there, you can copy logs to S3 using the methodology outlined here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasks.html

Mounting AWS S3 bucket using AWS IAM roles instead of using a passwd file

I am mounting an AWS S3 bucket as a filesystem using s3fs-fuse. It requires a file which contains the AWS Access Key ID and AWS Secret Access Key.
How do I avoid using this file and instead use AWS IAM roles?
As per the Fuse Over Amazon documentation, you can specify the credentials using 4 methods. If you don't want to use a file, you can set the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
Also, if your goal is to use an AWS IAM instance profile, then you need to run s3fs-fuse from an EC2 instance. In that case, you don't have to set these credential files/environment variables, because if you attach the instance role and policy when creating the instance, the EC2 instance will get the credentials at boot time. Please see the section 'Using Instance Profiles' on page 190 of the AWS IAM User Guide.
There is an argument, -o iam_role=---, which helps you avoid the AccessKey and SecretAccessKey.
The full steps to configure this are given here:
https://www.nxtcloud.io/mount-s3-bucket-on-ec2-using-s3fs-and-iam-role/