Hi Cloud Computing Geeks,
Is there any way of mounting/connecting an S3 bucket to EC2 instances using the Java AWS SDK (not CLI commands or ec2-api-tools)? I have all the required Java SDKs. I successfully created a bucket with the Java AWS SDK, and now I want to connect it to my EC2 instances in the North Virginia region. I haven't found a way to do it, but I hope there is one.
Cheers,
Hammy
You don't "mount S3 buckets", they don't work that way for they are not filesystems. Amazon S3 is a service for key-based storage. You put objects (files) with a key and retrieve them back with the same key. The key is merely a name that you assign arbitrarily and you can include "/" markers in the names to simulate a hierarchy of files. For example, a key could be folder1/subfolder/file1.txt. For illustration I show the basic operations with the java sdk.
First of all you must set up your amazon credentials and get an S3 client:
AWSCredentials credentials = new BasicAWSCredentials("your_accessKey", "your_secretKey");
AmazonS3Client s3client = new AmazonS3Client(credentials);
Store a file:
File file = new File("some_local_path/file1.txt");
String fileKey = "folder1/subfolder/file1.txt";
s3client.putObject("bucket_name", fileKey, file);
Retrieve the file:
S3ObjectInputStream objectInputStream = s3client.getObject("bucket_name", fileKey).getObjectContent();
You can read the InputStream directly or save it to a file.
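A minimal sketch of saving that stream to a local file (the target path is just a placeholder):
// Copy the object's content to a local file (java.nio.file); the path is a placeholder.
java.nio.file.Files.copy(objectInputStream,
        java.nio.file.Paths.get("some_local_path/downloaded.txt"),
        java.nio.file.StandardCopyOption.REPLACE_EXISTING);
objectInputStream.close(); // release the underlying HTTP connection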
List the objects of a (simulated) folder: See my answer in another question.
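That answer isn't reproduced here, but a minimal sketch with the v1 listObjects API looks like this (the prefix value is an assumption):
// List keys under a simulated folder by passing it as a prefix.
ObjectListing listing = s3client.listObjects("bucket_name", "folder1/subfolder/");
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println(summary.getKey());
}
// Results are paginated at 1000 keys; call s3client.listNextBatchOfObjects(listing) for the rest.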
Related
I am on a federated account that only allows 60-minute access tokens. This makes using AWS difficult, since I have to constantly log back in with MFA, even for the AWS CLI on my machine. I'm fairly certain that any programmatic secret access key and token I generate would be useless after an hour.
I am writing a .NET program (.NET Framework 4.8) that will run on an EC2 instance to read and write from an S3 bucket. The documentation gives this example to initialize the AmazonS3Client:
// Before running this app:
// - Credentials must be specified in an AWS profile. If you use a profile other than
// the [default] profile, also set the AWS_PROFILE environment variable.
// - An AWS Region must be specified either in the [default] profile
// or by setting the AWS_REGION environment variable.
var s3client = new AmazonS3Client();
I've looked into Secrets Manager and Parameter Store, but that wouldn't matter if the programmatic access keys go inactive after an hour. Perhaps there is another way to give the program access to S3 and the SDK...
If I cannot use access keys and tokens stored in a file, could I use the IAM access that the AWS CLI uses? For example, I can type aws s3 ls s3://mybucket into PowerShell to list and read files from S3 on the EC2 instance. Could the .NET SDK use the same credentials to access the S3 bucket?
Cyberduck version: 7.9.2
Cyberduck is designed to access non-public AWS buckets. It asks for:
Server
Port
Access Key ID
Secret Access Key
The Registry of Open Data on AWS provides this information for an open dataset (using the example at https://registry.opendata.aws/target/):
Resource type: S3 Bucket
Amazon Resource Name (ARN): arn:aws:s3:::gdc-target-phs000218-2-open
AWS Region: us-east-1
AWS CLI Access (No AWS account required): aws s3 ls s3://gdc-target-phs000218-2-open/ --no-sign-request
Is there a version of s3://gdc-target-phs000218-2-open that can be used in Cyberduck to connect to the data?
If the bucket is public, any AWS credentials will suffice. So as long as you can create an AWS account, you only need to create an IAM user for yourself with programmatic access, and you are all set.
No doubt, it's a pain because creating an AWS account needs your credit (or debit) card! But see https://stackoverflow.com/a/44825406/1094109
I tried this with s3://gdc-target-phs000218-2-open and it worked.
For RODA buckets that provide public access to specific prefixes, you'd need to edit the path to suit, e.g. s3://cellpainting-gallery/cpg0000-jump-pilot/source_4/ (this is a RODA bucket maintained by us, yet to be released fully).
No, it's explicitly stated in the documentation that
You must obtain the login credentials [in order to connect to Amazon S3 in Cyberduck]
I have moved my AWS credentials from ~/.aws/credentials to the resources folder of a Maven project. The folder structure looks like this:
resources/aws/
->config
->credentials
I am using AWS Java SDK version 2+. How can I read the values from the resources folder to get the region and access keys, create a bucket, and perform operations?
You should not place credentials files in the resources directory. The AWS Java SDK supports credential files in ~/.aws out of the box:
The following list shows the supported credential retrieval techniques:
Java system properties–aws.accessKeyId and aws.secretAccessKey. The AWS SDK for Java uses the SystemPropertyCredentialsProvider to load these credentials.
Environment variables–AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The AWS SDK for Java uses the EnvironmentVariableCredentialsProvider class to load these credentials.
The default credential profiles file– The specific location of this file can vary per platform, but is typically located at ~/.aws/credentials. This file is shared by many of the AWS SDKs and by the AWS CLI. The AWS SDK for Java uses the ProfileCredentialsProvider to load these credentials.
You can create a credentials file by using the aws configure command provided by the AWS CLI. You can also create it by editing the file with a text editor. For information about the credentials file format, see AWS Credentials File Format.
Amazon ECS container credentials– This is loaded from Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. The AWS SDK for Java uses the ContainerCredentialsProvider to load these credentials.
Instance profile credentials– This is used on Amazon EC2 instances, and delivered through the Amazon EC2 metadata service. The AWS SDK for Java uses the InstanceProfileCredentialsProvider to load these credentials.
So, either use ProfileCredentialsProvider or pass the credentials via system properties or environment variables and use SystemPropertyCredentialsProvider / EnvironmentVariableCredentialsProvider.
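For example, a minimal sketch with SDK v2 that picks up the [default] profile from ~/.aws/credentials (the region and bucket name are placeholders):
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Build an S3 client that reads credentials from the default profile file.
S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_1) // placeholder region
        .credentialsProvider(ProfileCredentialsProvider.create())
        .build();

// Create a bucket; "my-example-bucket" is a placeholder name.
s3.createBucket(b -> b.bucket("my-example-bucket"));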
AWS Java SDK v2 does not support getting credentials from the resource folder (classpath) directly.
As an alternative, you can put AWS credentials in a properties file in the resource folder:
[project]/src/test/resources/aws-credentials.properties:
aws_access_key_id = xxx
aws_secret_access_key = xxx
Spring config:
<util:properties id="awsCredentialFile"
                 location="classpath:aws-credentials.properties"/>
and your code:
private String accessKey;
private String secretKey;

@Resource(name = "awsCredentialFile")
public void setProperties(Properties properties) {
    this.accessKey = properties.getProperty("aws_access_key_id");
    this.secretKey = properties.getProperty("aws_secret_access_key");
}
StaticCredentialsProvider credentialsProvider =
        StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKey, secretKey));
S3Client s3 = S3Client.builder()
        .credentialsProvider(credentialsProvider)
        .build();
I have mounted my S3 bucket on my original instance and I am able to access the files. Then I made a copy of my original instance using "Launch More Like This" from the AWS Console UI. I mounted the same S3 bucket on the new instance, but I am unable to access the files on the new instance.
PS: AWS Beginner
I created 3 different buckets: 1 using the AWS Management Console and 2 using the boto API.
The bucket created using the AWS Management Console was created in the Tokyo region, whereas the ones created using boto were created in the us-east-1 region.
When I access my bucket using boto, how does it find out the correct region in which the bucket was created? Also, how does it choose which region to create a bucket in?
I have gone through the connection.py file in the boto source code, but I am not able to make any sense of the code.
Any help is greatly appreciated!
You can control the location of a new bucket by specifying a value for the location parameter in the create_bucket method. For example, to create a bucket in the ap-northeast-1 region you would do this:
import boto.s3
from boto.s3.connection import Location
c = boto.s3.connect_to_region('ap-northeast-1')
bucket = c.create_bucket('mynewbucket', location=Location.APNortheast)
In this example, I am connecting to the S3 endpoint in the ap-northeast-1 region, but that is not required. Even if you are connected to the universal S3 endpoint, you can still create a bucket in another location using this technique.
To access the bucket after it has been created, you have a couple of options:
You could connect to the S3 endpoint in the region where you created the bucket and then use the get_bucket method to look up your bucket and get a Bucket object for it.
You could connect to the universal S3 endpoint and use the get_bucket method to look up your bucket. For this to work, you need to follow the more restricted bucket naming conventions described here. This allows your bucket to be accessed via virtual-hosted-style addressing, e.g. https://mybucket.s3.amazonaws.com/. This, in turn, allows DNS to resolve your request to the correct S3 endpoint. Note that DNS records take time to propagate, so if you try to address your bucket in this manner immediately after it has been created, it might not work. Try again in a few minutes.