AWS SDK for .NET custom region endpoint configuration

I am trying to configure the AWS SDK for .NET to use a cloud service provider that offers an AWS-compatible API. The code below, which uploads using the AWS SDK for PHP, works, but how do I configure the same thing properly for the AWS SDK for .NET, especially the region part?
This code works in PHP:
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucketName = 'big_bucket';
$filePath = './img33.png';
$keyName = basename($filePath);
$IAM_KEY = 'top_secret_key';
$IAM_SECRET = 'top_secret_secret';

// Set Amazon S3 credentials
$s3 = S3Client::factory(
    array(
        'endpoint' => 'https://s3-kna1.citycloud.com:8080',
        'credentials' => array(
            'key' => $IAM_KEY,
            'secret' => $IAM_SECRET
        ),
        'version' => 'latest',
        'region' => 's3-kna1',
        'use_path_style_endpoint' => true
    )
);

$s3->putObject(
    array(
        'Bucket' => $bucketName,
        'Key' => $keyName,
        'SourceFile' => $keyName,
        'StorageClass' => 'REDUCED_REDUNDANCY',
        'ACL' => 'public-read'
    )
);
The code below for .NET does not work yet:
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Transfer;

public class CityCloudFileHandler
{
    private string _accessKey;
    private string _secretKey;
    private string _mediaBucket;
    private string _serviceUrl;
    private S3CannedACL _s3CannedAcl;
    private AmazonS3Config _s3Config;

    public CityCloudFileHandler(string accessKey, string secretKey, string mediaBucket, string serviceUrl,
        S3CannedACL s3CannedAcl = null)
    {
        _accessKey = accessKey;
        _secretKey = secretKey;
        _mediaBucket = mediaBucket;
        _serviceUrl = serviceUrl;
        _s3CannedAcl = s3CannedAcl;
        _s3Config = new AmazonS3Config
        {
            ServiceURL = "https://s3-kna1.citycloud.com:8080",
            ForcePathStyle = true
        };
    }

    private IAmazonS3 MediaS3()
    {
        return new AmazonS3Client(_accessKey, _secretKey, _s3Config);
    }
}

A lot of the constructors for the AmazonS3Client accept a RegionEndpoint argument. The value looks like this:
RegionEndpoint.USWest2 or RegionEndpoint.USEast1
Here is a link to the AmazonS3Client API documentation: AmazonS3Client
In addition, you can set up a default profile using the AWS CLI by creating two files (stored in C:\Users\USER_NAME\.aws\ on Windows).
The credentials file should contain the following information:
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
and a second file named config will contain (at least) the following lines:
[default]
region = us-east-2
Once that profile has been set up, you can call the client constructor without parameters, as long as the bucket is in the same region as the default profile.
There is an example for uploading objects to an S3 bucket here: UploadObjectExample

Related

S3Exception: The bucket you are attempting to access must be addressed using the specified endpoint

I know that there are many similar questions, and this one is no exception.
But unfortunately I can't work out the region for my case. How do I determine the right region?
For example, when I make the request in Postman, I encounter a similar error.
In my console I'm using EU (Frankfurt) eu-central-1, and in the terminal I run something like this:
heroku config:set region="eu-central-1"
And as I understand it, my region setting does not match.
Also here is my AWS class:
class AmazonFileStorage : FileStorage {

    private val client: S3Client
    private val bucketName: String = System.getenv("bucketName")

    init {
        val region = System.getenv("region")
        val accessKey = System.getenv("accessKey")
        val secretKey = System.getenv("secretKey")

        val credentials = AwsBasicCredentials.create(accessKey, secretKey)
        val awsRegion = Region.of(region)
        client = S3Client.builder()
            .credentialsProvider(StaticCredentialsProvider.create(credentials))
            .region(awsRegion)
            .build() as S3Client
    }

    override suspend fun save(file: File): String =
        withContext(Dispatchers.IO) {
            client.putObject(
                PutObjectRequest.builder().bucket(bucketName).key(file.name).acl(ObjectCannedACL.PUBLIC_READ).build(),
                RequestBody.fromFile(file)
            )
            val request = GetUrlRequest.builder().bucket(bucketName).key(file.name).build()
            client.utilities().getUrl(request).toExternalForm()
        }
}
I think you may have the wrong region code; you do know that a Bucket is available in one and only one Region?
In your logging settings, set this scope to debug:
logging:
  level:
    org.apache.http.wire: debug
Then you should see something like this:
http-outgoing-0 >> "HEAD /somefile HTTP/1.1[\r][\n]"
http-outgoing-0 >> "Host: YOURBUCKETNAME.s3.eu-west-2.amazonaws.com[\r][\n]"
That log is from a bucket in the London region eu-west-2
To use Kotlin to interact with an Amazon S3 bucket (or other AWS services), consider using the AWS SDK for Kotlin. This SDK is meant for Kotlin developers; you are currently using the AWS SDK for Java.
To put an object into an Amazon S3 bucket with the AWS SDK for Kotlin, use the following code. Notice that the region you want to use is specified in the block where you construct the aws.sdk.kotlin.services.s3.S3Client.
import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.PutObjectRequest
import aws.smithy.kotlin.runtime.content.asByteStream
import java.io.File
import kotlin.system.exitProcess

/**
 Before running this Kotlin code example, set up your development environment,
 including your credentials.

 For more information, see the following documentation topic:
 https://docs.aws.amazon.com/sdk-for-kotlin/latest/developer-guide/setup.html
*/

suspend fun main(args: Array<String>) {

    val usage = """
    Usage:
        <bucketName> <objectKey> <objectPath>

    Where:
        bucketName - The Amazon S3 bucket to upload an object into.
        objectKey - The object to upload (for example, book.pdf).
        objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).
    """

    if (args.size != 3) {
        println(usage)
        exitProcess(0)
    }

    val bucketName = args[0]
    val objectKey = args[1]
    val objectPath = args[2]
    putS3Object(bucketName, objectKey, objectPath)
}

suspend fun putS3Object(bucketName: String, objectKey: String, objectPath: String) {

    val metadataVal = mutableMapOf<String, String>()
    metadataVal["myVal"] = "test"

    val request = PutObjectRequest {
        bucket = bucketName
        key = objectKey
        metadata = metadataVal
        body = File(objectPath).asByteStream()
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        val response = s3.putObject(request)
        println("Tag information is ${response.eTag}")
    }
}
You can find this Kotlin example and many more in the AWS Code Library here:
Amazon S3 examples using SDK for Kotlin
Also, you can read the Kotlin developer guide; the link is at the start of the code example.

How to use DefaultAWSCredentialsProviderChain() in Java code to fetch credentials from an instance profile and allow access to an S3 bucket

I am working on a requirement where I want to connect to an S3 bucket from a Spring Boot application.
When I am connecting from my local environment, I call loadCredentials(true), which uses Amazon STS to fetch temporary credentials through a role and allows access to the S3 bucket.
When I deploy to the qa/prd environment, I call loadCredentials(false), which uses the DefaultAWSCredentialsProviderChain() class to fetch the credentials from the AWS instance profile (the role is assigned to the EC2 instance) and allows access to the S3 bucket. My code is:
@Configuration
public class AmazonS3Config
{
    static String clientRegion = "ap-south-1";
    static String roleARN = "arn:aws:iam::*************:role/awss3acess";
    static String roleSessionName = "bucket_storage_audit";
    String bucketName = "testbucket";

    // Depending on the environment, set to true (local environment) or false (qa and prd environments)
    private static AWSCredentialsProvider loadCredentials(boolean isLocal) {
        final AWSCredentialsProvider credentialsProvider;
        if (isLocal) {
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceAsyncClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();
            AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest().withDurationSeconds(3600)
                    .withRoleArn(roleARN)
                    .withRoleSessionName(roleSessionName);
            AssumeRoleResult assumeRoleResult = stsClient.assumeRole(assumeRoleRequest);
            Credentials creds = assumeRoleResult.getCredentials();
            credentialsProvider = new AWSStaticCredentialsProvider(
                    new BasicSessionCredentials(creds.getAccessKeyId(),
                            creds.getSecretAccessKey(),
                            creds.getSessionToken())
            );
        } else {
            System.out.println("inside default");
            credentialsProvider = new DefaultAWSCredentialsProviderChain();
        }
        return credentialsProvider;
    }

    // Amazon S3 client bean: returns an instance of the S3 client
    @Bean
    public AmazonS3 s3client() {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.fromName(clientRegion))
                .withCredentials(loadCredentials(false))
                .build();
        return s3Client;
    }
}
My question: since the instance profile credentials rotate every 12 hours, will my application fail after 12 hours?
What can I do in my code to avoid this?
You can use ProfileCredentialsProvider directly instead of DefaultAWSCredentialsProviderChain, as there is no need in your case to chain the credential providers.
As for your question: AWSCredentialsProvider has a refresh() method that re-reads the configuration. When an authentication exception occurs while using the S3 client, you can call refresh() first and then retry.

How to get the AWS S3 bucket location via a PHP API call?

I have been searching the internet for how to get the AWS S3 bucket region with an API call, or directly in PHP using their library, but have had no luck finding the info.
I have the following info available:
Account credentials, bucket name, and access key + secret. This is for multiple buckets that I have access to, and I need to get the region programmatically, so logging in to the AWS console and checking there is not an option.
Assuming you have an instance of the AWS PHP Client in $client, you should be able to find the location with $client->getBucketLocation().
Here is some example code:
<?php
$result = $client->getBucketLocation([
    'Bucket' => 'yourBucket',
]);
The result will look like this:
[
    'LocationConstraint' => 'the-region-of-your-bucket',
]
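One caveat worth adding (this is standard GetBucketLocation behaviour, not something shown in the example above): buckets created in us-east-1 come back with an empty LocationConstraint, so it helps to normalize the value, roughly like this:
// Buckets in us-east-1 return an empty LocationConstraint, so fall back to it explicitly
$region = $result['LocationConstraint'] ?: 'us-east-1';
echo $region;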
When you create an S3 client, you can use any of the available AWS regions, even if it is not the one your bucket is in.
$s3Client = new Aws\S3\S3MultiRegionClient([
    'version' => 'latest',
    'region' => 'us-east-1',
    'credentials' => [
        'key' => $accessKey,
        'secret' => $secretKey,
    ],
]);

$region = $s3Client->determineBucketRegion($bucketname);

How to get the file from Amazon AWS S3 bucket from URL?

I've created an Amazon S3 bucket and uploaded files/images from a mobile phone app. I have to show posts with a lot of images, and the images are bound automatically to image URLs. But I don't know how to get the URLs, because the images should not be public and cannot be shown directly. How can I show them in my app?
$cmd = $client->getCommand('GetObject', [
    'Bucket' => 'myinstaclassbucket',
    'Key' => 'e12e682c-936d-4a97-a049-6f104dd7c904.jpg',
]);

// $timetoexpire is the expiration, e.g. '+10 minutes'
$request = $client->createPresignedRequest($cmd, $timetoexpire);

$presignedurl = (string) $request->getUri();
echo $presignedurl;
First of all you need to use the AWS PHP SDK. Also make sure you have a valid access key and secret key.
Then everything is straightforward.
use Aws\S3\S3Client;

$bucket = 'some-bucket';
$key = 'mainFolder/subFolder/file.xx';

// Init client (AWS SDK for PHP v2 style)
$client = S3Client::factory([
    'key' => '*YOUR ACCESS KEY*',
    'secret' => '*YOUR SECRET KEY*',
]);

if ($client->doesObjectExist($bucket, $key)) {
    // Passing an expiration time returns a signed URL
    $url = $client->getObjectUrl($bucket, $key, time() + (60 * 60 * 2));
} else {
    $url = null;
}

Location to put credentials file for AWS PHP SDK

I created an EC2 Ubuntu instance.
The following is working using the AWS 2.6 SDK for PHP:
$client = DynamoDbClient::factory(array(
    'key' => 'xxx',
    'secret' => 'xxx',
    'region' => 'eu-west-1'
));
I created a credentials file in ~/.aws/credentials.
I put this in /home/ubuntu/.aws/credentials
[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx
Trying the following does not work and gives an InstanceProfileCredentialsException:
$client = DynamoDbClient::factory(array(
    'profile' => 'default',
    'region' => 'eu-west-1'
));
There is a user www-data and a user ubuntu.
In what folder should I put the credentials file?
One solution to set the credentials is:
sudo nano /etc/apache2/envvars
add environment variables:
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"
sudo service apache2 restart
After that the following works:
$client = DynamoDbClient::factory(array(
    'region' => 'eu-west-1'
));
If you are calling the API from an EC2 instance, you should use IAM roles.
Using IAM roles is the preferred technique for providing credentials
to applications running on Amazon EC2. IAM roles remove the need to
worry about credential management from your application. They allow an
instance to "assume" a role by retrieving temporary credentials from
the EC2 instance's metadata server. These temporary credentials, often
referred to as instance profile credentials, allow access to the
actions and resources that the role's policy allows.
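As a minimal sketch of what that looks like in PHP (the explicit provider below is my illustration, not part of the quoted documentation; in practice you can also just omit the credentials entirely and the default chain will discover the instance profile):
use Aws\Credentials\CredentialProvider;
use Aws\S3\S3Client;

// Explicitly use the EC2 instance profile (role) credentials and memoize them
// so the temporary credentials are cached between calls (SDK v3 style)
$provider = CredentialProvider::memoize(CredentialProvider::instanceProfile());

$client = new S3Client([
    'version' => 'latest',
    'region' => 'eu-west-1',
    'credentials' => $provider
]);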
This is way too late, but the solution I found for shared servers where you can't actually use environment vars is to define a custom ini file location, like this:
require(__DIR__ . '/AWSSDK/aws-autoloader.php');

use Aws\Credentials\CredentialProvider;
use Aws\S3\S3Client;

$profile = 'default';
$path = '/path/to/credentials';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = new \Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-west-2',
    'credentials' => $provider
]);
Note that you could even define different profiles with this method.
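For example (the profile name here is a placeholder), selecting a non-default profile is just a matter of changing the first argument:
$provider = CredentialProvider::ini('project1', '/path/to/credentials');
$provider = CredentialProvider::memoize($provider);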
Documentation HERE
I have a non-EC2 server that accesses SQS and needs credentials. I can't use envvars because various people with differing rights run on this server and envvars are global. For the same reason I don't think I can use an AWS credentials file stored under a user's home (although I also couldn't figure out how to make that work for the www-data user).
What I have done is set up a small file AWS_Creds.php
<?php
define ("AWS_KEY", "MY KEY HERE");
define ("AWS_SECRET", "MY SECRET");
?>
The file is stored outside of the webroot and included with include('ABSOLUTEPATH/AWS_Creds.php'), and I pass the hard-wired constants to the client factory.
Elegant? No. Done? Yes.
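For illustration (the answer implies this wiring but does not show it; the SQS client and region below are placeholders), the included constants are then handed straight to the client factory:
include('ABSOLUTEPATH/AWS_Creds.php');

// Pass the hard-wired constants to the client factory (SDK v2 style)
$sqs = Aws\Sqs\SqsClient::factory(array(
    'key'    => AWS_KEY,
    'secret' => AWS_SECRET,
    'region' => 'us-east-1'
));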
EDIT
I forgot to mention: I gitignore AWS_Creds.php so that it doesn't go into our repo.
Basically, you can use it like this:
$client = DynamoDbClient::factory(array(
    'key' => 'aws_key',
    'secret' => 'aws_secret',
    'region' => 'us-east-1'
));
But the documentation says:
Starting with the AWS SDK for PHP version 2.6.2, you can use an AWS credentials file to specify your credentials. This is a special, INI-formatted file stored under your HOME directory, and is a good way to manage credentials for your development environment. The file should be placed at ~/.aws/credentials, where ~ represents your HOME directory.
and usage:
$dynamoDbClient = DynamoDbClient::factory(array(
    'profile' => 'project1',
    'region' => 'us-west-2',
));
More info: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/credentials.html
After looking at the source code of Credential.php in aws/aws-sdk-php/src: PHP cannot access the /root folder by default.
You can set $_SERVER['HOME'] = [your new home path] in your PHP code and put the credentials file in newHomePath/.aws/credentials.
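A short sketch of that workaround, with a placeholder home path (the override itself is what this answer describes):
// Point the SDK's home-directory lookup somewhere PHP can read,
// and keep the credentials file in /var/www/.aws/credentials
$_SERVER['HOME'] = '/var/www';

$client = DynamoDbClient::factory(array(
    'profile' => 'default',
    'region' => 'eu-west-1'
));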
require('vendor/autoload.php');

use Aws\Ec2\Ec2Client;

// Place both keys here
$credentials = new Aws\Credentials\Credentials('Your Access Key', 'Your Secret Key');

$ec2Client = new Aws\Ec2\Ec2Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => $credentials
]);

$result = $ec2Client->describeKeyPairs();
echo '<pre>';
print_r($result);
Reference: https://docs.aws.amazon.com/aws-sdk-php/v2/guide/credentials.html#passing-credentials-into-a-client-factory-method