Why am I failing to read and write to Amazon S3 from a Delphi VCL application?

I have created a Delphi application from which I want to read from and write to an Amazon S3 (Simple Storage Service) bucket.
In the S3 Management Console I have created a new bucket and set Block all public access to On.
I then created a new user in IAM (Identity and Access Management) and granted this user AmazonS3FullAccess privileges (for now). In my application I added a TAmazonConnectionInfo component to my project, set its AccountKey property to the secret access key, and set its AccountName property to the access key ID of this IAM user.
In my code I am instantiating a TAmazonStorageService class, passing the TAmazonConnectionInfo object to its constructor. I then invoke the UploadObject method, passing the bucket name, an object name, and a TArray&lt;Byte&gt; containing the data I want to store. The call to UploadObject returns False. I've tried several different byte arrays, including one based on the example shown in one of the YouTube videos referenced at the bottom of this post, so I'm fairly sure the problem is not with the object I am trying to store.
I tried setting Block all public access to Off, but that did not solve the problem. I don't know how long those settings take to take effect, but there was no difference in result after half an hour.
Either I have not sufficiently configured my TAmazonConnectionInfo object, or there are one or more objects that I need to add to the project, or some configuration that I need to perform on the bucket.
One concern I have is that my S3 bucket is located in the US East (Ohio) region. The Region property of the TAmazonConnectionInfo component is set to amzrUSEast1, but I am not sure that is correct. I tried setting Region to amzrNotSpecified, but that did not solve the problem.
Also, based on Paweł's comments, I tried setting StorageEndpoint to s3.us-east-2.amazonaws.com (http) and s3-accesspoint.us-east-2.amazonaws.com (https).
I’ve exhausted my options. If you’re having success working with your S3 buckets from Delphi I would be grateful if you could help point me in the right direction.
I am using Delphi 10.3.3 Rio on Windows 10 64-bit.
References:
https://www.youtube.com/watch?v=RUT9clew4PM&t=396s
https://www.youtube.com/watch?v=rtZkVAOvavU&t=1582s
https://www.youtube.com/watch?v=8VjTEtK_VaM&list=PLwUPJvR9mZHg3YgQKG8QCJAqdNxZyDVfg&index=50&t=0s

The code below works. Plug in your values for AccountName, AccountKey and BucketName and try it. If it fails, let us know what response it returned.
To test this, I created a new bucket with default settings: US East (Ohio) us-east-2 region and block all public access checked. I created a new user with programmatic access and assigned it the policy AmazonS3FullAccess.
During my initial test I got this 400 response, which was resolved by setting the StorageEndpoint:
400: Bad Request - The authorization header is malformed; the region
'us-east-1' is wrong; expecting 'us-east-2'
(AuthorizationHeaderMalformed)
If you get a 403 response like below, check to make sure the user policy (or however you configured your permissions) is set correctly, and that your AccountName and AccountKey are correct.
403: Forbidden
program AmazonS3test;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  System.SysUtils, Data.Cloud.CloudAPI, Data.Cloud.AmazonAPI, System.Classes;

const
  AccountName = '';     // IAM user Access key ID
  AccountKey = '';      // IAM user Secret access key
  BucketName = '';      // S3 bucket name
  StorageEndpoint = 's3-us-east-2.amazonaws.com'; // s3-<bucket region>.amazonaws.com
  ObjectName = 'TestFile.txt';
  MyString = 'Testing 1 2 3';

var
  ResponseInfo: TCloudResponseInfo;
  ConnectionInfo: TAmazonConnectionInfo;
  StorageService: TAmazonStorageService;
  StringStream: TStringStream;

begin
  ReportMemoryLeaksOnShutdown := True;
  ConnectionInfo := TAmazonConnectionInfo.Create(nil);
  StorageService := TAmazonStorageService.Create(ConnectionInfo);
  ResponseInfo := TCloudResponseInfo.Create;
  StringStream := TStringStream.Create(MyString);
  try
    try
      ConnectionInfo.AccountName := AccountName;
      ConnectionInfo.AccountKey := AccountKey;
      ConnectionInfo.StorageEndpoint := StorageEndpoint;
      StorageService.UploadObject(BucketName, ObjectName,
        StringStream.Bytes, True, nil, nil, amzbaPrivate, ResponseInfo);
      WriteLn('StatusCode: ' + IntToStr(ResponseInfo.StatusCode));
      WriteLn('Status : ' + ResponseInfo.StatusMessage);
    except
      on E: Exception do
        WriteLn(E.ClassName, ': ', E.Message);
    end;
  finally
    StringStream.Free;
    ResponseInfo.Free;
    StorageService.Free;
    ConnectionInfo.Free;
  end;
end.

Related

How to create an S3 object download link once a file is uploaded to the bucket? [duplicate]

I'm using an AWS Lambda function to create a file and save it to my bucket on S3, and it is working fine. After executing the putObject method, I get a data object, but it only contains an ETag of the recently added object.
s3.putObject(params, function(err, data) {
  // data only contains ETag
});
I need to know the exact URL that I can use in a browser so the client can see the file. The folder has been already made public and I can see the file if I copy the Link from the S3 console.
I tried using getSignedUrl but the URL it returns is used for other purposes, I believe.
Thanks!
The SDKs do not generally contain a convenience method to create a URL for publicly-readable objects. However, when you called PutObject, you provided the bucket and the object's key and that's all you need. You can simply combine those to make the URL of the object, for example:
https://bucket.s3.amazonaws.com/key
So, for example, if your bucket is pablo and the object key is dogs/toto.png, use:
https://pablo.s3.amazonaws.com/dogs/toto.png
Note that S3 keys do not begin with a / prefix. A key is of the form dogs/toto.png, and not /dogs/toto.png.
For region-specific buckets, see Working with Amazon S3 Buckets and AWS S3 URL Styles. Replace s3 with s3.<region>.amazonaws.com or s3-<region>.amazonaws.com in the above URLs, for example:
https://seamus.s3.eu-west-1.amazonaws.com/dogs/setter.png (with dot)
https://seamus.s3-eu-west-1.amazonaws.com/dogs/setter.png (with dash)
If you are using IPv6, then the general URL form will be:
https://BUCKET.s3.dualstack.REGION.amazonaws.com
For some buckets, you may use the older path-style URLs. Path-style URLs are deprecated and only work with buckets created on or before September 30, 2020. They are used like this:
https://s3.amazonaws.com/bucket/key
https://s3.amazonaws.com/pablo/dogs/toto.png
https://s3.eu-west-1.amazonaws.com/seamus/dogs/setter.png
https://s3.dualstack.REGION.amazonaws.com/BUCKET
Currently there are TLS and SSL certificate issues that may require some buckets with dots (.) in their name to be accessed via path-style URLs. AWS plans to address this. See the AWS announcement.
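To make the URL styles above concrete, here is a small Python helper. This is my own sketch (the function name and defaults are assumptions, and it ignores the dualstack and access-point variants), not an SDK API:

```python
def s3_object_url(bucket, key, region=None, path_style=False):
    """Build a public S3 object URL, virtual-hosted style by default."""
    key = key.lstrip("/")  # S3 keys do not begin with a slash
    host = "s3.amazonaws.com" if region is None else f"s3.{region}.amazonaws.com"
    if path_style:
        # Older path-style form: https://s3.<region>.amazonaws.com/bucket/key
        return f"https://{host}/{bucket}/{key}"
    # Virtual-hosted style: https://bucket.s3.<region>.amazonaws.com/key
    return f"https://{bucket}.{host}/{key}"

print(s3_object_url("pablo", "dogs/toto.png"))
# https://pablo.s3.amazonaws.com/dogs/toto.png
print(s3_object_url("seamus", "dogs/setter.png", region="eu-west-1"))
# https://seamus.s3.eu-west-1.amazonaws.com/dogs/setter.png
```

Note the URL only works in a browser if the object is publicly readable; the helper just assembles the string.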
Note: certain characters in object keys need special handling. For example, a space is encoded as + (plus sign) and a plus sign is encoded as %2B.
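You can see that encoding convention in action with Python's standard library, whose quote_plus follows the same form-encoding rules:

```python
from urllib.parse import quote_plus

# A space becomes '+', and a literal '+' becomes '%2B'
print(quote_plus("my file+1.txt"))  # my+file%2B1.txt
```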
In case you have the s3Bucket and fileName objects and want to derive the URL, here is an option:
function getUrlFromBucket(s3Bucket, fileName) {
  const { config: { params, region } } = s3Bucket;
  const regionString = region.includes('us-east-1') ? '' : ('-' + region);
  return `https://${params.Bucket}.s3${regionString}.amazonaws.com/${fileName}`;
}
You can make another call to get a signed URL:
var params = {Bucket: 'bucket', Key: 'key'};
s3.getSignedUrl('putObject', params, function (err, url) {
  console.log('The URL is', url);
});

C# AWS SDK SecurityTokenServiceClient.AssumeRole returning "SignatureDoesNotMatch" 403 forbidden

I've been implementing an AWS S3 integration with the C# AWS SDK in a development environment, and everything has been going well. Part of the requirement is that the IAM AccessKey and SecretKey rotate and that the credential/config files be stored or cached, and there is also a Role to be assumed in the process.
I have a method which returns credentials after initializing an AmazonSecurityTokenServiceClient with AccessKey, SecretKey, and RegionEndpoint, formats an AssumeRoleRequest with the RoleArn, and then executes the request:
using (var STSClient = new AmazonSecurityTokenServiceClient(accessKey, secretKey, bucketRegion))
{
    try
    {
        var response = STSClient.AssumeRole(new AssumeRoleRequest(roleARN));
        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
            return response.Credentials;
    }
    catch (AmazonSecurityTokenServiceException ex)
    {
        return null;
    }
}
This is simplified, as the real implementation validates the credential variables, etc., and it matches the AWS developer code examples (although I can't find the link to that page anymore).
This has been working in dev just fine. Having moved this to a QA env with new AWS credentials, which I've been assured have been set up in the same process as the dev credentials, I'm now receiving an exception on the AssumeRole call.
The AssumeRole method's documentation doesn't say it throws that exception; it's just the one it raises. The response is StatusCode: 403 Forbidden, ErrorCode: SignatureDoesNotMatch, ErrorType: Sender, Message: "The request signature we calculated does not match the signature you provided...".
Things I have ruled out:
Keys are correct and do not contain escaped characters (/), or leading/trailing spaces
bucket region is correct us-west-2
sts auth region is us-east-1
SignatureVersion is 4
Switching back to the dev keys works, but that is not a production-friendly solution. Ultimately I will not be in charge of the keys, or of the AWS account used to create them. I've been in touch with the IT admin who created the accounts/keys/roles, and he assured me they were created the same way I created the dev accounts/keys/roles (which was an agreed-upon process prior to development).
The provided accounts/keys/roles can be accessed via the CLI or web console, so I can confirm they work and are active. I've been diligent to not have any CLI created credential or config files floating around that the sdk might access by default.
Any thoughts or suggestions are welcome.
The reason why AWS returns this error is usually that the secret key is incorrect:
The request signature we calculated does not match the signature you provided. Check your key and signing method. (Status Code: 403; Error Code: SignatureDoesNotMatch)
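For background on why a wrong secret key produces exactly this error: in Signature Version 4, the client derives a signing key from the secret key plus the request date, region, and service, and the server repeats the derivation with its own copy of the secret. Any mismatch in those inputs, not just the secret itself but also the signing region, yields a different signature. A sketch of the standard derivation, stdlib only (the placeholder key is made up):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key (date is YYYYMMDD)."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Changing any input (secret, date, region, service) changes the key,
# and therefore the final signature the server compares against.
key_west = sigv4_signing_key("FAKE/EXAMPLE/SECRET", "20200101", "us-west-2", "sts")
key_east = sigv4_signing_key("FAKE/EXAMPLE/SECRET", "20200101", "us-east-1", "sts")
```

This is why checking the secret key for copy/paste damage and confirming the STS signing region are the usual first steps for SignatureDoesNotMatch.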

access credentials error in Copy Command in S3

I am facing an access credentials error when I run the COPY command in S3.
My COPY command is:
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=D93vB$;yYq'
csv;
The error message is:
error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secret_access_key=...[;token=...]'
'aws_access_key_id=?;
aws_secret_access_key=?''
Could anyone please explain what aws_access_key_id and aws_secret_access_key are, and where we can find them?
Thanks in advance.
Mani
The access key you're using looks more like a secret key; access key IDs usually look something like "AKIAXXXXXXXXXXX".
Also, don't post them openly in Stack Overflow questions. If someone gets hold of a set of access keys, they can access your AWS environment.
Access Key & Secret Key are the most basic form of credentials / authentication used in AWS. One is useless without the other, so if you've lost one of the two, you'll need to regenerate a set of keys.
To do this, go into the AWS console, go to the IAM services (Identity and Access Management) and go into users. Here, select the user that you're currently using (probably yourself) and go to the Security Credentials tab.
Here, under Access keys, you can see which sets of keys are currently active for this user. You can only have two sets active at a time, so if there are already two sets present, delete one and create a new pair. You can download the new pair as a file called "credentials.csv"; it will contain your user name, access key and secret key.
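For reference, a working COPY command supplies both halves of that key pair in a single quoted credentials string, matching the format shown in the error message (placeholders below, not real keys):

```sql
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=<your-access-key-id>;aws_secret_access_key=<your-secret-access-key>'
csv;
```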

Allow 3rd party app to upload file to AWS s3

I need a way to allow a 3rd party app to upload a txt file (350KB and slowly growing) to an s3 bucket in AWS. I'm hoping for a solution involving an endpoint they can PUT to with some authorization key or the like in the header. The bucket can't be public to all.
I've read this: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
and this: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
but can't quite seem to find the solution I'm seeking.
I'd suggest using a combination of AWS API Gateway, a Lambda function, and finally S3.
Your clients will call the API Gateway endpoint.
The endpoint will execute an AWS Lambda function that will then write the file out to S3.
Only the Lambda function needs rights to the bucket, so the bucket remains non-public and protected.
If you already have an EC2 instance running, you could replace the Lambda piece with custom code running on your EC2 instance, but using Lambda gives you a 'serverless' solution that scales automatically and has no minimum monthly cost.
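The Lambda piece of that architecture can be sketched like this. All names here are hypothetical, the event shape assumes API Gateway's proxy integration, and the S3 client is injected so the sketch stays self-contained (in real use you would pass something like boto3's S3 client and a real bucket name):

```python
import base64

def make_handler(s3_client, bucket):
    """Return a Lambda-style handler that writes the request body to S3."""
    def handler(event, context=None):
        body = event.get("body", "")
        if event.get("isBase64Encoded"):
            # API Gateway base64-encodes binary payloads
            body = base64.b64decode(body).decode("utf-8")
        # Hypothetical route parameter naming the object key
        key = event.get("pathParameters", {}).get("filename", "upload.txt")
        s3_client.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
        return {"statusCode": 200, "body": f"stored s3://{bucket}/{key}"}
    return handler

# In real use: handler = make_handler(boto3.client("s3"), "my-bucket")
```

The 3rd party then authenticates to API Gateway (for example with an API key), and only the Lambda's execution role ever holds S3 permissions.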
I ended up using the AWS SDK. It's available for Java, .NET, PHP, and Ruby, so there's very high probability the 3rd party app is using one of those. See here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpNET.html
In that case, it's just a matter of them using the SDK to upload the file. I wrote a sample version in .NET running on my local machine. First, install the AWSSDK Nuget package. Then, here is the code (taken from AWS sample):
C#:
// Requires: using System; using Amazon; using Amazon.S3; using Amazon.S3.Model;
var bucketName = "my-bucket";
var keyName = "what-you-want-the-name-of-S3-object-to-be";
var filePath = "C:\\Users\\scott\\Desktop\\test_upload.txt";
var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);
try
{
    PutObjectRequest putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
    PutObjectResponse response2 = client.PutObject(putRequest2);
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.ErrorCode != null &&
        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
    {
        Console.WriteLine("Check the provided AWS Credentials.");
        Console.WriteLine("For service sign up go to http://aws.amazon.com/s3");
    }
    else
    {
        Console.WriteLine("Error occurred. Message:'{0}' when writing an object",
            amazonS3Exception.Message);
    }
}
Web.config:
<add key="AWSAccessKey" value="your-access-key"/>
<add key="AWSSecretKey" value="your-secret-key"/>
You get the access key and secret key by creating a new user in your AWS account. When you do so, they'll be generated for you and provided for download. You can then attach the AmazonS3FullAccess policy to that user and the document will be uploaded to S3.
NOTE: this was a POC. In the actual 3rd-party app using this, they won't want to hardcode the credentials in the web config for security purposes. See here: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-creds.html

AWS DynamoDB Requested resource not found

I am trying to connect my app to DynamoDB. I have set everything up the way Amazon recommends, but I still keep getting the same error over and over again:
7-21 11:02:29.856 10027-10081/com.amazonaws.cognito.sync.demo E/AndroidRuntime﹕ FATAL EXCEPTION: AsyncTask #1
Process: com.amazonaws.cognito.sync.demo, PID: 10027
java.lang.RuntimeException: An error occured while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:304)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:355)
at java.util.concurrent.FutureTask.setException(FutureTask.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:242)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: GIONOKT7E3AMTC4PO19CPLON93VV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:710)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2930)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.updateItem(AmazonDynamoDBClient.java:930)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$SaveObjectHandler.doUpdateItem(DynamoDBMapper.java:1173)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$2.executeLowLevelRequest(DynamoDBMapper.java:873)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper$SaveObjectHandler.execute(DynamoDBMapper.java:1056)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.save(DynamoDBMapper.java:904)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.save(DynamoDBMapper.java:688)
at com.amazonaws.cognito.sync.Utils.FriendsSyncManager.initalize_credentialprovider(FriendsSyncManager.java:43)
at com.amazonaws.cognito.sync.ValU.FriendListActivity$SyncFriends.doInBackground(FriendListActivity.java:168)
at com.amazonaws.cognito.sync.ValU.FriendListActivity$SyncFriends.doInBackground(FriendListActivity.java:160)
at android.os.AsyncTask$2.call(AsyncTask.java:292)
What could be the solution?
OK, it seems you need to add:
ddbClient.setRegion(Region.getRegion(Regions.EU_WEST_1));
// Add the correct Region. In my case it's EU_WEST_1
after the following line:
AmazonDynamoDBClient ddbClient = new AmazonDynamoDBClient(credentialsProvider);
Now it works. The table was successfully created.
Have a nice day and thanks!
It seems that the table you are trying to connect to doesn't exist. Verify the table name in your code against the name of the table you created.
Please note that the table name is case-sensitive.
You need to check a few things:
Check your credentials in your code:
private static String awsSecretKey = "your_secret_key"; // get it in the AWS web UI
private static String awsAccessKey = "your_access_key"; // get it in the AWS web UI
Check your region code and set the correct value:
client.setRegion(Region.getRegion(Regions.US_EAST_1));
You can get this value from your AWS web console.
Check whether you have already created the DynamoDB table and indexes under your region.
If not, check your code:
@DynamoDBTable(tableName = "Event")
public class Event implements Serializable {
    public static final String CITY_INDEX = "City-Index";
    public static final String AWAY_TEAM_INDEX = "AwayTeam-Index";
and create your table (Event in my case) and indexes (City-Index, AwayTeam-Index in my case) manually from the AWS Console or by some other means. Please note: table and index names are case-sensitive.
Good sample - https://github.com/aws-samples/lambda-java8-dynamodb
According to the docs, either you don't have a table with that name, or it is in CREATING status.
I would double check to verify that the table does in fact exist, in the correct region, and you're using an access key that can reach it.
Or you might have selected the wrong region.
Along with @Yuliia Ashomok's answer:
AWS C++ SDK 1.7.25
Aws::Client::ClientConfiguration clientConfig;
clientConfig.region = Aws::Region::US_WEST_2;
If using Spring boot, you can configure the region via application properties:
in src/main/resources/application.yaml
cloud:
  aws:
    region:
      static: eu-west-1
As this problem is somewhat platform-agnostic: for anyone coming at the same problem from .NET/C# ...
You can instantiate your client with the Endpoint in the constructor:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials, Amazon.RegionEndpoint.USEast1 );
I would have assumed that this would be picked up from your AWS profile, but it seems not, although you could do something like this, where profile is imported from your SharedCredentialsFile:
new AmazonDynamoDBClient(credentials, profile.Region );
If you are sure that you have already created the table in DynamoDB but are still getting this error, chances are that your region is not correct. Look at the top-right corner of the AWS portal, next to your profile dropdown: another dropdown gives you the option to select your region. Now follow the process again with the right region.
Hope this helps; it worked for me.