Allow 3rd party app to upload file to AWS S3

I need a way to allow a 3rd party app to upload a txt file (350KB and slowly growing) to an s3 bucket in AWS. I'm hoping for a solution involving an endpoint they can PUT to with some authorization key or the like in the header. The bucket can't be public to all.
I've read this: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
and this: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
but can't quite seem to find the solution I'm seeking.

I'd suggest using a combination of the AWS API Gateway, a Lambda function, and S3.
Your clients will call the API Gateway endpoint.
The endpoint will invoke an AWS Lambda function that will then write the file out to S3.
Only the Lambda function will need rights to the bucket, so the bucket will remain non-public and protected.
If you already have an EC2 instance running, you could replace the Lambda piece with custom code running on your EC2 instance, but using Lambda gives you a 'serverless' solution that scales automatically and has no minimum monthly cost.
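For reference, here is a minimal sketch of what that Lambda piece could look like in C# (my assumptions: an API Gateway proxy integration plus the Amazon.Lambda.APIGatewayEvents and AWSSDK.S3 NuGet packages; the bucket name and key are placeholders):

using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Amazon.S3;
using Amazon.S3.Model;

public class UploadFunction
{
    // The Lambda execution role is the only principal that needs s3:PutObject on the bucket.
    private static readonly IAmazonS3 s3 = new AmazonS3Client();

    // API Gateway (proxy integration) invokes this handler; the PUT body becomes the S3 object.
    public async Task<APIGatewayProxyResponse> Handler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        var put = new PutObjectRequest
        {
            BucketName = "my-private-bucket",   // placeholder bucket name
            Key = "uploads/incoming.txt",       // placeholder object key
            ContentBody = request.Body,         // the text the 3rd party PUTs to the endpoint
            ContentType = "text/plain"
        };
        await s3.PutObjectAsync(put);
        return new APIGatewayProxyResponse { StatusCode = 200, Body = "uploaded" };
    }
}

An API key or a Cognito/custom authorizer on the API Gateway method then gives you the 'authorization key in the header' behaviour without ever exposing the bucket.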

I ended up using the AWS SDK. It's available for Java, .NET, PHP, and Ruby, so there's a very high probability the 3rd party app is using one of those. See here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpNET.html
In that case, it's just a matter of them using the SDK to upload the file. I wrote a sample version in .NET running on my local machine. First, install the AWSSDK NuGet package. Then, here is the code (taken from the AWS sample):
C#:
var bucketName = "my-bucket";
var keyName = "what-you-want-the-name-of-S3-object-to-be";
var filePath = "C:\\Users\\scott\\Desktop\\test_upload.txt";
var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);
try
{
    PutObjectRequest putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");

    PutObjectResponse response2 = client.PutObject(putRequest2);
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.ErrorCode != null &&
        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
         ||
         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
    {
        Console.WriteLine("Check the provided AWS Credentials.");
        Console.WriteLine(
            "For service sign up go to http://aws.amazon.com/s3");
    }
    else
    {
        Console.WriteLine(
            "Error occurred. Message:'{0}' when writing an object"
            , amazonS3Exception.Message);
    }
}
Web.config:
<add key="AWSAccessKey" value="your-access-key"/>
<add key="AWSSecretKey" value="your-secret-key"/>
You get the access key and secret key by creating a new IAM user in your AWS account. When you do so, AWS generates them for you and offers them for download. You can then attach the AmazonS3FullAccess policy (or, better, a policy scoped to just this bucket) to that user and the document will be uploaded to S3.
NOTE: this was a POC. In the actual 3rd party app using this, they won't want to hardcode the credentials in Web.config for security purposes. See here: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-creds.html
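As a hedged sketch of that approach (assuming a current AWSSDK.Core with the Amazon.Runtime.CredentialManagement namespace; the profile name is hypothetical), the client can be built from a profile in the shared credentials file instead of Web.config keys:

using Amazon;
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;
using Amazon.S3;

// Load credentials from the shared credentials file / SDK store rather than Web.config.
// "third-party-uploader" is a placeholder profile name.
var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("third-party-uploader", out AWSCredentials awsCredentials))
{
    var client = new AmazonS3Client(awsCredentials, RegionEndpoint.USWest2);
    // ... same PutObjectRequest call as in the sample above
}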

Related

Multi-part upload to Google Storage

I'm trying to implement multi-part upload to Google Storage, but to my surprise it does not seem to be straightforward (I could not find a Java example).
The only mention I found was in the XML API: https://cloud.google.com/storage/docs/multipart-uploads
I also found some discussion around a compose API (StorageExample.java#L446) mentioned in google-cloud-java issue 1440.
Any recommendations on how to do a multipart upload?
I got the multi-part upload working with Kolban's suggestion (for details, check the blog post).
This is how I create the S3 client and point it at Google Storage:
def createClient(accessKey: String, secretKey: String, region: String = "us"): AmazonS3 = {
  val endpointConfig = new EndpointConfiguration("https://storage.googleapis.com", region)
  val credentials = new BasicAWSCredentials(accessKey, secretKey)
  val credentialsProvider = new AWSStaticCredentialsProvider(credentials)

  val clientConfig = new ClientConfiguration()
  clientConfig.setUseGzip(true)
  clientConfig.setMaxConnections(200)
  clientConfig.setMaxErrorRetry(1)

  val clientBuilder = AmazonS3ClientBuilder.standard()
  clientBuilder.setEndpointConfiguration(endpointConfig)
  clientBuilder.withCredentials(credentialsProvider)
  clientBuilder.withClientConfiguration(clientConfig)
  clientBuilder.build()
}
Because I'm doing the upload from the frontend (after I generate signed URLs for each part using the AmazonS3 client), I needed to enable CORS.
For testing, I enabled everything for now:
$ gsutil cors get gs://bucket
$ echo '[{"origin": ["*"],"responseHeader": ["Content-Type", "ETag"],"method": ["GET", "HEAD", "PUT", "DELETE", "PATCH"],"maxAgeSeconds": 3600}]' > cors-config.json
$ gsutil cors set cors-config.json gs://bucket
See https://cloud.google.com/storage/docs/configuring-cors#gsutil_1
Currently, a Java client library for multipart upload in Cloud Storage is not available. You can raise a feature request for it via this link. As mentioned by John Hanley, the next best thing you can do is a parallel composite upload with gsutil (CLI), JSON and XML support, or a resumable upload with the Java libraries.
In a parallel composite upload, the parallel writes can be done using the JSON or XML API for Google Cloud Storage. Specifically, you would write a number of smaller objects in parallel and then (once all of those objects have been written) call the Compose request to compose them into one larger object.
If you're using the JSON API the compose documentation is at : https://cloud.google.com/storage/docs/json_api/v1/objects/compose
If you're using the XML API the compose documentation is at : https://cloud.google.com/storage/docs/reference-methods#putobject (see the compose query parameter).
There is also an interesting document link provided by Kolban which you can try and work through. I would also like to mention that you can do multipart uploads in Java if you use the Google Drive API (v3); there is a code example where the files.create method is used with uploadType=multipart.

AWS .Net API - The provided token has expired

I am facing this weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I run "aws s3 ls" and can see all the S3 buckets. Similarly, I can run any AWS command to view objects and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on other colleagues' computers. We all have access to AWS through the same role. There are no users in the IAM console.
Here is the sample code, but I don't think there is anything wrong with it, because it works fine on other users' computers.
var _ssmClient = new AmazonSimpleSystemsManagementClient();
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why running commands through the CLI works and API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file?
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Take a backup of all files and folders and delete everything from that location. (Presumably the SDK was picking up stale cached credentials that the AWS Toolkit had stored in this folder, ahead of the shared credentials file the CLI uses.)
This solution applies only if you can run commands like "aws s3 ls" and get results successfully, but you get the error "The provided token has expired" when running the same thing from the .NET API libraries.

C# AWS SDK SecurityTokenServiceClient.AssumeRole returning "SignatureDoesNotMatch" 403 forbidden

I've been implementing an AWS S3 integration with the C# AWS SDK in a development environment, and everything has been going well. Part of the requirement is that the IAM AccessKey and SecretKey rotate, that the credential/config files are stored or cached, and that there is also a Role to be assumed in the process.
I have a method which returns credentials after initializing an AmazonSecurityTokenServiceClient with AccessKey, SecretKey, and RegionEndpoint, formats an AssumeRoleRequest with the RoleArn, and then executes the request:
using (var STSClient = new AmazonSecurityTokenServiceClient(accessKey, secretKey, bucketRegion))
{
    try
    {
        var response = STSClient.AssumeRole(new AssumeRoleRequest(roleARN));
        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK) return response.Credentials;
    }
    catch (AmazonSecurityTokenServiceException ex)
    {
        return null;
    }
}
This is simplified, as the real implementation validates the credential variables, etc., and it matches the AWS developer code examples (although I can't find the link to that page anymore).
This has been working in dev just fine. Having moved this to a QA environment with new AWS credentials, which I've been assured were set up by the same process as the dev credentials, I'm now receiving an exception on the AssumeRole call.
The AssumeRole method's documentation doesn't say it throws that exception; it's just the one it raises. StatusCode: 403 Forbidden, ErrorCode: SignatureDoesNotMatch, ErrorType: Sender, Message: "The request signature we calculated does not match the signature you provided....".
Things I have ruled out:
Keys are correct and do not contain escaped characters (/), or leading/trailing spaces
Bucket region is correct (us-west-2)
STS auth region is us-east-1
SignatureVersion is 4
Switching back to the dev keys works, but that is not a production-friendly solution. Ultimately I will not be in charge of the keys, or of the AWS account used to create them. I've been in touch with the IT admin who created the accounts/keys/roles and he assured me they were created the same way I created the dev accounts/keys/roles (which was an agreed-upon process prior to development).
The provided accounts/keys/roles can be accessed via the CLI or web console, so I can confirm they work and are active. I've been diligent to not have any CLI created credential or config files floating around that the sdk might access by default.
Any thoughts or suggestions are welcome.
The reason AWS returns this error is usually that the secret key is incorrect:
The request signature we calculated does not match the signature you provided. Check your key and signing method. (Status Code: 403; Error Code: SignatureDoesNotMatch)
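One quick way to narrow it down (a sketch reusing the accessKey/secretKey variables from the question) is to call GetCallerIdentity with the raw keys; it is signed the same way and needs no IAM permissions, so a SignatureDoesNotMatch here points at the key pair itself rather than the role or its trust policy:

using System;
using Amazon;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

// Sanity-check the raw key pair before attempting AssumeRole.
using (var sts = new AmazonSecurityTokenServiceClient(accessKey, secretKey, RegionEndpoint.USEast1))
{
    var identity = sts.GetCallerIdentityAsync(new GetCallerIdentityRequest())
        .GetAwaiter().GetResult();
    Console.WriteLine("Authenticated as " + identity.Arn);
}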

DynamoDB + Flutter

I am trying to create an app that uses AWS services. I already use the Cognito plugin for Flutter but can't get it to work with DynamoDB. Should I use a Lambda function and point to it, or is it possible to get data from a table directly from Flutter? If that's the case, which URL should I use?
I am new to AWS services and don't know whether it is possible to access a DynamoDB table with a URL, or whether I should just use a Lambda function.
Since this is kind of an open-ended question and you mentioned Lambdas, I would suggest checking out the Serverless Framework. They have a couple of template applications in various languages/frameworks. Serverless makes it really easy to spin up Lambdas configured behind an API Gateway, and you can start with the default proxy+ resource. You can also define DynamoDB tables to be auto-created/destroyed when you deploy/destroy your serverless application. When you successfully deploy using the command 'serverless deploy', it will output the URL to access your API Gateway, which will trigger your Lambda seamlessly.
Then once you have a basic "hello-word" type API hosted on AWS, you can just follow the docs along for how to set up the DynamoDB library/sdk for your given framework/language.
Let me know if you have any questions!
PS: I would also, later on, recommend using an API Gateway Authorizer against your Cognito User Pool. Since you already have auth on the Flutter app, all you have to do is pass through the token. The Authorizer can also be easily set up via the Serverless Framework. Then your API will be authenticated at the Gateway level, leaving AWS to do all the hard work :)
If you want to read directly from DynamoDB, it is actually pretty easy.
First, add this package to your project.
Then create the models you want to read and write, along with conversion methods.
class Parent {
  // Both fields are assigned in the constructor body, so they are marked late.
  late String name;
  late List<Child> children;

  // Named constructor (not a factory) so the fields can be assigned directly.
  // Assumes Child exposes matching fromDBValue/toDBValue methods.
  Parent.fromDBValue(Map<String, AttributeValue> dbValue) {
    name = dbValue["name"]!.s!;
    children =
        dbValue["children"]!.l!.map((e) => Child.fromDBValue(e.m!)).toList();
  }

  Map<String, AttributeValue> toDBValue() {
    Map<String, AttributeValue> dbMap = Map();
    dbMap["name"] = AttributeValue(s: name);
    dbMap["children"] = AttributeValue(
        l: children.map((e) => AttributeValue(m: e.toDBValue())).toList());
    return dbMap;
  }
}
(AttributeValue comes from the package)
Then you can consume the DynamoDB API as per normal.
Create a Dynamo service:
class DynamoService {
  final service = DynamoDB(
      region: 'af-south-1',
      credentials: AwsClientCredentials(
          accessKey: "someAccessKey", secretKey: "somesecretkey"));

  Future<List<Map<String, AttributeValue>>?> getAll(
      {required String tableName}) async {
    var result = await service.scan(tableName: tableName);
    return result.items;
  }

  Future insertNewItem(Map<String, AttributeValue> dbData, String tableName) async {
    await service.putItem(item: dbData, tableName: tableName);
  }
}
Then you can convert the results when getting all the data from Dynamo:
Future<List<Parent>> getAllParents() async {
  List<Map<String, AttributeValue>>? parents =
      await dynamoService.getAll(tableName: "parents");
  return parents!.map((e) => Parent.fromDBValue(e)).toList();
}
You can check all Dynamo operations from here

How to test amazon s3 in sandbox environment with AmazonS3Client

How do you test S3 in a sandbox environment when your code uses an AmazonS3Client?
IAmazonS3 amazonS3Client =
    Amazon.AWSClientFactory.CreateAmazonS3Client(s3AccessKey, s3SecretKey, config);
var request = new PutObjectRequest()
{
    BucketName = "bucketname",
    Key = "filename",
    ContentType = "application/json",
    ContentBody = "body"
};
amazonS3Client.PutObject(request);
I've tried S3Ninja and FakeS3, but the AmazonS3Client didn't hit them, which leads me to believe that an AmazonS3Client doesn't behave like a normal REST client.
The only solution I can think of is to convert the code to use a REST client and manually build the requests and headers, just so that I can use S3Ninja or FakeS3.
I got FakeS3 to work. My problem was that I needed to update my hosts file (/etc/hosts):
127.0.0.1 myBucketName.s3.amazonaws.com s3.amazonaws.com
The reason we have to do this is that objects are PUT to http://bucketname.s3.amazonaws.com, so when running locally your DNS will not know where to go for bucketname.s3.amazonaws.com.
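An alternative that avoids editing the hosts file (a sketch, assuming FakeS3 is listening on its default port 4567) is to point the client straight at the fake endpoint and force path-style addressing, so the bucket name stays in the URL path instead of the hostname:

using Amazon.S3;

// Point the SDK at the local fake endpoint instead of *.s3.amazonaws.com.
var config = new AmazonS3Config
{
    ServiceURL = "http://localhost:4567",  // FakeS3's default port; adjust as needed
    ForcePathStyle = true                  // keep the bucket in the path, not the hostname
};
var amazonS3Client = new AmazonS3Client(s3AccessKey, s3SecretKey, config);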
Instead of spending time and energy on S3 mock-ups and FakeS3, a better approach would be to create a new S3 bucket and use it.
S3 isn't costly (relatively), and there are several ways you can optimize the cost. By setting up a lifecycle rule (delete all the objects after a day, so today's objects are removed at the close of the day) you keep the cost in check.
Also set up RRS (Reduced Redundancy Storage) for your S3 bucket; it further reduces the cost by roughly a third of the regular price.
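For illustration, a one-day expiration rule can be applied from the same SDK (a sketch assuming the Prefix-style lifecycle rule exposed by older AWSSDK.S3 versions; newer versions prefer a Filter, and the bucket name is a placeholder):

using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

// Expire every object in the test bucket after one day so the cost stays near zero.
var lifecycleRequest = new PutLifecycleConfigurationRequest
{
    BucketName = "my-test-bucket",   // placeholder bucket name
    Configuration = new LifecycleConfiguration
    {
        Rules = new List<LifecycleRule>
        {
            new LifecycleRule
            {
                Id = "expire-test-objects",
                Prefix = "",   // apply the rule to the whole bucket
                Status = LifecycleRuleStatus.Enabled,
                Expiration = new LifecycleRuleExpiration { Days = 1 }
            }
        }
    }
};
amazonS3Client.PutLifecycleConfiguration(lifecycleRequest);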
I don't know if it is still relevant for you nowadays, but there is a fully functional local AWS cloud stack called LocalStack, and it's free.