I have an S3 bucket that contains multiple folders, and I want to list the objects from specific subfolders only. For example, my folder structure is Folder1/Download and Folder1/Upload; I want the objects only from Folder1/Download, not from Upload. The function I'm using is:
public void GetFTPFilesAWS(long ID)
{
    AmazonS3Client amazonS3Client = new AmazonS3Client("", "", Amazon.Region);
    ListObjectsRequest request = new ListObjectsRequest
    {
        BucketName = "",
        //Prefix = "Download",
        //Delimiter = "/"
    };
    var practiceFolderName = "Practice_" + ID;
    string[] businessClaim = { "Folder1/Download/", "Folder2/Download/", };
    foreach (string s in businessClaim)
    {
        _ = practiceFolderName + s;
    }
    ListObjectsResponse response = amazonS3Client.ListObjects(request);
}
This returns every object in the bucket, and when I try to use the prefix and delimiter it doesn't work.
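For reference, S3's Prefix parameter filters keys by leading substring, so each wanted folder needs its own request with its own prefix. Below is a minimal stdlib Java sketch of that same matching rule, with no AWS calls; the class name and sample keys are illustrative, not from the question:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PrefixFilterSketch {
    // S3's Prefix parameter matches keys by leading substring; this mirrors
    // that behaviour locally so the semantics are easy to verify.
    static List<String> filterByPrefix(List<String> keys, String prefix) {
        return keys.stream()
                   .filter(k -> k.startsWith(prefix))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> keys = List.of(
            "Practice_42/Folder1/Download/a.txt",
            "Practice_42/Folder1/Upload/b.txt",
            "Practice_42/Folder2/Download/c.txt");
        // One ListObjects call per wanted folder, each with its own Prefix.
        String prefix = "Practice_42/Folder1/Download/";
        System.out.println(filterByPrefix(keys, prefix));
    }
}
```

In the question's code, the loop builds the per-folder prefix strings but never assigns them to request.Prefix, which is why every object comes back.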
I was unable to find any documentation on how to empty an Amazon S3 bucket programmatically in Java. Any help will be appreciated.
To delete Amazon S3 objects in Java, you can use the AWS SDK for Java v2. To empty a bucket, first list all of the objects in the bucket using this code:
public static void listBucketObjects(S3Client s3, String bucketName) {
    try {
        ListObjectsRequest listObjects = ListObjectsRequest
            .builder()
            .bucket(bucketName)
            .build();
        ListObjectsResponse res = s3.listObjects(listObjects);
        List<S3Object> objects = res.contents();
        for (S3Object myValue : objects) {
            System.out.print("\n The name of the key is " + myValue.key());
            // calKb is a helper defined elsewhere that converts bytes to KB.
            System.out.print("\n The object is " + calKb(myValue.size()) + " KBs");
            System.out.print("\n The owner is " + myValue.owner());
        }
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
For each object, you can delete it using this code:
public static void deleteBucketObjects(S3Client s3, String bucketName, String objectName) {
    ArrayList<ObjectIdentifier> toDelete = new ArrayList<>();
    toDelete.add(ObjectIdentifier.builder().key(objectName).build());
    try {
        DeleteObjectsRequest dor = DeleteObjectsRequest.builder()
            .bucket(bucketName)
            .delete(Delete.builder().objects(toDelete).build())
            .build();
        s3.deleteObjects(dor);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    System.out.println("Done!");
}
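Note that a single DeleteObjects request accepts at most 1,000 keys, so emptying a large bucket means splitting the listing into batches before calling deleteObjects. A stdlib sketch of that batching (the chunk helper is illustrative, not part of the SDK):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatchSketch {
    // S3 DeleteObjects accepts at most 1,000 keys per request; split a
    // large key list into fixed-size batches, one per deleteObjects call.
    static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            batches.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return batches;
    }
}
```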
You can find these examples and many others in the AWS GitHub repository here:
Amazon S3 Java code examples
I'm trying to put an image into an S3 bucket via an upload URL. Here's what I've done so far:
//Get Upload URL & Access the file details
string path = HttpUtility.UrlPathEncode("image/getUploadURL/" + imageFileDetails.Extension + "/" + imageFileDetails.Name + "/" + sessionID);
string uploadURL = " ";
string mimeType = MimeTypesMap.GetMimeType(imageFileDetails.FullName);
HttpResponseMessage response = await client.GetAsync(path);
if (response.IsSuccessStatusCode)
{
    var getResponse = await response.Content.ReadAsStringAsync();
    dynamic JSONObject = JObject.Parse(getResponse);
    uploadURL = JSONObject.data.uploadURL;
    if (uploadURL != "" && uploadURL != null)
    {
        using (var content = new MultipartFormDataContent())
        {
            Console.WriteLine("UploadURL: {0}", uploadURL);
            Console.WriteLine("newFilePath: {0}", newFilePath);
            FileStream fileStream = new FileStream(newFilePath, FileMode.Open);
            content.Add(new StreamContent(fileStream), "data");
            using (var message = await client.PutAsync(uploadURL, content))
            {
                var putResponse = await message.Content.ReadAsStringAsync();
                Console.WriteLine("PutResponse: {0}", putResponse);
            }
        }
        Console.ReadKey();
    }
}
I continue to get the following error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Should I be approaching this differently? I've tried with RestSharp as well and have not gotten any positive results yet. Any ideas, or workarounds would be appreciated!
Use the strongly typed Amazon S3 C# client. To upload an object using a presigned URL, you can use this code:
namespace UploadUsingPresignedURLExample
{
    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    /// <summary>
    /// This example shows how to upload an object to an Amazon Simple Storage
    /// Service (Amazon S3) bucket using a presigned URL. The code first
    /// creates a presigned URL and then uses it to upload an object to an
    /// Amazon S3 bucket using that URL. The example was created using the
    /// AWS SDK for .NET version 3.7 and .NET Core 5.0.
    /// </summary>
    public class UploadUsingPresignedURL
    {
        public static void Main()
        {
            string bucketName = "doc-example-bucket";
            string keyName = "samplefile.txt";
            string filePath = $"source\\{keyName}";

            // Specify how long the signed URL will be valid in hours.
            double timeoutDuration = 12;

            // If the AWS Region defined for your default user is different
            // from the Region where your Amazon S3 bucket is located,
            // pass the Region name to the S3 client object's constructor.
            // For example: RegionEndpoint.USWest2.
            IAmazonS3 client = new AmazonS3Client();

            var url = GeneratePreSignedURL(client, bucketName, keyName, timeoutDuration);
            var success = UploadObject(filePath, url);
            if (success)
            {
                Console.WriteLine("Upload succeeded.");
            }
            else
            {
                Console.WriteLine("Upload failed.");
            }
        }

        /// <summary>
        /// Uploads an object to an S3 bucket using the presigned URL passed in
        /// the url parameter.
        /// </summary>
        /// <param name="filePath">The path (including file name) to the local
        /// file you want to upload.</param>
        /// <param name="url">The presigned URL that will be used to upload the
        /// file to the S3 bucket.</param>
        /// <returns>A Boolean value indicating the success or failure of the
        /// operation, based on the HttpWebResponse.</returns>
        public static bool UploadObject(string filePath, string url)
        {
            HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
            httpRequest.Method = "PUT";
            using (Stream dataStream = httpRequest.GetRequestStream())
            {
                var buffer = new byte[8000];
                using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    int bytesRead = 0;
                    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        dataStream.Write(buffer, 0, bytesRead);
                    }
                }
            }

            HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
            return response.StatusCode == HttpStatusCode.OK;
        }

        /// <summary>
        /// Generates a presigned URL which will be used to upload an object to
        /// an S3 bucket.
        /// </summary>
        /// <param name="client">The initialized S3 client object used to call
        /// GetPreSignedURL.</param>
        /// <param name="bucketName">The name of the S3 bucket to which the
        /// presigned URL will point.</param>
        /// <param name="objectKey">The name of the file that will be uploaded.</param>
        /// <param name="duration">How long (in hours) the presigned URL will
        /// be valid.</param>
        /// <returns>The generated URL.</returns>
        public static string GeneratePreSignedURL(
            IAmazonS3 client,
            string bucketName,
            string objectKey,
            double duration)
        {
            var request = new GetPreSignedUrlRequest
            {
                BucketName = bucketName,
                Key = objectKey,
                Verb = HttpVerb.PUT,
                Expires = DateTime.UtcNow.AddHours(duration),
            };
            string url = client.GetPreSignedURL(request);
            return url;
        }
    }
}
You can find this example and other Amazon S3 examples on GitHub here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/dotnetv3/S3
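The heart of UploadObject above is an ordinary buffered stream copy. The same loop can be exercised with in-memory streams, with no network or SDK involved (a sketch; the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopySketch {
    // Same pattern as UploadObject: read up to 8000 bytes at a time and
    // write each chunk to the destination until the source is drained.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8000];
        long total = 0;
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
            total += bytesRead;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream("hello".getBytes()), sink);
        System.out.println(copied + " bytes: " + sink);
    }
}
```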
After some extensive research, I was able to locate the error in my code. In the GET call I was passing the incorrect file type, so when I then made the PUT call with the MIME type, it returned a 403.
I had to update my initial GET path with the matching file type and the appropriate encoding.
In my example below it's hard-coded (that will change, of course), but this was the fix:
string path = HttpUtility.UrlPathEncode("image/getUploadURL/image%2Fpng/" + imageFileDetails.Name + "/" + sessionID);
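The fix works because the MIME type contains a slash, which must be percent-encoded so it is treated as a single path segment ("image%2Fpng") rather than split into two. A quick stdlib Java illustration of that encoding:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class MimeEncodeSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // "image/png" contains a '/', which a server would read as a path
        // separator; percent-encoding yields the "image%2Fpng" segment above.
        String encoded = URLEncoder.encode("image/png", "UTF-8");
        System.out.println(encoded); // image%2Fpng
    }
}
```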
I have an AWS S3-backed static website and a RestApi, and I am configuring a single CloudFront distribution to serve both. I have origin configs for the S3 origin and the RestApi origin, and I am using AWS CDK to define the infrastructure in code.
The approach has been adopted from this article: https://dev.to/evnz/single-cloudfront-distribution-for-s3-web-app-and-api-gateway-15c3
The APIs are defined under the relative paths /r/<resourcename> and /r/api/<methodname>: for example, /r/Account refers to the Account resource and /r/api/Validate refers to an RPC-style method called Validate (in this case an HTTP POST method). The Lambda functions that implement the resource methods handle the preflight OPTIONS request, with the static website's URL listed in the allowed origins for that resource. For example, the /r/api/Validate Lambda has:
exports.main = async function(event, context) {
    try {
        var method = event.httpMethod;
        if (method === "OPTIONS") {
            const response = {
                statusCode: 200,
                headers: {
                    "Access-Control-Allow-Headers": "*",
                    "Access-Control-Allow-Credentials": true,
                    "Access-Control-Allow-Origin": website_url,
                    "Vary": "Origin",
                    "Access-Control-Allow-Methods": "OPTIONS,POST,GET,DELETE"
                }
            };
            return response;
        } else if (method === "POST") {
            ...
        }
        ...
    }
}
The API and website are deployed fine. Here's the CDK deployment code fragment.
const string api_domain = "myrestapi.execute-api.ap-south-1.amazonaws.com";
const string api_stage = "prod";

internal WebAppStaticWebsiteStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
{
    // The S3 bucket to hold the static website contents
    var bucket = new Bucket(this, "WebAppStaticWebsiteBucket", new BucketProps {
        PublicReadAccess = false,
        BlockPublicAccess = BlockPublicAccess.BLOCK_ALL,
        RemovalPolicy = RemovalPolicy.DESTROY,
        WebsiteIndexDocument = "index.html",
        Cors = new ICorsRule[] {
            new CorsRule() {
                AllowedHeaders = new string[] { "*" },
                AllowedMethods = new HttpMethods[] { HttpMethods.GET, HttpMethods.POST, HttpMethods.PUT, HttpMethods.DELETE, HttpMethods.HEAD },
                AllowedOrigins = new string[] { "*" }
            }
        }
    });

    var cloudfrontOAI = new OriginAccessIdentity(this, "CloudfrontOAI", new OriginAccessIdentityProps() {
        Comment = "Allows cloudfront access to S3"
    });

    bucket.AddToResourcePolicy(new PolicyStatement(new PolicyStatementProps() {
        Sid = "Grant cloudfront origin access identity access to s3 bucket",
        Actions = new [] { "s3:GetObject" },
        Resources = new [] { bucket.BucketArn + "/*" },
        Principals = new [] { cloudfrontOAI.GrantPrincipal }
    }));

    // The cloudfront distribution for the website
    var distribution = new CloudFrontWebDistribution(this, "WebAppStaticWebsiteDistribution", new CloudFrontWebDistributionProps() {
        ViewerProtocolPolicy = ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        DefaultRootObject = "index.html",
        PriceClass = PriceClass.PRICE_CLASS_ALL,
        GeoRestriction = GeoRestriction.Whitelist(new [] {
            "IN"
        }),
        OriginConfigs = new [] {
            new SourceConfiguration() {
                CustomOriginSource = new CustomOriginConfig() {
                    OriginProtocolPolicy = OriginProtocolPolicy.HTTPS_ONLY,
                    DomainName = api_domain,
                    AllowedOriginSSLVersions = new OriginSslPolicy[] { OriginSslPolicy.TLS_V1_2 },
                },
                Behaviors = new IBehavior[] {
                    new Behavior() {
                        IsDefaultBehavior = false,
                        PathPattern = $"/{api_stage}/r/*",
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS,
                        DefaultTtl = Duration.Seconds(0),
                        ForwardedValues = new CfnDistribution.ForwardedValuesProperty() {
                            QueryString = true,
                            Headers = new string[] { "Authorization" }
                        }
                    }
                }
            },
            new SourceConfiguration() {
                S3OriginSource = new S3OriginConfig() {
                    S3BucketSource = bucket,
                    OriginAccessIdentity = cloudfrontOAI
                },
                Behaviors = new [] {
                    new Behavior() {
                        IsDefaultBehavior = true,
                        //PathPattern = "/*",
                        DefaultTtl = Duration.Seconds(0),
                        Compress = false,
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS
                    }
                },
            }
        }
    });

    // The distribution domain name - output
    var domainNameOutput = new CfnOutput(this, "WebAppStaticWebsiteDistributionDomainName", new CfnOutputProps() {
        Value = distribution.DistributionDomainName
    });

    // The S3 bucket deployment for the website
    var deployment = new BucketDeployment(this, "WebAppStaticWebsiteDeployment", new BucketDeploymentProps() {
        Sources = new [] { Source.Asset("./website/dist") },
        DestinationBucket = bucket,
        Distribution = distribution
    });
}
I am encountering the following error (extracted from the browser console error log):
bundle.js:67 POST https://mywebapp.cloudfront.net/r/api/Validate 405
bundle.js:67
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>MethodNotAllowed</Code>
<Message>The specified method is not allowed against this resource.</Message>
<Method>POST</Method>
<ResourceType>OBJECT</ResourceType>
<RequestId>xxxxx</RequestId>
<HostId>xxxxxxxxxxxxxxx</HostId>
</Error>
The intended flow is that the POST call (made using the fetch() API) to https://mywebapp.cloudfront.net/r/api/Validate is forwarded to the RestApi backend by CloudFront. The S3-style XML error suggests the request is instead being served by the S3 origin, which rejects the POST.
What am I missing? How do I make this work?
This was fixed by doing the following:
Moving to the Distribution construct (which, per the AWS documentation, is the construct to use, as it receives the latest updates).
Adding a CachePolicy and an OriginRequestPolicy to control cookie and header forwarding.
Previously, I used to create a .csv file, store it on the local system where the application runs, and then send it to the AWS S3 bucket.
Now I don't want to create any file on the application side. I want to write my data to the S3 bucket as a .csv file directly. Is that possible? If so, please show me how.
There is a way to achieve this: send the data as an InputStream.
In my case, I made the following changes:
[1] I updated my bean class to override the toString() method as follows:
public class Employee {
    String Id;
    String employeeNo;
    String name;

    // getters and setters

    @Override
    public String toString() {
        return "{" + Id + "," + employeeNo + "," + name + "}";
    }
}
[2] I took the list of data and converted it into a List<String[]>:
List<Employee> employeeList=//get employee list
List<String[]> csvData = toStringArray(employeeList);
My toStringArray() returns a List<String[]>:
private List<String[]> toStringArray(List<Employee> employeeList) {
    List<String[]> records = new ArrayList<>();
    // add the header row to the csv
    records.add(new String[] { "EmployeeId", "EmployeeNo", "Name" });
    // add the data rows to the csv
    for (Employee data : employeeList) {
        records.add(new String[] { data.getId(), data.getEmployeeNo(), data.getName() });
    }
    return records;
}
[3] Then I converted this data to a String:
String strCsvData=writeCsvAsString(csvData);
My writeCsvAsString() method is:
public String writeCsvAsString(List<String[]> csvData) {
    StringWriter s = new StringWriter();
    CSVWriter writer = new CSVWriter(s);
    writer.writeAll(csvData);
    try {
        writer.close();
    } catch (IOException e) {
        logger.error("Failed to close CSV writer", e);
    }
    String finalString = s.toString();
    logger.debug("Actual data:- {}", finalString);
    return finalString;
}
Finally, I converted the String into an InputStream and sent it to the S3 bucket with the contentType metadata set to "text/csv":
InputStream targetStream = new ByteArrayInputStream(strCsvData.getBytes());
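The round trip from String to InputStream can be verified without touching S3 at all. A stdlib sketch (the actual putObject call and its metadata are omitted; the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CsvStreamSketch {
    // Wraps the CSV text in an InputStream, which is the shape the S3
    // putObject overloads accept instead of a local file.
    static InputStream toStream(String csv) {
        return new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        String csv = "EmployeeId,EmployeeNo,Name\n1,E100,Alice\n";
        InputStream in = toStream(csv);
        String back = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(back.equals(csv)); // true
    }
}
```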
I am using ASP.NET Core and the AWSSDK.S3 NuGet package.
I am able to upload a file by providing the accessKeyID, secretKey, bucketName, and region, like this:
var credentials = new BasicAWSCredentials(accessKeyID, secretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    var request = new PutObjectRequest
    {
        AutoCloseStream = true,
        BucketName = bucketName,
        InputStream = storageStream,
        Key = fileName
    };
}
But now I am given only a URL to upload the file to:
11.11.11.111:/aa-bb-cc-dd-useast1
How do I upload a file through this URL? I am new to AWS and would be grateful for some help.
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);
                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message: '{0}'", e.Message);
            }
        }
    }
}
https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html
You can use the provided access point in place of the bucket name.
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/TPutObjectRequest.html