I am uploading files to an Amazon S3 bucket. The files are being uploaded, but I get the following warning:
WARNING: No content length specified for stream data. Stream contents
will be buffered in memory and could result in out of memory errors.
So I added the following line to my code:
metaData.setContentLength(IOUtils.toByteArray(input).length);
But then I got the following message; I don't even know whether it is a warning or an error:
Data read has a different length than the expected: dataLength=0;
expectedLength=111992; includeSkipped=false; in.getClass()=class
sun.net.httpserver.FixedLengthInputStream; markedSupported=false;
marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
How can I set the content length on the metadata for an InputStream? Any help would be greatly appreciated.
When you read the data with IOUtils.toByteArray, you consume the InputStream, so when the AWS API later tries to read it, the stream is already exhausted: that is the dataLength=0 in the message.
Read the contents into a byte array and provide an InputStream wrapping that array to the API:
byte[] bytes = IOUtils.toByteArray(input);
metaData.setContentLength(bytes.length);
// Wrap the buffered bytes in a fresh, re-readable stream for the SDK.
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, byteArrayInputStream, metaData);
client.putObject(putObjectRequest);
You should consider using the multipart upload API to avoid loading the whole InputStream into memory. For example:
byte[] bytes = new byte[BUFFER_SIZE]; // at least 5 MB: S3 rejects non-final parts smaller than that
String uploadId = client.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
int bytesRead = 0;
int partNumber = 1;
List<UploadPartResult> results = new ArrayList<>();
bytesRead = input.read(bytes);
while (bytesRead >= 0) {
    UploadPartRequest part = new UploadPartRequest()
            .withBucketName(bucket)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(partNumber)
            .withInputStream(new ByteArrayInputStream(bytes, 0, bytesRead))
            .withPartSize(bytesRead);
    results.add(client.uploadPart(part));
    bytesRead = input.read(bytes);
    partNumber++;
}
CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withUploadId(uploadId)
        .withPartETags(results);
client.completeMultipartUpload(completeRequest);
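A caveat worth adding (not from the original answer): if the upload fails partway through, the already-uploaded parts keep consuming billable storage until the upload is aborted. A minimal cleanup sketch, assuming the same SDK v1 client, bucket, key, and uploadId as above:

import com.amazonaws.SdkClientException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;

// Hypothetical helper: call this from a catch block around the upload loop
// so a failed upload doesn't leave orphaned parts behind.
static void abortQuietly(AmazonS3 client, String bucket, String key, String uploadId) {
    try {
        client.abortMultipartUpload(
                new AbortMultipartUploadRequest(bucket, key, uploadId));
    } catch (SdkClientException e) {
        // Best effort; a bucket lifecycle rule can eventually clean up leftovers.
    }
}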
Note that by reading the stream into a byte array you are simply doing by hand what the AWS SDK already did for you automatically! It still buffers the entire stream in memory, so it is no better than the original code that produced the warning from the SDK.
You can only get rid of the memory problem if you have another way of knowing the length of the stream in advance, for instance, when you create the stream from a file:
void uploadFile(String bucketName, File file) throws IOException {
    try (final InputStream stream = new FileInputStream(file)) {
        ObjectMetadata metadata = new ObjectMetadata();
        // The file system already knows the length; no buffering required.
        metadata.setContentLength(file.length());
        s3client.putObject(
                new PutObjectRequest(bucketName, file.getName(), stream, metadata)
        );
    }
}
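As a side note (mine, not part of the original answer), SDK v1 can also handle the file case entirely by itself via the putObject overload that accepts a File and reads the length from the file system:

// The SDK determines the Content-Length from the file itself.
s3client.putObject(bucketName, file.getName(), file);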
Breaking News! AWS SDK 2.0 has built-in support for uploading files:
s3client.putObject(
(builder) -> builder.bucket(myBucket).key(file.getName()),
RequestBody.fromFile(file)
);
There are also RequestBody methods that take Strings or byte buffers and set the Content-Length automatically and efficiently. Only when you have some other kind of InputStream do you still need to provide the length yourself; however, that case should be rarer now, given all the other options available.
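To illustrate those 2.x variants, a small sketch of mine (not from the answer; the payload and stream names are stand-ins): fromString and fromBytes derive the length from the content, while fromInputStream takes an explicit length for the remaining cases.

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import software.amazon.awssdk.core.sync.RequestBody;

byte[] payload = "hello".getBytes();                        // example data
InputStream someStream = new ByteArrayInputStream(payload); // stand-in stream

// Length is derived from the content itself:
RequestBody fromText = RequestBody.fromString("hello");
RequestBody fromBytes = RequestBody.fromBytes(payload);

// For a raw stream you must still supply the Content-Length yourself:
RequestBody fromStream = RequestBody.fromInputStream(someStream, payload.length);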
Related
Task: Upload an already-compressed file to an Amazon AWS S3 bucket using the AWSSDK.S3 package in .NET Core, and set the Content-Encoding header to "gzip".
I have an S3 bucket that I am trying to upload to. I'm using an IO stream for the data, but there seems to be no way to set the item's Content-Encoding during the upload process. This is the header I want the item to have when it is served up by S3 (it is not the Content-Encoding of the upload request itself).
I am using the TransferUtility and TransferUtilityUploadRequest objects. While you can set Content-Type just fine (see below), there is no property for setting Content-Encoding. If you use Metadata.Add, it automatically rewrites Content-Encoding to a custom "x-amz-meta-content-encoding" key, which is of course useless.
From what I understand, you can't set metadata on an existing S3 object except by copying the object or doing it manually (a no-go for me: too many files). And even when copying the object, I'm not sure I'd be able to set this header anyway.
var s3Client = new AmazonS3Client(awsCredentials, bucketRegion);
var fileTransferUtility = new TransferUtility(s3Client);
byte[] compressedDataArray = <some compressed data>;
using (MemoryStream memoryStream = new MemoryStream(compressedDataArray))
{
    var fileTransferUtilityRequest = new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        ContentType = "application/json",
        StorageClass = S3StorageClass.Standard,
        Key = fileKey + ".gz",
        CannedACL = S3CannedACL.PublicRead,
        InputStream = memoryStream,
        AutoCloseStream = true
    };
    fileTransferUtilityRequest.Metadata.Add("Content-Encoding", "gzip");
    fileTransferUtility.Upload(fileTransferUtilityRequest);
}
It uploads the new file to S3, but the Content-Encoding header is not set on it.
Please help, anyone! Thanks!
Okay, what seems to work is setting ContentEncoding on the Headers collection of the request:
fileTransferUtilityRequest.Headers.ContentEncoding = "gzip";
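For what it's worth (my note, not the answerer's): values set through the Headers collection are sent as real HTTP headers on the upload, which is why ContentEncoding survives, whereas anything added through Metadata is always stored under an x-amz-meta- prefix, exactly as described in the question.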
I am trying to use the @google-cloud/kms Node.js library.
I was expecting the end result to be encrypted text, but what I get is a Buffer if I don't base64-decode it, or, if I do, something like $�k+��l�k��:
Does someone know what's wrong, or is my expectation about the encrypted text mistaken?
exports.encryptKMS = async (req, res) => {
    let message = req.query.message || req.body.message || 'Hello World!';

    const projectId = 'xxx';
    const kms = require('@google-cloud/kms');
    const client = new kms.KeyManagementServiceClient();

    const locationId = 'global';
    const keyRingId = 'Test-Ring-01';
    const cryptoKeyId = 'Test-Crypto-01';
    const cryptoKeyPath = client.cryptoKeyPath(
        projectId,
        locationId,
        keyRingId,
        cryptoKeyId
    );

    const [result] = await client.encrypt({name: cryptoKeyPath, plaintext: message});
    console.log(result);
    const cryptoText = Buffer.from(result.ciphertext, 'base64').toString('utf-8');
    console.log(cryptoText);
    res.status(200).send(cryptoText);
};
This looks correct to me. The encrypted result is binary data (returned base64-encoded), and you won't be able to see any structure in it; decoding it as text will always look like garbage. If you take that result and feed it back into Decrypt, you should recover the original message.
If you want a readable string, please provide more info: there is such a thing as "format-preserving encryption", which encrypts data so that the ciphertext stays in a format compatible with the input, for systems that aren't ready to handle encrypted data. Cloud KMS doesn't support format-preserving encryption, but I may have some suggestions depending on what you need.
Thanks for using GCP and Cloud KMS!
I am trying to read and print the contents of a file from an S3 bucket using the AWS Java SDK. I have a presigned URL that lets me access (and download) the file, but I am unable to read the file using that presigned URL.
I am looking to do something similar to the code snippet below:
public void readFromS3(String bucketName, String key) throws IOException {
    S3Object s3object = s3.getObject(new GetObjectRequest(bucketName, key));
    System.out.println(s3object.getObjectMetadata().getContentType());
    System.out.println(s3object.getObjectMetadata().getContentLength());

    BufferedReader reader = new BufferedReader(new InputStreamReader(s3object.getObjectContent()));
    String line;
    while ((line = reader.readLine()) != null) {
        // could also copy the content to a local file using a buffered writer
        System.out.println(line);
    }
}
The URL I have access to lets me download the file.
I have also looked at the following references, with no success:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/GetObjectRequest.html
Can someone please help?
Thanks in advance!
Using URLConnection is probably the simplest way; as others have pointed out, it's just a regular HTTP URL at this point.
BufferedReader reader = new BufferedReader(new InputStreamReader(URI.create(presignedUrl).toURL().openConnection().getInputStream()));
If you have a pre-signed URL, you don't need the AWS SDK to access the S3 object at all.
As @EricNord commented, the URL itself provides the authentication with S3: the signed token appended to its query parameters is what authorizes the request.
A basic HTTP client will be able to read the contents of the URL.
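To make that concrete, here is a small self-contained sketch of mine (modeled on the readFromS3 method in the question) that reads the presigned URL line by line with plain java.net and no AWS SDK:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;

public static void readFromPresignedUrl(String presignedUrl) throws IOException {
    // A presigned URL is plain HTTPS: open it like any other web resource.
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
            URI.create(presignedUrl).toURL().openStream()))) {
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
    }
}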
You could very well do this with TransferManager :)

String presignedURL = "presignedURL";
String targetDestination = "fileLocation";
// Note: waitForCompletion() throws InterruptedException, which the caller must handle.
new TransferManager().download(new PresignedUrlDownloadRequest(new URL(presignedURL)),
        new File(targetDestination)).waitForCompletion();
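A follow-up note of mine, not part of the answer: TransferManager owns a thread pool, so a long-lived application should reuse a single instance and call shutdownNow() when it is done with it, rather than constructing a new one per download as in the snippet.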
Original Question
I have tried what you suggested, but I have run into a few issues.
Here is what I did: filePath = Capture.capturePhoto(1024, -1);
I had an issue passing S3_BUCKET_URL to the MultipartRequest constructor, so I used rq.setUrl(S3_BUCKET_URL) instead.
I had to add rq.setHttpMethod("PUT"); // otherwise I got a 405: Method Not Allowed error
Finally I got no errors, and I did see a sub-folder created under my bucket. I set the URL to "https://s3.amazonaws.com/bucket123/test/" and saw the test sub-folder created, but no image file was uploaded. The folder size wasn't 0 (it showed the size of the file), yet no image file could be found in that sub-folder.
When I tried to access the folder through an S3 explorer, I got Access Denied. 1. I manually created a sub-folder using the S3 explorer and gave it read and write permissions, yet once the uploadFileToS3 method was called from Codename One, those permissions were lost.
2. So I changed the ACL to "public-read-write", with the same effect.
Please advise if I am missing anything.
Thank you.
Modified Question
I have modified the old question and am following it up with Sahi's answer.
// Modified the URL as follows - full path including bucket name and key
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/myfile.jpg";

// String filePath = captured from Camera;
public void uploadFileToS3(String filePath, String uniqueFileName) {
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    //rq.addArgument("acl", "private");
    //rq.addArgument("key", uniqueFileName);
    rq.addData(uniqueFileName, filePath, "image/jpeg");
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
Now I do see that the file has been uploaded to the bucket as myfile.jpg. The bucket itself has read and write rights, and those permissions are applied to its sub-folders as well.
However, myfile.jpg has lost all permissions, and I am not able to download it even though I can see it in the bucket.
I can reach it by its URL, but the file cannot be opened.
I am not sure what I am missing in the header or the request. I am really trying to get this working but am not getting where I want to be. This is a very important piece of the app I am working on.
Please provide any feedback.
Thank you.
--------------------------------------------------------------------------
Adding code
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";
With the URL above, my-bucket contained an unknown object (no folder icon), but I did see that its size matched the actual image size.
// String filePath = captured from Camera;
public void uploadFileToS3(String filePath, String uniqueFileName) {
    // uniqueFileName includes a sub-folder - subfolder/my-image.jpg
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    rq.addArgument("acl", "public-read-write");
    rq.addArgument("key", uniqueFileName);
    rq.addData(uniqueFileName, filePath, "image/jpeg");
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
It looks like I am not uploading the object properly. I went through the Amazon documentation and couldn't find anything about uploading over plain HTTP. I am really stuck and hope to get this resolved. Do you have any working code that simply uploads to S3?
More details: I am adding more detail to the question, as I am still not able to resolve this.
// Original S3_BUCKET_URL
String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";
// With this URL I am getting a 400: Bad Request error.

// I added the uniqueFilename (key) to the URL:
String uniqueFilename = "imageBucket/image12345.jpg";
String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/" + uniqueFilename;
// Now I do see the file; the sub-folder inherits the rights
// from the bucket (read, write), but the image file does not.
// When I click on the file using an S3 client browser I get
// an Access Denied prompt.

// my function to call the S3 bucket
public void takePicture() {
    String stringImg = Capture.capturePhoto(1024, -1);
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    rq.addArgument("key", uniqueFilename);
    rq.addArgument("acl", "public-read-write");
    rq.setHttpMethod("PUT"); // without this I get a 405: Method Not Allowed error
    rq.addData("file", stringImg, "image/jpeg"); // image captured from camera
    rq.setFilename("file", uniqueFilename);
    NetworkManager.getInstance().addToQueue(rq); // was addToQueue() - missing the request argument
}
At this point I believe I have tried everything suggested, and I am still getting the same error. I don't think I am doing anything wrong, and I don't want to believe I am the only one trying to do this :)
I really need your help to get this resolved in order to keep this project going. I am also wondering if there is any Amazon S3 SDK or library I can use from Codename One.
Please review this and let me know if this can really be achieved with Codename One.
Thank you.
I've tested this code and it should work. However, it depends on a fix to Codename One to deal with some stupid behavior of the Amazon API:
Form hi = new Form("Upload");
Button upload = new Button("Upload");
hi.add(upload);
upload.addActionListener(e -> {
    String file = Capture.capturePhoto();
    if (file != null) {
        try {
            String uniqueFileName = "Image" + System.currentTimeMillis() + ".jpg";
            MultipartRequest rq = new MultipartRequest();
            rq.setUrl("https://s3.amazonaws.com/my-bucket/");
            rq.addArgument("key", uniqueFileName);
            rq.addArgument("acl", "private");
            rq.addData("file", file, "image/jpeg");
            rq.setReadResponseForErrors(true);
            NetworkManager.getInstance().addToQueueAndWait(rq);
            if (rq.getResponseCode() != 200 && rq.getResponseCode() != 204) {
                System.out.println("Error response: " + new String(rq.getResponseData()));
            }
            ToastBar.showMessage("Upload completed", FontImage.MATERIAL_INFO);
        } catch (IOException err) {
            Log.e(err);
        }
    }
});
hi.show();
When running the code above (with POST) I didn't get the error you described; instead I got an error from Amazon indicating the key field wasn't in the right order. Apparently Amazon relies on the order of the submitted fields, which is damn stupid. Unfortunately we ignore that order when you add an argument, and since this is a POST request, the code to work around it is non-trivial.
I've made a fix to Codename One that respects the natural order in which arguments are added, but since we already made this week's release, it will only go in next Friday (December 23rd, 2016). The code above should work as expected following that update.
Older answer for reference
I haven't tested this but something like this should work:
private static final String S3_BUCKET_URL = "https://s3.amazonaws.com/my-bucket/";

public void uploadFileToS3(String filePath, String uniqueFileName) {
    MultipartRequest rq = new MultipartRequest();
    rq.setUrl(S3_BUCKET_URL);
    rq.addArgument("acl", "private");
    rq.addArgument("key", uniqueFileName);
    rq.addData("file", filePath, "image/jpeg");
    rq.setFilename("file", uniqueFileName);
    rq.setReadResponseForErrors(true);
    NetworkManager.getInstance().addToQueue(rq);
}
Notice that this doesn't do any authorization, so it assumes the bucket is at least publicly writable.
Generate a presigned URL on your server, then pass that URL to the mobile client:
IAmazonS3 client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Generate a pre-signed URL that expires shortly.
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = objectKey,
    Verb = HttpVerb.PUT,
    Expires = DateTime.Now.AddMinutes(5)
};
string url = client.GetPreSignedURL(request);
The mobile client can then upload a file to that URL using a plain old HTTP PUT request:
// Upload a file using the pre-signed URL.
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
    // Copy the object's bytes into the request body
    // (filePath is assumed here; the original left this step elided).
    using (FileStream fileStream = File.OpenRead(filePath))
    {
        fileStream.CopyTo(dataStream);
    }
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
You can take a look at the official documentation at http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLDotNetSDK.html
I am currently working with Google Apps Script and am attempting to write and sign an HTTP request to AWS CloudWatch.
In the Amazon API documentation here, regarding how to derive a signing key, pseudocode is used to specify that the HMAC algorithm returns output in binary format:
HMAC(key, data) represents an HMAC-SHA256 function
that returns output in binary format.
Google Apps Script offers a method to compute such a hash:
Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256,
data,
key);
but the return type is always a byte array (Byte[]).
How do I convert the Byte[] to the binary data AWS wants? Or is there a vanilla JavaScript function I can use in Google Apps Script to compute the hash?
Thanks
I am quite sure it is a bug that Utilities.computeHmacSignature takes the key as an ASCII string, as there is no way to convert a byte[] to an ASCII string correctly in GAS.
And the library doesn't provide an overload that takes the key as a byte[].
So I use this instead: http://caligatio.github.com/jsSHA/
Just copy SHA.js and SHA-256.js and it works fine.
P.S. This wasted two whole days of my time, so I'm very annoyed.
The conversion from byte array to the binary data required should be simple:
kDate = Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256,
        '20130618', 'AWS4' + kSecret);
kDate = Utilities.newBlob(kDate).getDataAsString();
kRegion = Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256,
        'eu-west-1', kDate);
BUT you should look at this open issue in the bug tracker - there could be some issues with the conversion.
Maybe you could try a String.fromCharCode() loop and avoid negative numbers:
kDateB = Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256,
        '20130618', 'AWS4' + kSecret);
kDate = '';
// Apps Script bytes are signed (-128..127); shift negative values into 0..255
// before mapping them to characters.
for (var i = 0; i < kDateB.length; i++)
    kDate += String.fromCharCode(kDateB[i] < 0 ? 256 + kDateB[i] : kDateB[i]);
kRegion = Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256,
        'eu-west-1', kDate);