Convert .tif image on AWS S3 to base64 string in C# - amazon-web-services

I have a .tif image stored on AWS S3 at a known path. Because some browsers can't display .tif files, I need to convert it to a base64 string.
Locally this works fine, but when I deploy my website to AWS, the generated base64 string is different from the local one, so the image won't display.
This is my code:
byte[] data = (new WebClient()).DownloadData(filePath);
using (var ms = new MemoryStream(data))
{
    var image = Image.FromStream(ms);
    image.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
    byte[] imageBytes = ms.ToArray();
    string base64 = Convert.ToBase64String(imageBytes);
}
Does anybody have experience with this problem?
Thank you very much!

I noticed that you are reusing the same MemoryStream as both the source you decode the TIFF from and the destination for image.Save(). The PNG bytes get written into a buffer that still holds the TIFF data, so ms.ToArray() returns a corrupted mixture of the two (and a MemoryStream constructed over a byte[] can't grow, which can also make Save() fail).
I think you should save to a separate MemoryStream and call ToArray() on that one instead.
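For what it's worth, the same round trip can be sketched in Java to make the fix concrete; note the two separate streams, one read from and a fresh one written to. (TIFF decoding needs Java 9+ or an ImageIO plugin, so the self-contained demo below uses an in-memory PNG as the source instead.)

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import javax.imageio.ImageIO;

public class TiffToBase64Png {
    // Decode image bytes (e.g. a TIFF fetched from S3), re-encode as PNG,
    // and return the PNG as a base64 string. The decode source and the
    // encode destination are two separate streams.
    static String toBase64Png(byte[] imageBytes) throws Exception {
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));
        ByteArrayOutputStream pngOut = new ByteArrayOutputStream(); // fresh output stream
        ImageIO.write(image, "png", pngOut);
        return Base64.getEncoder().encodeToString(pngOut.toByteArray());
    }

    public static void main(String[] args) throws Exception {
        // Demo with a tiny in-memory image in place of the S3 download.
        BufferedImage demo = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream src = new ByteArrayOutputStream();
        ImageIO.write(demo, "png", src);
        String base64 = toBase64Png(src.toByteArray());
        System.out.println(base64.startsWith("iVBOR")); // PNG magic bytes in base64 -> true
    }
}
```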

Related

Problems decompressing gzip

I'm trying to use a gzip C++ library to decompress some text that I compressed using a website that had a tool for it, but when I try to decompress it in my project it says the data is not compressed and fails to decompress. Am I just misunderstanding these compression formats because the names are the same, or is this some other issue I'm not aware of?
//'test message' compressed using the website
std::string test_string = R"(eJwrSS0uUchNLS5OTE8FAB8fBMY=)";
//returns false
bool is_compressed = gzip::is_compressed(test_string.data(), test_string.size());
//crashes
std::string decompressed = gzip::decompress(test_string.data(), test_string.size());
The website outputs a Base64-encoded string as ASCII text, not the raw byte array. I need to decode the Base64 encoding before trying to decompress.
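As a cross-language sketch of the decode-then-decompress round trip: after Base64 decoding, the bytes begin 0x78 0x9C, which is a zlib header rather than gzip's 0x1F 0x8B, so the data is a zlib stream (many web tools produce this). The JDK's Inflater handles the zlib wrapper directly:

```java
import java.util.Base64;
import java.util.zip.Inflater;

public class DecodeThenInflate {
    public static void main(String[] args) throws Exception {
        String test = "eJwrSS0uUchNLS5OTE8FAB8fBMY=";
        // Step 1: the website gave us Base64 text, so decode it to bytes first.
        byte[] raw = Base64.getDecoder().decode(test);
        // First two bytes are 0x78 0x9C: a zlib header, not gzip's 0x1F 0x8B.
        System.out.printf("%02x %02x%n", raw[0] & 0xFF, raw[1] & 0xFF);
        // Step 2: inflate the zlib stream.
        Inflater inflater = new Inflater(); // default mode expects the zlib wrapper
        inflater.setInput(raw);
        byte[] out = new byte[64];
        int n = inflater.inflate(out);
        inflater.end();
        System.out.println(new String(out, 0, n, "UTF-8")); // prints "test message"
    }
}
```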

Is it possible to write to S3 via a stream using the S3 Java SDK

Normally when a file has to be uploaded to S3, it first has to be written to disk before using something like the TransferManager API to upload it to the cloud. This can cause data loss if the upload does not finish in time (the application goes down and restarts on a different server, etc.). So I was wondering if it's possible to write directly to a stream across the network, with the required cloud location as the sink.
You don't say what language you're using, but I'll assume Java based on your capitalization. In which case the answer is yes: TransferManager has an upload() method that takes a PutObjectRequest, and you can construct that object around a stream.
However, there are two important caveats. The first is in the documentation for PutObjectRequest:
When uploading directly from an input stream, content length must be specified before data can be uploaded to Amazon S3
So you have to know how much data you're uploading before you start. If you're receiving an upload from the web and have a Content-Length header, then you can get the size from it. If you're just reading a stream of data that's arbitrarily long, then you have to write it to a file first (or the SDK will).
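To make that first caveat concrete: when the total size isn't known up front, one workaround (assuming the data fits in memory) is to buffer the stream fully and take the buffer's length, which is the value you would then pass to ObjectMetadata.setContentLength(...) before building the PutObjectRequest. A minimal sketch with the SDK call itself omitted:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ContentLengthFirst {
    // Buffer an arbitrary stream so its length is known before the upload
    // request is built (the SDK needs Content-Length up front for streams).
    static byte[] bufferFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "some log data of unknown length".getBytes("UTF-8");
        byte[] buffered = bufferFully(new ByteArrayInputStream(payload));
        // buffered.length is what you'd give ObjectMetadata.setContentLength(...)
        System.out.println(buffered.length); // prints 31
    }
}
```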
The second caveat is that this really doesn't prevent data loss: your program can still crash in the middle of reading data. One thing that it will prevent is returning a success code to the user before storing the data in S3, but you could do that anyway with a file.
Surprisingly this is not possible (at the time of writing) with the standard Java SDK. However, thanks to this third-party library you can at least avoid buffering huge amounts of data in memory or on disk, since it internally buffers ~5 MB parts and uploads them automatically as a multipart upload for you.
There is also an open GitHub issue in the SDK repository that one can follow for updates.
It is possible:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .build();
s3Client.putObject("bucket", "key", yourInputStream, s3Metadata);
AmazonS3.putObject
public void saveS3Object(String key, InputStream inputStream) throws Exception {
    List<PartETag> partETags = new ArrayList<>();
    InitiateMultipartUploadRequest initRequest =
            new InitiateMultipartUploadRequest(bucketName, key);
    InitiateMultipartUploadResult initResponse = s3.initiateMultipartUpload(initRequest);
    int partSize = 5242880; // 5 MB: the minimum size for every part except the last.
    try {
        byte[] b = new byte[partSize];
        int len;
        int i = 1;
        // Caveat: read() may return fewer bytes than requested even mid-stream,
        // and every part except the last must be at least 5 MB, so a production
        // version should keep reading until the buffer is full.
        while ((len = inputStream.read(b)) != -1) {
            // The last part can be less than 5 MB.
            ByteArrayInputStream partInputStream = new ByteArrayInputStream(b, 0, len);
            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName).withKey(key)
                    .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                    .withInputStream(partInputStream)
                    .withPartSize(len);
            partETags.add(s3.uploadPart(uploadRequest).getPartETag());
            i++;
        }
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                bucketName,
                key,
                initResponse.getUploadId(),
                partETags);
        s3.completeMultipartUpload(compRequest);
    } catch (Exception e) {
        s3.abortMultipartUpload(new AbortMultipartUploadRequest(
                bucketName, key, initResponse.getUploadId()));
        throw e; // don't swallow the failure
    }
}

Character encoding of the server logs in S3

As the title suggests, I need to know the encoding of the data in the server logs.
I am getting the server logs using S3ObjectInputStream, as follows:
AmazonS3Client as3c;
S3ObjectInputStream is = as3c.getObject(bucketName, key).getObjectContent();
// Read it for processing using a buffered reader. I need the character
// encoding (charset, e.g. UTF-8, UTF-16, etc.) of the data in the object
// to pass to InputStreamReader.
BufferedReader br = new BufferedReader(new InputStreamReader(is, ..unknown..));
In the docs I only see a getContentEncoding() method, but I don't think it fits my purpose.
Useful references:
ObjectMetadata
AmazonS3Interface
Did you check the other constructors of InputStreamReader? There is one that takes only the InputStream and falls back to the platform default charset:
http://docs.oracle.com/javase/7/docs/api/java/io/InputStreamReader.html
As far as I know, objects in S3 are saved as raw bytes in whatever encoding the writer chose; S3 itself doesn't record a charset. I would suggest trying UTF-8 first. Note that UnsupportedEncodingException only means the charset name is unknown to the JVM, not that the data is in a different encoding, so to detect wrongly encoded data you need a decoder that reports malformed input.
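A short sketch of that last point: a CharsetDecoder configured with CodingErrorAction.REPORT throws on bytes that are not valid UTF-8, instead of silently substituting replacement characters the way the convenience constructors do:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictUtf8Check {
    // Decode bytes as UTF-8, failing loudly on malformed input.
    static String decodeStrict(byte[] bytes) throws CharacterCodingException {
        CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        return dec.decode(ByteBuffer.wrap(bytes)).toString();
    }

    public static void main(String[] args) throws Exception {
        // Valid UTF-8 decodes cleanly.
        System.out.println(decodeStrict("héllo".getBytes(StandardCharsets.UTF_8)));
        try {
            // 0xC3 starts a 2-byte sequence, but 0x28 is not a continuation byte.
            decodeStrict(new byte[] { (byte) 0xC3, (byte) 0x28 });
        } catch (CharacterCodingException e) {
            System.out.println("not valid UTF-8");
        }
    }
}
```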

Why won't an image generated from base64 binary data display?

I'm trying to grab an image from a remote location, resize it, and save it to Amazon S3.
Problem is, I can save the image to S3 fine, but when I try to display it, the browser says the image can't be displayed because it contains errors. I'm sure this is due to my doing the following:
a) Grab image from remote location:
<cfhttp timeout="45"
throwonerror="no"
url="#variables.testFilePath#"
method="get"
useragent="Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12"
getasbinary="yes"
result="objGet">
b) Create image to validate and resize
<cfset objImage = ImageNew(objGet.FileContent)>
c) After resizing, I convert the resized image back to binary data, because the following function call into S3.cfc needs a valid file path from which to read the image again as binaryData. I'm doing this:
<cfset variables.filekey = toBase64( objImage )>
instead of this:
<cffile action="readBinary" file="#arguments.uploadDir##arguments.fileKey#" variable="binaryFileData">
because I can't get a valid arguments.uploadDir to work. So instead of re-reading the image from disk, I thought I'd just convert it back and save that to S3.
This converts my image into a base64 string, which I then save at S3.
Question:
Can someone tell me what I'm doing wrong? I guess it's the getasbinary / toBase64 handling, but I'm not sure.
Thanks for help!
You don't need to convert it back at all. Just use the cfimage tag to write the resized image to disk, then use that path to send the image off to Amazon, and delete the file when you're done with it.

How can I create an Image in GDI+ from a Base64-Encoded string in C++?

I have an application, currently written in C#, which can take a Base64-encoded string and turn it into an Image (a TIFF image in this case), and vice versa. In C# this is actually pretty simple.
private byte[] ImageToByteArray(Image img)
{
    MemoryStream ms = new MemoryStream();
    img.Save(ms, System.Drawing.Imaging.ImageFormat.Tiff);
    return ms.ToArray();
}
private Image byteArrayToImage(byte[] byteArrayIn)
{
    // The stream is constructed over the bytes already; no extra
    // BinaryWriter pass is needed.
    MemoryStream ms = new MemoryStream(byteArrayIn);
    Image returnImage = Image.FromStream(ms, true, false);
    return returnImage;
}
// Convert Image into string
byte[] imagebytes = ImageToByteArray(anImage);
string Base64EncodedStringImage = Convert.ToBase64String(imagebytes);
// Convert string into Image
byte[] imagebytes = Convert.FromBase64String(Base64EncodedStringImage);
Image anImage = byteArrayToImage(imagebytes);
(and, now that I'm looking at it, could be simplified even further)
I now have a business need to do this in C++. I'm using GDI+ to draw the graphics (Windows only so far) and I already have code to decode the string in C++ (to another string). What I'm stumbling on, however, is getting the information into an Image object in GDI+.
At this point I figure I need either
a) A way of converting that Base64-decoded string into an IStream to feed to the Image object's FromStream function
b) A way to convert the Base64-encoded string into an IStream to feed to the Image object's FromStream function (so, different code than I'm currently using)
c) Some completely different way I'm not thinking of here.
My C++ skills are very rusty and I'm also spoiled by the managed .NET platform, so if I'm attacking this all wrong I'm open to suggestions.
UPDATE: In addition to the solution I've posted below, I've also figured out how to go the other way if anyone needs it.
OK, using the info from the Base64 decoder I linked and the example Ben Straub linked, I got it working:
using namespace Gdiplus;    // Using GDI+
Graphics graphics(hdc);     // Get this however you get this
std::string encodedImage = "<Your Base64 Encoded String goes here>";
std::string decodedImage = base64_decode(encodedImage); // using the base64 library I linked
DWORD imageSize = (DWORD)decodedImage.length();
HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize);
LPVOID pImage = ::GlobalLock(hMem);
memcpy(pImage, decodedImage.c_str(), imageSize);
::GlobalUnlock(hMem); // unlock as soon as the copy is done
IStream* pStream = NULL;
// FALSE: the stream does not free hMem for us, so we free it ourselves --
// but only after the Image that reads from the stream has been destroyed.
::CreateStreamOnHGlobal(hMem, FALSE, &pStream);
{
    Image image(pStream);
    graphics.DrawImage(&image, destRect);
}
pStream->Release();
::GlobalFree(hMem);
I'm sure it can be improved considerably, but it works.
This should be a two-step process. First, decode the base64 into raw binary (the bytes you would have had if you'd loaded the TIFF from a file). The first Google result for this looks pretty good.
Second, you'll need to convert those bytes into a Bitmap object. I followed this example when I had to load images from a resource table.