Can't upload file to AWS S3 from ASP.NET

fileTransferUtility = new TransferUtility(s3Client);
try
{
    if (file.ContentLength > 0)
    {
        var filePath = Path.Combine(Server.MapPath("~/Files"), Path.GetFileName(file.FileName));
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = bucketName,
            FilePath = filePath,
            StorageClass = S3StorageClass.StandardInfrequentAccess,
            PartSize = 6291456, // 6 MB.
            Key = keyName,
            CannedACL = S3CannedACL.PublicRead
        };
        fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
        fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
        fileTransferUtility.Upload(fileTransferUtilityRequest);
        fileTransferUtility.Dispose();
    }
I'm getting this error:
The file indicated by the FilePath property does not exist!
I tried changing the path to the actual path of the file, C:\Users\jojo\Downloads, but I'm still getting the same error.

(Based on a comment above indicating that file is an instance of HttpPostedFileBase in a web application...)
I don't know where you got Server.MapPath("~/Files") from, but if file is an HttpPostedFileBase that was just uploaded to this web application, its contents are likely in memory and not on your file system; at best they're sitting in a temporary system folder somewhere.
Since your source (the file variable contents) is a stream, before you try to interact with the file system you should see if the AWS API you're using can accept a stream. And it looks like it can.
if (file.ContentLength > 0)
{
    var transferUtility = new TransferUtility(/* constructor params here */);
    transferUtility.Upload(file.InputStream, bucketName, keyName);
}
Note that this is entirely free-hand; I'm not really familiar with AWS interactions, and you'll definitely want to take a look at the constructors on TransferUtility to see which one meets your design. But the point is that what you currently have is a stream from the file already uploaded to your web application, not an actual file on the file system.
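If you still want the storage class, part size, ACL, and metadata from your original request, the plain Upload(stream, bucket, key) overload isn't your only option: TransferUtilityUploadRequest also exposes an InputStream property you can set instead of FilePath. A hedged sketch based on your original code (untested, so double-check it against the SDK version you're on):
if (file.ContentLength > 0)
{
    var uploadRequest = new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        InputStream = file.InputStream, // stream instead of FilePath
        StorageClass = S3StorageClass.StandardInfrequentAccess,
        PartSize = 6291456, // 6 MB.
        Key = keyName,
        CannedACL = S3CannedACL.PublicRead
    };
    uploadRequest.Metadata.Add("param1", "Value1");
    uploadRequest.Metadata.Add("param2", "Value2");

    // 'using' disposes the utility even if the upload throws
    using (var transferUtility = new TransferUtility(s3Client))
    {
        transferUtility.Upload(uploadRequest);
    }
}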
As a fallback, if you can't get the stream upload to work (and you really should; it's the ideal approach here), then your next option is to save the file first and then upload it using the method you have now. So if you're expecting it to be in Server.MapPath("~/Files"), you'd need to save it to that folder first, for example:
file.SaveAs(Path.Combine(Server.MapPath("~/Files"), Path.GetFileName(file.FileName)));
Of course, over time this folder can become quite full and you'd likely want to clean it out.
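If you go that route, a little housekeeping keeps the folder under control. A minimal sketch (my own assumption: files older than a day are safe to delete once their uploads have succeeded):
// Delete saved uploads older than one day; the cutoff is an arbitrary choice.
var folder = Server.MapPath("~/Files");
foreach (var path in Directory.GetFiles(folder))
{
    if (File.GetLastWriteTimeUtc(path) < DateTime.UtcNow.AddDays(-1))
    {
        File.Delete(path);
    }
}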

Related

File payload in a bucket-triggered Google Cloud Function

I have a question about a Google Cloud Function triggered by an event on a storage bucket (I'm developing it in Python).
I have to read the data of the file that was just finalized (a PDF file) on the bucket that is triggering the event. I was looking for the file payload on the event object passed to my function (data, context), but it seems there is no payload on that object.
Do I have to use the Cloud Storage library to get the file from the bucket? Is there a way to get the payload directly from the context of the triggered function?
Enrico
From checking the more complete example in the Firebase documentation, it indeed seems that the payload of the file is not included in the parameters. That makes sense, since there's no telling how big the just-finalized file is, or whether it will even fit in the memory of your Functions runtime.
So you'll have to indeed grab the file from the bucket with a separate call, based on the information in the metadata. The full Firebase example grabs the filename and other info from its context/data with:
exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
  const fileBucket = object.bucket; // The Storage bucket that contains the file.
  const filePath = object.name; // File path in the bucket.
  const contentType = object.contentType; // File content type.
  const metageneration = object.metageneration; // Number of times metadata has been generated. New objects have a value of 1.
  ...
I'll see if I can find a more complete example. But I'd expect it to work similarly on raw Google Cloud Functions, which Firebase wraps, even when using Python.
Update: from looking at this Storage/Function/PubSub documentation that the Python binding is apparently based on, it looks like the path should be available as data['resource'] or as data['name'].

SmtpClient.Send - Could not find a part of the path

I'm trying to write an email to my local folder. I successfully wrote an email to my documents folder using this code:
using (var client = new SmtpClient())
{
    client.UseDefaultCredentials = true;
    client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
    client.PickupDirectoryLocation = tempDocsPath;
    client.Send(message); // Writes to the PickupDirectoryLocation
}
However, when I ported this same code to another project, it gives me this error:
System.Net.Mail.SmtpException : Failure sending mail. ---> System.IO.DirectoryNotFoundException : Could not find a part of the path 'C:\Users\josh.bowdish\source\repos\GenerateEmail\GenerateEmail\bin\Debug\net461\tempFiles\AAMkAGUyODNhN2JkLThlZWQtNDE4MS1hODM1LWU0ZDY4Y2NhYmMxOQBGAAAAAABKB1jlHZSIQZSWN7AYZH2SBwDZdOTdKcayQ5NMwcwkNT7UAAAAAAEMAADZdOTdKcayQ5NMwcwkNT7UAACn\0a5b24a5-d625-4ecd-9990-af5654679820.eml'.
I've verified that the directory it's trying to write to exists, even rewrote it to look like this:
private static string WriteEmail(MailMessage message, string messageDirectory)
{
    if (Directory.Exists(messageDirectory))
    {
        using (var client = new SmtpClient())
        {
            client.UseDefaultCredentials = true;
            client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
            client.PickupDirectoryLocation = messageDirectory;
            client.Send(message); // Writes to the PickupDirectoryLocation
        }
        ...
    }
    // stuff that returns the full email path
}
It breaks on the client.Send() line with the above error. As far as I can tell, the code paths are identical. I've tried writing to the same folder that the other project is working with, to no avail. The only thing I can think of is that it's trying to write the email file before the directory exists, but the other project writes it just fine.
Can someone tell me what is generating this error?
Thanks,
~Josh
This could be a permissions problem. Ensure that the account your application runs under has permission to write to this directory. Your Directory.Exists check could be passing, since it only checks that the directory is there, while the actual write still fails.
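One way to tell a permissions failure apart from a genuinely missing path is to attempt a throwaway write before calling Send. A rough sketch (the probe file name is arbitrary):
// Directory.Exists only proves the folder is there, not that we can write to it.
var probePath = Path.Combine(messageDirectory, Path.GetRandomFileName());
try
{
    File.WriteAllText(probePath, string.Empty); // throws if write access is missing
    File.Delete(probePath);
}
catch (UnauthorizedAccessException)
{
    // The account the app runs under cannot write here; fix the ACL or pick another folder.
    throw;
}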

DoubleClick Search report file to Google Cloud Storage

I'm trying to save a DoubleClick Search report file into GCS. I tried the following method, but even though no exception is thrown, the file is not saved.
public void saveToGCS(String reportId, String fileName) throws Exception {
    WritableByteChannel outputChannel = storageService.create(StorageResourceId.fromObjectName(fileName));
    OutputStream outputStream = Channels.newOutputStream(outputChannel);
    doubleclicksearch.reports().getFile(reportId, 0).executeAndDownloadTo(outputStream);
}
I tried using a FileOutputStream to save it to a local location, and that worked just fine.
What's wrong with the code above?
OK, it was SIMPLER than I thought. I just needed to close the stream at the end: until the OutputStream is closed, the written bytes can sit in a buffer and the object is never finalized in GCS.
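For anyone following along in the C# examples elsewhere on this page, the same discipline is what a using block buys you: Dispose flushes and closes the stream even when an exception is thrown. A generic sketch of the pattern (not the DoubleClick or GCS API):
// Without the close/flush, the tail of the data may never reach the destination.
static void WriteReport(Stream destination, byte[] reportBytes)
{
    using (destination) // Dispose() flushes buffered bytes and closes the stream
    {
        destination.Write(reportBytes, 0, reportBytes.Length);
    }
}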

Amazon Web Services batch file upload using specific keys

I would like to ask if there is any way to set a key for each uploaded file using the TransferManager (or any other class). I am currently using the uploadFileList method and I noticed that I can define a callback for each file sent using the ObjectMetadataProvider interface, but I only have the ObjectMetadata at my disposal. I thought it would be possible to get the parent object request and set the key value on it, but that does not seem to be possible.
What I am trying to achieve:
MultipleFileUpload fileUpload = tm.uploadFileList(bucketName, "", new File(directory), files, new ObjectMetadataProvider() {
    @Override
    public void provideObjectMetadata(File file, ObjectMetadata objectMetadata) {
        // Wishful thinking: ObjectMetadata has no getObjectRequest() method.
        objectMetadata.getObjectRequest().setKey(myOwnKey);
    }
});
I am most likely missing something obvious, but I spent some time looking for the answer and cannot find it anywhere. My problem is that if I supply files to this method, it takes their absolute path (or something like that) as the key name, and that is not acceptable for me. Any help is appreciated.
I almost forgot about this post.
There was no elegant solution, so I had to resort to making my own transfer manager (MultiUpload) and checking the list of each upload manually.
I can then set the key for each object upon creating the Upload object.
List<Upload> uploads = new ArrayList();
MultiUpload mu = new MultiUpload(uploads);
for (File f : files) {
    // Check if file, since only files can be uploaded.
    if (f.isFile()) {
        String key = ((!directory.isEmpty() && !directory.equals("/")) ? directory + "/" : "") + f.getName();
        ObjectMetadata metadata = new ObjectMetadata();
        uploads.add(tm.upload(
                new PutObjectRequest(bucketName, key, f)
                        .withMetadata(metadata)));
    }
}

Upload file to SharePoint WSS 3.0 with WebRequest PUT

Hey, I've got this nice little piece of code, much like all the other versions of this upload method that use the WSS web services. I've got one major problem though: once I have uploaded a file into my doc list and updated the list item to write a comment/description, the file is stuck there. What I mean is that this method will not overwrite the file once I've uploaded it. Nobody else out there seems to have posted this issue yet, so... anyone?
I have another version of the method which uses a byte[] instead of a Stream, but it has the same issue.
Note: I have switched off the 'require documents to be checked out before they can be edited' option for the library. No luck there, though. The doc library does have versioning turned on, with a major version being created for each update.
private void UploadStream(string fullPath, Stream uploadStream)
{
    WebRequest request = WebRequest.Create(fullPath);
    request.Credentials = CredentialCache.DefaultCredentials; // User must have 'Contributor' access to the document library
    request.Method = "PUT";
    request.Headers.Add("Overwrite", "t");
    byte[] buffer = new byte[4096];
    using (Stream stream = request.GetRequestStream())
    {
        for (int i = uploadStream.Read(buffer, 0, buffer.Length); i > 0; i = uploadStream.Read(buffer, 0, buffer.Length))
        {
            stream.Write(buffer, 0, i);
        }
    }
    WebResponse response = request.GetResponse(); // Upload the file
    response.Close();
}
Original credits to: http://geek.hubkey.com/2007/10/upload-file-to-sharepoint-document.html
EDIT -- major finding: when I call it from my NUnit test project it works fine. It seems it only fails when I call it from my WCF application (NUnit running under the logged-on user account, the WCF app's app pool running under that same user -- my account, which also has valid permissions in SharePoint).
Nuts. "Now where to start?!", I mused to myself.
SOLVED -- I found a little bug: the file was being created in the right place, but the update path was wrong. I ended up finding a folder full of files with many, many new versions... doh!
Why not use the out-of-the-box SharePoint web service, Lists.asmx? You'll find it at
http://SITEURL/_vti_bin/Lists.asmx
Edit: I checked out the link, and it seems you are calling the out-of-the-box web service, so this has got to be versioning-related. Can you check the different versions of the specific file that exist in the doc lib? See if it perhaps gets added as a minor version through the service.
Have you tried using a capital T? SharePoint's WebDAV header processing is not very likely to be case-sensitive, but the protocol does specify a capital T. Oh, and what is the response? A 412 error code, or something altogether different?
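If you're not sure what the response is, a failed PUT surfaces as a WebException whose Response carries the HTTP status. A hedged sketch you could wrap around the GetResponse() call in UploadStream:
try
{
    using (WebResponse response = request.GetResponse())
    {
        // Success: SharePoint accepted the PUT.
    }
}
catch (WebException ex)
{
    var http = ex.Response as HttpWebResponse;
    if (http != null)
    {
        // 412 (Precondition Failed) would point at the Overwrite header handling.
        Console.WriteLine("PUT failed with HTTP {0}: {1}", (int)http.StatusCode, http.StatusDescription);
    }
    throw;
}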