REST web service synchronisation - web-services

I am new to web services. I have written a REST web service which creates and returns a PDF file. My code is as follows:
#Path("/hello")
public class Hello {
#GET
#Path("/createpdf")
#Produces("application/pdf")
public Response getpdf() {
synchronized(this){
try {
OutputStream file = new FileOutputStream(new File("c:/temp/FirstPdf5.pdf"));
Document document = new Document();
PdfWriter.getInstance(document, file);
document.open();
document.add(new Paragraph("Hello Kiran"));
document.add(new Paragraph(new Date().toString()));
document.close();
file.close();
} catch (Exception e) {
e.printStackTrace();
}
File file1 = new File("c:/temp/FirstPdf5.pdf");
ResponseBuilder response = Response.ok((Object) file1);
response.header("Content-Disposition",
"attachment; filename=new-android-book.pdf");
return response.build();
}
}
}
If multiple clients call the web service simultaneously, does it affect my code? I mean, if client A is using the web service and client B calls it at the same time, will the PDF file get overwritten?
If my question is not clear, please let me know.
Thanks

As you are writing the file to the hard disk, multiple simultaneous calls to the service will cause the file to be overwritten, or will throw exceptions because the file is already in use.
If the file is the same for all users, then you only need to read the file rather than write it every time.
However, if the file is different for each user, you might try one of the following two options:
You could build the file in memory and then write the binary content directly to the response stream.
Alternatively, you could create the file using a unique name, such as a GUID or a random number; this ensures that multiple calls arriving at the server never clash over the same file.
I would also make sure that you remove the files once they have been served, so they do not accumulate on the server.

Can't upload file to AWS S3 - asp.net

fileTransferUtility = new TransferUtility(s3Client);
try
{
    if (file.ContentLength > 0)
    {
        var filePath = Path.Combine(Server.MapPath("~/Files"), Path.GetFileName(file.FileName));
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = bucketName,
            FilePath = filePath,
            StorageClass = S3StorageClass.StandardInfrequentAccess,
            PartSize = 6291456, // 6 MB.
            Key = keyName,
            CannedACL = S3CannedACL.PublicRead
        };
        fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
        fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
        fileTransferUtility.Upload(fileTransferUtilityRequest);
        fileTransferUtility.Dispose();
    }
I'm getting this error:
The file indicated by the FilePath property does not exist!
I tried changing the path to the actual location of the file, C:\Users\jojo\Downloads, but I'm still getting the same error.
(Based on a comment above indicating that file is an instance of HttpPostedFileBase in a web application...)
I don't know where you got Server.MapPath("~/Files") from, but if file is an HttpPostedFileBase that has been uploaded to this web application, then it is likely in memory and not on your file system. Or, at best, it is on the file system in a temporary system folder somewhere.
Since your source (the file variable contents) is a stream, before you try to interact with the file system you should see if the AWS API you're using can accept a stream. And it looks like it can.
if (file.ContentLength > 0)
{
    var transferUtility = new TransferUtility(/* constructor params here */);
    transferUtility.Upload(file.InputStream, bucketName, keyName);
}
Note that this is entirely free-hand, I'm not really familiar with AWS interactions. And you'll definitely want to take a look at the constructors on TransferUtility to see which one meets your design. But the point is that you're currently looking to upload a stream from the file you've already uploaded to your web application, not looking to upload an actual file from the file system.
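If you want to keep the metadata, storage class and ACL settings from your original request object, TransferUtilityUploadRequest can also take the stream directly through its InputStream property instead of FilePath. A rough sketch only, reusing the bucketName, keyName and fileTransferUtility variables from your code:
if (file.ContentLength > 0)
{
    var request = new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        Key = keyName,
        InputStream = file.InputStream, // upload the posted stream instead of a FilePath
        StorageClass = S3StorageClass.StandardInfrequentAccess,
        PartSize = 6291456, // 6 MB.
        CannedACL = S3CannedACL.PublicRead
    };
    request.Metadata.Add("param1", "Value1");
    request.Metadata.Add("param2", "Value2");

    fileTransferUtility.Upload(request);
}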
As a fallback, if you can't get the stream upload to work (and you really should, that's the ideal approach here), then your next option is likely to save the file first and then upload it using the method you have now. So if you're expecting it to be in Server.MapPath("~/Files") then you'd need to save it to that folder first, for example:
file.SaveAs(Path.Combine(Server.MapPath("~/Files"), Path.GetFileName(file.FileName)));
Of course, over time this folder can become quite full and you'd likely want to clean it out.
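A rough sketch of that fallback, again reusing the names from your snippet; the delete in the finally block is my addition, so the folder doesn't grow forever:
var filePath = Path.Combine(Server.MapPath("~/Files"), Path.GetFileName(file.FileName));
file.SaveAs(filePath);
try
{
    // Now the file really exists on disk, so the original FilePath-based upload works.
    fileTransferUtility.Upload(new TransferUtilityUploadRequest
    {
        BucketName = bucketName,
        FilePath = filePath,
        Key = keyName
    });
}
finally
{
    // Remove the temporary copy once the upload attempt is done.
    File.Delete(filePath);
}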

Preventing a WCF client from issuing too many requests

I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example: I am registering orders, and there can be anywhere from a handful up to a few thousand of them.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back, then I just await them all to complete.
I don't really want to fix this server-side, because no matter how much I tune it, the client could still kill it by sending, for example, 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply accepts all the requests but only sends a maximum of, say, 20 at a time, automatically dispatching the next one as each completes? Or is the Task I get back tied to the actual HTTP request, and therefore unable to return until the request has actually been dispatched?
If that is the case and the WCF client simply cannot do this for me, my idea is to decorate the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that no more than, say, 20 requests are active at a time. I know this will work, but I would like to ask if anyone knows of a library or a class that already does something like this?
This is similar to throttling, but I don't want to limit how many requests I can send in a given period of time; rather, I want to limit how many requests can be active at any given time.
Based on @PanagiotisKanavos's suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed into the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and posted to the ActionBlock along with the command; the caller then gets back a Task from the completion source.
The ActionBlock then calls the processing method, which sends the command to the web service. Depending on what happens, this method uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
public class RequestLimitCommandService : IAsyncCommandService
{
    // Pairs a queued command with the completion source used to signal its caller.
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private readonly IAsyncCommandService _innerSvc;
    private readonly ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    public Task ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}
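For completeness, a minimal usage sketch. The calling code from the question stays the same; only the service instance changes. CreateWcfCommandService is a hypothetical factory standing in for however you build the real WCF-backed service:
// Wrap the real service so that at most 20 commands are in flight at any time.
IAsyncCommandService innerSvc = CreateWcfCommandService(); // hypothetical factory
var limitedSvc = new RequestLimitCommandService(innerSvc, maxDegreeOfParallelism: 20);

// Same pattern as before: fire everything and await completion; the decorator does the limiting.
var taskList = Orders.Select(order => limitedSvc.ExecuteAsync(order)).ToList();
await Task.WhenAll(taskList);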

DoubleClick Search report file to Google Cloud Storage

I'm trying to save a DoubleClick Search report file into GCS. I tried the following method, but even though no exception is thrown, the file is not saved.
public void saveToGCS(String reportId, String fileName) throws Exception {
    WritableByteChannel outputChannel = storageService.create(StorageResourceId.fromObjectName(fileName));
    OutputStream outputStream = Channels.newOutputStream(outputChannel);
    doubleclicksearch.reports().getFile(reportId, 0).executeAndDownloadTo(outputStream);
}
I tried using a FileOutputStream to save it to a local location, and that worked just fine.
What's wrong with the code above?
OK, it was simpler than I thought: I just needed to close the stream at the end. Until the output stream (and the underlying channel) is closed, the upload is not finalized, which is why the file never appeared in GCS.

Move a file after an email is sent

I am writing a program that looks for files in a folder, attaches the files to the MailMessage and sends an email using SmtpClient.
After the email is sent out successfully, I want to move the emailed files to a different folder.
When I try to move the files, I get this message: "The process cannot access the file because it is being used by another process." I tried Thread.Sleep(), but that did not work.
smtpClient.Send(mail);

foreach (var report in reports)
{
    string source = Path.Combine(reportsFolder, report);
    string destination = Path.Combine(sentReportsFolder, report);
    File.Move(source, destination);
}
First, try disposing your SmtpClient instance:
smtpClient.Send(mail);
smtpClient.Dispose();
http://msdn.microsoft.com/pt-br/library/system.net.mail.smtpclient.dispose.aspx
Better still, when creating the client, you could use a using statement, like this:
using (SmtpClient smtpClient = new SmtpClient())
{
    // attach file(s) to the MailMessage here
    smtpClient.Send(mail);
}
This will ensure that, after sending the email, the class releases any resources it might have locked, so you do not need to call .Dispose() explicitly.
http://msdn.microsoft.com/pt-br/library/system.net.mail.smtpclient.aspx
http://msdn.microsoft.com/en-us/library/yh598w02.aspx
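Putting it together with the original move loop, a sketch only: wrapping the MailMessage in a using block as well is my addition, not part of the answer above, but it matters because the Attachment objects keep the report files open until the message is disposed:
using (var mail = new MailMessage())
using (var smtpClient = new SmtpClient())
{
    // ... set From, To, Subject and Body as before ...
    foreach (var report in reports)
    {
        // Each Attachment holds its file open until the MailMessage is disposed.
        mail.Attachments.Add(new Attachment(Path.Combine(reportsFolder, report)));
    }

    smtpClient.Send(mail);
}

// Both objects are disposed here, so the files are no longer locked and can be moved.
foreach (var report in reports)
{
    File.Move(Path.Combine(reportsFolder, report),
              Path.Combine(sentReportsFolder, report));
}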

SharePoint 2013 query very slow

We set up a new SharePoint 2013 server to test how it would work as document storage.
The problem is that it is very slow, and I don't know why.
I adapted the following from MSDN:
ClientContext _ctx;

private void btnConnect_Click(object sender, RoutedEventArgs e)
{
    try
    {
        _ctx = new ClientContext("http://testSP1");
        Web web = _ctx.Web;

        Stopwatch w = new Stopwatch();
        w.Start();

        List list = _ctx.Web.Lists.GetByTitle("Test");
        Debug.WriteLine(w.ElapsedMilliseconds); // 24 first time, 0 second time
        w.Restart();

        CamlQuery q = CamlQuery.CreateAllItemsQuery(10);
        ListItemCollection items = list.GetItems(q);
        _ctx.Load(items);
        _ctx.ExecuteQuery();
        Debug.WriteLine(w.ElapsedMilliseconds); // 1800 first time, 900 second time
    }
    catch (Exception)
    {
        throw;
    }
}
There aren't very many documents in the Test list: just 3 folders and 1 Word file.
Any suggestions or ideas why it is this slow?
Storing unstructured content (Word docs, PDFs, anything except metadata) in SharePoint's SQL content database is going to result in slower upload and retrieval than if the files are stored on the file system. That's why Microsoft created the Remote BLOB (Binary Large Object) Storage interface to enable files to be managed in SharePoint but live on the file system or in the cloud. The bigger the files, the greater the performance hit.
There are several third-party solutions that leverage this interface, including my company's offering, Metalogix StoragePoint. You can reach out to me at trossi@metalogix.com if you would like to learn more, or visit http://www.metalogix.com/Products/StoragePoint/StoragePoint-BLOB-Offloading.aspx