MultipartConfig changing path in real time - Jetty

I'm using MultipartConfig to define a specific location where files are stored with Jetty, but how can I change this value depending on the user request?
For example, writing the user's file to /tmp/upload/share.
@SuppressWarnings("serial")
@MultipartConfig(location="/tmp/upload", fileSizeThreshold=1024)
@WebServlet(urlPatterns={"/upload"}, name="upload")
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        int i = 0;
        for (Part part : req.getParts()) {
            part.write(String.format(part.getName(), i++));
        }
    }
}
With this code I change the name of the file, but I can't change the file path.

The use of Part.write(String relativeFilename) is for the management of those temporary files (once the servlet finishes its dispatch, those files are deleted).
That method exists to make sure that files held in memory are written to disk.
It's up to you to move the file out of the temporary location to a more permanent one (such as another filesystem location, a database, a CMS, a CDN, an archive location, etc.).
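For example, here is a minimal sketch of that move step (the /tmp/upload/share destination is illustrative; any path computed from the request would work the same way):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

// Inside the UploadServlet from the question:
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    // Choose the destination per request; hard-coded here for illustration.
    Path destDir = Paths.get("/tmp/upload/share");
    Files.createDirectories(destDir);
    int i = 0;
    for (Part part : req.getParts()) {
        // Stream each part straight to its permanent location; the container
        // still cleans up the MultipartConfig temp files afterwards.
        try (InputStream in = part.getInputStream()) {
            Files.copy(in, destDir.resolve(part.getName() + "-" + i++),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}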

Using an HTTP Module on a Virtual Directory in IIS

I have a default website in IIS where I have created a virtual directory "wsdls".
I want to gather statistics on how many requests are made to this virtual directory. That requires intercepting requests at the web server level, and an HttpModule was one of the solutions I considered suitable for such a scenario, so I started building one.
For testing purposes, I wanted to create an HTTP module, apply it to files with a particular extension (say *.wsdl), and, on every GET request for a .wsdl file in this virtual directory, redirect to "www.google.com". This would be a good demonstration of how an HTTP module can be built and deployed on IIS.
The HttpModule, written in Visual Studio, is shown below:
namespace Handler.App_Code
{
    public class HelloWorldModule : IHttpModule
    {
        public HelloWorldModule()
        {
        }

        public String ModuleName
        {
            get { return "HelloWorldModule"; }
        }

        // In the Init function, register for HttpApplication
        // events by adding your handlers.
        public void Init(HttpApplication application)
        {
            application.BeginRequest +=
                (new EventHandler(this.Application_BeginRequest));
            application.EndRequest +=
                (new EventHandler(this.Application_EndRequest));
        }

        private void Application_BeginRequest(Object source, EventArgs e)
        {
            // Create HttpApplication and HttpContext objects to access
            // request and response properties.
            HttpApplication application = (HttpApplication)source;
            HttpContext context = application.Context;
            context.Response.Redirect("www.google.com");
        }

        private void Application_EndRequest(Object source, EventArgs e)
        {
            // Nothing to be done here
        }

        public void Dispose() { }
    }
}
I have built this project for x64 and can see the resulting "dll" file. Now I have to register this dll in IIS, so that whenever I access *.wsdl files the requests are automatically diverted to "www.google.com". Here is the next step I took:
I enabled the handler mappings.
I assumed that was it and nothing more needed to be done: I should be able to intercept all HTTP requests of the form "*.wsdl", so that whenever I access any wsdl on the server, control goes to Google (because of the logic written in BeginRequest). But unfortunately, I failed to achieve this. What can be done here?
One thing I noticed: when you are trying to redirect to an external URL, you need to include the http:// scheme.
So change
context.Response.Redirect("www.google.com");
to
context.Response.Redirect("http://www.google.com", true);
I was able to solve the problem I was facing. Below are the observations that were missing from my understanding and that helped me solve it:
Locating the proper web.config file:
Every website in IIS has a web.config file that controls the application.
Since I am working with the "Default Website", this refers to the directory "C:\inetpub\wwwroot".
There will be a "web.config" file in this directory. Please create it if it is not already present.
Modifying web.config:
Once you have identified the file that needs to be modified, add the necessary module configuration to web.config.
In this case, we want to add a module to the default website; the probable setting is shown below.
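Here is a sketch of that registration (for the IIS 7+ integrated pipeline; the assembly name "Handler" is an assumption based on the namespace in the code above):

<configuration>
  <system.webServer>
    <modules>
      <!-- type is "Namespace.ClassName, AssemblyName" -->
      <add name="HelloWorldModule"
           type="Handler.App_Code.HelloWorldModule, Handler" />
    </modules>
  </system.webServer>
</configuration>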
Adding contents to the bin directory:
If you try to run the application now, IIS will not find any dll or executable to run, so the binaries need to be kept at a particular location.
Create a directory named "bin" at the root of the website, if not already present, and place all the dlls you want this website to execute there.
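For example, the resulting layout might look like this (the dll name "Handler.dll" is an assumption from the project namespace):

C:\inetpub\wwwroot\
    web.config
    bin\
        Handler.dll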
General points to be considered:
Proper access must be granted on the folder that contains the dll.
It is not ideal to modify the entire website; it is better to work only within your own web application.
If web.config is not found, we can create one.
If bin is not present in the web root directory, we can create one.

Large file upload with Spark framework

I'm trying to upload large files to a web application using the Spark framework, but I'm running into out-of-memory errors. It appears that Spark is caching the request body in memory. I'd like either to cache file uploads on disk, or read the request as a stream.
I've tried using the streaming support of Apache Commons FileUpload, but it appears that calling request.raw().getInputStream() causes Spark to read the entire body into memory and return an InputStream view of that chunk of memory, as done by this code. Based on the comment in the file, this is so that getInputStream can be called multiple times. Is there any way to change this behavior?
I recently had the same problem and I figured out that you could bypass the caching. I do so with the following function:
public ServletInputStream getInputStream(Request request) throws IOException {
    final HttpServletRequest raw = request.raw();
    if (raw instanceof ServletRequestWrapper) {
        // Unwrap Spark's caching wrapper and read the underlying stream.
        return ((ServletRequestWrapper) raw).getRequest().getInputStream();
    }
    return raw.getInputStream();
}
This has been tested with Spark 2.4.
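As a minimal usage sketch, assuming the getInputStream helper above is in scope (the route path and the /tmp destination are illustrative; Files.copy streams the body to disk without buffering it all in memory):

import static spark.Spark.post;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

post("/upload", (request, response) -> {
    // Read the raw, un-cached body and stream it straight to disk.
    try (InputStream in = getInputStream(request)) {
        Files.copy(in, Paths.get("/tmp/upload.bin"),
                StandardCopyOption.REPLACE_EXISTING);
    }
    return "uploaded";
});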
I'm not familiar with the inner workings of Spark, so one potential, minor downside of this function is that you don't know whether you got the cached InputStream or not: the cached version is reusable, the non-cached one is not.
To get around this downside, I suppose you could implement a function similar to the following:
public boolean hasCachedInputStream(Request request) {
    return !(request.raw() instanceof ServletRequestWrapper);
}
The short answer is: not that I can see.
SparkServerFactory builds the JettyHandler, which has a private static class HttpRequestWrapper that reads the InputStream into memory.
All that static stuff means there is no way to extend it.

Amazon web service batch file upload using specific key

I would like to ask if there is any way to set a key for each uploaded file using the TransferManager (or any other class)? I am currently using the method uploadFileList for this and I noticed that I can define a callback for each file sent using the ObjectMetadataProvider interface, but I only have the ObjectMetadata at my disposal. I thought it would be possible to get the parent ObjectRequest and set the key value in there, but that does not seem to be possible.
What I am trying to achieve:
MultipleFileUpload fileUpload = tm.uploadFileList(bucketName, "", new File(directory), files,
        new ObjectMetadataProvider() {
            @Override
            public void provideObjectMetadata(File file, ObjectMetadata objectMetadata) {
                objectMetadata.getObjectRequest().setKey(myOwnKey);
            }
        });
I am most likely missing something obvious, but I spent some time looking for the answer and cannot find it anywhere. My problem is that if I supply some files for this method, it takes their absolute path (or something like that) as a key name and that is not acceptable for me. Any help is appreciated.
I almost forgot about this post.
There was no elegant solution, so I had to resort to making my own transfer manager (MultiUpload) and checking the list of uploads manually.
I can then set the key for each object when creating the Upload object.
List<Upload> uploads = new ArrayList<>();
MultiUpload mu = new MultiUpload(uploads);
for (File f : files) {
    // Check if it is a file, since only files can be uploaded.
    if (f.isFile()) {
        String key = ((!directory.isEmpty() && !directory.equals("/")) ? directory + "/" : "")
                + f.getName();
        ObjectMetadata metadata = new ObjectMetadata();
        uploads.add(tm.upload(new PutObjectRequest(bucketName, key, f)
                .withMetadata(metadata)));
    }
}
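If you then need to block until everything has finished, the standard TransferManager Upload handles collected above can be waited on; a sketch (the checked exceptions are left to propagate to the caller):

// Block until every upload started above has completed.
for (Upload u : uploads) {
    u.waitForCompletion(); // throws AmazonClientException, InterruptedException
}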

Move a file after an email is sent

I am writing a program that looks for files in a folder, attaches the files to a MailMessage, and sends an email using SmtpClient.
After the email is sent successfully, I want to move the emailed files to a different folder.
However, I get the message "The process cannot access the file because it is being used by another process." I tried Thread.Sleep(), but that did not work.
smtpClient.Send(mail);

foreach (var report in reports)
{
    string source = Path.Combine(reportsFolder, report);
    string destination = Path.Combine(sentReportsFolder, report);
    File.Move(source, destination);
}
First, try disposing your SmtpClient instance:
smtpClient.Send(mail);
smtpClient.Dispose();
http://msdn.microsoft.com/pt-br/library/system.net.mail.smtpclient.dispose.aspx
But when creating the instance, you could use a using statement, like:
using (SmtpClient smtpClient = new SmtpClient())
{
    // attach file
    smtpClient.Send(mail);
}
This ensures that, after sending an email, the class releases any resources it might have locked, so you do not need to call .Dispose() explicitly.
http://msdn.microsoft.com/pt-br/library/system.net.mail.smtpclient.aspx
http://msdn.microsoft.com/en-us/library/yh598w02.aspx
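Note that the attachments themselves also hold the source files open, so a fuller sketch would dispose the MailMessage (which disposes its Attachment objects) before moving the files. The names below mirror the question and are otherwise illustrative:

using (MailMessage mail = new MailMessage())
using (SmtpClient smtpClient = new SmtpClient())
{
    foreach (var report in reports)
    {
        // Each Attachment keeps its file open until the message is disposed.
        mail.Attachments.Add(new Attachment(Path.Combine(reportsFolder, report)));
    }
    smtpClient.Send(mail);
}

// Both objects are disposed here, so the files can be moved safely.
foreach (var report in reports)
{
    File.Move(Path.Combine(reportsFolder, report),
              Path.Combine(sentReportsFolder, report));
}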

REST web service synchronisation

I am new to web services. I have written a REST web service which creates and returns a pdf file. My code is as follows:
#Path("/hello")
public class Hello {
#GET
#Path("/createpdf")
#Produces("application/pdf")
public Response getpdf() {
synchronized(this){
try {
OutputStream file = new FileOutputStream(new File("c:/temp/FirstPdf5.pdf"));
Document document = new Document();
PdfWriter.getInstance(document, file);
document.open();
document.add(new Paragraph("Hello Kiran"));
document.add(new Paragraph(new Date().toString()));
document.close();
file.close();
} catch (Exception e) {
e.printStackTrace();
}
File file1 = new File("c:/temp/FirstPdf5.pdf");
ResponseBuilder response = Response.ok((Object) file1);
response.header("Content-Disposition",
"attachment; filename=new-android-book.pdf");
return response.build();
}
}
}
If multiple clients try to call the web service simultaneously, does it impact my code?
I mean, if client A is using the web service and at the same time client B tries to use it, will the pdf file get overwritten?
If my question is not clear, please let me know.
Thanks
As you are writing the file to the hard disk, multiple calls to the service will cause the file to be overwritten, or cause exceptions where the file is already in use.
If the file is the same for all users, then you would only need to read the file rather than write it every time.
However, if the file is different for each user, you might try one of the two following options:
You could build the file in memory and then write the binary response directly to the response stream.
Alternatively, you could create each file using a unique name; this could be a GUID or a random number, which would ensure that you never have a clash between multiple calls arriving at the server.
I would also ensure that you remove the files once they are no longer needed, so they do not accumulate on the server.
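Here is a minimal sketch of the in-memory option, reusing the iText and JAX-RS types from the question (imports as in the question, plus java.io.ByteArrayOutputStream). Because each request builds its own buffer, concurrent requests cannot clash and the synchronized block becomes unnecessary:

@GET
@Path("/createpdf")
@Produces("application/pdf")
public Response getpdf() {
    try {
        // Build the PDF into a per-request buffer instead of a shared file.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        Document document = new Document();
        PdfWriter.getInstance(document, buffer);
        document.open();
        document.add(new Paragraph("Hello Kiran"));
        document.add(new Paragraph(new Date().toString()));
        document.close();
        return Response.ok(buffer.toByteArray())
                .header("Content-Disposition", "attachment; filename=new-android-book.pdf")
                .build();
    } catch (Exception e) {
        return Response.serverError().build();
    }
}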