I need to read a zip file and send it as a Jersey response. The code snippet below returns a zip file, but the zip file (sample1.zip) cannot be opened: it reports "The archive is either unknown format or damaged." I'm able to open other zip files on my machine. Can anyone help me figure out what the issue could be?
#Produces("application/zip")
public Response getFile() {
StreamingOutput soo = new StreamingOutput() {
public void write(OutputStream output) throws IOException,
WebApplicationException {
try {
ZipFile zipFile = new ZipFile("sample.zip");
InputStream in = zipFile.getInputStream(zipFile
.getEntry("test.xml"));
IOUtils.copy(in, output);
} catch (Exception e) {
throw new WebApplicationException(e);
}
}
};
return Response
.ok()
.entity(soo)
.header("Content-Disposition",
"attachment; filename = sample1.zip").build();
}
I've used the Apache Commons IO utility (IOUtils) to copy the input to the output.
Have you defined the @Provider that @Produces("application/zip")?
Check the log and see whether a MessageBodyWriter is needed.
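Note also what the snippet actually streams: the bytes of the test.xml entry, i.e. plain XML, while naming the download sample1.zip, so archive tools will naturally reject it. If the intent is to send the archive itself, here is a minimal sketch that streams the file untouched (assuming JAX-RS 2.x, where StreamingOutput can be a lambda and is handled by a built-in MessageBodyWriter, so no custom @Provider should be needed for it):

@Produces("application/zip")
public Response getFile() {
    StreamingOutput stream = output -> {
        // Copy the archive bytes verbatim; do not unpack an entry.
        try (InputStream in = new FileInputStream("sample.zip")) {
            IOUtils.copy(in, output);
        }
    };
    return Response.ok(stream)
            .header("Content-Disposition", "attachment; filename=\"sample1.zip\"")
            .build();
}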
I'm trying to create an endpoint and a corresponding Swagger ENDPOINT_INFO which uploads a file via multipart. I was hoping to use the TemporaryFile part reader to write the parts, then iterate the parts from getAllParts() to dump them into my final file. I can't seem to get the request to carry the right boundary: I'm getting an exception when I try to create the PartList: "Error. No 'boundary' value found in headers."
ENDPOINT_INFO(upload) {
    info->summary = "Upload";
    info->addResponse<Object<CommandResponseDto>>(Status::CODE_200,
                                                  "application/json");
    info->addConsumes<oatpp::String>("multipart/form-data");
}
ENDPOINT("POST", "upload",
upload,
REQUEST(std::shared_ptr<IncomingRequest>, request)) {
namespace mp = oatpp::web::mime::multipart;
try {
mp::PartList multipart(request->getHeaders());
} catch (const std::exception& e) {
logger_->error("Error creating multipart object: {}", e.what());
return create_error_response(e.what());
}
mp::PartList multipart(request->getHeaders());
mp::Reader multipartReader(&multipart);
multipartReader.setDefaultPartReader(
mp::createTemporaryFilePartReader("/tmp" /* /tmp directory */));
request->transferBody(&multipartReader);
auto parts = multipart.getAllParts();
for (auto& p : parts) {
/* print part name and filename */
logger_->error("Multipart", "Part name={}, filename={}",
p->getName()->c_str(), p->getFilename()->c_str());
/* some append all files into one large file */
}
return createResponse("OK");
}
I've tried searching around the docs and using both synchronous and asynchronous endpoints.
I am trying to upload a 2GB+ file to my bucket using MultipartFile and AmazonS3. The controller:
@PostMapping("/uploadFile")
public String uploadFile(@RequestPart(value = "file") MultipartFile file) throws Exception {
    String fileUploadResult = this.amazonClient.uploadFile(file);
    return fileUploadResult;
}
amazonClient-uploadFile:
public String uploadFile(MultipartFile multipartFile) throws Exception {
    StringBuilder fileUrl = new StringBuilder();
    try {
        File file = convertMultiPartToFile(multipartFile);
        String fileName = generateFileName(multipartFile);
        fileUrl.append(endpointUrl);
        fileUrl.append("/");
        fileUrl.append(bucketName);
        fileUrl.append("/");
        fileUrl.append(fileName);
        uploadFileTos3bucket(fileName, file);
        file.delete();
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return fileUrl.toString();
}
amazonClient-convertMultiPartToFile:
private File convertMultiPartToFile(MultipartFile file) throws IOException {
    File convFile = new File(file.getOriginalFilename());
    FileOutputStream fos = new FileOutputStream(convFile);
    fos.write(file.getBytes());
    fos.close();
    return convFile;
}
amazonClient-uploadFileTos3bucket:
private void uploadFileTos3bucket(String fileName, File file) {
    s3client.putObject(
            new PutObjectRequest(bucketName, fileName, file)
                    .withCannedAcl(CannedAccessControlList.PublicRead));
}
The process works well for small files; to deal with large ones I defined the following in my application.properties:
spring.servlet.multipart.max-file-size=5GB
spring.servlet.multipart.max-request-size=5GB
spring.servlet.multipart.enabled=true
And I'm getting java.lang.OutOfMemoryError, so:
1. How can I upload the file without loading it into memory (not sure that is possible)?
2. How can I upload it in smaller parts?
Exception:
{"timestamp":"2018-11-12T12:50:38.250+0000","status":500,"error":"Internal Server Error","message":"No message available","path":"/storage/uploadFile"}
Trace:
java.lang.OutOfMemoryError
    at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.springframework.util.StreamUtils.copy(StreamUtils.java:143)
    at org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:110)
    at org.springframework.util.FileCopyUtils.copyToByteArray(FileCopyUtils.java:162)
    at org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile.getBytes(StandardMultipartHttpServletRequest.java:245)
    at com.siemens.plm.it.aws.connect.handels.AmazonClient.convertMultiPartToFile(AmazonClient.java:51)
    at com.siemens.plm.it.aws.connect.handels.AmazonClient.uploadFile(AmazonClient.java:75)
    at com.siemens.plm.it.aws.connect.controllers.BucketController.uploadFile(BucketController.java:48)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:215)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:142)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:998)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:901)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:875)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:770)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
I too had this problem, and I solved it by uploading the file directly to the bucket as a stream.
In case you are using a Java servlet, the lines below read the video from the form data:
Part filePart = request.getPart("video");
// Now convert it to an InputStream
InputStream in = request.getPart("video").getInputStream();
Now I made an S3 class; below is the relevant part of the function.
try {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
            .build();
    ObjectMetadata meta = new ObjectMetadata();
    // Note: InputStream.available() is only an estimate; use the real size if you know it.
    meta.setContentLength(in.available());
    meta.setContentType("video/mp4");
    TransferManager tm = TransferManagerBuilder.standard()
            .withS3Client(s3Client)
            .build();
    PutObjectRequest request = new PutObjectRequest(bucketName, keyName, in, meta);
    Upload upload = tm.upload(request);
    // Optionally, you can wait for the upload to finish before continuing.
    upload.waitForCompletion();
    if (upload.isDone()) {
        System.out.println("Total bytes transferred is : " + totalBytesTransferred);
        tm.shutdownNow();
    }
} catch (AmazonServiceException e) {
    // The call was transmitted successfully, but Amazon S3 couldn't process
    // it, so it returned an error response.
    output = "Couldn't Upload the file.";
    output_code = 0;
    System.out.println("Inside exception");
    e.printStackTrace();
} catch (SdkClientException e) {
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    output = "Couldn't Upload the file.";
    output_code = 0;
    e.printStackTrace();
}
P.S. Don't forget to call setContentLength() on the ObjectMetadata, or it will lead to an OOM error.
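To map this back to the original Spring controller: stream straight from MultipartFile.getInputStream() instead of calling getBytes(). A minimal sketch under that assumption, reusing the s3client, bucketName, endpointUrl, and generateFileName pieces from the question; with a known content length, TransferManager streams the body and switches to an S3 multipart upload for large objects, so the 2GB file should never need to fit in the heap:

public String uploadFile(MultipartFile multipartFile) throws Exception {
    String fileName = generateFileName(multipartFile);

    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentLength(multipartFile.getSize()); // exact size, no guessing
    meta.setContentType(multipartFile.getContentType());

    TransferManager tm = TransferManagerBuilder.standard()
            .withS3Client(s3client)
            .build();
    try (InputStream in = multipartFile.getInputStream()) {
        Upload upload = tm.upload(new PutObjectRequest(bucketName, fileName, in, meta)
                .withCannedAcl(CannedAccessControlList.PublicRead));
        upload.waitForCompletion();
    } finally {
        tm.shutdownNow(false); // false: keep the shared s3client open
    }
    return endpointUrl + "/" + bucketName + "/" + fileName;
}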
I am trying to read a sequence file from the distributed cache in EMR, but it is unable to read the file from the distributed cache there. My code works fine locally, but it gives me an issue on EMR. Here is my code snippet.
Putting the sequence file into the distributed cache:
job.addCacheFile(new URI(status.getPath().toString()));
Reading the path:
for (Path eachPath : cacheFilesLocal) {
    loadMap(eachPath.getName(), context.getConfiguration());
}
Reading the file from the path:
private void loadMap(String filePath, Configuration conf) throws IOException {
    try {
        Path somePath = new Path(filePath);
        reader = new Reader(somePath.getFileSystem(conf), somePath, conf);
        // brReader = new BufferedReader(new FileReader(filePath));
        Writable key = new Text();
        Writable value = new Text();
        // Read each line, split and load to HashMap
        while (reader.next(key, value)) {
            // String index[] = strLineRead.toString().split(Pattern.quote(" - "));
            rMap.put(key.toString(), value.toString());
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (reader != null) {
            reader.close();
        }
    }
}
Any help will be appreciated.
Provide the S3 path in the arguments, as per the documentation.
Now, in the Driver class, use the arguments like this:
job.addCacheFile(new URI(args[3]));
job.addCacheFile(new URI(args[4]));
job.addCacheFile(new URI(args[5]));
And in the Mapper, use the cache files as usual:
cacheFiles = context.getCacheFiles();
if (cacheFiles != null) {
    File cityCacheFile = new File("AreaCityCountryCache");
    // ... read the cached file here
}
This worked for me...
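For reference, a minimal sketch of that wiring; the s3://my-bucket/... path and the lookupfile alias are placeholders, and rMap is the map from the question. The #fragment on the URI makes Hadoop materialize a local symlink with a predictable name in the task's working directory, and opening it through the local file system, rather than the default (HDFS/EMRFS) file system, is often the difference between a local run and an EMR run:

// Driver: register the S3 object in the distributed cache with a local alias.
job.addCacheFile(new URI("s3://my-bucket/lookup/part-00000#lookupfile"));

// Mapper: open the symlink via the *local* file system; a relative Path resolved
// against the default file system would look in HDFS/EMRFS on the cluster.
@Override
protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    FileSystem localFs = FileSystem.getLocal(conf);
    try (SequenceFile.Reader reader =
                 new SequenceFile.Reader(localFs, new Path("lookupfile"), conf)) {
        Text key = new Text();
        Text value = new Text();
        while (reader.next(key, value)) {
            rMap.put(key.toString(), value.toString());
        }
    }
}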
I am trying to append a string to the end of a text file stored in S3.
Currently I just read the contents of the file into a string, append my new text, and resave the file back to S3.
Is there a better way to do this? I am thinking that when the file is much larger than 10MB, reading the entire file would not be a good idea, so how should I do this correctly?
Current code:
private void saveNoteToFile( String p_note ) throws IOException, ServletException
{
    String str_infoFileName = "myfile.json";
    String existingNotes = s3Helper.getfileContentFromS3( str_infoFileName );
    existingNotes += p_note;
    writeStringToS3( str_infoFileName , existingNotes );
}

public void writeStringToS3(String p_fileName, String p_data) throws IOException
{
    ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream( p_data.getBytes());
    try {
        streamFileToS3bucket( p_fileName, byteArrayInputStream, p_data.getBytes().length);
    }
    catch (AmazonServiceException e)
    {
        e.printStackTrace();
    }
    catch (AmazonClientException e)
    {
        e.printStackTrace();
    }
}

public void streamFileToS3bucket( String p_fileName, InputStream input, long size)
{
    // Create sub folders if there are any in the file name.
    p_fileName = p_fileName.replace("\\", "/");
    if( p_fileName.charAt(0) == '/')
    {
        p_fileName = p_fileName.substring(1, p_fileName.length());
    }
    String folder = getFolderName( p_fileName );
    if( folder.length() > 0)
    {
        if( !doesFolderExist(folder))
        {
            createFolder( folder );
        }
    }
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(size);
    AccessControlList acl = new AccessControlList();
    acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
    s3Client.putObject(new PutObjectRequest(bucket, p_fileName, input, metadata).withAccessControlList(acl));
}
It's not possible to append to an existing file on AWS S3. When you upload an object, it creates a new version if it already exists:
"If you upload an object with a key name that already exists in the bucket, Amazon S3 creates another version of the object instead of replacing the existing object."
Source: http://docs.aws.amazon.com/AmazonS3/latest/UG/ObjectOperations.html
The objects are immutable.
It's also mentioned in these AWS Forum threads:
https://forums.aws.amazon.com/message.jspa?messageID=179375
https://forums.aws.amazon.com/message.jspa?messageID=540395
It's not possible to append to an existing file on AWS S3.
You can, however, delete the existing file and upload a new file with the same name.
Configuration:
private string bucketName = "my-bucket-name-123";
private static string awsAccessKey = "AKI............";
private static string awsSecretKey = "+8Bo..................................";
IAmazonS3 client = new AmazonS3Client(awsAccessKey, awsSecretKey,
        RegionEndpoint.APSoutheast2);
string awsFile = "my-folder/sub-folder/textFile.txt";
string localFilePath = "my-folder/sub-folder/textFile.txt";
To Delete:
public void DeleteRefreshTokenFile()
{
    try
    {
        var deleteFileRequest = new DeleteObjectRequest
        {
            BucketName = bucketName,
            Key = awsFile
        };
        DeleteObjectResponse fileDeleteResponse = client.DeleteObject(deleteFileRequest);
    }
    catch (Exception ex)
    {
        throw new Exception(ex.Message);
    }
}
To Upload:
public void UploadRefreshTokenFile()
{
    FileInfo file = new FileInfo(localFilePath);
    try
    {
        PutObjectRequest request = new PutObjectRequest()
        {
            InputStream = file.OpenRead(),
            BucketName = bucketName,
            Key = awsFile
        };
        PutObjectResponse response = client.PutObject(request);
    }
    catch (Exception ex)
    {
        throw new Exception(ex.Message);
    }
}
One option is to write the new lines/information to a new version of the file. This would create a LARGE number of versions. But, essentially, whatever program you are using the file for could read ALL the versions and append them back together when reading it (this seems like a really bad idea as I write it out).
Another option would be to write a new object each time with a timestamp appended to the object name, e.g. my-log-file-date-time. Then whatever program is reading from it could append them all together after downloading my-log-file-*.
You would want to delete objects older than a certain time, just like log rotation.
Depending on how busy your events are, this might work. If you have thousands per second, I don't think this would work. But if you just have a few events per minute, it may be reasonable.
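A minimal sketch of this timestamped-object approach, assuming the AWS SDK for Java v1; the bucket and prefix names are placeholders:

import java.io.ByteArrayInputStream;
import java.time.Instant;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class S3AppendAsNewObjects {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    private final String bucket = "my-bucket";          // placeholder
    private final String prefix = "notes/my-log-file-"; // placeholder

    /** "Append" by writing each addition as its own timestamped object. */
    public void append(String note) {
        byte[] bytes = note.getBytes();
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(bytes.length);
        String key = prefix + Instant.now().toEpochMilli();
        s3.putObject(bucket, key, new ByteArrayInputStream(bytes), meta);
    }

    /** Read everything back by listing the prefix and concatenating in key order. */
    public String readAll() {
        StringBuilder sb = new StringBuilder();
        // listObjectsV2 returns up to 1000 keys; paginate for more.
        s3.listObjectsV2(bucket, prefix).getObjectSummaries()
          .forEach(o -> sb.append(s3.getObjectAsString(bucket, o.getKey())));
        return sb.toString();
    }
}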
You can do it with s3api put-object.
First download the version you want, then use the command below; it will be uploaded as the latest version.
ᐅ aws s3api put-object --bucket $BUCKET --key $FOLDER/$FILE --body $YOUR_LOCAL_DOWNLOADED_VERSION_FILE
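If you'd rather do the same from Java, a rough SDK v1 equivalent; the bucket, key, and version ID are placeholders, and this assumes a surrounding method that may throw IOException:

// Download the desired version to a temp file, then re-upload it as the latest.
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
File tmp = File.createTempFile("s3-version", ".bin");
s3.getObject(new GetObjectRequest("my-bucket", "folder/file.txt", "VERSION_ID"), tmp);
s3.putObject("my-bucket", "folder/file.txt", tmp);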
We've created a custom webservice in Umbraco to add (async) files and upload them. After upload, the service is called with node and file information to add a new node to the content tree.
At first our main problem was that the service was running outside of the Umbraco context, giving strange errors with get_currentuser.
Now we inherit the Umbraco BaseWebService from the umbraco.webservices dll, and we've set all access information in the settings file; we authenticate before doing anything else, using a (correct and ugly-hardcoded) administrator.
When we now execute the webservice (from the browser or anything else) we get:
at umbraco.DataLayer.SqlHelper`1.ExecuteReader(String commandText, IParameter[] parameters)
at umbraco.cms.businesslogic.CMSNode.setupNode()
at umbraco.cms.businesslogic.web.Document.setupNode()
at umbraco.cms.businesslogic.CMSNode..ctor(Int32 Id)
at umbraco.cms.businesslogic.Content..ctor(Int32 id)
at umbraco.cms.businesslogic.web.Document..ctor(Int32 id)
at FileUpload.AddDocument(String ProjectID, String NodeID, String FileName)
Where AddDocument is our method. The node (filename without extension) does not exist in the tree (not anywhere; it's a new filename/node). We've cleared the recycle bin, so it's not in there either.
Are we missing something vital? Does anyone have a solution?
Below is the source for the webservice:
using umbraco.cms.businesslogic.web;
using umbraco.BusinessLogic;
using umbraco.presentation.nodeFactory;
using umbraco.cms.businesslogic.member;
using umbraco.cms;
/// <summary>
/// Summary description for FileUpload
/// </summary>
[WebService(Namespace = "http://umbraco.org/webservices/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class FileUpload : umbraco.webservices.BaseWebService //System.Web.Services.WebService
{
    private string GetMimeType(string fileName)
    {
        string mimeType = "application/unknown";
        string ext = System.IO.Path.GetExtension(fileName).ToLower();
        Microsoft.Win32.RegistryKey regKey = Microsoft.Win32.Registry.ClassesRoot.OpenSubKey(ext);
        if (regKey != null && regKey.GetValue("Content Type") != null)
            mimeType = regKey.GetValue("Content Type").ToString();
        return mimeType;
    }

    [WebMethod]
    public string HelloWorld() {
        return "Hello World";
    }

    [WebMethod]
    public void AddDocument(string ProjectID, string NodeID, string FileName)
    {
        Authenticate("***", "***");
        string MimeType = GetMimeType(FileName); //"application/unknown";

        // Create node
        int nodeId = 1197;
        string fileName = System.IO.Path.GetFileNameWithoutExtension(@"*****\Upload\" + FileName);
        string secGroups = "";

        // EDIT DUE TO COMMENT: Behavior remains the same though
        Document node = umbraco.cms.businesslogic.web.Document.MakeNew(fileName.Replace(".", ""), new DocumentType(1049), umbraco.BusinessLogic.User.GetUser(0), nodeId);
        secGroups = "Intern";
        StreamWriter sw = null;
        try
        {
            // EXCEPTION IS THROWN SOMEWHERE HERE
            Document doc = NodeLevel.CreateNode(fileName, "Bestand", nodeId);
            doc.getProperty("bestandsNaam").Value = fileName;
            byte[] buffer = System.IO.File.ReadAllBytes(@"****\Upload\" + FileName);
            int projectId = 0;
            int tempid = nodeId;
            // EXCEPTION IS THROWN TO THIS POINT (SEE BELOW)
            try
            {
                Access.ProtectPage(false, doc.Id, 1103, 1103);
                Access.AddMembershipRoleToDocument(doc.Id, secGroups);
            }
            catch (Exception ex)
            {
                // write to file
            }
            try
            {
                doc.Publish(umbraco.BusinessLogic.User.GetUser(0));
                umbraco.library.UpdateDocumentCache(doc.Id);
                umbraco.content.Instance.RefreshContentFromDatabaseAsync();
            }
            catch (Exception ex)
            {
                // write to file
            }
            System.IO.File.Delete(FileName);
        }
        catch (Exception ex)
        {
            // THIS EXCEPTION IS CAUGHT!!
        }
    }

    public override umbraco.webservices.BaseWebService.Services Service
    {
        get { return umbraco.webservices.BaseWebService.Services.DocumentService; }
    }
}
If anyone has a solution, pointer, hint, or whatever, help is appreciated!
TIA,
riffnl
We've rewritten the whole procedure (dumped all the code and restarted) and we've got it working now.
I think we'd been messing around with the old code so much in trying to get it to work that we were missing some key issues, because now it functions.
Thanks for thinking along anyway!