Amazon S3 service ConnectionPoolTimeoutException: Timeout waiting for connection from pool

When I work with S3 on Amazon, I keep getting the ugly ConnectionPoolTimeoutException.
The problem is that, because of the front end of the application, I cannot close the opened S3 objects before the front end is done with them, so I have implemented this solution:
@Autowired
private AmazonS3 s3client; // credentials are set properly
private static List<S3Object> openedObjects = new ArrayList<S3Object>();
// initialize the bucket:
private String bucketName = "myShinyNewBucket";

private synchronized boolean initBucket(){
    try{
        Boolean exists = null;
        try{
            exists = s3client.doesBucketExist(bucketName);
        }catch(Exception e1){
            System.out.println("\n\n\tToo many opened objects; closing...\n\n");
            deleteOpenedS3Objects();
            exists = s3client.doesBucketExist(bucketName);
        }
        if(exists != null){
            if(!exists){
                s3client.createBucket(new CreateBucketRequest(bucketName));
            }
            return true;
        }
    }catch(Exception e){
        System.out.println("\n\n\tFailed to initialize bucket.\n");
        e.printStackTrace();
    }
    return false;
}
private synchronized void deleteOpenedS3Objects(){
    System.out.println("\n\tClosing opened objects...");
    int closed = 0;
    try{
        // iterate with an Iterator so removal is safe and no element is skipped
        Iterator<S3Object> it = openedObjects.iterator();
        while(it.hasNext()){
            it.next().close();
            it.remove();
            closed++;
        }
    }catch(Exception e1){
        System.out.println("\tCould not close all opened S3 objects, only the first " + closed);
    }
    System.out.println("\tTrying again:\n\n");
}
// GET:
public final String getFromAWS(final String amazonName){
    S3Object s3object = null;
    if(initBucket()){
        try{
            try{
                s3object = s3client.getObject(new GetObjectRequest(bucketName, amazonName));
            }catch(AmazonClientException e){
                deleteOpenedS3Objects();
                s3object = s3client.getObject(new GetObjectRequest(bucketName, amazonName));
            }
            openedObjects.add(s3object);
            return s3object.getObjectContent().getHttpRequest().getURI().toString();
        }catch(Exception e1){
            if(e1 instanceof AmazonS3Exception
                    && ((AmazonS3Exception) e1).getStatusCode() == HttpStatus.SC_NOT_FOUND){
                System.out.println("\n\nNo such object in bucket.\n");
            }
            else{
                System.out.println("\n\n\tCould not read object from bucket.\n\n");
                e1.printStackTrace();
            }
        }
    }
    return null;
}
Yet, the exception is still happening.
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:286) ~[httpclient-4.5.1.jar!/:4.5.1]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:263) ~[httpclient-4.5.1.jar!/:4.5.1]
at sun.reflect.GeneratedMethodAccessor144.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70) ~[aws-java-sdk-core-1.11.8.jar!/:na]
at com.amazonaws.http.conn.$Proxy188.get(Unknown Source) ~[na:na]
...
Only when I press Ctrl+C in the console does it get to the part where it closes the opened S3 connections:
... Caused by: java.lang.InterruptedException: Operation interrupted
at org.apache.http.pool.PoolEntryFuture.await(PoolEntryFuture.java:142) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(AbstractConnPool.java:306) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.pool.AbstractConnPool.access$000(AbstractConnPool.java:64) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:192) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:185) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.pool.PoolEntryFuture.get(PoolEntryFuture.java:107) ~[httpcore-4.4.4.jar!/:4.4.4]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:276) ~[httpclient-4.5.1.jar!/:4.5.1]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:263) ~[httpclient-4.5.1.jar!/:4.5.1]
at sun.reflect.GeneratedMethodAccessor144.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70) ~[aws-java-sdk-core-1.11.8.jar!/:na]
at com.amazonaws.http.conn.$Proxy188.get(Unknown Source) ~[na:na]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:190) ~[httpclient-4.5.1.jar!/:4.5.1]
... 148 common frames omitted
Too many opened objects. // <-- This is where it catches it.
Closing opened objects...
Could not close all opened s3 objects.
Trying again :
Failed to initialize bucket.
Again, I am unfortunately not in a position where I can close the opened S3Objects before I leave the functions in my S3-client class. The only hope I have had was to wait until the TimeoutException happens, catch it, close all opened objects, and try again.
However, I can't seem to catch it in the right place.
Please, help.
Thank you.
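One thing worth noting about getFromAWS above: the only thing it takes from the opened S3Object is a URI. If callers only need a URL, a presigned URL avoids holding a pooled connection at all. A minimal sketch, assuming the same s3client and bucketName fields (the one-hour expiry is illustrative):

// Sketch: hand out a time-limited URL instead of an opened S3Object,
// so no pooled HTTP connection stays checked out afterwards.
public String getUrlFromAws(final String amazonName) {
    java.util.Date expiry = new java.util.Date(System.currentTimeMillis() + 3600 * 1000L); // 1 hour
    return s3client.generatePresignedUrl(bucketName, amazonName, expiry).toString();
}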

Use try-with-resources whenever you are dealing with any kind of file read or write. It auto-closes the resources used, even if an exception is thrown.
In my case, I was getting 404s while reading from S3 in a bulk operation; I ignored them in my script by logging the error, and forgot to close the connection in a finally block.
Example snippet:
static String readFirstLineFromFile(String path) throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
        return br.readLine();
    }
}
Note:
Try-with-resources is supported from JDK 7 and above.
All the resources declared in try(...) must implement the AutoCloseable interface.
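Applied to the S3 code above, a minimal sketch might look like this (assuming the same s3client and bucketName; the processContent consumer is hypothetical). S3Object implements Closeable, so try-with-resources returns its connection to the pool even on failure:

// Sketch: the S3Object is closed automatically, releasing its HTTP
// connection back to the pool whether or not processContent throws.
public String readFromAws(final String amazonName) throws IOException {
    try (S3Object s3object = s3client.getObject(new GetObjectRequest(bucketName, amazonName))) {
        return processContent(s3object.getObjectContent()); // hypothetical consumer
    }
}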

I think the solution is to catch and process the ugly TimeoutException in my @ControllerAdvice class. I have done so, and so far it has not happened again in the app. I'll post a confirmation once I'm sure.
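For reference, a minimal sketch of such an advice class, assuming Spring MVC (the 503 status and message here are placeholders, not prescribed by any library):

@ControllerAdvice
public class S3ExceptionAdvice {

    // Map the Apache HTTP client's pool timeout, which bubbles up through
    // the AWS SDK, to a 503 response instead of a raw 500.
    @ExceptionHandler(ConnectionPoolTimeoutException.class)
    public ResponseEntity<String> handlePoolTimeout(ConnectionPoolTimeoutException e) {
        return ResponseEntity
                .status(org.springframework.http.HttpStatus.SERVICE_UNAVAILABLE)
                .body("S3 connection pool exhausted; please retry.");
    }
}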

Related

Uploading large files to an AWS S3 bucket without loading them into memory

I am trying to upload a 2 GB+ file to my bucket using MultipartFile and AmazonS3. The controller:
@PostMapping("/uploadFile")
public String uploadFile(@RequestPart(value = "file") MultipartFile file) throws Exception {
    String fileUploadResult = this.amazonClient.uploadFile(file);
    return fileUploadResult;
}
amazonClient-uploadFile:
public String uploadFile(MultipartFile multipartFile) throws Exception {
    StringBuilder fileUrl = new StringBuilder();
    try {
        File file = convertMultiPartToFile(multipartFile);
        String fileName = generateFileName(multipartFile);
        fileUrl.append(endpointUrl);
        fileUrl.append("/");
        fileUrl.append(bucketName);
        fileUrl.append("/");
        fileUrl.append(fileName);
        uploadFileTos3bucket(fileName, file);
        file.delete();
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return fileUrl.toString();
}
amazonClient-convertMultiPartToFile:
private File convertMultiPartToFile(MultipartFile file) throws IOException {
    File convFile = new File(file.getOriginalFilename());
    FileOutputStream fos = new FileOutputStream(convFile);
    fos.write(file.getBytes());
    fos.close();
    return convFile;
}
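As an aside, the getBytes() call above is what pulls the whole upload into memory. A sketch of the same conversion using MultipartFile.transferTo, which streams to disk instead:

// Sketch: stream the upload straight to a file; no byte[] of the full body.
private File convertMultiPartToFile(MultipartFile file) throws IOException {
    File convFile = new File(file.getOriginalFilename());
    file.transferTo(convFile);
    return convFile;
}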
amazonClient-uploadFileTos3bucket:
private void uploadFileTos3bucket(String fileName, File file) {
    s3client.putObject(
            new PutObjectRequest(bucketName, fileName, file)
                    .withCannedAcl(CannedAccessControlList.PublicRead));
}
The process works well for small files; to deal with large ones I defined the following in my application.properties:
spring.servlet.multipart.max-file-size=5GB
spring.servlet.multipart.max-request-size=5GB
spring.servlet.multipart.enabled=true
And I am getting java.lang.OutOfMemoryError, so:
1- How can I upload the file without loading it into memory (not sure that is possible)?
2- How can I upload it in smaller parts?
Exception:
{"timestamp":"2018-11-12T12:50:38.250+0000","status":500,"error":"Internal Server Error","message":"No message available","trace":"java.lang.OutOfMemoryError\r\n\tat java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)\r\n\tat java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)\r\n\tat java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)\r\n\tat java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)\r\n\tat org.springframework.util.StreamUtils.copy(StreamUtils.java:143)\r\n\tat org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:110)\r\n\tat org.springframework.util.FileCopyUtils.copyToByteArray(FileCopyUtils.java:162)\r\n\tat org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile.getBytes(StandardMultipartHttpServletRequest.java:245)\r\n\tat com.siemens.plm.it.aws.connect.handels.AmazonClient.convertMultiPartToFile(AmazonClient.java:51)\r\n\tat com.siemens.plm.it.aws.connect.handels.AmazonClient.uploadFile(AmazonClient.java:75)\r\n\tat com.siemens.plm.it.aws.connect.controllers.BucketController.uploadFile(BucketController.java:48)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.lang.reflect.Method.invoke(Method.java:498)\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:215)\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:142)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)\r\n\tat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)\r\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)\r\n\tat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)\r\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:998)\r\n\tat org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:901)\r\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:660)\r\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:875)\r\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:741)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)\r\n\tat 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\r\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)\r\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)\r\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)\r\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)\r\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\r\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)\r\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)\r\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)\r\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)\r\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:770)\r\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)\r\n\tat org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\r\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\r\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\r\n\tat org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n","path":"/storage/uploadFile"}
I too had this problem, and I solved it by uploading the file directly to the bucket as a stream.
In case you are using a Java servlet, the lines below read the video from the form-data.
Part filePart = request.getPart("video");
// Now convert it to an InputStream
InputStream in = filePart.getInputStream();
I then made an S3 class; below is the relevant part of a function from it.
try {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
            .build();
    ObjectMetadata meta = new ObjectMetadata();
    // available() is only a safe length estimate when the whole body is buffered;
    // prefer the real content length when you have it
    meta.setContentLength(in.available());
    meta.setContentType("video/mp4");
    TransferManager tm = TransferManagerBuilder.standard()
            .withS3Client(s3Client)
            .build();
    PutObjectRequest request = new PutObjectRequest(bucketName, keyName, in, meta);
    Upload upload = tm.upload(request);
    // Optionally, you can wait for the upload to finish before continuing.
    upload.waitForCompletion();
    if (upload.isDone()) {
        System.out.println("Total bytes transferred: "
                + upload.getProgress().getBytesTransferred());
        tm.shutdownNow();
    }
} catch (AmazonServiceException e) {
    // The call was transmitted successfully, but Amazon S3 couldn't process
    // it, so it returned an error response.
    output = "Couldn't upload the file.";
    output_code = 0;
    System.out.println("Inside exception");
    e.printStackTrace();
} catch (SdkClientException e) {
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    output = "Couldn't upload the file.";
    output_code = 0;
    e.printStackTrace();
}
P.S. Don't forget to call setContentLength() on the ObjectMetadata; otherwise the SDK buffers the whole stream in memory to compute the length, which leads to the OOM error.
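Translating that back to the original Spring controller: MultipartFile already exposes the stream and the exact size, so the temporary File can be skipped entirely. A minimal sketch, assuming the existing s3client and bucketName fields:

// Sketch: stream a Spring MultipartFile straight to S3 with no temp file
// and no full copy in memory. getSize() gives the exact content length.
private void uploadFileTos3bucket(String fileName, MultipartFile multipartFile) throws IOException {
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentLength(multipartFile.getSize());
    meta.setContentType(multipartFile.getContentType());
    try (InputStream in = multipartFile.getInputStream()) {
        s3client.putObject(new PutObjectRequest(bucketName, fileName, in, meta)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    }
}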

Connect AWS SQS to Apache-Flink

Why is AWS SQS not a default connector for Apache Flink? Is there some technical limitation to doing this, or was it just something that didn't get done? I want to implement this; any pointers would be appreciated.
Probably too late for an answer to the original question... I wrote an SQS consumer as a SourceFunction, using the Java Message Service library for SQS:
public class SQSConsumer extends RichParallelSourceFunction<String> {
    private volatile boolean isRunning;
    private transient AmazonSQS sqs;
    private transient SQSConnectionFactory connectionFactory;
    private transient ExecutorService consumerExecutor;

    @Override
    public void open(Configuration parameters) throws Exception {
        String region = ...
        AWSCredentialsProvider credsProvider = ...
        // maybe use a blocking, array-backed thread pool to handle surges?
        consumerExecutor = Executors.newCachedThreadPool();
        ClientConfiguration clientConfig = PredefinedClientConfigurations.defaultConfig();
        this.sqs = AmazonSQSAsyncClientBuilder.standard().withRegion(region).withCredentials(credsProvider)
                .withClientConfiguration(clientConfig)
                .withExecutorFactory(() -> consumerExecutor).build();
        this.connectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), sqs);
        this.isRunning = true;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        SQSConnection connection = connectionFactory.createConnection();
        // ack each msg explicitly
        Session session = connection.createSession(false, SQSSession.UNORDERED_ACKNOWLEDGE);
        Queue queue = session.createQueue(<queueName>);
        MessageConsumer msgConsumer = session.createConsumer(queue);
        msgConsumer.setMessageListener(msg -> {
            try {
                String msgId = msg.getJMSMessageID();
                String evt = ((TextMessage) msg).getText();
                ctx.collect(evt);
                msg.acknowledge();
            } catch (JMSException e) {
                // log and move on to the next msg, or bail with an exception;
                // have a dead-letter queue configured so this message is not lost:
                // the msg is not acknowledged, so it may be picked up again by
                // another consumer instance
            }
        });
        // check if we were cancelled
        if (!isRunning) {
            return;
        }
        connection.start();
        while (!consumerExecutor.awaitTermination(1, TimeUnit.MINUTES)) {
            // keep waiting
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
        // this method might be called before the task actually starts running
        if (sqs != null) {
            sqs.shutdown();
        }
        if (consumerExecutor != null) {
            consumerExecutor.shutdown();
            try {
                consumerExecutor.awaitTermination(1, TimeUnit.MINUTES);
            } catch (Exception e) {
                // log e
            }
        }
    }

    @Override
    public void close() throws Exception {
        cancel();
        super.close();
    }
}
Note that if you are using a standard SQS queue, you may have to de-duplicate the messages, depending on whether exactly-once guarantees are required; one possible approach is sketched after the reference below.
Reference:
Working with JMS and Amazon SQS
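As a sketch of the de-duplication idea, assuming the source is changed to emit (messageId, body) pairs and the stream is keyed by the ID (all names here are illustrative, and the keyed state grows unboundedly unless you configure state TTL):

// Sketch: drop events whose message ID has been seen before on this key.
public class DedupFilter extends RichFilterFunction<Tuple2<String, String>> {
    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
    }

    @Override
    public boolean filter(Tuple2<String, String> idAndBody) throws Exception {
        if (Boolean.TRUE.equals(seen.value())) {
            return false; // duplicate delivery, drop it
        }
        seen.update(true);
        return true;
    }
}

// Usage: stream.keyBy(t -> t.f0).filter(new DedupFilter())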
At the moment, there is no connector for AWS SQS in Apache Flink. Have a look at the already existing connectors. I assume you already know about this, and I would like to give some pointers. I was also looking for an SQS connector recently and found this mail thread.
The Apache Kinesis connector is somewhat similar to what you could implement for this. See whether you can get a start using that connector.

SolrJ - NPE when accessing SolrCloud

I'm running the following test code against SolrCloud using the SolrJ library:
public static void main(String[] args) {
    String zkHostString = "192.168.56.99:2181";
    SolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
    List<MyBean> beans = new ArrayList<>();
    for (int i = 0; i < 10000; i++) {
        // creating a bunch of MyBean to be indexed
        // and temporarily storing them in a List;
        // no Solr operations performed here
    }
    System.out.println("Adding...");
    try {
        solr.addBeans("myCollection", beans);
    } catch (IOException | SolrServerException e1) {
        e1.printStackTrace();
    }
    System.out.println("Committing...");
    try {
        solr.commit("myCollection");
    } catch (SolrServerException | IOException e) {
        e.printStackTrace();
    }
}
This code fails due to the following exception
Exception in thread "main" java.lang.NullPointerException
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1175)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:357)
at org.apache.solr.client.solrj.SolrClient.addBeans(SolrClient.java:312)
at com.togather.solr.testing.SolrIndexingTest.main(SolrIndexingTest.java:83)
That is the full stack trace of the exception. I just upgraded from a standalone Solr installation to SolrCloud (with a single external ZooKeeper instance, not the embedded one). With standalone Solr, the same code (with just some minor differences, like the host URL) used to work perfectly.
The NPE sends me inside the SolrJ library, which I don't know.
Can anyone help me understand where the problem originates and how I can overcome it? Given my inexperience and the brevity of the error message, I can't figure out where to start.
Looking at your code, I would suggest specifying the default collection first:
CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
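With the default collection set, the calls in the test no longer need the collection argument; a minimal sketch using the same zkHostString:

CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("myCollection");
solr.addBeans(beans);  // goes to myCollection
solr.commit();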
Regarding the NPE you're experiencing, it is very likely due to a communication error.
In these lines of CloudSolrClient, your exception is raised by the for loop over requestedCollections:
if (wasCommError) {
    // it was a communication error. it is likely that
    // the node to which the request was to be sent is down. So, expire the state
    // so that the next attempt would fetch the fresh state
    // just re-read state for all of them, if it has not been retried
    // in retryExpiryTime time
    for (DocCollection ext : requestedCollections) {
        ExpiringCachedDocCollection cacheEntry = collectionStateCache.get(ext.getName());
        if (cacheEntry == null) continue;
        cacheEntry.maybeStale = true;
    }
    if (retryCount < MAX_STALE_RETRIES) { // if it is a communication error, we must try again
        // maybe we have a stale version of the collection state,
        // and we could not get any information from the server;
        // it is probably not worth trying again and again because
        // the state would not have been updated
        return requestWithRetryOnStaleState(request, retryCount + 1, collection);
    }
}

Random "error reading MIME multipart body" in Web API

Occasionally, an upload of a gzipped file from a phone app to the web service fails with the following error:
Error reading MIME multipart body part.
at System.Net.Http.HttpContentMultipartExtensions.<MultipartReadAsync>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpContentMultipartExtensions.<ReadAsMultipartAsync>d__0`1.MoveNext()
The endpoint itself is pretty basic:
[System.Web.Http.HttpPost]
public async Task<HttpResponseMessage> TechAppUploadPhoto()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        return Request.CreateErrorResponse(HttpStatusCode.UnsupportedMediaType, "The request isn't valid!");
    }
    try
    {
        var provider = new MultipartMemoryStreamProvider();
        await Request.Content.ReadAsMultipartAsync(provider);
        foreach (StreamContent file in provider.Contents)
        {
            Stream dataStream = await file.ReadAsStreamAsync();
            String fileName = file.Headers.ContentDisposition.FileName;
            fileName = [unique name];
            String filePath = Path.Combine(ConfigurationManager.AppSettings["PhotoUploadLocation"], fileName);
            using (var fileStream = File.Create(filePath))
            {
                dataStream.Seek(0, SeekOrigin.Begin);
                dataStream.CopyTo(fileStream);
                fileStream.Close();
                dataStream.Close();
            }
            // Extract with ZipArchive, overwriting existing files
            using (ZipArchive archive = ZipFile.OpenRead(filePath))
            {
                foreach (ZipArchiveEntry entry in archive.Entries)
                {
                    entry.ExtractToFile(Path.Combine(ConfigurationManager.AppSettings["PhotoUploadLocation"], entry.FullName), true);
                }
            }
            File.Delete(filePath);
        }
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }
    catch (Exception e)
    {
        return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, "PostCatchErr: " + e.Message + e.StackTrace);
    }
}
This error appears to be pretty random and hard to recreate. Unfortunately, I didn't write either end and don't have much experience here, but it doesn't appear to be related to the size of the upload either: the limit in the web config is large, and files bigger than the failing ones upload fine. Is there something I'm missing that could cause this issue? The body is only read once, which is the other possible cause I've found. Any thoughts?
This answer on a similar question helped me.
The value given there for maxRequestLength is incorrect, though: it should be 30000 for 30 MB, as pointed out in one of the comments on the answer.
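For reference, maxRequestLength lives under system.web in web.config and is measured in kilobytes, so 30 MB is 30000. A sketch of just the relevant fragment (if you are behind IIS, request filtering's maxAllowedContentLength, in bytes, can also cap uploads):

<system.web>
  <!-- maxRequestLength is in KB: 30000 KB is roughly 30 MB -->
  <httpRuntime maxRequestLength="30000" />
</system.web>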

Not able to invoke message bean from RESTful webservice

I am trying to create a RESTful webservice as a client of a message-driven bean, but when I invoke the RESTful method it gives me the following error on this line:
Connection connection = connectionFactory.createConnection();
SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.NullPointerException
at com.quant.ws.GetConnection.startThread(GetConnection.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Here is the code:
// Inside the class declaration:
@Resource(mappedName = "jms/testFactory")
private static ConnectionFactory connectionFactory;
@Resource(mappedName = "jms/test")
private static Queue queue;
The web service method:
@GET
@Path("startThread")
@Produces("application/xml")
public String startThread()
{
    try{
        Connection connection = connectionFactory.createConnection(); // this is line 99
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);
        Message message = session.createTextMessage();
        message.setStringProperty("name", "start");
        producer.send(message);
    }catch(JMSException e){
        System.out.println(e);
    }
    return "<data>START</data>";
}
Do I need to specify anything in sun-web.xml or web.xml?
I think it depends on your application server setup. Did you inject the connectionFactory somewhere above, or do a context lookup?
connectionFactory is null. It needs to be initialised somehow; note that @Resource injection is generally not applied to static fields, which may be why it was never set.
I have solved it by replacing the code with the following explicit JNDI lookups:
try{
    InitialContext ctx = new InitialContext();
    queue = (Queue) ctx.lookup("jms/test");
    QueueConnectionFactory factory =
            (QueueConnectionFactory) ctx.lookup("jms/testFactory");
    Connection connection = factory.createConnection();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(queue);
    Message message = session.createTextMessage();
    message.setStringProperty("name", "start");
    producer.send(message);
}
catch(NamingException e){
    System.out.println(e);
}
catch(JMSException e){
    System.out.println(e);
}