java.io.IOException: Unable to write on channel java.nio.channels.SocketChannel in Play Framework - web-services

I'm trying to upload a byte array to a remote WS with multipart/form-data in the Play Framework using Scala. My code is:
//create byte array from file
val myFile = new File(pathName)
val in = new FileInputStream(myFile)
val myByteArray = new Array[Byte](myFile.length.toInt)
in.read(myByteArray)
in.close()
// create parts
val langPart = new StringPart("lang", "pt")
val taskPart = new StringPart("task","echo")
val audioPart = new ByteArrayPart("sbytes", "myFilename", myByteArray, "default/binary", "UTF-8")
val client: AsyncHttpClient = WS.client
val request = client.preparePost(RemoteWS)
  .addHeader("Content-Type", "multipart/form-data")
  .addBodyPart(audioPart)
  .addBodyPart(taskPart)
  .addBodyPart(langPart)
  .build()
val result = Promise[Response]()
// execute request
client.executeRequest(request, new AsyncCompletionHandler[AHCResponse] {
  override def onCompleted(response: AHCResponse): AHCResponse = {
    result.success(Response(response))
    response
  }
  override def onThrowable(t: Throwable) {
    result.failure(t)
  }
})
// handle async response
result.future.map(response => {
  Logger.debug("response: " + response.getAHCResponse.getResponseBody("UTF-8"))
})
Every time I execute this request, it throws this exception:
java.io.IOException: Unable to write on channel java.nio.channels.SocketChannel
The remote server is OK; I can make successful requests using, for example, Postman.
I'm using Play Framework:
play 2.2.3 built with Scala 2.10.3 (running Java 1.8.0_05)
Version of AsyncHttpClient:
com.ning:async-http-client:1.7.18
Any help is welcome!

Related

How do I process Flux<Object> returned by WebFlux on the client side in plain Java code

I have written a web service using Spring WebFlux and the reactive MongoDB connectors, but my client could be a non-Spring-based client.
So, how do I write plain Java code to consume the Flux on the client side?
Server-side code:
@GetMapping(value = "/findAll")
public Flux<Security> findAll() {
    Flux<Security> flux = service.findAll();
    return flux;
}
Client-side code:
public static void sendRequest() {
    try {
        long start = System.currentTimeMillis();
        for (int i = 0; i <= 100; i++) {
            long start1 = System.currentTimeMillis();
            URL url = new URL("http://localhost:8080/findAll/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/stream+json");
            if (conn.getResponseCode() == 200) {
                // url = new URL("http://localhost:8182/status/");
                String json = "";
                try (BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                    json = br.lines().collect(Collectors.joining());
                }
                conn.disconnect();
                System.out.println("size of each Security: " + json.length());
                ArrayList<Security> list = getListOfsecurities(json);
                System.out.println(list.get(0).getIsin());
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The above client-side code gives me an empty array.
I don't think it is possible this way. It's an asynchronous response; at the least you have to use Java 5 Futures to consume an asynchronous response.
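That said, an application/stream+json response from WebFlux is typically a sequence of JSON objects separated by newlines, and br.lines().collect(Collectors.joining()) in the client above concatenates them with no separator, so the result is not a valid JSON array, which would explain getListOfsecurities returning an empty list. A rough plain-Java sketch (no Spring on the client) that instead consumes the stream element by element as lines arrive; it assumes each Security arrives as one JSON object per line, and parseSecurity is a hypothetical placeholder for whatever JSON mapper you use:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StreamingClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/findAll/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // ask the server for the streaming representation
        conn.setRequestProperty("Accept", "application/stream+json");
        if (conn.getResponseCode() == 200) {
            try (BufferedReader br = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                // each non-empty line is expected to hold one JSON-encoded Security
                while ((line = br.readLine()) != null) {
                    if (line.isEmpty()) {
                        continue;
                    }
                    Security security = parseSecurity(line);
                    System.out.println(security.getIsin());
                }
            }
        }
        conn.disconnect();
    }

    // hypothetical helper: plug in a JSON mapper here,
    // e.g. Jackson's new ObjectMapper().readValue(json, Security.class)
    private static Security parseSecurity(String json) {
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}
This only streams if the server actually negotiates application/stream+json for the request; if it instead renders the Flux as a single JSON array, the loop above simply reads one big line and you are back to parsing an array.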

Akka actor for http request Java

Hello, I am looking for a simple example in Akka (Java) of creating an HTTP client in an actor. So far I am able to create a request and get the response HTTP entity. I need to migrate it to an actor, so I can call multiple actors in parallel with a timeout.
final ActorSystem system = ActorSystem.create();
final Materializer materializer = ActorMaterializer.create(system);
final List<HttpRequest> httpRequests = Arrays.asList(
        HttpRequest.create(url) // Content-Encoding: gzip in response
);
Unmarshaller<ByteString, BitTweet> unmarshal = Jackson.byteStringUnmarshaller(BitTweet.class);
JsonEntityStreamingSupport support = EntityStreamingSupport.json();
final Http http = Http.get(system);
final Function<HttpResponse, HttpResponse> decodeResponse = response -> {
    // Pick the right coder
    final Coder coder;
    if (HttpEncodings.gzip().equals(response.encoding())) {
        coder = Coder.Gzip;
    } else if (HttpEncodings.deflate().equals(response.encoding())) {
        coder = Coder.Deflate;
    } else {
        coder = Coder.NoCoding;
    }
    // Decode the entity
    return coder.decodeMessage(response);
};
List<CompletableFuture<HttpResponse>> futureResponses = httpRequests.stream()
        .map(req -> http.singleRequest(req, materializer)
                .thenApply(decodeResponse))
        .map(CompletionStage::toCompletableFuture)
        .collect(Collectors.toList());
for (CompletableFuture<HttpResponse> futureResponse : futureResponses) {
    final HttpResponse httpResponse = futureResponse.get();
    system.log().info("response is: " + httpResponse.entity()
            .toStrict(1, materializer)
            .toCompletableFuture()
            .get());
    HttpEntity.Strict entity_ = HttpEntities.create(ContentTypes.APPLICATION_JSON, httpResponse.entity().toString());
    Source<BitTweet, Object> bitTweets =
            entity_.getDataBytes()
                    .via(support.framingDecoder()) // apply JSON framing
                    .mapAsync(1, // unmarshal each element
                            bs -> unmarshal.unmarshal(bs, materializer)
                    );
}
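One possible shape for the actor part, as a sketch only (classic untyped Akka; the HttpClientActor name is made up here, and it assumes the same Akka HTTP version as above where Http.singleRequest(request, materializer) exists): keep the Http extension and materializer inside the actor and pipe the eventual HttpResponse back to the asker, so several such actors can be queried in parallel with ask and a timeout.
import akka.actor.AbstractActor;
import akka.actor.Props;
import akka.http.javadsl.Http;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.pattern.PatternsCS;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import java.util.concurrent.CompletionStage;

public class HttpClientActor extends AbstractActor {
    private final Http http = Http.get(getContext().getSystem());
    // one materializer per actor keeps the sketch simple; a real app would usually share one
    private final Materializer materializer = ActorMaterializer.create(getContext().getSystem());

    public static Props props() {
        return Props.create(HttpClientActor.class);
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(HttpRequest.class, request -> {
                    // fire the request and pipe the eventual HttpResponse back to the asker
                    CompletionStage<HttpResponse> response = http.singleRequest(request, materializer);
                    PatternsCS.pipe(response, getContext().dispatcher()).to(getSender());
                })
                .build();
    }
}
On the calling side you would then spawn one such actor per request and use PatternsCS.ask(actorRef, HttpRequest.create(url), timeoutMillis) to get a CompletionStage that either completes with the HttpResponse or fails on timeout, and decode/unmarshal the entity exactly as in the loop above.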

akka stream custom graph stage

I have an Akka stream from a WebSocket (along the lines of "akka stream consume web socket") and would like to build a reusable graph stage with an inlet (the WebSocket stream), a FlowShape that adds an additional field to the JSON specifying the origin, i.e.
{
...,
"origin":"blockchain.info"
}
and an outlet to Kafka.
I face the following 3 problems:
unable to wrap my head around creating a custom Inlet from the WebSocket flow
unable to integrate Kafka directly into the stream (see the code below)
not sure whether the transformer that adds the additional field needs to deserialize the JSON in order to add the origin
The sample project (flow only) looks like:
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher
val incoming: Sink[Message, Future[Done]] =
  Flow[Message].mapAsync(4) {
    case message: TextMessage.Strict =>
      println(message.text)
      Future.successful(Done)
    case message: TextMessage.Streamed =>
      message.textStream.runForeach(println)
    case message: BinaryMessage =>
      message.dataStream.runWith(Sink.ignore)
  }.toMat(Sink.last)(Keep.right)
val producerSettings = ProducerSettings(system, new ByteArraySerializer, new StringSerializer)
  .withBootstrapServers("localhost:9092")
val outgoing = Source.single(TextMessage("{\"op\":\"unconfirmed_sub\"}")).concatMat(Source.maybe)(Keep.right)
val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("wss://ws.blockchain.info/inv"))
val ((completionPromise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .toMat(incoming)(Keep.both)
    // TODO not working integrating kafka here
    // .map(_.toString)
    // .map { elem =>
    //   println(s"PlainSinkProducer produce: ${elem}")
    //   new ProducerRecord[Array[Byte], String]("topic1", elem)
    // }
    // .runWith(Producer.plainSink(producerSettings))
    .run()
val connected = upgradeResponse.flatMap { upgrade =>
  if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
    Future.successful(Done)
  } else {
    throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
    system.terminate
  }
}
// kafka that works / writes dummy data
val done1 = Source(1 to 100)
  .map(_.toString)
  .map { elem =>
    println(s"PlainSinkProducer produce: ${elem}")
    new ProducerRecord[Array[Byte], String]("topic1", elem)
  }
  .runWith(Producer.plainSink(producerSettings))
One issue is around the incoming stage, which is modelled as a Sink where it should be modelled as a Flow, so that messages can subsequently be fed into Kafka.
Because incoming text messages can be Streamed, you can use the flatMapMerge combinator as follows to avoid the need to store entire (potentially big) messages in memory:
val incoming: Flow[Message, String, NotUsed] = Flow[Message].mapAsync(4) {
  case msg: TextMessage.Strict =>
    Future.successful(Some(msg.text))
  case TextMessage.Streamed(src) =>
    src.runFold("")(_ + _).map { msg => Some(msg) }
  case msg: BinaryMessage =>
    msg.dataStream.runWith(Sink.ignore)
    Future.successful(None)
}.collect {
  case Some(msg) => msg
}
At this point you have something that produces strings and can be connected to Kafka:
val addOrigin: Flow[String, String, NotUsed] = ???
val ((completionPromise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .via(incoming)
    .via(addOrigin)
    .map { elem =>
      println(s"PlainSinkProducer produce: ${elem}")
      new ProducerRecord[Array[Byte], String]("topic1", elem)
    }
    .toMat(Producer.plainSink(producerSettings))(Keep.both)
    .run()

How to call a .NET web service in BlackBerry?

I would like to call a .NET web service from my BlackBerry application. How can I call the web service from my app, which protocol is used, and which jar file do I have to use to call the web service? And how do I get the response from the web service in BlackBerry?
You can use something like this (you probably need to set up the correct request headers and cookies):
connection = (HttpConnection) Connector.open(url
        + ConnectionUtils.getConnectionString(), Connector.READ_WRITE);
connection.setRequestProperty("ajax", "true");
connection.setRequestProperty("Cookie", "JSESSIONID=" + jsessionId);
inputStream = connection.openInputStream();
byte[] responseData = new byte[10000];
int length = 0;
StringBuffer rawResponse = new StringBuffer();
while (-1 != (length = inputStream.read(responseData))) {
    rawResponse.append(new String(responseData, 0, length));
}
int responseCode = connection.getResponseCode();
if (responseCode != HttpConnection.HTTP_OK) {
    throw new IOException("HTTP response code: " + responseCode);
}
responseString = rawResponse.toString();
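For a .NET web service specifically, the protocol is usually SOAP over HTTP (for classic .asmx services), so instead of the plain read above you POST a SOAP envelope and read the XML response back. A rough sketch using the same Java ME HttpConnection API; the URL, SOAPAction and envelope string are placeholders you would take from the service's WSDL, and any BlackBerry transport suffix still has to be appended to the URL as in the snippet above:
import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

public class SoapClient {
    // POSTs a SOAP envelope and returns the raw XML response as a string
    public static String callService(String url, String soapAction, String envelope) throws Exception {
        HttpConnection connection = null;
        OutputStream out = null;
        InputStream in = null;
        try {
            connection = (HttpConnection) Connector.open(url, Connector.READ_WRITE);
            connection.setRequestMethod(HttpConnection.POST);
            connection.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            connection.setRequestProperty("SOAPAction", soapAction);
            byte[] body = envelope.getBytes("UTF-8");
            out = connection.openOutputStream();
            out.write(body);
            if (connection.getResponseCode() != HttpConnection.HTTP_OK) {
                throw new Exception("HTTP response code: " + connection.getResponseCode());
            }
            in = connection.openInputStream();
            StringBuffer response = new StringBuffer();
            byte[] buffer = new byte[1024];
            int length;
            while ((length = in.read(buffer)) != -1) {
                response.append(new String(buffer, 0, length));
            }
            return response.toString();
        } finally {
            if (out != null) out.close();
            if (in != null) in.close();
            if (connection != null) connection.close();
        }
    }
}
There is no standard jar on the device that generates SOAP stubs for you; if hand-building envelopes gets unwieldy, a third-party Java ME SOAP library such as kSOAP2 is the usual alternative, and the XML that comes back still has to be parsed to pull out the result.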

Why does SQL Server CLR procedure hang in GetResponse() call to web service

Environment: C#, .NET 3.5, SQL Server 2005
I have a method that works in a stand-alone C# console application project. It creates an XMLElement from data in the database and uses a private method to send it to a web service on our local network. When run from VS in this test project, it runs in < 5 seconds.
I copied the class into a CLR project, built it, and installed it in SQL Server (WITH PERMISSION_SET = EXTERNAL_ACCESS). The only difference is the SqlContext.Pipe.Send() calls that I added for debugging.
I am testing it by using an EXECUTE command on the stored procedure (in the CLR) from an SSMS query window. It never returns. When I stop execution of the call after a minute, the last thing displayed is "Calling GetResponse() using http://servername:53694/odata.svc/Customers/". Any ideas as to why the GetResponse() call doesn't return when executing within SQL Server?
private static string SendPost(XElement entry, SqlString url, SqlString entityName)
{
    // Send the HTTP request
    string serviceURL = url.ToString() + entityName.ToString() + "/";
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(serviceURL);
    request.Method = "POST";
    request.Accept = "application/atom+xml,application/xml";
    request.ContentType = "application/atom+xml";
    request.Timeout = 20000;
    request.Proxy = null;
    using (var writer = XmlWriter.Create(request.GetRequestStream()))
    {
        entry.WriteTo(writer);
    }
    try
    {
        SqlContext.Pipe.Send("Calling GetResponse() using " + request.RequestUri);
        WebResponse response = request.GetResponse();
        SqlContext.Pipe.Send("Back from GetResponse()");
        /*
        string feedData = string.Empty;
        Stream stream = response.GetResponseStream();
        using (StreamReader streamReader = new StreamReader(stream))
        {
            feedData = streamReader.ReadToEnd();
        }
        */
        HttpStatusCode StatusCode = ((HttpWebResponse)response).StatusCode;
        response.Close();
        if (StatusCode == HttpStatusCode.Created /* 201 */ )
        {
            return "Created # Location= " + response.Headers["Location"];
        }
        return "Creation failed; StatusCode=" + StatusCode.ToString();
    }
    catch (WebException ex)
    {
        return ex.Message.ToString();
    }
    finally
    {
        if (request != null)
            request.Abort();
    }
}
The problem turned out to be the creation of the request content from the XML. The original:
using (var writer = XmlWriter.Create(request.GetRequestStream()))
{
    entry.WriteTo(writer);
}
The working replacement:
using (Stream requestStream = request.GetRequestStream())
{
    using (var writer = XmlWriter.Create(requestStream))
    {
        entry.WriteTo(writer);
    }
}
You need to dispose the WebResponse; otherwise, after a few calls it starts timing out.
You are asking for trouble doing this in the CLR. And you say you are calling this from a trigger? This belongs in the application tier.
Stuff like this is why, when the CLR functionality came out, DBAs were very concerned about how it would be misused.