WSO2 ESB VFS transport expects a SOAP envelope

I am trying to do an FTP-based integration using WSO2 ESB. I can transfer files from one FTP location to another using the VFS transport, but WSO2 ESB expects a SOAP envelope inside the file I am transferring.
That will not work if I am transferring an image.
How do I transfer images or other files without a SOAP envelope using the VFS transport?
The error below occurs if I transfer any file without a SOAP envelope:
[2013-06-07 14:01:31,314] ERROR - VFSTransportListener Error processing File URI : ftp://isova1:admin#10.208.29.144/isova.png
org.apache.axiom.om.OMException: com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 start byte 0x89 (at char #1, byte #-1)
    at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:296)
    at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.getSOAPEnvelope(StAXSOAPModelBuilder.java:204)
    at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.<init>(StAXSOAPModelBuilder.java:154)
    at org.apache.axiom.om.impl.AbstractOMMetaFactory.createStAXSOAPModelBuilder(AbstractOMMetaFactory.java:73)
    at org.apache.axiom.om.impl.AbstractOMMetaFactory.createSOAPModelBuilder(AbstractOMMetaFactory.java:79)
    at org.apache.axiom.om.OMXMLBuilderFactory.createSOAPModelBuilder(OMXMLBuilderFactory.java:196)
    at org.apache.axis2.builder.SOAPBuilder.processDocument(SOAPBuilder.java:55)
    at org.apache.synapse.transport.vfs.VFSTransportListener.processFile(VFSTransportListener.java:558)
    at org.apache.synapse.transport.vfs.VFSTransportListener.scanFileOrDirectory(VFSTransportListener.java:312)
    at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:158)
    at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:107)
    at org.apache.axis2.transport.base.AbstractPollingTransportListener$1$1.run(AbstractPollingTransportListener.java:67)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
Caused by: com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 start byte 0x89 (at char #1, byte #-1)
    at com.ctc.wstx.sr.StreamScanner.constructFromIOE(StreamScanner.java:625)
    at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:994)
    at com.ctc.wstx.sr.StreamScanner.getNext(StreamScanner.java:754)
    at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:1977)
    at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1114)
    at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
    at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
    at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
    at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:681)
    at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
    ... 15 more
Caused by: java.io.CharConversionException: Invalid UTF-8 start byte 0x89 (at char #1, byte #-1)
    at com.ctc.wstx.io.UTF8Reader.reportInvalidInitial(UTF8Reader.java:303)
    at com.ctc.wstx.io.UTF8Reader.read(UTF8Reader.java:189)
    at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:87)
    at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
    at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:988)
    ... 23 more

This most likely occurs because the VFS transport is not configured with the right ContentType.
When transferring binary data, use this:
<parameter name="transport.vfs.ContentType">application/octet-stream</parameter>
I just tried it on WSO2 ESB 4.0.3 and it worked fine for a PNG file, while using text/plain as the value of the transport.vfs.ContentType parameter threw the same exception you describe.
Check out the Synapse VFS service parameters here.
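For reference, a minimal proxy configuration along those lines might look like the sketch below. The proxy name, file URIs, and file-name pattern are placeholders; the parameter names are the standard Synapse VFS ones:

```xml
<proxy name="FileMoveProxy" transports="vfs">
  <!-- Poll this FTP location for incoming PNG files (placeholder URI) -->
  <parameter name="transport.vfs.FileURI">ftp://user:password@ftp.example.com/in</parameter>
  <!-- Treat the payload as opaque binary instead of parsing it as SOAP/XML -->
  <parameter name="transport.vfs.ContentType">application/octet-stream</parameter>
  <parameter name="transport.vfs.FileNamePattern">.*\.png</parameter>
  <parameter name="transport.vfs.ActionAfterProcess">DELETE</parameter>
  <target>
    <inSequence>
      <!-- Forward the binary message to the destination FTP location -->
      <send>
        <endpoint>
          <address uri="vfs:ftp://user:password@ftp.example.com/out"/>
        </endpoint>
      </send>
    </inSequence>
  </target>
</proxy>
```

With application/octet-stream the file content is wrapped as a binary node rather than parsed, which is why the Woodstox UTF-8 error disappears.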

Related

Invalid function argument when setting receive buffer size with Netty

Starting up microservices with WSO2 MSF4J throws error 4022 (a socket exception) on one NonStop machine but works fine on multiple other machines, including J- and L-series. We are currently using MSF4J 2.1.0, WSO2 5.1, and Netty 4.0.3. Most settings are left at their defaults; we only provide the configuration values required to start the service.
We have tried setting the TCP stack to IPv6 and it still fails; we made sure the socket port is available and verified it with a test program which shows the port can be used; and we tried requesting more bytes than the system's available receive buffer (which Java reports as a maximum of about 1 MB), which did not reproduce the symptom unless done directly in C code.
We are at a loss for what to try next. Is it possible to set Netty buffer sizes with a parameter when invoking the service?
The expected outcome is that the service starts up on the port.
The stack trace received is:
Exception: java.net.SocketException: Invalid function argument (errno:4022)
2019-06-11 11:28:02 DEBUG - io.netty.channel.ChannelException: java.net.SocketException: Invalid function argument (errno:4022)
    at io.netty.channel.socket.DefaultServerSocketChannelConfig.setReceiveBufferSize(DefaultServerSocketChannelConfig.java:123)
    at io.netty.channel.socket.DefaultServerSocketChannelConfig.setOption(DefaultServerSocketChannelConfig.java:78)
    at io.netty.channel.DefaultChannelConfig.setOptions(DefaultChannelConfig.java:113)
    at io.netty.bootstrap.ServerBootstrap.init(ServerBootstrap.java:152)
    at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:308)
    at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:271)
    at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:267)
    at org.wso2.carbon.transport.http.netty.listener.NettyListener.startTransport(NettyListener.java:103)
    at org.wso2.carbon.transport.http.netty.listener.NettyListener.start(NettyListener.java:69)
    at org.wso2.carbon.kernel.transports.CarbonTransport.startTransport(CarbonTransport.java:47)
    at java.util.HashMap$Values.forEach(HashMap.java:980)
    at org.wso2.carbon.kernel.transports.TransportManager.startTransports(TransportManager.java:61)
    at org.wso2.msf4j.MicroservicesRunner.start(MicroservicesRunner.java:191)
    at com.xypro.nonstop.main.AppInitializer.main(AppInitializer.java:147)
Caused by: java.net.SocketException: Invalid function argument (errno:4022)
    at sun.nio.ch.Net.setIntOption0(Native Method)
    at sun.nio.ch.Net.setSocketOption(Net.java:334)
    at sun.nio.ch.ServerSocketChannelImpl.setOption(ServerSocketChannelImpl.java:151)
    at sun.nio.ch.ServerSocketAdaptor.setReceiveBufferSize(ServerSocketAdaptor.java:190)
    at io.netty.channel.socket.DefaultServerSocketChannelConfig.setReceiveBufferSize(DefaultServerSocketChannelConfig.java:121)
    ... 13 more
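The trace shows the failure happening when Netty pushes its receive-buffer setting down through the JDK's ServerSocketChannel. A minimal plain-JDK probe (the 1 MiB value is an arbitrary example) exercises the same call, which can help isolate whether the OS, the JDK port, or the MSF4J configuration is at fault:

```java
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class RcvBufProbe {
    public static void main(String[] args) throws Exception {
        // Netty's setReceiveBufferSize ultimately calls the JDK's
        // SO_RCVBUF option (see the sun.nio.ch frames in the trace);
        // this probe performs the same call outside MSF4J.
        try (ServerSocketChannel ch = ServerSocketChannel.open()) {
            ch.setOption(StandardSocketOptions.SO_RCVBUF, 1 << 20); // 1 MiB, example value
            System.out.println("effective SO_RCVBUF = "
                    + ch.getOption(StandardSocketOptions.SO_RCVBUF));
        }
    }
}
```

In plain Netty the equivalent knob is `ServerBootstrap#option(ChannelOption.SO_RCVBUF, n)`; whether MSF4J 2.1.0 exposes that as a startup parameter is not something I can confirm from the question.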

The Flume event was truncated

I'm facing an issue where I receive messages from a Kafka source and wrote an interceptor to extract two fields (dataSource and businessType) from the Kafka message (JSON format) using gson.fromJson(). But I get the error below.
I want to know whether Flume truncates an event when it exceeds some limit. If yes, how do I raise that limit? My Kafka messages are always very long, about 60 KB.
Looking forward to a reply. Thanks in advance!
2015-12-09 11:48:05,665 (PollableSourceRunner-KafkaSource-apply) [ERROR - org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)] KafkaSource EXCEPTION, {} com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Unterminated string at line 1 column 4096
at com.google.gson.Gson.fromJson(Gson.java:809)
at com.google.gson.Gson.fromJson(Gson.java:761)
at com.google.gson.Gson.fromJson(Gson.java:710)
at com.xxx.flume.interceptor.JsonLogTypeInterceptor.intercept(JsonLogTypeInterceptor.java:43)
at com.xxx.flume.interceptor.JsonLogTypeInterceptor.intercept(JsonLogTypeInterceptor.java:61)
at org.apache.flume.interceptor.InterceptorChain.intercept(InterceptorChain.java:62)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:146)
at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:130)
Finally, I found the root cause by debugging the source code.
It happened because I tried to convert event.getBody() to a map using Gson, which is incorrect: event.getBody() returns a byte[], not a String, so it cannot be parsed directly. The correct code is:
String body = new String(event.getBody(), "UTF-8");
Map<String, Object> map = gson.fromJson(body, new TypeToken<Map<String, Object>>() {}.getType());
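To illustrate why the raw byte[] cannot be handed to a JSON parser, here is a plain-JDK sketch (the sample payload is made up, standing in for event.getBody()):

```java
import java.nio.charset.StandardCharsets;

public class EventBodyDecode {
    public static void main(String[] args) {
        // Made-up event body, standing in for event.getBody()
        byte[] body = "{\"dataSource\":\"app01\",\"businessType\":\"order\"}"
                .getBytes(StandardCharsets.UTF_8);

        // Wrong: byte[].toString() yields an array identity like "[B@1b6d3586",
        // not the JSON text, so a parser fed this sees malformed input.
        String wrong = body.toString();

        // Right: decode the bytes with the charset the producer used.
        String right = new String(body, StandardCharsets.UTF_8);

        System.out.println(wrong.startsWith("{")); // false
        System.out.println(right.startsWith("{")); // true
    }
}
```

Once the body is decoded this way, the apparent "truncation" disappears; Flume itself was not cutting the event short.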

SSL header returned by QueryContextAttributes is larger than expected using TLS 1.2

I'm updating a security library and noticing some strange behavior when completing the SSL handshake and then calling QueryContextAttributes to obtain the header, maximum message, and trailer sizes. When grbitEnabledProtocols is set to TLS 1.0, the handshake occurs as expected and a 5-byte header is reported for the SSL packet, as expected.
However, when I set the enabled-protocols value to 0 (letting Schannel select TLS 1.2), or explicitly enable TLS 1.2, the call that queries the message sizes returns a 21-byte header where 5 is expected.
Are there additional calls that should be made when moving to TLS 1.2 that were not necessary with TLS 1.0? Or does the 21-byte header indicate an error in the InitializeSecurityContext processing that I'm not currently catching?

Compojure: Trap 500 URL Decoding Error

I have a web service in Compojure with one route that looks like this:
"/myapp/dosomething/:input"
This works well when :input is something normal for the app to handle, such as a word or a string of digits, but when garbage is passed in, such as
GET /myapp/dosomething/%25%24%25%5E%24%25%5E%25%24%5E
I get a 500 error. My question is: how do I trap this and return 400 instead?
HTTP ERROR 500
Problem accessing /myapp/dosomething/%25%24%25%5E%24%25%5E%25%24%5E. Reason:
Server Error
Caused by:
java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - For input string: "$%"
at java.net.URLDecoder.decode(URLDecoder.java:192)
at clout.core$path_decode.invoke(core.clj:33)
at clout.core$path_decode.invoke(core.clj:31)
at clojure.core$map$fn__4207.invoke(core.clj:2485)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.RT.seq(RT.java:484)
at clojure.core$seq.invoke(core.clj:133)
at clojure.core$map$fn__4211.invoke(core.clj:2490)
at clojure.lang.LazySeq.sval(LazySeq.java:42)
at clojure.lang.LazySeq.seq(LazySeq.java:60)
at clojure.lang.RT.seq(RT.java:484)
at clojure.core$seq.invoke(core.clj:133)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30)
at clojure.core.protocols$fn__6026.invoke(protocols.clj:54)
at clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6177)
at clout.core$assoc_keys_with_groups.invoke(core.clj:54)
at clout.core.CompiledRoute.route_matches(core.clj:84)
at compojure.core$if_route$fn__472.invoke(core.clj:38)
at compojure.core$if_method$fn__465.invoke(core.clj:24)
at compojure.core$routing$fn__490.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2443)
at compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:619)
at compojure.core$routes$fn__494.invoke(core.clj:111)
at ring.middleware.keyword_params$wrap_keyword_params$fn__710.invoke(keyword_params.clj:27)
at ring.middleware.nested_params$wrap_nested_params$fn__749.invoke(nested_params.clj:65)
at ring.middleware.params$wrap_params$fn__682.invoke(params.clj:55)
at ring.middleware.multipart_params$wrap_multipart_params$fn__777.invoke(multipart_params.clj:103)
at ring.middleware.flash$wrap_flash$fn__1064.invoke(flash.clj:14)
at ring.middleware.session$wrap_session$fn__1055.invoke(session.clj:40)
at ring.middleware.cookies$wrap_cookies$fn__986.invoke(cookies.clj:160)
at vinws_chrome.servlet$_service$fn__116.invoke(servlet.clj:1)
at ring.util.servlet$make_service_method$fn__54.invoke(servlet.clj:145)
This issue occurs in WAR files generated by the Lein-Ring plugin and was recently fixed in Lein-Ring 0.8.6 as a result of this report.
The cause is a difference in how Java Servlets and Ring handle the path-info field. The Java Servlet specification keeps the context path URL-encoded but the path info decoded, while Ring treats both the :context and :path-info keys as URL-encoded.

com.ctc.wstx.exc.WstxParsingException: Text size limit

I am sending a big attachment to a CXF web service and I get the following exception:
Caused by: javax.xml.bind.UnmarshalException
- with linked exception:
[com.ctc.wstx.exc.WstxParsingException: Text size limit (134217728) exceeded
at [row,col {unknown-source}]: [1,134855131]]
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:426)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:362)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:339)
at org.apache.cxf.jaxb.JAXBEncoderDecoder.doUnmarshal(JAXBEncoderDecoder.java:769)
at org.apache.cxf.jaxb.JAXBEncoderDecoder.access$100(JAXBEncoderDecoder.java:94)
at org.apache.cxf.jaxb.JAXBEncoderDecoder$1.run(JAXBEncoderDecoder.java:797)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:795)
... 25 more
The issue seems to come from the Woodstox library, which reports
Text size limit (134217728) exceeded
Does anyone know if it is possible to increase this limit? If yes, how?
If it's coming from Woodstox like that, then you aren't sending it as an MTOM attachment. My first suggestion would be to switch to MTOM so the attachment can be handled outside the XML parsing. That is much more efficient, since it can be kept as an InputStream or similar instead of being held in memory.
If you want to keep it in the XML, you can set the property "org.apache.cxf.stax.maxTextLength" to a larger value. Keep in mind that data coming from the StAX parser like this is held in memory as a String or byte[] and will consume memory accordingly.
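As a sketch, one place to set that property is on a Spring-configured CXF endpoint; the endpoint id, implementor reference, address, and the 256 MB value below are all placeholders:

```xml
<jaxws:endpoint id="bigAttachmentService"
                implementor="#serviceBean"
                address="/service">
  <jaxws:properties>
    <!-- Raise Woodstox's per-text-node limit above the default 128 MB;
         268435456 (256 MB) is an arbitrary example value -->
    <entry key="org.apache.cxf.stax.maxTextLength" value="268435456"/>
  </jaxws:properties>
</jaxws:endpoint>
```

The same key can also be set programmatically wherever CXF endpoint or client properties are supplied; either way, the whole text node still ends up in memory, which is why MTOM remains the better option for large attachments.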