AppFabric error on objCache.Get(key)

I am able to put/add 70 MB of data to AppFabric successfully, but I get the following error when I try to retrieve it through the _cache.Get(key) method.
Please let me know what's wrong.
Also, is it even a good idea to cache data this large?
Error
"ErrorCode:SubStatus:The connection was terminated,
possibly due to server or network problems or serialized Object size
is greater than MaxBufferSize on server. Result of the request is
unknown."
Stack Trace
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
at Microsoft.ApplicationServer.Caching.DataCache.InternalGet(String key, DataCacheItemVersion& version, String region)
at Microsoft.ApplicationServer.Caching.DataCache.Get(String key)
Web Config at client
<dataCacheClient requestTimeout="150000" channelOpenTimeout="20000" maxConnectionsToServer="1">
<localCache isEnabled="false" sync="TimeoutBased" ttlValue="300" objectCount="10000"/>
<clientNotification pollInterval="300" maxQueueLength="10000"/>
<hosts><host name="MachineName" cachePort="22233"/></hosts>
<securityProperties mode="None" protectionLevel="None" />
<transportProperties connectionBufferSize="131072" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxOutputDelay="2" channelInitializationTimeout="60000" receiveTimeout="2147483647"/>
</dataCacheClient>
Configuration at Server
<transportProperties maxBufferPoolSize="2147483647" maxBufferSize="2147483647" />

You should consider an alternative to putting a 70 MB object in the cache. AppFabric is meant for very frequently accessed, volatile data, ideally with items below 256 KB (based on Microsoft's published benchmark results).
Note: the link to those benchmark results appears to be broken at the moment.

Related

Twisted ssh - session execCommand implementation

Good day. I apologize for asking about obvious things: I write PHP, and I know Python at the level of "I started learning it yesterday". I've already spent a few days on this, to no avail.
I downloaded the Twisted SSH server example for version 20.3 from https://docs.twistedmatrix.com/en/twisted-20.3.0/conch/examples/. Line 162 has an execCommand method that I need to implement to make it work. Then I noticed a comment in this method: "We don't support command execution sessions". Hence the question: does this comment apply only to the example, or to the Twisted library as a whole? That is, can I implement this method so that the example server works the way I need?
More information (I don't think it is required to answer the question above).
Why do I need it? I'm trying to put together an environment for writing functional (!) tests (there would be no such problems with unit tests, I guess). Our API uses an SSH client (phpseclib/SSH2) in 30%+ of its endpoints. Whatever I did, I only ever got 3 kinds of results, depending on how I implemented this method: (result: success, response: "" - empty; result: success, response: "1"; result: failed, response: "Unable to fulfill channel request at… SSH2.php:3853"). Those were for the SSH2 client. If the error occurs (3rd case), the server logs this in the terminal:
[SSHServerTransport, 0,127.0.0.1] Got remote error, code 11 reason: ""
[SSHServerTransport, 0,127.0.0.1] connection lost
I just found that this works:
def execCommand(self, protocol, cmd):
    protocol.write('Some text to return')
    protocol.session.conn.sendEOF(protocol.session)
If I don't send EOF, the client throws a timeout error.
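For what it's worth, Twisted's ISession interface documents the protocol argument of execCommand as a process protocol, so one way to actually run the requested command (rather than returning a fixed string) is to hand that protocol to reactor.spawnProcess, which wires the child's stdin/stdout/stderr to the SSH channel and reports the exit status back to the client. Below is a minimal sketch only, assuming a POSIX host with /bin/sh and the ExampleSession class from the 20.3 example; it is not tested against that exact file:

import os
from twisted.internet import reactor

class ExampleSession(object):
    # getPty / openShell / eofReceived / closed as in the original example

    def execCommand(self, protocol, cmd):
        # 'protocol' is the session's process protocol; spawnProcess connects
        # the child process's stdin/stdout/stderr to the SSH channel and sends
        # the exit status to the client when the command finishes.
        if isinstance(cmd, bytes):
            cmd = cmd.decode('utf-8')
        reactor.spawnProcess(protocol, '/bin/sh', ['/bin/sh', '-c', cmd],
                             env=os.environ)

With something like this, an exec request from phpseclib should receive the command's real output and a clean EOF/exit instead of a hand-written string.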

WSO2 EI log about Java heap space

I called an endpoint that returns a large response, and unfortunately the error message below shows up in the WSO2 carbon log. How can I solve it? Thank you.
TID: [-1] [] [2018-02-26 17:48:47,869] ERROR {org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore} - Error occurred while notifying the statistics observer {org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore}
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:181)
at com.esotericsoftware.kryo.io.Output.require(Output.java:160)
at com.esotericsoftware.kryo.io.Output.writeString_slow(Output.java:462)
at com.esotericsoftware.kryo.io.Output.writeString(Output.java:363)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:191)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.write(DefaultSerializers.java:184)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628)
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:113)
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:39)
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534)
at org.wso2.carbon.das.messageflow.data.publisher.publish.StatisticsPublisher.addEventData(StatisticsPublisher.java:116)
at org.wso2.carbon.das.messageflow.data.publisher.publish.StatisticsPublisher.process(StatisticsPublisher.java:67)
at org.wso2.carbon.das.messageflow.data.publisher.observer.DASMediationFlowObserver.updateStatistics(DASMediationFlowObserver.java:55)
at org.wso2.carbon.das.messageflow.data.publisher.data.MessageFlowObserverStore.notifyObservers(MessageFlowObserverStore.java:71)
at org.wso2.carbon.das.messageflow.data.publisher.services.MessageFlowReporterThread.processAndPublishEventList(MessageFlowReporterThread.java:225)
at org.wso2.carbon.das.messageflow.data.publisher.services.MessageFlowReporterThread.run(MessageFlowReporterThread.java:95)
From the out-of-memory error alone it is hard to say anything about the culprit. To find the actual root cause you have to analyze the heap dump (WSO2 servers automatically create one at CARBON_HOME/repository/logs/heap-dump.hprof) using an analysis tool such as Eclipse MAT or JProfiler.
However, if the response message is large, there is a possibility that the server goes OOM because it keeps and builds the response message in memory. If you want to process large messages, you can tune the heap memory allocation as described in the doc.
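As a rough, hedged illustration (the startup script name and the default values vary by product and version), the heap size is controlled by the standard JVM flags set in the server startup script under <CARBON_HOME>/bin, for example:

    -Xms1024m \
    -Xmx4096m \

i.e. raise -Xmx so that building a large response message in memory no longer exhausts the heap.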

Icecast2: Mountpoint buffers at browser playback for a long time

When I try to play my Icecast stream in a browser player, or directly by visiting the mountpoint, it takes up to one or two minutes until I hear any sound. Which Icecast setting affects this behaviour? My server hardware can't be the reason. Also, the problem affects only browsers; with a desktop player there is no buffering delay. When I use SHOUTcast, all web players start right away.
<icecast>
<location>Earth</location>
<admin>mail#test.test</admin>
<limits>
<clients>200</clients>
<sources>3</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>20</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>60</source-timeout>
<burst-on-connect>0</burst-on-connect>
<burst-size>65535</burst-size>
</limits>
<authentication>
<source-password>hackme</source-password>
<relay-password>hackme</relay-password>
<admin-user>admin</admin-user>
<admin-password>hackme</admin-password>
</authentication>
<!-- {%comment-open-if:icecast.directory.yp-url==""%} -->
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://yp.shoutcast.com</yp-url>
</directory>
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://www.oddsock.org/cgi-bin/yp-cgi</yp-url>
</directory>
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://dir.xiph.org/cgi-bin/yp-cgi</yp-url>
</directory>
<!-- {%comment-close-if:icecast.directory.yp-url==""%} -->
<hostname>test.test</hostname>
<port>8008</port>
<bind-address>1.1.1.1</bind-address>
<!-- Only define a <mount> section if you want to use advanced options,
like alternative usernames or passwords -->
<mount>
<bitrate>128</bitrate>
<mount-name>/mp3</mount-name>
<fallback-override>0</fallback-override>
<fallback-when-full>0</fallback-when-full>
<public>1</public>
<max-listeners>150</max-listeners>
<fallback-mount></fallback-mount>
<genre>alternative</genre>
<type>audio/mpeg</type>
</mount>
<mount>
<bitrate>64</bitrate>
<mount-name>/mobile</mount-name>
<fallback-override>0</fallback-override>
<fallback-when-full>0</fallback-when-full>
<public>1</public>
<max-listeners>50</max-listeners>
<fallback-mount></fallback-mount>
</mount>
<fileserve>1</fileserve>
<paths>
<basedir>/usr/local/centovacast/var/vhosts/tester/</basedir>
<logdir>var/log/</logdir>
<webroot>web/</webroot>
<adminroot>admin/</adminroot>
<pidfile>var/run/server.pid</pidfile>
<alias source="/" dest="/status.xsl"></alias>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<playlistlog>playlist.log</playlistlog>
<loglevel>2</loglevel>
<!-- 4 Debug, 3 Info, 2 Warn, 1 Error -->
</logging>
<security>
<chroot>0</chroot>
</security>
</icecast>
Try increasing the queue-size and burst-size:
<queue-size>1048576</queue-size>
<burst-size>943718</burst-size>
Note that your config also sets <burst-on-connect>0</burst-on-connect>; burst-on-connect is a deprecated on/off switch for the burst, so make sure it isn't disabling the burst (remove it or set it to 1) when you raise burst-size.
I can't explain it in more detail, but you can follow this thread: https://www.internet-radio.com/community/threads/very-slow-buffering-in-android.20139/
When MP3 is streamed, it is segmented arbitrarily and sent as-is. That is, Icecast doesn't do any inspection or alignment of frames, meaning there can be a slight delay on the client when re-syncing to servers with small buffers.
However, a delay of a minute is an extremely long time. There was an issue with LAME (3.99.1 I think?) which was creating streams that Chrome had difficulty syncing to. You didn't mention what encoder you're using, but if it's using LAME, try upgrading or downgrading the version. If you're using something else, try switching to LAME. Also, try different browsers to see if you're getting this problem just in Chrome.
Progressive MP3 over HTTP (same as your setup) is the most common form of internet radio, and it can work just fine if set up correctly.

Getting Hadoop Pipes to work on OS X with file access

My apologies if this question is trivial, but I haven't been able to solve it properly despite spending a few hours on Google…
I have compiled and installed hadoop 2.5.2 from source on OS X 10.8. I think all went well with this, even though getting the native libraries compiled was a bit of a pain.
To verify my installation, I tried the examples that ship with hadoop like this:
$ hadoop jar /Users/per/source/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 5
which gives me back
Job Finished in 1.774 seconds
Estimated value of Pi is 3.60000000000000000000
So that seems to indicate that hadoop is at least working, I think.
Since my end goal is to get this working with file I/O from a C++ program using pipes, I also did try the following as an intermediate step:
$ hdfs dfs -put etc/hadoop input
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'
Which also seems to work (as in producing the correct output).
Then, finally, I tried the wordcount.cpp example, and this is not working, although I fail to understand how to fix it. The error message is quite clear.
2015-10-21 13:38:33,539 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user per
2015-10-21 13:38:33,678 INFO org.apache.hadoop.security.JniBasedUnixGroupsMapping: Error getting groups for per: getgrouplist: error looking up group. 5 (Input/output error)
So obviously there is something I don't get with the file permissions, but what? My hdfs-site.xml file looks like this, where I have tried to turn off permissions checks altogether.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.web.ugi</name>
<value>per</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
But I am confused since I at least seem to be able to use grep on files in hdfs (per the built-in example above).
Any feedback would be greatly appreciated. I am running all of this as myself, so I haven't created a new user for only hadoop, since I am only running it locally on my laptop for now.
Edit: Update with some additional output from my terminal window, according to the discussion below:
<snip>
15/10/21 14:47:41 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/10/21 14:47:41 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/10/21 14:47:41 INFO mapred.LocalJobRunner: map task executor complete.
15/10/21 14:47:41 WARN mapred.LocalJobRunner: job_local676909674_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
<snip>
I can add more if that is relevant

XMPP file transfer with gloox

I'm currently working with gloox in order to send XMPP messages from my C++ program. I work on a local network with my private Prosody XMPP server.
Sending text messages between two clients works, but sending files does not. I tried the gloox examples (ft_rcv & ft_send), but they did not work either (obviously I modified the examples to match my configuration); I always get the same error:
<error type='cancel'><service-unavailable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
At the beginning I thought it was due to my Prosody server, so I added the following lines to the config file:
Component "proxy.jabberserver.local" "proxy65"
proxy65_address = "proxy.jabberserver.local"
proxy65_ports = { 7777 }
I tried different servers and different ports, but I'm currently at a dead end. If someone has an idea, it would be great.
Thank you
f->addStreamHost( JID("proxy.jabberserver.local"), "proxy.jabberserver.local", 7777 );
should do the trick. If not, post the full XML log.