Audio call crashes before the call activity has started.
06-24 10:28:11.336: D/RTCClient(30532): Create new session
06-24 10:28:11.336: D/dalvikvm(30532): Trying to load lib /data/app-lib/packagename-3/libjingle_peerconnection_so.so 0x42a22eb0
06-24 10:28:11.341: D/dalvikvm(30532): Added shared lib /data/app-lib/packagename-3/libjingle_peerconnection_so.so 0x42a22eb0
06-24 10:28:11.341: D/EglBase(30532): SDK version: 19
06-24 10:28:11.346: D/WEBRTCN(30532): SetRenderAndroidVM
06-24 10:28:11.346: E/rtc(30532): #
06-24 10:28:11.346: E/rtc(30532): # Fatal error in ../../talk/app/webrtc/java/jni/jni_helpers.cc, line 267
06-24 10:28:11.346: E/rtc(30532): # Check failed: ret
06-24 10:28:11.346: E/rtc(30532): #
06-24 10:28:11.346: E/rtc(30532): #
06-24 10:28:11.346: D/dalvikvm(30532): [SWE] ### S.LSI JIT optimization list BEGIN ###
06-24 10:28:11.346: D/dalvikvm(30532): [SWE] ### S.LSI JIT optimization list END ###
06-24 10:28:11.346: A/libc(30532): Fatal signal 6 (SIGABRT) at 0x00007744 (code=-6), thread 30532 (packagename)
The error occurs at the line
QBRTCSession newSessionWithOpponents = QBRTCClient.getInstance().createNewSessionWithOpponents(opponents, qbConferenceType);
in the method:
public void addConversationFragmentStartCall(List<Integer> opponents,
                                             QBRTCTypes.QBConferenceType qbConferenceType,
                                             Map<String, String> userInfo) {
    // init session for new call
    try {
        QBRTCSession newSessionWithOpponents = QBRTCClient.getInstance().createNewSessionWithOpponents(opponents, qbConferenceType);
        Log.d("Crash", "addConversationFragmentStartCall. Set session " + newSessionWithOpponents);
        setCurrentSession(newSessionWithOpponents);
        ConversationFragment fragment = new ConversationFragment();
        Bundle bundle = new Bundle();
        bundle.putIntegerArrayList(ApplicationSingleton.OPPONENTS,
                new ArrayList<Integer>(opponents));
        bundle.putInt(ApplicationSingleton.CONFERENCE_TYPE, qbConferenceType.getValue());
        bundle.putInt(START_CONVERSATION_REASON, StartConversetionReason.OUTCOME_CALL_MADE.ordinal());
        bundle.putString(CALLER_NAME, DataHolder.getUserNameByID(opponents.get(0)));
        for (String key : userInfo.keySet()) {
            bundle.putString("UserInfo:" + key, userInfo.get(key));
            Toast.makeText(this, userInfo.get(key), Toast.LENGTH_SHORT).show();
        }
        fragment.setArguments(bundle);
        getFragmentManager().beginTransaction().replace(R.id.fragment_container, fragment, CONVERSATION_CALL_FRAGMENT).commit();
    } catch (IllegalStateException e) {
        Toast.makeText(this, e.getMessage(), Toast.LENGTH_SHORT).show();
    }
}
My mistake: the session-creation method used in the application was outdated. The latest jar recognizes the new create-session method, but the WebRTC video call sample doesn't. After implementing the new session style, the audio call works successfully. Thank you.
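For reference, a minimal sketch of the newer session-creation style, assuming a later QuickBlox WebRTC SDK in which QBRTCClient.getInstance takes a Context; the getInstance(Context) overload and the prepareToProcessCalls() call are assumptions based on later QuickBlox samples, so verify them against your SDK version:

    // Hedged sketch; names assumed from later QuickBlox WebRTC samples
    QBRTCClient rtcClient = QBRTCClient.getInstance(getApplicationContext());
    rtcClient.prepareToProcessCalls(); // assumed: readies the client to create and receive sessions
    QBRTCSession session = rtcClient.createNewSessionWithOpponents(opponents, qbConferenceType);
    setCurrentSession(session);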
I'm using WSO2 IS 5.10 with Docker, and after making a change to the image, which has nothing to do with JSPs, opening the service provider list in the dashboard shows a white screen.
In the WSO2 log I found errors like this:
Servlet.service() for servlet [bridgeservlet] threw exception org.apache.jasper.JasperException: Unable to compile class for JSP:
An error occurred at line: [17] in the generated java file: [/home/wso2carbon/wso2is-5.10.0/lib/tomcat/work/Catalina/localhost/ROOT/proxytemp/hc_1893914628/org/apache/jsp/application/list_002dservice_002dproviders_jsp.java]
Only a type can be imported. org.wso2.carbon.identity.application.common.model.xsd.ApplicationBasicInfo resolves to a package
An error occurred at line: [118] in the jsp file: [/application/list-service-providers.jsp]
ApplicationBasicInfo cannot be resolved to a type
115: <%
116: String BUNDLE = "org.wso2.carbon.identity.application.mgt.ui.i18n.Resources";
117: ResourceBundle resourceBundle = ResourceBundle.getBundle(BUNDLE, request.getLocale());
118: ApplicationBasicInfo[] applications = null;
119:
120: String filterString = request.getParameter(ApplicationMgtUIConstants.SP_NAME_FILTER);
121: filterString = ApplicationMgtUIUtil.resolveFilterString(filterString);
The error disappears when the image is restarted.
I'd like to know what causes it.
How do I make sure that my program exits when Armeria fails to start because of an Address already in use error?
I have the following code:
import com.linecorp.armeria.common.HttpRequest;
import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.server.AbstractHttpService;
import com.linecorp.armeria.server.Server;
import com.linecorp.armeria.server.ServerBuilder;
import com.linecorp.armeria.server.ServiceRequestContext;
import java.util.concurrent.CompletableFuture;
public class TestMain {
    public static void main(String[] args) {
        ServerBuilder sb = Server.builder();
        sb.http(8080);
        sb.service("/greet/{name}", new AbstractHttpService() {
            @Override
            protected HttpResponse doGet(ServiceRequestContext ctx, HttpRequest req) throws Exception {
                String name = ctx.pathParam("name");
                return HttpResponse.of("Hello, %s!", name);
            }
        });
        Server server = sb.build();
        CompletableFuture<Void> future = server.start();
        future.join();
    }
}
When I run it once, everything is fine.
But when I run it a second time I get an Address already in use error, which is of course expected, but the program doesn't terminate on its own. This may be how it is supposed to be, but how do I make sure that it terminates upon errors during initialization?
$ gradle run
> Task :run
14:36:04.811 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
14:36:04.815 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
14:36:04.816 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
14:36:04.817 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
14:36:04.818 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
...
14:36:05.064 [globalEventExecutor-3-1] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
14:36:05.068 [globalEventExecutor-3-1] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
Exception in thread "main" java.util.concurrent.CompletionException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:250)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
<==========---> 80% EXECUTING [6s]
> :run
^C
It's a bug in Armeria, where Netty event loop threads are not terminated when a Server fails to start up. Here's the fix PR, which should be part of the next release (0.97.0): https://github.com/line/armeria/pull/2288
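Until a release containing the fix is available, a minimal workaround sketch is to catch the startup failure yourself and exit explicitly; System.exit is needed because the leaked non-daemon Netty threads are what keep the JVM alive:

    Server server = sb.build();
    try {
        server.start().join();
    } catch (java.util.concurrent.CompletionException e) {
        // Startup failed, e.g. bind(..) failed: Address already in use
        e.getCause().printStackTrace();
        System.exit(1); // force exit; the leaked event loop threads are non-daemon
    }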
Here is my issue:
I download some netCDF4 files from an FTP server in two different ways: via FileZilla and via a Python 2.7 script using ftplib.
Code of the Python script (running on Windows):
from ftplib import FTP
import netCDF4

# download the file
try:
    ftp = FTP(server_address)
    ftp.login(server_login, server_pass)
    filepath = 'the_remote_rep/myNetCDF4File.nc'
    filename = 'myNetCDF4File.nc'
    local_dir = 'toto'
    new_file = open('%s/%s' % (local_dir, filename), "w")
    ftp.retrbinary('RETR %s' % filepath, new_file.write)
    ftp.close()
    new_file.close()
except Exception as e:
    print("Error FTP : '" + str(e) + "'")

# update title into the file
try:
    fname = 'toto/myNetCDF4File.nc'
    dataset = netCDF4.Dataset(fname, mode='a')
    setattr(dataset, 'title', 'In Situ Observation Re-Analysis')
    dataset.close()
except Exception as e:
    print("Error netCDF4 : '" + str(e) + "'")
Then I get this message:
Error netCDF4 : '[Errno 22] Invalid argument: 'toto/myNetCDF4File.nc''
When I run the second block of code on a netCDF4 file downloaded via FileZilla (the same file, for example), there is no error.
Also, when I try to get the netCDF version of the file using "ncdump -k", here is the response (it works fine on the other file):
ncdump: myNetCDF4File.nc: Invalid argument
In addition, the files do not have the same size depending on the download method:
FileZilla: 22,972 KB
Python ftplib: 23,005 KB
Is it a problem with ftplib when writing the retrieved file? Or did I miss some parameter to transfer the file correctly?
Thanks in advance.
EDIT: verbose messages from FileZilla:
...
Response: 230 Login successful.
Trace: CFtpLogonOpData::ParseResponse() in state 5
Trace: CControlSocket::SendNextCommand()
Trace: CFtpLogonOpData::Send() in state 9
Command: OPTS UTF8 ON
Trace: CFtpControlSocket::OnReceive()
Response: 200 Always in UTF8 mode.
Trace: CFtpLogonOpData::ParseResponse() in state 9
Status: Logged in
Trace: Measured latency of 114 ms
Trace: CFtpControlSocket::ResetOperation(0)
Trace: CControlSocket::ResetOperation(0)
Trace: CFtpLogonOpData::Reset(0) in state 14
Trace: CFtpControlSocket::FileTransfer()
Trace: CControlSocket::SendNextCommand()
Trace: CFtpFileTransferOpData::Send() in state 0
Status: Starting download of /INSITU_OBSERVATIONS/myNetCDF4File.nc
Trace: CFtpChangeDirOpData::Send() in state 0
Trace: CFtpControlSocket::ResetOperation(0)
Trace: CControlSocket::ResetOperation(0)
Trace: CFtpChangeDirOpData::Reset(0) in state 0
Trace: CFtpFileTransferOpData::SubcommandResult(0) in state 1
Trace: CControlSocket::SendNextCommand()
Trace: CFtpFileTransferOpData::Send() in state 5
Trace: CFtpRawTransferOpData::Send() in state 2
Command: PASV
Trace: CFtpControlSocket::OnReceive()
Response: 227 Entering Passive Mode (193,68,190,45,179,16).
Trace: CFtpRawTransferOpData::ParseResponse() in state 2
Trace: CControlSocket::SendNextCommand()
Trace: CFtpRawTransferOpData::Send() in state 4
Trace: Binding data connection source IP to control connection source IP 134.xx.xx.xx
Command: RETR myNetCDF4File.nc
Trace: CTransferSocket::OnConnect
Trace: CFtpControlSocket::OnReceive()
Response: 150 Opening BINARY mode data connection for myNetCDF4File.nc (9411620 bytes).
Trace: CFtpRawTransferOpData::ParseResponse() in state 4
Trace: CControlSocket::SendNextCommand()
Trace: CFtpRawTransferOpData::Send() in state 5
Trace: CTransferSocket::TransferEnd(1)
Trace: CFtpControlSocket::TransferEnd()
Trace: CFtpControlSocket::OnReceive()
Response: 226 Transfer complete.
Trace: CFtpRawTransferOpData::ParseResponse() in state 7
Trace: CFtpControlSocket::ResetOperation(0)
Trace: CControlSocket::ResetOperation(0)
Trace: CFtpRawTransferOpData::Reset(0) in state 7
Trace: CFtpFileTransferOpData::SubcommandResult(0) in state 7
Trace: CFtpControlSocket::ResetOperation(0)
Trace: CControlSocket::ResetOperation(0)
Trace: CFtpFileTransferOpData::Reset(0) in state 7
Status: File transfer successful, transferred 9 411 620 bytes in 89 seconds
Status: Disconnected from server
Trace: CFtpControlSocket::ResetOperation(66)
Trace: CControlSocket::ResetOperation(66)
In fact, this was a binary-mode configuration problem (thanks to your questions).
I added ftp.voidcmd('TYPE I') before retrieving the file with ftplib, and I changed the mode for opening the local file to new_file = open('%s/%s' % (local_dir, filename), "wb") to specify that it is a binary file.
Now the file downloaded via ftplib is readable and has the same size as the one downloaded via FileZilla.
Thanks for your contributions.
I am using the Amazon Java SDK (the latest is version 1.11.147 as of this writing) with Groovy (Groovy Version: 2.4.11 JVM: 1.8.0_112 Vendor: Oracle Corporation OS: Mac OS X) to upload files to S3. Using the instructions from Amazon's documentation for a Transfer Manager, I was able to copy contents from one bucket to another. However, uploading always fails. I attempted the following three methods. I have Grapes grab newer versions of httpclient and httpcore because of what I read in this Stack Overflow post: Apache PoolingHttpClientConnectionManager throwing illegal state exception
@Grapes([
    @Grab(group='com.amazonaws', module='aws-java-sdk', version='1.11.147'),
    // https://mvnrepository.com/artifact/org.apache.commons/commons-compress
    @Grab(group='org.apache.commons', module='commons-compress', version='1.13'),
    // https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient
    @Grab(group='org.apache.httpcomponents', module='httpclient', version='4.5.3'),
    // https://mvnrepository.com/artifact/org.apache.httpcomponents/httpcore
    @Grab(group='org.apache.httpcomponents', module='httpcore', version='4.4.6')
])
import com.amazonaws.auth.profile.ProfileCredentialsProvider
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.PutObjectRequest
import com.amazonaws.services.s3.transfer.TransferManagerBuilder

// creds is a String naming which ~/.aws/credentials profile to use;
// region and data_bucket are defined elsewhere
def credentials = new ProfileCredentialsProvider(creds)
def s3client = AmazonS3ClientBuilder.standard().
        withCredentials(credentials).
        withRegion(region).
        build()
def tx = TransferManagerBuilder.standard().
        withS3Client(s3client).
        build()

def config_path = "/path/to/my/root"
def dir = "key_path"
def f = new File("${config_path}/${dir}/")
def config_path_length = config_path.length() + 1 // strip the "<config_path>/" prefix to build S3 keys

// The three methods below were attempted one at a time, not in a single run.

// Method 1: upload the whole directory in one call
def mfu = tx.uploadDirectory(data_bucket, dir, f, true)
mfu.waitForCompletion() // <-- throws the exception; without this line nothing uploads, but the script continues

// Method 2: upload each file separately
def ld
ld = { File file ->
    file.listFiles().each { g ->
        if (g.isDirectory()) {
            ld.call(g)
        } else {
            def key = g.toString().substring(config_path_length)
            def fu = tx.upload(data_bucket, key, g)
            fu.waitForCompletion() // <-- throws the exception; without this line nothing uploads, but the script continues
        }
    }
}
ld.call(f)

// Finally, Method 3: avoid TransferManager altogether and call putObject directly on each file
def ld
ld = { File file ->
    file.listFiles().each { g ->
        if (g.isDirectory()) {
            ld.call(g)
        } else {
            def key = g.toString().substring(config_path_length)
            s3client.putObject(new PutObjectRequest(data_bucket, key, g)) // <-- throws the exception
        }
    }
}
ld.call(f)
However, no matter which method I try, I always get the following stacktrace:
Caught: java.lang.IllegalStateException: Connection pool shut down
java.lang.IllegalStateException: Connection pool shut down
    at org.apache.http.util.Asserts.check(Asserts.java:34)
    at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:251)
    at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
    at com.amazonaws.http.conn.$Proxy11.requestConnection(Unknown Source)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:175)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1190)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4221)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4168)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1718)
    at com.amazonaws.services.s3.AmazonS3$putObject.call(Unknown Source)
    ...
I am unable at this time to confirm whether Groovy is using the updated HttpComponents libraries, or whether that is the problem. Installed in the 1.8 JDK are httpclient_4.2.6 and httpclient_4.3.5. Any suggestions for how to get unstuck would be greatly appreciated. Thank you. -Vincent
I figured it out: the s3client had another TransferManager attached to it elsewhere in the code. When tx.shutdownNow() was called, it closed that other TransferManager as well, because shutting down one TransferManager closed the shared client's connection pool.
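For anyone hitting the same trap, a minimal sketch of the usual remedy, assuming the AWS SDK for Java 1.x: TransferManager has a shutdownNow(boolean) overload that shuts down the TransferManager's own resources without closing the shared client's connection pool:

    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

    TransferManager tx = TransferManagerBuilder.standard()
            .withS3Client(s3client) // s3client is shared with other TransferManagers
            .build();
    // ... run transfers ...
    tx.shutdownNow(false); // false = leave the underlying s3client (and its connection pool) open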
In Pig, I'm using my own UDF, which uses routines from an external jar file, and in Eclipse I exported that jar file too. But when running pig -x local script1.pig, it gives me an error for the external jar file's routines.
Please help!
Thanks.
EDIT 1:
As asked in the comments, here is my code:
script1.pig:
REGISTER ./csv2arff.jar;
csvraw = LOAD 'sample' USING PigStorage('\n') as (c);
arffraws = FOREACH csvraw GENERATE pighw2java.CSV2ARFF(c);
pighw2java.CSV2ARFF:
public String exec(Tuple input) throws IOException {
    if (input == null || input.size() == 0)
        return null;
    try {
        System.out.println(">>> " + input.get(0).toString());
        // 1.1) csv to instances
        ByteArrayInputStream inputStream = new ByteArrayInputStream(input.get(0).toString().getBytes("UTF-8"));
        CSVLoader loader = new CSVLoader(); // <-- HERE IS THE ERROR
        .....
    }
}
The error I got:
java.lang.NoClassDefFoundError: weka/core/converters/CSVLoader
at pighw2java.CSV2ARFF.exec(CSV2ARFF.java:24)
at pighw2java.CSV2ARFF.exec(CSV2ARFF.java:1)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:216)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:305)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:322)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:332)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:271)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:266)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: java.lang.ClassNotFoundException: weka.core.converters.CSVLoader
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 14 more
You need to register all of the third-party dependencies used by your custom UDF, not just the UDF jar itself. Here, add to script1.pig:
REGISTER '/path/to/weka.jar';