I am trying to get push notifications working with Amazon AWS Simple Notification Service (SNS) and Unity, using the AWS SDK. I've been following the setup guide linked here, but when I try to build the sample scene provided with the SDK I get this error on my phone.
Error Image
I did put google-play-services.jar and the Android support jar inside the Assets\plugins\android folder, but for some reason it's not able to find the GCM class. Could you tell me what I might be doing wrong?
Error:
AndroidJavaException: java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/android/gms/gcm/GoogleCloudMessaging;
Caused by: java.lang.ClassNotFoundException: com.google.android.gms.gcm.GoogleCloudMessaging
(I took a screenshot of the log screen on my phone; there is no way to copy the whole error message.)
After a bit of digging I found another jar file of Play Services; it turns out the one I was using didn't contain the GCM class.
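As a quick sanity check, you can list the jar's contents to confirm it actually bundles the GCM class before copying it into the plugins folder. A minimal shell sketch; the jar name/path here is an assumption, so point it at the copy you downloaded:

```shell
# Hypothetical path to the Play Services jar you plan to copy into
# Assets/Plugins/Android; adjust to wherever your copy lives.
JAR=google-play-services.jar

if [ ! -f "$JAR" ]; then
  msg="jar not found: $JAR"
elif unzip -l "$JAR" | grep -qi "com/google/android/gms/gcm/GoogleCloudMessaging"; then
  # A Play Services jar that supports GCM lists this class.
  msg="GCM class found"
else
  msg="GCM class missing - use the Play Services jar that bundles GCM"
fi
echo "$msg"
```

If the grep finds nothing, you have the same problem as above: a Play Services jar without the GCM classes.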
I'm trying to use GCP Data Fusion Basic Edition with the Private IP option, but when I try to create a pipeline every action gives me this error:
No discoverable found for request POST /v3/namespaces/system/apps/pipeline/services/studio/methods/v1/contexts/default/validations/stage HTTP/1.1
Any suggestion on how to solve this issue?
Thanks.
This error indicates that the Pipeline Studio service is down. Check the status of Pipeline Studio in System Admin and look at the logs as described here.
You can restart the pipeline studio service by going to System Admin > Configuration > Make HTTP Call.
Change the method to POST and set the path to namespaces/system/apps/pipeline/services/studio/start
You can validate your pipeline once pipeline studio status becomes green.
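For reference, the same restart call can be issued directly against the CDAP REST API with curl. A sketch under assumptions: the host/port below are placeholders, so substitute your Data Fusion instance's API endpoint and add authentication as required by your setup:

```shell
# Assumed CDAP endpoint; replace with your instance's API endpoint.
CDAP_HOST="http://localhost:11015"
# Same path as in System Admin > Configuration > Make HTTP Call.
STUDIO_START="namespaces/system/apps/pipeline/services/studio/start"

# POST to the service-start endpoint (under the /v3 API root).
curl -s -X POST "${CDAP_HOST}/v3/${STUDIO_START}" \
  || echo "request failed - is the instance reachable from here?"
```

Afterwards, check that the Pipeline Studio status has turned green before re-validating the pipeline.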
I am building a Spark application using the AWS SDK to access an S3 source. I am getting the error below:
java.lang.NoSuchMethodError:
org.apache.http.conn.ssl.SSLConnectionSocketFactory.<init>(Ljavax/net/ssl/SSLContext;Ljavax/net/ssl/HostnameVerifier;)V
I looked online for a solution, and it appears my Spark application is using the wrong httpclient. The following thread seems to offer a solution, but I am not sure how I can override the default httpclient.
What version of httpclient is compatible with the Amazon SDK v 1.11.5?
Here are the different httpclients that I have in my system.
./Applications/IBM Notes.app/Contents/MacOS/shared/eclipse/plugins/org.apache.wink_1.1.2.20150826-0855/lib/httpclient-4.0.1.jar
./Users/XXXXX/.ivy2/cache/org.apache.httpcomponents/httpclient/jars/httpclient-4.1.2.jar
./Users/XXXXX/.ivy2/cache/org.apache.httpcomponents/httpclient/jars/httpclient-4.5.1.jar
./Users/XXXXX/.m2/repository/org/apache/httpcomponents/httpclient/4.0.2/httpclient-4.0.2.jar
./Users/XXXXX/.m2/repository/org/apache/httpcomponents/httpclient/4.3.6/httpclient-4.3.6.jar
./Users/XXXXX/Downloads/aws-java-sdk-1.11.110/third-party/lib/httpclient-4.5.2.jar
./usr/local/aws-java/aws-java-sdk-1.11.109/third-party/lib/httpclient-4.5.2.jar
./usr/local/spark/spark-2.1.0-bin-hadoop2.7/jars/httpclient-4.5.2.jar
./usr/local/zeppelin/interpreter/alluxio/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/bqsql/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/elasticsearch/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/hbase/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/kylin/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/lens/httpclient-4.3.6.jar
./usr/local/zeppelin/interpreter/livy/httpclient-4.3.4.jar
./usr/local/zeppelin/interpreter/pig/httpclient-4.3.6.jar
./usr/local/zeppelin/lib/httpclient-4.3.6.jar
./usr/local/zeppelin/lib/interpreter/httpclient-4.3.6.jar
I do not have a classpath specified, so I am not sure which httpclient it is picking up. How do I override it so that it always picks up ./usr/local/aws-java/aws-java-sdk-1.11.109/third-party/lib/httpclient-4.5.2.jar?
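One way to pin a specific jar is Spark's extraClassPath settings, which prepend entries to the driver and executor classpaths. A sketch, not a verified fix for this exact setup; the jar path is taken from the listing above and the application jar name is a placeholder:

```shell
# Jar path from the listing above; adjust if your layout differs.
HTTPCLIENT_JAR=/usr/local/aws-java/aws-java-sdk-1.11.109/third-party/lib/httpclient-4.5.2.jar

if command -v spark-submit >/dev/null; then
  spark-submit \
    --conf spark.driver.extraClassPath="$HTTPCLIENT_JAR" \
    --conf spark.executor.extraClassPath="$HTTPCLIENT_JAR" \
    your-app.jar   # hypothetical application jar
else
  echo "spark-submit not on PATH"
fi
```

If the conflicting jar ships inside Spark's own jars directory, spark.driver.userClassPathFirst=true is another setting worth trying.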
Copying httpclient-4.5.2.jar and httpcore-4.4.4.jar into the zeppelin/interpreter/spark folder got rid of this error.
While trying to follow the instructions from the WSO2 API Manager (wso2am 1.10.0) manual regarding working with statistics on the WSO2 DAS (wso2das 3.0.1) server, I have encountered a problem.
If I choose to let the wso2am server define the stream on the first call of the API, the wso2das server refuses to post statistics to the WSO2_STATS_DB.
If, on the other hand, I choose to import the analytics.car file in wso2das (as stated here), I get an exception (AsyncDataPublisher Stream definition already exist) because the org.wso2.apimgt.statistics.request stream defined in the latest Analytics.car differs from the one being sent from wso2am.
I pinpointed the problem in the definition of Eventstream_request_1.0 in the files
org.wso2.apimgt.statistics.request_1.0.0.json and
throttledOutORG_WSO2_APIMGT_STATISTICS_REQUEST.xml,
where the definition of the throttledOut option is missing.
Is there a way to solve this issue?
Thank you.
I think your DAS is in some kind of corrupted state. Can you first delete the car application (/repository/deployment/server/carbonapps), then log in to DAS, go to Manage > Event > Streams, and delete any existing streams? Then try again to deploy the car app in the /repository/deployment/server/carbonapps location.
If everything goes well, you should see two scripts in the Manage > Batch Analytics > Scripts section. Try to execute each script and see if there is any error. If not, you can point API Manager to DAS.
After running an AWS Elastic Beanstalk application for a few weeks, I suddenly can't open my application. The page simply displays an error that doesn't provide much information on how to fix it.
Error
A problem occurred while loading your page: AWS Query failed to deserialize response
(There is no more information, and Googling hasn't turned up any answer either.)
So before upgrading my subscription and starting to pay Amazon a not-insignificant amount of money to be able to contact their technical support, I thought I would ask here first whether anyone has encountered this issue.
Thanks for any suggestions.
After receiving this generic error, I was able to dig into the actual error message by using the EB CLI. In my case the CLI threw "ZIP does not support timestamps before 1980".
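A minimal sketch of the approach above: the EB CLI can surface the underlying error behind the console's generic message. This assumes the awsebcli package is installed and the project directory has been initialized with eb init:

```shell
# Pull recent environment events and instance logs, which usually contain
# the real error (e.g. "ZIP does not support timestamps before 1980").
if command -v eb >/dev/null; then
  eb events || true   # recent environment events
  eb logs   || true   # retrieve instance logs
  status="eb-cli-used"
else
  status="eb CLI not installed (pip install awsebcli)"
fi
echo "$status"
```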
I'm trying to develop a system service, so I use the echo service as a test.
I developed the service by following the directions on the CF doc.
Now the echo node runs, but the echo gateway fails with the error "echo_gateway - pid=15040 tid=9321 fid=290e ERROR -- Exiting due to NATS error: Could not connect to server on nats://localhost:4222/".
I ran into this issue and was stuck for almost a week until someone helped me resolve it. The underlying problem is something else; since errors are not trapped properly, it gives a misleading message. You need to go to GitHub and get the latest code base. The fix for this issue is http://reviews.cloudfoundry.org/#/c/8891. Once you apply it, you will most likely encounter a timeout field issue; the solution for that is to define the timeout field in gateway.yml.
A few additional properties became required in the echo_gateway.yml.erb file - specifically, the latest were default_plan and timeout, under the service group. The properties have been added to the appropriate file in the vcap-services-sample-release repo.
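A hypothetical sketch of what those properties might look like in echo_gateway.yml.erb; the property names come from the answer above, but the values and exact nesting are assumptions, so check the file in the vcap-services-sample-release repo for the authoritative version:

```yaml
# Hypothetical fragment; values are placeholders.
service:
  default_plan: free
  timeout: 15
```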
It looks like the fix for the misleading error has been merged on GitHub. I haven't updated and verified this myself just yet, but the Gerrit comments indicate the solution is the same as what the node base has had for some time. I did previously run into that error handling, and it was far more helpful.