JanusGraph: access Amazon Managed Cassandra from EC2

I'm trying to set up JanusGraph to access Amazon MCS. The infrastructure is all there to allow access, but I'm facing difficulties at the config step.
This is the config I'm setting for janusgraph-cql.properties:
storage.backend=cql
storage.hostname=cassandra.ap-southeast-1.amazonaws.com
storage.port=9142
storage.username=${CASSANDRA_USERNAME}
storage.password=${CASSANDRA_PASSWORD}
storage.cql.ssl.truststore.location=${CASSANDRA_TRUSTSTORE_LOCATION}
storage.cql.ssl.truststore.password=${CASSANDRA_TRUSTSTORE_PASSWORD}
storage.cql.ssl.enabled=true
Amazon MCS serves CQL on port 9142 instead of the default 9042.
When I start gremlin-server.sh, I can see the following outputs:
2897 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.141:9042 added
2898 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.117.140:9042 added
2898 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.117.134:9042 added
2898 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.137:9042 added
2898 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.182:9042 added
2899 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host cassandra.ap-southeast-1.amazonaws.com/13.251.117.0:9142 added
2899 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.84:9042 added
2899 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.117.219:9042 added
2899 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.144:9042 added
2899 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /13.251.116.1:9042 added
Even though I've set the port to 9142, new Cassandra hosts with port 9042 are still being added, making the process fail (since 9042 is not available). Is there something I'm doing wrong?
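I haven't verified this against Amazon's setup, but the log suggests the contact point gets the configured port (9142) while the peers discovered from system.peers are added with the driver's default 9042. Since Amazon MCS is reached through a single regional endpoint anyway, one way to isolate the problem is a driver-level smoke test (no JanusGraph involved) that whitelists only that endpoint on 9142. This is a minimal sketch with the DataStax Java driver 3.x, assuming the endpoint, credentials, and truststore environment variables from the question; if this connects, the issue lies in how JanusGraph builds the Cluster rather than in the infrastructure:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.SSLContext;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.JdkSSLOptions;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class McsSmokeTest {
    public static void main(String[] args) throws Exception {
        String host = "cassandra.ap-southeast-1.amazonaws.com";
        // Whitelist every address the MCS endpoint resolves to, all on 9142,
        // so peers discovered from system.peers on 9042 are never contacted.
        List<InetSocketAddress> endpoints = new ArrayList<>();
        for (InetAddress a : InetAddress.getAllByName(host)) {
            endpoints.add(new InetSocketAddress(a, 9142));
        }
        Cluster cluster = Cluster.builder()
                .addContactPoint(host)
                .withPort(9142)
                .withCredentials(System.getenv("CASSANDRA_USERNAME"),
                                 System.getenv("CASSANDRA_PASSWORD"))
                // Run with -Djavax.net.ssl.trustStore=<truststore location>
                // -Djavax.net.ssl.trustStorePassword=<truststore password>
                // so the default SSLContext picks up the question's truststore.
                .withSSL(JdkSSLOptions.builder()
                        .withSSLContext(SSLContext.getDefault())
                        .build())
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        DCAwareRoundRobinPolicy.builder().build(), endpoints))
                .build();
        try (Session session = cluster.connect()) {
            System.out.println(session.execute(
                    "SELECT release_version FROM system.local").one().getString(0));
        } finally {
            cluster.close();
        }
    }
}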

Related

Unable to start MarkLogic service on AWS

I have connected, via AWS Systems Manager, to an AWS instance that was set up for MarkLogic. I am trying to start MarkLogic Server, but I receive the following error:
Set configuration: JAVA_HOME="/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.252.b09-2.amzn2.0.1.x86_64"
Set configuration: MARKLOGIC_MDB_TYPE=""
Set configuration: AWS_REGION="ap-southeast-2"
Set configuration: AWS_DEFAULT_REGION="ap-southeast-2"
Set configuration: MARKLOGIC_ZONE="ap-southeast-2a"
Initialize Configuration.
AWS Region: ap-southeast-2, ZONE: ap-southeast-2a. INSTANCE: i-08c0992c858711a67
Instance is not managed
Waiting for device mounted to come online : /dev/nvme1n1
Volume /dev/sdf has failed to attach - aborting
Warning: ec2-startup did not complete successfully
Check the error logs for details
Starting MarkLogic: [FAILED]
This was the output on the log for mlcmd:
"2020-08-17 02:10:26,821 0 INFO [main] shell.Shell - xmlsh initialize
"2020-08-17 02:10:26,952 131 INFO [main] builtin.log - loading init.xsh
"2020-08-17 02:10:27,102 281 INFO [main] builtin.log - initializing mlcmd
"2020-08-17 02:10:27,103 282 INFO [main] builtin.log - loading /var/local/mlcmd.conf
"2020-08-17 02:10:27,297 476 TRACE [main] mlcmd.trace - init-config: exit-status: 1 args: Not loading mdb functions - not a managed cluster
"2020-08-17 02:10:27,299 478 TRACE [main] mlcmd.trace - complete init.xsh: exit-status: 1 args:
"2020-08-17 02:10:27,299 478 INFO [main] builtin.log - runing init-config.xsh
"2020-08-17 02:10:27,942 0 INFO [main] shell.Shell - xmlsh initialize
"2020-08-17 02:10:28,042 100 INFO [main] builtin.log - loading init.xsh
"2020-08-17 02:10:28,173 231 INFO [main] builtin.log - initializing mlcmd
"2020-08-17 02:10:28,174 232 INFO [main] builtin.log - loading /var/local/mlcmd.conf
"2020-08-17 02:10:28,387 445 TRACE [main] mlcmd.trace - ec2-startup: exit-status: 1 args: Not loading mdb functions - not a managed cluster
"2020-08-17 02:10:28,389 447 TRACE [main] mlcmd.trace - complete init.xsh: exit-status: 1 args:
How do I resolve this issue?
If more information is required, let me know and I will try to provide it.
It appears that you are attempting to start a self-managed instance/cluster, while the Managed Cluster feature has not been disabled.
The MarkLogic Managed Cluster feature is the recommended way to deploy a MarkLogic cluster on AWS, so it is enabled by default. Managed clusters are meant to be deployed using the MarkLogic CloudFormation Templates.
Deploying MarkLogic on EC2 Using CloudFormation
The Managed Cluster feature reduces the work needed to set up the initial cluster and creates an Auto Scaling Group that automatically relaunches an instance that gets terminated; when MarkLogic starts on the new instance, it remounts the associated EBS data drive.
CloudFormation Template Overview
If you wish to have a self-managed cluster, then you will need to create an /etc/marklogic.conf file to disable the feature at startup (see the sketch after the links below).
AWS Configuration Variables
Best Practice Editing MarkLogic Server Environment Variables
I would recommend reviewing the following guide, as it covers both the Managed Cluster feature and self-managed clusters.
MarkLogic Server on Amazon Web Services (AWS) Guide
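For reference, a minimal sketch of that /etc/marklogic.conf - the variable name here is an assumption based on the AWS Configuration Variables page linked above, so verify it against the guide for your MarkLogic version:
# /etc/marklogic.conf - disable the Managed Cluster feature at startup
export MARKLOGIC_MANAGED_NODE=0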

AWS Elastic Beanstalk fails to connect to AWS RDS

I think I am almost there.
I created an AWS Elastic Beanstalk instance and added an Oracle DB instance to it.
When I checked the log, I saw that the driver was loaded, but it keeps saying the URL is invalid.
Here are my RDS info and log message.
[RDS Info]
Endpoint = aa1c9autjaqoufk.c2k1ch01futy.ap-northeast-2.rds.amazonaws.com
Port = 1521
Public Access = yes
[System Log]
25-Jun-2018 02:42:56.759 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
25-Jun-2018 02:42:56.787 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
25-Jun-2018 02:42:56.796 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
25-Jun-2018 02:42:56.799 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
25-Jun-2018 02:42:56.800 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1366 ms
25-Jun-2018 02:42:56.842 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
25-Jun-2018 02:42:56.848 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.50
25-Jun-2018 02:42:56.872 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
25-Jun-2018 02:42:58.613 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
25-Jun-2018 02:42:58.689 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 1,817 ms
25-Jun-2018 02:42:58.693 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
25-Jun-2018 02:42:58.720 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
25-Jun-2018 02:42:58.736 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1935 ms
Loading driver...
Driver loaded!
jdbc:oracle:oci://aa1c9autjaqoufk.c2k1ch01futy.ap-northeast-2.rds.amazonaws.com:1521/ebdb?user=username&password=password
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
I included the ojdbc8 driver in my web project library and made a build.
Is this about the driver? What am I doing wrong?
The message clearly says your URL is incorrect.
It should be something like below, using the thin driver with @ after jdbc:oracle:thin: rather than the oci:// form (thin requires no Oracle client installation):
import java.sql.Connection;
import java.sql.DriverManager;

// step 1: load the driver class
Class.forName("oracle.jdbc.driver.OracleDriver");
// step 2: create the connection object (host:port:SID form, with the DB name from your URL)
Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@aa1c9autjaqoufk.c2k1ch01futy.ap-northeast-2.rds.amazonaws.com:1521:ebdb",
        "username", "password");

I can connect to AWS RDS via sqldeveloper but can't by Java application

It is so weird that I can connect to AWS RDS with SQL Developer but can't with my Java application (Java source code or JSP).
When I try to access RDS, there are errors like:
coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
26-Jun-2018 04:24:33.203 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
26-Jun-2018 04:24:33.212 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
26-Jun-2018 04:24:33.215 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
26-Jun-2018 04:24:33.219 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1387 ms
26-Jun-2018 04:24:33.265 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
26-Jun-2018 04:24:33.266 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.50
26-Jun-2018 04:24:33.286 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /var/lib/tomcat8/webapps/ROOT
26-Jun-2018 04:24:35.020 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
26-Jun-2018 04:24:35.097 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/lib/tomcat8/webapps/ROOT has finished in 1,811 ms
26-Jun-2018 04:24:35.100 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
26-Jun-2018 04:24:35.106 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
26-Jun-2018 04:24:35.108 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1888 ms
Loading driver...
Driver loaded!
jdbc:oracle:thin://IP:1521/ORCL?user=username&password=password
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
SQLException: Invalid Oracle URL specified
SQLState: 99999
VendorError: 17067
Closing the connection.
But the URL uses exactly the same values I tried with SQL Developer.
Is there anything wrong?
Please enlighten me, since I've been suffering from this for about a week! :(
I'm not sure how your application is set up, but I'm using Maven & Spring Boot and I got it working like this:
I mainly followed this guide, ignoring the .sql files, the Thymeleaf UI, the "model.addAttribute("cities", cities);" part, and the HTML file:
https://zetcode.com/springboot/postgresql/
My application.properties file looks like this:
postgres.comment.aa=https://zetcode.com/springboot/postgresql/
spring.main.banner-mode=off
logging.level.org.springframework=ERROR
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialization-mode=always
spring.datasource.platform=postgres
spring.datasource.url=jdbc:postgresql://your-rds-url-here.us-east-1.rds.amazonaws.com:yourDbPortHere/postgres
spring.datasource.username=postgres
spring.datasource.password=<your db password here>
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
If you have custom schemas, you can append "?currentSchema=users" to the URL:
spring.datasource.url=jdbc:postgresql://your-rds-url-here.us-east-1.rds.amazonaws.com:yourDbPortHere/postgres?currentSchema=users
Thanks to this SO answer for the schema:
Is it possible to specify the schema when connecting to postgres with JDBC?
These other couple links also helped
https://turreta.com/2015/03/01/how-to-specify-a-default-schema-when-connecting-to-postgresql-using-jdbc/
https://doc.cuba-platform.com/manual-latest/db_schema_connection.html
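Coming back to the Oracle URL in the question: SQL Developer takes host, port, and SID/service name as separate fields and builds the connect descriptor itself, so the same values can work there while a hand-written JDBC URL fails. Below is a minimal sketch of the standard thin-driver URL forms - "IP" is kept as the question's placeholder, and which form you need depends on whether ORCL is a SID or a service name:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleUrlTest {
    public static void main(String[] args) throws Exception {
        // SID form:          jdbc:oracle:thin:@host:port:SID
        // Service-name form: jdbc:oracle:thin:@//host:port/service_name
        String url = "jdbc:oracle:thin:@//IP:1521/ORCL";
        try (Connection con = DriverManager.getConnection(url, "username", "password")) {
            System.out.println("Connected: " + con.getMetaData().getDatabaseProductVersion());
        }
    }
}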

WSO2 API Manager 2.1 Analytics - Fails to Start up while connecting to Oracle DB

I am deploying WSO2 API Manager 2.1 and Analytics using Pattern 3 as specified here - https://github.com/wso2/docker-apim/tree/master/docker-compose/pattern-3
Here, all the components - nginx, Publisher, Store, Traffic Manager, Gateway Worker, Gateway Manager, Key Manager and Analytics - are deployed as separate Docker containers.
When I started these containers, it worked fine, and by default it was using the MySQL server for storing all the data.
But as per our requirement we had to use Oracle DB, so we created a user there with all the required permissions, ran the Oracle scripts, and finally started all the containers one by one.
In addition, for Analytics we created two separate users for the two data sources - WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB (we didn't run any scripts for these).
And now we have a problem: the Analytics container is not able to start, throwing the error -
[2017-07-11 12:53:54,017] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon...
[2017-07-11 12:53:54,017] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Linux 4.8.0-53-generic, amd64
[2017-07-11 12:53:54,017] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : /mnt/jdk-7u80/jre
[2017-07-11 12:53:54,017] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.7.0_80
[2017-07-11 12:53:54,017] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 24.80-b11,Oracle Corporation
[2017-07-11 12:53:54,018] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : /mnt/186.12.12.12/wso2am-analytics-2.1.0
[2017-07-11 12:53:54,018] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : /mnt/186.12.12.12/wso2am-analytics-2.1.0/tmp
[2017-07-11 12:53:54,018] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : root, en-US, GMT
[2017-07-11 12:54:34,104] INFO {org.wso2.carbon.core.internal.permission.update.PermissionUpdater} - Permission cache updated for tenant -1234
[2017-07-11 12:54:34,235] INFO {org.wso2.carbon.core.transports.http.HttpsTransportListener} - HTTPS port : 9444
[2017-07-11 12:54:34,235] INFO {org.wso2.carbon.core.transports.http.HttpTransportListener} - HTTP port : 9764
[2017-07-11 12:54:36,547] INFO {org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer} - Deployed webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/analytics].File[/mnt/186.12.12.12/wso2am-analytics-2.1.0/repository/deployment/server/webapps/analytics.war]
[2017-07-11 12:54:36,593] INFO {org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer} - Deployed webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/inputwebsocket].File[/mnt/186.12.12.12/wso2am-analytics-2.1.0/repository/deployment/server/webapps/inputwebsocket.war]
[2017-07-11 12:54:36,622] INFO {org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer} - Deployed webapp: StandardEngine[Catalina].StandardHost[localhost].StandardContext[/outputwebsocket].File[/mnt/186.12.12.12/wso2am-analytics-2.1.0/repository/deployment/server/webapps/outputwebsocket.war]
[2017-07-11 12:54:43,116] INFO {org.wso2.carbon.event.processor.core.EventProcessorDeployer} - Execution plan deployment held back and in inactive state : APIMAnalytics-RequestSummarizer-RequestSummarizer-realtime1.siddhiql, Dependency validation exception: Stream org.wso2.apimgt.statistics.requestsPerMinPerKeyStream:1.0.0 does not exist
[2017-07-11 12:54:43,186] INFO {org.wso2.carbon.event.processor.core.EventProcessorDeployer} - Execution plan deployment held back and in inactive state : APIMAnalytics-UnusualIPAccessTemplate-UnusualIPAccessAlert-realtime1.siddhiql, Dependency validation exception: Stream org.wso2.apimgt.statistics.perMinuteRequest:1.0.0 does not exist
[2017-07-11 12:54:43,218] INFO {org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver} - Thrift Server started at 0.0.0.0
[2017-07-11 12:54:43,246] INFO {org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver} - Thrift SSL port : 7712
[2017-07-11 12:54:43,253] INFO {org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver} - Thrift port : 7612
[2017-07-11 12:54:43,277] INFO {org.apache.tomcat.util.net.NioSelectorPool} - Using a shared selector for servlet write/read
[2017-07-11 12:54:43,355] INFO {org.apache.tomcat.util.net.NioSelectorPool} - Using a shared selector for servlet write/read
[2017-07-11 12:54:43,408] INFO {org.wso2.carbon.ntask.core.service.impl.TaskServiceImpl} - Task service starting in STANDALONE mode...
[2017-07-11 12:54:44,030] ERROR {org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceComponent} - Error in activating analytics data service: null
java.lang.RuntimeException
at org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore$RDBMSResultSetIterator.next(RDBMSAnalyticsRecordStore.java:881)
at org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore$RDBMSResultSetIterator.hasNext(RDBMSAnalyticsRecordStore.java:843)
at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:848)
at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:825)
at org.wso2.carbon.analytics.datasource.core.util.GenericUtils.listRecords(GenericUtils.java:284)
[2017-07-11 12:54:55,566] INFO {org.wso2.carbon.databridge.core.DataBridge} - user admin connected
[2017-07-11 12:55:05,564] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting loganalyzer:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId loganalyzer:1.0.0 present in cache
Can someone please let me know how to resolve this issue?
You can get a newer version of the org.wso2.carbon.analytics.datasource.rdbms jar (the component throwing the error above) from http://maven.wso2.org/nexus/content/groups/public/org/wso2/carbon/analytics/org.wso2.carbon.analytics.datasource.rdbms/ and replace the one bundled with the Analytics container.

Vora 1.3 Thriftserver cannot start

I'm deploying Vora 1.3 services on HDP 2.3 using the Manager web UI, with mostly default configuration and node assignments. I've assigned the Vora Thriftserver service to the node that successfully hosted the same service in Vora 1.2 (which I have already removed).
The service doesn't start, though. Here's the relevant part of the log:
17/01/23 10:04:27 INFO Server: jetty-8.y.z-SNAPSHOT
17/01/23 10:04:27 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
17/01/23 10:04:27 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/23 10:04:27 INFO SparkUI: Started SparkUI at http://<jumpbox>:4040
17/01/23 10:04:28 INFO SparkContext: Added JAR file:/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-manager/package/lib/vora-spark/lib/spark-sap-datasources-1.3.102-assembly.jar at http://<jumpbox>:41874/jars/spark-sap-datasources-1.3.102-assembly.jar with timestamp 1485126268263
17/01/23 10:04:28 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
17/01/23 10:04:28 INFO Executor: Starting executor ID driver on host localhost
17/01/23 10:04:28 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37523.
17/01/23 10:04:28 INFO NettyBlockTransferService: Server created on 37523
17/01/23 10:04:28 INFO BlockManagerMaster: Trying to register BlockManager
17/01/23 10:04:28 INFO BlockManagerMasterEndpoint: Registering block manager localhost:37523 with 530.0 MB RAM, BlockManagerId(driver, localhost, 37523)
17/01/23 10:04:28 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/SparkPlanner
at org.apache.spark.sql.hive.sap.thriftserver.SapSQLEnv$.init(SapSQLEnv.scala:39)
at org.apache.spark.sql.hive.thriftserver.SapThriftServer$.main(SapThriftServer.scala:22)
at org.apache.spark.sql.hive.thriftserver.SapThriftServer.main(SapThriftServer.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
(.... goes on...)
The Spark executable and Java executable paths in the Vora Thriftserver configuration tab are correct.
Did I miss something else?
You are running Vora 1.3, which means you must use HDP 2.4.2, as it includes the required Spark 1.6.1; the NoClassDefFoundError for org/apache/spark/sql/execution/SparkPlanner indicates that the older Spark shipped with HDP 2.3 does not provide a class Vora 1.3 was built against. See the official Vora product availability matrix (PAM).