WSO2 ESB GREG: remote instance and mount configuration

This is my registry configuration on ESB 4.9.0, pointing to my GREG 5.2.0 instance:
<dbConfig name="remote_registry">
<dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
</dbConfig>
<remoteInstance url="https://y.y.y.46:9445/registry">
<id>gregid</id>
<dbConfig>remote_registry</dbConfig>
<cacheId>regadmin#jdbc:mysql://x.x.x.45:3306/governancedb</cacheId>
<readOnly>true</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governace" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
There is no error, but the mount is simply ignored; the registry stays local.
If I change the mount point like this:
<mount path="/_system/gov_reg" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath> </mount>
everything works as expected.
Is this expected behaviour, or am I missing something here?
TIA

Your configuration looks fine.
The config already mounts the whole governance registry:
<mount path="/_system/governance" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
Configuring a sub-collection of the governance registry again does not make sense, and from my understanding there is no real use case for mounting /_system/governance a second time on the ESB node. You can, however, set a specific governance path for each ESB environment (prod, dev, test), e.g.:
<mount path="/_system/governance/env1" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance/prod</targetPath>
</mount>
For more details, please go through the posts below:
G-Reg and ESB integration scenarios for Governance
Mounting a Remote Repository (WSO2 GREG) to WSO2 ESB
Additional reading
Sharing Registry Space across Multiple Product Instances

Please make sure there are no duplicate mount config sections for /_system/governance in your registry.xml.
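For illustration, after a few rounds of editing it is easy to end up with a stale leftover block like the second one below (a hypothetical example, not the config you posted); two mounts for the same path can produce exactly this kind of silent misbehaviour:
<!-- intended mount -->
<mount path="/_system/governance" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
<!-- stale leftover from an earlier attempt: remove it -->
<mount path="/_system/governance" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>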

[SOLVED]
I still don't know the reason, but I checked the system/local registry at
/_system/local/repository/components/org.wso2.carbon.registry/mount/-_system-governance
and noticed that, in its Properties, 'target' was pointing to the old value 'instanceid'.
I corrected it manually and now everything is working fine.
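For reference, after the fix the mount resource's Properties on my instance read along these lines (a sketch of what the management console shows, not a specification):
path   = /_system/governance
target = gregid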
Below is the corresponding bit of my Ansible template:
<remoteInstance url="https://{{ greg_ip }}:{{ greg_carbon_port }}/registry">
<id>gregid</id>
<dbConfig>remote_registry</dbConfig>
<cacheId>regadmin#jdbc:mysql://{{ mysql_db }}:3306/governancedb</cacheId>
{% if 'WKR' in group_names %}
<readOnly>true</readOnly>
{% else %}
<readOnly>false</readOnly>
{% endif %}
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>

Related

Oozie error: Accessing local file system is not allowed

The Sqoop import action is giving an error while running as an Oozie job.
I am using a pseudo-distributed Hadoop cluster.
I followed these steps:
1. Started the Oozie server
2. Edited the job.properties and workflow.xml files
3. Copied workflow.xml into HDFS
4. Ran the Oozie job
My job.properties file:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/hduser/${examplesRoot}/apps/sqoop
My workflow.xml file:
<action name="sqoop-node">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop"/>
<!--<mkdir path="${nameNode}/user/hduser/${examplesRoot}/output-data"/>-->
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<command>import --connect "jdbc:mysql://localhost/db" --username user --password pass --table "table" --where "Conditions" --driver com.mysql.jdbc.Driver --target-dir ${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop -m 1</command>
<!--<file>db.hsqldb.properties#db.hsqldb.properties</file>
<file>db.hsqldb.script#db.hsqldb.script</file>-->
</sqoop>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
I expected the job to run without any errors, but it got killed with the following error:
UnsupportedOperationException: Accessing local file system is not allowed.
I don't understand where I went wrong and why it won't let the job complete.
Can anyone help me solve this issue?
The Oozie sharelib (with the Sqoop action's dependencies) is stored on HDFS, and the server needs to know how to communicate with the Hadoop cluster. Access to a sharelib stored on the local filesystem is not allowed; see CVE-2017-15712.
Please review conf/hadoop-conf/core-site.xml and make sure it does not use the local filesystem. For example, if your HDFS namenode listens on port 9000 on localhost, configure fs.defaultFS accordingly:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
...
</configuration>
Alternatively, you can remove the RawLocalFileSystem class (the dummy implementation) and restart the server, but this isn't recommended: the server becomes vulnerable to CVE-2017-15712 again.
Hope this helps. Also see this answer.

AWS EMR InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist

I am launching an EMR cluster at run time based on a user event, and once the job is done the cluster is terminated.
However, when the cluster is launched and the tasks are being executed, I am getting the error below.
I read some posts suggesting that yarn-site.xml needs to be updated on the namenode and datanodes and YARN restarted.
I am not sure how to configure this during the launch of the cluster itself.
org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
Container launch failed for container_1523533251407_0001_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:390)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Thanks
Answer:
Here is what I added in my code to resolve the issue:
Map<String, String> yarnProperties = new HashMap<String, String>();
yarnProperties.put("yarn.nodemanager.aux-services", "mapreduce_shuffle");
yarnProperties.put("yarn.nodemanager.aux-services.mapreduce_shuffle.class", "org.apache.hadoop.mapred.ShuffleHandler");
// These are yarn-site.xml settings, so they go under the "yarn-site" classification
Configuration yarnConfig = new Configuration()
    .withClassification("yarn-site")
    .withProperties(yarnProperties);
RunJobFlowRequest request = new RunJobFlowRequest()
    .withConfigurations(yarnConfig);
We were also setting some other properties in yarn-site.xml.
If you are trying to create the cluster using the AWS CLI, you can use
--configurations 'json file with the config'
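A sketch of such a JSON file for the shuffle properties (the filename yarn-config.json is just an example):
[
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.nodemanager.aux-services": "mapreduce_shuffle",
      "yarn.nodemanager.aux-services.mapreduce_shuffle.class": "org.apache.hadoop.mapred.ShuffleHandler"
    }
  }
]
You would pass it as --configurations file://yarn-config.json to aws emr create-cluster.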
If instead you are creating the cluster through Java, for example:
Application hive = new Application().withName("Hive");
Map<String,String> hiveProperties = new HashMap<String,String>();
hiveProperties.put("hive.join.emit.interval","1000");
hiveProperties.put("hive.merge.mapfiles","true");
Configuration myHiveConfig = new Configuration()
.withClassification("hive-site")
.withProperties(hiveProperties);
Then you can reference it as:
RunJobFlowRequest request = new RunJobFlowRequest()
.withName("Create cluster with ReleaseLabel")
.withReleaseLabel("emr-5.13.0")
.withApplications(hive)
.withConfigurations(myHiveConfig);
For the shuffle problem itself: you need to add these two properties in the way shown above and then create the cluster:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

WSO2 ESB GREG: remote instance and mount configuration in a cluster

In my lab I have this setup between an ESB 4.9.0 and a GREG 5.2.0:
<dbConfig name="remote_registry">
<dataSource>jdbc/WSO2CarbonDB_GREG</dataSource>
</dbConfig>
<remoteInstance url="https://y.y.y.46:9445/registry">
<id>gregid</id>
<dbConfig>remote_registry</dbConfig>
<cacheId>regadmin#jdbc:mysql://x.x.x.45:3306/governancedb</cacheId>
<readOnly>true</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governace" overwrite="true">
<instanceId>gregid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
Now I have created two clusters: the GREG one (with an LB in front, greg.my.cluster) and an ESB one with a manager (mgt-esb.my.cluster) and two workers (esb.my.cluster).
My question is: how do I configure the ESB instances?
I assume that
<remoteInstance url="https://y.y.y.46:9445/registry">
becomes
<remoteInstance url="https://greg.my.cluster/registry">
but where do I put it?
Shall I reproduce that config only on the ESB manager? Only on the workers? Or on all three of them?
Thank you in advance
Since you have a JDBC mount, the remoteInstance URL should be set to the hostname value defined in the carbon.xml file.
As an example, if you have defined the hostname as governance.wso2.com and the server is running with a port offset of 2, the remoteInstance URL should be:
<remoteInstance url="https://governance.wso2.com:9445/registry">

Unable to connect to Cassandra cluster in AWS from EC2 instance

I set up a Cassandra cluster in AWS using the DataStax AMI and started the Cassandra service. I am trying to connect to this Cassandra service from another EC2 instance where Titan is installed. The Titan server version is 0.4.4; I also tried 0.5.3, but got the same error.
Cassandra is the backend storage for Titan.
The error is:
20366 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - Could not load graph graph. Please check the XML configuration.
20367 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
com.tinkerpop.rexster.config.GraphConfigurationException: GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:137)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:96)
at com.tinkerpop.rexster.Application.main(Application.java:188)
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager
at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:355)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:367)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:311)
at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:121)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1173)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:75)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:25)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:119)
Configuration file:
<rexster>
<http>
<server-port>7182</server-port>
<server-host>0.0.0.0</server-host>
<base-uri>http://localhost</base-uri>
<web-root>public</web-root>
<character-set>UTF-8</character-set>
<enable-jmx>false</enable-jmx>
<enable-doghouse>true</enable-doghouse>
<max-post-size>2097152</max-post-size>
<max-header-size>8192</max-header-size>
<upload-timeout-millis>30000</upload-timeout-millis>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</http>
<rexpro>
<server-port>7180</server-port>
<server-host>0.0.0.0</server-host>
<session-max-idle>1790000</session-max-idle>
<session-check-interval>3000000</session-check-interval>
<connection-max-idle>180000</connection-max-idle>
<connection-check-interval>3000000</connection-check-interval>
<enable-jmx>false</enable-jmx>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</rexpro>
<shutdown-port>7183</shutdown-port>
<shutdown-host>127.0.0.1</shutdown-host>
<graphs>
<graph>
<graph-name>graph</graph-name>
<graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
<graph-location>/tmp/titan</graph-location>
<graph-read-only>false</graph-read-only>
<properties>
<storage.hostname>ec2-52-22-199-210.amazonaws.com</storage.hostname>
<storage.backend>cassandra</storage.backend>
</properties>
<extensions>
<allows>
<allow>tp:gremlin</allow>
</allows>
</extensions>
</graph>
</graphs>
</rexster>

Icecast doesn't see mount point

I have a problem with mount points in Icecast 2. The following is my config:
<icecast>
<limits>
<clients>100</clients>
<sources>20</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>10</source-timeout>
<burst-on-connect>10</burst-on-connect>
<burst-size>65535</burst-size>
</limits>
<authentication>
<source-password>admin</source-password>
<relay-password>admin</relay-password>
<admin-user>admin</admin-user>
<admin-password>admin</admin-password>
</authentication>
<hostname>localhost</hostname>
<listen-socket>
<port>8000</port>
</listen-socket>
<fileserve>1</fileserve>
<mount>
<mount-name>/example-complex.ogg</mount-name>
<max-listeners>100000</max-listeners>
<dump-file>/tmp/dump-example1.ogg</dump-file>
<fallback-mount>example2.ogg</fallback-mount>
</mount>
<paths>
<basedir>/opt/local/share/icecast</basedir>
<logdir>/opt/local/var/log/icecast</logdir>
<webroot>/opt/local/share/icecast/web</webroot>
<adminroot>/opt/local/share/icecast/admin</adminroot>
<alias source="/" dest="/status.xsl"/>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<loglevel>3</loglevel>
<logsize>10000</logsize>
</logging>
<security>
<chroot>0</chroot>
<changeowner>
<user>djpasica</user>
<group>admin</group>
</changeowner>
</security>
</icecast>
The result is an empty mount point in the Icecast admin: after starting Nicecast, I have one mount point, but it is an empty "/".
What I use:
Icecast 2.3.2
Nicecast 1.10.4
OS: Mac OS X 10.7
Nicecast config:
Server Type: Icecast 2
Address: localhost
Port: 8000
Mount Point: /example-complex.ogg
Hey, it looks like you made an error configuring Nicecast. You might want to read the following guide.
Please note that Nicecast has a built-in Icecast server; make sure you are not using that one. Also note that Nicecast, for some reason, will kill any running Icecast server on the same machine when it starts, so you have to start Nicecast first and only afterwards start your Icecast server. (This should no longer happen once you have disabled the built-in server, though.)
Another issue I have experienced with some versions of Nicecast is that the settings, or at least some of them (like the mount point name), only take effect after restarting Nicecast.
Additionally, make sure your mount point name is not /, as this is an invalid mount point name (it conflicts with the web interface).
In icecast.xml, below your existing <mount> block and above <paths>, add:
<mount>
<mount-name>/myradio</mount-name>
<password>mypassword</password>
<public>1</public>
</mount>
Then, in your encoder, use /myradio as the mount point and mypassword as the password.