Unable to connect to a Cassandra cluster in AWS from an EC2 instance

I set up a Cassandra cluster in AWS using the DataStax AMI and started the Cassandra service. I am trying to connect to this Cassandra service from another EC2 instance where Titan is installed. The Titan server version is 0.4.4; I also tried 0.5.3, but I get the same error.
Cassandra is the backend storage for Titan.
The error is:
20366 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - Could not load graph graph. Please check the XML configuration.
20367 [main] WARN com.tinkerpop.rexster.config.GraphConfigurationContainer - GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
com.tinkerpop.rexster.config.GraphConfigurationException: GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path.
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:137)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:96)
at com.tinkerpop.rexster.Application.main(Application.java:188)
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager
at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:355)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:367)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:311)
at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:121)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1173)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:75)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:25)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:119)
Configuration file:
<rexster>
    <http>
        <server-port>7182</server-port>
        <server-host>0.0.0.0</server-host>
        <base-uri>http://localhost</base-uri>
        <web-root>public</web-root>
        <character-set>UTF-8</character-set>
        <enable-jmx>false</enable-jmx>
        <enable-doghouse>true</enable-doghouse>
        <max-post-size>2097152</max-post-size>
        <max-header-size>8192</max-header-size>
        <upload-timeout-millis>30000</upload-timeout-millis>
        <thread-pool>
            <worker>
                <core-size>8</core-size>
                <max-size>8</max-size>
            </worker>
            <kernal>
                <core-size>4</core-size>
                <max-size>4</max-size>
            </kernal>
        </thread-pool>
        <io-strategy>leader-follower</io-strategy>
    </http>
    <rexpro>
        <server-port>7180</server-port>
        <server-host>0.0.0.0</server-host>
        <session-max-idle>1790000</session-max-idle>
        <session-check-interval>3000000</session-check-interval>
        <connection-max-idle>180000</connection-max-idle>
        <connection-check-interval>3000000</connection-check-interval>
        <enable-jmx>false</enable-jmx>
        <thread-pool>
            <worker>
                <core-size>8</core-size>
                <max-size>8</max-size>
            </worker>
            <kernal>
                <core-size>4</core-size>
                <max-size>4</max-size>
            </kernal>
        </thread-pool>
        <io-strategy>leader-follower</io-strategy>
    </rexpro>
    <shutdown-port>7183</shutdown-port>
    <shutdown-host>127.0.0.1</shutdown-host>
    <graphs>
        <graph>
            <graph-name>graph</graph-name>
            <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
            <graph-location>/tmp/titan</graph-location>
            <graph-read-only>false</graph-read-only>
            <properties>
                <storage.hostname>ec2-52-22-199-210.amazonaws.com</storage.hostname>
                <storage.backend>cassandra</storage.backend>
            </properties>
            <extensions>
                <allows>
                    <allow>tp:gremlin</allow>
                </allows>
            </extensions>
        </graph>
    </graphs>
</rexster>
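Since the underlying failure to instantiate AstyanaxStoreManager typically means Titan could not reach Cassandra over the network, a minimal sketch of a reachability check from the Titan instance would be the following (it assumes Cassandra's default Thrift port, 9160, which is not stated in the post):
# Run from the Titan EC2 instance: check TCP reachability of Cassandra's Thrift port
nc -zv ec2-52-22-199-210.amazonaws.com 9160
If this fails, the security group on the Cassandra nodes likely needs an inbound rule for that port from the Titan instance.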


Oozie error: Accessing local file system is not allowed

The Sqoop import action gives an error when running as an Oozie job.
I am using a pseudo-distributed Hadoop cluster.
I have followed these steps:
1. Started the Oozie server.
2. Edited the job.properties and workflow.xml files.
3. Copied workflow.xml into HDFS.
4. Ran the Oozie job.
My job.properties file:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/hduser/${examplesRoot}/apps/sqoop
My workflow.xml file:
<action name="sqoop-node">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
            <delete path="${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop"/>
            <!--<mkdir path="${nameNode}/user/hduser/${examplesRoot}/output-data"/>-->
        </prepare>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
        <command>import --connect "jdbc:mysql://localhost/db" --username user --password pass --table "table" --where "Conditions" --driver com.mysql.jdbc.Driver --target-dir ${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop -m 1</command>
        <!--<file>db.hsqldb.properties#db.hsqldb.properties</file>
        <file>db.hsqldb.script#db.hsqldb.script</file>-->
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>
<kill name="fail">
    <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
I was expecting the job to run without any errors, but it was killed with the following error:
UnsupportedOperationException: Accessing local file system is not allowed.
I don't understand where I am wrong, or why it is not allowing the job to complete.
Can anyone help me solve this issue?
The Oozie sharelib (with the Sqoop action's dependencies) is stored on HDFS, and the server needs to know how to communicate with the Hadoop cluster. Access to a sharelib stored on the local filesystem is not allowed; see CVE-2017-15712.
Please review conf/hadoop-conf/core-site.xml and make sure it does not use the local filesystem. For example, if your HDFS namenode listens on port 9000 on localhost, configure fs.defaultFS accordingly:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    ...
</configuration>
Alternatively, you can remove the RawLocalFileSystem class (the dummy implementation) and restart the server, but this isn't recommended (the server becomes vulnerable to CVE-2017-15712).
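To confirm the sharelib really is on HDFS, you can list it from the Oozie CLI or inspect HDFS directly; the admin URL and sharelib path below are the usual defaults, so adjust them to your installation:
# List the sqoop sharelib known to the Oozie server (default admin URL assumed)
oozie admin -oozie http://localhost:11000/oozie -shareliblist sqoop
# Or inspect the default sharelib location on HDFS
hdfs dfs -ls /user/oozie/share/lib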
Hope this helps. Also see this answer.

Artifactory OSS 6.5.2 - can't connect to the UI from servers on the network

I have recently installed Artifactory OSS 6.5.2 on a remote server in our network, which runs on Windows Server 2012.
I can reach the UI locally (on the machine running the Artifactory instance) through any browser at this address:
"http://{local-ip}:8081/artifactory/webapp/#/"
When I try entering the UI from one of the machines on the network I get a "This site can’t be reached" message after multiple attempts to connect.
The request.log at {ARTIFACTORY_HOME}\logs\request.log shows that the request got through and succeeded:
"REQUEST|{remote-ip}|anonymous|GET|/webapp/|HTTP/1.1|200|0"
The same is showed for requests coming from the server running the Artifactory instance:
"REQUEST|{local-ip}|anonymous|GET|/webapp/|HTTP/1.1|200|0"
However, contrary to the previous request from a remote machine, the initial request is followed by more requests:
"REQUEST|{local-ip}|anonymous|GET|/ui/auth/screen/footer|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/treebrowser/repoOrder|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/onboarding/initStatus|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/auth/current|HTTP/1.1|200|0"
I thought maybe there was an automatic redirection that uses 'localhost' instead of the IP or hostname, so I tried changing {ARTIFACTORY_HOME}\tomcat\conf\server.xml:
<Service name="Catalina">
    <Connector port="8081" sendReasonPhrase="true" relaxedPathChars='[]' relaxedQueryChars='[]'/>
    <!-- Must be at least the value of artifactory.access.client.max.connections -->
    <Connector port="8040" sendReasonPhrase="true" maxThreads="50"/>
    <!-- This is the optional AJP connector -->
    <Connector port="8019" protocol="AJP/1.3" sendReasonPhrase="true"/>
    <Engine name="Catalina" defaultHost="localhost">
        <Host name="{hostname}" appBase="webapps" startStopThreads="2"/> <!-- changed from name="localhost" -->
    </Engine>
</Service>
But then Artifactory failed to initialize:
"[art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:643) -
Using Access Server URL: http://localhost:8040/access (bundled)
source: detected
[art-init] [INFO ] (o.a.s.a.AccessServiceImpl:308) - Waiting for
access server...
[art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) -
Unrecognized ErrorsModel by Access. Original message: Failed on
executing /api/v1/system/ping, with response: Not Found"
I did not set up any proxies or reverse proxies. I don't think that is related, but I may be mistaken, as I don't have a lot of experience with web services.
Any ideas or suggestions?
Thanks,
Tom.
I was deploying Artifactory 6 via Helm, then upgraded to 6.8.2 and ran into this.
I had to run:
cd $ARTIFACTORY_HOME && chown -R artifactory:artifactory .
On startup, Artifactory itself seemed unable to deploy access.war, and then perhaps also could not read the credentials it needed to hit the /access context's health-check "ping" API endpoint.

YARN cannot connect to NameNode cluster

2018-03-08 16:36:16,775 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public rsrc:{ hdfs://mycluster/user/abc_user/udf/pig_udf-1.5.7_handle_input_error.jar, 1516336589685, FILE, null }
2018-03-08 16:36:16,775 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download resource { { hdfs://mycluster/user/oozie/share/lib/lib_20171215093741/pig/libgplcompression.so.0.0.0, 1513307849411, FILE, null },pending,[(container_1519371600813_0002_02_000001)],8140205165392614,DOWNLOADING}
java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:406)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:728)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:671)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2815)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2834)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:249)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: mycluster
The YARN NodeManager service and the DataNode service are on the same machine.
The YARN ResourceManager service and the NameNode are on the same machine.
When I run a simple Pig script that loads data and prints it, I hit the above error.
Before adding the standby NameNode, everything worked well.
How can I configure YARN to understand my NameNode cluster?
Thank you.
After checking hdfs-site.xml again on the two DataNodes that the YARN NodeManagers run on, I saw that the file was missing this property compared with the hdfs-site.xml on the NameNode:
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
It is working now.
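For reference, a client can only resolve the logical name mycluster if the rest of the HA settings are present in its hdfs-site.xml as well. A minimal sketch, where nn1-host and nn2-host are placeholder NameNode hostnames:
<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>nn1-host:8020</value> <!-- placeholder hostname -->
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>nn2-host:8020</value> <!-- placeholder hostname -->
</property>
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>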

Visual Build Pro - Build Analysis Services Cube with MySQL datasource

I am trying to create a build script using Visual Build Pro v7 for an Analysis Services cube I have created. The cube deploys to my local machine without issue using the build script steps below.
Other than replacing the target server in the .dwproj.user file, and backing up and removing any traces of a possible previous version of the cube, my build script just contains these steps:
Step 1 : "%VS2008IDE%\devenv.exe" "%PROJDIR%\%CUBE_NAME%.sln" /Build
Step 2 : "%SSAS_Deploy_EXE%" "%PROJDIR%\%CUBE_NAME%\bin\%CUBE_NAME%.asdatabase" /s /o:"%PROJDIR%\deployscript.xmla"
Step 3 : "%ASCMD_LOCATION%" -S %CUBE_SQL_INSTANCE% -U DOMAIN\%UID% -P %PWD% -i "%PROJDIR%\deployscript.xmla"
The cube's data source is a MySQL database. The build fails on Step 3 when deploying to a remote server.
I downloaded and installed MySQL Connector/Net on the server, but when I run the build script I get the following error:
<return xmlns="urn:schemas-microsoft-com:xml-analysis">
    <results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
        <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty"/>
        <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
            <Exception xmlns="urn:schemas-microsoft-com:xml-analysis:exception" />
            <Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
                <Error ErrorCode="3238002695" Description="Internal error: The operation terminated unsuccessfully." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
                <Error ErrorCode="3239116921" Description="Errors in the back-end database access module. The managed provider 'MySql.Data.MySqlClient' could not be instantiated." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
                <Error ErrorCode="3239182436" Description="Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Linkdex', Name of 'Linkdex'." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
                <Error ErrorCode="3240034316" Description="Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Keyword', Name of 'Keyword' was being processed." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
                <Error ErrorCode="3240034317" Description="Errors in the OLAP storage engine: An error occurred while the 'Project Id' attribute of the 'Keyword' dimension from the 'LinkDexCube' database was being processed." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
                <Error ErrorCode="3239837698" Description="Server: The operation has been cancelled." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
            </Messages>
        </root>
        <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty"/>
    </results>
</return>
When I check the .asdatabase and .xmla files, I can see that the user ID and password details from my ConnectionString have been removed. I'm not sure why this is, or where it happens.
Does anyone have any ideas what's happening? Is it likely to be a permissions issue, something to do with the MySQL connector, or some third option?
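For what it's worth, Analysis Services intentionally does not persist data source passwords when a project is saved or a deployment script is generated, so the credentials usually have to be re-supplied in the generated XMLA before it is executed. A hypothetical fragment of an edited deployscript.xmla data source, where the element layout and connection-string keys are illustrative assumptions rather than the actual file contents:
<DataSource>
    <ID>Linkdex</ID>
    <Name>Linkdex</Name>
    <!-- Pwd must be re-added by hand (or injected by the build) before Step 3 runs -->
    <ConnectionString>Server=mysql-host;Database=db;Uid=user;Pwd=secret;</ConnectionString>
</DataSource>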

AppFabric ErrorCode<ERRCA0017><ES0006>:

I have installed AppFabric on the server. I have created a cluster of a single computer. I have also created a cache named "Gagan".
I used the following commands, in order:
Use-CacheCluster -Provider xml -ConnectionString \\NB-GJANJUA\Cache
Start-CacheCluster
The result is that the cache service is up and running; so far, so good.
I then set up my web.config file like below:
<?xml version="1.0"?>
<configuration>
    <configSections>
        <section name="dataCacheClient"
                 type="Microsoft.ApplicationServer.Caching.DataCacheClientSection,
                       Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0,
                       Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 allowLocation="true"
                 allowDefinition="Everywhere"/>
    </configSections>
    <!-- cache client -->
    <dataCacheClient>
        <!-- cache host(s) -->
        <hosts>
            <host name="NB-GJANJUA.com" cachePort="22233"/>
        </hosts>
    </dataCacheClient>
    <system.web>
        <compilation debug="true" targetFramework="4.0">
            <assemblies>
                <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
                <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="Microsoft.ApplicationServer.Caching.Client, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
                <add assembly="Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            </assemblies>
        </compilation>
        <sessionState mode="Custom" customProvider="SessionStore" cookieless="true">
            <providers>
                <add name="SessionStore" type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider" cacheName="Gagan" />
            </providers>
        </sessionState>
    </system.web>
    <system.webServer>
        <modules runAllManagedModulesForAllRequests="true"/>
    </system.webServer>
</configuration>
But as soon as I launch my site, it comes up with this error:
Parser Error Message: ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified Cache servers are unavailable, which could be caused by busy network or servers. Ensure that security permission has been granted for this client account on the cluster and that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Retry later.)
Source Error:
Line 44: <sessionState mode="Custom" customProvider="SessionStore" cookieless="true">
Line 45: <providers>
Line 46: <add name="SessionStore" type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider" cacheName="Gagan" />
Line 47: </providers>
Line 48: </sessionState>
Is there something that I am missing?
Note: I have already referenced the Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core assemblies.
Thanks for your time and patience.
With regards,
Gagan Janjua
I was also having this error. Just to test the client in development, I switched off security using these AppFabric PowerShell commands:
Stop-CacheCluster
Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None
Start-CacheCluster
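As a quick sanity check, Get-CacheHost should then report the cache host's service status as UP:
# Run in the Caching Administration Windows PowerShell
Get-CacheHost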
Also set the following in the client application's web.config:
<dataCacheClient>
    <securityProperties mode="None" protectionLevel="None"/>
</dataCacheClient>
This is not a production scenario, but the above error disappears when these settings are applied.
I had a similar issue, running IIS 7.5 on Windows Server 2008 R2. I resolved it by issuing the following commands in PowerShell (started from the Windows AppFabric folder in Start, All Programs):
New-Cache -CacheName NameOfCacheAsSetInWebConfig -TimeToLive 30
Grant-CacheAllowedClientAccount "IIS AppPool\NameOfAppPoolRunningSite"
Once I did that, I was all set.
Have you granted access to the cache for whatever user your website is running as?
Grant-CacheAllowedClientAccount Gagan
I solved this problem as follows:
1. Launched Windows Task Manager and noted which user name w3wp.exe was running under. In my case it was: ASP.NET v4.0.
2. Launched Start -> All Programs -> Windows Server AppFabric -> IIS Manager.
3. In IIS, selected the machine name and then Application Pools at the top left-hand side.
4. Under Application Pools, verified that ASP.NET v4.0 exists.
5. Launched Start -> All Programs -> Windows Server AppFabric -> Caching Administration Windows PowerShell.
6. Typed the following command at the prompt: Grant-CacheAllowedClientAccount "ASP.NET v4.0"
7. Restarted the web application, and the following error went away:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified Cache servers are unavailable, which could be caused by busy network or servers. Ensure that security permission has been granted for this client account on the cluster and that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Retry later.)
I had this problem, and it was just that the cache cluster was down after a reboot. I didn't realize that you have to manually set the service to start automatically in the Services console. Detailed information on that is here.
Commenting out the following in the config fixed it for me:
<sessionState customProvider="AppFabricCacheSessionStoreProvider" mode="Custom" timeout="90">
    <providers>
        <add name="AppFabricCacheSessionStoreProvider" type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider" cacheName="Session" sharedId="SharedApp" />
    </providers>
</sessionState>
By default the worker processes are set up as an IIS user, and those users need access. In your Caching Administration Windows PowerShell, type the following:
Grant-CacheAllowedClientAccount IIS_IUSRS