I'm trying to deploy a minimal Scalatra application on OpenShift with the DIY cartridge. I've managed to get SBT working, but when it comes to container:start, I get the error:
FAILED SelectChannelConnector#0.0.0.0:8080: java.net.SocketException: Permission denied
Apparently, embedded Jetty tries to open a socket on 0.0.0.0, which is prohibited by OpenShift (you can only open ports on $OPENSHIFT_INTERNAL_IP). How can I tell Jetty exactly which IP I need it to listen on?
Yes, you are right about $OPENSHIFT_INTERNAL_IP. So edit ${jetty.home}/etc/jetty.xml and set jetty.host in the connector section as follows:
...
<Set name="connectors">
  <Array type="org.mortbay.jetty.Connector">
    <Item>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <!-- a literal $OPENSHIFT_INTERNAL_IP inside jetty.xml is not expanded,
             so pass -Djetty.host=$OPENSHIFT_INTERNAL_IP when starting Jetty -->
        <Set name="host"><SystemProperty name="jetty.host"/></Set>
        <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
        ...
      </New>
    </Item>
  </Array>
</Set>
hth
I've never used OpenShift, so I'm groping a bit here.
Do you have a jetty.host set?
You may need to set up a jetty.xml file and set it in there. See http://docs.codehaus.org/display/JETTY/Newbie+Guide+to+Jetty for how to set the host. You can tell the xsbt web plugin about jetty.xml by setting your project up like this:
https://github.com/JamesEarlDouglas/xsbt-web-plugin/wiki/Settings
Alternatively, you may be able to pass the parameter to Jetty during startup. That'd look like this: -Djetty.host="yourhostname"
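If you control the launcher yourself (for example, an embedded-Jetty main class for a Scalatra app), you can also bind the connector programmatically instead of going through jetty.xml. A minimal sketch, assuming the Jetty 7/8 jars are on the classpath and that OPENSHIFT_INTERNAL_IP / OPENSHIFT_DIY_PORT are exported; the webapp path is only an illustration:

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.server.nio.SelectChannelConnector
import org.eclipse.jetty.webapp.WebAppContext

object JettyLauncher {
  def main(args: Array[String]): Unit = {
    val server = new Server

    // bind only to the interface OpenShift allows, instead of 0.0.0.0
    val connector = new SelectChannelConnector
    connector.setHost(sys.env.getOrElse("OPENSHIFT_INTERNAL_IP", "127.0.0.1"))
    connector.setPort(sys.env.getOrElse("OPENSHIFT_DIY_PORT", "8080").toInt)
    server.addConnector(connector)

    // hypothetical webapp location -- point this at your own exploded webapp
    val context = new WebAppContext()
    context.setContextPath("/")
    context.setWar("src/main/webapp")
    server.setHandler(context)

    server.start()
    server.join()
  }
}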
To get Jetty 9.2.13.v20150730 running on OpenShift with the DIY cartridge, you have to run it under Java 8 and bind it to $OPENSHIFT_INTERNAL_IP, as follows. First SSH onto the host and download a JDK 8 with:
cd $OPENSHIFT_DATA_DIR
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u5-b13/jdk-8u5-linux-x64.tar.gz
tar -zxf jdk-8u5-linux-x64.tar.gz
export PATH=$OPENSHIFT_DATA_DIR/jdk1.8.0_05/bin:$PATH
export JAVA_HOME="$OPENSHIFT_DATA_DIR/jdk1.8.0_05"
java -version
Then in your .openshift/action_hooks/start script, ensure you have the same exported variables, with something like:
# see http://stackoverflow.com/a/23895161/329496 to install jdk1.8 on a DIY cartridge
export JAVA_HOME="$OPENSHIFT_DATA_DIR/jdk1.8.0_05"
export PATH=$OPENSHIFT_DATA_DIR/jdk1.8.0_05/bin:$PATH
nohup java -cp ${OPENSHIFT_REPO_DIR}target/dependency/jetty-runner.jar org.eclipse.jetty.runner.Runner --host ${OPENSHIFT_DIY_IP} --port ${OPENSHIFT_DIY_PORT} ${OPENSHIFT_REPO_DIR}/target/thinbus-srp-spring-demo.war > ${OPENSHIFT_LOG_DIR}server.log 2>&1 &
(Note that jdk-8u20-linux-x64.tar.gz has also been reported to work, so you may want to check for the latest available version.)
That setup does not need a jetty.xml, as it sets --host and --port to bind to the correct interface and runs the built war file. What it does require is that jetty-runner.jar is copied out of the dependency cache into the target folder. With Maven you do that by adding something like:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>copy</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-runner</artifactId>
            <version>${jetty.version}</version>
            <destFileName>jetty-runner.jar</destFileName>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
Google suggests that the SBT equivalent is simply retrieveManaged := true. You can SSH to the host and run find to figure out where the jetty-runner.jar dependency has been copied to, and update the start command accordingly.
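For reference, a minimal build.sbt sketch of that approach; the jetty-runner coordinates below are simply the version mentioned in this answer, so adjust them to whatever you actually use:

// copy managed dependencies (including jetty-runner, if declared) under lib_managed/
retrieveManaged := true

libraryDependencies += "org.eclipse.jetty" % "jetty-runner" % "9.2.13.v20150730"

After an sbt update, a find over lib_managed should show where the jar ended up, and you can point the nohup java -cp ... start command at that path.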
A Sqoop import action is giving an error while running as an Oozie job.
I am using a pseudo-distributed Hadoop cluster.
I have followed these steps:
1. Started the Oozie server
2. Edited the job.properties and workflow.xml files
3. Copied workflow.xml into HDFS
4. Ran the Oozie job
My job.properties file:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/hduser/${examplesRoot}/apps/sqoop
My workflow.xml file:
<action name="sqoop-node">
  <sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <prepare>
      <delete path="${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop"/>
      <!--<mkdir path="${nameNode}/user/hduser/${examplesRoot}/output-data"/>-->
    </prepare>
    <configuration>
      <property>
        <name>mapred.job.queue.name</name>
        <value>${queueName}</value>
      </property>
    </configuration>
    <command>import --connect "jdbc:mysql://localhost/db" --username user --password pass --table "table" --where "Conditions" --driver com.mysql.jdbc.Driver --target-dir ${nameNode}/user/hduser/${examplesRoot}/output-data/sqoop -m 1</command>
    <!--<file>db.hsqldb.properties#db.hsqldb.properties</file>
    <file>db.hsqldb.script#db.hsqldb.script</file>-->
  </sqoop>
  <ok to="end"/>
  <error to="fail"/>
</action>
<kill name="fail">
  <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
I was expecting the job to run without any errors, but it got killed and gave the following error:
UnsupportedOperationException: Accessing local file system is not allowed.
I don't understand where I am wrong and why it does not allow the job to complete. Can anyone help me solve this issue?
The Oozie sharelib (which contains the Sqoop action's dependencies) is stored on HDFS, and the server needs to know how to communicate with the Hadoop cluster. Accessing a sharelib stored on the local filesystem is not allowed; see CVE-2017-15712.
Please review conf/hadoop-conf/core-site.xml and make sure it does not use the local filesystem. For example, if your HDFS namenode listens on port 9000 on localhost, configure fs.defaultFS accordingly:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  ...
</configuration>
Alternatively, you can remove the (dummy) RawLocalFileSystem class and restart the server, but that isn't recommended, since the server then becomes vulnerable to CVE-2017-15712.
Hope this helps. Also see this answer.
I have an HTTPS WSDL URL (https://hostname/MyApp/MyApp.svc?wsdl) that needs to be consumed in a Maven project. The certificate on the WSDL endpoint is expired and is issued to hostname.company.com.
I have the following in my Maven pom.xml:
<dependencies>
  <dependency>
    <groupId>com.sun.xml.ws</groupId>
    <artifactId>jaxws-rt</artifactId>
    <scope>compile</scope>
    <version>2.1.3</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jaxws-maven-plugin</artifactId>
      <version>1.10</version>
      <executions>
        <execution>
          <goals>
            <goal>wsimport</goal>
          </goals>
          <configuration>
            <sourceDestDir>${project.build.directory}/generated-sources/jaxws-wsimport</sourceDestDir>
            <wsdlUrls>
              <wsdlUrl>https://hostname/MyApp/MyApp.svc?wsdl</wsdlUrl>
            </wsdlUrls>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
When I do a clean and build, I get the following error:
[jaxws:wsimport]
Processing: https://hostname/MyApp/MyApp.svc?wsdl
jaxws:wsimport args: [-s, C:\WorkspaceNetBeans\Maven\WSTest\target\generated-sources\jaxws-wsimport, -d, C:\WorkspaceNetBeans\Maven\WSTest\target\classes, https://hostname/MyApp/MyApp.svc?wsdl]
parsing WSDL...
java.security.cert.CertificateException: No name matching hostname found
Failed to read the WSDL document: https://hostname/MyApp/MyApp.svc?wsdl, because 1) could not find the document; /2) the document could not be read; 3) the root element of the document is not <wsdl:definitions>.
failed.noservice=Could not find wsdl:service in the provided WSDL(s):
At least one WSDL with at least one service definition needs to be provided.
Failed to parse the WSDL.
I added the certificate to the keystore using the keytool utility. What else do I need to do?
You can download your WSDL locally from a web browser and then run wsimport on that local file to generate your Java model.
You have the following options:
1) Import the server's certificate into the trust store, and also make sure that the CN presented in the server certificate is the same as the hostname you are using. This link can be helpful: http://bluefoot.info/howtos/how-to-avoid-java-security-cert-certificateexception-no-name-matching-localhost-found/
2) Use a tool like NetBeans or Eclipse that can ignore the SSL aspects of the server (I doubt it, though).
3) Create a local version of the WSDL, resolving all the dependent XSD references, and use that WSDL instead.
How do I change the log level for the classloaders in JBoss AS 7?
As a side note: I have found information on how to do this in JBoss 5 and below, but since classloading and logging have completely changed in JBoss 7, I can't figure out how to do it.
https://community.jboss.org/wiki/EnableClassloaderLogging
Add the following to JAVA_OPTS in your startup script:
-verbose:class
And add the following to your JBoss config file (standalone.xml, for example):
<profile>
  <subsystem xmlns="urn:jboss:domain:logging:1.1">
    <logger category="org.jboss.as.deployment">
      <level name="DEBUG"/>
    </logger>
  </subsystem>
</profile>
For testing purposes I want to use Jetty 8 to serve only static content. I know how to start the webserver from the command line:
java -jar start.jar jetty.port=8082
I would like to be able to use a vanilla Jetty, preferably 8 or 7, and start it using something like:
java -jar start.jar OPTIONS=resources resources.root=../foo jetty.port=8082
The files should then be accessible from the root of the server. A file called ../foo/x.html should be accessible via http://localhost:8082/x.html.
I don't want to create a WAR file or anything fancy. Preferably it shouldn't do any caching on the server side, so the files stay unlocked on Windows machines. Also, I only want to serve files (including those in subdirectories): no fancy file browser or ways to modify them from a client.
Is this possible? If not, what is the minimum configuration needed to accomplish such behavior?
Additional information
I've tried the following command. I expected to be able to browse the Javadoc shipped with Jetty 8 at http://localhost:8080/javadoc/, but it always gives me a 404:
java -jar start.jar --ini OPTIONS=Server,resources etc/jetty.xml contexts/javadoc.xml
The simplest way to start Jetty and have it serve static content is with the following XML file:
static-content.xml:
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure id="FileServer" class="org.eclipse.jetty.server.Server">
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <Set name="host"><Property name="jetty.host" /></Set>
        <Set name="port"><Property name="jetty.port" default="8080"/></Set>
      </New>
    </Arg>
  </Call>
  <Set name="handler">
    <New class="org.eclipse.jetty.server.handler.ResourceHandler">
      <Set name="resourceBase"><Property name="files.base" default="./"/></Set>
    </New>
  </Set>
</Configure>
Then you can start Jetty using:
java -jar start.jar --ini static-content.xml files.base=./foo jetty.port=8082
If you omit files.base, the current directory will be used; if you omit jetty.port, port 8080 will be used.
The --ini option disables the settings from start.ini, which also makes sure that no other handlers etc. are activated.
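If you would rather skip start.jar and the XML file altogether, the same behaviour can be had from a few lines of embedded Jetty. A minimal sketch, assuming the Jetty 7/8 jars are on the classpath; the port and directory are just examples matching the command above:

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.server.handler.ResourceHandler

object StaticFileServer {
  def main(args: Array[String]): Unit = {
    val server = new Server(8082) // binds to all interfaces on port 8082

    // serve files from ./foo at the root of the server, without directory listings
    val resources = new ResourceHandler
    resources.setDirectoriesListed(false)
    resources.setResourceBase("./foo")

    server.setHandler(resources)
    server.start()
    server.join()
  }
}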
A bit off topic, but somebody using Maven may wish to do something like this (supposing that the static resources have been copied to target/web):
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <version>8.1.9.v20130131</version>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>install</phase>
      <goals>
        <goal>start</goal>
      </goals>
      <configuration>
        <webAppConfig>
          <contextPath>/</contextPath>
          <resourceBases>
            <resourceBase>${project.build.directory}/web</resourceBase>
          </resourceBases>
        </webAppConfig>
      </configuration>
    </execution>
  </executions>
</plugin>
In your distribution, under the contexts directory, there is a javadoc.xml that you can use as an example of how to do this easily enough.
http://git.eclipse.org/c/jetty/org.eclipse.jetty.project.git/tree/jetty-distribution/src/main/resources/contexts/javadoc.xml
That is what it actually looks like.
You are looking to change the context path and the resource base.
I would also recommend removing jetty-webapps.xml from the startup in the start.ini file, and also removing the context files you don't want to deploy.
You can look at setting some of the other options in the start.ini file as well, if you like.
http://wiki.eclipse.org/Jetty/Feature/Start.jar
Go there for information on the start process.
cheers
Not sure what's going on here exactly, but my local Jetty server, which I'm running via the Maven plugin as in:
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <version>${org.mortbay.jetty.version}</version>
  <configuration>
    <systemProperties>
      <systemProperty>
        <name>webapp.env.name</name>
        <value>local</value>
      </systemProperty>
    </systemProperties>
    <stopPort>8080</stopPort>
    <stopKey>foo</stopKey>
  </configuration>
</plugin>
... is refusing all connections from 127.0.0.1...
For instance, if I run curl 'http://localhost:8080' I get a valid HTML response pointing at the contents of my webapp directory, but curl 'http://127.0.0.1:8080' returns curl: (52) Empty reply from server. Does anyone out there know how I might configure Jetty properly to accept such connections? This is complicating our dev team's local configurations quite a bit.
Thanks!