I'm using the WebLogic JWSC Ant task to generate WebLogic Web Service artifacts from an existing WSDL. JWSC generates all the required files and archives them in an EAR file.
Since I don't want the JWSC task to create a new application.xml, I use the applicationXml attribute of the JWSC task to point at the location of the existing application.xml. The JWSC task then updates that application.xml by successfully adding a new <module> tag. Inside the module tag there is a <web-uri> tag; web-uri defines the location of the WAR file. So far so good.
If I set the explode attribute to true, the task doesn't create an EAR file but puts all the required files inside a directory instead. The JWSC task also updates the specified application.xml, but this time it puts the exploded directory's name into the web-uri tag without the .war extension, even though it is wrong to put a non-WAR file name there.
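For reference, a minimal jwsc invocation of the kind described above would look roughly like this (a sketch, not my actual build file; the property names and the JWS file name are hypothetical, only the applicationXml and explode attributes are the ones in question):
<taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask"/>
<!-- build from an existing WSDL (compiled by wsdlc), reuse the existing application.xml,
     and produce an exploded directory instead of an EAR -->
<jwsc srcdir="${src.dir}" destdir="${build.dir}"
      applicationXml="${existing.appxml.location}"
      explode="true">
  <jws file="PetStoreServiceImpl.java" compiledWsdl="${compiledWsdl.jar}" type="JAXWS"/>
</jwsc>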
The correct format should look like this:
<module>
  <web>
    <web-uri>petStore.war</web-uri>
    <context-root>store</context-root>
  </web>
</module>
If you don't notice this, WebLogic will not find the specified WAR file (the entry lacks the .war extension).
Does anyone know why JWSC updates the application.xml with a wrong web-uri?
You can keep the original application.xml somewhere else; when we create the EAR we pass its location through the appxml property, e.g. appxml="${path-original-application.xml}". Below is the snippet:
<target name="dist-ear" depends="clean-build-webservices">
  <delete file="${build.dir}/META-INF/application.xml"/>
  <copy todir="${build.dir}/META-INF" overwrite="true">
    <fileset dir="${webservices.resource.dir}">
      <include name="weblogic-application.xml"/>
    </fileset>
  </copy>
  <ear destfile="${dist.dir}/${webservice.name}.ear" appxml="${viwebservices.appxml.location}">
    <fileset dir="${build.dir}" includes="*.war"/>
    <zipfileset dir="${webservices.src.dir}/jdbc" prefix="jdbc"/>
    <metainf dir="${build.dir}/META-INF"/>
  </ear>
</target>
I am using Elastic Beanstalk and searching for an in-code solution to increase the maximum user file upload size. Right now I get "413 Request Entity Too Large" if I try to upload, say, a picture of 10+ MB. Nginx as a proxy server is automatically denying the request. I am using Amazon Linux 2 as the OS.
SSH solutions will not work for me, as EC2 instances may go down at any point and redeploy without this file (storage is ephemeral), which is bad for my users.
The solutions provided here do not seem to work for .NET Core either, with the mix of config and conf files. One comment mentioned that I could try updating the web.config file, so I did, placing it inside /.platform/ with this content:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <bindings>
    <basicHttpBinding>
      <binding maxBufferPoolSize="2147483647"
               maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" transferMode="Streamed">
        <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                      maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
        <security mode="TransportCredentialOnly">
          <transport clientCredentialType="Ntlm" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
</configuration>
But this does not have any effect on the upload limit.
I have also tried adding a .sh script in .platform/hooks/postdeploy, hoping to modify the instance after deployment with:
#!/bin/bash
sudo echo "client_max_body_size 100M;" > /etc/nginx/conf.d/proxy.conf
sudo nginx service restart
But this causes the deployment to fail with "exec format error"; changing the shebang to #!/bin/sh or #!/usr/bin/bash did not help either.
Does anyone know what I can do to increase file size limits with .NET Core, or make Amazon Linux 2 accept bash scripts?
Solved. All I had to do was create a file at /.platform/nginx/conf.d/proxy.conf with the content:
client_max_body_size 100M;
I don't know why this only worked now, but I also needed to make sure in the file's properties in Visual Studio that "Copy to output directory" was set to "Always" and that the build action was "Content". No .config file was necessary.
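For reference, the equivalent entry in the .csproj would look roughly like this (a sketch; adjust the path to match your project layout):
<ItemGroup>
  <!-- make sure the nginx override ships with the build/publish output -->
  <Content Include=".platform\nginx\conf.d\proxy.conf" CopyToOutputDirectory="Always" />
</ItemGroup>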
I want to build a Spring Boot application with an external Log4j2 configuration. I created a project where the Log4j config ships alongside the Spring Boot application jar file but is stored outside the jar. The application loads the configuration file that is defined in application.properties, e.g. logging.config=file:config/log4j2.xml. The app structure looks like this:
app.jar
config/
  application.properties
  log4j2.xml
The app starts correctly and sees both configuration files. However, the tests can't locate the log4j2.xml that is defined in application.properties.
I've created a test.properties file as follows:
logging.level.com.mypackage=TRACE
# This sets the global logging level and specifies the appenders
log4j.rootLogger=TRACE, theConsoleAppender
# settings for the console appender
log4j.appender.theConsoleAppender=org.apache.log4j.ConsoleAppender
log4j.appender.theConsoleAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.theConsoleAppender.layout.ConversionPattern=%d %p %c{10} [%t] %m%n
and annotated my test class with:
@TestPropertySource(locations="classpath:test.properties")
But the test runner still looks for the file defined in logging.config.
What properties should I override in the application.properties file so that log4j2.xml is not required during unit testing?
Thanks for any help.
The best thing I could come up with was overriding the logging.config property to point to a Log4j configuration on the classpath, and adding that Log4j configuration as a test resource.
In my test.properties
logging.config=classpath:log4j2.xml
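Concretely, that means putting a Log4j2 configuration on the test classpath, typically under src/test/resources. A minimal console-only log4j2.xml for tests might look like this (a sketch mirroring the console appender and pattern from the question):
<Configuration status="WARN">
  <Appenders>
    <!-- console appender with the same pattern used above -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %p %c{10} [%t] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="trace">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>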
I am trying to upload files to my Bluemix app, and I am having problems using and understanding the file system. After I have successfully uploaded files, I want to reference their paths in my configuration files.
Specifically, I want to upload a jar file to the server and later use it as a javaagent.
I have tried approaching this issue from several directions.
I see that I can create a folder in the liberty_buildpack and place the files inside; I can later access it during the compile/release phases from the tmp folder:
/tmp/buildpacks/ibm-websphere-liberty-buildpack/lib/liberty_buildpack/my_folder
I can also see that, in the file system I get when building and deploying the app, I can only copy to the folder located at:
/app
So I copied the JAR file to the /app folder and set it as a javaagent using two methods:
Manually set the environment variable JAVA_OPTS with a javaagent entry pointing to /app/myjar.jar, using cf set-env
Deploy a war file of the app using cf push from a wlp server, and set the javaagent inside the server.xml file via the genericJvmArguments attribute
Neither of those methods worked; either the deploy phase of the application failed or my features simply didn't work.
So I tried searching the application file system using cf files and came up with the app folder, but strangely it didn't have the same files as the folder I deploy, and I couldn't find any connection to the deployed folder or the buildpack.
Can someone explain how this should be done correctly? Namely, uploading the file and then how I should point to it from the environment variable/server file?
I mean, should it be /app/something, or maybe another path?
I have also seen the use of relative paths like #droplet.sandbox; maybe that's the way to address those files? And how should I access those folders from cf files?
Thanks.
EDIT:
As I have been instructed in the comments, I added the jar file to the system. The problem is that when I add the javaagent option to the environment variable JAVA_OPTS, the deploy stage fails with a timeout error:
payload: {... "reason"=>"CRASHED", "exit_status"=>32, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>
1433864527}
The way I am assigning the javaagent is as follows:
cf set-env myApp JAVA_OPTS "path/agent.jar"
I have tried several locations:
1. I found that if I add the jar files to my WebContent folder, I can find them at: /app/wlp/usr/servers/defaultServer/apps/myapp.war/resources/
2. I copied the jar file from the /tmp location during the compilation phase to /home/vcap/app/agent.jar
3. I placed the jar file in /app/.java/jre/lib
None of those three paths worked.
I found out that if I give a wrong path, the system behaves the same, so it may be a path problem.
Any ideas?
Try this:
1. Put your agent jars in a folder called ".profile.d" inside your WAR package;
2. cf se your-app JAVA_OPTS -javaagent:/home/vcap/app/.profile.d/your.jar;
3. Push the war to Bluemix.
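For example, step 2 with a hypothetical app name and jar name would be:
cf set-env myApp JAVA_OPTS "-javaagent:/home/vcap/app/.profile.d/agent.jar"
The change only takes effect once the app is pushed again (or restaged), since environment variable changes are not applied to a running instance.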
Not sure if this is exactly the right answer, but I am using additional jar files in my Liberty application, so maybe this will help.
I push up a myapp.war file to Bluemix. Within the war file, inside the WEB-INF folder, I have a lib folder that contains a number of jar files. The classes in those jar files are then used within the Java code of my application.
myapp.war/WEB-INF/lib/myPlugin.jar
You could try doing something like that with the jar file(s) you need, building them into the war file.
Other than that, you could try the section Overlaying the JRE from the Bluemix Liberty documentation to add jars to the JRE.
I have a webapp which I deploy on a Jetty server through jetty runner. I inject properties read from a properties file using Spring. As of now, I have kept the properties file within the webapp itself (in the WEB-INF/classes directory). I want to keep this properties file external to the webapp and then inject the properties using Spring. Is there any configuration I can do in the jetty.xml file to achieve this?
I managed to solve this by passing the properties file as a command-line parameter while starting Jetty. Something like:
java -Dexternal.properties.file=myfile.properties -jar jetty.jar
In the Java code, I read it by getting the system property and then using FileUtils to get the file.
// the property name must match the -D parameter used when starting Jetty
String externalPropertiesFile = System.getProperty("external.properties.file");
File file = FileUtils.getFile(externalPropertiesFile); // org.apache.commons.io.FileUtils
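Alternatively, since the goal was to inject the properties through Spring, the same system property can be referenced directly in the Spring configuration instead of reading the file by hand. A sketch, assuming XML-based Spring configuration with the context namespace declared:
<!-- ${external.properties.file} is resolved from the JVM system properties -->
<context:property-placeholder location="file:${external.properties.file}"/>
Beans can then use ${...} placeholders or @Value for values from that file as usual.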
I am trying to launch a MapReduce job on an Amazon MapReduce (EMR) cluster. My MapReduce job does some pre-processing before generating map/reduce tasks, and this pre-processing requires third-party libs such as javacv and opencv. Following Amazon's documentation, I have included those libraries in HADOOP_CLASSPATH, so that I have a line HADOOP_CLASSPATH= in hadoop-user-env.sh in /home/hadoop/conf/ on the master node. According to the documentation, the entry in this script should be included in hadoop-env.sh. Hence, I assumed that HADOOP_CLASSPATH now has my libs on the classpath. I did this in bootstrap actions. However, when I launch the job, it still complains with a class-not-found exception pointing to a class in the jar which is supposed to be on the classpath.
Can someone tell me where I am going wrong? By the way, I am using Hadoop 2.2.0. In my local infrastructure, I have a small bash script that exports HADOOP_CLASSPATH with all the libs included in it and calls hadoop jar with -libjars.
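For illustration, such a local launcher script might look like this (a sketch; the jar names, paths, and driver class are hypothetical, and the driver is assumed to use ToolRunner so that -libjars is honored):
#!/bin/bash
# put the third-party libs on the client-side classpath
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/libs/javacv.jar:/opt/libs/opencv.jar"
# and ship them to the map/reduce tasks via -libjars
hadoop jar my-job.jar com.example.MyJob \
  -libjars /opt/libs/javacv.jar,/opt/libs/opencv.jar \
  /input /output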
I solved this with an AWS EMR bootstrap task to add a jar to the hadoop classpath:
Uploaded my jar to S3
Created a bootstrap script to copy the jar from S3 to the EMR instance and add the jar to the classpath:
#!/bin/bash
hadoop fs -copyToLocal s3://my-bucket/libthrift-0.9.2.jar /home/hadoop/lib/
echo 'export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/home/hadoop/lib/libthrift-0.9.2.jar"' >> /home/hadoop/conf/hadoop-user-env.sh
Saved that script as "add-jar-to-hadoop-classpath.sh" and uploaded it to S3.
My "aws emr create-cluster" command adds the bootstrap script with this argument: --bootstrap-actions Path=s3://my-bucket/add-jar-to-hadoop-classpath.sh
When EMR spins up the instance, it has the file /home/hadoop/conf/hadoop-user-env.sh created, and my MR job was able to instantiate the Thrift classes in the jar.
UPDATE: I was able to instantiate Thrift classes from the MASTER node, but not from the CORE node. I SSHed into the CORE node and the lib was properly copied to /home/hadoop/lib and my HADOOP_CLASSPATH setting was there, but I was still getting class-not-found at runtime when the mapper tried to use Thrift.
The solution ended up being to use the maven-shade-plugin and embed the thrift jar:
<plugin>
  <!-- Use the maven shade plugin to embed the thrift classes in our jar.
       Couldn't get the HADOOP_CLASSPATH on AWS EMR to load these classes even
       with the jar copied to /home/hadoop/lib and the proper env var in
       /home/hadoop/conf/hadoop-user-env.sh -->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <includes>
            <include>org.apache.thrift:libthrift</include>
          </includes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>
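With this in place, mvn package produces a jar that already embeds the org.apache.thrift:libthrift classes, so the mappers on the CORE nodes no longer depend on HADOOP_CLASSPATH being picked up.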
When your job is executed, the "controller" logfile contains the actually executed command line. It could look something like this:
2014-06-02T15:37:47.863Z INFO Fetching jar file.
2014-06-02T15:37:54.943Z INFO Working dir /mnt/var/lib/hadoop/steps/13
2014-06-02T15:37:54.944Z INFO Executing /usr/java/latest/bin/java -cp /home/hadoop/conf:/usr/java/latest/lib/tools.jar:/home/hadoop:/home/hadoop/hadoop-tools.jar:/home/hadoop/hadoop-tools-1.0.3.jar:/home/hadoop/hadoop-core-1.0.3.jar:/home/hadoop/hadoop-core.jar:/home/hadoop/lib/*:/home/hadoop/lib/jetty-ext/* -Xmx1000m -Dhadoop.log.dir=/mnt/var/log/hadoop/steps/13 -Dhadoop.log.file=syslog -Dhadoop.home.dir=/home/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,DRFA -Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/13/tmp -Djava.library.path=/home/hadoop/native/Linux-amd64-64 org.apache.hadoop.util.RunJar <YOUR_JAR> <YOUR_ARGS>
The log is located on the master node in /mnt/var/lib/hadoop/steps/ - it's easily accessible when you SSH into the master node (requires specifying a key pair when creating the cluster).
I've never really worked with what's in HADOOP_CLASSPATH, but if you define a bootstrap action to just copy your libraries into /home/hadoop/lib, that should solve the issue.