How can I use AWS::Serverless::LayerVersion to use external libraries in my AWS Lambda functions?

I need to use an external library located on my local file system in order to successfully execute my Lambda function. Using the AWS SAM framework, I found out that this can be done by specifying an AWS::Serverless::LayerVersion resource.
What I am not sure about is how exactly this works and how I specify the path to my external library. Do I first need to deploy the library to an S3 bucket, or not?

You need to deploy the JAR as a layer in the AWS Lambda Layers section.
From the AWS Lambda Layers documentation:
You can configure your Lambda function to pull in
additional code and content in the form of layers. A layer is a ZIP
archive that contains libraries, a custom runtime, or other
dependencies. With layers, you can use libraries in your function
without needing to include them in your deployment package.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
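To address the SAM part of the question directly: you declare the layer in template.yaml with a ContentUri that points at a local folder, and sam package / sam deploy upload that folder to S3 for you, so you do not have to upload the library yourself first. A minimal sketch, assuming Java 11; the logical names, handler, and paths are placeholders:
Resources:
  ExternalLibsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: external-libs        # placeholder name
      ContentUri: layer/              # local folder; for Java put JARs under layer/java/lib/
      CompatibleRuntimes:
        - java11
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.Handler::handleRequest   # placeholder handler
      Runtime: java11
      CodeUri: target/function.jar                   # placeholder artifact
      Layers:
        - !Ref ExternalLibsLayer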
Following are the steps to use AWS Lambda layers:
Write the Lambda layer code
Package the Lambda layer
Deploy the Lambda layer (a CLI sketch follows this list)
Attach the layer to a function
Call a method from the layer
Verify the results
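Once the layer ZIP is built (see the packaging notes below), the deploy and attach steps can be done with the AWS CLI. This is only a sketch; the layer name, file names, runtime, and function name are placeholders:
aws lambda publish-layer-version \
  --layer-name java-lambda-layer \
  --zip-file fileb://layer.zip \
  --compatible-runtimes java11
# Attach the returned LayerVersionArn to your function:
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:java-lambda-layer:1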
Once you have finished writing your function, make sure the pom.xml contains the artifact coordinates and the maven-shade-plugin:
<groupId>java-lambda-layer</groupId>
<artifactId>java-lambda-layer</artifactId>
<version>1.0-SNAPSHOT</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Run Maven:
mvn clean install (or mvn clean package)
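For the Java runtime, the layout of the layer ZIP matters: Lambda unpacks layers under /opt and adds java/lib to the classpath, so after the Maven build the layer artifact should look roughly like this (the JAR name is a placeholder):
layer.zip
-java/lib/java-lambda-layer-1.0-SNAPSHOT.jar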
Please read further at the following link:
https://medium.com/@zeebaig/working-with-aws-lambda-layers-ddf5c91674d3

Related

AWS Lambda function throws ClassNotFoundException

I have a Spring Boot demo project. I am trying to deploy it to AWS Lambda, but I get a ClassNotFoundException even though the JAR I upload contains the necessary dependencies.
Here goes my code:
pom.xml
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.2.1</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>3.6.0</version>
</dependency>
<dependency>
<groupId>com.amazonaws.serverless</groupId>
<artifactId>aws-serverless-java-container-spring</artifactId>
<version>[0.1,)</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Application Class
@SpringBootApplication
public class DemoApplication extends SpringBootServletInitializer {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
System.out.println("Welcome to demo project");
}
}
Controller Class
@RestController
public class DemoController {
@GetMapping(value="/getValue")
public String getId() {
return " Call from controller";
}
}
LambdaHandler Class
public class DemoLambdaHandler implements RequestStreamHandler {
public static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;
static {
try {
handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(DemoApplication.class);
} catch (ContainerInitializationException e) {
e.printStackTrace();
throw new RuntimeException("Container not initialized", e);
}
}
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
// TODO Auto-generated method stub
handler.proxyStream(input, output, context);
}
}
Not sure what I could be missing in this. Kindly help. Below is my inspected jar
Did you find a solution to this? I have been getting the same exact error.
What I can tell so far is that, unlike other languages such as Node.js, the Docker image for Java has the command specified like CMD [ "com.example.LambdaHandler::handleRequest" ], whereas the Node.js one is CMD [ "app.handler" ]. The difference is that the Node.js one specifies which file it is, e.g. app.js with a function called handler.
But the Java one only has the class path, without any way to specify the path to the fat JAR file, even when the JAR file is located in /var/task. How would Lambda know the JAR file name?
I had the same issue; I fixed it by packaging Spring Boot as a dependency.
By default, the Spring Boot plugin packs your classes into the BOOT-INF/classes directory, but AWS looks for the handler class relative to the root of the JAR, so it can't find it.
You can check this by extracting your .jar file and seeing if your file is in:
BOOT-INF/classes/com/example/demo/handler/LambdaHandler.class
instead of:
com/example/demo/handler/LambdaHandler.class
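For example, you can list the entries without fully extracting the archive (the JAR and class names here are placeholders for your own):
jar tf target/demo-0.0.1-SNAPSHOT.jar | grep DemoLambdaHandler
# If every match starts with BOOT-INF/classes/, Lambda will not find the handler.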
To solve it, just mark your package as exec in your pom.xml file:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>2.6.1</version>
<configuration>
<classifier>exec</classifier>
</configuration>
</plugin>
More info here: Spring Boot as a Dependency
Try building the project with Maven and ensure you have the required maven-shade-plugin in your POM so that the JAR built when you run mvn package contains the dependencies. If you are missing this plugin, you will create a JAR that does not contain the dependencies and you will encounter a ClassNotFoundException.
To learn how to build and deploy a Lambda function by using the Lambda runtime Java API, see this AWS tutorial. It will walk you step by step through the process of building Lambda functions that work:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_workflows_stepfunctions
This tutorial then uses the Lambda functions within Step Functions to create a workflow.
Lambda functions are created using the Lambda runtime Java API -- com.amazonaws.services.lambda.runtime.RequestHandler. Please do not use Spring Boot APIs to create a Lambda function.
However, if you want to invoke a Lambda function from a Spring Boot app and then, for example, display the result in a view, you can use the Lambda client Java API, which is software.amazon.awssdk.services.lambda.LambdaClient.
So in summary:
com.amazonaws.services.lambda.runtime.RequestHandler is used to create a Lambda function by using the Java Lambda runtime API.
software.amazon.awssdk.services.lambda.LambdaClient is the Lambda client Java API (SDK v2) that lets you interact with deployed Lambda functions. For example, you can use this client to invoke a Lambda function from a Java app, such as a Spring Boot web app. To see a working example of how to use the Lambda client API to invoke a Lambda function, see https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/lambda/src/main/java/com/example/lambda/LambdaInvoke.java.
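As a rough sketch of that client-side invocation (not the author's code; the function name, region, and payload are placeholders):
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class LambdaInvokeSketch {
    public static void main(String[] args) {
        // Build a client for the region where the function is deployed (placeholder region).
        try (LambdaClient lambda = LambdaClient.builder().region(Region.US_EAST_1).build()) {
            InvokeRequest request = InvokeRequest.builder()
                    .functionName("my-demo-function")                 // placeholder function name
                    .payload(SdkBytes.fromUtf8String("{\"id\": 1}"))  // placeholder JSON payload
                    .build();
            InvokeResponse response = lambda.invoke(request);
            // Print the raw payload returned by the function.
            System.out.println(response.payload().asUtf8String());
        }
    }
}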
Please add these two dependencies to your POM file:
1) aws-lambda-java-core
2) aws-lambda-java-events
This should fix it.

Spring Boot fails to deploy after adding .ebextensions for nginx SSL - [An error occurred during execution of command [app-deploy]]

I have a Spring Boot app that deploys just fine to AWS Elastic Beanstalk, and the default nginx proxy works, allowing me to connect via port 80.
I followed the instructions here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html, and verified them against another of my projects that works with this exact config, yet Beanstalk fails to deploy the app with the error:
2020/05/29 01:27:56.418780 [ERROR] An error occurred during execution of command [app-deploy] - [CheckProcfileForJavaApplication]. Stop running the command. Error: there is no Procfile and no .jar file at root level of your source bundle
The contents of my war file are as such:
app.war
-.ebextensions
-nginx/conf.d/https.conf
-https-instance-single.config
-https-instance.config
-web-inf/
My config files pass as valid YAML files. (These files are identical to those in the AWS doc, and to those that work in another project of mine.)
I am using a single instance, with port 443 set open.
These are the errors reported throughout the various log files:
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2020/05/29 01:37:53.054366 [ERROR] /usr/bin/id: healthd: no such user
...
2020/05/29 01:37:53.254965 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/healthd.service to /etc/systemd/system/healthd.service.
...
2020/05/29 01:37:53.732794 [ERROR] Created symlink from /etc/systemd/system/multi-user.target.wants/cfn-hup.service to /etc/systemd/system/cfn-hup.service.
----------------------------------------
/var/log/cfn-hup.log
----------------------------------------
ReadTimeout: HTTPSConnectionPool(host='sqs.us-east-1.amazonaws.com', port=443): Read timed out. (read timeout=23)
Taking into account Dean Wookey's answer for Java 11, I have successfully deployed a Spring Boot application JAR along with the .ebextensions folder. I just added the maven-antrun-plugin to my Maven build configuration; the output is a .zip file that contains the .ebextensions folder and the Spring Boot .jar file at the same level. I then deploy this final zip file via the AWS console.
The following is the maven-antrun-plugin configuration:
....
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.8</version>
<executions>
<execution>
<id>prepare</id>
<phase>package</phase>
<configuration>
<tasks>
<copy todir="${project.build.directory}/${project.build.finalName}/" overwrite="false">
<fileset dir="./" includes=".ebextensions/**"/>
<fileset dir="${project.build.directory}" includes="*.jar"/>
</copy>
<zip destfile="${project.build.directory}/${project.build.finalName}.zip" basedir="${project.build.directory}/${project.build.finalName}"/>
</tasks>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
....
Issue with Java and Linux version:
If you are using Java 8 and Linux 2.10.9, the code will work and override the nginx configuration, but if you choose Corretto 11 and Linux 2.2.3, you get the following error:
Error: there is no Procfile and no .jar file at root level of your
source bundle
Creating a new environment with Java 8 and deploying the app again will resolve the issue.
Instead of changing to Java 8 as described in vaquar khan's answer, an alternative is to package your source jar inside a zip that also contains the .ebextensions folder.
In other words:
source.zip
-.ebextensions
-nginx/conf.d/https.conf
-https-instance-single.config
-https-instance.config
-web-inf/
-app.war
If you look at the latest documentation https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html, you'll see that the nginx config now goes in the .platform folder instead, so your structure would be:
source.zip
-.ebextensions
-https-instance-single.config
-https-instance.config
-.platform
-nginx/conf.d/https.conf
-web-inf/
-app.war
After following vaquar's answer above, also change the buildspec.yml file to use the correct Java version, e.g.:
runtime-versions:
  java: corretto8 # previously this was openjdk8
Should work.
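For context, a minimal buildspec.yml sketch assuming a CodeBuild + Maven setup; the commands and artifact paths are placeholders to adapt:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8   # match the Beanstalk platform's Java version
  build:
    commands:
      - mvn clean package
artifacts:
  files:
    - target/app.war       # placeholder artifact name
    - .ebextensions/**/*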
It is still possible to use .ebextensions within your war file.
Add the following to your pom.xml in the <build><plugins> section:
<build>
<plugins>
...
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<id>add-resource-ebextensions</id>
<phase>generate-resources</phase>
<goals>
<goal>add-resource</goal>
</goals>
<configuration>
<resources>
<resource>
<directory>${basedir}/.ebextensions</directory>
<targetPath>.ebextensions</targetPath>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
This will copy the .ebextensions folder to the WEB-INF/classes folder. AWS picks it up there at startup and applies the scripts from that location.

Arquillian tomee remote jacoco code coverage

I am doing integration testing using Arquillian against a remote TomEE-Plus 7.0.4 and trying to get code coverage using JaCoCo 0.8.2. My code coverage is not recorded because I am using arquillian-tomee-remote, and since the coverage check fails, I am not able to complete the build. I need sample code that combines arquillian-tomee-remote with JaCoCo code coverage; I would appreciate any working sample code or sample project.
I used the prepare-agent goal, which generates surefireArgLine (the -javaagent argument), and passed it to the Surefire plugin. The issue is that I am using a remote TomEE and don't know how to generate the correct Java agent setting. surefireArgLine is set to -javaagent:/home/user/.m2/repository/org/jacoco/org.jacoco.agent/0.8.2/org.jacoco.agent-0.8.2-runtime.jar=destfile=/home/user/project/target/coverage-reports/jacoco-ut.exec,append=true,excludes=/config/*.class:/util/*Constants.class
What is the correct javaagent option for my configuration so that it connects to arquillian-tomee-remote?
JaCoCo plugin:
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>${plugin.maven.jacoco.version}</version>
<configuration>
<propertyName>coverageAgent</propertyName>
<append>true</append>
<excludes>
<exclude>**/config/*.class</exclude>
<exclude>**/util/*Constants.class</exclude>
</excludes>
</configuration>
<executions>
<execution>
<id>pre-unit-test</id>
<goals>
<goal>prepare-agent</goal>
</goals>
<configuration>
<destFile>${sonar.jacoco.reportPath}</destFile>
<propertyName>surefireArgLine</propertyName>
<append>true</append>
</configuration>
</execution>
<execution>
<id>post-unit-test</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
<configuration>
<dataFile>${sonar.jacoco.reportPath}</dataFile>
<outputDirectory>${project.reporting.outputDirectory}/jacoco-ut</outputDirectory>
<append>true</append>
</configuration>
</execution>
<execution>
<id>check</id>
<goals>
<goal>check</goal>
</goals>
<configuration>
<dataFile>${sonar.jacoco.reportPath}</dataFile>
<haltOnFailure>true</haltOnFailure>
<rules>
<rule>
<element>BUNDLE</element>
<limits>
<limit>
<counter>LINE</counter>
<value>COVEREDRATIO</value>
<minimum>0.99</minimum>
</limit>
<limit>
<counter>BRANCH</counter>
<value>COVEREDRATIO</value>
<minimum>0.99</minimum>
</limit>
<limit>
<counter>CLASS</counter>
<value>MISSEDCOUNT</value>
<maximum>0</maximum>
</limit>
</limits>
</rule>
</rules>
</configuration>
</execution>
</executions>
</plugin>
Dependencies
<dependency>
<groupId>org.jboss.arquillian.testng</groupId>
<artifactId>arquillian-testng-container</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.arquillian.config</groupId>
<artifactId>arquillian-config-api</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.arquillian.extension</groupId>
<artifactId>arquillian-jacoco</artifactId>
<version>1.0.0.Alpha10</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jacoco</groupId>
<artifactId>org.jacoco.agent</artifactId>
<classifier>runtime</classifier>
<scope>test</scope>
<version>${plugin.maven.jacoco.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.jacoco/org.jacoco.core -->
<dependency>
<groupId>org.jacoco</groupId>
<artifactId>org.jacoco.core</artifactId>
<version>${plugin.maven.jacoco.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.tomee</groupId>
<artifactId>arquillian-tomee-remote</artifactId>
<version>${tomee.version}</version>
<scope>test</scope>
</dependency>
Arquillian.xml
<extension qualifier="jacoco">
<property name="includes">com.demo.*</property>
</extension>
You can set catalina_opts in arquillian.xml for the TomEE container. Filter it with Maven to pass the JaCoCo javaagent and you are done :).
I have added the proper Java agent (surefireArgLine) to the remote TomEE server via catalina_opts in the Surefire plugin; it works.
surefireArgLine is populated by the JaCoCo prepare-agent goal when the tests run.
<tomee.catalina_opts>${surefireArgLine}</tomee.catalina_opts>
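As a sketch of the arquillian.xml route from the first suggestion above; the container qualifier and property name are assumptions, and the ${surefireArgLine} placeholder is meant to be substituted by Maven resource filtering of arquillian.xml:
<container qualifier="tomee-remote" default="true">
  <configuration>
    <!-- Filtered by Maven: becomes the -javaagent string produced by jacoco:prepare-agent -->
    <property name="catalina_opts">${surefireArgLine}</property>
  </configuration>
</container>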
Disclaimer: I'm not an expert in either Arquillian or TomEE, so you might need to adjust this answer for your purposes.
Anyway, in a nutshell, JaCoCo instruments bytecode in order to provide a coverage report.
When Arquillian is used, the actual test execution happens in the TomEE JVM and not in the JVM that runs the test suite (probably a CI server or just a build script), so configuring JaCoCo on the test machine won't do much; you'll have to configure the server itself.
JaCoCo has a -javaagent option for doing this, and this Java Agent will "intercept" the loading of classes by the server and instrument them.
Now, when JaCoCo works, it produces a jacoco.exec file that contains the coverage data, which can be shown later in various ways (a Jenkins plugin to display coverage, Sonar integration, and so on).
This is by far the most used option AFAIK, so if you go with this approach, assuming the instrumentation really works, after the tests are done you'll have to find the jacoco.exec file on the machine running the server, download it to the build machine, and integrate it with CI/Sonar as needed.
However, there are alternative solutions:
JaCoCo Documentation states that there are three modes of running an instrumenting Java Agent:
File System: At JVM termination execution data is written to a local file.
TCP Socket Server: External tools can connect to the JVM and retrieve execution data over the socket connection. Optional execution data reset and execution data dump on VM exit is possible.
TCP Socket Client: At startup, the JaCoCo agent connects to a given TCP endpoint. Execution data is written to the socket connection on request. Optional execution data reset and execution data dump on VM exit is possible.
Technically you can just give different parameters to that javaagent so that it will run JaCoCo in one of these modes.
Anyway, we've discussed the first option, but you can also work with TCP configurations if it's required. Of course, here you'll have to handle security concerns (like permission to expose/access the port, etc).
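For illustration, the agent string for the TCP server mode could look roughly like this (the path, port, and includes pattern are placeholders), added to the server's CATALINA_OPTS:
-javaagent:/path/to/org.jacoco.agent-0.8.2-runtime.jar=output=tcpserver,address=*,port=6300,includes=com.demo.*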
If you work with TCP mode, there is a Maven plugin that can come in handy. I haven't used it myself, just googled it, so I can't comment on whether it's any good; it has only 2 stars on GitHub, so it's probably not production ready, but maybe you could get some ideas from its source code.

Can maven surefire plugin run multiple test executions when invoked directly? (Sonar + Jenkins not running all unit tests)

I have a maven project that uses the surefire plugin for unit testing. Currently the testing is split into two executions default-test and unitTests, both bound to the test goal, as follows:
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<executions>
<!-- Test one set of files -->
<execution>
<id>default-test</id>
<goals>
<goal>test</goal>
</goals>
<configuration>
<includes> ... </includes>
</configuration>
</execution>
<!-- Test a different set of files -->
<execution>
<id>unitTests</id>
<goals>
<goal>test</goal>
</goals>
<configuration>
<includes> ... </includes>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
This works fine when running mvn test:
$ mvn test
...
[INFO] maven-surefire-plugin:2.13:test (default-test)
...
[INFO] maven-surefire-plugin:2.13:test (unitTests)
I am using the Sonar plugin on Jenkins to generate code metrics, and Jenkins is set up as follows:
run the build action mvn clean install -DskipTests
run the Sonar plugin as a post-build action
This means that the Sonar plugin runs the unit tests, and the unit tests are run once only. And I see the following output when the Sonar plugin runs:
[INFO] maven-surefire-plugin:2.13:test (default-cli)
Note that only the default-cli execution is invoked rather than default-test + unitTests, because (I assume) the surefire plugin is invoked directly via mvn surefire:test instead of via the mvn test lifecycle.
I can add a default-cli execution to the POM file, and copy the configuration from the default-test execution, but this will only run one set of unit tests. The unit tests configured in the unitTests execution are not run at all, even though that execution is bound to the same goal.
How can I ensure that both the default-cli and unitTests executions are invoked when the Sonar plugin invokes mvn surefire:test?
I am thinking that perhaps my Jenkins setup should change, so that the unit tests are run in the normal build action, which generates code coverage reports, and then the Sonar plugin runs as a post-build action and loads these reports to perform analysis. Not sure if this is possible, or what changes are required to my POM files to support this (hoping to make minimal changes to POM files is another goal).
I am thinking that perhaps my Jenkins setup should change, so that the unit tests are run in the normal build action, which generates code coverage reports, and then the Sonar plugin runs as a post-build action and loads these reports to perform analysis.
That seems like a good idea. You could use Maven profiles to do this. You can configure the surefire plugin in different profiles and make the one you want your tools to use active by default.
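A minimal sketch of that profile approach (the profile id and include patterns are placeholders); the default-active profile is the one Sonar/Jenkins picks up when nothing else is specified:
<profiles>
  <profile>
    <id>all-unit-tests</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <includes>
              <!-- combine both sets of test files here -->
              <include>**/*Test.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>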

Launching a MapReduce job in Amazon Elastic MapReduce

I am trying to launch a MapReduce job on an Amazon Elastic MapReduce cluster. My MapReduce job does some pre-processing before generating map/reduce tasks. This pre-processing requires third-party libs such as JavaCV and OpenCV. Following Amazon's documentation, I have included those libraries in HADOOP_CLASSPATH, such that I have a line HADOOP_CLASSPATH= in hadoop-user-env.sh in /home/hadoop/conf/ on the master node. According to the documentation, the entry in this script should be included in hadoop-env.sh. Hence, I assumed that HADOOP_CLASSPATH now has my libs on the classpath. I did this in bootstrap actions. However, when I launch the job, it still throws a class-not-found exception pointing to a class in the jar that is supposed to be on the classpath.
Can someone tell me where I am going wrong? Btw, I am using Hadoop 2.2.0. In my local infrastructure, I have a small bash script that exports HADOOP_CLASSPATH with all the libs included in it and calls hadoop jar -libjars .
I solved this with an AWS EMR bootstrap task to add a jar to the hadoop classpath:
Uploaded my jar to S3
Created a bootstrap script to copy the jar from S3 to the EMR instance and add the jar to the classpath:
#!/bin/bash
hadoop fs -copyToLocal s3://my-bucket/libthrift-0.9.2.jar /home/hadoop/lib/
echo 'export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/home/hadoop/lib/libthrift-0.9.2.jar"' >> /home/hadoop/conf/hadoop-user-env.sh
Saved that script as "add-jar-to-hadoop-classpath.sh" and uploaded it to S3.
My "aws emr create-cluster" command adds the bootstrap script with this argument: --bootstrap-actions Path=s3://my-bucket/add-jar-to-hadoop-classpath.sh
When EMR spins up, the instance will have the file /home/hadoop/conf/hadoop-user-env.sh created, and my MR job was able to instantiate the Thrift classes in the jar.
UPDATE: I was able to instantiate Thrift classes from the MASTER node, but not from the CORE node. I SSHed into the CORE node and the lib was properly copied to /home/hadoop/lib and my HADOOP_CLASSPATH setting was there, but I was still getting class-not-found errors at runtime when the mapper tried to use Thrift.
The solution ended up being to use the maven-shade-plugin and embed the Thrift jar:
<plugin>
<!-- Use the maven shade plugin to embed the thrift classes in our jar.
Couldn't get the HADOOP_CLASSPATH on AWS EMR to load these classes even
with the jar copied to /home/hadoop/lib and the proper env var in
/home/hadoop/conf/hadoop-user-env.sh -->
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<includes>
<include>org.apache.thrift:libthrift</include>
</includes>
</artifactSet>
</configuration>
</execution>
</executions>
</plugin>
When your job is executed, the "controller" logfile contains the command line that was actually executed. This could look something like:
2014-06-02T15:37:47.863Z INFO Fetching jar file.
2014-06-02T15:37:54.943Z INFO Working dir /mnt/var/lib/hadoop/steps/13
2014-06-02T15:37:54.944Z INFO Executing /usr/java/latest/bin/java -cp /home/hadoop/conf:/usr/java/latest/lib/tools.jar:/home/hadoop:/home/hadoop/hadoop-tools.jar:/home/hadoop/hadoop-tools-1.0.3.jar:/home/hadoop/hadoop-core-1.0.3.jar:/home/hadoop/hadoop-core.jar:/home/hadoop/lib/*:/home/hadoop/lib/jetty-ext/* -Xmx1000m -Dhadoop.log.dir=/mnt/var/log/hadoop/steps/13 -Dhadoop.log.file=syslog -Dhadoop.home.dir=/home/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,DRFA -Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/13/tmp -Djava.library.path=/home/hadoop/native/Linux-amd64-64 org.apache.hadoop.util.RunJar <YOUR_JAR> <YOUR_ARGS>
The log is located on the master node in /mnt/var/lib/hadoop/steps/ - it's easily accessible when you SSH into the master node (this requires specifying a key pair when creating the cluster).
I've never really worked with what's in HADOOP_CLASSPATH, but if you define a bootstrap action to just copy your libraries into /home/hadoop/lib, that should solve the issue.
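For completeness, a bootstrap-action sketch along those lines (the bucket and jar names are placeholders); /home/hadoop/lib is already on the classpath shown in the controller log above:
#!/bin/bash
# Copy third-party jars from S3 into a directory that is already on the Hadoop classpath.
hadoop fs -copyToLocal s3://my-bucket/lib/javacv.jar /home/hadoop/lib/
hadoop fs -copyToLocal s3://my-bucket/lib/opencv.jar /home/hadoop/lib/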