I am passing jacocoagent.jar (version 0.8.1) as a Java agent in my server start-up script in order to record code coverage on the server. But the Premain-Class attribute is missing from the JAR, and as a result I get the following error:
Error occurred during initialization of VM
Failed to find Premain-Class manifest attribute in
/u01/jetty_home/jacoco/jacocoagent.jar
agent library failed to init: instrument.
Does anyone have thoughts on how to fix this?
Make sure that you use the proper JAR file.
lib/jacocoagent.jar in jacoco-0.8.1.zip, which is linked from the JaCoCo homepage, has the following checksums:
$ wget http://repo1.maven.org/maven2/org/jacoco/jacoco/0.8.1/jacoco-0.8.1.zip
$ unzip jacoco-0.8.1.zip
$ md5sum lib/jacocoagent.jar
2873d7006dc9672d84981792df2c5b7a lib/jacocoagent.jar
$ sha256sum lib/jacocoagent.jar
cd40d1c1aea4112adb82049df3f462b60380ce1bb00bdecb1cfdb862e34be8dd lib/jacocoagent.jar
The JaCoCo homepage also links to the JaCoCo documentation, which contains a "Maven Repository" page explaining that exactly the same artifact in the Maven Central Repository has groupId org.jacoco, artifactId org.jacoco.agent and, most importantly, classifier runtime:
The following JAR files are available:
Group ID | Artifact ID | Classifier | Description
-----------+------------------+------------+-------------
...
org.jacoco | org.jacoco.agent | | API to get a local copy of the agent
org.jacoco | org.jacoco.agent | runtime | Agent
...
so its filename is org.jacoco.agent-0.8.1-runtime.jar
$ wget http://repo1.maven.org/maven2/org/jacoco/org.jacoco.agent/0.8.1/org.jacoco.agent-0.8.1-runtime.jar
$ md5sum org.jacoco.agent-0.8.1-runtime.jar
2873d7006dc9672d84981792df2c5b7a org.jacoco.agent-0.8.1-runtime.jar
$ sha256sum org.jacoco.agent-0.8.1-runtime.jar
cd40d1c1aea4112adb82049df3f462b60380ce1bb00bdecb1cfdb862e34be8dd org.jacoco.agent-0.8.1-runtime.jar
and both have the Premain-Class attribute:
$ unzip lib/jacocoagent.jar
$ cat META-INF/MANIFEST.MF | grep Premain
Premain-Class: org.jacoco.agent.rt.internal_c13123e.PreMain
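Once you have the proper JAR, attaching it in a start-up script looks roughly like this (a sketch only; the destfile path and the way the server itself is launched are placeholders, while destfile and output are standard JaCoCo agent options):
java -javaagent:/u01/jetty_home/jacoco/jacocoagent.jar=destfile=/u01/jetty_home/jacoco/jacoco.exec,output=file \
     -jar start.jar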
I am trying to deploy a series of AWS step functions through a setup.sh file.
I have successfully tested the step functions in a test environment and there are no issues in the source code.
This is the deployment command:
./setup.sh <data dictionary command> <step function name>
The output looks like this:
*** Step Function Json Uploading to AWS ***
TENANT : <Tenant Name>
EX_AWS_REGION : eu-west-2
EX_AWS_ACCT_ALIAS : <Environment>
File Name : <Step Function File Path>
/path/step_functions
error: unknown command '.Account'
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
/directory_path/
In setup.sh, .Account is used as follows:
dummy=`aws sts get-caller-identity | jq .Account`
jq has been installed globally, and there are no issues in setup.sh itself either.
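For reference, aws sts get-caller-identity returns JSON along these lines, and jq .Account simply extracts the account ID (the values below are placeholders):
$ aws sts get-caller-identity
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/some-user"
}
$ aws sts get-caller-identity | jq .Account
"123456789012"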
It is a jq installation issue. Download and install jq using the following steps.
Open Git Bash with administrator privileges. (On a Linux-based system, run with sudo privileges.)
Run the following command: curl -L -o /usr/bin/jq.exe https://github.com/stedolan/jq/releases/latest/download/jq-win64.exe
Replace the link with an appropriate one for Linux-based systems.
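For a Linux-based system, a minimal sketch of the equivalent install is either the distribution package or a direct binary download (the exact release asset name varies between jq versions, so treat the URL below as an assumption):
$ sudo apt-get update && sudo apt-get install -y jq
# or, downloading the binary directly:
$ sudo curl -L -o /usr/local/bin/jq https://github.com/stedolan/jq/releases/latest/download/jq-linux64
$ sudo chmod +x /usr/local/bin/jq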
In the YAML file below we are running tests with this command: java org.testng.TestNG testng.xml.
Is it possible to run the tests with something like ./gradlew clean runTests testng.xml instead?
version: 0.1
# Phases are collection of commands that get executed on Device Farm.
phases:
# The install phase includes commands that install dependencies that your tests use.
# Default dependencies for testing frameworks supported on Device Farm are already installed.
install:
commands:
# This test execution environment uses Appium version 1.9.1 by default, however we enable you to change it using the Appium version manager (avm). An
# example "avm" command below changes the version to 1.14.2.
# For your convenience, we have preinstalled the following versions: 1.9.1, 1.10.1, 1.11.1, 1.12.1, 1.13.0, 1.14.1, 1.14.2, 1.15.1 or 1.16.0.
# To use one of these Appium versions, change the version number in the "avm" command below to your desired version:
- export APPIUM_VERSION=1.14.2
- avm $APPIUM_VERSION
- ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
# The pre-test phase includes commands that setup your test environment.
pre_test:
commands:
# Setup environment variables for java
- export CLASSPATH=$CLASSPATH:$DEVICEFARM_TESTNG_JAR
- export CLASSPATH=$CLASSPATH:$DEVICEFARM_TEST_PACKAGE_PATH/*
- export CLASSPATH=$CLASSPATH:$DEVICEFARM_TEST_PACKAGE_PATH/dependency-jars/*
# We recommend starting appium server process in the background using the command below.
# Appium server log will go to $DEVICEFARM_LOG_DIR directory.
# The environment variables below will be auto-populated during run time.
- echo "Start appium server"
- >-
appium --log-timestamp
--default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
\"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
\"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
>> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
- >-
start_appium_timeout=0;
while [ true ];
do
if [ $start_appium_timeout -gt 60 ];
then
echo "appium server never started in 60 seconds. Exiting";
exit 1;
fi;
grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
if [ $? -eq 0 ];
then
echo "Appium REST http interface listener started on 0.0.0.0:4723";
break;
else
echo "Waiting for appium server to start. Sleeping for 1 second";
sleep 1;
start_appium_timeout=$((start_appium_timeout+1));
fi;
done;
# The test phase includes commands that start your test suite execution.
test:
commands:
# Your test package is downloaded in $DEVICEFARM_TEST_PACKAGE_PATH so we first change directory to that path.
- echo "Navigate to test package directory"
- echo $DEVICEFARM_TEST_PACKAGE_PATH
- cd $DEVICEFARM_TEST_PACKAGE_PATH
# By default, the following command is used by Device Farm to run your Appium TestNG test.
# The goal is to run your tests jar file with all the dependency jars in the CLASSPATH.
# Alternatively, you may specify your customized command.
# Note: For most use cases, the default command works fine.
# Please refer "http://testng.org/doc/documentation-main.html#running-testng" for more options on running TestNG tests from the command line.
- echo "Unzipping TestNG tests jar"
- unzip tests.jar
- echo "Start Appium TestNG test"
- cd suites
- ls -l
- java org.testng.TestNG testng.xml
# The post-test phase includes commands that are run after your tests are executed.
post_test:
commands:
- ls -l
- zip -r allure.zip allure-results artifacts report test-output
- ls -l
- cp allure.zip $DEVICEFARM_LOG_DIR
- cd $DEVICEFARM_LOG_DIR
- ls -l
# The artifacts phase lets you specify the location where your tests logs, device logs will be stored.
# And also let you specify the location of your test logs and artifacts which you want to be collected by Device Farm.
# These logs and artifacts will be available through ListArtifacts API in Device Farm.
artifacts:
# By default, Device Farm will collect your artifacts from following directories
- $DEVICEFARM_LOG_DIR
Thank you for reaching out. Are you trying to replace "java org.testng.TestNG testng.xml" with "./gradlew clean runTests testng.xml", or are you expecting to run the Gradle command locally?
I found a solution:
You need to zip the whole project together with your build.gradle files.
Select the Appium Node config and upload your zip.
Use the YAML config from TestNG or from my question, but replace the command java org.testng.TestNG testng.xml with ./gradlew clean runTests (a task in your Gradle build) your_test_suite.xml.
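For illustration, the test phase from the YAML above might end up looking roughly like this after that replacement (this assumes the Gradle wrapper sits at the root of the uploaded zip and that runTests is the name of your Gradle task):
  test:
    commands:
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - echo "Run the TestNG suite via Gradle"
      - ./gradlew clean runTests your_test_suite.xml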
I'm trying to implement a pipeline that packages and copies Python code to S3 using GitLab CI.
Here is the job that is causing the problem:
package:
stage: package
image: python:3.8
script:
- apt-get update && apt-get install -y zip unzip jq
- pip3 install awscli
- aws s3 ls
- ./cicd/scripts/copy_zip_to_s3.sh
only:
refs:
- developer
I want to mention that in the before_script section of .gitlab-ci.yml, I've already exported the AWS credentials (AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, etc.) from GitLab environment variables.
I've checked my credentials many times and they are correct. I also want to mention that the same script works perfectly for another project under the same group in GitLab.
Here is the error:
$ aws s3 ls
An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=AKIAZXXXXXXXXXX\n/2020XX2/us-east-1/sts/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=ab53XX6eb72XXXXXX2152e4XXXX93b104XXXXXXX363b1da6f9XXXXX'
ERROR: Job failed: exit code 1
./cicd/scripts/copy_zip_to_s3.sh does the packaging and the copy; the same error occurs when executing it, which is why I've added the simple command aws s3 ls to show that even a plain 'ls' is not working.
Any solutions, please? Thank you all in advance.
This was because of an additional line (a stray newline) added to the AWS access key variable.
Thanks to #jordanm
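If you want to guard against that kind of stray newline in the pipeline itself, one option (a sketch, not part of the original fix) is to trim the values in before_script; the variable names are the standard ones mentioned in the question:
before_script:
  - export AWS_ACCESS_KEY_ID=$(echo -n "$AWS_ACCESS_KEY_ID" | tr -d '\r\n')
  - export AWS_SECRET_ACCESS_KEY=$(echo -n "$AWS_SECRET_ACCESS_KEY" | tr -d '\r\n')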
I had a similar issue when running a bash script under Cygwin on Windows. The fix was removing the \r\n from the end of the values I was putting into environment variables.
Here's my whole script, if anyone is interested. It assumes a new AWS role, sets those creds in environment variables, then opens a new bash shell which will respect those set variables.
#!/bin/bash
hash aws 2>/dev/null
if [ $? -ne 0 ]; then
echo >&2 "'aws' command line tool required, but not installed. Aborting.";
exit 1;
fi;
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
ROLEID=123918273981723
TARGETARN="arn:aws:iam::${ROLEID}:role/OrganizationAccountAccessRole"
COMMAND="aws sts assume-role --role-arn $TARGETARN --role-session-name damien_was_here"
RESULT=$($COMMAND)
# The calls to tr -d '\r\n' are the important part with regard to this question.
AccessKeyId=$(echo -n "$RESULT" | jq -r '.Credentials.AccessKeyId' | tr -d '\r\n')
SecretAccessKey=$(echo -n "$RESULT" | jq -r '.Credentials.SecretAccessKey' | tr -d '\r\n')
SessionToken=$(echo -n "$RESULT" | jq -r '.Credentials.SessionToken' | tr -d '\r\n')
export AWS_ACCESS_KEY_ID=$AccessKeyId
export AWS_SECRET_ACCESS_KEY=$SecretAccessKey
export AWS_SESSION_TOKEN=$SessionToken
echo Running a new bash shell with the environment variable set.
bash
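Usage is just running the script and then working in the bash shell it opens, where the exported credentials are inherited (the script filename here is hypothetical):
$ ./assume_role.sh
Running a new bash shell with the environment variable set.
$ aws sts get-caller-identity   # now runs with the assumed role's credentials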
I want to create a Docker image that runs a Java service with OpenJ9's class data sharing feature to improve startup performance.
I want to create the class cache while building the image, using a multi-stage Docker build.
I saw a few mentions of pre-warming a Docker image like this online:
https://github.com/barecode/adopt-openj9-spring-boot/blob/master/Dockerfile.openj9.warmed
However, I'm not able to recreate it. Here is my Dockerfile:
FROM adoptopenjdk/openjdk11-openj9:alpine as base
ADD libs/ /libs
ADD service.jar /service.jar
RUN mkdir /hi
WORKDIR /hi
RUN ls /
RUN java -Xshareclasses:name=mycache -Xshareclasses:cacheDir=/hi -Xshareclasses -jar /usr/share/app/service.jar &
RUN sleep 5
RUN ls -la /hi
FROM adoptopenjdk/openjdk11-openj9:alpine-jre
COPY --from=base libs/ /usr/share/app/libs
COPY --from=base service.jar /usr/share/app/service.jar
RUN /bin/sh -c 'ps aux | grep java | grep service | awk '{print $2}' | xargs kill -1'
#RUN java -Xshareclasses:listAllCaches
ENTRYPOINT ["java","-jar", "-Xshareclasses" , "-Xtune:virtualized", "-XX:+UseContainerSupport", "/usr/share/app/service.jar"]
My problem is that when I run
RUN java -Xshareclasses:name=mycache -Xshareclasses:cacheDir=/hi -Xshareclasses -jar /usr/share/app/service.jar &
and then expect the cache file to be saved in /hi, the file isn't there.
Any help will be appreciated.
Thanks.
OpenJ9 only reads the last -Xshareclasses option provided. This makes it easy to override previous options on the command line when developing or debugging, since in some environments it's hard to modify the existing command-line arguments.
Change the command to:
RUN java -Xshareclasses:name=mycache,cacheDir=/hi -jar /usr/share/app/service.jar &
and the cache will be created in the /hi directory.
For example:
# java -Xshareclasses:name=mycache,cacheDir=/hi -version
openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.4+11)
Eclipse OpenJ9 VM AdoptOpenJDK (build openj9-0.15.1, JRE 11 Linux amd64-64-Bit Compressed References 20190717_286 (JIT enabled, AOT enabled)
OpenJ9 - 0f66c6431
OMR - ec782f26
JCL - fa49279450 based on jdk-11.0.4+11)
# ls /hi
C290M11F1A64P_mycache_G37
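Putting that back into the question's multi-stage build, a minimal sketch might look like the following (paths and the mycache name come from the question; warming the cache inside a single RUN, the sleep duration, and copying /hi into the final stage are assumptions about the intended layout rather than anything prescribed above):
FROM adoptopenjdk/openjdk11-openj9:alpine as base
ADD libs/ /usr/share/app/libs
ADD service.jar /usr/share/app/service.jar
RUN mkdir /hi
# Warm the cache: one -Xshareclasses option with comma-separated sub-options.
# The service runs in the background, gets time to load classes, and is stopped
# before the layer is committed; adjust the sleep to however long your service needs.
RUN java -Xshareclasses:name=mycache,cacheDir=/hi -jar /usr/share/app/service.jar > /tmp/warmup.log 2>&1 & \
    sleep 30; kill $! || true; ls -la /hi

FROM adoptopenjdk/openjdk11-openj9:alpine-jre
# Note: the JDK and JRE images need a compatible OpenJ9 level for the cache to be reused.
COPY --from=base /usr/share/app /usr/share/app
COPY --from=base /hi /hi
ENTRYPOINT ["java", "-Xshareclasses:name=mycache,cacheDir=/hi", "-Xtune:virtualized", "-XX:+UseContainerSupport", "-jar", "/usr/share/app/service.jar"]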
When running chef zero via AWS userdata, the run always fails. However, if I ssh onto the machine and manually execute the same commands, it works as expected. This is the output that I get:
Chef: 11.12.8
[2014-06-11T12:40:34+00:00] INFO: Auto-discovered chef repository at /opt/chef-zero
[2014-06-11T12:40:34+00:00] INFO: Starting chef-zero on port 8889 with repository at repository at /opt/chef-zero
One version per cookbook
[2014-06-11T12:40:34+00:00] INFO: Forking chef instance to converge...
[2014-06-11T12:40:35+00:00] DEBUG: Fork successful. Waiting for new chef pid: 1530
[2014-06-11T12:40:35+00:00] DEBUG: Forked instance now converging
[2014-06-11T12:40:35+00:00] ERROR: undefined method `[]' for nil:NilClass
[2014-06-11T12:40:35+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The userdata that I set when launching the EC2 instance in AWS includes the following:
curl -L https://www.opscode.com/chef/install.sh | bash
mkdir /opt/chef-zero
cd /opt/chef-zero
wget http://myserver/chef-repo.tar.gz
tar zxf chef-repo.tar.gz
INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id`
cat <<EOF > /opt/chef-zero/solo.rb
ssl_verify_mode :verify_peer
node_name "$INSTANCE_ID"
EOF
/opt/chef/bin/chef-client -v >chef-zero.log 2>&1
/opt/chef/bin/chef-client -z -l debug -c solo.rb -o 'role[someRole]' -E BUILD >> chef-zero.log 2>&1
The AMI that I'm using is a custom one that was initially provisioned using knife + knife-ec2 (that bootstrapped chef 11.6.0 from an ubuntu 13.04 public ami). The omnibus installer from userdata (curl ... | bash) is upgrading chef to 11.12.8. The original knife run included chef-client::service in its run, and the host is initially configured for use with chef-client + chef-server (i.e. there's a "validation.pem" and "client.rb" in /etc/chef - not sure if that makes a difference).
I am able to log onto the machine and execute chef-client -z -c solo.rb -o 'role[someRole]' -E BUILD as soon as the machine comes up (after waiting for files to be retrieved and the user-data chef-client to fail) and the chef run executes normally.
I have no idea why the userdata chef-client run fails with the undefined method error. Any ideas what's causing it?
After some further investigation, and thanks to a bit of chatting with the #chef folks on freenode, the problem was narrowed down to the environment.
When executing the script via userdata, the "HOME" variable is not set, and shell.rb from the chef gem is littered with references to ENV["HOME"].
SSH:
# unset HOME
# chef-client -z -o 'role[test]'
ERROR: undefined method `[]' for nil:NilClass
# export HOME=/root
# chef-client -z -o 'role[test]'
Starting Chef Client, version ....
...
Chef Client finished, ...
If you need to execute chef-client via user data, you should manually export HOME before trying to execute chef.
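Applied to the userdata from the question, that just means exporting HOME before the chef-client calls (using /root here is an assumption about which user the userdata runs as):
export HOME=/root
/opt/chef/bin/chef-client -z -l debug -c solo.rb -o 'role[someRole]' -E BUILD >> chef-zero.log 2>&1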
A bug has been reported at https://tickets.opscode.com/browse/CHEF-5365
edit
Submitted a pull request which has since been merged into master. https://github.com/opscode/chef/pull/1494
This likely has nothing to do with chef-zero but indicates a problem in your recipe code (whatever's inside that chef-repo.tar.gz, or whatever is driven by role[someRole]). It indicates an attempt to access a sub-element of a hash like
node['foo']['bar']
when node['foo'] is nil (undefined).
Check the stacktrace that's generated by the chef client run to narrow it down.