As a test, I've created a task that extracts local .tar files which I previously downloaded manually from Artifactory. How can I reference the files from my Gradle script when they live on Artifactory? Should I just use the server path? All I've done with Gradle is basic stuff; I haven't worked with repositories.
I'd also like to perform a certain action based on whether the file has changed since I last ran the script, is that possible?
One way of doing this is to create a new configuration for your TAR file; in my example I gave it the name myTar. In the repositories closure you define the URL of your Artifactory repository, and in the dependencies closure you reference the TAR file as a dependency of that configuration. When you run Gradle, it downloads the file for you and puts it in the local dependency cache. You wrote that you already created a task that extracts the TAR file; the task below, named extractMyTar, looks up the downloaded TAR file through its configuration and untars it into a local directory. (For your second question, see the note after the script.)
configurations {
    myTar
}

repositories {
    maven {
        url 'http://my.artifactory/repo'
    }
}

dependencies {
    myTar 'your.org:artifact-name:1.0@tar'
}

task extractMyTar {
    doLast {
        File myTarFile = configurations.myTar.singleFile
        if (myTarFile.exists()) {
            ant.untar(src: myTarFile, dest: file('myDestDir'))
        }
    }
}
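For the second question, Gradle's incremental build support can skip the extraction when nothing has changed since the last run, provided the task declares its inputs and outputs. A minimal sketch of the same task with those declarations added (myDestDir is the destination directory used above):

task extractMyTar {
    // Declaring inputs/outputs lets Gradle mark the task UP-TO-DATE
    // and skip it when neither the TAR nor the extracted output changed.
    inputs.files configurations.myTar
    outputs.dir file('myDestDir')
    doLast {
        File myTarFile = configurations.myTar.singleFile
        if (myTarFile.exists()) {
            ant.untar(src: myTarFile, dest: file('myDestDir'))
        }
    }
}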
I want to deploy my application. I have a script that moves the deployed files to where they need to go.
But when that script runs in the BeforeInstall phase, it can't find the files.
So I added a pwd to the script, and the working directory is "deployment-root". I suppose I need to cd into the deployment folder underneath it, but its id is different every time.
Is there any way I can get that id in my appspec.yml file so that I can cd into the folder from my deploy scripts?
Thanks,
You don't have to do a manual copy. In appspec.yml, in the files section, you can specify which files get copied and where they go:
files:
  - source: Config/config.txt
    destination: /webapps/Config
  - source: source
    destination: /webapps/myApp
From the documentation: the files section provides information to CodeDeploy about which files from your application revision should be installed on the instance during the deployment's Install event.
More details on this page:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-files.html
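If you still need a hook script (e.g. in BeforeInstall) to locate the unpacked revision itself, the CodeDeploy agent exports the deployment's ids as environment variables to lifecycle hooks, so you can build the path instead of guessing the random id. A sketch, assuming the default agent install location:

#!/bin/bash
# DEPLOYMENT_GROUP_ID and DEPLOYMENT_ID are set by the CodeDeploy agent
# when it runs lifecycle event hooks.
ARCHIVE_DIR="/opt/codedeploy-agent/deployment-root/$DEPLOYMENT_GROUP_ID/$DEPLOYMENT_ID/deployment-archive"
cd "$ARCHIVE_DIR"
# ...move files from here as needed...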
When adding a custom jar step for an EMR cluster - how do you set the classpath to a dependent jar (required library)?
Let's say I have my jar file, myjar.jar, but I need an external jar to run it: dependency.jar. Where do you configure this when creating the cluster? I am not using the command line; I'm using the Advanced Options interface.
Thought I would post this after spending a number of hours poking around and reading outdated documentation.
The 2.x/3.x documentation that talks about setting HADOOP_CLASSPATH does not work; the docs state it does not apply to 4.x and above anyway. Supposedly you need to specify a --libjars option somewhere, but specifying it in the arguments list does not work either.
For example:
Step Name: MyCustomStep
Jar Location: s3://somebucket/myjar.jar
Arguments:
myclassname
option1
option2
--libjars dependentlib.jar
Copy your required jars to /usr/lib/hadoop-mapreduce/ in a bootstrap action. No other changes are necessary. Additional info below:
This command below works for me to copy a specific JDBC driver version:
sudo aws s3 cp s3://<your bucket>/mysql-connector-java-5.1.23-bin.jar /usr/lib/hadoop-mapreduce/
I have other dependencies, so I have a bootstrap action for each jar I need copied; of course, you could put all the copies in a single bash script. Below is the .NET code I use to create a bootstrap action that runs the copy script. I am using .NET SDK version 3.3.* and launching the job with release label emr-5.2.0.
public static BootstrapActionConfig CopyEmrJarDependency(string jarName)
{
    return new BootstrapActionConfig()
    {
        Name = $"Copy jars for EMR dependency: {jarName}",
        ScriptBootstrapAction = new ScriptBootstrapActionConfig()
        {
            Path = $"s3n://{Config.AwsS3CodeBucketName}/EMR/Scripts/copy-thirdPartyJar.sh",
            Args = new List<string>()
            {
                $"s3://{Config.AwsS3CodeBucketName}/EMR/Java/lib/{jarName}",
                "/usr/lib/hadoop-mapreduce/"
            }
        }
    };
}
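To attach that bootstrap action when launching the cluster, you add it to the job flow request, roughly like this (a sketch against the same AWSSDK.ElasticMapReduce types; the other required settings are elided):

// using Amazon.ElasticMapReduce.Model;
var request = new RunJobFlowRequest
{
    Name = "MyCluster",
    ReleaseLabel = "emr-5.2.0",
    BootstrapActions = new List<BootstrapActionConfig>
    {
        CopyEmrJarDependency("mysql-connector-java-5.1.23-bin.jar")
    },
    // ...instances, roles, and your custom jar step as usual...
};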
Note that the ScriptBootstrapActionConfig Path property uses the protocol "s3n://", but the protocol for the aws cp command should be "s3://".
My script copy-thirdPartyJar.sh contains the following:
#!/bin/bash
# $1 = S3 location of the jar
# $2 = attempted magic directory for the java classpath
sudo aws s3 cp "$1" "$2"
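For anyone creating the cluster from the AWS CLI instead of the console or the SDK, wiring the same script up as a bootstrap action looks roughly like this (bucket and jar names are the placeholders from above; the remaining create-cluster options are omitted):

aws emr create-cluster \
  --release-label emr-5.2.0 \
  --bootstrap-actions Path="s3://<your bucket>/EMR/Scripts/copy-thirdPartyJar.sh",Args=["s3://<your bucket>/EMR/Java/lib/mysql-connector-java-5.1.23-bin.jar","/usr/lib/hadoop-mapreduce/"] \
  ...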
I am trying to upload files to my Bluemix app and I am having problems using and understanding the file system. After I have successfully uploaded files I want to reference their paths in my configuration files.
Specifically, I want to upload a jar file to the server and later use it as a javaagent.
I have tried approaching this issue from several directions.
I see that I can create a folder in the liberty_buildpack and place the files inside it; I can later access it during the compilation-release phases from the tmp folder:
/tmp/buildpacks/ibm-websphere-liberty-buildpack/lib/liberty_buildpack/my_folder
Also, in the file system I see when building and deploying the app, I can copy only to the folder located at:
/app
So I copied the JAR file to the app folder and set it as a javaagent using two methods:
Manually set the environment variable JAVA_OPTS with the javaagent pointing to /app/myjar.jar, using cf set-env
Deploy a war file of the app using cf push from a wlp server, and set the javaagent inside the server.xml file via the genericJvmArguments attribute
Neither method worked; either the deploy phase of the application failed or my features simply didn't work.
So I tried searching the application file system using cf files and came up with the app folder, but strangely it didn't have the same files as the folder I deploy, and I couldn't find any connection between the deployed folder and the buildpack.
Can someone explain how this should be done correctly? Namely, after uploading the file, how should I point to it from the environment variable/server file?
I mean, should it be /app/something, or maybe another path?
I have also seen the use of relative paths like #droplet.sandbox; maybe that's the way to address those files? And how should I access those folders with cf files?
Thanks.
EDIT:
As I was instructed in the comments, I added the jar file to the system. The problem is that when I add the javaagent flag to the environment variable JAVA_OPTS, the deploy stage fails with a timeout error:
payload: {... "reason"=>"CRASHED", "exit_status"=>32, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>
1433864527}
The way I am assigning the javaagent is as follows:
cf set-env myApp JAVA_OPTS "path/agent.jar"
I have tried several locations:
1. I have found that if I add the jar files to my WebContent folder I can find it in: /app/wlp/usr/servers/defaultServer/apps/myapp.war/resources/
2. I have copied the jar file from the /tmp location in the compilation phase to /home/vcap/app/agent.jar
3. I have located the jar file in /app/.java/jre/lib
None of those three paths worked.
I found out that the system behaves the same if I give a wrong path, so it may be a path problem.
Any ideas?
Try this:
Put your agent jars in a folder called ".profile.d" inside your WAR package;
cf se your-app JAVA_OPTS "-javaagent:/home/vcap/app/.profile.d/your.jar";
Push the war to Bluemix.
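Keep in mind that changes to environment variables only take effect when the app is restaged or re-pushed, so for an app that is already running the sequence would be something like this (app and jar names are placeholders):

cf push your-app -p myapp.war
cf set-env your-app JAVA_OPTS "-javaagent:/home/vcap/app/.profile.d/your.jar"
cf restage your-app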
Not sure if this is exactly the right answer, but I am using additional jar files in my Liberty application, so maybe this will help.
I push up a myapp.war file to bluemix. Within the war file, inside the WEB-INF folder, I have a lib folder that contains a number of jar files. The classes in those jar files are then used within the java code of my application.
myapp.war/WEB-INF/lib/myPlugin.jar
You could try doing something like that with the jar file(s) you need, building them into the war file.
Other than that, you could try the section Overlaying the JRE from the Bluemix Liberty documentation to add jars to the JRE.
I'm working in a Python 2.7 Elastic Beanstalk environment.
I'm trying to use the sources key in an .ebextensions .config file to copy a tgz archive to a directory in my application root, /opt/python/current/app/utility. I'm doing this because the files in this folder are too big to include in my GitHub repository.
However, it looks like the sources key is executed before the ondeck symbolic link to the current bundle directory is created. I can't reference /opt/python/ondeck/app in the sources key, because doing so creates that folder, and Beanstalk then errors out when it tries to create the ondeck symbolic link.
Here are copies of the .ebextensions/utility.config files I have tried:
sources:
  /opt/python/ondeck/app/utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above successfully copies to /opt/python/ondeck/app/utility, but then Beanstalk errors out because it can't create the symbolic link from /opt/python/bundle/x --> /opt/python/ondeck.
sources:
  utility: http://[bucket].s3.amazonaws.com/utility.tgz
The above copies the folder to /utility, right off the root in parallel with /etc.
You can use container_commands instead of sources, as container_commands run after the application has been set up.
With container_commands you won't be able to use sources to fetch and extract your files automatically, so you will have to use commands such as wget or curl to get your files and untar them afterwards.
Example: curl http://[bucket].s3.amazonaws.com/utility.tgz | tar xz
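In an .ebextensions config, that could look something like the sketch below (the command name and the relative utility path are illustrative; container_commands run from the staging directory of the new application version, so a relative path ends up inside the deployed app):

container_commands:
  01_fetch_utility:
    command: "mkdir -p utility && curl -s http://[bucket].s3.amazonaws.com/utility.tgz | tar xz -C utility"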
In my environment (PHP) there is no transient ondeck directory, and the current directory where my app is eventually deployed is recreated after commands are run.
Therefore, I needed to run a script post-deploy. Searching revealed that I can put a script in /opt/elasticbeanstalk/hooks/appdeploy/post/ and it will run after the deploy.
So I download and extract my files from S3 to a temporary directory in the simplest way, using sources. Then I create a file that copies my files over after the deploy, and put it in the post-deploy hook directory:
sources:
  /some/existing/directory: https://s3-us-west-2.amazonaws.com/my-bucket/vendor.zip
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_move_my_files_on_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      mv /some/existing/directory /var/app/current/where/the/files/belong
I want to deploy a war from Jenkins to the cloud.
Could you please let me know how to deploy a war file from Jenkins on my local machine to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this is just adding a post-build task to run the standard command-line deployment scripts. (As an aside, the error you posted suggests the plugin was given a full URL as the bucket name; the S3 publisher expects just the bucket name, with the region configured separately.)
From the referenced page, assuming you have the post-build task plugin in Jenkins and the AWS command-line tools installed:
STEP 1
In a Jenkins job configuration screen, add a “Post-build action” and choose the plugin “Publish artifacts to S3 bucket”, then specify the Source (in our case we use Maven, so the source is target/*.war) and the Destination (your S3 bucket name).
STEP 2
Then, add a “Post-build task” (if you don’t have it, it's a plugin in the Maven repo) to the same “Post-build Actions” section and drag it below “Publish artifacts to S3 bucket”. This is important: we want to make sure the war file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”.
In the script text area, put in the path of the script to automate the deployment (described in step 3 below). For us, we put something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
The $VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables will be used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>
# Get the current time and append it to the name of the .war file being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"
# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"
# Create the application version in AEB and link it with the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"
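For completeness, with the build parameters from step 2, Jenkins ends up invoking the script roughly like this (the values are illustrative):

<path_to_script_file>/deploy.sh "20130318.1" "Deployed by Jenkins build 42"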