We have our Jenkins controller on GCP and the Jenkins nodes on AWS (Auto Scaling Groups). Initially, we had the controller on AWS (Ubuntu 18) with OpenJDK 11. We used the ec2-fleet Jenkins plugin to spin up and destroy instances based on usage.
The setup was working fine, but we wanted to have the controller (Ubuntu 18) in GCP (our existing Jenkins system is in GCP). I used the same ec2-fleet plugin. The Jenkins controller in GCP was able to spin up instances, but I kept getting this error: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332.b09-1.amzn2.0.2.x86_64/jre/lib/currency.data (No such file or directory)
The nodes in AWS use Amazon Linux AMIs. I thought I had to downgrade the JVM on the controller to OpenJDK 8, so I downgraded it from OpenJDK 11 to OpenJDK 8, but I still kept getting the same error. We then checked the OpenJDK version on the AWS controller and it had OpenJDK 11. The setup works fine with the controller in AWS but not with the controller in GCP.
java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332.b09-1.amzn2.0.2.x86_64/jre/lib/currency.data (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at java.util.Currency$1.run(Currency.java:221)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to i-09b4990d879f5e7e4
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1784)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
at hudson.remoting.Channel.call(Channel.java:1000)
at hudson.FilePath.act(FilePath.java:1194)
at hudson.FilePath.act(FilePath.java:1183)
at org.jenkinsci.plugins.gitclient.Git.getClient(Git.java:140)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:916)
at hudson.plugins.git.GitSCM.createClient(GitSCM.java:847)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1297)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:129)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:97)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:84)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
Upgrading both the controller and node to JDK11 solved the issue.
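For reference, the fix amounts to checking the JDK on both ends and installing a matching JDK 11 on the agents. A minimal sketch, assuming Ubuntu 18 on the controller and Amazon Linux 2 on the agent AMI:

# On the GCP controller (Ubuntu 18): check the version and install JDK 11 if needed
java -version
sudo apt-get update && sudo apt-get install -y openjdk-11-jdk

# On the AWS agent AMI (Amazon Linux 2): install JDK 11 and make it the default
sudo amazon-linux-extras install -y java-openjdk11
sudo alternatives --config java   # select the java-11 entry
java -version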
Related
I've deployed my Spring Boot application to Elastic Beanstalk with Corretto 11 running on the 64bit Amazon Linux 2/3.0.1 platform.
When I try to add a new environment variable from the AWS Console (Configuration -> Software) and hit Apply, the update fails and rolls back to the previous configuration.
This is what I get from the AWS Console on my environment dashboard
Here are some of the logs that might be useful
The interesting part is that when I create a fresh new environment, upload my .jar file, and add the environment variables at the creation of the environment, it works (meaning the environment variables are set correctly). The problem occurs when I try to update my environment variables while the environment already exists. Am I missing something?
I tried running $ eb setenv after the $ eb deploy from my CircleCI pipeline, but I still get the same error.
I've been digging into this, and now I know why it fails.
The reason is that when you add an environment variable to your EB environment, the EB engine downloads the last application version, unzips it, and replaces the current application with it.
This means that no deployment hooks or .ebextensions scripts are executed. Therefore, any application setup you do during deployment is not re-applied, which leads to the failure.
This is based on my own observations using Python 3.7 running on the 64bit Amazon Linux 2/3.0.3 platform and a single-instance EB environment.
I found a workaround: if you set your deployment policy to Immutable, the problem goes away, since an immutable deployment creates a brand new EC2 instance for you. Not the best solution if you have quota limitations, but it works.
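If you want to apply the same workaround without clicking through the console, the deployment policy can also be switched from the CLI. A minimal sketch, assuming an environment named my-env (substitute your own name):

aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Immutable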
I'm deploying a Docker image from GitHub to AWS Elastic Beanstalk using Travis. That part goes OK: the actual deployment exits with 0 and there is a .zip file in the S3 bucket.
The issue is that, since this is my first time using AWS, I created the app using the Sample Application (the code is deployed from GitHub), and after the deployment I get the health status as Degraded (red exclamation sign) with this message:
ERROR
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
If I go to Causes I find this:
Application deployment failed at 2020-05-01T16:01:58Z with exit status 1 and error: Engine execution has encountered an error.
Incorrect application version "travis-e55e05342a8cc16f3f28f8e184735667a9531ffa-1588311901" (deployment 4). Expected version "Sample Application" (deployment 1).
I even deleted the sample application and re-deployed the one that was uploaded, and still got that particular error. As you can see in the last message, I've deployed this three times already, getting the same result.
Finally, I downloaded the zip file from the S3 bucket and found that it basically contains the src and public folders along with all the files from the repo root, such as package.json, .gitignore, all the Docker files, etc.
EDIT
I created two separate repos in GitHub to test this.
The first repo is a static page in a Docker container, quite simple. I create an environment in EB and start everything with the sample app. Then I push the changes to GitHub, Travis does its thing and deploys the app to AWS. This works fine and the app's environment is updated with no errors. This is the repo:
https://github.com/rhernandog/docler-static-page-aws
The second repo is a simple React app. Same procedure: create the environment in EB with the sample app, push the code to GitHub, Travis does its thing and deploys to AWS. This fails and I keep getting the same error:
Environment health has transitioned from Info to Degraded. Command failed on all
instances. Incorrect application version found on all instances. Expected version
"Sample Application" (deployment 1). Application update failed 1 second ago and
took 2 minutes.
This is the repo for the react app:
https://github.com/rhernandog/react-docker-awseb
In terms of Docker, everything works fine on my local machine.
EDIT 2
Based on @stefansundin's suggestion, I re-deployed the app to EB and checked the logs. I ended up looking at the full logs for more information and found this:
/var/log/cfn-hup.log
2020-05-14 17:07:42,605 [WARNING] Action for aws-eb-command-handler exited with 1, returning FAILURE
The only place where I found an error was in the engine log file:
/var/log/eb-engine.log
2020/05/14 17:07:42.514601 [INFO] Executing instruction: Docker Specific Build Application
2020/05/14 17:07:42.514605 [INFO] start build docker app
2020/05/14 17:07:42.514615 [INFO] fetch image name
2020/05/14 17:07:42.514639 [INFO] authenticate with ECR if the image is in an ECR repo
2020/05/14 17:07:42.514644 [INFO] pull docker image if update is not false in dockerrun.aws.json
2020/05/14 17:07:42.514657 [INFO] Running command /bin/sh -c docker pull node:12-alpine AS builder
2020/05/14 17:07:42.558923 [ERROR] "docker pull" requires exactly 1 argument.
So basically it is complaining about this line in the Dockerfile: FROM node:12-alpine AS builder. You can see the whole file in the repo: https://github.com/rhernandog/react-docker-awseb/blob/master/Dockerfile
The point is: why doesn't this happen on my local machine? And how can I actually get the files from the build stage and copy them to the nginx folder?
That is actually the only error I found in the log files.
I solved the issue here:
AWS Elastic Beanstalk Docker Does not support Multi-Stage Build
It is a stage-naming problem with the multi-stage Dockerfile: as the log above shows, the EB engine parses the FROM line and runs docker pull node:12-alpine AS builder, which is not a valid docker pull command. Just use an unnamed stage and refer to it by index instead of by name, as in the sketch below.
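A rough sketch of what that looks like for a React build, written as a shell heredoc so it can be pasted as-is (the paths and the nginx image are assumptions; adjust them to your own build):

# Sketch only: same multi-stage build, but the first stage is left unnamed
# and referenced by index (--from=0) instead of "AS builder".
cat > Dockerfile <<'EOF'
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=0 /app/build /usr/share/nginx/html
EOF
docker build -t react-app .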
I also got a similar error in my node app:
Incorrect application version "travis-e55e05342a8cc16f3f28f8e184735667a9531ffa-1588311901" (deployment 4). Expected version "Sample Application" (deployment 1)
It turned out to be an issue with my build and deployment scripts. Once those were corrected (debugged in Jenkins), the application deployed successfully to Beanstalk with no errors.
Turns out the issue was not with Beanstalk or app version but with the build mechanism. Something to look into when nothing else works :)
I had the same issue with a Java app in a Docker container.
I tried all the recommendations from this topic and the links from this topic, and nothing helped.
In the end, the following action helped:
Enable the enhanced health panel (a CLI sketch for this follows below): https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-enable.html#health-enhanced-enable-console
Go to the enhanced health panel of the desired environment
Select the instance that crashed due to this "version" issue and click reboot
Additionally:
In one of the cases, I had to delete all previous application versions (section on the left panel), push a new one, and only after that apply the above recommendations.
Also make sure you have sufficient permissions to deploy (CodePipeline/deployment).
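If you prefer the CLI for the enhanced-health step, it can also be enabled with an option setting. A sketch, assuming an environment named my-env:

aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:healthreporting:system,OptionName=SystemType,Value=enhanced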
The AWS docs say:
To solve this issue, start another deployment. You can redeploy a previous version that you know works, or configure your environment to ignore health checks during deployment and redeploy the new version to force the deployment to complete.
You can also identify and terminate the instances that are running the wrong application version. Elastic Beanstalk will launch instances with the correct version to replace any instances that you terminate. Use the EB CLI health command to identify instances that are running the wrong application version.
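In practice, the EB CLI makes the wrong-version instances easy to spot, and terminating them lets Beanstalk replace them with the expected version. A sketch, assuming the EB CLI is set up for your environment (the environment name and instance ID are placeholders):

# Show per-instance health, including the deployment each instance is running
eb health my-env --refresh

# Terminate an instance reporting the wrong application version;
# Elastic Beanstalk launches a replacement running the expected version
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0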
Can you try deleting the instances that run your application and starting with a fresh install?
Also, you can use CodePipeline to deploy your code to Elastic Beanstalk: use your S3 bucket for the source stage, skip the build stage since your code is already built on Travis, and use the deploy stage to install the new app to Elastic Beanstalk. There might be some misconfiguration while installing the new app to your environment.
I suggest you terminate your instances and start new ones. Sorry if I got your question wrong.
I haven't used Docker on Elastic Beanstalk. When my Ruby deployments on Elastic Beanstalk fail, I usually find the problem by requesting the last 100 lines of the logs. If you navigate to "Logs" -> "Request Logs" -> "Last 100 Lines", that may help you.
If that fails, I SSH into the instance and look at the logs in /var/log. docker ps and docker logs may also help you.
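Roughly, the steps look like this (the environment name and container ID are placeholders):

eb ssh my-env                      # SSH into the instance
less /var/log/eb-engine.log        # platform engine log on Amazon Linux 2 platforms
sudo docker ps -a                  # list containers, including ones that have exited
sudo docker logs <container-id>    # see why the app container stopped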
When creating a new web server environment, select the platform branch "Docker running on 64bit Amazon Linux" and it will work.
I am trying to run the WSO2 gateway on my local machine using the new 3.0.0-m6 version downloaded from their website.
I have run everything as described in the quickstart guide, and I get the following error on startup.
Could not load Logmanager "org.ballerinalang.launcher.BLogManager"
java.lang.ClassNotFoundException: org.ballerinalang.launcher.BLogManager
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.util.logging.LogManager$1.run(LogManager.java:195)
at java.util.logging.LogManager$1.run(LogManager.java:181)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.<clinit>(LogManager.java:181)
at java.util.logging.Logger.demandLogger(Logger.java:448)
at java.util.logging.Logger.getLogger(Logger.java:502)
at com.sun.jmx.remote.util.ClassLogger.<init>(ClassLogger.java:55)
at sun.management.jmxremote.ConnectorBootstrap.<clinit>(ConnectorBootstrap.java:846)
at sun.management.Agent.startLocalManagementAgent(Agent.java:138)
at sun.management.Agent.startAgent(Agent.java:260)
at sun.management.Agent.startAgent(Agent.java:447)
ballerina: unknown command 'start'
Run 'ballerina help' for usage.
ActiveMQ is running
WSO2 Server is running
WSO2 Identity manager is running
WSO2 API Manager is running
I am starting the gateway from the root folder as explained in the startup guide.
Are you running this on Windows? Version 3.0.0-m6 is based on Ballerina v0.89, and in this version of Ballerina there's a bug in the bin/ballerina.bat file. As you can see, it's looking for a class named org.ballerinalang.launcher.BLogManager and fails. This class was moved to another package and its fully qualified name is now org.ballerinalang.logging.BLogManager. In the ballerina.bat script, change the property (towards the end of the file) -Djava.util.logging.manager="org.ballerinalang.launcher.BLogManager" to -Djava.util.logging.manager="org.ballerinalang.logging.BLogManager" and it should solve your problem.
I am trying to get my RoR app up and running on Elastic Beanstalk and am struggling to get the rgeo gem working. The error I am getting on the web server is:
I, [2015-09-28T11:26:54.982049 #21789] INFO -- : Completed 500 Internal Server Error in 5ms (ActiveRecord: 2.6ms)
F, [2015-09-28T11:26:54.983523 #21789] FATAL -- :
NoMethodError (undefined method `point' for nil:NilClass):
lib/app/weather_service.rb:61:in `block in get_location'
....
That error happens when the code accesses a model attribute that is backed by a POINT data type in the database. The error is typically due to a missing dependency, namely geos as described in this thread. So I connected to the underlying EC2 instance, installed geos and re-installed the rgeo gem. That resolved the issue in rails console:
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd /var/app/current
[ec2-user@ip-xxx-xxx-xxx-xxx current]$ rails c
Loading production environment (Rails 4.2.4)
irb(main):001:0> RGeo::Geos.supported?
=> true
That did not, however, resolve the error in the web server. I'm pretty sure I don't clearly understand the Elastic Beanstalk environment, and maybe making changes directly to the underlying EC2 instance doesn't make dependencies available to the application instance. I do understand that I would need to add the dependencies to either a custom AMI or .ebextensions for future deployments, but I wanted to make sure the dependencies work before going through that process. Any guidance would be appreciated.
In case anyone else runs into this same issue, I did find a solution. These are the steps I followed:
Created an EC2 instance using AMI ami-bddbc48d
Installed the PostGIS dependencies using this helpful script https://gist.github.com/whyvez/8d19096712ea44ba66b0
Created a custom AMI from that instance
Updated my Elastic Beanstalk environment configuration to use the new custom AMI
Voila! Problem solved
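For anyone who would rather script the dependency install than bake a custom AMI, the general idea looks roughly like this (a sketch only: it builds GEOS from source on Amazon Linux, the version number is a placeholder, and the rgeo reinstall assumes the app lives in /var/app/current):

# Build and install GEOS from source (use a current release number)
sudo yum install -y gcc gcc-c++ make wget
wget https://download.osgeo.org/geos/geos-3.5.0.tar.bz2
tar xjf geos-3.5.0.tar.bz2
cd geos-3.5.0
./configure && make && sudo make install
sudo ldconfig

# Reinstall rgeo so its native extension is rebuilt against GEOS
cd /var/app/current
gem uninstall rgeo
bundle install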
Is there any way to run a few clustered Immutant 2 based apps without deploying to WildFly? I would like to test a distributed cache with two REPLs open, but I see no option in the Immutant docs to have these two sessions in one cluster.
It looks like for Immutant 1.x there was a --clustered option for lein.
For Immutant 2, clustering is only available when running inside of WildFly. However, you can still get a repl inside WildFly - just create a "dev" war with the lein-immutant plugin, and it will start a repl for you when deployed to WildFly. You create a dev war with:
lein immutant war --dev
(assuming you are using [lein-immutant "2.0.0"]). See the WildFly guide for instructions on starting a WildFly instance in clustered mode.
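For a local two-node cluster, one common setup is to run two WildFly instances in HA mode on the same machine with a port offset and deploy the dev war to each. A rough sketch, assuming WildFly is installed at $WILDFLY_HOME and the project produces target/myapp-dev.war (both names are placeholders):

# Build the dev war (it starts a REPL when deployed)
lein immutant war --dev

# Node 1: HA profile, default ports
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1

# Node 2: a second copy of WildFly (or a separate server base dir),
# shifted by 100 ports so both can run on the same machine
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 \
  -Djboss.socket.binding.port-offset=100

# Deploy the dev war to each node's deployments directory
cp target/myapp-dev.war $WILDFLY_HOME/standalone/deployments/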