Deployment of process archive ‘null’: Cannot deploy process archive ‘null’ to default process: no such process engine exists: processEngine is null - camunda

I am getting the error below when I deploy my WAR file to the Tomcat server, and the deployment fails. Please advise.
localhost.log
19-Dec-2022 11:29:41.718 INFO [Catalina-utility-2] org.apache.catalina.core.ApplicationContext.log 2 Spring WebApplicationInitializers detected on classpath
19-Dec-2022 11:29:42.073 SEVERE [Catalina-utility-2] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.camunda.bpm.platform.example.migration.ExampleProcessApplication]
org.camunda.bpm.engine.ProcessEngineException: ENGINE-08043 Exception while performing ‘Deployment of Process Application camunda-example-migrate-on-deployment-1.0-SNAPSHOT’ => 'Deployment of process archive ‘null’: Cannot deploy process archive ‘null’ to process engine ‘default’ no such process engine exists: processEngine is null
I have a processes.xml file in the WAR with the process archive configuration.
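For reference, a minimal processes.xml that deploys a process archive to the default engine usually looks like the sketch below; the archive name and property values here are illustrative, not the actual file in question:

<?xml version="1.0" encoding="UTF-8"?>
<process-application
    xmlns="http://www.camunda.org/schema/1.0/ProcessApplication">

  <process-archive name="migration-example">
    <!-- the engine named here must be registered on the server
         ("default" on the prepackaged Camunda Tomcat distribution) -->
    <process-engine>default</process-engine>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

The stack trace says the archive resolves to an engine that does not exist on that Tomcat installation (processEngine is null), so the engine name referenced by the archive and the engines actually registered with the BPM platform are the things to compare.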

Related

Unable to run selenium chrome as ECS Jenkins slave : exec: "-url": executable file not found in $PATH

I am using the ECS plugin and the EC2 plugin for Jenkins.
I have set up a task definition that is mapped to use the latest Selenium Chrome standalone image.
Jenkins is able to start spinning up the slave, but the ECS slave task never reaches the running state. It goes to the stopped state with the error below:
container_linux.go:380: starting container process caused: exec: "-url": executable file not found in $PATH
If anyone knows why this is happening, please help.
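For context, the "-url" in that error is the first of the inbound-agent connection arguments (-url <jenkins-url> <secret> <node-name>) that the Jenkins ECS plugin typically passes to the agent container as its command when it launches the task; if the image does not provide an entrypoint that consumes those arguments, Docker tries to execute "-url" itself, which is exactly the failure above. A task definition along these lines is assumed (family, names, image tag, and memory are illustrative):

{
  "family": "jenkins-ecs-agent",
  "containerDefinitions": [
    {
      "name": "selenium-chrome-agent",
      "image": "selenium/standalone-chrome:latest",
      "memory": 2048,
      "essential": true
    }
  ]
}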

Jetty upgrade 9.3.27 to 9.4.43 - build success but application deployment failed "Shared scheduler not started" error

My application currently runs on embedded Jetty 9.3.27 and we are trying to upgrade to 9.4.43. The deployment step fails with a "Shared scheduler not started" error.
Build tool: Gradle
Using the default session ID manager:
DefaultSessionIdManager defaultSessionIdManager = new DefaultSessionIdManager(server);
HouseKeeper houseKeeper = new HouseKeeper();
houseKeeper.setIntervalSec(30);
defaultSessionIdManager.setSessionHouseKeeper(houseKeeper);
server.setSessionIdManager(defaultSessionIdManager);
Default SessionHandler:
WebAppContext webAppContext = new WebAppContext(ServletContextHandler.SESSIONS);
SessionHandler sessions = webAppContext.getSessionHandler();
SessionCache cache = new DefaultSessionCache(sessions);
cache.setSessionDataStore(new NullSessionDataStore());
sessions.setSessionCache(cache);
The build is successful, but the deployment step (./gradlew clean run) fails with the following exception.
Exception: Exception in thread "main" java.lang.IllegalStateException:
Shared scheduler not started
at org.eclipse.jetty.server.session.HouseKeeper.startScavenging(HouseKeeper.java:124)
at org.eclipse.jetty.server.session.HouseKeeper.setIntervalSec(HouseKeeper.java:206)
at org.eclipse.jetty.server.session.HouseKeeper.doStart(HouseKeeper.java:93)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.server.session.DefaultSessionIdManager.doStart(DefaultSessionIdManager.java:346)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.server.Server.start(Server.java:423)
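For experimentation, here is the wiring from the two snippets above combined into one self-contained sketch (class name, port, and resource base are illustrative). As the stack trace suggests, in 9.4 the HouseKeeper will use a Scheduler bean registered on the Server (the "shared scheduler" in the message) and expects it to already be started when scavenging begins.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.session.DefaultSessionCache;
import org.eclipse.jetty.server.session.DefaultSessionIdManager;
import org.eclipse.jetty.server.session.HouseKeeper;
import org.eclipse.jetty.server.session.NullSessionDataStore;
import org.eclipse.jetty.server.session.SessionCache;
import org.eclipse.jetty.server.session.SessionHandler;
import org.eclipse.jetty.webapp.WebAppContext;

public class SessionSetupSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // Server-wide session id manager plus an explicit HouseKeeper, as in the
        // snippet above; the HouseKeeper needs a started Scheduler for scavenging
        // by the time the id manager starts.
        DefaultSessionIdManager idManager = new DefaultSessionIdManager(server);
        HouseKeeper houseKeeper = new HouseKeeper();
        houseKeeper.setIntervalSec(30);
        idManager.setSessionHouseKeeper(houseKeeper);
        server.setSessionIdManager(idManager);

        // Context with the default SessionHandler, DefaultSessionCache and
        // NullSessionDataStore (sessions are kept in memory only).
        WebAppContext webAppContext = new WebAppContext(); // sessions enabled by default
        webAppContext.setResourceBase(".");                // illustrative webapp root
        SessionHandler sessions = webAppContext.getSessionHandler();
        SessionCache cache = new DefaultSessionCache(sessions);
        cache.setSessionDataStore(new NullSessionDataStore());
        sessions.setSessionCache(cache);

        server.setHandler(webAppContext);
        server.start();
        server.join();
    }
}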

Deployment of dotnet core application fails in Elastic Beanstalk

I have created an Elastic Beanstalk environment using a CloudFormation template and am deploying a .NET Core application to it. The deployment fails with the issues below.
Inside my environment in Beanstalk I'm seeing this error:
Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
[Instance: i-0d2a1ee08dc2813ff ConfigSet: Infra-EmbeddedPreBuild, Hook-PostInit, Hook-PreAppDeploy, Infra-EmbeddedPostBuild, Hook-EnactAppDeploy, Hook-PostAppDeploy, Infra-WriteVersionOnStartup] Command failed on instance. Return code: 1 Output: null.
The health status shows Severe, and when I check the causes, I see the following error:
Application deployment failed at 2021-08-19T05:36:34Z with exit status 1 and error: .
Process default has been unhealthy for 19 minutes (Target.FailedHealthChecks).
In Cloudwatch log groups, the deployment log shows the below deployment failure:
AWSBeanstalkCfnDeploy.DeploymentUtils - Unexpected Exception: System.Exception: Exception during deployment. ---> Microsoft.Web.Deployment.DeploymentDetailedException: Object of type 'package' and path 'C:\cfn\ebdata\source_bundle_final.zip' cannot be created. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_EXCEPTION_WHILE_CREATING_OBJECT. ---> Microsoft.Web.Deployment.DeploymentException: The Zip package 'C:\cfn\ebdata\source_bundle_final.zip' could not be loaded. ---> System.IO.FileNotFoundException: Could not find file 'C:\cfn\ebdata\source_bundle_final.zip'
INFO 1 AWSBeanstalkCfnDeployApp.DeployApp - Event [ERROR]: Deployment Failed: Unexpected Exception
I am using the solution stack "64bit Windows Server 2019 v2.6.8 running IIS 10.0"; I'm not sure if that is causing the issue. Can someone provide input on what could be causing the deployment failure? Thanks!

hadoop 3.3.1 show job history error: Exception in thread "main" java.lang.IllegalArgumentException: JobId string : /output_dir is not properly formed

Hadoop 3.3.1
I have successfully run a program:
hadoop jar units.jar com.clx.bigdata.ProcessUnits /input_dir /output_dir
From the printed messages, I get the job ID: job_1625033931379_0001
I can see the job history list on the web page at http://localhost:19888/jobhistory.
But when I run
hadoop job -history /output_dir
it returns an error:
2021-06-30 14:54:56,356 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
Exception in thread "main" java.lang.IllegalArgumentException: JobId string : /output_dir is not properly formed
at org.apache.hadoop.mapreduce.JobID.forName(JobID.java:156)
at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
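For comparison, the -history option of the job CLI in Hadoop 3.x takes a job ID (or a job history file), not the job's output directory, which is why "/output_dir" fails the JobID parsing in the trace above. Using the job ID printed earlier, the invocation would look something like:

mapred job -history job_1625033931379_0001

(hadoop job is the older, deprecated spelling of the same command.)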

Cloudfoundry Error 310 : Staging failed : Staging plugin failed

CFoundry::AppStagingError: 310: Staging failed: 'Staging task failed: Staging plugin failed: /var/vcap/packages/stager/vendor/bundle/ruby/1.9.1/gems/vcap_staging-0.1.63/lib/vcap/staging/plugin/java_web/plugin.rb:28:in `block in stage_application': Web application staging failed: web.xml not found (RuntimeError)
        from /var/vcap/packages/stager/vendor/bundle/ruby/1.9.1/gems/vcap_staging-0.1.63/lib/vcap/staging/plugin/java_web/plugin.rb:22:in `chdir'
        from /var/vcap/packages/stager/vendor/bundle/ruby/1.9.1/gems/vcap_staging-0.1.63/lib/vcap/staging/plugin/java_web/plugin.rb:22:in `stage_application'
        from /var/vcap/packages/stager/bin/run_plugin:19:in `<main>'
For more information, see ~/.vmc/crash
This is the error I'm getting whenever I run the vmc push command to deploy any Java files or .war files from Windows. After running the command, I can see that the file is uploaded to the cloud, but it never runs correctly. I experience almost the same error when I try to deploy through Java/Spring or any other framework.
Can someone please help me with this issue?
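For context, the java_web staging plugin that raises this error checks the uploaded application for a standard servlet WAR layout, so the archive is expected to contain a WEB-INF/web.xml deployment descriptor, roughly like this (file names are illustrative):

example.war
├── index.jsp
└── WEB-INF/
    ├── web.xml        (servlet deployment descriptor the plugin looks for)
    ├── classes/       (compiled classes)
    └── lib/           (dependency jars)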