Currently I'm trying to use the Spring Cloud Task feature on Pivotal's public cloud with my account; the API version is 2.63. I copied the only complete sample from the Spring Cloud Task documentation, built and packaged it locally, pushed it to Cloud Foundry, and specified "no-route: true" and "health-check-type: none" in manifest.yml. But that seems to have no effect: the log shows the error "Process has crashed with type: web" after the sample runs successfully and the container is destroyed. So I wonder why Cloud Foundry thinks my application is a web application, because judging from the dependencies I only use spring-cloud-task-core and spring-boot-starter. Why does it still perform a health check even though I've configured it to avoid this kind of check?
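For reference, the sample I pushed boils down to the documented example, essentially this (a sketch; the class name is mine):

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask
public class SampleTaskApplication {

    // Runs once, prints, and exits; there is no embedded web server on the classpath.
    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> System.out.println("Hello, World!");
    }

    public static void main(String[] args) {
        SpringApplication.run(SampleTaskApplication.class, args);
    }
}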
The health-check-type property only has an effect while a worker application is starting and running in its Diego container, i.e. it ensures the app is not treated as unhealthy during that window. Once the application finishes and the container is destroyed, this property cannot prevent Diego from treating the exit as a crash and restarting the app: Diego expects a pushed application to be a long-running process. Only Cloud Foundry V3 starts to support the Tasks feature, which is designed for processes that run to completion.
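On foundations that expose the V3 Tasks feature, the idea is to push the application without treating it as a web process and then execute it as a one-off task, roughly like this (a sketch; the app name and start command are illustrative, and the exact flags depend on your CLI version):
$ cf push my-task-app --no-route -u none
$ cf run-task my-task-app "java -jar app.jar"
A task is expected to exit, so Diego does not flag its completion as a crash.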
I am working on a project where I can see that all of the DAGs are queued up and not moving (for approximately 24 hours or more).
It looks like the scheduler is broken, but I need to confirm that.
So here are my questions:
How do I check whether the scheduler is broken?
How do I reset my Airflow (web server) scheduler?
I'm expecting some help regarding how to reset Airflow schedulers.
The answer will depend a lot on how you are running Airflow (standalone, in Docker, Astro CLI, managed solution...?).
If your scheduler is broken, the Airflow UI will usually tell you the time since the last scheduler heartbeat.
There is also an API endpoint for a scheduler health check at http://localhost:8080/health (if Airflow is running locally).
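For example, on a local default install (a sketch; the port and output format may vary by version):
$ curl http://localhost:8080/health
The JSON response includes a scheduler section with a status field and a latest_scheduler_heartbeat timestamp, which is usually enough to tell whether the scheduler is alive.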
Check the scheduler logs. By default they are under $AIRFLOW_HOME/logs/scheduler.
You might also want to look at how to do health checks in Airflow in general.
In terms of resetting, it is usually best to restart the scheduler, and again this will depend on how you started it in the first place. If you are using a standalone instance and have the processes in the foreground, simply press Ctrl+C or close the terminal to stop it. If you are running Airflow in Docker, restart the container; for the Astro CLI there is astro dev restart. A rough summary of the commands is below.
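(A sketch; the Compose service name is an assumption about your setup:)
$ airflow scheduler                          # standalone: start it again in the foreground after Ctrl+C
$ docker compose restart airflow-scheduler   # Docker Compose, if the scheduler service is named airflow-scheduler
$ astro dev restart                          # Astro CLI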
I'm developing an application with a microservice architecture running on Google Cloud Run (fully managed). I want to add event-based communication between my services. As far as I know, the only option is to use Eventarc. I'm curious what the best way is to reproduce the event-driven design when developing locally, and how to make deployment as seamless as possible.
I'm not familiar with Google Cloud specifically, but I assume these platforms all work similarly. As long as you can get your code running locally, you can still use the cloud-hosted message queue / pub/sub interface from your local code.
This way you can debug and try things out on your local machine while still using the messaging / eventing infrastructure.
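For example, since Eventarc delivers events to a Cloud Run service as HTTP POSTs in the CloudEvents format, one option during local development is to run the same handler locally and POST test events at it yourself. A minimal sketch using only the JDK (the names here are illustrative, not part of any Google API):

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LocalEventReceiver {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // CloudEvents metadata arrives in ce-* HTTP headers
            String type = exchange.getRequestHeaders().getFirst("ce-type");
            String source = exchange.getRequestHeaders().getFirst("ce-source");
            try (InputStream body = exchange.getRequestBody()) {
                String payload = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                System.out.printf("event type=%s source=%s payload=%s%n", type, source, payload);
            }
            // A 2xx response acknowledges the event so it is not redelivered
            exchange.sendResponseHeaders(204, -1);
            exchange.close();
        });
        server.start();
    }
}

You can then exercise it with curl, setting the ce-type/ce-source headers your real trigger would set.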
I am trying to understand Serverless architecture, which says 2 distinct things:
you as an app developer think about your function only and not about the server responsibilities. Well, the server still has to be somewhere. By servers, I understand here that it means both:
on the infrastructure side: a physical server/VM/container
as well as on the software side: say, Tomcat
Now, I have worked on Cloud Foundry and studied the ER (Elastic Runtime), i.e. the Diego architecture of Cloud Foundry, as well as its buildpack and Open Service Broker API facilities. Effectively, Cloud Foundry already works on a "similar" model where the application developer focuses on his code, and the deployment model, with the help of a buildpack, prepares a droplet with the needed Java runtime and Tomcat runtime and then uses it to create a Garden container that serves user requests. So the developer does not have to worry about where the Tomcat server or the VM/container will come from. So aren't we already meeting that mandate in Cloud Foundry?
your code comes into existence for the duration of execution and then dies. This, I agree, is different from the apps/microservices that we write in Cloud Foundry, which are long-running server processes instead. Now, if I were to develop a Java webapp/microservice with 3 REST endpoints (myapp/resource1, myapp/resource2, myapp/resource3), possibly on a Tomcat web server, I would need:
a physical machine or a VM or a container,
the Java runtime
the Tomcat container to be able to run my war file.
Going by what Serverless suggests, I infer I am supposed to concentrate only on the very specific function, say handling the request to myapp/resource1. Now, in such a scenario:
What is my corresponding Java class supposed to look like?
Where do I get access to the J2EE objects like HttpServletRequest or HttpServletResponse objects and other http or servlet or JAX-RS or Spring MVC provided objects that are created by the Tomcat runtime?
Is my Java class executed within a container that is created for the duration of execution and then destroyed after execution? If yes, who manages the creation/destruction of such a container?
Would Tomcat even be required? Is there an altogether different generic way of handling requests to these three REST endpoints? Is it somewhat like httpd servers using Python/Java CGI scripts to handle HTTP requests?
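For concreteness, here is my current understanding of what such a class looks like on one FaaS platform (an AWS Lambda sketch; other providers differ, and the names are illustrative). Is this the general pattern?

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// One function == one endpoint (myapp/resource1); routing is done by the
// API gateway, not by a servlet container, so there is no Tomcat and no
// HttpServletRequest. The platform creates and destroys the execution
// environment around handleRequest.
public class Resource1Handler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(
            APIGatewayProxyRequestEvent request, Context context) {
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"resource\":\"resource1\"}");
    }
}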
After I deployed my app to Cloud Foundry, I got the following error message:
ERR Timed out after 1m0s: health check never passed.
Of course, on my local machine it works perfectly.
You should change your health check type.
If the application does not expose a web interface, you need to change the health check type to process.
Valid values are port, process, and http.
To configure a health check while creating or updating an application, use the cf push command:
$ cf push YOUR-APP -u process
See the Health Check doc for more information:
https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html
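The same can be expressed declaratively in manifest.yml (a sketch, assuming a CLI/API version that supports these attributes):
applications:
- name: YOUR-APP
  health-check-type: process
  no-route: true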
Based on the discussion in the comments and my own testing of the actual application you are deploying, it appears that this particular app takes an age to start. Possibly related to individual Java service timeouts (as you have not bound any CF services to the application).
Anyway, while I'm not sure what the actual problem is (possibly an issue with PWS itself), this can be sort-of worked around by specifying the -t option when doing a push, or adding the timeout: <int> attribute to the manifest (see the manifest documentation).
OLD ANSWER
Need more details to be sure, but I imagine one of two things is happening:
You are not using the correct port. Cloud Foundry tells the application which port it is expected to listen on via the PORT (or, pre-Diego, VCAP_APP_PORT) environment variable. This defaults to 8080, so if your application is not listening on 8080 (or is bound to 127.0.0.1 instead of 0.0.0.0), then the health check will fail. (See the sketch at the end of this answer.)
Your application does not expose any API endpoints and should be deployed with the --no-route option on CF, and (starting with Diego) needs to have cf set-health-check [app-name] none executed against it. This should only be done if your application genuinely does not need a health check.
Some buildpacks can take care of the first automatically for you. Which buildpack are you using? Or, alternatively, which language are you using?
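To illustrate the first point, a minimal JDK-only sketch of binding to the platform-assigned port on all interfaces (not specific to any framework):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class App {
    public static void main(String[] args) throws Exception {
        // Use the port Cloud Foundry assigns; fall back to 8080 locally.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        // Bind to 0.0.0.0, not 127.0.0.1, so the health check can reach the app.
        HttpServer server = HttpServer.create(new InetSocketAddress("0.0.0.0", port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}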
You can disable the health check with the command below (a short-term solution):
$ cf push app_name -p target/app.jar -u none
(In newer CLI versions the none value has been replaced by process.)
I have a WebJob that is triggered by an Azure Storage Queue. When I test this locally, everything works fine. When I publish the WebJob (as part of an Azure Website) and then go to the Azure Management Portal and try to start the WebJob, it throws an error.
I had this running earlier, but it was having problems, so I deleted the job in the management portal and tried to republish the web site with the WebJob.
Any suggestions on how to figure out what's going on?
In the old Azure Management Portal there was no clear way I could find to kill the process (i.e. to stop the job if one was still running). Using the new portal, I looked at all the processes running on the site, and there was the WebJob running 26 threads. After killing the process I was able to start the recently uploaded one.
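For reference, the same process inspection can also be scripted against the site's Kudu API (a sketch; the site name and credentials are placeholders, and the endpoint details may vary):
$ curl -u USER:PASSWORD https://MY-SITE.scm.azurewebsites.net/api/processes
$ curl -u USER:PASSWORD -X DELETE https://MY-SITE.scm.azurewebsites.net/api/processes/PROCESS-ID
The first call lists the running processes (including the WebJob host), and the second kills one by ID.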