After installing SAP HANA Vora 1.2 on MapR 5.1, I get the error message below and vora-catalog seems to be down.
Can anybody solve this problem?
2016-10-19 21:29:48,341 ERROR com.mapr.warden.service.baseservice.Service$ServiceMonitorRun run [vora-catalog_monitor]: Monitor command: [/opt/mapr/vora/warden-control.sh, catalog, check]can not determine if service: vora-catalog is running. Retrying. Retrial #1. Total retries count is: 3
The following steps solved this issue:
Shut down all vora services on all nodes.
Ensure that no vora processes are still running at the OS level either.
Start the vora discovery service first on all nodes.
Please check that the discovery service is up and running.
The discovery service UI is available on port 8500.
Start the vora-dlog service on all nodes where it is installed.
Check via the discovery service UI that the service is available.
Start the vora-catalog service.
Check via the discovery service UI that the service is available.
Start other vora services.
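A minimal command sketch of that order, assuming the Vora services are registered with MapR Warden under the names shown below and that the discovery UI on port 8500 is Consul-based (node and service names are illustrative, adjust to your installation):

```sh
# 1. Stop everything and make sure no vora processes survive on any node
maprcli node services -name vora-catalog -action stop -nodes node1   # repeat per node/service
ps -ef | grep -i vora                        # should show nothing vora-related

# 2. Start the discovery service on all nodes and verify it is up
maprcli node services -name vora-discovery -action start -nodes node1
curl http://node1:8500/v1/catalog/services   # discovery (Consul) HTTP API / UI on 8500

# 3. Start dlog, then the catalog, checking each in the discovery UI
maprcli node services -name vora-dlog -action start -nodes node1
maprcli node services -name vora-catalog -action start -nodes node1
/opt/mapr/vora/warden-control.sh catalog check   # the same check the monitor runs

# 4. Start the remaining vora services the same way
```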
I have a project on GCP with a VM instance (CentOS 7) in it. I want to monitor the status of some services running on the VM. Is there a way to monitor them through the Ops Agent?
The objective would then be to have alerts based on the status of the service (using Grafana). Using agent.googleapis.com/processes/cpu_time in the GCP process metrics does show the processes currently running on the VM, but having an alert based on the CPU time of a process is not as clear-cut as having an alert based on the status of a service.
Also, I have a hard time finding an answer to what the difference between a service and a process is in UNIX. Based on this answer https://superuser.com/questions/1235647/what-is-the-difference-between-a-service-and-a-process it seems that a service differs in that it runs continuously(?)
Does this mean that monitoring the processes associated with the service is not the same as monitoring the service itself, since the process may be killed but the service may continue running?
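For example, on the VM itself the two checks can give different answers (nginx is just an illustration):

```sh
# Ask systemd for the unit's state (the "service")
systemctl is-active nginx       # active / inactive / failed

# Ask the process table whether a matching process exists (the "process")
pgrep -x nginx                  # prints PIDs, or nothing

# A unit can be failed or restarting while no process is currently running,
# which is why the two views can disagree.
systemctl show nginx -p ActiveState,SubState,Restart
```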
You can set up a custom alert on any running process on GCP.
In the alert policy you need to add the process name (the .exe / full process path).
Please go through the video below; it explains this in detail.
https://youtu.be/aaa_kwM7zkA
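For reference, a rough command-line sketch of the same thing. The select_process_count filter follows the process-health condition examples in the Cloud Monitoring docs, so verify the exact syntax; "nginx" and the display names are only placeholders:

```sh
# Alert when fewer than 1 matching process is seen on the instance for 5 minutes.
cat > process-alert.json <<'EOF'
{
  "displayName": "nginx process missing",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "nginx process count below 1",
      "conditionThreshold": {
        "filter": "select_process_count(\"has_substring(\\\"nginx\\\")\") AND resource.type=\"gce_instance\"",
        "comparison": "COMPARISON_LT",
        "thresholdValue": 1,
        "duration": "300s"
      }
    }
  ]
}
EOF

gcloud alpha monitoring policies create --policy-from-file=process-alert.json
```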
I have an app running in Cloud Foundry which has been working fine for months, but has suddenly stopped responding. The errors in the log are all related to connecting to a Postgres database service. I don't really know how to administer this sort of thing in cf, so I decided to just remove the app and service and redeploy from scratch.
However, I can't remove the app or the service - all requests are blocked due to an in-progress operation between the app and the service.
For example:
Job (ac7753ee-19e8-4b7a-9f39-85284167fb7d) failed: The service broker rejected the request due to an operation being in progress for the service binding.
So I can't delete the app because it is bound to the service, and I can't unbind the app and service because there is an operation in progress.
What can I do?
For now, to get you unstuck, you could try cf purge-service-instance instead; this removes the service instance without making a call to the broker.
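Roughly (the app and service instance names are placeholders):

```sh
# Drop the stuck instance from Cloud Foundry without calling the broker;
# this also removes its bindings and service keys.
cf purge-service-instance my-postgres

# With the binding gone, the app can be deleted (or kept and rebound later).
cf delete my-app -r

# Recreate everything from scratch.
cf create-service <offering> <plan> my-postgres
cf push my-app
cf bind-service my-app my-postgres
cf restage my-app
```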
I have created an AWS EKS self-managed Kubernetes Runtime Fabric in Runtime Manager. It was in the Active state when I created it, and I was able to deploy Mule applications on RTF (13 Mule applications deployed successfully).
Suddenly, the RTF is in the 'Disconnected' state. The nodes are 'Healthy', but 'Create/Manage application deployments' is in a 'Degraded' state.
Please let me know which troubleshooting steps I have to follow to bring the RTF back to the 'Active' status.
I already have an ECS cluster deployed on multiple EC2 instances. Now I want to integrate them with X-Ray for troubleshooting some issues.
Is there a way to do this without re-deploying the cluster?
Separately, I am using the start.sh and the Dockerfile from "Can I run aws-xray on the same ECS container?" to generate a Tomcat container with X-Ray inside, but the X-Ray console is still empty after opening port 2000 TCP and UDP in the role and opening everything in the NACL. Am I missing anything?
Getting X-Ray to work with your application has two parts: (1) instrumenting your application with an AWS X-Ray SDK, which generates trace data for your application, and (2) running the daemon process so that the SDK can send the trace data to the X-Ray service to be displayed in the console. You have done only the second part. Once you instrument your application, you'll need to redeploy it in the cluster.
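If the app is a standard Tomcat webapp and you would rather not touch the code, one option is the X-Ray auto-instrumentation agent for Java. A rough sketch of wiring it into the container (the /opt/xray-agent paths and the service name are assumptions - check the X-Ray Java agent documentation for the current artifact and layout):

```sh
# bin/setenv.sh in the Tomcat image: load the X-Ray Java agent at JVM startup.
# disco-java-agent.jar and disco-plugins come from the unpacked agent distribution;
# adjust the paths to wherever your Dockerfile copies them.
CATALINA_OPTS="$CATALINA_OPTS \
  -javaagent:/opt/xray-agent/disco/disco-java-agent.jar=pluginPath=/opt/xray-agent/disco/disco-plugins"

# Point the agent/SDK at the local daemon from your start.sh and name the service.
export AWS_XRAY_DAEMON_ADDRESS="127.0.0.1:2000"
export AWS_XRAY_TRACING_NAME="my-tomcat-app"
```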
How can I monitor the Airflow web server when using Google Cloud Composer? If the web server goes down or crashes due to an error, I would like to receive an alert.
You can use Stackdriver Monitoring: https://cloud.google.com/composer/docs/how-to/managing/monitoring-environments. Alerts can also be set in Stackdriver.
At this time, fine-grained metrics for the Airflow web server are not exported to Stackdriver, so it cannot be monitored like other resources in a Cloud Composer environment (such as the GKE cluster, GCE instances, etc.). This is because the web server runs in a tenant project alongside the main project where most of your environment's resources live.
However, web server logs for Airflow in Composer are now visible in Stackdriver as of March 11, 2019. That means for the time being, you can configure logs-based metrics for the web server log (matching on lines that contain Traceback, etc).
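A sketch of such a logs-based metric, assuming the web server writes to a log named airflow-webserver (check the exact log name for your environment in the Logs Viewer):

```sh
# Count tracebacks in the Composer Airflow web server log.
gcloud logging metrics create airflow_webserver_errors \
  --description="Tracebacks in the Airflow web server log" \
  --log-filter='resource.type="cloud_composer_environment" AND logName:"airflow-webserver" AND textPayload:"Traceback"'

# Then create a Stackdriver alerting policy on the resulting metric
# logging.googleapis.com/user/airflow_webserver_errors (via the console or API).
```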