I installed managed Anthos on a GKE cluster. Anthos Service Mesh is working and displays my API. Thanks to that, Services in Monitoring automatically detects my API, which is great because it lets me easily set SLOs and an error budget for my API.
However, I would also like to set SLOs for individual endpoints in my API. Services (in Monitoring) detects only my API, not the endpoints within it (my API is a single pod/container plus a sidecar). I tried to add endpoints to Services in Monitoring, but it looks like only Kubernetes objects can be added there.
Is there a way to use Services in Monitoring with individual endpoints, or is the only option to break the endpoints out into separate microservices?
You can monitor your endpoints using Cloud Endpoints with OpenAPI, which allows you to monitor the health of APIs you own by using the logs and metrics Cloud Endpoints maintains for you automatically. When users make requests to your API, Endpoints logs information about the requests and responses and also tracks three of the four golden signals of monitoring: latency, traffic, and errors. These usage and performance metrics help you monitor your API.
See Configuring Cloud Endpoints for the configuration process, Monitoring your API as a reference on the monitoring process for your API, and the Cloud Endpoints overview for general background.
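If you want to look at traffic per endpoint programmatically, Cloud Endpoints exports per-method metrics that you can query through the Cloud Monitoring API (and use as the basis for per-endpoint SLOs). As a rough sketch, assuming the google-cloud-monitoring Python client and a placeholder project ID (the exact metric and label names may need adjusting for your setup):

```python
# Hedged sketch: list Cloud Endpoints request counts broken down by API method.
# Assumes the google-cloud-monitoring client library and an API already served
# through Cloud Endpoints; the project ID is a placeholder.
import time

from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder
client = monitoring_v3.MetricServiceClient()

now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now) - 3600},  # last hour
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        # request_count is recorded per API method, so each endpoint shows up
        # as its own time series.
        "filter": 'metric.type = "serviceruntime.googleapis.com/api/request_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    # The method is a label on the monitored resource for Endpoints metrics.
    print(series.resource.labels.get("method"), len(series.points))
```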
Related
I'm having some trouble setting uptime checks for some Cloud Run services that don't allow unauthenticated invocations.
For context, I'm using Cloud Endpoints + ESPv2 as an API gateway that's connected to a few Cloud Run services.
The ESPv2 container/API gateway allows unauthenticated invocations, but the underlying Cloud Run services do not (since requests to these backends flow via the API gateway).
Each Cloud Run service has an internal health check endpoint that I'd like to hit periodically via Cloud Monitoring uptime checks.
This ensures that my Cloud Run services are healthy, and it has the added benefit of reducing cold boot times, since the containers are kept 'warm'.
However, since the protected Cloud Run services expect a valid authorisation header, all of the requests from Cloud Monitoring fail with a 403.
From the Cloud Monitoring UI, it looks like you can only configure a static auth header, which won't work in this case. I need to be able to dynamically create an auth header per request sent from Cloud Monitoring.
I can see that Cloud Scheduler supports this already. I have a few internal endpoints on the Cloud Run services (that aren't exposed via the API gateway) that are hit via Cloud Scheduler, and I am able to configure an OIDC auth header on each request. Ideally, I'd be able to do the same with Cloud Monitoring.
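For reference, those Scheduler jobs are set up roughly like this (a sketch using the google-cloud-scheduler Python client; the project, service account, and URLs below are placeholders):

```python
# Rough sketch: Cloud Scheduler job that calls an internal Cloud Run endpoint
# with an OIDC token attached. Assumes google-cloud-scheduler is installed;
# all names and URLs are placeholders.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/europe-west1"  # placeholder

job = scheduler_v1.Job(
    name=f"{parent}/jobs/warm-internal-endpoint",
    schedule="*/5 * * * *",  # every 5 minutes
    http_target=scheduler_v1.HttpTarget(
        uri="https://my-service-abc123-ew.a.run.app/internal/health",  # placeholder
        http_method=scheduler_v1.HttpMethod.GET,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-invoker@my-project.iam.gserviceaccount.com",
            # The audience must match the Cloud Run service URL for IAM auth to pass.
            audience="https://my-service-abc123-ew.a.run.app",
        ),
    ),
)

created = client.create_job(request={"parent": parent, "job": job})
print(created.name)
```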
I can see a few workarounds for this, but all of them are less than ideal:
Allow unauthenticated invocations for the underlying Cloud Run services. This will make my internal services publicly accessible and then I will have to worry about handling auth within each service.
Expose the internal endpoints via the API gateway/ESPv2. This is effectively the same as the previous workaround.
Expose the internal endpoints via the API gateway/ESPv2 AND configure some sort of auth. This sort of works, but at the time of writing the only auth methods supported by ESPv2 are API keys and JWT. JWT is already out of the question, but I guess an API key would work. Again, this requires a bit of setup which I'd rather avoid if possible.
Would appreciate any thoughts/advice on this.
Thanks!
This simple solution may work for your use case, as it is easier to just use a TCP uptime check on port 443:
Create your own Cloud Run service using https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy.
Create a new uptime check on TCP port 443 against the Cloud Run URL.
Wait a couple of minutes.
Location results: All locations passed
Virginia OK
Oregon OK
Iowa OK
Belgium OK
Singapore OK
Sao Paulo OK
I would also like to point out that Cloud Run is a fully managed Google product with a 99.95% monthly uptime SLA and no recent incidents in the past few months, but proactively monitoring it on your end is a very good idea too.
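If you would rather create the check programmatically than through the UI, a rough sketch with the google-cloud-monitoring Python client looks like this (the project ID and host are placeholders for your own values):

```python
# Rough sketch: create a TCP uptime check on port 443 against a Cloud Run URL.
# Assumes google-cloud-monitoring is installed; project and host are placeholders.
from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder
client = monitoring_v3.UptimeCheckServiceClient()

config = monitoring_v3.UptimeCheckConfig(
    display_name="cloud-run-tcp-443",
    monitored_resource={
        "type": "uptime_url",
        "labels": {
            "project_id": project_id,
            "host": "my-service-abc123-ew.a.run.app",  # placeholder Cloud Run host
        },
    },
    tcp_check=monitoring_v3.UptimeCheckConfig.TcpCheck(port=443),
    period={"seconds": 300},  # check every 5 minutes
    timeout={"seconds": 10},
)

created = client.create_uptime_check_config(
    request={
        "parent": f"projects/{project_id}",
        "uptime_check_config": config,
    }
)
print(created.name)
```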
I am currently building a REST API, and for this I am using Google Cloud API Gateway and Google Cloud Run. I've been looking through all the Google Cloud documentation and researching elsewhere, and I can't find how to add a custom domain to an API Gateway instance. The funny thing is that there is more documentation for Google Cloud Endpoints; I could find how to do it with Endpoints, but it does not apply to my use case.
I have 10 Cloud Run services, each running a microservice, and I want to bring everything together under a single domain with OpenAPI support, but I have failed in the attempt.
In any case, if someone has managed to set a custom domain on an API Gateway instance, I would appreciate your guidance. Greetings.
Custom domain names are not supported for API Gateway on GCP during the beta release. Since it is still in beta as of today, if you want to use a custom domain you could use Cloud Endpoints on Cloud Run, or you could even look into using microservices on App Engine.
I am trying to setup a microservice architecture on AWS, each microservice is a REST API.
Some of the services are running on ECS using Fargate and some of the services are running as a set of lambdas.
I am trying to have each api route resolve to the correct service, whether it is a ECS or Lambda based service.
I can see how it would be possible using only ECS services (with an Application Load Balancer and listeners) or using only Lambdas (with an API Gateway), but I just can't seem to figure out how to mix the two together.
I have been searching relentlessly all week and I cannot find any decent documentation or an example of how to implement something similar to this.
There appears to be a limit on the number of routes for an ALB or API Gateway. If I have several Lambda-based services, each Lambda function will need its own declared path, and they will use up the path limit very quickly.
Should there be an intermediary step between each service and the API Gateway? For instance, each Lambda service could have its own API Gateway that 'groups' its functions together, which would mean a nested set of API Gateways that the parent API Gateway routes to. This doesn't feel correct, though.
Any help in the right direction would be appreciated.
Thanks
The limit on API Gateway REST and WebSocket routes/resources in your AWS account can be increased by submitting a request to AWS Support.
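On mixing the two target types behind a single entry point: an HTTP API in API Gateway can route some paths to Lambda integrations and others to your ECS services through a VPC link to the load balancer. A rough boto3 sketch, assuming the Lambda function, VPC link, and load balancer listener already exist (all ARNs and IDs below are placeholders):

```python
# Rough sketch: one API Gateway HTTP API fronting both Lambda and ECS services.
# Assumes the Lambda function, the VPC link, and the load balancer listener for
# the ECS service already exist; all ARNs/IDs are placeholders.
import boto3

apigw = boto3.client("apigatewayv2")

api = apigw.create_api(Name="unified-api", ProtocolType="HTTP")
api_id = api["ApiId"]

# Route /users/* to a Lambda-based service.
lambda_integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:eu-west-1:123456789012:function:users-service",
    PayloadFormatVersion="2.0",
)
apigw.create_route(
    ApiId=api_id,
    RouteKey="ANY /users/{proxy+}",
    Target=f"integrations/{lambda_integration['IntegrationId']}",
)

# Route /orders/* to an ECS/Fargate service behind a load balancer, via VPC link.
ecs_integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    ConnectionType="VPC_LINK",
    ConnectionId="vpclink-id-placeholder",
    IntegrationUri="arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/orders-alb/abc/def",
    PayloadFormatVersion="1.0",
)
apigw.create_route(
    ApiId=api_id,
    RouteKey="ANY /orders/{proxy+}",
    Target=f"integrations/{ecs_integration['IntegrationId']}",
)
```

With this layout, each service (Lambda-based or ECS-based) only consumes as many routes as paths you expose for it, and the per-API route quota can still be raised via the support request mentioned above if you run out.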
I have a custom gRPC backend deployed behind an Endpoints Service Proxy (ESP) connected to Google Cloud Endpoints.
When sending a request with the X-Cloud-Trace-Context header set, I can see the spans recorded by ESP show up in my Stackdriver Trace dashboard.
However, my service is also sending requests to Google Cloud KMS as part of handling that request. I'd like Google Cloud to create trace spans for those sub-requests automatically for me as well; however, attaching the X-Cloud-Trace-Context header that ESP forwarded to me to the sub-requests sent to Cloud KMS does not cause any spans for those sub-requests to show up in Stackdriver Trace. The service account used to connect to Cloud KMS does have the "Stackdriver Trace Agent" role enabled.
Is it possible to tell Google Cloud services (such as Cloud KMS) to automatically generate trace spans for the current request's trace context, or do I need to manually generate traces for these requests in my backend code?
Cloud Trace doesn't currently generate service-side traces for requests to most GCP services, although we're aware that it would be a valuable feature. To track how much of your latency is being consumed by KMS (or other services), you can create a client-side trace record using OpenCensus (GitHub) or similar.
Cloud KMS (as of this writing) doesn't support gRPC, but we are working on it.
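As a rough sketch of the client-side approach in Python with OpenCensus (assuming the opencensus, opencensus-ext-stackdriver, and google-cloud-kms packages; the project ID and key name are placeholders), you can join a span for the KMS call to the trace that ESP forwarded:

```python
# Rough sketch: record a client-side span around a Cloud KMS call with OpenCensus,
# attached to the trace that ESP forwarded via X-Cloud-Trace-Context.
# Assumes opencensus, opencensus-ext-stackdriver and google-cloud-kms are installed;
# the project ID and key name are placeholders.
from google.cloud import kms
from opencensus.ext.stackdriver.trace_exporter import StackdriverExporter
from opencensus.trace.propagation import google_cloud_format
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer


def encrypt_with_trace(trace_header: str, plaintext: bytes) -> bytes:
    # Parse the incoming X-Cloud-Trace-Context header so the new span is
    # recorded under the same trace that ESP already started.
    span_context = google_cloud_format.GoogleCloudFormatPropagator().from_header(trace_header)

    tracer = Tracer(
        span_context=span_context,
        exporter=StackdriverExporter(project_id="my-project"),  # placeholder
        sampler=AlwaysOnSampler(),
    )

    kms_client = kms.KeyManagementServiceClient()
    key_name = (
        "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key"
    )  # placeholder

    # The span covers only the outbound KMS call, so its duration shows how much
    # of the request latency is spent waiting on KMS.
    with tracer.span(name="cloudkms.encrypt"):
        response = kms_client.encrypt(request={"name": key_name, "plaintext": plaintext})
    return response.ciphertext
```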
I have a java web application running on Tomcat deployed on an EC2 instance. Is there any way I can monitor/set alarms for when the web application goes down or stops responding? Essentially what I would like to do is to check if a HTTP request to the web app responds with status 200. If it does not respond with 200 (for a few times) then it should raise an alarm and send an e-mail to some ops people.
I know there are third-party options like Nagios or UptimeRobot that I could use, but I wanted to know if there are any AWS offerings for this. Is it possible to set up such automated monitoring using AWS CloudWatch? I could not find a way to do this based on what I read about CloudWatch. If this isn't the sort of thing CloudWatch can handle, is there another AWS service suited for this?
I think a port monitoring feature is available in AWS Elastic Beanstalk.
You can consider checking this: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.healthstatus.html
Ashutosh,
EC2 is an IaaS offering from AWS, so there is no built-in AWS feature that monitors your Tomcat server for you. There are custom-built solutions, but I think that is not what you are looking for here.
However, if you are using an Application Load Balancer or Elastic Beanstalk, you do get options to trigger alarms.
Yes, you can achieve this with CloudWatch. Collect your logs with the CloudWatch agent and upload them to a CloudWatch log stream. Below is the reference URL for configuring the CloudWatch agent:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
After that, with "Create metric filter" you can set up an email trigger as per your requirements:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringPolicyExamples.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Counting404Responses.html
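As a rough sketch of those two steps with boto3 (the log group name, filter pattern, and SNS topic ARN are placeholders you would adapt to your Tomcat access log format):

```python
# Rough sketch: turn non-200 responses in the Tomcat access logs shipped by the
# CloudWatch agent into a metric, then alarm on it via an SNS topic that e-mails ops.
# Log group, filter pattern and SNS topic ARN are placeholders.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1. Metric filter: count access-log lines whose status code is not 200.
logs.put_metric_filter(
    logGroupName="/tomcat/access-log",  # placeholder log group
    filterName="non-200-responses",
    filterPattern='[ip, user, username, timestamp, request, status_code != 200, bytes]',
    metricTransformations=[
        {
            "metricName": "Non200Count",
            "metricNamespace": "Tomcat/WebApp",
            "metricValue": "1",
            "defaultValue": 0,
        }
    ],
)

# 2. Alarm: if the count stays elevated for a few periods, notify the ops SNS
#    topic (which has the ops e-mail addresses subscribed).
cloudwatch.put_metric_alarm(
    AlarmName="webapp-non-200-responses",
    Namespace="Tomcat/WebApp",
    MetricName="Non200Count",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```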