I created a global load balancer with a backend service and enabled logging in my Google Cloud project. The load balancer charts and metrics are supposed to appear in the Monitoring dashboard; however, the charts and metrics were never created.
According to the Google Cloud documentation, it looks like the load balancer dashboard should be ready to use as soon as a load balancer exists in the project. I also cannot find a way to create the Load Balancers dashboard manually.
Go to Monitoring > Dashboards and create a new dashboard. Then open Metrics Explorer, type "load balancer" into the "Find resource type and metric" field, and select your balancer type (HTTP, TCP, or UDP). Next, select a metric (for example, utilization for HTTP), then choose a filtering option (in my case it was the backend service name); it should show up in the list.
After that you can save the chart to the dashboard you created. Open that dashboard and you should have a working panel. You can add more charts to observe various metrics.
This procedure may vary for a TCP load balancer (the metrics differ), but in general that is the way you do it.
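If the chart still refuses to appear in the console, the same kind of dashboard can also be created from the command line. This is a minimal sketch, assuming the Monitoring API is enabled in the project; the file name, display names, and the HTTP(S) backend-latency metric shown are placeholders you would adjust to your balancer type (on older gcloud versions the command may live under `gcloud beta monitoring`):

```shell
# Minimal dashboard definition with a single chart (placeholder names).
cat > lb-dashboard.json <<'EOF'
{
  "displayName": "Load Balancer Dashboard",
  "gridLayout": {
    "widgets": [{
      "title": "Backend latency",
      "xyChart": {
        "dataSets": [{
          "timeSeriesQuery": {
            "timeSeriesFilter": {
              "filter": "metric.type=\"loadbalancing.googleapis.com/https/backend_latencies\" resource.type=\"https_lb_rule\""
            }
          }
        }]
      }
    }]
  }
}
EOF

# Create the dashboard from the JSON definition.
gcloud monitoring dashboards create --config-from-file=lb-dashboard.json
```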
I could provide a more specific solution, but you would have to update your question with more details (the LB type is the most important).
Related
I have a simple Spring Boot application deployed on Kubernetes on GCP. I wish to auto-scale the application using a latency threshold (response time). Stackdriver has a set of metrics for load balancers; details of the metrics can be found in this link.
I have exposed my application on an external IP using the following command:
kubectl expose deployment springboot-app-new --type=LoadBalancer --port 80 --target-port 9000
I used the API Explorer to view the metrics. The response code is 200, but the response body is empty.
The metrics filter I used is metric.type = "loadbalancing.googleapis.com/https/backend_latencies"
Question
Why am I not getting anything in the response? Am I making any mistake?
I have already enabled the Stackdriver API. Are there any other settings to be made to get a response?
As mentioned in the comments, the metric you're trying to use belongs to an HTTP(S) load balancer, while the type LoadBalancer, when used in GKE, deploys a Network Load Balancer instead.
The reason you're not able to find its metrics in the Stackdriver Monitoring page is that the link shared in the comment corresponds to the TCP/SSL Proxy load balancer documentation rather than the Network Load Balancer (a pass-through, layer-4 load balancer), which is the one already created in your cluster; for now, Network Load Balancers won't show up in the Stackdriver Monitoring page.
However, a feature request has been created to have this functionality enabled in the Monitoring dashboard.
If you need this particular metric (loadbalancing.googleapis.com/https/backend_latencies), it might be best to expose your deployment with an Ingress instead of the LoadBalancer Service type. This will automatically create an HTTP(S) load balancer, with monitoring enabled, in place of the current Network Load Balancer.
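As a sketch of that approach (the Service name is taken from the question; the manifest itself is an assumption about your setup, and on GKE the Service may need to be changed to type NodePort for the Ingress to wire it up):

```shell
# Create an Ingress in front of the existing Service; on GKE this
# provisions an HTTP(S) load balancer whose metrics appear in Monitoring.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-app-ingress
spec:
  defaultBackend:
    service:
      name: springboot-app-new
      port:
        number: 80
EOF
```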
From the GCP console's perspective, a load balancer is a service, and related components such as backend services, health checks, etc. come under it.
However, APIs are only available for those components, such as backendService, address, healthCheck, etc.
In the UI we can see a direct relationship between a backend service and a load balancer, but the backend service API doesn't have a corresponding field.
While in the UI we have:
Whereas the supported fields of the backend service are:
affinityCookieTtlSec, backends, cdnPolicy, connectionDraining, creationTimestamp, description, enableCDN, fingerprint, healthChecks, iap, id, kind, loadBalancingScheme, name, port, portName, protocol, region, selfLink, sessionAffinity, timeoutSec
I wanted to know if there is a direct or indirect way to get a list of load balancers.
As mentioned by Patrick W, there is no direct 'load balancer' entity; it's just a collection of components. The list shown in the UI that appears to be load balancers is actually the url-map component, which can be seen via the API with:
gcloud compute url-maps list
More information on the command
At the API level, there is no Load Balancer, only the components that make it up.
Your best bet to get a view similar to the UI is to list forwarding rules (global and regional). You can use gcloud compute forwarding-rules list, which will show you all the forwarding rules in use (similar to the UI view), along with the IP of each and its target (which may be a backend service or a target pool).
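For example (the --format flag is optional and only trims the output to roughly the columns the console shows):

```shell
# All forwarding rules, global and regional, with their IPs and targets.
gcloud compute forwarding-rules list \
    --format="table(name, region, IPAddress, target)"
```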
In my project I have two instances (on ECS) running a Node.js app. Both are identical (for HA purposes), use cookies, and sit behind a load balancer. The problem is that the instances don't share sessions: when I log in to the first instance and then press Back, the load balancer sometimes switches me to the second instance, which doesn't have any session data (the cookie was generated by the first instance), and I need to log in again. I know there is an option to force the two instances to share sessions, but that approach requires changes to the application code. So instead, I would like to force the load balancer to keep using the instance it chose the first time until the user finishes and logs off (or closes the browser). Is that possible?
You can enable sticky sessions on your target groups. To do this:
In the Amazon EC2 console, go to Target Groups under LOAD BALANCING.
Select the target group and go to the Description tab.
Click Edit attributes and enable Stickiness.
Set the duration and save.
These steps are slightly different if you have a Classic Load Balancer. Read more here and here.
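The same attributes can be set with the AWS CLI. A sketch, assuming a target group behind an Application Load Balancer; the ARN below is a placeholder:

```shell
# Enable load-balancer-generated cookie stickiness for one day (86400 s).
aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef \
    --attributes Key=stickiness.enabled,Value=true Key=stickiness.lb_cookie.duration_seconds,Value=86400
```

For a Classic Load Balancer, the equivalent is `aws elb create-lb-cookie-stickiness-policy`.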
I have a monitoring tool, JavaMelody, installed with my application. The application is running on 7 instances in AWS, in an Auto Scaling group behind a load balancer. When I go to myapp.com/monitoring, I get statistics from JavaMelody; however, they are only for the node the load balancer happens to direct me to. Is there a way I can specify which node I am browsing to in a web browser?
The load balancer will send you to an Amazon EC2 instance based on a least-open-connections algorithm.
It is not possible to specify which instance you wish to be sent to.
You will need to connect specifically to each instance, or have the instances push their data to some central store.
You could use CloudWatch custom metrics to write data from your instances and their monitoring agents, and then use CloudWatch dimensions to aggregate this data across the relevant instances.
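As a sketch of that push approach (the namespace, metric name, value, and instance ID below are all hypothetical):

```shell
# Publish one data point, tagged with the instance it came from, so
# CloudWatch can later aggregate or filter on the InstanceId dimension.
aws cloudwatch put-metric-data \
    --namespace "MyApp/JavaMelody" \
    --metric-name RequestLatencyMs \
    --dimensions InstanceId=i-0123456789abcdef0 \
    --value 42
```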
I have not tried this myself, but you could create several listeners in your load balancer, each with a different listening port and a different target server. The monitoring reports of instance #1 would then be available at http://...:81/monitoring, and so on for #2 … #n.
Otherwise, I think that there are other solutions such as:
host- or path-based load balancing rules (path-based rules would require adding net.bull.javamelody.ReportServlet in your webapp to listen on different paths)
use a JavaMelody collector server to gather the data on a separate server and have monitoring reports for each instance, or aggregated for all instances
send some of the JavaMelody metrics to AWS CloudWatch or to Graphite
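On an Application Load Balancer, the one-listener-per-instance idea above can be sketched like this (all names, ARNs, and instance IDs are placeholders):

```shell
# A target group containing only instance #1.
aws elbv2 create-target-group --name monitoring-node-1 \
    --protocol HTTP --port 9000 --vpc-id vpc-0123abcd
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/monitoring-node-1/0123456789abcdef \
    --targets Id=i-0aaa1111bbb22222c

# A listener on port 81 that forwards only to that instance's group;
# repeat with port 82 and a second target group for instance #2, etc.
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-lb/0123456789abcdef \
    --protocol HTTP --port 81 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/monitoring-node-1/0123456789abcdef
```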
I'm trying to set up an AWS environment for the first time and have so far got a DNS entry pointing to a Classic ELB. I also have an EC2 instance running, but I don't seem to be able to add the EC2 instance to the ELB in the UI. The documentation says that I can do this on the "Instances" tab of the load balancer screen, but I don't have that tab at all. All I can see is Description | Listeners | Monitoring | Tags.
Does anyone have any ideas why the "Instances" tab might be missing from the UI?
There are now two different types of Elastic Load Balancer.
ELB/1.0, the original, is now called a "Classic" balancer.
ELB/2.0, the new version, is an "Application Load Balancer" (or "ALB").
They each have different features and capabilities.
One notable difference is that an ALB doesn't simply route traffic to instances; it routes traffic to targets on instances, so (for example) you could pool multiple services on the same instance (e.g. ports 8080, 8081, 8082) even though those requests all came into the balancer on port 80. These targets are configured in virtual objects called target groups. So there are a couple of new abstraction layers, and this part of the provisioning workflow is quite different. It's also provisioned using a different set of APIs.
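For example, with the AWS CLI a single instance can be registered in one target group on several ports (the ARN and instance ID are placeholders):

```shell
# Three services on one instance, all reachable through the same ALB.
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-services/0123456789abcdef \
    --targets Id=i-0123456789abcdef0,Port=8080 Id=i-0123456789abcdef0,Port=8081 Id=i-0123456789abcdef0,Port=8082
```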
Since the new Application Load Balancer is the default selection in the console wizard, it's easy to click past it, not realizing that you've selected something other than the classic ELB you might have been expecting, and that sounds like what occurred in this case.
It is possible that you selected an Application Load Balancer instead of a Classic Load Balancer.
If that is the case, then you need to add your instance to a Target Group, as the Instances tab is not available for Application Load Balancers.
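If you do want to keep the Application Load Balancer, registering the instance looks like this in the CLI (the ARN and instance ID are placeholders), and describe-target-health confirms it was picked up:

```shell
# Add the instance to the ALB's target group.
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-targets/0123456789abcdef \
    --targets Id=i-0123456789abcdef0

# Verify the registration and health status.
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-targets/0123456789abcdef
```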
I hope the above resolves your case.