Is there a way to find the IP addresses of all connections to AWS RDS instances?
You can get the host IP of each connected client from information_schema with a MySQL query (the host column is reported as ip:port):
SELECT host FROM information_schema.processlist;
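As a minimal sketch of doing this programmatically, assuming the pymysql driver (the endpoint, user, and password below are placeholders), you can strip the port and count connections per client IP:

import pymysql

# Connect to the RDS endpoint (all connection details are placeholders).
conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
)
with conn.cursor() as cur:
    # The host column is "ip:port", so strip the port and count rows per IP.
    cur.execute(
        "SELECT SUBSTRING_INDEX(host, ':', 1) AS ip, COUNT(*) AS connections "
        "FROM information_schema.processlist GROUP BY ip"
    )
    for ip, connections in cur.fetchall():
        print(ip, connections)
conn.close()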
Client IPs are not shown in the basic monitoring views. For Monitoring, you can choose how to view your metrics from these options:
CloudWatch:
Shows a summary of DB instance metrics available from Amazon CloudWatch. Each metric includes a graph showing the metric monitored over a specific time span.
Enhanced monitoring:
Shows a summary of OS metrics available for a DB instance with Enhanced Monitoring enabled. Each metric includes a graph showing the metric monitored over a specific time span.
OS Process list:
Shows details for each process running in the selected instance.
So the option that can help you in this regard is Performance Insights.
Performance Insights:
If you are interested in detailed metrics such as host IPs, the number of connections, and slow queries (which, I believe, can eliminate the need for a DBA), I have had a very good experience using Amazon RDS Performance Insights.
Top activity can list any of the dimensions indicated at the top of the list. For Aurora PostgreSQL, Performance Insights currently supports listing top SQL, waits, hosts, and users.
See the AWS Database Blog post "Analyzing Amazon RDS database workload with Performance Insights".
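If you want the same "top hosts" view programmatically, here is a hedged sketch using the boto3 Performance Insights client; the region and the DbiResourceId value are placeholders for your instance's resource ID:

import boto3
from datetime import datetime, timedelta

pi = boto3.client("pi", region_name="us-east-1")

# Group database load over the last hour by client host.
resp = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Metric="db.load.avg",
    GroupBy={"Group": "db.host"},
)
for key in resp["Keys"]:
    print(key["Dimensions"], key["Total"])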
If you are using MariaDB:
select id, user, host, db, command, time, state, info, progress from information_schema.processlist;
Other options specific to AWS:
Turn on VPC Flow Logs, then use CloudWatch Logs Insights to query historical traffic to the instances (see the sketch after this list).
Inspect the security group of the RDS instance; it may list the clients that are allowed to connect to it.
Turn on the general query log: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MariaDB.html
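As a sketch of the Logs Insights approach, assuming flow logs are delivered to a CloudWatch Logs group (the log group name and the RDS private IP below are placeholders):

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Count flows per source address toward the RDS instance's private IP on 3306.
query_id = logs.start_query(
    logGroupName="/vpc/flow-logs",  # assumed log group name
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields srcAddr, dstAddr, dstPort "
        "| filter dstAddr = '10.0.1.23' and dstPort = 3306 "
        "| stats count(*) as flows by srcAddr "
        "| sort flows desc"
    ),
)["queryId"]

# Poll until the query completes, then print each client IP and its flow count.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] == "Complete":
        break
    time.sleep(1)
for row in result["results"]:
    print({f["field"]: f["value"] for f in row})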
Related
I need to set up a shared processing service that uses a load balancer and several EC2 instances to process incoming requests using a custom .NET application. My issue is that I need to be able to bill based on usage. Only white-listed IPs will be able to call the application, but each IP only gets a set number of calls before each call is a billable event.
Since the AWS documentation for the ELB states "We recommend that you use access logs to understand the nature of the requests, not as a complete accounting of all requests", I do not feel the Access Logs on the ELB is what I'm looking for.
The question I have is how to best manage this so that the accounting team has an easy report each month that says how many calls each client made.
Actually, you can use access logs: since they are written to S3, you can query each IP with Athena using standard SQL, analyze your logs, and extract reports.
References:
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
https://aws.amazon.com/premiumsupport/knowledge-center/athena-analyze-access-logs/
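A sketch of that Athena approach with boto3, assuming a table named alb_logs has already been created over the access logs (the table, database, and results bucket are placeholders based on the knowledge-center example above):

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Count calls per client IP across the access logs.
qid = athena.start_query_execution(
    QueryString=(
        "SELECT client_ip, COUNT(*) AS calls "
        "FROM alb_logs GROUP BY client_ip ORDER BY calls DESC"
    ),
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Wait for the query to finish, then print one row per client IP.
state = "RUNNING"
while state in ("QUEUED", "RUNNING"):
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    time.sleep(1)
for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][1:]:
    print([col.get("VarCharValue") for col in row["Data"]])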
I am looking to build a POC for my current gig. It is an AWS EC2 instance with an application that needs to talk to internal AWS private IPs and several outside AWS public IPs.
After benchmarking, I am able to get all the costs for an hour of running this application except for the network data transfer cost (all from the command line).
Does anyone have an idea on how to get current network costs from either the command line or web interface?
The only potential cost for data transfer between Amazon EC2 instances in the same region would be 1c/GB in & 1c/GB out if using Public IP addresses or going across AZs.
Traffic between instances via private IP addresses in the same AZ would be zero.
For reference, see https://www.duckbillgroup.com/resources/.
https://aws.amazon.com/blogs/mt/using-aws-cost-explorer-to-analyze-data-transfer-costs/
In Cost Explorer, you can use filters to analyze data transfer costs:
After your cost allocation tags have been activated, and your workloads have run for at least a day, you can use filters in Cost Explorer to analyze your costs over that period.
Sign in to AWS Cost Explorer at https://console.aws.amazon.com/cost-reports/home?#/
Choose Explore in the navigation pane, and then choose Cost and Usage.
Choose the date range for the period for which you want to see the costs, and choose the Apply button.
Choose Filters, then Service, then EC2-Instances and EC2-ELB. Next, choose Apply filters.
To see the total EC2 data transfer cost:
Choose Filters, Usage Type Group, EC2: Data Transfer – inter-Availability Zone, Internet (Out), and Region to Region (Out), then choose Apply filters.
You can also choose each individual data transfer type by checking only the box for that type.
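The same breakdown is available from the Cost Explorer API; here is a hedged sketch with boto3, where the date range is an example and the exact usage-type-group names are discovered first rather than hard-coded:

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1
period = {"Start": "2020-03-01", "End": "2020-04-01"}  # example month

# Discover the exact data transfer usage type groups in this account.
groups = ce.get_dimension_values(
    TimePeriod=period, Dimension="USAGE_TYPE_GROUP", SearchString="Data Transfer"
)
values = [v["Value"] for v in groups["DimensionValues"]]

# Sum the cost of those groups per day.
resp = ce.get_cost_and_usage(
    TimePeriod=period,
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "USAGE_TYPE_GROUP", "Values": values}},
)
for day in resp["ResultsByTime"]:
    print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"])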
I have a small number of HTTP servers on GCP VMs. I have a mixture of different server languages and Linux based OS's.
Questions
A. Is it possible to use the Stackdriver monitoring service to set alerts at specific percentiles for HTTP response latencies?
B. Can I do this without editing the code of each server process?
C. Will installing the agent into the VM report HTTP latencies?
For example, if the 95th percentile goes over 100ms for a certain time period I want to know.
I know I can do this for CPU utilisation and other hypervisor-provided stats using:
https://console.cloud.google.com/monitoring/alerting
Thanks.
Request latencies are captured by the Cloud Load Balancers. As long as you are using a Cloud Load Balancer, you don't need to install the monitoring agent to create alerts based on 95th-percentile metrics.
The monitoring agent captures latencies for some preconfigured systems such as Riak, Cassandra, and others. Here's the full list of systems and metrics the monitoring agent supports by default: https://cloud.google.com/monitoring/api/metrics_agent
But if you want anything custom, i.e. you want to measure request latencies from the VM itself, you would need to capture response times yourself and configure the logging agent to create a custom metric, which you can then use for alerts. As long as you capture them as distribution metrics, you should be able to visualise the different percentiles (25th, 50th, 75th, 80th, 90th, 95th, 99th, etc.) and create alerts based on them.
See: https://cloud.google.com/logging/docs/logs-based-metrics/distribution-metrics
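For the alerting step, here is a sketch with the google-cloud-monitoring library, assuming a logs-based distribution metric named logging.googleapis.com/user/http_latency (recorded in milliseconds) already exists and a placeholder project ID:

from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()
project = "projects/my-project-id"  # placeholder project

# Alert when the 95th percentile of the latency distribution exceeds 100 ms
# over 5-minute windows.
policy = monitoring_v3.AlertPolicy(
    display_name="p95 HTTP latency > 100 ms",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="p95 over 100 ms for 5 minutes",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter='metric.type="logging.googleapis.com/user/http_latency"',
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period={"seconds": 300},
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_95,
                    )
                ],
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=100.0,  # assumes the metric unit is ms
                duration={"seconds": 300},
            ),
        )
    ],
)
client.create_alert_policy(name=project, alert_policy=policy)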
A. Is it possible to use the Stackdriver monitoring service to set alerts at specific percentiles for HTTP response latencies?
If you simply want to consider network traffic, yes, it is possible. If you are using a load balancer, it is also possible to set alerts on that.
What you want to do should be pretty straightforward from the interface; however, you can also find more info in the documentation.
If you want to use some advanced metric on top of Tomcat/Apache2 etc., you should check the list of metrics provided by the Stackdriver monitoring agent here.
B. Can I do this without editing the code of each server process?
Yes, there is no need to update any program; Stackdriver Monitoring works transparently and is able to fetch basic metrics from GCP VMs without the monitoring agent, including network traffic and CPU utilization.
C. Will installing the agent into the VM report HTTP latencies?
No, the agent shouldn't cause any HTTP latencies.
I want help understanding the AWS Cost Explorer graph to track the huge data transfer usage.
I have noticed in the AWS account bills for Jan, Feb, and March (to the current date) a huge data transfer charge as a bill line item (image attached, AWS Bill Line Item): "regional data transfer - in/out/between EC2 AZs or using elastic IPs or ELB". I further checked the AWS Cost Explorer reports by applying a Group by filter on Region: they show data transfer for each region, but also for "No Region", and I am not able to understand that bar graph (please see the attached image and the yellow graph, AWS Cost Explorer Reports Region Wise) with the label "No Region".
A good starting point would be to enable VPC Flow Logs. VPC Flow Logs will show you the source and destination of all the traffic within your VPC. After you've analysed the logs, you should have a good indication of where to begin investigating.
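A sketch of enabling them with boto3; the VPC ID, log group, and IAM role ARN are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Publish flow logs for the whole VPC to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    TrafficType="ALL",
    LogGroupName="/vpc/flow-logs",  # assumed log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)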
Out of context but adding it here as it might help you: for some services such as S3, you can enable object-level logging to help you understand what is accessing your objects, which could help you further understand why you're paying for data transfers.
You can avoid paying for data transfer charges between AWS services by using VPC Endpoints. VPC endpoints allow you to connect directly to the service rather than over the internet, which will avoid incurring extra data charges. More on VPC Endpoints here.
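As a sketch, creating a gateway endpoint for S3 with boto3 (the VPC and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# S3 traffic from this VPC then goes through the endpoint instead of the internet.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
print(resp["VpcEndpoint"]["VpcEndpointId"])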
On Stackdriver, creating an Uptime Check gives you access to the Uptime Dashboard that contains the uptime % of your service.
My problem is that uptime checks are restricted to HTTP/TCP checks. I have other services running, and those services report their health in different ways (for example, by a specific process running). I have incident policies already set up for these services, so if a service is not running I get notified.
Now I want to be able to look back and know how long the service was down for the last hour. Is there a way to do that?
There's no way to programmatically retrieve alerts at the moment, unfortunately. Many resource types expose uptime as a metric, though (e.g., instance/uptime on GCE instances) - could you pull those and do the math on them? Without knowing what resource types you're using, it's hard to give specific suggestions.
Aaron Sher, Stackdriver engineer
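As a sketch of the suggested math on instance/uptime, using the google-cloud-monitoring library with a placeholder project ID: pull the metric for the last hour and sum the deltas per instance.

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project = "projects/my-project-id"  # placeholder project

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# instance/uptime is a DELTA metric: each point is seconds of uptime in that window.
results = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type="compute.googleapis.com/instance/uptime"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    total = sum(point.value.double_value for point in series.points)
    print(series.resource.labels["instance_id"], total, "seconds of uptime")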