I have a DynamoDB application that seems to be running well and using normal throughput most of the time. However, once in a while it spikes pretty high (the latest spike kicked up over 300; normal is around 10-20 max). I've looked through the code and I'm having trouble figuring out what is causing these spikes. Is there any kind of history of the calls made to DynamoDB that could tell me exactly which calls caused the spiking?
You can enable CloudTrail logging for DynamoDB. It will deliver the log files to an S3 bucket. Taken directly from the AWS docs:
DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User Guide.
Please follow the AWS documentation on DynamoDB CloudTrail logging to enable it.
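A quick way to inspect recent activity, even before configuring a trail, is CloudTrail's 90-day event history. A minimal boto3 (Python) sketch that filters the history down to DynamoDB events:

import boto3

# CloudTrail keeps 90 days of management events queryable via LookupEvents,
# even without a configured trail.
cloudtrail = boto3.client("cloudtrail")

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "dynamodb.amazonaws.com"}
    ]
)
for page in pages:
    for event in page["Events"]:
        # EventTime, EventName and Username help pin down who did what, and when.
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))

One caveat worth knowing: management events cover control-plane calls (CreateTable, UpdateTable, and so on). To see the item-level reads and writes that actually consume throughput, you need to enable DynamoDB data event logging on the trail.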
I am developing a software application that calls the AWS API using the Go SDK (https://aws.amazon.com/sdk-for-go/). My software builds a real-time topology of the cloud environment, and so it depends on calling this API for each service in the cloud account as often as possible.
I am looking for information about how many of the following example API calls (shown here as their AWS CLI equivalents) I can issue per second:
$ aws iam list-users
$ aws iam list-roles
$ aws ec2 describe-instances
...
These API calls are rate-limited by what AWS calls request token bucket sizes and refill rates (see https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html#throttling-limits-cost-based as an example of these rates for EC2). The cloud customer can, of course, increase some of these rates and bucket sizes.
My question is: is there an API for those rates across all services, so that my application can discover the current bucket sizes and refill rates and avoid getting throttled or running afoul of my customers' contracts? In other words, I want the EC2 request token bucket sizes delivered as JSON to my software application (and the same for all the other AWS services). Is this possible?
To the best of my knowledge, and as of this writing, the token bucket algorithm implementation on AWS is very difficult to get a handle on.
Many services do not publish these numbers (max bucket size and refill rate) in their documentation, unlike GCP's API throttling, which is simple and easy to write quick rules around (https://cloud.google.com/compute/docs/api-rate-limits), or Azure's, which is also well documented (https://learn.microsoft.com/en-us/graph/throttling and https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling).
Also, there is definitely no API on AWS for retrieving the throttling bucket sizes and refill rates.
As a result, your AWS application's API throttling response has to be very application-specific and tuned for your specific needs.
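In practice, the closest you can get is letting the SDK throttle itself. The AWS SDKs ship an "adaptive" retry mode that maintains a client-side token bucket and slows down automatically once throttling errors start appearing. A minimal sketch in Python/boto3 (a similar adaptive mode exists in the Go SDK v2's retry package):

import boto3
from botocore.config import Config

# "adaptive" adds a client-side rate limiter on top of the usual exponential
# backoff, so the client paces itself before hard throttling kicks in.
cfg = Config(retries={"max_attempts": 10, "mode": "adaptive"})

ec2 = boto3.client("ec2", config=cfg)
iam = boto3.client("iam", config=cfg)

# Equivalent to the CLI calls above; the SDK now retries and paces these
# instead of failing fast with Throttling / RequestLimitExceeded errors.
instances = ec2.describe_instances()
users = iam.list_users()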
There are close to 100,000 devices generating logs (10-20 TB per day in total) that I would like to upload directly to Kinesis. How do I control access? IAM only lets me create a maximum of 1000 users per account (I know we can request a user limit increase), but I would like to know if there is a better way to do this.
One requirement is that I would like to be able to grant/revoke access to Kinesis per device.
Since you have IoT Core already, I think that I would first try to leverage it for logging. This will let you take advantage of the certificate-based authorization that's built into IoT Core, and I know that you can hook an IoT topic into a Kinesis stream.
If you feel that this would be too much volume (and perhaps too expensive based on the number of messages and rules), then I'd provide my devices with temporary security credentials that let them write to Kinesis and nothing else.
You would generate these credentials on a per-device basis (as far as I can tell, there is no quota on the number of temporary credentials per account), using a scheduled job, either in Lambda or on ECS. This job would iterate through your devices and generate a set of credentials for each. It would then either publish these credentials to the device via IoT Core, or update the device shadow.
The device could then use these credentials to create a Kinesis client and publish log messages. The device would have to create a new client whenever it receives new credentials.
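A minimal sketch of that scheduled job in Python/boto3 (the role ARN, account ID, stream name, and region below are placeholders): assume a Kinesis-only role and attach a per-device session policy, so each device can write to the one stream and nothing else. Revoking a device then just means skipping it on the next rotation and letting its last credentials expire.

import json
import boto3

sts = boto3.client("sts")

def credentials_for_device(device_id: str) -> dict:
    # The inline session policy can only narrow the role's permissions,
    # leaving each device with PutRecord(s) on a single stream.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["kinesis:PutRecord", "kinesis:PutRecords"],
            "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/device-logs",
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/device-log-writer",  # placeholder
        RoleSessionName=device_id,
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,  # keep short so revocation takes effect quickly
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# On the device side, the published credentials plug straight into a client:
creds = credentials_for_device("device-0001")
kinesis = boto3.client(
    "kinesis",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
kinesis.put_record(StreamName="device-logs", Data=b"log line", PartitionKey="device-0001")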
As an alternative, if your devices maintain logfiles internally, you could use a similar approach to trigger uploading those files to S3. In that case, rather than publishing temporary credentials, the scheduled task would publish a pre-signed URL for each device. It would publish the URL to the device, and the device would use it to upload its accumulated logs. You would then need something downstream to process the files on S3.
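For the S3 variant, the scheduled task's job shrinks to generating one pre-signed PUT URL per device; a boto3 sketch (the bucket name and key layout are made up):

import boto3

s3 = boto3.client("s3")

def upload_url_for_device(device_id: str) -> str:
    # The device can HTTP PUT its logfile to this URL with no AWS
    # credentials at all; the URL itself carries the time-limited grant.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "device-log-archive", "Key": f"logs/{device_id}.log"},
        ExpiresIn=86400,  # one day; the next job run publishes a fresh URL
    )

On the device, uploading is then a plain HTTP PUT of the file body to the URL, with no AWS SDK required.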
I am preparing for an AWS exam and I found some documentation about AWS CloudTrail and AWS X-Ray that creates confusion about when each should be used.
I came across a question where the requirement was to trace and analyse a user request as it travels through Amazon API Gateway APIs to the underlying services.
As per my understanding, we can use CloudTrail to trace and analyse the user request, but the correct answer was AWS X-Ray.
The documents I have referred to mention that we can use AWS CloudTrail logs for tracing, security analysis, resource change tracking, and compliance/auditing. On the other hand, we can use AWS X-Ray to analyse and debug applications running on a distributed microservice architecture.
The descriptions of both X-Ray and CloudTrail use the terms analyse and trace, so it is quite confusing which service to choose for the requirement of tracing and analysing a user request.
X-Ray is more detailed in the information it provides about the request's flow and state. It follows the request through its whole lifetime, from when it is received at the API Gateway to whatever services are called and executed after that. So I imagine that is why it is the preferred option.
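For what it's worth, enabling X-Ray on a REST API stage is a single flag; a boto3 sketch (the API ID and stage name are placeholders):

import boto3

apigw = boto3.client("apigateway")

# Flip the stage's tracingEnabled flag so API Gateway starts emitting
# X-Ray trace segments for every request that passes through it.
apigw.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder
    stageName="prod",
    patchOperations=[{"op": "replace", "path": "/tracingEnabled", "value": "true"}],
)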
I need a way to log API Gateway deployments (date/time, user, swagger diff, etc.). Is there an event that's fired that I can attach a Lambda to, or alternatively, is this information already available on the dashboard somewhere?
As Krishna mentioned, CloudTrail can capture API events (both from the AWS console as well as the AWS APIs) for API Gateway, including the deployment of APIs. Since CloudTrail stores the events in S3, you can take advantage of S3 bucket notifications as a means to trigger your Lambda function.
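A minimal sketch of such a Lambda handler, assuming it's wired to the CloudTrail bucket's ObjectCreated notifications: it gunzips each delivered log file and picks out the API Gateway deployment events (who deployed, when, and with which request parameters; a swagger diff you would still have to compute yourself, for example by exporting the API definition before and after):

import gzip
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Each S3 notification record points at one gzipped CloudTrail log file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))
        for entry in trail["Records"]:
            if (entry["eventSource"] == "apigateway.amazonaws.com"
                    and entry["eventName"] == "CreateDeployment"):
                print(entry["eventTime"],
                      entry["userIdentity"].get("arn", "unknown"),
                      entry.get("requestParameters"))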
I have an AWS account that is used by multiple devs and teams [dev/qa/mobile].
I would like to be notified when any change takes place in my AWS account.
For example, a dev launches a new instance, or a new open port is added to a security group, etc., and forgets to announce it to me or the rest of the team.
I want to be fully informed of these changes so that I can apply specific architecture and/or security policies, since people tend to mess with them.
Is there any dashboard or service inside AWS that I can customise for this?
Someone suggested that I should take a look in CloudTrail.
Has anyone done something like this?
The easiest way to go is to use CloudTrail with CloudWatch Logs. From the AWS FAQ:
Q: What are the benefits of CloudTrail integration with CloudWatch Logs?
This integration enables you to receive SNS notifications of API activity captured by CloudTrail. For example, you can create CloudWatch alarms to monitor API calls that create, modify, and delete Security Groups and Network ACLs. For examples, go to the examples section of the user guide.
Based on the SNS notification, you can then send email through SES.
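As a concrete sketch of that security-group example, assuming your trail is already delivering to a CloudWatch Logs group (the log group name and SNS topic ARN below are placeholders):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count security-group changes appearing in the CloudTrail log group...
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder
    filterName="SecurityGroupChanges",
    filterPattern=(
        "{ ($.eventName = AuthorizeSecurityGroupIngress) "
        "|| ($.eventName = AuthorizeSecurityGroupEgress) "
        "|| ($.eventName = RevokeSecurityGroupIngress) "
        "|| ($.eventName = RevokeSecurityGroupEgress) }"
    ),
    metricTransformations=[{
        "metricName": "SecurityGroupEventCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# ...and alarm to an SNS topic whenever any such call shows up.
cloudwatch.put_metric_alarm(
    AlarmName="SecurityGroupChangesAlarm",
    MetricName="SecurityGroupEventCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # placeholder
)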
I think the easiest way is to use the Amazon CloudTrail service.
CloudTrail logs every API call made against your AWS account, and every operation on AWS is an API call (including the instance operations you mentioned).
Here you can find more information about it
http://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-cloudtrail-to-send-notifications.html
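The linked page essentially comes down to attaching an SNS topic to the trail; a boto3 sketch (the trail, bucket, and topic names are placeholders, and the bucket needs the usual CloudTrail bucket policy):

import boto3

cloudtrail = boto3.client("cloudtrail")

# The trail writes log files to S3 and publishes a notification to the
# SNS topic each time a new log file is delivered.
cloudtrail.create_trail(
    Name="account-activity",            # placeholder
    S3BucketName="my-cloudtrail-logs",  # placeholder
    SnsTopicName="cloudtrail-notify",   # placeholder
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="account-activity")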
I hope this helps somehow.
You can find the logs for your AWS account in S3, at the following path:
s3://security-logging/AWS_/AWSLogs/AWS Account no./CloudTrail/your region/year
You can also have CloudTrail send notifications through SNS (and, if you need queue-based processing, subscribe an SQS queue to that topic).