I am currently unsure whether I can send outbound emails from my Lambda functions through SES for free, up to 62k messages per month. The documentation says that EC2 and Beanstalk instances qualify, but I read an old Reddit thread from a few years ago saying that Lambdas execute on an EC2 instance and therefore do qualify. What's the truth?
Official Documentation:
As part of AWS Free Tier, AWS SES offers 62,000 Outbound Messages per month to any recipient when you call Amazon SES from an Amazon EC2 instance directly or through AWS Elastic Beanstalk, and 1,000 Inbound Messages per month.
Reddit Post:
How AWS runs it is an implementation detail and is unrelated to billing. Most services in AWS run on EC2 internally.
The billing details are a bit unclear on what counts as EC2, but it means any application running inside AWS. For example: native EC2, ECS, EKS, or, as you're asking, Lambda. Basically, it costs more if your application is not running in AWS.
Can anyone confirm if this is true? I cannot find any sources to totally confirm or deny.
I browsed through the official documentation here and found the explanation somewhat unclear, primarily because of the point raised in the first comment above.
Related
The other day, I received the following alert in GuardDuty.
Behavior:EC2/NetworkPortUnusual
port:80
Target:3.5.154.156
The EC2 instance flagged by the alert was not being used for anything in particular (though it was running).
Until now, there had been no communication over port 80.
Also, the IP address of the Target appears to belong to AWS S3.
The only recent change is that I deleted the EC2 instance profile.
Therefore, there is currently no instance profile attached to the instance.
Does anyone know why this EC2 instance suddenly tried to use port 80 to communicate with S3?
I looked at CloudTrail, etc., and found nothing suspicious.
(If there are any other items I should check, please let me know.)
Thank you.
We have experienced similar alerts, and after some tedious debugging we found that the SSM Agent is responsible for this kind of GuardDuty finding.
SSM Agent communications with AWS managed S3 buckets
"In the course of performing various Systems Manager operations, AWS Systems Manager Agent (SSM Agent) accesses a number of Amazon Simple Storage Service (Amazon S3) buckets. These S3 buckets are publicly accessible, and by default, SSM Agent connects to them using HTTP calls."
I suggest reviewing the CloudTrail logs and looking for the "UpdateInstanceInformation" event (this is how we eventually found it).
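A rough sketch of that kind of CloudTrail search with boto3 (the one-day time window is just an example, and default credentials with CloudTrail read access are assumed):

```python
import boto3
from datetime import datetime, timedelta, timezone

# Look for SSM Agent check-ins around the time of the GuardDuty finding.
cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "UpdateInstanceInformation"}
    ],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        # The username typically shows the instance ID when SSM Agent made the call.
        print(event["EventTime"], event.get("Username"), event["EventName"])
```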
My AWS Free Tier expires in 8 days. I removed every EC2 resource and Elastic IP associated with it, because those are what I recall initializing and experimenting with. I also deleted all the roles I created, because, as I understand it, roles permit AWS to perform actions on behalf of AWS services. And yet, when I go to the billing page, it shows these three services as currently in use.
[1]: https://i.stack.imgur.com/RvKZc.png
I used the script recommended by the AWS documentation to check for all instances, and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket that I don't remember creating, but I deleted it anyway, and I still get the same output.
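In case it matters, the kind of cross-region check I mean looks roughly like this (a boto3 sketch, not the runbook script itself; it assumes default credentials with EC2 read permissions):

```python
import boto3

# Check every region for leftover instances and Elastic IPs.
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances()["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    addresses = ec2.describe_addresses()["Addresses"]
    if instances or addresses:
        print(region, len(instances), "instances,", len(addresses), "elastic IPs")
```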
Any help is much appreciated.
OK, I was able to get in touch with AWS support via live chat, and they informed me that the services shown in my billing were usage generated before those services were terminated. AWS support was much faster than I expected.
I am stuck on one point. I have created one Linux-based EC2 instance in AWS.
Now I want to send the EC2 metrics data to a managed Elasticsearch domain for monitoring in Kibana. I went through the CloudWatch console and can see the instance's metrics, but I can't figure out how to connect them to the Elasticsearch domain I created.
Can anyone please help me with this situation?
There is no built-in mechanism for extracting/streaming metric data points in real time. You have to develop a custom solution for that, for example a Lambda function which is invoked every minute and reads data points using get_metric_data. The Lambda would then ingest the data points into your ES domain.
To invoke a Lambda function periodically, e.g. every minute, you would set up a CloudWatch Events rule with a schedule expression. The Lambda function would also need permissions to interact with CloudWatch metrics.
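A rough sketch of such a Lambda (the instance ID, ES endpoint, and index are placeholders; it assumes the requests library is packaged with the function and that the domain's access policy allows these calls, otherwise the requests would need SigV4 signing):

```python
import json
from datetime import datetime, timedelta, timezone

import boto3
import requests  # would need to be bundled into the Lambda package

ES_ENDPOINT = "https://my-es-domain.example/ec2-metrics/_doc"  # placeholder

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=1)

    # Read one minute of CPUUtilization for a single instance (placeholder ID).
    resp = cloudwatch.get_metric_data(
        MetricDataQueries=[
            {
                "Id": "cpu",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "AWS/EC2",
                        "MetricName": "CPUUtilization",
                        "Dimensions": [
                            {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}
                        ],
                    },
                    "Period": 60,
                    "Stat": "Average",
                },
            }
        ],
        StartTime=start,
        EndTime=end,
    )

    # Push each data point into the Elasticsearch domain.
    for result in resp["MetricDataResults"]:
        for ts, value in zip(result["Timestamps"], result["Values"]):
            doc = {"metric": result["Id"], "timestamp": ts.isoformat(), "value": value}
            requests.post(ES_ENDPOINT, json=doc, timeout=5)
```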
Welcome to SO :)
An alternative to the solution suggested by Marcin is to install Metricbeat on the EC2 instance and configure its config file to send metrics to your managed AWS ES domain.
This is pretty simple and you should be able to do this fairly quickly.
I am building a kind of monitoring agent application that runs on AWS EC2 machines.
I need to be able to send commands to the agent running on a specific EC2 instance and only an agent running on that instance should pick it up and act on it. New EC2 instances can come and go at any point in time.
I could use Kinesis and push all commands for all instances there, and the agents could pick up the ones targeted at them. The problem with this is that agents will receive a lot of commands that are not for them and have to filter them out.
I could also use one SQS queue per instance, but that would require creating/deleting a queue every time a new instance is provisioned.
Would like to hear if there are already proven solutions for a similar scenario.
There is already a fully functional feature provided by AWS for this. I would use it rather than reinvent the wheel: it is a robust, well-integrated, proven solution that thousands of AWS customers rely on to gain operational insight into their instance fleets:
AWS Systems Manager Agent (SSM Agent) is a piece of software that can be installed and configured on an EC2 instance (and it’s pre-installed on many of the default AMIs, including both versions of Amazon Linux, Ubuntu, and various versions of Windows Server). SSM Agent makes it possible to update, manage, and configure these resources. The agent processes requests from the Systems Manager service in the AWS Cloud, and then runs them as specified in the request. SSM Agent then sends status and execution information back to the Systems Manager service by using the Amazon Message Delivery Service.
You can learn more about AWS Systems Manager and the breadth and depth of functionality it provides here.
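For example, pushing a shell command to exactly one instance through Run Command might look roughly like this (a boto3 sketch; the instance ID and command are placeholders, and it assumes SSM Agent is running on the target with the required instance profile):

```python
import boto3

ssm = boto3.client("ssm")

TARGET = "i-0123456789abcdef0"  # placeholder instance ID

# Send a shell command to one instance; only that instance's agent runs it.
response = ssm.send_command(
    InstanceIds=[TARGET],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["echo 'hello from the monitoring backend'"]},
)

command_id = response["Command"]["CommandId"]

# Later, poll for the result on that instance.
result = ssm.get_command_invocation(CommandId=command_id, InstanceId=TARGET)
print(result["Status"], result.get("StandardOutputContent", ""))
```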
Have you considered using Simple Notification Service? Each new EC2 instance could subscribe to a topic over e.g. HTTP, and subscriptions for terminated instances could be removed.
That way the topic would stay constant regardless of EC2 rotation.
It might be worth noting that SNS supports subscription filter policies, so it can decide which messages are delivered to which endpoint.
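A rough sketch of that idea with boto3 (the topic ARN, the agent's HTTP endpoint, and the "instance_id" message attribute are placeholders I'm assuming here, not anything predefined by SNS):

```python
import json

import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:agent-commands"  # placeholder

# Agent side: subscribe its own HTTP endpoint and filter on its instance ID.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="http",
    Endpoint="http://203.0.113.10:8080/commands",  # placeholder agent endpoint
    Attributes={"FilterPolicy": json.dumps({"instance_id": ["i-0123456789abcdef0"]})},
)

# Backend side: target one instance by setting a matching message attribute.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"command": "restart-agent"}),
    MessageAttributes={
        "instance_id": {"DataType": "String", "StringValue": "i-0123456789abcdef0"}
    },
)
```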
In my view, AWS SWF could also be an option here, since Amazon SWF coordinates work across distributed application components and provides SDKs for various platforms. Refer to the official FAQs for a more in-depth understanding: https://aws.amazon.com/swf/faqs/
It is not entirely clear what the volume of the monitoring system's messages will be.
But the architecture requirements described sound to me as follows:
The agents on the EC2 instances are (constantly?) polling some centralized service, which is a poll-based architecture.
The messages being sent are addressed to a specific, predetermined EC2 instance, which is a push-based architecture.
To support both options without significant filtering of the messages, I suggest trying an intermediate pub/sub system such as Kafka, which can be run as a managed service on AWS via MSK.
Then, to differentiate between the instances, create a Kafka topic named after each EC2 instance's ID.
This gives each instance a unique topic from which it can easily read the messages meant for it, since the topic is denoted by its own instance ID.
You can also push producer messages to a specific EC2 instance by sending them to the topic in the cluster named after its EC2 instance ID.
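A minimal sketch of that pattern with the kafka-python library (the broker address is a placeholder, an IMDSv1 metadata lookup is assumed for brevity, and in practice the backend producer and the agent consumer would be separate processes):

```python
import json

import requests
from kafka import KafkaConsumer, KafkaProducer  # kafka-python

BROKERS = ["my-msk-broker:9092"]  # placeholder MSK bootstrap broker

# Backend side: push a command to one specific instance by topic name.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("i-0123456789abcdef0", {"command": "collect-metrics"})
producer.flush()

# Agent side: consume commands from the topic named after this instance.
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).text

consumer = KafkaConsumer(
    instance_id,  # topic name == this instance's ID
    bootstrap_servers=BROKERS,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print("received command:", message.value)
```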
Since there are many EC2 instances coming and going, you will end up with many topics. To handle the volume of topics, you can send each EC2 termination event to CloudWatch and check there which EC2 instances were terminated and, consequently, which topics need deleting.
Alternatively, you can trigger a Lambda directly on the EC2 termination event and log it by writing a file named after the instance ID to an S3 bucket, which you can watch with an additional Lambda that deletes the topics of old EC2 instances from the Kafka cluster when their instance IDs appear there.
There are a bunch of different AWS services that can start EC2 instances: Elastic Beanstalk, ECS services/tasks, EC2 Auto Scaling groups, OpsWorks scripts, CloudFormation templates, and probably others I haven't discovered yet. Today I am cleaning up after a bunch of experiments and demos. When I try to stop certain EC2 instances, some of them get restarted by something. Is there some way to determine why an EC2 instance was started, without digging around in each AWS product looking for a reference to a particular machine?
If you enable CloudTrail, you'll be able to see who issued which AWS API calls, so you should be able to see which services are launching these instances by checking the CloudTrail logs and searching for the relevant instance IDs.
See more about CloudTrail in the docs
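If CloudTrail is available, a rough sketch of that search with boto3 (the instance ID is a placeholder):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Find the RunInstances event for a given instance and see who invoked it.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "i-0123456789abcdef0"}
    ]
):
    for event in page["Events"]:
        if event["EventName"] == "RunInstances":
            # The username (or the invoking service in the raw event record)
            # reveals which service or role launched the instance.
            print(event["EventTime"], event.get("Username"))
```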
But without CloudTrail there is no default way to get this information. It's possible that the free customer support team could help if you provide them with the instance IDs.