Can I use AWS LightSail with AWS CloudWatch? - amazon-web-services

I've recently started testing out Lightsail and would like to keep my logging centralized in CloudWatch, but I cannot seem to find anything that would enable this. Interestingly, Lightsail instances do not appear in the EC2 Dashboard. I thought they were just EC2 instances beneath the surface.

I thought they were just EC2 instances beneath the surface.
Yes... but.
Conceptually speaking, you are the customer of Lightsail, and Lightsail is the customer of EC2.
It's as though there were an intermediary between you and AWS. The Lightsail resources are in EC2, but they're not in your EC2. They appear to be owned by an AWS account other than your AWS account, so you can't see them directly.
Parallels for this:
RDS is a "customer" of EC2/EBS. RDS instances are EC2 machines with EBS volumes. Where are they in the console? They aren't there. The underlying resources aren't owned by your account.
In EC2, EBS snapshots are stored in S3. Which bucket? Not one that you can see. EBS is a "customer" of S3. It has its own buckets.
S3 objects can be migrated to the Glacier storage class. Which Glacier vault? Again, not one that you can see. S3 is a "customer" of Glacier. It has its own vaults.
Every API Gateway endpoint is automatically front-ended by CloudFront. Which distribution? You get the idea... API Gateway is a "customer" of CloudFront.
I am not implying in any way that Lightsail is actually a separate entity from AWS in any meaningful sense... I don't know how it's actually organized... but operationally, that is how it works. You can't see these resources.

It's possible to get it working. The problem is that Lightsail instances are EC2 instances under the hood, but without access to all of the EC2 configuration. The CloudWatch agent documentation explains how to set up IAM roles for EC2 instances to assume, but Lightsail boxes use a single fixed role that can't be changed or edited. As a result, you need to follow the instructions for setting the agent up as an on-premises server.
The problem you will then hit is the one David J Eddy ran into in his answer:
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
This is due to a bug in the CloudWatch agent that ignores the flag to use on-premises mode (-m onPremise) if it detects that it is running on an EC2 instance. The trick is to edit the common-config.toml file to force the agent to use a local AWS CLI profile for authentication. You will need to add the following lines to that file (which can be found at /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml on Debian; the installation location is OS-dependent):
[credentials]
shared_credential_profile = "AmazonCloudWatchAgent"
Restart the agent and it should start reporting metrics. I've put together a full tutorial here.
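For completeness, the AmazonCloudWatchAgent profile referenced above has to exist in a credentials file the agent can read. A minimal sketch, assuming the agent runs as root, that you have created an IAM user allowed to call cloudwatch:PutMetricData, and that your agent config lives at the config.json path shown (the IAM user's keys and that path are placeholders to adapt):

# Create the named profile the agent will authenticate with
sudo aws configure --profile AmazonCloudWatchAgent

# Restart the agent in on-premise mode so it picks up the new credentials
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m onPremise -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json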

Running the CloudWatch Agent on Lightsail does NOT work at this time. When the agent attempts to communicate with CloudWatch, it receives a 403 from the STS service. Selecting the EC2 or OnPremise option during the configuration wizard yields the same result.
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
Just to make sure, I installed the CloudWatch Agent on my Ubuntu 18.04 desktop and started the agent without error.
Plus, if it did work, why would people pay for EC2 at a higher price point? CloudWatch is a free value-added service for using the full EC2 service.

Related

Alert: Behavior:EC2/NetworkPortUnusual use port:80 to AWS S3 Webpage

The other day, I received the following alert in GuardDuty.
Behavior:EC2/NetworkPortUnusual
port:80
Target:3.5.154.156
The EC2 instance that was the target of the alert was not being used for anything in particular (although it was running).
There had been no communication over port 80 until now.
Also, the target IP address appears to belong to AWS S3.
The only recent change is that I recently deleted the EC2 InstanceProfile, so there is currently no InstanceProfile attached to anything.
Do you know why this EC2 instance suddenly tried to communicate with S3 over port 80?
I looked at CloudTrail, etc., and found nothing suspicious.
(If there are any other items I should check, please let me know.)
Thank you.
We have experienced similar alerts, and after tedious debugging we found that the SSM Agent is responsible for this kind of GuardDuty finding.
SSM Agent communications with AWS managed S3 buckets
"In the course of performing various Systems Manager operations, AWS Systems Manager Agent (SSM Agent) accesses a number of Amazon Simple Storage Service (Amazon S3) buckets. These S3 buckets are publicly accessible, and by default, SSM Agent connects to them using HTTP calls."
I suggest reviewing the CloudTrail logs and looking for the "UpdateInstanceInformation" event (this is how we eventually found it).
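A minimal sketch of that search with the AWS CLI (the region is a placeholder, and lookup-events only covers the last 90 days of management events):

aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateInstanceInformation \
  --max-results 10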

Is the AWS CLI missing data for the "ec2 describe-instances" method?

As of the date of this question I'm using the most recent version of the AWS CLI (2.4.6) on macOS. According to the v2 docs, the Instances that are returned should include properties like InstanceLifecycle, Licenses, MetadataOptions -> PlatformDetails, and several others that are missing for me. While I'm getting back most of the data, some fields are absent. I've tried this in two separate AWS accounts, using admin IAM credentials locally. Why does the aws ec2 describe-instances call not return all of the fields listed in the docs?
Not all outputs are available for every EC2 instance; it depends on how your EC2 instances were provisioned.
For example:
InstanceLifecycle: only present if you provisioned the EC2 instance as a Spot Instance or a Scheduled Instance.
Licenses: only present if you brought your own license (BYOL) when provisioning the EC2 instance.
In general, the docs describe every possible output from querying the EC2 API endpoint, but what you actually get back depends on the parameters of your provisioned EC2 instance.
For example, try provisioning a Spot Instance and querying its instance lifecycle.
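A quick way to check, sketched with the AWS CLI (the region is a placeholder; the Lifecycle column will simply be empty for on-demand instances because the field is omitted from the response):

aws ec2 describe-instances \
  --region us-east-1 \
  --query 'Reservations[].Instances[].{Id:InstanceId,Lifecycle:InstanceLifecycle}' \
  --output table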

Upload files to Amazon EC2 in a private network from GitHub Actions

As part of our workflow, we want to upload files to our Amazon EC2 instance automatically.
The instance currently only allows whitelisted IP ranges to connect over SSH, and since we are running GitHub Actions, it seems odd to whitelist roughly 1,500 IP ranges.
Does anyone have an intelligent solution for this?
SCP and/or rsync don't matter for us.
It's merely getting access that I need help with.
I have access to the ssh key, and I can get a hold of an admin to get temporary access to the AWS Console should I need it.
Since the EC2 instance is in a private network, the hurdles to getting GitHub Actions SSH access to it are many.
I would work with a decoupled architecture: have the GitHub Action upload the files to S3 (see the sketch below). Then either:
have Lambda load the file onto the EC2 instance (S3 trigger for Lambda),
OR
have a process running on the EC2 instance poll for new events on the S3 bucket via SNS (S3 polling).
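A minimal sketch of the upload half, assuming a bucket named deploy-artifacts (a placeholder) and AWS credentials already configured in the workflow environment:

# Run from a GitHub Actions step after the build completes
aws s3 cp ./build s3://deploy-artifacts/releases/ --recursive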

AWS IAM policy to update specific ECS cluster through AWS console

We're running a staging environment in a separate ECS Fargate cluster. I'm trying to allow external developers to update tasks and services in this cluster through the AWS Console.
I've created a policy that looks OK to me based on the documentation, and updates through the AWS CLI work.
However, the AWS Console requires a lot of other, only loosely related permissions. Is there a way to find out which permissions are required? I'm looking at CloudTrail logs, but it takes 20 minutes until something shows up. Also, I'd like to avoid granting unrelated permissions, even if they are read-only.

Getting Cloudwatch EC2 server health monitoring into ElasticSearch

I have an AWS account with several EC2 servers and an ElasticSearch domain set up to take the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 control panel, I see metrics and graphs for things like CPU, memory load, storage use, etc. Is there some way I can pipe this information into my ElasticSearch as well?
Set up Logstash and use this plugin: https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way: use the AWS Logs agent to put your syslogs into CloudWatch and stop using ElasticSearch.
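A minimal sketch of the Logstash route, assuming Logstash is already installed; the pipeline option names shown (namespace, metrics, region, period) are taken from the plugin's README and should be verified against the plugin version you install:

# Install the CloudWatch input plugin
bin/logstash-plugin install logstash-input-cloudwatch

# Example pipeline (e.g. cloudwatch.conf) pulling EC2 CPU metrics every 5 minutes:
# input {
#   cloudwatch {
#     namespace => "AWS/EC2"
#     metrics   => ["CPUUtilization"]
#     region    => "us-east-1"
#     period    => 300
#   }
# }
# output { elasticsearch { hosts => ["localhost:9200"] } }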