AWS: Not able to delete Elasticsearch Service

In the AWS console, on the Elasticsearch dashboard, I chose Actions -> Delete domain to delete the Elasticsearch service.
But the domain name still shows on the Elasticsearch dashboard, even though the "Domain status" shows "Being deleted". There are three network interfaces attached to the Elasticsearch service, and because of this I am not able to detach and delete them. Please help.

I had a similar situation with the AWS console being stuck at "Being deleted". When I tried with the CLI, the delete completed in less than a minute. That leads me to believe the cluster was already deleted but the UI was stuck. The command I ran was:
aws es delete-elasticsearch-domain --domain-name my-domain
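If the console stays stuck, you can also verify from the CLI whether the domain still exists (same placeholder domain name as above):
aws es describe-elasticsearch-domain --domain-name my-domain
# Once the deletion has actually finished, this call fails with ResourceNotFoundException,
# and the leftover network interfaces can then be detached and deleted.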

Related

Alert: Behavior:EC2/NetworkPortUnusual use port:80 to AWS S3 Webpage

The other day, I received the following alert in GuardDuty.
Behavior:EC2/NetworkPortUnusual
port:80
Target:3.5.154.156
The EC2 instance that was the target of the alert was not being used for anything in particular. (However, it had been started up.)
There had been no communication on port 80 until now.
Also, the IP address of the Target appears to belong to AWS S3.
The only recent change is that I recently deleted the EC2 instance profile.
Therefore, there is currently no instance profile attached to anything.
Do you know why this EC2 instance suddenly tried to use port 80 to communicate with S3?
I looked at CloudTrail, etc., and found nothing suspicious.
(If there are any other items I should check, please let me know.)
Thank you.
We have experienced similar alerts, and after tedious debugging we found that the SSM Agent is responsible for this kind of GuardDuty finding.
SSM Agent communications with AWS managed S3 buckets
"In the course of performing various Systems Manager operations, AWS Systems Manager Agent (SSM Agent) accesses a number of Amazon Simple Storage Service (Amazon S3) buckets. These S3 buckets are publicly accessible, and by default, SSM Agent connects to them using HTTP calls."
I suggest reviewing the CloudTrail logs and looking for the "UpdateInstanceInformation" event (this is how we eventually found it).
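For reference, a rough way to pull those events from the CLI (the region and result count are just examples):
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateInstanceInformation \
    --max-results 20 --region us-east-1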

I can't find and disable AWS resources

My free AWS tier is going to expire in 8 days. I removed every EC2 resource and Elastic IP associated with it, because that is what I recall initializing and experimenting with. I deleted all the roles I created because, as I understand it, roles permit AWS to perform actions for AWS services. And yet, when I go to the billing page, it shows that I have these three services in current usage.
(Screenshot of the billing page: https://i.stack.imgur.com/RvKZc.png)
I used the script recommended by the AWS documentation to check for all instances, and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket that I don't remember creating, but I deleted it anyway, and I still get the same output.
Any help is much appreciated.
OK, I was able to get in touch with AWS Support via live chat, and they informed me that the services shown in my billing were usage generated before the services were terminated. AWS Support was much faster than I expected.

ECS service aws-cli vs. dashboard

I am currently experiencing some weird behaviour with the AWS ECS tooling.
I see two different behaviours when using the aws-cli and the web dashboard.
The context is that I have an ECS cluster set up, and I am writing a script that automates my deployment by (among other steps) creating or updating an ECS service.
Part of my script uses the command aws ecs describe-services
And it is here that I find different information than the dashboard (on the page of my cluster).
Indeed, when the service is created and ACTIVE, if I run:
aws ecs describe-services --services my_service --cluster my_cluster
The service shows up in the output with all the information I need to parse. It also shows up as ACTIVE on the web dashboard.
The problem is when I delete the service from the dashboard. As expected, it is deleted from the list and I can eventually recreate one from the dashboard with the same name.
But if, once the service is deleted, I re-run the command above, the output shows the service as INACTIVE, and all the information about the previously deleted service still appears.
If the service is deleted, shouldn't the command return the service as MISSING :
{
    "services": [],
    "failures": [
        {
            "reason": "MISSING",
            "arn": "arn:aws:ecs:<my_regions>:<my_id>:service/my_service"
        }
    ]
}
This complicates the parsing in my script, and even if I can find a workaround (maybe trying to create the service whether it is INACTIVE or non-existent), it is kind of weird that even after deletion the service is still there, somewhere, cluttering my stack.
Edit: I am using the latest version of the aws-cli.
This is the default behavior of AWS ECS. Please check the documentation below:
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE to DRAINING, and the service is no longer visible in the console or in ListServices API operations. After the tasks have stopped, then the service status moves from DRAINING to INACTIVE. Services in the DRAINING or INACTIVE status can still be viewed with DescribeServices API operations. However, in the future, INACTIVE services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices API operations on those services return a ServiceNotFoundException error.
delete-service
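A workaround sketch for the deployment script (names are placeholders): treat ACTIVE/DRAINING as "update" and anything else, including an INACTIVE service or a MISSING failure, as "create".
status=$(aws ecs describe-services --cluster my_cluster --services my_service \
    --query 'services[0].status' --output text 2>/dev/null)
if [ "$status" = "ACTIVE" ] || [ "$status" = "DRAINING" ]; then
    aws ecs update-service --cluster my_cluster --service my_service --task-definition my_task
else
    # INACTIVE, MISSING, or the call failed: (re)create the service
    aws ecs create-service --cluster my_cluster --service-name my_service \
        --task-definition my_task --desired-count 1
fi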

How to visualize AWS Elastic Beanstalk application logs

We are using AWS Elastic Beanstalk for deploying applications. Currently we have two Elastic Beanstalk applications and two worker processes (which pick messages from an AWS SQS queue and process them).
What are the best tools to view the combined logs from the Elastic Beanstalk applications and workers, plus a few more on-premise applications in the future?
Throw the logs into AWS Elasticsearch and then use Kibana, which comes with Elasticsearch, to visualize them.
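For Elastic Beanstalk specifically, one way to get the instance logs into CloudWatch Logs (from where they can be forwarded on to Elasticsearch) is to enable instance log streaming on the environment; a rough sketch, with a placeholder environment name:
aws elasticbeanstalk update-environment --environment-name my-env \
    --option-settings Namespace=aws:elasticbeanstalk:cloudwatch:logs,OptionName=StreamLogs,Value=true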
I followed the suggestion and configured CloudWatch Logs, Elasticsearch, and Kibana, but I am not getting all the logs and insights. I can see httpd access & error logs and EB access & error logs. It also involves a lot of AWS services and configuration, and since I am very new to AWS, I am having trouble setting things up.
Alternatively, as suggested by my boss, I tried New Relic. It was very simple to configure, and I can see a lot of insights into my Elastic Beanstalk application in the New Relic console. I can also monitor my browser, iOS app, Android app, and AWS infrastructure (AWS services) in one New Relic console. Some details are missing in the New Relic console, such as error stack traces and request params in POST requests; but I don't want to share such details with New Relic anyway, so that is ok.
I will use New Relic and CloudWatch Logs (for real-time investigation into failing HTTP REST services) for now, but I will explore more options inside AWS: Elasticsearch and Kibana.
Many thanks.

Can I use AWS LightSail with AWS CloudWatch?

I've recently started testing out Lightsail. I would like to keep my logging centralized in CloudWatch, but I cannot seem to find anything that would enable this. Interestingly, Lightsail instances do not appear in the EC2 dashboard. I thought they were just EC2 instances beneath the surface.
I thought they were just EC2 instances beneath the surface.
Yes... but.
Conceptually speaking, you are the customer of Lightsail, and Lightsail is the customer of EC2.
It's as though there were an intermediary between you and AWS. The Lightsail resources are in EC2, but they're not in your EC2. They appear to be owned by an AWS account other than your AWS account, so you can't see them directly.
Parallels for this:
RDS is a "customer" of EC2/EBS. RDS instances are EC2 machines with EBS volumes. Where are they in the console? They aren't there. The underlying resources aren't owned by your account.
In EC2, EBS snapshots are stored in S3. Which bucket? Not one that you can see. EBS is a "customer" of S3. It has its own buckets.
S3 objects can be migrated to the Glacier storage class. Which Glacier vault? Again, not one that you can see. S3 is a "customer" of Glacier. It has its own vaults.
Every API Gateway endpoint is automatically front-ended by CloudFront. Which distribution? You get the idea... API Gateway is a "customer" of CloudFront.
I am not implying in any way that Lightsail is actually a separate entity from AWS in any meaningful sense... I don't know how it's actually organized... but operationally, that is how it works. You can't see these resources.
It's possible to get it working. The problem is that Lightsail instances are EC2 instances under the hood, but without access to all of the EC2 configuration. The CloudWatch agent documentation explains how to set up IAM roles for EC2 instances to assume, but Lightsail boxes use a single fixed role that can't be changed or edited. As a result, you need to follow the instructions for setting the agent up as an on-premise server.
The problem you will then hit is the one David J Eddy saw in his answer:
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
This is due to a bug in the CloudWatch agent which ignores the argument to use on-premise mode (-m onPremise) if it detects it is running on an EC2 instance. The trick is to edit the common-config.toml file to force using a local AWS CLI profile for authentication. You will need to add the following lines to that file (which can be found at /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml on Debian - the installation location is OS dependent):
[credentials]
shared_credential_profile = "AmazonCloudWatchAgent"
Restart the agent and it should start reporting metrics. I've put together a full tutorial here.
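For reference, the steps on the Lightsail box look roughly like this (the profile name matches the one in common-config.toml; the config file path is whatever the configuration wizard generated for you):
aws configure --profile AmazonCloudWatchAgent   # enter the IAM user's access key and secret
# The credentials file must be readable by the agent; common-config.toml can also point
# at a specific file via shared_credential_file if needed.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m onPremise -s \
    -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json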
Running the CloudWatch Agent on Lightsail does NOT work at this time. When the agent attempts to communicate with CloudWatch, it receives a 403 from the STS service. Selecting the EC2 or OnPremise option during the configuration wizard yields the same result.
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
Just to make sure, I installed the CloudWatch Agent on my Ubuntu 18.04 desktop and started the agent without error.
Plus, if it did work, why would people pay for EC2 at a higher price point? CloudWatch is a free value-added service for using the full services.