AWS EC2 instance created so many other resources - amazon-web-services

I am new to AWS and want to test the free tier. Today I created a free-tier EC2 micro instance. It is working fine and I didn't create anything else, but when I go to Resource Groups -> Tag Editor and list all resources in all regions, it shows 168 resources.
One is my EC2 instance (the one I created).
The others are DHCPOptions, InternetGateway, NetworkAcl, Subnet, RouteTable, VPC, SecurityGroup, Volume and DBSecurityGroup in various regions (I don't know why they are there).
I want to know whether all of the above come under the free tier, or whether they are going to produce a huge bill.
How can I see my billing live?

Some resources are created by AWS and are not charged.
One of the reasons you see so many resources is that AWS creates a default VPC in each region for you. It also creates a default security group, subnets, route tables and so on.
They are not charged.
You can click on your organization name and then on My Billing Dashboard to see the current bill amount and a monthly estimate.
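If you prefer to check your spend programmatically, below is a minimal boto3 sketch using the Cost Explorer API. It assumes Cost Explorer has been enabled on the account and that your credentials allow ce:GetCostAndUsage; the dates are placeholders, and the data lags by a few hours rather than being truly live.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-06-01", "End": "2019-07-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print the cost per service for the period. Note that each Cost Explorer API
# request itself incurs a small charge.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```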

A VPC is like your private space. It provides network isolation from other customers along with all the basics needed for networking, so it contains things like subnets, an Internet Gateway, route tables, NACLs and security groups. These are all free (except for things like data transfer between subnets in different Availability Zones, which is not your case).
You also have an EBS volume, which is the disk for your EC2 instance. There is a free tier for that too, so after the first year you will pay for it, just like for your EC2 instance.

Algorithm - What is the critical parameter to derive relationships across resources in VPC?

The goal is to visualise the relationships between resources within an AWS account (which may have multiple VPCs).
This would help daily operations. For example: which resources are affected after modifying a security group.
Each resource has an ARN assigned in the AWS cloud.
Below are some example relationships among resources:
Route table has a has-many relationship with subnets
NACL has a has-many relationship with subnets
Availability Zone has a has-many relationship with subnets
IAM resource has-many regions
has-many is something like a composition relation
A security group has an association relation with any resource in the VPC
A NACL has an association relation with subnets only
We also have VPC Flow Logs to find relationships.
Using the AWS SDKs:
1) For on-prem networks, we take an IP range and send ICMP requests to verify the existence of devices in that range, and then we send an SNMP query to classify each device (Windows/Linux/router/gateway etc.). How do we find the list of resources allocated within an AWS account, and how do we classify those resources?
2) What parameters need to be queried from AWS resources (IAM, VPC, subnet, route table, NACL, IGW etc.) to help create a relationship view of the resources within an AWS account?
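For example, here is roughly how I imagine recovering the route-table and NACL "has-many subnets" relationships with boto3 (a hypothetical sketch on my part; the region name is just an assumption), but I am not sure which other parameters matter:

```python
import boto3

# Sketch: walk route-table and NACL associations in one region to recover
# the "has-many subnets" relationships listed above.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

for rt in ec2.describe_route_tables()["RouteTables"]:
    subnets = [a["SubnetId"] for a in rt["Associations"] if "SubnetId" in a]
    print("RouteTable", rt["RouteTableId"], "in", rt["VpcId"], "-> subnets:", subnets)

for acl in ec2.describe_network_acls()["NetworkAcls"]:
    subnets = [a["SubnetId"] for a in acl["Associations"]]
    print("NetworkAcl", acl["NetworkAclId"], "in", acl["VpcId"], "-> subnets:", subnets)
```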
You don't have to stitch your resources together yourself in your app. You can use the Resource Groups Tagging API from AWS. Take a look at Resource Groups in your AWS console; there you can group things based on tags, and you can then tag the groups themselves. Querying with the boto3 Python library will give you a lot of information. Read about boto3, it's huge! Another thing that might be interesting for you is AWS Config: it gives you compliance, configuration history, relationships between resources and plenty of other things.
Also, check out Amazon CloudWatch for health monitoring.
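As a minimal sketch of the Tagging API route with boto3 (assuming default credentials and that your resources are tagged; the region is a placeholder):

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():  # pass TagFilters=[...] to narrow the results
    for mapping in page["ResourceTagMappingList"]:
        # Each result carries the resource ARN plus its tags; the ARN encodes
        # the service, region and resource type, which helps with classification.
        print(mapping["ResourceARN"], mapping["Tags"])
```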

Trying to automatically register my EC2 instances in Route 53

I have approximately 40 Windows EC2 instances running at the moment, and this number will start to grow substantially in the next few months. Each one is a t2.small Windows Server 2016 instance. Cost is starting to become an issue as the number increases. Each instance has its own Elastic IP address, because when user Tom wants to access his machine he will use the DNS name tom.mydomain.com.
tom.mydomain.com is registered in a Route53 hosted zone pointing to Elastic IP 22.33.44.55 which has been associated with Tom's EC2 instance.
The problem is that Tom only needs to use his machine about 4 hours per day. When not using it he simply shuts the machine down. But an Elastic IP that is pointing to a stopped instance costs almost as much per hour as a t1.micro instance in a running state.
So what I want to do is when Tom logs into AWS console and starts his EC2 instance, it will automatically register itself with Route53 against the DNS "tom.mydomain.com".
In short I want to do away with the need for Elastic IPs which are fast becoming a very substantial cost.
The tutorial Auto-Register EC2 Instance in AWS Route 53 looks like it does exactly what I want. The problem is that the scripting is for Linux, and I want to get it working on Windows. I have everything done down to step 6 in the tutorial but am stuck there. Has anyone got something similar to this working on Windows?
I would recommend:
Create a web-based front-end where your users can authenticate and request access to their Amazon EC2 instance
You could use Amazon Cognito for authentication and DynamoDB for data storage
Once the user authenticates, the service can:
Start their EC2 instance (if it was previously stopped)
Update the customer's domain name to point at whatever public IP address the instance received (a sketch of this step follows after this answer)
Tell the user that the instance is now available
Users login to the instance and perform their work function
You then have some mechanism (I'm not sure what) that detects that they no longer need the instance, and then Stops the instance to save costs
The above process avoids assigning IAM credentials to your users. While IAM credentials are important for staff members who work on your AWS infrastructure, they should not be assigned to end-users of your service.
The process also avoids assigning IAM permissions to each EC2 instance. While the instances themselves could call Route 53 to update a record for their domain name, this requires an IAM Role to be assigned to the EC2 instance. If your users have access to the instance itself, this would potentially open a security hole where they could call Route 53 with incorrect data, such as assigning other users' domain names to their own instance.
It's worth mentioning that the above recommendations mirror the way that Amazon WorkSpaces operates — users authenticate, their instance is started and after a period of non-use the instance is stopped.
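If you do let a small backend service (or, accepting the caveats above, the instance itself) perform the registration, a minimal boto3 sketch of the start-and-register step might look like the following. The instance ID, hosted zone ID and record name are placeholders, and it assumes credentials with the necessary EC2 and Route 53 permissions.

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
HOSTED_ZONE_ID = "Z123EXAMPLE"        # placeholder
RECORD_NAME = "tom.mydomain.com"

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
route53 = boto3.client("route53")

# Start the instance and wait until it is running
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

# Look up whatever public IP the instance received this time
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
public_ip = reservations["Reservations"][0]["Instances"][0]["PublicIpAddress"]

# UPSERT the A record so the user's hostname follows the new IP
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,   # short TTL so clients pick up the new IP quickly
                "ResourceRecords": [{"Value": public_ip}],
            },
        }]
    },
)
```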
I would recommend using a CloudFormation template. CloudFormation can create the EC2 instance and then attach it to the Route 53 record. So when Tom wants to use the EC2 instance, he has to run the stack in CloudFormation, and once he is finished he has to go back to CloudFormation and destroy the stack.
Yes, CloudFormation would be a recommended approach. You can try cloudkast, which is an online CloudFormation template generator. It will make the task of creating CloudFormation templates very easy and effortless.

AWS Regional Disaster Recovery

I would like to create a mirror image of my existing production environment in another AWS region for disaster recovery.
I know I don't need to recreate resources such as IAM roles as IAM is a "Global" service (correct me if I am wrong).
Do I need to recreate key pairs in another region?
How about Launch configurations and Route 53 Records sets?
Launch configurations you will have to replicate in the other region, as the AMIs, security groups, subnets, etc. will all be different. Some instance types are not available in all regions, so you will have to check that as well.
Route 53 is another global service, but you will probably have to adjust your record sets to take advantage of a multi-region architecture. If you have the same setup in two different regions, you will probably want to implement latency-based or geolocation routing to send traffic to the closest region. Here's some info on that.
As for key pairs, they are per region. But I have read that you can create an AMI from your instance, copy it to the new region and launch an instance from it, and as long as you use the same key name your existing key will work. Take that with a grain of salt though, as I haven't tried it nor seen it documented anywhere.
Here's the official AWS info on migrating.
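As a rough illustration of the AMI-copy approach mentioned above (the region names, AMI ID and public key file are placeholders, not values from the question):

```python
import boto3

SOURCE_REGION = "us-east-1"               # placeholder
DR_REGION = "us-west-2"                   # placeholder
SOURCE_AMI_ID = "ami-0123456789abcdef0"   # AMI created from the production instance

# copy_image is called against the *destination* region's EC2 endpoint
ec2_dr = boto3.client("ec2", region_name=DR_REGION)
copy = ec2_dr.copy_image(
    Name="prod-dr-copy",
    SourceImageId=SOURCE_AMI_ID,
    SourceRegion=SOURCE_REGION,
)
print("New AMI in", DR_REGION, ":", copy["ImageId"])

# Key pairs are regional, but you can import your existing public key under the
# same name in the DR region so instances launched there accept the same private key.
with open("my-key.pub", "rb") as f:
    ec2_dr.import_key_pair(KeyName="my-existing-key", PublicKeyMaterial=f.read())
```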

AWS - HA NFS - Best practices

Anyone have a sound strategy for implementing NFS on AWS in such a way that it's not a SPoF (single point of failure), or at the very least, be able to recover quickly if an instance crashes?
I've read this SO post, relating to the ability to share files with multiple EC2 instances, but it doesn't answer the question of how to ensure HA with NFS on AWS, just that NFS can be used.
A lot of online sources say that AWS EFS is available, but it is still in preview mode and only available in the Oregon region; our primary VPC is located in N. California, so we can't use this option.
Other online sources say that GlusterFS is the way to go, but after some research I just don't feel comfortable implementing this solution due to race conditions and performance concerns.
Another option is SoftNAS, but I want to avoid bringing an unknown AMI into a tightly controlled, homogeneous environment.
Which leaves NFS. NFS is what we use in our dev environment and it works fine, but it's dev, so if it crashes we go get a couple of beers while systems fixes the problem; on production, this is obviously a no-go.
The best solution I can come up with at this point is to create an EBS volume and two EC2 instances. Both instances will be updated as normal (via Puppet) to maintain stack alignment (kernel, NFS libs etc.), but only one instance will mount the EBS volume. We set up a monitor on the active NFS instance, and if it goes down we are notified and we manually detach the volume and attach it to the backup EC2 instance. I'm thinking we also create a network interface that can be detached/re-attached, so we only need to maintain a single IP in DNS.
Although I suppose we could do this automatically with keepalived and an IAM policy that allows the automatic detachment/re-attachment.
--UPDATE--
It looks like EBS volumes are tied to specific availability zones, so re-attaching to an instance in another AZ is impossible. The only other option I can think of is:
Create EC2 in each AZ, in public subnet (each have EIP)
Create a Route 53 health check for TCP:2049
Create Route 53 failover policies for nfs-1 (AZ1) and nfs-2 (AZ2)
The only question here is, what's the best way to keep the two NFS servers in-sync? Just cron an rsync script between them?
Or is there a best practice that I am completely missing?
There are a few options for building a highly available NFS server yourself, though I would still prefer EFS or GlusterFS, because all of the options below have their downsides.
a) DRBD
It is possible to synchronize volumes with the help of DRBD. This allows you to mirror your data. Use two EC2 instances in different Availability Zones for high availability. Downside: configuration and operation are complex.
b) EBS Snapshots
If an RPO of more than 30 minutes is acceptable, you can use periodic EBS snapshots to be able to recover from an outage in another Availability Zone. This can be achieved with an Auto Scaling group running a single EC2 instance, a user-data script and a cron job for periodic EBS snapshots (see the sketch after this answer). Downside: RPO > 30 min.
c) S3 Synchronisation
It is possible to synchronize the state of the EC2 instance acting as the NFS server to S3, and the standby server uses S3 to stay up to date. Downside: an S3 sync of lots of small files will take too long.
I recommend watching this talk from AWS re:Invent: https://youtu.be/xbuiIwEOCAs
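As a minimal sketch of option (b), the snapshot step could be a small script run from cron (e.g. every 30 minutes); the volume ID and region are placeholders. EBS snapshots are stored regionally, so a new volume can be created from the latest snapshot in the other Availability Zone.

```python
import boto3
import datetime

VOLUME_ID = "vol-0123456789abcdef0"   # placeholder: the volume backing the NFS export

ec2 = boto3.client("ec2", region_name="us-west-1")  # region is an assumption
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="nfs-data snapshot " + datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%MZ"),
)
print("Started", snapshot["SnapshotId"])
```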
AWS has reviewed and approved a number of SoftNAS AMIs, which are available on AWS Marketplace. The jointly published SoftNAS Architecture on AWS White Paper provides more details:
Security (pages 4-11)
HA across AZs (pages 13-14)
You can also try a 30 day free trial to see if it meets your needs.
http://softnas.com/tryaws
Full disclosure: I work for SoftNAS.

Amazon EC2 unexpected billing amount

I'm quite new to Amazon Web Services. Some months ago I created an m3.medium on-demand instance. According to the AWS EC2 prices, this instance costs $0.077/hour, which works out to about $55.44 for November (30 days = 720 hours). However, I got a bill of $74.76 ($91.12 with taxes).
I guess I have some service that I'm missing and am being charged for:
For example, I have an Elastic Load Balancer. Am I being charged for that? Actually, I have realized I have two ELBs; it looks like I created another one for testing purposes and forgot about it.
I also have an Elastic Block Store (EBS) volume of 8GB. Am I being charged for that? Do I really need it?
When I check my billing status, I don't see any reference to these two services, so I guess they are included in the EC2 bill, right?
I don't know where I got the idea that when you start an EC2 instance, an ELB and EBS are included at no additional charge.
As you can see, I'm quite lost with these services.
Billing information is available from the account menu (in the top right, next to the Region menu). It displays a simple breakdown of charges by service.
More detailed billing information is available by clicking the "Bill Details" link (in the top right). It shows a breakdown of charges by service for any selected month.
EBS charges are included under "Elastic Compute Cloud".
To answer your questions:
Elastic Load Balancer pricing: Yes, each load balancer is charged for every hour it exists (plus the data it processes), so you will want to delete the test ELB you forgot about.
Elastic Block Store (EBS) pricing: This is the disk storage for your Amazon EC2 instance. You will be charged for any volumes from the time they are created until they are deleted.
There is also a Free Usage Tier that includes 30GB of EBS storage each month during your first year (amongst other services). If you stay within the free tier, there is no charge.
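If it helps to see exactly which volumes and load balancers exist in the account, here is a small boto3 audit sketch (the region is an assumption):

```python
import boto3

REGION = "eu-west-1"  # assumption: the region where the instance runs

# EBS volumes are billed per provisioned GB-month until they are deleted
ec2 = boto3.client("ec2", region_name=REGION)
for vol in ec2.describe_volumes()["Volumes"]:
    print("Volume", vol["VolumeId"], "-", vol["Size"], "GiB,", vol["State"])

# Classic load balancers are billed for every hour they exist, even when idle
elb = boto3.client("elb", region_name=REGION)
for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    print("ELB", lb["LoadBalancerName"], "->", lb["DNSName"])
```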