How to know the amount of data being transferred in DigitalOcean

I have created one droplet with a private IP for a web server and one droplet with a private IP for a DB server, and I've connected them via SSH.
In this graph it shows the bandwidth, which is the speed of data transfer, right?
But how am I able to know the amount of data being transferred over time?
Thanks

At this time traffic statistics are not yet available. Until these statistics are readily available to you, DigitalOcean will not process any charges for transfer beyond your Droplet plan's allowance.
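In the meantime, you can read the kernel's own cumulative byte counters on each droplet. A minimal sketch, assuming a Linux droplet where /proc/net/dev holds per-interface RX/TX byte counters (the sample text below is illustrative):

```python
def parse_counters(text):
    """Parse /proc/net/dev content into {interface: (rx_bytes, tx_bytes)}."""
    counters = {}
    for line in text.splitlines()[2:]:  # skip the two header lines
        iface, _, rest = line.partition(":")
        fields = rest.split()
        # fields[0] is cumulative RX bytes, fields[8] is cumulative TX bytes
        counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

# On a droplet you would read the real file:
# with open("/proc/net/dev") as f:
#     counters = parse_counters(f.read())
sample = """Inter-|   Receive                  |  Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
  eth0: 1048576 900 0 0 0 0 0 0 524288 800 0 0 0 0 0 0
    lo: 4096 10 0 0 0 0 0 0 4096 10 0 0 0 0 0 0"""
rx, tx = parse_counters(sample)["eth0"]
```

Sampling these counters at two points in time and subtracting gives the bytes transferred in that interval; tools like vnstat automate exactly this bookkeeping for you.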

Related

How to find AWS EC2 with lowest latency to another server

I have a client server located in AWS and I want to reduce latency between his machine and my EC2 instance. I rented two identical servers in one availability zone and started sending requests to the client's API. It turned out that these servers have different latencies: the 95th percentiles differed by about 5 milliseconds (about 30% of the mean latency). My aim is to reduce latency.
I think I can rent more servers and repeat the experiment, but that will be the next step of my investigation. The first step is to understand why servers in the same zone show such a large difference in API response latency, and which metrics could help explain it.
The second way to reduce latency is to rent a bare-metal server instead of EC2, but that seems too expensive. And I am afraid that renting such a server could make things even worse if it stands further from the client's server.
So, tell me please:
Do you have any advice on how to reduce latency?
How can I rent the server closest to my client within the same AWS zone?

Central logs on personal laptop?

I got a new laptop and am planning to dedicate the current one as a central log monitoring system for the server clusters already set up on AWS. The AWS servers have static IPs, while my personal laptop will be connected to Wi-Fi. The clusters receive low to moderate traffic and there aren't many logs generated.
To use the laptop as a central log monitoring system, I can do one of these things:
Stream logs in real time (using streams to reduce reconnection overhead)
HTTP long polling (the servers can't push, as my ISP doesn't give me a static IP)
Set up a VPN server and figure out some way to push/poll logs.
I think the first option (streaming logs) looks the most promising.
Is there a better way to do this?
Also, how do I stream logs in this setup, considering the clients have static IPs while my central server has a dynamic IP?
Are there any open-source/existing services that achieve this already (why re-invent the wheel when you have a head start)?
Thank you in advance!
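Given static IPs on the servers and a dynamic IP on the laptop, the simplest robust direction is pull: the laptop polls each server and fetches only what was appended since the last poll, which sidesteps the dynamic-IP problem entirely. A minimal sketch of the offset-tracking piece; the transport (e.g. fetching over SSH or HTTPS) and the log path are placeholders:

```python
def read_new_lines(path, offset):
    """Return (new_text, new_offset): content appended to `path` since `offset`."""
    with open(path, "r") as f:
        f.seek(offset)
        data = f.read()
        return data, f.tell()

# Polling loop sketch (a local path standing in for a remote log):
# offset = 0
# while True:
#     chunk, offset = read_new_lines("/var/log/app.log", offset)
#     if chunk:
#         store_locally(chunk)   # hypothetical sink on the laptop
#     time.sleep(5)
```

As for not re-inventing the wheel: rsyslog forwarding, Filebeat shipping into Elasticsearch, or plain rsync over SSH on a cron job all implement variants of this already; any of them can be driven in pull mode from the laptop.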

Hosting a REST API server on AWS WorkSpaces vs an EC2 instance?

I need to host a service with a REST API on a server that does the tasks listed below:
Download and upload files in an S3 bucket
Run some CPU-intensive computations
Return a JSON response
I know an EC2 instance would be the better approach for hosting my service, but given the price difference between WorkSpaces and an EC2 instance, I am exploring this route. Are there any limitations on Amazon WorkSpaces that might prevent me from using it for my use case?
I came across ngrok, which I believe can help me direct requests over the internet to the local server on my WorkSpace.
Has anyone played around with it and could add some suggestions?
The AWS terms of service do not allow you to do that, I'm afraid. See section 36 on WorkSpaces.
http://aws.amazon.com/service-terms/
36.3. You and End Users may only use the WorkSpaces Services for an End User’s personal or office productivity. WorkSpaces are not meant to accept inbound network connections, be used as server instances, or serve web traffic or your network traffic. You may not reconfigure the inbound network connections of your WorkSpaces. We may shut down WorkSpaces that are used in violation of this Section or other provisions of the Agreement.
I suggest you use an r5a.xlarge for the lowest-cost 32 GB RAM instance type (its AMD processor is cheaper than the Intel-based r5). Investigate whether spot instances would work if your state persists in S3 rather than on the local instance; otherwise, if you need it for at least a year, reserved instances are discounted over on-demand pricing.

How to calculate the data transfer rate of a particular site (virtual host) in AWS

I have virtual host files (sites) set up on two Linux EC2 instances behind an ELB. I would like to know the data transfer rate of one particular site hosted on these EC2 instances. There are almost 30 virtual hosts on each EC2 instance and I need to calculate the average data transfer rate of all these sites. From CloudWatch I could only gather this information at the service level, not for a particular site. Is there any way to accomplish this?
In this case, I recommend two things:
Do not rely on the ELB or EC2 metrics directly for data transfer.
If you want to know the exact MB/GB/TB:
Put CloudFront in front of your ELB.
Register one CloudFront distribution for each site (domain, subdomain, or whatever).
If you do that, you will have more control and can save money on data transfer, though it depends on the region (sometimes it can be a bit more expensive).
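If you would rather not add CloudFront, another option is to sum the response sizes straight from the Apache access logs, assuming each vhost logs in Common Log Format with %v prepended as the first field (the sample lines below are illustrative):

```python
from collections import defaultdict

def bytes_per_vhost(lines):
    """Sum response bytes per virtual host from %v-prefixed Common Log Format lines."""
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        vhost = parts[0]   # assumes %v (the vhost) is the first log field
        sent = parts[-1]   # bytes sent is the last field; "-" when none
        if sent.isdigit():
            totals[vhost] += int(sent)
    return dict(totals)

sample = [
    'a.example.com 1.2.3.4 - - [01/Jan/2024:00:00:00 +0000] "GET / HTTP/1.1" 200 5120',
    'a.example.com 1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /x HTTP/1.1" 200 1024',
    'b.example.com 5.6.7.8 - - [01/Jan/2024:00:00:02 +0000] "GET / HTTP/1.1" 304 -',
]
totals = bytes_per_vhost(sample)
```

Dividing each total by the time span the log covers gives an average transfer rate per site. Note this counts response bodies only, not headers or TCP overhead, so treat it as an estimate.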

Web server and database server hosted on separate instances of Amazon EC2

I am planning to run a web application and expect traffic of around 100 to 200 users.
Currently I have set up a single Small instance on Amazon. This instance contains everything: the web server (Apache), the database server (MySQL), and the IMAP server (Dovecot). I am thinking of moving the database server out of this instance and creating a separate instance for it. Now my questions are:
Do I get latency in communication between my web server and database server (both hosted on separate instances on Amazon)?
If yes, what is the standard way to overcome this? (Or do I need to set up a Virtual Private Cloud?)
If you want your architecture to scale you should separate your web server from your database server.
The small latency cost you will pay (~1-2 ms, even between multiple availability zones) buys you better performance, as you can scale each tier separately:
You can add small (even micro) instances behind a load balancer to handle more web requests, without having to duplicate an instance that also holds the database
You can add an auto-scaling group for your web server tier that scales it automatically based on load
You can scale up your DB instance to have more memory, getting a better cache hit rate
You can add ElastiCache between your web server and your database
You can use Amazon RDS as a managed database service, which removes the need for a dedicated database instance at all (you pay only for the actual usage of the database in RDS)
Another benefit is better security for your database. If your database is on a separate instance, you can prevent access to it from the internet: use a security group that allows only SQL connections from your web server and no HTTP connections from the internet.
This configuration can run in a regular EC2 environment, without using a VPC. You can certainly add a VPC for an even more controlled environment, without additional cost or much added complexity.
In short, for scalability and high availability you should separate your tiers (web and DB). You will probably also find yourself saving on cost as well.
Of course there will be latency when communicating between separate machines. If they are both in the same availability zone it will be extremely low, typically what you'd expect for two servers on the same LAN.
If they are in different availability zones in the same region, expect a latency on the order of 2-3ms (per information provided at the 2012 AWS re:Invent conference). That's still quite low.
Using a VPC will not affect latency. That does not give you different physical connections between instances, just virtual isolation.
Finally, consider using Amazon's RDS (Relational Database Service) instead of a dedicated EC2 instance for your MySQL database. The cost is about the same, and Amazon takes care of the housekeeping.
Do I get latency in communication between my web server and database server (both hosted on separate instances on Amazon)?
Yes, but it's rather insignificant compared to the benefits gained by separating the roles.
If yes, what is the standard way to overcome this? (Or do I need to set up a Virtual Private Cloud?)
VPC increases security and ease of management of the resources, it does not affect performance. A latency of a millisecond or two isn't normally problematic for a SQL database. Writes are transactional so data isn't accessible to other requests until it's 100% completed and committed. I/O throughput and availability are much more of a concern, which is why separating the database and the application is important.
I'd highly recommend that you take a look at RDS, which is AWS's managed MySQL, Oracle, or MS SQL Server database service. It will allow you to easily set up and manage your database, including cross-availability-zone replication and automated backups. I also wrote a blog post yesterday that's fairly relevant to your question.
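If you want to see the actual web-to-DB latency for yourself rather than trust the quoted figures, timing TCP connects to the MySQL port from the web server is a quick check. A sketch; the host and port below are placeholders for your DB instance's private address:

```python
import socket
import time

def connect_latency_ms(host, port, n=5):
    """Time `n` TCP connects to (host, port); return latencies in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - t0) * 1000.0)
    return samples

# Hypothetical private IP of the DB instance; replace with your own:
# print(sorted(connect_latency_ms("10.0.1.20", 3306)))
```

Same-availability-zone numbers in the low single-digit milliseconds are in line with the figures quoted in the answers above; anything much higher is worth investigating before blaming the architecture.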