How do I scale an Amazon RDS instance horizontally? EC2 with a load balancer and Auto Scaling is extremely easy to implement, but what if I want to scale Amazon RDS?
I can upgrade my RDS instance to a more powerful instance class, or I can create a read replica and direct SELECT queries to it. But neither of those really scales a read-oriented web application horizontally as traffic grows. So, can I create RDS read replicas with autoscaling and balance them with a load balancer?
You can use HAProxy to load-balance Amazon RDS read replicas. Check this: http://harish11g.blogspot.ro/2013/08/Load-balancing-Amazon-RDS-MySQL-read-replica-slaves-using-HAProxy.html.
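As a flavour of what that looks like, here is a minimal HAProxy sketch along the lines of that guide. The replica endpoints are placeholders, and the mysql-check line assumes a health-check user has been created on the replicas:

    # Round-robin MySQL traffic across two RDS read replicas.
    # Endpoint names are placeholders for your own replica endpoints.
    listen mysql-read-pool
        bind 0.0.0.0:3306
        mode tcp
        balance roundrobin
        option mysql-check user haproxy
        server replica1 myapp-replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com:3306 check
        server replica2 myapp-replica-2.xxxxxxxx.us-east-1.rds.amazonaws.com:3306 check

Your application then points its read connections at the HAProxy host instead of at a single replica.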
Hope this helps.
Note that RDS covers several database engines: MySQL, PostgreSQL, Oracle, and SQL Server.
Generally speaking, you can scale up (a larger instance), use read-only replicas, or shard. If you are using MySQL, look at AWS Aurora. Think about using the database optimally, perhaps combining it with Memcached or Redis (both available under AWS ElastiCache). Think about using a search engine (Lucene, Elasticsearch, CloudSearch).
Some general resources:
http://highscalability.com/
https://gigaom.com/2011/12/06/facebook-shares-some-secrets-on-making-mysql-scale/
If you are using PostgreSQL and have a workload that can be partitioned by a certain key and doesn't require complex transactions, then you could take a look at the pg_shard extension. pg_shard lets you create distributed tables that are sharded across multiple servers. Queries on the distributed table will be transparently routed to the right shard.
Even though RDS doesn't have the pg_shard extension installed, you can set up one or more PostgreSQL servers on EC2 with the pg_shard extension and use RDS nodes as worker nodes. The pg_shard node only needs to store a small amount of metadata, which can be backed up in one of the worker nodes, so these nodes are relatively low maintenance and can be scaled out to accommodate higher query rates.
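As a flavour of what using pg_shard looks like from an application, here is a hedged sketch that runs the pg_shard helper functions over psycopg2. The host, credentials, table, and shard counts are all hypothetical:

    # Sketch: create and shard a table via pg_shard from Python.
    # Connection details and the events table are hypothetical;
    # master_create_distributed_table / master_create_worker_shards
    # are the UDFs pg_shard installs on the head node.
    import psycopg2

    conn = psycopg2.connect(host="pg-shard-head.example.com",
                            dbname="app", user="app", password="secret")
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("CREATE TABLE events (tenant_id int, payload jsonb)")
    # Declare the table distributed, partitioned by tenant_id.
    cur.execute("SELECT master_create_distributed_table('events', 'tenant_id')")
    # Create 16 shards with 2 replicas each across the worker nodes.
    cur.execute("SELECT master_create_worker_shards('events', 16, 2)")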
A guide with a link to a CloudFormation template to set everything up automatically is available at: https://www.citusdata.com/blog/14-marco/178-scaling-out-postgresql-on-amazon-rds-using-masterless-pg-shard
Related
We need to know the best options for setting up an AWS RDS instance (Aurora MySQL) that is standalone and does not get traffic from the actual RDS cluster.
The requirement is for our data team to write analytical queries without impacting the actual application and DB performance. Hence we need a DB that always has near-live data, but that live traffic and the application never connect to.
We need to know which fits better: a DB clone, AWS pilot light, AWS warm standby, AWS hot standby, or a multi-AZ configuration.
Kindly let us know which one would fit our requirement better.
We have so far read about the 3 options below:
1. Amazon Aurora DB clone: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
2. AWS pilot light, AWS warm standby, or AWS hot standby: https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/
3. A multi-AZ configuration: we can create a new instance in a new AZ, so that this instance has a different host (kind of a failover strategy), where traffic to this instance comes from our queries and not from the live prod application, unless there is some failover issue.
Option 1, Aurora cloning, says:
Run workload-intensive operations, such as exporting data or running analytical queries on the clone.
...which seems to be your use case here.
Just be aware that the clone will not see any changes to the original data after it is made, so you will need to periodically delete and re-clone to get the updated data.
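If you go this route, the periodic re-clone can be scripted. A minimal boto3 sketch with hypothetical cluster identifiers (Aurora cloning is exposed through the point-in-time restore API with a copy-on-write restore type):

    # Sketch: create a fresh copy-on-write clone of the production
    # Aurora cluster for analytics. Identifiers are hypothetical, and
    # deleting the previous clone is left out for brevity.
    import boto3

    rds = boto3.client("rds")

    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="analytics-clone",
        SourceDBClusterIdentifier="prod-aurora-cluster",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )
    # A clone cluster still needs at least one DB instance to query it.
    rds.create_db_instance(
        DBInstanceIdentifier="analytics-clone-instance-1",
        DBClusterIdentifier="analytics-clone",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )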
Regarding option 2: I wrote those blog posts, and I do not think that approach suits your use case. That approach is for disaster recovery.
Option 3 may work, with a modification: the concept here is to create an Aurora Replica, which as you say is a separate instance. The problem is the reader endpoint: your production workload may hit that instance (which is not what you want).
EDIT: Adding new option 4
Option 4. Check out Amazon Aurora zero-ETL integration with Amazon Redshift. This zero-ETL integration also enables you to analyze data from multiple Aurora database clusters in an Amazon Redshift cluster.
I have Amazon Aurora for MySQL db.t3.medium instances. I would like to scale down to db.t3.small.
If I modify the instance settings in the AWS console, will my DB data be preserved? Can I scale down without service interruption? I think I should be able to do this, but I just want to make sure, as there is a prod instance involved.
I have the same question about ElastiCache (AWS Redis). Can I scale that down without service interruption?
According to the docs, there is a table (DB instance class) that shows which settings can be changed; you can change the instance class of your Aurora instances. As a note, an outage occurs during this change.
For Redis:
According to the docs, you can scale down the node type of your Redis cluster (version 3.2 or newer). During the scale-down, ElastiCache dynamically resizes your cluster while remaining online and serving requests.
In both cases your data will be preserved.
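For reference, both changes are a single API call each. A hedged boto3 sketch, with placeholder identifiers:

    # Sketch: scale down an Aurora instance class and an ElastiCache
    # Redis node type. Identifiers are placeholders; expect a brief
    # outage for the Aurora change, per the docs quoted above.
    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="my-aurora-instance",
        DBInstanceClass="db.t3.small",
        ApplyImmediately=True,  # otherwise it waits for the maintenance window
    )

    elasticache = boto3.client("elasticache")
    elasticache.modify_replication_group(
        ReplicationGroupId="my-redis-group",
        CacheNodeType="cache.t3.small",
        ApplyImmediately=True,
    )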
I have a simple question for someone with experience with AWS, but I am getting a little confused by the terminology and don't know which node type to purchase.
At my company we currently have a Postgres DB that we insert into continuously.
We insert roughly 600M rows a year at the moment but would like to be able to scale up.
Each row is basically a timestamp, two floats, one int, and one enum type.
So the workload is write-intensive but with constant small reads as well.
(There will be the occasional large read.)
There are also two services that need to be run (both Rust based):
1. A Rust application that abstracts the DB data, allowing clients to access it through a RESTful interface.
2. A Rust app that collects the data to import from thousands of individual devices over Modbus.
These devices are on a private mobile network. Can I set up AWS cluster nodes to access a private network through a VPN?
We would like to move to Amazon Redshift but I am confused by the node types.
Amazon recommends choosing RA3 or DC2.
If we choose ra3.4xlarge, that means we get one cluster of nodes, right?
Can I run our Rust services on that cluster along with a number of Redshift database instances?
I believe AWS uses Docker, and I think I could containerise my services easily.
Or am I misunderstanding things, and when you purchase a Redshift cluster you can only run Redshift on this cluster and have to get a different one for containerised applications, possibly an EC2 cluster?
Can anyone recommend a better fit for scaling this workload ?
Thanks
I would not recommend Redshift for this application, and I'm a Redshift guy. Redshift is designed for analytic workloads (lots of reads and few, large writes). Constant updates are not what it is designed for.
I would point you to Postgres RDS as the best fit. It has a RESTful API interface already. This will be more of the transactional database you are looking for, with little migration change.
When your data gets really large (TB+), you can add Redshift to the mix to quickly perform the analytics you need.
Just my $.02
Redshift is a managed service: you don't get any access to it for installing stuff, nor is there any possibility of installing/running custom software of your own.
Or am I misunderstanding things, and when you purchase a Redshift cluster you can only run Redshift on this cluster
Yes, you don't run your own software on it; AWS manages the cluster and you run your analytics/queries etc.
have to get a different one for containerised applications, possibly an EC2 cluster?
Yes, you could make use of EC2, running the orchestration yourself, or make use of ECS/Fargate/EKS depending on your budget, how skilled your team members are, etc.
I am relatively experienced with many AWS services, but I have a large gap around Aurora/RDS.
I'm trying to create a multi-region, multi-master (write replicas) setup.
The purpose is to give low latency to users (if each read and write replica is in the user's region) and to give resilience: if there is a region outage, users can have their requests routed to another region (the latency will be higher, but reduced service is better than no service).
I'm trying to learn about AWS Aurora and have created a toy cluster to experiment with. It seems I can create a cluster that is served out of multiple regions (and Aurora replicates data between regions automatically). I've also read that it is possible to have a multi-master setup, but my toy cluster only had one write partition and I couldn't work out how to create another write partition in another region, which made me question if it's possible?
Here is a diagram of what I'm thinking:
https://imgur.com/DzoSpHL
Thank you in advance!
The purpose is to give low latency to users (if each read and write replica is in the user's region)
I couldn't work out how to create another write partition in another region, which made me question if it's possible?
That is not possible (at least not currently) because of multi-master Aurora limitations:
all DB instances in a multi-master cluster must be in the same AWS Region.
and others, such as:
you can have a maximum of two DB instances in a multi-master cluster
You can't enable cross-Region replicas from multi-master clusters.
You can read more here
The best thing you can do in your scenario is to create a single master and place read replicas in those additional regions (possibly with some caching if necessary).
As mentioned earlier, it is not possible with Aurora.
However, DynamoDB supports multi-active, multi-region tables:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
As others have said, with Amazon Aurora you cannot deploy multi-Region and multi-master. However, you can deploy multi-Region using Aurora Global Database: one writer endpoint lives in one Region, while reader endpoints are available in all the other Regions. You can also use write forwarding (assuming you are using the MySQL flavor of Aurora) in the read-only Regions. I know latency is a concern for you, so note that the write actually goes back to the primary Region, so writes will incur that extra latency.
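A rough boto3 sketch of that topology, assuming an existing primary cluster (identifiers, Regions, and engine are placeholders):

    # Sketch: promote an existing Aurora cluster to a Global Database and
    # add a secondary-Region cluster with write forwarding enabled.
    import boto3

    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_primary.create_global_cluster(
        GlobalClusterIdentifier="my-global-db",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:prod-cluster",
    )

    rds_secondary = boto3.client("rds", region_name="eu-west-1")
    rds_secondary.create_db_cluster(
        DBClusterIdentifier="prod-cluster-eu",
        GlobalClusterIdentifier="my-global-db",
        Engine="aurora-mysql",
        # Writes sent to this Region's endpoint are forwarded to the primary.
        EnableGlobalWriteForwarding=True,
    )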
I want to autoscale AWS RDS automatically with scripts based on metric monitoring.
RDS doesn't really do this for read-write instances.
Multi-AZ write-read database copies are intended for failover from the primary to the secondary if there is an availability problem; they don't address the problem of performance.
Read replicas can be used to increase performance, but they are read-only.
It might be possible to watch a load metric and use a CloudWatch alarm to start an extra read replica; read replicas can be fronted by an ELB or NLB (a sketch follows below).
But this probably isn't a good idea: while an existing RDS instance is creating a read replica, its performance is degraded, and RDS read replicas are quite slow to come up and become available, so this is unlikely to respond well to transient demand.
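For completeness, here is roughly what the alarm half of that idea looks like in boto3. Names, thresholds, and the SNS topic are hypothetical, and the replica creation would normally live in a handler (e.g. a Lambda subscribed to the alarm's topic):

    import boto3

    # Alarm when average CPU on the primary stays above 70% for 15 minutes.
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="rds-cpu-high",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-db"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=3,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:scale-out-topic"],
    )

    # What the handler behind that topic would do: add one more replica.
    rds = boto3.client("rds")
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-db-replica-2",
        SourceDBInstanceIdentifier="prod-db",
    )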
You can make an API call to Modify an RDS Instance, including changing the instance class.
Amazon RDS will provision a new instance of the desired class and will then re-point the Endpoint to the new instance. Existing connections will be terminated, but applications can reconnect and all the data will be there.
Rather than scaling the RDS instance, you could also consider a caching layer such as Amazon ElastiCache, which supports Redis and Memcached. Most applications are read-heavy, which is ideal for a cache, and this can significantly improve application performance without having to scale the database.
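As a rough illustration of that cache-aside pattern (the ElastiCache endpoint, the key scheme, and the load_user_from_db stub are all hypothetical):

    # Sketch: cache-aside reads against an ElastiCache Redis endpoint.
    import json
    import redis

    cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

    def load_user_from_db(user_id):
        # Placeholder for the real RDS query.
        return {"id": user_id}

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)             # cache hit: no DB round trip
        user = load_user_from_db(user_id)         # cache miss: hit RDS
        cache.set(key, json.dumps(user), ex=300)  # cache for 5 minutes
        return user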
Put simply, this is possible with Aurora MySQL 5.7 RDS instances only; they provide an option to auto-scale based on CloudWatch metric conditions, e.g. CPU utilization.
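For reference, that Aurora replica auto scaling is driven by Application Auto Scaling under the hood. A minimal sketch, with the cluster identifier and limits as placeholders:

    # Sketch: auto-scale Aurora Replicas between 1 and 5 based on
    # average reader CPU utilization.
    import boto3

    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    autoscaling.put_scaling_policy(
        PolicyName="aurora-reader-cpu",
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
            },
        },
    )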