How long does it take to scale up an AWS Aurora RDS instance

We have a single AWS Aurora RDS instance, and it hit 60% CPU. Our site might get a lot more traffic tomorrow, so I am concerned that it will hit 100%. I would like to scale up the single instance to a better instance class tonight, just in case (we're currently on db.r4.large).
A couple things:
1) If I go into AWS and just edit the instance class, how long will the downtime be as AWS scales it up?
2) Do I have to do anything special with my data? Will it lose any data?
3) If I initiate the change, will it scale up immediately or will it wait? I keep seeing stuff about some sort of maintenance window, and if I scale it up, I would like it to scale immediately.
This is currently somewhat of an emergency situation.
Thanks!

Two years have passed, but I will try to answer to complete this question:
If I go into AWS and just edit the instance class, how long will the downtime be as AWS scales it up
Usually it needs about 5–7 minutes. The scale-up/down downtime is close to the downtime of restarting the instance. But if you use a Multi-AZ instance, the downtime will be shorter (in my experience, a failover between two Aurora instances in a cluster takes about 2 minutes).
Do I have to do anything special with my data? Will it lose any data?
I'm not sure anyone outside AWS knows exactly what happens under the hood.
According to my unconfirmed observations, the scale-up/down operation does Stop → Change Instance Type → Start, so any open connections may be terminated without committing. Already stored data should be fine. Either way, it is better to create a backup before changing the instance type (and that applies to any emergency situation).
If I initiate the change, will it scale up immediately or will it wait? I keep seeing stuff about some sort of maintenance window, and if I scale it up, I would like it to scale immediately.
Up to you. The default behaviour is to apply the change during the maintenance window, but you can check Apply Immediately to perform the modification right away.
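If you prefer the CLI, the same change is one call; the instance identifier and target class below are placeholders for your own values:

```shell
# Sketch: resize an RDS/Aurora instance right away instead of waiting
# for the maintenance window. "mydb-instance" and the target class
# are hypothetical; substitute your own.
aws rds modify-db-instance \
  --db-instance-identifier mydb-instance \
  --db-instance-class db.r4.xlarge \
  --apply-immediately
```

Without `--apply-immediately`, the modification is queued for the next maintenance window.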

Related

AWS EC2 t3.micro instance sufficiently stable for spring boot services

I am new to AWS and recently set up a free t3.micro instance. My goal is to achieve stable hosting of an Angular application with 2 Spring Boot services. I got everything working, but after a while the Spring Boot services are no longer reachable. When I redeploy a service, it runs again. The Spring Boot services are packaged as jars, and after deployment each is started as a java process.
I thought AWS guaranteed permanent availability out of the box. Do I need some more setup, such as autoscaling, to achieve the desired uptime of the services, or is the t3.micro instance not sufficiently performant, so that I need to upgrade to a stronger instance to avoid the problem?
It depends :)
I think you did the right thing by starting with a small instance type and avoiding over-provisioning in the first place. T3 instance types are generally suited to 'burst' usage scenarios, i.e. your application sporadically needs a compute spike but not a persistent one. T3 instance types work with a credit-based system, where your instance 'earns' credits while it is idle, and that buffer is available in times of need (but only until it is consumed entirely). Then you need to wait for some time and earn the credits back.
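To make the credit mechanics concrete, here is a back-of-the-envelope sketch. The figures (12 credits earned per hour, a 288-credit cap, 2 vCPUs for a t3.micro) are assumptions taken from AWS's published burstable-instance tables and worth double-checking for your instance type:

```shell
# Rough CPU-credit arithmetic for a t3.micro (assumed figures, see above).
# One credit = one vCPU running at 100% for one minute.
earn_per_hour=12                  # credits earned per hour
max_balance=288                   # maximum accrued credit balance
vcpus=2
spend_per_hour=$((vcpus * 60))    # credits burned per hour at 100% on all vCPUs
net_drain=$((spend_per_hour - earn_per_hour))
burst_minutes=$((max_balance * 60 / net_drain))
echo "Full-throttle burst from a full balance lasts ~${burst_minutes} minutes"
```

In other words, under these assumptions a fully charged t3.micro can run flat out for under three hours before being throttled to baseline, which matches the 'earn while idle, spend in spikes' pattern described above.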
For your current problem, a first step could be to get an idea of the current usage by going through the 'Monitoring' tab on the EC2 instance details page. This will help you understand whether the needs are more compute-related or I/O-related, and then you can choose an appropriate instance type from:
https://aws.amazon.com/ec2/instance-types
A next step could be to profile your application and understand its memory and compute utilisation better. AWS does guarantee availability/durability of resources, but how you consume those resources is an application concern, which AWS does not guarantee or control.
For your ideas around autoscaling and availability, it again depends on your needs in terms of cost, tolerance for outages in AWS data centres, etc. For a reliable production setup you could consider them, but they are not the most important thing at first.
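One common reason a jar launched by hand stops being reachable is that the java process dies (for instance, the OOM killer on a small instance) and nothing restarts it. A process supervisor such as systemd fixes that independently of instance size; the unit name and paths below are hypothetical placeholders:

```shell
# Sketch: register a Spring Boot jar as a systemd service so it is
# restarted automatically if the process dies. All names and paths
# are placeholders for your own setup.
sudo tee /etc/systemd/system/myservice.service <<'EOF'
[Unit]
Description=Spring Boot service (example)
After=network.target

[Service]
ExecStart=/usr/bin/java -jar /opt/app/myservice.jar
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myservice
```

With `Restart=always`, a crashed service comes back on its own instead of staying down until the next manual redeploy.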

Does AWS DBInstance maintenance keep data intact

We are using CloudFormation Template to create MySQL AWS::RDS::DBInstance.
My question is: when maintenance is in progress, applying OS upgrades or software/security patches,
1) Will the Database Instance be unavailable for the time of maintenance?
2) Does it wipe out data from the database instance during maintenance?
3) If the answer to the first is yes, will using a DBCluster with more than one instance help avoid that short downtime?
From the documentation I did not receive any indication that there is any loss of data possibility.
Database Instance be unavailable for the time of maintenance
They may reboot the server to apply the maintenance. I've personally never seen anything more than a reboot, but I suppose it's possible they may have to shut it down for a few minutes.
Does it wipe out data from database instance during maintenance?
Definitely not.
If answer to first is yes, will using DBCluster help avoid that short downtime, if I use more than one instances?
Yes, a database in cluster mode would fail-over to another node while they were applying patches to one node.
I have been actively working with RDS database systems for the last 5 years. Based on my experience, my answers to your questions are as follows in BOLD.
Database Instance be unavailable for the time of maintenance
[Yes, your RDS instance will be unavailable during database maintenance]
Does it wipe out data from database instance during maintenance?
[ Definitely BIG NO ]
If answer to first is yes, will using DBCluster help avoid that short downtime, if I use more than one instances?
[Yes. In cluster mode or Multi-AZ deployment, AWS essentially applies the patches to the standby node or replica first and then fails over to the patched instance. There may still be some downtime during the switchover process]
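Since the template in question is CloudFormation, Multi-AZ is a one-line property (`MultiAZ: true` on the `AWS::RDS::DBInstance` resource). You can also inspect queued maintenance and enable Multi-AZ from the CLI; the instance identifier below is a placeholder:

```shell
# Sketch: list maintenance actions AWS has queued for your RDS resources.
aws rds describe-pending-maintenance-actions

# Sketch: turn an existing single-AZ instance into Multi-AZ so patching
# can fail over to the standby. "mydb-instance" is hypothetical.
aws rds modify-db-instance \
  --db-instance-identifier mydb-instance \
  --multi-az \
  --apply-immediately
```

Converting to Multi-AZ provisions a synchronous standby in another availability zone before the change completes.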

multiple micro vs. one large ec2 instance

Our website is getting slow and we are in need of an upgrade.
We are currently on AWS and have 1 micro EC2 instance that proved effective while our website had less traffic. Now that we get more traffic, our site is getting slower.
We can't seem to settle an argument.
Which would be better:
Adding multiple additional micro/small instances and having them managed either by nginx or by Amazon's load balancing
OR
Upgrading our micro instance into a large/xlarge instance.
Which would be more effective, considering that the tasks performed by the server are simple and that the total amount of RAM and processing power is similar either way: 1 big, or many small?
Thanks
Tough to say -
Option #2 is going to be the easiest to do: turn your server off, resize it, turn it back on, and you get more capacity just by paying more money. Easy to do, but maybe not the best long-term solution. What will you do when traffic continues to increase (either constantly or at certain times) and there are no more gains to be had simply by picking a bigger box?
Option #1 is going to be more work, but ultimately maybe a better strategy.
First of all, you didn't say whether you have a constant need for more throughput, or whether the capacity is needed only at certain times of the day/week/month/year. If it is the latter, multiple EC2 instances with auto-scaling groups, set up to respond to changes in demand by turning additional instances on as needed and off again as demand decreases, is a cost-effective option.
In addition, having multiple instances running, preferably in different availability zones, gives you fault tolerance. When your single big instance from option #2 goes down, your website is down; if you have many small instances running across 2 or 3 availability zones, you can continue to function if one or more of your instances goes down, and even if an entire AWS availability zone goes offline (rare, but it happens).
Besides the options above, and without knowing anything about your application: move some static assets to S3 and/or use AWS CloudFront (or another CDN) to offload some of the work. This is often a cheap and easy way to get more out of an existing box.
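A minimal sketch of option #1 with the AWS CLI, assuming you have baked your server into an AMI; every name, AMI ID, and subnet here is a placeholder:

```shell
# Sketch: a fleet of small instances managed by an auto-scaling group.
# All identifiers are hypothetical; substitute your own.
aws ec2 create-launch-template \
  --launch-template-name web-template \
  --launch-template-data '{"ImageId":"ami-12345678","InstanceType":"t2.small"}'

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 2 --max-size 6 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```

Spreading the two subnets across different availability zones is what buys the fault tolerance discussed above.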

Scaling Up an Elasticache Instance?

I'm currently running a site which uses Redis through Elasticache. We want to move to a larger instance with more RAM since we're getting to around 70% full on our current instance type.
Is there a way to scale up an Elasticache instance in the same way a RDS instance can be scaled?
Alternative, I wanted to create a replica group and add a bigger instance to it. Then, once it's replicated and running, promote the new instance to be the master. This doesn't seem possible through the AWS console as the replicas are created with the same instance type as the primary node.
Am I missing something, or is this simply a use case that can't be achieved? I understand that I can start a bigger instance, deal with replication manually, and then move the web servers over to use the new server, but this would require some downtime due to DNS migration, etc.
Thanks!,
Alan
ElastiCache feels more like a cache solution in the memcached sense of the word, meaning that to scale up, you would indeed fire up a new cluster and switch your application over to it. Performance will degrade for a moment because the cache would have to be rebuilt, but nothing more.
For many people (I suspect you included), however, Redis is more of a NoSQL database solution in which data loss is unacceptable. Amazon offers read replicas as a "solution" to that problem, but it's still a bit iffy. Replication does reduce the risk of data loss, but for a Redis database (as opposed to a cache, for which it's quite perfect) ElastiCache is nowhere near as production-safe or mature as RDS, which offers backup and restore procedures as well as well-structured change management to support scaling up. To my knowledge, ElastiCache does not support changing the instance type of a running cluster, which suggests it's merely an in-memory solution that would lose all its data on reboot.
I'd go as far as saying that if data loss concerns you, you should look at a self-rolled Redis solution instead of simply using ElastiCache. Not only is it marginally cheaper to run, it would enable you to change the instance type like you would on any other EC2 instance (after stopping it, of course). It would also enable you to use RDB or AOF persistence.
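On a self-managed Redis, turning on persistence is a two-line change. A sketch, assuming a local `redis-server` is running (AOF keeps an append-only log of every write; RDB snapshots are the periodic alternative):

```shell
# Sketch: enable AOF persistence on a self-managed Redis instance.
# (Not possible on ElastiCache, which does not expose CONFIG.)
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG REWRITE    # write the setting back into redis.conf
```

With `appendonly yes`, the dataset survives a restart of the Redis process, which is exactly the property ElastiCache's cache-oriented design does not promise.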
You can now scale up to a larger node type while ElastiCache preserves your cached data:
https://aws.amazon.com/blogs/aws/elasticache-for-redis-update-upgrade-engines-and-scale-up/
Yes, you can instantly scale up a running ElastiCache instance to a larger node type. I've tested it and experienced very little actual downtime (a few seconds at first, and it was back online very quickly, even though the console showed the process taking a few minutes to finish). I went from a t2.micro to an m3.medium with no problem.
You can scale up or down
Go to Elasticache service
Select the cluster
From Actions menu in top, choose Modify
Modify the Node Type
If you have a cluster, you can add more shards, decrease the number of shards, rebalance slot distributions, or add more read replicas; just click on the cluster itself to reach those options.
Be aware that when you delete shards, the data is automatically redistributed to the remaining shards, so the operation affects traffic and adds load on those shards; the console shows a warning before a shard is deleted.
Still need more help, please feel free to leave a comment and I would be more than happy to help.
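The console steps above map to a single CLI call; the cluster id and node type here are placeholders:

```shell
# Sketch: change an ElastiCache node type from the CLI.
# "my-redis" and the target node type are hypothetical.
aws elasticache modify-cache-cluster \
  --cache-cluster-id my-redis \
  --cache-node-type cache.m4.large \
  --apply-immediately
```

As with RDS, omitting `--apply-immediately` defers the change to the next maintenance window.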

Realistically, how do I setup Amazon AWS to make it auto-scale?

I have had an EC2 instance working just fine for months (still developing, app not live yet), but I just realized I don't even know how to make my EC2 instance scale up / down depending on traffic.
The sheer number of services offered by Amazon is overwhelming, and I'm very confused.
Initially, I thought I'd just have one instance and Amazon would transparently allocate resources or create identical instances to handle the traffic, but it seems my impression was wrong.
My question is: can someone please tell me (in simple words, a bullet list, or a pointer to a tutorial) how to make my instance automatically grow to handle 100,000 simultaneous users and then automatically scale back down when the surge is over?
Assuming this is possible, can I do this via the AWS control panel? If so, how?
All I can see is micro, small, medium, etc.. instances. Each one has limited resources and it's not clear how to automatically setup the instance so that Amazon dynamically allocate additional resources to handle traffic spikes (or even gradually go up to keep with natural traffic growth for that matter).
Side question: may I assume that Amazon auto-handles DDoS attacks when scaling up? (Meaning rogue traffic would eventually be stopped/slowed down by Amazon, and scaling would only apply to legitimate traffic spikes.) I realize this side question may be really stupid; keep in mind I haven't had my coffee yet :)
This article details how to auto scale using load balancers and EC2: http://kkpradeeban.blogspot.com/2011/01/auto-scaling-with-amazon-ec2.html
For scalability you may also want to look into this article on implementing a pub/sub system for distributed systems: http://www.infoq.com/articles/AmazonPubSub
You can't automatically change the instance type (m1.small, m1.large, etc.) in response to changing load. You can, however, have AWS automatically create new instances as your load increases, and tear them down when load subsides.
I believe this article will help you: http://aws.amazon.com/autoscaling/.
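As a concrete sketch of the "create new instances as load increases" part: once an auto-scaling group exists, one target-tracking policy keeps average CPU near a chosen target. The group name and target value below are placeholders:

```shell
# Sketch: a target-tracking policy that adds or removes instances to
# hold the group's average CPU around 50%. "web-asg" is a hypothetical
# auto-scaling group name.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration \
    '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
```

AWS then launches instances when average CPU climbs above the target and terminates them when the surge is over, which is exactly the grow-and-shrink behaviour asked about.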