How to right-size a cloud instance? - amazon-web-services

I have a Spring MVC web application which I want to deploy to the cloud. It could be AWS, Azure, or Google Cloud. I have to find out how much RAM and hard disk space is needed. Currently I have deployed the application on my local machine in Tomcat. When I go to localhost:8080 and click the Server Status button, under the OS header it tells me:
Physical memory: 3989.36 MB
Available memory: 2188.51 MB
Total page file: 7976.92 MB
Free page file: 5233.52 MB
Memory load: 45
Under the JVM header it tells me:
Free memory: 32.96 MB
Total memory: 64.00 MB
Max memory: 998.00 MB
How can I infer the required RAM and hard disk size from these data? There must be some empirical formula, something like OS memory + factor * JVM size, where I assume the JVM size corresponds to the memory used by the applications. And when we deploy to the cloud we will not deploy all of those example applications.

These stats are from your local machine in an idle state with no traffic, so the application is definitely consuming fewer resources than it will in production.
You cannot decide the cloud machine size on the basis of local memory stats alone, but they do help a little by showing the minimum resources the application consumes.
The better way is to perform a load test: if you are expecting a huge number of users, size the instance according to the load-test results (a minimal load-generation sketch is shown at the end of this answer).
The short way is to read the requirements or recommended system specifications published for the application you are deploying. For example:
Memory
256 MB of RAM minimum, 512 MB or more recommended. Each user session requires approximately 5 MB of memory.
Hard Disk Space
About 100 MB of free storage space for the installed product (this does not include WebLogic Server or Apache Tomcat storage space). Refer to the database installation instructions for recommendations on database storage allocation.
You can look further here and here.
So if you want to design for staging or development, you can choose one of these two AWS instances:
t2.micro
1 vCPU, 1 GB RAM
t2.small
1 vCPU, 2 GB RAM
https://aws.amazon.com/ec2/instance-types/
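As for the load test mentioned above, a dedicated tool (JMeter, Gatling, ab) is the usual choice; the sketch below is only a minimal Java harness, assuming a hypothetical endpoint at http://localhost:8080/myapp, that fires concurrent requests so you can watch heap and CPU usage while it runs:

// Minimal load-generation sketch (Java 11+). The URL, concurrency, and request
// count are placeholder assumptions; adjust them to match your expected traffic.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0] : "http://localhost:8080/myapp"; // hypothetical path
        int concurrency = 50;       // simulated concurrent users
        int requestsPerUser = 100;  // requests each simulated user sends

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        LongAdder totalMillis = new LongAdder();
        LongAdder errors = new LongAdder();

        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        for (int u = 0; u < concurrency; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        errors.increment();
                    }
                    totalMillis.add((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long requests = (long) concurrency * requestsPerUser;
        System.out.printf("requests=%d errors=%d avgLatencyMs=%.1f%n",
                requests, errors.sum(), totalMillis.doubleValue() / requests);
    }
}

Run it against a staging deployment while monitoring the Tomcat status page or a profiler, and size the instance from the peak numbers you observe.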

Related

Allocating more virtual memory vs. lowering RAM usage for a 2 GB RAM AWS VM

I'm using AWS with 1 vCPU + 2 GB RAM (4 GB virtual memory) for my WordPress website. Sometimes the website becomes unresponsive when it's running too many PHP-FPM instances: 10 instances × 200 MB = 2 GB RAM. I have over 40 GB of empty disk space, so allocating more virtual memory would be fine.
At the moment I have reduced the PHP-FPM instances to 5, and it has been running perfectly fine for a week.
I would like to ask the experts whether I should add more virtual memory so the server can run more PHP-FPM instances, or whether it is best to keep the number down.

How does thread count map to CPU units in an ECS task?

I am using ECS Fargate for my web application deployment, and I'd like to support 200 requests per second for my app. I see there is a task size setting where I can configure CPU and memory. I wonder, if I configure 1024 CPU units with 2048 MB memory, how many threads can my app support? Can I say this configuration supports opening up to 1024 threads in my process?
1 vCPU = 1024 CPU units. Source:
You can determine the number of CPU units that are available per Amazon EC2 instance type by multiplying the number of vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.
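For example, applying that rule to a hypothetical 2-vCPU instance type:

// CPU-unit conversion from the rule quoted above: vCPUs × 1024.
public class CpuUnits {
    public static void main(String[] args) {
        int vCpus = 2;                      // hypothetical instance with 2 vCPUs
        System.out.println(vCpus * 1024);   // 2048 CPU units
    }
}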
And I wonder, if I configure 1024 CPU units with 2048 MB memory, how many threads my app can support?
It's impossible to say; you have to run a load test and measure this.
Can I say this configuration supports opening up to 1024 threads in my process?
This would generally depend on which technology you use and what exactly a "thread" is.
But most probably you won't be able to run 1024 threads on 1024 CPU units (which is just one vCPU).
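As a rough first-order sanity check (not a substitute for the load test), Little's law relates requests in flight to throughput × latency. A sketch, assuming a hypothetical 100 ms average response time for the 200 req/s target:

// Little's law: concurrent requests ≈ throughput × average latency.
// The latency value is an assumption; only a load test gives a real number.
public class ThreadEstimate {
    public static void main(String[] args) {
        double targetRequestsPerSecond = 200.0; // from the question
        double assumedAvgLatencySeconds = 0.1;  // hypothetical 100 ms per request

        double inFlight = targetRequestsPerSecond * assumedAvgLatencySeconds;
        System.out.printf("~%.0f requests in flight, so a thread pool of roughly that size "
                + "would do if one thread handles one request%n", inFlight);
    }
}

Under that assumption only about 20 requests are in flight at once, so the useful thread count is driven mostly by latency and blocking I/O, not by the 1024 CPU units.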

ColdFusion Production Server - Tuning for Performance

I'm trying to determine the optimal settings for my ColdFusion PRODUCTION server. The server has the following specs.
ColdFusion: Enterprise Version 10
O/S: Windows Server 2012 R2 Standard
Processor: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
Installed Memory (RAM): 20.0 GB
System Type: 64-bit Operating System, x64-based processor
My Java and JVM settings from the CFIDE are:
Minimum Heap Size (in MB): 2048
Maximum Heap Size (in MB): 4096
JVM Arguments
-server -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random
I have multiple websites running on this production server, all of which use ColdFusion. The database server is completely separate, so all that this server is responsible for is the ColdFusion application and web server processes.
The websites are completely data-driven, all pulling from the database located on my production database server. Lately, I've been seeing the ColdFusion service locking up, as it is maxing out the CPU. The memory is stable, it's only the CPU that is maxing out.
Can anyone make suggestions as to how I can tune it to improve overall performance while reducing strain on the CPU?
Java Version
java version "1.8.0_73" Java(TM) SE Runtime Environment (build
1.8.0_73-b02) Java HotSpot(TM) Client VM (build 25.73-b02, mixed mode)
Thank you!
Are you running all of these sites on a single instance of ColdFusion? If so, I would recommend running multiple instances of CF. Each instance can run the same JVM settings given your total available memory.
Minimum Heap Size (in MB): 2048
Maximum Heap Size (in MB): 4096
So that's a max of about 16 GB of memory allocated across a total of 4 instances of CF. Then you balance out the volume of sites run on each instance based on site usage. You might have one site that needs its own instance while the rest can be spread across the other three.
It's also possible to run all of the sites on all of the instances, using a load balancer to pass requests from one instance to another. Either approach should ensure that one site doesn't cause the rest to run poorly or not at all.
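A quick back-of-the-envelope check of that layout (assuming the 20 GB box from the question and the 4096 MB max heap per instance suggested above):

// Heap budget for the suggested multi-instance layout; figures come from the
// question and the answer above, not from a measurement.
public class CfHeapBudget {
    public static void main(String[] args) {
        int totalRamMb = 20 * 1024;          // 20 GB in the server
        int instances = 4;                   // suggested number of CF instances
        int maxHeapPerInstanceMb = 4096;     // max heap per instance

        int heapBudgetMb = instances * maxHeapPerInstanceMb;   // 16384 MB
        int leftForOsAndNativeMb = totalRamMb - heapBudgetMb;  // 4096 MB
        System.out.printf("Heap budget: %d MB, left for OS/metaspace/threads: %d MB%n",
                heapBudgetMb, leftForOsAndNativeMb);
    }
}

Roughly 4 GB remains for the OS, metaspace, and thread stacks, so the four-instance split fits, though without much slack if every instance actually grows to its maximum heap.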

What type of EC2 instance is better suited for standalone WSO2 CEP?

Should I use a compute-optimized (c3, c4) or a memory-optimized (r3) instance for running a standalone WSO2 CEP server?
I searched the documentation but could not find anything regarding running this server on EC2.
As per the WSO2 SA recommendations,
Hardware Recommendation
Physical :
3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), 10 GB free disk space (minimum); size the disk based on the expected storage requirements (calculated by considering file uploads and backup policies). (E.g. if 3 Carbon instances run on one machine, it requires 4 CPU cores, 8 GB RAM, and 30 GB of free space.)
Virtual Machine :
2 compute units minimum (each unit having a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, 10 GB free disk space. One compute unit for the OS and one for the JVM. (E.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM, and 30 GB of free space.)
EC2: a c3.large instance to run one Carbon instance (e.g. for 3 Carbon instances, an EC2 extra-large instance). Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple Carbon instances on a larger instance (c3.xlarge or c3.2xlarge).
NoSQL-Data Nodes:
4 Core 8 GB (http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architecturePlanningHardware_c.html)
Example
Let's say a customer needs 87 Carbon instances. Hence they need 87 CPU cores / 174 GB of memory / 870 GB of free space.
This calculation does not include the resources for the OS; each machine additionally needs 1 CPU core and 2 GB of memory for the OS.
Let's say they want to buy 10 machines; then the total requirement will be 97 CPU cores (10 cores for the OS + 87 cores for Carbon), 194 GB of memory (20 GB for the OS + 174 GB for Carbon), and 870 GB of free space for Carbon (normally the storage will be larger than this).
This means each machine will have 1/10 of the above and can run about 9 Carbon instances, i.e. roughly 10 CPU cores / 20 GB of memory / 100 GB of free storage.
Reference : https://docs.wso2.com/display/CLUSTER44x/Production+Deployment+Guidelines
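The arithmetic in that example can be reproduced with a small sketch (the per-instance figures mirror the guideline quoted above: 1 core, 2 GB RAM, and 10 GB of disk per Carbon instance, plus 1 core and 2 GB per machine for the OS):

// Sizing sketch following the guideline above; the instance and machine counts
// are taken from the example, not from any additional source.
public class CarbonSizing {
    public static void main(String[] args) {
        int carbonInstances = 87;
        int machines = 10;

        int carbonCores = carbonInstances;            // 1 core per Carbon instance
        int carbonRamGb = carbonInstances * 2;        // 2 GB RAM per Carbon instance -> 174 GB
        int carbonDiskGb = carbonInstances * 10;      // 10 GB disk per Carbon instance -> 870 GB

        int totalCores = carbonCores + machines;      // + 1 core per machine for the OS -> 97
        int totalRamGb = carbonRamGb + machines * 2;  // + 2 GB per machine for the OS -> 194 GB

        System.out.printf("Total: %d cores, %d GB RAM, %d GB disk%n",
                totalCores, totalRamGb, carbonDiskGb);
        System.out.printf("Per machine: %.1f cores, %.1f GB RAM, %.1f GB disk (~9 Carbon instances each)%n",
                totalCores / (double) machines, totalRamGb / (double) machines,
                carbonDiskGb / (double) machines);
    }
}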
Note:
However, everything depends on what you're going to process using CEP; therefore, please refer to Tharik's answer as well.
It depends on the type of processing the CEP node does. The CEP node requires a lot of memory if the processed event size is large, if the event throughput is high, or if there are time windows in the queries. For those cases, memory-optimized EC2 instances are better, as they provide the lowest price per unit of RAM. If there is a lot of computation in the algorithms you have extended, you might need the greater processing capability of compute-optimized instances.

Rails - track application memory usage on localhost

How can I monitor memory usage (between requests, overall RAM usage, etc.) in my local environment? Are there any gems available for this?
Thanks