What type of EC2 instance is better suited for WSO2 CEP standalone? - wso2

Should I use a compute-optimized (c3, c4) or a memory-optimized (r3) instance for running a stand-alone WSO2 CEP server?
I searched the documentation but could not find anything about running this server on EC2.

As per the WSO2 SA recommendations,
Hardware Recommendation
Physical:
3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM minimum (2 GB for the JVM and 2 GB for the OS), and 10 GB of free disk space minimum; size the disk based on the expected storage requirements (calculated by considering file uploads and backup policies). (e.g. if 3 Carbon instances run on one machine, it requires 4 CPU cores, 8 GB RAM, and 30 GB of free space)
Virtual machine:
2 compute units minimum (each unit equivalent to a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, and 10 GB of free disk space. One compute unit is for the OS and one for the JVM. (e.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM, and 30 GB of free space)
EC2:
A c3.large instance to run one Carbon instance (e.g. for 3 Carbon instances, an EC2 extra-large instance). Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple Carbon instances on a larger instance (c3.xlarge or c3.2xlarge).
NoSQL data nodes:
4 cores, 8 GB (http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architecturePlanningHardware_c.html)
Example
Let's say a customer needs 87 Carbon instances. Hence, they need 87 CPU cores / 174 GB of memory / 870 GB of free space.
This is calculated without considering the resources for the OS. Each machine additionally needs 1 CPU core and 2 GB of memory for the OS.
Let's say they want to buy 10 machines; the total requirement is then 97 CPU cores (10 cores for the OS + 87 cores for Carbon), 194 GB of memory (20 GB for the OS + 174 GB for Carbon), and 870 GB of free space for Carbon (normally, storage will be larger than this).
This means each machine gets 1/10 of the above and can run about 9 Carbon instances, i.e. roughly 10 CPU cores (97/10), 20 GB of memory (194/10), and 100 GB of free storage (870/10 = 87 GB, rounded up).
Reference : https://docs.wso2.com/display/CLUSTER44x/Production+Deployment+Guidelines
Note:
However, everything depends on what you're going to process using CEP; therefore, please refer to #Tharik's answer as well.

It depends on the type of processing the CEP node does. A CEP node requires a lot of memory if the event size is large, if the event throughput is high, or if there are time windows in the queries. In those cases memory-optimized EC2 instances are better, as they provide the lowest price per GB of RAM. If there is heavy computation in algorithms you have extended, you might need the extra processing capability of compute-optimized instances.
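If memory turns out to be the limiting factor (large events, high throughput, or long time windows), a hedged rule of thumb is to give most of a memory-optimized instance's RAM to the CEP JVM and leave a couple of GB for the OS. Assuming the stock Carbon startup script, the heap is set via the -Xms/-Xmx flags in bin/wso2server.sh and can be raised from the defaults, for example to something like:
-Xms2g -Xmx12g
These figures are only an illustration, not an official WSO2 recommendation; size the heap against your actual event volumes and window lengths.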

Related

E2 CPU Usage Goes Up Over Time on Google Compute Engine

It is quite strange that all of my 6 e2-small VM instances (all Debian 10) are increasing in CPU usage over time. Is this a bug on Google's side?
And I can verify that this does not happen on an N1-based machine (g1-small, Debian 10, orange line):
I restarted the E2 instance (blue line) before the end of January and created a new N1 instance (orange line). Neither VM is utilised yet, and you can see that the E2 is increasing its CPU usage over time.
Here's my top output on the E2:
Here are 3 more VMs (utilised in production) which show CPU slowly creeping up over time (restarted Jan 26):
Is this a Google bug?
This is a bug with Google's OS Config agent (google-osconfig-agent), fixed by:
sudo apt-get update && sudo apt-get upgrade google-osconfig-agent -y
Confirmed that none of the affected VMs are increasing in CPU usage anymore after several days.
After updating and restarting on Feb 18, the CPU usage is now stable.
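If you want to double-check that the agent was actually upgraded (a quick sanity check; the package name is taken from the command above), you can query the installed version with:
dpkg -l google-osconfig-agent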
No, this is not a bug; e2-small machines are shared-core machines.
Shared-core machine types use context-switching to share a physical core between vCPUs for the purpose of multitasking. Different shared-core machine types sustain different amounts of time on a physical core. Review the following sections to learn more.
In general, shared-core instances can be more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU machine types.
CPU Bursting
Shared-core machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically. Bursting doesn't incur any additional charges. You are charged the listed on-demand price for f1-micro, g1-small, and e2 shared-core machine types.
E2 shared-core machine types
E2 shared-core machines are cost-effective, have a virtio memory balloon device, and are ideal for small workloads. When you use E2 shared-core machine types, your VM runs two vCPUs simultaneously, shared on one physical core, for a specific fraction of time, depending on the machine type.
*e2-micro sustains 2 vCPUs, each at 12.5% of CPU time, totaling 25% of vCPU time.
*e2-small sustains 2 vCPUs, each at 25% of CPU time, totaling 50% of vCPU time.
*e2-medium sustains 2 vCPUs, each at 50% of CPU time, totaling 100% of vCPU time.
Each vCPU can burst up to 100% of CPU time for short periods before returning to these limits.
Whether the instance bursts and its usage increases depends on the processes running on it.
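One hedged way to see whether the hypervisor is limiting a shared-core VM at a given moment is to watch the steal-time counter, for example the %st value in top's CPU summary line, or the st column of:
vmstat 1
Sustained non-zero steal time means the VM wanted more physical CPU than its share currently allows.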

Geth service gets killed on multiple concurrent requests of web3.personal.importRawKey function

I have an Ethereum PoA node set up on a VM with the configuration mentioned below. Using the NodeJS Web3 client, I am trying to create new wallets using the web3.personal.importRawKey function.
VM configuration: Azure VM - Standard D2s v3 (2 vCPUs, 8 GiB memory)
As part of our stress testing, I tried concurrently creating wallets for 5-10 users, and it worked. But when I try to create 15-20 wallets concurrently, the geth process gets killed abruptly and the node stops. On a 1 vCPU, 4 GB memory VM, I was able to create at most 4 concurrent wallets, while on the 2 vCPU, 8 GiB memory VM, I could process at most 10-12 concurrent users.
My concern is that the number of concurrent wallet creations relative to the RAM seems very low, and I can't understand why the geth process gets killed. One thing I observed was that the CPU percentage goes to 200% and then the geth node process is killed.
How would I be able to handle at least 1000 concurrent requests to the above-mentioned function to create blockchain wallets?
Any help will be appreciated.
Thanks in advance!

How to right-size a cloud instance?

I have a Spring MVC web application which I want to deploy in the cloud. It could be AWS, Azure, or Google Cloud. I have to find out how much RAM and hard disk space is needed. Currently I have the application deployed on my local machine in Tomcat. Now I go to localhost:8080 and click on the server status button. Under the OS header it tells me:
Physical memory: 3989.36 MB, Available memory: 2188.51 MB, Total page file: 7976.92 MB, Free page file: 5233.52 MB, Memory load: 45
Under the JVM header it tells me:
Free memory: 32.96 MB, Total memory: 64.00 MB, Max memory: 998.00 MB
How do I infer the required RAM and hard disk size from these data? There must be some empirical formula, like OS memory + factor * JVM size, where I assume the JVM size equals the memory size of the applications. And when deploying to the cloud we will not deploy all of those example applications.
These stats are from your local machine in an idle state with no traffic, so it is definitely consuming fewer resources than it would under load.
You cannot decide the cloud machine size solely on the basis of local machine memory stats, but they do give a rough idea of the minimum resources the application consumes.
So the better way is to perform a load test: if you are expecting a large number of users, design accordingly based on the load test results.
The short way is to read the required or recommended system specifications of the deployed application.
Memory
256 MB of RAM minimum; 512 MB or more is recommended. Each user session requires approximately 5 MB of memory.
Hard Disk Space
About 100 MB of free storage space for the installed product (this does not include WebLogic Server or Apache Tomcat storage space). Refer to the database installation instructions for recommendations on database storage allocation.
You can look further here and here.
So if you want to design for staging or development, you can choose one of these two AWS instance types (a rough sanity check follows below):
t2.micro
1 vCPU, 1 GB RAM
t2.small
1 vCPU, 2 GB RAM
https://aws.amazon.com/ec2/instance-types/
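As a rough, hedged sanity check using the figures quoted above: 512 MB of base memory plus, say, 100 concurrent sessions x 5 MB ≈ 1 GB for the application, plus whatever the OS itself needs. On that basis a t2.micro (1 GB) would be too small, and a t2.small (2 GB) is a reasonable starting point for staging; a proper load test should still drive the final choice.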

Google Compute Engine - Low on Resource Utilisation

I use a VM Instance provided by Google Compute Engine.
Machine Type: n1-standard-8 (8 vCPUs, 30 GB memory).
When I check the CPU utilisation, it never uses more than 12%. I use my VM for running Jupyter Notebook. I have tried loading dataframes that took up 7.5 GiB (and it takes a long time to process the data even for simple operations), but the utilisation stays the same.
How can I utilise the CPU power at ~100%?
Or does my program use only 1 out of the 8 CPUs ((1/8) * 100 = 12.5%)?
You can run the stress command to impose a configurable amount of CPU, memory, I/O, and disk load on the system.
Example to stress 4 cores for 90 seconds:
stress --cpu 4 --timeout 90
In the meantime, go to your Google Cloud Console in the browser to check the CPU usage of your VM, or open a new SSH connection to the VM and run the top command to see your CPU status.
If your CPU can reach over 99% while running the commands above, your instance is working fine and you should check your application to find out why it is limited and cannot use more than 12% of the CPU.
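If you suspect that only one of the 8 vCPUs is busy (1/8 ≈ 12.5%, which matches the figure you are seeing), a quick check is to run top and press 1 to show per-core utilisation, or, if the sysstat package is installed:
mpstat -P ALL 1
One core near 100% while the others stay idle means the program is effectively single-threaded, and you would need to parallelise the workload (or use libraries that do) to get anywhere near 100% overall.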

What exactly is a "virtual core" on Amazon EC2?

The small Standard Instance is:
Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
Does this mean that you get access to an entire physical CPU core? Or are you sharing a more powerful core with other instances?
Is your performance affected by other people sharing the same "physical core" or other hardware?
You don't get a physical core for a small instance.
"One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation." Amazon EC2 Instance Types
You can run cat /proc/cpuinfo to see what hardware you're on.
For example, I have a micro instance whose underlying processor is an Intel(R) Xeon(R) CPU E5430 @ 2.66GHz.
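If it is available on your image, lscpu gives a condensed summary of the same information, including socket, core, and vCPU counts.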
From my understanding, 40% CPU in top equals 1 Compute Unit, so I can burst to 80% with my 2 Compute Units.
This is a rough estimate, so take it for what it's worth.
Funnily enough, the micro instance outperformed both the small and medium instances.
I ran PassMark PerformanceTest 8.0 on each instance below.
Each was installed with Windows Server 2008 R2 in a basic configuration in Amazon's Virginia-based data center.
AWS SIZE        PASSMARK SCORE    SIMILAR SCORED CPU
t1.micro        963               AMD Dual-Core Mobile ZM-80
m1.small        384.7             Intel Celeron M 1.60GHz
m1.medium       961               AMD Dual-Core Mobile ZM-80
m1.large        1249              Intel Core2 Duo T6400 @ 2.00GHz
m1.xlarge       3010              AMD Phenom II X4 12000
m3.xlarge       3911              Intel Xeon X5365 @ 3.00GHz
m3.2xlarge      6984              Intel Xeon E3-1220 V2 @ 3.10GHz
Currently the m3.2xlarge would cost about $7169 per year for a reserved instance or $1578 per month as an on-demand instance.
Most unmanaged dedicated hosting companies I've seen offer Intel Xeon E3-1200 setups for around $2000-2500 per year.
In my opinion AWS is great for scalability but very costly for anything long-term, as seems to be the case with any "cloud"-based server system.
UPDATE:
Here is a great tool for measuring cloud hosting benchmarks: http://cloudharmony.com/benchmarks
If you look at this table, 1 EC2 Compute Unit ≈ 350 CPU points:
http://www.cpubenchmark.net/high_end_cpus.html
Please go through these blogs to get an idea of virtual cores; they explain it very well:
http://www.pythian.com/blog/virtual-cpus-with-amazon-web-services/
http://samrueby.com/2015/01/12/what-are-amazon-aws-vcpus/
According to this AWS forum post, a virtual core equates to a physical CPU core. Each virtual core can have one or more EC2 Compute Units, depending on the clock speed of the CPU.
Here is a more detailed analysis.