Google VM RAM for Minecraft Server - google-cloud-platform

I've been tasked with creating a Minecraft server for about 500 players. I've never created a Minecraft server this large before, and I think the best way to go about it is (as I've done all my other servers) with Google Cloud. Google Cloud has the following VM options:
I'm thinking about ~100MB per player to be on the safe side, so that's about 50 GB, so I'd say the n2-highmem-8 is probably a good VM to use. Is this overkill? Underkill?
Also, I know that an MC server can't use multiple CPUs at the same time, so is it a waste paying for 8 virtual CPUs?
Thanks!

For 500 players, I would recommend a minimum of 6 vCores and 32 GB of RAM or more.
You're right that Minecraft mainly uses only one thread, so a high CPU clock speed is the most valuable thing. Also, I would recommend using PaperMC; it's a custom, better-performing Minecraft server software. There you can also get different server versions if you do not want to use Minecraft 1.15.2.
Maybe you should also look around for different server providers near your location. I bet you can find something less expensive and with higher core clocks.
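If it helps to sanity-check the arithmetic, here is a minimal Python sketch of that sizing estimate. The per-player budget comes from the question; the overhead figure and the machine shapes other than n2-highmem-8 are illustrative assumptions, not recommendations:

```python
# Back-of-the-envelope sizing from the question: ~100 MB per player is
# already a conservative budget, plus a few GB for the OS, the JVM itself
# and loaded world data. The overhead figure is an assumption.

PLAYERS = 500
MB_PER_PLAYER = 100        # conservative per-player budget from the question
BASE_OVERHEAD_GB = 4       # assumed: OS + JVM + world data

needed_gb = PLAYERS * MB_PER_PLAYER / 1024 + BASE_OVERHEAD_GB
print(f"Estimated RAM: {needed_gb:.1f} GB")   # ~52.8 GB

machine_types = {          # (vCPUs, GB RAM) for a few GCE shapes
    "n2-standard-8": (8, 32),
    "n2-highmem-8": (8, 64),
    "n2-highmem-16": (16, 128),
}
for name, (vcpus, ram_gb) in machine_types.items():
    verdict = "fits" if ram_gb >= needed_gb else "too small"
    print(f"{name}: {ram_gb} GB -> {verdict}")
```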

Minecraft has FREE servers in the launcher

Related

What will happen if my virtual machine is too slow?

I have a newbie question here - I'm new to clouds and Linux. I'm using Google Cloud now and wondering, when choosing a machine config:
What if my machine is too slow? Will it make the app crash, or just slow it down?
How fast should my VM be? The image below shows the last 6 hours of a Python script I'm running and its CPU usage. It's obviously using less than 2% of the CPU for most of its time, but there's a small spike - should I care about the spike? And also, how high should my CPU usage be at most before I upgrade? If a script I'm running is using 50-60% of the CPU most of the time, I assume I'm safe - or what's the max before you upgrade?
What if my machine is too slow? Will it make the app crash, or just slow it down?
It depends.
Some applications will just respond slower. Some will fail if they have timeout restrictions. Some applications will begin to thrash, which means that all of a sudden the app becomes very, very slow.
A general rule, which varies among architects, is to never consume more than 80% of any resource. I use the rule 50% so that my service can handle burst traffic or denial of service attempts.
Based on your graph, your service is fine. The spike is probably normal system processing. If the spike went to 100%, I would be concerned.
Once your service consumes more than 50% of a resource (CPU, memory, disk I/O, etc) then it is time to upgrade that resource.
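As a rough illustration of that rule, here is a minimal Python sketch (assuming the psutil package is installed) that samples the three resources and flags anything over the 50% line. The threshold is the rule of thumb above, not a hard limit:

```python
# Minimal sketch of the "upgrade past 50%" rule of thumb using psutil.
import psutil

THRESHOLD = 50.0  # percent; the conservative rule from the answer above

samples = {
    "cpu": psutil.cpu_percent(interval=1),     # averaged over 1 second
    "memory": psutil.virtual_memory().percent,
    "disk": psutil.disk_usage("/").percent,
}

for resource, used in samples.items():
    status = "consider upgrading" if used > THRESHOLD else "ok"
    print(f"{resource}: {used:.1f}% used -> {status}")
```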
Also, consider that there are other services that you might want to add. Examples are load balancers, Cloud Storage, CDNs, firewalls such as Cloud Armor, etc. Those types of services tend to offload requirements from your service and make your service more resilient, available and performant. The biggest plus is your service is usually faster for the end user. Some of those services are so cheap, that I almost always deploy them.
You should choose a machine family based on your needs. Check the link below for details and recommendations.
https://cloud.google.com/compute/docs/machine-types
If CPU is your concern, you should create a managed instance group that automatically scales based on CPU usage. Usually 80-85% is a good value for the maximum CPU utilization. Check the link below for details.
https://cloud.google.com/compute/docs/autoscaler/scaling-cpu
You should also consider the availability needed for your workload to keep costs efficient. See the link below for other useful info.
https://cloud.google.com/compute/docs/choose-compute-deployment-option
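To give a feel for what that 80-85% target does in practice, here is a simplified Python model of target-utilization scaling. It is for intuition only, not the actual GCE autoscaler implementation; the min/max limits are made-up, and the 0.80 target reflects the suggestion above:

```python
# Simplified model of target-utilization autoscaling: resize the group so
# that average CPU lands back near the target.
import math

def recommended_size(current_instances: int, avg_cpu_utilization: float,
                     target_utilization: float = 0.80,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the instance count that would bring average CPU near the target."""
    desired = math.ceil(current_instances * avg_cpu_utilization / target_utilization)
    return max(min_instances, min(max_instances, desired))

# 4 instances averaging 95% CPU against an 80% target -> scale out to 5.
print(recommended_size(4, 0.95))   # 5
# 4 instances averaging 30% CPU -> scale in to 2.
print(recommended_size(4, 0.30))   # 2
```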

AWS EC2 Performance explanation

I have a REST API web server, built in .NET Core, that has data-heavy APIs.
This is hosted on AWS EC2. I have noticed that the average response time for certain APIs is ~4 seconds, and if I turn up the EC2 specs, the response time goes down to a few milliseconds. I guess this is expected. What I don't understand is that even when I load test the APIs on a lower-end CPU, the server never crosses 50% utilization of memory/CPU. So what is the correct technical explanation for the APIs performing faster, if the lower-end CPU never reaches 100% utilization of memory/CPU?
There is no simple answer; there are so many EC2 variations that you need to first figure out what is slowing down your API.
When you 'turn up' your EC2 instance, you are getting some combination of more memory, a faster CPU, faster disk and more network bandwidth - and we can't tell which one of those 'more' features is improving your performance. Different instance classes are optimized for different problems.
It could be as simple as the better network bandwidth, or it could be that your application is disk-bound and the better instance you chose is optimized for I/O performance.
Figuring out which feature your instance is lacking would help you decide which type of instance to upgrade to - or, as you have found out, you can just upgrade to something 'bigger' and be happy with the performance (at the tradeoff of it being more expensive).
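One practical way to narrow it down is to measure the same endpoint under the same concurrency on both instance sizes and compare latency percentiles while watching CPU, disk and network metrics. A minimal sketch, assuming the requests library and a placeholder URL and concurrency level:

```python
# Quick-and-dirty latency probe (URL and numbers are placeholders).
# Comparing p50/p95 at different concurrency levels on each instance type
# shows whether the win comes from CPU, disk or network rather than guessing.
from concurrent.futures import ThreadPoolExecutor
import statistics
import time

import requests

URL = "https://example.com/api/heavy-endpoint"   # hypothetical endpoint
CONCURRENCY = 20
REQUESTS = 200

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(REQUESTS)))

print(f"p50: {statistics.median(latencies):.3f}s")
print(f"p95: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```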

AWS EC2 ECS - How many tasks should I place on a single instance?

At the moment, I have a single c4.large (3.75GB RAM, 2 vCPU) instance in my workers cluster, currently running 21 tasks for 16 services. These tasks range from image processing to data transformation, with most sending HTTP requests too. As you can see, the instance is quite well utilised.
My question is, how do I know how many tasks to place on an instance? I am placing up to 8 tasks for a service, but I'm unsure as to whether this results in a speed increase, given they are using the same underlying instance. How do I find the optimal placement?
Should I put many chefs in my kitchen, or will just two get the food out to customers faster?
We typically run lots of smaller-sized servers in our clusters - like 4-6 t2.smalls for our workers - and place 6-7 tasks on each. The main reason for this is not to speed up processing but to reduce the blast radius of servers going down.
We've quite often seen a server simply fail an instance health check and AWS take it down. Having the workers spread out reduces the effect on the system.
I agree with the other people's 80% rule. But you never want a single host for any kind of critical application. If that goes down, you're screwed. I also think it's better to use larger-sized servers because of their increased network performance. You should look into a host with enhanced networking, especially because you say you have a lot of HTTP work.
Another thing to consider is disk I/O. If you are piling too many tasks on a host and there is a failure, it’s going to try to schedule those all somewhere else. I have had servers crash because of too many tasks being scheduled and burning through disk credits.
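Whatever instance size you pick, the scheduler's hard limit is simple reservation arithmetic: a task fits only while its CPU and memory reservations fit in what is left on the instance. A rough sketch with made-up task reservations; the c4.large figures are nominal, and ECS actually registers a little less memory once the OS and agent take their share:

```python
# Back-of-the-envelope ECS bin-packing: how many copies of a task fit on one
# instance given its CPU/memory reservations. The task reservation below is
# a hypothetical example.

INSTANCE_CPU_UNITS = 2048          # c4.large: 2 vCPU x 1024 units
INSTANCE_MEMORY_MB = 3840          # c4.large nominal 3.75 GB

task_reservation = {"cpu_units": 256, "memory_mb": 512}   # hypothetical task

fit_by_cpu = INSTANCE_CPU_UNITS // task_reservation["cpu_units"]
fit_by_memory = INSTANCE_MEMORY_MB // task_reservation["memory_mb"]

print(f"CPU allows {fit_by_cpu} tasks, memory allows {fit_by_memory};"
      f" the instance fits {min(fit_by_cpu, fit_by_memory)}.")
```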

BlazeDS: increase concurrent user count by using Servlet 3.0 and an NIO server

I am developing a turn-based multiplayer game with Flex and BlazeDS.
The problem is that I read BlazeDS can handle only hundreds of concurrent users, but that this can be increased by using an NIO server like Jetty 7 and Servlet 3.0.
Does Tomcat 7 support NIO? And I wonder if I can increase the concurrent user count to a few thousand by using Tomcat 7 and BlazeDS.
Any clue or help will be appreciated.
Thank you.
Do not worry yet about performance. If your game is successful, you will be able to afford the better technical solution. If not, it will not matter whether you can handle 1,000 or 1,000,000 requests.
However, related to your question - you may be able to increase the number of concurrent users by doing server-related tuning (like the stack size, or increasing the size of the thread pool).
There are a couple of solutions implementing Servlet 3.0 (NIO), but you will have to write your own BlazeDS NIO endpoint - so it does not work out of the box.
Edit:
Using the NIO Jetty connector can be a good idea... but the first thing which should be done is building and testing a valid performance scenario. For example, if you plan to support 10,000 connected users and push 1 msg/sec, you need to write a stress test for that. After that, you can experiment with various connectors/configurations.
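To make that load shape concrete, here is a minimal asyncio sketch in Python: N long-lived connections each pushing one small message per second. The host, port, client count and message format are placeholders, and a real BlazeDS test would speak AMF over its endpoints rather than raw TCP, so treat this purely as a sketch of the scenario:

```python
# Skeleton stress scenario: many long-lived connections, 1 msg/sec each.
import asyncio

HOST, PORT = "127.0.0.1", 9999     # hypothetical test endpoint
CLIENTS = 1000                     # scale up towards 10,000 once it works
DURATION_SECONDS = 60

async def client(client_id: int) -> int:
    reader, writer = await asyncio.open_connection(HOST, PORT)
    sent = 0
    for _ in range(DURATION_SECONDS):
        writer.write(f"msg from {client_id}\n".encode())
        await writer.drain()
        sent += 1
        await asyncio.sleep(1)     # 1 msg/sec per connected user
    writer.close()
    await writer.wait_closed()
    return sent

async def main():
    totals = await asyncio.gather(*(client(i) for i in range(CLIENTS)))
    print(f"{CLIENTS} clients sent {sum(totals)} messages total")

asyncio.run(main())
```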
There is one tool created by Adobe which can help you with performance testing - it's located here (take a look at the attachments of Adobe LiveCycle Data Services 3 ES2 Performance Brief.pdf). It contains instructions on how to configure/run the stress tool. If you cannot manage to run it, let me know.
Just to give you an example, on my machine (i7 Q820, 8GB RAM), using the stress tool I was able to handle 10,000 connected users.

VMware and performance for developing [closed]

Curious - how many of you develop under a VMware environment?
Is it popular for employers to set up VMware for everyone?
Seems like a great way to roll out new desktop computers, perform backups, etc.
Just worried about the performance, though (VMware on PCs).
Update
I was just looking at VMware's site - 1.3 BILLION in sales... wow!
I almost exclusively use Virtual Machines for development and am very happy doing so. The flexibility of multiple sand-boxed environments is definitely worth a small trade in performance.
Clearly a VM will never give you the same results as running on a native system, but you should be able to get performance that's easily within 10-15% of the real thing. In my experience, many of the performance problems people encounter are due to under-specced or poorly configured systems and VMs.
I primarily develop with a Vista x64 virtual machine on a 2.4GHz Core 2 Duo with 4GB of RAM. Of this I assign 2GB of RAM and two virtual cores to my main VM. If I'm running more than one VM, I usually change this to 1-1.5GB and one core.
Here are some quick GeekBench test results. (Note that GeekBench results under OSX and Vista don't seem comparable; they're listed here to show the impact of configs on both systems.)
Fresh boot, no active applications:
Native OSX - 3115
Native OSX running Vista 64 VM - 3042
Native Vista 64 (2.4GHz x 2, 4GB) - 2596
Vista 64 VM (2 VCore, 3GB) - 2362
Vista 64 VM (1 VCore, 2GB) - 1892
These are the most common reasons for poor VM performance, in my experience:
Under-specced machines. Ideally you should be able to dedicate one core and 1GB of memory to each VM you plan to work in. Contrary to what you might read, I've found that Vista runs within a few percent of XP with 1GB of memory.
Running too many things on your VM. Keep your email, web browsing and IMs to Mummy on your native OS.
On your VM, turn off items such as screensavers, background apps and non-essential services. If your VMs are backed up, you may want to turn off System Restore.
If possible, have your VMs on a separate hard drive from your native OS so their disk access is independent if one or the other starts paging.
Defrag your VM drive. It does make a difference.
VMware Workstation 6.5 runs like a champ on my older Athlon X2. I use Visual Studio on my host machine and have many VMs installed with various OS, framework and browser combinations. VMware Workstation adds VM debugging into Visual Studio as well, so I can just hit F6 to start my app in any one of my VMs and debug it under any OS I want. The only catch is that you need at least 4GB of RAM to make it practical to use more than 1 VM at a time.
My company uses VMware to test our webapp using different browsers/OS versions. Everyone has at least 1 VM on their machine for this purpose. We all develop on the native machine, however - even on a quad-core machine with 4GB of RAM, it takes about 20 minutes to do a clean build of our app! For me, I dislike using VM images because of how much paging they do. A few developers here have started using Linux as the host OS and running Windows VMs inside it, and they get much better performance due to reduced paging (Linux is way better at memory and disk cache management, plus it has a better scheduler). The extra VMs for testing that would normally be run inside our Windows instance thus get moved to run side by side on the Linux host, which improves performance.
I switched to developing exclusively in VMs around the time I started doing work with technologies like BizTalk Server, Sharepoint, and betas/CTPs of various things...it just got to be impossible to have all the stuff co-exist on the same box.
Since switching I have enjoyed many other benefits to developing in a VM - snapshots, portability, dynamically marshaling resources, etc.
The ultimate benefit is that VMware has a presence on many different host operating systems, so I am free to select the host OS of my choice - XP, Vista, Linux, OSX, etc.
Now I run OSX on a MacBook Pro, which allows me to do Mac and iPhone development as well as Windows development, all on the same box.
That is the long-winded backstory that brings me to answering the question: as long as your hardware is decently spec'd, you should not run into any performance problems, even doing crazy shit with BizTalk and SQL Server.
We use it where I work. We are even making a dvd with the appliance on it to reduce the time it takes new developers to get up to speed.
Regarding performance, I have seen a performance hit. It seems mostly limited by the hard drive if you have snapshots enabled. Of course, after I moved my VMs to a VelociRaptor, even that performance hit is no longer noticeable.
Oh, I develop ASP websites and C/C++ applications using Visual Studio 2005 and 2008.
Sadly, it's not yet "popular" in the sense of "common," but it's definitely "popular" in the sense of "enjoyed" by those who try it. As a consultant, I love it, since it allows me to swap tool chains in a matter of minutes and, at the end of an engagement, burn a DVD, throw it in the project file, and be done with it.
Several responders seem to be emphasizing the use of VMs for testing, where I think it is beginning to gain some traction, at least within more sophisticated shops. It's clearly a huge win for deployment and compatibility testing.
Depends on the employer, I suppose. On a machine that is adequately-equipped, VMWare (or any virtualization software) performs perfectly fine. On machines that you are more likely to be forced to use at the majority of programming jobs, not so much.
I personally do not use VMWare at work. My work machine barely has enough power to natively handle the tools I need to use.
It's very popular unless the employer is cheap; I've used it at a few companies. It's great for .NET or any language where you have to check whether the thing works on different OS versions/platforms. The most common way is not to use VMware on your own computer but to connect to it remotely.
I've started using VMware for almost everything on my personal PC.
I keep my native Windows install for games only and have separate VMs for everything else:
a general office workstation (MS Office, accounting software, general crapware). This one stays on almost all the time.
a WAMP stack dev environment
a MS stack dev environment
a throwaway environment for beta testing and toying around with things that might break the OS install.
Everything is pretty fast. I use a streamlined WinXP base install that takes up very little space/memory.
Disk I/O seems to be the bottleneck for me, but I feel we are only one generation (6 months?) away from quite affordable SSDs.
I couldn't go back to physical computing.
Once you start using VM's you'll never go back. I use VMware on a MacBook Pro for Windows and Linux development and I'm very happy with the result.
Observations:
get plenty of RAM. 4GB is quite usable, but 8 is better. You're a developer, you have a lot of apps and web pages open, right?
allocate 1 core to the VM - it's faster than 2.
follow VMware's recommendations for allocating RAM to the guests
use a virtual hard drive for the guest OS. It's much faster than running the guest from a BootCamp partition.
VMware doesn't have the WDDM driver needed to enable Aero.
when I did an eval, the VMware Linux host video drivers didn't seem nearly as fast as for Windows or OSX hosts. Video for Windows guests is noticeably slower on a Linux host vs the other two OS's. This was the main reason I chose Mac over a Linux machine.
In my development environment I use a couple of VMs, usually one (Linux) server per role (such as Subversion, MySQL databases, web server, Trac server, etc.). This way my primary machine remains clean, nothing running amok can affect my work, and the data remains secure on the VM host.
VMware is quite high-level; for production I'd recommend using a more low-level, bare-metal solution like Xen.
VMware as a Windows development environment runs terribly on my dual core with 2GB of RAM (XP guest, XP host). Even with nothing running on the host except VMware, there is constant paging that takes about a minute to settle every time I switch applications. Heck, native VS2008 doesn't even run that great during IntelliSense-heavy use (occasional noticeable lag). While using a fixed VM image as my day-to-day working environment has a ton of benefits, the second-to-second performance lag is just too frustrating.
My employer is buying me a nice 64-bit system with a ton of RAM, so I'll revisit the subject in a month. For now I just reimage my machine every couple of months.
Console development obviously performs just fine. For server applications (deployment), where high-memory applications aren't launching and closing, VMware is lovely and performs fine.
I am doing some SharePoint development and I really love the flexibility that comes from using VMware Player on my laptop. I have an image with WSS and the VS2005 tool chain, and another image with MOSS and VS2008/SQL Server 2008 for when I need to push it to the max.
When the 2008 image became corrupt (too many beta versions, I guess), I could just delete it and create a new one from a prior backup.
Being able to develop in a server environment while on the train speaks for itself.
PS: It only takes 4 GB to run the VMware image and it performs really nicely, even with a slow 5600 rpm disk drive.
Personally I would love to use a virtualization solution for my day to day development because of the ability to test and develop on multiple operating systems simultaneously. However, since my day-to-day development involves quite a bit of opengl this currently isn't a workable solution because most of the time the OS on the VM will default back to software rendering due to the lack of drivers and hardware acceleration.
I develop under a VMWare version of my entire network, including; AD Server, DB Server, etc, needless to say the performance is terrible even on our VMWare server that is running 4gb of ram. But it does allow me to develop without fear of accidentally destroying my companies live databases or shutting down an important server in the real world. And if something crazy happens, no biggy, I can just roll it back to yesterday. If my entire network wasn't housed inside the VMWare environment the performance would be incredible, but running all those other systems really bogs it down a lot.
We tried going all-in with VMs, but found that SQL Server running multiple times on the same physical box basically bogged it down to uselessness. However, I don't think we've seen any serious issues once the DBs were removed from the VM stacks.
Virtualization on the desktop/workstation: Sun VirtualBox or VPC. Easy, light. We share our favorite images, keep it casual, and sometimes even sysprep them.
Main QA environments get serious with Lab Manager. It's a beast to get working, but we can't live without it. There's no way we could afford our test matrix in real machines, or maintain it without the template management. Without such a resource, there are probably things you should do and don't.
Long-lived servers or QA DBs: VMware ESX. (No short explanation.)
We don't have perf problems with DBs and virtualization. Well, I did in Lab Manager - which is part of why DBs live on ESX in our shop. For I/O, our IT guys do magic with SAN, iSCSI, and high-quality wiring. It is certainly simpler to avoid perf problems on DB servers if they are bare metal, and probably possible to squeeze out more perf from a dedicated host.
Which brings up what virtualization is and isn't for: virtualization isn't for a scenario where you are maxing out your hardware already. For example, I don't use it to dev on, because I need everything my dev box can give me. It's for replacing dozens of underutilized, hard-to-provision physical servers with dozens of easy-to-provision virtual clones on many fewer hosts. It allows hot-swapping more capacity, or allows engineering flexibility.
I also have some late 90s computer games that I run in virtualized Windows 98.