How many simultaneous streams can be pushed into Wowza

My Wowza Streaming Engine Perpetual Pro Edition is running on a home-built Windows 8 server with:
2 CPUs: Intel Xeon E5-2699 v3 (tray), 18 cores × 2 (36 cores total)
16 RAM slots: 512 GB RAM (32 GB × 16)
1 TB SSD
Internet speed: 1 Gbps upload and download
How many users can push 800 Kbps - 1 Mbps live video streams into my Wowza server at the same time? Please give me an estimate based on this server, with an explanation. Can this setup support 1000 users at the same time?

I would suggest that you run some tests with a load-testing tool. That way any environmental factors will be taken into account and show up in your findings. You will also want to consider your playback protocol, as it may add additional overhead.
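As a rough starting point before load testing, here is a back-of-envelope check in Python. The protocol-overhead and headroom figures below are assumptions for illustration, not measured Wowza limits; with these specs the 1 Gbps link, rather than CPU or RAM, is almost certainly the first constraint:

# Back-of-envelope ingest estimate. Overhead and headroom values are
# assumptions for illustration, not measured Wowza figures.
link_mbps = 1000             # 1 Gbps internet connection
bitrate_mbps = 1.0           # worst case: each publisher pushes ~1 Mbps
protocol_overhead = 0.20     # rough allowance for RTMP/TCP/IP framing
headroom = 0.75              # don't plan on saturating the link

usable_mbps = link_mbps * headroom
per_stream_mbps = bitrate_mbps * (1 + protocol_overhead)
max_publishers = int(usable_mbps // per_stream_mbps)

print(f"~{max_publishers} concurrent ~{bitrate_mbps} Mbps publishers "
      f"fit in {usable_mbps:.0f} Mbps of usable bandwidth")
# -> ~625 at 1 Mbps (and ~781 at 0.8 Mbps), so 1000 simultaneous 1 Mbps
#    publishers would not fit on a 1 Gbps link; confirm with a load test.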

Related

AWS SageMaker Studio Local RAM Overload

Every time I open Studio and get to the JupyterLab page, about 10 seconds later the browser starts to eat local RAM at ~0.1 GB/s and one or two cores max out at 100% utilization.
It continues doing this until all the RAM on my local computer has been used up, then it overloads the swap and everything freezes, requiring a reboot.
This happens with Firefox, Chrome, and the GNOME browser. I have yet to try this on Windows; if possible I would like to get this working with my current environment.
If it helps:
Ubuntu 20.04.1 LTS, 64 bit
Intel i7-7700HQ CPU @ 2.80GHz × 8
GeForce GTX 1060
In all cases there is a rather large spike in bandwidth corresponding with the start of the issue: ~7+ MB/s for ~2 seconds. This seems to correspond with a fetch request being made.
The process name consuming the memory and CPU when using Chrome appears to be:
chrome --type=renderer followed by a bunch of other flags
Occasionally the page will crash before using all the RAM; the error message in Chrome is:
Error code: SIGTRAP
If the SageMaker JupyterLab tab is killed or the browser is closed before the RAM maxes out, the maxed-out cores return to normal utilization and the RAM is released.

Google Compute Engine - Low on Resource Utilisation

I use a VM Instance provided by Google Compute Engine.
Machine Type: n1-standard-8 (8 vCPUs, 30 GB memory).
When I check the CPU utilisation, it never uses more than 12%. I use my VM for running Jupyter Notebook. I have tried loading dataframes that take up 7.5 GiB (and it takes a long time to process the data for simple operations), but the utilisation stays the same.
How can I utilise ~100% of the CPU power?
Or does my program use only 1 of the 8 CPUs ((1/8) × 100 = 12.5%)?
You can run the stress command to impose a configurable amount of CPU, memory, I/O, and disk stress on the system.
Example to stress 4 cores for 90 seconds:
stress --cpu 4 --timeout 90
In the meantime, go to the Google Cloud Console in your browser to check the CPU usage of your VM, or open a new SSH connection to the VM and run the top command to see your CPU status.
After running the commands above, if your CPU can reach over 99%, your instance is working fine and you need to look at your application to find out why it is restricted and cannot use more than 12% of the CPU.
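To illustrate the last point: Jupyter/pandas code is typically single-threaded, so one busy core on an 8-vCPU machine shows up as roughly 12.5% total utilisation. Below is a minimal sketch of spreading work across all vCPUs with multiprocessing; the heavy() function is just a stand-in for whatever per-chunk processing the notebook does:

# Use every vCPU instead of one: split the work into one chunk per core.
import multiprocessing as mp

def heavy(chunk):
    # stand-in for whatever per-chunk processing the notebook does
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    n = mp.cpu_count()                               # 8 on n1-standard-8
    chunks = [range(i, 8_000_000, n) for i in range(n)]
    with mp.Pool(n) as pool:
        totals = pool.map(heavy, chunks)             # one worker per vCPU
    print(sum(totals))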

What type of EC2 instance is better suited for WSO2 CEP Standalone?

Should I use a compute-optimized (c3, c4) or a memory-optimized (r3) instance for running a stand-alone WSO2 CEP server?
I searched the documentation but could not find anything about running this server on EC2.
As per the WSO2 SA recommendations,
Hardware Recommendation
Physical:
3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), 10 GB free disk space (minimum); size the disk based on the expected storage requirements (calculated from file uploads and backup policies). (E.g. if 3 Carbon instances run on one machine, it requires 4 CPU cores, 8 GB RAM, and 30 GB free space.)
Virtual machine:
2 compute units minimum (each unit having a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, 10 GB free disk space. One compute unit for the OS and one for the JVM. (E.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM, and 30 GB free space.)
EC2: a c3.large instance to run one Carbon instance. (E.g. for 3 Carbon instances, use an EC2 extra-large instance.) Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple Carbon instances on a larger instance (c3.xlarge or c3.2xlarge).
NoSQL data nodes:
4 cores, 8 GB (http://www.datastax.com/documentation/cassandra/1.2/cassandra/architecture/architecturePlanningHardware_c.html)
Example
Let's say a customer needs 87 Carbon instances. Hence, they need 87 CPU cores / 174 GB of memory / 870 GB of free space.
This is calculated without considering the resources for the OS. For each machine, they additionally need 1 CPU core and 2 GB of memory for the OS.
Let's say they want to buy 10 machines; then the total requirement will be 97 CPU cores (10 cores for the OS + 87 cores for Carbon), 194 GB of memory (20 GB for the OS + 174 GB for Carbon), and 870 GB of free space for Carbon (normally, storage will be larger than this).
This means each machine gets 1/10 of the above and can run about 9 Carbon instances, i.e. roughly 10 CPU cores / 20 GB of memory / 100 GB of free storage (a short calculation sketch follows after the reference).
Reference : https://docs.wso2.com/display/CLUSTER44x/Production+Deployment+Guidelines
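For reference, a minimal sketch of the sizing arithmetic in the example above, using the per-Carbon-instance figures from the guideline (1 core, 2 GB RAM, 10 GB disk) plus 1 core and 2 GB RAM per machine for the OS:

# Sizing arithmetic from the WSO2 guideline example above.
# Per Carbon instance: 1 core, 2 GB RAM, 10 GB disk.
# Per machine (OS overhead): 1 core, 2 GB RAM.
def size_cluster(carbon_instances, machines):
    cores = carbon_instances + machines            # Carbon cores + OS cores
    ram_gb = carbon_instances * 2 + machines * 2   # Carbon RAM + OS RAM
    disk_gb = carbon_instances * 10                # OS disk not counted here
    return cores, ram_gb, disk_gb

cores, ram, disk = size_cluster(carbon_instances=87, machines=10)
print(f"Total: {cores} cores, {ram} GB RAM, {disk} GB disk")
# -> Total: 97 cores, 194 GB RAM, 870 GB disk
#    i.e. roughly 10 cores / 20 GB RAM per machine, ~9 Carbon instances each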
Note:
However, everything depends on what you are going to process using CEP; therefore, please refer to Tharik's answer as well.
It depends on the type of processing the CEP node does. A CEP node requires a lot of memory if the event size being processed is large, the event throughput is high, or there are time windows in the queries. For those cases, memory-optimized EC2 instances are better, as they provide the lowest price per GB of RAM. If there is a lot of computation in the algorithms you have extended, you might need the additional processing capability of compute-optimized instances.

Libtorrent speed being capped at 1 MB/s

Hi, I'm trying to develop a libtorrent client based on the example client provided by the libtorrent library (client_test.cpp), but I'm seeing the strange behaviour of being capped at 1 MB/s download and upload speed between machines. I've tried using the example client and changing all the settings to the maximum, and I've also tried running client_test in high-performance mode, but I still hit the speed cap. I know it's not a network issue, as transferring a file between these machines over the network through Windows averages ~100 MB/s. Could there be a setting I've missed that's capped at 1 MB/s by default?
"transferring a file between these machines over the network through Windows", that's not the same as using the torrent protocol that can be throttled by isp's.
To test simply try the same .torrent in utorrent on the same machine.
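If uTorrent on the same machine is not capped, the next thing worth ruling out is a session-level rate limit inherited from the example client's settings. Here is a minimal sketch using the libtorrent Python bindings; the same upload_rate_limit / download_rate_limit settings exist in the C++ settings_pack, and the values shown are illustrative:

# Explicitly clear the session-wide rate limits; 0 means unlimited.
import libtorrent as lt

ses = lt.session({
    "upload_rate_limit": 0,    # bytes/s, 0 = unlimited
    "download_rate_limit": 0,  # bytes/s, 0 = unlimited
})

settings = ses.get_settings()
print(settings["upload_rate_limit"], settings["download_rate_limit"])  # 0 0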

Amazon Web Services GPU G2

Today I got set up with an AWS GPU G2 instance (g2.2xlarge). I wanted to test out the 3D hardware capability that is offered, as mentioned here:
http://aws.amazon.com/ec2/instance-types/
Features:
High-frequency Intel Xeon E5-2670 (Sandy Bridge) processors
High-performance NVIDIA GPU with 1,536 CUDA cores and 4 GB of video memory
On-board hardware video encoder designed to support up to eight real-time HD video streams (720p at 30 fps) or up to four real-time FHD video streams (1080p at 30 fps)
Support for low-latency frame capture and encoding for either the full operating system or select render targets, enabling high-quality interactive streaming experiences.
But when I tried running 3DMark 11 to try things out, I got the exception "No DXGI adapters found".
I also noticed that dxdiag says no hardware acceleration is available.
So I'm a bit puzzled as to why I don't see the NVIDIA GPU with 1,500+ CUDA cores.
Also, it would be great if Azure offered 3D compute capabilities.
To answer my own question: there is some setup required before the GPU can be used. One needs to install the NVIDIA GRID K520 driver as well as the latest CUDA toolkit. Finally, install a VNC server on the instance and open the relevant ports on the AWS instance, then install a VNC client on your local PC; that should give you access to the GPU.
Thanks
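As a quick sanity check after installing the GRID driver and CUDA toolkit, the following small wrapper around nvidia-smi (which ships with the driver; it is a convenience, not part of the required setup) confirms the GPU is visible:

# Verify the GPU is visible to the OS once the NVIDIA GRID driver is installed.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(out.stdout.strip() or out.stderr.strip())
# Expected on g2.2xlarge once the driver is in place: something like
# "GRID K520, 4096 MiB"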