Libtorrent speed being capped at 1 MB/s - c++

Hi, I'm trying to develop a libtorrent client based on the example client provided by the libtorrent library (client_test.cpp), but I'm seeing strange behavior: download and upload speed between machines is capped at 1 MB/s. I've tried using the example client with all settings at maximum, and I've also tried running client_test in high-performance mode, but the cap remains. I know it's not a network issue, because transferring a file between these machines over the network through Windows averages ~100 MB/s. Could there be a setting I've missed that is capped at 1 MB/s by default?

"Transferring a file between these machines over the network through Windows" is not the same as using the torrent protocol, which can be throttled by ISPs.
To test, simply try the same .torrent in uTorrent on the same machine.
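If the cap does persist outside ISP throttling, it is worth lifting libtorrent's own limits explicitly rather than relying on the example client's defaults. Below is a minimal configuration sketch (a sketch only, not the poster's actual code; setting names per libtorrent 1.2+, where 0 means "unlimited" for the rate-limit settings):

```cpp
#include <libtorrent/session.hpp>
#include <libtorrent/settings_pack.hpp>

int main() {
    lt::settings_pack sp;
    // 0 means "unlimited" for libtorrent's rate-limit settings.
    sp.set_int(lt::settings_pack::download_rate_limit, 0);
    sp.set_int(lt::settings_pack::upload_rate_limit, 0);
    // Connection and unchoke-slot caps can also throttle throughput;
    // -1 means "unlimited" for unchoke slots.
    sp.set_int(lt::settings_pack::connections_limit, 500);
    sp.set_int(lt::settings_pack::unchoke_slots_limit, -1);
    lt::session ses(sp);
    // ... add torrents and run the event loop as client_test.cpp does ...
}
```

libtorrent also ships tuned presets (e.g. lt::high_performance_seed(), which returns a settings_pack), which is what client_test's high-performance mode applies; comparing your settings against that preset can reveal which knob is still at its default.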

Related

AWS SageMaker Studio local RAM overload

Every time I open Studio and get to the JupyterLab page, about 10 seconds later the browser starts to eat local RAM at ~0.1 GB/s and one or two cores max out at 100% utilization.
It will continue doing this until all RAM on my local computer has been used up, then it overloads the swap and everything freezes, requiring a reboot.
This happens with Firefox, Chrome, and GNOME Web. I have yet to try this on Windows; if possible I would like to get this working in my current environment.
If it helps:
Ubuntu 20.04.1 LTS, 64 bit
Intel i7-7700HQ CPU @ 2.80GHz × 8
GeForce GTX 1060
In all cases there is a rather large spike in bandwidth corresponding with the start of the issue:
~7+ MB/s for ~2 seconds. This seems to correspond with a fetch request being made.
The process consuming the memory and CPU when using Chrome appears to be:
chrome --type=renderer followed by a bunch of other flags.
Occasionally the page will crash before using all the RAM; the error message in Chrome is:
Error code: SIGTRAP
If the SageMaker JupyterLab tab is killed or the browser is closed before RAM maxes out, the maxed-out cores return to normal utilization and the RAM is released.

Higher response times observed in JMeter performance test when run on an AWS Windows machine

While load/performance testing an API (addressed by DNS name) in AWS using JMeter, we observed relatively high response times (~230 ms) on the AWS Windows machine. When the same test is performed on my local machine, the response times are around 110 ms. The throughput (number of samples served) varies widely due to this response-time difference.
The tests were run for 1 hour each, with no delay, three times on both machines. The only difference I see is that my RAM is 16 GB while the AWS machine has 4 GB. Will this really make such a big difference, or is there something I am missing?
AWS Machine configuration:
My local machine configuration:
Can anyone share their thoughts?
I can think of two possible reasons:
Your AWS machine is located in a region geographically farther from the endpoint than your local machine.
It might really be the case that JMeter lacks resources on the AWS instance and hence cannot send requests fast enough, so make sure to:
Monitor the resources available to JMeter using Windows PerfMon, Amazon CloudWatch, or the JMeter PerfMon Plugin; JMeter can be very resource-intensive and needs sufficient headroom to operate.
Follow JMeter Best Practices.

Increase download speed for encrypted OpenSSL

Hey, I've been using OpenSSL as a secure communication method between a client and my server. I've been downloading updates to the client over encrypted SSL and the speed is insanely slow: the packed file is around ~6 MB and takes 2-3 minutes to download. Is there any way to speed up OpenSSL without taking a massive security hit? Thanks!

GCP Compute Engine limits download to 50 KB/s?

For some reason, download traffic to a virtual machine on GCP (Google Cloud Platform) running Debian 9 is limited to ~50 KB/s. Upload seems to be fine, in line with my local upload link.
It is the same with scp or HTTPS download. Any suggestions as to what might be wrong, or where to look?
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Skylake
Zone: europe-west4-a
Network interfaces: Premium tier
Thanks,
Mihaelus
Simple test:
wget https://hrcki.primasystems.si/Nova/assets/download.test.html
Output:
--2018-10-18 15:21:00--  https://hrcki.primasystems.si/Nova/assets/download.test.html
Resolving hrcki.primasystems.si (hrcki.primasystems.si)... 35.204.252.248
Connecting to hrcki.primasystems.si (hrcki.primasystems.si)|35.204.252.248|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 541422592 (516M) [text/html]
Saving to: `download.test.html.1'
0% [ ] 1,073,152  48.7K/s  eta 2h 59m
Always good to minimize variables when trying to diagnose. While it is unlikely that the use of HTTPS is why things are this slow, you might consider using netperf or iperf3 to measure TCP bulk-transfer performance between your VM in GCP and your local system. You can do that either by hand or via PerfKit Benchmarker: https://cloud.google.com/blog/products/networking/perfkit-benchmarker-for-evaluating-cloud-network-performance
It can be helpful to have packet traces to look at, from both ends when possible. Start the packet traces before the test; it is important to see the packets used to establish the TCP connection(s). They do not need to be "full packet" traces, and often you don't want them to be: capturing just the first 96 bytes of each packet is sufficient for this sort of investigation.
You might also consider taking snapshots of the network statistics offered by the OSes running in your GCP VM and local system. For example, if running *nix, take a snapshot of "netstat -s" before and after the test, and perhaps run a traceroute from each end towards the other.
Network statistics and packet traces, along with as many details about the two endpoints as possible, are among the things support organizations are likely to request when helping to resolve an issue of this sort.

How many simultaneous streams can be pushed into Wowza

My Wowza Streaming Engine Perpetual Pro Edition is running on a home-built Windows 8 server with:
2 CPUs: Intel Xeon E5-2699 v3, tray (18 cores × 2)
16 RAM slots: 512 GB RAM (32 × 16)
1 TB SSD
Internet speed: 1 Gb/s upload and download
How many users can push 800 kb/s - 1 Mb/s live video streams into my Wowza server at the same time? Please give me an estimate based on this server, with an explanation. Can this setup support 1000 users at the same time?
I would suggest that you complete some testing by leveraging a load-test tool. That way any environmental factors will be taken into consideration and show up in your findings. You will also want to consider your playback protocol, as it may incur additional overhead.