Limited download speed explanation - google-cloud-platform

We want to download objects from Google Cloud Storage in our application over a 5G network. But the download speed does not rise above 130 Mbps, even though a speed test on the device shows 400 Mbps. Can you please tell me whether there are any restrictions on download speed in this service?

Google Cloud Storage doesn't impose any hard limit on download speed, but the network between your device and Google's infrastructure may affect your download speeds.
Check your ping to GCP regions with this tool. If your data is stored in a location with very high latency, try moving your storage bucket somewhere closer to you.
You can also take a look at this article to find out how you can improve your Google Cloud Storage performance.
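If you want to check that latency from your application rather than with a browser tool, here is a minimal sketch using libcurl (the object URL is a placeholder; substitute your own bucket). The TCP connect time minus the DNS lookup time approximates one round trip to the storage endpoint.

    /* Rough RTT probe to Google Cloud Storage via libcurl (link with -lcurl).
       The object URL is a placeholder; use one of your own. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://storage.googleapis.com/your-bucket/your-object");
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* HEAD: no body transfer */

        if (curl_easy_perform(curl) == CURLE_OK) {
            double dns = 0, connect = 0;
            curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME, &dns);
            curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &connect);
            /* CONNECT_TIME includes the DNS lookup, so subtract it out */
            printf("TCP connect (~1 RTT): %.1f ms\n", (connect - dns) * 1000.0);
        }
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }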

5G implies a mobile device, and I do not know how to tune the TCP settings on a mobile device, but one of the fundamental limits to the performance of a TCP connection is:
Throughput <= WindowSize / RoundTripTime
If you cannot make the RoundTripTime smaller by accessing data hosted closer to you, you can seek to increase the WindowSize.
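To put illustrative numbers on that bound (both values assumed, not measured): a 64 KiB window over a 50 ms round trip caps out near 10 Mbit/s, and sustaining 400 Mbit/s at that RTT needs a window of roughly 2.4 MiB.

    /* Bandwidth-delay arithmetic for the bound above.
       Window size and RTT are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double window_bytes = 64.0 * 1024;  /* 64 KiB receive window (assumed) */
        double rtt_s        = 0.050;        /* 50 ms round trip (assumed)      */

        /* Throughput <= WindowSize / RoundTripTime */
        double max_bps = window_bytes * 8.0 / rtt_s;
        printf("Max throughput: %.1f Mbit/s\n", max_bps / 1e6);      /* ~10.5 */

        double target_bps = 400e6;          /* the device's speed-test figure  */
        printf("Window needed for 400 Mbit/s: %.0f KiB\n",
               target_bps * rtt_s / 8.0 / 1024.0);                   /* ~2441 */
        return 0;
    }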

Related

Reduce TTFB while requesting public images from Google Cloud Storage [duplicate]

This question already has answers here: Is Google Cloud Storage an automagical global CDN? (4 answers)
Based on the discussions here I made my Google Cloud Storage image public, but it's still taking a lot of time to first byte (TTFB). Any idea why? How can I reduce the TTFB when calling Google Cloud Storage? My URL and a snapshot of what I see in developer tools are given below.
Public image URL
Ok, now I understand your question. Your concern is how to reduce the TTFB when requesting an image from Google Cloud Storage. There is no magic way to reduce the TTFB to 0; that is near impossible. Time to First Byte is how long the browser has to wait before it starts receiving data. For the specific case of Google Cloud Storage it is (roughly) the time between you requesting an image, that request being delivered to the Google server where your image is stored, and that server finding the image and starting to deliver it to you.
This will depend on two main factors:
The speed at which the message is transported to/from the server. This will depend on the speed of your connection and the distance between the server and you. It is not the same to fetch an image from the USA as from India; the two will give you very different TTFBs.
You can see this in the following example, where I fetch the same image from two different buckets with a public policy. For reference, I'm in Europe.
Here is my result calling the image from a bucket in Europe:
And here is my result calling the image from India:
As you can see, the download time doesn't increase that much, while the TTFB is doubled.
The second factor, if you want to reduce your TTFB, is the speed at which the request is processed by the server. In this case you don't have much influence, since you are requesting the image directly from Google Cloud Storage and you can't modify the code. The only way to influence this is by removing work from the request. Making the image public helps with this, because the server no longer has to check credentials or permissions; it just sends you back the image.
So, in conclusion, there is not much you can do here to reduce the TTFB beyond selecting a bucket location closer to your users and improving your internet speed.
I found this article really useful; it could help you better understand TTFB and how to interpret its measurement.
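If you want a number instead of a developer-tools screenshot, libcurl exposes the same timing breakdown; a minimal sketch (the image URL is a placeholder; substitute your own public object):

    /* Measure TTFB for a public GCS object with libcurl (link with -lcurl).
       The URL is a placeholder; substitute your own public image. */
    #include <stdio.h>
    #include <curl/curl.h>

    /* Discard the body; only the timings matter here */
    static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata) {
        (void)ptr; (void)userdata;
        return size * nmemb;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://storage.googleapis.com/your-bucket/image.png");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

        if (curl_easy_perform(curl) == CURLE_OK) {
            double ttfb = 0, total = 0;
            /* STARTTRANSFER_TIME is the TTFB: DNS + connect + TLS + server wait */
            curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &ttfb);
            curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total);
            printf("TTFB: %.1f ms, total: %.1f ms\n", ttfb * 1000.0, total * 1000.0);
        }
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }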
Thanks, I have moved my bucket to a location near my users. That has reduced the time.

How to adjust and measure network performance on AWS

Lately, I have been struggling to understand what my network speed (downlink) is between nodes on AWS (in a multi-homed cluster, computers in different regions).
I see a lot of fluctuation when I measure it with a script I have written (based on this link and SCP) or with iperf.
I believe it is due to network utilization, which changes rapidly (mostly between regions), but I still don't understand from the AWS documentation what performance I am paying for, for example a minimum and a maximum downlink rate (AWS instances).
At first I tried the T2 type, and since I saw it had burst CPU performance, I thought that maybe the NIC performance was also bursty, so I moved to the M4 type, but I got the same problems with M4.
Is there any way to know my NIC downlink rate based on the type and flavor?
*I have asked a similar question on the AWS forum, but I haven't received a response (https://forums.aws.amazon.com/thread.jspa?threadID=296389).
There is no way to get a better indication than your own measurements. AWS does not publish anything indicating this performance, except for the larger instance types where network performance is actually specifically given, e.g. the m5.12xlarge having 10 Gbps. Most likely network performance does have a burst component for smaller instance types.
There are pages with other people's benchmarks, but you won't find any official answer for any of this.

Bandwidth in Amazon AWS: trying to calculate TCO

Trying to work out the TCO for Amazon AWS networking: what does 400 GB of upload and download per month correspond to in Mbit/s in the network calculator?
You cannot calculate the raw network bandwidth (Mbits per second) of the data centre link from an average download rate, or vice versa.
The download rate will depend on all sorts of other factors, including end-to-end network latency, the raw bandwidth in the entire network path from client to server, network congestion, the ability of the end points to read / write the data from / to disk, etcetera.
Also, you need to decide if you want your downloads / uploads to happen quickly or slowly. What "user experience" do you want your users to have?
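As a rough illustration of why the two figures don't convert: 400 GB spread evenly over a month is about 3.2 × 10^12 bits over roughly 2.6 × 10^6 seconds, i.e. an average of only ~1.2 Mbit/s. A link that slow could technically carry the monthly volume, but every individual transfer would crawl; the Mbit/s figure in the calculator is about how fast each transfer feels, not about monthly volume.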

Can I improve performance of my GCE small instance?

I'm using cloud VPS instances to host very small private game servers. On Amazon EC2, I get good performance on their micro instance (1 vCPU [single hyperthread on a 2.5GHz Intel Xeon], 1GB memory).
I want to use Google Compute Engine though, because I'm more comfortable with their UX and billing. I'm testing out their small instance (1 vCPU [single hyperthread on a 2.6GHz Intel Xeon], 1.7GB memory).
The issue is that even when I configure near-identical instances with the same game using the same settings, the AWS EC2 instances perform much better than the GCE ones. To give you an idea: while the game isn't Minecraft, I'll use that as an example. On the AWS EC2 instances, successive world chunks load perfectly fine as players approach the edge of a chunk. On the GCE instances, even on more powerful machine types, chunks fail to load after players travel a certain distance, and they must disconnect from and log back in to the server to continue playing.
I can provide more information if necessary, but I'm not sure what is relevant. Any advice would be appreciated.
Diagnostic protocols to evaluate this scenario may be more complex than you want to deal with. My first thought is that this shared core machine type might have some limitations in consistency. Here are a couple of strategies:
1) Try backing into the smaller instance. Since you only pay for 10 minutes at a time, you could check whether the performance is better on larger machine types and then work your way back down. If you have consistent performance problems no matter what the size of the box, then I'm guessing it's something to do with the nature of your application and the nature of their virtualization technology.
2) Try measuring the consistency of the performance. I get that it is unacceptable, but is it unacceptable based on how long it's been running? The nature of the workload? The time of day? If the performance is sometimes good but sometimes bad, then it's probably once again related to the type of your workload and their virtualization strategy.
Something Amazon is famous for is consistency. They work very hard to manage the consistency of the performance; it shouldn't spike up or down.
My best guess here, without all the details, is that you are using a very small disk. GCE throttles disk performance based on its size. You have two options: attach a larger disk or use PD-SSD.
See here for details on GCE Disk Performance - https://cloud.google.com/compute/docs/disks
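If you want to test the disk-throttling theory before resizing anything, here is a minimal sketch that times a sequential write (file name and sizes are arbitrary test values):

    /* Time a sequential write to estimate disk throughput (POSIX).
       File name and sizes are arbitrary test values. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define CHUNK  (1 << 20)     /* 1 MiB per write() */
    #define CHUNKS 256           /* 256 MiB total     */

    int main(void) {
        static char buf[CHUNK];
        memset(buf, 'x', sizeof buf);

        int fd = open("disk_test.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < CHUNKS; i++)
            if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
        fsync(fd);               /* make sure the data actually hit the disk */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);
        unlink("disk_test.tmp");

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("Sequential write: %.1f MB/s\n", CHUNKS * (CHUNK / 1e6) / secs);
        return 0;
    }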
Please post back if this helps.
Anthony F. Voellm (aka Tony the #p3rfguy)
Google Cloud Performance Team

Determine available upload/download bandwidth

I have an application which does file upload and download. I am also able to limit the upload/download speed to a desired (configurable) level, so that my application does not consume the whole available bandwidth. I achieve this using the libcurl (HTTP) library.
But my question is: if I have to limit my upload speed to, say, 75% of the available upload bandwidth, how do I find out my available upload bandwidth programmatically, preferably in C/C++? If it is pre-configured, I have no issues; but if it has to be learnt and adapted each time, as in the 75%-of-available example, I do not know how to figure it out. The same applies to download. Any pointers would be of great help.
There's no way to determine the absolute network capacity between two points on a regular network.
The reason is that traffic can be rerouted along the way, other data streams can appear or disappear, or links can be severed.
What you can do is figure out what the available bandwidth is right now. One way to do it is to upload/download a chunk of data (say 1 MB) as fast as possible (no artificial caps) and measure how long it takes. From that you can work out the currently available bandwidth and base your cap on it.
You could periodically measure the bandwidth again to make sure you're not too far off.
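Since you are already using libcurl, here is a sketch of that measure-then-cap approach (the test URL is a placeholder): download a chunk uncapped, read the observed rate, then cap subsequent transfers at 75% of it.

    /* Measure current download bandwidth with libcurl, then cap later
       transfers at 75% of it (link with -lcurl; the URL is a placeholder). */
    #include <stdio.h>
    #include <curl/curl.h>

    /* Discard the body; we only want the transfer rate */
    static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata) {
        (void)ptr; (void)userdata;
        return size * nmemb;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* 1) Uncapped test download of a ~1 MB object */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/testfile-1mb.bin");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
        if (curl_easy_perform(curl) != CURLE_OK) return 1;

        double bytes_per_sec = 0;
        curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD, &bytes_per_sec);
        printf("Available now: ~%.2f Mbit/s\n", bytes_per_sec * 8.0 / 1e6);

        /* 2) Cap subsequent transfers on this handle at 75% of that */
        curl_off_t cap = (curl_off_t)(bytes_per_sec * 0.75);
        curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, cap);
        /* ... perform the real downloads here, re-measuring periodically ... */

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

The same pattern works for uploads with CURLINFO_SPEED_UPLOAD and CURLOPT_MAX_SEND_SPEED_LARGE. As the answer above notes, re-measure periodically, since the available bandwidth changes over time.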