How to increase the number of sockets on an AWS EC2 instance?

I'm trying to run the same batch process (Linux) on an Equinix VM and an EC2 instance. The configuration of the machines is the same, yet the EC2 process runs about 10x slower than the Equinix one.
I found the difference in sockets: 8 sockets on the Equinix machine, while only 1 on EC2 (from lscpu).
Note: the batch process uses WebURL to fetch features.
I figured that increasing the number of sockets when launching the instance might bring it up to the pace of the Equinix VM.
Any leads on how to go about doing that? Or any other hints to improve the performance of the EC2 instance?
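Before changing anything, it may help to compare the full CPU topology (sockets, cores per socket, threads) on both machines rather than just the socket count. A minimal sketch, assuming a Linux host with lscpu on the PATH:

    # Prints socket, core and vCPU counts as reported by lscpu, so the two
    # machines can be compared side by side.
    import subprocess

    def cpu_topology():
        out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
        info = {}
        for line in out.splitlines():
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
        return {
            "sockets": info.get("Socket(s)"),
            "cores_per_socket": info.get("Core(s) per socket"),
            "threads_per_core": info.get("Thread(s) per core"),
            "logical_cpus": info.get("CPU(s)"),
        }

    if __name__ == "__main__":
        print(cpu_topology())

Running this on both machines shows whether the EC2 instance actually has fewer vCPUs overall (sockets × cores per socket × threads), which matters more for a batch workload than how the CPUs are grouped into sockets.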

Related

What can I do to solve a Google Cloud VM instance network problem?

The network was working before and I have not changed anything on the VM. After a few months, I can no longer access the VM instance.
The VM instance is running.
I get "Request timed out" when pinging its external network IP address.
I cannot access SSH, even though the SSH port is open properly.
When troubleshooting my SSH connection status in the browser, it gets stuck on "Network status".
What should I do to find the reason for the problem? After I restart the VM instance a few times, it runs normally for a period, but the problem appears again.
Any idea how to make sure the VM instance will not disconnect from the external network for this reason again?
Here is the resource consumption of my VM:
In this case, the VestaCP minimum system requirements for VM instances should be okay, but you should also consider the workload running on your VM instance.
I recommend switching to a higher N1 machine type to provide good performance for the workload and machine requirements.
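To narrow down when the instance loses external connectivity, it can also help to log connectivity from inside the VM and correlate the failures with restarts or load. A rough sketch (the target host, interval and log path are placeholders, not GCP-specific settings):

    # Periodically checks whether an outbound TCP connection can be opened and
    # appends the result to a local log file. Target, interval and log path are
    # illustrative placeholders.
    import socket
    import time
    from datetime import datetime

    TARGET = ("8.8.8.8", 53)          # any reliably reachable external host/port
    INTERVAL_SECONDS = 60
    LOGFILE = "connectivity-check.log"

    def check_once(timeout=5):
        try:
            with socket.create_connection(TARGET, timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        status = "OK" if check_once() else "FAIL"
        with open(LOGFILE, "a") as f:
            f.write(f"{datetime.utcnow().isoformat()} {status}\n")
        time.sleep(INTERVAL_SECONDS)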

How do I run distributed Locust load testing on AWS EC2 without running multiple sessions?

I'm trying to run a Locust test via EC2 but am running into high CPU usage problems. I would like to distribute the load via master-slave processes, but is there a way to do it without creating multiple EC2 instances and logging into each one to run commands? The closest thing I found was:
https://aws.amazon.com/blogs/devops/using-locust-on-aws-elastic-beanstalk-for-distributed-load-generation-and-testing/
Here they use Elastic Beanstalk, but some of the info seems quite dated.
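For reference, recent Locust versions only need the same locustfile on every node plus master/worker flags, so the per-instance commands can be baked into EC2 user data or an AMI rather than run by hand in separate sessions. A minimal locustfile sketch (the target host and endpoint are placeholders):

    # Minimal locustfile sketch; start one master and any number of workers
    # with the same file, e.g.:
    #   locust -f locustfile.py --master
    #   locust -f locustfile.py --worker --master-host=<master-private-ip>
    from locust import HttpUser, task, between

    class WebsiteUser(HttpUser):
        host = "https://example.com"    # placeholder target
        wait_time = between(1, 3)

        @task
        def index(self):
            self.client.get("/")         # placeholder endpoint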

Outgoing network performance (AWS)

There is an external HTTP server (located somewhere in the US) which we must communicate with. We use AWS EC2 instances.
While we can buy a "bigger instance" to improve internal network performance, is there a way to lessen (optimize?) the round-trip time between our EC2 instance and the external server? Are there any tools that could be useful?
You haven't specified what type of EC2 instance you use, which is a big factor determining the network performance.
You also said
from my home network, it is much faster than when running on an AWS EC2 (regardless of where the ec2 is hosted)
I know nothing about your home network or your EC2 instance config, so this is hard to judge, but I'd expect, on average, the EC2 instance to have a faster network than what's available at the end user's site.
It's also not 100% clear what you are measuring. You said "round trip time", so are you only interested in end-to-end latency? Any particular throughput requirements?
That said, here's a useful cheat sheet which you can download to check your instance type: https://cloudonaut.io/ec2-network-performance-cheat-sheet/
Furthermore, you can use iperf (or iperf3) to perform some experiments on both sides of the connection:
https://www.tecmint.com/test-network-throughput-in-linux/
https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/
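If latency rather than throughput is the real concern, a quick complement to iperf is to time repeated TCP connection setups to the target from both locations; each successful connect takes roughly one network round trip. A small sketch (hostname and port are placeholders for the external server):

    # Rough RTT probe: times TCP connection setup to a host. The hostname and
    # port below are placeholders.
    import socket
    import statistics
    import time

    HOST, PORT = "example.com", 443
    samples = []
    for _ in range(10):
        start = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=5):
            pass
        samples.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)

    print(f"connect time ms  min={min(samples):.1f}  "
          f"median={statistics.median(samples):.1f}  max={max(samples):.1f}")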

AWS - EC2 and RDS in different regions is very slow

I'm currently in Sydney and I have the following scenario:
1 RDS in N. Virginia
1 EC2 in Sydney
1 EC2 in N. Virginia
I need this for redundancy, and this is the simplified scenario.
When my app on the EC2 in Sydney connects to the RDS in N. Virginia, it takes almost 2.5 seconds to give me the result. We can think: OK, that's the latency.
BUT, when I send the request to the EC2 in N. Virginia, I get the result in less than 500 ms.
Why is there such a slow connection when you access RDS from outside its region?
I mean: I experience this slow connection when I'm running the application on my own computer too. But when the application is in the same region as the RDS, it works quicker than on my own computer.
Most likely your request to RDS requires multiple round trips to complete, i.e. first your EC2 instance requests something from RDS, then something else based on the first response, etc. Without seeing your database code, it's hard to say exactly what might be the cause.
You say that when you talk to the remote EC2 instance instead, you get the response in less than 500 ms. That suggests that setting up a TCP connection and sending a single request with a reply takes about 500 ms. Based on that, my guess is that your database conversation requires at least 5x that back-and-forth traffic.
There is no additional penalty with RDS for using it out of region, but most database protocols are not optimized for high-latency conditions. You might be much better off setting up a read replica in Sydney.
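To see why the round trips dominate, compare a query-per-row pattern with a single statement that returns the same data; with roughly 200 ms of Sydney to N. Virginia latency, every extra statement adds about one round trip. A rough sketch assuming a DB-API-style driver (e.g. psycopg2 or pymysql); the tables and columns are placeholders:

    # Illustration of round-trip amplification over a high-latency link.
    # `conn` stands in for a DB-API connection; table/column names are placeholders.

    def n_plus_one(conn):
        """One extra query per order: 1 + N round trips."""
        cur = conn.cursor()
        cur.execute("SELECT id FROM orders WHERE status = 'open'")
        ids = [row[0] for row in cur.fetchall()]
        items = []
        for oid in ids:
            cur.execute("SELECT * FROM order_items WHERE order_id = %s", (oid,))
            items.extend(cur.fetchall())
        return items

    def single_join(conn):
        """The same result set in one statement: roughly one round trip."""
        cur = conn.cursor()
        cur.execute(
            "SELECT oi.* FROM order_items oi "
            "JOIN orders o ON o.id = oi.order_id "
            "WHERE o.status = 'open'"
        )
        return cur.fetchall()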
If you are connecting to the RDS instance over the public-facing network, it can be slow. AWS has launched cross-region VPC peering; peer the VPCs of both regions (make sure there is no IP range conflict) and try to connect using the private connection.
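In case it helps, requesting a cross-region peering connection can be scripted with boto3; a rough sketch (all IDs and regions are placeholders, the peer side still has to accept the request, and route tables and security groups need updating afterwards):

    # Requests a cross-region VPC peering connection from the Sydney side.
    # IDs and regions are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")   # requester: Sydney
    resp = ec2.create_vpc_peering_connection(
        VpcId="vpc-11111111",        # placeholder: Sydney VPC
        PeerVpcId="vpc-22222222",    # placeholder: N. Virginia VPC
        PeerRegion="us-east-1",
    )
    pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    print(pcx_id)

    # The accepter side (us-east-1) then accepts the request:
    # boto3.client("ec2", region_name="us-east-1").accept_vpc_peering_connection(
    #     VpcPeeringConnectionId=pcx_id)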

Usefulness of Amazon ELB (Elastic Load Balancing)

We're considering implementing an ELB in our production Amazon environment. It seems it will require that production server instances be synced by a nightly script. Also, there is a Solr search engine which will need to be replicated and maintained on each paired server. There's also the issue of debugging: which server is a request going to? If there's a crash, do you have to search both logs? If a production app isn't behaving, how do you isolate which instance it is, or do you just deploy debugging code to both instances?
We aren't having issues with response time or server load. This seems like added complexity in exchange for a limited upside. It seems like it may be overkill to me. Thoughts?
You're enumerating the problems that arise when you need high availability :)
You need to consider how critical the availability of the service is and take that into account when deciding what the right solution is, or whether this would just be over-engineering :)
Solutions to some caveats:
To avoid nightly syncs: use an EC2 instance as an NFS server and mount the share on both EC2 instances (or use Amazon EFS when it's available).
Debugging problem: you can configure the EC2 instances behind the ELB with public IPs, restricted in the Security Groups to just the developers' PCs, and when debugging point your /etc/hosts (or Windows equivalent) at one particular server.
Logs: store the logs in S3 (or on the NFS server mentioned above).
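For the S3 option, shipping a log file is straightforward with boto3; a minimal sketch (bucket name, key layout and local path are placeholders, and credentials are assumed to come from an instance role with s3:PutObject on the bucket):

    # Uploads a local log file to S3 under a per-host, per-day key.
    # Bucket, key layout and local path are placeholders.
    import socket
    from datetime import date

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-app-logs"                                        # placeholder bucket
    key = f"app/{socket.gethostname()}/{date.today()}/app.log"
    s3.upload_file("/var/log/myapp/app.log", bucket, key)         # placeholder path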