How do I load balance an FTP server running in a VM with different users? The FTP server is a passive one that only handles ingest. How do I make it autoscale if I add more users dynamically? Is this specific to the server I chose? Currently I'm using python-ftp server, and its documentation says it can support a maximum of 300 users. Doesn't Elastic Load Balancing in AWS support FTP?
If it is possible, what ports should I allow in the load balancer in GCP?
Or should I naively just increase the capacity of my VM?
Thanks!!
Elastic load balancing ports support in GCP
You will need to use (install) an FTP server that supports load balancing. Google Cloud does not offer an FTP server service or software product.
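If you do run your own FTP server on a VM, the passive port range is the part you have to pin down so the firewall or load balancer can allow it. Here is a minimal sketch assuming the "python-ftp server" in the question is pyftpdlib; the passive range, masquerade address, credentials and paths are placeholders, not recommendations:

```python
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# Hypothetical user and home directory.
authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer
# Pin the passive data ports so the same range can be opened on the firewall / LB.
handler.passive_ports = range(60000, 60101)
# Public address clients should use for passive connections (placeholder).
handler.masquerade_address = "203.0.113.10"

server = FTPServer(("0.0.0.0", 21), handler)
server.max_cons = 300  # the 300-connection figure mentioned in the question
server.serve_forever()
```

With a fixed passive range like this, the ports to allow are the control port (21) plus that range (here 60000-60100), whatever numbers you actually choose.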
Related
I have tried to search for relevant info but couldn't find anything useful. Please point me to some links on this.
I would like to know what is the best way to:
Connect to on-premise SOAP services from the AWS cloud
Connect to on-premise Java RMI services
Use on-premise FTP to exchange files
Thanks
Connecting to a SOAP, Java RMI or FTP service on-premise is something that will be part of your application logic implementation. Which infrastructure you choose to deploy your application on is a matter of choice, depending on factors like what knowledge you have, what other requirements your application has, and so on. Provided that you have configured your on-premise servers so that they are reachable from the public internet, you can deploy your application using any server hosting option. For AWS specifically, EC2, Elastic Beanstalk and the container options EKS and ECS come to mind, in addition to Lambda.
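To illustrate the FTP part of that list as plain application code, here is a minimal sketch using Python's standard ftplib; the hostname, credentials and file paths are placeholders for whatever your on-premise setup actually exposes:

```python
from ftplib import FTP

# Hypothetical on-premise FTP endpoint reachable from the cloud-hosted app.
FTP_HOST = "ftp.onprem.example.com"

with FTP(FTP_HOST) as ftp:
    ftp.login(user="app-user", passwd="app-password")

    # Upload a file produced by the application.
    with open("/tmp/export.csv", "rb") as fh:
        ftp.storbinary("STOR inbound/export.csv", fh)

    # Download a file the on-premise side dropped for us.
    with open("/tmp/import.csv", "wb") as fh:
        ftp.retrbinary("RETR outbound/import.csv", fh.write)
```

SOAP and RMI calls would similarly live in your application code, using whatever client library your stack already provides.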
With the Cloud Foundry "Polyglot" feature for integrated service discovery and direct communication between service containers through internal routes, how does load balancing work? Does Cloud Foundry take care of load balancing? Is there a way to use client-side load balancing, something like Ribbon, on top of this polyglot-enabled communication?
When you are using container-to-container networking...
If you connect directly to IP addresses, no load balancing is done.
If you use the platform's DNS based polyglot service discovery, then you will get limited load balancing via round-robin DNS.
With the polyglot service discovery feature, DNS responses are rotated so that IPs are listed in different orders in the response. You can observe/validate this by doing the following:
Map an internal route to an app
Scale the same app up to have two or more instances
Use cf ssh to get into any app container
Inside the container, run dig <internal-route>
Repeat the last step any number of times. You should see the response from DNS come back with IP addresses in a different order (they are rotated).
That said, there is nothing to stop you from using a different form of load balancing, be that a reverse proxy app you have deployed or something client-side like Ribbon.
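As an illustration of the client-side option, here is a minimal Python sketch that resolves the internal route itself and picks an instance per request instead of trusting the DNS ordering; the route name, port and path are assumptions, not anything the platform defines:

```python
import random
import socket
import urllib.request

INTERNAL_ROUTE = "myapp.apps.internal"  # hypothetical internal route mapped to the app
PORT = 8080                             # hypothetical port the app listens on

def resolve_instances(hostname):
    # A lookup of the internal route returns the IPs of all instances
    # (the order rotates, as described above).
    infos = socket.getaddrinfo(hostname, PORT, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def call_random_instance(path="/health"):
    ips = resolve_instances(INTERNAL_ROUTE)
    ip = random.choice(ips)  # client-side choice instead of relying on DNS order
    with urllib.request.urlopen(f"http://{ip}:{PORT}{path}", timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    print(call_random_instance())
```

A real client-side balancer like Ribbon adds retries, health checks and smarter selection policies on top of the same idea.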
What I am trying to achieve:
sftp server.greedyguides.com
I basically want to connect a subdomain to a load balancer that listens on port 22. I know I can ssh/sftp using the IP, but I also wanted to set up a domain version of that.
PS: I have never really asked questions on here, so sorry if format is bad.
SFTP would not be an appropriate protocol to serve via a Load Balancer.
The concept of a Load Balancer is that requests are spread across targets (typically Amazon EC2 instances). Using HTTP as an example, a person might request a page and Server 1 returns the response. When they click a link and request another page, it might be served from Server 2.
However, SFTP wouldn't be happy being served by multiple computers. One computer might provide the list of available files, but when the user requests a file, that request might go to a different computer that does not have the same set of files. SFTP was not designed as a horizontally scalable system.
From a technical perspective, an Application Load Balancer will only work with web (HTTP) requests. A Network Load Balancer might be able to serve SFTP traffic because it does not modify the content of the requests being passed to the targets.
If you wish to provide an SFTP service to your users, I would recommend AWS Transfer for SFTP:
AWS Transfer for SFTP (AWS SFTP) is a fully managed AWS service that enables you to transfer files over Secure File Transfer Protocol (SFTP), into and out of Amazon Simple Storage Service (Amazon S3) storage. SFTP is also known as Secure Shell (SSH) File Transfer Protocol. SFTP is used in data exchange workflows across different industries such as financial services, healthcare, advertising, and retail, among others.
As a managed service, AWS takes care of scaling the system, so you don't need to load balance or manage the SFTP servers.
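If you go the managed route, the server can also be created programmatically. Here is a rough boto3 sketch; the role ARN, user name, bucket path and public key are placeholders, and only the most basic options are shown:

```python
import boto3

transfer = boto3.client("transfer")

# Create a service-managed SFTP endpoint (placeholder values throughout).
server = transfer.create_server(
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Add a user whose home directory maps into an S3 bucket.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="example-user",
    Role="arn:aws:iam::123456789012:role/example-sftp-access-role",
    HomeDirectory="/example-bucket/example-user",
    SshPublicKeyBody="ssh-rsa AAAA... example-user",
)
```

The IAM role is what grants the user access to the S3 bucket; scaling and the SFTP endpoint itself are handled by the service.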
This is nothing different from using an NLB for any other purpose.
There is a valid use case for putting an NLB in front of SFTP servers when those servers are synchronized via NAS or EFS and clients upload files to them over SFTP.
In that case, all you do is create a TCP listener on NLB port 22 and add forwarding rules for however many SFTP servers you have with the NAS or EFS mounted.
Think of microservices uploading files to EFS via SFTP servers, using key pairs for authentication for better security (user ID/password authentication isn't strong); a sketch of such a client follows below.
Also, you don't want all the load going to one SFTP server.
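To make the key-pair point concrete, here is a minimal Python sketch using paramiko; the NLB DNS name, key path, username and file paths are all placeholders:

```python
import paramiko

HOST = "sftp.example.internal"          # hypothetical NLB DNS name in front of the SFTP servers
KEY_PATH = "/etc/keys/service_id_rsa"   # hypothetical private key for this microservice

# Key-based authentication instead of user id / password.
key = paramiko.RSAKey.from_private_key_file(KEY_PATH)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=22, username="svc-user", pkey=key)

# Upload lands on whichever backend the NLB picks; all backends share the same EFS/NAS mount.
sftp = client.open_sftp()
sftp.put("/tmp/report.csv", "/uploads/report.csv")
sftp.close()
client.close()
```

Because every backend mounts the same EFS or NAS share, it does not matter which SFTP server the NLB forwards the connection to.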
My dedicated server will expire soon, and I am considering whether to renew it or migrate to Google Cloud Platform.
There are several points that need to be considered:
Currently I am using the Google Cloud Storage API to host large static files for my website. That will be fine.
My website also contains dynamic content, such as PHP. Will Google support hosting such content?
My website also uses WordPress and a MySQL database. Will Google support hosting these?
My server also hosts mailboxes and mail forwarders. Will Google support hosting these?
My server also hosts several add-on domains via cPanel. Will Google support hosting these?
Best of all, is it possible to use cPanel on Google Cloud Platform, as I am familiar with cPanel?
Thanks
Yes, you can migrate a dedicated server to Google Compute Engine. It is possible to run cPanel on a GCE instance. From your question, it sounds like you are used to a managed service where they have configured the server for you. GCE is not managed, so you will have to do much more systems administration to set it up and operate the server.
It is not easy to run email on a GCE instance because outbound port 25 is blocked by default.
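The usual workaround for outbound mail is to send through an external relay on port 587 (or 2525) rather than port 25. A minimal sketch with Python's standard smtplib, where the relay host, credentials and addresses are placeholders for whichever relay provider you choose:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical third-party SMTP relay reachable on 587 (outbound 25 is blocked on GCE).
RELAY_HOST = "smtp.relay.example.com"
RELAY_PORT = 587

msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Test"
msg.set_content("Sent via an external relay because GCE blocks outbound port 25.")

with smtplib.SMTP(RELAY_HOST, RELAY_PORT) as smtp:
    smtp.starttls()                               # upgrade to TLS on the submission port
    smtp.login("relay-user", "relay-password")    # hypothetical relay credentials
    smtp.send_message(msg)
```

Receiving mail on the instance is a separate matter of running and securing your own mail server, which cPanel can manage but which adds administration work.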
I have a small nodejs application containing a web socket server.
The app is hosted inside an ECS container, so it is basically a Docker image running on an EC2 instance.
The web socket works as expected over ws://. I use port 5000 for this.
In order to use it on my SSL-secured website (https), I need to use a secured web socket connection over wss://.
To achieve that, I created a certificate on AWS (as I have done many times before) and then created a load balancer.
I tried an application load balancer, a network load balancer and the classic load balancer (previous generation).
I read a few answers here on StackOverflow and followed the instructions as well as some tutorials found using google.
I have tried a lot without success, and each attempt is slow because creating a load balancer and the other resources takes quite a bit of time.
How do I create a load balancer on AWS pointing to my instance with wss://? Could someone please provide an example or instructions?
The solution posted at https://anandhub.wordpress.com/2016/10/06/websocket-ebs/ appears to work well.
Rather than selecting HTTPS and HTTP, select 'SSL' on port 443 and 'TCP' on your application's port (e.g. 5000).
You'll need to load your key/certificate into AWS, and the load balancer will handle the secure part. I suspect you cannot take advantage of the 'sticky' features of the LB with this method.
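Once the SSL listener terminates TLS on 443 and forwards plain TCP to port 5000, the secured connection can be verified from any client. A minimal sketch with the third-party Python websockets package, where the domain is a placeholder and the echo behaviour is assumed to be whatever your app implements:

```python
import asyncio
import websockets  # third-party package: pip install websockets

# Hypothetical domain pointing at the load balancer that terminates TLS on 443.
URI = "wss://sockets.example.com"

async def check():
    async with websockets.connect(URI) as ws:
        # TLS ends at the LB; the app behind it still speaks plain ws:// on port 5000.
        await ws.send("ping")
        # Assumes the app sends something back; adjust for your own protocol.
        print(await ws.recv())

asyncio.run(check())
```

If the handshake succeeds over wss:// while the app itself only listens on plain ws://, the certificate and listener setup on the load balancer is doing its job.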