Can I configure a multi-clustered Sitecore content management server with StateServer session mode? I am using Sitecore 8.1. I tried it with one server instance and it worked for me, but I am not sure about a multi-clustered environment with load balancing.
You can configure a session state server with clusters of content delivery or processing servers.
There are several scenarios you can configure:
Single standalone server
Single content delivery server and a separate content management server
Content delivery cluster with a sticky load balancer
Content delivery cluster with a non-sticky load balancer
For more information, you can review https://doc.sitecore.net/sitecore_experience_platform/setting_up__maintaining/xdb/session_state/session_state_configuration_scenarios
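For StateServer mode specifically, private session state is configured with the standard ASP.NET <sessionState> element in web.config, and every content management instance in the cluster has to point at the same state service so a load-balanced (especially non-sticky) setup sees the same session data. A minimal sketch, assuming the default ASP.NET State Service port 42424; the host name is a placeholder:

```xml
<configuration>
  <system.web>
    <!-- Every CM instance in the cluster points at the same State Service.
         Host name is a placeholder; 42424 is the default port. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=state-server.example.local:42424"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```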
We have a PHP web site with an admin login and an Android app. We use the admin login to add/remove products on the web site, and customers use the app to purchase items.
Currently we are running everything on a single VPS.
We are planning to move production to AWS for high availability and scalability.
We have decided to use RDS for the MySQL DB, but we are not sure how to host the application behind a load balancer with auto scaling, since we may need to add/remove items from the admin panel.
Please share your thoughts on this.
Thank You.
You can host your PHP application on EC2 this way (a boto3 sketch of steps 4-8 follows the list):
1. Run an EC2 t2.micro instance.
2. Install your PHP app and make sure it runs smoothly, even if only by calling it via the instance's public IP.
3. Create an AMI image of your t2.micro instance.
4. Create a load balancer and add a listener on port 80.
5. Create a target group and assign it to your load balancer.
6. Create a launch configuration with your AMI image.
7. Create an Auto Scaling group and select your launch configuration.
8. Add a dynamic scaling policy.
If your domain is hosted in Route 53, you can also create an SSL certificate with AWS Certificate Manager.
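If you prefer to script steps 4-8 instead of clicking through the console, here is a rough boto3 sketch. All names, subnet/VPC/security group IDs and the AMI ID are placeholders you would replace with your own; treat it as an outline of the API calls rather than a drop-in script.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# 4. Load balancer with a listener on port 80 (subnets/SGs are placeholders).
lb = elbv2.create_load_balancer(
    Name="php-app-lb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# 5. Target group the listener forwards to.
tg = elbv2.create_target_group(
    Name="php-app-tg", Protocol="HTTP", Port=80, VpcId="vpc-0123456789abcdef0"
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# 6. Launch configuration based on the AMI baked in step 3.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="php-app-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
)

# 7. Auto Scaling group tied to the launch configuration and target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="php-app-asg",
    LaunchConfigurationName="php-app-lc",
    MinSize=1,
    MaxSize=4,
    TargetGroupARNs=[tg_arn],
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# 8. Dynamic (target tracking) scaling policy on average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="php-app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```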
I have server-1 with Node.js and MongoDB installed. I created server-2 from a snapshot of server-1, then created a load balancer and attached both servers.
My question is: does MongoDB replicate between the two servers? Suppose server-2 goes down and all the traffic goes to server-1; once server-2 is back up, the database content will be different between the two servers, right?
The database content will diverge as soon as the next insert or update happens. Since the load is balanced between the two servers, each server gets roughly 50% of the requests, so each server ends up with only its own 50% of the changes in its local database.
You can't run MongoDB on each server like this in a load-balanced environment. You would need to run MongoDB on a separate server that both Node.js servers connect to.
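To illustrate the point: whichever driver you use, both application servers should connect to the same MongoDB endpoint, either a single dedicated DB server or, better, a replica set, instead of each writing to its own local database. A sketch with pymongo (host names, replica set name and database name are placeholders; the same connection string format works with the Node.js driver):

```python
from pymongo import MongoClient

# Both app servers use the same URI, so they read and write the same data.
# A replica set (rs0 here) also gives automatic failover if one MongoDB
# node goes down; hosts and the set name are placeholders.
client = MongoClient(
    "mongodb://db1.example.internal:27017,db2.example.internal:27017/"
    "?replicaSet=rs0"
)
db = client["shop"]
db.products.insert_one({"name": "example item", "price": 9.99})
```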
How do I load balance an FTP server running in a VM with different users? The FTP server is a passive one that only handles ingest; how do I make it autoscale if I add more users dynamically? Is this specific to the server I chose? Currently I'm using python-ftp server, and its documentation says it can support a maximum of 300 users. Elastic Load Balancing in AWS doesn't support FTP, does it?
If it is possible, which ports should I allow on the load balancer in GCP?
Or should I naively just increase the capacity of my VM?
Thanks!!
You will need to use (install) an FTP server that supports load balancing. Google Cloud does not offer an FTP server service or software product.
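If the python-ftp server you mention is pyftpdlib, you can at least pin down which ports a load balancer or firewall rule in front of the VM would have to allow: port 21 for the control connection plus a fixed passive data-port range. A sketch, with the user, external IP and port range as placeholder values:

```python
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# User name, password and home directory are placeholders.
authorizer.add_user("ingest", "change-me", "/srv/ftp/ingest", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer
# Fixed passive data-port range: open 21 plus 60000-60099 in front of the VM.
handler.passive_ports = range(60000, 60100)
# Public IP that clients should use for passive data connections
# (the external/forwarding-rule IP in front of the VM).
handler.masquerade_address = "203.0.113.10"

server = FTPServer(("0.0.0.0", 21), handler)
server.serve_forever()
```

Even with a fixed passive range, all data connections for a given session must reach the same backend VM, which is why plain connection-level load balancing of FTP is awkward and why scaling the single VM up is often the simpler option.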
I am filling out a penetration test request form for Amazon Web Services and need help completing it. I have a dynamic IP; what should I enter for the destination IP and source IP?
The server is deployed on AWS EC2 and runs Ubuntu 12.02.
The DB is on another instance using AWS RDS. There is no load balancer, just a single server instance for the app and a single instance for the DB.
Source: The IP(s) of the machines which will initiate the scan (usually machines external to AWS - like third party pentest service)
Destination: your server IPs. It is better to assign Elastic IPs and then use them to fill in the request. You can use the dynamic IPs, but if your instances are stopped and started before your scan begins, the scan may fail since your servers will have new IPs.
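If you want to script the Elastic IP step suggested above, a small boto3 sketch (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and attach it to the app server so the
# destination IP on the pentest request form stays valid across stop/start.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
print("Use this as the destination IP:", allocation["PublicIp"])
```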
I have created a LBCookieStickinessPolicy for my ELB.
But I can't seem to find in any AWS documentation a command that retrieves the instances that are currently 'stuck' (I mean, the actual instances the ELB is sending load to right now).
I only find the commands that create the policy itself (create-lb-cookie-stickiness-policy & create-app-cookie-stickiness-policy) ...Any ideas?
Sticky sessions mean that a single user's web browser gets stuck to a single server instance (unless the server goes down or the user clears cookies). The ELB still distributes load across all the servers attached to it. The ELB would distribute multiple users across multiple server instances.
So there is no way to see what you are looking for because the ELB is always using all instances. Now if you just had a single user on your website, you could look at the server logs of each web server to determine which server that user is "stuck" to. In general you would need to look at the web server logs to see which servers are currently receiving traffic.
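The closest thing you can query is which instances are registered with the ELB and healthy. For a classic ELB (the kind those stickiness commands apply to), that looks roughly like this with boto3; the load balancer name is a placeholder:

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# Lists every registered instance and its health state; all InService
# instances receive traffic, regardless of any stickiness policy.
health = elb.describe_instance_health(LoadBalancerName="my-load-balancer")
for state in health["InstanceStates"]:
    print(state["InstanceId"], state["State"], state.get("Description", ""))
```

Which instance a particular user is stuck to is only visible indirectly, for example in each backend's access logs, as noted above.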