Sitecore agents on instances sharing a DB

Our production Content Delivery environment has two web servers and one DB server that the two web servers share.
I know that there are a lot of DB-related background tasks/agents in out-of-the-box Sitecore that do things to the DB, like cleaning up tables, etc. Is it OK to have both web servers doing these tasks? Or are there tasks that should be turned off on the second server so that both aren't trying to do the same thing on the same DB? I don't see anything about this specifically in their Scaling Guide. Thanks.

As far as I'm aware there aren't any issues with this setup - I have sites running like this with no issues. As long as both CD web servers only share the Web and Core databases, you should be fine.
Section 3.1 (Configuring a Publishing Target) in the Scaling Guide shows this setup in a diagram where the Core and Pub databases are shared between the two CD boxes.
The Pub database is just another Web database that is configured with a publishing target.


On-Prem Application Migration to AWS

We are migrating some of our J2EE-based applications from on-prem to the AWS cloud. I am trying to find a good document on what steps to consider for the app migration. Since we already have an AWS account, and some of the applications have been migrated earlier, I don't have to worry about those aspects. However, I am thinking more about:
- Which app server to use?
- Do I need to migrate the DB as well, or just the app?
- Any licensing requirements for the app? We mostly use open source, so that should be fine.
- Operational monitoring after migrating to the cloud.
I came across some of these articles:
https://serverguy.com/cloud/aws-migration/
Migration Scenario: Migrating Web Applications to the AWS Cloud: https://d36cz9buwru1tt.cloudfront.net/CloudMigration-scenario-wep-app.pdf
I would like to know if you have worked on this kind of migration, and if you can point me to some helpful documents/links or share your own experience.
There are two good resources I'd recommend for migration:
AWS Whitepaper for migration
AWS Well-Architected Framework.
The key is planning, but not being afraid to experiment. This is the cloud, so there's no need to set an instance size in stone; you can easily change it.
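For example, resizing an EC2 instance after the fact is a small, reversible operation. Here is a minimal boto3 sketch, assuming a hypothetical instance ID, region, and target instance type:

```python
# Sketch: resize an existing EC2 instance with boto3.
# The region, instance ID, and target instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type, then bring it back up.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.large"},
)
ec2.start_instances(InstanceIds=[instance_id])
```

The same applies to most other sizing decisions (EBS volumes, RDS instance classes, Auto Scaling group limits): pick something reasonable, measure, and adjust.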

Django on a single server. Web and Dev environments with web/dev databases

In a couple of months, I'm receiving a single (physical) Ubuntu LTS server for the purpose of a corporate Intranet only web tools site. I've been trying to figure out a framework to go with. My preference at this point would be to go with Django. I've used RoR, CF and PHP heavily in the past.
My Django concern right now is how to have both a separate '/web/' and '/dev/' environment when I'm only getting a single server. Of course this would also include needing separate 'web' and 'dev' databases (either separated by database name or by having two different database instances running on the single server).
Option 1: I know I could set up only a 'web' (production) environment on Ubuntu and then use my corporate Windows laptop to develop Django tools. I've read this works fine, except that a lot of third-party Django packages don't work on Windows. My other concern would be making code changes and then pushing to the Ubuntu server, where I might introduce problems that didn't show up in the local Windows development environment.
Option 2: Somehow set up separate Django 'web' and 'dev' environments on the same server. I've seen a lot of different and confusing information on this. Also adding to the complication is what I assume would be the need to have two database instances running on the same server. Or, how could you have two different Django environments for 'web' and 'dev' and have them point to different databases by name instead of needing two different database instances running?
Thanks for any advice. I'm actually having trouble relaxing and learning Django, not knowing how hard this is going to be to deal with. I could easily just deal with the pain of developing in basic PHP if this is too overcomplicated. With plain PHP it's dead simple to have a '/web/' and a '/dev/' path and separate DBs just by checking the URL or file path for '/web/' or '/dev/' (and then pointing to the right DB, for example 'mytool_dev_v1' / 'mytool_web_v1').
There are multiple ways to solve this problem:
You can run two separate Django instances on the same server in different virtual environments. You can configure them in multiple ways: using environment variables, or just having separate 'production' and 'dev' settings files and choosing which one gets used (see the settings sketch at the end of this answer).
You can use Docker containers to serve the different Django instances; I think this is the best way. You can configure them in the same way: with environment variables or with separate settings files for the 'dev' and 'prod' options.
If you want to serve two (or more) sites on the same server, you'll probably need to configure an nginx server to route requests to the separate containers or Django instances depending on the domain name or something else (the URL, for example).
As far as I know, there is no problem configuring a separate database for each instance. You can also run your Postgres or MySQL instance in a container, and nginx the same way.
I can't recommend developing your app on the same server where the production app is running. I'm convinced that development should happen on the developer's own computer, but yeah... Windows is not the best for Django development, although it mostly works. Otherwise I can recommend dual-booting or at least VirtualBox with Ubuntu.
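To make the environment-variable option concrete, here is a minimal sketch of a Django settings fragment that picks its database by an environment variable. The DJANGO_ENV variable, host, and credentials are placeholders; the database names reuse the 'mytool_dev_v1' / 'mytool_web_v1' naming from the question.

```python
# settings.py fragment (sketch): pick the database from an environment variable.
# DJANGO_ENV, the host, and the credentials below are just examples.
import os

DJANGO_ENV = os.environ.get("DJANGO_ENV", "dev")  # "dev" or "web"

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mytool_web_v1" if DJANGO_ENV == "web" else "mytool_dev_v1",
        "USER": os.environ.get("DB_USER", "django"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "127.0.0.1"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}

# Keep anything else environment-specific behind the same switch.
DEBUG = DJANGO_ENV != "web"
ALLOWED_HOSTS = ["*"] if DEBUG else ["intranet.example.com"]  # placeholder host
```

Each instance (or container) then just exports DJANGO_ENV=web or DJANGO_ENV=dev, so a single Postgres server can hold both databases side by side without needing a second database instance.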

Meteor Framework on AWS

I have an application developed in Meteor Framework.
We are planning to move it to AWS with a multi-AZ deployment, and we need a master-slave configuration for the MongoDB database.
My question is how to achieve this. I believe MongoDB comes bundled with the framework itself, but I've never worked with it, so any help will be appreciated.
Thanks
Welcome to Stack Overflow.
Mongo is bundled into the development environment, but not the server.
It is normal to host the database either on a different server of your own, or using a database service (there are many around, such as Compose.io, MongoLab, etc.), so Mongo can be set up for load balancing and scaling independently of the app itself.
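One note on terminology: current MongoDB versions use replica sets rather than the older master-slave setup, and both Meteor and the hosted services above take a standard MongoDB connection URI (Meteor reads it from the MONGO_URL environment variable in production). Here is a minimal pymongo sketch for sanity-checking such an externally hosted replica set; the hostnames, database name, and replica-set name are placeholders.

```python
# Sketch: sanity-check an externally hosted MongoDB replica set from Python.
# Hostnames, database name, and replica-set name below are placeholders;
# the same URI would be handed to a Meteor app via the MONGO_URL env var.
from pymongo import MongoClient

uri = "mongodb://db1.example.com:27017,db2.example.com:27017/meteorapp?replicaSet=rs0"

client = MongoClient(uri)
client.admin.command("ping")   # raises if no member of the set is reachable
print(client.primary)          # (host, port) of the current primary, if any
```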

Is there an easy way to test Openshift Autoscaling?

I'm migrating a Django application to Redhat Openshift Online. The application is subject to spikes in demand, so I want to use the Openshift Autoscaling functionality.
To test this, I use Apache JMeter to put a lot of load on the server, to see whether the new gears launch as I expect. But I'm encountering bugs as the server scales up, like my deployment scripts not working as expected, or migrations not occurring correctly on the database. Is there a more convenient way to test the auto-scaling than sending a bunch of requests at the server until HAProxy launches a new gear?
You can scale applications up or down using both the web console and the rhc command line tool. You can read more about how to do it here: https://developers.openshift.com/managing-your-applications/scaling.html#managing-application-scaling
Can you provide more details about what scripts/migrations are not working correctly on the newly created gears? You can also feel free to send questions/issues to https://developers.openshift.com/contact.html

How to handle DB migration using AWS deployment tools

Amazon Web Services offers a number of continuous deployment and management tools, such as Elastic Beanstalk, OpsWorks, CloudFormation and CodeDeploy, depending on your needs. The basic idea is to facilitate code deployment and upgrades with zero downtime. They also help manage best architectural practice using AWS resources.
For simplicity, let's assume a basic architecture where you have a two-tier structure: a collection of application servers behind a load balancer, and then a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade, there is a point where both 1.0.0 and 2.0.0 applications exist across the fleet, each requiring a different DB structure.
I need to understand how this is actually achieved (at a high level) to know what the best way/time of performing the DB migration is. I guess there are a couple of ways they could be achieving this, but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
One idea: they migrate the DB structure with the first app node upgrade and at the same time create a cached version of the 1.0.0 database. Users connected to the 1.0.0 app persist to the cached version of the DB, and users connected to the 2.0.0 app persist to the newly migrated DB. Once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this, as the merge would be pretty complex, but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure gets into multiple application nodes. In the olden days, you could take your application offline for "maintenance windows" during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years now, this approach is not acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is: the answer is a series of patterns for Database Refactorings.
The basic concept to always keep in mind is to assume you have to maintain two concurrent versions of your application, and that there can be no breaking changes between these two versions. This means you have the current application (v1.0.0) in production and a new version (v2.0.0) scheduled to be deployed. Both versions must work against the same schema. Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0, which allows you to complete any final database changes.
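As a rough illustration of what a non-breaking change looks like in practice, here is a runnable sketch of this expand/contract approach using an in-memory SQLite database. The users table and column names are hypothetical; in a real fleet the "expand" step ships alongside v2.0.0 (via Flyway or whatever migration tool you already use) and the "contract" step ships alongside v3.0.0, once no v1.0.0 nodes remain.

```python
# Expand/contract sketch using an in-memory SQLite database.
# Table and column names are hypothetical examples.
# Note: the DROP COLUMN step requires SQLite 3.35+.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
db.execute("INSERT INTO users (fullname) VALUES ('Example User')")

# Expand (ships with v2.0.0): purely additive, so v1.0.0 keeps working.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")
db.execute("UPDATE users SET display_name = fullname WHERE display_name IS NULL")

# ...rolling deploy happens here: v1.0.0 nodes still write fullname,
# v2.0.0 nodes write both columns...

# Contract (ships with v3.0.0, after the last v1.0.0 node is gone).
db.execute("ALTER TABLE users DROP COLUMN fullname")

print(db.execute("SELECT id, display_name FROM users").fetchall())
```

Column renames, table splits, and similar changes all follow the same shape: add the new structure first, have the new version write to both, and only remove the old structure a release later.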