Meteor Framework on AWS

I have an application developed in the Meteor framework.
We are planning to move it to AWS with a multi-AZ deployment, and we need a master/slave configuration for the MongoDB database.
My question is how to achieve this. I believe MongoDB comes bundled with the framework itself.
I have never worked with it, so any help will be appreciated.
Thanks

Welcome to Stack Overflow.
Mongo is bundled into the development environment, but it is not included when you deploy to a server.
It is normal to host the database either on a separate server of your own or with a hosted database service (there are many around, such as compose.io, MongoLab, etc.), so Mongo can be set up for load balancing and scaling independently of the app itself.
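When you deploy the built Meteor bundle, you point it at that external database through the MONGO_URL environment variable (and optionally MONGO_OPLOG_URL for oplog tailing). As a rough sketch, you can sanity-check the connection string with the official Node MongoDB driver before wiring it into your deployment; the hosts, credentials and replica set name below are placeholders:

```typescript
// Quick connectivity check against an externally hosted MongoDB replica set
// before pointing the deployed Meteor bundle's MONGO_URL at it.
// The hosts, credentials and replica set name are placeholders.
import { MongoClient } from "mongodb";

const uri =
  "mongodb://appUser:secret@mongo-a.example.com:27017,mongo-b.example.com:27017/meteor?replicaSet=rs0";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // ping is a cheap way to confirm the connection and authentication work.
    await client.db("admin").command({ ping: 1 });
    console.log("Connected; start the Meteor bundle with MONGO_URL set to this URI.");
  } finally {
    await client.close();
  }
}

main().catch((err) => {
  console.error("Could not reach the external MongoDB:", err);
  process.exit(1);
});
```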

Related

On-Prem Application Migration to AWS

We are migrating some of our J2EE-based applications from on-prem to the AWS cloud. I am trying to find some good documentation on what steps to consider for the app migration. Since we already have an AWS account, and some of the applications have been migrated earlier, I don't have to worry about those aspects. However, I am thinking more about:
- Which app server to use?
- Do I need to migrate the DB as well, or just the app?
- Any licensing requirements for the app? We mostly use open source, so that should be fine.
- Operational monitoring after migrating to the cloud.
Came across some of these articles.
https://serverguy.com/cloud/aws-migration/
Migration Scenario: Migrating Web Applications to the AWS Cloud : https://d36cz9buwru1tt.cloudfront.net/CloudMigration-scenario-wep-app.pdf
I would like to know if you have worked on this kind of project, and if you can point me to some helpful documents/links, or share your own experience?
So there are two good resources I'd recommend for migration:
AWS Whitepaper for migration
AWS Well-Architected Framework.
The key is planning, but also not being afraid to experiment. This is the cloud, so nothing like an instance size is set in stone; you can easily change it.
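For example, resizing an EC2 instance is just a stop / modify / start cycle (for EBS-backed instances). Here is a minimal sketch using the AWS SDK for JavaScript v3; the region, instance ID and target type are placeholders:

```typescript
// Minimal sketch of resizing an EC2 instance with the AWS SDK for JavaScript v3.
// The region, instance ID and target instance type are placeholders.
import {
  EC2Client,
  StopInstancesCommand,
  ModifyInstanceAttributeCommand,
  StartInstancesCommand,
  waitUntilInstanceStopped,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });
const instanceId = "i-0123456789abcdef0";

async function resize(newType: string): Promise<void> {
  // The instance must be fully stopped before its type can be changed.
  await ec2.send(new StopInstancesCommand({ InstanceIds: [instanceId] }));
  await waitUntilInstanceStopped(
    { client: ec2, maxWaitTime: 300 },
    { InstanceIds: [instanceId] }
  );

  await ec2.send(
    new ModifyInstanceAttributeCommand({
      InstanceId: instanceId,
      InstanceType: { Value: newType },
    })
  );

  await ec2.send(new StartInstancesCommand({ InstanceIds: [instanceId] }));
}

resize("t3.large").catch(console.error);
```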

DC/OS service development with Akka

First of all, I'm new to DC/OS ...
I installed DC/OS locally with Vagrant, and everything worked fine. Then I installed Cassandra and Spark, and I think I understand the container concept with Docker; so far so good.
Now it's time to develop an Akka service, and I'm a little bit confused about how I should start. The Akka service should simply offer an HTTP REST endpoint and store some data in Cassandra.
So I have my DC/OS ready, and Eclipse in front of me. Now I would like to develop the Akka service and connect to Cassandra from outside DC/OS; how can I do that? Is this the wrong approach? Should I install Cassandra separately and only deploy to DC/OS once I'm ready?
Because it was so simple to install Cassandra, Spark, and all the rest, I would like to use it for development as well.
While slightly outdated (since it's using DC/OS 1.7 and you should really be using 1.8 these days), there's a very nice tutorial from codecentric that should contain everything you need to get started:
It walks you through setting up DC/OS, Cassandra, Kafka, and Spark
It shows how to use Akka reactive streams and the reactive kafka extension to ingest data from Twitter into Kafka
It shows how to use Spark to ingest data into Cassandra
Another great walkthrough resource is available via Cake Solutions:
It walks you through setting up DC/OS, Cassandra, Kafka, and Marathon-LB (a load balancer)
It explains service discovery for Akka
It shows how to expose a service via Marathon-LB
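On the question of connecting to Cassandra from outside DC/OS while you develop: as long as the Cassandra native port (9042) is reachable from your machine (for example through an exposed node port or a load-balanced endpoint you have set up), you can develop against the cluster directly and only containerize the service when you are ready. Your Akka service would use the DataStax Java/Scala driver, but the connection details are the same; here is a rough connectivity sketch using the DataStax Node.js driver, where the contact point, data center name and keyspace are placeholders:

```typescript
// Rough sketch: talk to the DC/OS-hosted Cassandra from a dev machine using
// the DataStax Node.js driver. The contact point, data center name and
// keyspace are placeholders; the native port 9042 must be reachable.
import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["cassandra-node.example.com:9042"],
  localDataCenter: "datacenter1",
});

async function main(): Promise<void> {
  await client.connect();
  await client.execute(
    "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
      "{'class': 'SimpleStrategy', 'replication_factor': 1}"
  );
  await client.execute(
    "CREATE TABLE IF NOT EXISTS demo.readings (id uuid PRIMARY KEY, value text)"
  );
  await client.execute(
    "INSERT INTO demo.readings (id, value) VALUES (uuid(), ?)",
    ["hello from outside DC/OS"],
    { prepare: true }
  );
  const result = await client.execute("SELECT * FROM demo.readings LIMIT 5");
  console.log(`read back ${result.rowLength} row(s)`);
  await client.shutdown();
}

main().catch(console.error);
```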

Is there an easy way to test OpenShift autoscaling?

I'm migrating a Django application to Red Hat OpenShift Online. The application is subject to spikes in demand, so I want to use the OpenShift autoscaling functionality.
To test this, I use Apache JMeter to put a lot of load on the server, to see whether the new gears launch as I expect. But I'm encountering bugs when the server scales up, like my deployment scripts not working as expected, or migrations not occurring correctly on the database. Is there a more convenient way to test the auto-scaling than sending a bunch of requests at the server until HAProxy launches a new gear?
You can scale applications up or down using both the web console and the rhc command line tool. You can read more about how to do it here: https://developers.openshift.com/managing-your-applications/scaling.html#managing-application-scaling
Can you provide more details about what scripts/migrations are not working correctly on the newly created gears? You can also feel free to send questions/issues to https://developers.openshift.com/contact.html
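If you do want to drive the scale-up with traffic rather than scaling manually, you don't necessarily need a full JMeter plan; a short script that fires batches of concurrent requests until HAProxy crosses its threshold is usually enough for a smoke test. A rough sketch (the URL, concurrency and duration are placeholders; requires Node 18+ for the built-in fetch):

```typescript
// Minimal load generator: fire batches of concurrent requests at the app URL
// so HAProxy's scale-up threshold is crossed. URL, batch size and duration
// are placeholders; requires Node 18+ for the built-in fetch.
const url = "https://myapp-example.rhcloud.com/";
const concurrency = 50;
const durationMs = 5 * 60 * 1000;

async function hit(): Promise<number> {
  try {
    const res = await fetch(url);
    return res.status;
  } catch {
    return 0; // treat network errors as status 0
  }
}

async function main(): Promise<void> {
  const deadline = Date.now() + durationMs;
  let sent = 0;
  while (Date.now() < deadline) {
    const statuses = await Promise.all(
      Array.from({ length: concurrency }, () => hit())
    );
    sent += statuses.length;
    const errors = statuses.filter((s) => s !== 200).length;
    console.log(`sent=${sent} errors-in-last-batch=${errors}`);
  }
}

main().catch(console.error);
```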

Sitecore agents on instances sharing a DB

Our production Content Delivery environment has two web servers and one DB server that the two web servers share.
I know that there are a lot of DB-related background tasks/agents that run in out-of-the-box Sitecore which do things to the DB, like cleaning up tables, etc. Is it OK to have both web servers running these tasks? Or are there tasks that should be turned off on the second server so that both aren't trying to do the same thing on the same DB? I don't see anything about this specifically in their Scaling Guide. Thanks.
As far as I'm aware, there are not any issues with this setup; I have sites running like this with no problems. As long as both CD web servers only share the Web and Core databases, you should be fine.
Section 3.1 (Configuring a Publishing Target) in the Scaling guide has this setup on a diagram where the Core and Pub databases are shared between the two CD boxes.
The Pub database is just another Web database that is configured with a publishing target.

How to handle DB migration using AWS deployment tools

Amazon Web Services offers a number of continuous deployment and management tools, such as Elastic Beanstalk, OpsWorks, CloudFormation and CodeDeploy, depending on your needs. The basic idea is to facilitate code deployment and upgrades with zero downtime. They also help you follow architectural best practice using AWS resources.
For simplicity, let's assume a basic architecture where you have a two-tier structure: a collection of application servers behind a load balancer, and then a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade there is a point where both 1.0.0 and 2.0.0 applications exist across the fleet each requiring a different DB structure.
I need to understand how this is actually achieved (high level) to know what the best way/time of performing the DB migration is. I guess there are a couple of ways they could be achieving this but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
Perhaps they migrate the DB structure with the first app node upgrade and at the same time create a cached version of the 1.0.0 data. Users connected to the 1.0.0 app persist to the cached version of the DB, and users connected to the 2.0.0 app persist to the newly migrated DB. Once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this, as the merge would be pretty complex, but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure grows to multiple application nodes. In the olden days, you could take your application offline for "maintenance windows" during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years now, this approach has not been acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is; the answer is a series of patterns for Database Refactorings.
The basic concept to always keep in mind is that you have to maintain two concurrent versions of your application, and there can be no breaking changes between them. That means the application currently in production (v1.0.0) and the one scheduled to be deployed (v2.0.0) must both work against the same schema. Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0, which allows you to complete any final database changes.
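In practice, that means splitting a breaking change into an additive "expand" step that both versions tolerate, and deferring the destructive "contract" step to a later release. Here is a rough sketch of an expand migration using node-postgres, with a hypothetical users table where v2.0.0 wants to split full_name into first_name/last_name (all table and column names are illustrative):

```typescript
// Rough sketch of an "expand" migration that both v1.0.0 and v2.0.0 can live
// with, using node-postgres. Hypothetical schema: v1.0.0 reads and writes
// "full_name"; v2.0.0 wants "first_name"/"last_name".
import { Client } from "pg";

async function expand(): Promise<void> {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  try {
    // Step 1 (runs before v2.0.0 is deployed): add the new columns as nullable
    // so the existing v1.0.0 code, which never references them, keeps working.
    await db.query("ALTER TABLE users ADD COLUMN IF NOT EXISTS first_name text");
    await db.query("ALTER TABLE users ADD COLUMN IF NOT EXISTS last_name text");

    // Step 2: backfill the new columns from the old one.
    await db.query(`
      UPDATE users
      SET first_name = split_part(full_name, ' ', 1),
          last_name  = split_part(full_name, ' ', 2)
      WHERE first_name IS NULL
    `);

    // v2.0.0 writes both the old and the new columns while it coexists with
    // v1.0.0. Only in a later release (v3.0.0), once no v1.0.0 nodes remain,
    // do you run the "contract" step: ALTER TABLE users DROP COLUMN full_name.
  } finally {
    await db.end();
  }
}

expand().catch((err) => {
  console.error(err);
  process.exit(1);
});
```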