DC/OS service development with Akka

First of all, I'm new to DC/OS ...
I installed DC/OS locally with Vagrant, and everything worked fine. Then I installed Cassandra and Spark, and I think I understand the container concept with Docker; so far, so good.
Now it's time to develop an Akka service, and I'm a little bit confused about how I should start. The Akka service should simply offer an HTTP REST endpoint and store some data to Cassandra.
So I have my DC/OS ready, and Eclipse in front of me. Now I would like to develop the Akka service and connect to Cassandra from outside DC/OS. How can I do that? Is this the wrong approach? Should I install Cassandra separately and only deploy to DC/OS once I'm ready?
Because it was so simple to install Cassandra, Spark, and all the rest, I would like to use it for development as well.

While slightly outdated (it uses DC/OS 1.7, and you should really be using 1.8 these days), there's a very nice tutorial from codecentric that should contain everything you need to get started:
It walks you through setting up DC/OS, Cassandra, Kafka, and Spark
It shows how to use Akka reactive streams and the Reactive Kafka extension to ingest data from Twitter into Kafka (see the sketch after this list)
It shows how to use Spark to ingest data into Cassandra
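By way of illustration, a Reactive Kafka producer stream of the kind that tutorial builds looks roughly like this. This is a minimal sketch using the Java DSL of akka-stream-kafka; the broker address and topic are placeholders, and the in-memory list stands in for the live Twitter source:

import java.util.Arrays;

import akka.actor.ActorSystem;
import akka.kafka.ProducerSettings;
import akka.kafka.javadsl.Producer;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Source;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TweetsToKafka {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("tweets-to-kafka");
        Materializer mat = ActorMaterializer.create(system);

        // Placeholder broker address; the DC/OS Kafka package reports its
        // actual broker endpoints via the DC/OS CLI.
        ProducerSettings<String, String> settings =
                ProducerSettings.create(system, new StringSerializer(), new StringSerializer())
                        .withBootstrapServers("broker-0.kafka.mesos:9092");

        // Stand-in for the live Twitter source: any Source<String, ?> works here.
        Source.from(Arrays.asList("tweet one", "tweet two"))
                .map(tweet -> new ProducerRecord<String, String>("tweets", tweet))
                .runWith(Producer.plainSink(settings), mat);
    }
}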
Another great walkthrough resource is available via Cake Solutions:
It walks you through setting up DC/OS, Cassandra, Kafka, and Marathon-LB (a load balancer)
It explains service discovery for Akka
It shows how to expose a service via Marathon-LB
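To make the goal concrete, here is a minimal sketch of the kind of service the question describes, assuming Akka HTTP's Java DSL and the DataStax Java driver 4.x. The contact point, datacenter, and demo.events table are placeholders you would adapt (for example, to the address reported by the dcos cassandra connection command):

import java.net.InetSocketAddress;

import akka.actor.typed.ActorSystem;
import akka.actor.typed.javadsl.Behaviors;
import akka.http.javadsl.Http;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class DataService extends AllDirectives {

    public static void main(String[] args) {
        // Placeholder contact point; replace with your cluster's address.
        CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("node-0.cassandra.mesos", 9042))
                .withLocalDatacenter("datacenter1")
                .build();

        ActorSystem<Void> system = ActorSystem.create(Behaviors.empty(), "data-service");
        Http.get(system).newServerAt("0.0.0.0", 8080).bind(new DataService().route(session));
    }

    // POST /data stores the raw request body; the demo.events table
    // (id uuid PRIMARY KEY, payload text) is assumed to exist already.
    private Route route(CqlSession session) {
        return path("data", () ->
                post(() -> entity(Unmarshaller.entityToString(), body -> {
                    // Synchronous write for brevity; prefer executeAsync in real code.
                    session.execute(SimpleStatement.newInstance(
                            "INSERT INTO demo.events (id, payload) VALUES (uuid(), ?)", body));
                    return complete("stored");
                })));
    }
}

For development, you can run this against any Cassandra reachable from your workstation first, then package it as a Docker image and deploy it to DC/OS via Marathon once it works, swapping in the in-cluster contact point.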

Related

Does AWS ECS distributed load testing support the JMeter MQTT sampler?

I am performing JMeter distributed load testing and have been trying to set up AWS ECS containers for load testing of Mosquitto MQTT.
Does AWS distributed load testing support the JMeter MQTT sampler plugin
mqtt-jmeter?
If your test is using any JMeter Plugins, you will need to install the plugin on:
the master machine
and all the slave machines
So amend your container build scripts to include the plugin and all of its dependencies, and do this for both master and slaves. The JMeter Plugins Manager can be executed as a command-line tool.
The same applies to any test data, like CSV files used in CSV Data Set Config, or JMeter properties.
You may find the JMeter Distributed Testing with Docker article interesting and get some more ideas on preparing your containers.
The answer is no. AWS distributed load testing works only with HTTP.

What is the best way to host Apache Camel in AWS?

As we move our workloads to AWS I am looking for an ETL tool which is widely used and has the appropriate connectors - Apache Camel appears to fit the bill. However, I am struggling to find information on how Camel can be deployed in AWS - the obvious one is on an EC2 instance, but we would like to avoid the setup and maintenance required by Virtual Machines. I don't see anyone offering it as a managed service, so the option I'd like to explore is running it as a container in ECS, as we will have numerous other containers running.
Containers don't seem to be an installation option on the Apache Camel website - perhaps it is just too limiting for a tool whose purpose is to connect to everything else?
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
Apache Camel appears to fit the bill.
Indeed, Apache Camel is a great integration framework. And that's the point: it is a framework, not a product. So there are multiple ways to run Camel flows: as a web app, as a standalone app, or as part of your own code. Camel itself is pretty agnostic about the way you run the flows, and that's why no single specific way is enforced on the website.
If you want an out-of-the-box product that can generate containerized deployments with Apache Camel, you could have a look at Apache ServiceMix, Apache Karaf, or the commercially supported Red Hat Fuse.
Is it acceptable and practical to run Camel in a container, and where could I find more information about it?
It is perfectly fine.
Question: are you able to create a Docker container with your (or any other) application? Based on the question, this skill is lacking, and I really suggest learning it.
You may check the following post: https://medium.com/@wkrzywiec/how-to-put-your-java-application-into-docker-container-5e0a02acdd6b
# Base image with a JDK (example from the post linked above)
FROM java:8-jdk-alpine
# Copy the jar your build produces into the image
COPY ./target/myapp.jar /usr/app/myapp.jar
WORKDIR /usr/app
# Port the application listens on
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "myapp.jar"]
Let's assume you can run your ETL tasks as a standalone application; then just run it in the container like any other standalone application.
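For instance, with Camel's camel-main module a standalone app is just a plain main() method. The file-to-file route below is only a stand-in for your real endpoints; a minimal sketch, assuming Camel 3.x:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class EtlApp {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Stand-in route: move files from an inbox to an outbox.
                from("file:/data/inbox")
                        .log("processing ${file:name}")
                        .to("file:/data/outbox");
            }
        });
        main.run(args); // runs the Camel context until the process is stopped
    }
}

Packaged as a fat jar, this runs under exactly the kind of Dockerfile shown above.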
we would like to avoid the setup and maintenance required by Virtual Machines
Question: how do you distribute your Camel tasks? I mean, what is the result of your build? A war file? A standalone app?
To build a web app, you could look at https://www.baeldung.com/spring-apache-camel-tutorial
The most convenient way to deploy a war file on AWS is the AWS Elastic Beanstalk service.
If you build a standalone application (or use ServiceMix) and you can build a container, then indeed ECS or Fargate seem like natural options.

How to deploy a Play application on Google Cloud

This is my first time deploying an application. I have some idea about it, but I am not sure if it is correct. How do I go about deploying a Play application on Google Cloud?
1) I have created a package using the dist command. I now have the zip file on my local PC. https://www.playframework.com/documentation/2.5.x/Deploying
2) Do I first need to create a compute resource on GCP? What configuration should I use for the VM? My app is still in the test phase, so there are no external users at the moment.
3) I suppose Play uses the Netty web server. So do I need to install Netty on the compute resource? I have looked online a bit but can't find a resource on how to deploy an application on Netty.
deploy an application on netty
Netty is not a web server/application server, but an IO framework which can be used to build web servers or any high-performance IO applications.
If you really want to use Netty, you need to write an HTTP server yourself, or just use an HTTP framework built on Netty.
If you want to build an application using Netty, have a look at the examples at https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/
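To give a flavour of what "writing an HTTP server yourself" involves, here is a minimal sketch along the lines of those examples (Netty 4.1 APIs; it answers every request with a plain-text body):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public class MinimalHttpServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(
                                    new HttpServerCodec(),
                                    new HttpObjectAggregator(65536),
                                    new SimpleChannelInboundHandler<FullHttpRequest>() {
                                        @Override
                                        protected void channelRead0(ChannelHandlerContext ctx,
                                                                    FullHttpRequest req) {
                                            FullHttpResponse res = new DefaultFullHttpResponse(
                                                    HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                                                    Unpooled.copiedBuffer("hello\n", CharsetUtil.UTF_8));
                                            res.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain");
                                            res.headers().set(HttpHeaderNames.CONTENT_LENGTH,
                                                    res.content().readableBytes());
                                            ctx.writeAndFlush(res).addListener(ChannelFutureListener.CLOSE);
                                        }
                                    });
                        }
                    });
            // Bind to port 8080 and block until the server socket closes.
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}

In practice, of course, you would let Play's bundled Netty backend do all of this for you.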
Deploying a container to the Cloud using Google Cloud Platform and Kubernetes Engine
Kubernetes is a way of orchestrating containers in the Cloud, enabling you to do things like auto-scaling, fast deploys, and managing running versions of containers. You simply create a container and upload it to a container repository. In this example I used Google's Container Registry; it's really simple to use and works brilliantly with their Kubernetes implementation.
Following this tutorial might help you with this:
https://medium.com/beyond/deploying-a-container-to-the-cloud-using-google-cloud-platform-and-kubernetes-engine-10d8ee3aba86

Meteor Framework on AWS

I have an application developed in the Meteor framework.
We are planning to move it to AWS with a multi-AZ deployment and
need a master/slave configuration for the MongoDB.
My question is how to achieve this. I believe MongoDB comes bundled with the framework itself;
I have never worked on it, so any help will be appreciated.
Thanks
Welcome to Stack Overflow.
Mongo is bundled into the development environment, but not the server.
It is normal to host the database either on a different server of your own or using a database service (there are many around, such as compose.io, MongoLab, etc.), so Mongo can be set up for load balancing and scaling independently of the app itself.

How to handle DB migration using AWS deployment tools

Amazon Web Services offers a number of continuous deployment and management tools, such as Elastic Beanstalk, OpsWorks, CloudFormation and CodeDeploy, depending on your needs. The basic idea is to facilitate code deployment and upgrades with zero downtime. They also help you manage best architectural practice using AWS resources.
For simplicity, let's assume a basic architecture with a 2-tier structure: a collection of application servers behind a load balancer, and a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so that the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade, there is a point where both the 1.0.0 and 2.0.0 applications exist across the fleet, each requiring a different DB structure.
I need to understand how this is actually achieved (high level) to know what the best way/time of performing the DB migration is. I guess there are a couple of ways they could be achieving this but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
Perhaps they migrate the DB structure with the first app node upgrade and at the same time create a cached version of the 1.0.0 data: users connected to the 1.0.0 app persist using the cached version of the DB, while users connected to the 2.0.0 app persist to the newly migrated DB. Once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this, as the merge would be pretty complex, but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure gets into multiple application nodes. In the olden days, you could take your application offline for "maintenance windows" during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years now, this approach is no longer acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is: the answer is a series of patterns for Database Refactorings.
The basic concept to always keep in mind is to assume you have to maintain two concurrent versions of your application, and there can be no breaking changes between these two versions. This means that you have a current application (v1.0.0) in production and a new version (v2.0.0) scheduled for deployment. Both of these versions must work against the same schema, which in practice means v2.0.0 may make only additive, backward-compatible changes (for example, adding a nullable column that v1.0.0 simply ignores). Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0, which completes any final, destructive database changes, such as dropping a column only the old version used.
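As a sketch of how this works with a tool like Flyway (mentioned in the question), each application version ships only migrations that are backward compatible with the version still running, and applies them at startup; Flyway locks its schema-history table, so this is safe even when several nodes start at once. The JDBC URL, credentials, and the table and column names below are purely illustrative:

import org.flywaydb.core.Flyway;

public class StartupMigrations {
    public static void main(String[] args) {
        // Placeholder JDBC URL and credentials; point this at your RDS endpoint.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://my-rds-endpoint:5432/app", "app_user", "secret")
                .load();

        // v2.0.0 ships V2__add_email.sql (additive only, so v1.0.0 keeps working):
        //   ALTER TABLE users ADD COLUMN email VARCHAR(255);
        // v3.0.0 ships V3__drop_legacy_contact.sql (destructive, run once no
        //   v1.0.0 node remains): ALTER TABLE users DROP COLUMN legacy_contact;
        flyway.migrate(); // applies any pending migrations exactly once
    }
}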