Is it RDB or AOF? I couldn't find any information on this on the wiki or scapp-console.
Both are used. AOF is used for general persistence and RDB for backups.
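For context, the two modes correspond to separate directives in redis.conf. A sketch with typical values (a managed service sets these for you, so treat this as illustration only):

```conf
# AOF: log every write to an append-only file (general persistence)
appendonly yes
appendfsync everysec   # fsync the AOF roughly once per second

# RDB: point-in-time snapshots (what backups are taken from)
save 900 1    # snapshot if >= 1 change in 900 seconds
save 300 10   # snapshot if >= 10 changes in 300 seconds
```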
Does Google Cloud Memorystore for Redis support the RedisJSON and RediSearch modules?
Thanks in advance
As per the official docs, Memorystore currently supports Redis 6.x, but it does not support the RedisJSON and RediSearch modules.
A feature request was raised with the Google team to enable RedisJSON module support for Memorystore for Redis 6.x, and the Product Engineering team is working on this request. At this moment, there is no ETA for it.
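If you want to verify this against a live instance, MODULE LIST reports which modules a server has loaded. A minimal sketch in Python (the helper names are mine; on a real server you would pass in the result of redis-py's client.module_list(), which comes back empty on Memorystore today):

```python
# Check whether a Redis server exposes the RedisJSON / RediSearch modules,
# given a MODULE LIST reply (a list of dicts, as redis-py returns it).

def loaded_module_names(module_list_reply):
    """Extract lowercase module names from a MODULE LIST reply."""
    return {str(entry.get("name", "")).lower() for entry in module_list_reply}

def supports_json_and_search(module_list_reply):
    names = loaded_module_names(module_list_reply)
    # RedisJSON registers itself as "ReJSON"; RediSearch as "search".
    return "rejson" in names and "search" in names

# Example with a canned reply; on Memorystore the list would be empty.
sample = [{"name": "ReJSON", "ver": 20007}, {"name": "search", "ver": 20400}]
print(supports_json_and_search(sample))  # True
print(supports_json_and_search([]))      # False
```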
We are migrating some of our J2EE-based applications from on-prem to the AWS cloud. I am trying to find a good document on what steps to consider for the app migration. Since we already have an AWS account, and some applications have been migrated earlier, I don't have to worry about those aspects. I am thinking more about:
- Which app server to use?
- Do I need to migrate the DB as well, or just the app?
- Any licensing requirements for the app? We mostly use open source, so that should be fine.
- Operational monitoring after migrating to the cloud.
I came across some of these articles:
https://serverguy.com/cloud/aws-migration/
Migration Scenario: Migrating Web Applications to the AWS Cloud : https://d36cz9buwru1tt.cloudfront.net/CloudMigration-scenario-wep-app.pdf
I would like to know if you have worked on this kind of project, and if you can point me to some helpful documents/links or share your own experience.
So there are two good resources I'd recommend for migration:
AWS Whitepaper for migration
AWS Well-Architected Framework.
The key is planning, but also not being afraid to experiment. This is the cloud, so don't feel you are setting an instance size in stone: you can easily change it later.
I am new to using cloud services and navigating Google's Cloud Platform is quite intimidating. When it comes to Google Dataproc, they do advertise Hadoop, Spark and Hive.
My question is, is Impala available at all?
I would like to do some benchmarking projects using all four of these tools, and I require Apache Impala alongside Spark/Hive.
No. Dataproc is a cluster that supports Hadoop, Spark, Hive, and Pig using its default images.
Check this link for more information about the native image list for Dataproc:
https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions
You can also try creating a new Dataproc instance with a different setup, instead of using the default.
For example, you can create a Dataproc instance with HUE (Hadoop User Experience), an interface for working with Hadoop clusters built by Cloudera. The advantage here is that HUE includes Apache Impala as a default component, along with Pig, Hive, etc. So it's a pretty good way to get Impala.
Another option would be to build your own cluster from scratch and install Impala on it yourself, but that's not a good idea unless you want to customize everything.
Here is a link, for more information:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/tree/master/hue
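To tie that together: the HUE initialization action is just a script the cluster runs at creation time. A hedged sketch of the request body you would send to the Dataproc clusters.create API (field names follow the REST API's Cluster resource; the project and cluster names are placeholders, and the exact gs:// script path should be taken from the repository linked above):

```python
# Build a Dataproc Cluster resource dict that attaches the HUE
# initialization action (which, as noted above, brings Impala along).

def hue_cluster_body(project_id, cluster_name):
    """Return a Cluster resource dict for the clusters.create call."""
    return {
        "projectId": project_id,
        "clusterName": cluster_name,
        "config": {
            "initializationActions": [
                # Runs on each node after it boots; hue.sh installs HUE.
                {"executableFile":
                     "gs://dataproc-initialization-actions/hue/hue.sh"}
            ],
        },
    }

body = hue_cluster_body("my-project", "hue-cluster")
print(body["config"]["initializationActions"][0]["executableFile"])
```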
Dataproc provides SSH access to the master and workers, so it is possible to install additional software. According to the Impala documentation, you would need to:
Ensure Impala Requirements.
Set up Impala on a cluster by building from source.
Remember that it is recommended to install the impalad daemon on each DataNode.
Cloud Dataproc supports Hadoop, Spark, Hive, and Pig by default on the cluster. You can install additional optional components such as Zookeeper, Jupyter, Anaconda, Kerberos, Druid, and Presto (you can find the complete list here). In addition, you can install a large set of open-source components using initialization actions.
Impala is not supported as an optional component and there is no initialization-action script for it yet. You could get it to work on Dataproc with HDFS, but making it work with GCS may require non-trivial changes.
I have an application developed in Meteor Framework.
We are planning to move it to AWS with a multi-AZ deployment and
need a master/slave configuration for MongoDB.
My question is how to achieve this. I believe MongoDB comes bundled with the framework itself;
I've never worked on it, so any help will be appreciated.
Thanks
Welcome to Stack Overflow.
Mongo is bundled into the development environment, but not the server.
It is normal to host the database either on a separate server of your own or with a database service (there are many around, such as compose.io, MongoLab, etc.), so Mongo can be set up for load balancing and scaling independently of the app itself.
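Concretely, a deployed Meteor bundle picks the database up from the MONGO_URL environment variable (and optionally MONGO_OPLOG_URL for oplog tailing). A sketch, with placeholder hosts and credentials:

```shell
# Placeholders throughout; point these at your replica set or DB service.
export MONGO_URL="mongodb://appuser:secret@mongo-0.example.com:27017,mongo-1.example.com:27017/myapp?replicaSet=rs0"
export MONGO_OPLOG_URL="mongodb://oplog:secret@mongo-0.example.com:27017/local?authSource=admin"
node main.js   # start the bundled Meteor server
```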
First of all, I'm new to DC/OS ...
I installed DC/OS locally with Vagrant, everything worked fine. Then I installed Cassandra, Spark and I think to understand the container concept with Docker, so far so good.
Now it's time to develop an Akka service and I'm a little bit confused how I should start. The Akka service should simply offer a HTTP REST endpoint and store some data to Cassandra.
So I have my DC/OS ready, and Eclipse in front of me. Now I would like to develop the Akka service and connect to Cassandra from outside DC/OS. How can I do that? Is this the wrong approach? Should I install Cassandra separately and only deploy to DC/OS once I'm ready?
Because it was so simple to install Cassandra, Spark and all the rest I would like to use it for development as well.
While slightly outdated (it uses DC/OS 1.7, and you should really be using 1.8 these days), there's a very nice tutorial from codecentric that should contain everything you need to get started:
It walks you through setting up DC/OS, Cassandra, Kafka, and Spark
It shows how to use Akka reactive streams and the reactive kafka extension to ingest data from Twitter into Kafka
It shows how to use Spark to ingest data into Cassandra
Another great walkthrough resource is available via Cake Solutions:
It walks you through setting up DC/OS, Cassandra, Kafka, and Marathon-LB (a load balancer)
It explains service discovery for Akka
It shows how to expose a service via Marathon-LB
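For reference, exposing an app through Marathon-LB comes down to a couple of labels in the Marathon app definition. A sketch (the image name, ports, and vhost are placeholders; HAPROXY_GROUP and HAPROXY_0_VHOST are the labels Marathon-LB reads):

```json
{
  "id": "/akka-service",
  "instances": 2,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myorg/akka-service:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "servicePort": 10101 }
      ]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "akka.example.com"
  }
}
```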