How to create Amazon DocumentDB Elastic cluster via terraform - amazon-web-services

I want to create a managed MongoDB-compatible database using Amazon DocumentDB on AWS via Terraform.
I created a DocumentDB Elastic cluster via the UI, and it seems to work fine. Now I want to create this cluster via Terraform, but I can't find documentation for it.
I read that only DocumentDB's 'Elastic Cluster' supports the MongoDB sharding APIs (and not the 'Instance Based Cluster').
This is the HashiCorp doc for DocumentDB, but I don't see any reference to Elastic clusters.

DocumentDB Elastic clusters are relatively new. I think it's not possible to do this in Terraform yet.
You can do it using CloudFormation,
using the AWS CDK,
or the AWS CLI.
I think it will be available soon; when something is possible with other IaC tools, Terraform usually doesn't take too long to catch up.
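As a sketch of the CloudFormation route: CloudFormation exposes an AWS::DocDBElastic::Cluster resource type for Elastic clusters. The names and values below are placeholders, so check the current resource reference before relying on the exact properties:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ElasticCluster:
    Type: AWS::DocDBElastic::Cluster
    Properties:
      ClusterName: my-elastic-cluster        # placeholder name
      AdminUserName: docdbadmin
      # Avoid inlining passwords; a Secrets Manager dynamic reference is safer.
      AdminUserPassword: "{{resolve:secretsmanager:my-docdb-secret}}"
      AuthType: PLAIN_TEXT
      ShardCapacity: 2                       # vCPUs per shard
      ShardCount: 1
```

Subnet and security-group IDs can also be set on the resource; with defaults omitted, the cluster lands in the default VPC configuration.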

Related

Update some settings of an existing resource using Cloud Formation

I'm new to CloudFormation. I want to update the settings of a lot of already-created RDS instances using CloudFormation. I don't have information about whether those resources were created through CF or manually. Is it possible to update such resources with CF?
I can think of another way, like using the AWS SDK (boto3), but doing it with CF is preferable.
The only way to do this from CloudFormation (CF) is to develop your own CF custom resource. This will be a Lambda function which uses the AWS SDK to query the state of your RDS databases and perform any actions you want.
Since it's fully custom, you can program any logic which satisfies your requirements.
If the resources were created manually, you can also import them into CF, and then update them using CF.
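A minimal sketch of such a Lambda-backed custom resource handler. The RDS action shown is just illustrative, and the `DbInstanceIdentifier` resource property is an assumed name you would define in your own template, not part of the CloudFormation contract:

```python
# Sketch of a CloudFormation custom resource handler (Lambda).
# CloudFormation invokes the handler with an event containing a pre-signed
# ResponseURL; the handler must PUT a JSON status document back to it.
import json
import urllib.request

def build_response(event, status, data=None, reason=""):
    """Build the JSON body CloudFormation expects at event['ResponseURL']."""
    return {
        "Status": status,                      # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        event["LogicalResourceId"]),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def handler(event, context):
    try:
        import boto3                           # available in the Lambda runtime
        rds = boto3.client("rds")
        db_id = event["ResourceProperties"]["DbInstanceIdentifier"]
        if event["RequestType"] in ("Create", "Update"):
            # Illustrative action: inspect the instance; a real handler would
            # apply whatever settings change you need here.
            info = rds.describe_db_instances(DBInstanceIdentifier=db_id)
            status_str = info["DBInstances"][0]["DBInstanceStatus"]
            body = build_response(event, "SUCCESS", {"DbStatus": status_str})
        else:  # Delete: nothing to clean up for a read-only resource
            body = build_response(event, "SUCCESS")
    except Exception as exc:
        body = build_response(event, "FAILED", reason=str(exc))
    # Signal CloudFormation via the pre-signed S3 URL it provides.
    req = urllib.request.Request(event["ResponseURL"],
                                 data=json.dumps(body).encode(),
                                 method="PUT")
    urllib.request.urlopen(req)
```

Failing to PUT a response leaves the stack hanging until it times out, so the try/except around the whole action matters in practice.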

AWS EMR or Create Own Cluster

Could someone help me decide which would be better: AWS EMR, or creating my own cluster in AWS? I am using Airflow to create an AWS EMR cluster via Terraform, run the job, and destroy the cluster. However, has anyone created a Spark cluster in AWS without EMR, e.g. using ECS Fargate and the bitnami/spark Docker image (link), or something along the same lines in AWS? Thank you.
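For reference, the create-run-destroy pattern described here maps onto EMR's RunJobFlow API: a transient cluster with auto-termination replaces the explicit "destroy" task. A minimal sketch of the request (cluster name, S3 paths, instance types, and EMR release are all placeholders):

```python
# Sketch of a transient EMR cluster request: submit one Spark step to a
# cluster that terminates itself when the step finishes.

def build_job_flow_request(name, spark_args, log_uri):
    """Build the request dict for EMR's RunJobFlow API (boto3: run_job_flow)."""
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.15.0",          # pick a current EMR release
        "LogUri": log_uri,
        "Applications": [{"Name": "Spark"}],
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": 2},
            ],
            # False => the cluster terminates after its steps complete,
            # which replaces an explicit "destroy cluster" Airflow task.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "Steps": [{
            "Name": "spark-job",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {"Jar": "command-runner.jar",
                              "Args": ["spark-submit"] + spark_args},
        }],
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

# With credentials configured, this would be handed to boto3:
#   boto3.client("emr").run_job_flow(**build_job_flow_request(
#       "demo", ["s3://my-bucket/job.py"], "s3://my-bucket/logs/"))
```

Rolling your own cluster on Fargate means re-implementing exactly this lifecycle management yourself.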

AWS and GCP centrally managed airflows and Dataflow equivalent for AWS

I have two questions to ask:

1. My company has two instances of Airflow running, one on a GCP-provisioned cluster and another on an AWS-provisioned cluster. Since GCP has Composer, which helps you manage Airflow, is there a way to have the Airflow DAGs on the AWS cluster managed by GCP as well?
2. For batch ETL/streaming jobs (in Python), GCP has Dataflow (Apache Beam). What's the AWS equivalent of that?

Thanks!
No, you can't do that; for now you have to provision and manage Airflow on AWS yourself. There are some options you can choose from: EC2, ECS + Fargate, or EKS.
Dataflow's closest equivalents are Amazon Elastic MapReduce (EMR) or AWS Batch. Moreover, if you want to run your current Apache Beam jobs, you can run Apache Beam on EMR and everything should work the same.

Can you use Amazon's MSK on EKS?

I'm looking at the possibility of replacing/moving our existing Apache Kafka setup (version 2.1.0) to Amazon's MSK, and for it to work with EKS.
I've been looking around to see if this is actually possible and if someone has done this or attempted it, but so far I've only seen references to running Apache Kafka on EKS. Does anyone know if it is possible/makes sense to use MSK on EKS?
Many thanks.
Amazon MSK provides fully-managed Kafka clusters, which means that from your side, you do not have to operate the cluster at all. Broker and Zookeeper nodes are packaged, deployed, created, updated and patched for you.
This step-by-step tutorial illustrates the creation of a cluster.
The answer is no. MSK is a fully managed service provided by AWS; you cannot install a managed service :-) But you can run your own Kafka cluster on top of a Kubernetes cluster in AWS, e.g. on EKS, by installing a Kafka operator:
https://banzaicloud.com/docs/supertubes/kafka-operator/
I haven't done it for MSK before, but I have done it for AWS Aurora Postgres. I'm not sure why you couldn't define your external persistence (in this case MSK) as a Service with no selector and then manually register an Endpoints object pointing to the MSK host.
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
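A minimal sketch of that approach (the port and broker IP are placeholders). One caveat: an Endpoints object takes IP addresses, not DNS names, so for the MSK bootstrap hostname an ExternalName Service is the usual alternative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: msk-bootstrap          # in-cluster DNS name apps will use
spec:
  # No selector, so Kubernetes will not manage endpoints for this Service.
  ports:
    - port: 9092
      targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: msk-bootstrap          # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.12.34         # placeholder: a resolved MSK broker IP
    ports:
      - port: 9092
```

Apps in the cluster then reach MSK via `msk-bootstrap:9092` as if it were an in-cluster service.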

Can I migrate an existing VPC to a new account using CloudFormation?

I want to migrate my existing VPC, subnets, etc. from one Amazon account to another Amazon account using CloudFormation.
How can I do this?
If you have a CloudFormation template for your VPC environment already, then you can simply create a new stack using that same template in another AWS account.
However, this will create a copy of your VPC environment as it was when it was initially created. Any changes made to the VPC since it was created via CloudFormation will not be included, nor will any data accumulated since then, for example in a database.
If you do not already have a CloudFormation template, you can try to create one using AWS Cloud Former. Cloud Former can be used to examine your AWS environment and create a CloudFormation template from what it sees.
Instructions for running AWS Cloud Former can be found in the AWS Documentation: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html
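Whether the template comes from Cloud Former or is written by hand, the shape is the same. A minimal sketch of a VPC template you could create in the second account (CIDR blocks and names are placeholders to adapt to your environment):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal VPC + subnet, as a skeleton for recreating an environment
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16     # placeholder; match your source VPC's CIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.1.0/24     # placeholder subnet range
Outputs:
  VpcId:
    Value: !Ref Vpc
```

Creating a stack from this template in the target account reproduces the network layout; data and post-creation changes still have to be migrated separately, as noted above.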