I have a use case to replicate Kafka topics and data from one Kafka cluster in AWS to Confluent Kafka deployed on AWS. The issue is that my Kafka cluster in AWS is deployed in a VPC, say VPC-1, that doesn't allow VPC peering with Confluent Cloud. I need to use a load balancer/proxy service deployed in another VPC, say VPC-2, that is peered with both VPC-1 and the Confluent Kafka cluster's VPC (VPC-3). Also, VPC-3 is peered with VPC-1. What would be an ideal load balancer setup pointing to VPC-1, so that data can then be replicated to the Confluent Cloud Kafka cluster? Below is how the VPCs are peered; I need something running in VPC-2 that forwards traffic between Confluent Kafka and the Kafka cluster in VPC-1. The data volume in the source Kafka is a few hundred GB per day.
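One wrinkle worth illustrating: Kafka clients must be able to reach every broker individually (they follow the broker addresses returned in metadata), so a TCP proxy in VPC-2 would typically need one distinct listener/forwarding rule per source broker rather than a single round-robin pool. A minimal sketch of that per-broker port mapping, with all hostnames and ports being hypothetical placeholders:

```python
# Sketch: one proxy listener port per source broker, since Kafka clients
# address brokers individually. Hostnames/ports are made-up examples.

def build_forwarding_rules(brokers, proxy_host, base_port=9000):
    """Map each source broker to a dedicated listener on the proxy host."""
    rules = {}
    for offset, broker in enumerate(brokers):
        rules[f"{proxy_host}:{base_port + offset}"] = broker
    return rules

rules = build_forwarding_rules(
    ["b-1.kafka.vpc1.internal:9092", "b-2.kafka.vpc1.internal:9092"],
    "proxy.vpc2.internal",
)
print(rules)
# {'proxy.vpc2.internal:9000': 'b-1.kafka.vpc1.internal:9092',
#  'proxy.vpc2.internal:9001': 'b-2.kafka.vpc1.internal:9092'}
```

Each rule would then become, for example, an NLB listener plus a single-broker target group, and the brokers' `advertised.listeners` would need to return the proxy-side addresses.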
We have two environments, Dev and Prod, each in a different VPC.
We were using MSK (managed Kafka) with an EKS Fargate cluster (no nodes), running successfully.
We have now moved from managed MSK to MSK Serverless.
We attached both VPCs and reused the same subnets and security groups that we had used to connect managed MSK to the EKS cluster.
We are able to connect an EC2 instance to MSK Serverless from both VPCs,
but when we try to connect from any EKS cluster we get a timeout or cannot connect to MSK at all, even though we are using the same configuration that worked with managed MSK.
How do we connect MSK Serverless to an EKS Fargate cluster?
I have 7 Spring microservices which I would like to deploy into AWS Elastic Beanstalk. I see that I will be charged for outbound and inbound network traffic. It's not clear to me whether I will be charged for the internal communication between the microservices.
will I be charged for the internal communication between the microservices?
It depends. If all services are in the same AZ and you use private IP addresses, then you will not be charged for traffic. From the docs:
Data transferred between Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache instances, and Elastic Network Interfaces in the same Availability Zone is free.
But if you spread your services across AZs, then you will probably be paying for the traffic:
Data transferred "in" to and "out" from Amazon EC2, Amazon RDS, Amazon Redshift, Amazon DynamoDB Accelerator (DAX), and Amazon ElastiCache instances, Elastic Network Interfaces or VPC Peering connections across Availability Zones in the same AWS Region is charged at $0.01/GB in each direction.
Cross-region traffic also has a cost.
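To get a feel for the numbers, here is a rough back-of-the-envelope estimate using the $0.01/GB cross-AZ rate quoted above. The traffic figure and monthly totals are illustrative only:

```python
# Cross-AZ transfer is billed in each direction, so every GB transferred
# between AZs is effectively charged twice at $0.01/GB (rate from the
# pricing excerpt quoted above).

CROSS_AZ_RATE_PER_GB = 0.01  # USD per GB, per direction

def cross_az_cost(gb_per_day, days=30):
    """Estimated cost of cross-AZ traffic over a billing period."""
    return gb_per_day * days * CROSS_AZ_RATE_PER_GB * 2

# e.g. 100 GB/day of chatter between microservices in different AZs:
print(f"${cross_az_cost(100):.2f}")  # prints $60.00 for a 30-day month
```

So keeping chatty services in one AZ (with private IPs) can matter, though for production you usually spread across AZs for availability and accept the transfer cost.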
Is it possible to access Aurora Serverless DB from AWS Lambda?
In my case I have a Flutter mobile application which is communicating with Lumen micro framework through RESTful API. For DB I use MySQL.
After creating an AWS Aurora cluster, can I connect to it like a normal MySQL DB connection?
DB_CONNECTION=mysql
DB_HOST=my.awshost.com
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
I am relatively new to AWS. I've only been using EC2 so far, so I am trying to get more familiar with the serverless concept.
Any help is appreciated.
Yes, you can access it like any other database service, but there is a limitation with Aurora Serverless: it is only accessible from within a VPC, so you should deploy the Lambda function in the same VPC and configure its networking accordingly.
Limitations of Aurora Serverless
Aurora with MySQL version 5.6 compatibility
Aurora with PostgreSQL version 10.7 compatibility
The port number for connections must be:
3306 for Aurora MySQL
5432 for Aurora PostgreSQL
You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.
Each Aurora Serverless DB cluster requires two AWS PrivateLink endpoints. If you reach the limit for PrivateLink endpoints within your VPC, you can't create any more Aurora Serverless clusters in that VPC. For information about checking and changing the limits on endpoints within a VPC, see Amazon VPC Limits.
You can't access an Aurora Serverless DB cluster's endpoint through an AWS VPN connection or an inter-region VPC peering connection.
You can explore getting-started-with-the-amazon-aurora-serverless-data-api for configuring a Lambda with a Serverless DB.
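To connect the dots with the `.env` file in the question: inside a Lambda that runs in the same VPC as the cluster, the connection settings can be assembled from those same keys and passed to a MySQL driver. A minimal sketch (the endpoint value and the PyMySQL mention are illustrative assumptions, not tested code against a live cluster):

```python
# Sketch: build MySQL connection settings from Laravel-style env keys.
# The host below is a made-up cluster endpoint; a real one is only
# resolvable/reachable from inside the cluster's VPC.

def db_settings(env):
    return {
        "host": env["DB_HOST"],                    # the VPC-internal cluster endpoint
        "port": int(env.get("DB_PORT", "3306")),   # 3306 for Aurora MySQL
        "database": env["DB_DATABASE"],
        "user": env["DB_USERNAME"],
        "password": env["DB_PASSWORD"],
    }

settings = db_settings({
    "DB_HOST": "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "DB_DATABASE": "homestead",
    "DB_USERNAME": "homestead",
    "DB_PASSWORD": "secret",
})
# A Lambda handler would then do something like:
#   connection = pymysql.connect(**settings)
```

In a real Lambda you would read these from `os.environ` (set as Lambda environment variables) rather than a dict literal.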
Google failed me again, or maybe I wasn't clear enough in my question.
Is there an easy way, or rather how do we determine, which services are VPC-bound and which are not?
For example, EC2 and RDS require a VPC setup.
Lambda and S3 are publicly available services and don't need a VPC setup.
The basic services that require an Amazon VPC are all related to Amazon EC2 instances, such as:
Amazon RDS
Amazon EMR
Amazon Redshift
Amazon Elasticsearch
AWS Elastic Beanstalk
etc
These services run "on top" of Amazon EC2 and therefore connect to a VPC.
There are also other services that use a VPC, but you would only use them if you are using some of the above services, such as:
Elastic Load Balancer
NAT Gateway
So, if you wish to run "completely non-VPC", then avoid services that are "deployed". That means using AWS Lambda for compute, probably DynamoDB for the database, Amazon S3 for object storage, etc. This is otherwise referred to as going "serverless".
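The rule of thumb above can be captured as a simple lookup. This is an illustrative, deliberately non-exhaustive classification (note that Lambda *can* optionally attach to a VPC, but does not require one):

```python
# Illustrative classification based on the lists above; not exhaustive.

VPC_BOUND = {"EC2", "RDS", "EMR", "Redshift", "Elasticsearch", "Elastic Beanstalk"}
PUBLIC_ENDPOINT = {"Lambda", "S3", "DynamoDB"}  # Lambda may attach to a VPC, but need not

def needs_vpc(service):
    """True if the service deploys into a VPC, False if it has a public
    endpoint, None if unknown (then check: does it run on EC2 under the hood?)."""
    if service in VPC_BOUND:
        return True
    if service in PUBLIC_ENDPOINT:
        return False
    return None

print(needs_vpc("RDS"), needs_vpc("Lambda"))  # True False
```

The `None` branch encodes the heuristic from the answer: if a service provisions EC2 instances under the hood, it will need a VPC.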
Hi, I am facing trouble crawling Mongo data to S3 using a crawler from AWS Glue. In MongoDB Atlas you need to whitelist the IPs it expects connections from. As AWS Glue is serverless, I do not have any fixed IP as such. Please suggest any solutions for this.
According to the document Connecting to a JDBC Data Store in a VPC, AWS Glue jobs that belong to the specified VPC (where the VPC has a NAT gateway) should have a fixed IP address. For example, after configuring a NAT gateway for the VPC, HTTP requests from an EC2 server in that VPC have a fixed IP address.
I haven't tested this with Glue, but how about setting a VPC with such a NAT gateway for the Glue job?
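If you go this route, the address to whitelist in Atlas is the NAT gateway's Elastic IP. A sketch of pulling that out of a response shaped like boto3's `ec2.describe_nat_gateways()` result (the sample data below is made up, and a real call of course requires AWS credentials):

```python
# Sketch: extract the fixed public IP(s) of NAT gateways from a
# describe_nat_gateways-shaped response. Sample data is hypothetical;
# in practice you would call boto3's ec2.describe_nat_gateways().

def nat_public_ips(response):
    """Collect the Elastic IPs attached to NAT gateways in the response."""
    ips = []
    for gw in response.get("NatGateways", []):
        for addr in gw.get("NatGatewayAddresses", []):
            if "PublicIp" in addr:
                ips.append(addr["PublicIp"])
    return ips

sample = {
    "NatGateways": [
        {"NatGatewayId": "nat-0abc", "NatGatewayAddresses": [{"PublicIp": "203.0.113.10"}]},
    ]
}
print(nat_public_ips(sample))  # ['203.0.113.10']
```

All Glue traffic leaving the VPC through that NAT gateway will then appear to Atlas as coming from that IP.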