I have an ASP.NET Core MVC 5 app that I want to deploy to AWS Elastic Beanstalk, and it needs to communicate with some .NET services and a SQL database on-premises. How can I achieve that? Can I achieve it by creating a VPN connection?
Based on the comments.
One way to enable private connectivity between an application running on Elastic Beanstalk (EB) and an on-premises database is through a VPN. AWS provides a managed service for this called AWS Site-to-Site VPN.
The other solution, though much more expensive, is AWS Direct Connect (DX). Unlike a VPN, DX connectivity does not traverse the public internet, which generally improves the security, bandwidth, and latency of the connection.
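To make the Site-to-Site VPN option concrete, the AWS side boils down to three EC2 API calls: a customer gateway describing your on-premises VPN device, a virtual private gateway attached to the VPC, and the VPN connection itself. A rough sketch of the request payloads (the IP address and ASN below are placeholders, not values from the question):

```python
# A rough sketch (not a working deployment): the three EC2 API payloads behind
# an AWS Site-to-Site VPN. The IP and ASN below are placeholder values.

def build_vpn_requests(on_prem_public_ip, bgp_asn=65000):
    """Return request payloads for create_customer_gateway,
    create_vpn_gateway and create_vpn_connection."""
    customer_gateway = {
        "Type": "ipsec.1",
        "PublicIp": on_prem_public_ip,  # public IP of the on-premises VPN device
        "BgpAsn": bgp_asn,
    }
    vpn_gateway = {"Type": "ipsec.1"}  # attached to the VPC after creation
    vpn_connection = {
        "Type": "ipsec.1",
        "Options": {"StaticRoutesOnly": True},  # static routing; BGP is also possible
    }
    return customer_gateway, vpn_gateway, vpn_connection

cgw, vgw, vpn = build_vpn_requests("203.0.113.10")
print(cgw["PublicIp"])  # 203.0.113.10

# With real credentials you would then pass these to boto3, roughly:
# ec2 = boto3.client("ec2")
# ec2.create_customer_gateway(**cgw), ec2.create_vpn_gateway(**vgw), ...
```

The actual boto3 calls are left commented out since they require AWS credentials; the payloads just show which pieces you have to supply.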
Related
We have a VPN server in AWS which is also an AD domain controller that controls our local domain (private subnet) in AWS.
We want to create a pipeline from Azure DevOps through the VPN server (which is also the AD domain controller) to our other server, and deploy the project on the server in the private subnet.
So my question is: can we do either of the things below, and if so, how?
Is there a way to make Azure DevOps use the VPN connection to connect directly to the server on the private subnet? And is it secure?
Is there a way, after adding the private-subnet server to the server list on the AD domain controller, to create a pipeline to the AD server but have the AD server deploy to the other server listed in Server Manager?
• Yes, there is a way to connect Azure DevOps to the AD domain controller server, which is itself the VPN server in AWS. For this, you will have to ensure that the AD domain controller / VPN server is reachable from the internet; although it is hosted on AWS, treat it as an on-premises environment for the purposes of this solution. I would therefore suggest deploying self-hosted Azure DevOps agents, grouped into agent pools, that deliver artifacts and other required data to the ADDC/VPN server in AWS. These agents need "line of sight" connectivity to that server, and they need outbound internet access to connect to Azure Pipelines.
Also, rather than Microsoft-hosted agent pools, which are used for Azure resources in a virtual network on Azure itself, use the default (self-hosted) pools, whose agents you configure for your on-premises environment.
• Since the Azure DevOps agents communicate with the VPN server in AWS as stated above, you can then create route tables between the private subnet and the subnet in which the VPN server is hosted, and whitelist the IP addresses of the connecting VPN gateway and related resources in that VPC. If the private subnets are hosted in different VPCs, VPC peering can also work, again by configuring the proper route tables and allowing the appropriate IP addresses.
The documentation below describes how to configure the DevOps agents and pools mentioned above:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&tabs=yaml%2Cbrowser
For additional security, you can also run your DevOps agents behind a web proxy:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/proxy?view=azure-devops&tabs=windows
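Once a self-hosted pool is registered, the pipeline only needs to reference it by name. A minimal sketch (the pool name `OnPremAWS` and the deployment step are assumptions for illustration, not part of the question):

```yaml
# Hypothetical azure-pipelines.yml targeting a self-hosted agent pool.
trigger:
  - main

pool:
  name: OnPremAWS   # self-hosted (default) pool, not a Microsoft-hosted one

steps:
  - script: echo "Deploying artifact via the ADDC/VPN server"
    displayName: Run on the self-hosted agent
```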
I have a requirement to connect AWS and Azure sites: say I have a VM on each side, and the two VMs should be able to communicate with each other.
How do I set up the VPN connection between AWS and Azure? Any reference article would be helpful.
There are a few options to achieve this in general:
In theory, you can cross-connect the two instances directly if they are publicly reachable, you only need to link those two, and nothing else will rely on that VPN tunnel; in general, however, this is a bad approach.
You can bring up firewall or router instances capable of handling IPsec tunnels. Such instances are available in the AWS Marketplace and its Azure analogue.
The recommended option is usually to use the AWS Site-to-Site VPN service and its counterpart in Azure, the VPN Gateway (but this depends on the use case).
Use a hybrid of the two: the AWS managed service to a firewall instance in Azure, or vice versa.
This blog post should get you started in case you choose option 3.
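One practical prerequisite worth checking before wiring up option 3: the AWS VPC and the Azure VNet must use non-overlapping address spaces, or routing over the tunnel becomes ambiguous. A quick sanity check with Python's standard library (the CIDRs here are made-up examples):

```python
# Check that the AWS VPC and Azure VNet address spaces do not overlap,
# which a site-to-site tunnel between them requires for clean routing.
import ipaddress

def address_spaces_ok(aws_vpc_cidr: str, azure_vnet_cidr: str) -> bool:
    aws = ipaddress.ip_network(aws_vpc_cidr)
    azure = ipaddress.ip_network(azure_vnet_cidr)
    return not aws.overlaps(azure)

print(address_spaces_ok("10.0.0.0/16", "10.1.0.0/16"))  # True: safe to link
print(address_spaces_ok("10.0.0.0/16", "10.0.5.0/24"))  # False: overlapping
```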
My organization has an AWS presence, but no VPN or Direct Connect to and from our on-premises data center. We would still like to leverage DynamoDB in the short term without having Direct Connect or a VPN connection in place. We will not be using any EC2 instances for our web services. Is it possible for an on-prem host to talk to DynamoDB without any AWS networking infrastructure in place, basically calling the DynamoDB service directly without VPN or Direct Connect?
All you need is an internet connection to access DynamoDB. Your on-premises servers will need outbound access to call the AWS API, which is publicly reachable over the internet.
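To illustrate, the SDK resolves DynamoDB to a public regional HTTPS endpoint, so the call from on-premises is a perfectly ordinary client call (the region and table names below are made up):

```python
# DynamoDB is reached via a public regional HTTPS endpoint, so an on-prem
# host only needs outbound internet access plus AWS credentials.

def dynamodb_endpoint(region: str) -> str:
    """Public endpoint an SDK call resolves to for a given region."""
    return f"https://dynamodb.{region}.amazonaws.com"

print(dynamodb_endpoint("us-east-1"))  # https://dynamodb.us-east-1.amazonaws.com

# With credentials configured (env vars, ~/.aws/credentials, etc.) the usual
# boto3 call works unchanged from on-premises, roughly:
# import boto3
# table = boto3.resource("dynamodb", region_name="us-east-1").Table("MyTable")
# table.get_item(Key={"id": "42"})
```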
Alternatively, for servers that do run inside a VPC, you can use a VPC gateway endpoint to connect to DynamoDB over the Amazon network:
https://docs.aws.amazon.com/it_it/vpc/latest/privatelink/vpc-endpoints.html
We currently have all of our cloud assets inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure load balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put in Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I can see to achieve this are:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I kept Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Isn't that less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I'm not sure whether this is technologically achievable: can Service Fabric services even "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but I am very new to the idea of using two cloud providers in one hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration (I know these are different Azure regions, not different cloud providers, but the sample can still show how some of these things are configured).
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependency on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than following the requested approach, because in the end the only thing left in Azure would be that Azure resource screen.
Extending Oleg's answer ("creating multiregion / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on-premises.
The one detail that is not made very clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to do, which is why they avoid recommending them; but it is possible.
If you want to go the AWS route, I point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first part of a 4-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
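For context, a standalone (non-Azure) cluster is described by a ClusterConfig.json that you manage yourself. An abbreviated sketch of the nodes section (the cluster name, node names, and IPs are invented, and a real file also needs the properties and security sections):

```json
{
  "name": "AwsStandaloneCluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "10-2017",
  "nodes": [
    {
      "nodeName": "vm0",
      "iPAddress": "10.0.1.4",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
    },
    {
      "nodeName": "vm1",
      "iPAddress": "10.0.1.5",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r1",
      "upgradeDomain": "UD1"
    }
  ]
}
```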
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.
I have a Python server (basic REST API) running on an AWS EC2 instance. The server supplies the data for a mobile application. I want my mobile app to connect to the python server securely over HTTPS. What is the easiest way that I can do this?
Thus far, I've tried setting up an HTTP/HTTPS load balancer with an Amazon-issued certificate, but it seems the connection between the ELB and the EC2 instance would still not be fully secure (plain HTTP inside the VPC).
When securing access to a REST API on an EC2 instance, there are several considerations to look at:
Authentication & Authorization.
Monitoring of API calls.
Load balancing & life cycle management.
Throttling.
Firewall rules.
Secure access to the API.
Usage information per consumer, etc.
Several of these considerations are mandatory to secure a REST API, such as:
Having SSL/TLS for communication. (Note: SSL termination at the AWS load balancer level is acceptable here, since from there onwards the traffic stays within the VPC and can be further hardened using security groups.)
If you plan on getting most of the REST API capabilities listed above, I would recommend proxying your EC2 service through AWS API Gateway, which provides most of those capabilities out of the box.
In addition, you can configure AWS WAF for extra security at the load balancer (it supports the AWS Application Load Balancer).
You can leverage several AWS services to handle these.
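Alternatively, if you want end-to-end encryption rather than terminating TLS at the load balancer, the Python server itself can serve HTTPS. A minimal, non-production sketch using only the standard library (the `cert.pem`/`key.pem` paths are assumptions, e.g. certificates issued via Let's Encrypt):

```python
# Minimal sketch of serving a REST endpoint over HTTPS directly from the
# instance using only the standard library. Not production-grade.
import ssl
from http.server import HTTPServer, BaseHTTPRequestHandler

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

def make_server(cert_file: str = "cert.pem", key_file: str = "key.pem",
                port: int = 443) -> HTTPServer:
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(cert_file, key_file)  # assumed certificate paths
    httpd = HTTPServer(("0.0.0.0", port), ApiHandler)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
    return httpd

# make_server().serve_forever()  # not run here: requires real certificates
```

In practice a production framework (gunicorn behind nginx, etc.) is a better fit, but this shows that the TLS handshake can terminate on the instance itself.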
Question answered in the comments.
It's fine to leave traffic between the ELB and the EC2 instance unencrypted, as long as they are in the same VPC and the security group for the EC2 instance(s) is properly configured.
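As a quick illustration of "properly configured" (the CIDRs below are invented): the instance's inbound rule should only match addresses inside the VPC, not the whole internet. A check with Python's standard library:

```python
# Sanity-check that a security-group rule's source CIDR is confined to the
# VPC, i.e. the backend is not reachable from arbitrary internet addresses.
import ipaddress

def rule_confined_to_vpc(rule_cidr: str, vpc_cidr: str) -> bool:
    rule = ipaddress.ip_network(rule_cidr)
    vpc = ipaddress.ip_network(vpc_cidr)
    return rule.subnet_of(vpc)

print(rule_confined_to_vpc("10.0.1.0/24", "10.0.0.0/16"))  # True: VPC-internal
print(rule_confined_to_vpc("0.0.0.0/0", "10.0.0.0/16"))    # False: open to all
```

(In practice you would usually reference the load balancer's security group as the rule source rather than a CIDR, which is tighter still.)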