Fixed and Reserved Outbound IP for App Service - web-services

I have an Azure web app with two slots (one for production and one for staging). The web app calls an external web service that is protected by IP filtering: in production it calls the production instance of the external service, and in staging it calls the staging instance.
Because I swap between staging and production, my two outbound IP addresses change regularly, so the external web service cannot protect staging and production independently.
Can an App Service Environment help me here? Or another Azure service?
Thanks.

It seems you're out of luck here. According to the Microsoft Azure documentation:
Can I use a reserved IP for all Azure services?
Reserved IPs can only be used for VMs and cloud service instance roles exposed through a VIP.
So there are no reserved IPs for Azure App Service, only for VMs and Cloud Services.
But there might be some solutions possible:
replace IP filtering with filtering on the Azure subdomain, such as my-app-prod.azurewebsites.net and my-app-staging.azurewebsites.net (or buy a domain name, point its subdomain records at the App Service slot subdomains, and use those instead of Azure's)
migrate your environment to Azure Cloud Services or VMs, and set up an Azure Virtual Network with reserved IP addresses.

Related

Pipeline from Azure DevOps to local domain through a VPN and AD server

We have a VPN server in AWS which is also an AD domain controller that controls our local domain (private subnet) in AWS.
We want to create a pipeline from Azure DevOps, through the VPN server (which is also the AD domain controller), to our other server, and deploy the project to that server on the private subnet.
So my question is: can we do either of the things mentioned below, and if yes, how can we achieve it?
Is there a way to make Azure DevOps use the VPN connection to connect directly to the server on the private subnet? And is that secure?
Alternatively, after adding the server on the private subnet to the server list on the AD domain controller, is there a way to create a pipeline to the AD server but have the AD server deploy to the other server listed in Server Manager?
• Yes, there is a way to connect Azure DevOps to the AD domain controller server, which is itself the VPN server in AWS. For this you will have to ensure that the AD domain controller / VPN server is accessible from the internet; since it is hosted on AWS, treat it as an on-premises environment for the purposes of this solution. I would therefore suggest deploying self-hosted Azure DevOps agents and agent pools so that they deliver artifacts and other required data to the ADDC / VPN server in AWS. These agents have 'line of sight' connectivity to the VPN server, and they only need outbound internet access in order to connect to Azure Pipelines.
Also, rather than Microsoft-hosted agent pools, which run on Azure's own infrastructure, use self-hosted agents in a default pool, configured for your on-premises environment.
• Since the Azure DevOps agents communicate with the VPN server in AWS as stated above, you can additionally create route tables between the private subnet and the subnet hosting the VPN server, and whitelist the IP addresses of the connecting VPN gateway and related resources in that VPC. If the private subnets live in different VPCs, VPC peering can also work, provided the route tables are configured properly and the appropriate IP addresses are allowed in AWS.
The documentation link below describes how to configure the DevOps agents and pools mentioned above:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&tabs=yaml%2Cbrowser
For additional security, you can also run your DevOps agents behind a web proxy:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/proxy?view=azure-devops&tabs=windows
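As a rough sketch of the agent-pool approach described above, a pipeline can be pinned to a self-hosted pool like this (the pool name and deploy script are hypothetical, not from the question):

```yaml
# azure-pipelines.yml (sketch; "OnPremVpnAgents" is a hypothetical
# self-hosted pool whose agents have line of sight to the ADDC / VPN server)
trigger:
  - main

pool:
  name: OnPremVpnAgents   # a self-hosted pool, not a Microsoft-hosted vmImage

steps:
  - checkout: self
  - script: |
      # Runs on the self-hosted agent, which can reach the private subnet
      ./deploy.sh
    displayName: Deploy to the server on the private subnet
```

The key point is the `pool.name` entry: jobs are dispatched to whichever registered agent in that pool is available, and the agent itself only makes outbound HTTPS connections to Azure Pipelines, so nothing inbound needs to be opened on the AWS side beyond what the deployment itself requires.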

How to access GCP resource in private server

My application is deployed on our private server, and I want to use some GCP services, like Cloud Storage buckets and Secret Manager.
Given that the application is deployed on an internal server, can it use GCP services, or do we need to deploy the app to GCP as well? The application is written in JSP.
How can this be done, and what is the best practice?
You have more than one option. You can use Cloud VPN, which securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Follow GCP's official documentation to set it up.
Another option is Google Cloud Hybrid Connectivity, specifically Cloud Interconnect, which lets you connect your infrastructure to Google Cloud. Visit the following link for best practices and the setup guide.
Finally, see the following thread for more reference on your connectivity requirement.

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We currently have all of our cloud assets inside Azure, including a Service Fabric cluster containing many applications and services which communicate with Azure VMs through Azure load balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I can see to achieve this are:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I keep Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that is even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I am not sure whether this is technologically achievable: can Service Fabric services even "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements, to give you an initial idea of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions within Azure rather than different cloud providers, but the sample is useful for seeing how some of these things are configured.)
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependency on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is often not well understood is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources yourself (VMs, load balancers, autoscaling, OS updates, and so on) to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap, because they are very complex to build; this is why Microsoft avoids recommending them, but it is possible.
If you want to go the AWS route, I would point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
It is the first of a four-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.

AWS Cloudformation communicating using internal IP addresses

I am trying to create a web application using AWS CloudFormation. This particular app will have three instances (web server, app server, RDS database), and I want these instances to be able to talk to each other: the web server should talk to the app server, and the app server should talk to the RDS database.
I can't work out how to configure the servers so that they know each other's IP addresses. I figure there are three ways to do this, but I'm not sure which of them is realistically possible or feasible:
I can assign fixed private IP addresses (e.g. 192.168.0.2 and so on) during stack creation, so that I know each instance's IP address beforehand.
I can wait for CloudFormation to return the IP addresses of the created instances and manually tweak my code to use them.
I can somehow get the IP address of each created instance during the stack creation process and pass it as a parameter to the next instance I create (not sure whether CloudFormation allows this?).
Which is the best way to set this up? Also, please share a little bit of detail around how I can do this in Cloudformation.
A solution would be to place your web server and app server behind an ELB (Elastic Load Balancer). This way, your web server communicates with the app server using the ELB's DNS name rather than the app server's IP, and the app server communicates with the RDS database via the RDS instance's endpoint (which is again a DNS name, not an IP).
Let's suppose you separate your infrastructure into three CloudFormation stacks: the RDS database, the app server, and the web server. The RDS stack exposes the RDS instance's endpoint through the CloudFormation Outputs feature. That endpoint is in turn passed as a CloudFormation Parameter to the app server stack: you can insert the RDS endpoint into the app server LaunchConfiguration's UserData field so that, on startup, your app server knows the RDS instance's endpoint. Finally, the app server stack exposes the app server ELB's endpoint (again using the CloudFormation Outputs feature). Using the same recipe, the URL of the app server's ELB is injected into and used by the web server stack.
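The Outputs-to-Parameter wiring described above can be sketched in two template fragments (resource, parameter, and file names are illustrative, not from the question):

```yaml
# rds-stack.yaml (fragment): expose the database endpoint as a stack output
Outputs:
  DbEndpoint:
    Description: Hostname of the RDS instance
    Value: !GetAtt MyDatabase.Endpoint.Address

# app-stack.yaml (fragment): receive the endpoint and hand it to instances
Parameters:
  DbEndpoint:
    Type: String
    Description: Value taken from the RDS stack's DbEndpoint output
Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-12345678        # placeholder AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Make the RDS endpoint available to the app at boot time
          echo "DB_HOST=${DbEndpoint}" >> /etc/environment
```

When creating the app server stack, you pass the value of the RDS stack's `DbEndpoint` output as the `DbEndpoint` parameter (manually, via a script, or with nested stacks); the same pattern then repeats between the app server and web server stacks with the ELB's DNS name.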
As a side note, it is also a good idea to run your services (web server, app server) in an Auto Scaling group. It is quite probable that your instances will at some point be terminated by factors outside your control; in that case, you want the Auto Scaling group to start a fresh instance and place it behind your ELB.

Spring Cloud Eureka on Cloud Foundry: Peer replication among instances on same site

In our case, the Eureka discovery service is running on Cloud Foundry. The clients, which also run on CF, register themselves with Eureka, and we can see all instances registered okay.
We are also able to use the DS Replicas feature to sync our Eureka servers running in different AZs.
However, it's unclear to me how replication between peer instances of the Eureka service works for multiple instances running in the same AZ of Cloud Foundry.
Given that the Cloud Foundry router exposes all instances of an application in the same AZ behind a single HTTP URL, how do the instances sync to give a consistent view of the Eureka console whenever it is accessed?
Or should one expect it to take an unpredictable amount of time for all instances on a site to sync up?
This is purely a concern about achieving a consistent state across the Eureka instances of one CF site. For information: although we are running PCF, we are not using the Service Discovery tile from PCF.