Azure DevOps Pipeline fails on creating database in Django test - django

I have been trying to build an Azure DevOps pipeline for CI/CD for my Django project. The code is pulled from a GitHub repo (and is actually already deployed on Azure App Service). However, when the pipeline runs python manage.py test, I get the following error:
Creating test database for alias 'default'...
pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
##[error]Bash exited with code '1'.
I have tried extensively to whitelist Azure DevOps, but the error persists. How can I resolve this so that the pipeline can run tests for CI/CD?

Which agent are you using, a hosted agent or a self-hosted agent?
If you are using a hosted agent, the pipeline code runs on that agent, so you should add the hosted agent IP addresses to the whitelist rather than the Azure DevOps Services IPs; the addresses you whitelisted are the service's, not the agent's. Microsoft publishes a weekly JSON file listing IP ranges for Azure data centers, broken out by region. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions contained in your geography. A quick way to confirm what needs whitelisting is to print the agent's outbound IP from a pipeline step, as in the sketch below.
If you are using a self-hosted agent, check your local agent server's IP and add that instead.
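A minimal diagnostic sketch in Python, assuming the agent has outbound internet access (api.ipify.org is just one public IP echo service, used here for illustration):

    # Print the agent's outbound IP so you can check it against the whitelist.
    # api.ipify.org is a public echo service; any equivalent works.
    import urllib.request

    with urllib.request.urlopen("https://api.ipify.org") as resp:
        print("Agent outbound IP:", resp.read().decode())

Run this as a script step in the pipeline and compare the printed address against the ranges you have whitelisted on the SQL server.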

Related

What can one do with the original VMs once server migration is done using AWS Server Migration Service?

AWS Server Migration Service (SMS) automates the migration of on-premises VMware vSphere, Microsoft Hyper-V, and Azure virtual machines to the AWS cloud as AMIs, which we then run on EC2 instances. Once this migration is completely done for the OS, data, and applications, what should be done with the original VMs we were using previously? Should we sell them?

Node Express REST hosted in Google Cloud Run

I am thinking of setting up Google Cloud Run to host Docker container services. If the existing service is a Node/Express REST service listening on a port, do I need to remove Express so it isn't constantly running and listening, and being charged for?
No, your container is only scaled up when it is receiving incoming requests. See "Cloud Run container instances" in the Cloud Run Resource model docs.
If your existing service is an Express app, you are all set.
You will not be charged when you are not receiving any request.
Just package it in a container using a Dockerfile and you can deploy it to Cloud Run. Take a look at the Node.js sample in the quickstart.
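The one contract your container does need to honor is listening on the port Cloud Run provides in the PORT environment variable (8080 by default). A minimal sketch of that contract, in Python for illustration (an Express app would read process.env.PORT the same way):

    # Serve HTTP on the port Cloud Run passes via the PORT environment
    # variable; the handler body is illustrative.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()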

Viewing requests between Google container and Cloud SQL

We have a GKE application container running Django that connects to a Postgres database on Cloud SQL via private IP. The application is configured to run 10 Django processes per pod. Whenever a new pod is spun up (either when deploying new code or when scaling to meet load), each of the 10 Django processes in that pod encounters exactly one error (it varies, but is always database-related) when connecting to the database; all subsequent database requests are fine. We suspect that the problem is on the Django side and that the failing requests never even reach Cloud SQL.
How do I view the network requests between the application and Cloud SQL?
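One way to get visibility from the Django side is to log every time a process opens a new database connection, using Django's connection_created signal; a minimal sketch (the logger wiring is illustrative, and the receiver would typically be registered in an AppConfig.ready()):

    # Log each new low-level DB connection, tagged with the process ID, to see
    # exactly when each of the 10 workers first connects and which one errors.
    import logging
    import os

    from django.db.backends.signals import connection_created

    logger = logging.getLogger(__name__)

    def log_new_connection(sender, connection, **kwargs):
        # Fires once per new connection, before the first query runs on it.
        logger.info("pid=%s opened DB connection alias=%s",
                    os.getpid(), connection.alias)

    connection_created.connect(log_new_connection)

For a wire-level view you would need a packet capture (e.g. tcpdump) in the pod's network namespace, since private-IP traffic to Cloud SQL does not pass through a proxy you can inspect.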

TeamCity Agent Push Failing Across AWS Accounts

We've recently moved our TeamCity server to AWS, but it is managed by a different business unit in my company, so we have separate AWS accounts. I've gone through our parent company to get VPC peering enabled so that I can launch EC2 instance build agents.
To simplify: Our TeamCity server is on AWS account A and I'm working on AWS account B, where I want the build agents to launch.
I had no problems doing this back when the server was on-prem, but I'm having real trouble now.
Good: I can launch the instances from TeamCity, which is located in the other business unit's account.
Bad: I can't get it to progress from there.
For now, I just want to get 'Agent Push' working. When I try, this is the output I'm given in the web console:
[15:12:09]: AgentPush v58406 - Install Agent on remote host
[15:12:09]: Looking for Target Host...
[15:12:09]: Validating TeamCity Server Root URL 'https://teamcity.company.com' ...
[15:12:09]: Starting agent push to 'xx.xx.xxx.xxx'(IP: xx.xx.xxx.xxx) using preset 'Amazon Linux' (Username 'ec2-user'. Target platform: 'Unix')
[15:12:09]: Checking Platform...
[15:16:09]: Remote agent installation failed: timeout: socket is not established
One more thing: we use Direct Connect and all private IPs. I'm supplying the private IP to the agent push. This worked when the server was running on-prem.
Does anyone have any ideas as to why I can't get the instances to talk to each other?
You need to set up AWS cross-account access. More in the docs:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html?icmpid=docs_iam_console
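For illustration, what that tutorial sets up looks like this from code, using boto3 (the AWS SDK for Python); the role ARN and session name are placeholders:

    # Cross-account access: account B exposes a role that account A's
    # credentials are trusted to assume; STS issues temporary credentials.
    import boto3

    sts = boto3.client("sts")
    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::ACCOUNT_B_ID:role/TeamCityBuildAgents",  # placeholder
        RoleSessionName="teamcity-agent-push",
    )
    creds = assumed["Credentials"]

    # Act in account B with the temporary credentials, e.g. to check that the
    # agents' security groups allow SSH (port 22) from the TeamCity server.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

Since the failure is a socket timeout rather than an authentication error, it is also worth confirming that the security groups and peered-VPC route tables allow SSH from the TeamCity server to the agent's private IP.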

Setting up an Amazon Server with GoDaddy

I am trying to set up an Amazon server to host a dynamic website I'm currently creating. I bought the domain on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial: http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now loads properly on the domain.
I'm not sure of the next step from here, but I would like to set up:
- A database server
- Anything else that might be necessary to run a dynamic website
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk, and your database can be hosted on RDS (Relational Database Service). The two can then be integrated, as in the sketch below.
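A minimal sketch of that integration, assuming Flask with a Postgres RDS instance; the RDS_* environment variables are the ones Elastic Beanstalk sets when a database is attached to the environment, and the route is illustrative:

    import os

    import psycopg2
    from flask import Flask

    application = Flask(__name__)  # Elastic Beanstalk looks for "application"

    def get_conn():
        # Connection settings come from the RDS_* variables Elastic Beanstalk
        # injects; adjust the names if you manage the database yourself.
        return psycopg2.connect(
            host=os.environ["RDS_HOSTNAME"],
            port=os.environ.get("RDS_PORT", "5432"),
            dbname=os.environ["RDS_DB_NAME"],
            user=os.environ["RDS_USERNAME"],
            password=os.environ["RDS_PASSWORD"],
        )

    @application.route("/")
    def index():
        with get_conn() as conn, conn.cursor() as cur:
            cur.execute("SELECT 1")
            return "database reachable"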
Otherwise, spin up your own t2.micro instance in EC2, log in via SSH, and set up the database server and application just as you have them locally. This server could also host the (currently S3-hosted) static files.
I don't know your exact requirements, but personally I would start by setting up the EC2 instance and go from there, since integrating multiple AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.