TeamCity Agent Push Failing Across AWS Accounts

We've recently moved our TeamCity server to AWS, but it is managed by a different business unit in my company, so we have different AWS accounts. I've gone through our parent company to get VPC peering enabled so that I can launch EC2 build agents.
To simplify: Our TeamCity server is on AWS account A and I'm working on AWS account B, where I want the build agents to launch.
I had no problems doing this back when the server was on-prem, but I'm having real trouble now.
Good: I can launch the instances from TeamCity, which is located in the other business unit's account.
Bad: I can't get it to progress from there.
Right now I just want to get 'Agent Push' working. When I try, this is the output in the web console:
[15:12:09]: AgentPush v58406 - Install Agent on remote host
[15:12:09]: Looking for Target Host...
[15:12:09]: Validating TeamCity Server Root URL 'https://teamcity.company.com' ...
[15:12:09]: Starting agent push to 'xx.xx.xxx.xxx'(IP: xx.xx.xxx.xxx) using preset 'Amazon Linux' (Username 'ec2-user'. Target platform: 'Unix')
[15:12:09]: Checking Platform...
[15:16:09]: Remote agent installation failed: timeout: socket is not established
One more thing: we use direct connect and all private IPs. I'm supplying the private IP to the agent push. This worked when I was running it on-prem.
Does anyone have any ideas as to why I can't get the instances to talk to each other?

You need to set up AWS cross-account access. More in the docs here:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html?icmpid=docs_iam_console
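A minimal sketch of what that tutorial sets up, assuming the TeamCity server lives in account A and the build agents in account B; the role name, policy file, and account IDs below are all placeholders:
# In account B: create a role that account A is trusted to assume.
# trust-policy.json (placeholder file) would contain:
# { "Version": "2012-10-17",
#   "Statement": [{ "Effect": "Allow",
#                   "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:root" },
#                   "Action": "sts:AssumeRole" }] }
aws iam create-role --role-name TeamCityCrossAccount \
    --assume-role-policy-document file://trust-policy.json
# From account A: confirm the role can be assumed.
aws sts assume-role \
    --role-arn arn:aws:iam::ACCOUNT_B_ID:role/TeamCityCrossAccount \
    --role-session-name agent-push-test
Note that cross-account IAM only covers the API side; since the failure above is an SSH socket timeout, it is also worth confirming that the security group on the agent instances in account B allows port 22 from the TeamCity server's private IP across the peering connection.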

Related

Why are outbound SSH connections from Google CloudRun to EC2 instances unspeakably slow?

I have a Node API deployed to Google CloudRun and it is responsible for managing external servers (clean, new Amazon EC2 Linux VMs), including over SSH and SFTP. SSH and SFTP do work eventually, but the connections take 2-5 minutes to initiate. Sometimes they time out with handshake timeout errors.
The same service running on my laptop, connecting to the same external servers, has no issues and the connections are as fast as any normal SSH connection.
The deployment on CloudRun is pretty standard. I'm running it with a service account that permits access to secrets, etc. Plenty of memory allocated.
I have a VPC Connector set up, and have routed all traffic through the VPC connector, as per the instructions here: https://cloud.google.com/run/docs/configuring/static-outbound-ip
I also tried setting UseDNS no in the /etc/ssh/sshd_config file on the EC2 instance, as per some suggestions online about slow SSH logins, but that has not made a difference.
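For reference, that change amounts to the following on the EC2 instance (assuming a stock sshd_config with no existing UseDNS line):
# Disable reverse-DNS lookups during SSH logins, then restart sshd to pick it up.
echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd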
I have rebuilt and redeployed the project a few dozen times and all tests are on brand new EC2 instances.
I am attempting these connections using open source wrappers on the Node ssh2 library, node-ssh and ssh2-sftp-client.
Ideas?
Cloud Run only allocates CPU while an HTTP request is active.
You probably don't have an active request on Cloud Run while these connections are being set up, and outside of an active request the CPU is throttled, which would explain the multi-minute handshakes.
A better fit for this pipeline is Cloud Workflows plus regular Compute Engine instances.
You can set up a Workflow to start a Compute Engine instance for the task and stop it once it has finished the steps.
I am the author of the article Run shell commands and orchestrate Compute Engine VMs with Cloud Workflows, which will guide you through the setup.
Executing the Workflow can be triggered by Cloud Scheduler or by HTTP ping.
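The start/work/stop pattern described above, reduced to plain gcloud commands as a sketch (the instance name, zone, and remote command are placeholders; the article orchestrates the equivalent steps with Cloud Workflows):
# Start the VM that will do the SSH/SFTP work.
gcloud compute instances start worker-vm --zone=us-central1-a
# Run the management task on the VM itself, where the CPU is never throttled.
gcloud compute ssh worker-vm --zone=us-central1-a --command='./manage-servers.sh'
# Stop the instance once the work is done, so it only costs money while running.
gcloud compute instances stop worker-vm --zone=us-central1-a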

Azure DevOps Pipeline fails on creating database in Django test

I have been trying to build an Azure DevOps pipeline for CI/CD for my Django project. The code is pulled from a GitHub repo (and is actually already deployed on Azure App Service). However, when the pipeline runs python manage.py test, I get the following error:
Creating test database for alias 'default'...
pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
##[error]Bash exited with code '1'.
I tried extensively to whitelist Azure DevOps but the error has persisted. How can I resolve this so that the Pipeline can run tests for CI/CD?
Which agent are you using, a hosted agent or a self-hosted agent?
If you are using a hosted agent, the pipeline's code runs on that agent, so you should add the hosted agent IP addresses to the whitelist rather than the Azure DevOps Services IPs; what you have whitelisted so far are the Azure DevOps Services IPs. For the hosted agent IPs, Microsoft publishes a weekly JSON file listing IP ranges for Azure data centers, broken out by region. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions that are contained in your geography.
If you are using a self-hosted agent, check your local agent server's IP and add that instead.
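As a rough sketch of working with that weekly file (the download link rotates each week, and the file name here is a placeholder for whatever you downloaded), jq can pull every address prefix for a given service tag:
# List all address prefixes under the AzureCloud tags, one per line.
jq -r '.values[] | select(.name | startswith("AzureCloud")) | .properties.addressPrefixes[]' ServiceTags_Public.json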

Authentication failure between 2 EC2 instances with Windows Server 2016

I am a newbie to AWS. Recently I was given the requirement to do an Automation Anywhere clustered Control Room installation on AWS. As a test run, I set up 2 EC2 instances with the Windows Server 2016 AMI. I installed MS SQL Server on one of the instances and opened port 1433 for access from the other instance. I installed Control Room on the first instance successfully (using custom install). When I completed the installation on the second instance, I got a credential vault error. I have created a shared folder which is accessible by both instances, in spite of which I am still getting the error. I also have security groups and firewalls set up appropriately; I have shared the snapshot below. I have been informed that there is an authentication issue between the 2 instances. How do I get this to work?
Any and all help is much appreciated.
I don't know if this is a duplicate of any other question. If it is, please point me in the right direction.
I was able to solve the problem. I reinstalled Control Room on both EC2 machines with Manual mode for the Credential Vault access.
I also reset the firewall to allow only ports 80 and 443 (for now), both locally and remotely, on the second EC2 instance.
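For anyone repeating this, the firewall change described above looks roughly like the following on the second instance (the rule names are placeholders):
netsh advfirewall firewall add rule name="Allow HTTP" dir=in action=allow protocol=TCP localport=80
netsh advfirewall firewall add rule name="Allow HTTPS" dir=in action=allow protocol=TCP localport=443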

Cannot deploy using AWS CodeDeploy: too few healthy instances are available for deployment

I am trying to deploy an application to an EC2 instance from an S3 bucket. I created an instance with the required S3 permissions and also a CodeDeploy application with the required EC2 permissions.
When I try to deploy, though, I get:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
I SSH into the EC2 instance to check the CodeDeploy agent log, and this is what I find:
2018-08-18 20:52:11 INFO [codedeploy-agent(2704)]: On Premises config file does not exist or not readable
2018-08-18 20:52:11 ERROR [codedeploy-agent(2704)]: booting child: error during start or run: Errno::ENETUNREACH - Network is unreachable - connect(2) - /usr/share/ruby/net/http.rb:878:in `initialize'
I tried changing the permissions, restarting the CodeDeploy agent, and creating a brand-new CodeDeploy application. Nothing seems to work.
In order for the agent to pick up commands from CodeDeploy, your host needs to have network access to the internet, which can be restricted by your EC2 security groups, VPC, configuration on your host, etc. To see if you have access, try pinging the CodeDeploy endpoint:
ping codedeploy.us-west-2.amazonaws.com
Though you should use the endpoint for the region your host is in - see here.
If you've configured the agent to use the proxy config, you may have to restart the agent like here.
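Two quick checks to go with the advice above (substitute your own region; this assumes the agent was installed as a service on Amazon Linux):
# ICMP is often blocked, so also test HTTPS reachability to the regional endpoint.
curl -sI https://codedeploy.us-west-2.amazonaws.com
# Restart the agent after changing network or proxy config, then confirm it is running.
sudo service codedeploy-agent restart
sudo service codedeploy-agent status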

AWS unable to connect to Java springboot API endpoints

I am trying to run my Spring Boot API on AWS; however, when I try to connect to an endpoint I get the error 'Site cannot be reached: <IP> refused to connect'. This is my first time working with AWS.
I created a Linux instance and connected to it using FileZilla. Afterwards I added my jar to a folder which I created on the Linux instance using FileZilla. I started the Spring Boot project and it's running, but the problem is that I can't seem to connect to the endpoints. Am I missing something? How do I connect to my endpoints?
The other thing to note is that I enabled HTTPS on my API and also added Swagger.
You need to enable relevant ports in the instances' Security Group.
Look at this to create a new Inbound rule for the specific port.
Go to the AWS console (here I am assuming you have deployed to us-east-1; if it's something else, go to the relevant region).
Open up the relevant security group, and then click 'Edit inbound rules'.
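The CLI equivalent of those console steps, as a sketch (the security group ID below is a placeholder, and the port should be whichever one your Spring Boot app actually listens on; 8443 here is only an assumption based on HTTPS being enabled):
# Allow inbound TCP on the application port from anywhere; tighten the CIDR as needed.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8443 \
    --cidr 0.0.0.0/0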