Azure Release Pipeline to deploy to AWS EC2

I'm a learner of Azure DevOps.
I have successfully built an Angular application and deployed it to an AWS S3 bucket.
Now I want to transfer the same Publish Pipeline Artifact files to AWS EC2.
I was given:
Remote Computer: ec2----.compute-1.amazonaws.com,
with a username and password.
When I use SSH, the connection above fails with an error.
Can you please give an example of how to transfer the Publish Pipeline Artifact files to AWS EC2?
Thanks in advance.

You may check the following items:
Check whether the remote machine can be reached over the internet.
Check the SSH service connection to verify that you entered the correct information.
Set the variable system.debug to true and click the failing step to check the detailed log.
Instead of copying over SSH, you may consider deploying a build agent on the remote machine.
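If the SSH service connection checks out, the copy itself can be done with the built-in CopyFilesOverSSH task in the release stage. A minimal sketch, assuming the artifact was published as "drop", the service connection is named "ec2-ssh-connection", and the target folder is /var/www/my-app (all three are placeholders for your own values):

steps:
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'drop'                        # name used when the artifact was published
    path: '$(Pipeline.Workspace)/drop'
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'ec2-ssh-connection'       # SSH service connection pointing at the EC2 host
    sourceFolder: '$(Pipeline.Workspace)/drop'
    contents: '**'
    targetFolder: '/var/www/my-app'         # assumed path on the instance
    readyTimeout: '20000'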

Related

How can I clone a Google Cloud Platform repository to a Google Cloud Platform VM

Can't clone a Google Cloud Platform repository to a Google Cloud Platform VM
Issue: When I attempt to clone I get "Permission denied (publickey)"
Setup:
Created an SSH key pair on the VM
In the Google Cloud Platform VM, edited the instance details via the dashboard, added the key, and saved it
Started the VM and confirmed in the instance details that the key was registered.
On the Cloud Source Repositories dashboard for my repo, registered the public key
Attempts:
Connected to the instance via SSH from my local terminal, confirmed again that the key existed, then attempted to clone the repo. Same permission denied result.
Connected to the instance via the terminal created by the remote access button on the GCP instance details dashboard. Same permission denied result.
The keys on my local machine are also registered with Cloud Source Repositories and I am able to clone the repo into my local machine without any problem.
This issue was resolved with a command different from the one provided by Cloud Source Repositories. The problem was related to the fact that my company set up my account - I don't have the credentials to manage it. The clone command provided by Cloud Source Repositories included a reference to the account the company set up, which was not accepted even though I used gcloud init on the VM.
I tried using the Cloud SDK on the VM. This resulted in the clone being successful:
gcloud source repos clone [repo-name] --project=[project-name]
My team is just now migrating from on-prem to GCP and we're hitting the occasional speed bump like this.
By the way, most of the references to "credential helper" in the GCP docs relate to Docker. That did not seem relevant to the repo issue.
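For anyone hitting the same thing, the rough sequence on the VM, assuming the Cloud SDK is installed (repo and project names are placeholders, as in the command above):

# authenticate as the account that actually has access to the repository
gcloud auth login
gcloud config set project [project-name]
# clone through the Cloud SDK rather than plain git
gcloud source repos clone [repo-name] --project=[project-name]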

Can Amazon Chime be deployed and run on a third-party VPS? And how?

In the examples of Amazon Chime, for instance here https://github.com/aws-samples/amazon-chime-sdk-classroom-demo, they imply that it should be deployed and run on an AWS server via Cloud9. However, I want to deploy and run it on some other VPS, such as a DigitalOcean or Linode server.
The main question: can that be done at all, is it supported?
If yes, how? General pointers would help. Which example should I use, and where is it described?
Eventually what I want is this:
Say I have a teaching website that I run on DigitalOcean or Linode, not on AWS. I want to be able to use Amazon Chime in such a way that my users go to my website and connect to a video class from my website as well.
The Chime service would need to run on AWS, but you can have a link to the Chime service endpoint from any website hosted anywhere else.
To use the Amazon Chime web application, your students would sign in to https://app.chime.aws/ from their web browser. You would have that link on your website.
See https://docs.aws.amazon.com/chime/latest/ug/chime-web-app.html
Note about the demo. The demo shows how to use the Amazon Chime SDK to build an online classroom in Electron and React. If you are using that deployment method, you can host the React app anywhere under a private domain on any host. That app will run anywhere, while connecting back to the AWS service endpoint.
Resources would be deployed in AWS. No way around it.
Deployment script can be run from your own laptop, Cloud9 and/or any other Linux server. You just need to be able to run git clone and script/deploy.js.
You'll also need to make sure that environment is configured with appropriate AWS credentials. Cloud9 would have these credentials out of the box. Any other environment (your laptop, a Digital Ocean VM, etc.) would need an AWS access key/secret pair, and you would run aws configure to set them up.
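In practice, deploying from a non-AWS box would look roughly like this; the repository URL comes from the question, the rest is a sketch of the steps above:

# configure credentials for an IAM user allowed to create the demo's resources
aws configure                     # prompts for access key ID, secret access key, and region
# fetch and run the demo deployment; the resources themselves are still created in AWS
git clone https://github.com/aws-samples/amazon-chime-sdk-classroom-demo.git
cd amazon-chime-sdk-classroom-demo
script/deploy.js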

Transfer files from AWS CodeBuild to a remote linux server outside of AWS

I have a Linux server that is not hosted by AWS. Now, I want to use AWS CodePipeline and CodeBuild to build my CI/CD workflow. During the build phase with CodeBuild, I want to transfer the build result files to my remote Linux server. I know I can do this using scp <source> <destination> over SSH. But I don't know how to store the SSH keys in CodeBuild. Is this possible?
Yes it is possible.
You keep the secret (the SSH private key) in AWS Secrets Manager or Parameter Store. CodeBuild has native support for fetching these secrets safely, and they will never be echoed anywhere. See this Stack Overflow answer: How to retrieve Secret Manager data in buildspec.yaml
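As a sketch, assuming the private key is stored in Secrets Manager under the name prod/deploy/ssh_key and the host, build command, and paths below are placeholders, the buildspec could look roughly like this:

version: 0.2
env:
  secrets-manager:
    DEPLOY_SSH_KEY: prod/deploy/ssh_key        # assumed secret name
phases:
  build:
    commands:
      - npm run build                          # whatever produces your build result files
  post_build:
    commands:
      # write the key to disk for scp; it only exists inside the build container
      - echo "$DEPLOY_SSH_KEY" > /tmp/deploy_key
      - chmod 600 /tmp/deploy_key
      - scp -i /tmp/deploy_key -o StrictHostKeyChecking=no -r dist/ user@my-server.example.com:/var/www/app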

What would be the best way to manage cloud credentials as part of an Azure DevOps build pipeline?

We are going to be creating build/deploy pipelines in Azure DevOps to provision infrastructure in Google Cloud Platform (GCP) using Terraform. In order to execute the Terraform provisioning script, we have to provide the GCP credentials so it can connect to our GCP account. I have a credential file (JSON) that can be referenced in the Terraform script. However, being new to build/deploy pipelines, I'm not clear on exactly what to do with the credential file. That is something we don't want to hard-code in the TF script and we don't want to make it generally available to just anybody that has access to the TF scripts. Where exactly would I put the credential file to secure it from prying eyes while making it available to the build pipeline? Would I put it on an actual build server?
I'd probably use build variables or store the secrets in Key Vault and pull them at deployment time. Storing secrets on the build agent is worse, because it locks you in to that specific build agent.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
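A rough sketch of the Key Vault approach, assuming the GCP service account JSON is stored as a secret named gcp-terraform-credentials and that the service connection and vault names are placeholders:

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-azure-service-connection'   # assumed ARM service connection
    KeyVaultName: 'my-keyvault'
    SecretsFilter: 'gcp-terraform-credentials'
- script: |
    # write the secret to a temp file and point the Google provider at it
    echo '$(gcp-terraform-credentials)' > $(Agent.TempDirectory)/gcp-creds.json
    export GOOGLE_APPLICATION_CREDENTIALS=$(Agent.TempDirectory)/gcp-creds.json
    terraform init
    terraform apply -auto-approve
  displayName: Terraform apply with GCP credentials from Key Vault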

Is there a logging service in AWS for debug information?

I'm trying out AWS. I created an app that runs in an EC2 instance. I want to send debug/diagnostic logs to stdout or syslog and have some way to easily collect and read them.
Currently I use Stackdriver Logging: I install the google-fluentd agent on the EC2 instance, and it picks up the syslog and sends it to Stackdriver. I'm wondering whether there is a similar offering in AWS, so that I don't need to create a GCP project just for reading logs?
Thanks!
AWS allows you to dump all your logs to CloudWatch, where you can store them; see the corresponding AWS documentation, which explains how to set up the EC2 machine to ship its logs to AWS.
You can install the AWS CloudWatch agent on your EC2 instance. The agent then allows you to ship custom log files to AWS CloudWatch, where you can analyze them. You can also ship system and application logs through the agent. There is a blog post explaining how this can be done on a Windows machine not hosted in AWS; it's pretty much the same approach for an EC2 instance.
You can use AWS CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.
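As a minimal sketch of the agent configuration those answers mention, assuming the CloudWatch agent is already installed and the log group name is a placeholder, a config like this would ship /var/log/syslog to CloudWatch Logs:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/syslog",
            "log_group_name": "my-app-syslog",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}

The agent is then started with amazon-cloudwatch-agent-ctl -a fetch-config pointing at that config file.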