I am a newbie to AWS Cloud. Recently I was given the requirement to do an Automation Anywhere Clustered Control Room installation on AWS Cloud. Based on this requirement, I set up 2 EC2 instances (as a test run) with the Windows Server 2016 AMI. I installed MS SQL Server on one of the instances and opened port 1433 for access from the other instance. I installed Control Room on the first instance successfully (using custom install).

When I completed the installation on the second instance, I got a Credential Vault error. I have created a shared folder which is accessible by both instances, in spite of which I am still getting the error. I have security groups and firewalls set up appropriately as well; I have shared the snapshot below. I have been informed that there is an authentication issue between the 2 instances. How do I get this to work?
Any and all help is much appreciated.
I don't know if this is a duplicate of any other question. If it is, please point me in the right direction.
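For reference, a rule along these lines is what I mean by opening port 1433 to the other instance (the security group ID and the IP below are placeholders, not my real values):

    # Allow inbound SQL Server traffic on the DB instance's security group
    # from the second instance's private IP only.
    aws ec2 authorize-security-group-ingress \
        --group-id <sql-instance-security-group-id> \
        --protocol tcp \
        --port 1433 \
        --cidr <second-instance-private-ip>/32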
I was able to solve the problem. I reinstalled Control Room on both EC2 machines, choosing Manual mode for Credential Vault access.
I also reset the firewall to allow only ports 80 and 443 (for now), both locally and remotely, on the second EC2 instance.
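For anyone repeating this, the firewall reset amounts to rules along these lines (the rule names are arbitrary and this is only a sketch, not my exact commands):

    # Windows Firewall on the second instance: allow inbound HTTP and HTTPS.
    netsh advfirewall firewall add rule name="Allow HTTP" dir=in action=allow protocol=TCP localport=80
    netsh advfirewall firewall add rule name="Allow HTTPS" dir=in action=allow protocol=TCP localport=443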
I'm trying to access Lambda functions from a Windows VM I have created in EC2 for dev purposes, but even a simple 'list functions' command fails to connect.
I have tried using the AWS CLI through PowerShell, the .NET SDK, and the VS AWS Toolkit, but each of these times out after a long waiting period. I can, however, list other services such as my databases and S3 buckets.
(Screenshots omitted: AWS CLI failure message; VS Toolkit failure message.)
I have tried creating a new VM with the same results. I've disabled Windows Firewall altogether, allowed all traffic through the security group, and have VPC endpoints for my subnet (ssm, ec2messages, lambda, ec2).
I have no trouble connecting to the lambda service through my own computer. On the VM, I have modified the .aws/credentials file to match the one on my computer for both the admin and current user but I still can't connect. This tells me that the problem isn't related to my access key credentials.
I'm reaching the end of the troubleshooting options I can think of so any help would be very much appreciated!
Update: using telnet, I cannot connect to lambda.ap-southeast-2, but I can connect to s3.ap-southeast-2 and lambda.ap-southeast-1. It seems lambda.ap-southeast-2 is being blocked somewhere, but it isn't Windows Firewall because it's off, and the same problem happens on Ubuntu VMs.
In the VPC Management Console, I haven't set up any network firewalls or DNS firewalls, and my network ACL allows all traffic.
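For anyone who wants to reproduce the checks, the tests above are roughly the following (I'm spelling out the full endpoint hostnames here; adjust the region to yours):

    # TCP reachability tests against the regional endpoints (port 443).
    telnet lambda.ap-southeast-2.amazonaws.com 443
    telnet s3.ap-southeast-2.amazonaws.com 443
    telnet lambda.ap-southeast-1.amazonaws.com 443

    # The call that times out; --debug at least shows where it hangs.
    aws lambda list-functions --region ap-southeast-2 --debug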
I'm a newer AWS user and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates & writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster & task, choosing Fargate and using the container image from my repository. My task ran and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, since I won't have a static IP for the Fargate server that is executing my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this server.
These two articles seem close to, if not the same as, the issue I'm having, but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then running actual scripts on there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service), and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (with inbound access on whichever port you need, I assume, and outbound access to everything) and assign another security group to the RDS DB that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the task's security group will not change, and you are telling RDS to allow all traffic coming from that SG. I see the first article you posted doesn't talk about this part (it should).
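A minimal sketch of that RDS-side rule with the AWS CLI, assuming both security groups already exist (the group IDs below are placeholders):

    # On the security group attached to the RDS instance:
    # allow MySQL (3306) only from members of the task's security group.
    aws ec2 authorize-security-group-ingress \
        --group-id <rds-security-group-id> \
        --protocol tcp \
        --port 3306 \
        --source-group <task-security-group-id>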
I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance; however, the client doesn't have contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see an SSL certificate in there. Will installing it there attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, this requires some downtime, and must not be run on an instance-store-backed instance. If you're new to AWS, you might not be able to determine whether that is the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
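As a rough sketch of that part (every value below is a placeholder, the load balancer and target group must already exist, and the ACM certificate still has to be validated through DNS):

    # Request a public certificate for the site's hostname.
    aws acm request-certificate \
        --domain-name www.example.com \
        --validation-method DNS

    # Attach the issued certificate to the load balancer's HTTPS listener.
    aws elbv2 create-listener \
        --load-balancer-arn <load-balancer-arn> \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=<certificate-arn> \
        --default-actions Type=forward,TargetGroupArn=<target-group-arn>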
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to make a snapshot of the live instance and attach it to a different instance to learn how it's configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.
I have one instance of a Windows Server 2012 R2 VM on Google Cloud that's working properly, and I have connected to it successfully using RDP. I have tried to replicate it by creating a snapshot of it and creating an instance from the snapshot. According to the platform the instance was created, but I can't seem to connect to it or get a password. When I click "Get Windows password" I get this (screenshot omitted) forever. When I try to connect to it, I get this (screenshot omitted).
I have no idea what to do; any help would be appreciated. Thanks!
The password creation tool in the console only works for instances built from the official image repo. In this case your source is a previous VM, via a snapshot. In that case, as with migrations, all the previous credentials are kept in the new VM. You can download the GCP RDP agent here and access the VM using the credentials you used to have on your source VM.
Connecting to a Windows Instance
https://cloud.google.com/compute/docs/instances/windows/connecting-to-windows-instance
-----------Update----------------
In case you cannot get into the VM, it seems to be a firewall rule issue. By default, port tcp:3389 (RDP access) is open to all VMs on the default network; check that your VM is in that network, or check whether the firewall rule has a target tag that must be applied.
If not, apply a tag to your new machine and create a firewall rule to be applied to that tag.
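A sketch of that with gcloud (the tag, rule, instance, and zone names are just examples):

    # Tag the new VM, then allow RDP to anything carrying that tag.
    gcloud compute instances add-tags my-new-vm --zone us-central1-a --tags rdp-server
    gcloud compute firewall-rules create allow-rdp \
        --network default \
        --allow tcp:3389 \
        --target-tags rdp-server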
Hope it helps. Keep us posted!
So, I've been working locally in a vagrant ubuntu box for the past month: I've spent a lot of time working on customizing it and installing exactly all the software I want on it. I started all of this through the normal vagrant tutorial (aka, nothing special). I packaged my local vagrant box into a package.box file. Now, I want to move my development environment (e.g. package.box file) to an Amazon EC2 instance on AWS. I know I'm not supposed to ask for software recommendations, but my question is basically: is this possible to do and, if it is, could you point me to some examples of people doing it? I've read that packer might be an option, but it looks to me (a very inexperienced perspective) that maybe I should have started with that instead of trying to use it now. Any help would be appreciated - I don't want to spend a couple weeks setting up a new environment when I have one locally set up.
Edit:
Progress! I followed @error2007s's link and followed the tutorial. I'm at the point where I've uploaded the VMDK image to S3 and provisioned an instance using it (all done automatically with the ec2-import-instance command on the CLI). However, I don't see a Public IP to access the new instance after I start it up.
I think this is related to cloud-init somehow, but I'm not sure what that really is. I tried it with both the /etc/cloud/cloud.cfg file that came with the box as well as the one listed here, and neither of the two boxes I uploaded gave me a Public IP to access.
Edit 2:
Here are some things I see in the Console (They all seem right to me, but a more experienced eye might see something wrong):
Subnet info:
    Auto-assign Public IP: yes
    Network ACL: (screenshot omitted)
VPC info:
    DNS resolution: yes
    DNS hostnames: yes
    ClassicLink DNS Support: no
    VPC CIDR: 172.31.0.0/16
DHCP Option Set:
    Options: domain-name = ec2.internal, domain-name-servers = AmazonProvidedDNS
From my perspective, those all look right, or am I missing something?
I assigned an Elastic IP per these instructions, but when I run ssh ec2-user@<elastic-ip>, it says ssh: connect to host <elastic-ip> port 22: Connection refused. The security group assigned to the instance is set to allow all protocols on all ports. Also, this is the first time I've encountered an Elastic IP, and I'm unsure what exactly it is doing.
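For completeness, the Elastic IP assignment I did boils down to something like this (the IDs are placeholders; the console steps I followed should be equivalent):

    # Allocate a static public IPv4 address and attach it to the instance.
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address \
        --instance-id <instance-id> \
        --allocation-id <allocation-id-from-previous-command>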
Amazon enables you to transfer your VM to AWS as an EC2 instance. Check this tutorial; it is simpler.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
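The flow in that guide, using the newer import-image CLI rather than the old ec2-import-instance tool, is roughly this (bucket and file names are placeholders, and it assumes the vmimport service role is already set up):

    # Upload the exported disk image, then ask EC2 to import it as an AMI.
    aws s3 cp package-disk1.vmdk s3://my-import-bucket/package-disk1.vmdk
    aws ec2 import-image \
        --description "Vagrant dev box" \
        --disk-containers "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=package-disk1.vmdk}"

    # Poll until the import task finishes and reports an AMI ID.
    aws ec2 describe-import-image-tasks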
You want to use the Vagrant AWS provider found here:
https://github.com/mitchellh/vagrant-aws
This is a Vagrant 1.2+ plugin that adds an AWS provider to Vagrant, allowing Vagrant to control and provision machines in EC2 and VPC.
This will allow you to provision your AWS instances using Vagrant, allowing you to migrate the same local development environment to an AWS EC2 instance.
There is a good tutorial here:
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
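In command-line terms the workflow boils down to this (the Vagrantfile still needs an aws provider block with your AMI, key pair, and security group; the dummy box URL is the one given in the plugin's README):

    # Install the AWS provider plugin and the placeholder "dummy" box it expects.
    vagrant plugin install vagrant-aws
    vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

    # With the provider configured in the Vagrantfile, boot the instance on EC2.
    vagrant up --provider=aws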
Hi, I have found these articles but I have not yet tested them myself. I'm still in the middle of organizing my personal notes and identifying my technology stack. I intend to have a Homestead Vagrant box replicated as an EC2 instance, so I won't have to configure the instance(s) manually.
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
https://www.tothenew.com/blog/using-vagrant-to-deploy-aws-ec2-instances/
https://foxutech.com/how-to-deploy-on-amazon-ec2-with-vagrant/
https://blog.scottlowe.org/2016/09/15/using-vagrant-with-aws/
https://devops.com/devops-primer-using-vagrant-with-aws/
I find their approaches similar. The only thing that I am worried about is the "vagrant box add" part.
I asked myself: what if I had to do this setup again for familiarization purposes? What will happen, since I already added a Vagrant box (the dummy one, as instructed in the tutorials) previously?
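If it helps, an already-added box is easy to check for and clean up, so re-running the setup should not be a problem; for example:

    # See which boxes are already registered locally.
    vagrant box list

    # If the dummy box is already there, remove it and re-add it
    # (or pass --force when re-adding).
    vagrant box remove dummy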