Stop and start EC2 instance based on simple external commands

I have an AWS EC2 instance that I would like various relevant people to stop and start. In a perfect world I would like a really simple way for a select handful of people to stop and start an EC2 instance without giving them too many permissions. If I could make it so they just click 1 button to do it, that would be perfect.

Starting/Stopping an Amazon EC2 instance can be done via the:
AWS Management Console
AWS Command-Line Interface (CLI)
AWS SDK for many popular programming languages
The important thing to realize is that users do not have to issue the stop/start command themselves! They can use an in-between system that makes the call for them.
For example, if you have an internal intranet, you could configure some code to start/stop instances when a user requests it via the website. The website would then issue the command to AWS (via the CLI or SDK), without the users themselves requiring any special access credentials (they just need access to your internal website).
This is similar to your "just click 1 button" idea, with the button being on your intranet.
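For example, a minimal sketch using the AWS CLI (the instance ID is a placeholder):

    # Stop the instance (instance ID is a placeholder)
    aws ec2 stop-instances --instance-ids i-0abcd1234example

    # Start it again later
    aws ec2 start-instances --instance-ids i-0abcd1234example

    # Optionally wait until the instance is running again
    aws ec2 wait instance-running --instance-ids i-0abcd1234example

An intranet button could simply run commands like these (or make the equivalent SDK calls) using credentials held by the web server, so the users themselves never need AWS access.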

Related

Terraform and EC2 Provisioning

I have been looking around for best practices for provisioning a server but have yet to find one. The problem I am trying to solve is configuring an nginx.conf file on the new server set up by Terraform. So I guess my question is: is Terraform actually used to install packages and configure the server after it is spun up? Or am I supposed to use something else entirely alongside Terraform (e.g. Chef)? The Terraform docs mention removing support for Chef, Puppet, and Ansible, and also mention that using the "provisioner" block for this functionality is frowned upon.
I have tried using the Terraform user_data argument and a custom provisioning shell script to do this, but it just seems pretty hacky and difficult to read. I would like to stay away from using a pre-defined AMI as well, which I have also tried.
Specifying user_data is about the best you can do with Terraform for this type of thing. EC2 user-data support is implemented using cloud-init, and there is a cloud-init Terraform provider that helps you create user-data scripts and templates, which helps a lot with readability.
If that isn't to your liking, then you have to look elsewhere, such as Ansible or possibly AWS Systems Manager, to install and configure your EC2 instances. However, these are disconnected from Terraform, so you may need to use a provisioner to trigger them after a Terraform apply.
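As a rough illustration, a user-data script for nginx might look like the sketch below. It assumes a yum-based AMI where an nginx package is available, and the nginx.conf contents are placeholders:

    #!/bin/bash
    # User-data sketch: install nginx and write a custom nginx.conf.
    # Assumes a yum-based AMI with an nginx package; adapt for apt.
    set -euo pipefail
    yum install -y nginx

    # Placeholder config: replace with your real nginx.conf contents
    printf '%s\n' \
      'events {}' \
      'http {' \
      '  server {' \
      '    listen 80;' \
      '    return 200 "ok";' \
      '  }' \
      '}' \
      > /etc/nginx/nginx.conf

    systemctl enable --now nginx

Feeding a script like this through the cloud-init provider's templating is what recovers most of the readability.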

How to identify all the resources created by me in AWS

I'm from an Azure background and am trying to learn AWS.
I'm creating multiple services/resources like EC2, S3, Lambda, etc in my AWS account. I'm not a root user.
My question:
How can I find/list all the resources created by me? I want to be able to quickly see all the resources I've created so I can clean them up.
Note: In Azure, I can do this by creating resources under a specific Resource group or I can tag them, later I can open a specific resource group to find all the resources that I've created or filter by tag. Is there any similar feature in AWS?
Thanks
First approach:
There is no single command that can list all resources in an AWS account.
Instead, you need to use the AWS Management Console or make API calls to each service, in each region, to get a list of the resources created there.
A good place to start is the billing console, which can show you which services have been used in which region. You can then log into any of these services and regions to see the resources.
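For example, a rough sketch that sweeps every region for one service (EC2 here; other services need their own list/describe calls):

    # List EC2 instances in every region; each service needs its own call
    for region in $(aws ec2 describe-regions \
        --query 'Regions[].RegionName' --output text); do
      echo "== $region =="
      aws ec2 describe-instances --region "$region" \
        --query 'Reservations[].Instances[].InstanceId' --output text
    done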
Second approach:
You can use AWS Config to create an inventory of your AWS resources for supported AWS services. The inventory acts as a CMDB for your AWS landscape and records all configuration changes.
See the AWS Config documentation for how to set this up.
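For example, once a Config recorder is enabled in a region, you can query the inventory from the CLI (the resource type below is just one of many supported types):

    # Assumes an AWS Config recorder is already enabled in this region
    aws configservice list-discovered-resources \
      --resource-type 'AWS::EC2::Instance'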
Third approach:
You can also use Tag Editor, which additionally allows you to edit the tags for all your AWS resources.
See the Tag Editor documentation for how to set this up.
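If you tag your resources consistently, the same tagging API that backs Tag Editor can list everything carrying a given tag. The Owner key and value below are assumptions; use whatever convention you adopt:

    # List ARNs of all resources tagged Owner=me (tag key/value assumed)
    aws resourcegroupstaggingapi get-resources \
      --tag-filters Key=Owner,Values=me \
      --query 'ResourceTagMappingList[].ResourceARN' --output text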

How to use AWS to deploy multi-container Docker App

I have been programming a full-stack application consisting of a NodeJS frontend, .Net Backend, and SQL Server DB and I have managed to fully dockerize the application using docker-compose. I have come a long way since I started this project but I still need a bit more help to finalize it. I am now in the process of deploying my docker containers into AWS (somehow) and I am having a bit of a problem on my end. Please bear in mind I am a beginner and this is quite complex to me.
So far this is the closest I have come to an actual solution to properly deploying all 3 parts of the app.
Created a security group w/ Inbound to all IPv4s and IPv6s, Outbound to all IPv4s
Created a load balancer listening on port 80 with default VPC
Created a key pair to SSH
Created a cluster with 3 instances (backend, frontend, db) default VPC, SG created, default role
Created ECR repositories and pushed all my docker images separately, 3 ECRs
Created EC2 task def, no role, 512 mem, container with each ECR url, 512 mem, 0:80 mapping
(Unsure if necessary) Created a service to link the LB etc.
When I do this, I am able to run all 3 tasks at the same time with no issues, so it seems like progress to me. However, I am doing something wrong with the ports or IPs, since I am not able to access the public DNS or even SSH to any of the instances; it times out.
Question:
Have I made an error anywhere, specifically in the ports or IPs? I am not sure where the mistake is.
Notes:
This is a simple project which I will have up for maybe 1-2 months, I do not plan on spending more than $5-$10. It is just a simple project with CRUD operations.
The end goal is simply to have everything up on AWS and running together, so I can perform CRUD on the DB, nothing long-term or complex.
P.S. I MUST use AWS.
Considering the amount you want to spend, the simplest way to achieve your goal would be to run your solution on EC2 as you described. What issues do you face doing so?
You may also explore the integration of Docker Compose and ECS.
Also check this out:
https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
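As a rough sketch of that integration (the context name is arbitrary, and the ECS integration ships with the Docker CLI/Desktop, so its availability may vary):

    # Create a Docker context backed by ECS and deploy the compose file
    docker context create ecs myecscontext   # prompts for AWS credentials
    docker context use myecscontext
    docker compose up                        # provisions ECS/Fargate resources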

How does GCP "Managing SSH keys in metadata" work behind the scenes

I understand that GCP provides functionality where adding an SSH public key to instance metadata allows a user to SSH into the machine with public-key authentication.
But I am interested to know how GCP does that.
Does GCP intercept my SSH request before it reaches the machine and add the relevant authorized_keys entries to my machine?
Or
Does SSH provide some functionality that GCP uses to achieve this capability?
Google Cloud runs software (the guest agent) during VM startup that copies SSH keys from the Metadata service to the VM. This includes creating home directories and setting up authorized_keys.
On Linux: if OS Login is not used, the guest agent is responsible for provisioning and deprovisioning user accounts. The agent creates local user accounts and maintains the authorized SSH keys file for each. User account creation is based on adding and removing SSH keys stored in metadata.
This software is called the Guest Environment.
The source code is published on GitHub.
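For illustration, adding a key to a single instance's metadata with gcloud might look like this (instance name, zone, and username are placeholders):

    # The guest agent on the VM reads this metadata and updates the
    # user's ~/.ssh/authorized_keys. Note: this replaces any existing
    # per-instance ssh-keys value rather than appending to it.
    gcloud compute instances add-metadata my-vm \
      --zone=us-central1-a \
      --metadata ssh-keys="alice:$(cat ~/.ssh/id_ed25519.pub)"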

AWS CloudFormation provisioning... UserData vs. Ansible or the like?

What is the difference between provisioning using AWS CloudFormation UserData vs. Ansible?
I know that Puppet, for example, enforces provisioning rules even after a change is made (it changes things back to reflect the manifest).
But are there more differences which are worth taking into consideration?
To clarify, "UserData" is part of an EC2 instance, not part of CloudFormation itself. EC2 instances can be launched with User Data, which can be used by the AMI to perform dynamic operations on startup. If CloudFormation is used to launch an EC2 instance, it can provide User Data to the EC2 instance by setting the UserData property on the AWS::EC2::Instance Resource.
User data is typically processed by cloud-init and formatted as a simple user-data script, which is just a shell script that gets invoked on the instance when it is first launched.
That said, 'Shell script vs. Ansible' is an apples-to-oranges comparison. Whether or not Ansible is the appropriate software for your use-case depends on whether you need to use the extra layers of abstraction built into Ansible versus a standard shell script to provision what's needed on your instance. Read the Ansible Documentation and decide for yourself.
It is worth mentioning that aside from the normal 'push' method of running Ansible to provision your instance via SSH, you can also run Ansible in an inverted 'ansible-pull' mode, using a user-data script to perform the initial 'bootstrap' installation on the EC2 instance, as sketched below.
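A minimal sketch of such a bootstrap (the repository URL and playbook name are placeholders, and a yum-based AMI is assumed):

    #!/bin/bash
    # User-data bootstrap: install Ansible, then pull and apply a playbook
    set -euo pipefail
    yum install -y ansible git   # assumes a yum-based AMI

    # Repository URL and playbook name are placeholders
    ansible-pull --url https://github.com/example/playbooks.git local.yml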
The short answer is: use CloudFormation or Terraform.
Ansible is a configuration management tool for many different purposes. The most significant difference from many of the other tools is that it works in a push mode, so there are no agents on the remote servers polling for changes.
It is great when it's about installing packages, creating files, and so on.
CloudFormation is designed to create AWS environments. This is good if you only use Amazon and nothing else.
Ansible can do the job, but I would recommend using a tool like CloudFormation or Terraform. The Ansible modules for this are OK, but tools like Terraform have a focus on creating environments and are much smarter when doing the job.