How can I automate script execution on AWS EC2 using the Go SDK? - amazon-web-services

I'm building an app that manages multiple EC2 instances using the Go SDK. I would like to run scripts on these instances in an automated way.
How can I achieve that? I don't think os/exec => ssh => raw script stored as a string in code is the best practice. Is there any clean way to achieve this?
Thanks

Is there any clean way to achieve this?
To bootstrap your instance, you would create a UserData script. The script runs only once, just after your instance is launched.
For running commands remotely after that, you can use SSM Run Command to run commands on a single instance or across multiple instances at once.
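For example, here is a minimal, untested sketch using the AWS SDK for Go v2 and SSM Run Command. The instance ID is a placeholder, and the target instance must be running the SSM agent with an instance profile that allows Systems Manager:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	// Load credentials and region from the environment or shared config.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := ssm.NewFromConfig(cfg)

	// AWS-RunShellScript is a managed SSM document that runs shell commands.
	out, err := client.SendCommand(context.TODO(), &ssm.SendCommandInput{
		DocumentName: aws.String("AWS-RunShellScript"),
		InstanceIds:  []string{"i-0123456789abcdef0"}, // placeholder instance ID
		Parameters: map[string][]string{
			"commands": {"uptime", "df -h"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("command id:", *out.Command.CommandId)
}

You can then poll GetCommandInvocation with the returned command ID to collect each instance's output.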

The way you suggest is actually valid and can work. I agree with you, though; it wouldn't be my first choice either. I would either use the golang.org/x/crypto/ssh package (maintained by the Go team alongside the standard library) or an external solution like github.com/appleboy/easyssh-proxy.
I would lean towards golang.org/x/crypto/ssh, but if you don't have a preference there, then the Scp function of the latter package might be especially of interest to you. You can find examples of it in the project's readme.
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol.
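For the plain-SSH route, a minimal, untested sketch with golang.org/x/crypto/ssh (the key path, user, and host are placeholders) would look roughly like this:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Read the private key that matches the instance's EC2 key pair (placeholder path).
	key, err := os.ReadFile("/path/to/private-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "ec2-user", // depends on the AMI
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // pin/verify the host key in real code
	}

	client, err := ssh.Dial("tcp", "203.0.113.10:22", cfg) // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}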
EDIT: After seeing Marcin's answer, I think mine is more the plain-SSH answer, independent of AWS. For the idiomatic AWS answer, please definitely look at his suggested solution!

Related

Best practice to install Tableau Server as IaC

I am trying to figure out which option is the best practice when creating a new server on AWS EC2.
For this exercise I chose Tableau Server. The first time, as the Tableau docs recommend, I did the install myself, but I would like to keep everything as automated as possible; the idea behind that is, if the EC2 instance gets destroyed, how can I recover everything fast?
I am using Terraform to keep all the AWS infrastructure as code, but the installation itself is not automated yet.
To do that, I have two options: Ansible (which I have never worked with before), or, in this particular case, Tableau's automated install script in Python, which I could add to the EC2 launch template's user data and then bring up with Terraform in minutes.
Which one should be chosen, and why? Both seem to accomplish the final goal.
It also raises some doubts, such as:
It brings the server up with a full installation of the software, but to get all the users and the rest of the Tableau setup I still have to restore a snapshot anyway, right? Is there any other tool to do that?
Then, if the manual install of the software is fast enough, why should I use IaC to keep the installation as code, instead of just documenting the installation script and keeping only the infrastructure as code?

How to check whether my code runs in a container on AWS EC2 or not

My (Python) code runs inside a Docker container.
The container is deployed on AWS EC2 for our production and testing purposes, but sometimes on our local machines or other cloud vendors for development and CI/CD purposes.
For some functionality, I want my Python code to be able to distinguish between an EC2 deployment and a non-EC2 one. Is this possible?
I found this answer, which uses the EC2 instance metadata endpoint, but I'm wondering:
a) Would this also work from within a docker container?
b) Isn't there a more elegant solution? Issuing an HTTP request and waiting for it seems a bit too much.
(I'm aware that a simple solution is probably to add some proprietary environment variable or flag; I'm trying to find a more native way to check this.)
I recommend you go with a custom environment variable. This way you will be able to easily reproduce the required behaviour outside of AWS (on your workstation or with another cloud provider).
Using curl or checking for the presence of /etc/cloud would make your application's behaviour dependent on third-party services/tools. Besides the added logic complexity (you'd have to handle possible curl errors, like invalid response codes), that can lead to bugs you surely don't want to meet.
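The idea is language-agnostic; here is a tiny sketch (shown in Go to match the rest of this thread, the same one-liner exists in Python via os.environ) using a made-up DEPLOY_ENV variable that you would set only in the EC2 container definition:

package main

import (
	"fmt"
	"os"
)

func main() {
	// DEPLOY_ENV is a custom, made-up variable: set DEPLOY_ENV=ec2 only when
	// the container is started on EC2, and leave it unset everywhere else.
	if os.Getenv("DEPLOY_ENV") == "ec2" {
		fmt.Println("running on EC2")
	} else {
		fmt.Println("running locally / in CI / on another cloud")
	}
}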

Does anyone know how to use AWS App2Container (A2C)?

AWS App2Container (A2C) is a recently launched feature by AWS. It is a CLI tool to help you lift and shift applications that run in your on-premises data centres or on virtual machines so that they run in containers managed by Amazon ECS or Amazon EKS. Since there is not much info on the internet about this apart from the AWS documentation, does anybody know how to implement it and what dependencies are required for it?
This is a fairly new service, so most people will be relying on the documentation at the moment.
For Java applications, the setup instructions on Linux indicate that you just download the app2container package and then run the following over your code:
sudo app2container containerize --application-id java-app-id
For .NET applications, the setup instructions on Windows indicate that it is exactly the same process: run the install file, and it will include all dependencies.
The best way to implement this will be by following those setup instructions step by step. Also remember that, at this time, it is Java or .NET only.
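As a rough guide based on the setup instructions (the application ID below is a placeholder reported by the inventory step), the overall flow for a Java application on Linux looks something like:

sudo app2container init
sudo app2container inventory
sudo app2container analyze --application-id java-app-id
sudo app2container containerize --application-id java-app-id
sudo app2container generate app-deployment --application-id java-app-id

The last steps produce the container image and the ECS/EKS deployment artifacts described in the AWS documentation.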

Best way to work with code on the cloud?

I've lately started with Amazon Web Services and deployed a couple of Express applications on EC2, and I find it extremely tedious to edit code on the fly via SSH (SSH is a little unresponsive for coding purposes, and I'm not really comfortable with nano or vim for heavy editing).
I know I can edit the code on my machine and scp it to EC2. I was wondering whether there's any way I can set up something like nodemon, but for the cloud, i.e. whenever I make a change locally, it gets deployed to the cloud with scp? Kind of extending nodemon to the cloud.
Or is there any other way to work with that?
There are plug-ins and utilities that allow you to edit locally with Sublime Text (a good Australian editor -- please register if you use it a lot!) and have the file automatically updated on a remote server.
See:
Stackoverflow: How to use Sublime over SSH
Editing files remotely via SSH on SublimeText 3
There are probably many similar utilities if you go looking for them.
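If you'd rather script it yourself, a small file watcher that re-uploads changed files with scp is not much code either. Here is a rough, untested sketch in Go using the github.com/fsnotify/fsnotify package (the remote host and path are placeholders):

package main

import (
	"log"
	"os/exec"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the current project directory (non-recursive in this sketch).
	if err := watcher.Add("."); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			if ev.Op&fsnotify.Write == fsnotify.Write {
				// Re-upload the changed file; host and remote path are placeholders.
				cmd := exec.Command("scp", ev.Name, "ec2-user@203.0.113.10:/srv/app/")
				if out, err := cmd.CombinedOutput(); err != nil {
					log.Printf("scp failed: %v: %s", err, out)
				} else {
					log.Printf("synced %s", ev.Name)
				}
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}

In practice, rsync over SSH or the editor plug-ins linked above will usually be less work to maintain.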

Medium Hadoop / Spark Cluster Administration

Please let me know if this question is more appropriate for a different channel, but I was wondering what the recommended tools are for installing, configuring, and deploying Hadoop/Spark across a large number of remote servers. I'm already familiar with how to set up all of the software, but I'm trying to determine what I should start using so that I can easily deploy across a large number of servers. I've started to look into configuration management tools (i.e. Chef, Puppet, Ansible), but I was wondering what the best and most user-friendly option to start off with is. I also do not want to use spark-ec2. Should I be creating homegrown scripts that loop through a hosts file containing IPs? Should I use pssh? pscp? etc. I just want to be able to ssh into as many servers as needed and install all of the software.
If you have some experience with a scripting language, then you can go for Chef. Recipes are already available for deploying and configuring the cluster, and it's very easy to start with.
And if you want to do it on your own, you can use the sshxcute Java API, which runs scripts on remote servers. You can build up the commands there and pass them to the sshxcute API to deploy the cluster.
Check out Apache Ambari. It's a great tool for central management of configs, adding new nodes, monitoring the cluster, etc. This would be your best bet.