Install software on multiple EC2 instances along with a JSON file - amazon-web-services

I need to install FireEye on multiple EC2 instances in my AWS account, all running Windows Server 2012. I have the installer MSI and could do it using Distributor in SSM. However, there is a JSON file that needs to be in the same folder as the MSI file while the software is being installed, and this doesn't seem to be supported by Distributor.
Can anyone help me out with how this can be done, short of logging in to every server and installing it manually after copy-pasting the JSON and MSI files into one folder?

Usually for ad-hoc execution of commands on a fleet of instances you would use AWS Systems Manager Run Command:
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is terminated from an Auto Scaling group, and join instances to a Windows domain, to name a few.
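Run Command addresses the JSON-plus-MSI problem directly: stage both files in S3, then run a script on every instance that pulls them into one folder and invokes the installer. A minimal sketch from an admin workstation, assuming the instances have an instance profile with read access to the bucket; the bucket, keys, install folder, and tag filter below are placeholders:

# Sketch only: pulls the MSI and its JSON side by side, then installs silently.
aws ssm send-command \
  --document-name "AWS-RunPowerShellScript" \
  --targets "Key=tag:Role,Values=fireeye-fleet" \
  --parameters '{"commands":[
    "New-Item -ItemType Directory -Force -Path C:\\fe",
    "Read-S3Object -BucketName my-installers -Key fireeye/agent.msi -File C:\\fe\\agent.msi",
    "Read-S3Object -BucketName my-installers -Key fireeye/agent_config.json -File C:\\fe\\agent_config.json",
    "Start-Process msiexec.exe -ArgumentList \"/i C:\\fe\\agent.msi /qn\" -Wait"
  ]}'

Read-S3Object comes from the AWS Tools for PowerShell, which ship on the stock Amazon Windows AMIs. Because the JSON is downloaded next to the MSI before msiexec runs, Distributor isn't needed at all.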

Related

How to install software on multiple AWS EC2 instances?

I created multiple (say 16) AWS EC2 Ubuntu instances.
I want to keep these instances with the same settings for later jobs. My question is how I can manage them jointly. For example, how could I install Docker on all of them at once so that I can use Docker Swarm?
Ideally you would actually configure the server build before you deploy the 16 instances.
You would launch a fresh Ubuntu server and install all of the software on it, along with its configuration. Once everything is installed, you'd create an AMI, and when you go to launch the 16 servers you'd launch them from your AMI instead of the stock Ubuntu image.
To follow best practices, you wouldn't do this installation by hand; instead, use a configuration automation tool such as Ansible, Chef or Puppet to configure the server to your liking.
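The baking step itself is a single CLI call. A sketch, assuming you have configured one instance by hand or with such a tool; the instance ID, names, and instance type are placeholders:

# Bake the configured instance into a reusable AMI (IDs and names are examples).
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "ubuntu-docker-base-v1" \
  --description "Ubuntu with Docker preinstalled"

# Later, launch the whole fleet from the resulting image:
aws ec2 run-instances \
  --image-id ami-0example1234567890 \
  --count 16 \
  --instance-type t3.medium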
You can also make use of EC2 user data to install the same software on all the instances during EC2 creation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
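For the Docker case above, a minimal user-data sketch, passed at launch time (e.g. with --user-data file://install-docker.sh); the package name assumes Ubuntu's stock docker.io package:

#!/bin/bash
# Runs once as root at first boot; output lands in /var/log/cloud-init-output.log.
apt-get update -y
apt-get install -y docker.io
systemctl enable --now docker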

Installing Amazon Inspector Service

I'm about to install and use Amazon Inspector. We have many EC2 instances behind an ELB, plus some EC2 instances launched via Auto Scaling.
My question: does Amazon Inspector do its work locally or globally? That is, is the monitoring done only on the instance it is installed on, or can it be configured to cover all the instances in the infrastructure?
If Inspector has to be installed on every EC2 instance, can Auto Scaling be configured to launch new instances with Inspector already installed on them, and if yes, how can I do that?
I asked a similar question on the Amazon forum but got no response.
In the end I used the following feature to customise the EC2 instances that my application gets deployed to:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Basically, at the root of your .war file you need a folder named '.ebextensions', and in there a .config file containing the commands to install the Inspector agent.
So my file 'inspector-agent.config' looks like this:
# Errors get logged to /var/log/cfn-init.log. See also /var/log/eb-tools.log
commands:
  # Download the agent installation script
  "01-agent-repository":
    command: sudo wget https://inspector-agent.amazonaws.com/linux/latest/install
  # Run the installation script
  "02-run-installation-script":
    command: sudo bash install
I've found the answer and the solution: you have to install the Amazon Inspector agent on each EC2 instance in order to inspect them all with Amazon Inspector.
As for Auto Scaling, I installed Amazon Inspector on the main EC2 servers and took an image of them (after inspecting all the EC2s and fixing all the issues). Then I configured Auto Scaling to launch from the new AMIs (the inspected AMIs).
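That bake-and-relaunch step can be scripted as well. A sketch using the classic launch-configuration workflow; every ID and name below is a placeholder:

# Image the fixed instance, then point the Auto Scaling group at the new AMI.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "app-with-inspector-v2"
aws autoscaling create-launch-configuration \
  --launch-configuration-name app-lc-v2 \
  --image-id ami-0example1234567890 \
  --instance-type t3.medium
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-configuration-name app-lc-v2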

Seamless switching of an application from one AMI to another AMI

I have my OpenDJ LDAP setup running on an Ubuntu 16.04 base AMI. I now want to replace the base AMI with a new patched AMI without impacting my working OpenDJ setup, and I need to do this every time a new AMI is released. One way I can think of is to spin up a new EC2 instance with the new AMI, export the data from the existing LDAP and import it into the new EC2 instance. But I am wondering if there is a better and smarter way to do this automatically. How do I switch an application from one AMI/EC2 instance to another without redoing the configuration or breaking its functioning?
1. Create an EFS file system designated for the back-end database files (eg /opt/ds).
2. Install DS/OpenDJ so that the instance files are separate to the install files (see the quote from this link below).
3. For each new release, launch the AMI with the updated software as needed.
4. In the user data script for the instance, attach the instance data folder from Step 1.
The purpose of this article is to provide information on installing DS/OpenDJ so that the instance files (user data) are separate to the install files (binaries). This setup allows you to separate all your backend database files and configuration in a separate file system to your binaries.
This approach isolates the application data from the software binaries and lets you switch AMIs easily.
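For Step 4, the user data script only has to mount the shared data file system before the directory server starts. A rough sketch for Ubuntu; the EFS filesystem ID, region, and service name are placeholders:

#!/bin/bash
# Attach the persistent DS/OpenDJ data volume at boot (IDs are examples).
apt-get install -y nfs-common
mkdir -p /opt/ds
mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /opt/ds
# Start the directory server only once the data volume is in place.
systemctl start opendj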

How do I copy varying filenames into a Vagrant box (in a platform-independent way)?

I'm trying to use Vagrant to deploy to AWS using the vagrant-aws plugin.
This means I need a box, then I need to add a versioned jar (e.g. myApp-1.2.3-SNAPSHOT.jar) and some statically named files. This also needs to work on Windows and Linux machines.
I can use config.vm.synced_folder locally with a setup.sh to move the files I need using wildcards (e.g. cp myApp-*.jar), but the plugin only supports rsync, so Linux only.
TLDR; Is there a way to copy files using wildcards in Vagrant?
This means I need a box
Yes and no. Vagrant heavily relies on the box concept, but in the context of the AWS provider the box is a dummy box; the plugin looks at the aws.* variables to connect to your account.
Vagrant will spin up an EC2 instance and connect to it; you need to make sure the instance is linked with a security group that allows the connection and opens the port to your IP (at minimum).
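Opening that port can itself be scripted. A sketch with the security group ID and source IP as placeholders:

# Allow SSH from your workstation's IP only (values are examples).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.25/32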
If you are running a provisioner, please note the script is run on the EC2 instance, not on your local machine.
What I suggest is the following:
- copy the jar files that are necessary to S3, or somewhere the EC2 instance can easily access them
- run the provisioner to fetch the files from this source (S3) (see the sketch after this list)
- let it go.
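A sketch of the fetch step as a Vagrant shell provisioner script; this sidesteps the host-side wildcard problem because the glob expands on the instance. The bucket and paths are placeholders, and the AWS CLI (plus S3 read permissions) is assumed on the instance:

# Runs on the EC2 instance, not on your Windows/Linux host.
aws s3 cp s3://my-deploy-bucket/builds/ /opt/app/ \
  --recursive --exclude "*" --include "myApp-*.jar"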
If you have a quick turnaround of files in development mode, you can push to a git repo from which the EC2 instance can pull the files and deploy the jar directly.

Rare scenario in DevOps - using Jenkins

I am new to AWS and Jenkins. I have a scenario as below.
We have an AWS AMI which has Jenkins installed on it; the AMI is a Linux platform. We already have a few jobs set up on it for our code bases (PHP and Python) for the Development and QA environments.
We now have a new framework in .NET, which is again part of the same project as the PHP work. These are Windows services written in .NET.
Right now the deployments are performed manually: we pull the code and build it on the same machine, so we take care of stopping/starting the services manually during this process on the Windows AMI dedicated to this testing. We would like to create a job (build and deploy) as we do for Python and PHP.
The challenge is that we want to build the code on the Windows AMI while Jenkins is running on the Linux AMI.
Is there a way to establish a connection between AMIs running different operating systems in AWS?
Should we install PowerShell on Windows to have SSH access? In that case we could establish a connection from the Linux AMI to the Windows AMI and then execute a .bat file to do the rest of the activities.
** We are specifically asked not to install another Jenkins on the Windows system, since we want to maintain all the jobs in a single place on a single server.
It's not actually a very rare scenario; it's quite common to have Jenkins running on Linux and also need to build and deploy Windows applications with it.
Luckily for you, Jenkins handles this quite easily using the concept of a master/slave architecture: in your case the master node will be your primary Jenkins install running on Linux, and you will set up one or more 'slave' instances running Windows and the Jenkins agent that allows the two to coordinate.
It's all explained here:
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
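In practice you register the Windows machine as a node on the master (Manage Jenkins > Manage Nodes), then start the agent on the Windows box. A sketch of the classic JNLP launch command; the master URL, node name, and secret are placeholders taken from the node's status page, and agent.jar is called slave.jar on older Jenkins releases:

# Run on the Windows build machine (Java required); connects it to the Linux master.
java -jar agent.jar \
  -jnlpUrl http://jenkins.example.com:8080/computer/windows-builder/slave-agent.jnlp \
  -secret <secret-from-node-page>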