How to see what's happening on an EC2 instance without SSH? - amazon-web-services

I am using the AWS SDK for Java. I create and run an EC2 instance with a user data script which gets a .jar from an S3 bucket and runs it. When I run the instance it shows me that it is running, but nothing happens. The .jar should create a SimpleDB table and an SQS queue. How do I see what's wrong without connecting to the instance through SSH, or is that the only way to see the logs?
Kind regards,
Snafu

Some of the user-data output may be found in the system log (on the EC2 dashboard, right-click the instance and choose System Log).
You could put a piece of Java code, a shell script and/or a cron job in place to upload your logs to S3, but it's best to SSH in and see what's there, at least the first time you run your code.
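For example, a minimal Python/boto3 sketch of that log upload, which could run from cron on the instance (the log path, bucket name and key below are placeholders):

import boto3

# Hypothetical log path and bucket; adjust to your own setup.
s3 = boto3.client("s3")
s3.upload_file("/var/log/myapp.log", "my-log-bucket", "ec2-logs/myapp.log")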
You can use the MindTerm Java applet to connect directly from the EC2 dashboard (there's a button labeled 'Connect' at the top; it's easy and you don't need to download an SSH client). I would highly recommend getting used to working with SSH, because it's the easiest way to see what's going on inside.

Related

Is it possible to run a command from one EC2 instance that executes that command on another EC2 instance?

Right now I am simply testing to see if I am able to run
touch test.txt
on another EC2 instance.
I have looked into both SSH and SSM, but I do not understand where to begin with the code. Any ideas on how to send commands remotely?
If you want to send a command remotely you can make use of the Run Command functionality of AWS SSM (Systems Manager).
To do this you will need to ensure that the remote instance is running the SSM agent and has a valid IAM role set up. The getting started section of the SSM documentation should help with that.
Finally, you can call the remote instance using the send-command function. Either create your own document or use the existing 'AWS-RunShellScript' document.
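As a rough illustration in Python/boto3 (the region and instance ID below are placeholders, and the target instance must already be running the SSM agent with an IAM role that allows Systems Manager):

import boto3

# Hypothetical region and instance ID.
ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],          # target instance (placeholder)
    DocumentName="AWS-RunShellScript",            # built-in SSM document
    Parameters={"commands": ["touch /tmp/test.txt"]},
)
print(response["Command"]["CommandId"])           # use this ID to check status later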

How can we get a list of all file systems (EFS) mounted on an EC2 instance using Python boto3?

I have used Boto3 to automate a lot of things on AWS, but recently someone asked me to list all file systems of an EC2 instance. I am not able to find any direct method to get all file systems (AWS EFS) mounted on an EC2 instance.
I only have the instance ID. I have programmatic access to the AWS resources but no direct access to the target instance. I have checked the EC2 and EFS clients, but I must be missing something, so I am asking here in case anybody has come across something similar and wants to share their approach.
I know we can run "df -h" to list all mounted file systems, but I cannot log in to the instance.
If you have programmatic access to the AWS resources, then you can easily solve your problem. What you can do is use the AWS Systems Manager Run Command (send_command) API.
It allows you to run commands directly on the server, and you get the command's output back as an API response, from which you can extract exactly what you want.
You can use Run Command to execute df -h on the instance and then filter the response in your Python code, or execute a one-line command that returns just the names of the mounted file systems.
Here is the documentation for the AWS Systems Manager Run Command (send_command) API: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ssm.html#SSM.Client.send_command
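A rough boto3 sketch of that idea (the instance ID is a placeholder; the instance needs the SSM agent and an appropriate IAM role, and the filter assumes EFS volumes show up as nfs4 mounts):

import time
import boto3

ssm = boto3.client("ssm")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# EFS volumes are mounted over NFS, so listing nfs4 file systems is a
# reasonable first pass at finding them.
cmd = ssm.send_command(
    InstanceIds=[instance_id],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["df -h -t nfs4"]},
)

command_id = cmd["Command"]["CommandId"]
time.sleep(5)   # crude wait; poll the command status in real code

result = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
print(result["StandardOutputContent"])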

Run a batch file on EC2 from a (python) lambda

I can see a generic way of starting an EC2 instance from Lambda in Start and Stop Instances at Scheduled Intervals Using Lambda and CloudWatch.
Suppose I use that method to start an EC2 instance, and suppose the AMI is a Windows Server 2019 image customised to have a .bat file on the desktop, and also suppose I'm using a Python Lambda.
How can I execute this batch file from the Lambda? (i.e. just as though someone had RDP'd into the instance and double-clicked it)
Note: to be very clear, I basically want to start the EC2 instance using the method given in the AWS docs (above) and, right after the instance has started, run the batch file that will be sitting on the instance's desktop.
I think you have a few concepts mixed together.
AWS Lambda functions run on the Lambda service, without having to use Amazon EC2 instances. This is what makes them "serverless".
If you have a batch file on an Amazon EC2 instance, you would presumably want to run that batch file on the EC2 instance itself, without involving Lambda (since you already have a server).
If you wish to run a script on an EC2 instance when it launches for the first time, you can provide a PowerShell or Command-Line script via the User Data field. Software on the AMI will automatically execute this script the first time that the instance starts.
This script could do all the work itself, or it could simply call another script that is stored on the disk. Some people use the script to download another script from a repository (eg Amazon S3 or GitHub) and then execute the downloaded script.
For more information, see: Running Commands on Your Windows Instance at Launch - Amazon Elastic Compute Cloud
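As an illustration, launching a Windows instance with such a User Data script from Python/boto3 might look roughly like this (the AMI ID, instance type and batch file path are placeholders; on Windows, commands inside <script> tags are run as a batch script on first boot):

import boto3

ec2 = boto3.client("ec2")

# Hypothetical batch file baked into the AMI; <script> tags mark a batch
# script that EC2Launch/EC2Config runs on the instance's first boot only.
user_data = "<script>C:\\Users\\Administrator\\Desktop\\myjob.bat</script>"

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)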
If the Amazon EC2 instance is already running and you wish to trigger a script to execute, you can use the AWS Systems Manager Run Command. This works by having an agent on the instance which can be remotely triggered, thereby running scripts without having to log in to the instance.
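Putting the two ideas together for this question, a hedged sketch of a Python Lambda handler that starts the instance and then triggers the batch file via Run Command (the instance ID and desktop path are placeholders, and the instance needs the SSM agent plus an instance profile that allows Systems Manager):

import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance ID

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # Start the stopped instance and wait until it passes status checks
    # (this can take a few minutes, so set the Lambda timeout accordingly).
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_status_ok").wait(InstanceIds=[INSTANCE_ID])

    # Ask SSM to run the batch file sitting on the instance's desktop.
    ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": [r"& 'C:\Users\Administrator\Desktop\myjob.bat'"]},
    )
    return {"status": "command sent"}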

CloudFormation template to bring up EC2 instance

Using a CloudFormation template, I brought up a Windows 2012 EC2 instance. The instance came up fine. I read that metadata related to this instance is all recorded in the EC2Config logs, which are in one of the sub-folders of the C:\Program Files\Amazon\ directory.
Following are the steps that I am doing after the EC2 instance comes up:
Change the Administrator password (which doesn't work yet).
Set the time zone.
Change the hostname.
Add the server to the domain controller.
There should be some kind of log on that EC2 instance about all these steps, right? However, I can't find any. Any suggestions on where I should be looking for the logs?
You need to run cloud-init scripts to achieve all these tasks. I recommend writing PowerShell scripts for this.
Just refer to the repo below; you will find useful templates and scripts which perform the same activities.
https://github.com/aws-quickstart/quickstart-microsoft-sql

How to run Python Spark code on Amazon AWS?

I have written Python code in Spark and I want to run it on Amazon's Elastic MapReduce.
My code works great on my local machine, but I am slightly confused over how to run it on Amazon's AWS.
More specifically, how should I transfer my Python code over to the master node? Do I need to copy my Python code to my S3 bucket and execute it from there? Or should I SSH into the master and scp my Python code to the Spark folder on the master?
For now, I tried running the code locally from my terminal and connecting to the cluster address (I did this by reading the output of Spark's --help flag, so I might be missing a few steps here):
./bin/spark-submit --packages org.apache.hadoop:hadoop-aws:2.7.1 \
--master spark://hadoop#ec2-public-dns-of-my-cluster.compute-1.amazonaws.com \
mypythoncode.py
I tried it with and without my permissions file, i.e.
-i permissionsfile.pem
However, it fails and the stack trace shows something along the lines of:
Exception in thread "main" java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:66)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
......
......
Is my approach correct and I just need to resolve the access issues to get going, or am I heading in the wrong direction?
What is the right way of doing it?
I searched a lot on YouTube but couldn't find any tutorials on running Spark on Amazon's EMR.
If it helps, the dataset I am working on is part of Amazon's public datasets.
Go to EMR and create a new cluster... [recommendation: start with 1 node only, just for testing purposes].
Click the checkbox to install Spark; you can uncheck the other boxes if you don't need those additional programs.
Configure the cluster further by choosing a VPC and a security key (SSH key, a.k.a. pem key).
Wait for it to boot up. Once your cluster says "Waiting", you're free to proceed.
[Spark submission via the GUI] In the GUI you can add a Step, select a Spark job, upload your Spark file to S3, and then choose the path to that newly uploaded S3 file. Once it runs it will either succeed or fail. If it fails, wait a moment and then click "view logs" next to that Step in the list of steps. Keep tweaking your script until you've got it working.
[Submission via the command line] SSH into the driver node following the SSH instructions at the top of the page. Once inside, use a command-line text editor to create a new file, paste the contents of your script in, and then run spark-submit yourNewFile.py. If it fails, you'll see the error output straight to the console. Tweak your script and re-run. Do that until you've got it working as expected.
Note: running jobs from your local machine against a remote cluster is troublesome because you may actually be making your local instance of Spark responsible for some expensive computation and data transfer over the network. That's why you want to submit AWS EMR jobs from within EMR.
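If you would rather script the Step submission than click through the console, a rough boto3 sketch (the cluster ID, bucket and script name are placeholders) could look like this:

import boto3

emr = boto3.client("emr")

# Submit the script previously uploaded to S3 as a Spark step on the cluster.
emr.add_job_flow_steps(
    JobFlowId="j-0123456789ABC",          # placeholder cluster ID
    Steps=[{
        "Name": "my-pyspark-step",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",   # standard EMR command runner
            "Args": [
                "spark-submit",
                "--deploy-mode", "cluster",
                "s3://my-bucket/mypythoncode.py",   # placeholder script path
            ],
        },
    }],
)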
There are typically two ways to run a job on an Amazon EMR cluster (whether for Spark or other job types):
Log in to the master node and run Spark jobs interactively. See: Access the Spark Shell
Submit jobs to the EMR cluster. See: Adding a Spark Step
If you have Apache Zeppelin installed on your EMR cluster, you can use a web browser to interact with Spark.
The error you are experiencing says that files were accessed via the s3n: protocol, which requires AWS credentials to be provided. If, instead, the files were accessed via s3:, I suspect that the credentials would be sourced from the IAM role that is automatically assigned to nodes in the cluster, and this error would be resolved.
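Inside the script itself that switch is just a matter of the URL scheme; a minimal PySpark sketch running on EMR (bucket and key are placeholders) would be:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-read-example").getOrCreate()

# On EMR, s3:// goes through EMRFS, which picks up the cluster's IAM role
# automatically, so no explicit access keys are needed.
df = spark.read.text("s3://my-bucket/path/to/data.txt")   # placeholder path
print(df.count())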