How to get EC2 memory utilization using command line (aws cli) - amazon-web-services

I am trying to get EC2 memory utilization using the AWS CLI and I see that EC2MemoryUtilization is not available as a metric. I installed the CloudWatch agent on the EC2 instance and I have created a dashboard for mem_used_percent.
Now I want to consume the memory-used data points programmatically. I could find CPUUtilization, but I am unable to find anything for memory utilization.
Any help in this regard is appreciated. Thanks!

This Python script pushes the system memory metrics to CloudWatch in a custom namespace. Schedule the script in crontab to run every 1 or 5 minutes so the system memory metrics can be plotted against time. Ensure that the IAM role assigned to the VM has sufficient privileges to put metric data to CloudWatch.
#!/usr/bin/env python
import psutil
import requests
import json
import os
import boto3
# Read system memory with psutil and convert the free bytes to GB
get_memory = psutil.virtual_memory()
free_memory = get_memory.free / (1024 * 1024 * 1024)
print("Free Memory:", free_memory, "GB")
# Fetch temporary credentials for the instance's IAM role from the metadata service
headers = {'content-type': 'application/json'}
req = requests.get(url='http://169.254.169.254/latest/meta-data/iam/security-credentials/cloudwatch_access', headers=headers)
res = json.loads(req.text)
AccessKeyId = res['AccessKeyId']
SecretAccessKey = res['SecretAccessKey']
Token = res['Token']
Region = "ap-south-1"
os.environ["AWS_ACCESS_KEY_ID"] = AccessKeyId
os.environ["AWS_SECRET_ACCESS_KEY"] = SecretAccessKey
os.environ["AWS_SESSION_TOKEN"] = Token
os.environ["AWS_DEFAULT_REGION"] = Region
# Publish the free memory value as a custom metric
namespace = 'mynamespace'
dimension_name = 'my_dimension_name'
dimension_value = 'my_dimension_value'
cloudwatch = boto3.client('cloudwatch')
cloudwatch.put_metric_data(
    MetricData=[
        {
            'MetricName': 'Free Memory',
            'Dimensions': [
                {
                    'Name': dimension_name,
                    'Value': dimension_value
                },
            ],
            'Unit': 'Gigabytes',
            'Value': free_memory
        },
    ],
    Namespace=namespace
)
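To consume the data points the agent already publishes (the mem_used_percent metric from the question), you can query CloudWatch directly instead. Here is a minimal sketch, assuming the agent writes to its default CWAgent namespace with an InstanceId dimension; check your agent configuration, since the namespace and dimension names can differ:
import datetime
import boto3
cloudwatch = boto3.client('cloudwatch', region_name='ap-south-1')
# Fetch the last hour of mem_used_percent published by the CloudWatch agent
response = cloudwatch.get_metric_statistics(
    Namespace='CWAgent',
    MetricName='mem_used_percent',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
# Print the data points in time order
for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])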

Related

How to download newly uploaded files from S3 to EC2 every time

I have an S3 bucket which will receive new files throughout the day. I want to download these to my EC2 instance every time a new file is uploaded to the bucket.
I have read that it's possible using SQS, SNS, or Lambda. Which is the easiest of them all? I need the file to be downloaded as early as possible once it is uploaded into the bucket.
EDIT
I will basically be getting PNG images in the bucket every few seconds or minutes. Every time a new image is uploaded, I want to download it onto the instance, which is already running, and do some AI processing. As the images keep coming into the bucket, I want to constantly download them to the EC2 instance and process them as soon as possible.
This is my code in the Lambda function so far.
import boto3
import json
import time
def lambda_handler(event, context):
    """Read file from s3 on trigger."""
    #print(event)
    s3 = boto3.client("s3")
    client = boto3.client("ec2")
    ssm = boto3.client("ssm")
    instanceid = "******"
    if event:
        file_obj = event["Records"][0]
        #print(file_obj)
        bucketname = str(file_obj["s3"]["bucket"]["name"])
        print(bucketname)
        filename = str(file_obj["s3"]["object"]["key"])
        print(filename)
        response = ssm.send_command(
            InstanceIds=[instanceid],
            DocumentName="AWS-RunShellScript",
            Parameters={
                "commands": [f"aws s3 cp {filename} ."]
            },  # replace command_to_be_executed with command
        )
        # fetching command id for the output
        command_id = response["Command"]["CommandId"]
        time.sleep(3)
        # fetching command output
        output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
        print(output)
    return
However I am getting the following error
Test Event Name
test
Response
{
"errorMessage": "2021-12-01T14:11:30.781Z 88dbe51b-53d6-4c06-8c16-207698b3a936 Task timed out after 3.00 seconds"
}
Function Logs
START RequestId: 88dbe51b-53d6-4c06-8c16-207698b3a936 Version: $LATEST
END RequestId: 88dbe51b-53d6-4c06-8c16-207698b3a936
REPORT RequestId: 88dbe51b-53d6-4c06-8c16-207698b3a936 Duration: 3003.58 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 87 MB Init Duration: 314.81 ms
2021-12-01T14:11:30.781Z 88dbe51b-53d6-4c06-8c16-207698b3a936 Task timed out after 3.00 seconds
Request ID
88dbe51b-53d6-4c06-8c16-207698b3a936
When I remove all the lines related to ssm, it works fine. Is there any permission issue or is there any problem with the code?
EDIT2
My code is working, but I don't see any output or change in my EC2 instance. I should be seeing an empty text file in the home directory, but I don't see anything.
Code
import boto3
import json
import time
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def lambda_handler(event, context):
    """Read file from s3 on trigger."""
    #print(event)
    s3 = boto3.client("s3")
    client = boto3.client("ec2")
    ssm = boto3.client("ssm")
    instanceid = "******"
    print("HI")
    if event:
        file_obj = event["Records"][0]
        #print(file_obj)
        bucketname = str(file_obj["s3"]["bucket"]["name"])
        print(bucketname)
        filename = str(file_obj["s3"]["object"]["key"])
        print(filename)
        print("sending")
        try:
            response = ssm.send_command(
                InstanceIds=[instanceid],
                DocumentName="AWS-RunShellScript",
                Parameters={
                    "commands": ["touch hi.txt"]
                },  # replace command_to_be_executed with command
            )
            # fetching command id for the output
            command_id = response["Command"]["CommandId"]
            time.sleep(3)
            # fetching command output
            output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
            print(output)
        except Exception as e:
            logger.error(e)
            raise e
There are several ways. One would be to set up S3 event notifications to invoke a Lambda function. The Lambda function would then use SSM Run Command to execute an AWS CLI S3 command on your instance to download the file from S3.
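A minimal sketch of the SSM part of that idea (untested; download_on_instance is a hypothetical helper, and note that the copy command needs the full s3:// URI built from the event's bucket name and key):
import boto3
ssm = boto3.client("ssm")
def download_on_instance(instance_id, bucketname, filename):
    # Ask SSM Run Command to run the AWS CLI on the instance itself
    return ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [f"aws s3 cp s3://{bucketname}/{filename} /home/ec2-user/"]
        },
    )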
I don't know why Lambda is being recommended here. What you need is simple: an S3 object-created event notification -> SQS, and a job on your EC2 instance long-polling that queue.
Here is an example of such a Python script. You need to check how the object key is encoded in the event (S3 URL-encodes it), but it will be there. I haven't tested this, but it should be pretty close.
import json
import boto3
def main() -> None:
    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")
    while True:
        # Long-poll the queue for S3 event notifications
        res = sqs.receive_message(
            QueueUrl="yourQueue",
            WaitTimeSeconds=20,
        )
        for msg in res.get("Messages", []):
            # The S3 event is JSON-encoded in the message body
            body = json.loads(msg["Body"])
            key = body["Records"][0]["s3"]["object"]["key"]
            s3.download_file("yourBucket", key, "local/file/path")
            # Delete the message so it is not processed again
            sqs.delete_message(QueueUrl="yourQueue", ReceiptHandle=msg["ReceiptHandle"])
if __name__ == "__main__":
    main()
You can use S3 Event Notifications, which react to a new file arriving in the S3 bucket.
The destinations supported by S3 events are SNS, SQS, and AWS Lambda.
You can use the Lambda directly as the destination, as described by @Marcin.
You can use SQS as a queue with a Lambda behind it pulling from the queue. This gives you capabilities such as a dead-letter queue. You can then pull messages from the queue using different methods:
AWS CLI
AWS SDK
You can use SNS with different destinations behind it (you can have many of these destinations in a row, which is the fan-out pattern):
an SQS queue to manage the files
an email to notify
a Lambda function
...
You can find more explanation in this article: https://aws.plainenglish.io/system-design-s3-events-to-lambda-vs-s3-events-to-sqs-sns-to-lambda-2d41477d1cc9
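If you go the SQS route, the bucket-side notification can itself be configured with boto3. A minimal sketch, assuming an existing queue whose policy already allows S3 to send messages to it (the bucket name and queue ARN are placeholders):
import boto3
s3 = boto3.client("s3")
# Route every object-created event in the bucket to the SQS queue
s3.put_bucket_notification_configuration(
    Bucket="yourBucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:yourQueue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)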

Launch of EC2 Instance and attach to Target Group using Lambda

I am trying to launch an EC2 instance and attach it to a target group using the following code in a Lambda function, but I am getting the following error. The Lambda function is not getting the instance ID and is giving an error; please guide.
Error is:
An error occurred (ValidationError) when calling the RegisterTargets operation: Instance ID 'instance_id' is not valid",
"errorType": "ClientError",
Code is:
import boto3
import json
import time
import os
AMI = 'ami-047a51fa27710816e'
INSTANCE_TYPE = os.environ['MyInstanceType']
KEY_NAME = 'miankeyp'
REGION = 'us-east-1'
ec2 = boto3.client('ec2', region_name=REGION)
def lambda_handler(event, context):
    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        MaxCount=1,
        MinCount=1
    )
    print("New instance created:")
    instance_id = instance['Instances'][0]['InstanceId']
    print(instance_id)
    client = boto3.client('elbv2')
    time.sleep(5)
    response = client.register_targets(
        TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1::targetgroup/target-demo/c46e6bfc00b6886f',
        Targets=[
            {
                'Id': 'instance_id'
            },
        ]
    )
To wait until an instance is running, you can use an Instance State waiter.
This is a boto3 capability that will check the state of an instance every 15 seconds until it reaches the desired state, up to a limit of 40 checks, which allows 10 minutes.
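A minimal sketch of how that could look with boto3 (untested; register_when_running is a hypothetical helper and the target group ARN is a placeholder):
import boto3
ec2 = boto3.client('ec2', region_name='us-east-1')
elbv2 = boto3.client('elbv2')
def register_when_running(instance_id, target_group_arn):
    # Poll every 15 seconds, up to 40 attempts, until the instance is running
    waiter = ec2.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance_id])
    # Register the now-running instance, passing the instance_id variable
    elbv2.register_targets(
        TargetGroupArn=target_group_arn,
        Targets=[{'Id': instance_id}],
    )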

Boto3 Cloudformation Drift Status

I'm trying to loop through every region and check if a stack has drifted or not, and then print a list of drifted stacks.
#!/usr/bin/env python
import boto3
import time
## Create a AWS Session
session = boto3.Session(profile_name='default', region_name='us-east-1')
if __name__ == '__main__':
    ## Connect to the EC2 Service
    client = session.client('ec2')
    ## Make a list of all the regions
    response = client.describe_regions()
    for region in response['Regions']:
        name = region['RegionName']
        print("Drifted CFn in region: " + name)
        ## Connect to the CFn service in the region
        cloudformationClient = boto3.client("cloudformation")
        stacks = cloudformationClient.describe_stacks()
        detection_id = cloudformationClient.detect_stack_drift(StackName=stacks)
        for stack in stacks['Stacks']:
            while True:
                time.sleep(3)
                # sleep between api calls to prevent lockout
                response = cloudformationClient.describe_stack_drift_detection_status(
                    StackDriftDetectionId=detection_id
                )
                if response['DetectionStatus'] == 'DETECTION_IN_PROGRESS':
                    continue
                else:
                    print("Stack" + stack + " has a drift status:" + response)
I am still new to Python and am unsure why it's failing on the StackName on line 22, when I know that that's the name of the parameter in detect_stack_drift that I'm trying to pass. Some help would be appreciated!
See these lines:
stacks = cloudformationClient.describe_stacks()
detection_id = cloudformationClient.detect_stack_drift(StackName=stacks)
The describe_stacks() call returns:
{
    'Stacks': [
        {
            'StackId': 'string',
            'StackName': 'string',
            ...
        },
    ],
    'NextToken': 'string'
}
However, the detect_stack_drift() function is expecting a single string in StackName.
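A rough sketch of how the two calls could be chained per stack instead, passing one stack name at a time (untested, with the polling kept minimal):
import time
import boto3
cloudformation = boto3.client("cloudformation", region_name="us-east-1")
for stack in cloudformation.describe_stacks()["Stacks"]:
    stack_name = stack["StackName"]
    # detect_stack_drift takes a single stack name and returns a detection id
    detection_id = cloudformation.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]
    # Poll until drift detection for this stack has finished
    while True:
        status = cloudformation.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(3)
    print(stack_name, status.get("StackDriftStatus", status["DetectionStatus"]))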

input data to aws elasticsearch using boto3 or es library

I have a lot of data that I want to send to AWS Elasticsearch. Looking at the AWS documentation at https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-upload-data.html, it uses curl -XPUT. However, I want to use Python to do this, so I've looked into the boto3 documentation but cannot find a way to input data.
https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/es.html I cannot see any method that inserts data.
This seems like a very basic job. Any help?
You can send the data to Elasticsearch using its HTTP interface. Here is the code, sourced from
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3
host = ''  # For example, my-test-domain.us-east-1.es.amazonaws.com
region = ''  # e.g. us-west-1
service = 'es'
# Sign requests with the credentials of the current boto3 session
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
document = {
    "title": "Moneyball",
    "director": "Bennett Miller",
    "year": "2011"
}
es.index(index="movies", doc_type="_doc", id="5", body=document)
print(es.get(index="movies", doc_type="_doc", id="5"))
EDIT
To confirm whether the data was pushed to your Elasticsearch domain under your index, you can do an HTTP GET, replacing the domain and index name:
search-my-domain.us-west-1.es.amazonaws.com/_search?q=movies
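For sending a lot of documents rather than one at a time, the same signed es client can be used with the bulk helper from the elasticsearch package. A rough sketch, assuming my_documents is your own iterable of dicts:
from elasticsearch import helpers
# Each action names the target index and carries one document as _source
actions = (
    {
        "_index": "movies",
        "_type": "_doc",
        "_id": str(i),
        "_source": doc,
    }
    for i, doc in enumerate(my_documents)
)
# Send the documents in batches instead of one index() call per document
helpers.bulk(es, actions)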

Iteration not working in Lambda as I want to run lambda in two regions listed in code

Hi, I have this simple Lambda function which stops all EC2 instances tagged with Auto_off. I have set a for loop so that it works for two regions, us-east-1 and us-east-2. I am running the function in the us-east-2 region.
The problem is that only the instance located in us-east-2 is stopping and the other instance (located in us-east-1) is not. What modifications can I make?
Please suggest, as I am new to Python and the boto library.
import boto3
import logging
#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)
#define the connection
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1','us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2',region_name=region)
def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [{
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    #filter the instances
    instances = ec2.instances.filter(Filters=filters)
    #locate all running instances
    RunningInstances = [instance.id for instance in instances]
    #print the instances for logging purposes
    #print(RunningInstances)
    #make sure there are actually instances to shut down.
    if len(RunningInstances) > 0:
        #perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("Nothing to see here")
You are creating 2 instances of ec2 resource, and 1 instance of ec2 client. You are only using one instance of ec2 resource, and not using the client at all. You are also setting the region in your loop on a different resource object from the one you are actually using.
Change all of this:
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1','us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2',region_name=region)
To this:
ec2_regions = ['us-east-1','us-east-2']
for region in ec2_regions:
    ec2 = boto3.resource('ec2',region_name=region)
Also your indentation is all wrong in the code in your question. I hope that's just a copy/paste issue and not how your code is really indented, because indentation is syntax in Python.
The loop you do here
ec2_regions = ['us-east-1','us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2',region_name=region)
It first assigns us-east-1 to the conn variable, then on the second iteration overwrites it with us-east-2, and only after that does your function run.
So what you can do is put that loop inside your function and move the current body of the function inside that loop.
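A minimal sketch of that restructuring (untested; it keeps the original tag filter and stop logic, just moved inside a per-region loop in the handler):
import boto3
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
ec2_regions = ['us-east-1', 'us-east-2']
def lambda_handler(event, context):
    for region in ec2_regions:
        # Create a resource per region so both regions are actually queried
        ec2 = boto3.resource('ec2', region_name=region)
        filters = [
            {'Name': 'tag:AutoOff', 'Values': ['True']},
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]
        instances = ec2.instances.filter(Filters=filters)
        running_ids = [instance.id for instance in instances]
        if running_ids:
            # Stop only the running instances found in this region
            result = ec2.instances.filter(InstanceIds=running_ids).stop()
            logger.info(result)
        else:
            logger.info("Nothing to stop in %s", region)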