I am scheduling an EC2 instance shutdown every day at 8 PM using Chalice and a Lambda function.
I have configured Chalice, but I am not able to trigger or integrate my Python script with it.
import boto3

# Instances to be stopped
myins = ['i-043ae2fbfc26d423f', 'i-0df3f5ead69c6428c', 'i-0bac8502574c0cf1d', 'i-02e866c4c922f1e27', 'i-0f8a5591a7704f98e', 'i-08319c36611d11fa1', 'i-047fc5fc780935635']

# Stop the EC2 instances if they are running
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
for instance in ec2.instances.all():
    for i in myins:
        if i == instance.id and instance.state['Name'] == "running":
            ec2client.stop_instances(InstanceIds=[i])
I want to stop the instances using Chalice.
AWS Instance Scheduler does the job that you are looking for. We have used it for several months. It works as expected. You may check this reference: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-instance-scheduler/
Related
My Python application is deployed in a Docker container on an EC2 instance. Passwords are stored in Secrets Manager. At runtime, the application makes an API call to Secrets Manager to fetch the password and connect. Since we recreated the instance, it started giving the error below -
botocore.exceptions.NoCredentialsError: Unable to locate credentials
My application code is -
session = boto3.session.Session()
client = session.client(service_name='secretsmanager', region_name='us-east-1')
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
If I run -
aws secretsmanager get-secret-value --secret-id abc
It works without any issues since IAM policy is appropriately attached to the EC2 instance.
I spent the last 2 days trying to troubleshoot this but am still stuck with no clarity on why this is breaking. Any tips or guidance would help.
The problem was with the HttpTokens setting in the instance metadata options, which defaulted to required after the fresh update. Reverting it to optional let boto3 reach the instance metadata endpoint again and inherit the instance role's credentials.
When I launch an EC2 instance from a particular AMI via the web console, it works just fine and I can RDP into it no problems.
But when I launch another (identical) instance via an AWS Lambda, I cannot RDP into it.
Details
Here is the lambda used to launch the instance
import boto3

REGION = 'ap-southeast-2'
AMI = 'ami-08e9ad7d527e4e95c'
INSTANCE_TYPE = 't2.small'

def lambda_handler(event, context):
    EC2 = boto3.client('ec2', region_name=REGION)
    init_script = """<powershell>
powershell "C:\\Users\\Administrator\\Desktop\\ScriptToRunDaily.ps1"
# $() substitutes the instance id; single quotes would pass the literal string
aws ec2 terminate-instances --instance-ids $(Invoke-RestMethod http://169.254.169.254/latest/meta-data/instance-id)
</powershell>"""
    instance = EC2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior='terminate',
        UserData=init_script
    )
I can see the instance start up in the AWS console. Everything looks normal until I try to remote in, when a prompt saying 'Initiating remote session' appears for ~15 seconds and then returns
We couldn't connect to the remote PC. Make sure the PC is turned on and connected to the network, and that remote access is enabled.
Error code: 0x204
Note
When I try to connect to the instance through the AWS console, it lets me download an RDP file; however, it doesn't display the 'Get Password' option as it does when I start the exact same AMI through the console (as opposed to via a Lambda).
I suspect I may need to associate the instance with a keypair at launch?
Also note
Before creating this particular AMI, I logged in and changed the password, so I really have no need to generate one using the .pem file.
It turns out I needed to add SecurityGroupIds.
Note that it takes an array of up to 5 values rather than a single value, so it's specified like ['first', 'second', 'etc'] rather than just 'first'; hence the square brackets around ['launch-wizard-29'] below.
I also specified a key.
The following is what worked for me
import boto3

REGION = 'ap-southeast-2'
AMI = 'ami-08e9ad7d527e4e95c'
INSTANCE_TYPE = 't2.small'

def lambda_handler(event, context):
    EC2 = boto3.client('ec2', region_name=REGION)
    init_script = """<powershell>
powershell "C:\\Users\\Administrator\\Desktop\\ScriptToRunDaily.ps1"
aws ec2 terminate-instances --instance-ids $(Invoke-RestMethod http://169.254.169.254/latest/meta-data/instance-id)
</powershell>"""
    instance = EC2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior='terminate',
        UserData=init_script,
        KeyName='aws',  # name of an existing key pair (.pem file) used for other instances
        SecurityGroupIds=['launch-wizard-29']  # copied from another (running) instance
    )
I'm starting a bunch of EC2 Instances using the following code
def start_ec2_instances(self, instanceids):
    ec2client = boto3.client('ec2')  # start_instances is a client method, not a resource method
    response = ec2client.start_instances(InstanceIds=instanceids)
    return
Now it starts successfully. However, I want to use the wait_until_running method to check the status of the instances and wait until all the instances have started.
Can wait_until_running be issued on single instances only? How do I wait for a list of instances that has been started using boto3?
This is what I'm doing currently, but I wanted to know if there are other ways to do it in one shot.
def wait_until_instance_running(self, instanceids):
    ec2 = boto3.resource('ec2')
    for instanceid in instanceids:
        instance = ec2.Instance(instanceid)
        logger.info("Check the state of instance: %s", instanceid)
        instance.wait_until_running()
    return
Use either the resource's wait_until_running method, or a client waiter that waits until a successful state is reached.
Try this:
ec2 = boto3.client('ec2')
start_ec2_instances(instanceids)
waiter = ec2.get_waiter('instance_running')
waiter.wait(InstanceIds=instanceids)
The waiter function polls every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
You can use EC2 client's Waiter.wait call to pass an array of EC2 instance Ids.
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Waiter.InstanceRunning
I have a quick question about usage of AWS SNS.
I have deployed an EC2 (t2.micro, Linux) instance in us-west-1 (N. California). I have written a Python script using boto3 to send a simple text message to my phone. Later I discovered there is no SNS SMS service for instances deployed outside us-east-1 (N. Virginia). Up to this point it made sense, because I see the error below when I execute my Python script, as the region is defined as "us-west-1" in aws configure (AWS CLI) and also in my Python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But to test, when I changed the region in aws configure and in my Python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS CLI and in my Python script, even though my instance is still in us-west-1 and I don't see the "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the AWS CLI with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so; correct me if I am wrong. Or is it like having an instance in us-west-1 but just using the SNS service from us-east-1? Please shed some light.
Here is my Python script, if anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your SMS message
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new
instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS
service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that the code is running on an EC2 instance in a specific region; it only matters what region you tell the SDK/CLI to use.
You could run the same code on your local computer. Obviously your local computer is not running on AWS so you would have to tell the code which AWS region to send the API calls to. What you are doing is the same thing.
Code running on an EC2 server is not limited into using the AWS API in the same region that the EC2 server is in.
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.
I have started to use Amazon EC2 extensively, and in order to contain costs, I manually shut down the instances before I leave work and bring them up when I come in. Sometimes I forget to shut them down. Is there a mechanism within the Amazon dashboard (or any other way) to automatically shut down the instances at, say, 6pm and bring them up at 6am? I am happy to write scripts or programs if there are any APIs available. If you have some code written already, it would be great if you can share.
There are 2 solutions.
AWS Data Pipeline - You can schedule the instance start/stop just like cron. It will cost you one hour of a t1.micro instance for every start/stop.
AWS Lambda - Define a Lambda function that gets triggered at a predefined time. Your Lambda function can start/stop instances. Your cost will be minimal or $0.
I used Data Pipeline for a long time before moving to Lambda. Data Pipeline is trivial to set up: just paste the AWS CLI commands to stop and start instances. Lambda is more involved.
Using AWS Lambda is definitely simple with a scheduled event. You can find the complete script here
For example, to start EC2 instances:
From the AWS Console, configure a Lambda function
Copy the following code into "Edit code inline"
import boto3

def lambda_handler(event, context):
    client = boto3.client('ec2')
    response = client.start_instances(
        InstanceIds=[
            'i-xxxxxx',
            'i-xxxxxx',
        ]
    )
    print(response)
Give it a basic execution role with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ec2:StartInstances"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*",
                "arn:aws:ec2:*:*:*"
            ]
        }
    ]
}
Click Create Lambda function and add an event source (CloudWatch Events - Schedule)
Finally, test your function to see if it works correctly
I actually happened to have a cron job running for this. I'll put up my 'start instance' code here (for reference to get you started)
#!/usr/bin/python
import time
import boto.ec2

# One connection is enough; the pile of duplicate boto imports is not needed
conn = boto.ec2.connect_to_region('ap-southeast-2',
                                  aws_access_key_id='QWERTACCESSKEY',
                                  aws_secret_access_key='QWERTSECRETKEY')

instance1 = conn.get_only_instances(instance_ids=['i-xxxxxxx1'])
instance2 = conn.get_only_instances(instance_ids=['i-xxxxxxx2'])
instance3 = conn.get_only_instances(instance_ids=['i-xxxxxxx3'])
instance4 = conn.get_only_instances(instance_ids=['i-xxxxxxx4'])

def inst():  # refreshes the cached state of the instances
    instance1[0].update()
    instance2[0].update()
    instance3[0].update()
    instance4[0].update()

def startservers():
    inst()
    if instance1[0].state == 'stopped':
        instance1[0].start()
    if instance2[0].state == 'stopped':
        instance2[0].start()
    if instance3[0].state == 'stopped':
        instance3[0].start()
    if instance4[0].state == 'stopped':
        instance4[0].start()

def check():  # a double check to make sure the servers actually came up
    inst()
    while (instance1[0].state == 'stopped' or instance2[0].state == 'stopped' or
           instance3[0].state == 'stopped' or instance4[0].state == 'stopped'):
        startservers()  # also refreshes state via inst()
        time.sleep(30)

startservers()
time.sleep(60)
check()
This is my cron job:
31 8 * * 1-5 python /home/user/startaws
It runs at 8:31 am, Monday to Friday.
Please Note
This script works fine for me, but it is definitely possible to make it much cleaner and simpler, and there are better ways than this too. (I wrote this in a hurry, so I'm sure there are some unnecessary lines of code in there.) I hope it gives you an idea of how to start off :)
If your use case permits terminating and re-launching, you might consider Scheduled Scaling in an Auto Scaling group. AWS recently added tools to manage scheduling rules in the Management Console UI.
But I don't believe Auto Scaling will cover stopping and starting the same instance while preserving its state.
You don't need to write any scripts to achieve this; it is a perfect use case for Auto Scaling with Scheduled Scaling. Steps:
Create a new Auto Scaling group. Example group: MY_AUTOSCALING_GROUP1
In your case you need to create 2 new recurring schedules. Example:
Scale up every morning
aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaleup-schedule --auto-scaling-group-name MY_AUTOSCALING_GROUP1 --recurrence "0 6 * * *" --desired-capacity 5
Scale down every evening
aws autoscaling put-scheduled-update-group-action --scheduled-action-name scaledown-schedule --auto-scaling-group-name MY_AUTOSCALING_GROUP1 --recurrence "0 18 * * *" --desired-capacity 0
5 EC2 instances will start every morning (6 AM) and terminate every evening (6 PM) once the above scheduled actions are applied to your Auto Scaling group.
Can you just start with a simple solution like a cron job and AWS CLI?
start.sh
# some preparations
# ...
aws ec2 start-instances --instance-ids $LIST_OF_IDS
aws ec2 wait instance-running --instance-ids $LIST_OF_IDS
stop.sh
# some preparations
# ...
aws ec2 stop-instances --instance-ids $LIST_OF_IDS
aws ec2 wait instance-stopped --instance-ids $LIST_OF_IDS
Then crontab -e and add something like
0 6 * * 1-5 start.sh
0 18 * * 1-5 stop.sh