Amazon Web Services autoscale

How can I use a Python Lambda function to name my Auto Scaling group instances?
I want to name them in increasing order, like hello1, hello2, hello3, etc. Can anyone tell me how to use a Lambda function to name my Auto Scaling group instances?
I want a function that creates the instances and gives each one a Name tag: the first instance should be tagged "hello1", the second "hello2", and so on. If an instance gets terminated, say hello2, the Auto Scaling group (which keeps a minimum of two instances) will launch a replacement, and I want that new instance to be named hello2.

One way to do this is to write a script that runs when the instance starts. Put the script in the User Data, which is executed automatically when an instance launches.
The script would (a rough sketch follows the list):
Call DescribeInstances() to obtain a list of EC2 instances
Filter the list down to the instances within the Auto Scaling group
Count the number of instances (including itself)
Perform the necessary logic to figure out which number should be assigned
Create a Name tag on the new instance (effectively tagging itself)
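A rough boto3 sketch of such a User Data script, under the assumption that boto3 and requests are available on the instance, IMDSv1 metadata access is allowed, and the Name tags follow the helloN pattern from the question (region handling and error handling are simplified):
import boto3
import requests

# The instance discovers its own id from the instance metadata service
my_id = requests.get('http://169.254.169.254/latest/meta-data/instance-id', timeout=2).text

ec2 = boto3.client('ec2', region_name='us-east-1')  # adjust the region as needed

# The Auto Scaling group name is attached to every member as the
# aws:autoscaling:groupName tag
me = ec2.describe_instances(InstanceIds=[my_id])['Reservations'][0]['Instances'][0]
asg_name = {t['Key']: t['Value'] for t in me.get('Tags', [])}['aws:autoscaling:groupName']

# All pending/running members of the same Auto Scaling group (including itself)
peers = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:aws:autoscaling:groupName', 'Values': [asg_name]},
        {'Name': 'instance-state-name', 'Values': ['pending', 'running']},
    ]
)

# Collect the numbers already used in existing Name tags (hello1, hello2, ...)
used = set()
for reservation in peers['Reservations']:
    for instance in reservation['Instances']:
        name = {t['Key']: t['Value'] for t in instance.get('Tags', [])}.get('Name', '')
        if name.startswith('hello') and name[len('hello'):].isdigit():
            used.add(int(name[len('hello'):]))

# Pick the lowest free number and tag this instance with it
number = next(n for n in range(1, len(used) + 2) if n not in used)
ec2.create_tags(Resources=[my_id], Tags=[{'Key': 'Name', 'Value': 'hello%d' % number}])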
Please note that the numbers might not be continuous. For example:
Start 4 instances (1, 2, 3, 4)
Auto Scaling might remove instances 2 & 3
Auto Scaling might add an instance (call it #2)
The current instances are: 1, 2, 4
Bottom line: You really shouldn't fixate on numbering instances that are ephemeral (that is, that can be removed at any time). Simply be aware of how many instances are in the Auto Scaling group. If you really do need a unique ID, use the InstanceId.

Related

`kops update cluster` returns multiple will create/modify resources

I have a Kubernetes cluster that uses 1.17.17. I want to increase the CPU/RAM of a node using KOPS. When running kops update cluster command, I expect it would return the preview of my old instance type VS new instance type.
However, it returns a long list of "will create resource" / "will modify resource" lines.
I want to know why it shows such a long log of changes it will execute instead of showing only the instance-type change I made. Also, is it safe to apply the changes?
After the cluster update you will do a rolling update on the cluster. The nodes will be terminated one by one and replaced by new ones. While a node is going down to be replaced, the workloads running on it are shifted to the remaining nodes. A small tip: remove all PodDisruptionBudgets first. Also, the long log is fine, don't worry.

AWS - subscribe multiple lambda logs to one elasticsearch service

I have two log groups generated by two different Lambdas. When I subscribe one log group to my Elasticsearch service, it works. However, when I add the other log group I get the following error in the log generated by CloudWatch:
"responseBody": "{\"took\":5,\"errors\":true,\"items\":[{\"index\":{\"_index\":\"cwl-2018.03.01\",\"_type\":\"/aws/lambda/lambda-1\",\"_id\":\"33894733850010958003644005072668130559385092091818016768\",\"status\":400,\"error\":
{\"type\":\"illegal_argument_exception\",\"reason\":\"Rejecting mapping update to [cwl-2018.03.01] as the final mapping would have more than 1 type: [/aws/lambda/lambda-1, /aws/lambda/lambda-2]\"}}}]}"
How can I resolve this and still have both log groups in my Elasticsearch service, and visualize all the logs?
Thank you.
The problem is that ElasticSearch 6.0.0 made a change that allows indices to only contain a single mapping type. (https://www.elastic.co/guide/en/elasticsearch/reference/6.0/removal-of-types.html) I assume you are running an ElasticSearch service instance that is using version 6.0.
The default Lambda JS file, if created through the AWS console, sets the index type to the log group name. An example of the JS file is in this gist (https://gist.github.com/iMilnb/27726a5004c0d4dc3dba3de01c65c575).
Line 86: action.index._type = payload.logGroup;
I personally have a modified version of that script in use and changed that line to be:
action.index._type = 'cwl';
I have logs from various different log groups streaming through to the same ElasticSearch instance. It makes sense to have them all be the same type since they are all CloudWatch logs, versus having the type be the log group name. The name is also set in the @log_group field so queries can use that for filtering.
In my case, I did the following:
Deploy modified Lambda
Reindex today's index (cwl-2018.03.07 for example) to change the type for old documents from <log group name> to cwl
Entries from different log groups will now coexist.
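For the reindex step, a rough sketch using the Elasticsearch _reindex API could look like the following. The domain endpoint, index names, and open access to the domain are assumptions; depending on the domain's access policy the request may need to be signed with SigV4.
import requests

ES_ENDPOINT = 'https://my-es-domain.us-east-1.es.amazonaws.com'  # placeholder domain endpoint

# Copy today's index into a temporary one, rewriting the mapping type to 'cwl'
body = {
    "source": {"index": "cwl-2018.03.07"},
    "dest": {"index": "cwl-2018.03.07-tmp"},
    "script": {"lang": "painless", "source": "ctx._type = 'cwl'"}
}
response = requests.post(ES_ENDPOINT + '/_reindex', json=body)
print(response.json())

# After verifying the copy, delete the original index and reindex back
# (or point an alias at the temporary index).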
You can also modify the generated Lambda code as below to make it work with multiple CloudWatch log groups. If the Lambda function creates a different ES index for each log group, this problem can be avoided. Find the Lambda function LogsToElasticsearch_<AWS-ES-DOMAIN-NAME>, then the function transform(payload), and finally change the index-name construction as below.
// index name format: cwl-YYYY.MM.DD
//var indexName = [
//    'cwl-' + timestamp.getUTCFullYear(),              // year
//    ('0' + (timestamp.getUTCMonth() + 1)).slice(-2),  // month
//    ('0' + timestamp.getUTCDate()).slice(-2)          // day
//].join('.');
var indexName = [
    'cwl-' + payload.logGroup.toLowerCase().split('/').join('-') + '-' + timestamp.getUTCFullYear(), // log group + year
    ('0' + (timestamp.getUTCMonth() + 1)).slice(-2), // month
    ('0' + timestamp.getUTCDate()).slice(-2)         // day
].join('.');
Is it possible to forward all the CloudWatch log groups to a single index in ES, like having one index "rds-logs-*" to stream logs from all my available RDS instances?
For example: error logs, slow-query logs, general logs, etc., of all RDS instances would be pushed under the same index (rds-logs-*).
I tried the above-mentioned code change, but it pushes only the last log group that I had configured.
From AWS: by default, only one log group can stream log data into the Elasticsearch Service domain. Attempting to stream two log groups at the same time results in the log data of one log group overriding the log data of the other.
I wanted to check whether there is a workaround for this.

AWS boto3 Config Service list of EC2s with states

I would like to use Boto3 to generate a list of EC2 instances along with their state changes (pending, running, shutting-down, terminated, etc.) between two date-times. My understanding is that Config Service maintains histories of EC2 instances even if the instance no longer exists. I have taken a look at this document; however, I am having difficulty understanding which functions to use in order to accomplish the task at hand.
Thank you
Under the assumption that you have already configured AWS Config rules to track ec2-instance state, this approach will suit your need.
1) Get the list of EC2 instances using the list_discovered_resources API. Ensure includeDeletedResources is set to True if you want to include deleted resources in the response.
import boto3

client = boto3.client('config')

# List EC2 instances tracked by AWS Config, including ones that no longer exist
response = client.list_discovered_resources(
    resourceType='AWS::EC2::Instance',
    limit=100,
    includeDeletedResources=True
)
Parse the response and store the resource-id.
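For example, the resource ids could be pulled out of the response along these lines (a minimal sketch; paging through nextToken is omitted):
# Collect the resource ids of every discovered (including deleted) EC2 instance
instance_ids = [r['resourceId'] for r in response['resourceIdentifiers']]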
2) Pass each resource_id to the get_resource_config_history API.
from datetime import datetime

response = client.get_resource_config_history(
    resourceType='AWS::EC2::Instance',
    resourceId='i-0123af12345be162h5',  # enter your EC2 instance id here
    laterTime=datetime(2018, 1, 7),     # end date; defaults to the current time
    earlierTime=datetime(2018, 1, 1),   # start date
    chronologicalOrder='Forward',       # or 'Reverse'
    limit=100
)
You can parse the response to get the state changes that each EC2 instance went through during that time period.
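As a rough sketch of that parsing step (the exact shape of the configuration document is an assumption; for EC2 instances it normally carries the instance state at capture time):
import json

# Walk the configuration history and print each recorded state
for item in response['configurationItems']:
    captured = item['configurationItemCaptureTime']
    config = json.loads(item['configuration']) if item.get('configuration') else {}
    state = config.get('state', {}).get('name', 'unknown')
    print(captured, item['resourceId'], state)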

Automatic creation of snapshots using AWS Lambda

I have completed the automatic creation of snapshots using the following link :
https://blog.powerupcloud.com/2016/02/15/automate-ebs-snapshots-using-lambda-function/
As written in the code, filtering is done based on the VMs' tags. Instead of tagging VMs with a Backup or backup tag, I want to create snapshots of all of them except for a few named VMs.
I do not want to add extra tags to the VMs. Instead, I want to write an if condition in my filters: I would provide the names of my test VMs, and if a VM's tag matches one of them, a snapshot would not be created; if it does not match, snapshots have to be created. Can I do that?
Example: I have four VMs in my account.
VM 1 --> Prod1,
VM 2 --> Prod2,
VM 3 --> Prod3,
VM 4 --> Test1.
According to the example, I need to be able to write an if condition that includes my test VM tag 'Test1'. If the tag matches this, the snapshot should not be created. If it does not match, snapshots have to be created.
So, for doing this, how should I change my code?
You just need to create a tag with the key 'Backup' on all three of your servers. The script filters the instances on the tag key name only.
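For example, the tag could be added with a one-off boto3 call along these lines (the instance ids below are placeholders):
import boto3

ec2 = boto3.client('ec2')

# Tag the three production servers so the snapshot script picks them up
ec2.create_tags(
    Resources=['i-0aaa1111', 'i-0bbb2222', 'i-0ccc3333'],  # placeholder instance ids
    Tags=[{'Key': 'Backup', 'Value': 'True'}]
)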
The piece of code that picks up which VMs need to be backed up is this:
reservations = ec.describe_instances(
    Filters=[
        {'Name': 'tag-key', 'Values': ['Backup', 'True']},
    ]
).get(
    'Reservations', []
)
As you can see, it uses boto3's describe_instances with a filter that limits the number of instances that will be processed. If you would like to back up everything except your non-prod instances, you should consider tagging your non-prod instances with something like Backup=NO.
To back up all servers except those marked with a tag:
Get a list of all servers
Get a list of servers with the 'do not backup' flag and remove them from the first list
Do the backup
It will require two calls to describe_instances().
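A minimal boto3 sketch of that approach, assuming non-prod instances are tagged Backup=NO as suggested above (pagination and error handling omitted):
import boto3

ec2 = boto3.client('ec2')

# 1) All instances in the region
all_ids = {
    i['InstanceId']
    for r in ec2.describe_instances()['Reservations']
    for i in r['Instances']
}

# 2) Instances flagged as "do not back up"
excluded_ids = {
    i['InstanceId']
    for r in ec2.describe_instances(
        Filters=[{'Name': 'tag:Backup', 'Values': ['NO']}]
    )['Reservations']
    for i in r['Instances']
}

# 3) Snapshot every EBS volume attached to the remaining instances
for instance_id in all_ids - excluded_ids:
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'attachment.instance-id', 'Values': [instance_id]}]
    )['Volumes']
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume['VolumeId'],
            Description='Automated backup of ' + instance_id
        )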

Filtering ec2-instances with boto

I use tags to keep track of my EC2 instances, such as (Project, Environment). I have a use case where I need to filter only those instances that belong to a specific project and to a specific environment.
When I use a filter with boto and pass these two values, I get a result that does an OR rather than an AND of the filters, so I receive a list of instances that belong to different projects but the same environment.
Now I can use two lists and then compare the instances in each and get the desired set of instances, but is there a better way of getting this done?
Here is what I am doing:
conn = ec2.EC2Connection('us-east-1',aws_access_key_id='XXX',aws_secret_access_key='YYY')
reservations = conn.get_all_instances(filters={"tag-key":"project","tag-value":<project-name>,"tag-key":"env","tag-value":<env-name>})
instances = [i for r in reservations for i in r.instances]
Now the instance list that I am getting gives all the instances from the specified project irrespective of the environment and all the instances from the specified environment irrespective of the project.
You can use the tag:key=value syntax to do an AND search on your filters.
import boto.ec2
conn = boto.ec2.connect_to_region('us-east-1',aws_access_key_id='xx', aws_secret_access_key='xx')
reservations = conn.get_all_instances(filters={"tag:Name" : "myName", "tag:Project" : "B"})
instances = [i for r in reservations for i in r.instances]
print instances
See EC2 API for details
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html
The problem with the syntax you used is that a Python dict has unique keys, so the second tag-key entry overwrites the first one :-(
Seb
While the documentation does not specifically say what happens with multiple filters, the ORing may be by design. In this case, pass the required attributes in sequence to the function and pass in the result of the previous invocation into the next one (using the instance_ids parameter). This will restrict the results in each step with the additional filter. The attributes are then applied in sequence returning the ANDed result you desire.
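A rough boto sketch of that chained approach (the tag values here are placeholders):
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# First pass: narrow down by project
reservations = conn.get_all_instances(filters={"tag:project": "my-project"})
ids = [i.id for r in reservations for i in r.instances]

# Second pass: apply the environment filter only to the ids from the first pass
if ids:
    reservations = conn.get_all_instances(
        instance_ids=ids,
        filters={"tag:env": "prod"}
    )
    instances = [i for r in reservations for i in r.instances]
else:
    instances = []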