Is it possible to get a time for state transition for an Amazon EC2 instance? - amazon-web-services

I'm accessing EC2 with the aws-sdk for Ruby. I have an array of instances from describe_instances().
This provides me with the state of the instances and even a state transition reason. But how can I get a time for the state transition?
Edit
So I have:
client = Aws::EC2::Client.new
resp = client.describe_instances(filters: [...])
and I would need
resp.reservations[0].instances[0].state_transition_time #=> Time
similar to
resp.reservations[0].instances[0].state_transition_reason #=> String

This information is not available via the Amazon EC2 API at this time. The aws-sdk gem returns all of the information available from the DescribeInstances operation as documented here: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html

The State Transition Reason is not always populated with a date and time and may not even be populated at all, per the documentation. I have not found anything in the documentation that specifies the conditions under which you DO get a date/time, but in my experience the date/time is present in the State Transition Reason for between 30 and 90 days. After that, the reason seems to persist, but the date is dropped from the string.
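If a best-effort timestamp is good enough, one workaround is to parse it out of state_transition_reason. This is only a sketch, assuming the reason string embeds a timestamp in parentheses such as "User initiated (2016-11-28 21:15:34 GMT)", which, as noted above, is not guaranteed:
require "aws-sdk-ec2"
require "time"

client = Aws::EC2::Client.new

client.describe_instances.reservations.each do |reservation|
  reservation.instances.each do |instance|
    reason = instance.state_transition_reason
    # Look for a "(YYYY-MM-DD HH:MM:SS GMT)" suffix; it is not always present.
    if reason =~ /\((\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} GMT)\)/
      puts "#{instance.instance_id}: #{instance.state.name} at #{Time.parse(Regexp.last_match(1))}"
    else
      puts "#{instance.instance_id}: #{instance.state.name} (no timestamp in reason)"
    end
  end
end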
All of the documentation that I can find is listed here:
Attribute Definition
EC2 API - Ruby

Related

Elasticsearch - Take full snapshot using the snapshot api

Is there an option to take a full snapshot using the ES snapshot API? We would like to take a full snapshot every 3 days.
You can refer to the following document: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-snapshots.html
I did something similar at my previous company, where a Lambda triggered the backup every week via cron and a full backup of all documents and indexes was taken. I did try a restore once, which failed the first time but worked the second; the issue was that the instance was too small and a bigger one was needed to restore the data, so please check those settings as well.
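For reference, a scheduled job (such as the weekly Lambda mentioned above) can request a full snapshot through the _snapshot API. This is only a sketch: the endpoint, repository name, and snapshot name are placeholders, a manual snapshot repository must already be registered as described in the AWS doc above, and on an Amazon OpenSearch Service domain the request must additionally be SigV4-signed (omitted here):
require "net/http"
require "uri"

endpoint = "https://your-domain-endpoint"   # placeholder
repo     = "my-snapshot-repo"               # placeholder, must already be registered
snapshot = "full-#{Time.now.strftime('%Y-%m-%d')}"

# With no "indices" filter in the request body, the snapshot API snapshots
# all indices, i.e. a full snapshot.
uri = URI("#{endpoint}/_snapshot/#{repo}/#{snapshot}")
request = Net::HTTP::Put.new(uri, "Content-Type" => "application/json")

response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
  http.request(request)
end
puts response.code, response.body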

Replicating data from SQL Server to BigQuery

I've been trying to follow instructions from Google on Replicating data from SQL Server to BigQuery, available here: https://cloud.google.com/data-fusion/docs/tutorials/replicating-data/sqlserver-to-bigquery. Following the instructions to the letter, step by step, always results in this odd error when creating the Cloud Data Fusion instance:
Invalid argument (HTTP 400): retry budget exhausted (3 attempts): cloud-control2-saas::GCE_BAD_REQUEST: Invalid value for field 'networkPeering.name': '*******'. Must be a match of regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)'.
**** is the project ID with the VPC network suffix after a dash and it looks a bit like this (I've changed values)
website.com:api-project-0000000000-default
This value is being assigned somewhere by Google, I am not given a choice to select this or enter this through the instructions when creating the Instance.
Googling the error doesn't show me anything useful, and sadly I do not have the budget to acquire GCP support in this instance to ask them why their instructions appear not to work.
I've already checked quotas, billing, service account permissions, etc. I've also tried both a new VPC as well as a shared VPC with all the settings from the guide.
I would appreciate it if someone more experienced in this area could point me in the right direction, or if someone has an idea of where else to check what could be wrong.
The instructions do point at creating a peering connection, but they require the Cloud Data Fusion instance to be created before configuring the peering connection, and since I can't create the Cloud Data Fusion instance I am unsure what exactly I am supposed to do.
Appreciate the help!
According to this documentation, I assume you're creating a VPC network before creating a private instance.
networkPeering.name is a combination of your project ID and VPC network. The error you're getting is due to the naming convention of the networkPeering name: the value of networkPeering.name does not match the regex (?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?), which in your case is caused by the domain-scoped project ID: website.com:api-project-xxxxxxxxx.
Also note that the networkPeering name can be no longer than 63 characters, per the regex.
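For a quick sanity check, you can test a candidate name against that regex yourself; a small sketch (the values below are the placeholder ones from the question):
name_regex = /\A[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?\z/

puts "api-project-0000000000-default".match?(name_regex)
# => true  (lowercase letters, digits, and dashes only)

puts "website.com:api-project-0000000000-default".match?(name_regex)
# => false (the "." and ":" from the domain-scoped project ID are not allowed)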

AWS Ruby SDK filtering

I'm refactoring a Ruby framework that is calling describe_instances and then filtering the response for just the VPC names.
It seems a waste of bandwidth to pull down the data for every instance in the region and then filter out the VPC ids in this way.
When I look at the documentation for filtering server side I see posts doing things like applying filters for all instances of type xx and so on.
What I want to do is pull down all VPC ids as a unique list.
Can anyone point me at an example of how to do that?
Thanks in advance
Never mind, I discovered the describe_vpcs endpoint:
def get_vpc_ids
  # Collect the VPC IDs from describe_vpcs and de-duplicate them.
  @vpc_list = []
  ec2_object.describe_vpcs[:vpcs].each do |vpc|
    @vpc_list.push(vpc[:vpc_id])
  end
  @vpc_list.uniq!
  @vpc_list
end
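If you ever do need to stay with describe_instances, server-side filters cut down what comes over the wire before you extract the VPC IDs. A sketch (the filter and region values are just examples, and for large fleets you would also follow next_token pagination):
require "aws-sdk-ec2"

client = Aws::EC2::Client.new(region: "us-east-1")  # example region

# Server-side filter: only running instances are returned.
resp = client.describe_instances(
  filters: [{ name: "instance-state-name", values: ["running"] }]
)

vpc_ids = resp.reservations
              .flat_map(&:instances)
              .map(&:vpc_id)
              .compact
              .uniq
puts vpc_ids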

AWS DynamoDB resource not found exception

I have a problem with connection to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException:
Requested resource not found (Service: AmazonDynamoDB; Status Code:
400; Error Code: ResourceNotFoundException; Request ID: ..
But I have a table and region is correct.
From the docs, it's either that you don't have a table with that name or it is in the CREATING status.
I would double-check that the table does in fact exist, that it is in the correct region, and that you're using an access key that can reach it.
My problem was stupid, but maybe someone has the same one... I had recently changed the default AWS credentials (~/.aws/credentials) while testing in another account and forgot to roll the values back to the regular account.
I spent 1 day researching the problem in my project and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message says that your client can't reach a table in your DB.
You should check the following (a quick check for items 4 and 5 is sketched at the end of this answer):
1. Your database is running.
2. Your accessKey and secretKey are valid for the database.
3. Your DB endpoint is valid and contains the correct protocol ("http://" or "https://"), the correct hostname, and the correct port.
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set as a parameter in your credentials. (This one is optional: some database environments, e.g. Testcontainers Dynalite, don't validate the region at all, so any non-empty region value is accepted.)
In my case, the problem was that I couldn't save and load data from a table in tests where DynamoDB was substituted by Testcontainers and Dynalite. I found out that in our project the tables are created by a Spring component marked with the @Component annotation, and in tests we use a global setting for lazy loading of components, so our component didn't load by default because nothing called it explicitly in the test. ¯\_(ツ)_/¯
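As a quick way to verify items 4 and 5 from the checklist above, here is a minimal sketch with the Ruby SDK (the table name and region are placeholders; point them at the environment that raises the error):
require "aws-sdk-dynamodb"

# Placeholders: use the same region/credentials as the failing client.
client = Aws::DynamoDB::Client.new(region: "us-east-2")

begin
  resp = client.describe_table(table_name: "my-table")
  puts "Table status: #{resp.table.table_status}"   # e.g. CREATING vs ACTIVE
rescue Aws::DynamoDB::Errors::ResourceNotFoundException
  puts "No table with that name in this region/endpoint"
end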
If the DynamoDB table is in a different region, make sure to set it before initializing the DynamoDB client:
AWS.config.update({ region: "your-dynamoDB-region" });
This works for me :)
Always ensure that you do one of the following:
The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you are working on.
The better choice is to specify these constants explicitly in a separate class/config in your project, import it in code, and use it in the boto3 calls. This provides flexibility if you need to add or change regions based on enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-2")
I was getting this issue in my .NET Core application.
The following fixed the issue for me in the Startup class, in the ConfigureServices method:
services.AddDefaultAWSOptions(
    new AWSOptions
    {
        Region = RegionEndpoint.GetBySystemName("eu-west-2")
    });
I got this error from Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
I spent a week fixing the issue. The root cause and the steps to track it down are described in the GitHub issue thread below:
https://github.com/soto-project/soto/issues/595

Amazon S3 conditional put object

I have a system in which I get a lot of messages. Each message has a unique ID, but it can also receive updates during its lifetime. As the time between a message being sent and handled can be very long (weeks), the messages are stored in S3. For each message only the last version is needed. My problem is that occasionally two messages with the same ID arrive close together, but as two versions (an older and a newer one).
Is there a way for S3 to have a conditional PutObject request where I can declare "put this object unless I have a newer version in S3"?
I need an atomic operation here
That's not the use-case for S3, which is eventually-consistent. Some ideas:
You could try to partition your messages - all messages that start with A-L go to one box, M-Z go to another box. Then each box locally checks that there are no duplicates.
Your best bet is probably some kind of database. Depending on your use case, you could use a regular SQL database, or maybe a simple RAM-only database like Redis. Write to multiple Redis DBs at once to avoid SPOF.
There is SWF which can make a unique processing queue for each item, but that would probably mean more HTTP requests than just checking in S3.
David's idea about turning on versioning is interesting. You could have a daemon that periodically trims off the old versions. When reading, you would have to do "read repair" where you search the versions looking for the newest object.
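A sketch of that versioning-plus-read-repair idea with the Ruby SDK (the bucket, key, and app-version metadata field are all assumptions; the bucket must have versioning enabled):
require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-east-1")   # placeholder region
bucket = "my-message-bucket"                        # placeholder bucket
key    = "messages/some-message-id"                 # placeholder key

# Write each incoming update as a new S3 version, recording the
# application-level version number in user metadata.
client.put_object(
  bucket: bucket,
  key: key,
  body: '{"id":"some-message-id","payload":"..."}',
  metadata: { "app-version" => "7" }
)

# "Read repair": list all S3 versions of the key and pick the one with the
# highest application-level version, regardless of arrival order.
versions = client.list_object_versions(bucket: bucket, prefix: key).versions
newest = versions.max_by do |v|
  head = client.head_object(bucket: bucket, key: key, version_id: v.version_id)
  head.metadata["app-version"].to_i
end

if newest
  object = client.get_object(bucket: bucket, key: key, version_id: newest.version_id)
  puts object.body.read
end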
Couldn't this be solved by using tags, and using a Condition on that when using PutObject? See "Example 3: Allow a user to add object tags that include a specific tag key and value" here: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html#tagging-and-policies