I am using AWS Backup to back up some DynamoDB tables. Using the AWS Backup console to restore the back-ups I am prompted to restore to a new table. This works fine but my tables are deployed using CloudFormation, so I need the restored data in the existing table.
What is the process to get the restored data into the existing table? It looks like there are some third-party tools to copy data between tables but I'm looking for something within AWS itself.
I recently had this issue and actually got CloudFormation to work quite seamlessly. The process was:
Delete the existing tables directly in DynamoDB (do not delete them through CloudFormation)
Restore the backup to a new table, using the name of the deleted table
In CloudFormation, detect drift, manually fix any drift errors in DynamoDB, and then detect drift again
After this, the CloudFormation stack was healthy
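The steps above can be sketched with boto3. This is a sketch under assumptions: it assumes a native DynamoDB backup ARN (an AWS Backup recovery point would go through that service's restore job API instead), and the table, stack, and ARN values are placeholders.

```python
# Sketch of the restore-to-original-name flow, assuming a native DynamoDB
# backup ARN. Table, stack, and ARN values are placeholders.
import time

def restore_and_check_drift(backup_arn, table_name, stack_name):
    import boto3  # imported here so the module loads without boto3 installed
    ddb = boto3.client("dynamodb")
    cfn = boto3.client("cloudformation")

    # Restore the backup under the original (deleted) table name
    ddb.restore_table_from_backup(
        TargetTableName=table_name,
        BackupArn=backup_arn,
    )
    ddb.get_waiter("table_exists").wait(TableName=table_name)

    # Ask CloudFormation to re-evaluate drift on the stack
    detection_id = cfn.detect_stack_drift(StackName=stack_name)[
        "StackDriftDetectionId"
    ]
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            return status["StackDriftStatus"]
        time.sleep(5)
```

Any remaining drift reported at that point still has to be reconciled by hand, as described above.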
At this time, AWS has no direct way to do this (though it looks like you can export to some service, then import from that service into an existing table).
I ended up writing my own code to do this.
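A copy script like that can be sketched with boto3: scan the restored table page by page and write each item into the existing table. The table names are placeholders, and for large tables you would want parallel scans.

```python
# A minimal sketch of copying items between two DynamoDB tables with boto3.
# Table names are placeholders; for large tables consider parallel scans.
def copy_table(source_name, dest_name):
    import boto3  # imported here so the module loads without boto3 installed
    dynamodb = boto3.resource("dynamodb")
    source = dynamodb.Table(source_name)
    dest = dynamodb.Table(dest_name)

    scan_kwargs = {}
    with dest.batch_writer() as batch:  # batches and retries writes for us
        while True:
            page = source.scan(**scan_kwargs)
            for item in page["Items"]:
                batch.put_item(Item=item)
            if "LastEvaluatedKey" not in page:
                break
            scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```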
I have an RDS instance whose automated backup retention period is 7 days.
I have found that I can export an RDS snapshot to S3 manually.
However, I want to export RDS snapshots to S3 automatically.
How can I do this? Should I use EventBridge?
The first stop for an answer about an AWS service is normally the AWS documentation.
Since finding the right section in the sea of information can be a bit overwhelming, please find below references that should answer your question.
There are 3 ways you could export an RDS snapshot to S3:
Using the management console
the AWS CLI
RDS APIs
The Exporting DB snapshot data to Amazon S3 AWS document explains each process in detail.
As described in previous comments, you could, for instance, use a Lambda function to call the RDS APIs.
Even more interesting, AWS provides a GitHub repository with code to automate the export. Please find the code here.
As mentioned in the document, please note that:
Exporting RDS snapshots can take a while depending on your database type and size. The export task first restores and scales the entire database before extracting the data to Amazon S3. The task's progress during this phase displays as Starting. When the task switches to exporting data to S3, progress displays as In progress. The time it takes for the export to complete depends on the data stored in the database. For example, tables with well-distributed numeric primary key or index columns export the fastest. Tables that don't contain a column suitable for partitioning and tables with only one index on a string-based column take longer. This longer export time occurs because the export uses a slower single-threaded process.
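As a minimal illustration of the Lambda approach, the handler below starts an export task when invoked. This is a sketch under assumptions: the bucket, IAM role, and KMS key come from placeholder environment variables, and the EventBridge event is assumed to carry the snapshot ARN and identifier under `detail` (verify against the RDS event you actually subscribe to).

```python
# Sketch of a Lambda handler that exports a just-created automated snapshot
# to S3. Bucket, role, and KMS key names are placeholders; the event shape
# is an assumption to verify against your EventBridge rule.
import os
import re

def export_task_id(snapshot_id):
    # Export task identifiers only allow letters, digits, and hyphens, so
    # strip the "rds:" prefix and other punctuation, and cap the length.
    return re.sub(r"[^A-Za-z0-9-]", "-", snapshot_id).strip("-")[:60]

def lambda_handler(event, context):
    import boto3  # imported here so the module loads without boto3 installed
    rds = boto3.client("rds")
    rds.start_export_task(
        ExportTaskIdentifier=export_task_id(event["detail"]["SourceIdentifier"]),
        SourceArn=event["detail"]["SourceArn"],
        S3BucketName=os.environ["EXPORT_BUCKET"],
        IamRoleArn=os.environ["EXPORT_ROLE_ARN"],
        KmsKeyId=os.environ["EXPORT_KMS_KEY"],
    )
```

The linked AWS repository does essentially this with more error handling, so prefer it for production use.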
I created a table in AWS DynamoDB using the AWS console in the us-west-2 region. The table is getting automatically deleted with no trace at all. To debug this, I enabled backups with point-in-time recovery. I can see that there are backups of the tables that were deleted automatically by the system, with $deletedTableBackup as a suffix.
Each time I create the table, I can load data using an access key and secret.
Any idea what's going on and what exactly is causing the issue? I am using a corporate account, and I have access to create/delete/modify tables.
I use AWS RDS as the database for my Spring Boot application. I would like to archive data older than 6 months from one specific table. In this context, I have gone through a few articles here but did not get any concrete idea of how to do this. Could anyone please help?
If you are looking to back up with RDS itself, your options are limited. You can, of course, use automated RDS snapshots, but that won't let you pick a specific table (it will back up the entire database) and can't be set for retention longer than 35 days. Alternatively, you could manually initiate a snapshot, but you can't specify a retention period. In this case, though, you could instead use the AWS-published rds-snapshot-tool, which will help you automate the snapshot process and let you specify a maximum age for snapshots. This is likely the easiest way to use RDS for your question. If you only wanted to restore one specific table (and didn't care about having the other tables in the backup), you could restore the snapshot and immediately DROP the tables you don't care about before you start using the restored database.
However, if you really care about backing up only one specific table, then RDS itself is out as a possible means of taking the backups on your behalf. I am assuming a MySQL database for your Spring application, in which case you will need to use the mysqldump tool to grab the database table you are interested in. You will need to call that tool from an application and then store the data persistently somewhere (perhaps S3). You will also need to manage the lifecycle of those backups, but if you do use S3, you can set a lifecycle policy to automatically age out and drop old files (backups, in this case).
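The mysqldump-to-S3 approach can be sketched as a small Python job. This is a sketch under assumptions: host, database, table, and bucket names are placeholders, and MySQL credentials are expected to come from the environment (e.g., a ~/.my.cnf or MYSQL_PWD).

```python
# Sketch of a single-table backup: run mysqldump for one table and upload
# the dump to S3, where a lifecycle policy can expire old copies.
# Host, database, table, and bucket names are placeholders.
import datetime
import subprocess

def backup_key(table, day):
    # S3 key layout: one dated dump per table, e.g. backups/orders/2024-01-02.sql
    return f"backups/{table}/{day.isoformat()}.sql"

def backup_table(host, database, table, bucket):
    key = backup_key(table, datetime.date.today())
    # --single-transaction gives a consistent dump of an InnoDB table
    dump = subprocess.run(
        ["mysqldump", "-h", host, "--single-transaction", database, table],
        check=True,
        capture_output=True,
    ).stdout

    import boto3  # imported here so the module loads without boto3 installed
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=dump)
    return key
```

Scheduling this (cron, EventBridge + Lambda with a packaged mysqldump, or a small ECS task) is up to your environment.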
I have multiple files present in different buckets in S3. I need to move these files to Amazon Aurora PostgreSQL every day on a schedule. Every day I will get a new file and, based on the data, an insert or update will happen. I was using Glue for inserts, but with upserts Glue doesn't seem to be the right option. Is there a better way to handle this? I saw that loading from S3 into RDS might solve the issue but didn't find enough detail on it. Any recommendations, please?
You can trigger a Lambda function from S3 events, which could then process the file(s) and insert them into Aurora. Alternatively, you can create a cron-type function that will run daily on whatever schedule you define.
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
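The upsert part of such a Lambda can be sketched as below. This is a sketch under assumptions: the table name `my_table` and key column `id` are placeholders for your schema, the incoming file is assumed to be CSV with a header row, and psycopg2 plus your Aurora connection details must be supplied when packaging the function.

```python
# Sketch of an S3-triggered upsert into Aurora PostgreSQL. Table and key
# column names are placeholders; the connection call must be filled in.
import csv
import io

def build_upsert_sql(table, columns, key_columns):
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    updates = ", ".join(
        f"{c} = EXCLUDED.{c}" for c in columns if c not in key_columns
    )
    return (
        f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) "
        f"ON CONFLICT ({', '.join(key_columns)}) DO UPDATE SET {updates}"
    )

def lambda_handler(event, context):
    import boto3     # imported here so the module loads without the deps
    import psycopg2  # must be packaged with the Lambda

    record = event["Records"][0]["s3"]
    body = boto3.client("s3").get_object(
        Bucket=record["bucket"]["name"], Key=record["object"]["key"]
    )["Body"].read().decode("utf-8")

    rows = list(csv.DictReader(io.StringIO(body)))
    columns = list(rows[0])
    sql = build_upsert_sql("my_table", columns, key_columns=["id"])

    conn = psycopg2.connect(...)  # placeholder: your Aurora endpoint/credentials
    with conn, conn.cursor() as cur:
        for row in rows:
            cur.execute(sql, [row[c] for c in columns])
```

For large files, a bulk load into a staging table followed by one INSERT ... ON CONFLICT from it would be faster than row-by-row execution.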
I would like to be able to perform PITR restoration without losing benefit of Infrastructure-as-a-code with CloudFormation.
Specifically, if I perform a PITR restoration manually and then point the application to the new database, won't that result in the new DynamoDB table falling outside the CloudFormation-managed infrastructure? AFAIK, there is no mechanism at the moment to add a resource to CloudFormation after it was already created.
Has anyone solved this problem?
There is now a way to import existing resources into CloudFormation.
This means that you can do a PITR restore and then import the newly created table into your stack.
You are correct: the restored table will be outside CloudFormation's control. The only solution that I know of is to write a script that copies the data from the recovered table to the original table. Obviously, there is a cost and time involved in that, and it is less than ideal.
As ever, there is always the option to write a custom resource, but that somewhat undermines the point of using CloudFormation in the first place.