I have a DynamoDB table and I would like to rename it. There does not seem to be any command or option to rename a table. Has anybody renamed a table before?
I know this is an old question and I don't want to steal the thunder from user6500852's answer, but for anyone stumbling upon this who needs an answer:
1. Select the table you want to rename in the Web UI so it's highlighted and the various tabs show up to the right.
2. Click on the Backups tab and feast your eyes upon the lower half of the page ("On-Demand Backup and Restore").
3. Smash that "Create Backup" button. You'll be prompted for a backup name. Name it whatever you want. The world is yours!
4. Give it time to complete the backup. If you just have a few records, this will take seconds. If you have a ton of records, it could take hours. Buckle up or just go home and get some rest.
5. Once the backup is ready, highlight it in the list. You'll see the "Restore Backup" button light up. Click it.
6. You'll get a prompt asking you to enter the NEW table name. Enter the name you want to rename this table to. Note: the way the UI works, you may have to click outside the field in the static page area for the "Restore table" button at the bottom of the page to light up.
7. Once you're ready to go, click the "Restore table" button. Note the info on that page indicating it can take several hours to complete. My table with just five test records took over 10 minutes to restore. I thought it was because the original was a Global Table, but the restored table was NOT set up as a Global Table (which defeats the purpose of this for Global Tables, since you have to empty a table to make it global).
Bear in mind that last note! If you're working with a Global Table, you will lose the Global part after the restore. Buyer beware! You may have to consider an export and import via Data Pipeline...
If you're not, then Backup and Restore is a pretty easy process to use (without having to go and set up a pipeline, an S3 store, etc.). It can just take some time. If you'd rather script the same flow, see the sketch below.
Hope this helps someone!
Currently no; you would need to create a new table and copy the data from one to the other if you really need a new table name. A rough sketch of such a copy is below.
You can use the Export/Import feature to back up your data to S3. Then delete your old table, and create a new one with the new name. Import your data from S3. Done. No code change necessary. If you don't delete the CloudWatch alarms and Pipelines when deleting the old table, then those will automatically hook up to the new table. The ARN even stays the same.
The downside to this, of course, is that the table will be unusable during the time after you delete it and before you recreate it. This may or may not be a problem, but that needs to be considered. Additionally, once you recreate the table, it can be accessed while you work on the data import. You may want to stop your app's access to the table until the import is complete.
First, create a backup from the Backups tab.
When you restore the backup, you are prompted for a new table name.
Enter the new table name there.
Hope this helps.
You should be able to achieve this using the on-demand backup/restore functionality.
Check out a walk-through on backing up a table and restoring it to a new table:
https://www.abhayachauhan.com/2017/12/dynamodb-scheduling-on-demand-backups/
I need your help.
I created a dashboard for another sector of our company. The data for the dashboard comes from Google Docs, and people from that sector edit it daily (sometimes renaming or removing columns), which means I have to check manually twice per week to make sure that the dashboard is okay.
Now that the dashboard has been created, that sector doesn't want me to continue accessing their data. Is there any solution that (1) allows me to check the dashboard when it has problems and (2) minimizes my access to their private data?
No, if you want to be able to check the report you will need access to the workspace. If you can't have access to the data, then a new report owner who does have access to it will have to take it over from you.
The only other way would be to create a copy of the Google Docs data, with anonymised values, to track column changes. You would base a report on that, change the connection settings, then deploy it to the workspace. But if you can deploy it, you can technically access the live data in the workspace.
Hi, I'm trying to query a table in DynamoDB. However, from what I've read, I can only do it using code or from the CLI. Is there a way to do complex queries from the GUI? I tried playing with it but can't seem to figure out how to do a simple COUNT(*). Please help.
1. Go to the DynamoDB console.
2. Select the table whose items you want to count.
3. Go to the "Overview" page/tab.
4. In the table properties, click on "Manage Live Count".
5. Click "Start Scan".
This will give you the count of items in the table at that moment. Just be warned that this count is eventually consistent, which means that if someone is making changes to the table at that exact moment, your end result will not be exact (but probably very close to reality). If you'd rather do this from code, see the sketch below.
Digressing a little bit (only in case you're new to DynamoDB):
DynamoDB is a NoSQL database. It doesn't support the same commands that are common in SQL databases, mainly because it doesn't provide the same consistency model that SQL databases do.
In SQL databases, when you send a count(*) query, your RDBMS makes some very educated guesses and takes some shortcuts to discover the number of rows in the table. It does that because reading your entire table to get this answer would take too much time.
DynamoDB has no means of making these educated guesses. When you want to know how many items a table has, the only option is to read all of them, counting one by one. That is exactly what the feature mentioned at the beginning of this answer does: it scans the entire table, counting all the items one by one.
Because of that, when you perform this task you will be billed for reading the entire table (DynamoDB bills you per read and per write). And maybe after you started the scan, someone put another item into the table while you were still counting. In that case the count will not restart, because by design DynamoDB is eventually consistent.
In my use case, I need to periodically update a Dynamo table (like once per day). And considering lots of entries need to be inserted, deleted or modified, I plan to drop the old table and create a new one in this case.
How can I keep the table queryable while I recreate it? Which API should I use? It's fine for the old table to keep serving queries during the rebuild, so that customers won't experience any outage.
Is it possible to have something like a version number for the table so that I can roll back quickly?
I would suggest table names with a common suffix (some people use a date, others use a version number).
Store the currently usable DynamoDB table name in a configuration store (if you are not already using one, you could use Secrets Manager, SSM Parameter Store, another DynamoDB table, a Redis cluster, or a third-party solution such as Consul).
Automate the creation of a new DynamoDB table and the insertion of data into it. Then update the config store with the name of the newly created DynamoDB table. Allow enough time to switch over, then remove the previous DynamoDB table.
You could do the final part with Step Functions, automating the workflow with a Wait state of a few hours to ensure that nothing is still using the old table; in fact, you could even add a Lambda function that validates whether any traffic is hitting the old DynamoDB table. A sketch of the swap itself follows.
I have a CiviCRM site with 30,000 contacts. I am noticing a number of places where history is logged, and the database is getting larger over time. Does anybody have any thoughts on removing history? Has anybody created scripts to clean up old history data?
I am not sure what history you want to delete, but here are a couple of things you can do.
All the logging and history data are important, so think twice before deleting them.
1) If you have "Logging" enabled under Misc., you will get a log table for every table in the CiviCRM database.
2) Every contact has a Changelog; I assume by history you mean this one.
3) Remove deleted records permanently; this eliminates the possibility of checking revision records in some places.
4) In the extreme, you can even delete activities, but you will not want to do that.
At the end of the day, it is a CRM; deleting any of the records is a loss of data.
If you are referring to the detailed logging option (as set up by #popcm), then you can set this detailed logging to write to a separate database - it's a setting in the civicrm.settings.php file.
Then you could occasionally dump all the data from this database and store it offline, emptying the online database on each occasion.
If you are referring simply to the changelog history or other aspects of the CiviCRM data, then as #popcm indicates, you really don't want to delete this as you'll only regret it later.
If keeping lots of data online is a concern, look to strengthen your security.
I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the database tables. Also, after making the schema changes, we reprovision the tables.
Now, in this process, the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
For example, the application is at version 2.0 and syncing is working fine. Now, in the next version, 3.0, I want to make some DB changes. So, in the process of deprovisioning and provisioning, the tracking info gets deleted, and all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking table for you; you don't have to copy and reload it yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted. It's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync, it's as if the two replicas had never synced before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Framework-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table, "User_tracking", will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovisioning/provisioning process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info will be stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" back into "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored. A sketch of this copy is below.
Is this the right approach?