I am connecting to a local DynamoDB instance and was able to create two tables and add data to each. However, I am unable to see the second table under "Operation builder".
I have tried to commit the table again, but I get an error saying the table already exists. See below.
From "Operation builder" I've clicked the "Table" refresh icon, but the second table, "grades", will not show up.
I have tried closing and re-running NoSQL Workbench, but I am still having the same problem. Committing the missing table again gives me the error that it already exists.
As stated in this thread, NoSQL Workbench is definitely buggy. I was having similar problems with recently created GSIs, and I am pretty sure I had already tried restarting the app. However, I probably did that by closing and reopening it with the "X" button, and I still did not see my updated GSIs. I have just restarted it using the "Quit" option from the application menu, and now it works. I am not sure whether that was the reason or it was just a matter of sync timing, but you can always try.
In the cloud console, I created a new Cloud Spanner View called StudentView which left joined my Student table and a ClassEnrollment table. The update seemed to work as I could see the new View created.
I then looked at my Students table and I got the error:
Cannot read properties of null (reading 'join').
The Students table showed no data even though it is populated. I subsequently deleted the StudentView:
DROP VIEW StudentView;
The result is still the same: the Students table displays nothing but the error stated above, even though StudentView has been dropped.
I have also tried replacing the StudentView view so that it does not contain null values, but the error persists.
I am assuming the error refers to the View that I created. If that is not the case, please could you point me in the right direction.
Any idea how to go about this problem?
There was a short-lived bug in the Cloud Spanner UI that was active around the time you asked this question (2022-08-30). Assuming you can see Students now, and StudentView doesn't appear in the UI anymore, the bug probably affected you too. Nothing you're doing wrong, just unlucky timing.
If you ever suspect there's a problem with the Cloud Spanner UI, you can list and query DBs with gcloud spanner .... In this case gcloud should have continued to work while the UI was down.
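For instance, listing your databases and running a quick query from the CLI would look something like this (the instance name, database name, and table are placeholders; substitute your own):

```shell
# List Spanner instances and the databases in one of them
gcloud spanner instances list
gcloud spanner databases list --instance=my-instance

# Run an ad-hoc query to confirm the data itself is intact,
# independently of whatever the UI is showing
gcloud spanner databases execute-sql my-database \
  --instance=my-instance \
  --sql='SELECT COUNT(*) FROM Students'
```

If the query returns the expected row count, the data is fine and the problem is purely in the console UI.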
It's as simple as that. Athena used to load databases and tables that I crawled using Glue. The data is present in S3 and Athena used to work before. But all of a sudden the loading icon goes round and round but it doesn't load the list of databases and tables.
I'm in the right region. It works when I send the queries through Python/SageMaker i.e. I use awswrangler and the data output from that is fine. But it's not possible to query within Athena itself even though I used to do it before.
Totally stumped on what the problem could be as I have no clues.
This has been solved, though I am not sure what the fix was (this had been an issue for at least three months, and I had tried to solve it before with similar methods).
But I did two things before it 'fixed itself':
Tried changing the Athena output location through the workgroup settings.
Tried changing the same setting (I'm not sure whether both point to the same property) through the settings icon at the top right of the page.
And suddenly the list of databases and tables shows up in the Query Editor page.
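For what it's worth, the workgroup-level output location can also be set from the AWS CLI rather than the console; a sketch, assuming the default primary workgroup and a placeholder bucket:

```shell
# Point the workgroup's query results at an S3 location
# (the bucket name is a placeholder)
aws athena update-work-group \
  --work-group primary \
  --configuration-updates 'ResultConfigurationUpdates={OutputLocation=s3://my-athena-results/}'

# Verify the change took effect
aws athena get-work-group --work-group primary
```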
I recently faced a problem on a Redshift cluster where a table stopped responding.
My guess was that there was a lock on that table, but the query
select * from stl_tr_conflict order by xact_start_ts;
returns nothing, even though, judging by the AWS documentation, STL_TR_CONFLICT should have records of all transaction issues, including locks. Maybe records only live there while the lock is alive; I am not sure.
Searching the useractivitylog files in S3 for the words "violation" and "ERROR" also gives no results, so I still can't figure out why one of the tables was not accessible.
I am new to database management, so I would appreciate any advice on how to troubleshoot this issue.
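For anyone hitting the same wall: STL_TR_CONFLICT appears to record conflicts only after a transaction has aborted, so a lock that is still being held would not show up there. Checking current locks via STV_LOCKS and SVV_TRANSACTIONS while the table is hanging is usually more telling. A sketch via psql (the endpoint, database, and user are placeholders):

```shell
# Connect to the cluster (endpoint, database, and user are placeholders)
psql -h my-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -d dev -U admin <<'SQL'
-- Locks currently held or waited on
-- (rows exist here only while the lock is alive)
select table_id, lock_owner_pid, lock_status, last_update
from stv_locks
order by last_update;

-- Open transactions that may be holding those locks
select xid, pid, txn_owner, txn_start
from svv_transactions;
SQL
```

If a stuck session turns out to be holding the lock, it can be ended with select pg_terminate_backend(pid); using the pid from the output above.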
I have a DynamoDB table that I would like to rename. There do not seem to be any commands or options to rename a table. Has anybody renamed a table before?
I know this is an old question and I don't want to steal the thunder from user6500852's answer but for anyone stumbling upon this needing an answer:
1. Select the table you want to rename in the Web UI so it's highlighted and the various tabs show up to the right.
2. Click on the Backups tab and feast your eyes upon the lower half of the page ("On-Demand Backup and Restore").
3. Smash that "Create Backup" button. You'll be prompted for a backup name. Name it whatever you want. The world is yours!
4. Give it time to complete the backup. If you just have a few records, this will be seconds. If you have a ton of records, it could be hours. Buckle up or just go home and get some rest.
5. Once the backup is ready, highlight it in the list. You'll see the "Restore Backup" button light up. Click it.
6. You'll get a prompt asking you to enter the NEW table name. Enter the name you want to rename this table to. Note: the way the UI is, you may have to click out of the field in the static page area for the "Restore table" button at the bottom of the page to light up.
7. Once you're ready to go, click the "Restore table" button. Note the info on that page indicating it can take several hours to complete. My table with just five test records took over 10 minutes to restore. I thought it was because the original was a Global Table, but the restored table was NOT set up as a Global Table (which defeats the purpose of this for Global Tables, since you have to empty the table to make it global).
Bear in mind that last note! If you're working with a Global Table, you will lose the Global part after the restore. Buyer beware! You may have to consider an export and import via Data Pipeline...
If you're not, then the Backup and Restore is a pretty easy process to use (without having to go and setup a pipeline, S3 store, etc.). It can just take some time.
Hope this helps someone!
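If you'd rather script the same dance, the backup/restore steps above map to a couple of AWS CLI calls. The table names and the backup ARN below are placeholders; the real ARN comes back in the create-backup output:

```shell
# 1. Back up the existing table (names are placeholders)
aws dynamodb create-backup \
  --table-name OldTableName \
  --backup-name OldTableName-rename

# 2. Restore the backup under the new name, using the BackupArn
#    returned by the previous call
aws dynamodb restore-table-from-backup \
  --target-table-name NewTableName \
  --backup-arn arn:aws:dynamodb:us-east-1:123456789012:table/OldTableName/backup/01234567890123-abcdefgh

# 3. Wait for the new table to become ACTIVE before pointing your app at it
aws dynamodb describe-table \
  --table-name NewTableName \
  --query 'Table.TableStatus'
```

The same caveat about Global Tables applies here: the restored table comes back as a plain table.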
Currently no, you would need to create a new table and copy the data from one to the other if you really needed a new table name.
You can use the Export/Import feature to backup your data to S3. Then delete your old table, and create a new one with the new name. Import your data from S3. Done. No code change necessary. If you don't delete CloudWatch alarms and Pipelines when deleting the old table, then those will automatically hook up to the new table. The ARN even stays the same.
The downside to this, of course, is that the table will be unusable during the time after you delete it and before you recreate it. This may or may not be a problem, but that needs to be considered. Additionally, once you recreate the table, it can be accessed while you work on the data import. You may want to stop your app's access to the table until the import is complete.
First, create a backup from the Backups tab.
When you restore that backup, you are prompted for a new table name.
There you can enter the new name, and the data is restored under it.
Hope this helps.
You should be able to achieve this using on demand backup / restore functionality.
Check out a walkthrough on backing up a table and restoring it to a new table:
https://www.abhayachauhan.com/2017/12/dynamodb-scheduling-on-demand-backups/
Is it possible to view the contents of a Dynamics NAV 2013 database table while in a debugging session?
When I go to the development environment, I can normally hit Run on any table and explore its contents. However, while the debugger is running this is not possible, since the whole Dynamics NAV environment is frozen when the debugger stops on a breakpoint.
One workaround I have found is to copy the relevant data to Excel before running the debugger, but that is not very convenient. Also, in the watch list of the debugger I can only view single variables, not a whole database table.
You can simply open SQL Server Management Studio and have a look at the tables.
Of course, you will only see changes once they are committed: either the code in NAV has passed the trigger where the record is modified, or you explicitly call COMMIT().
If you have never used SQL Server Management Studio, you will notice that the table names are prefixed with the company name.
For example the item ledger entry in demo database is:
[CRONUS AG$Item Ledger Entry]
and a select statement for reading all records in the table could be
SELECT *
FROM [Demo Database NAV (7-0)].[dbo].[CRONUS AG$Item Ledger Entry]
Regards
Alex
The debugger does not have a "table view". You're either stuck with using SQL, which will not show calculated fields, or you can use another session (which in some cases requires another service tier, since the debugger has the nasty tendency to block the entire service tier).
But another session will not display uncommitted data.
An alternative (not a great one) is to create a simple method that loops through all records and dumps FORMAT(Rec) into a text file. That method can be called wherever you need to inspect the table.
But unless calculated fields are necessary, I would also go with SQL.