I have a backend in AWS Amplify where my data is stored. After deleting an item of a model directly from the Content tab in Amplify Studio, querying the model still returns the same number of items. Comparing their content, I found that the difference is that existing items hold the value null in the _deleted property, while the items that have actually been deleted hold the value undefined.
Why is that? And is there a way to delete items so that they completely disappear from the DataStore?
DataStore.query always operates on local data, not server data--it relies on the automatically managed subscriptions to keep the local store consistent.
If you want a deleted item to be reflected in your UI immediately, I've been using DataStore.observeQuery(), covered in the documentation linked below on real-time data subscriptions:
https://docs.amplify.aws/lib/datastore/real-time/q/platform/js/
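For reference, a minimal sketch of observeQuery usage, assuming a generated model named Post (the model name is just an example):

    import { DataStore } from "aws-amplify";
    import { Post } from "./models"; // hypothetical generated model

    // observeQuery emits a fresh snapshot whenever the local store changes,
    // including when a remote delete syncs down, so deleted items drop out
    // of `items` without a manual re-query.
    const subscription = DataStore.observeQuery(Post).subscribe(snapshot => {
      const { items, isSynced } = snapshot;
      console.log(`${items.length} posts (synced: ${isSynced})`);
    });

    // Clean up when the component goes away:
    // subscription.unsubscribe();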
Related
I'm building an Angular 11 web app using AppSync for the backend.
I've mentioned group chat before, but basically I have an announcement feature in my app where a person creates announcements for a specific audience (individual members or groups of members). Whenever a receiving user opens an announcement, it has to be marked as read for that user in their UI, and the sender has to be told that it has been opened by that particular member.
I have an idea for implementing this:
Each announcement has a "seenBy" attribute that aggregates the user IDs of the members who open it.
Each member also has an attribute on their user object named "announcementsRead", which is an array of IDs of the announcements they have opened.
In the UI, when I gather the list of announcements for the user, the ones whose IDs are not in the member's own announcementsRead array are marked as unread.
When an announcement is clicked and opened, I make two updates (sketched below): a) on the announcement object, I push the member's user ID into the "seenBy" attribute and write it to the DB; b) on the member's user object, I add the announcement's ID to the "announcementsRead" attribute and write it to the DB.
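A rough sketch of those two writes with the Amplify DataStore client, assuming hypothetical generated models named Announcement and Member with the attributes described above:

    import { DataStore } from "aws-amplify";
    import { Announcement, Member } from "./models"; // hypothetical models

    // Marks an announcement as read for a member: update (a) then (b).
    async function markAsRead(announcementId: string, memberId: string) {
      const announcement = await DataStore.query(Announcement, announcementId);
      const member = await DataStore.query(Member, memberId);
      if (!announcement || !member) return;

      // (a) push the member's ID into the announcement's seenBy
      await DataStore.save(
        Announcement.copyOf(announcement, draft => {
          draft.seenBy = [...(draft.seenBy ?? []), memberId];
        })
      );

      // (b) add the announcement's ID to the member's announcementsRead
      await DataStore.save(
        Member.copyOf(member, draft => {
          draft.announcementsRead = [...(draft.announcementsRead ?? []), announcementId];
        })
      );
    }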
This is just something that I came up with.
Please let me know if there are any pitfalls to this approach. Or if there are simpler ways to achieve this functionality.
I have a few concerns as well:
Let's say two users open an announcement at the same time, and both clients try to update it with a new seenBy containing their own user ID. What happens when the two requests from the two clients run concurrently? The first user fetches the object, and the second user fetches it immediately after; by the time the second user has updated the attribute and sent it back to the DB, the first user has already written their updated data, so the second user's write overwrites the first user's change (a classic lost update). I am not sure of the internal mechanisms of the Amplify DataStore, but I can imagine this happening. Is this possible? If so, how do we ensure that it is prevented?
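For context, the usual mitigation for this lost-update race is optimistic locking: each record carries a version, and a write succeeds only if the version it read is still current. AppSync's conflict detection works along these lines with its _version field, though the sketch below is the generic pattern, not AppSync's exact internals; read and conditionalWrite are hypothetical stand-ins for the storage API:

    // Generic optimistic-locking sketch.
    interface Versioned { version: number; seenBy: string[] }

    async function addSeenBy(
      read: () => Promise<Versioned>,
      // resolves false when expectedVersion is stale, i.e. someone wrote first
      conditionalWrite: (next: Versioned, expectedVersion: number) => Promise<boolean>,
      userId: string
    ): Promise<void> {
      for (;;) {
        const current = await read();
        const next = {
          version: current.version + 1,
          seenBy: current.seenBy.includes(userId)
            ? current.seenBy
            : [...current.seenBy, userId],
        };
        if (await conditionalWrite(next, current.version)) return; // write won
        // write lost the race: loop, re-read fresh data, and retry
      }
    }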
Is it really necessary for me to maintain the "announcementsRead" attribute on the user? I could generate that list in the UI every time I get the list of announcements, by checking whether the current user's ID exists in each announcement's "seenBy", and maintain the list only in the UI. That way we eliminate redundant info in the DB, and we also avoid accumulating the IDs of extremely old announcements that may have been deleted. But I'm wondering if having this attribute on the member actually helps in an indispensable way.
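Deriving the unread list purely from seenBy would be a one-liner in the UI (hypothetical shapes again):

    interface AnnouncementView { id: string; seenBy?: string[] }

    // An announcement is unread iff the current user's ID is absent from seenBy.
    const unread = (announcements: AnnouncementView[], currentUserId: string) =>
      announcements.filter(a => !(a.seenBy ?? []).includes(currentUserId));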
Hope my questions are clear.
I see that data objects in the Baqend data storage are versioned.
Are the previous versions accessible?
Does the system store the ID of the user who wrote/updated the object?
Just curious what that is doing and if I need to try to create my own log or if there is something built in.
Specifically, I have "administrators" who will be manually verifying some data that is put into the database and then changing a specific field to "approved". We need to know the ID of the last person who modified the data.
Baqend does not keep previous versions, and it only collects access logs.
To save which user updated an object, you can, for example, use an update handler to save a reference to that user on the object.
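A minimal sketch of such a handler, assuming Baqend's onUpdate(db, obj) handler signature; the field name updatedBy is an assumption, not part of Baqend's schema:

    // Hypothetical Baqend update handler module for the class in question.
    // Baqend runs onUpdate before persisting the object; db.User.me is the
    // user issuing the request.
    exports.onUpdate = function (db, obj) {
      obj.updatedBy = db.User.me; // record a reference to the updating user
    };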
I'm currently trying to implement Microsoft Sync Framework for field agents that will be working mostly disconnected from the server.
Currently I have a SQL Express database that the application points to in offline mode, and when the agents are back online, they can hit a sync button to push the changes up and down.
I have no problems creating the filtered scope, but our schema uses a "VersionID" column to handle historical data.
No data is deleted from the databases, so when a row is "updated" a new row is inserted with max(VersionID) + 1 as its new versionID.
Since I can't use aggregate functions in a filtered scope, I can't figure out how to retrieve the max version only for each unique row.
I only need to retrieve the max(VersionID) record for each row because of the 10 GB limit on the database: I can't possibly download all records without going over the limit, given all the support tables the application requires.
Any ideas?
The scope filter is simply appended to the _selectchanges stored procedure's WHERE clause. If you can express your condition in a simple query, you should be able to set the same expression as the scope filter.
I am building an Azure web role service in which I have a long list (thousands) of objects that I filter on different criteria. I need to cache the list, but I have a concern:
Suppose I have a number of role instances, and the list is cached on one machine while another machine wants to iterate over it. Will the list be copied into the memory of the requesting machine and iterated there?
Windows Azure Caching is serialized, meaning that when you store an item in the cache it is serialized (using the .NET XmlSerializer by default, but you can change this), and when it is retrieved from the cache it is deserialized into a new object.
So yes: when you retrieve a list from the cache (even on the same role instance!) you will have a new list in memory that is iterated over.
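To illustrate the point in generic terms (this is not the Azure Caching API, just the serialize-on-put / deserialize-on-get behaviour it implies):

    // A toy serializing cache: values are stored as strings, so every get
    // hands back a freshly deserialized copy, never the original reference.
    const cache = new Map<string, string>();

    function put<T>(key: string, value: T): void {
      cache.set(key, JSON.stringify(value)); // stored in serialized form
    }

    function get<T>(key: string): T | undefined {
      const raw = cache.get(key);
      return raw === undefined ? undefined : (JSON.parse(raw) as T);
    }

    const original = [{ id: 1 }, { id: 2 }];
    put("list", original);
    const copy = get<typeof original>("list");
    console.log(copy === original); // false: iteration runs over a new list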
I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any schema changes in the SQL Server database, we have to deprovision the database tables. After making the schema changes, we reprovision the tables.
In this process, the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
For example: the application is at version 2.0 and syncing is working fine. In the next version, 3.0, I want to make some DB changes, so in the deprovision/reprovision process the tracking info gets deleted and all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking table for you; you don't have to copy and reload it yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync again, it's as if the two replicas had never synced before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Framework-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted, and the tracking info will be lost during the deprovision/reprovision process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info will be stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" into "User_tracking" and then delete the contents of "User_tracking_1".
The User tracking info will be restored.
Is this the right approach?