Loopback 3: updating model properties while ensuring old data is converted correctly without data loss

I am currently using Loopback 3 and updating the properties of my model. The problem is that the new structure differs from the old one: certain properties are now split into two separate properties, so if I deployed the change, existing data might be lost. (I am using MongoDB.)
Example original structure:
{
  "properties": {
    "address": {
      "type": "string"
    }
  }
}
Example new structure:
{
  "properties": {
    "address": {
      "type": {
        "street": {
          "type": "string"
        },
        "city": {
          "type": "string"
        },
        "zipcode": {
          "type": "string"
        }
      }
    }
  }
}
In my case there are also properties whose names change: instead of address it becomes addressline, or something like that.
I know some of you might say it's better to move address to a separate model, but this is just an example; in my case I'm unable to move it to a separate table due to certain circumstances.
So my question is: how can you update a model and remap the existing data to the new structure, so that the original data isn't lost?
Thanks in advance!

Here's my answer
I understand that LoopBack is database independent and my approach goes against that; nevertheless, here it is.
We can write a migration script based on the underlying database. This is what I tried for PostgreSQL:
#1 For the scenario where the data type changes:
ALTER TABLE tablename ADD COLUMN address2 jsonb;
UPDATE tablename SET address2 = to_json(address);
ALTER TABLE tablename DROP COLUMN address;
ALTER TABLE tablename RENAME COLUMN address2 TO address;
#2 And for the scenario where the name needs to change:
ALTER TABLE tablename ADD COLUMN addressline jsonb;
UPDATE tablename SET addressline = address;
ALTER TABLE tablename DROP COLUMN address;
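Since the original question uses MongoDB, here is the same idea as a mongo shell sketch (the collection name mymodel is an assumption, and how you split the old address string into parts is application-specific):

// #1 Type change: wrap the old address string in the new sub-document.
// Here the old value is simply kept in "street" as a placeholder;
// real splitting logic depends on your address format.
db.mymodel.find({ address: { $type: 'string' } }).forEach(function (doc) {
  db.mymodel.updateOne(
    { _id: doc._id },
    { $set: { address: { street: doc.address, city: '', zipcode: '' } } }
  );
});

// #2 Name change: rename address to addressline on all documents.
db.mymodel.updateMany({}, { $rename: { address: 'addressline' } });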
Hoping to find more and better answers to this question!

Related

How to add a map to a map array in AWS DynamoDB only when the id does not exist?

Here is my DynamoDB structure.
{"books": [
{
"name": "Hello World 1",
"id": "1234"
},
{
"name": "Hello World 2",
"id": "5678"
}
]}
I want to set a ConditionExpression to check whether the id exists before adding new items to the books array. Here is my ConditionExpression. I am using API Gateway to access DynamoDB.
"ConditionExpression": "NOT contains(#lu.books.id,:id)",
"ExpressionAttributeValues": {":id": {
"S": "$input.path('$.id')"
}
}
Result when I test the API: whether the id exists or not, the item is always added to the array.
Any suggestions on how to do this? Thanks!
Unfortunately, you can't. However, there is a workaround.
Store each book in a separate item. For example:
PK            SK
BOOK_LU#<ID>  BOOK_NAME#<book name>#BOOK_ID#<BOOK_ID>
Now you can use the attribute_not_exists condition function (note that if_not_exists is only valid inside an UpdateExpression):
"ConditionExpression": "attribute_not_exists(PK)"
No ExpressionAttributeValues are needed here: the condition is evaluated against the item with the same primary key, so the put is rejected if that book already exists.
The con is that if you were previously fetching the list as part of another object, you will have to change that.
The pro is that you can now easily work with the books, and you won't hit the maximum item size limit (400 KB) if the books become too many.
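For illustration, a conditional put could look like the following Node.js sketch using the AWS SDK DocumentClient (the table name Books and the item shape are assumptions based on the key design above):

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: 'Books', // hypothetical table name
  Item: {
    PK: 'BOOK_LU#1234',
    SK: 'BOOK_NAME#Hello World 1#BOOK_ID#1234',
    id: '1234',
    name: 'Hello World 1'
  },
  // Rejects the write if an item with this same primary key already exists.
  ConditionExpression: 'attribute_not_exists(PK)'
};

docClient.put(params, function (err) {
  if (err && err.code === 'ConditionalCheckFailedException') {
    console.log('Book id already exists; not added.');
  } else if (err) {
    console.error(err);
  } else {
    console.log('Book added.');
  }
});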

Cassandra store list of objects

I need to store a list of maps in Cassandra. Is that possible?
This is a json representation of my data:
{
  "deviceId": "261e92b8-91af-40da-8ba4-c39d821472ec",
  "sensors": [
    {
      "fieldSensorId": "sensorID",
      "name": "sensorName",
      "location": "sensor location",
      "unit": "value units",
      "notes": "notes"
    },
    {
      "fieldSensorId": "sensorID 2",
      "name": "sensorName 2",
      "location": "sensor location 2",
      "unit": "value units",
      "notes": "notes"
    }
  ]
}
CQL:
CREATE TABLE device_sensors (
  device_id text,
  sensors list<frozen<map<text, text>>>,
  time timeuuid,
  PRIMARY KEY (device_id)
);
Still, I'm not able to insert any data. What is the right way of storing such data in Cassandra? Later I will need to query the sensors list.
Would it be wiser to create a separate sensors table and use sensor IDs to reference the sensors?
I think the problem is that you declared device_id as text in CQL, but you have declared it as UUID in the source code, and Spring maps it to the corresponding type when trying to insert data. Can you try adding @CassandraType(type = Name.TEXT) to the deviceId declaration? You can also remove the @Column declaration; the @PrimaryKeyColumn should be enough.
Or you can change the table definition to declare device_id as UUID.
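For reference, a raw CQL insert matching the table definition above would look like this (a sketch; only two map keys shown, the remaining ones follow the same pattern):

INSERT INTO device_sensors (device_id, sensors, time)
VALUES (
  '261e92b8-91af-40da-8ba4-c39d821472ec',
  [ { 'fieldSensorId': 'sensorID', 'name': 'sensorName' },
    { 'fieldSensorId': 'sensorID 2', 'name': 'sensorName 2' } ],
  now()  -- now() generates a timeuuid
);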

Analytics in WSO2DAS

I'm getting a "Table Not Found" error while running a SELECT query on the Spark console of WSO2 DAS. I've kept all the default configurations intact after the installation. I'm unable to fetch the data from the event stream even though it is shown under the table dropdown of the data explorer.
When the data is initially moved into WSO2 DAS, it is persisted in the data store you configured.
But these are not the tables that are created in Spark. You need to write a Spark query that creates a temporary table in Spark which references the table you have persisted.
For example, if your stream is
{
  "name": "sample",
  "version": "1.0.0",
  "nickName": "",
  "description": "",
  "payloadData": [
    {
      "name": "ID",
      "type": "INT"
    },
    {
      "name": "NAME",
      "type": "STRING"
    }
  ]
}
you need to write the following Spark query in the Spark console:
CREATE TEMPORARY TABLE sample_temp USING CarbonAnalytics OPTIONS (tableName "sample", schema "ID INT, NAME STRING");
After executing the above script, try the following:
select * from sample_temp;
This should fetch the data you have pushed into WSO2 DAS.
Happy learning!! :)

Strongloop - hasOne relation

I am having some trouble setting up a hasOne relation, which probably comes from me understanding the relation wrongly.
I have two models: a user model and a location model. What I want to add is a relation between user and location, meaning a user has a current location. But if I set up a hasOne relation on the user model with the location, I end up with a userId property on the location. This is completely backwards for my case: several users can have the same current location, so the user model should store the location id, not the location the user id. So how can I achieve what I want, so that I can afterwards query the user and include the current location?
I can of course add a property to the user and store the id of the location there, but then I can't as easily include the location in a user request. So I would prefer using relations to achieve this.
Your problem is a bit unclear, given the comments on Brian's post, but if you absolutely need users to share a given set of locations, then you would be better off using user belongsTo location.
This will create a locationId field on user, and you will be able to GET /api/Users/{userId}/location
Additionally, you can set up location hasMany user to retrieve all the users at a given location.
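For reference, a minimal sketch of what the belongsTo side could look like in user.json (naming the relation location so it matches the GET endpoint above; adjust the base and property list to your model):

{
  "name": "user",
  "base": "PersistedModel",
  "properties": {},
  "relations": {
    "location": {
      "type": "belongsTo",
      "model": "Location",
      "foreignKey": "locationId"
    }
  }
}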
The relation name is referring to the relational meaning of "has one," which means that for each user, there exists one (and only one) entry in the location table. Each user "has one" entry in the location table, and if your data needs to show that two users have the same location, it just means the location table would store identical location data with different userIds. This is still perfectly fine for relational mapping, and allows you to do User.location calls.
What you are looking for is slightly different, which would be "Location hasMany Users," because you will be sharing location entries with multiple users. Read this as "for each location entry, many users could share it." You'll have to query a bit differently and use the include: ['location'] filter when you want to return the User with location data included (otherwise you'll only get the locationId value).
Relation builder:
$ slc loopback:relation
? Select the model to create the relationship from: Location
? Relation type: has many
? Choose a model to create a relationship with: user
? Enter the property name for the relation: users
? Optionally enter a custom foreign key:
? Require a through model? No
location.json:
{
  "name": "Location",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "lat": {
      "type": "number"
    },
    "long": {
      "type": "number"
    }
  },
  "validations": [],
  "relations": {
    "users": {
      "type": "hasMany",
      "model": "user",
      "foreignKey": "locationId"
    }
  },
  "acls": [],
  "methods": {}
}
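With the relations in place, including the location when querying users looks like this (a sketch, assuming the belongsTo relation on user is named location as above):

// Node API: fetch users with their related location embedded
app.models.user.find({ include: 'location' }, function (err, users) {
  // each user in the result carries its related location object
});

REST equivalent:
GET /api/Users?filter[include]=location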

Returning record(s) after store pushPayload call

Is there a better way to return the record(s) after DS.Store#pushPayload is called? This is what I'm doing...
var payload = { id: 1, title: "Example" };
store.pushPayload('post', payload);
return store.getById('post', payload.id);
But, with regular DS.Store#push you get the inserted record returned. The only difference between the two, from what I can tell, is that DS.Store#pushPayload serializes the payload data with the correct serializers.
DS.Store#pushPayload is able to take an array of items, not just one, and may contain side-loaded data. It processes a full payload and expects root keys in the payload:
{
  "posts": [{
    "id": 1,
    "title": "title",
    "comments": [1]
  }],
  "comments": [
    // .. and so on ...
  ]
}
DS.Store#push expects a single record which has been normalized and contains no side-loaded data (notice there is no root key):
{
  "id": 1,
  "title": "title",
  "comments": [1]
}
For this reason, it makes sense for push to return the record, but for pushPayload to return nothing.
When you use pushPayload, a second lookup via store.find('post', 1) (or store.getById('post', 1)) is the way to go; I don't believe there is a better way.
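In code, using the same pre-2.0 API as the question, the contrast looks like this:

// push takes a single normalized record and returns it:
var post = store.push('post', { id: 1, title: "title", comments: [1] });

// pushPayload takes a full payload with root keys and returns nothing,
// so the record has to be looked up afterwards:
store.pushPayload('post', { posts: [{ id: 1, title: "title", comments: [1] }] });
var post2 = store.getById('post', 1);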
As of this PR pushPayload can now return an array of all the records pushed into the store, once the 'ds-pushpayload-return' feature flag has been enabled.
At the moment, this feature isn't available in a standard or beta release; you'll have to use
"ember-data": "emberjs/data#master",
(i.e. Canary) in your package.json in order to access it. I'm not sure when the feature will be generally available.
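If you do experiment with it, feature flags are typically switched on via EmberENV in config/environment.js (a sketch; the flag name comes from the PR referenced above):

// config/environment.js
var ENV = {
  EmberENV: {
    FEATURES: {
      'ds-pushpayload-return': true
    }
  }
  // ...
};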