Dapper nested list query - list

I have 3 POCO classes, as below:
Continent has many ContinentPart
Continent has many Country
Country has many City
I want to get Continents with their ContinentParts, Countries, and Cities:
using (IDbConnection db = new SqlConnection(_conf["ConnectionStrings:WorkConStr"]))
{
    string query = @"SELECT * FROM Continent as c
                     LEFT JOIN ContinentPart as cp ON c.ContinentId = cp.ContinentId
                     LEFT JOIN Country as co ON c.ContinentId = co.ContinentId
                     LEFT JOIN City as ci ON co.CountryId = ci.CountryId
                     ORDER BY c.RecDate DESC";

    var continentDictionary = new Dictionary<int, Continent>();

    var list = db.Query<Continent, ContinentPart, Country, City, Continent>(
        query,
        map: (continent, continentPart, country, city) =>
        {
            Continent m;
            if (!continentDictionary.TryGetValue(continent.ContinentId, out m))
            {
                continentDictionary.Add(continent.ContinentId, m = continent);
                m.ContinentParts = new List<ContinentPart>();
                m.Countries = new List<Country>();
            }

            // LEFT JOINs can yield null children, and the row-per-combination
            // result duplicates parents, so guard and de-duplicate before adding
            if (continentPart != null && !m.ContinentParts.Any(p => p.ContinentPartId == continentPart.ContinentPartId))
                m.ContinentParts.Add(continentPart);

            if (country != null)
            {
                var co = m.Countries.FirstOrDefault(x => x.CountryId == country.CountryId);
                if (co == null)
                {
                    co = country;
                    co.Cities = new List<City>();
                    m.Countries.Add(co);
                }
                if (city != null && !co.Cities.Any(x => x.CityId == city.CityId))
                    co.Cities.Add(city);
            }
            return m;
        },
        param: new { },
        splitOn: "ContinentPartId,CountryId,CityId") // assumes these are the first columns of each joined table
        .Distinct()
        .ToList();
    return list;
}

You have two options here. You can use the Multiple Resultsets feature:
https://medium.com/dapper-net/handling-multiple-resultsets-4b108a8c5172
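For reference, here's a rough sketch of the multiple-resultsets approach with Dapper's QueryMultiple (table names are taken from the question; the parent/child wiring is an assumption about your POCO shapes, not code from the article):

string sql = @"SELECT * FROM Continent;
               SELECT * FROM ContinentPart;
               SELECT * FROM Country;
               SELECT * FROM City;";

using (var grid = db.QueryMultiple(sql))
{
    var continents = grid.Read<Continent>().ToList();
    var parts = grid.Read<ContinentPart>().ToLookup(p => p.ContinentId);
    var countries = grid.Read<Country>().ToList();
    var cities = grid.Read<City>().ToLookup(c => c.CountryId);

    // wire children to parents in memory; one row per entity, no cartesian blow-up
    foreach (var country in countries)
        country.Cities = cities[country.CountryId].ToList();

    var countriesByContinent = countries.ToLookup(c => c.ContinentId);
    foreach (var continent in continents)
    {
        continent.ContinentParts = parts[continent.ContinentId].ToList();
        continent.Countries = countriesByContinent[continent.ContinentId].ToList();
    }
}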
Or, another option, better in my opinion, is to return the joined result as JSON, for example with a query like the following:
select
    continents.continent_id,
    continents.continent,
    countries.country_id,
    countries.country,
    cities.city_id,
    cities.city
from
    dbo.Continent continents
inner join
    dbo.Country countries on continents.continent_id = countries.continent_id
inner join
    dbo.City cities on countries.country_id = cities.country_id
for
    json auto
that returns a JSON like this:
{
    "continent_id": 1,
    "continent": "Africa",
    "countries": [
        {
            "country_id": 1,
            "country": "Nigeria",
            "cities": [
                { "city_id": 1, "city": "Lagos" },
                { "city_id": 2, "city": "Abuja" }
            ]
        }
    ]
}
and that can then be turned into a complex object using Custom Type Handling:
https://medium.com/dapper-net/custom-type-handling-4b447b97c620
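If a full custom type handler feels like overkill, a minimal sketch of the same idea is to read the JSON as a string and deserialize it yourself (this assumes Newtonsoft.Json and the Continent shape above; SQL Server can split long FOR JSON output across several rows, hence the Concat):

var json = string.Concat(db.Query<string>(query)); // FOR JSON output may span multiple rows
var continents = JsonConvert.DeserializeObject<List<Continent>>(json); // Newtonsoft.Json assumed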
The links are to articles I've written on the subject.

Related

How can I get distinct values for the area.names using graphene?

My resolver in schema.py looks like this:
def resolve_areas(self, info, **kwargs):
    result = []
    dupfree = []
    user = info.context.user
    areas = BoxModel.objects.filter(client=user, active=True).values_list('area_string', flat=True)
In GraphiQL I am using this query:
{
    areas {
        edges {
            node {
                id
                name
            }
        }
    }
}
And get output that starts like this:
{
    "data": {
        "areas": {
            "edges": [
                {
                    "node": {
                        "id": "QXJlYTpkZWZ",
                        "name": "default"
                    }
                },
                {
                    "node": {
                        "id": "QXJlYTptZXN",
                        "name": "messe"
                    }
                },
                {
                    "node": {
                        "id": "QXJlYTptZXN",
                        "name": "messe"
                    }
                },
But I want distinct values for the name variable. (I'm using a MySQL database, so .distinct() on specific fields does not work.)
SOLVED:
.distinct() was not working, so I just wrote a short loop that tracks the string names already seen in a list and only appends the whole "area" object if its name has not been added to the duplicates list yet:
result = []
dupl_counter = []
for area in areas:
    if area not in dupl_counter:
        dupl_counter.append(area)
        result.append(Area(name=area))
        print(area)
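For what it's worth, a shorter order-preserving version of the same dedupe (assuming Python 3.7+, where dicts preserve insertion order):

# dict.fromkeys keeps the first occurrence of each name and drops duplicates
result = [Area(name=name) for name in dict.fromkeys(areas)]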

DynamoDB Query with sort key and partition key

I am using JS SDK with DynamoDB to fetch data.
I am able to fetch data from my table using simple query with partition key and sort key.
My sort key sk has records of the form Year#Batch#Rate.
If I pass var sk = "2006#CSE#90"; it returns all records matching this.
Requirement: how can I get all products with year 2006, batch CSE, AND rate >= 90?
readItem_pbro() {
    console.log("inside pbro");
    var table2 = "pbro";
    var pk = "1";
    var sk = "2006#CSE#90";
    var params2 = {
        TableName: table2,
        Key: {
            "pk": pk,
            "sk": sk
        }
    };
Edit 1: Created a different column for score/rate, named score. It is numeric.
Now my query in JS is below, but I am getting an error: ValidationException: The provided key element does not match the schema
readItem_score_filter() {
    console.log("inside pbro");
    var table2 = "pbro";
    var pk = "1"; // string
    var sk = "2006#CSE"; // string
    var score = 90; // number
    var params2 = {
        TableName: table2,
        Key: {
            "pk": pk,
            "sk": sk,
            FilterExpression: 'score >=:score',
        }
    };
What is wrong in my FilterExpression?
Edit 2: Added a KeyConditionExpression, but the issue still remains the same.
Error: ValidationException: The provided key element does not match the schema
Here is my complete function now:
readItem_score_filter() {
    console.log("inside pbro");
    var table2 = "pbro";
    var pk = "1"; // string
    var sk = "2006#CSE"; // string
    var score = 90; // number
    var params2 = {
        TableName: table2,
        Key: {
            "pk": pk,
            "sk": sk,
            "score": score,
            KeyConditionExpression: 'pk = :pk AND sk=:sk',
            FilterExpression: "score >=:score",
        }
    };
    this.user.docClient.get(params2, function(err, data) {
        if (err) {
            console.log(err);
        } else {
            console.log(data);
        }
    });
}
Screenshot of the table is attached in case you need to see it:
If "2006#CSE#90" this is the value of sort key column then you cant do anything at Dynamodb level..
comparings like this can be done through regular expressions but DynamoDB doesn't support Regular Expressions.
you simply need to get results and then seperate these values and compare ..
Updated :- use different column for score.
And use Filter Expression to get records having rate more than 90.
I dont know python , but still am trying here
var params2 = {
    TableName: "pbro",
    KeyConditionExpression: "pk = :pk AND sk = :sk",
    FilterExpression: "score >= :score"
    // note: this still needs ExpressionAttributeValues for :pk, :sk and :score
};

AWS RDS Data API executeStatement not return column names

I'm playing with the new Data API for Amazon Aurora Serverless.
Is it possible to get the table column names in the response?
If, for example, I run the following query on a user table with the columns id, first_name, last_name, email, phone:
const sqlStatement = `
    SELECT *
    FROM user
    WHERE id = :id
`;

const params = {
    secretArn: <mySecretArn>,
    resourceArn: <myResourceArn>,
    database: <myDatabase>,
    sql: sqlStatement,
    parameters: [
        {
            name: "id",
            value: {
                "longValue": 1 // id is numeric, so longValue rather than stringValue
            }
        }
    ]
};
let res = await this.RDS.executeStatement(params)
console.log(res);
I'm getting a response like this one, so I need to guess which column corresponds to each value:
{
    "numberOfRecordsUpdated": 0,
    "records": [
        [
            { "longValue": 1 },
            { "stringValue": "Nicolas" },
            { "stringValue": "Perez" },
            { "stringValue": "example@example.com" },
            { "isNull": true }
        ]
    ]
}
I would like to have a response like this one:
{
    id: 1,
    first_name: "Nicolas",
    last_name: "Perez",
    email: "example@example.com",
    phone: null
}
Update 1:
I have found an npm module that wraps the Aurora Serverless Data API and simplifies development.
We decided to take the current approach because we were trying to cut down on the response size, and including column information with each record was redundant.
You can explicitly choose to include column metadata in the result. See the parameter "includeResultMetadata":
https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html#API_ExecuteStatement_RequestSyntax
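For example, the request from the question with the flag added (the parameter goes alongside sql in the same params object):

const params = {
    secretArn: <mySecretArn>,
    resourceArn: <myResourceArn>,
    database: <myDatabase>,
    sql: sqlStatement,
    includeResultMetadata: true, // adds a columnMetadata array to the response
    parameters: [{ name: "id", value: { longValue: 1 } }]
};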
Agree with the consensus here that there should be an out-of-the-box way to do this from the Data Service API. Because there is not, here's a JavaScript function that will parse the response:
// requires includeResultMetadata: true in the request, so columnMetadata is present
const parseDataServiceResponse = res => {
    let columns = res.columnMetadata.map(c => c.name);
    let data = res.records.map(r => {
        let obj = {};
        r.forEach((v, i) => {
            obj[columns[i]] = Object.values(v)[0];
        });
        return obj;
    });
    return data;
}
I understand the pain, but it looks like this is reasonable, given that a SELECT statement can join multiple tables and duplicated column names may exist.
Similar to the answer above from @C. Slack, but I used a combination of map and reduce to parse the response from Aurora Postgres.
// declarative column names in array
const columns = ['a.id', 'u.id', 'u.username', 'g.id', 'g.name'];

// execute sql statement
const params = {
    database: AWS_PROVIDER_STAGE,
    resourceArn: AWS_DATABASE_CLUSTER,
    secretArn: AWS_SECRET_STORE_ARN,
    // includeResultMetadata: true,
    sql: `
        SELECT ${columns.join()} FROM accounts a
        FULL OUTER JOIN users u ON u.id = a.user_id
        FULL OUTER JOIN groups g ON g.id = a.group_id
        WHERE u.username=:username;
    `,
    parameters: [
        {
            name: 'username',
            value: {
                stringValue: 'rick.cha',
            },
        },
    ],
};

const rds = new AWS.RDSDataService();
const response = await rds.executeStatement(params).promise();

// parse response into json array
const data = response.records.map((record) => {
    return record.reduce((prev, val, index) => {
        return { ...prev, [columns[index]]: Object.values(val)[0] };
    }, {});
});
Hope this code snippet helps someone.
And here is the response:
[
    {
        'a.id': '8bfc547c-3c42-4203-aa2a-d0ee35996e60',
        'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
        'u.username': 'rick.cha',
        'g.id': 'ff6ebd78-a1cf-452c-91e0-ed5d0aaaa624',
        'g.name': 'valentree',
    },
    {
        'a.id': '983f2919-1b52-4544-9f58-c3de61925647',
        'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
        'u.username': 'rick.cha',
        'g.id': '2f1858b4-1468-447f-ba94-330de76de5d1',
        'g.name': 'ensightful',
    },
]
Similar to the other answers, but if you are using Python/Boto3:
def parse_data_service_response(res):
    columns = [column['name'] for column in res['columnMetadata']]
    parsed_records = []
    for record in res['records']:
        parsed_record = {}
        for i, cell in enumerate(record):
            key = columns[i]
            value = list(cell.values())[0]
            parsed_record[key] = value
        parsed_records.append(parsed_record)
    return parsed_records
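Hypothetical usage with the standard boto3 rds-data client (includeResultMetadata must be set, or columnMetadata will be missing from the response; the ARN variables are assumed to be defined elsewhere):

import boto3

client = boto3.client('rds-data')
response = client.execute_statement(
    resourceArn=resource_arn,   # assumed to be defined
    secretArn=secret_arn,
    database=database,
    sql='SELECT * FROM user WHERE id = :id',
    parameters=[{'name': 'id', 'value': {'longValue': 1}}],
    includeResultMetadata=True  # required for columnMetadata
)
rows = parse_data_service_response(response)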
I've added to the great answer already provided by C. Slack to deal with AWS returning { "isNull": true } in the JSON for empty nullable character fields.
Here's my function to handle this by returning an empty string value instead, which is what I would expect anyway:
const parseRDSdata = (input) => {
    let columns = input.columnMetadata.map(c => { return { name: c.name, typeName: c.typeName }; });
    let parsedData = input.records.map(row => {
        let response = {};
        row.forEach((v, i) => {
            // test the typeName in the column metadata, and also the key name in the values -
            // we need to cater for a return value of { "isNull": true }
            if ((columns[i].typeName == 'VARCHAR' || columns[i].typeName == 'CHAR') && Object.keys(v)[0] == 'isNull' && Object.values(v)[0] == true)
                response[columns[i].name] = '';
            else
                response[columns[i].name] = Object.values(v)[0];
        });
        return response;
    });
    return parsedData;
}

Azure Cosmos query to convert into List

This is my JSON data, which is stored in Cosmos DB:
{
    "id": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
    "Name": "Name",
    "keyData": {
        "Keys": [
            "Government",
            "Training",
            "support"
        ]
    }
}
Now I want to write a query to eliminate the keyData wrapper and get only the Keys (like below):
{
    "userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
    "Name": "Name",
    "Keys": [
        "Government",
        "Training",
        "support"
    ]
}
So far I have tried a query like
SELECT c.id, k.Keys FROM c
JOIN k IN c.keyPhraseBatchResult
which is not working.
Update 1:
After trying Sajeetharan's suggestion I am now able to get the result, but the issue is that it produces another JSON object inside the array, like:
{
    "id": "ee885fdc-9951-40e2-b1e7-8564003cd554",
    "keys": [
        { "serving": "Government" },
        { "serving": "Training" },
        { "serving": "support" }
    ]
}
Is there any way to extract only the array, without the key-value pairs again, so it matches the desired output shown above?
You could try this one,
SELECT C.id, ARRAY(SELECT VALUE serving FROM serving IN C.keyData.Keys) AS Keys FROM C
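If you are querying from .NET, a rough sketch of running that query and collecting the results into a list (this assumes the v3 Microsoft.Azure.Cosmos SDK, an existing container reference, and a hypothetical KeyDoc POCO with id and Keys properties):

var queryDef = new QueryDefinition(
    "SELECT C.id, ARRAY(SELECT VALUE serving FROM serving IN C.keyData.Keys) AS Keys FROM C");

var results = new List<KeyDoc>(); // KeyDoc: string id; List<string> Keys
using (FeedIterator<KeyDoc> iterator = container.GetItemQueryIterator<KeyDoc>(queryDef))
{
    while (iterator.HasMoreResults)
        results.AddRange(await iterator.ReadNextAsync());
}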
Please use a Cosmos DB stored procedure to implement your desired format, based on @Sajeetharan's SQL.
function sample() {
    var collection = getContext().getCollection();
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT C.id, ARRAY(SELECT serving FROM serving IN C.keyData.Keys) AS keys FROM C',
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found');
            } else {
                var response = getContext().getResponse();
                // flatten [{ serving: "x" }, ...] into ["x", ...] for each doc
                for (var i = 0; i < feed.length; i++) {
                    var keyArray = feed[i].keys;
                    var array = [];
                    for (var j = 0; j < keyArray.length; j++) {
                        array.push(keyArray[j].serving);
                    }
                    feed[i].keys = array;
                }
                response.setBody(feed);
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}

Complex Queries in DynamoDB

I am working on an application that uses DynamoDB.
Is there a way I can create a GSI with multiple attributes? My aim is to query the table with a query of the following kind:
(attrA.val1 === someVal1 AND attrB.val2 === someVal2 AND attrC.val3 === someVal3)
OR (attrA.val4 === someVal4 AND attrB.val5 === someVal5 AND attrC.val6 === someVal6)
I am aware we can use Query when we have the key attribute, and when the key attribute is unknown we can use Scan operations. I am also aware of GSIs if we need to query on non-key attributes. But I need some help in this scenario: is there a way to model a GSI to suit the above query?
I have the below item (i.e. data) in my Movies table. The below query params work fine for me.
You can add the third attribute as present in the OP. It should work fine.
DynamoDB does support complex conditions in FilterExpression.
Query the table based on some condition:
var table = "Movies";
var year_val = 1991;
var title = "Movie with map attribute";
var params = {
TableName : table,
KeyConditionExpression : 'yearkey = :hkey and title = :rkey',
FilterExpression : '(records.K1 = :k1Val AND records.K2 = :k2Val) OR (records.K3 = :k3Val AND records.K4 = :k4Val)',
ExpressionAttributeValues : {
':hkey' : year_val,
':rkey' : title,
':k3Val' : 'V3',
':k4Val' : 'V4',
':k1Val' : 'V1',
':k2Val' : 'V2'
}
};
docClient.query(params, function(err, data) {
if (err) {
console.error("Unable to read item. Error JSON:", JSON.stringify(err,
null, 2));
} else {
console.log("GetItem succeeded:", JSON.stringify(data, null, 2));
}
});
Result:
GetItem succeeded: {
    "Items": [
        {
            "title": "Movie with map attribute",
            "yearkey": 1991,
            "records": {
                "K3": "V3",
                "K4": "V4",
                "K1": "V1",
                "K2": "V2"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1
}