Is it legal to GET all list instances in RESTCONF?

Given the following YANG definitions, in module test:
list machine {
  key "name";
  leaf name {
    type string;
  }
}
and the following data in the data tree:
"machine" : [
{ "name": "a" },
{ "name": "b" },
{ "name": "c" }
]
I want to know whether the following request conforms to RESTCONF:
GET /restconf/data/test:machine
This request is expected to return all the list instances.
I am asking because I do not have a clear understanding of the wording in the RESTCONF specification. RFC 8040, Section 3.5.3, says:
If a data node in the path expression is a YANG list node, then the
key values for the list (if any) MUST be encoded according to the
following rules:
o The key leaf values for a data resource representing a YANG list
MUST be encoded using one path segment [RFC3986].
o If there is only one key leaf value, the path segment is
constructed by having the list name, followed by an "=" character,
followed by the single key leaf value.
Which of the following two meanings does the "(if any)" carry? (The key statement is not mandatory for a non-configuration list, so there are both keyed and keyless lists.)
Interpretation 1: Users are free to specify key values for keyed lists. The "(if any)" means "if key values are specified." If they are specified, they MUST follow the rules above; if they are not, the rules simply do not apply. Taking my YANG definitions as an example, both of these requests would then be correct:
GET /restconf/data/test:machine // get all list instances
GET /restconf/data/test:machine=a // get the list instance keyed "a"
Interpretation 2: Users have to specify key values for keyed lists. The "(if any)" means "if the list has keys at all." Under this reading:
GET /restconf/data/test:machine // wrong request; can't get all list instances
GET /restconf/data/test:machine=a // ok, get the list instance keyed "a"
The second interpretation is suggested by the similar wording in the same section for leaf-lists:
If a data node in the path expression is a YANG leaf-list node, then
the leaf-list value MUST be encoded according to the following rules:
o The identifier for the leaf-list MUST be encoded using one path
segment [RFC3986].
o The path segment is constructed by having the leaf-list name,
followed by an "=" character, followed by the leaf-list value
(e.g., /restconf/data/top-leaflist=fred).
The wording for leaf-lists has no "(if any)", so you cannot use a URL like /restconf/data/top-leaflist; you have to append =fred to identify a leaf-list instance. So if leaf-list instances cannot be retrieved as a whole, why can list instances be retrieved as a whole (under interpretation 1)? A leaf-list instance and a list instance are both data resources; they are equivalent in concept.
Thanks,

The correct interpretation is 1. The "if any" refers to key values, not YANG key statements. It is okay for a RESTCONF GET to fetch more than one instance of a list, but only in JSON encoding (well-formed XML does not allow multiple root elements). This is also the only way to retrieve keyless non-configuration (state) list instances.
If only a single list entry were allowed to be retrieved via GET, the corresponding RFC section would state this explicitly with a MUST. If you look at the wording for DELETE in Section 4.7, paragraph 3, such text exists, but there is no equivalent for GET.
It is also okay to retrieve multiple leaf-list instances. This may be the only way to retrieve some such instances, since (in YANG 1.1) duplicate values are allowed for non-configuration leaf-lists. The missing "if any" is most likely an editorial omission.
Note that the text in Section 3.5.3 only explains how URIs are formed; it says nothing about how RESTCONF operations utilize those URIs.
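As a concrete illustration of interpretation 1, both kinds of GET can be issued from any HTTP client. Below is a minimal Python sketch using the requests library; the server address, credentials, and headers are assumptions for illustration, not mandated by the RFC:
import requests

# Hypothetical RESTCONF server; host and credentials are assumptions.
BASE = "https://example.com/restconf/data"
HEADERS = {"Accept": "application/yang-data+json"}
AUTH = ("admin", "admin")

# GET on the list itself: the JSON reply carries every instance, e.g.
# {"test:machine": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}
all_machines = requests.get(f"{BASE}/test:machine", headers=HEADERS, auth=AUTH)
print(all_machines.json())

# GET on a keyed instance: the reply contains only the entry keyed "a", e.g.
# {"test:machine": [{"name": "a"}]}
one_machine = requests.get(f"{BASE}/test:machine=a", headers=HEADERS, auth=AUTH)
print(one_machine.json())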

How can I use computed variables in a key:value list in AppleScript?

How do I generate an array (list) like
set myList to {key1:"foo1", key2:"foo2"}
where the keys are incremented in a repeat loop?
This is what I have tested so far:
-- hardcoded key:value pair works fine
set mySimpleList to {key1:"foo1"} --> result OK: {key1:"foo1"}
-- generated value works too
set i to 1
set myValue to "foo" & i
set myGoodList to {key1:myValue} --> result OK: {key1:"foo1"}
-- generated key fails
set i to 1
set k to "key" & i --> "key1"
set myValue to "foo" & i
set myFailedList to {k:myValue} --> failed: {k:"foo1"}
Where is the error? Are there any workarounds?
Records are AppleScript's half-arsed version of what other languages refer to as dictionaries or associative arrays (these are slightly different entities, but the minutiae aren't significant for now). Unlike dictionaries, which have accessible keys and values one can operate on, records have inaccessible keys (called properties) and accessible values for a known, named property.
Values in the record are read by way of syntax that takes the form <property> of <record>. Because the property is an identifier, and not a string, it can't be substituted out for a proxy, such as a variable, since this is just another identifier that will be treated as a property reference that likely doesn't exist in the record.
Your easiest solution is to use paired lists of the form {<key>, <value>}. Lists are easy to use, if not especially efficient at what they do. It does mean you'd have to write your own handlers for, say, finding a specific value given a key, but that's reasonably straightforward.
Of course, since you want keys that increment in value, that's exactly what a straightforward list is: indexed values ordered by integer keys that start at 1 and increment with each element to its right.
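For comparison only: the generated-key task from the question is trivial in a language with first-class dictionaries, which is exactly the facility AppleScript records lack at runtime. A Python sketch of what the asker is effectively reaching for (an illustration, not an AppleScript workaround):
# Keys computed at runtime, which AppleScript records cannot do.
my_dict = {}
for i in range(1, 4):
    my_dict["key" + str(i)] = "foo" + str(i)

print(my_dict)  # {'key1': 'foo1', 'key2': 'foo2', 'key3': 'foo3'}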
NSDictionary is a Foundation class, accessible through AppleScriptObjC, that allows conversion of records into dictionaries. A dictionary is represented as an opaque reference to an Objective-C object, so it lacks the visual form of records. It does allow manipulation of keys and values, but the trade-off is the need to convert back and forth between the opaque type and an AppleScript type.
Technically, a record's properties are compiled into the script, so they aren't something one would expect to generate on the fly at runtime. If you really, really want to, you can actually do this, but the work involved may outweigh the benefit:
set entries to {"key1", "foo1", "key2", "foo2", ...}
set _Ref to {«class usrf»:entries}'s contents as anything
--> {key1:"foo1", key2:"foo2", ...}
Unwrapping (serialising) a record is irksome, and relies on the clipboard, which isn't ideal:
set the clipboard to _Ref
get the clipboard as list
--> {"key1", "foo1", "key2", "foo2", ...}
I don't recommend doing any of this, by the way, but it's there if you want to.

How to filter on NULL?

"order (S)","method (NULL)","time (L)"
"/1553695740/Bar","true","[ { ""N"" : ""1556593200"" }, { ""N"" : ""1556859600"" }]"
"/1556439461/adasd","true","[ { ""N"" : ""1556593200"" }, { ""N"" : ""1556679600"" }]"
"/1556516482/Foobar","cheque","[ { ""N"" : ""1556766000"" }]"
How do I scan, or query for that matter, on empty "method" attribute values? https://s.natalian.org/2019-04-29/null.mp4
Unfortunately, the DynamoDB console offers a simple GUI and assumes the operations you want to perform all target the same type. When you select filters on columns of type "NULL", it only allows exists or not exists. This makes sense, since a column containing only NULL datatypes can either exist or not exist.
What you have here is a column that contains multiple datatypes (NULL is a different datatype than String). There are many ways to filter what you want, but I don't believe they are available to you in the console. Here is an example of how you could filter the dataset via the AWS CLI (note: since your column is named with a reserved word, method, you will need to alias it with an expression attribute name):
Using Filter expressions
$ aws dynamodb scan --table-name plocal --filter-expression '#M = :null' --expression-attribute-values '{":null":{"NULL":true}}' --expression-attribute-names '{"#M":"method"}'
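If you are filtering from code rather than the CLI, the same scan can be expressed with boto3 in Python. A minimal sketch, assuming the table name plocal from the question:
import boto3

# Low-level client call mirroring the CLI scan above.
client = boto3.client("dynamodb")

response = client.scan(
    TableName="plocal",
    FilterExpression="#M = :null",
    ExpressionAttributeNames={"#M": "method"},            # "method" is a reserved word
    ExpressionAttributeValues={":null": {"NULL": True}},  # match NULL-typed values
)
print(response["Items"])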
An option to consider to avoid this would be to update your logic to write some sort of filler string value instead of a null or empty string when writing your data to the database (e.g., "None" or "N/A"). Then you could operate solely on Strings and search on that value instead.
DynamoDB currently does not allow String values that are the empty string and will give you errors if you try to put such items directly. To make this "easier", many of the SDKs provide mappers/converters from objects to DynamoDB items, and this usually involves converting empty strings to Null types as a way of working around the no-empty-strings rule.
If you need to differentiate between null and "", you will need to write some custom logic to marshal/unmarshal empty strings to a unique string value (e.g., "__EMPTY_STRING") when they are stored in DynamoDB.
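A minimal sketch of that custom logic in Python; the sentinel and the function names are illustrative choices, not part of any SDK:
# Hypothetical helpers for round-tripping empty strings through DynamoDB.
# The sentinel is arbitrary; pick any value that cannot occur in real data.
EMPTY_SENTINEL = "__EMPTY_STRING"

def marshal(value):
    # Replace "" with the sentinel before writing to DynamoDB.
    return EMPTY_SENTINEL if value == "" else value

def unmarshal(value):
    # Restore "" when reading back from DynamoDB.
    return "" if value == EMPTY_SENTINEL else value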
I'm pretty sure that there is no way to filter using the console. But I'm guessing that what you really want is to use such a filter in code.
DynamoDB has a very peculiar way of storing NULLs. There is a "NULL" data type which basically represents the concept of null values but it really is sort of like a boolean.
If you have the opportunity to change the data type of that attribute to be a string, or numeric, I strongly recommend doing so. Then you'll be able to create much more powerful queries with filter conditions to match what you want.
If the data already exists and you don't have a significant number of items that need to be updated, I recommend creating a new attribute to represent your data and backfilling.
Just following up on the comments. If you prefer using the mapper, you can customize how it marshals certain attributes that may be null/empty. Have a look at the Go SDK encoder implementation for some examples: https://git.codingcafe.org/Mirrors/aws/aws-sdk-go/blob/9b5aaeba7a51edcf3f87bda525a08b04b90d2ef8/service/dynamodb/dynamodbattribute/encode.go
I was able to do this inside a FilterExpression:
attribute_type(MyProperty, :nullType) - Where :nullType is a string with value NULL. This one finds null entries.
attribute_type(MyProperty, :stringType) - Where :stringType is a string with value S. This one finds non-null entries.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Syntax
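With boto3's resource-level API, the same attribute_type filters can be written with the conditions helpers. A sketch assuming the table name plocal and the method attribute from the question:
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("plocal")

# Items whose "method" attribute is of type NULL (null entries).
nulls = table.scan(FilterExpression=Attr("method").attr_type("NULL"))

# Items whose "method" attribute is a String (non-null entries).
strings = table.scan(FilterExpression=Attr("method").attr_type("S"))

print(nulls["Items"], strings["Items"])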

How to save non-overlapping range in DynamoDB

Suppose I want to store some entities in DynamoDB, and each entity is defined by 3 crucial attributes:
group_id [string] : the id of the group the entity belongs to.
from [int] : the start of the range (inclusive).
to [int] : the end of the range (inclusive).
And the constraint is:
Within a group, overlapping ranges are not allowed.
Across groups, however, overlapping is allowed.
Here are a few examples of entries:
("abc",10,21)
("xyz",13,27)
("xyz",45,61)
("abc",39,57)
("abc",81,93)
As you can see, there are no overlapping ranges within a group in the above list. Now, if we want to add an entry to the above list, here are a few examples of what is allowed and what is not:
("abc",19,27) is not allowed, as its overlapping with the first item.
("abc",23,27) is allowed.
("xyz",39,47) is not allowed, as its overlapping with the third item.
("xyz",39,55) is allowed.
Given this scenario, my question is: how should the schema be designed, and how should it be used, so that it prevents users from inserting overlapping ranges for a given group?
If the solution requires some (meta) attributes to be added to the schema, I'm fine with it; I'm fine with anything as long as it solves the problem. Other related questions to ponder: should we add each entity as a separate row, or should all entities belonging to a single group go into one row (with a list/map attribute)?
The possible queries on the table would be like:
Given a group_id and a super-range {from and to}, return all entries having the same group_id, and whose from and to fall within the limit defined by the super-range (inclusive).
Specifically, what are the options for choosing the partition key, range key, and secondary indexes (local/global) based on the queries listed above?
Step 1: Create the partition key on the group id and the range key on (from, to).
Step 2: The restriction on the from and to fields needs to be enforced at the application level; 'check constraints' to enable this functionality are not available at the database level in DynamoDB.
One way to do this is to use the group id as the partition key, "from" as the range key, and "to" as a secondary range key (a Local Secondary Index).
Then, every time you want to check a new item with a range between x and y, you need to perform the following checks (see the sketch after the list):
Check that there is no "from" value between x and y
Check that there is no "to" value between x and y
Check that the item with largest value of "from" such that "from" is smaller than x also has "to" smaller than x
Check that the item with smallest value of "to" such that "to" is larger than y also has "from" larger than y
Given that all these checks use range key queries, this should be fairly fast.
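A boto3 (Python) sketch of that overlap test; the table and attribute names are illustrative. Because stored ranges never overlap, the four checks collapse into a single query: look at the existing range with the largest "from" not exceeding y, and the new range [x, y] overlaps something iff that neighbour's "to" is at least x. A real implementation would also need a conditional write or transaction so the check and the insert are not racy:
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: partition key "group_id", sort key "from", plus a
# "to" attribute. boto3's Key() helper aliases the reserved words for us.
table = boto3.resource("dynamodb").Table("ranges")

def overlaps(group_id, x, y):
    # Fetch the stored range with the largest "from" that is <= y.
    resp = table.query(
        KeyConditionExpression=Key("group_id").eq(group_id) & Key("from").lte(y),
        ScanIndexForward=False,  # descending by "from"
        Limit=1,
    )
    items = resp["Items"]
    # Overlap iff that neighbour reaches back to x or beyond.
    return bool(items) and items[0]["to"] >= x

# Usage: only insert when no overlap is found.
if not overlaps("abc", 23, 27):
    table.put_item(Item={"group_id": "abc", "from": 23, "to": 27})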
Note that if your range options are limited (say, a range can only be between 1 and 365), then much simpler solutions are possible.

Boost ptree top level array

I would like to have write_json output a top-level array, something to the effect of:
[{...},{...},{...},...,{...}]
But when I pass a list to write_json, it produces JSON full of blank keys.
{"":{...},"":{...},"":{...},..."":{...}}
Using add_child actually respects the array and gives me the closest thing:
{"Some Key":[{...},{...},{...},...,{...}]}
But that's still not what I want.
Any idea how to make that array top level?
Boost does not have a JSON library (nor does it have an XML library). It has a Property Tree library (which happens to include a JSON compatible representation).
The limitations you run into are clearly documented right there: http://www.boost.org/doc/libs/1_62_0/doc/html/property_tree/parsers.html#property_tree.parsers.json_parser
The property tree dataset is not typed, and does not support arrays as such. Thus, the following JSON / property tree mapping is used:
JSON objects are mapped to nodes. Each property is a child node.
JSON arrays are mapped to nodes. Each element is a child node with an empty name. If a node has both named and unnamed child nodes, it cannot be mapped to a JSON representation.
JSON values are mapped to nodes containing the value. However, all type information is lost; numbers, as well as the literals "null", "true" and "false" are simply mapped to their string form.
Property tree nodes containing both child nodes and data cannot be mapped.
JSON round-trips, except for the type information loss.
It goes on to show an example of EXACTLY what you run into.

DynamoDB create index on map or list type

I'm trying to add an index to an attribute inside a map object in DynamoDB and can't seem to find a way to do so. Is this supported, or are indexes really only allowed on scalar values? The documentation around this seems quite sparse. I was hoping the indexing functionality would be similar to MongoDB's, but so far the approach I've taken, referencing the attribute to index using dot syntax, has not been successful. Any help or additional info is appreciated.
Indexes can be built only on top-level JSON attributes. In addition, range keys must be scalar values in DynamoDB (one of String, Number, Binary, or Boolean).
From http://aws.amazon.com/dynamodb/faqs/:
Q: Is querying JSON data in DynamoDB any different?
No. You can create a Global Secondary Index or Local Secondary Index
on any top-level JSON element. For example, suppose you stored a JSON
document that contained the following information about a person:
First Name, Last Name, Zip Code, and a list of all of their friends.
First Name, Last Name and Zip code would be top-level JSON elements.
You could create an index to let you query based on First Name, Last
Name, or Zip Code. The list of friends is not a top-level element,
therefore you cannot index the list of friends. For more information
on Global Secondary Indexing and its query capabilities, see the
Secondary Indexes section in this FAQ.
Q: What data types can be indexed?
All scalar data types (Number, String, Binary, and Boolean) can be
used for the range key element of the local secondary index key. Set,
list, and map types cannot be indexed.
I have tried hashing str(object) while storing the object itself separately. The hash gives me an integer (a DynamoDB Number), and I am able to use a secondary index on it. Below is a sample in Python. It is important to use a hash function that generates the same hash for a given value every time; Python's built-in hash() is randomized between runs, so I am using SHA-1 instead.
# Generate a small integer hash that is stable across runs:
import hashlib

def hash_8_digits(source):
    return int(hashlib.sha1(source.encode()).hexdigest(), 16) % (10 ** 8)
The idea is to keep the item small while keeping the entity intact; i.e., rather than serializing the object, storing it as a string, and changing the whole way the object is used, I store a smaller hash value alongside the actual list or map.
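A short usage sketch of this pattern; the table name, attribute names, and GSI name are hypothetical, and the GSI on the numeric hash attribute would need to be defined in the table schema:
import hashlib
import json
import boto3
from boto3.dynamodb.conditions import Key

def hash_8_digits(source):
    # Same stable hash as above.
    return int(hashlib.sha1(source.encode()).hexdigest(), 16) % (10 ** 8)

table = boto3.resource("dynamodb").Table("things")  # hypothetical table

settings = {"color": "red", "size": 10}
# Serialize deterministically (sorted keys) so equal maps hash equally.
digest = hash_8_digits(json.dumps(settings, sort_keys=True))

# Store the map untouched, with its hash alongside as a Number.
table.put_item(Item={"id": "thing-1", "settings": settings, "map_hash": digest})

# Later, find items by the hash of the map via the (hypothetical) GSI.
resp = table.query(
    IndexName="map_hash-index",
    KeyConditionExpression=Key("map_hash").eq(digest),
)
print(resp["Items"])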