Apex REST Data Source does not recognize all columns

My REST data source looks like this:
[screenshot: REST data source]
But APEX can't recognize:
"categories": {
  "names": ["XYZ", "ABC"]
}
The Data Profile looks like this:
[screenshot: Data Profile]
[screenshot: Data Profile 2]
Has anyone had a problem with the parser?
Thank you in advance.

APEX REST Data Sources cannot deal with nested arrays: since all APEX components work on flat, table-like data, REST Data Sources want to treat REST response data the same way.
In your case, the top-level information (which your screenshots indicate) is a single row with multiple attributes (which then map to columns in APEX). Your "categories" attribute would then be a "nested table", as it contains two values for the single row.
The same situation applies if the JSON contains an array at the top level; APEX then treats each array member as a "row" and the attributes of each member as "columns". However, if one of these attributes is -again- an array, we have the nested table again.
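To illustrate with a made-up response (the "title" attribute is invented here): the top-level array members map cleanly to rows, but the nested "names" array inside each member has no flat column to land in:
[
  {
    "title": "Row 1",
    "categories": { "names": ["XYZ", "ABC"] }
  },
  {
    "title": "Row 2",
    "categories": { "names": ["DEF"] }
  }
]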
What you can do is manually add the categories column to the Data Profile and choose the "JSON Document" type. So navigate to your REST Data Source and its Data Profile, edit the data profile, and add a new column.
Column Type: Data
Column Name: {as you wish}
Selector: categories
Data Type: JSON Document
When using the REST Source, e.g. in a report, the CATEGORIES column will contain ["XYZ","ABC"].
I hope this helps.


Gremlin load data format

I am having difficulty understanding the Gremlin data load format (for use with Amazon Neptune).
Say I have a CSV with the following columns:
date_order_created
customer_no
order_no
zip_code
item_id
item_short_description
The requirements for the Gremlin load format are that the data is in an edge file and a vertex file.
The edge file must have the following columns: id, label, from and to.
The vertex file must have: id and label columns.
I have been referring to this page for guidance: https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-format-gremlin.html
It states that in the edge file, the from column must equate to "the vertex ID of the from vertex."
And that (in the edge file) the to column must equate to "the vertex ID of the to vertex."
My questions:
Which columns need to be renamed to id, label, from and to? Or should I add new columns?
Do I only need one vertex file or multiple?
You can have one or more of each CSV file (nodes, edges), but it is recommended to use fewer large files rather than many smaller ones. This allows the bulk loader to split each file up and load it in a parallel fashion.
As to the column headers, let's say you had a node (vertex) file of the form:
~id,~label,name,breed,age:Int
dog-1,Dog,Toby,Retriever,11
dog-2,Dog,Scamp,Spaniel,12
The edge file (for dogs that are friends) might look like this:
~id,~label,~from,~to
e-1,FRIENDS_WITH,dog-1,dog-2
In Amazon Neptune, so long as they are unique, any user-provided string can be used as a node or edge ID. So in your example, if customer_no is guaranteed to be unique, rather than store it as a property called customer_no you could instead make it the ~id. This can help later with efficient lookups. You can think of the ID as being a bit like a Primary Key in a relational database.
So in summary, you need to always provide the required fields like ~id and ~label. They are accessed differently using Gremlin steps such as hasLabel and hasId once the data is loaded. Columns with names from your domain like order_no will become properties on the node or edge they are defined with, and will be accessed using Gremlin steps such as has('order_no', 'ABC-123').
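To make that lookup difference concrete, here is a small sketch using the gremlinpython client (the endpoint URL is a placeholder, and the data is the dog example above):
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# endpoint is a placeholder; Neptune serves Gremlin at wss://<cluster-endpoint>:8182/gremlin
g = traversal().withRemote(
    DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g'))

g.V('dog-1').valueMap(True).toList()                        # look a vertex up by its ~id
g.V().hasLabel('Dog').has('breed', 'Retriever').toList()    # filter on ~label and a property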
To follow on from Kelvin's response and provide some further detail around data modeling...
Before getting to the point of loading the data into a graph database, you need to determine what the graph data model will look like. This is done by first deriving a "naive" approach of how you think the entities in the data are connected and then validating this approach by asking the relevant questions (which will turn into queries) that you want to ask of the data.
By way of example, I notice that your dataset has information related to customers, orders, and items. It also has some relevant attributes related to each. Knowing nothing about your use case, I may derive a "naive" model that looks like:
(Customer) -[has_ordered]-> (Order) -[contains]-> (Item)
What you have with your original dataset appears similar to what you might see in a relational database as a Join Table. This is a table that contains multiple foreign keys (the ids/no's fields) and maybe some related properties for those relationships. In a graph, relationships are materialized through the use of edges. So in this case, you are expanding this join table into the original set of entities and the relationships between each.
To validate that we have the correct model, we then want to look at the model and see if we can answer relevant questions that we would want to ask of this data. By example, if we wanted to know all items purchased by a customer, we could trace our finger from a customer vertex to the item vertex. Being able to see how to get from point A to point B ensures that we will be able to easily write graph queries for these questions later on.
After you derive this model, you can then determine how best to transform the original source data into the CSV bulk load format. So in this case, you would take each row in your original dataset and convert that to:
For your vertices:
~id, ~label, zip_code, date_order_created, item_short_description
customer001, Customer, 90210, ,
order001, Order, , 2023-01-10,
item001, Item, , , "A small, non-descript black box"
Note that I'm reusing the numbers/IDs for the customer, item, and order as the IDs for their related vertices. This is always good practice, as you can then easily look up a customer, order, or item by that ID. Also note that the CSV becomes a sparse 2-dimensional array of related entities and their properties: I'm only providing the properties related to each type of vertex, and by leaving the others blank, they will not be created.
For your edges, you then need to materialize the relationships between each entity, based on the fact that they appear in the same row of your source "join table". These relationships did not previously have a unique identifier, so we need to create one (it can be arbitrary or based on other parts of the data; it just needs to be unique). I like using the vertex IDs of the two related vertices plus the label of the relationship when possible. For the ~from and ~to fields, we include the vertex the relationship comes from and the vertex it points to, respectively:
~id, ~label, ~from, ~to
customer001-has_ordered-order001, has_ordered, customer001, order001
order001-contains-item001, contains, order001, item001
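If it helps, here is a minimal Python sketch of that row-by-row expansion (file names are placeholders, and the de-duplication is the simplest possible):
import csv

# read the source "join table" and emit sparse vertex rows plus edge rows
with open('orders.csv', newline='') as src, \
     open('vertices.csv', 'w', newline='') as vf, \
     open('edges.csv', 'w', newline='') as ef:
    vertices = csv.writer(vf)
    edges = csv.writer(ef)
    vertices.writerow(['~id', '~label', 'zip_code', 'date_order_created', 'item_short_description'])
    edges.writerow(['~id', '~label', '~from', '~to'])
    seen = set()  # emit each vertex only once, even if it appears in many rows
    for row in csv.DictReader(src):
        c, o, i = row['customer_no'], row['order_no'], row['item_id']
        if c not in seen:
            vertices.writerow([c, 'Customer', row['zip_code'], '', ''])
            seen.add(c)
        if o not in seen:
            vertices.writerow([o, 'Order', '', row['date_order_created'], ''])
            seen.add(o)
        if i not in seen:
            vertices.writerow([i, 'Item', '', '', row['item_short_description']])
            seen.add(i)
        # one edge per relationship, with the two vertex IDs baked into the edge ID
        edges.writerow([f'{c}-has_ordered-{o}', 'has_ordered', c, o])
        edges.writerow([f'{o}-contains-{i}', 'contains', o, i])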
I hope that adds some further color and reasoning around how to get from your source data and into the format that Kelvin shows above.

Make a form with a dynamic number of inputs in a Django template

The crux of the problem is this:
I have a Schema that will later be converted to a .csv file. However, I need to populate this schema with data. To do this, I need columns that have fields (name, data type, order, etc.).
But I don't know how many columns the Schema will have.
Therefore, the task is: create a form with a dynamic number of columns.
While I was writing the question, I had an idea: create a "Scheme" table in the database and bind a "Column" table to it.
Thus, when you click "add column", a new instance appears that is already bound to this Schema.
Am I thinking in the right direction, or do you have another idea?
The picture below conveys the essence of the problem more accurately.
Thank you in advance.
If I were in your position, I would utilize HTMX. With that, when the "add column" button is pressed, a new row appears.
I would also do it the other way around from what I understand of your post: I would bind the column to the scheme like this:
from django.db import models

class Scheme(models.Model):
    # fields of the scheme
    ...

class Column(models.Model):
    # each column is bound to its parent scheme
    scheme = models.ForeignKey(Scheme, on_delete=models.CASCADE)
    # fields of the column
    ...
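On the form side, Django's inline formsets pair naturally with that model; a minimal sketch (the factory arguments are illustrative, and HTMX would re-render the formset with one more blank row on each "add column" click):
from django.forms import inlineformset_factory

# one sub-form per Column row, all bound to the parent Scheme;
# extra=1 renders one blank row for the next column to be added
ColumnFormSet = inlineformset_factory(Scheme, Column, fields='__all__', extra=1)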

Dynamic values for workspace, dataset and table when adding a row to a Power BI dataset

I am using the 'Add rows to a dataset' Power BI connector, which should point to different streaming datasets.
I would like to fill out the workspace, dataset and table values dynamically. Is this possible?
I've included an image of the connector I am working with. So far I have tried initializing a string variable with UUIDs, but that doesn't work.
EDIT: Added image of error when I try to add workspace as a dynamic value
I managed to get it to work, so to make sure you're doing the right thing, you need to do something like the following.
First of all, create your string variables that store the workspace name, the GUID of the dataset, and the name of the table ...
If you want to be sure of the names, etc. then do it using the dropdowns and then look at the code view to get the right values ...
Now to assign the variables, make sure you select the following option ...
... now you can select your variables.
If you're going to make the thing dynamic, you need to pass in a Body that has a serialized Payload. This looks different from what the connector receives when you're using the dropdowns to select your workspace, dataset and table.
This is an example of the payload (must be an array) ...
[ { "DateTime": "2022-03-25T12:32:20.063Z", "Value": 120 } ]
... and as you can see from the below, the Body needs to have a single property called Payload and that will contain the serialized JSON.
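Put differently, the Body you pass would look something like this (the value being the example array above, serialized into a string):
{
  "Payload": "[{\"DateTime\":\"2022-03-25T12:32:20.063Z\",\"Value\":120}]"
}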
When I do all of that, it works ...

Mapping user spreadsheet columns to database fields

I’m not sure where to start on this project. I know how to read the contents of the excel spreadsheet, I know how to identify the header row, I know how to loop over the contents. I believe I have the UX portion worked out but I am not sure how to process the data.
I’ve googled and only found .Net solutions but I’m looking for a ColdFusion/Lucee solution.
I have a working form allowing me to map a user's spreadsheet columns to my database values (this is being kept simple for this post; the user does not have direct access to the database).
Now that I have my data, I'm not sure how to loop over the data results. I believe there will be several loops (an outer and an inner). Then of course I also need to loop over the file contents, but I think if I can get the headings mapped out, I can figure out the rest.
Any good links, tutorials, or guides would be greatly appreciated.
Some pseudo code might be enough to get me started.
User uploads form
System reads headers and content.
User is presented a form with a list of columns from their uploaded spreadsheet to match with available database fields (e.g. “column1” matches “customer name”).
User submits form.
Now what?
UPDATED
Here is what the data looks like AFTER the mapping has been done in my form. The column delimiter is ::: and, within each column, ||| separates the ID from the selected column value. I've included both the ID and the column value since I plan on displaying the mapping again as a confirmation; having the ID saves a trip to the database.
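Purely to illustrate how that string splits apart (sketched in Python rather than CFML, with a made-up sample value):
# hypothetical mapped value: entries separated by :::, each entry holding id|||column value
raw = "12|||customer name:::7|||zip code"
pairs = [entry.split('|||') for entry in raw.split(':::')]
# pairs == [['12', 'customer name'], ['7', 'zip code']]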
If I understand correctly, your question is: how do you provide the user a form allowing them to map their spreadsheet columns to those of the database?
Since you have their spreadsheet column names, and you have the database column names, then this problem is essentially a UI/UX problem. You need to show both lists, and allow the user to map them. I can imagine several approaches to this. My first thought would be some sort of drag/drop operation, as follows:
Create a list of boxes, one for each field in your database table, and include the field name in (or above) the box. I'll call this the db field list. Then, create another list for each column from the spreadsheet, which I'll call the spreadsheet column list. The user would drag/drop items from the spreadsheet column list to the db field list.
When a mapping has been completed by the user, you would store the column/field names as data on the DOM element of the db field list box. Then, upon submission, you would acquire the mapping data by visiting each box and adding it to an array. Then you would serialize that array into JSON and send it to your form submission handler.
This could be difficult or easy, depending on your knowledge of UI implementations using JavaScript. jQuery makes this easy (if you know jQuery). There's even a jQuery UI plugin that does this: https://jqueryui.com/droppable/.
A quick search for javascript drag drop would help, and here's a few articles I found:
https://www.w3schools.com/html/html5_draganddrop.asp
https://medium.com/quick-code/simple-javascript-drag-drop-d044d8c5bed5
You would also need to submit the array of mappings using javascript. You could search for that as well, and here's an article I found:
https://codereview.stackexchange.com/questions/94493/submit-an-array-as-an-html-form-value-using-javascript

Making an Oracle Apex report table element read-only

Greetings:
Is there any way to make an APEX report table cell (or even the entire report itself) conditionally read-only in APEX 3.2? I don't see the "read-only" box anywhere in the options; I've tried searching everywhere.
Thanks in advance!
Since the whole tabular form is to be made read-only for particular users, you can do this at rendering time rather than using JavaScript. However, you would need two copies of each column:
Column to be displayed for an authorised user, not readonly
Identical column to be displayed for a non-authorised user, with Element Attributes set to readonly="readonly"
Authorisation schemes can be used to control which columns are displayed to the user.
I was hoping to find a way to do this with a single column and a dynamic value for Element Attributes, but I couldn't get it to work.
OK, I had an error in concept: I wanted to make a tabular form read-only, which is why I couldn't see the "read-only" box. If you open the source of the generated page, each column cell is given an id with the following naming convention:
id="f02_0001"
This table cell is in column 2, row 1. So you can use JavaScript to loop through a column and modify its properties. In this example, I use jQuery:
var payments = $("[id^='f08_']"); // get all cells for column 8
// loop through the items
$.each(payments, function(){
    // inspect the value if needed:
    // alert($(this).val());
    // make whatever changes you want, e.g. make the cell read-only:
    $(this).attr('readonly', 'readonly');
});