How to create an association to a parameterized CDS view?

I'm struggling to find the syntax to create an association from an extension of a parameterized CDS view to another parameterized CDS view. Both views have input parameters with the same names.
I've tried this:
extend view I_AAA with ZZ_AAA
association [0..1] to ZZ_BBB(P_param1 : $parameters.P_param1) as _ZZ_BBB
This gives the error: "unexpected keyword '(' (ON was expected)".
Or this:
extend view I_AAA with ZZ_AAA
association [0..1] to ZZ_BBB as _ZZ_BBB on $parameters.P_param1 = _ZZ_BBB.P_param1
This gives the error: "The entity ZZ_BBB requires parameter P_X".
The documentation states:
If the data source target of a specified CDS association is a CDS entity with input parameters, parameters must be used after the name _assoc to pass actual parameters to them. No parameters can be specified for a CDS association published as an element of a SELECT list.
Putting parameters after _assoc is what I tried in the first example.

I've found a workaround: parameters have to be specified for each data element in the selection list using the following syntax:
extend view I_AAA with ZZ_AAA
  association [0..1] to ZZ_BBB as _ZZ_BBB
    on $projection.operand1 = _ZZ_BBB.operand1
{
  _ZZ_BBB(P_Param1: $parameters.P_Param1, P_Param2: $parameters.P_Param2).Element1 as SomeElement,
  ...
}
I would still like to know whether it is possible to specify the parameters once for the whole association so that they apply to all elements. I'm going to accept this answer in the meantime.

Python Google Prediction example

predict_custom_model_sample(
    "projects/794xxx496/locations/us-central1/xxxx/3452xxx524447744",
    { "instance_key_1": "value", ... },
    { "parameter_key_1": "value", ... }
)
Google gives this example, but I am not understanding the parameter_key and instance_key. To my understanding, I need to send a JSON instance like this:
{"instances": [ {"when": {"price": "1212"}}]}
How can I make it work with the predict_custom_model_sample?
I assume that you are trying this codelab.
Note that there seems to be a mismatch between the function name defined (predict_tabular_model) and the function name used (predict_custom_model_sample).
INSTANCES is an array of one or more JSON values of any type. Each value represents an instance that you are providing a prediction for.
instance_key_1 is just the first key of the key/value pairs that make up an instance.
Similarly, parameter_key_1 is just the first key of the key/value pairs that go into the parameters JSON object.
If your model uses a custom container, your input must be formatted as JSON, and there is an additional parameters field that can be used for your container.
PARAMETERS is a JSON object containing any parameters that your container requires to help serve predictions on the instances. AI Platform considers the parameters field optional, so you can design your container to require it, only use it when provided, or ignore it.
Ref.: https://cloud.google.com/ai-platform-unified/docs/predictions/custom-container-requirements#request_requirements
Here you have examples of inputs for online predictions from custom-trained models.
For the codelab, I believe you can use the sample provided:
test_instance = {
    'Time': 80422,
    'Amount': 17.99,
    …
}
And then call for prediction (remember to check the function name in the notebook cell above):
predict_custom_model_sample(
    "your-endpoint-str",
    test_instance
)
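For completeness, here is a minimal sketch of what such a helper might look like, based on Google's published Vertex AI Python samples; the exact signature in the codelab notebook may differ slightly, and the endpoint string in the usage comment is a placeholder.
from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

def predict_custom_model_sample(endpoint, instance_dict, parameters_dict=None,
                                api_endpoint="us-central1-aiplatform.googleapis.com"):
    # One client per regional API endpoint.
    client = aiplatform.gapic.PredictionServiceClient(
        client_options={"api_endpoint": api_endpoint}
    )
    # "instances" is a list of JSON-like values; each entry is one instance
    # you want a prediction for (this is where your key/value pairs go).
    instances = [json_format.ParseDict(instance_dict, Value())]
    # "parameters" is an optional JSON object for the serving container;
    # an empty dict is fine if your model does not use it.
    parameters = json_format.ParseDict(parameters_dict or {}, Value())
    response = client.predict(endpoint=endpoint,
                              instances=instances,
                              parameters=parameters)
    return response.predictions

# Hypothetical usage:
# predict_custom_model_sample(
#     "projects/<project-number>/locations/us-central1/endpoints/<endpoint-id>",
#     test_instance,
# )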

EMF error: the attribute "XYZ.Attribute_name" is not transient so it must have a data type that is serializable

I am creating an Ecore model. I created an EClass and inside it I want to create a data member that is a list, so I created an EAttribute of type EEList.
However, when I try to create the genmodel file, I get an error saying:
the attribute "XYZ.Attribute_name" is not transient so it must have a data type that is serializable.
It also gives a warning saying:
The generic type associated with the 'EEList' classifier should have 1 type argument(s) to match the number of type parameter(s) of the classifier.
Can anyone tell me what I'm doing wrong? I could not figure out how to set the E in EEList<E>.
First error
The first error will probably disappear after you've fixed the second error. I'll write an explanation here, but you probably don't have to deal with it to solve your problem.
It occurs because, to be saved to disk, the EDataTypes of attributes must be convertible to a text format.
There are two ways to ensure this:
Implement conversion to and from strings for the used EDataType. Standard EMF EDataTypes already do this, but if you have created your own EDataType you have to do it manually.
Use a Java type for the EDataType that is serializable. It must thus implement the Serializable interface and provide serialization operations. Many standard Java classes, such as String and Integer, already do that.
Another solution is to set the Transient property of the attribute to true. Then the attribute will not be saved and its EDataType does not need to be serializable.
Second error
The normal way to create a list attribute is to set the Upper Bound property of the attribute to a value different from 1. To create a list attribute which can contain any number of elements, set Upper Bound to -1, which means Unbounded.
The EAttribute Type should be set to the element type, not to a list type.
The generated Java code will contain a property with the type EList<ElementType>.
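If it helps to see the same idea in code rather than in the Ecore editor, below is a minimal sketch using pyecore, a Python implementation of Ecore (not the Eclipse tooling used above); the class and attribute names are made up for illustration.
from pyecore.ecore import EClass, EAttribute, EString

# Hypothetical EClass with a many-valued attribute: the attribute's type is the
# element type (EString), and upper=-1 means "unbounded", i.e. a list.
Person = EClass('Person')
Person.eStructuralFeatures.append(EAttribute('nicknames', EString, upper=-1))

p = Person()                 # dynamic instance of the EClass
p.nicknames.append('Bob')    # the many-valued feature behaves like a list
p.nicknames.append('Bobby')
print(list(p.nicknames))     # ['Bob', 'Bobby']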

Updating ElasticSearch mappings field type with existing data

I'm storing a few fields, and for the sake of simplicity let's call the field in question 'age'. Initially ES created the index for me and it ended up choosing the wrong field type for 'age': it's a string type right now instead of a numeric type. I'm aware that I should have defined the mappings myself to begin with and forced the values being sent to be consistently all strings or all numeric.
What I have right now is an index with a ton of data that uses a 'string' type for age, with values such as: 1, 10, 'na', etc.
Now my question is: if I were to change the mapping from string to integer, would indexing have any issues with the existing data values such as 'na' when being updated?
I just wanted to ask first before I start creating a playground environment to test with a sample data set.
What you can update, according to the docs:
- new properties can be added to Object datatype fields.
- new multi-fields can be added to existing fields.
- doc_values can be disabled, but not enabled.
- the ignore_above parameter can be updated.
Otherwise, I am afraid you will have to create a new mapping and reindex your data into a new index; see this post for an example.
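As a rough illustration of that route, here is a sketch using the elasticsearch-py client; the index names (people / people_v2) are hypothetical, and older Elasticsearch versions also need a type name level inside the mappings.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# New index with 'age' mapped as an integer. Values that cannot be parsed as a
# number (e.g. 'na') would make indexing fail unless they are cleaned up first
# or the field is created with "ignore_malformed": true.
es.indices.create(
    index="people_v2",
    body={
        "mappings": {
            "properties": {
                "age": {"type": "integer", "ignore_malformed": True}
            }
        }
    },
)

# Copy the existing documents into the new index.
es.reindex(
    body={"source": {"index": "people"}, "dest": {"index": "people_v2"}},
    wait_for_completion=True,
)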

create generic list object from datatable which has different schema at runtime

A DataTable should be converted to a generic list, but the DataTable may have a different schema at runtime, so extracting the class type for the generic list becomes difficult: because the columns differ at run time, I couldn't create properties for the class type accordingly. I want to know how to do that. I am using C#.

JasperReports: Passing in a list of lists as a datasource

I need to populate a few subreports with lists of different objects. Basically, let's say I have the following:
Subreport on used Vehicles
Subreport on new Vehicles
I create a vehicle bean class with variables as strings and create getter and setter methods for them. Then in my datasource I pass in a List<List<String>> as detailRows. detailRows contains a list for new vehicles and a list for used vehicles. So let's say I pass detailRows in the data source.
The question is: how do I pass these two lists to the two subreports? Can I use
new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{newVehiclesList}) as a datasource for subreport 1 and
new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{usedVehiclesList}) as a datasource for subreport 2?
Is there anything else that needs to be done apart from what I mentioned? Do I need to create and pass any variables? Is the list of lists used appropriately as I have listed above, or should it be $F{detailRows}.get(0)?
I created a field detailRows in the main report with type list. I then pass the following to the subreport data source expression: new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{detailRows})
Is there any way I can pass the newVehiclesList from detailRows to the subreport?
Thanks!
Selecting your subreport, you can set the property "Connection type" to "Use a data source expression", and inside the property "Data Source Expression" you set this:
new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{yourFieldHere})
Where "yourFieldHere" is a list (don't forget to set the "Field Class" in your field properties to java.util.List as well).
OK, then you need to create two fields with the Field Class java.util.List, one for each list (newVehiclesList and usedVehiclesList).
Put your two subreports wherever you want and click on each one, doing the following steps:
Change the "Connection type" to "Use a data source expression", then change the "Data Source Expression" to new net.sf.jasperreports.engine.data.JRBeanCollectionDataSource($F{yourField})
Done.
PS: In order to use the fields inside your newVehiclesList and usedVehiclesList you have to create them inside their own subreports.
I had the same problem and I solved it using Jasper's List component. I passed the datasource from my Java class, for example:
parameter.put("MyList", new JRBeanCollectionDataSource(ListObjects));
In the JRXML:
In the Jasper palette, choose the List component and drag and drop it into your report.
Then choose:
create new dataset
create new dataset from a connection…
in the data adapter, choose new data adapter - collection of JavaBeans
use a JRDatasource expression
go to the list of parameters and choose your list of objects (MyList)
Now go to the Jasper outline and:
- dataset properties
- edit query and filter…
- JavaBean
- search for your class (I'm using Eclipse, so it's easy to find my class)
- add the fields to use