Table Function to get first row of MSEG in CDS view

I'm trying to build a CDS view that uses various fields from invoices in VBRK and VBRP. Another requirement is to display the price listed in the original purchase order (for example, I'm selling kiwis to someone and I want to display the original purchase price I paid). I'm supposed to use the connection to MSEG with parameter batch (MSEG-CHARG). The assumption here is that for every batch there is only one purchase order. I'm not sure how to make that connection from the invoices, though.
This is my basic CDS view:
@AbapCatalog.sqlViewName: '<view_name>'
@AbapCatalog.compiler.compareFilter: true
@AbapCatalog.preserveKey: true
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: '<text>'
@VDM.viewType: #BASIC
define view <view_name> as select distinct from I_BillingDocumentItemCube( P_ExchangeRateType: 'M', P_DisplayCurrency: 'EUR' )
{
key BillingDocument,
key BillingDocumentItem,
BillingDocumentType,
_BillingDocument._Item._PricingElement[ConditionType = 'XXX1'].ConditionRateValue as cost1,
_BillingDocument._Item._PricingElement[ConditionType = 'XXX2'].ConditionRateValue as cost2,
SoldToParty,
SoldToPartyName,
Material,
BillingDocumentItemText,
Batch,
BillingDocumentDate,
BillingQuantity,
BillingQuantityUnit,
SalesDocumentItemCategory
};
I tried using a table function to select the corresponding batch from MSEG, but I'm not sure how to connect it to the CDS view.
@EndUserText.label: '<name>'
define table function <table_function>
  with parameters
    @Environment.systemField: #CLIENT
    clnt  : abap.clnt,
    charg : charg_d
  returns
  {
    clnt      : abap.clnt;
    charg_exp : charg_d;
    dmbtr     : dmbtr_cs;
    menge     : menge_d;
  }
  implemented by method <class>=><method>;
The implementing class:
CLASS <class_name> DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
    CLASS-METHODS <method_name> FOR TABLE FUNCTION <table_function>.
  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.

CLASS <class_name> IMPLEMENTATION.

  METHOD <method_name>
    BY DATABASE FUNCTION FOR HDB LANGUAGE SQLSCRIPT
    OPTIONS READ-ONLY
    USING nsdm_e_mseg.

    RETURN select top 1 mandt as clnt, charg as charg_exp, dmbtr, menge
             from nsdm_e_mseg
            where mandt = :clnt
              and charg = :charg
              and dmbtr > 0
              and menge > 0
            order by mblnr;

  ENDMETHOD.
ENDCLASS.
How can I use this table function in my basic CDS view to connect the position in VBRP to the position in MSEG?

Generally, the linking is as follows:

VBRK-vbeln -> VBRP-vbeln (item: VBRP-posnr)
VBRP-aubel -> VBFA-vbelv
VBRP-aupos -> VBFA-posnv
VBFA-vbtyp_n = 'R' (subsequent document is the goods movement)
VBFA-vbeln -> MSEG-mblnr
VBFA-posnn -> MSEG-zeile
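For orientation, the chain above expressed as a plain SQL sketch (client and MJAHR handling omitted; field names taken from the mapping, not validated against a system):

select msg.mblnr, msg.zeile, msg.charg, msg.dmbtr, msg.menge
  from vbrp as bil
  join vbfa as flow
    on  flow.vbelv   = bil.aubel     -- preceding sales order item
    and flow.posnv   = bil.aupos
    and flow.vbtyp_n = 'R'           -- subsequent document: goods movement
  join mseg as msg
    on  msg.mblnr = flow.vbeln
    and msg.zeile = flow.posnn;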
However, I do not know exactly how I_BillingDocumentItemCube is built, as I am not very familiar with S/4HANA analytical cubes. Does it really project onto VBRK/VBRP?
Anyway, the MSEG table has the key MBLNR/MJAHR/ZEILE, so passing only the batch (CHARG) is definitely not sufficient: it does not uniquely define a purchase order. Even though you assume a 1:1 relation of PO to billing, there can be multiple similar material (MATNR) positions within the same batch and the same purchase order, with different prices.
Look at the MCHB table: only the Material/Plant/StorageLocation/Batch combination (MATNR/WERKS/LGORT/CHARG) identifies the material document unambiguously.
So, to conclude: at the very least you should pass these four parameters into the method, which you can derive from I_BillingDocumentItemCube:
METHOD <method_name>
  BY DATABASE FUNCTION FOR HDB LANGUAGE SQLSCRIPT
  OPTIONS READ-ONLY
  USING nsdm_e_mseg.

  RETURN select top 1 mandt as clnt, charg as charg_exp, dmbtr, menge
           from nsdm_e_mseg
          where mandt = :clnt
            and mblnr = :mblnr
            and mjahr = :mjahr
            and zeile = :zeile
            and charg = :charg
            and dmbtr > 0
            and menge > 0
          order by mblnr;
ENDMETHOD.
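The table function definition then needs matching input parameters. A minimal sketch (placeholder names kept from the question; the data element names for the new parameters are assumptions to verify in your system):

@EndUserText.label: '<name>'
define table function <table_function>
  with parameters
    @Environment.systemField: #CLIENT
    clnt  : abap.clnt,
    mblnr : mblnr,    // material document number
    mjahr : mjahr,    // material document year
    zeile : mblpo,    // material document item
    charg : charg_d   // batch
  returns
  {
    clnt      : abap.clnt;
    charg_exp : charg_d;
    dmbtr     : dmbtr_cs;
    menge     : menge_d;
  }
  implemented by method <class>=><method>;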
Also, to conform to the new S/4HANA data model, why don't you read the MATDOC table directly?
NSDM_E_MSEG is nothing more than a legacy wrapper, a so-called compatibility view that redirects to the MATDOC table with some parameters; check SAP Note 2206980.
It may be a good start to use the new tables for a smoother shift to the S/4HANA model.
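For illustration, the same method reading MATDOC directly might look roughly like this (a sketch only; the field names are assumed to match their MSEG counterparts and should be verified in your system):

METHOD <method_name>
  BY DATABASE FUNCTION FOR HDB LANGUAGE SQLSCRIPT
  OPTIONS READ-ONLY
  USING matdoc.

  RETURN select top 1 mandt as clnt, charg as charg_exp, dmbtr, menge
           from matdoc
          where mandt = :clnt
            and mblnr = :mblnr
            and mjahr = :mjahr
            and zeile = :zeile
            and charg = :charg
            and dmbtr > 0
            and menge > 0
          order by mblnr;
ENDMETHOD.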

Related

ArcGIS Python Toolbox Parameter Dependencies

I'm trying to connect two parameters. In the first, the user inputs a file which contains a list (each line is one item). In the second, I'm hoping to set a parameter of type Field or GPValueTable. Here's how this part of the code currently looks:
def getParameterInfo(self):
    # Define parameter definitions

    # Input Features parameter
    in_features = arcpy.Parameter(
        displayName="Input Features",
        name="in_features",
        datatype="DETextFile",
        parameterType="Required",
        direction="Input")

    # User selection
    selection_field = arcpy.Parameter(
        displayName="Selection list",
        name="selection_field",
        datatype="Field",
        parameterType="Required",
        direction="Input")

    selection_field.parameterDependencies = [in_features]

    # Derived Output Features parameter
    out_features = arcpy.Parameter(
        displayName="Output Features",
        name="out_features",
        datatype="GPFeatureLayer",
        parameterType="Derived",
        direction="Output")

    out_features.parameterDependencies = [in_features.name]

    parameters = [in_features, selection_field]
    return parameters
The text file looks like this:
A
B
C
The toolbox dialog output is just A. I'm having a hard time understanding what ArcGIS intended to create here. Perhaps I'm using the wrong data types, but their parameter explanation doesn't make it very clear.
Any ideas?
You are confused by the arcpy terminology, I think. The second parameter's datatype="Field" does not mean data can be read/parsed from a text file that lists values; it refers to the schema/fields of a proper table, and the expectation is not plain text but the field objects of a table/feature class (at the toolbox's run time, assignment to arcpy.Parameter yields a geoprocessing value object, but that is a different discussion). If you look at the "Creating value table parameters" section of the documentation, GPFeatureLayer's (param0) fields are populated in param1 automatically when the dependency is set.
The solution is to set your selection_field to GPValueTable and populate selection_field.filters[0].list by reading the in_features content in updateParameters(...), after the parameter has been altered but not yet validated. Have a look at https://gis.stackexchange.com/questions/370250/updating-valuetable-parameter-of-arcpy-python-toolbox-tool.
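A minimal sketch of that updateParameters method, assuming the text file is the first parameter and selection_field is a GPValueTable with a single column (names taken from the question; everything else is illustrative):

def updateParameters(self, parameters):
    """Populate the value table's choice list from the chosen text file."""
    in_features, selection_field = parameters[0], parameters[1]
    # Only re-read the file when the user changed the input and it hasn't been validated yet
    if in_features.altered and not in_features.hasBeenValidated and in_features.valueAsText:
        with open(in_features.valueAsText) as f:
            choices = [line.strip() for line in f if line.strip()]
        # filters[0] is the filter of the first column of the GPValueTable parameter
        selection_field.filters[0].list = choices
    return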

how to set a parameter in icCube reporting that is based on other parameters?

I have a server function (MDX++) months(value M, value H).
The icCube dashboard contains two events, attached to two filters (LOVs): #{selM} and #{selH}.
How can I assign the result of the function months(#{selM}, #{selH}) to a new event called #{selPeriod} that is updated each time one of the LOVs changes?
It's possible: such an event can be generated by a custom MDX filter based on a manual query. You can hide this filter from the resulting UI, as it will be used only in automatic mode.
Filter1 -> generates #{selM}
Filter2 -> generates #{selH}
Filter3 -> hidden, generates #{selPeriod}
For Filter3 you need to switch the data configuration to MDX and use the following request:
WITH
MEMBER ic3Name AS months(#{selM}, #{selH}) // event caption(your function)
MEMBER ic3UName AS months(#{selM}, #{selH}) // event value(your function)
MEMBER ic3PName AS NULL
MEMBER ic3Measure AS 0
MEMBER ic3IsSelected AS true // true to throw the event automatically
MEMBER ic3FilterName as [Measures].[ic3Name]
MEMBER ic3Key as 0
SELECT
{[Measures].[ic3Name],[Measures].[ic3UName],[Measures].[ic3PName],[Measures].[ic3Measure],[Measures].[ic3IsSelected],[Measures].[ic3FilterName],[Measures].[ic3Key]} ON 0,
TopCount([Calendar].[Year], 1) on 1 // any non empty level, to have 1 row in the response
FROM [Sales] // any cube

Lua functions use "self" in source but no metamethod allows using them

I've been digging into Lua's source code, both the C source from their website and the Lua files from Lua on Windows. I found something odd that I can't find any information about: why they chose to do this.
There are some methods in the string library that allow OOP-style calling, by attaching the method to the string like this:
string.format(s, e1, e2, ...)
s:format(e1, e2, ...)
So I dug into the source code for the table module, and found that functions like table.remove() also allow for the same thing.
Here's the source code from UnorderedArray.lua:
function add(self, value)
  self[#self + 1] = value
end

function remove(self, index)
  local size = #self
  if index == size then
    self[size] = nil
  elseif (index > 0) and (index < size) then
    self[index], self[size] = self[size], nil
  end
end
This indicates that the functions should support the colon syntax. Lo and behold, when I copy table into my new list, the methods carry over. Here's an example using table.insert as a method:
function copy(obj, seen) -- Recursive function to copy a table with tables
  if type(obj) ~= 'table' then return obj end
  if seen and seen[obj] then return seen[obj] end
  local s = seen or {}
  local res = setmetatable({}, getmetatable(obj))
  s[obj] = res
  for k, v in pairs(obj) do res[copy(k, s)] = copy(v, s) end
  return res
end

function count(list) -- Count a list because #table doesn't work on key-indexed tables
  local sum = 0; for i, v in pairs(list) do sum = sum + 1 end; print("Length: " .. sum)
end

function pts(s) print(tostring(s)) end -- Macro function
local list = {1, 2, 3}
pts(list.insert) --> nil
pts(table["insert"]) --> function: 0xA682A8
pts(list["insert"]) --> nil
list = copy(_G.table)
pts(table["insert"]) --> function: 0xA682A8
pts(list["insert"]) --> function: 0xA682A8
count(list) --> Length: 9
list:insert(-1, "test")
count(list) --> Length: 10
Were Lua 5.1 and newer supposed to support table methods the way the string library does, but the developers decided not to implement the metamethod?
EDIT:
I'll explain it a little further so people understand.
Strings have metamethods attached that you can use on them OOP-style.
s = "test"
s:sub(1,1)
But tables don't, even though the methods in the table library's source code allow for it by taking self as the first argument. So the following code doesn't work:
t = {1,2,3}
t:remove(#t)
The function has a self parameter defined in its argument list (UnorderedArray.lua:25: function remove(self, index)).
You can find the metamethods of strings by using:
for i, v in pairs(getmetatable('').__index) do
  print(i, tostring(v))
end
which prints the list of all methods available for strings:
sub function: 0xB4ABC8
upper function: 0xB4AB08
len function: 0xB4A110
gfind function: 0xB4A410
rep function: 0xB4AD88
find function: 0xB4A370
match function: 0xB4AE08
char function: 0xB4A430
dump function: 0xB4A310
gmatch function: 0xB4A410
reverse function: 0xB4AE48
byte function: 0xB4A170
format function: 0xB4A0F0
gsub function: 0xB4A130
lower function: 0xB4AC28
If you attach the table library to a table, as Oka showed in the example, you can use the methods it has in just the same way the string metamethods work.
The question is: why would the Lua developers enable metamethods for strings by default but not for tables, even though the table library and its methods allow it in the source code?
The question was answered: it would allow the developer of a module or program to alter the metatables of all tables in the program, with the result that a table would behave differently from vanilla Lua when used in that program. It's different if you implement a class for a data type (say, vectors) and change the metamethods of that specific class and table, instead of changing all of Lua's standard table metamethods. This also slightly overlaps with operator overloading.
If I'm understanding your question correctly, you're asking why it is not possible to do the following:
local tab = {}
tab:insert('value')
Having tables spawn with a default metatable and __index breaks some assumptions that one would have about tables.
Mainly, empty tables should be empty. If tables were to spawn with an __index metamethod lookup for the insert, sort, etc., methods, it would break the assumption that an empty table should not respond to any members.
This becomes an issue if you're using a table as a cache or memo, and you need to check if the 'insert', or 'sort' strings exist or not (think arbitrary user input). You'd need to use rawget to solve a problem that didn't need to be there in the first place.
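For illustration, a small sketch of that cache problem, simulating the hypothetical default metatable by hand:

-- Simulate what a default __index = table metatable would do to an empty cache
local cache = setmetatable({}, { __index = table })

print(cache["insert"] ~= nil)          --> true: looks cached, but it is only inherited
print(rawget(cache, "insert") ~= nil)  --> false: the check you actually wanted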
Empty tables should also be orphans, meaning that they should have no relations unless the programmer explicitly gives them relations. Tables are the only complex data structure available in Lua, and are the foundation for a lot of programs. They need to be free and flexible. Pairing them with the table table as a default metatable creates some inconsistencies. For example, not all tables can make use of the generic sort function - a weird cruft for dictionary-like tables.
Additionally, consider that you're utilizing a library, and that library's author has told you that a certain function returns a densely packed table (i.e., an array), so you figure that you can call :sort(...) on the returned table. What if the library author has changed the metatable of that return table? Now your code no longer works, and any generic functions built on top of a _:sort(...) paradigm can't accept these tables.
Basically put, strings and tables are two very different beasts. Strings are immutable, static, and their contents are predictable. Tables are mutable, transient, and very unpredictable.
It's much, much easier to add this in when you need it, instead of baking it into the language. A very simple function:
local meta = { __index = table }

_G.T = function (tab)
  if tab ~= nil then
    local tab_t = type(tab)
    if tab_t ~= 'table' then
      error(("`table' expected, got: `%s'"):format(tab_t), 0)
    end
  end
  return setmetatable(tab or {}, meta)
end
Now any time you want a table that responds to functions found in the table table, just prefix it with a T.
local foo = T {}
foo:insert('bar')
print(#foo) --> 1

Ultragrid Export Sort Order / Indicator

To make a long story short, we have a legacy application which displays an Infragistics grid whose contents users can export. The issue I'm having is that there's a particular order in which they want the export to occur. If I set the order within the grid view prior to export, it retains this order; however, if I try to force it "on export", it doesn't seem to work despite trying to set it. Here's my code (VB); as you can see, just prior to export I try to set the SortIndicator, but I suspect I'm missing something.
Dim FileName As String
Dim I As Integer

I = 1
FileName = "C:\ReconciliationReport.xls"

While System.IO.File.Exists(FileName)
    FileName = "C:\ReconciliationReport_" & I & ".xls"
    I = I + 1
End While

grdReconciliationReport.DisplayLayout.Bands(0).Columns("ReconciliationOrder").SortIndicator = Infragistics.Win.UltraWinGrid.SortIndicator.Ascending
UltraGridExcelExporter.Export(grdReconciliationReport, FileName)
During the export of the grid, UltraGridExcelExporter creates its own copy of the Layout. This is done precisely to allow you to sort, hide, delete and perform any other action on the layout without changing the actual grid. To sort the grid by any column you need to handle the ExportStarted event. The event argument contains a reference to the cloned layout. You can use code like this:
Private Sub UltraGridExcelExporter_ExportStarted(sender As Object, e As ExcelExport.ExportStartedEventArgs) Handles UltraGridExcelExporter1.ExportStarted
    Dim sortedCol As UltraGridColumn = e.Layout.Bands(0).Columns(1)
    e.Layout.Bands(0).SortedColumns.Add(sortedCol, False, False)
End Sub
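If the export should be sorted by the same column the question's code references, the column key can be used on the cloned layout instead of the index (a sketch, assuming the key is unchanged in the cloned layout, which it normally is):

Private Sub UltraGridExcelExporter1_ExportStarted(sender As Object, e As ExcelExport.ExportStartedEventArgs) Handles UltraGridExcelExporter1.ExportStarted
    ' Sort the cloned export layout by the ReconciliationOrder column
    Dim sortedCol As UltraGridColumn = e.Layout.Bands(0).Columns("ReconciliationOrder")
    e.Layout.Bands(0).SortedColumns.Add(sortedCol, False, False)
End Sub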

Getting odd behavior from $query->setMaxResults()

When I call setMaxResults on a query, it seems to treat the max number as "2", no matter what its actual value is.
function findMostRecentByOwnerUser(\Entities\User $user, $limit)
{
    echo "2: $limit<br>";

    $query = $this->getEntityManager()->createQuery('
        SELECT t
        FROM Entities\Thread t
        JOIN t.messages m
        JOIN t.group g
        WHERE
            g.ownerUser = :owner_user
        ORDER BY m.timestamp DESC
    ');
    $query->setParameter("owner_user", $user);
    $query->setMaxResults(4);

    echo $query->getSQL()."<br>";

    $results = $query->getResult();

    echo "3: ".count($results);

    return $results;
}
When I comment out the setMaxResults line, I get 6 results. When I leave it in, I get the 2 most recent results. When I run the generated SQL code in phpMyAdmin, I get the 4 most recent results. The generated SQL, for reference, is:
SELECT <lots of columns, all from t0_>
FROM Thread t0_
INNER JOIN Message m1_ ON t0_.id = m1_.thread_id
INNER JOIN Groups g2_ ON t0_.group_id = g2_.id
WHERE g2_.ownerUser_id = ?
ORDER BY m1_.timestamp DESC
LIMIT 4
Edit:
While reading the DQL "Limit" documentation, I came across the following:
If your query contains a fetch-joined collection specifying the result limit methods are not working as you would expect. Set Max Results restricts the number of database result rows, however in the case of fetch-joined collections one root entity might appear in many rows, effectively hydrating less than the specified number of results.
I'm pretty sure that I'm not doing a fetch-joined collection. I'm under the impression that a fetch-joined collection is where I do something like SELECT t, m FROM Threads JOIN t.messages. Am I incorrect in my understanding of this?
An update: with Doctrine 2.2+ you can use the Paginator: http://docs.doctrine-project.org/en/latest/tutorials/pagination.html
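A sketch of what that could look like for the query from the question (untested; assumes Doctrine 2.2+ and the entity names used above):

use Doctrine\ORM\Tools\Pagination\Paginator;

$query = $this->getEntityManager()->createQuery('
    SELECT t
    FROM Entities\Thread t
    JOIN t.messages m
    JOIN t.group g
    WHERE g.ownerUser = :owner_user
    ORDER BY m.timestamp DESC
');
$query->setParameter('owner_user', $user);
$query->setMaxResults($limit);

// fetchJoinCollection = true makes the Paginator limit by distinct Thread
// entities (via an id subquery) instead of by raw, join-multiplied rows.
$paginator = new Paginator($query, true);

$threads = iterator_to_array($paginator);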
Using ->groupBy('your_entity.id') seems to solve the issue!
I solved the same issue by fetching only the contents of the master table and having all joined tables fetched with fetch="EAGER", which is defined in the entity (described here: http://www.doctrine-project.org/docs/orm/2.1/en/reference/annotations-reference.html?highlight=eager#manytoone).
class VehicleRepository extends EntityRepository
{
    /**
     * @var integer
     */
    protected $pageSize = 10;

    public function page($number = 1)
    {
        return $this->_em->createQuery('SELECT v FROM Entities\VehicleManagement\Vehicles v')
            ->setMaxResults(100)
            ->setFirstResult($number - 1)
            ->getResult();
    }
}
In my example repository you can see I only fetch the vehicle table to get the correct result count. But all properties (like make, model, category) are fetched immediately.
(I also iterated over the Entity-contents because I needed the Entity represented as an array, but that shouldn't matter afaik.)
Here's an excerpt from my entity:
class Vehicles
{
    ...

    /**
     * @ManyToOne(targetEntity="Makes", fetch="EAGER")
     * @var Makes
     */
    public $make;

    ...
}
It's important that you map every entity correctly, otherwise it won't work.