SAP Link User-Defined Field to normal SAPB1 table field - foreign-keys

In SAP Business One we can link a User-Defined Field to a User-Defined Table. Is it possible to link a UDF to a standard SAP table instead, for example to the Business Partner table?
I need a kind of foreign key from my UDF to a standard SAP table (whether the UDF is added with the DI API or manually).

Business One version 9.2 added a new validation option, "Linked to Entities", for UDFs. However, as of PL4 it has the restriction that a link to a standard system table can only be made from a UDF on another standard system table (using the 'Link to System Object' option). Furthermore, the list of standard tables you can link to is fairly small:
GL Accounts
Business Partners
Stock Items
Payments
Journal Entries
Warehouses
Most marketing documents
(This list is taken from PL04.)
It doesn't appear to be possible to link a UDF on a UDT to a standard table.

Related

I can't read specific tables of Dynamics BC with Power BI

I am having trouble synchronising certain tables of our ERP (Dynamics Business Central) with Power BI. The steps I am following are:
Get Data
Search for Dynamics 365 Business Central
Search for the relevant tables
This is where Power BI doesn't let me preview the information in the table called 'salesCreditMemoLines', unlike other tables, which I can preview without trouble.
I appreciate your help in this issue.
This is an expected error. Document-line collections in the Business Central API require the respective document ID to be present in the request; otherwise the request fails.
This is the piece of code from the API page that throws this error.
IdFilter := GetFilter(SystemId);
DocumentIdFilter := GetFilter("Document Id");
if (IdFilter = '') and (DocumentIdFilter = '') then
    Error(IDOrDocumentIdShouldBeSpecifiedForLinesErr);
There are two ways to send the document ID. My examples below are querying sales orders, but the same applies to all document collections.
The first is to request the lines along with the document header using the $expand syntax:
https://api.businesscentral.dynamics.com/v2.0/{{tenantid}}/{{environmentname}}/api/v2.0/companies({companyId})/salesOrders({orderId})?$expand=salesOrderLines
Another option is to query the document lines directly, adding the $filter parameter:
https://api.businesscentral.dynamics.com/v2.0/{{tenantid}}/{{environmentname}}/api/v2.0/companies({companyId})/salesOrderLines?$filter=documentId eq {salesOrderId}
Filters can include ranges, so this way it's possible to request a collection of lines from multiple documents.
https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/webservices/use-filter-expressions-in-odata-uris
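The two request shapes above can be sketched as small URL builders; all IDs are placeholders you would substitute with your own tenant, environment, company and document IDs, and a real HTTP client should URL-encode the query string:

```python
# Sketch: building the two Business Central API request URLs described above.
# Tenant, environment, company and order IDs are placeholders.
BASE = ("https://api.businesscentral.dynamics.com/v2.0/"
        "{tenant}/{environment}/api/v2.0")

def order_with_lines_url(tenant, environment, company_id, order_id):
    """Option 1: fetch the order header and expand its lines in one call."""
    return (BASE.format(tenant=tenant, environment=environment)
            + f"/companies({company_id})/salesOrders({order_id})"
            + "?$expand=salesOrderLines")

def lines_by_document_url(tenant, environment, company_id, order_id):
    """Option 2: query the lines collection filtered by documentId."""
    return (BASE.format(tenant=tenant, environment=environment)
            + f"/companies({company_id})/salesOrderLines"
            + f"?$filter=documentId eq {order_id}")
```

Either URL can then be fetched with any HTTP client (e.g. requests), which will take care of encoding the spaces in the $filter expression.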
Neither of these methods is going to work in Power BI data source selection, though. An alternative way is to use an entity salesDocumentLines under the Web Services (legacy) option. Yes, it shows as legacy, but so far Microsoft has not announced any plans to remove the OData web services.
https://learn.microsoft.com/en-us/dynamics365-release-plan/2021wave1/smb/dynamics365-business-central/enable-power-bi-connector-work-business-central-apis-instead-web-services-only

Do views of tables in BigQuery benefit from partitioning/clustering optimization?

We have a few tables in BigQuery that are updated nightly, and a deduplication process that slowly does garbage collection.
To ensure that our UI always shows the latest data, we have a view set up for each table that simply does a SELECT ... WHERE on the newest timestamp/record_id combination.
We're about to set up partitioning and clustering to optimize query scope/speed, and I couldn't find a clear answer in the Google documentation on whether queries against the view of such a table will still be partitioned, or whether they will end up scanning all the data.
Alternatively, when we create the view, can we include the partitioning and clustering columns in the query that builds the view?
If you're talking about a logical view, then yes: if the base table it references is clustered/partitioned, the view will use those features when they're referenced from the WHERE clause. A logical view doesn't have its own managed storage; it's effectively a SQL subquery that gets run whenever the view is referenced.
If you're talking about a materialized view, then partitioning/clustering from the base table isn't inherited, but can be defined on the materialized view. See the DDL syntax for more details: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#create_materialized_view_statement
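As a sketch of the materialized-view case, the DDL below shows a materialized view declaring its own partitioning and clustering over a partitioned base table. Dataset, table and column names are placeholders; you would run the statements via the console, the bq CLI, or a client library:

```python
# Sketch: BigQuery DDL strings illustrating that a materialized view does not
# inherit partitioning/clustering from its base table but declares its own.
# All dataset/table/column names here are placeholders.
BASE_TABLE_DDL = """\
CREATE TABLE mydataset.events (
  record_id STRING,
  updated_at TIMESTAMP,
  payload STRING
)
PARTITION BY DATE(updated_at)
CLUSTER BY record_id
"""

MATERIALIZED_VIEW_DDL = """\
CREATE MATERIALIZED VIEW mydataset.daily_counts_mv
PARTITION BY day
CLUSTER BY record_id
AS
SELECT DATE(updated_at) AS day, record_id, COUNT(*) AS row_count
FROM mydataset.events
GROUP BY day, record_id
"""
```

Note that the materialized view's partition column (day) is derived from the base table's partitioning column, which is what BigQuery expects for partitioned materialized views.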

How to get rid of __key__ columns in BigQuery table for every 'Record' Type field?

For every 'Record' type field of my Firestore table, BigQuery automatically adds __key__ columns. I do not want these added for each 'Record' type field. How can I get rid of these extra columns that BigQuery adds automatically? (I want to remove the columns in my BigQuery table schema highlighted in yellow in the screenshot.)
This is intended behavior; citing the BigQuery GCP documentation:
Each document in Firestore has a unique key that contains information such as the document ID and the document path. BigQuery creates a RECORD data type (also known as a STRUCT) for the key, with nested fields for each piece of information, as described in the following table.
Because the Firestore export method is fully integrated with the GCP-managed import and export service, you can't change this behavior or exclude the __key__.* properties emitted for each RECORD field in the target BigQuery table.
In your use case, modifying the BigQuery table will require some hands-on intervention, since it means changing the schema manually.
If you would like this to be configurable, I would encourage you to raise a feature request with the vendor via the Google public issue tracker.
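One partial workaround, sketched below with placeholder dataset/table names, is to define a view over the export table that hides the top-level __key__ column with SELECT * EXCEPT. Note this only removes top-level columns; __key__ records nested inside RECORD fields would still have to be dropped by re-selecting the wanted sub-fields explicitly:

```python
# Sketch: a BigQuery view that hides the top-level __key__ column of an
# exported Firestore table. SELECT * EXCEPT only removes top-level columns;
# nested __key__ records must be rebuilt by selecting sub-fields explicitly.
# Dataset and table names are placeholders.
VIEW_DDL = """\
CREATE OR REPLACE VIEW mydataset.orders_clean AS
SELECT * EXCEPT(__key__)
FROM mydataset.orders_raw
"""
```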

Data Model in DynamoDB

When using AWS Mobile Hub to build a DynamoDB table, there is at some point an option to download the data model for the table. But we do not see this option (AFAIK) if we do not use Mobile Hub. So the question is: is there a way to get the data model for the table when not using Mobile Hub?
Just to clarify, DynamoDB doesn't have a full data model like an RDBMS. However, it does have the partition (hash) key, the sort (range) key if defined, and all the index details.
You can get this information using the DescribeTable API. The API returns the output in JSON format; see the linked documentation for more information.
Please note that all the non-key attributes are not included in the data model. This is the basic concept in NoSQL database and this is the flexibility of NoSQL database when compared to RDBMS.
The item structure (non-key attributes) need not be defined while creating the table. In fact, DynamoDB doesn't allow you to define non-key attributes while creating the table.
The non-key attributes in one item need not be the same as in another item.
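As a sketch, the "data model" you can recover is exactly the key schema, attribute definitions and indexes in the DescribeTable response. With boto3 you would obtain it via boto3.client("dynamodb").describe_table(TableName=...); the trimmed sample response below (table and attribute names are made up) has the same shape:

```python
# Sketch: extracting the recoverable "data model" (keys, key attributes,
# indexes) from a DescribeTable-shaped response. The sample dict below is a
# trimmed, hypothetical response; a real one comes from
#   boto3.client("dynamodb").describe_table(TableName="Products")
sample_response = {
    "Table": {
        "TableName": "Products",
        "KeySchema": [
            {"AttributeName": "ProductId", "KeyType": "HASH"},
            {"AttributeName": "Sku", "KeyType": "RANGE"},
        ],
        "AttributeDefinitions": [
            {"AttributeName": "ProductId", "AttributeType": "S"},
            {"AttributeName": "Sku", "AttributeType": "S"},
            {"AttributeName": "Category", "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexes": [
            {"IndexName": "CategoryIndex",
             "KeySchema": [{"AttributeName": "Category", "KeyType": "HASH"}]},
        ],
    }
}

def data_model(desc):
    """Summarise keys, key attributes and indexes from a DescribeTable response."""
    t = desc["Table"]
    return {
        "keys": {k["KeyType"]: k["AttributeName"] for k in t["KeySchema"]},
        "attributes": {a["AttributeName"]: a["AttributeType"]
                       for a in t.get("AttributeDefinitions", [])},
        "indexes": [i["IndexName"]
                    for i in t.get("GlobalSecondaryIndexes", [])],
    }

model = data_model(sample_response)
```

Note that only key attributes appear in AttributeDefinitions; the non-key attributes of individual items are not part of this metadata, per the schemaless design described above.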

DropLink datasource item reference with custom dataprovider in Sitecore

How do I bind a DropLink using a custom data provider?
More info:
I am trying to build a product catalogue site using Sitecore. Each product in the sitecore content tree can have a star rating and short text review attached to it (which will be linked to a user extended with a profile provider but that is another question).
I am planning to store the review information in an external database and reference it using a custom data provider. I have downloaded the NorthwindDataProvider from Shared Source (here) and altered it to use a table that contains the rating, the text, and a uniqueidentifier field storing the ID of the Sitecore product item the review is attached to.
The template field is a droplink and the datasource is set to the products in the catalogue.
When I edit a review in the custom data provider using the Sitecore content editor, the droplink states 'Value not in selection list', even if I select one of the populated products and save using Sitecore.
It saves the ID in the database, but if I look at the raw value it displays the ID without the curly brackets. The raw values of working droplink fields appear to contain the brackets.
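For reference, Sitecore raw values represent item IDs as braced, uppercase GUIDs. A minimal sketch of normalising a bare database uniqueidentifier into that form (the helper and example GUID are illustrative, not Sitecore API):

```python
# Sketch: Sitecore droplink raw values use the braced, uppercase GUID form,
# e.g. {0DE95AE4-41AB-4D01-9EB0-67441B7C2450}. If a custom data provider
# returns the bare uniqueidentifier from SQL Server, normalise it before
# exposing it as the field's raw value. Helper and GUID are illustrative.
import uuid

def to_sitecore_id(raw_guid: str) -> str:
    """Convert a bare GUID string to Sitecore's braced, uppercase form."""
    return "{" + str(uuid.UUID(raw_guid)).upper() + "}"

print(to_sitecore_id("0de95ae4-41ab-4d01-9eb0-67441b7c2450"))
```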
To create a review, I am using a jQuery post to a web service which writes to the database using an external datacontext. Should I be using some Sitecore API so that the custom data provider is used instead?
Any information using custom dataproviders would be helpful. The documentation I've been able to find has all stated what can be done but I'm struggling to find actual implementation.
So the first thing is that you have a template field using a droplink, which is going to store the GUID of the selected item. I'm not quite clear on whether or not you're pointing the datasource to a Sitecore item, but that's essential if you're using a droplink. Here's what I would suggest as the most straightforward way to do this:
Create a template with fields that handle the logic for your catalog items. How you do that is your choice, and Sitecore doesn't care, since it's only going to deal with the item; all it cares about is finding an item. You write the business logic to manipulate the external data.
Once you have a folder that stores your catalog items, you could write a script, triggered by the Sitecore Rules engine or by a Sitecore task that runs regularly, that reads your catalog data and adds, updates or removes the corresponding Sitecore items.
Also, another option that is more complex to implement, but is a valid approach if you have multiple data sources on your site, is to use an object framework (like the Entity Framework) as a data-object layer that lets you create and populate common objects from any data source.
Hope this is helpful!