Transform a Sales Order into an Invoice with SuiteTalk (web services)

In NetSuite's SuiteTalk, how do I transform a record from a Sales Order to an Invoice? It looks like there is a function in SuiteScript, but I can't find anything similar in SuiteTalk.
SuiteScript:
nlapiTransformRecord(type, id, transformType, transformValues)
Initializes a new record using data from an existing record of a
different type and returns an nlobjRecord. This function can be useful
for automated order processing such as creating item fulfillment
transactions and invoices off of orders.
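
For reference, a typical SuiteScript 1.0 use of this API looks like the following (soId standing in for the sales order's internal id):

var invoice = nlapiTransformRecord('salesorder', soId, 'invoice'); // returns an nlobjRecord
var invoiceId = nlapiSubmitRecord(invoice); // save the new invoice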

SuiteTalk has an analogous initialize method. With the Java library you'd use it like this:
ReadResponse initCS = nsClient.getPort().initialize(
    new InitializeRecord(InitializeType.cashSale,
        new InitializeRef(null, InitializeRefType.salesOrder, soId, null),
        null));
CashSale cs = (CashSale) initCS.getRecord();
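
The same call initializes other transaction types. For the invoice case the question asks about, a sketch along the same lines (reusing nsClient and soId from the snippet above) would be:

ReadResponse initInv = nsClient.getPort().initialize(
    new InitializeRecord(InitializeType.invoice,
        new InitializeRef(null, InitializeRefType.salesOrder, soId, null),
        null));
Invoice invoice = (Invoice) initInv.getRecord();

// Adjust any fields you need, then persist the new record.
WriteResponse response = nsClient.getPort().add(invoice);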

Related

How to build SQL from RelBuilder without schema info?

I want to generate SQL using Calcite, like this:
org.apache.calcite.rel.rel2sql.RelToSqlConverterTest#testAntiJoin
final FrameworkConfig frameworkConfig = Frameworks.newConfigBuilder()
.parserConfig(SqlParser.Config.DEFAULT)
// .defaultSchema(schema)
.build();
final RelBuilder builder = RelBuilder.create(frameworkConfig);
final RelNode root = builder
.scan("DEPT")
.scan("EMP")
.join(
JoinRelType.ANTI, builder.equals(
builder.field(2, 1, "DEPTNO"),
builder.field(2, 0, "DEPTNO")))
.project(builder.field("DEPTNO"))
.build();
But if I don't set the schema, a "table not found" exception is thrown.
Is there any way to generate SQL without schema info? The aim is just to generate SQL, nothing more.
Reply to the first answer (posted here because of the comment character limit):
My scenario is business intelligence. There can be many data sources, such as Hive, ClickHouse, and so on, each with many tables, and I also need to dynamically add or remove data sources. So I don't think it's appropriate for Calcite to be aware of all the data sources. I have two more questions:
1. How do I create the 'free-standing' table objects you mentioned?
2. Can SqlNode be used to do this? For example:
import java.util.Arrays;
import org.apache.calcite.sql.SqlBasicCall;
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.SqlLiteral;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlNodeList;
import org.apache.calcite.sql.SqlSelect;
import org.apache.calcite.sql.dialect.CalciteSqlDialect;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.sql.parser.SqlParserPos;
import org.apache.calcite.sql.util.SqlString;

// Build "WHERE a = 1" by hand.
SqlIdentifier from = new SqlIdentifier("testTable", SqlParserPos.QUOTED_ZERO);
SqlNode[] nodes = new SqlNode[2];
nodes[0] = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);
nodes[1] = SqlLiteral.createExactNumeric("1", SqlParserPos.QUOTED_ZERO);
SqlNode where = new SqlBasicCall(SqlStdOperatorTable.EQUALS, nodes, SqlParserPos.QUOTED_ZERO);
SqlIdentifier selectNode = new SqlIdentifier("a", SqlParserPos.QUOTED_ZERO);

SqlSelect select = new SqlSelect(SqlParserPos.QUOTED_ZERO, SqlNodeList.EMPTY,
    new SqlNodeList(Arrays.asList(selectNode), SqlParserPos.QUOTED_ZERO),
    from, where,
    null, null, null, null, null, null, null);

SqlString sqlString = select.toSqlString(CalciteSqlDialect.DEFAULT);
System.out.println(sqlString.getSql());
// prints something like: SELECT "a" FROM "testTable" WHERE "a" = 1
Only one method in RelBuilder uses a RelOptSchema: scan(String...) (and its variant scan(Iterable<String>)). That makes sense when you consider that the purpose of RelOptSchema is to act as a directory service, converting a table name (or a table path, consisting of a table name qualified with catalog and/or schema names) into a RelOptTable object.
If you have 'free-standing' table objects that are not accessed via a namespace, then you can create TableScan relational expressions directly and call RelBuilder.push(RelNode) to add them to the stack. Since you never call RelBuilder.scan, you can create the RelBuilder with a null RelOptSchema.
But in your case, it looks as if you don't have free-standing table objects. That's a problem for Calcite, because it needs to know that your "EMP" table has a field called "DEPTNO" and that it has type INTEGER.
So I suggest that you create a 'virtual' schema that contains type information but is not necessarily backed by real tables. The MockCatalogReader class, used in several of Calcite's tests, is a good example to follow.
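
A minimal sketch of such a virtual schema, assuming recent Calcite APIs (the table and field names are illustrative): expose only row-type metadata through an AbstractTable, register it in a root schema, and RelBuilder.scan can then resolve names without any backing data.

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class VirtualSchemaDemo {
  // A table that exposes only its row type; it is never scanned for data.
  static class EmpTypeTable extends AbstractTable {
    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
      return typeFactory.builder()
          .add("DEPTNO", SqlTypeName.INTEGER)
          .add("ENAME", SqlTypeName.VARCHAR)
          .build();
    }
  }

  public static void main(String[] args) {
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    rootSchema.add("EMP", new EmpTypeTable());
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .build();
    RelBuilder builder = RelBuilder.create(config);
    // builder.scan("EMP") now resolves DEPTNO and ENAME from metadata alone.
  }
}

Registering one such type-only table per physical table keeps Calcite aware of types without it ever touching Hive or ClickHouse.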

How to add a new column with custom values, based on a WHERE clause from another table in Power BI?

I am stuck trying to dynamically form a new column based on a WHERE clause against another table in Power BI. To give more details: I have a table with item numbers associated with a customer name. In another table, I have to add a new column which dynamically collects the item numbers associated with a particular customer and appends them as a query parameter to a base URL.
So, my first table holds the customer and their item numbers (screenshot omitted).
The second table that I want holds the customer and the generated URL (screenshot omitted).
The query parameter value in the URL has to be built dynamically, like a SELECT query with a WHERE clause, picking up the ItemNumbers via the Customer field, which is common to both tables. So, how can this be done in Power BI? Any help would be really appreciated :)
I have a table in my model called "TableRol"; if I want to summarize my Date column as a string, I can use CONCATENATEX:
URL = CONCATENATE(
    CONCATENATE(
        "http://mysite.com/parametersHere/getitem?='",
        CONCATENATEX(VALUES('TableRol'[Date]), 'TableRol'[Date], ";")
    ),
    "'"
)
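
Adapted to the question's scenario (the 'Items' and 'Customers' table names, their columns, and the base URL are assumptions about your model), the same pattern as a calculated column on the customer table might look like:

URL =
CONCATENATE(
    CONCATENATE(
        "http://mysite.com/parametersHere/getitem?='",
        CONCATENATEX(
            FILTER('Items', 'Items'[Customer] = 'Customers'[Customer]),
            'Items'[ItemNumber],
            ";"
        )
    ),
    "'"
)

FILTER restricts the item table to the current row's customer, and CONCATENATEX glues the matching item numbers together with ";" as the separator.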

How to pass a parameter to my Power BI embedded report?

I have an IFrame which shows a Power BI embedded report that shows a list of accounts.
I want to pass in an account ID so that I only see the sales for my account.
Inside the report I have a table, let's say it's called Query1, and inside that table I have a field called AccountID. I need to add something to my URL to filter on AccountID = 123.
My URL is something like this:
https://app.powerbi.com/reportEmbed?reportId=xxxxxxxxxxxx&autoAuth=true&ctid=xxxxxxxxxxx-xxxxxxxxx&config=eyJjbHVzdGVyVXJsIjoiaHR0cHM6Ly93YWJpLXdlc3QtZXVyb3BlLXJlZGlyZWN0LmFuYWx5c2lzLndpbmRvd3MubmV0LyJ9
What exactly should I add to filter the report by the AccountID?
You should add a URL parameter called filter. You need to specify the table and field you want to filter on, and add the value of the filter after eq. So your end result should be something like this:
URL?filter=Table/Accountid eq 123.
Here's the Microsoft documentation about it: https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-url-filters#query-string-parameter-syntax-for-filtering
Update: the part above works for filtering reports in the app or workspace itself. To filter an embedded report you need to specify the page and the filter in a similar fashion for the embedded link (see https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-embed-secure). So your link will be something like this:
https://app.powerbi.com/reportEmbed?blabla&pageName=Page1&$filter=Table/Accountid eq 123
Here's how it would look once embedded (screenshot omitted).

Dynamodb2 Table Schema Creation

I'm using the following: dynamodb2, boto, Python. I have the following code for creating a table:
from boto.dynamodb2.fields import HashKey, RangeKey, GlobalAllIndex
from boto.dynamodb2.table import Table
from boto.dynamodb2.types import NUMBER, STRING

table = Table.create('mySecondTable',
    schema=[
        HashKey('ID'),
        RangeKey('advertiser'),  # the range key belongs inside the schema list
    ],
    throughput={'read': 5, 'write': 2},
    global_indexes=[
        GlobalAllIndex('otherDataIndex', parts=[
            HashKey('date', data_type=NUMBER),
            RangeKey('publisher', data_type=STRING),  # was: date_type=str
        ], throughput={'read': 5, 'write': 3}),
    ],
    connection=conn)
I would like to be able to query by the following attributes:
ID, advertiser, date, publisher, size, and color
That means I need a different schema: when I add additional attributes, I cannot query on them unless the column name is listed in the schema.
The problem, however, is that right now I am only able to query by ID, advertiser, date, and publisher. How can I add additional columns that I can query by?
I read this, which appears to say that it is possible:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
However, there is no example here:
http://boto.readthedocs.org/en/latest/dynamodb2_tut.html
I tried adding an additional range key, but it doesn't work (you cannot have duplicates).
I'd like it to be something like:
table = Table.create('mySecondTable',
    schema=[
        HashKey('ID'),
        RangeKey('advertiser'),
        otherKey('date'),
        fourthKey('publisher'),  # ... etc.
    ],
    throughput={'read': 5, 'write': 2},
    connection=conn)
Thanks!
If you want to add additional range keys, you need to use a local secondary index (LSI).
You can query the LSI in the same way that you query the base table: you provide an exact value for the hash key and a comparison predicate for the index's range key.
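
A minimal sketch with boto's dynamodb2 API (the index names are illustrative; conn is your DynamoDB connection): LSIs reuse the table's hash key, add an alternative range key, and must be declared when the table is created.

from boto.dynamodb2.fields import AllIndex, HashKey, RangeKey
from boto.dynamodb2.table import Table
from boto.dynamodb2.types import NUMBER

table = Table.create('mySecondTable',
    schema=[
        HashKey('ID'),
        RangeKey('advertiser'),
    ],
    throughput={'read': 5, 'write': 2},
    # Each local secondary index keeps the table's hash key ('ID')
    # and swaps in a different range key to query on.
    indexes=[
        AllIndex('DateIndex', parts=[
            HashKey('ID'),
            RangeKey('date', data_type=NUMBER),
        ]),
        AllIndex('PublisherIndex', parts=[
            HashKey('ID'),
            RangeKey('publisher'),
        ]),
    ],
    connection=conn)

# Query an LSI: exact hash key, comparison predicate on the index's range key.
results = table.query_2(ID__eq='abc123', date__gte=20140101, index='DateIndex')

Attributes you don't index (for example size and color) can still be stored on items without appearing in any schema, but querying on them efficiently would require additional indexes.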

Dynamics NAV (Navision) web service ReadMultiple date filter

Using the Navision web services, how can you filter by a date?
For example, within the SalesHeader table there is an "ExportedDate". I would like to find all SalesHeaders where the ExportedDate has not been set, or which were exported on a particular date.
It seems that whenever we set a filter on a date field, the web service will either return all rows or no rows.
This can be done. You have to use the same filter expressions as you would use in the NAV client:
01012011.. gives all dates from 01.01.2011 onwards
..01012011 gives all dates up to 01.01.2011
01012011..03012011 gives all dates between 01.01.2011 and 03.01.2011
After publishing page 42 (Sales Order) as a web service in NAV, I added a web reference to the newly created web service in my Visual Studio project. In the C# code, I create a new instance of the service, and tell it to use the default credentials:
SalesOrders_Service salesOrdersService = new SalesOrders_Service();
salesOrdersService.UseDefaultCredentials = true;
Then I instantiate a filter, and set the field and criteria:
SalesOrders_Filter filter = new SalesOrders_Filter();
filter.Field = SalesOrders_Fields.Document_Date;
filter.Criteria = "01-31-14|''"; // specific date (MM-dd-yy) or empty
The filter instance is then added to a new array of SalesOrders_Filters before passing the latter to ReadMultiple:
SalesOrders[] salesOrders = salesOrdersService.ReadMultiple(new SalesOrders_Filter[] { filter }, null, 0);
On my machine, this returns two orders whose Document Date is 31 January 2014, and one order with a blank Document Date.
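
Adapted to the original question (assuming the ExportedDate field is exposed on the published page, so that a SalesOrders_Fields.ExportedDate member exists), the filter would be built the same way:

SalesOrders_Filter filter = new SalesOrders_Filter();
filter.Field = SalesOrders_Fields.ExportedDate; // hypothetical member; depends on the published page
filter.Criteria = "01-31-14|''";                // exported on that date, or never exported (blank)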