Can I use a ForAll and UpdateIf within a local offline PowerApps collection?

Can anyone help? I need to collect multiple records from a gallery and save them to a local collection when offline.
When my app is connected, my script uses a ForAll to go through all the gallery items and, if the Question ID matches the ID in the gallery, it patches the records to the SQL database. This part works fine.
However, when offline, I collect the items and save them to a local collection called LocalAnswers. It saves only 1 record (instead of 20) and does not pull in the Question ID. I have tried inserting a ForAll and UpdateIf within my Collect function but can't seem to get it right. Any ideas?
If(
    Connection.Connected,
    ForAll(
        Gallery2.AllItems,
        UpdateIf(
            AuditAnswers,
            ID = Value(IDGal.Text),
            {
                AuditID: IDAuditVar,
                Answer: Radio1.Selected.Value,
                Action: ActionGal.Text,
                AddToActionPlan: tglAction.Value
            }
        )
    ),
    Collect(
        LocalAnswers,
        {
            AuditID: IDAuditVar,
            Answer: Radio1.Selected.Value,
            Action: ActionGal.Text,
            AddToActionPlan: tglAction.Value
        }
    )
);

Collect only pulls in a single record because you only have a single record defined (everything between the {}).
I don't typically create collections from Gallery.AllItems but rather from a data source (SharePoint, SQL, another collection, etc.), so I'm not sure whether this will work without testing.
Try something like:
ForAll(Gallery2.AllItems,
    Collect(colLocalAnswers,
        {
            AuditID: ThisRecord.AuditID, // or whatever the control's name is
            Answer: ThisRecord.Radio1.Selected.Value,
            Action: ThisRecord.ActionGal.Text,
            AddToActionPlan: ThisRecord.tglAction.Value
        }
    )
);
SaveData(colLocalAnswers, "localfile")


Cannot create index on non-empty table

I'm currently using AWS Lambda (Node.js) with AWS QLDB.
The scenario is like this.
I had a first table and its indexes when I deployed the service, so that table and its indexes were created. My problem is that once I need to add a new table and its indexes, the index can't be created because the table already exists.
My workaround for creating a new table even when there is already an existing table in my ledger is to query the list of tables I have:
const getTables = async (transactionExecutor: TransactionExecutor) => {
  const statement = `SELECT name FROM information_schema.user_tables`;
  return await transactionExecutor.execute(statement);
};
Then I have this condition to check whether the table already exists:
const tables = JSON.stringify(result.getResultList());
if (
  !JSON.parse(tables).some((object): boolean => object.name === process.env.TABLE_NAME)
) {
  console.log('TABLE A NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME);
}
if (
  !JSON.parse(tables).some(
    (object): boolean => object.name === process.env.TABLE_NAME_1,
  )
) {
  console.log('TABLE B NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME_1);
}
I don't know how to do the same for indexes; I tried using SQL commands in QLDB but it's not working.
I hope you can help me.
Thank you
I'm not quite sure what your question is (the post title and body hint at different things), but I'm going to do my best to answer.
First, QLDB stores data in Ion, not JSON. So, please use the Ion APIs to parse data and not the JSON ones. The reason your code works at all is because Ion is a superset of JSON and the result set doesn't include types that are unknown to JSON. So, for example, if the result set was changed to include an Ion Timestamp, then your code would break.
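For example, the table-existence check in your workaround can read the Ion values directly instead of round-tripping through JSON.stringify and JSON.parse. A minimal sketch using the ion-js dom API (which the driver uses); the result variable is assumed to be whatever your getTables call returned:

  import { dom } from "ion-js";

  // `result` is the Result returned by transactionExecutor.execute(...) above
  const rows: dom.Value[] = result.getResultList();
  const tableExists = rows.some(
    (row) => row.get("name")?.stringValue() === process.env.TABLE_NAME
  );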
Next, actually getting a list of tables has first class support in the driver. Simply use driver.getTableNames.
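A minimal sketch of that approach with the amazon-qldb-driver-nodejs driver; the ledger name and the ensureTable helper are my own placeholders, not part of the driver:

  import { QldbDriver } from "amazon-qldb-driver-nodejs";

  const driver = new QldbDriver("my-ledger"); // ledger name is a placeholder

  const ensureTable = async (tableName: string): Promise<void> => {
    // getTableNames returns the names of all active tables in the ledger
    const existing = await driver.getTableNames();
    if (!existing.includes(tableName)) {
      await driver.executeLambda(async (txn) => {
        await txn.execute(`CREATE TABLE ${tableName}`);
      });
    }
  };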
Third, I think you have a question "can I add an index to a non-empty table?". The answer is "no". This is planned functionality and I will update this answer when it is available. UPDATE: Now you can! https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-qldb-launches-index-improvements/
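With that improvement, adding an index to an existing (even non-empty) table is just another PartiQL statement. A sketch reusing the driver from the snippet above; the table and field names are placeholders:

  await driver.executeLambda(async (txn) => {
    // allowed on non-empty tables since the September 2020 index improvements
    await txn.execute("CREATE INDEX ON MyTable (myField)");
  });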
Finally, I think you're also asking if there is a way to list indexes on a table in the same way as you can list tables in a ledger. The answer to that is 'yes'. The documents returned in information_schema.user_tables look like this:
{
  tableId: "...",
  name: "THE_TABLE_NAME",
  indexes: [
    {
      expr: "[THE_FIELD_BEING_INDEXED]"
    }
  ],
  status: "ACTIVE"
}
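So, to decide whether an index still needs to be created, you can inspect the indexes list of the matching table document. A sketch under the same assumptions as above (driver from the earlier snippet; field names exactly as shown in the document):

  const indexExists = async (tableName: string, field: string): Promise<boolean> => {
    return driver.executeLambda(async (txn) => {
      const result = await txn.execute("SELECT * FROM information_schema.user_tables");
      // each row is an Ion struct shaped like the document above
      return result.getResultList().some(
        (table) =>
          table.get("name")?.stringValue() === tableName &&
          (table.get("indexes")?.elements() ?? []).some(
            (index) => index.get("expr")?.stringValue() === `[${field}]`
          )
      );
    });
  };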

How to add a combo drop-down user list in vtigercrm?

I'm using the existing Task module under the Project module. I would like to assign a particular task to multiple workers, meaning that a group of people will complete the task together.
I already have the workers as users in my vtigercrm, so if I could make the user selection multiple when assigning a single task, it would be easier to handle this use-case.
This use-case already exists in the Calendar module, but I'm unable to understand how they implemented it.
I have thorough knowledge of how to create a custom user list drop-down in any module and have created many of them for my custom modules. Now I'm stuck with the same scenario but needing multiple selections at a time.
I tried this in module.php, but it is not working.
$users = Vtiger_Module::getInstance('Users');
$module = Vtiger_Module::getInstance('MyModule');
$module->unsetRelatedList($users, 'Users', 'get_related_list');
$module->setRelatedList($users, 'Users', array('SELECT'), 'get_related_list');
Can anyone help me to get it done?
It is possible once you understand the Invite Users field in Calendar event creation. That field must be a multi-selection field.
You need to deal with the vtiger_salesmanactivityrel table for inserting the multiple users and for showing them in the detail view.
Here is the code that handles the invited users.
//Handling for invitees
$selected_users_string = $_REQUEST['inviteesid'];
$invitees_array = explode(';', $selected_users_string);
$this->insertIntoInviteeTable($module, $invitees_array);

function insertIntoInviteeTable($module, $invitees_array)
{
    global $log, $adb;
    $log->debug("Entering insertIntoInviteeTable(" . $module . "," . $invitees_array . ") method ...");
    if ($this->mode == 'edit') {
        $sql = "delete from vtiger_invitees where activityid=?";
        $adb->pquery($sql, array($this->id));
    }
    foreach ($invitees_array as $inviteeid) {
        if ($inviteeid != '') {
            $query = "insert into vtiger_invitees values(?,?)";
            $adb->pquery($query, array($this->id, $inviteeid));
        }
    }
    $log->debug("Exiting insertIntoInviteeTable method ...");
}
Hope this gives you a better idea.

Create NewRecord through Siebel Business Service eScript

I am trying to create a new record using BS server script.
Since the process takes place inside the BS, the parent context is not present; hence I am unable to get the parent Row_Id, which I need to explicitly stamp on the child record being created for visibility.
Initially I tried to pass the parent Row_Id from the applet as a profile attribute, but this fails when there are no records in the child applet, i.e. this.BusComp().ParentBusComp().GetFieldValue returns "This operation is invalid when there are no records present" as the "this" context is unavailable.
Any suggestions?
I was able to achieve the desired result with the code below:
sId = TheApplication().ActiveBusObject().GetBusComp("Q").ParentBusComp().GetFieldValue("Id");
if (this.BusComp().CountRecords() > 0)
{
    sA = TheApplication().ActiveBusObject().GetBusComp("Q").GetFieldValue("A");
    sB = TheApplication().ActiveBusObject().GetBusComp("Q").GetFieldValue("B");
}
sEntity = TheApplication().ActiveBusObject().GetBusComp("Q").Name();
It is for these reasons that Siebel provides Pre-Default settings at the Business Component field level. If you wish to do this entirely through scripting, you will have to find the active context: you have to know which BC is the parent.
Let's say you know that the parent BC has to be Account. Then
TheApplication().ActiveBusObject().GetBusComp("Account").GetFieldValue("Id") will give you the row id of the currently selected Account BC record. But do make sure that this script fires only in this context, so check the ActiveViewName to confirm it.
if (TheApplication().GetProfileAttr("ActiveViewName") == "Custom View")
{
    // put the scripting here.
}

Search Informatica for text in SQL override

Is there a way to search all the mappings, sessions, etc. in Informatica for a text string contained within a SQL override?
For example, suppose I know a certain stored procedure (SP_FOO) is being called somewhere in an INFA process, but I don't know where exactly. Somewhere I think there is a Post SQL on a source or target calling it. Could I search all the sessions for Post SQL containing SP_FOO ? (Similar to what I could do with grep with source code.)
You can use repository queries to query the repository (REPO) tables (if you have enough access) to get data related to all the mappings, transformations, sessions, etc.
Please use the link below; it covers almost every kind of repo query, and your answer can be found there.
https://uisapp2.iu.edu/confluence-prd/display/EDW/Querying+PowerCenter+data
select * --distinct sbj.SUBJECT_AREA, m.PARENT_MAPPING_NAME
from REP_SUBJECT sbj, REP_ALL_MAPPINGS m, REP_WIDGET_INST w, REP_WIDGET_ATTR wa
where sbj.SUBJECT_ID = m.SUBJECT_ID
  AND m.MAPPING_ID = w.MAPPING_ID
  AND w.WIDGET_ID = wa.WIDGET_ID
  and sbj.SUBJECT_AREA in ('TLR','PPM_PNLST_WEB','PPM_CURRENCY','OLA','ODS','MMS','IT_METRIC','E_CONSENT','EDW','EDD','EDC','ABS')
  and (UPPER(ATTR_VALUE) like '%PSA_CONTACT_EVENT%'
    -- or UPPER(ATTR_VALUE) like '%PSA_MEMBER_CHARACTERISTIC%'
    -- or UPPER(ATTR_VALUE) like '%PSA_REPORTING_HH_CHRSTC%'
    -- or UPPER(ATTR_VALUE) like '%PSA_REPORTING_MEMBER_CHRSTC%'
  )
  --and m.PARENT_MAPPING_NAME like '%ARM%'
order by 1
Please let me know if you have any issues.
Another less scientific way to do this is to export the workflow(s) as XML and use a text editor to search through them for the stored procedure name.
If you have read access to the schema where the Informatica repository resides, try this:
SELECT DISTINCT f.subj_name folder, e.mapping_name, object_type_name,
b.instance_name, a.attr_value
FROM opb_widget_attr a,
opb_widget_inst b,
opb_object_type c,
opb_attr d,
opb_mapping e,
opb_subject f
WHERE a.widget_id = b.widget_id
AND b.widget_type = c.object_type_id
AND ( object_type_name = 'Source Qualifier'
OR object_type_name LIKE '%Lookup%'
)
AND a.widget_id = b.widget_id
AND a.attr_id = d.attr_id
AND c.object_type_id = d.object_type_id
AND attr_name IN ('Sql Query')--, 'Lookup Sql Override')
AND b.mapping_id = e.mapping_id
AND e.subject_id = f.subj_id
AND a.attr_value is not null
--AND UPPER (a.attr_value) LIKE UPPER ('%currency%')
Yes. There is a small Java-based tool called Informatica Meta Query.
Using that tool, you can search for any information that is present in the Informatica metadata tables.
If you cannot find that tool, you can write queries directly against the Informatica metadata tables to get the required information.
Adding a few more lines to the solutions provided by Data Origin and Sandeep.
It is highly advised not to query the repository tables directly. Rather, you can create synonyms or views and then query those objects to avoid any damage to the repository tables.
In our dev/prod environments, application programmers are not granted any direct access to the repository tables.
As querying the Informatica database isn't the best idea, I would suggest exporting all the workflows in your folder into XML using Repository Manager. From Repository Manager you can select all of them and export them at once. Then write a Java program to search for the pattern in the XMLs you have.
I have written a sample program here; please modify it as per your requirement.
Make a spec file listing the exported workflow XML file names (specFileName).
// Reads a spec file listing the exported workflow XML files (one per line)
// and prints the name of every file that contains the search string.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class WorkflowSearch {
    public static void main(String[] args) {
        String specFileName = "workflows.txt"; // list of exported workflow XML files
        String textToSearch = "SP_FOO";        // the string you are looking for
        try (BufferedReader spec = new BufferedReader(new FileReader(specFileName))) {
            String xmlFileName;
            while ((xmlFileName = spec.readLine()) != null) {
                xmlFileName = xmlFileName.trim();
                if (!xmlFileName.isEmpty()) {
                    searchFile(xmlFileName, textToSearch);
                }
            }
        } catch (IOException ex) {
            System.out.println("Error reading file '" + specFileName + "'");
        }
    }

    private static void searchFile(String fileName, String textToSearch) {
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String currentLine;
            while ((currentLine = reader.readLine()) != null) {
                // trim the line before comparing with the search string
                if (currentLine.trim().contains(textToSearch)) {
                    System.out.println(fileName); // workflow XML containing the pattern
                    return;
                }
            }
        } catch (IOException ex) {
            System.out.println("Error reading file '" + fileName + "'");
        }
    }
}

mediawiki: is there a way to automatically create redirect pages that redirect to the current page?

My hobby is writing up stuff on a personal wiki site: http://comp-arch.net.
Currently I'm using MediaWiki (although I often regret having chosen it, since I need per-page access control).
Often I create pages that define several terms or concepts on the same page. E.g. http://semipublic.comp-arch.net/wiki/Invalidate_before_writing_versus_write_through_is_the_invalidate.
Oftentimes such "A versus B" pages provide the only definitions of A and B. Or at least the only definitions that I have so far gotten around to writing.
Sometimes I will define many more than two topics on the same page.
If I create such an "A vs B" or other page containing multiple definitions D1, D2, ... DN, I would like to automatically create redirect pages, so that I can say [[A]] or [[B]] or [[D1]] .. [[DN]] in other pages.
At the moment the only way I know of to create such pages is manually. It's hard to keep up.
Furthermore, at the time I create such a page, I would like to provide some page text - typically a category.
Here's another example: variant page names. I often find that I want to create several variants of a page name, all linking to the same place. For example
[[multithreading]],
[[multithreading (MT)]],
[[MT (multithreading)]],
[[MT]]
Please don't tell me to use piped links. That's NOT what I want!
TWiki has plugins such as
TOPICCREATE, which automatically creates topics or attaches files at topic save time.
More than that, I remember a TWiki plugin, whose name I cannot remember or google up, that included the text of certain subpages within your current page. You could then edit all of these pages together and save, and the text would be extracted and distributed as needed. (By the way, if you can remember the name of that package, please remind me. It had certain problems, particularly wrt file locking (IIRC it only locked the top file for editing, not the sub-topics, so you could lose stuff.))
But this last, in combination with parameterized templates, would be almost everything I need.
Q: Does MediaWiki have something similar? I can't find it.
I suppose that I could write my own robot to perform such actions.
It's possible to do this, although I don't know whether such extensions exist already. If you're not averse to a bit of PHP coding, you could write your own using the ArticleSave and/or ArticleSaveComplete hooks.
Here's an example of an ArticleSaveComplete hook that will create redirects to the page being saved from all section titles on the page:
$wgHooks['ArticleSaveComplete'][] = 'createRedirectsFromSectionTitles';

function createRedirectsFromSectionTitles( &$page, &$user, $text ) {
    // do nothing for pages outside the main namespace:
    $title = $page->getTitle();
    if ( $title->getNamespace() != 0 ) return true;

    // extract section titles:
    // XXX: this is a very quick and dirty implementation;
    // it would be better to call the parser
    preg_match_all( '/^(=+)\s*(.*?)\s*\1\s*$/m', $text, $matches );

    // create a redirect for each title, unless they exist already:
    // (invalid titles and titles outside ns 0 are also skipped)
    foreach ( $matches[2] as $section ) {
        $nt = Title::newFromText( $section );
        if ( !$nt || $nt->getNamespace() != 0 || $nt->exists() ) continue;

        $redirPage = WikiPage::factory( $nt );
        if ( !$redirPage ) continue; // can't happen; check anyway

        // initialize some variables that we can reuse:
        if ( !isset( $redirPrefix ) ) {
            $redirPrefix = MagicWord::get( 'redirect' )->getSynonym( 0 );
            $redirPrefix .= '[[' . $title->getPrefixedText() . '#';
        }
        if ( !isset( $reason ) ) {
            $reason = wfMsgForContent( 'editsummary-auto-redir-to-section' );
        }

        // create the page (if we can; errors are ignored):
        $redirText = $redirPrefix . $section . "]]\n";
        $flags = EDIT_NEW | EDIT_MINOR | EDIT_DEFER_UPDATES;
        $redirPage->doEdit( $redirText, $reason, $flags, false, $user );
    }
    return true;
}
Note: Much of this code is based on bits and pieces of the pagemove redirect creating code from Title.php and the double redirect fixer code, as well as the documentation for WikiPage::doEdit(). I have not actually tested this code, but I think it has at least a decent chance of working as is. Note that you'll need to create the MediaWiki:editsummary-auto-redir-to-section page on your wiki to set a meaningful edit summary for the redirect edits.