Use of Or/And combination conditions in the task query in Camunda 7

I need to implement this logic in the taskQuery to pull out a task list:
pull up those tasks which meet one of these conditions:
either the task is assigned to this user,
or the task is unassigned but belongs to a group that the user is part of (based on role).
With the above logic, I tried:
taskQuery
.taskUnassigned().taskCandidateGroupIn(group)
.or().taskAssignee(assignee).endOr()
.list();
My list is always empty with the above query. However, if I try each condition separately:
taskQuery.taskUnassigned().taskCandidateGroupIn(groups) ----> pulls out the right tasks in the list
taskQuery.taskAssignee(assignee) ----> pulls out the right tasks in the list
Question:
What am I doing wrong in my combined query when using the OR condition?
Is there a way to implement an “AND” condition so that I can do something like:
Example:
taskQuery
    .or()
        .and().taskUnassigned().taskCandidateGroupIn(group).endAnd()
    .endOr()
    .or().taskAssignee(assignee).endOr()
    .list();
Thanks in advance
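For reference, a minimal sketch of how this combination is commonly written with Camunda 7's OR query. Criteria placed inside or()...endOr() are combined with OR, while any criteria outside that block are ANDed with the whole group, which is why mixing taskUnassigned() and taskCandidateGroupIn() outside the block with taskAssignee() inside it yields an empty list. The class, method and parameter names below are only illustrative:

import java.util.List;
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.task.Task;

public class UserTaskListSketch {

    // "Assigned to this user OR (unassigned and in one of the user's candidate groups)".
    public static List<Task> findTasks(TaskService taskService, String assignee, List<String> groups) {
        return taskService.createTaskQuery()
                .or()
                    .taskAssignee(assignee)
                    // Candidate-group criteria only match unassigned tasks unless
                    // includeAssignedTasks() is called, so taskUnassigned() is usually
                    // not needed here.
                    .taskCandidateGroupIn(groups)
                .endOr()
                .list();
    }
}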

Related

DynamoDB Single Table Design guidance

This is my first time building a single-table design, and I was just wondering if anyone had any advice/feedback/better ways on the following plan.
Going to be building a basic 'meetup' clone, so e.g. users can create events, and then other users can attend those events.
How the entities in the app relate to each other:
Entities (I also added an 'ItemType' to each entity, so e.g. ItemType=Event):
Key Structure:
Access Patterns:
Get All Attendees for an event
Get All events for a specific user
Get all events
Get a single event
Global Secondary Indexes:
Inverted Index: SK-PK-index
ItemType-SK-Index
Queries:
1. Get all attendees for an event:
PK=EVENT#E1
SK=ATTENDEE# (begins with)
2. Get All Events for a specific User
Index: SK-PK-index
SK=ATTENDEE#User1
PK=EVENT# (Begins With)
3. Get All Events (I feel like there's a much better way to do this, if there is please let me know)
Index: ItemType-SK-Index
ItemType=Event
SK=EVENT# (Begins With)
4. Get a single event
PK=EVENT#E1
SK=EVENT#E1
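For illustration, a minimal sketch of queries 1 and 2 using the AWS SDK for Java v2; the key conditions follow the patterns above, while the table name ("meetup") and the client wiring are assumptions:

import java.util.List;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;

public class MeetupQueriesSketch {

    private static AttributeValue s(String value) {
        return AttributeValue.builder().s(value).build();
    }

    // 1. Get all attendees for an event: query the base table on the event's partition key.
    public static List<Map<String, AttributeValue>> attendeesForEvent(DynamoDbClient ddb, String eventId) {
        QueryRequest request = QueryRequest.builder()
                .tableName("meetup")    // hypothetical table name
                .keyConditionExpression("PK = :pk AND begins_with(SK, :sk)")
                .expressionAttributeValues(Map.of(":pk", s("EVENT#" + eventId), ":sk", s("ATTENDEE#")))
                .build();
        return ddb.query(request).items();
    }

    // 2. Get all events for a specific user: query the inverted index (SK-PK-index).
    public static List<Map<String, AttributeValue>> eventsForUser(DynamoDbClient ddb, String userId) {
        QueryRequest request = QueryRequest.builder()
                .tableName("meetup")
                .indexName("SK-PK-index")
                .keyConditionExpression("SK = :sk AND begins_with(PK, :pk)")
                .expressionAttributeValues(Map.of(":sk", s("ATTENDEE#" + userId), ":pk", s("EVENT#")))
                .build();
        return ddb.query(request).items();
    }
}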
A couple of questions I had:
When returning a list of attendees, I'd want to be able to get extra data for each attendee, e.g. first/last name, etc.
Based on this example: https://aws.amazon.com/getting-started/hands-on/design-a-database-for-a-mobile-app-with-dynamodb/module-5/
To avoid duplicating data and having to handle data changes (e.g. a user changes their name), should I use partial normalization and the BatchGetItem API to retrieve the details? (A rough sketch of this follows after this list.)
For fuzzy searches etc., is the best approach to stream this data into e.g. Elasticsearch/OpenSearch?
If so, when building APIs, would you still use DynamoDB for some queries, or just use Elasticsearch for everything?
E.g. for Get All Events, would using an ItemType of 'Event' end up creating a hot partition if there's a huge number of events?
Sorry for the long post; would appreciate any feedback/advice/better ways to do things, thank you!
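As a rough sketch of the partial-normalization idea from the first question above: the attendee query returns only user ids, and a single BatchGetItem then fetches the full user items. The USER#<id> key shape and the table name are purely hypothetical here, since the key structure for user items isn't shown in this post:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BatchGetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.KeysAndAttributes;

public class AttendeeDetailsSketch {

    // Fetch the full user items for a list of attendee user ids in one round trip.
    // Note: BatchGetItem accepts at most 100 keys per request, so larger lists need chunking.
    public static List<Map<String, AttributeValue>> loadUsers(DynamoDbClient ddb, List<String> userIds) {
        List<Map<String, AttributeValue>> keys = userIds.stream()
                .map(id -> Map.of(
                        "PK", AttributeValue.builder().s("USER#" + id).build(),   // hypothetical key shape
                        "SK", AttributeValue.builder().s("USER#" + id).build()))
                .collect(Collectors.toList());

        BatchGetItemRequest request = BatchGetItemRequest.builder()
                .requestItems(Map.of("meetup", KeysAndAttributes.builder().keys(keys).build()))
                .build();

        return ddb.batchGetItem(request).responses().get("meetup");
    }
}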

In Camunda, what is the correct, optimised way of coding among these two options?

Retrieving user tasks first, and then retrieving the processInstanceId from each task.
Retrieving process instances first, and then iterating over them with a loop to find the user tasks in each process instance.
Please give the reason too.
If you want to build a task list you should use the task queries, as you can apply the right queries and filters there easily. Process instance IDs are part of the result.
But it would indeed be interesting to understand the use case: what exactly do you need the process instance ID for?
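To illustrate the first option, a minimal sketch of a task query that already yields the process instance ids; the TaskService wiring and the assignee value are assumptions:

import java.util.List;
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.task.Task;

public class TaskListSketch {

    // One task query returns the tasks and, with them, their process instance ids;
    // no extra query per process instance is needed.
    public static void printProcessInstanceIds(TaskService taskService) {
        List<Task> tasks = taskService.createTaskQuery()
                .taskAssignee("demo")   // hypothetical assignee
                .active()
                .list();

        for (Task task : tasks) {
            System.out.println(task.getId() + " -> " + task.getProcessInstanceId());
        }
    }
}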

SSIS Webservice Task Variable

Wonder if someone could help me here. I am trying to download data using the Web Service task. The data supplier has a limit of 1000 records per call and asked us to iterate through the whole data set using the "select" and "skip" parameters: "For example to select the first 1000 records in the data set you should set the select parameter to 1000 and the skip parameter to 0. To select the next 1000 records you should set the select parameter to 1000 and the skip parameter to 1000. You should continue to do this until 0 records are returned to you to get the whole data set."
I am not sure how I can implement this in the Web Service task using a For Loop or Foreach Loop. Any help or tips will be greatly appreciated.
Many thanks
I don't see any options within the Foreach Loop, as the enumerator supports no sort of dynamic HTTP connection. So if it is possible, it should be done using a Script Task that defines a list of URLs your loop can iterate over; then, for each iteration, the URL from the dynamic list is linked into a dynamic connection that can be used by the Web Service task inside the Foreach Loop.
You might want to consider programming this task fully in the Script Task and storing the result in a SQL table.
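The paging itself is just a select/skip loop. Here is a rough sketch of that loop, written in Java purely for illustration (in SSIS it would live inside the Script Task, typically in C#); the endpoint URL is a placeholder and the record counting depends on the supplier's response format:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PagedDownloadSketch {

    // Page through the service 1000 records at a time until an empty page is returned.
    public static void downloadAll() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        final int select = 1000;
        int skip = 0;

        while (true) {
            // Placeholder endpoint; the real URL and authentication come from the supplier.
            URI uri = URI.create("https://example.com/data?select=" + select + "&skip=" + skip);
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            String page = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            if (countRecords(page) == 0) {
                break;   // 0 records returned: the whole data set has been retrieved
            }
            // ... stage this page, e.g. insert it into a SQL table ...
            skip += select;
        }
    }

    // Placeholder: how records are counted depends on whether the service returns XML or JSON.
    private static int countRecords(String page) {
        throw new UnsupportedOperationException("parse the response according to the supplier's format");
    }
}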

Trigger an Informatica workflow based on the status column in an Oracle table

I want to implement the scenario below without using a PL/SQL procedure or trigger.
I have a table called emp_details with columns (empno, ename, salary, emp_status, flag, date1).
If someone updates the columns to emp_status='abc' and flag='y', Informatica WF1 would be running continuously, checking for the emp_status value 'ABC'.
If it finds one or more records, it will query all of them and invoke WF2.
WF1 will pass the values ename, salary and date1 to WF2 (WF2 will insert the records into the table emp_details2).
How can I do this using an Informatica approach instead of PL/SQL or a trigger?
If you want to achieve this in real time, write the output of WF1 to a message queue and have the second workflow, WF2, subscribe to the message queue produced by WF1.
If you have a batch process in place, produce an output file from WF1 and use this output file in WF2. You can easily set up this dependency using job schedulers.
I don't understand why you need two workflows in the first place. Why not accomplish the emp_details2 table updates with the very same workflow that is looking for differences?
Anyway, this can be done using an indicator file:
WF1, running continuously, should create a file if any changes have been found.
WF2 should be running continuously with an EventWait set to wait for the indicator file specified above. Once the file is found, it should use an Assignment Task to rename/delete the file, fetch the desired data from the source, and populate the emp_details2 table.
If you need it this way, you can pass the data through the indicator file.
You can do this in a single workflow. Create a dummy session which checks for the flag in the table, then divide the flow into two based on the link conditions below:
Flow one: link condition Session.Status=SUCCEEDED AND SOURCE_SUCCESS_ROWS (the source row count) >= 1, then run your actual session which will load the data.
Flow two: link condition Session.Status=SUCCEEDED AND SOURCE_SUCCESS_ROWS = 0; connect this to a Control task and mark the workflow as complete.
Make sure you schedule the workflow at the Informatica level to run continuously.
Cheers

PowerCenter - concurrent target instances

We have a situation where we get a different execution order of instances of the same target being loaded from a single source qualifier.
We have a problem when we promote a mapping from DEV to TEST: when we execute in TEST after promoting, the execution order differs.
For instance, we have a Router with 3 groups for Insert, Update and Delete, followed by the appropriate Update Strategies to set the row type accordingly, followed by three target instances.
RTR ----> UPD_Insert ----> TGT_Insert
    \---> UPD_Update ----> TGT_Update
    \---> UPD_Delete ----> TGT_Delete
When we test this out, using data that does an insert followed by an update followed by a delete, all based on the same primary key, we get a different execution order in TEST compared to the same data in our DEV environment.
Anyone have any thoughts? I would post an image but I don't have enough cred yet.
Cheers,
Gil.
You cannot control the load order as long as you have a single source. If you could separate the loads to use separate sources, the target load order setting in the mapping could be used, or you could even create separate mappings for them.
As it is now, you should use a single target and utilize the Update Strategy transformation to determine the wanted operation for each record passing through. It is then possible to use a sort to define in what order the different operations are applied to the physical table.
You can use the Sorter transformation just before the Update Strategy; based on the update strategy condition you can sort the incoming rows, so the rows first go through the insert, then the update, and lastly the delete strategy.
A simple solution is to try renaming the target definitions in alphabetical order, e.g. INSERT_A, UPDATE_B, DELETE_C, and then start loading.
This will load in A, B, C order. Try it and let me know.