ENGINE-09005 Could not parse BPMN process. Errors: - camunda

I just started learning Camunda. When I try to deploy my diagram I get the error below. How can I fix it?
ENGINE-09005 Could not parse BPMN process. Errors:
* src-resolve: Cannot resolve the name 'extension' to a(n) 'element declaration' component. | resource diagram_1.bpmn | line 15 | column 70
* src-resolve: Cannot resolve the name 'rootElement' to a(n) 'element declaration' component. | resource diagram_1.bpmn | line 16 | column 72
* src-resolve: Cannot resolve the name 'bpmndi:BPMNDiagram' to a(n) 'element declaration' component. | resource diagram_1.bpmn | line 17 | column 79
* src-resolve: Cannot resolve the name 'relationship' to a(n) 'element declaration' component. | resource diagram_1.bpmn | line 18 | column 73
* cvc-complex-type.2.4.a: Invalid content was found starting with element 'bpmn:process'. One of '{"http://www.omg.org/spec/BPMN/20100524/MODEL":import}' is expected. | resource diagram_1.bpmn | line 3 | column 58 [ deploy-error ]

Related

How to show only unique rows in CloudWatch output?

I have a CloudWatch query that creates a table of output that looks something like:
id | name | age
1313 | Sam | 24
1313 | Sam | 24
1313 | Sam | 24
1481 | David | 62
1481 | David | 62
3748 | Sarah | 37
3748 | Sarah | 37
3748 | Sarah | 37
1481 | David | 62
(All example values)
Is there a way to have CloudWatch automatically deduplicate its output, so I just see:
id | name | age
1313 | Sam | 24
1481 | David | 62
3748 | Sarah | 37
You can group on these three fields with an aggregate and then drop the aggregated value (keeping just the three fields). For example:
YOUR CURRENT QUERY | stats count(*) by id, name, age | display id, name, age

Auto-incrementing primary keys, consecutive values or not, and foreign key bindings (django)

I am building a classic database and another dev expressed a doubt about my approach:
To put it simply, I have 2 tables (with very very few updates), one of which has an FK (foreign key) on the other. There is an autoincrement on the PK (primary key).
With Django I create fixtures (initial data) to populate these 2 tables, and I set the following PK values:
id | codes | name | zone_id
1 | 13 | bla1 | 1
2 | 84 | bla2 | 2
3 | 06 | bla3 | 4
Simple, basic. But this colleague changed my fixtures because it's "better", maybe for convenience:
id | codes | name | zone_id
13 | 13 | bla1 | 1
84 | 84 | bla2 | 2
6 | 06 | bla3 | 4
As a result, the PK values are no longer consecutive. Is this correct, or should the auto-increment be removed? To me it seems completely absurd to have changed that.
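For reference, a minimal sketch of the initial data written as equivalent ORM calls rather than the actual JSON fixture file (the model name Department and the app name myapp are hypothetical; the field values follow the tables above):

# Hypothetical model; field values follow the fixture tables above.
from myapp.models import Department   # the table with the FK and the auto-increment PK

# My original fixtures: sequential PKs, independent of the "codes" column.
Department.objects.create(pk=1, codes="13", name="bla1", zone_id=1)
Department.objects.create(pk=2, codes="84", name="bla2", zone_id=2)
Department.objects.create(pk=3, codes="06", name="bla3", zone_id=4)

# My colleague's version reuses the codes as PKs (13, 84, 6), so the PK
# values no longer follow each other.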

Query to give workflow statistics like source count, target count, start time and end time of each session

I have one workflow which contains five sessions. I am looking for a query against the Informatica repository tables/views that gives me output like the one below, but I have not been able to come up with a query that gives the desired result.
workflow-name | session-name | source-count | target-count | session-start-time | session-end-time
If you have access to the repository metadata tables, then you can use the query below.
Metadata tables used in the query:
OPB_SESS_TASK_LOG
OPB_TASK_INST_RUN
OPB_WFLOW_RUN
Here the repository user is INFA_REP and the workflow name is wf_emp_load.
SELECT w.WORKFLOW_NAME,
t.INSTANCE_NAME,
s.SRC_SUCCESS_ROWS,
s.TARG_SUCCESS_ROWS,
t.START_TIME,
t.END_TIME
FROM INFA_REP.OPB_SESS_TASK_LOG s
INNER JOIN INFA_REP.OPB_TASK_INST_RUN t
ON s.INSTANCE_ID=t.INSTANCE_ID
AND s.WORKFLOW_RUN_ID=t.WORKFLOW_RUN_ID
INNER JOIN INFA_REP.OPB_WFLOW_RUN w
ON w.WORKFLOW_RUN_ID=t.WORKFLOW_RUN_ID
WHERE w.WORKFLOW_RUN_ID =
(SELECT MAX(WORKFLOW_RUN_ID)
FROM INFA_REP.OPB_WFLOW_RUN
WHERE WORKFLOW_NAME='wf_emp_load')
ORDER BY t.START_TIME
Output
+---------------+---------------+------------------+-------------------+--------------------+--------------------+
| WORKFLOW_NAME | INSTANCE_NAME | SRC_SUCCESS_ROWS | TARG_SUCCESS_ROWS | START_TIME | END_TIME |
+---------------+---------------+------------------+-------------------+--------------------+--------------------+
| wf_emp_load | s_emp_load | 14 | 14 | 10-JUN-18 18:31:24 | 10-JUN-18 18:31:26 |
| wf_emp_load | s_emp_revert | 14 | 14 | 10-JUN-18 18:31:27 | 10-JUN-18 18:31:28 |
+---------------+---------------+------------------+-------------------+--------------------+--------------------+

Apache camel-jetty and activemq conflict in karaf?

I am having an issue in a clean Karaf (4.0.3) when installing both camel-jetty and activemq (for example, activemq-client or activemq-broker) together. It doesn't matter in which order the features are installed: the second one hangs during install, with no information in the Karaf log beyond the fact that the install has begun.
Has anyone seen this before? Is there a workaround? I tried having the activemq-broker in its own instance, but my app that uses both camel-jetty and JMS still needs the ActiveMQ connector initialized, so I still need to load the activemq bundles/features.
Here is the output of both installs performed separately; when performed one after the other, the second always hangs the Karaf instance. There don't appear to be any bundles in common.
karaf#root()> feature:install activemq-client
karaf#root()> list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
------------------------------------------------------------------------
52 | Active | 80 | 5.12.1 | activemq-osgi
53 | Active | 80 | 3.3.0 | Commons Net
54 | Active | 80 | 2.4.2 | Apache Commons Pool
55 | Active | 80 | 1.0.1 | geronimo-j2ee-management_1.1_spec
56 | Active | 80 | 1.1.1 | geronimo-jms_1.1_spec
57 | Active | 80 | 1.1.1 | geronimo-jta_1.1_spec
58 | Active | 80 | 3.4.6 | ZooKeeper Bundle
63 | Active | 80 | 2.2.11.1 | Apache ServiceMix :: Bundles :: jaxb-impl
70 | Active | 80 | 3.18.0 | Apache XBean :: Spring
71 | Active | 80 | 0.6.4 | JAXB2 Basics - Runtime
(Performed clean karaf launch in between installs to get bundle listings)
karaf#root()> feature:install camel-jetty
karaf#root()> list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
--------------------------------------------------------------------------------
55 | Active | 80 | 2.12.2 | camel-core
56 | Active | 80 | 2.12.2 | camel-http
57 | Active | 80 | 2.12.2 | camel-jetty
58 | Active | 80 | 2.12.2 | camel-karaf-commands
59 | Active | 80 | 1.8.0 | Commons Codec
63 | Active | 80 | 3.1.0.7 | Apache ServiceMix :: Bundles :: commons-httpclient

Django QuerySet slicing returning unexpected results

I'm dynamically writing a Django query and am receiving unexpected results based on the slice parameters. For example, if I request queryset[0:10] and queryset[10:20] I receive some of the same items in query2 that I found in query1.
Searching around, the issue I'm facing appears similar to:
Simple Django Query generating confusing Queryset results
except I am defining an order_by for my query, so it doesn't appear to be an exact match.
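Roughly, the slicing in question looks like this (a simplified sketch; the model and app names are assumptions taken from the table name in the generated SQL below, and the real query is built dynamically):

# Assumed names, based on the table intercache_localinventorycountsummary.
from intercache.models import LocalInventoryCountSummary

queryset = LocalInventoryCountSummary.objects.order_by("-hadTransactionsDuring")

query1 = queryset[0:10]     # evaluates to the LIMIT 10 query below
query2 = queryset[10:20]    # evaluates to the LIMIT 10 OFFSET 10 query below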
Viewing the queryset.query for my two queries...
queryset[0:10] generates:
SELECT "intercache_localinventorycountsummary"."id",
"intercache_localinventorycountsummary"."part",
"intercache_localinventorycountsummary"."site",
"intercache_localinventorycountsummary"."location",
"intercache_localinventorycountsummary"."hadTransactionsDuring"
FROM "intercache_localinventorycountsummary"
ORDER BY "intercache_localinventorycountsummary"."hadTransactionsDuring" DESC
LIMIT 10
queryset[10:20] generates:
SELECT "intercache_localinventorycountsummary"."id",
"intercache_localinventorycountsummary"."part",
"intercache_localinventorycountsummary"."site",
"intercache_localinventorycountsummary"."location",
"intercache_localinventorycountsummary"."hadTransactionsDuring"
FROM "intercache_localinventorycountsummary"
ORDER BY "intercache_localinventorycountsummary"."hadTransactionsDuring" DESC
LIMIT 10 OFFSET 10
Per request, I've listed the literal SQL generated by Django and run it manually against the DB.
Results for Query1:
id | part | site | location | hadTransactionsDuring
------+---------+------+----------+-----------------------
2787 | 2217-1 | 01 | Bluebird | t
2839 | 2215 | 01 | 2600 FG | t
2558 | R4367 | 01 | 2600 Raw | t
2637 | 4453 | 01 | 2600 FG | t
2810 | 1000 | 01 | 2600 FG | t
2531 | 3475 | 01 | 2600 FG | t
2526 | 4596Z | 01 | 2550 FG | t
2590 | 3237-12 | 01 | 2600 Raw | t
3077 | 4841Y | 01 | 2600 FG | t
2919 | 3407 | 01 | 2600 FG | t
Results for Query2:
id | part | site | location | hadTransactionsDuring
------+--------------+------+----------+-----------------------
2598 | 2217-2 | 01 | 2600 Raw | t
2578 | 2216-5 | 01 | 2600 Raw | t
2531 | 3475 | 01 | 2600 FG | t
3010 | 3919 | 01 | 2600 FG | t
2558 | R4367 | 01 | 2600 Raw | t
2637 | 4453 | 01 | 2600 FG | t
2526 | 4596Z | 01 | 2550 FG | t
2590 | 3237-12 | 01 | 2600 Raw | t
2570 | R3760-BRN-GS | 01 | 2600 Raw | f
2569 | 4098 | 01 | 2600 FG | f
(You can see id's 2558, 2637, 2526, 2590 are returned for both queries)
Any guesses what I'm doing wrong here? It seems I must be fundamentally misunderstanding something about how QuerySet slicing works.
Update:
The DB schema is as follows... are result orderings unreliable when ordering by non-indexed fields, perhaps?
\d intercache_localinventorycountsummary
Table "public.intercache_localinventorycountsummary"
Column | Type | Modifiers
-----------------------+--------------------------+------------------------------------------------------------------------------------
id | integer | not null default nextval('intercache_localinventorycountsummary_id_seq'::regclass)
_domain_id | integer |
_created | timestamp with time zone | not null
_synced | timestamp with time zone |
_active | boolean | not null default true
dirty | boolean | not null default true
lastRefresh | timestamp with time zone |
part | character varying(18) | not null
site | character varying(8) | not null
location | character varying(8) | not null
quantity | numeric(16,9) |
startCount | timestamp with time zone |
endCount | timestamp with time zone |
erpCountQOH | numeric(16,9) |
hadTransactionsDuring | boolean | not null default false
quantityChangeSince | numeric(16,9) |
hadManualDating | boolean | not null
variance | numeric(16,9) |
unitCost | numeric(16,9) |
countCost | numeric(16,9) |
varianceCost | numeric(16,9) |
Indexes:
"intercache_localinventorycountsummary_pkey" PRIMARY KEY, btree (id)
"intercache_localinventorycount__domain_id_5691b6f8cca017dc_uniq" UNIQUE CONSTRAINT, btree (_domain_id, part, site, location)
"intercache_localinventorycountsummary__active" btree (_active)
"intercache_localinventorycountsummary__domain_id" btree (_domain_id)
"intercache_localinventorycountsummary__synced" btree (_synced)
Foreign-key constraints:
"_domain_id_refs_id_163d40e6b21ac0f9" FOREIGN KEY (_domain_id) REFERENCES intercache_domain(id) DEFERRABLE INITIALLY DEFERRED
The problem lies with this:
ORDER BY "intercache_localinventorycountsummary"."hadTransactionsDuring" DESC
Apparently you've overridden the ordering, either explicitly in the query or in the model's Meta options (see Model Meta options: ordering).
If you want to order by hadTransactionsDuring but still have a predictable ordering, you should add a second ordering field to resolve cases where the first one has the same value. With only a boolean to order by, the database is free to return tied rows in a different order on each query, which is why your slices overlap. For example:
queryset.order_by("-hadTransactionsDuring", "id")
Keep in mind that RDBMSes, be it PostgreSQL or MySQL, never guarantee any order at all unless it is explicitly specified with ORDER BY. Most queries happen to return rows in primary-key order, but that's a happy coincidence of how the table storage is implemented internally rather than something you can rely on. In other words, you cannot assume that a Django queryset is ordered on any field besides the ones you've specified in order_by.
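If the ordering comes from the model's Meta options rather than an explicit order_by, the same tie-breaker can be applied there; a minimal sketch (the model and field names are taken from the schema above, all other fields omitted):

from django.db import models

class LocalInventoryCountSummary(models.Model):
    # Only the field involved in the ordering is shown here.
    hadTransactionsDuring = models.BooleanField(default=False)

    class Meta:
        # "id" breaks ties between rows that share the same
        # hadTransactionsDuring value, so LIMIT/OFFSET slices can't overlap.
        ordering = ["-hadTransactionsDuring", "id"]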