I need help with creating a UseCase template for Enterprise Architect (v12.1). I have a UseCase diagram like this one, and I need to generate documentation from the diagram named "Transaction" (as you can see in this picture):
The problem is that one UseCase is located in another package. When generating documentation with my template (I need only the "Transaction" package in my documentation, not Transaction Validation or other packages), the UseCase from Transaction Validation is not generated (simply because this element is from another package).
One more note - we don't write Structured Specification Scenarios; we write scenarios into the "Description" tab, like this one:
I tried to create a template for generating elements located on the diagram, but when creating a template there is only an option to generate Element.StructuredScenarioText for a "foreign" element located on the diagram (I need the red values - ElemScenario.Scenario, ElemScenario.Type and ElemScenario.Notes):
Is there any option to generate a Scenario (not structured) for a "foreign" UseCase located on the diagram?
Thanks for your help!
In order to include the description from an element's scenarios, select the RTF template section package / element / scenario and insert the Notes field.
Your other issue, how to include elements from other packages in a report, is a little trickier.
Normally you would base the template on the diagram and make sure you select Include all Diagram Elements in Report in the Generate Documentation dialog.
The problem in this case is that when you document elements in the context of a diagram, you only have access to a limited subset of the element fields. For scenarios, you're limited to the Element.StructuredScenarioText field, which in your case is empty. The same limited set of fields is available if you document elements in the context of a connector, i.e. elements connected to the element being documented, so you can't use that either.
The simplest option is to create a template fragment. With fragments you can implement your own selection using an SQL query or a custom script, and thus free your document from the package hierarchy.
Alternatively, you can run the generation from a higher package level which includes all your use cases (in your case, "Transactions in WO"), and add a filter in the Generate Documentation dialog to select only those use cases you want.
I would like to get a query for source-to-target field flow using the Informatica repository tables, for example the repository view REP_MAPPING_CONN_PORTS.
We do not have any additional licensing for Metadata Manager. The idea is basically to automate the end-to-end source-to-target logical flow of all fields in a mapping. Say after every release, a job runs which automatically updates the logical flow, which would then be very easy for any person to go through and understand the logic.
Say, for example, I have a mapping m_temp with 4 transformations:
Source --> Source Qualifier --> Expression --> Target
I need to extract the data from the repository in something like the form below, so that I can then showcase it in a front-end tool of some sort.
Say in the above mapping, FIELD_1 starts from the Source and flows through the SQ, and there is a logical IF in the Expression which is then connected to a field FIELD_2 in the Target. This is how I expect the output of the query to look:
From       Logic                        To
FIELD_1    IF(FIELD_1='1','A','B')      FIELD_2
Could someone please assist me with a query that I could run on the Informatica repository?
Here's a tool that I'm using: https://xmlanalyzer.maciejg.pl
It generates documentation based on XML exports, not on the repository, and it shows the basic data lineage, without the full logic. Trying to put in all the expressions, aggregation conditions, lookups, joins etc. would make this huge and unreadable, I'm afraid.
I recently noticed there is a difference in the Item ID of a Sitecore template field between 2 environments (Source and Target). Due to this, any data changes to the field value for a data item using the template are not reflected in the target Sitecore database.
Hence, we manually copy the value from source to target, which takes a lot of time to sync the 2 environments. Any idea how to change the template field Item ID in Sitecore without data loss in the target instance?
Thanks
The template fields have most likely been created manually on the different servers, as @AdrianIorgu has suggested. I am going to suggest that you don't worry about merging fields and tools.
What you really care about is the content on the PRODUCTION instance of your site (assuming that this is Target). In any other environment, content should be regarded as throwaway.
With that in mind, create a package of the template from your PRODUCTION instance and then install that in the other environments, deleting the duplicate field from the Source instance. The GUIDs of the field should now match across all environments. Check this into your source control (using TDS or Unicorn or whatever). You can then correctly update any standard values, and that will be reflected through the servers when you deploy again.
If your other environments (dev/qa/pre-prod) suffer data loss for that field, then don't worry about it; restore a backup from PROD.
Most likely that happened because the field or the template was added manually on the second environment, without migrating the items using packages, serialization or a third-party tool like TDS or Unicorn.
As @SitecoreClimber mentioned above, you can use Razl to sync the two environments and see the differences, but I don't think you will be able to change the field's GUID, to have the two environments consistent, without any data loss. Depending on the volume of your data, fixing this can be tricky.
What I would do:
make sure the target instance has the right template by installing a package with the correct template from source (with a MERGE-MERGE operation), which will end up having a duplicate field name
write a SQL query to get a list of all the items that have a value for that field, and then update those values to use the new field (see the sample below)
Warning: the SQL below is just a sample to get you started; make sure you extend and test it properly before running it on a CD instance
use YOUR_DATABASE
begin tran
declare @oldFieldId nvarchar(100), @newFieldId nvarchar(100)
set @oldFieldId = '75577384-3C97-45DA-A847-81B00500E250' -- old field ID
set @newFieldId = 'A2F96461-DE33-4CC6-B758-D5183676509B' -- new field ID
/* VersionedFields - repeat for SharedFields and UnversionedFields */
select itemId, fieldId, value
from [dbo].[VersionedFields] f with (nolock)
where f.FieldId like @oldFieldId
-- once the list looks right, move the values over to the new field:
update f
set f.FieldId = @newFieldId
from [dbo].[VersionedFields] f
where f.FieldId like @oldFieldId
-- verify the results, then: commit tran (or rollback tran)
For this kind of stuff I suggest you use Sitecore Razl.
It's a tool for comparing and merging Sitecore databases.
Razl allows developers to have a complete side by side comparison between two Sitecore databases; highlighting features that are missing or not up to date. Razl also gives developers the ability to simply move the item from one database to another.
Whether it's finding that one missing template, moving your entire database or just one item, Razl allows you to do it seamlessly and worry free.
It's not a free tool; you can check how to buy it here:
https://www.razl.net/purchase.aspx
This should be a simple scenario - I have a data model with a parent/child relationship. For example's sake, let's say it's Orders and OrderDetails - 1 Order -> many OrderDetails.
I'd like to expose the model via OData using a standard DataService, but with a few limitations.
First, I should only see my Orders. That's simple enough using EntitySetRights.ReadSingle and a QueryInterceptor to make sure the order is in fact mine.
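Something like this, for reference (a rough sketch; OrderEntities, Order, and CustomerName are placeholder names for my EF model):

using System;
using System.Data.Services;
using System.Linq.Expressions;
using System.Web;

public class OrderService : DataService<OrderEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Only single-entity reads on Orders - no feed-wide queries.
        config.SetEntitySetAccessRule("Orders", EntitySetRights.ReadSingle);
    }

    // Filters every query against Orders down to the current user's rows.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OnQueryOrders()
    {
        string user = HttpContext.Current.User.Identity.Name;
        return o => o.CustomerName == user;
    }
}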
So far, so good! But how can the associated OrderDetail records be exposed in the OData feed in a way where I can read the OrderDetails for a specific (read: single) Order without giving access to the entire OrderDetails table?
In other words, I want to allow reading my details
myUrl.com/OrderService.svc/Orders(5)/OrderDetails <-- Good! My order is #5
but not everyone's details
myUrl.com/OrderService.svc/OrderDetails <-- Danger, Scary, Keep Out!
Thanks for the help!
This is so-called "containment" - your sample is exactly the scenario described here: http://data.uservoice.com/forums/72027-wcf-data-services-feature-suggestions/suggestions/1012615-support-containment-hierarchical-models-in-odata?ref=title
WCF Data Services doesn't support this out of the box yet.
It is theoretically possible to implement such a restriction with a custom LINQ provider. In your LINQ implementation you could detect the expansion (not that hard) and allow it in that case, while preventing queries against the entity set itself (also rather easy to recognize). For more details on how the LINQ expressions look, please refer to this series: http://blogs.msdn.com/b/vitek/archive/2010/02/25/data-services-expressions-part-1-intro.aspx
It depends on what provider you wanted to use originally. If you already have a custom provider, this is not that hard. If you have a reflection-based provider, it is possible to layer this on top. If you have EF, this might be rather tricky (I'm not sure it's even possible).
Here is a description of the scenario; I would also appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data, along with the created keys, to a listening SocketServer in one of the listening P2P nodes. This key/data pair is routed to the proper node, which stores the data (associated with the key as an ID) in an XML database.
A second service accepts a query document that is structured based on the same schema, but with optional values that are used for searching and matching against the previously stored records. The second service passes this query (with the proper keys) to the P2P part, gets back the results, and passes them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, as the underlying XML database allows, instead of exact value matches, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two into proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How do I utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The services would still accept documents of the main schema type, but with the inner attribute list based on a template; say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template, just "fill in the blanks", and submit; no need for any fancy look and feel.
4) Can I, or how can I, also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving the services and adding new derived ones), but there was no schema or validation, and the data is passed along as basic strings, which means weak typing and manual parsing that is difficult to update. The reason I want to move to stronger typing is so that changes in the data schema can propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime and persist them for later use). So I came across SDO, which has both dynamic and static typing. The problem is just that there is not enough community and/or examples of this approach, so it seems risky: the Apache Tuscany and EclipseLink implementation examples are very scarce, I could not find complete examples that are not 5+ years old (like this: http://www.ibm.com/developerworks/java/library/j-sdo/), and most seem to focus on the relational usage of SDO rather than the XML use case.
This is my first time asking for programming help (here and elsewhere), so please bear with me. I searched a lot on the net, but I could not find anything useful, only pieces here and there that did not add up.
Any comment or hint is really appreciated.
trfndr
EDIT
I forgot one thing: how would the search service get back the results? Since it is opening a client socket connection, there is no way to get back any results synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and put this contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How do I utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
This depends on what web service framework you are using. JAXB is much easier to use with JAX-WS; and while JAXB is still easier to use with JAX-RS, SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The services would still accept documents of the main schema type, but with the inner attribute list based on a template; say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using @XmlAnyElement to Build a Generic Message
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template, just "fill in the blanks", and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I, or how can I, also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking for how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, in EclipseLink we have implemented a dynamic JAXB feature. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. In the following link, see the section "Switching Off Data Binding":
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates. SDO would be a good choice here: you can keep adding new types to the HelperContext using XML schemas.
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.
I have the following for-each select:
"sc:item($myFolder,.)/descendant::item[@template='myTemplate']". According to Sitecore's own profiler, this crawls through 16,000 items, although there are only approx. 1,700 items with the mentioned template.
Could this be optimized - if so, how?
The for-each is not the problem, but the XPath query used is.
The descendant axis is usually expensive because it requires the engine to do a complete deep search, without the possibility of eliminating paths. So you may want to replace this with a pattern which restricts the required search path.
Maybe using keys would help, but without deeper knowledge of the XML to be processed and the operations performed this is hard to tell.
If you can convert your xsl:for-each to an xsl:template, it should help with performance, as most XSLT processors are optimized for template matching.
"Do you have an example of this? It's a basic for-each that select a list of items based on the template, and then sort by a number field and shows the 3 top (so its basicly a top 3 list)"
Based on what you're trying to accomplish, I think you would be much better off putting this logic into a save handler and organizing your items when a content author saves an item based on the template you're looking for.
For example (and this is a very rough example), create a "Top 3" folder in your content tree. Then, write a save handler that checks the TemplateID of an item as it's being saved. If the saved item's TemplateID matches the TemplateID you're looking for, select all the items in your tree based on the TemplateID and process them with whatever sorting logic you need. As you find an item that belongs in the "Top 3" folder, move the item to the "Top 3" folder.
Then, in your rendering code, you simply have to get the "Top 3" folder and retrieve its children. This will be MUCH faster than attempting to look through 1,700 items every time the rendering is loaded.
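As a rough, hypothetical sketch of such a handler (the class name and template ID are placeholders, and the reorganization step is left as a comment; it gets wired up under the item:saved event in web.config):

using System;
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Events;

// Register in web.config under <events>:
//   <event name="item:saved">
//     <handler type="MyAssembly.TopThreeSaveHandler, MyAssembly" method="OnItemSaved" />
//   </event>
public class TopThreeSaveHandler
{
    // Hypothetical: the template you're looking for.
    private static readonly ID TargetTemplateId = new ID("{REPLACE-WITH-YOUR-TEMPLATE-ID}");

    public void OnItemSaved(object sender, EventArgs args)
    {
        // The saved item is the first event parameter.
        Item item = Event.ExtractParameter(args, 0) as Item;
        if (item == null || item.TemplateID != TargetTemplateId)
            return;

        // Select all items based on the template, apply your sorting logic,
        // and move the winners into the "Top 3" folder here.
    }
}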
If I need to do any particularly difficult searches in an XSLT, then I often define an XSLT helper method to perform the search, as the .NET API has much better facilities available.
In your case, I'd consider creating a .Net function to do the job, either using the Link Manager (as Mark Cassidy suggests) or using Sitecore fast queries to retrieve the items (fast queries are parsed and run as SQL queries).
To register a class as an XSLT extension method library, open web.config, locate the /configuration/sitecore/xslExtensions node, and add an entry with your class's full type name and an XML namespace (the URL doesn't actually have to exist).
To use your function, register the namespace at the top of your XSLT (and remember to add it to the exclude-result-prefixes stylesheet attribute). You should then be able to call namespace:YourFunction().
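As a hypothetical sketch (the class, assembly, namespace URL, and method are all made-up names; the fast query is one possible way to fetch items by template):

using Sitecore.Data.Items;

// Register under /configuration/sitecore/xslExtensions, e.g.:
//   <extension mode="on" type="MyLib.XslHelper, MyLib"
//              namespace="http://mylib.example/xslhelper" singleInstance="true" />
public class XslHelper
{
    // Counts items based on a given template, using a Sitecore fast query
    // (fast queries are parsed and run as SQL, as mentioned above).
    public double CountItemsOfTemplate(string templateId)
    {
        Item[] items = Sitecore.Context.Database.SelectItems(
            "fast://*[@@templateid = '" + templateId + "']");
        return items == null ? 0 : items.Length;
    }
}

In the XSLT you would then declare xmlns:mylib="http://mylib.example/xslhelper" (matching the configured namespace) and call mylib:CountItemsOfTemplate(...).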
For more detailed information on XSLT extension methods, see the presentation component reference, page 40.
The reason the XSLT crawls through 16,000 items when you only have 1,700 matches is that the 16,000 represents the number of nodes tested, not the number of nodes returned. Sitecore XSLT performance depends on your actual data structure: if you have a flat structure with lots of nodes (e.g. everything in one folder), all XSLT processing will suffer. You might consider optimising your data structure into something deeper and more branched to aid XSLT performance. I believe Sitecore themselves recommend that you have no more than 100 items in any location (i.e. never more than 99 siblings for any item).
It can be optimised, but the easiest approach would be to reorganize your content architecture to better suit your needs and requirements.
Straight from XSLT there is no easy solution, but with a bit of coding you can fairly easily get all your content items based on your specific template. It would look something like this:
Item myTemplate = Sitecore.Context.Database.GetItem( "id-of-your-template" );
ItemLink[] links = Sitecore.Globals.LinkDatabase.GetReferrers( myTemplate );
// links now holds the references to items that are based on your template
List<Item> myItems = new List<Item>();
foreach( ItemLink link in links )
    myItems.Add( link.GetSourceItem() );
// your additional processing/filtering here
Keep in mind, this is a very non-specific example of how this can be achieved. I would need to know a lot more about your solution to come up with a better response. And I still feel the best approach is probably to look at your information architecture.