I'm currently looking for a way to apply two Physical Renderers to one applet via the manifest. Is it possible at all? If so, do I have to extend one file with the other?
You can definitely use multiple Physical Renderers on a single applet by extending one of your PRs with the second one. Make sure you define your Manifest Files in the correct order: set the sequence number of your base PR to 1 and the extending PR to 2. Without a specified order you are likely to run into dependency issues.
I'm about to start a project that requires me to load specific information from an IFC file into classes or structs. I'm using C++, but it's been some years since I last used it so I'm a bit rusty.
The IFC file has a linked structure, where an element in a line might refer to a different line, which in turn links to another. I've included a short example where the initial "#xxx" is the line index and any other "#xxx" in the line is a link to a different line.
#170=IFCAXIS2PLACEMENT3D(#168,$,$);
#171=IFCLOCALPLACEMENT(#32,#170);
#172=IFCBUILDINGSTOREY("GlobalId", #41, "Name", "Description", "ObjectType", #171, ...);
In this example I would need to search for "IFCBUILDINGSTOREY", and then follow the links backwards through the file, jumping around and storing the important bits of information I need.
The main problem is that my test file has 273480 lines (18MB), and links can jump from one end of the file to the other - and I'll likely have to handle larger files than this.
In this file I need to populate about 500 objects, so that's a lot of jumping around the file to grab the relevant information.
What's a performance-friendly method of jumping around a file like that?
(Disclosure - I help out with a .NET IFC implementation)
I'd question what it is you're doing that means you can't use one of the many existing implementations of the IFC schema. Parsing the IFC models is generally the simple part of the problem. If you want to visualise the geometry or take measurements from the geometry primitives, there's a whole other level of complexity... e.g. just one particular geometry type out of dozens: https://standards.buildingsmart.org/IFC/DEV/IFC4_3/RC2/HTML/link/ifcadvancedbrep.htm
If you go to buildingSMART's software implementations list and search for 'development', you'll get a good list of them for various technologies/languages.
If you're sure you want to implement it yourself, the typical approach is to build some kind of dictionary/map holding the entities keyed by their ID. Naively, you can run an initial pass through the file with a lexer and build the map in memory. But as IFC models can be over a GB, you may need a more sophisticated approach where you build some kind of persisted index - maybe even put it into a database with indexes (perhaps some flavour of document database). This becomes more important if you want to support 'random access' to the data over multiple sessions.
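For illustration, a minimal C++ sketch of the in-memory map approach (it assumes one entity per line, as in the sample above; real STEP files may wrap an entity across several lines, which would need a proper lexer):

#include <cstdlib>
#include <fstream>
#include <string>
#include <unordered_map>

// Build an id -> line map in one pass over the file.
std::unordered_map<long, std::string> buildIndex(const std::string& path)
{
    std::unordered_map<long, std::string> index;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] != '#') continue;          // skip header/footer
        if (line.find('=') == std::string::npos) continue;
        long id = std::strtol(line.c_str() + 1, nullptr, 10);  // "#170=..." -> 170
        index.emplace(id, line);
    }
    return index;
}

Following a link such as #170 then becomes an O(1) index.at(170) lookup instead of a file seek, and an 18 MB model fits comfortably in memory.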
Are there any tried/true methods of managing your own sequential integer field w/o using SQL Server's built in Identity Specification? I'm thinking this has to have been done many times over and my google skills are just failing me tonight.
My first thought is to use a separate table to manage the IDs and use a trigger on the target table to manage setting the ID. Concurrency issues are obviously important, but insert performance is not critical in this case.
And here are some gotchas I know I need to look out for:
1. Need to make sure the same ID isn't doled out more than once when multiple processes run simultaneously.
2. Need to make sure any solution to 1) doesn't cause deadlocks.
3. Need to make sure the trigger works properly when multiple records are inserted in a single statement, not only for one record at a time.
4. Need to make sure the trigger only sets the ID when it is not already specified.
The reason for the last point (and the whole reason I want to do this without an Identity Specification field in the first place) is that I want to seed multiple environments at different starting points, and I want to be able to copy data between them so that the ID for a given record remains the same across environments (and I have to use integers; I cannot use GUIDs).
(Also, yes, I could toggle identity insert on/off to copy data and still use a regular Identity Specification field, but then it reseeds after every insert. I could then use DBCC CHECKIDENT to reseed it back to where it was, but I feel the risk with that solution is too great. It only takes one person making a mistake once, and by the time we realized it, repairing the data would be a real pain... probably enough pain that it would have made more sense just to do what I'm doing now in the first place.)
SQL Server 2012 introduced the concept of a SEQUENCE database object - something like an "identity" column, but separate from a table.
You can create and use a sequence from your code, use its values in various places, and more.
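For illustration, a minimal C++/ODBC sketch of pulling the next value from your code (the sequence name dbo.OrderIds and the connection string are hypothetical; link against odbc32.lib; error handling mostly omitted):

// Assumes a sequence created per environment with its own seed, e.g.:
//   CREATE SEQUENCE dbo.OrderIds START WITH 1000 INCREMENT BY 1;
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main()
{
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // Hypothetical connection string - adjust for your server.
    SQLCHAR conn[] = "Driver={ODBC Driver 17 for SQL Server};"
                     "Server=localhost;Database=MyDb;Trusted_Connection=yes;";
    if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, conn, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT)))
        return 1;

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR*)"SELECT NEXT VALUE FOR dbo.OrderIds;", SQL_NTS);

    SQLBIGINT next = 0;
    if (SQL_SUCCEEDED(SQLFetch(stmt)))
        SQLGetData(stmt, 1, SQL_C_SBIGINT, &next, 0, NULL);
    std::printf("next id: %lld\n", (long long)next);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

Because each environment can create the sequence with a different START WITH, and explicitly supplied IDs simply bypass the sequence default, this sidesteps the trigger and reseeding gotchas listed above.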
See these links for more information:
Sequence numbers (MS Docs)
CREATE SEQUENCE statement (MS Docs)
SQL Server SEQUENCE basics (Red Gate - Joe Celko)
I've been working with Endeca at arm's length for three years. Now I need to write my first dynamic business rule.
I have records with a property, say "ActiveField"; as a business rule I need to take the value of "ActiveField" and return the records that match it. I'll restrict it to 20 with the Style.
I've read about writing dynamic business rules, and I've gone through the dialogue box. I can't find where I'd need to write the logic that makes the matches. If it were SQL, I expect I'd type in:
SELECT record.name, record.id WHERE record.ActiveField = #ActiveField
I appreciate that Endeca might not work like this, or that it may convey this functionality through drop-down boxes that are written to XML config files.
But I can't find any hint of this level of complexity in the documentation; I'm probably missing something since this is fundamental.
Business rules are triggered by search / navigation states, not by records.
Rules can be created in several places depending on your deployment:
1. Developer Studio
2. Merchandising Workbench (Page Builder or Rule Manager)
3. Experience Manager (which has replaced Merchandising Workbench in the most recent releases)
In any of these locations you can set a trigger for your rule, which can be a search term, one or more dimension values, or a combination of the two.
The actual records returned do not affect whether or not the rule is triggered. At that point your application has to take over and do something with the rule.
Best of luck.
In our project, we use a baseline naming convention as follows:
ProjName-... (for example, Proj-2.0.1.20)
We then update our file version number to match, e.g. 2.0.1.20.
After creating components in ClearCase UCM, we often tend to leave some components unbuilt (because no changes were made there).
Though we can apply a baseline to all the components, we cannot update the file version number of a component that is not built.
So the baseline number and the file version number are no longer the same.
My question is this: should we use the same version number for the file version and the baseline, so that traceability is easier? Is that the standard practice?
There is no standard when it comes to baseline naming conventions: you can choose any version numbering policy you want.
However, one important "feature" of Baseline is:
a non-modified component is not baselined.
I.e., when you set a baseline on a stream, only components with modifications since the last baseline receive a new baseline.
The others (not modified) do not.
One best practice, when you want to "remember" the non-modified components (with their unchanged baselines) alongside the newly modified components (with their new baselines), is to use a composite baseline.
That links to your previous question "What is composite baseline in UCM and when it will be used?".
I'm looking for a good, efficient method of scanning a directory structure for changed files on Windows XP+. Something like what git does is exactly what I'm after: running git status very quickly displays all modified files, all new (untracked) files and all deleted files, which is just what I would like to do.
I have a basic model up and running which performs an initial scan and stores all filenames, size, dates and attributes.
On a subsequent scan it checks if the size, attributes or date have changed and marks as a changed file.
My issue now comes in detecting moved and deleted files. Is there a tried and tested method for this sort of thing? I'm struggling to come up with a good method.
I should mention that it will eventually use ReadDirectoryChangesW to monitor files and alert the user when something changes so a full scan is really a last resort after the initial scan.
Thanks,
J
EDIT: I think I may have described the problem badly. The issue I'm facing is not so much detecting the changes - I have ReadDirectoryChangesW() using IOCP on multiple threads to detect when a change happens - the issue is more what to do with the information. For example, a moved file is reported as a delete followed by a create, and a rename comes in 2 parts: old name, followed by new name. So what I'm asking is how to differentiate between a delete that is part of a move and an actual delete. I'm guessing buffering the changes and processing batches would be an option, but it feels messy.
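To illustrate the kind of batching I have in mind, here's a hypothetical sketch (the names and the 500 ms window are made up):

#include <chrono>
#include <string>
#include <vector>

// Deletes are held briefly; a matching create within the window turns the
// pair into a move, otherwise the delete is treated as a real delete.
struct PendingDelete {
    std::wstring name;                                  // file name only
    unsigned long long size;                            // from my stored scan data
    std::chrono::steady_clock::time_point seen;
};

std::vector<PendingDelete> pending;

// Called for each FILE_ACTION_ADDED; returns true if it completes a move.
bool matchCreateToDelete(const std::wstring& name, unsigned long long size)
{
    using namespace std::chrono;
    const auto now = steady_clock::now();
    for (auto it = pending.begin(); it != pending.end(); ++it) {
        if (it->name == name && it->size == size &&
            now - it->seen < milliseconds(500)) {
            pending.erase(it);                          // delete + create = move
            return true;
        }
    }
    return false;
}

Anything still sitting in pending once its window has expired would then be reported as a real delete.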
In native code, FileSystemWatcher is replaced by ReadDirectoryChangesW. Using it properly is not simple; there is a good baseline to build off here.
I have used this code in a previous job and it worked pretty well. The Win32 API itself (and FileSystemWatcher) are prone to problems that are described in the docs and also discussed in various places online, but the impact of those will depend on your use case.
EDIT: the exact change is indicated in the FILE_NOTIFY_INFORMATION structure that you get back - adds, removals, rename data including old and new name.
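For illustration, a minimal sketch of walking that structure in the buffer ReadDirectoryChangesW fills in (error handling omitted; note how the two rename records arrive as an old-name/new-name pair):

#include <windows.h>
#include <string>

// 'buffer' and 'bytesReturned' come from the completed overlapped call.
void processNotifications(const BYTE* buffer, DWORD bytesReturned)
{
    if (bytesReturned == 0) return;                     // buffer overflowed: rescan
    std::wstring oldName;                               // carried between records
    const BYTE* p = buffer;
    for (;;) {
        const FILE_NOTIFY_INFORMATION* fni =
            reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(p);
        std::wstring name(fni->FileName, fni->FileNameLength / sizeof(WCHAR));
        switch (fni->Action) {
        case FILE_ACTION_ADDED:            break;       // created (or moved in)
        case FILE_ACTION_REMOVED:          break;       // deleted (or moved out)
        case FILE_ACTION_MODIFIED:         break;       // contents or attributes
        case FILE_ACTION_RENAMED_OLD_NAME: oldName = name; break;
        case FILE_ACTION_RENAMED_NEW_NAME: /* oldName -> name */ break;
        }
        if (fni->NextEntryOffset == 0) break;
        p += fni->NextEntryOffset;
    }
}

A rename within the watched tree normally arrives as that pair; a move across watched trees shows up as REMOVED plus ADDED, which is where the batching described in the edit above comes in.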
I voted Liviu M. up. However, another option if you don't want to use the .NET framework for some reason, would be to use the basic Win32 API call FindFirstChangeNotification.
You can use USN journaling if you are up to it; that is pretty low-level (NTFS-level) stuff.
Here you can find detailed information, with source code included. It is written in C#, but most of it is P/Invoking C/C++ functions.
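For a taste of what that looks like natively, here is a minimal C++ sketch (assuming an NTFS C: volume; opening the volume handle requires administrator rights, and error handling is omitted):

#include <windows.h>
#include <winioctl.h>
#include <cwchar>

int main()
{
    HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             NULL, OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE) return 1;          // needs admin rights

    // On older SDKs these structs are named USN_JOURNAL_DATA and
    // READ_USN_JOURNAL_DATA, without the _V0 suffix.
    USN_JOURNAL_DATA_V0 journal;
    DWORD bytes;
    DeviceIoControl(vol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                    &journal, sizeof(journal), &bytes, NULL);

    READ_USN_JOURNAL_DATA_V0 request = {};
    request.StartUsn = journal.FirstUsn;
    request.ReasonMask = 0xFFFFFFFF;                    // every change reason
    request.UsnJournalID = journal.UsnJournalID;

    BYTE buffer[64 * 1024];
    DeviceIoControl(vol, FSCTL_READ_USN_JOURNAL, &request, sizeof(request),
                    buffer, sizeof(buffer), &bytes, NULL);

    // Output: a USN (the next read position) followed by packed USN_RECORDs.
    BYTE* p = buffer + sizeof(USN);
    while (p < buffer + bytes) {
        USN_RECORD* rec = reinterpret_cast<USN_RECORD*>(p);
        std::wprintf(L"%.*s reason=0x%08lx\n",
                     (int)(rec->FileNameLength / sizeof(WCHAR)),
                     (WCHAR*)((BYTE*)rec + rec->FileNameOffset),
                     rec->Reason);
        p += rec->RecordLength;
    }
    CloseHandle(vol);
    return 0;
}

The journal also persists across reboots, which helps with the "full scan as a last resort" case: you can pick up where you left off rather than walking the whole tree again.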