I have a JSP project which uses the Liferay framework. There are default Liferay cookies named COOKIE_SUPPORT and GUEST_LANGUAGE_ID in Liferay. I don't want hackers to discover anything about my technology stack by any means. How can I rename these cookies?
If you want to hide the framework you're using, the names of the cookies are the least of your worries. Worry about server identification, elements of the DOM, the structure and mechanics of URLs, a secure and hardened setup of your server, common translations, default content, standard error messages, etc.
In other words: if you don't want to give away which standard framework you're using (and this is not limited to Liferay), you'll have to roll your own. Good luck getting that as powerful and as well tested as any standard framework.
Rather, worry about keeping your systems updated at all times and protecting against well-known vulnerabilities in older systems. For hardening Liferay specifically, you might want to start with my blog series on securing Liferay (the link is to chapter 1, which refers to the other chapters).
Promoting a comment into this answer: One way to find out how to change them is to search for their names in the source code and identify the kind of plugin you need to provide different values - most likely this will be an ext-plugin. After all, Liferay's source is available. I don't see anything short of this.
We are currently using Adobe ColdFusion 9 for a rather large application. We are thinking about moving to Railo or Blue Dragon.
What problems will we run into?
Will it require a large amount of refactoring or will most CFML code just work on the new system?
Do the alternative engines support most or all of the official tags, or are they more limited?
In short, how divergent are these alternatives from the official language?
Is there anything we can do to make this process less painful (like upgrading to CF11 first or removing/avoiding certain features)?
My question is similar to What Notable Differences are there between Railo, Open Bluedragon, and Adobe Coldfusion?, but while that question is concerned with practical differences, I'm asking more specifically about the practicality of the transition and its implementation.
It all depends on your code and the specific Adobe ColdFusion functionality that you are using. For the most part, each CFML engine supports the same tags and functionality; where they deviate from the Adobe product is usually documented and explained. You need to dive into your code base, look specifically at the features you are using, and compare those to the CFML engine of your choosing. Or you can just download and spin up the alternate CFML engine, drop your code base into it, and see what breaks.
As an example from Railo - CFML Compatibility
Railo tries to adhere to the CFML standard as closely as possible. Still, there are some differences, like missing tags and functions or slightly different behavior. This page and the ones below it describe the incompatibilities.
And I have to question what you are basing this comment on: "and especially it's very uncertain future with them". You are running ColdFusion 9. Adobe has shipped two major version releases since then (10 and 11) and is currently working on the next release.
There are two main areas that can prove problematic when migrating from Adobe ColdFusion to Railo:
Use of feature areas that are not supported by Railo
Sloppy CFML code
The former includes integration with Microsoft technologies, such as Exchange and Sharepoint, as well as Office document manipulation; PDF forms and some of the more sophisticated document manipulations; UI "widget" integration. There are third party extensions for some of the Microsoft integrations, e.g., cfSpreadsheet, but for PDF-related stuff you'll need to roll your own using Java libraries (PDF forms and high quality HTML to PDF conversion are Adobe specialties so be prepared to do quite a bit of work in your migration if you rely on these). As for the UI "widgets", you're better off doing that the "right way" so if you rely on those, you should read ColdFusion UI The Right Way.
The latter is a harder issue to nail down. The differences are not well documented - except in experience posts to mailing lists and blogs by people who've made the transition to Railo - but they include things like:
Using scope names as variables (Railo treats scopes as reserved names for performance reasons)
Embedding comments inside tags, e.g., <cfif x gt y <!--- check boundary --->> (I've seen things like this in older CFML code and was surprised it worked).
Reliance on automatic creation of nested struct elements, e.g., a.b.c = 0 when a has not been declared.
Reliance on long-deprecated features, e.g., parameterExists().
There are many other small differences: Railo is generally stricter about syntax and semantics than Adobe ColdFusion, and often those decisions are driven by performance concerns in that compatibility with Adobe ColdFusion would make Railo slower.
Full disclosure here: I have used Railo pretty much exclusively for five years and I used to run the US arm of Railo's consulting business. That said, you need to consider that Railo is a small company (despite the backing of five fairly large former Adobe partners) with just a handful of people working on the engine, and very little awareness of the product outside the more leading edge portion of the CFML community. By comparison, Adobe have a large team and a marketing budget. Your concerns about the difficulty of finding developers will not be addressed by switching to Railo - to gain access to a larger developer pool, you'd really need to switch to a more popular language, not just a different engine.
Finally, a word about Blue Dragon's engine, specifically Open BlueDragon: the maintainers of that project have stated publicly several times that compatibility with the other engines (Adobe, Railo) is not a primary concern for them, and indeed there are a lot of modern language features that they still don't support or at least don't support in a compatible manner. Last I checked, full-script components were on that list despite having been supported in Adobe ColdFusion and Railo for many years (by which I mean using component { ... } rather than the <cfcomponent><cfscript> .. </cfscript></cfcomponent> form). The BlueDragon dialect of CFML has been steadily diverging over the years so unless you have very old school CFML, that would still run on CFMX7 / ACF8, you probably won't have much success trying to migrate to Open BlueDragon.
There are a couple good answers here and I appreciate the advice given in them. When I asked this question I was looking for something a little more specific, so now that I've had the chance to really play around with migrating our app to Railo I thought I should come back and list out the issues we've run into and, just as importantly, the severity and workarounds. Hopefully this will help others considering making the jump:
cfMessageBox:
cfMessageBox is not a supported tag in Railo. The best solution we've come up with is to create a new custom tag called MessageBox.cfm and drop it into "{railo-install}/lib/railo-server/context/library/tag/". This allows it to be recognized as a core tag and referenced via <cfMessageBox>, which saves us from updating the hundreds of templates that call it. This, of course, requires us to build a message box custom tag from the ground up.
cfDiv:
cfDiv throws a JS error when used to bind to a JS function. My guess is that this is because JS binding is not officially supported (I can't find any reference to it in the official docs): ACF allows it as delayed execution, while Railo simply doesn't accept it. We were able to just create a custom tag that generates a JS setTimeout, installed as described under cfMessageBox above, and that solved our problem, but applications that actually use this tag for its intended purpose may have a more difficult road ahead.
cfWindow:
There appears to be limited support for cfWindow in Railo. Specifically, new windows need to be shown manually, and the destroy methods do not exist. Various other bugs appeared as well. We decided that it made more sense to just move to jQuery-based modals.
cfLayout:
cfLayout support is questionable. Railo's implementation is based on jQuery rather than Ext JS like ACF's version. This causes a problem because we run jQuery 1.10 right now and the built-in tag doesn't appear to work beyond jQuery 1.8; in fact, I could not find any jQuery version with which the tag worked perfectly. We decided that it may be best to, again, just write our own custom tag based on jQuery.
cfDocument:
cfDocument works differently in Railo and seems to require stricter HTML. I found a lot of helpful information here, though as of yet I haven't actually gotten any of my cfDocument calls to work as expected.
Relative cfLocations:
cfLocation calls that began with "../" and backtracked beyond the webroot would throw a weird Java error. This ended up being a bug in Tomcat, and was patched by the Railo team in version 4.3.1.003. If you run an older Railo version you may hit this issue and need to update all of your cfLocation calls.
Oracle Thin Client:
Our database guy reported that he set up the Oracle Thin Client because the OCI client is not natively supported in Railo. I found this, which might be relevant, but I don't have the expertise to say for sure.
Documentation:
ACF Livedocs are sometimes aggravating in that they don't touch on the more important intricacies of how some tags are implemented, but Railo's documentation is the definition of minimalist. I think it's fair to say that Railo has no docs specifying each tag and function, leaving you to rely on Adobe's, which causes a serious problem when you need to know how the two implementations differ.
In the end it seems like, as predicted by previous answers, the UI tags were the bulk of our issues. Based on previous comments I was hoping for better implementations of them that may just require a tweak here and there, but (at least for our needs) the Railo versions seem borderline non-functional and it looks like we would need to replace them completely. For us, this may not be realistic, though we are still tossing the idea around.
To be fair, here are some of the good points from our research and testing:
Performance:
Although compatibility problems have prevented me from doing much performance testing, initial spot checks show approximately a 50% decrease in execution time for most pages.
Debugging:
The debugging options in Railo are quite amazing. There are far more options for formatting, including specifying different formats for different developers (by IP address). One incredible feature is the inclusion of a comma-delimited list of the query fields that were actually used in the page: this could let you develop against a "select *" query and simply copy and paste the field list into the query at the end of development, which would save a lot of time with views as large as the ones we're using.
Cost:
This is one of the larger reasons we decided to look into alternatives. Switching just a few Enterprise licensed ACF servers over to Railo would save $20k+ over upgrading to the newest version of ACF. Further, with the performance increases you could see an even greater savings in hardware requirements. A side effect of this point is that one can keep far more up to date without the constant cost/benefit analysis of licensing costs holding up upgrades.
Support:
Without a support contract, Adobe doesn't seem to respond to user concerns. I have a production-impacting bug, reported back in ACF 9, which still hasn't been fixed. The Railo community, by contrast, is one of the most helpful and responsive I've ever seen, and its developers have even responded directly to concerns and bug reports I've raised.
Longevity:
This is a highly opinionated point, of course, but while Adobe seems to be relegating ACF to the shadows more and more with each new version, Railo appears to be dedicated to growing the community. Combined with its open source nature I think this makes it a safer bet for future support in the long term, even if that support is just us taking development into our own hands when needed.
For a number of reasons, including divergent CFML compatibility, we did not even get to the testing stage with Blue Dragon.
How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use something like ADO.NET. This is more of an async/services infrastructure. So if your data was exposed via services, then of course you could connect that way. There are some other light-weight methods you could use for local storage as well, using things like the Windows.Storage namespace (which is similar to Isolated Storage in .NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web. Why don't you configure your WCF service to return data as JSON, and use the DataContractJsonSerializer to deserialize the results?
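For what it's worth, a minimal sketch of the approach Morten describes might look like this - HttpClient pulls the JSON and DataContractJsonSerializer maps it onto a data contract. The endpoint URL and the Customer contract here are invented for illustration; substitute your own service and types:

    using System;
    using System.Net.Http;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Threading.Tasks;

    [DataContract]
    public class Customer
    {
        [DataMember(Name = "id")]
        public int Id { get; set; }

        [DataMember(Name = "name")]
        public string Name { get; set; }
    }

    public static class CustomerServiceClient
    {
        // Hypothetical endpoint - point this at your own WCF service method.
        private const string Url = "https://example.com/Service.svc/GetCustomer?id=42";

        public static async Task<Customer> GetCustomerAsync()
        {
            using (var client = new HttpClient())
            using (var stream = await client.GetStreamAsync(Url))
            {
                // Deserialize the JSON response into the data contract above.
                var serializer = new DataContractJsonSerializer(typeof(Customer));
                return (Customer)serializer.ReadObject(stream);
            }
        }
    }

The same pattern works in reverse for writes: serialize with WriteObject into a stream and POST the content with HttpClient.PostAsync.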
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on CodePlex is a wrapper to communicate with the classic SQLite engine... which uses APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and other REST clients - but this only addresses the online scenario. The "structured" part of Windows.Storage is a very limited model, essentially limited to name/value pairs, insufficient for all but the most basic scenarios. Yes there is local file storage, which is great of course. But forcing every app developer out there to build her own DBMS on top of local file storage will simply not cut it, especially with all of System.Data having been removed from the profile. If local file storage was sufficient for most device apps, then things like SQLCE would have no purpose today already. And SQLCE clearly has a purpose, and has played a very important role for occasionally connected device apps for a very long time.

There is also a tremendous need for synchronization with a server-side database such as SQL Azure, mostly to be able to roam data between devices. Yes there is the roaming storage model in WinRT, but it shares the same limitations of local storage mentioned above, and on top of this is very limited in capacity (currently 30KB if memory serves). It is simply insufficient for all but the simplest roaming data needs. Again, forcing every app developer to design and implement her own synchronization solution is very bad. You can do much better to enable developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for the moment that they want to force you to abstract to middleware for database connectivity - I don't agree, but I can quasi understand a rationale for that. I can even see pathways for developing like that. But no System.Data... at all? Do you even understand what you've done to us?

What System.Data can do, outside of just having providers for Sql, OleDb and other custom providers like Oracle, is provide a rich abstraction of XML datasets that allow you to very quickly build a data oriented Service Oriented Architecture.

For instance, I can easily create a web service using SOAP or WCF that returns DataSets or DataTables, and then consume those objects easily and directly. Being able to do this allows very rapid construction of n-tier architectures, even without direct data connections available.

Without System.Data, and the power of DataViews, DataTables, etc., this gets a lot harder. Sure you can custom create structs, put data in there, and serve up structs, and use Linq to do whatever sorting, filtering, etc. you want to do... but it ends up being twice the work, and makes code reuse a lot harder. And it means using our existing service oriented architecture is impossible (without a big overhaul).

The withdrawal of System.Data is as big a thing for developers to deal with as the loss of the Printer object in VB6 to VB.NET 1.0 was. What is harder to understand in this case is why it is necessary - re-enabling it in the Metro profile can't possibly be a technical difficulty of the product, can it?

It is valuable enough that I would seriously consider including Mono's System.Data classes as part of any app I create (which would obviously have to be open source).
I think that this is another of those "it depends" questions...
The first and most obvious issue is that it very much depends on the context in which the application is running whether your first premise ("obviously, the design must support the disconnected scenario") is actually true. If the app is an internal corporate app, then quite possibly not - in that case, no db simply means the app doesn't work, and that can be acceptable.

Secondly, you could look (hmm, rash... one assumes you could look, which could be a bad assumption) at database synchronisation between a local SQL database and the remote db, and so on and so forth.

Taking a step back... yes, you're absolutely right: look at it as being the same as phone or Silverlight (although I don't know if there is RIA Services support yet). But the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?
Update: Albert D. Kallal has kindly started the discussion off, and to get some more opinions I'm adding a bounty.
This is a nontrivial question about maintenance of a legacy application myself and two other developers support. We are not the original developers, and the code base is 300,000 lines of MFC and business logic tightly coupled together. We don't know every single line of code 100%.
We do know the code behind the major components, and we know that it's poorly written. Our objective is to refactor the application out of 1995 and into 2010. Between the three of us there is (in aggregate) enough experience in software architecture and database design for us to fix the components that are poorly architected in code or incorrectly modelled in the database, but we don't have a lot of experience with modern reporting systems. Thus my question (once you get to the end of it...) is about reporting systems.
For anybody who reads this entire post, I am appreciative of your time. For anybody who reads this post and replies with solutions, experience (or sympathy!), I am both appreciative and thankful.
At work I have inherited the maintenance of an Access 2003 database that contains approximately 250 reports (and thousands of supporting queries) that acts as a reporting engine for our application.
The reports all have swathes of VBA in them for particular formatting or for pulling extra information into the report. For this reason we are entirely locked into the Access platform; we can't use tools like BIDS to import the Access report objects without messing around to make the reports display the same without VBA.

So to get ourselves out of this Access solution we need to put time into going over every single report. That means we're looking to pick the best long-term solution, since we're going to have to redevelop every report regardless of the platform we choose.
Furthermore our customers have a choice of Microsoft Access or SQL Server as their database. This means that all our SQL has to be written with the lowest common denominator in mind - JET SQL. We've got some wiggle room to drop support for Microsoft Access, but we'd need to build a case for it. If the best reporting system we can identify has strong support for SQL Server but little or no support for Microsoft Access this will accelerate us dropping support for Microsoft Access as a database.
The overall implementation of the report system is quite mediocre. When we want to display reports in our application, we start a Microsoft Access process, find its window and reparent it to our application, strip off its window styles, and then use the Access.Application COM interface to invoke some VBA that creates linked tables to the database (either a Microsoft Access MDB or a SQL Server database) and then opens the report we want. Probably the only supported part of the process is the use of the public COM interfaces; the rest is an ugly hack. The other components in the application are equally underwhelming.
To "fix" our application we've got a new development plan, with development of our application split into (approximately) three parts every year.
4 months upgrading our application to support the latest government legislation in our industry
4 months delivering a new major feature
4 months "consolidation" (fixing what is broken)
We're currently at #3 now (for this year), and we really want to take advantage of the downtime to fix up the application, refactoring the major components. We have three developers, and want AppName v5.0 out at the end of 2012 (it's currently AppName v4.12). This gives us 36 months of development effort to apportion between several components (user interface, underlying database structure, reporting, etc.) over the three consolidation periods we will have before then. The sum of the components that we fix will give us v5.0.
We've scoped out what we'd like to do with most of the components except for our reporting engine, and I'm posting on SO in the hope of getting some good ideas, or at least a feel for the work that's required.
I have two ideas for improving our reporting system. Both of them involve a moderate amount of work, and there is one consideration that neither solution addresses completely: in addition to the reports that we develop, our customers also have the opportunity to request bespoke development of reports. These are customer-specific: we take their Access database, augment it with their report, and give it back to the customer. There are hundreds of unique reports out there - unusable if we turned the old system off. (And we have to turn the old system off eventually; we don't know how much longer we're going to be able to mess around with the Microsoft Access window to make it look like an embedded report. We already have two distinct code paths for Access 2003 and 2007. What if we can't hack up a code path for Access 2010 and all our customers have to use Access 2007?)
For both ideas, the intention is to stop supporting our current reporting system and let it run for as long as it will without maintenance. Maybe we can hack in Access 2010 and Access 2014 support, and the customer reports that were developed keep puttering along for 5 more years. Over time, we'd migrate the most commonly used reports from the old Access database into their new format.
Idea 1: Microsoft.Reporting.WinForms.ReportViewer
The first idea is to write a wrapper around the ReportViewer control as a replacement reporting engine.
We'd need to move the project to C++/CLI (already on the cards), and instead of having to launch an entire process each time we needed to view a report we could simply instantiate this control. A bonus of this is that the RDLC files that contain the reports are much easier to version control in Subversion than the Access 2003 database we currently have (we use Visual SourceSafe because the tools to integrate SVN with Access don't work well with the size of our Access database). The visual designer for RDLC files is also nicely integrated into Visual Studio.
This is an evolutionary rather than a revolutionary change to the way we do reports: the ReportViewer control takes an RDLC file containing the report layout, and our application takes care of querying the data. Because our database might be SQL Server or Microsoft Access, we still have to write simple JET SQL. We're gaining better reporting (drill-down looks nice), stronger authoring tools and easier version control, but is this worth the effort?
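To make that concrete, here's roughly what I imagine the wrapper boiling down to (sketched in C# for brevity, though the same calls work from C++/CLI; the query, report path and dataset name are invented):

    using System.Data;
    using System.Data.OleDb;
    using Microsoft.Reporting.WinForms;

    public static class ReportHost
    {
        // Build a ReportViewer in local processing mode: the RDLC supplies the
        // layout, and the application supplies the data.
        public static ReportViewer CreateViewer(string connectionString)
        {
            var data = new DataTable();
            using (var connection = new OleDbConnection(connectionString))
            using (var adapter = new OleDbDataAdapter(
                "SELECT InvoiceId, CustomerName, Total FROM Invoices", connection))
            {
                // Plain JET-compatible SQL, so Access and SQL Server both work.
                adapter.Fill(data);
            }

            var viewer = new ReportViewer { ProcessingMode = ProcessingMode.Local };
            viewer.LocalReport.ReportPath = @"Reports\Invoice.rdlc"; // versioned in Subversion
            viewer.LocalReport.DataSources.Add(
                new ReportDataSource("InvoiceDataSet", data)); // must match the RDLC dataset name
            viewer.RefreshReport();
            return viewer;
        }
    }

(Idea #2 below would reuse the same control with ProcessingMode.Remote pointed at a Reporting Services server, so the hosting code wouldn't change much between the two.)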
Idea 2: SQL Server Reporting Services and SharePoint 2010 with Access Services
The second idea is to kill Access as a database platform and migrate all our customers to SQL Server (we have hosted instances of our application for those customers who don't have the skill set to set up their own SQL Server instances). Once they're migrated we would use SQL Server Reporting Services as the reporting engine, with the ReportViewer control in server rendering mode.
In addition to SQL Server Reporting Services, I am curious as to whether SharePoint 2010 with Access Services could be used to rapidly migrate existing Access reports into a more manageable format. We'd take the Access report that the customer uses, convert it to an Access Web Report, then make it available for them on a SharePoint site. This would only be for our hosted customers, but if we find a way to quickly massage the VBA out of customer reports we could churn through the several hundred custom reports our customers have.
I'm also interested in the ability to use an Access Web Navigation Form to act as a portal to all our reports. We'd host a web browser control inside our application which would give customers access to their own reports and to our standard suite.
We'd get all the benefits of Idea #1 plus the ability to write in full Transact SQL, a reports portal, and (hopefully) a reasonable upgrade path for customer's proprietary reports.
So, my question is: am I going about this the right way? Are these viable solutions for modern reporting systems, or laughable? We have a strong preference for using the ReportViewer control either in client rendering mode where our application processes the data, or in server rendering mode in conjunction with SQL Server - but are there reporting systems like Crystal Reports which offer better reporting and better migration paths for our legacy Access reports?
If you had up to 36 months of developer time, how would you do this?
Well, OK, since no one else is jumping in, I'll give this a go.
Quite interesting that you're talking about a report writer that's 15+ years old. Back then the Access report writer was beyond state of the art; it was a country mile ahead of everything else in the industry. Even today a lot of competing report writers don't have the concept of sub-reports, which allows modeling of relational data without having to resort to code or even SQL. Throw in programmable VBA on top, and the result is something that's very unique and powerful.

For Access 2007, the report writer received some more nice upgrades in terms of layout controls, but that's going to be of little help here.

And for 2010 we can now display reports in a sub-form control. This feature was added to facilitate use of the new Access navigation control. Access 2010 has a new web browser control (works in forms or reports), and there's also a new navigation control. Your post hints that the new navigation control and the web control are somehow related to each other, but they are completely different features.

Both the new web browser control and the navigation control can be used in web applications or in 100% client-only applications. The navigation control is nice since you can build it by dragging and dropping reports onto it to build up a list of reports to choose from (it is slick and easy and nice). And with this navigation control, we can actually build some nice drill-down type interfaces for reports.

As you noted, for Access 2010 we now have web publishing of Access reports, and this feature is based on SQL Server Reporting Services (they are RDL reports). However, there are two important issues here: no VBA is allowed inside of the web reports, and there is no automatic conversion utility built into Access that will convert existing reports into web-based reports. To build a report that's going to be published to the web, you have to specifically choose to create a web report. So this answers and clears up one question of yours - will this help you convert existing reports to SQL Server? - and the answer is no. Access will not help you convert existing reports to web-based RDL reports (as noted, Access uses RDL and SQL Reporting for those web reports; those reports also render in the Access client without conversion).
Access has a great path for web based reports via SharePoint and also Access Web is coming to Office 365. However, keep in mind this ability is not going to help much with the existing reports that you have.
In fact, one of the things I would be looking at if you're going to use the WinForms report viewer is where that existing VBA report code will be moved to. You've not really mentioned this issue. As noted, one really interesting and great feature of those reports is that embedded VBA code. Often that VBA will have been used because SQL and something like RDL will NOT work, since neither of those languages (SQL and RDL) is procedural code.

I can't stress how important this concept is. This pretty much means that any report writer replacement means that code will now have to live OUTSIDE of the reports and be moved into your application. So, keep this issue in mind: when you issue new reports, you will also be issuing new procedural code that is NOT contained in those reports. This code will have to become part of your application (so, to issue new reports, you will thus also be issuing a new version of your software).

You are not likely to find much that allows procedural code to be embedded inside the report like you can with Access. So, that report code and logic will now have to be built and maintained within your main application and outside of the reports.

At the end of the day, I should point out the old adage: if it ain't broke, then don't change it. Access has been around for a very long time, and we've seen significant investments from the folks in Redmond in this product during the last few years, so it shows no signs of dying anytime soon.

So, one possible suggestion is to keep the status quo, and continue going the way it works now. You stated that you have to continue supporting JET anyway, so you're not getting away from having to use a major part of Access: you'd just be dumping the report side while still having to use the JET data engine.

However, assuming the decision's been made, I can't really suggest which report writer you should replace the Access one with. Obviously the next report writer should have a seamless path to the web, even if reports are for NOW going to be rendered on the desktop. It makes no sense to make a large investment today without web considerations in some fashion.

I do think SQL Server Reporting Services is a good choice due to the web ability. And, as Access developers, we also have the option to create web-based reports that render perfectly in the Access client on the desktop side (and this works when you have no server, and no conversion issues exist when publishing these reports to the web or using them locally on the client). So, even if you don't use Access, do choose something that allows reports to render both on the desktop and the web, like Access 2010 allows.

I would consider building the report system around some .NET tools. This would likely not play too well as an embedded report system inside of your existing application, but it would allow you to issue new reports without having to touch your existing code base for each new report issued. This issuing of new reports that contain procedural code needs to be resolved: right now you can issue new reports without modifying the main application because those reports can contain code inside. I would be looking to use something that lets new reports be built and issued without you having to ship a new edition of your main software. You might not embed the code in the reports anymore, but you need to place it somewhere, and hopefully outside of your main application.
Wow, this is a great question, and Albert has given you a terrific answer.

Unfortunately, I do not believe there are any magic bullets to solve your problem. I have used Microsoft Access since its first version, and always felt its strongest feature was as a report generator, particularly when used with SQL Server. As you undoubtedly know, one can often have issues with corrupted Access databases in a multi-user environment, and SQL Server addresses that issue very nicely.
To my way of thinking the biggest problem with Access is that Microsoft brought out managed code (.Net) ten years ago now but Access is still a native application. In an ideal world Microsoft would rewrite Access in C# using all the latest features such as improved support for multiple processors etc. Unfortunately I do not expect this to happen any time soon.
Visual Basic for Applications (VBA) was definitely far ahead of the state of the art when it was introduced, but today I believe most would agree that coding in VB.NET with Visual Studio is much more productive than continuing to develop in VBA.

Since the selection of a new report generator is something you will need to live with for several years, perhaps it would be helpful to consider what an "ideal" report generator for the next ten years should look like?
Personally I would want:
1) All the great graphics and ease of skinning and branding that Silverlight provides.
2) Great multiprocessor support (you must have noticed how the UI thread in Access often appears "unresponsive" when running long queries or reports).
3) Support for lots of devices such as cellphones, iPads etc. While today the desktop and web dominate, these are becoming increasingly important (unless for some particular reason they are not important to your customers going forward).
4) Support for modern programming practices such as test driven development, dependency injection etc.
Please do let us know what you decide upon.
This is a long shot, but is there any possibility of using Access to generate a saved PDF and displaying that in a PDF-viewing control that is part of your app, rather than external? Or exporting to XML or something (I haven't a clue what XML export options are available for reports in recent versions of Access, if any)?
The point is that you'd not have to rewrite the Access reporting logic, but you'd have eliminated the fake embedding and replaced it with something that's really embedded in your application.
What you'd be giving up is perhaps the options that the Access UI gives the user, but I'm not sure how useful those are (I'd tend not to want those options available!).
Also, you'd be persisting the reports to disk, but I'm not sure this is much of any kind of significant issue, either, but it would entirely depend on the context (I'm assuming you have no 1000-page reports with heavy graphics, etc.).
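If it helps, the PDF export step might look something like this via COM automation (a sketch only: the database, report and output names are invented, and it assumes Access 2007 or later, where PDF export is available):

    using Access = Microsoft.Office.Interop.Access;

    public static class AccessPdfExporter
    {
        // Drive a hidden Access instance to save a report as a PDF, which the
        // host app can then show in its own embedded PDF viewer.
        public static void ExportReportToPdf(
            string databasePath, string reportName, string outputPath)
        {
            var app = new Access.Application { Visible = false };
            try
            {
                app.OpenCurrentDatabase(databasePath, false);
                app.DoCmd.OutputTo(
                    Access.AcOutputObjectType.acOutputReport,
                    reportName,
                    "PDF Format (*.pdf)", // the value of the acFormatPDF constant
                    outputPath);
                app.CloseCurrentDatabase();
            }
            finally
            {
                app.Quit(Access.AcQuitOption.acQuitSaveNone);
            }
        }
    }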
You could take a look at ActiveReports by Data Dynamics. We use it within our apps for paperwork-type reports (e.g., invoices) and it's extremely flexible, far more so than what you can achieve with the MS reporting tools. For reports that are genuine reports rather than paperwork we use Reporting Services. It's been a while since I had to port an Access report to ActiveReports, but there is little or nothing you could do in Access that you can't do in ActiveReports. I'm also fairly certain that it has a decent tool for importing Access reports. There's a fully functional evaluation version available for download which, unless they've changed things, just prints a watermark in the report footer rather than expiring after a fixed evaluation period. Well worth a look, I'd say - here's a link to their site.
I won't get into any specifics since I'm not a Microsoft developer, but I can answer on how to integrate a legacy product into the current or new product. As for the 36-month question, see the end of this answer.
Usage Requirements - how do you intend to use the legacy code in the context of your new code?
Identify Use Cases - drill down into usage and create a use case for each transaction between the new and old code.
Identify I/O - drill down into each use case and identify I/O requirements
Write Tests - for each I/O pair, write tests to determine the best way to handle that I/O pair.
Reuse - reuse your tests to create a wrapper/API for the legacy code (see the sketch after this list).
Future - as you replace legacy code with new code, let it match your wrapper/API so you can keep refactoring to a minimum.
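To illustrate steps 4 through 6, here is a skeletal example (C# with NUnit; every name is invented): the wrapper pins the legacy engine behind an interface, and one characterization test per I/O pair locks in today's behavior so each replacement sprint has a safety net.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using NUnit.Framework;

    // The wrapper/API: one method per use case, with the I/O pairs as its contract.
    public interface IReportEngine
    {
        byte[] RenderReport(string reportName, IDictionary<string, string> parameters);
    }

    // Today this delegates to the legacy engine; later, each report moves to new
    // code behind the same interface, so callers never change.
    public class LegacyReportEngine : IReportEngine
    {
        public byte[] RenderReport(string reportName, IDictionary<string, string> parameters)
        {
            throw new NotImplementedException("calls into the legacy automation here");
        }
    }

    [TestFixture]
    public class InvoiceReportTests
    {
        // One test per I/O pair: a known input and a baseline output captured
        // from the legacy system before any replacement work starts.
        [Test]
        public void InvoiceReport_ForKnownCustomer_MatchesBaseline()
        {
            IReportEngine engine = new LegacyReportEngine();

            byte[] output = engine.RenderReport(
                "Invoice",
                new Dictionary<string, string> { { "CustomerId", "42" } });

            CollectionAssert.AreEqual(
                File.ReadAllBytes(@"Baselines\Invoice_42.pdf"), output);
        }
    }

When a sprint swaps LegacyReportEngine for a new implementation, the same tests run unchanged against it.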
If I had 36 months of development time to spend, I would spend 3-6 months writing a wrapper/API and then replace each unit tested I/O pair with new code every 7-10 days utilizing sprints (scrum/agile).
For the data store, I would absolutely move from Access to some SQL server product and prioritize that requirement for the new code.
I've used Crystal, Access (2.0 through 2007), SQL Reporting and now DevExpress, and am very happy with DevExpress's reporting engine. It is specific to .NET, but can be used from Windows Forms, ASP.NET web pages, WPF and Silverlight. If you are willing to use some .NET controls, I highly recommend it. It can use just about anything as a data source and is very flexible. My current projects aren't as complex as some things I have done in the past, but I would venture to say that I would rather do complex reports using the DX engine than any other I have used.

They have an end-user designer that includes scripting capabilities, and DX is actively adding functionality.
I would recommend taking a look at: http://devexpress.com/Products/Index/Reporting.xml
What is the normal way to send crash reports, product registrations, etc? In other words, how do you guarantee your C++ Windows apps can 'call home'?
I'm not a novice by any means but I'm completely lost in this area. I've never done it before so would appreciate any advice.
For crash reports I would strongly recommend taking advantage of Microsoft's WinQual service rather than attempting to create your own. It's free and seamlessly integrated with Windows, at least since XP. It also requires no code or client-side changes at all at its most basic level. To take advantage of more advanced features you can use the Windows Error Reporting APIs.
Code I've written simply creates an email with the required information, using the user's default email application, with the information in plain text. I always get the permission of the user to send it, explaining clearly why I think the information is necessary. Nothing is sent without their express permission.

I also prefer plain text (not always possible with memory dumps and such) so they can check what's being sent and confirm it contains no personal or identifying information.

I'm very careful with that stuff, since there are possible legal implications in doing it, at least in the jurisdiction where I operate. In any case, it should always be done with the user's permission as a matter of courtesy.
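A bare-bones sketch of that consent-first flow might look like the following (shown in C# purely for illustration, since the question is about C++ apps; the address and wording are invented):

    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    public static class CrashReporter
    {
        public static void OfferToEmailReport(Exception ex)
        {
            // Plain text only, so the user can read exactly what would be sent.
            string details = "App version: 1.2.3\r\nError: " + ex.Message +
                             "\r\nStack trace:\r\n" + ex.StackTrace;

            // Nothing is sent without the user's express permission.
            DialogResult answer = MessageBox.Show(
                "The application hit an unexpected error. May we open an email " +
                "containing the technical details below, so you can review and " +
                "send it to us?\r\n\r\n" + details,
                "Send crash report?",
                MessageBoxButtons.YesNo);

            if (answer == DialogResult.Yes)
            {
                // Opens the user's default mail client with the report pre-filled;
                // the user still presses Send themselves.
                Process.Start("mailto:support@example.com"
                    + "?subject=" + Uri.EscapeDataString("Crash report")
                    + "&body=" + Uri.EscapeDataString(details));
            }
        }
    }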
As far as crash reporting is concerned, there's WER for starters. It has its drawbacks (the biggest being that you have to sign up for it at Microsoft and all reports are sent to a central Microsoft server), and it is best suited to driver software.

If you need anything else (add your own wishes here), you can roll your own solution or adapt an existing one (a codeproject.com search provides a few alternatives - just search for "crash report").

Regarding product registration: there must be third-party solutions available as well. I have not heard of anything "built-in" for that, but it is a vast topic - you'll have to be more specific about the features you're after.