How should a Windows 8 Metro Application connect to a central database? - web-services

How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?

It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use
something like ADO.NET. This is more of an async/services
infrastructure. So if your data was exposed via services, then of
course you could connect that way. There are some other light-weight
methods you could use for local storage as well using things like the
Windows.Storage namespace (which is similar to Isolated Storage in
.NET).
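For what it's worth, the name/value style of storage he mentions looks roughly like this in C# (a minimal sketch; the "lastSync" key and what is stored under it are purely illustrative):

    using System;
    using Windows.Storage;

    public static class LocalCache
    {
        // ApplicationData.Current.LocalSettings is the WinRT name/value store,
        // roughly analogous to Isolated Storage settings in .NET.
        public static void SaveLastSync(DateTimeOffset when)
        {
            ApplicationData.Current.LocalSettings.Values["lastSync"] = when.ToString("o");
        }

        public static DateTimeOffset? LoadLastSync()
        {
            object raw;
            if (ApplicationData.Current.LocalSettings.Values.TryGetValue("lastSync", out raw))
                return DateTimeOffset.Parse((string)raw);
            return null;
        }
    }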
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web.
Why don't you configure your WCF service to return data as JSON, and
use the DataContractJsonSerializer to deserialize the results?
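A minimal sketch of Morten's suggestion in C#; the endpoint URL and the Customer contract below are placeholders for whatever your own WCF service actually exposes:

    using System.IO;
    using System.Net.Http;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Text;
    using System.Threading.Tasks;

    [DataContract]
    public class Customer
    {
        [DataMember(Name = "id")]   public int Id { get; set; }
        [DataMember(Name = "name")] public string Name { get; set; }
    }

    public static class CustomerService
    {
        // Fetch JSON over HTTP and deserialize it with DataContractJsonSerializer.
        public static async Task<Customer[]> GetCustomersAsync()
        {
            using (var client = new HttpClient())
            {
                var json = await client.GetStringAsync("https://example.com/service/customers");
                var serializer = new DataContractJsonSerializer(typeof(Customer[]));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (Customer[])serializer.ReadObject(stream);
                }
            }
        }
    }

The same pattern works whether the JSON comes from a WCF service, an OData endpoint, or anything else that speaks HTTP.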
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a
wrapper to communicate with the classic SQLite engine...which uses
APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and
other REST clients - but this only addresses the online scenario. The
"structured" part of Windows.Storage is a very limited model,
essentially limited to name/value pairs, insufficient for all but the
most basic scenarios. Yes there is local file storage, which is great
of course. But forcing every app developer out there to build her own
DBMS on top of local file storage will simply not cut it, especially
with all of System.Data having been removed from the profile. If local
file storage was sufficient for most device apps, then things like
SQLCE would have no purpose today already. And SQLCE clearly has a
purpose, and has played a very important role for occasionally
connected device apps for a very long time. There is also a tremendous
need for synchronization with a server-side database such as SQL
Azure, mostly to be able to roam data between devices. Yes there is
the roaming storage model in WinRT, but it shares the same limitations
of local storage mentioned above, and on top of this is very limited
in capacity (currently 30KB if memory serves). It is simply
insufficient for all but the simplest roaming data needs. Again,
forcing every app developer to design and implement her own
synchronization solution is very bad. You can do much better to enable
developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for
the moment they want to force you to abstract to middleware for
database connectivity - I don't agree, but I can quasi understand a
rationale for that. I can even see pathways for developing like that.
But no System.Data.... at all? Do you even understand what you've done
to us?
What System.Data can do, outside of just having providers for Sql,
OleDb and other custom providers like Oracle, is provide a rich
abstraction of XML datasets that allow you to very quickly build a
data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that
returns DataSets or DataTables, and then consume those objects easily
and directly. Being able to do this allows very rapid construction of
n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this
gets a lot harder. Sure you can custom create structs, put data in
there, and serve up structs, and use Linq to do whatever sorting,
filtering, etc. you want to do.... but it ends up being twice the
work, and makes code reuse a lot harder. And it means using our
existing service oriented architecture is impossible (without a big
overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal
with as the loss of the Printer object in VB6 to vb.net 1.0 was. What
is harder to understand in this case is why it is necessary -
re-enabling it in the Metro profile can't possibly be a technical
difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's
System.Data classes as part of any app I create (which would obviously
have to be open source.)

I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether your premise - to take the first case, "Obviously...support...disconnected" - is actually true depends very much on the context in which the application is running. If the app is an internal corporate app then quite possibly not; in that case no db == app doesn't work.
Secondly, you could look (hmm, rash... one assumes you could look, which could be a bad assumption) at database synchronisation between a local SQL database and the remote db, and so on.
Taking a step back... yes - you're absolutely right, look at it as being the same as phone or Silverlight development (although I don't know if there is RIA support yet) - but the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?

Related

SQLite, iCloud, and maybe Core Data—which to use for storing files and sharing them with all of the user's devices?

I've been tasked with porting personal face recognition software to iOS and Mac OS X as well as helping keep the basic SDK and much of the software as cross-platform as possible. One of the things one of my associates and I want to do is store data on the user's face in an SQL database (probably SQLite). We would also like to allow users to put their data on iCloud so they don't have to train each of their devices separately to recognize them. What's bugging me is how to do both these tasks, and I'm confronted with enough choices to feel overwhelmed. (I am still new to some of the technologies involved.)
For implementing SQL, I could embed SQLite directly in my program and write code for it, or I could use Core Data and have it talk to SQLite for me. (The database is not meant to be shared, so this is OK. And SQL is not fun.) However Core Data is anything but portable (not to mention not intended for a model encoded as C++ objects), while writing directly for SQL would mean we could reuse more code on other platforms.
Things get messier when factoring in iCloud, which has something like five or six possible ways of integrating it with a program. The only method I have definitively ruled out so far is iCloud key-value storage. (At the very least, there's a good chance a user would get into trouble with the 1 MB limit, and it is clearly not intended for anything as complex as I'm dealing with.) Core Data can integrate with iCloud through UIManagedDocument or NSPersistentStore, but, again, that means less in the way of reusable code. I can use SQLite together with UIDocument or NSDocument, but what I am trying to do seems to be not quite what these objects were intended for. The files I am dealing with are essentially large preference files, not meant for end-users to interact with directly; UIDocument and NSDocument seem to be meant for user-viewable and -editable files. And then there are iCloud Drive and CloudKit, which are still in beta. (On the other hand, these two are due to be released fairly soon. Considering that iOS users tend to upgrade to the latest version of the system software quickly, arguments about using either of these based on how many devices they will be able to run on should quickly become weak and obsolete.)
Can anyone recommend which way is best suited for my purposes? Thanks in advance.
Aaron Solomon Adelman
First off, you don't want to try and share the SQLite file directly. That's extremely likely to corrupt the file, because SQLite wasn't built with that kind of use in mind.
However:
You could use SQLite for local on-device storage only and use a separate API to send data back and forth. Apple's CloudKit would probably be a good choice if you can require iOS 8+. Numerous third party solutions exist (for example, Parse). You'll have to write your own code to translate between SQLite and the network API. Your SQLite schema and your data files would be portable to other platforms, and maybe some of the code if you use SQLite's direct API instead of an Objective-C wrapper (and I highly recommend using either FMDB or PLDatabase if you use SQLite).
Core Data does have built-in support for iCloud, which probably makes it a viable option. Your comment that "...I could use Core Data and have it talk to SQLite for me." suggests you might have somewhat misunderstood Core Data. Core Data is not a SQLite wrapper; it presents a completely different API, and uses its own schema. You can't really take a Core Data persistent store file and use it on other platforms unless you want to spend some time reverse-engineering the schema. Also, using Core Data with iCloud does not require the use of UIManagedDocument, though it does still require a lot of other Core Data-specific classes.
If you want to be able to sync data across multiple devices which are not all Apple devices then you need a third party API. None of Apple's cloud APIs will be useful here. There are many providers that can help out with this. For local data storage, either SQLite or Core Data would work, but you should look at the third party services and see what storage option(s) they support, then try to work with them.
The best approach depends on your needs. If you expect to copy data files from an iOS app to other platforms, SQLite is good. You'll still have a lot of platform-specific code, so the savings there are much less than you might hope. If you don't plan to move data files around like that, Core Data is probably easier to deal with.

Creating a browser front-end for TigerLogic D3 DB application

I have an interesting conundrum. I have been challenged with identifying the most suitable process in which to create a "browser front-end" to an existing multi-user application built within the TigerLogic/Pick D3 environment. My research indicates there are many ways to do this, but I am struggling to decide which method is best or where to start. I have "played" with a few technologies but a commitment to one is needed to get started.
These methods include:
Creation of a complex webservice using MVS Toolkit; and engineering a client from WSDL either from scratch or using maven/wsimport. Tests indicate there is a lot more to this process than originally thought for a simple WSDL.
Development of a Java-based web app that harnesses the MVSP Java API - I am not a Java developer so this means learning a new language. Development would most likely take place in Eclipse.
Using TigerLogic's FlashCONNECT - resulting in additional expenditure for clients so not preferable - and more or less ruled out.
There is also the .NET option - but I have ruled this out on the basis of needing portability.
My question is, has anyone else out there done anything like this and could you share your experience? My first task is to build a web-app that will reliably give me the D3 TCL prompt in a browser that I can customise.
I am not sure there is a definitive answer here but would like people's thoughts, and will label the most useful as the answer.
What path you choose depends in some part on your existing skill-set and whether that fits in with your portability needs. It is very difficult to give you a concrete answer to your question because of not knowing in which part of the chain you need the portability.
It is however possible to develop a web-browser front-end using .NET which will run on Linux or Windows, so I don't see an issue with portability here. Your web server will have to be Windows-based, but it shouldn't matter whether D3 is running on Linux or Windows at the server end, or whether the client desktops are running Linux or Windows.
You could try TigerLogic's MVSP .NET API but I do not know if it has the power to deliver based on your needs. I believe you may find mv.NET from Bluefinity could fulfill your needs. This is in my opinion the leading product on the MultiValue market for achieving the goals you have in mind. This will mean spending money of course. For this you will get a very powerful set of tools. Also, the cost of investing in a good tool could end up smaller than the cost in terms of time, effort and potential complications of trying to do this piecemeal without spending any extra money. I am sure FlashCONNECT would also do the job. You would have to weigh up the cost of the different options to find out which one is right for you both technically and financially.
Not knowing whether or not you have .NET in your skill-set, I don't know whether the .NET option would be easier for you. It is however technically possible.
I would suggest using Rocket's D3 (formerly TigerLogic D3) .NET APIs and creating a Web API RESTful service that you can consume with JavaScript or any other web technology; if you need to call it from a D3 subroutine (in case you would) then use the MVS Toolkit.
Requirements though are D3 9.0 or later.
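To make that concrete, here is a rough sketch of such a Web API wrapper; the ID3Repository interface is a stand-in for whatever data-access layer you build over Rocket's D3 .NET APIs (their exact calls are not shown here), and it assumes a dependency-injection container supplies the repository:

    using System.Collections.Generic;
    using System.Web.Http;

    // Placeholder abstraction over the D3 .NET API; a real implementation
    // would open a D3 session and read items from a file.
    public interface ID3Repository
    {
        IEnumerable<IDictionary<string, string>> ReadItems(string file, string selection);
    }

    public class D3Controller : ApiController
    {
        private readonly ID3Repository _repo;

        public D3Controller(ID3Repository repo) { _repo = repo; }

        // GET api/d3?file=CUSTOMERS&selection=...
        [HttpGet]
        public IHttpActionResult Get(string file, string selection = "")
        {
            return Ok(_repo.ReadItems(file, selection));
        }
    }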
I've used all of the technologies described, and many more, to interface with D3. I agree with @Glenn and will add... I understand you're edging away from .NET. That's fine, you don't need it. But consider that most LAMP implementations separate the DBMS servers from the web servers. That topology introduces short delays between the tiers but decouples them in case you want to use multiple web servers or multiple databases - a common topology even with D3 / MV.
I have a client where we have a Java/Grails front-end over Linux, with all data queries filtered through a single, elegant data provider class that's abstracted from application logic. That uses a web service call which I wrote in Java, calling to a .NET web service. The service is easily generated/modified, as is the client from the WSDL. From there IIS carries inbound queries to D3 via mv.NET, and at this point it doesn't matter if the D3 DBMS is in Linux or Windows. My web service could have as easily been in Linux with Java but it would then lack a pooling mechanism - see below.
If you want all Linux then you can go with the MVSP Java library. TigerLogic (now acquired by Rocket Software) committed to a PHP binding for MVSP some months ago. Rather than wait, one of my clients created a PHP wrapper around mv.NET, though MVSP is as easy. So the resulting application is essentially LAMP, but with the M = Multivalue. I have written code like this too - we can write a wrapper in any language which exposes a useful API and abstracts both connectivity method and OS dependencies. In other words it doesn't matter what languages we want to use or what OS's are involved. That part is rather trivial and subject to change later. It's better to focus on the application than the communications.
You can also go off the menu, so to speak, and create your own Java/PHP wrapper around the OS-level d3tcl command, which is a script/wrapper around the d3 executable. This allows you to open a connection yourself and pass in commands.
Whatever option you select, you need to consider that opening and closing a DBMS connection is a slow process. You do not want to script a login around every data request. You do want to open a connection and keep that open persistently, while your client code accesses and releases that persistent connection as required. This is why we like mv.NET and FlashCONNECT. With MVSP and other mechanisms you need to create your own persistence model. You'll also need to manage a pool of connection resources - what happens when you get 10 simultaneous queries, or just 1 short one after one long one? You don't want queries to back up, you don't want to reject or timeout connections, and you don't want to fire up a connection for every client. You do want the proper number of DBMS sessions waiting for inbound connections. mv.NET and FlashCONNECT do this for you, the others do not.
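The shape of a roll-your-own pooling model (if you go the MVSP route) is roughly this; IDbSession is a placeholder for whatever session object the chosen API actually gives you:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public interface IDbSession { /* an already-open DBMS connection */ }

    public class SessionPool
    {
        private readonly BlockingCollection<IDbSession> _idle =
            new BlockingCollection<IDbSession>(new ConcurrentQueue<IDbSession>());

        // Sessions are opened once, up front, and reused for the life of the process.
        public SessionPool(IEnumerable<IDbSession> preOpenedSessions)
        {
            foreach (var s in preOpenedSessions) _idle.Add(s);
        }

        // Blocks until a session is free, so simultaneous requests queue up
        // instead of each firing up a new (slow) DBMS login.
        public IDbSession Acquire(int timeoutMs = 30000)
        {
            IDbSession session;
            if (!_idle.TryTake(out session, timeoutMs))
                throw new TimeoutException("No free DBMS session.");
            return session;
        }

        public void Release(IDbSession session)
        {
            _idle.Add(session);
        }
    }

mv.NET and FlashCONNECT give you this (and the harder parts, like recycling dead sessions) out of the box; with MVSP you write it yourself.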
Personally I'd shy away from FlashCONNECT. I was there for its initial development and testing and for years of end-user implementations. It's not as widely used as the other options and is more a tool for those who aren't familiar with other options. If you're talking about Java then you're probably not inclined to use FlashCONNECT. That said, if you have developers who are not familiar with anything outside D3 then FlashCONNECT is a decent server-side tool for them while someone else is focusing on the client-side with other technologies. Everyone should use their best skillset.
Finally, (already?) if someone is not familiar with external technologies, and more intimate with D3, then other options exist like DesignBAIS and Viságe, mostly removing the burden of communications and allowing developers to work on the client-side features and back-end rules in BASIC.
I discuss all of these topics plus mobile and telephony on my blog.
HTH

Upgrade Path For Legacy Reporting ("Embedded" Access 2003)?

Update: Albert D. Kallal has kindly started the discussion off, and to get some more opinions I'm adding a bounty.
This is a nontrivial question about maintenance of a legacy application myself and two other developers support. We are not the original developers, and the code base is 300,000 lines of MFC and business logic tightly coupled together. We don't know every single line of code 100%.
We do know the code behind the major components, and we know that it's poorly written. Our objective is to refactor the application out of 1995 and into 2010. Between the three of us there is (in aggregate) enough experience in software architecture and database design for us to fix the components that are poorly architected in code or incorrectly modelled in the database, but we don't have a lot of experience with modern reporting systems. Thus my question (once you get to the end of it...) is about reporting systems.
For anybody who reads this entire post, I am appreciative of your time. For anybody who reads this post and replies with solutions, experience (or sympathy!), I am both appreciative and thankful.
At work I have inherited the maintenance of an Access 2003 database that contains approximately 250 reports (and thousands of supporting queries) that acts as a reporting engine for our application.
The reports all have swathes of VBA in them for particular formatting or pulling extra information into the report. For this reason we are entirely locked into the Access platform, we can't use tools like BIDS to import the Access report objects without messing around to make the report display the same without VBA.
So to get ourselves out of this Access solution we need to put some time in going over every single report. Which means we're looking to pick the best longterm solution, since we're going to have to redevelop every report regardless of the platform we choose.
Furthermore our customers have a choice of Microsoft Access or SQL Server as their database. This means that all our SQL has to be written with the lowest common denominator in mind - JET SQL. We've got some wiggle room to drop support for Microsoft Access, but we'd need to build a case for it. If the best reporting system we can identify has strong support for SQL Server but little or no support for Microsoft Access this will accelerate us dropping support for Microsoft Access as a database.
The overall implementation of the report system is quite mediocre, when we want to display reports in our application we start a Microsoft Access process, find its window and reparent it to our application, strip off its window styles and then use the Access.Application COM interface to invoke some VBA that creates linked tables to the database (either a Microsoft Access MDB or a SQL Server database) and then opens up the report we want. Probably the only supported part of the process is using the public COM interfaces, the rest is an ugly hack. The other components in the application are equally underwhelming.
To "fix" our application we've got a new development plan, with development of our application split into (approximately) three parts every year.
4 months upgrading our application to support the latest government legislation in our industry
4 months delivering a new major feature
4 months "consolidation" (fixing what is broken)
We're currently at #3 now (for this year), and we really want to take advantage of the downtime to fix up the application, refactoring the major components. We have three developers, and want AppName v5.0 out at the end of 2012 (it's currently AppName v4.12). This gives us 36 months of development effort to apportion between several components (user interface, underlying database structure, reporting, etc) over the three consolidation periods we will have before then. The sum of the components that we fix will give us v5.0.
We've scoped out what we'd like to do with most of the components except for our reporting engine, and I'm posting on SO in the hope of getting some good ideas, or at least a feel for the work that's required.
I have two ideas for improving our reporting system. Both of them involve a moderate amount of work, and there is one consideration that neither solution addresses completely: in addition to the reports that we develop, our customers also have the opportunity to request bespoke development of reports. They're customer-specific: we take their Access database, augment it with their report and give it back to the customer. There are hundreds of unique reports out there - unusable if we turned the old system off. (And we have to turn the old system off eventually - we don't know how much longer we're going to be able to mess around with the Microsoft Access window to make it look like an embedded report. We already have two distinct code paths for Access 2003 and 2007. What if we can't hack up a code path for Access 2010 and all our customers have to use Access 2007?)
For both ideas, the intention is to stop supporting our current reporting system and let it run for as long as it will without maintenance. Maybe we can hack in Access 2010 and Access 2014 support, and the customer reports that were developed keep putting along for 5 more years. Over time, we'd migrate the most commonly used reports from the old Access database into their new format.
Idea 1: Microsoft.Reporting.WinForms.ReportViewer
The first idea is to write a wrapper around the ReportViewer control as a replacement reporting engine.
We'd need to move the project to C++/CLI (already on the cards), and instead of having to launch an entire process each time we needed to view a report we could simply instantiate this control. A bonus of this is that the RDLC files that contain the reports are much easier to version control in Subversion than the Access 2003 database we currently have (we use Visual SourceSafe because the tools to integrate SVN with Access don't work well with the size of our Access database). The visual designer for RDLC files is also nicely integrated into Visual Studio.
This is more of an evolutionary rather than revolutionary change to the way we do reports: the ReportViewer control will take an RDLC file that has the report layout, and our application will take care of querying the data. Because our database might be SQL Server or Microsoft Access, we still have to write simple JET SQL. We're gaining better reporting (drill down looks nice), stronger authoring tools and easier version control, but is this worth the effort?
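For reference, a minimal sketch of that client (local) rendering mode in C#; the report path, dataset name and query are placeholders, and the dataset name has to match the one defined inside the RDLC:

    using System.Data;
    using System.Data.OleDb;               // JET provider for the Access case
    using Microsoft.Reporting.WinForms;

    public static class ReportHost
    {
        public static void ShowInvoiceReport(ReportViewer viewer, string connectionString)
        {
            // The application queries the data itself, so the SQL stays
            // lowest-common-denominator JET SQL.
            var table = new DataTable();
            using (var conn = new OleDbConnection(connectionString))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM Invoices", conn))
            {
                adapter.Fill(table);
            }

            viewer.ProcessingMode = ProcessingMode.Local;
            viewer.LocalReport.ReportPath = "Reports\\Invoice.rdlc";
            viewer.LocalReport.DataSources.Clear();
            viewer.LocalReport.DataSources.Add(new ReportDataSource("InvoiceDataSet", table));
            viewer.RefreshReport();
        }
    }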
Idea 2: SQL Server Reporting Services and SharePoint 2010 with Access Services
The second idea is to kill Access as a database platform and migrate all our customers to SQL Server (we have hosted instances of our application for those customers who don't have the skill set to set up their own SQL Server instances). Once they're migrated we would use SQL Server Reporting Services as the reporting engine, with the ReportViewer control in server rendering mode.
In addition to SQL Server Reporting Services, I am curious as to whether SharePoint 2010 with Access Services could be used to rapidly migrate existing Access reports into a more manageable format. We'd take the Access report that the customer uses, convert it to an Access Web Report, then make it available for them on a SharePoint site. This would only be for our hosted customers, but if we find a way to quickly massage the VBA out of customer reports we could churn through the several hundred custom reports our customers have.
I'm also interested in the ability to use an Access Web Navigation Form to act as a portal to all our reports. We'd host a web browser control inside our application which would give customers access to their own reports and to our standard suite.
We'd get all the benefits of Idea #1 plus the ability to write in full Transact-SQL, a reports portal, and (hopefully) a reasonable upgrade path for customers' proprietary reports.
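The client side of Idea #2 is nearly the same control, just flipped into server rendering mode (the server URL and report path below are placeholders):

    using System;
    using Microsoft.Reporting.WinForms;

    public static class ServerReportHost
    {
        public static void ShowServerReport(ReportViewer viewer)
        {
            // SSRS executes the query and renders the report; the client only displays it.
            viewer.ProcessingMode = ProcessingMode.Remote;
            viewer.ServerReport.ReportServerUrl = new Uri("http://reports.example.com/ReportServer");
            viewer.ServerReport.ReportPath = "/AppName/Invoice";
            viewer.RefreshReport();
        }
    }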
So, my question is: am I going about this the right way? Are these viable solutions for modern reporting systems, or laughable? We have a strong preference for using the ReportViewer control either in client rendering mode where our application processes the data, or in server rendering mode in conjunction with SQL Server - but are there reporting systems like Crystal Reports which offer better reporting and better migration paths for our legacy Access reports?
If you had up to 36 months of developer time, how would you do this?
Well, OK, with no one else jumping in, I'll give this a go.
Quite interesting how you're talking about a report writer that's 15+ years old. Back then the Access report writer was beyond state of the art. It was a country mile ahead of everything else in the industry. Even today a lot of competing report writers don't have the concept of sub-reports, which allows modeling of relational data without having to resort to code or even SQL. Then throw in programmable VBA, and the result is something that's very unique and powerful.
For Access 2007, the report writer received some more nice upgrades in terms of layout controls, but that's going to be of little help here.
And for 2010 we can now display reports in a sub-form control. This feature was added to facilitate use of the new Access navigation control. Access 2010 has a new web browser control (works in forms or reports), and there is also a new navigation control. Your post hints that the new navigation control and the web control are somehow related to each other, but they are completely different features.
Both the new web browser control and the navigation control can be used in web applications or 100% client-only applications. The navigation control is nice since you can build it by dragging and dropping reports onto it to build up a list of reports to choose from (it is slick and easy and nice). And with this navigation control, we can actually build some nice drill-down type interfaces for reports.
As you noted, for Access 2010 we now have web publishing of Access reports, and this feature is based on SQL Server Reporting Services (they are RDL reports). However, there are two important issues here: no VBA is allowed inside of the web reports, and there is no automatic conversion utility built into Access that will convert existing reports into web-based reports. So to build a report that's going to be published to the web, you have to specifically choose to create a web report. This clears up one of your questions - will this help you convert existing reports to SQL Server? - and the answer is no. Access will not help you convert existing reports to web-based RDL reports (as noted, Access uses RDL and SQL Reporting for those web reports - those reports also render in the Access client without conversion).
Access has a great path for web based reports via SharePoint and also Access Web is coming to Office 365. However, keep in mind this ability is not going to help much with the existing reports that you have.
In fact, one of the things I would be looking at, if you're going to use the WinForms report viewer, is where that existing VBA report code will be moved to. You haven't really mentioned this issue. As noted, one really interesting and great feature of those reports is that embedded VBA code. Often that VBA will have been used because SQL and something like RDL will NOT work, since neither of those languages (SQL and RDL) is procedural code.
I can't stress how important this concept is. This pretty much means that any report writer replacement means that code will now have to live OUTSIDE of the reports and be moved into your application. So keep this issue in mind: when you issue new reports, you will also be issuing new procedural code that is NOT contained in those reports. This code will have to become part of your application (so, to issue new reports, you will thus also be issuing a new version of your software).
You are not likely to find much that allows procedural code to be embedded inside the report like you can with Access. So that report code and logic will now have to be built and maintained within your main application and outside of the reports.
At the end of the day, I should point out the old adage: if it ain't broke, don't change it. Access has been around for a very long time, but we've seen significant investments from the folks in Redmond in this product during the last few years, so it shows no signs of dying anytime soon.
So, one possible suggestion is to keep the status quo and continue going the way it works now. I mean, you stated that you have to continue supporting JET anyway, so you're not getting away from a major part of Access: you'd just be dumping the report side while still having to use the JET data engine.
However, assuming this decision's been made, I can't really suggest which report writer you should replace the Access one with. Obviously a consideration for the next report writer should be a seamless path to the web, even if for now the reports are going to be rendered on the desktop. It makes no sense to make a large investment today without web considerations in some fashion.
I do think SQL Server Reporting Services is a good choice due to the web ability. And, as Access developers, we also have the option to create web-based reports that render perfectly in the Access client on the desktop (this works when you have no server, and no conversion issues exist when publishing these reports to the web or using them locally on the client). So even if you don't use Access, do choose something that allows reports to render both on the desktop and the web, like Access 2010 allows.
I would consider building the report system around some .NET tools. This would likely not play too well as an embedded report system inside your existing application, but it would allow you to issue new reports without having to touch your existing code base for each new report issued. This issue of new reports that carry procedural code needs to be resolved: right now you can issue new reports without having to modify the main application, because those reports can contain code inside them. I would be looking to use something that allows new reports to be built and issued without you having to ship a new edition of your main software. You might not embed the code in the reports anymore, but you need to place it somewhere, and hopefully outside of your main application.
Wow, this is a great question, and Albert has given you a terrific answer.
Unfortunately I do not believe there are any magic bullets to solve your problem. I have used Microsoft Access since its first version, and always felt its strongest feature was as a report generator, particularly when used with SQL Server. As you undoubtedly know, one can often have issues with corrupted Access databases in a multi-user environment, and SQL Server addresses that issue very nicely.
To my way of thinking the biggest problem with Access is that Microsoft brought out managed code (.Net) ten years ago now but Access is still a native application. In an ideal world Microsoft would rewrite Access in C# using all the latest features such as improved support for multiple processors etc. Unfortunately I do not expect this to happen any time soon.
Visual Basic for Applications (VBA) was definitely far ahead of "state of the art" when it was introduced, but today I believe most would agree that coding in VB.Net with Visual Studio is much more productive than continuing to develop in VBA.
Since the selection of a new report generator is something you will need to live with for several years, perhaps it would be helpful to consider what an "ideal" report generator for the next ten years should look like?
Personally I would want:
1) All the great graphics and ease of skinning and branding that Silverlight provides.
2) Great multiprocessor support (you must have noticed how the UI thread in Access often appears "unresponsive" when running long queries or reports).
3) Support for lots of devices such as cellphones, iPads etc. While today the desktop and web dominate, these are becoming increasingly important (unless for some particular reason they are not important to your customers going forward).
4) Support for modern programming practices such as test driven development, dependency injection etc.
Please do let us know what you decide upon.
This is a long shot, but is there a possibility of using Access to generate a saved PDF and displaying that in your app in a PDF-viewing control that is part of your app, rather than external? Or export to XML or something (I haven't a clue what XML export options are available for reports in recent versions of Access, if any)?
The point is that you'd not have to rewrite the Access reporting logic, but you'd have eliminated the fake embedding and replaced it with something that's really embedded in your application.
What you'd be giving up is perhaps the options that the Access UI gives the user, but I'm not sure how useful that is (I'd tend to not want those options available!).
Also, you'd be persisting the reports to disk, but I'm not sure this is much of a significant issue either; it would entirely depend on the context (I'm assuming you have no 1000-page reports with heavy graphics, etc.).
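A rough sketch of that idea via COM automation from C#; the database, report and output names are placeholders, and saving to PDF this way requires Access 2007 SP2 or later (or the earlier PDF add-in):

    using Access = Microsoft.Office.Interop.Access;

    public static class AccessPdfExporter
    {
        // Runs the Access report out of process and writes it to a PDF that the
        // host application can display in its own embedded viewer.
        public static string ExportReport(string databasePath, string reportName, string outputPdf)
        {
            var app = new Access.Application();
            try
            {
                app.OpenCurrentDatabase(databasePath);
                app.DoCmd.OutputTo(Access.AcOutputObjectType.acOutputReport,
                                   reportName,
                                   "PDF Format (*.pdf)",
                                   outputPdf);
                return outputPdf;
            }
            finally
            {
                app.CloseCurrentDatabase();
                app.Quit();
            }
        }
    }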
You could take a look at ActiveReports by Data Dynamics. We use it within our apps for paperwork-type reports (e.g., invoices) and it's extremely flexible, far more so than what you can achieve with the MS reporting tools. For reports that are genuine reports rather than paperwork we use Reporting Services. It's been a while since I had to port an Access report to ActiveReports, but there is little or nothing you could do in Access that you can't do in ActiveReports. I'm also fairly certain that it has a decent tool for importing Access reports. There's a fully functional evaluation version available for download, which, unless they've changed things, just prints a watermark in the report footer rather than expiring after a fixed evaluation period. Well worth a look, I'd say - Here's a link to their site
I won't get into any specifics since I'm not a Microsoft developer, but I can answer on how to integrate a legacy product into the current or new product. As for the 36-month question, see the end of this answer.
Usage Requirements - how do you intend to use the legacy code in the context of your new code?
Identify Use Cases - drill down into usage and create a use case for each transaction between the new and old code.
Identify I/O - drill down into each use case and identify I/O requirements
Write Tests - for each I/O pair, write tests to determine the best way to handle that I/O pair.
Reuse - reuse your tests to create a wrapper/API for the legacy code.
Future - as you replace legacy code with new code, let it match your wrapper/API so you can keep refactoring to a minimum.
If I had 36 months of development time to spend, I would spend 3-6 months writing a wrapper/API and then replace each unit tested I/O pair with new code every 7-10 days utilizing sprints (scrum/agile).
For the data store, I would absolutely move from Access to some SQL server product and prioritize that requirement for the new code.
I've used Crystal, Access (2 - 2007), SQL Reporting and now DevExpress and am very happy with DevExpress's reporting engine. It is specific to .NET, but can be utilized by Windows Forms, ASP.NET Web pages, WPF and Silverlight. If you are willing to utilize some .NET controls, I highly recommend it. It can use just about anything as a data source and is very flexible. My current projects aren't as complex as some things I have done in the past, but I would venture to say that I would rather do complex reports using the DX engine over any other I have used.
They have an End User designer that includes scripting capabilities, and DX is actively adding functionality.
I would recommend taking a look at: http://devexpress.com/Products/Index/Reporting.xml

Are there cross-platform tools to write XSS attacks directly to the database?

I've recently found this blog entry on a tool that writes XSS attacks directly to the database. It looks like a terribly good way to scan for weaknesses in my applications.
I've tried to run it on Mono, since my development platform is Linux. Unfortunately it crashes with a System.ArgumentNullException deep inside Microsoft.Practices.EnterpriseLibrary and I seem to be unable to find sufficient information about the software (it seems to be a single-shot project, with no homepage and no further development).
Is anyone aware of a similar tool? Preferably it should be:
cross-platform (Java, Python, .NET/Mono, even cross-platform C is ok)
open source (I really like being able to audit my security tools)
able to talk to a wide range of DB products (the big ones are most important: MySQL, Oracle, SQL Server, ...)
Edit: I'd like to clarify my goal: I'd like a tool that directly writes the result of a successful XSS/SQL injection attack into the database. The idea is that I want to check that every place in my app does correct output encoding. Detecting and avoiding the data getting there in the first place is an entirely different thing (and might not be possible when I display data that's written to the DB by a third-party application).
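To make the goal concrete, what I'm after amounts to something like this sketch (the table and column names are placeholders, and a real tool would carry a far larger payload list and discover the schema itself):

    using System.Data.SqlClient;

    public static class XssSeeder
    {
        // Seed string columns with XSS payloads; any page that later renders
        // this data without proper output encoding will execute the payload.
        private static readonly string[] Payloads =
        {
            "<script>alert('xss')</script>",
            "\"><img src=x onerror=alert(1)>",
            "javascript:alert(document.cookie)"
        };

        public static void Seed(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                foreach (var payload in Payloads)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Comments (Body) VALUES (@body)", conn))
                    {
                        cmd.Parameters.AddWithValue("@body", payload);
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }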
Edit 2: Corneliu Tusnea, the author of the tool I linked to above, has since released the tool as free software on codeplex: http://xssattack.codeplex.com/
I think metasploit has most of the attributes you are looking for. It may even be the only one that has all of what you specify, since all the others I can think of are closed source. There are a few existing modules that deal with XSS and one in particular that you should take a peek at: HTTP Microsoft SQL Injection Table XSS Infection. From the sounds of that module it is capable of doing exactly what you are wanting to do.
The framework is written in Ruby I believe, and is supposed to be easy to extend with your own modules which you may need/want to do.
I hope that helps.
http://www.metasploit.com/
Not sure if this is what you're after; it's a parameter fuzzer for HTTP/HTTPS.
I haven't used it in a while, but IIRC it acts as a proxy between you and the web application in question - and will insert XSS/SQL injection attack strings into any input fields before deeming whether the response was "interesting" or not, and thus whether the application is vulnerable or not.
http://www.owasp.org/index.php/Category:OWASP_WebScarab_Project
From your question I'm guessing it is a type of fuzzer you're looking for, and one specifically for XSS and web applications; if I'm right - then that might help you!
It's part of the Open Web Application Security Project (OWASP) that "jah" has linked you to above.
There are some Firefox plugins to do some XSS testing here:
http://labs.securitycompass.com/index.php/exploit-me/
A friend of mine keeps saying that php-ids is pretty good. I haven't tried it myself, but it sounds as if it could approximately match your description:
Open Source (LGPL),
Cross Platform - PHP is not in your list, but maybe it's ok?
Detects "all sorts of XSS, SQL Injection, header injection, directory traversal, RFE/LFI, DoS and LDAP attacks" (this is from the FAQ)
Logs to databases.
I don't think there is such a tool, other than the one you pointed us to. I think there's a good reason for that: It's probably not the best way to test that each and every output is properly encoded for the applicable context.
From reading about that tool it seems the premise is to insert random xss vectors into the database and then you browse your application to see if any of those vectors succeed. This is rather a hit and miss methodology, to say the least.
A much better idea, I think, would be to perform code reviews.
You may find it helpful to have a look at some of the resources available at http://owasp.org - namely the Application Security Verification Standard (ASVS), the Testing Guide and the Code Review Guide.

Atom Publishing Protocol in real life

I know that some big players have embraced it and are actually exposing some of their services in APP compliant way, already. However, I haven't found many other (smaller) players in this field. Do you know any web application/service that uses APP as its public API protocol? What is your own take on AtomPub? Do you have any practical experiences using it? What are its limitations and drawbacks? Do you prefer AtomPub as your REST style or do you have some other favourite one? And why?
I know, these are many questions, not just one. The thing I'm interested in here is simple, though - how did the APP standard hit the market, and in particular how is its adoption among web developers going?
The company that I work for is developing a lot of RESTful services.
However, none of them expose public APIs (in the sense that all services are consumed internally by our own clients). The reason why we went for the REST architectural style was that we wanted our services to be easily consumable and, more importantly, to scale well.
From my own practical experience I have come to the conclusion that HTTP + the ATOM syndication format is a good idea, provided you want to keep things flexible (in terms of different content models, attaching and extending metadata associated with payloads, uniform parsing, etc.). ATOM ensures that everybody interprets the payload in a uniform manner without any scope for ambiguity.
However, if one does not have any such complex requirements or does not foresee such requirements, then the ATOM format could be a bit of an overhead. (For instance, elements like Author and Title make more sense in the blogging/RSS world and may not make sense in your particular problem domain.)
Also, if the goal is just to serialize data structures at one end and reconstruct them at the other end, then most web frameworks (like WCF) have custom formats which are more appealing.
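For the WCF case specifically, producing ATOM is not much code either way; a minimal sketch with placeholder titles and URLs:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Syndication;
    using System.Xml;

    public static class OrdersFeed
    {
        // Write the same data as an Atom 1.0 feed that any Atom-aware client can parse.
        public static void WriteAtom(XmlWriter writer)
        {
            var items = new List<SyndicationItem>
            {
                new SyndicationItem("Order 1001", "Widget x 3",
                                    new Uri("http://example.com/orders/1001"))
            };
            var feed = new SyndicationFeed("Orders", "Recent orders",
                                           new Uri("http://example.com/orders"), items);
            feed.GetAtom10Formatter().WriteTo(writer);
        }
    }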
So in my opinion AtomPub is good if you need flexibility in terms of data representation and if the playing field is huge, with different kinds of clients.
However if you have a good knowledge of potential clients and server/client usage patterns then custom formats might be a good idea.
If the client is browser based then formats like JSON are very appealing.
Hope this answers your question.
My own research so far:
Wordpress supports AtomPub as its API protocol since version 2.3
GData is probably the biggest shot in the AtomPub field so far
Habari - new promising blogging system promotes APP as one of its main features
BlogSvc.net - an AtomPub server, blog engine for the .NET platform, written in C#
Jangle - an open source project designed to facilitate API access to Library Systems
There's also mod_atom - an Apache module that stores entries in the filesystem.
Last time I checked (2007 or so) Atompub was fairly complex to implement. While you can whip together something that emits valid Atom feeds during the lunch break, implementing AtomPub was a fairly big undertaking.
That might have changed due to better libraries and tools, but it might still be too complex to be implemented by smaller sites just because it's cool.
And the lack of killer AtomPub client applications puts little or no pressure on server operators to offer an AtomPub interface.