When I set out to develop a piece of software, which should come first: the UI design or the database design? - mfc

I tried to design the UI with some UI mockup software, but I found it hard to settle all the details of the design, since the database hadn't been designed yet.
But if I design the rest of the software first, the same problem occurs: I don't have the UI yet, so how can I end up with a good one?

UI first.
Mock up an elegant and easy-to-use user interface (and workflow) thinking from the point of view of the user, and only then think about the underlying database / data structures you'll need to bring that UI to life.
If you can't design your UI because you haven't yet designed your database, you're doing it wrong IMHO. How many annoying pieces of software have you used that suffered from letting the database design drive the UI design?
Edit: As others have pointed out, you need to start with use cases / user stories. The UI design and database design, whichever order you do them, should only happen after you know what your software is trying to do, and for whom.
Edit by Bryan Oakley: [cartoon image, source: gapingvoid.com]

Put the user in the place they deserve: design the UI first.
The database is only a consequence of user needs.

Use cases first; neither the UI nor the database.

If you're trying to solve a problem in an object-oriented language, it's recommended that you start thinking about the objects involved. Don't worry about the database or the UI until you've got a solid domain model nailed down that addresses all the use cases.
You don't worry about the database or the UI at first. You can serialize objects to the file system if you need persistence and don't have a database. Being able to drive your app with a command line UI is a good exercise for guaranteeing that you have a good MVC separation.
Start with the objects.
UPDATE:
The one advantage that this approach has is that it doesn't prejudice the UI with a particular database design or vice versa. The objects are agnostic about the other two layers. You aren't required to have a UI or a relational database at all. You're just getting the objects right. Once you have that, you can create any UI or persistence scheme you like, confident that the domain model can handle the problem you've been asked to solve.
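As a rough illustration of that approach, here is a minimal Python sketch (the invoicing domain and all names are made up for the example): the domain object knows nothing about a UI or a database, persistence is plain serialization, and a throwaway command line is enough to drive it.

import json
from dataclasses import dataclass, asdict

# Domain object: it models the problem, with no UI and no database in sight.
@dataclass
class Invoice:
    number: str
    customer: str
    amount: float

    def apply_discount(self, percent: float) -> None:
        # The business rule lives on the object, not in a screen or a table.
        self.amount -= self.amount * percent / 100

# Persistence is a detail: serialize to the file system for now and swap in a
# real database later without touching the domain model.
def save(invoice: Invoice, path: str) -> None:
    with open(path, "w") as f:
        json.dump(asdict(invoice), f)

def load(path: str) -> Invoice:
    with open(path) as f:
        return Invoice(**json.load(f))

if __name__ == "__main__":
    # A throwaway command-line "UI" is enough to drive the model.
    inv = Invoice(number="2024-001", customer="ACME", amount=100.0)
    inv.apply_discount(10)
    save(inv, "invoice.json")
    print(load("invoice.json"))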

All the above answers point you in the right direction. That said, I would follow the SDLC thoroughly. It helps you understand the need for a solution to the problem at hand. Then comes requirements gathering, followed by the design of either the UI or the underlying structure that supports it. It's a process, but you will benefit from it in the end.

Your question is very subjective.
My opinion (and it is just that) is that the database and underlying structure should come first. It often helps to put down the keyboard and mouse and compile some notes on paper.
Lay out goals such as what you want your application to do, list the features you require, and then start thinking about how you'll build it.
This method works for me in application design.

Usually you need to manipulate some data in the solutions you develop, so you should consider how that data is organised first; stabilizing this layer early on is fundamental. I agree with duffymo's comment about designing the business objects first if you are in an OO world. Mapping these objects to the DB will be part of your work. Then you add business functionality and work on the presentation layer. Of course you will have to do some refactoring from time to time, but the refactoring usually impacts the business and presentation layers more than the database.
Read up on DDD (Domain-Driven Design); it is a good technique.

Would you build a house without a foundation? Database design isn't the fun part, but it is the foundation of most business apps, and if you get it wrong it becomes the most costly thing to fix and the most costly to maintain.
That said, I note that there is no reason you can't work on both together as they intertwine. But before you can do either, you need to understand the requirements and the business you are writing the app for.

Related

Should I use an internal API in a django project to communicate between apps?

I'm building and managing a Django project with multiple apps inside it. One stores survey data, and another stores classifiers that are used to add features to the survey data. For example: is this survey answer sad? 0/1. The resulting feature gets stored along with the survey data.
We're trying to decide how and where in the app to actually perform this featurization, and I'm being recommended a number of approaches that don't make ANY sense to me. But I'm also not very familiar with Django, or with more-than-hobby-scale web development, so I wanted to get another opinion.
The data app obviously needs access to the classifiers app, to be able to run the classifiers on the data, and then reinsert the featurized data, but how to get access to the classifiers has become contentious. The obvious approach, to me, is to just import them directly, a la
# from inside the Survey App
from ClassifierModels import Classifier
cls = Classifier.objects.get(name='Sad')  # or whatever, I'm used to Flask
data = Survey.objects.filter(question='How do you feel?').first()
labels = cls(data.responses)
# etc.
However, one of my engineers is saying that this is bad practice, because apps should not import one another's models. And that instead, these two should only communicate via internal APIs, i.e. posting all the data to
http://our_website.com/classifiers/sad
and getting it back that way.
So, what feels to me like the most pressing question: why in god's name would anybody do it this way? It seems to me like strictly more code (building and handling requests) and strictly less intuitive code; it's more to build, harder to work with, and bafflingly indirect, like mailing a letter to your own house rather than talking to the person who lives there with you.
But perhaps in easier to answer chunks,
1) Is there REALLY anything the matter with the first, direct, import-other-apps-models approach? (The only answers I've found say 'No!,' but again, this is being pushed by my dev, who does have more industrial experience, so I want to be certain.)
2) What is the actual benefit of doing it via internal APIs? (I've asked, of course, but I only get what feel like theoretical answers that don't address the concrete concern of more, and more complicated, code for no obvious benefit.)
3) How much do the size of our app, and team, factor into which decision is best? We have about 1.75 developers, and only, even if we're VERY ambitious, FOUR users. (This app is being used internally, to support a consulting business.) So to me, any questions of Best Practices etc. have to factor in that we have tiny teams on both sides, and need something stable, functional, and lean, not something that handles big loads, or is externally secure, or fast, or easily worked on by big teams, etc.
4) What IS the best approach, if NEITHER of these is right?
It's simply not true that apps should not import other apps' models. For a trivial refutation, think about the apps in django.contrib which contain models such as User and ContentType, which are meant to be imported and used by other apps.
That's not to say there aren't good use cases for an internal API. I'm in the planning process of building one myself. But they're really only appropriate if you intend to split the apps up some day into separate services. An internal API on its own doesn't make much sense if you're not in a service-based architecture.
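For example, referencing the auth app's user model from another app is entirely ordinary Django; a minimal sketch (the Note model here is hypothetical):

# someapp/models.py - a hypothetical model that depends on another app's model
from django.conf import settings
from django.db import models

class Note(models.Model):
    # Cross-app relation to the auth app's user model: standard practice.
    author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    text = models.TextField()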
I can't see any reason why you should not import an app's models from another app. Django itself uses several applications and their models internally (like auth and admin). Reading the applications section of the documentation, we can see that the framework has all the tools to manage multiple applications and their models inside a project.
However, it seems quite obvious to me that sending requests to your own application's API would make your code really messy and slow.
Without context it's hard to understand why your engineer considers this bad practice. He may have been referring to database isolation (if so, see "Multiple databases" in the documentation) or to proper code isolation for testing.
It is right to think about decoupling your apps, but I do not think an internal REST API is a good way to do it.
Nor is directly importing models and running queries and updates against another app a good approach. Every time you use a model from another app, you should be careful. I suggest you try to confine communication between apps to a simple service layer. Then your Survey app does not have to know the model structure of the Classifier app:
# from inside the Survey App
from ClassifierModels.services import get_classifier_cls
cls = get_classifier_cls('Sad')
data = Survey.objects.filter(question='How do you feel?').first()
labels = cls(data.responses)
# etc.
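A minimal sketch of what that service module might contain (get_classifier_cls comes from the snippet above; everything else, including the assumption that a Classifier instance is callable, is illustrative):

# ClassifierModels/services.py - thin service layer over this app's own models
from .models import Classifier

def get_classifier_cls(name):
    # Other apps call this function instead of touching Classifier directly,
    # so the model structure stays private to this app.
    return Classifier.objects.get(name=name)  # assumed to be callable on a list of responses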
For more information, read this thread: Separation of business logic and data access in django
More generally, you should create smaller, testable components. Lately I have been interested in the "functional core, imperative shell" paradigm. Try Gary Bernhardt's lectures: https://gist.github.com/kbilsted/abdc017858cad68c3e7926b03646554e

How should a Windows 8 Metro Application connect to a central database?

How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use
something like ADO.NET. This is more of an async/services
infrastructure. So if your data was exposed via services, then of
course you could connect that way. There are some other light-weight
methods you could use for local storage as well using things like the
Windows.Storage namespace (which is similar to Isolated Storage in
.NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web.
Why don't you configure your WCF service to return data as JSON, and
use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a
wrapper to communicate with the classic SQLite engine...which uses
APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and
other REST clients - but this only addresses the online scenario. The
"structured" part of Windows.Storage is a very limited model,
essentially limited to name/value pairs, insufficient for all but the
most basic scenarios. Yes there is local file storage, which is great
of course. But forcing every app developer out there to build her own
DBMS on top of local file storage will simply not cut it, especially
with all of System.Data having been removed from the profile. If local
file storage was sufficient for most device apps, then things like
SQLCE would have no purpose today already. And SQLCE clearly has a
purpose, and has played a very important role for occasionally
connected device apps for a very long time. There is also a tremendous
need for synchronization with a server-side database such as SQL
Azure, mostly to be able to roam data between devices. Yes there is
the roaming storage model in WinRT, but it shares the same limitations
of local storage mentioned above, and on top of this is very limited
in capacity (currently 30KB if memory serves). It is simply
insufficient for all but the simplest roaming data needs. Again,
forcing every app developer to design and implement her own
synchronization solution is very bad. You can do much better to enable
developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for
the moment they want to force you to abstract to middleware for
database connectivity - I don't agree, but I can quasi understand a
rationale for that. I can even see pathways for developing like that.
But no System.Data.... at all? Do you even understand what you've done
to us?
What System.Data can do, outside of just having providers for Sql,
OleDb and other custom providers like Oracle, is provide a rich
abstraction of XML datasets that allow you to very quickly build a
data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that
returns DataSets or DataTables, and then consume those objects easily
and directly. Being able to do this allows very rapid construction of
n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this
gets a lot harder. Sure you can custom create structs, put data in
there, and serve up structs, and use Linq to do whatever sorting,
filtering, etc. you want to do.... but it ends up being twice the
work, and makes code reuse a lot harder. And it means using our
existing service oriented architecture is impossible (without a big
overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal
with as the loss of the Printer object in VB6 to vb.net 1.0 was. What
is harder to understand in this case is why it is necessary -
re-enabling it in the Metro profile can't possibly be a technical
difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's
System.Data classes as part of any app I create (which would obviously
have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether the premise ("Obviously... support... the disconnected scenario") is actually true depends very much on the context in which the application runs. If the app is an internal corporate app, then quite possibly not, and in that case no database means the app doesn't work.
Secondly, you could look (one assumes you could, which may be a rash assumption) at database synchronisation between a local SQL database and the remote one, and so on and so forth.
Taking a step back: yes, you're absolutely right to look at it as being much the same as phone or Silverlight development (although I don't know whether there is RIA support yet). But at this point it's very hard to be prescriptive, because given a general-purpose platform, applications can be written to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read Jim G's answer, it seems that I should probably withdraw mine?

What steps would you take to refactor a ball-of-mud CF app into something modern and maintainable?

I am going to pick up a task that no one at my workplace has ever attempted. It is a CF app first written in CF 2.0 (yes, 2.0!) ten years ago, with more than 10 cfscheduler tasks. We explored the idea of rewriting the app, but ten years of work simply can't be rewritten in 2-3 months.
What steps should one take to modernize the app into a maintainable, extensible state? The advice I keep hearing is "write tests", but how can I write tests when the app isn't even MVC?
Any advice would be appreciated, thanks!
p.s. I should thank Allaire, Macromedia and Adobe for keeping CF so freaking backward compatible all the way back to 2.0!
By the way, what's the most modern, maintainable state for a CF app without an MVC framework? Or should my end goal ultimately be refactoring it into an MVC app? I can't imagine how many links I would break if I did... it seems impossible. Thoughts?
update: found 2 related Q's...
https://softwareengineering.stackexchange.com/questions/6395/how-do-you-dive-into-large-code-bases
https://softwareengineering.stackexchange.com/questions/29788/how-do-you-dive-into-a-big-ball-of-mud
I am not sure you need to move the whole site to an MVC application. Recently I helped with a site that was not MVC but still had a library with the Models, Services and Assemblers organized in a clean manner. It worked great, and we didn't need to do anything more than what was necessary.
That being said, my first step would be to organize the spaghetti code by purpose. It may be hard to properly create the models, but at the very least you could break service-like functions out of the pages. With that done, it should already be a lot cleaner.
Then I would take the repeated code and put it into custom tags. That will make the code more reusable and easier to read.
Good Luck!
Consider whether a full-fledged framework is really necessary. In its most basic form a framework is merely highly organized code, so if well-organized procedural code works, leave it.
Keep in mind that something like FW/1 as a migration path can be better than, say, ColdBox if you don't need all the other stuff.
Lastly, consider this: I was able to migrate a CF 4.5 app almost 70% of the way to ColdBox (keeping it very simple, really more about directory and file organization than IoC, plugins, modules, etc.) using just a few extra lines per file plus onMissingMethod functions.
Good Luck.
I had to deal with a similar situation for about two years at my last job; however, it wasn't quite as old as yours. I think I was dealing with code from 4.0 onward. There's no silver bullet here, and you'll need to be careful that you don't get too caught up in refactoring the code and cost your company tons of money in the process. If the app works as it is, rewriting it would be a pretty big waste of money.
What I did was update small chunks at a time; I wouldn't even refactor whole templates at once, just small portions of one at a time. If I saw a particularly ugly loop or nested if statements, I'd clean them up as best I could. If the app can be broken down into smaller modules or areas of functionality, and you have the extra time, you can try to clean up the code a module at a time.
A good practice I heard about on the Herding Code podcast is to create a testing harness template that exercises a particular .cfm page with a known output, which you can re-run to make sure it still produces the same output once you've done some refactoring. It's not nearly as granular as a unit test, but it's something, and something is almost always better than nothing, right?
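A minimal characterization-test sketch of that idea in Python (the URL and snapshot file are placeholders; the point is simply to pin today's output before you start changing anything):

import urllib.request

PAGE_URL = "http://localhost/report.cfm?year=2011"  # a page with known output (placeholder)
SNAPSHOT = "report_snapshot.html"                    # output captured before refactoring

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def test_page_output_unchanged():
    # Re-run this after every refactoring step; a diff means behaviour changed.
    with open(SNAPSHOT, encoding="utf-8") as f:
        expected = f.read()
    assert fetch(PAGE_URL) == expected

if __name__ == "__main__":
    test_page_output_unchanged()
    print("output matches snapshot")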
I suspect that the reason this app hasn't been touched for years is that, for the most part, it works. So the old adage "if it ain't broke, don't fix it" probably applies; however, code can always be improved :)
The first thing I'd do is switch to Application.cfc and add some good error logging. That way you may find out about things that need to be fixed, and also, if you do make changes, you'll know whether they break anything else.
The next thing I'd do, before you change any code, is use Selenium to create some tests. It can be used as a Firefox plugin and will record what you do; it's really good for testing legacy apps without much work on your part.
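If you later want those recorded flows as code, they translate to something like this with Selenium's Python bindings (the URL and element names are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost/index.cfm")            # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.NAME, "login").click()        # placeholder element name
    assert "Welcome" in driver.page_source               # known text on the landing page
finally:
    driver.quit()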
Chances are you won't have much, if any, protection from SQL injection attacks, so you will want to add cfqueryparam to everything!
After that I'd be looking for duplicated code - eliminating duplicate code is going to make maintenance easier.
Good luck!
Funnily enough, I'm currently involved in converting an old CF app into an MVC3 application.
Now this isn't CF2, it was updated as recently as a year ago so all of this may not apply at all to your scenario, apologies if this is the case.
The main thing I had to do was consolidate the mixed-up cfquery tags and their calls into logical units of code whose functionality I could then start porting to either C# or JavaScript.
Thankfully this was a very simple application: the majority of the logic was called against a database using the DWR Ajax library, and what wasn't was mostly consolidated in a functions.cfm file.
Obviously a lot of that behaviour didn't need to be replicated, as packaging up the separate components of logic (such as they were) in the CF app mapped quite neatly onto the various Partial Views and Editor Templates I envisaged in the MVC application.
After that, it was simply a case of going page by page, finding out which logic was called when and what it relied upon, and then finally creating a series of UML class and sequence diagrams.
Honestly though, I think I gained the most ground when I simply hit File-New Project and started trying to replicate the behavior of the app from the top of index.cfm.
I would break the logical parts of the app into CFCs.
Pick a single view and look at the logic within it. Move that out to a CFC and invoke it.
Keep doing that and you will have something much easier to work with that can be plugged into an MVC framework later. It's almost no work to do this: just copy and paste sections of code and call them.
You could consider using an object factory to layer your application. We have a similar situation at work, and we started refactoring by introducing the Lightwire DI framework.
First we migrated all the SQL statements into gateways, then we started using services and moved a lot of code out of the templates and into the services.
The work is not finished yet, but the application is looking better already.
For large, really complex applications I'd prefer ColdBox for a refactoring project. However, I just saw a presentation at the D2W Conference on FW/1 (Framework One), a VERY simple ColdFusion MVC framework. Check out the code from the presentation here.
It's 1 (one) CFC file and a set of conventions for organizing your code. I highly recommend evaluating it for your project.

GUI Testing tools and feedbacks [closed]

I am working on the issue of testing my GUI and I am not entirely sure of the best approach here. My GUI is built using a traditional MVC framework so I am easily able to test the logic parts of the GUI without bringing up the GUI itself. However, when it comes to testing the functionality of the GUI, I am not really sure if I should worry about individually testing GUI components or if I should mainly just focus on functional testing the system. It is a pretty complex system in which testing the GUI frequently involves sending a message to the server and then observing the response on the GUI. My initial thoughts are that functional testing is the way to go here since I need a whole system running to really test the UI. Comments on this issue would be appreciated.
Other GUI-testing tools I can offer are:
Thoughtworks White,
PyWinAuto,
AutoIt,
AutoHotKey.
One thing to keep in mind when trying to automate GUIs is that the only way you can do that well is to build the GUI with automation in mind. Crush, early in the project, any notion among devs that their GUIs should not support testability, and happily expose all the hooks that can help with automation as your testing needs require them.
You have (at least) 2 issues - the complexity of the environment (the server) and the complexity of the GUI.
There are many tools for automating GUI testing. All of them are more or less fragile and require pretty much constant maintenance in the face of changing layout. There is benefit to be gained from using them, but it's a long term benefit.
The environment, on the other hand, is an area that can be tamed. If your application is architected using the Dependency Injection/Inversion technique (where you 'inject' the server component into the application), then you can use a 'mock' of the relevant server interfaces to enable you to script test cases.
Combining these two techniques will allow you to automate GUI testing.
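As a rough sketch of that injection idea in Python (all names are made up), a fake server object lets you script GUI-level test cases without the real environment:

import unittest
from unittest.mock import Mock

class OrdersPresenter:
    # GUI-facing logic with the server component injected through the constructor.
    def __init__(self, server):
        self.server = server

    def order_summary(self, order_id):
        order = self.server.fetch_order(order_id)
        return "%s: %.2f" % (order["customer"], order["total"])

class OrdersPresenterTest(unittest.TestCase):
    def test_summary_formats_server_response(self):
        fake_server = Mock()
        fake_server.fetch_order.return_value = {"customer": "ACME", "total": 12.5}
        presenter = OrdersPresenter(fake_server)
        self.assertEqual(presenter.order_summary(42), "ACME: 12.50")
        fake_server.fetch_order.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()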
Depending on where you sit in the spectrum of MVC (an overused term), testing the view could be anything from a mechanical process of ensuring that the correct model methods are called in response to the correct inputs to the view, to testing some client-side validation, to who knows what.
A lot of the patterns that have evolved out of MVC (I'm thinking passive view and supervising controller) strive to make the view require very little testing, because it's really just wiring user inputs to the presenter or model (depending on the exact variant of the pattern you're using).
"testing the GUI frequently involves sending a message to the server and then observing the response on the GUI" This statement worries me.
I'm immediately thinking that the GUI should be tested using a mock or stub of the server to test that the correct interactions are occurring and the GUI responds appropriately.
If you need automated functional tests of the server, I don't see the need to have the GUI involved in those.
Mercury QuickTest Pro, Borland SilkTest, and Ranorex Recorder are some GUI testing tools.
If your application is web-based you can write tests using tools like WatiN or Selenium.
If your application is Windows .NET based, you could try White.
My advice: forget traditional GUI testing. It's too expensive. Coding the tests takes a lot of time, and the tools aren't really stable, so you will get unreliable test results. The coupling between the code and the tests is very strong, and you'll spend a lot of time on maintenance.
The new trend is to skip the GUI tests. See the Model-View-Presenter pattern described by Fowler as a guideline.
The clearest way I can say this is:
Don't waste your time writing automated GUI tests.
Especially when you're working with an MVC app: in your case, when you send a message to the server, you can make sure the right message number comes back and be done. You can add some additional cases, or another test entirely, to make sure the GUI is converting the message IDs into the right strings, but you only need to run that test once.
We do incorporate GUI testing in our project, and it has its side effects. The developers however have one critical design principle: Keep the GUI layer as thin as possible!
That means no logic in the GUI classes; separate it into presentation models responsible for input validation and the like.
For testing on a Unix machine we use the Xvfb server as the DISPLAY when running the tests.
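A minimal Python sketch of that thin-GUI split (the names are invented): the presentation model owns the validation and is unit tested without ever creating a widget.

class SignupPresentationModel:
    # All the "thinking" the signup screen needs, with no GUI imports at all.
    def __init__(self):
        self.email = ""
        self.error = None

    def set_email(self, value):
        self.email = value.strip()
        self.error = None if "@" in self.email else "Please enter a valid email address."

    @property
    def can_submit(self):
        return self.error is None and bool(self.email)

# The GUI class only binds widgets to this object; the test never needs a display.
def test_rejects_email_without_at_sign():
    model = SignupPresentationModel()
    model.set_email("not-an-email")
    assert not model.can_submit and model.error is not None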
Try the hallway usability test. It's cheap and useful: go to the nearest hallway, grab the first person that passes, make them sit at your computer and use your software. Watch over their shoulder, you will see what they try to do, what frustrates them, and so on. Do this a few times and notice the patterns.
What you're looking for is "acceptance testing." How you do it depends on the frameworks you're using, what type of application you are creating and in what language. If you google your particular technology and the above phrase, you should find some tools you can use.
I've found WinTask to be a very good way to do GUI testing. Provided you don't constantly change the way the OS refers to each element of the UI, WinTask addresses the UI elements by name, so even if the layout changes, the UI elements can still be pressed / tweaked / selected.
Don't miss the 'U' in 'GUI'
I mean: if what you're trying to test is that everything works correctly and as planned, then you may follow Seb Rose's answer.
But please don't forget that a USER interface has to be designed with USERS in mind, and not just ANY user but the TARGET USER the application was made for. So, after you are sure everything works as it should, put every single view/screen/form through a test with a team of users representing every group that may use your application: advanced users, administrators, MS Office users, users with low computer proficiency, users with high computer proficiency... Then gather each user's critiques, combine them, retouch your GUI if necessary, and go back to GUI user testing again.
For SIMPLE web-based GUI testing, try iMacros (a simple Firefox plug-in; it has a cool feature for sending the entire test to another person).
Note that SIMPLE is deliberately in capitals...

Best practice for integrating TDD with web application development?

Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests.
The cause of this pain point generally revolves around the hassle of writing UI automation mid-development.
How do you or your organization integrate best TDD practices with web application development?
Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. All other logic and the persistence layer should be kept in separate classes, and then you can test those individually.
To test the GUI, some people like Selenium. Others complain that it is a pain to set up.
I layer out the application and at least unit test from the presenter/controller (whichever is your preference, MVC/MVP) down to the data layer. That way I have good test coverage over most of the code that is written.
I have looked at FitNesse, Watin and Selenium as options to automate the UI testing but I haven't got around to using these on any projects yet, so we stick with human testing. FitNesse was the one I was leaning toward but I couldn't introduce this as well as introducing TDD (does that make me bad? I hope not!).
This is a good question, one that I will be subscribing too :)
I am still relatively new to web dev, and I too am looking at a lot of code that is largely untested.
For me, I keep the UI as light as possible (normally only a few lines of code) and test the crap out of everything else. At least I can then have some confidence that everything that makes it to the UI is as correct as it can be.
Is it perfect? Perhaps not, but at least it is still quite highly automated, and the core code (where most of the "magic" happens) still has pretty good coverage.
I would generally avoid testing that involves relying on UI elements. I favor integration testing, which tests everything from your database layer up to the view layer (but not the actual layout).
Try to start a test suite before writing a line of actual code in a new project, since it's harder to write tests later.
Choose carefully what you test - don't mindlessly write tests for everything. Sometimes it's a boring task, so don't make it harder. If you write too many tests, you risk abandoning that task under the weight of time-consuming maintenance.
Try to bundle as much functionality as possible into a single test. That way, if something goes wrong, the errors will propagate anyway. For example, if you have a digest-generating class - test the actual output, not every single helper function.
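For instance, a test in that spirit might look like the following sketch (the digest class is hypothetical): one assertion on the real output exercises the helpers along the way.

import hashlib

class ReportDigest:
    # Builds a digest of a report; the helper below is deliberately not tested on its own.
    def __init__(self, lines):
        self.lines = lines

    def _normalized(self):
        return "\n".join(line.strip().lower() for line in self.lines)

    def hexdigest(self):
        return hashlib.sha256(self._normalized().encode("utf-8")).hexdigest()

def test_digest_output():
    digest = ReportDigest(["  Total: 10 ", "Items: 3"])
    # One assertion on the real output exercises the helper along the way.
    assert digest.hexdigest() == hashlib.sha256(b"total: 10\nitems: 3").hexdigest()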
Don't trust yourself. Assume that you will always make mistakes, and so you write tests to make your life easier, not harder.
If you are not feeling good about writing tests, you are probably doing it wrong ;)
A common practice is to move all the code you can out of the codebehind and into an object you can test in isolation. Such code will usually follow the MVP or MVC design patterns. If you search for "Rhino Igloo" you will probably find the link to its Subversion repository. That code is worth studying, as it demonstrates one of the best MVP implementations on Web Forms that I have seen.
Your codebehind will, when following this pattern, do two things:
Forward all user actions to the presenter.
Render data provided by the presenter.
Unit testing the presenter should be trivial.
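A minimal sketch of that shape in Python rather than Web Forms (all names invented): the view only forwards actions and renders, and the presenter is what the unit test exercises.

class GreetingPresenter:
    def __init__(self, view):
        self.view = view

    def on_name_submitted(self, name):
        self.view.render("Hello, %s!" % name.strip().title())

class GreetingView:
    # The "codebehind": it only forwards user actions and renders what it is told to.
    def __init__(self):
        self.presenter = GreetingPresenter(self)
        self.rendered = None

    def name_submitted(self, name):
        self.presenter.on_name_submitted(name)   # 1. forward the user action to the presenter

    def render(self, text):
        self.rendered = text                      # 2. render data provided by the presenter

def test_presenter_formats_greeting():
    class FakeView:
        def render(self, text):
            self.text = text
    view = FakeView()
    GreetingPresenter(view).on_name_submitted("  ada lovelace ")
    assert view.text == "Hello, Ada Lovelace!"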
Update: Rhino Igloo can be found here: https://svn.sourceforge.net/svnroot/rhino-tools/trunk/rhino-igloo/
There have been attempts at getting Microsoft's free UI Automation (included in .NET Framework 3.0) to work with web applications (ASP.NET). A German company called Artiso has written a blog entry that explains how to achieve that (link).
However, their blog post also links to an MSDN webcast that explains the UI Automation framework with WinForms, and after I had a look at it, I noticed that you need the AutomationId to get a reference to the respective controls. In web applications, however, the controls do not have an AutomationId.
I asked Thomas Schissler (Artiso) about this, and he explained that this was a major drawback of Internet Explorer. He referenced an older Microsoft technology (MSAA) and was himself hoping that IE8 would do better.
However, I also gave WatiN a try, and it seems to work pretty well. I even liked Wax, which lets you implement simple test cases via Microsoft Excel worksheets.
Ivonna can unit test your views. I'd still recommend moving most of the code to other parts. However, some code just belongs there, like references to controls or control event handlers.