Does anyone perform a "build" operation for a website project?

I have read about the difference between website projects and web application projects. I have never created a web application, and I only recently managed to create a basic website project.
I would like to ask: does it make sense for me to run a "build" on my website project, or is that pointless? If it makes sense, what does the build actually do to my website project?
Thanks

Is this just a static website (no database, server-side code, APIs, file concatenation, preprocessors, data transactions, etc.)? If so, or if none of those words sound familiar, then you probably don't need a build process.
It's web applications with user interactions, data manipulation, and several different user states that need some sort of testing that mocks those user actions and validates the code across different scenarios. This is especially the case the more complex your application gets and/or the more developers get involved.
You can check out a pretty in-depth explanation from Pluralsight, or a nice read from David Tucker that involves a static-site use case.
The build process basically runs through all the tasks that you have defined for it. These tasks can be practically anything but often include testing and performance tasks such as file compression. A good way to think about smaller (static) sites is that you could do any of these steps manually if need be; the build just scripts them.
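For a small static site, such a "build" might be nothing more than a short script. Here is a minimal sketch in Python (directory names and the choice of tasks are assumptions; real projects usually use a dedicated task runner):

    # build.py -- a minimal "build" for a static site (directory names and the
    # choice of tasks are assumptions; real projects usually use a task runner).
    import gzip
    import pathlib
    import shutil

    SRC = pathlib.Path("src")
    OUT = pathlib.Path("dist")

    def build():
        if OUT.exists():
            shutil.rmtree(OUT)            # start from a clean output directory
        shutil.copytree(SRC, OUT)         # copy the site as-is
        for css in OUT.rglob("*.css"):    # pre-compress text assets for the server
            with open(css, "rb") as f_in, gzip.open(f"{css}.gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)

    if __name__ == "__main__":
        build()

Each step is something you could do by hand; scripting them just makes the process repeatable.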

Related

Django projects/apps: what's your approach to implementing a real database schema?

I've read articles and posts about what a project and an app are in Django, and they basically all end up using the typical Polls-and-Users example. However, a real program generally uses a complex relational database, so its design gravitates around that RDB, and the eternal conflict arises once again: which parts should be applications, and which should be components of an application?
Let's take this RDB as an example (courtesy of Visual Paradigm):
I could consider the whole set to be one application, or consider every entity its own application; the outlook is gray. The only thing I'm sure about is this:
$ django-admin startproject movie_rental
So I wish to learn from the expertise of all of you: what approach (not necessarily one of those mentioned above) would you use to decide which applications to create from this RDB for a Django project?
Thanks in advance.
PS1: MORE DETAILS ABOUT MY REQUEST
When programming something I follow these steps:
Understand the context of what you are going to program,
Identify the main actors and objects in that context,
If needed, make a UML diagram,
Design a solid relational database diagram (solid = constraints, triggers, procedures, etc.),
Create the relational database,
Start coding... suffer and enjoy
When I'm learning something new, I hope the material follows these same steps, so I can understand where the author is going with each action.
When reading articles and posts (and watching videos), almost all of them omit steps 1 to 5 (because they choose simple demo apps), and when programming they take the easy route and don't show other situations or the many features Django supposedly offers (reusability, pluggability, etc.).
In making this request, I want to know what criteria experienced Django programmers use to determine which applications to create based on this sample RDB diagram.
With the two answers obtained so far, an "application" for...
brandonris1 is about features/services
Jeff Hui is about implementing entities of a DB
James Bennett is about every action on an object; he likes creating a lot of apps
Conclusion so far: a Django application is a personal creed.
My initial request was about creating applications, but since models have been mentioned, I have another question: with a legacy relational database (as shown in the picture), is it possible to create a Django project with multiple apps? I ask because in every Django demo project I've seen, each app has models with its own tables, giving the impression that tables never interact with those of other applications.
I hope my request is more clear. Thanks again for your help.
It seems you are trying to decide between building a single monolithic application vs microservices. Both approaches have their pros and cons.
For example, a single monolithic application is a good solution if you have a small amount of support resources and do not need to develop new features in fast sprints across the different areas of the application (e.g., film management features vs. staff management features).
One major downside to large monolithic applications is that eventually their feature sets grow too large, and with each new feature a significant amount of regression testing is needed to ensure there aren't negative repercussions in other areas of the application.
Your other option is to go with a microservice strategy. In this case, you would divide these entities amongst a series of smaller services and provide them each methods to integrate/communicate with each other (APIs).
Example:
- Film Service
- Customer Service
- Staff Service
The benefit of this approach is that it allows you to separate capabilities and features by service area, reducing risk and regression testing across the application when new features are deployed or when there is a catastrophic issue (e.g., the DB goes down).
The downside is that under a true microservice architecture all resources are separated, so you need unique resources (e.g., databases, servers) for each service, which increases your operating cost.
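To make the integration point concrete, here is a sketch (in Python, with invented service names, URL, and route) of how the Film service might fetch customer data through the Customer service's API instead of reading its database directly:

    # Hypothetical sketch: the Film service fetching customer data through the
    # Customer service's API (service name, URL, and route are all invented).
    import requests

    CUSTOMER_SERVICE_URL = "http://customer-service.internal/api"

    def get_customer(customer_id):
        """Each service owns its own DB; cross-service reads go over HTTP."""
        resp = requests.get(
            "%s/customers/%s" % (CUSTOMER_SERVICE_URL, customer_id), timeout=5
        )
        resp.raise_for_status()
        return resp.json()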
Either option can be a good one, but the choice depends entirely on your support model and expected volumes. Hope this helps.
ADDITIONAL DETAIL:
After reading through your additional details: since this DB already exists, and my assumption is that you cannot migrate it, you still have the same choice of whether to follow a monolithic application or a microservices architecture.
For both approaches, you would need to connect your Django web app to the specific DB you are already using. I can't speak for every connector out there, but I know that with the MySQL backend, Django can read from the pre-existing DB and systematically generate the models.py file for the application (the inspectdb command). There is also a model Meta option (managed) that lets you define whether or not Django is responsible for actually managing the DB tables themselves.
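As a rough sketch of what that looks like in practice (app, model, and table names here are assumptions based on your diagram), the generated models can be marked unmanaged and can reference models in other apps, which also answers your multi-app question:

    # films/models.py -- sketch of a model generated by `python manage.py
    # inspectdb` and cleaned up by hand (table/field names assumed from the diagram).
    from django.db import models

    class Film(models.Model):
        title = models.CharField(max_length=255)

        class Meta:
            managed = False       # Django won't create, alter, or drop this table
            db_table = "film"     # map to the pre-existing legacy table

    # rentals/models.py -- a model in a *different* app can still reference it,
    # so several apps can share one legacy schema.
    class Rental(models.Model):
        film = models.ForeignKey("films.Film", on_delete=models.PROTECT)
        rental_date = models.DateTimeField()

        class Meta:
            managed = False
            db_table = "rental"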
The only thing this changes from an architecture perspective is how many times you want to code this connection.
If you only want to do it once and fully comply with the DRY principle, you can build a monolithic application, knowing that as new features become required, application-wide regression testing will be an absolute requirement.
If you want ultimate flexibility for future changes to this collection of features, and don't mind recoding the DB connection across multiple apps in exchange for reducing the need for application-wide regression testing as new features arrive, a microservice architecture is more appropriate.

Transitioning from a monolithic application to microservices - approach

We currently have a monolithic web application built with Scala (Scalatra for the REST APIs) on the backend and AngularJS on the front end. The application is deployed on AWS. We are going to build a new component, which we would like to build as an independent microservice. This component will have its own data repository, which may not be the same type of DB. It will also be built with Scala, but with Akka for the REST APIs. The current application is built from a DB module, a domain module, a web service API module, and a front end/client module.
What is a good approach for a smooth journey? We probably need to set up a microservice architecture first, such as an API gateway service along with others.
Too many ways, too many approaches, too many best practices. It really all depends on the analysis of your application, trying to figure out where the natural breaks are.
One place I start is the data model. Lots of people advocate each microservice having its own database. Well, that's fine and dandy, but it can be really difficult to achieve without breaking things all over the place. If you get lucky and there's a place where the data segregates nicely, then see what services would go with it and try breaking those out.
If you do not adhere to the separate-database mentality, then I start with the low-hanging fruit, oftentimes nothing more than simple CRUD operations with a little business logic mixed in, providing some of the basic support for larger-grained services to come. Of course, this becomes more iterative; I'm not sure your organization will like that.
Which brings me to methodology. Organizations who've created monolithic applications often have methodologies that support them, whereas microservices require a much different approach to application development. Is your organization ready for that?
Needless to say, there's no right answer. I've gone to many conferences where these concepts are high on the interest list and the fact is there's no silver bullet, everyone has different ideas of what is right, and there's exceptions galore. You're just going to have to bite the bullet and cross your fingers, unfortunately.

Turn application into web application

Please excuse the noobiness of my question. I am mostly searching here for some directions and buzzwords to start digging from.
I spent some time developing an application in Python.
Basically, it takes a bunch of images and creates a video out of them.
It is quite simple, and uses only a few libraries (mostly OpenCV and NumPy).
I designed a small GUI in GTK, but I think it would be a good idea to offer the service over the web.
I think I could reuse some of my core code and design a front end that people could access in their browser.
I only need a little data to get it running (images, an email address).
The thing is, my web dev skills are really close to 0, and I don't exactly know where to start.
I don't plan on having hundreds of people a day on the platform.
People would connect, feed me the data (a link to a Dropbox folder, Google Drive, whatever), and I would send them a message when it's finished.
If you could provide me with some names or links so that I could start digging into the field, I'd be really glad.
CGI is a fine option, but if you already have Python experience, Django is definitely worth checking out (it falls into the category of rhooligan's #3, except it uses Python!). Django completely takes care of all the database backend details for you, which is a benefit over plain CGI. It also provides easy-to-use predefined classes for handling file uploads, images, etc., and it has a great tutorial that will get you up and running. Just be careful about whether you're using version 1.3, 1.4, or the latest development version, because some aspects of the framework have changed fairly quickly; make sure you're always looking at the right version of the docs.
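To give you a feel for it, here is a minimal sketch of a Django upload view (the form and field names are invented, but request.FILES and chunked file handling are standard Django):

    # views.py -- minimal Django upload view; form/field names are invented,
    # but request.FILES and UploadedFile.chunks() are standard Django.
    from django import forms
    from django.http import HttpResponse

    class UploadForm(forms.Form):
        email = forms.EmailField()
        images = forms.FileField()    # e.g., a zip of the source images

    def upload(request):
        form = UploadForm(request.POST or None, request.FILES or None)
        if form.is_bound and form.is_valid():
            uploaded = form.cleaned_data["images"]
            with open("/tmp/" + uploaded.name, "wb") as dest:
                for chunk in uploaded.chunks():   # stream to disk in chunks
                    dest.write(chunk)
            return HttpResponse("Got it; we'll email you when the video is ready.")
        return HttpResponse("POST an image archive and an email address.")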
Another handy service to keep in mind for something like image processing through a web app is a hosted cloud-computing provider like PiCloud. Unless you already have a private web server with lots of memory and processing power, these cloud services that charge by the millisecond are really cool. They also give you thousands of cores, which could allow you to do lots of concurrent processing. They provide a nice Python API and have NumPy and OpenCV pre-installed on both v2.6 and v2.7. (They use PyOpenCV, but you also have root access to install anything you want, so you could set up the "cv2" interface if that's what you're using; actually, I just looked at your GitHub, and it looks like you're using the old "cv" interface. You can also install any application you want on PiCloud; it doesn't have to be Python.)
You could start by looking into the Python CGI module and see if it will work for you. Then you'll need to do the following steps:
Decide on a web server and install it; Apache is probably a good starting point.
Design the UI. Wireframe things out on paper. Figure out how you'd ideally want users to go through your site and what you want on each page/view.
Your decision in #2 drives all the decisions from this point on. These days, most web applications are a combination of Web 1.0 and JSON/REST "services" (there's a couple of buzzwords for ya!). jQuery is a popular and widely used JavaScript library for developing the front end of your site; that would be another thing to look at. jQuery is completely independent of the back end and can be used with any type of back end (PHP, Ruby, Perl, .NET, etc.).
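If you do try the CGI route first, a bare-bones upload script looks something like this (the field name and save path are placeholders for illustration; the cgi module itself is standard library):

    #!/usr/bin/env python
    # Bare-bones CGI upload script; the cgi module is standard library, while
    # the field name and save path are placeholders for illustration.
    import cgi

    form = cgi.FieldStorage()
    print("Content-Type: text/html")
    print()

    if "image" in form and form["image"].filename:
        item = form["image"]
        # Naive save; sanitize file names before doing this for real.
        with open("/tmp/" + item.filename, "wb") as out:
            out.write(item.file.read())
        print("<p>Upload received; processing will start shortly.</p>")
    else:
        print("<p>No file uploaded.</p>")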

Is Rietveld inextricably tied to App Engine?

I've been looking at Rietveld as a solution to the lack of code reviews at my company. Can it be set up on an in-house server without using App Engine? It seems to have a bit of App Engine-specific code, and I'm not sure it could be set up on a plain old Django/Apache install. I've looked around but haven't found any information about this.
Check out http://django-gae2django.googlecode.com/svn/trunk/examples/rietveld/README
The gae2django project lets GAE apps run against Django instead of the GAE development environment.
That means you can run Rietveld under Django directly, using (by default) an SQLite backend. You can also use MySQL or any other DB backend Django supports.
That, plus a web server (e.g., Apache) with WSGI integration, makes for a nicely running local Rietveld install.
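The database configuration is just ordinary Django settings; a sketch (the paths here are assumptions, and the gae2django README is the authoritative reference):

    # settings.py sketch (paths are assumptions; see the gae2django README
    # for the authoritative setup).
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",  # or ...mysql for MySQL
            "NAME": "/var/local/rietveld/rietveld.db",
        }
    }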
What about using one of these projects that provide the same backend services as GAE?
Typhoon AE
Appscale
There may be more, these are just the ones I know about off the top of my head.
A bit of App Engine-specific code? It's supposed to be an example App Engine app, so yeah, it's pretty well tied to it. But you're right, it does use Django, which could make it somewhat more feasible to port. I'll second cope360's recommendation, but from the sound of your question, it doesn't seem like you've done much with App Engine. If it's only used by a few people, try running it on the GAE SDK itself.
Beyond that, I'd think you could take most of the code in the "codereview" directory and build your own Django/Apache app from that.
Rather than fussing around with a port or other GAE emulation, I would consider using ReviewBoard.
Review Board is a powerful web-based code review tool that offers developers an easy way to handle code reviews. It scales well from small projects to large companies and offers a variety of tools to take much of the stress and time out of the code review process.
For too long, code reviews have been too much of a chore. This is largely due to the lack of quality tools available, leaving developers to resort to e-mail and bug tracker-based solutions.
We've seen a lot of time and energy wasted doing code reviews both in open source projects and at companies. In both cases, code reviews were typically done over e-mail. A significant amount of time was spent in forming review requests, switching between the diff and the e-mail, and trying to understand what parts of the code the reviewer was referring to.
So in an effort to keep our sanity and improve the process both in our open source projects and at companies, we wrote Review Board. We hope it will be useful to your team too so you can focus on what's important: writing great products.

Mocking a web service consumed by a BizTalk request-response port

I'm using BizUnit to unit-test my BizTalk orchestrations, but some orchestrations consume a web service, and testing these seems more like integration testing than unit testing.
I'm familiar with using a mocking framework to mock generated proxy objects in order to test a web service from a Windows Forms application, but I would like to be able to do it in a more integrated way, in a request-response port.
How would you approach this problem?
This goes to the heart of one of my main irritations as a BizTalk developer: BizTalk does not lend itself to unit testing. From the fact that 99% of your interfaces into BizTalk applications are message-based and have a huge number of possible inputs, through to the opaque nature of orchestrations, BizTalk offers no real way of testing units of functionality as... well... units.
For BizTalk, integration tests are sadly often the only game in town.
That results in BizUnit being (IMO, and through no fault of Kevin Smith's) a misnomer; a better name would perhaps be BizIntegrationIt. BizUnit offers a range of tools that assist in integration testing. The majority of its tests, like checking whether a file has been written to a given directory or sending an HTTP request to a BizTalk HTTPReceive location, are, strictly speaking, testing integration.
Now that I've got that rant out of the way, what you are asking for is something I've been thinking about for a long time: the ability to create automated unit tests that give real confidence that a small change to a map won't suddenly break something downstream, along with a way to remove dependence on external services.
I've never found a nice way of doing this, but below is a solution that should work. I've done variations of each part in isolation but never tried to put them all together in this specific form.
Given the desire to mock a call to some external service (which may not even exist yet) without actually making any external call, and wanting the ability to set expectations for that service call and to specify the nature of the response, the only method I can think of is to develop a custom adapter.
Mock webservice using custom adapter
If you build a custom request-response adapter you can plug it into your send port in place of the SOAP adapter. You can then specify properties for the adapter that allow it to behave as a mock of your webservice. The adapter would be similar in concept to a loopback adapter but would allow internal mocking logic.
Things that you might want to include as adapter properties:
Expected document - perhaps a disk location containing an example of what you expect your BizTalk application to send to the web service.
Response document - the document that the adapter will send back to the messaging engine.
Specific expectations for the test such as lookup values in document elements.
You could also have the custom adapter write to disk and setup a BizUnit step to validate the file that was written out.
Building a custom adapter is non-trivial but possible; you can get a good start from the BizTalk Adapter Wizard, and there is an article on deploying custom adapters here.
Note that there is a bug in the code generated by the wizard: you will need to change new Guid("") to new Guid().
There are also some examples of building custom adapters in the BizTalk SDK.
Another option is to use a plain HTTP page and the HTTP solicit-response adapter, as discussed here; all your logic goes in the HTTP page. This is probably simpler if you are happy making an HTTP call and setting up an IIS port to listen for your test.
Initialising unit tests
You can import binding files into a BizTalk application using a .bat file.
If you make a new binding file for each test you run, as well as for your standard application setup, you can then run the appropriate batch file to apply the right binding.
Each binding file would change your web service send port to use the mock custom adapter and set the specific properties for that test.
You could then even make a custom BizUnit step that (perhaps) generated binding settings based on settings in the test step and then ran the shell commands to update the bindings.
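That custom step could be as simple as shelling out to BizTalk's BTSTask tool; a rough sketch in Python (this assumes a BizTalk version whose BTSTask supports ImportBindings; check the exact flags on your install):

    # Hypothetical helper: apply a per-test binding file before a test run.
    # Assumes a BizTalk version whose BTSTask.exe supports ImportBindings;
    # verify the exact flags on your install.
    import subprocess

    def apply_binding(binding_file, application):
        subprocess.check_call([
            "BTSTask.exe", "ImportBindings",
            "/Source:%s" % binding_file,
            "/ApplicationName:%s" % application,
        ])

    # e.g. apply_binding(r"bindings\MockWebserviceTest1.xml", "MyBizTalkApp")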
Testing Message Contents
A final thing you might want to consider, to really tie all this together, is some way of testing the contents of messages. You could do this in your mock adapter, but that would get tedious very quickly for large messages or for a large range of possible input messages.
One option is to make a custom pipeline that calls Schematron to validate the files it receives. Schematron is a schema language that allows a much richer level of document inspection than XSD, so you can check things like "if element x contains this content, I expect element y to be present".
If you built a custom pipeline that took a Schematron schema as a parameter, you could then swap in a testing file for a specific unit test, validating that when you call the web service you get a file that actually matches what you want (and doesn't just match the XSD).
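Outside the pipeline, the same Schematron check is easy to drive from a test script; for example, a sketch using Python's lxml library (the file names are placeholders):

    # Sketch: validating a captured message against a per-test Schematron file
    # using Python's lxml (file names are placeholders).
    from lxml import etree
    from lxml.isoschematron import Schematron

    schema = Schematron(etree.parse("expectations.sch"))  # per-test rules
    doc = etree.parse("outbound_message.xml")             # what the port produced

    if not schema.validate(doc):
        print(schema.error_log)   # lists each failed assertion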
As a co-author of BizUnitExtensions (www.codeplex.com/bizunitextensions), I agree that the "unit" in the BizUnit name can be confusing, but for BizTalk, the integration test is the unit test. Some BizTalk folks have successfully used mocks to test pipeline components, and other test harnesses (+ BizUnit/Extensions) to test schemas and maps.
Orchestrations, unfortunately, are opaque. But there are good reasons for that:
(a) Because of the huge subscription system in the message box, which orchestrations use when being activated etc., it is not possible to fire up some "virtual" process to host an orchestration (which can be done for pipelines; Tomas Restrepo has done something along those lines).
(b) Also, how would this virtual process handle persistence and dehydration? I'd wager that people using WF have the same problem trying to test a workflow fully.
(c) We don't work with the C# directly, so there is no way to "inject" a mock interface into the orchestration code.
(d) An orchestration is not really a "unit"; it's a composite element. The units are the messages going to and from the message box and the external components called through expression shapes. So even if you could inject a mock web service interface, you cannot inject mock message boxes, correlation sets, and other things.
One thing that can be done for orchestrations (and I've been considering an addition to the BizUnitExtensions library to do this) is to link in with the OrchestrationProfiler tool, since it gives a pretty detailed report of all the shapes, and somehow check that individual steps were executed (and perhaps how long each took). This could go quite far toward making the orchestration a bit more of a white box. Also, considering that the orchestration debugger shows a lot of variable values, surely it must be possible to get that info via an API and show what the values of variables were at a given point for a given instance.
Back to Richard's question, though: my previous dev team had a solution. Basically, we wrote a generic, configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath expressions. In the BUILD and DEV binding files, the web service endpoint was the mock. This worked brilliantly in isolating the BUILD and DEV environments from the actual third-party web services. It also helped with a "contract first" approach: we built the mock, and the orchestration developer used it while the web service author went ahead and built the actual service.
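To make the mechanism concrete, here is the same idea sketched in Python rather than as an ASP.NET HttpHandler (the XPath rules, responses, and port are invented):

    # mock_service.py -- canned responses chosen by XPath, using only the
    # standard library (rules and port are invented; requires Python 3.7+
    # for the [.='text'] predicate).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from xml.etree import ElementTree

    RULES = [  # (xpath, canned response), checked in order
        (".//CustomerType[.='GOLD']", b"<Quote><Discount>20</Discount></Quote>"),
        (".//CustomerType",           b"<Quote><Discount>0</Discount></Quote>"),
    ]

    class MockServiceHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            doc = ElementTree.fromstring(body)
            response = b"<Fault>no rule matched</Fault>"
            for xpath, canned in RULES:
                if doc.find(xpath) is not None:
                    response = canned
                    break
            self.send_response(200)
            self.send_header("Content-Type", "text/xml")
            self.end_headers()
            self.wfile.write(response)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8081), MockServiceHandler).serve_forever()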
[Update 17-Feb-2009: this tool is now on CodePlex: http://www.codeplex.com/mockingbird. If this approach sounds interesting, check it out and let me know what you think of the tool.]
Now, before someone throws in the old "what about mock object frameworks" chestnut, let me say that the utility above was used for both BizTalk consumers and non-BizTalk consumers, but I have also worked with NMock2 and found it an excellent way to mock interfaces and set expectations when writing CLR consumers. (I'm going to look into Moq and TypeMock soon.) However, it won't work with orchestrations, for the reasons described above.
Hope this helps.
Regards,
Benjy
Don't.
Don't test against arbitrary interfaces, and don't create mocks for them.
Most people seem to see developer (unit) testing as intended for testing nontrivial, individual units of functionality such as a single class. On the other hand, it is also important to perform customer (acceptance/integration) testing of major subsystems or the entire system.
For a web service, the nontrivial unit of functionality is hidden in the classes that actually perform the meaningful service, behind the communication wiring. Those classes should have individual developer test classes that verify their functionality, but completely without any of the web-service-oriented communication wiring. Naturally, but maybe not obviously, that means that your implementation of the functionality must be separate from your implementation of the wiring. So, your developer (unit) tests should never ever see any of that special communication wiring; that is part of integration and it can be viewed (appropriately) as a "presentation" issue rather than "business logic".
The customer (acceptance/integration) tests should address a much bigger scale of functionality, but still not focused on "presentation" issues. This is where the use of the Facade pattern is common--exposing a subsystem with a unified, coarse-grained, testable interface. Again, the web service communication integration is irrelevant and is implemented separately.
However, it is very useful to implement a separate set of tests that actually do include the web service integration. But I strongly recommend against testing only one side of that integration: test it end-to-end. That means building tests that are web service clients just like the real production code; they should consume the web services exactly the way that the real application(s) do(es), which means that those tests then serve as examples to anyone who must implement such applications (like your customers if you are selling a library).
So, why go to all that trouble?
Your developer tests verify that your functionality works in-the-small, regardless of how it is accessed (independent of presentation tier since it is all inside the business logic tier).
Your customer tests verify that your functionality works in-the-large, again regardless of how it is accessed, at the interface boundary of your business logic tier.
Your integration tests verify that your presentation tier works with your business logic tier, which is now manageable since you can ignore the underlying functionality (because you tested it separately above). In other words, these tests are focused on a thin layer of a pretty face (GUI?) and a communication interface (web services?).
When you add another method of accessing your functionality, you only have to add integration tests for that new form of access (presentation tier). Your developer and customer tests ensure that your core functionality is unchanged and unbroken.
You do not need any special tools, such as a test tool specifically for web services. You use the tools/components/libraries/techniques that you would use in production code, exactly as you would use them in such production code. This makes your tests more meaningful, since you are not testing someone else's tools. It saves you lots of time and money, since you are not buying, deploying, developing for, and maintaining for a special tool. However, if you are testing through a GUI (don't do that!), you might need one special tool for that part (e.g., HttpUnit?).
So, let's get concrete. Assume that we want to provide some functionality for keeping track of the cafeteria's daily menu ('cause we work in a mega-corp with its own cafe in the building, like mine). Let's say that we are targeting C#.
We build some C# classes for menus, menu items, and other fine-grained pieces of functionality and their related data. We establish an automated build (you do that, right?) using NAnt that executes developer tests using NUnit, and we confirm that we can build a daily menu and look at it via all these little pieces.
We have some idea of where we are going, so we apply the Facade pattern by creating a single class that exposes a handful of methods while hiding most of the fine-grained pieces. We add a separate set of customer tests that operate only through that new facade, just as a client would.
Now we decide that we want to provide a web page for our mega-corp knowledge workers to check today's cafeteria menu. We write an ASP.NET page, have it invoke our facade class (which becomes our model if we are doing MVC), and deploy it. Since we have already thoroughly tested the facade class via our customer tests, and since our single web page is so simple, we forego writing automated tests against the web page--a manual test using a few fellow knowledge workers will do the trick.
Later, we start adding some major new functionality, like being able to preorder our lunch for the day. We extend our fine-grained classes and the corresponding developer tests, knowing that our pre-existing tests guard us against breaking existing functionality. Likewise, we extend our facade class, perhaps even splitting off a new class (e.g., MenuFacade and OrderFacade) as the interface grows, with similar additions to our customer tests.
Now, perhaps, the changes to the website (two pages is a website, right?) make manual testing unsatisfactory. So we bring in a simple tool comparable to HttpUnit that allows NUnit to test web pages. We implement a battery of integration/presentation tests, but against a mock version of our facade classes, because the point here is simply that the web pages work; we already know that the facade classes work. The tests push and pull data through the mock facades, only to verify that the data successfully made it to the other side. Nothing more.
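For anyone who wants to see the shape of such a test, here is a sketch in Python with unittest.mock standing in for the C#/NUnit setup described above (all the class and method names are invented):

    # Python analogue of the mock-facade test described above (the real example
    # is C# with NUnit; all names here are invented).
    from unittest import TestCase
    from unittest.mock import Mock

    class MenuPage:
        """Stand-in presentation-tier class: renders whatever the facade returns."""
        def __init__(self, facade):
            self.facade = facade

        def render(self):
            items = "".join("<li>%s</li>" % n for n in self.facade.todays_menu())
            return "<ul>%s</ul>" % items

    class MenuPageTests(TestCase):
        def test_page_renders_menu_from_mock_facade(self):
            facade = Mock()
            facade.todays_menu.return_value = ["soup", "salad"]
            html = MenuPage(facade).render()
            self.assertIn("soup", html)    # the data made it to the other side
            facade.todays_menu.assert_called_once_with()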
Of course, our grand success prompts the CEO to request (demand) that we expose the web application to mega-corp's BlackBerrys. So we implement some new pages and a new battery of integration tests. We don't have to touch the developer or customer tests, because we have added no new core functionality.
Finally, the CTO requests (demands) that we extend our cafeteria application to all of mega-corp's robotic workers--you did notice them over the last few days? So, now we add a web services layer that communicates through our facade. Again, no changes to our core functionality, our developer tests, or our customer tests. We apply the Adapter/Wrapper pattern by creating classes that expose the facade with an equivalent web service API, and we create client-side classes to consume that API. We add a new battery of integration tests, but they use plain nUnit to create client-side API classes, which communicate over the web service wiring to the service-side API classes, which invoke mock facade classes, which confirm that our wiring works.
Note that throughout this whole process, we did not need anything significant beyond our production platform and code, our chosen development platform, a few open-source components for automated building and testing, and a few well-defined batteries of tests. Also note that we didn't test anything that we don't use in production, and we didn't test anything twice.
We ended up with a solid core of functionality (business logic tier) that has proven itself mature (hypothetically). We have three separate presentation tier implementations: a website targeted to desktops, a website targeted to BlackBerrys, and a web service API.
Now, please forgive me for the long answer--I tire of inadequate answers and I did not want to provide one. And please note that I have actually done this (though not for a cafeteria menu).
This is a very interesting question that I still haven't seen a good generic answer to. Some people suggest using SoapUI, but I haven't had time to actually test it yet. This page might be interesting on that.
Another way might be to somehow wrap WebDev.WebHost.dll and use that... Phil Haack discusses that in this post.
It's also been discussed before on SO here.
Please let us know if you find another solution to this!
This is the way to do it:
Back to Richard's question, though: my previous dev team had a solution. Basically, we wrote a generic, configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath.
I haven't had to do this in a while, but when I tested my BizTalk apps I always used either SoapUI or Web Service Studio. I was able to test different input values without effort.