Is it okay to expose LinqToSQL generated classes in a web service?

I'm making an ASMX web service and I use LinqToSQL to deal with the database.
It seems easy for me to use the LinqToSQL generated classes as arguments or return values in web methods.
Like this:
[WebMethod]
public OperationResult Meet_ListMeets(string clientToken, out meet[] meets)
{
    ServiceMeet s = new ServiceMeet(sqlCon, clientToken);
    return s.ListMeets(out meets);
}
Where "meet" is a LinqToSQL class.
I found that the meet class is exposed as a WSDL complex type, including all its dependencies (such as other classes that reference meet through foreign keys in the database).
The main question is: is it good practice to use classes that way? And what about security?
Should I use wrapper classes to hide my entity structure?

Not a good practice, and you'll most likely run into problems at some point. Not to mention the overhead of all that extra cruft down the wire.
What I've done in the past is make an "in-between" model with just the fields I need to actually send across the wire, and then map them back to the real objects when they come in. You can do the mapping manually or with one of the many mapping toolkits for .NET (look on NuGet).
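For example, a minimal sketch of that in-between model (MeetDto and its Id/Title/Date fields are invented here; use whatever columns your meet table actually has):

using System;

// Hypothetical DTO exposing only the fields the client needs;
// the generated LinqToSQL "meet" class never crosses the wire.
public class MeetDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime Date { get; set; }
}

public static class MeetMapper
{
    // Entity -> DTO, for outgoing responses.
    public static MeetDto ToDto(meet entity)
    {
        return new MeetDto { Id = entity.Id, Title = entity.Title, Date = entity.Date };
    }

    // DTO -> entity, for incoming requests (entity is new or loaded from the DataContext).
    public static void Apply(MeetDto dto, meet entity)
    {
        entity.Title = dto.Title;
        entity.Date = dto.Date;
    }
}

The web method would then declare out MeetDto[] meets instead of out meet[] meets, so the WSDL describes only the DTO and none of the foreign-key relations leak out.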


Location of Initialization of loosely coupled modules

Please excuse me if my question is already answered, but I searched both SO and Software Engineering and did not find a straight answer or bits of information that make this clear.
I'm developing a kind-of-small application which, in short, connects to a web service, fetches some data, and plays back some music based on the fetched data. I have broken down all the parts of my application into different "module interfaces", for example a "WebServiceInterface", "ConfigurationInterface", "SystemTrayInterface", etc.
I'm in the beginning steps of understanding and implementing SRP (and SOLID generally) in my application.
Now, all these interfaces and their implementations are broken out into separate headers/sources. So a short version of my question is:
"With respect to SRP, where should I declare and instantiate the necessary "modules" required for application startup, and use them?"
I mean, there must be a place (main(), a function, or a class) where some of the classes are declared and initialized to a proper state in order for the application to actually launch. My problem stems from the fact that SRP states:
Every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class
But I'm confused: if I cannot have a single place which contains all the declarations and instantiations of my main modules, how am I supposed to start the application?
I saw this: https://stackoverflow.com/a/5744241/1044356
Loose coupling between 2 classes means each class has very little knowledge of the internal behavior of the other class.
You may have a higher degree of "coupling" between classes which belong to the same "module" or "package", and it's not a bad practice.
Does this mean I can have a class which wraps around the interfaces of modules that are independent of each other and sets them up? This sounds like a god class to me.
I can provide additional information if needed to clear any ambiguities.
Encapsulation is about hiding internal implementation details.
The application object doesn't need to know how the web service object retrieves data, only that, if it (the application) makes a properly formatted request and nothing else goes wrong, it will get data in return. It does not mean that the application can't instantiate a web service if it needs to make such a request.
Some idioms (such as pimpl) allow you to hide nearly all of the implementation details by having a public interface that defers to a private implementation. Using such an idiom, your application would know only about the wrapper and would not even be able to see the data needed to make the private object work.
This can be taken to an extreme where the only free-standing object (meaning an object that is not part of, or at least not owned by, another) is the application object itself.
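As a rough sketch of that last point (shown in C# for brevity, though the same shape works in C++; all the type names here are invented): main() only builds the application object, the application object owns the modules, and each module keeps its single responsibility.

using System;

// Hypothetical module interfaces, each covering one responsibility.
public interface IWebServiceClient { string FetchData(); }
public interface IMusicPlayer { void Play(string data); }

// Trivial stand-in implementations.
public class HttpWebServiceClient : IWebServiceClient
{
    public string FetchData() { return "data-from-service"; } // placeholder for the real call
}

public class ConsolePlayer : IMusicPlayer
{
    public void Play(string data) { Console.WriteLine("Playing based on: " + data); }
}

// The application object's single responsibility: own the modules and
// drive the top-level workflow. It knows that data is fetched, not how.
public class Application
{
    private readonly IWebServiceClient webService;
    private readonly IMusicPlayer player;

    public Application(IWebServiceClient webService, IMusicPlayer player)
    {
        this.webService = webService;
        this.player = player;
    }

    public void Run()
    {
        player.Play(webService.FetchData());
    }
}

public static class Program
{
    public static void Main()
    {
        // The one place where modules are instantiated and wired together.
        new Application(new HttpWebServiceClient(), new ConsolePlayer()).Run();
    }
}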

Where to put database access/functionality in a Clojure application?

I'm writing a small Clojure application that interacts heavily with a MongoDB database containing 2-3 different collections.
I come from an OOP/Ruby/ActiveRecord background where standard practice is to create one class per data model and give each one access to the database. I've started doing the same thing in my Clojure project: I have one namespace per "data model", and each has its own database connection and CRUD functions. However, this doesn't feel very functional or Clojure-like, and I was wondering if there is a more idiomatic way of doing it, such as having a data or database namespace with functions like get-post, and limiting database access to only that namespace.
This seems like it would have the benefit of isolating the database client dependency to just one namespace, and also of separating pure functions from those with side effects.
On the other hand, I would have one more namespace which I would need to reference from many different parts of my application, and having a namespace called "data" just seems odd to me.
Is there a conventional, idiomatic way of doing this in Clojure?
A nice and arguably the most idiomatic way to manage state in a Clojure app (it scored 'adopt' on the Clojure radar) is the one proposed by Stuart Sierra's great Component library. In a nutshell, the philosophy of Component is to store all the stateful resources in a single system map that explicitly defines their mutual relationships, and then to architect your code in such a way that your functions merely pass the state to each other.
Connection / environment access
One part of your system will be to manage the 'machinery' of your application: start the web server, connect to data stores, retrieve configuration, etc. Put this part in a separate namespace from your business logic (your business logic namespaces should not know about this namespace!). As #superkondukr said, Component is a battle-tested and well-documented way to do this.
The recommended way to communicate the database connection (and other environmental dependencies for that matter) to your business logic is via function arguments, not global Vars. This will make everything more testable, REPL-friendly, and explicit as to who depends on whom.
So your business logic functions will receive the connection as an argument and pass it along to other functions. But where does the connection come from in the first place? The way I do it is to attach it to events/requests when they enter the system. For instance, when you start your HTTP server, you attach the connection to each HTTP request coming in.
Namespace organization:
In an OO language, the conventional support for data is instances of classes representing database entities; in order to provide an idiomatic OO interface, business logic is then defined as methods of these classes. As Eric Normand put it in a recent newsletter, you define your model's 'names' as classes, and 'verbs' as methods.
Because Clojure puts the emphasis on plain data structures for conveying information, you don't really have these incentives. You can still organize your namespaces by entity to mimic this, but I actually don't think it's optimal. You should also account for the fact that Clojure namespaces, unlike classes in most OO languages, don't allow for circular references.
My strategy is: organize your namespaces by use case.
For example, imagine your domain model has Users and Posts. You may have a myapp.user namespace for the Users CRUD and core business logic; similarly, you may have a myapp.post namespace. Maybe in your app Users can like Posts, in which case you'll manage this in a myapp.like namespace which requires both myapp.user and myapp.post. Maybe your Users can be friends in your app, which you'll manage in a myapp.friendship namespace. Maybe you have a small backoffice app with data visualization about all this: you may put it in a myapp.aggregations namespace, for example.

Interfaces for Rich Domain Models

Should Rich Domain Models have interfaces to assist with isolation during unit testing (e.g. when testing a service that uses the model)?
Or should the Rich Domain Model behaviour be involved in any related unit tests?
Edit:
By Rich Domain Model I'm specifically referring to domain entities that contain logic (i.e. non-anaemic).
Usually, the Domain Model is the part that you should keep isolated from everything else. The Domain Model may use interfaces so that it's isolated from external systems, etc.
However, in the most common scenarios, the Domain Model is what you're trying to protect from the deteriorating influences of external systems, UI logic, etc. - not the other way around.
Thus, there's little reason to put interfaces on the Domain Model.
You should definitely involve domain model behaviour in your unit tests. There's absolutely no point in mocking this part. You should really only be mocking external systems.
Should Rich Domain Models have interfaces
I'd say no, because
Domain behavior happens inside a well-delimited bubble. A given command triggers changes in a single aggregate. Conceptually, you don't have to isolate it since it basically just talks to itself. It might also issue messages (events) to the outside world, but testing that doesn't require the domain entities themselves to be mockable.
Concretely, domain entity (or value object) behavior is fluid -- it happens all in memory and is not supposed to directly call lower level, I/O bound operations. There will be no performance impact to not mocking stuff in your tests as long as the system under test is a small prepared cluster of objects (the Aggregate and maybe the thing calling it).
Domain model is a realm of concrete concepts. Ubiquitous language terms reflected in your entity or value object class/method names are usually prosaic and unequivocal. There's not much need for abstraction and polymorphism there -- which is what interfaces or abstract classes are for. The domain is less about generic contracts or interfaces providing services, and more about real world tasks that take place in the problem domain.
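To make that concrete, a small sketch (Order/OrderLine are invented examples, and the test uses NUnit-style attributes purely for illustration): the aggregate's behaviour runs entirely in memory, so the test exercises it directly, with no interfaces and nothing to mock.

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// A small aggregate with real behaviour and no interface.
public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void AddLine(string product, int quantity, decimal unitPrice)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive.");
        lines.Add(new OrderLine(product, quantity, unitPrice));
    }

    public decimal Total()
    {
        return lines.Sum(l => l.Quantity * l.UnitPrice);
    }
}

public class OrderLine
{
    public OrderLine(string product, int quantity, decimal unitPrice)
    {
        Product = product;
        Quantity = quantity;
        UnitPrice = unitPrice;
    }

    public string Product { get; private set; }
    public int Quantity { get; private set; }
    public decimal UnitPrice { get; private set; }
}

[TestFixture]
public class OrderTests
{
    [Test]
    public void Total_sums_all_lines()
    {
        // Plain in-memory objects: no mocks, no test doubles.
        var order = new Order();
        order.AddLine("book", 2, 10m);
        order.AddLine("pen", 1, 2.5m);
        Assert.AreEqual(22.5m, order.Total());
    }
}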

Difference between Ember objects and Ember Data ones

What's the difference between Ember Objects and the ones from Ember Data? I know that I should use Ember Data models when there is some data on the server, but when and where should I use either of them?
Note: This is rather long, biased and represents my own opinion on this matter. May not be the answer.
The type Object is what you could call the "simplest" object type in Ember. It has the most essential features you would likely use in modern applications, like computed properties and observables. Allied to the runtime, it also allows bindings, filtering, etc. I would call it the general-purpose object, which can be extended to create other types and can also be combined with mixins to further enhance its usage. It has a great but finite number of features, but I wouldn't call it backend-friendly, if only because I know of DS.Model and its features.
Ember-Data's DS.Model heavily extends the features of Object in order to provide more features that make sense when working with backend data in (most cases) a RESTful environment. Much like an object supported by an ORM (e.g. .NET's EntityFramework or Ruby's ActiveRecord), it provides a set of features so that objects of that type (DS.Model) can be managed through a data store (DS.Store). Beyond the features already present in Object, it allows state management (isDirty, isNew, isError, etc.), the ability to commit and roll back an object in a store (and subsequently the backend API), relationships/associations, etc.
If you are using Ember-Data at all, you should use the type Model, since it was intended to be used with the store, and the store uses the model type for sideloading, associations, plurals, and throughout the whole AJAX request/response workflow. In fact, one of the advantages of using a Model backed by a Store is exactly this: have the framework do the heavy lifting of building the AJAX request to the correct RESTful resource on its own, managing the response, and sideloading the JSON payload into an object of the correct type, while giving you a promise so you can use the model while the data is being requested/processed/materialized (and so you can transition views/routes while this is happening).
It also gives you a great many convenient features within the store-backed object itself (e.g. record.deleteRecord(); store.commit()), and at the end of the day it makes us more productive, so we can build applications a lot faster.
With that said, there is criticism of this approach, because a large number of developers don't like, or don't feel comfortable, resorting to what people call technomagic; in other words, they don't feel like using it because they feel they are not 100% in control of what happens under the hood. In my personal opinion, while I can see where these people are coming from, I believe Ember-Data does nothing but help me be more productive, and the only things it asks in return are that I am consistent with my code and that I follow certain conventions, and I'm happy with that.
Back to Object: if you are not using Ember-Data, you should use the Object type for your models. This means that you will have to do all of these tasks manually (generally not a big deal). So you will have to create the AJAX requests manually, handle the responses, load the response data into your objects, and basically maintain the whole communication workflow between your client app and your API. The advantage is that you will be 100% in control, at the cost of a little more effort, as described here by Robin Ward. You will still be able to use the routing API and most of the great features that make Ember what it is.
So the question of when and where to use each of these types really depends on what architecture you have on your backend and what level of flexibility you have around that.
This is not something that can have a definite answer for everybody, but it can be dealt with by answering a few questions that will assess the viability of using Ember-Data (think of the long run):
Does my API return JSON in the same format defined by Ember conventions?
If that's true only partially, can my team and I simply define mappings on a model basis to have everything conforming to conventions?
If not, can I change my backend API to conform to these conventions?
If not, where can I find an adapter that is specific for my backend technology?
I can't find one; would it be feasible to write my own adapter?
After answering these questions, take into account the development iterations and life cycle; think of what it would take to maintain it with either approach in the long run; and also consider what path other people in the community have taken when deciding their architecture and/or development strategy.
At the end of the day, you have to understand what these objects bring in terms of features and whether or not you will need them in order to build your application. IMHO, Ember-Data is the way to go for most cases, and it can only get better as we approach (possibly RC3 and then) Ember 1.0 final, which is likely to include Ember-Data as part of the package.
I hope this helps.

Avoiding Inheritance Madness

So, I have an API that I need to integrate into an existing framework. This API manages interactions with an external server. I've been charged with coming up with an easily repeatable "pattern," so that people working on new projects in the given framework have a simple solution for integrating the API.
My first idea was to create a class for the framework's "main" class to extend, which would provide all the virtual functions necessary to interact with the API. However, my boss vetoed this, since the existing framework is "inheritance heavy" and he wants to avoid adding to the madness. I obviously can't just encapsulate my API, because that is what the API itself is supposed to be doing, and doing so might hide functionality.
Short of asking future developers to copy and paste my example, what do I do?
If your boss is hostile to inheritance, try aggregation. (Has-a relationships rather than inheritance's is-a relationship.) Assuming you interface with the API in question via an object, maybe you can just keep that object in a property of your framework 'main' class, so you'd interact with it like main->whateverapi->doWhatever(). If the API isn't object-implemented or you need to load a lot of functionality specific to your environment onto it, that points toward making your own class that goes into that role and relates to the third party API however it needs to. Yeah, this basically means you're building an API to the API. Aggregation allows you to avoid the masking-functionality problem, though; even if you do have to do an intermediary layer, you can expose the original API as main->yourobject->originalapi and not have to worry about inheritance mucking things up.
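A bare-bones C# sketch of that shape (the names are made up; ThirdPartyApi stands in for whatever object the real API exposes):

// Stand-in for the API object you were given.
public class ThirdPartyApi
{
    public void DoWhatever() { /* talks to the external server */ }
}

// Has-a instead of is-a: the framework's main class owns the API object
// and exposes it, so nothing is hidden and nothing needs to be inherited.
public class Main
{
    public ThirdPartyApi WhateverApi { get; private set; }

    public Main()
    {
        WhateverApi = new ThirdPartyApi();
    }
}

Callers then use main.WhateverApi.DoWhatever(), and if you do end up needing an intermediary layer, it can still expose the original API as a property in the same way.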
Sounds to me like what your boss is having a problem with is the framework part of this. There is an important distinction between a framework and an API: in order to code to a framework you must have a good understanding of it and how it fits within your overall development, much more of a holistic view, and adding to frameworks should never be taken lightly.
APIs, on the other hand, are just an interface to your application/framework, and usually just a library of utility calls. I can't see that he would have a problem with inheritance or aggregation in a library; it seems to me that the issue would be creating additional complexity in the framework itself. That is, requiring developers to extend the main class of the framework is much more onerous than creating a stand-alone API library that people can just call into (if they choose). I would be willing to bet that your boss would not care (and would in fact probably support it) if the library itself contained inheritance.
Like the answer from chaos above, I was going to suggest aggregation as an alternative to inheritance. You can wrap the API and make it configurable either via properties or via dependency injection.
Also for a related topic see my answer to "How do the Proxy, Decorator, Adaptor, and Bridge Patterns differ?" for a run-down on other "wrapper" design patterns.