On my last project we designed an issue documentation component (like a bug reporting tool) that should integrate into different client applications (desktop applications, mobile apps, and also run standalone in a web client). The focus was not on the application/UI itself, but on its functional capabilities as a service. Think of it like the Google Search API, which can also be integrated into your browser, as a widget, on your phone, and so on.
While defining functional (use case) and non-functional requirements, I had great trouble keeping them from becoming UI-specific, because what we really wanted was a service.
As a workaround, we simply wrote the requirements against a standalone web application, plus the non-functional requirement that all function calls have to go through a RESTful service API, hoping that we won't miss any function later when this API is used from a desktop application, for example. Since we don't actually want a web app in the first place, but a service, I am not very happy with this indirect way of doing requirements analysis, and I can only hope our developers get the point.
So my question is: how are REST APIs or web services specified in a way that a developer knows what to do? Is there a UML use case profile for web services, for example?
Do not forget: use cases are just "half of the whole story"
You will also have non-functional requirements. You cannot capture every important detail with use cases.
Then ask yourself: are use cases right for me?
Use cases are generally good for "interactive systems": systems that interact with a user. They are good for capturing "functional" requirements.
Use cases are not good for every system. Be open-minded: if, while writing your use cases, you see that they do not capture what you want or do not add any value, start with an alternative technique instead, such as a plain feature list.
Find The Root Cause
Ask yourself: "Why did I have so much trouble defining my requirements without getting into UI-specific details?"
Pick your battles: quality scenarios + architectural factor table
Identify your architecturally significant requirements. One good way to define them is with "quality scenarios":
Quality scenarios are short statements of the form [stimulus] [measurable response].
For example: when a new bug report is entered into the bug system [stimulus], it will be sent to the bug owner's mobile phone within 5 minutes under X conditions [measurable response].
Then you can create an architectural factor table from your quality scenarios.
An architectural factor table is a tool that helps you understand the influence of factors, their priorities, and their variability.
There is a sample factor table in Craig Larman's book Applying UML and Patterns.
"Guarantee that the software do what you want"...
So write your test first...Or create "executable" specifications...
And communicate...
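As a rough illustration of what an "executable specification" for such an issue-reporting service might look like: this is only a sketch, and the IssueService, Issue, and ReportResult types (as well as the choice of C# with NUnit) are assumptions for the example, not anything defined in the question.

```csharp
using NUnit.Framework;

// Placeholder domain types so the specification compiles on its own;
// in a real project these would live in the service component itself.
public enum Severity { Low, Medium, High }

public class Issue
{
    public string Title { get; set; }
    public Severity Severity { get; set; }
}

public class ReportResult
{
    public string Id { get; set; }
    public bool OwnerNotified { get; set; }
}

public class IssueService
{
    // Stub implementation; the real one would sit behind the RESTful API.
    public ReportResult Report(Issue issue) =>
        new ReportResult { Id = System.Guid.NewGuid().ToString(), OwnerNotified = true };
}

[TestFixture]
public class IssueServiceSpecification
{
    [Test]
    public void Reporting_an_issue_assigns_an_id_and_notifies_the_owner()
    {
        var service = new IssueService();
        var issue = new Issue { Title = "Crash on save", Severity = Severity.High };

        var result = service.Report(issue);

        Assert.That(result.Id, Is.Not.Null, "every reported issue gets an identifier");
        Assert.That(result.OwnerNotified, Is.True, "the owner must be notified on creation");
    }
}
```

The value of such a specification is that it states behaviour in terms of the service's capabilities, not any particular UI.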
Finally
There is nothing like good old software engineering, even for REST APIs. :-)
Looking at the definition of a software requirements specification, you can specify a RESTful API as a technical requirement from the product-use perspective, under Software Interfaces (i.e. Overall description > Product perspective > Software interfaces). This is not a functional requirement; it is only a technical requirement of your project.
There is no UML use case profile for this, because use cases are intended to specify functional requirements. You can specify the interaction of users accessing your system through the RESTful API, covering the functional access and data exchange, in a regular use case.
All the characteristics required of the expected RESTful API should be specified as technical requirements (i.e. Specific requirements > External interface requirements). The developer then knows what to do by considering all the requirements of the application together.
I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community for input on here.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
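A minimal sketch of what such a seam plus a characterization test might look like, assuming C# and NUnit; the discount-calculation logic and all names here are invented purely for illustration, not taken from Feathers' book or the question.

```csharp
using NUnit.Framework;

// Hypothetical isolated block of legacy logic, split out behind its own interface (the "seam").
public interface IDiscountCalculator
{
    decimal Apply(decimal orderTotal, int customerYears);
}

// The existing behaviour, moved verbatim from the legacy blob; do not "improve" it yet.
public class LegacyDiscountCalculator : IDiscountCalculator
{
    public decimal Apply(decimal orderTotal, int customerYears)
    {
        if (customerYears > 5) return orderTotal * 0.90m;
        if (customerYears > 2) return orderTotal * 0.95m;
        return orderTotal;
    }
}

// Characterization tests: they pin down what the code does *today*,
// so later refactoring can be verified against the same expectations.
[TestFixture]
public class LegacyDiscountCalculatorCharacterizationTests
{
    [Test]
    public void Current_behaviour_is_preserved()
    {
        IDiscountCalculator calc = new LegacyDiscountCalculator();

        Assert.That(calc.Apply(100m, 1), Is.EqualTo(100m)); // no discount for new customers
        Assert.That(calc.Apply(100m, 3), Is.EqualTo(95m));  // 5% after two years
        Assert.That(calc.Apply(100m, 6), Is.EqualTo(90m));  // 10% after five years
    }
}
```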
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present them with a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All of these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can only assure you that it is anything but easy. It requires change all the way at the top of the tech ladder in your organization: CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people who have a long history with the old product and who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as a coder, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!
We are implementing a Web Service using Apache Camel that has many (20-50) "direct:" routes calling Java methods. Every method basically has a route to it, whether it's for business rule processing, or DAO access methods. All the routes use from("direct:").to("direct"), but never to any other component.
While this may seem to decouple the system from the standard Controller -> BO -> DAO layering, it adds unnecessary bookkeeping of the Camel routes.
A better alternative would simply be to define Java interfaces for the business object and DAO layers, with an additional interface for any other service requests (external to the system, like file:// or http://) that are dependencies of the business objects or controllers. The implementation of this additional interface would use Apache Camel to talk to those external services.
As a side note, I'm thinking about how to convince my current colleagues to see my point.
Thoughts?
tl;dr:
Should Apache Camel be used when only one or two applications are involved?
I have worked on integrations involving 20 systems of varying complexity, protocols, and patterns. I know of other places where 50+ systems are involved. The only limits are your design, performance, etc.
Apache Camel is a middleware framework. Essentially your business logic should not know about how the data got to it or where it is to be delivered, only what it should deliver. Camel should take care of the rest.
By the way, does your middleware not talk to the external world? Why use only the direct: component and no others?
You can also hide the middleware by using bean integration. That gives you even more decoupling. See here: http://camel.apache.org/bean-integration.html
It really depends on what it is you want to accomplish and what your requirements are.
Well, for your specific use case I would say that you would probably have been better off with just a regular Java implementation, with Camel used as glue between your system and the outside world. As Souciance explains in his answer, Camel really shines when you have to integrate with multiple systems, so you should at least keep it for communicating with the outside world.
However, having already implemented the system's internals using Camel, I would say it doesn't make much sense to put additional effort into replacing it with pure Java. The current implementation gives you the ability to make the system more robust, for instance by replacing the direct routes with a high-performance message queue, which would make your system more resistant to failures and easier to decouple later on. Not to mention that having routes around your DAO objects makes it much easier to implement batching of DB updates when your system's load grows.
I am just trying to look at different licensing models and potential technical C++ implementations.
Suppose you have a desktop application including several algorithms (A1, A2, A3). This application communicates with some server (potentially in the cloud). These "local" algorithms may be used independently. Is there any solution/framework out there which could allow us to bill their usage independently?
For example, one user uses algorithms A2 and A3. Before saving files, the software computes the final bill, sends it to some server, asks the user to pay it, and then generates the results file.
This would allow us to ship a potentially expensive piece of software "for free" to users, without the risk of them spending an enormous amount of money upfront before being sure the software will actually be heavily used.
Related question: what are the risks?
Your pricing model is feasible at large scale, and it is essentially the same as what cloud providers offer. However, I don't think any native application would be scalable or practical with this model.
Most software that is too costly to license per user is instead sold with a floating license and a cap on the number of simultaneous users.
Pay-as-you-use is great, but it is the same as cloud computing, and then the question is simple:
Do you want to reinvent the wheel?
Unless you want to invest in your own cloud server, you can easily put your application on someone else's cloud.
If you are ready to invest in building and maintaining your own cloud, then you should go ahead.
Edit:
You can use a web service for the payment mechanism: expose a web service that calculates the price to be incurred. I would personally use Java or C# for this purpose, as both have good support for it; for the wrapper around the C++ code I would use JNI or C++/CLI.
Another thing: I have not come across any open-source tool for this, since each web service has its own requirements. You can find the architecture, but no ready-made implementation.
C++ code -> web service -> price billing -> result returned to caller.
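As a very rough sketch of that flow: the names, prices, and the choice of C# are assumptions for illustration only; the real C++ algorithms would sit behind a JNI or C++/CLI wrapper as described above, and the bill would be submitted to the billing web service before results are released.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical usage meter sitting between the UI and the native algorithms (A1, A2, A3).
// In the suggested design this would live behind a web service endpoint.
public class AlgorithmUsageMeter
{
    private readonly Dictionary<string, decimal> _pricePerCall = new Dictionary<string, decimal>
    {
        ["A1"] = 0.10m,  // made-up prices
        ["A2"] = 0.25m,
        ["A3"] = 0.50m,
    };

    private readonly List<string> _calls = new List<string>();

    public void RecordCall(string algorithmId)
    {
        if (!_pricePerCall.ContainsKey(algorithmId))
            throw new ArgumentException($"Unknown algorithm: {algorithmId}");
        _calls.Add(algorithmId);
    }

    // The "compute the final bill" step, sent to the billing server before
    // the results file is released to the user.
    public decimal ComputeBill() => _calls.Sum(id => _pricePerCall[id]);
}

public static class Demo
{
    public static void Main()
    {
        var meter = new AlgorithmUsageMeter();
        meter.RecordCall("A2");
        meter.RecordCall("A3");
        meter.RecordCall("A3");
        Console.WriteLine($"Amount due: {meter.ComputeBill():0.00}"); // 1.25
    }
}
```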
Regarding technical difficulties:
It would not be possible to do everything purely in C++; you will likely need several other tools alongside it.
Consider a scenario like this:
The program processes the data on the customer's computer, produces some opaque (encrypted) data at this stage, and calls home (to your server).
The server decodes the data, performs the final analysis, and sends a message to the client: "It will cost you X dollars to see the result. Do you want to proceed?"
If yes, the client makes the payment and gets the result.
Testing is important for any software project. I am new to testing, so how can I test a developed software project? What are the steps and levels of testing? I would also like to know how software projects are tested in companies.
Wikipedia has a good article on software testing, and it will be better than anything I write here. However, I'll try to describe the process at the highest level:
At the highest level you have perhaps three types of tests:
unit tests - tests for individual units (e.g. functions, methods). If you give function A the inputs x, y and z, does it return the right value? These are cheap, easy and fast, and help you to understand that individual units of code work precisely as they are designed to work.
system tests - do the individual units work together? This is where you test the business logic of your application, and to test the contract between units ("if you provide the arguments x,y,z I'll return A" and "if you give me the wrong arguments I'll raise error B"). These help you to understand whether the individual units work together to accomplish a task.
performance tests - performance in this context could mean raw speed, or it could mean capacity ("can the website handle 1 million hits per day?"), system load, latency, etc.
unit tests are most often written using a framework such as JUnit (for Java), NUnit (for .NET), or something similar. There's probably an *unit framework for whatever language you are using. Many software shops use their own custom tool to do this. Most often (but not exclusively), unit tests are written by the developer that wrote the unit.
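For example, a first unit test might look like this with NUnit (the Calculator class here is just a stand-in for one of your own units; a JUnit test in Java has the same shape):

```csharp
using NUnit.Framework;

// A trivial unit under test; in practice this would be one of your own functions.
public static class Calculator
{
    public static int Add(int x, int y) => x + y;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_returns_the_sum_of_its_inputs()
    {
        Assert.That(Calculator.Add(2, 3), Is.EqualTo(5));
    }
}
```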
system tests can take many forms, and there's not always a single solution that will work for a particular application. For example, if your application is a web site you can have service layer tests ("if I call the web API of my site, does it return the right value?") and presentation layer tests ("if I click on the button in the UI, does the form get posted to the proper URL?").
While unit tests are almost always automated, system level tests can be automated, manual, or a combination of automated and manual. User interface testing is often a manual process. While there are tools to drive the UI for a variety of types of applications, ultimately it's a very difficult problem to automatically answer questions like "does this look right?" and "Is this easy to use?". Those types of questions almost always have to be answered by a human trained to answer such questions.
Performance tests are almost exclusively automated, though an easy way to do performance testing is simply to time your automated system tests and watch the trends, and also to watch system metrics such as CPU and memory utilization while your system tests are running. This isn't an ideal performance testing strategy, but it's good enough if you're just starting out.
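A crude sketch of that "time your tests and watch the trend" idea, again assuming NUnit; the 500 ms budget and the scenario method are made up for illustration:

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class SearchPerformanceSmokeTest
{
    [Test]
    public void Search_stays_within_its_time_budget()
    {
        var stopwatch = Stopwatch.StartNew();

        RunExistingSystemTestScenario(); // stand-in for one of your automated system tests

        stopwatch.Stop();
        Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(500),
            "budget chosen arbitrarily; tune it from the trend you observe");
    }

    private static void RunExistingSystemTestScenario()
    {
        // Placeholder: call the same code path your system test exercises.
        System.Threading.Thread.Sleep(50);
    }
}
```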
So, to get started with testing, see if there is a unit testing framework already available for your language. You can then quickly come up with a body of tests for the individual units. You can then start looking for what are commonly called "testing frameworks" for the system tests. There are many, many frameworks to choose from. There is no "best", so don't get too caught up in finding the perfect tool. Pick any tool that works for your language and start using it.
Fundamentally there are these things going on:
Understand what the software is supposed to do.
Decide how to verify that it does.
Agree your test strategy with the stakeholders: there should be people who care about whether you are testing the right things, and they need to have confidence that you are doing so.
Perform the verification
Report the results, accurately, in enough detail to allow problems to be fixed
The details depend upon the nature of the software. For example, what would you do if the software didn't have a UI? And if it does have a UI, there are almost certain to be other things (e.g. modules which load data from external systems) that you need to test too; what proportion of your time will you spend on those?
There's a strong likelihood that some parts of the testing you decide is appropriate will need to be repeated as new releases of the software are made. You can make a distinction between "testing" and the subset which is "re-checking" and there may be value in automating the re-checking aspects.
One thing to bear in mind: I'm very suspicious of an attempt to reduce testing to a simple set of "steps". You might look at contextual testing for an explanation.
I'm a .NET web developer for a small organization. We have some skilled developers here, but what we don't have is anyone who's worked for larger, more organized, software shops. We do all right, but I find myself wanting to structure my code better with little place to turn for advice.
It comes to this. At some point someone in our organization decided we were going to use webservices whenever we had to do any data access at all no matter the case. Thus, our hardware architecture is organized so that is the only way we can access our databases. This sounds fine in theory, but the problem is most of our apps turn out like this:
Spaghetti Mess Of Code In The aspx.cs -> Web Service That Does Nothing But Call a Stored Procedure
Beyond that there's not much separation. Whenever I start trying to research better structural practices I wind up reading about things like dependency injection, dirty properties, and class factories; my head starts to swim, and I move on to something else in frustration.
Here's a basic example of my wonderings. So let's say I have to make a page to select employees from a list, edit them, and update the database. Is it better to have the web service return an Employee object on a get, and accept an Employee object on an update? Or is it better to have the Employee object call the webservice to self populate?
In other words: Employee emp = svc.GetEmployee(42); vs Employee emp = new Employee(42);
The second example seems like it would be better organization for updates (update the relevant properties and call emp.Update()), but the problem is what if I need to get a list of Employees? It would be inconsistent to do Employee emp = new Employee(id); for a singular employee, but do svc.GetAllEmployees() for a list.
I feel like I'm rambling a bit so I'm going to cease trying to explain and hope someone understands my confusion. I appreciate any advice that anyone can offer. Thanks!
As with anything, there are a number of different approaches you can take. (So hopefully there will be a number of good and different answers here, because this is definitely an important question.)
One question you should probably ask yourself about your design is "how much logic will need to be shared between applications?" Going with the small GetEmployee example you gave, it sounds like you want to know where to put the models in your domain. Are these models used by multiple applications? Is business logic shared across applications?
If so, then you may want your domain models behind the web service. Maybe build up a rich domain behind those services with its data access and external dependencies (remember that dependency injection thing; the best design decisions will need to be in the domain behind the service layer, since that's the core of the whole system).
Then, of course, how do you access this logic? Again, there are a lot of options. My personal favorite design is to have a kind of request/response system that abstracts the service layer. Something as cool as NServiceBus for a really disconnected asynchronous system, something as simple as Agatha for just abstracting out the actual service and putting the request/response logic in code, or maybe play around with ServiceStack (something I've been meaning to do) or another project, etc. Hell, you could just roll a plain old WCF or even SOAP service layer and nothing more. It's up to you.
At that point you're looking at a fairly procedural system at the service layer. This isn't a terrible thing. Think of the service layer like an MVC site. You send it a request, populated with some kind of incoming viewmodel, it does its domain stuff in all its object-oriented goodness, and returns a view in the form of some XML representation of an outgoing viewmodel. Now you have a repeating pattern. Your client-side applications are just great big views for your domain. The dumber they are, the more interchangeable and replaceable they are, the better.
This allows you to encapsulate various "business actions" in a unit of work at the service boundary. Given a request from a client application, either the whole thing succeeds or the whole thing fails. Wrap it up in good error handling and an application-level error/exception logger to give you all the details of the failed requests. (Imagine that every request can be serialized to a string and included in an error message. Now you have everything you need to recreate the error in a simple string, as opposed to asking users "what did you click on?" to try to recreate errors.)
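To make that request/response shape a bit more concrete, here is a hand-rolled sketch in C#; all type names are invented for illustration, and in practice Agatha, NServiceBus, or plain WCF would supply the actual plumbing.

```csharp
// Hypothetical request/response pair: the client application only ever sees these DTOs.
public class GetEmployeeRequest
{
    public int EmployeeId { get; set; }
}

public class GetEmployeeResponse
{
    public string FullName { get; set; }
    public string Department { get; set; }
}

// A handler at the service boundary: one "business action" per request.
// It does its object-oriented work in the domain and returns a view model.
public interface IRequestHandler<TRequest, TResponse>
{
    TResponse Handle(TRequest request);
}

public class GetEmployeeHandler : IRequestHandler<GetEmployeeRequest, GetEmployeeResponse>
{
    public GetEmployeeResponse Handle(GetEmployeeRequest request)
    {
        // Domain logic and data access live behind this boundary; hard-coded here for the sketch.
        return new GetEmployeeResponse { FullName = "Jane Doe", Department = "Engineering" };
    }
}
```

Because every request is a plain DTO, it is also trivial to serialize it into your error log when a handler fails, which gives you the "recreate the error from a string" benefit described above.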
If, instead, the back-end doesn't really share anything between the different applications and each application is entirely its own distinct entity, then you don't really need to share all that logic behind the service layer, and it's entirely possible that you shouldn't try to create any kind of overlap. Is data access the only thing that's behind the service? What about things like filesystem access or external web service access?
If the only thing behind the service is the data access, then you can keep your models and data access repositories in your client applications like you seem to be accustomed to and just swap out your repository implementations with implementations that internally reference and access the service layer. (This would be the second option in your GetEmployee example.) Properly abstracted, direct access vs. service access repositories can be swapped out trivially depending on where the application needs to live.
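A minimal sketch of that swap, with invented names: the point is only that the application codes against IEmployeeRepository and never cares which implementation it gets.

```csharp
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The application codes against this abstraction only.
public interface IEmployeeRepository
{
    Employee Get(int id);
}

// Implementation for applications that are allowed to talk to the database directly.
public class SqlEmployeeRepository : IEmployeeRepository
{
    public Employee Get(int id)
    {
        // ADO.NET / ORM call would go here.
        return new Employee { Id = id, Name = "Loaded directly from the database" };
    }
}

// Implementation for applications that must go through the web service layer.
public class ServiceEmployeeRepository : IEmployeeRepository
{
    public Employee Get(int id)
    {
        // Call to the employee web service would go here (e.g. svc.GetEmployee(id)).
        return new Employee { Id = id, Name = "Fetched via the web service" };
    }
}
```

Which implementation gets wired in can then be a composition-root or configuration decision made per application, depending on where it needs to live.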
Of course, this leans a little towards a true persistence-ignorance approach, which can be dangerous. Performance implications need to be considered. Some piece of logic or unit of work on the back-end may hit the database several times to do several things. If this is happening across a service then that adds service overhead to each database call. So you'll want to address this on a case-by-case basis.
I guess I may be rambling at this point, so to get back to something concrete, it really comes down to asking yourself some questions about your domain. How persistence-ignorant can you afford to be? Do your applications share business domain logic? Do you need to access other non-database external dependencies behind the service only? There's no universal design that's always the right answer. You'll probably end up experimenting with various designs and settling on a design that's right for your developers and your environment.