By 'functionalities structuring', I mean how we organize and coordinate different API endpoints to offer desired functionalities to clients. The context here is web APIs for consumption by mobile phones with GPS tracking, and I assume either cellular or WiFi connectivity is required for most functionalities.
I personally prefer a more 'modular' approach, where each endpoint does mostly one thing and a collection of them fulfills all the requirements. Of course, you may need to combine some subset or sequence of these endpoints to achieve certain functionalities. Overall, I try to minimize the overlap between endpoints, in terms of both computation and functionality.
On the other hand, I know some other people prefer client-side convenience (or simplicity) over modularity in the following ways:
If the client needs to achieve a functionality, then there should exist a single API endpoint which does exactly that, such that the client needs only a single request to fulfill the functionality with minimal caching/logic in between requests.
For GET endpoints, if there are multiple levels/kinds of data involved for some functionalities, they prefer as much data as possible (often all necessary data) to be returned by a single endpoint. Ironically, they may also want a dedicated endpoint for retrieving only the "lowest level" data using a corresponding "highest level" ID. For example, if A corresponds to a collection of Bs, and each B corresponds to a collection of Cs, then they would prefer a direct endpoint that retrieves all the relevant Cs given an A.
In some extreme cases, they will ask for a single endpoint with ambiguous naming (e.g. /api/data) that returns related data from different underlying DB tables (in other words, different resources) based on different combinations of query string parameters.
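To make the contrast concrete, here is a rough sketch (the resource names, routes, and handlers are all made up for illustration, loosely mapping A/B/C to areas/routes/stops):

    // Hypothetical Express app contrasting the two styles.
    import express from "express";

    const app = express();
    // Placeholder handler so the sketch runs; real handlers would query the DB.
    const stub = (_req: express.Request, res: express.Response) => { res.status(501).end(); };

    // Modular layout: one resource per endpoint; the client composes them as needed.
    app.get("/api/areas/:areaId", stub);          // one A
    app.get("/api/areas/:areaId/routes", stub);   // the Bs belonging to an A
    app.get("/api/routes/:routeId/stops", stub);  // the Cs belonging to a B

    // Convenience-oriented layout: skip the middle level, or expose a catch-all endpoint
    // whose behaviour depends entirely on the query string combination.
    app.get("/api/areas/:areaId/stops", stub);    // all Cs under an A in one request
    app.get("/api/data", stub);                   // e.g. /api/data?area=...&include=routes,stops

    app.listen(3000);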
I understand that people preferring such conveniences above aim to: 1. reduce the number of API requests necessary to fulfill functionalities; 2. minimize data caching and data logic on the client side to reduce client complexity, which arguably leads to a 'simple' client with simplified interaction with the server.
However, I also wonder if the cost of doing so is unjustifiable in other aspects in the long run, especially in terms of the performance and the maintenance of the server-side API. Hence my questions:
What are the tried-and-true guidelines for structuring API functionalities?
How do we determine an optimal number of requests necessary for fulfilling a functionality in a mobile app? Of course, all other things being equal, a single request is best, but achieving such a single-request implementation usually carries a penalty in other respects.
Given the contention between the number of client requests and the performance and maintainability of the server-side API, what approaches exist for striking a balance in order to deliver a sensible design?
What you are asking about breaks into at least three main areas of API design:
Ontology Design (organization)
Request/Response Design (complexity/performance)
Maintenance Considerations
Drawing on my experience (which is largely from working with very large organizations on both the API-producing and API-consuming sides, and from talking with hundreds of developers on the topic), let's look at each area, addressing the specific points you bring up...
Ontology Design
There are a couple of things to take into consideration in your design that are perhaps implied when you say:
Overall, I try to minimize the overlap between endpoints, in terms of both computation and functionality.
This approach makes the APIs easily discoverable. When you are publishing APIs for consumption by other developers whom you may or may not know (and may or may not have enough resources to truly support), this kind of modularity - making the endpoints easy to find and learn about - creates a different kind of "convenience", leading to easier adoption and reuse of your APIs.
I know some other people much prefer convenience over modularity: 1. if the client needs a functionality, then there should exist a single endpoint in the API which does exactly that...
The best public example that comes to mind for this approach is perhaps the Google Analytics Core Reporting API. They implement a series of query-string parameters to build a call that returns the data requested, e.g.:
https://www.googleapis.com/analytics/v3/data/ga
?ids=ga:12134
&dimensions=ga:browser
&metrics=ga:pageviews
&filters=ga:browser%3D~%5EFirefox
&start-date=2007-01-01
&end-date=2007-12-31
In that example we are querying Google Analytics account 12134 for pageviews by browser, where the browser is Firefox, for the given date range.
Given the number of metrics, dimensions, filters, and segments their API exposes, they have a tool called the Dimensions & Metrics Explorer to help developers understand how to use the APIs.
One approach makes the APIs discoverable and more understandable from the outset. The other requires more supporting work to explain the intricacies of consuming the API. One thing that isn't immediately obvious with the Google API above is that certain segments and metrics are incompatible, so if you are making calls passing one key/value pair, you may no longer be able to pass certain other pairs.
Request/Response Design
The context here is APIs for mobile applications.
That is still very broad, and better defining (if possible) how you intend for your "mobile applications" to be used can help you design your APIs.
Do you intend for them to be used totally offline? If so, heavy/complete data caching may be desirable.
Do you intend for them to be used in low bandwidth and/or high latency/error-rate connectivity scenarios? If so, heavy/complete data caching may be desirable, but so might small/discrete data requests.
for GET endpoints, they often prefer as much data as possible returned by a single endpoint, especially when there are multiple levels/layers of data involved
This is safe if you know you'll only ever be in good mobile connectivity scenarios, or you can cache the data heavily when you are (and thus access it offline or when things are spotty).
I understand that people preferring convenience aim to reduce the number of API calls necessary to achieve functionalities...
One way to find a happy middle ground is to implement paging in your data-intensive calls. For example, a query-string parameter specifying 'pagesize' can be passed in a GET. Thus 10,000 records could be returned 100 at a time over 100 successive calls, or 1,000 at a time over 10 calls.
With this approach, you can design and publish your API without necessarily knowing what your consuming developer will need. Even though the paging example above follows the Google API referenced earlier, the same idea can be used in a more semantically designed API. For example, say you have GET /customer/phonecalls; you could still design it to accept a pagesize value and make successive calls to get all the phone calls associated with a customer.
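As a sketch of what that might look like in code (the route, parameter names, and data-access helper here are assumptions, not taken from any particular API):

    import express from "express";

    // Assumed data-access helper; a real one would translate page/pagesize into
    // something like LIMIT/OFFSET in SQL.
    async function fetchPhonecalls(customerId: string, page: number, pagesize: number) {
      return [] as Array<{ id: string; startedAt: string }>;
    }

    const app = express();

    // GET /customer/42/phonecalls?page=3&pagesize=100
    app.get("/customer/:id/phonecalls", async (req, res) => {
      const rawSize = parseInt(String(req.query.pagesize ?? ""), 10);
      const rawPage = parseInt(String(req.query.page ?? ""), 10);
      // Clamp client-supplied values so a single request stays cheap for the server.
      const pagesize = Number.isFinite(rawSize) ? Math.min(Math.max(rawSize, 1), 1000) : 100;
      const page = Number.isFinite(rawPage) ? Math.max(rawPage, 1) : 1;

      const items = await fetchPhonecalls(req.params.id, page, pagesize);
      res.json({ page, pagesize, items });
    });

    app.listen(3000);

The client then decides how many successive calls to make, while the endpoint itself stays semantic and cheap to serve.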
Maintenance
I also wonder if the cost of doing so [reduce the number of API calls necessary to achieve functionalities and to minimize data caching] is not justifiable in the long run, especially for the performance and the maintenance of an API.
The key guiding principle here is separation of concerns if your collection of APIs is going to grow to any significant level of complexity and scale.
What happens when you have everything bundled together into one big service and a small part of it changes? You are now creating a maintenance headache not only on your side, but also for your API consumers.
Did that "breaking change" really affect the part of the API they were using? It will take time and energy for them to figure that out. Designing API functionality into discrete, semantic services will let you create a roadmap and version them in a more understandable way.
For further reading, I'd suggest checking out Martin Fowler's writings on Microservices Architecture:
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms.
Although there is a lot of debate about how to design and build for "microservices" in practice, reading up on that should help further shape your thinking on the API design decisions you're facing and prepare you to engage in "current" discussions around the topic.
Related
Context
Let's say we have an organization that has multiple businesses. In this example, Business A sells a gigabit internet service to college students. Business B sells a megabit internet service to seniors. The businesses sell related products with slight variations, each targeting a different demographic.
At first glance, this seems like we can just have one application handle all the requests. However, it is natural for the businesses to diverge from each other given that they each target a specific demographic - by nature, each business will have its own business requirements. For example, Business A might expose a mobile application for customers to manage their account. Business B might expose a phone number that has to be called for customers to manage their account. The list goes on.
What is the best way to utilize microservices given this context?
The problem is that there is both common and uncommon functionality across the different businesses.
We can remain somewhat DRY and have a set of base microservices (billing-api, order-api, etc.) that can be consumed by the different businesses. This works, but it causes the microservices to have more "general" abstractions - leading to more complexity. For a concrete example, let's say the billing-api service has a /charge endpoint that is shared by Businesses A and B. Business B's requirement is to always discount $5 off the order:
//billing-api
if (businessB) {
  orderCost -= 5;
}
In this DRY approach, we would have an API gateway for each business (BFF pattern) which would aggregate different microservices to fulfill their business needs. All "business-specific" logic would get moved from the base microservices into the respective businesses' API gateway. In this discount example, instead of having an if (businessB) check in the billing-api endpoint, we can invert this control to the consumer:
//billing-api
const { orderDiscountAmount } = req.body; // body parameters
if (orderDiscountAmount > 0) {
  orderCost -= orderDiscountAmount;
}
Then the endpoint in Business B's API gateway would pass in an orderDiscountAmount of 5 when calling the billing-api endpoint:
//Business B API Gateway
billingApi({ orderDiscountAmount: 5 });
This seems fine, but all we did was take Business B's logic in the billing-api endpoint and create a generic (but forced) abstraction. This is "justified" by saying that maybe Business A will use it one day - but that may never actually happen. Overall, this feels like an unnatural exercise for the developer and the consumer of the endpoint. Complexity and cognitive load on all sides are increased.
We can scrap DRY and avoid sharing microservices between businesses for maximum flexibility and simplicity. However, if more businesses are added (10-20) then there's probably going to be a good chunk of duplicated functionality.
How should teams be structured given this context?
If we are okay with the DRY approach from above, how should teams be structured? We can have vertically sliced feature teams, but does that mean that if we have 10 businesses, a team would need to own a feature (e.g. checkout) across all the businesses? The drawback with this approach is that the feature teams won't be experts in any business as a whole - each team would only be an expert in one feature of a given business. Not having the full context on a business could make it difficult to make the right decisions.
We can have a stream-aligned team for each business, dedicated to the UI and the API gateway. We would then have platform teams creating microservices for the stream-aligned teams to consume. The drawback here is that there is a handoff step between the stream-aligned team and the platform team, a.k.a. a dependency.
I'm not sure if I'm looking at all this from the wrong lens - any feedback would be appreciated!
Sorry to say, but this is not a good question for Stack Overflow, because any answer will be opinion-based; many approaches may work, and the best one depends on more details of your specific use case. So don't be disappointed if the question gets closed at some point.
That being said, I am not too shy to offer my opinion, or at least some thoughts, about the situation you describe.
I believe your questions about how to set up teams and how to set up the architecture are very tightly linked, because the architecture will, without doubt, effectively follow the organizational structure. So I will give some thoughts about the organizational setup first.
An estimation of the total manpower required for each of the businesses should give you an idea of how many teams you need. Trying to keep team size small (say 2-8 people) will help reduce the communication overhead. So if you think that is the size needed for a whole business, there is no need to split responsibility further.
Responsibility is the most important keyword. You have to avoid any situation where a common service/library is used but has multiple or no owners. There should always be exactly one organizational owner. Thus, when organizations recognize overlapping functionality in separate areas, it is common practice to establish a team that is responsible for it and provides this functionality to others. This could be in the form of shared libraries or actually deployed services. In both cases it is important that the collaboration is formalized by correctly versioning the work and leaving it to the consuming groups to decide which versions to use, when to upgrade, when to put in requests for new features, etc. This approach decouples the teams that use the common functionality.
In your problem description, the core of the problem is the business logic and its complexity/overlap. So I would argue that the most important role is product management. They have to be very good (and at least a bit technical) and sort this exact mess into reusable pieces and things that are specific to only a single business. If you have a whole team of product managers, they need to communicate very well and build this picture together. What is most important here is good communication about the vision for the future, and not just the immediate requirements (provide a great domain view). Only then can the architecture and teams be set up in the best possible way.
No matter how careful the initial setup, changes WILL happen. Whatever you think in the beginning to be the best solution will change at some point in the future. In order to prepare for this, I always recommend going with the simplest approach - even if it means some code duplication or other imperfections. As software architects we tend to love the beauty of perfection, but that is rarely the most effective approach in the real world.
It is common sense to make a simple shared service/library fit multiple use cases by adding some configurability. Up to a certain degree of complexity that is a useful approach, but you have to be sensitive to the consumers of that library/service, and it should be easy to reuse at any point. It is not black and white when a piece of functionality becomes too big/complex and has to be split into multiple pieces to remain maintainable, but looking at it through the eyes of the maintainer and the eyes of the consumer will make that determination easier. In the case of configurable services/libraries, you could also have separate deployments with different configurations - using commonly developed components, but deploying different endpoints for each use case. If you use technologies that produce only a small deployment overhead (for example golang containers that are only a few MB), then a large number of deployed services is not a drawback but a strength, because they can be upgraded/versioned independently and it is even easy to run multiple versions in parallel.
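To illustrate that last point with a hedged sketch (the service name, config shape, and environment variables are all invented): the same commonly developed billing component could be built once and deployed separately per business, each deployment reading its own configuration, rather than branching on the business inside the code.

    // billing-service: built once, deployed separately for each business with its own config.
    import express from "express";

    interface BillingConfig {
      businessName: string;
      orderDiscountAmount: number; // e.g. 0 for Business A, 5 for Business B
    }

    // In a real deployment this would come from a config file or the container environment.
    const config: BillingConfig = {
      businessName: process.env.BUSINESS_NAME ?? "business-b",
      orderDiscountAmount: Number(process.env.ORDER_DISCOUNT ?? "0"),
    };

    const app = express();
    app.use(express.json());

    app.post("/charge", (req, res) => {
      // No if (businessB) branches: the deployment's configuration carries the difference.
      const charged = Number(req.body.orderCost) - config.orderDiscountAmount;
      res.json({ business: config.businessName, charged });
    });

    app.listen(3000);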
Infrastructure and service deployment may or may not follow the architecture of the services. As a general rule I would recommend looking for the simplest approach, which often means providing common infrastructure that is shared among services, with deployment configuration being where the distinction between services starts. For example, all services might share a common cluster, streaming platform, gateways, databases, etc. Exceptions could be made for services with very special needs, like a hardware encryption key store or GPU servers for machine learning. This would be the approach for any reasonably sized system. (Of course, if you are going to scale to very large sizes, it is also quite feasible to have complete stacks/clusters for specific services.)
Persistence design is the most crucial part. While it is relatively easy to evolve business logic, reorganize it, etc., it is rather difficult to evolve your historical data. Often you have a choice to design in one of two ways:
Smart algorithms, dumb data.
Smart data, dumb algorithms.
(Smart referring to more elaborate / reflecting more of the business requirements)
The second approach is usually harder initially, but in my experience will have better results when a certain complexity threshold is reached.
So these are just a few things that came to mind when reading your question. I apologize that they cannot answer your detailed question about how to slice the billing API, but maybe they give you a few additional considerations to work with.
In my workplace (and a lot of other places), there is a lot of emphasis on building architecture around services. (I am working in an e-commerce startup.) However, I think services are implicitly considered to be distributed. I am a believer in the first law of distribution - "don't distribute" - so I believe that we should not unnecessarily complicate the architecture. It should be an architecture that can evolve. One way to approach the problem would be to create well-defined namespaces and build code around them, but keep the communication via a Java API (this keeps the monitoring requirements low, and reliability/availability problems low). This can easily be evolved into a distributed architecture by wrapping modules into web services as and when the scale requirements kick in.
So, the question is: what are the cons of writing code as a single application and evolving into distributed services, rather than jumping straight into implementing a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
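A rough sketch of that difference (the interface and type names are invented): the first shape is fine in-process but chatty across a network, while the second prices the whole order in a single round trip.

    interface LineItem { sku: string; quantity: number; }

    // Fine-grained: one call per item. Perfectly fine inside a single process,
    // a torrent of remote calls in a distributed system.
    interface ItemPriceCalculator {
      priceItem(item: LineItem): Promise<number>;
    }

    // Coarse-grained: one call per order. The calculator now has to know about
    // whole orders, but the system only pays for one round trip.
    interface OrderPriceCalculator {
      priceOrder(items: LineItem[]): Promise<{ total: number; perItem: number[] }>;
    }

    // The cart module makes one remote call instead of N.
    async function checkout(cart: LineItem[], calculator: OrderPriceCalculator): Promise<number> {
      const { total } = await calculator.priceOrder(cart);
      return total;
    }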
Related to granularity, there's also the problem of modules interacting via interfaces or implementations. With a single process, you can define a set of interfaces through which modules will interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (e.g. a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse-grained interaction means that any objects passed must be passed by value (because passing by reference would entail fine-grained calls back to the passing module - unless you were using mobile code, which you aren't, because nobody is), which means that both modules must know about and agree on the implementations of the objects.
So, while building a single-process system in a modular way makes it easier to implement SOA later, it doesn't make it as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful software engineering practices.
I'm working on the initial architecture for a solution for which an SOA approach has been recommended by a previous consultant. From reading the Erl book(s) and applying to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers, and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, here's an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1, and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can simply point that reference at the #2/#3 logic above in a wrapper object (so no calling objects need updating), and implement the same number of objects without a penalty to the amount of development you have to do - no extra objects or code have to be created compared with doing it all up front.
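A minimal sketch of that "loose reference" idea, with all names hypothetical: callers depend only on an interface, the first implementation is the plain in-process business object, and the service-backed wrapper can be swapped in later without touching the callers.

    interface Person { id: string; name: string; }

    // The abstraction every caller is given (via dependency injection, a factory, etc.).
    interface PersonRepository {
      getById(id: string): Promise<Person>;
    }

    // Item #1 only: the in-process business object used today.
    class LocalPersonRepository implements PersonRepository {
      async getById(id: string): Promise<Person> {
        return { id, name: "placeholder" }; // real lookup goes here
      }
    }

    // Added later, if a service becomes necessary: the wrapper holding the
    // translation to/from the service data structure. Callers never change.
    class RemotePersonRepository implements PersonRepository {
      constructor(private baseUrl: string) {}
      async getById(id: string): Promise<Person> {
        const res = await fetch(`${this.baseUrl}/person/${id}`);
        return (await res.json()) as Person; // translate service DTO -> business object
      }
    }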
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking you'd be better to wait.
You could design and implement a web service which is simply a technical facade that exposes the underlying functionality - the question is, would you just do a straight one-for-one "reflection" of that underlying functionality? If yes, did you design the underlying thing in such a way that it is fit for external callers? Does the API make sense? Does it expose members that should be private?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run in building a service is that (as you're basically only guessing) you might need to rewrite it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service - get people thinking - which is an approach more in line with agile principles.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there could be security, maintainability, or serviceability aspects that make using web services a must - e.g. some clients need access to the data from outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to redeploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario, an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise applications usually depend on a lot of external components (logic or data), and you don't want to integrate these components by sharing assemblies.
Imagine that you share a component that implements some specific calculation: would you deploy this component to all the dependent applications? What happens if you want to change some calculation logic? Would you ask all teams to upgrade their references and recompile and redeploy their apps?
I recently posted on my blog a story where the former architect had also chosen not to use web services and thought that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate this app and never reuse the business logic/data access, a two-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions it's costly to change. Of course you can still factor in a web service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task, even when using a good design based on interfaces. Examples of things that could go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on internal behaviors…
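The circular-reference pitfall in particular is easy to demonstrate with a contrived example: an object graph that is perfectly usable in-process cannot be serialized naively once a service boundary appears.

    interface Order { id: string; customer: Customer; }
    interface Customer { id: string; orders: Order[]; }

    const customer: Customer = { id: "c1", orders: [] };
    const order: Order = { id: "o1", customer };
    customer.orders.push(order); // customer -> order -> customer: a cycle

    // Fine inside one process:
    console.log(order.customer.id); // "c1"

    // But naive serialization at a new service boundary fails:
    // JSON.stringify(order) throws "TypeError: Converting circular structure to JSON".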
For a government contract we will be proposing to build a traffic monitoring architecture. We will have the following components:
Video cameras set up around the area of interest. The cameras will be aware of their location, orientation, and viewing parameters.
A GIS map server which can be queried for streets, buildings, etc.
An algorithm that takes in raw video and street location information and outputs car locations.
Another algorithm takes in car locations and very low level street information and provides information about which cars are driving anomalously.
Another database takes in information about car locations and anomaly reports over time and can be queried for this later.
A proxy (or perhaps more accurately, a facade) is set up over the archive database and the real-time algorithms in order to provide a unified interface to the information.
A client attaches to the proxy and to the street server and paints various representations of the traffic situation on the screen.
I'm just now learning what an SOA is. Is this an ideal candidate for a Service Oriented Architecture (SOA)? I had heard that SOA services should be stateless (or is that only RESTful services?). I had also heard that it was inadvisable to pipe one service into the next, and the next, because it increases hidden complexity, and that there was something you should do to make this situation better (an "orchestration"?). The services above do appear to be modular and reusable. For instance, there will be plenty of cameras, various types of vehicle detection and anomaly algorithms, distributed databases, and plenty of clients. I will also need the capability to handle events: for instance, I may want to register with a service and be notified whenever a big truck moves past a given point.
If this isn't ideally implemented by an SOA, then where else should I be looking? If this is ideal for an SOA, then where should I start when designing it? (I'm starting basically from having read Wikipedia's SOA page.) Are there any good case studies to look at here?
Yes, SOA is ideal in this case (a complex, distributed system with a wide mix of technologies), but from the sound of it you need to do a whole lot more research to get your head around the concept. It is not a tough concept by any stretch - it's actually simple - but there is no one prescribed way to do it. I suggest going over SOA case studies for similarly sized projects, both successes and failures.
You mention a facade for one of your subsystems. Extend that same concept to the rest of your components. E.g. each service is a facade to a complex subsystem.
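As a sketch of what that might look like here, with invented names: a traffic-information facade hides whether an answer comes from the archive database or the real-time algorithms, and it is also a natural place to hang the event registration you mention (being notified when a large truck passes a point).

    interface VehicleObservation { vehicleId: string; lat: number; lon: number; timestamp: string; }

    // One service interface in front of the archive DB and the real-time pipeline;
    // clients talk only to this facade, never to the subsystems behind it.
    interface TrafficInformationService {
      // Answered from the real-time detection algorithms.
      currentVehicles(areaId: string): Promise<VehicleObservation[]>;
      // Answered from the archive database.
      anomaliesBetween(areaId: string, from: string, to: string): Promise<VehicleObservation[]>;
      // Event registration: call me back whenever a matching vehicle passes this point.
      subscribe(
        point: { lat: number; lon: number },
        filter: { vehicleClass: string },
        callbackUrl: string
      ): Promise<{ subscriptionId: string }>;
    }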
Also, I recommend implementing a couple of different web services in your choice of technologies, abstracting a few arbitrary subsystems (a database should be one of the components). Then write a client that makes use of them. Doing so will give you a lot of practical experience and insight into the concept.
Last thought: The one area where an SOA architecture might stumble is if you have to move video data between several different services. The stateless, transactional nature of SOAs might introduce performance issues when moving very large amounts of data or when performing bulk transactions on very large data sets. You either need to keep video localized or implement a back-end subsystem (cheat) to avoid potentially nasty bottlenecks.
I'm not satisfied with the answers given to the SOAP vs. REST questions, notably here:
Performance of SOAP vs. XML-RPC or REST
because they are just general philosophical answers and not pragmatic answers backed by case studies.
Can nobody give precise cases where SOAP would be more suitable than REST, especially from a performance point of view?
Update: I think REST is winning the war.
Performance is not the deciding factor.
First I should say that asking a SOAP-vs-REST question is a little cockeyed, because SOAP is an XML envelope format and REST is an architecture. So I will make a small assumption and suppose that you are really considering SOAP vs. POX, or SOAP vs. JSON, or SOAP vs. some other data-formatting approach.
The deciding factor should be this:
Do you now need, or will you need in the future, the SOAP envelope?
The SOAP envelope allows things like framework-provided encryption, digital signatures, routing, and authorization checks, among other things. You can, of course, do those things with REST (or more accurately, with plain old XML, or JSON, etc.), but you have to do more work yourself to make that happen.
If performance - however you construe it - really is your #1 criterion, then you should probably abandon both SOAP and POX and move to protobufs or something else optimized for performance. These can be faster to serialize and faster to transmit.
If you think this answer is "too philosophical" and you really want hard figures, well, then I suppose you'll need to conduct some tests. The actual performance will vary greatly with the toolkits you choose, the shape of the messages, and the extra data services (like encryption and so on) that you use. But in the end, performance won't be, or shouldn't be, decisive either way.
If your SOAP toolkit is 20% easier to use, debug, and maintain than your POX toolkit, then you should use SOAP, regardless of the performance. People (coders, architects, testers) are much more expensive than CPUs and networks these days. You can always buy another two CPUs, or a bigger network, if necessary and if your design is correct. But you can't buy back 20% of your development time, at any cost, if your framework is hard to use or if it drives away your people. Unless you are running a geo-scale network, you will do better to optimize for the people instead of for the network.
You can find an article comparing REST and SOAP here:
http://www.jopera.org/files/www2008-restws-pautasso-zimmermann-leymann.pdf
The authors' conclusions seemed to be:
Use RESTful services for tactical, ad hoc integration over the Web
Prefer WS-* Web services in professional enterprise application integration scenarios with a longer lifespan and advanced QoS requirements
Personally, I do not like terminology like "professional enterprise" because it is loose and informal. However, in my opinion the authors make some good points in the article. To conclude and add some thoughts of my own:
If you want to make an API public, do it in a RESTful way. Why? It is simple for a client application to use, so it will make your service more popular. For example, Amazon exposes both REST and SOAP APIs, but 85% of their users have chosen the REST version (Amazon API - SOAP vs. REST).
Use SOAP and the WS-* stack if you will create (or have some control over the process of creating) both the consumers and the producers of your services, and you really do need the advanced features of WS-*. This will probably require more resources, too, because SOAP applications tend to be "heavier" (more features, but more sophistication as well).
Also, considering performance, REST could be faster (messages are definitely shorter and you do not need to parse XML).
Hope this helps.
In your example of a Flash client, it is really hard to tell without knowing the details; however, if you do not need all the security and transactional features of WS-*, I think building a REST application would be simpler and faster.
Answering the comment:
"I should use SOAP because I'm in a so-called 'professional enterprise'"
And assuming of course that your choice isn't really dictated by big software vendors.
SOAP is suited to bigger enterprises because it encourages a more formal approach. It offers specifications, which are huge, so your developers may need time to learn them and maybe even some professional training - in other words, spending the company's resources. It also offers tools - and not all of them are open source, so this can also mean additional resources. But if your team learns this way of integrating services, it will probably be efficient and the resulting code will be of high quality.
REST, on the contrary, is more of a philosophy of developing applications. So: no huge specifications, no specialized tools, no resource spending. This may work nicely if you have a small team of good programmers - they will not need so many guidelines if they know the basic principles. Unfortunately, it is also easier to do things wrong.
Another thing to consider is the application's size - the richer the API and the more services you want to integrate, the harder it will be to keep it RESTful. Also, building a small SOAP application probably wouldn't be a good idea - the overhead and entry cost are just too high.
You need to evaluate the pros and cons for your project. It is impossible to give a recommendation without knowing all the details, I think.
And finally - this has nothing to do with reasonable arguments and more with politics. I think management-level people seem to prefer the WS-* stack and SOAP (it has the support of "big enterprises", so it is easier for them to justify their choice). On the other hand, people from an academic background[1] prefer REST - because there is still a lot of research that can be conducted in the area.
[1] I'm somewhere in between, so I can observe both behaviors ;-)