Work around Composition failure when one subgraph fails using IntrospectAndCompose - apollo

We have a set of subgraphs owned by different teams in separate code bases. We are updating @apollo/gateway to 0.50.0 and running into a composition failure when one of the subgraphs fails. We are using IntrospectAndCompose to create the Apollo gateway. Is there any workaround for this?
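One workaround (a sketch of an approach, not an official Apollo feature) is to probe each subgraph before handing the list to IntrospectAndCompose, so that composition only runs over the subgraphs that are actually reachable at startup. The subgraph names and URLs below are placeholders:

```typescript
import { ApolloServer } from 'apollo-server';
import { ApolloGateway, IntrospectAndCompose } from '@apollo/gateway';

type Subgraph = { name: string; url: string };

// Hypothetical subgraph list; substitute your teams' names and URLs.
const subgraphs: Subgraph[] = [
  { name: 'accounts', url: 'http://localhost:4001/graphql' },
  { name: 'products', url: 'http://localhost:4002/graphql' },
];

// Probe each subgraph with a trivial query (uses Node 18+ built-in fetch)
// and keep only the ones that respond, so one unreachable subgraph does
// not abort composition at startup.
async function reachableSubgraphs(): Promise<Subgraph[]> {
  const probed = await Promise.all(
    subgraphs.map(async (s) => {
      try {
        const res = await fetch(s.url, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify({ query: '{ __typename }' }),
        });
        return res.ok ? s : null;
      } catch {
        return null; // subgraph is down; leave it out of composition
      }
    }),
  );
  return probed.filter((s): s is Subgraph => s !== null);
}

async function start() {
  const gateway = new ApolloGateway({
    supergraphSdl: new IntrospectAndCompose({
      subgraphs: await reachableSubgraphs(),
    }),
  });
  const server = new ApolloServer({ gateway });
  const { url } = await server.listen();
  console.log(`Gateway ready at ${url}`);
}

start().catch(console.error);
```

The trade-off is that an excluded subgraph's fields are simply absent from the supergraph until the gateway restarts or re-polls, and composition can still fail if the remaining subgraphs reference types owned by the excluded one.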

Related

How do I create Duplicate/Matching Rules through Apex code for an Apex test class?

I recently wrote an API to implement Account/Contact Duplicate/Matching rules for a connected app. It all works fine.
It does the same thing that Salesforce does for these rules. It pretty much follows this example code: https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_class_Datacloud_DuplicateResult.htm
I'm now trying to write a test class for this but I'm hitting a roadblock.
I need a way to do one of two things:
1. Create Duplicate and Matching rules through Apex code within the test. I would create them, run the test, then delete them.
2. Turn Duplicate and Matching rules on and off through Apex code within the test. Same general idea but I could create them beforehand in our testing org.
As far as I can tell there is no way to do this. Am I missing something here?
If this isn't possible, then how do I get test coverage on my class? It will only get full code coverage if the call to Database.insert actually fails with DuplicateErrors.
Edit: I should add that having rules always on and bypassing them with DMLHeader for testing is not an option.
I would say you don't have to create the Duplicate or Matching rules in Apex (I'm not even sure it is possible). In fact, you should rely on the Duplicate rules you've configured in Salesforce. So you should just try to generate a duplicate record in the test, as the Duplicate rules should also trigger when running tests.

Best practice for DRF versioning - copy whole folder? Or subclass previous version?

Based on this question, and this answer in particular, what's the most sustainable way of creating v2, v3, etc.? Most of the time, each version introduces incremental changes over the previous one. Most endpoints stay the same, and most fields stay the same.
Option 1: Copy the v1 folder, redo the internal references to ensure the code is updated, and then make your changes on top. This keeps every version self-contained. If a bug shows up, you fix it in all versions. Versions are clean and dependencies are easier to manage. However, you end up with lots of duplicated code after v30, for example.
Option 2: Create a v2 folder and make the v2 classes subclass the v1 classes, providing the base functionality, then add your changes. This promotes code reuse, but can get unwieldy very fast, e.g. tracing a change or fixing a bug when you have over 30 versions.
Any prevailing best practices, pros/cons?
Your Option 2 will turn into Option 1 in a few versions.
In my opinion there are two cases:
Case 1: you have a traditional, mostly-CRUD API. Then I would suggest looking at this post, which shows a way to create transitions between versions through serializers.
Case 2: your API is more about algorithms, logic and data processing. Then you can go with Option 1: create another app in DRF (copy the folder), move all common libraries out of the app, and keep in the app only the classes that could change and need backwards-compatibility support.
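For the first case, the transition idea can be sketched generically (in TypeScript here rather than DRF serializers; all names are made up): each older version is expressed as a small downgrade function from the latest representation, so only the deltas live in version-specific code.

```typescript
// Latest representation and an older one. Only the downgrade function
// is version-specific; there is no copied folder and no subclass chain.
interface UserV2 { id: number; firstName: string; lastName: string; }
interface UserV1 { id: number; name: string; }

// v1 is derived from v2 by a transition, not by duplicating the endpoint.
const downgradeToV1 = (u: UserV2): UserV1 => ({
  id: u.id,
  name: `${u.firstName} ${u.lastName}`,
});

function serialize(user: UserV2, version: 'v1' | 'v2'): UserV1 | UserV2 {
  return version === 'v1' ? downgradeToV1(user) : user;
}
```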

How to monitor communication in a SOA environment with an intermediary?

I'm looking for a possibility to monitor all messages in a SOA environment with an intermediary, which will be designed to enforce different rule sets over message structure and sequencing (e.g., it will check and ensure that Service A has to be consumed before Service B).
Obviously the first idea that came to mind is that WS-Addressing might help here, but I'm not sure it does, as I don't really see any mechanism there to ensure that a message will be delivered via a given intermediary (as there is in WS-Routing, which is an outdated proprietary protocol by Microsoft).
Or maybe there's a different approach in which the monitor wouldn't be part of the route but would instead be notified of requests and responses, although that might make it harder to actively enforce rules.
I'm looking forward to any suggestions.
You can implement a "service firewall" by intercepting all the calls in each service as part of your basic service host. Alternatively, you can use third-party solutions and route all your service calls through them (they will do the intercepting and then forward the calls to your services).
You can use an ESB to do the routing (and intercepting), or you can use dedicated solutions like IBM's DataPower, the XML firewall from Layer 7, etc.
For all my (technical) services I use messaging and the command processor pattern, which I describe here, without actually calling it by its pattern name. I send a message and the framework finds the corresponding class that implements the interface matching my message. I can create multiple classes that handle the same message, or a single class that handles a multitude of messages. In the article these are classes implementing the IHandleMessages interface.
Either way, as long as I can create multiple classes implementing this interface, and they are all called, I can easily add auditing without adding this logic to my business logic or anything. Just add an additional implementation for every single message, or enhance the framework so it also accepts IHandleMessages implementations. That class can then audit every single message and store all of them centrally.
After doing that, you can find out more information about the messages and the flow. For example, if you put into the header of your WCF/MSMQ message where it came from, and perhaps a unique identifier for that single message, you can track the flow across various components.
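The pattern itself is language-agnostic; here is a minimal sketch in TypeScript (the bus, the interface, and all names are illustrative, not the NServiceBus API):

```typescript
interface Message {
  type: string;
  correlationId: string; // unique id used to track the flow across components
}

interface HandlesMessages {
  handle(message: Message): void;
}

class Bus {
  private handlers = new Map<string, HandlesMessages[]>();

  subscribe(type: string, handler: HandlesMessages): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  send(message: Message): void {
    // Every registered handler sees the message, so auditing is added
    // without touching any business-logic handler.
    for (const handler of this.handlers.get(message.type) ?? []) {
      handler.handle(message);
    }
  }
}

// Cross-cutting audit handler: registered alongside the business handler
// for each message type, it can log or store every message centrally.
class AuditHandler implements HandlesMessages {
  handle(message: Message): void {
    console.log(`audit: ${message.type} (${message.correlationId})`);
  }
}

const bus = new Bus();
bus.subscribe('OrderPlaced', new AuditHandler());
bus.subscribe('OrderPlaced', { handle: (m) => console.log('handling', m.type) });
bus.send({ type: 'OrderPlaced', correlationId: 'abc-123' });
```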
NServiceBus also has this functionality for auditing and the team is working on additional tooling for this, called ServiceInsight.
Hope this helps.

JTA: how to test JMS and JDBC failures?

We're currently working on testing JTA failure behaviour on a system that receives messages using JMS, persists them, and sends results using another class.
The whole thing is tied together using Spring. Current unit tests use HSQLDB, Apache ActiveMQ and Bitronix for transaction management. Success with this has been limited, mostly because HSQLDB does not implement XA transactions.
So here is the question: how to best simulate database failures in a transaction unit test? Is there any way to make a standard JDBC driver (for Oracle, say) fail in the middle of a test?
n.b. pressing the power button is not a repeatable test :)
You need to decide what exactly you want to test. For example, if you want to test how Oracle behaves in an XA transaction with Bitronix, then mocking DAOs, as suggested by duffymo, is not going to help you. In that case you need to find a way to break connectivity in the middle of a transaction and then see how Bitronix/Oracle handle recovery, e.g. heuristic outcomes and so on.
Note that in quite a few cases there are ways to get the same functionality without actually using XA transactions. That can be simpler, faster, and more testable. For example, in the very common case where messages are consumed from a MOM and DML is executed against a database, there is a well-known pattern for getting away without XA even though two resource managers are being updated.
Write a mock object for the test whose implementation throws an exception in the middle of the transaction.
Since you're using Spring, it's an easy matter to write a new, test-only implementation of the DAO interface that behaves in a repeatable, predictable manner. Inject the 'wonky DAO' only for the test.
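The stack here is Spring and JTA, but the "wonky DAO" idea is stack-neutral; a minimal sketch in TypeScript (the interface and names are made up):

```typescript
interface MessageDao {
  persist(message: string): void;
}

// Test-only wrapper: delegates a fixed number of calls, then throws, so
// the failure lands deterministically in the middle of the transaction.
class WonkyDao implements MessageDao {
  private calls = 0;

  constructor(
    private readonly inner: MessageDao,
    private readonly failOnCall: number, // 1-based index of the failing call
  ) {}

  persist(message: string): void {
    this.calls += 1;
    if (this.calls === this.failOnCall) {
      throw new Error('simulated JDBC failure'); // database "goes away" here
    }
    this.inner.persist(message);
  }
}
```

In the test you inject the WonkyDao in place of the real DAO and assert that the surrounding transaction rolls back both the JMS receive and any earlier writes.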
Of course, you're using the XA driver to connect to the database; two-phase commit won't work otherwise.

Handling the Same Class Definition From Multiple Web Services

The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service APIs, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference to a new project, copy the Reference.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. 2 of them contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user by user ID, and the other pulls back blocks of users. Both return an object, or list of objects, that is exactly the same semantically. And I need to process the data the same way, whether it came from one service or the other.
My solution was to strip out the duplicated classes in the generated code and replace them with classes that inherit from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so the change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you can always use the same objects, as they are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
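That abstraction-layer idea is independent of .NET; a minimal sketch in TypeScript (all type and field names are made up):

```typescript
// Two generated proxy types that are semantically identical but live in
// different namespaces/services.
interface ServiceAUser { UserId: string; FullName: string; }
interface ServiceBUser { UserId: string; FullName: string; }

// The single domain type used everywhere above the abstraction layer.
interface User { id: string; name: string; }

const fromServiceA = (u: ServiceAUser): User => ({ id: u.UserId, name: u.FullName });
const fromServiceB = (u: ServiceBUser): User => ({ id: u.UserId, name: u.FullName });

// The layer exposes one shape regardless of which service produced the data,
// so callers (and unit tests) never touch the generated proxies directly.
function usersFromA(users: ServiceAUser[]): User[] { return users.map(fromServiceA); }
function usersFromB(users: ServiceBUser[]): User[] { return users.map(fromServiceB); }
```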
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0, look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which gets used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures are identical. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.