Resolve reference not called when using custom directives - apollo

I am currently using custom directives to handle authorization across mutations and queries for my subgraphs. However, I realized that when I wrap my schema with my custom directive transformer (built with mapSchema from graphql-tools), the __resolveReference function for a particular entity is not called. When I pass the schema returned from buildSubgraphSchema directly to the Apollo Server constructor, everything works fine. Any idea what I could be doing wrong? Thank you!
Schema returned from buildSubgraphSchema:
let schema = buildSubgraphSchema({ typeDefs, resolvers })
const server = new ApolloServer({ schema }) // __resolveReference is called
Schema returned from the directive transformer:
let schema = buildSubgraphSchema({ typeDefs, resolvers })
schema = authDirectiveTransformer(schema)
const server = new ApolloServer({ schema }) // __resolveReference is not called on entities

Related

@mswjs/data question: why does the RTK Query sandbox example need separately hand-coded POST and PUT mocks?

This is a question about the default behaviour of the @mswjs/data toHandlers function, using this example with @mswjs/data to create mocks for RTK Query calls.
https://codesandbox.io/s/github/reduxjs/redux-toolkit/tree/master/examples/query/react/mutations?from-embed
The file src/mocks/db.ts creates a mock database using @mswjs/data and defines default HTTP mock responses using ...db.post.toHandlers('rest'), but it fails to work if I remove the additional PUT and POST mocks.
My understanding is that the @mswjs/data toHandlers() function provides PUT and POST mock API calls for a defined database (in this case Posts) by default, according to the GitHub documentation, so I am seeking advice to understand better why toHandlers does not work for PUT and POST in this example; i.e. if I remove the PUT and POST mock API calls, they fail.
What do the manual PUT and POST API mocks do that the default toHandlers don't?
You are correct to state that .toHandlers() generates both POST /posts and PUT /posts/:id request handlers. The RTK-Query example adds those handlers explicitly for the following reasons:
To emulate flaky error behavior by returning an error response based on the Math.random() value in the handler.
To set the id primary key to nanoid().
Adding a post fails if you remove the explicit POST /posts handler because the model definition for post does not define the initial value for the id primary key. You cannot create an entity without providing a primary key to it, which the example does not:
// PostManager.tsx
// The "post" state only contains the name of the new post.
const [post, setPost] = useState<Pick<Post, "name">>(initialValue);
// Only the "post" state is passed to the code that dispatches the
// "POST /posts" request handled by MSW.
await addPost(post).unwrap();
If we omit the random error behavior, I think the example should've used nanoid as the initial value of the id property in the model description:
import { nanoid } from "@reduxjs/toolkit";
const db = factory({
  post: {
-   id: primaryKey(String),
+   id: primaryKey(nanoid),
    name: String
  }
});
This way you would be able to create new posts by supplying the name only. The value of the id primary key would be generated using the value getter—the nanoid function.
The post edit operation functions correctly even if you remove the explicit PUT /posts/:id request handler because, unlike the POST handler, the PUT one is only there to implement a flaky error behavior (the edited post id is provided in the path parameters: req.params.id).
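The value-getter idea described above can be sketched independently of the library. This is a simplified illustration of the concept, not @mswjs/data's actual implementation: each model property is defined by a getter function, and the factory only invokes the getter when the caller did not supply a value, which is how an id can be generated automatically.

```typescript
// Simplified sketch of a getter-based primary key (NOT @mswjs/data internals).
type ModelDefinition = Record<string, () => unknown>;

function defineModel(definition: ModelDefinition) {
  return {
    create(initial: Record<string, unknown> = {}) {
      const entity: Record<string, unknown> = {};
      for (const key of Object.keys(definition)) {
        // Use the caller's value if present; otherwise invoke the getter.
        entity[key] = key in initial ? initial[key] : definition[key]();
      }
      return entity;
    },
  };
}

// A nanoid-like generator stands in for `nanoid` here.
let counter = 0;
const fakeNanoid = () => `id-${++counter}`;

const post = defineModel({
  id: fakeNanoid, // getter-based primary key, called only when id is omitted
  name: () => "", // String-like default
});

// Creating a post by supplying only the name; the id is generated.
const created = post.create({ name: "Hello" });
```

With `id: primaryKey(nanoid)` in the real model definition, the same mechanism lets the POST handler accept a body containing only `name`.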

Why is it not possible to access objects that seem to be in scope when pausing execution?

I'm learning Dart and Flutter, and I'm finding it a mostly pleasurable experience. I'm making an app that I want to communicate with Amazon AWS resources. However, I'm stuck with an issue that I haven't been able to resolve for a while now.
I'm making a function for getting keys and tokens necessary for making authenticated requests to the API at AWS. I'm using a library function from AWS Amplify to get an AuthSession object. If I set a breakpoint to just after the AuthSession object has been retrieved, it seems as if this object contains some object called AuthSession.credentials. This in turn contains awsAccessKey, awsSecretKey and sessionToken, which are the tokens and keys that I need. I can access them in the debug console when execution is paused.
Future<AWSCredentials> getAWSCredentials() async {
final AuthSession authSession = await Amplify.Auth.fetchAuthSession(
options: CognitoSessionOptions(getAWSCredentials: true));
final a = 1; // I set a breakpoint on this line
//final AWSCredentials awsCredentials = authSession.credentials;
//return awsCredentials;
}
VS Code screenshot of objects in scope at breakpoint
However, if I try to make the function return this AuthSession (by uncommenting the last two lines of the above function), or one of the tokens directly, it doesn't compile anymore and I get the error message
The getter 'credentials' isn't defined for the type 'AuthSession'.
Try importing the library that defines 'credentials', correcting the name to the name of an existing getter, or defining a getter or field named 'credentials'. dart(undefined_getter)
I tried to dig into the code defining the class AuthSession, and it doesn't seem to contain any reference to the credentials object. However, it's obviously there at runtime. Why can't I access it?
The class AuthSession does not contain any member called credentials, so when casting the result of fetchAuthSession() to an AuthSession, it is not possible to access that member. However, the subclass CognitoAuthSession does contain the credentials member, so casting the result to that type allows access to authSession.credentials.
So the call to fetchAuthSession() should be
final CognitoAuthSession authSession = await Amplify.Auth.fetchAuthSession(
options: CognitoSessionOptions(getAWSCredentials: true));
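The same rule applies in any statically typed language: the declared type of a variable, not the runtime type of the object, determines which members the compiler lets you access. A small TypeScript analogue of the situation, using hypothetical stand-in classes:

```typescript
// Hypothetical stand-ins for the Amplify types, to illustrate the typing rule.
class AuthSession {
  constructor(public isSignedIn: boolean) {}
}

class CognitoAuthSession extends AuthSession {
  constructor(isSignedIn: boolean, public credentials: string) {
    super(isSignedIn);
  }
}

// Declared to return the base class, like Amplify's fetchAuthSession().
function fetchAuthSession(): AuthSession {
  return new CognitoAuthSession(true, "aws-credentials"); // subclass at runtime
}

const plain = fetchAuthSession();
// plain.credentials;  // compile error: 'credentials' is not on AuthSession

// Typing the result as the subclass makes the member visible to the compiler:
const cognito = fetchAuthSession() as CognitoAuthSession;
const creds = cognito.credentials;
```

The debugger shows `credentials` because it inspects the runtime object (the subclass instance); the compiler only sees the declared base type, which is why the cast is needed.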

Dynamically Create RecordType in Unit Test

We have some logic that depends on the record type of a custom object in our managed package. One of our clients has created some custom record types for this SObject - which is throwing an exception.
We've put in a fix, but want to update our unit tests to catch this case as well - so we need to be able to create a new RecordType for this SObject and assign it. However, I cannot figure out how to do this dynamically in Apex.
Tried:
insert new RecordType(...);
This throws "DML not allowed on RecordType".
According to the SF API, RecordType has a "create" method, but:
RecordType rt = new RecordType();
rt.DeveloperName = 'Test';
rt.Name = 'Test';
rt.SObjectType = 'Listing__c';
rt.create();
Yields "Method does not exist or incorrect signature". Same result when trying as a static method:
RecordType.create(rt);
Ideas?
After discussing with some other SF devs and re-reading the API documentation, it looks like this cannot be done through the Apex API (though it is possible through SOAP API calls).
http://www.salesforce.com/us/developer/docs/apexcode/index_Left.htm#StartTopic=Content/apex_dml_non_dml_objects.htm?SearchType=Stem

Microsoft Dynamics CRM - Pass Parameters from Web Service to IPlugins

We are building some plugins in Microsoft Dynamics CRM by inheriting from IPlugin. We have these configured so they fire whenever an Account is updated.
The problem is the plugins are calling our services, which causes our service to respond with an update. We are doing some pretty hacky things right now to prevent these cyclical updates from happening.
We were wondering if there was a way to pass a value to the IOrganizationService service (the web service) that a plugin can look at. Our other system could send a flag ("hey, don't bother sending an update!") and the plugin could skip calling back.
Can we pass parameters from web service to the plugins?
A good approach could be to use a custom flag field. For example, add a bit field and call it CallFromExternalSystem. When you make an update from your external system through IOrganizationService, set this flag to true; in the plugin, you can then check whether this field is present in the updated fields list, so there is no need to call the external system endpoint again.
We decided the correct solution was to use the value found in IPluginExecutionContext.InputParameters["Target"]. In the case of an Update, this returns an Entity containing attributes for all the attributes that were updated.
We basically have a list of attribute names we care about. We loop through the names and see if any of them appear in the entity's attribute list. If so, we send an update to our other system. The good news is that Dynamics CRM ignores updates where the values don't actually change, so trying to update a value to itself is a no-op.
public void Execute(IServiceProvider serviceProvider)
{
IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
Entity entity = (Entity)context.InputParameters["Target"];
string[] fields = new string[] { "name", "statecode", "address1_line1" };
bool hasUpdates = fields.Any(f => entity.Attributes.Contains(f));
if (!hasUpdates)
{
return;
}
}

How to retrieve data from CRM 2011 by using a web service and SSIS

Goal:
Retrieve data from Dynamics CRM 2011 into my SQL Server R2 database by using a web service through Integration Services (SSIS). The web service call needs to live inside the SSIS package. The data will be used for a data warehouse.
Problem:
How do I do it?
We only write to Dynamics so I can't address the specific method name but the general idea below should get you started.
Assumptions
Two variables have been defined in your package and they are passed to the script component as ReadOnlyVariables: CrmOrganizationName, CrmWebServiceUrl.
A script component has been added to the dataflow as a Source component. On the Inputs and Outputs tab, an appropriate number of columns have been added to Output 0 (or whatever you define your output collection as) with appropriate data types.
Inside the script, add a web reference to your CRM instance. This code assumes it's called CrmSdk.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
// web reference
using CrmSdk;
[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
public override void CreateNewOutputRows()
{
// Get a reference to the CRM SDK
CrmSdk.CrmService CrmService = new CrmSdk.CrmService();
// An Authentication Token is required because CRM requires an OrganizationName
// to identify the Organization to be used
CrmSdk.CrmAuthenticationToken token = new CrmSdk.CrmAuthenticationToken();
token.AuthenticationType = 0;
token.OrganizationName = this.Variables.CrmOrganizationName;
CrmService.CrmAuthenticationTokenValue = token;
// Use default credentials
CrmService.Credentials = System.Net.CredentialCache.DefaultCredentials;
// Get the web service url from the config file
CrmService.Url = this.Variables.CrmWebServiceUrl;
//////////////////////////////////////////////////
// This code is approximate
// Use the appropriate service call to retrieve
// data and then enumerate through it. For each
// row encountered, call the AddRow() method for
// your buffer and then populate fields. Be wary
// of NULLs
//////////////////////////////////////////////////
foreach (CrmSdk.entity person in CrmService.Get())
{
Output0Buffer.AddRow();
Output0Buffer.FirstName = person.FirstName;
Output0Buffer.LastName = person.LastName;
}
}
}
Caveats
There is no error handling, checks for nulls, or anything elegant. The service should probably have been wrapped in a using statement, and so on. It should nonetheless provide an appropriate starting point for understanding how to consume a web service and load data into the pipeline.
The easiest solution for your requirement is to use a third-party library for SSIS. The commercial COZYROC SSIS+ library includes Dynamics CRM adapters, which support all deployment models: Premise, Live, Hosted, Federation, Office 365.