How do you decouple a web service that requires an authheader on every call? - web-services

I have a service reference to a .NET 2.0 web service. I have a reference to this service in my repository and I want to move to Ninject. I've been using DI for some time now, but haven't tried it with a web service like this.
So, in my code, the repository constructor creates two objects: the client proxy for the service, and an AuthHeader object that is the first parameter of every method in the proxy.
The AuthHeader is where I'm having friction. Because the concrete type is required as the first parameter on every call in the proxy, I believe I need to take a dependency on AuthHeader in my repository. Is this true?
I extracted an interface for AuthHeader from my reference.cs. I wanted to move to the following for my repository constructor:
[Inject]
public PackageRepository(IWebService service, IAuthHeader authHeader)
{
    _service = service;
    _authHeader = authHeader;
}
...but then I can't make calls to my service proxy like
_service.MakeSomeCall(_authHeader, "some value")
...because MakeSomeCall is expecting an AuthHeader, not an IAuthHeader.
Am I square-pegging a round hole here? Is this just an area where there isn't a natural fit (because of web service "awesomeness")? Am I missing an approach?

It's difficult to understand exactly what the question is here, but some general advice might be relevant to this situation:
Dependency injection does not mean that everything has to be an interface. I'm not sure why you would try to extract an interface from a web service proxy generated from WSDL; the types in the WSDL are contracts which you must follow. This is especially silly if the IAuthHeader doesn't have any behaviour (it doesn't seem to) and you'll never have alternate implementations.
The reason this looks all wrong is because it is wrong; this web service is poorly designed. Information that's common to all messages (like an authentication token) should never go in the body, where it translates to a method parameter; it should go in the message header, where the ironically named AuthHeader clearly isn't. Headers can be intercepted by the proxy and inspected before any operation executes, on either the client or the service side. In WCF that's part of the behavior (generally ClientCredentials for authentication), and in legacy WSE it's done as an extension. It's theoretically possible to do this with information in the message body, but it's far more difficult to pull off reliably.
In any event, what's really important here isn't so much what your repository depends on but where that dependency comes from. If your AuthHeader is injected by the kernel as a dependency then you're still getting all the benefits of DI - in particular the ability to have this all registered in one place or substitute a different implementation (i.e. a derived class).
So design issues aside, I don't think you have a real problem in your DI implementation. If the class needs to take an AuthHeader then inject an AuthHeader. Don't worry about the exact syntax and type, as long as it takes that dependency as a constructor argument or property.
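For what it's worth, here's a minimal Ninject sketch of that idea. The module name, the WebServiceClient proxy type, the AuthHeader properties and the config keys are all assumptions for illustration, not details from the question:

using Ninject;
using Ninject.Modules;
using System.Configuration;

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // WebServiceClient is the generated proxy class (assumed name).
        Bind<IWebService>().To<WebServiceClient>();

        // Bind the concrete AuthHeader so the kernel can supply it wherever it's needed.
        Bind<AuthHeader>().ToMethod(ctx => new AuthHeader
        {
            Username = ConfigurationManager.AppSettings["ServiceUser"],
            Password = ConfigurationManager.AppSettings["ServicePassword"]
        });
    }
}

public class PackageRepository
{
    private readonly IWebService _service;
    private readonly AuthHeader _authHeader;

    [Inject]
    public PackageRepository(IWebService service, AuthHeader authHeader)
    {
        _service = service;
        _authHeader = authHeader;
    }

    public void DoSomething()
    {
        // The concrete header flows straight through to the proxy call.
        _service.MakeSomeCall(_authHeader, "some value");
    }
}

The repository still receives everything through its constructor; it just takes the concrete AuthHeader rather than an extracted interface, and the construction details stay registered in one place.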

Related

Is there a useful pattern for unit testing with gmock for grpc Server classes?

It's tremendously helpful that there's a tool to generate mock versions of the client stubs. Testing the server side is causing me tons of headache at the moment. Enough headache that I feel like I must be doing something fundamentally wrong.
I may be misreading the following, but the end2end tests, including 'mock_test' seem to be using an actual client-server connection to drive testing. They may mock out the client, or mock out the client readers/writers to see the response from the server, but it's not clear to me how to test the server in isolation.
What I want to be able to do: I have a Service implementation that inherits from the gRPC-generated class "Service." Suppose that service exposes a method ::grpc::Status Foo(::grpc::ServerContext* context, const CommandMessage* request, ::grpc::ServerWriter<CommandResponse>* writer); My gut for writing unit tests says to pass in a mock ServerWriter class and expect 'Write' to be called when appropriate. But ServerWriter is marked final and can't be derived from.
This isn't the first place I've run into trouble applying my standard mocking approaches to gRPC's server classes. I've wrapped the Server class, the ServerBuilder class, etc., so that I could put mock versions of them into tests (to validate that the correct parameters are being passed to my Server when it's constructed, for example).
So I think I'm missing something with grpc then. I just don't know what. Am I supposed to stand up a real server in my unit tests and probe it with a mock client? How do I validate the proper server configurations are being passed, if I have to stand up a test version with test configurations? The code has interface classes and virtual methods, but then the parts that seem exposed for public use don't seem to be easily mockable as I'd expect.
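For concreteness, here's a rough sketch of the "stand up a real server and probe it" option I'm describing, using an in-process channel and the synchronous API. The CommandService/CommandMessage/CommandResponse names and the generated header are the hypothetical ones from above, and this assumes a gRPC version recent enough to have Server::InProcessChannel:

#include <gtest/gtest.h>
#include <grpcpp/grpcpp.h>
#include "command_service.grpc.pb.h"  // hypothetical generated header

// Stand-in for the real implementation under test.
class CommandServiceImpl final : public CommandService::Service {
  ::grpc::Status Foo(::grpc::ServerContext* context,
                     const CommandMessage* request,
                     ::grpc::ServerWriter<CommandResponse>* writer) override {
    CommandResponse response;
    writer->Write(response);
    return ::grpc::Status::OK;
  }
};

TEST(CommandServiceTest, FooWritesAtLeastOneResponse) {
  CommandServiceImpl service;
  grpc::ServerBuilder builder;
  builder.RegisterService(&service);
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

  // No ports or sockets: the in-process channel still drives the real
  // server-side plumbing, including the final ServerWriter.
  std::shared_ptr<grpc::Channel> channel =
      server->InProcessChannel(grpc::ChannelArguments());
  std::unique_ptr<CommandService::Stub> stub = CommandService::NewStub(channel);

  grpc::ClientContext ctx;
  CommandMessage request;
  std::unique_ptr<grpc::ClientReader<CommandResponse>> reader = stub->Foo(&ctx, request);

  CommandResponse response;
  int count = 0;
  while (reader->Read(&response)) ++count;

  EXPECT_TRUE(reader->Finish().ok());
  EXPECT_GE(count, 1);
  server->Shutdown();
}

But that still feels more like an integration test than a unit test, which is exactly the part I'm unsure about.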

Public functions become remotely accessible when implementing onCFCRequest()

SOME BACKGROUND:
I'm using onCFCRequest() to handle remote CFC calls separately from regular CFM page requests. This allows me to catch errors and set MIME types cleanly for all remote requests.
THE PROBLEM:
I accidentally set some of my remote CFC functions to public access instead of remote and realized that they were still working when called remotely.
As you can see below, my implementation of onCFCRequest() has created a gaping security hole into my entire application, where an HTTP request could be used to invoke any public method on any HTTP-accessible CFC.
REPRO CODE:
In Application.cfc:
public any function onCFCRequest(string cfc, string method, struct args){
    cfc = createObject('component', cfc);
    return evaluate('cfc.#method#(argumentCollection=args)');
}
In a CFC called remotely:
public any function publicFunction(){
    return 'Public function called remotely!';
}
QUESTION:
I know I could check the meta data for the component before invoking the method to verify it allows remote access, but are there other ways I could approach this problem?
onCfcRequest() doesn't really create the security hole; you create the security hole by blindly running the method without first checking whether it's appropriate to do so, I'm afraid ;-)
(NB: I've fallen foul of exactly the same thing, so I'm not having a go at you ;-)
So - yeah - you do need to check the metadata before running the method. That check is one of the things that CF passes back to you to manage in its stead when you use this handler, and has been explicitly implemented as such (see 3039293).
I've written up a description of the issue and the solution on my blog. As observed in a comment below I use some code in there - invoke() - that will only work on CF10+, but the general technique remains the same.
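As a rough sketch of that technique (invoke() makes this CF10+, as noted above; the exception type and the defensive structKeyExists checks are just assumptions to adapt to your app):

public any function onCFCRequest(string cfc, string method, struct args){
    var instance = createObject("component", arguments.cfc);
    var meta     = getMetaData(instance);
    var allowed  = false;

    // Only run the method if the component declares it with access="remote"
    if (structKeyExists(meta, "functions")){
        for (var i = 1; i <= arrayLen(meta.functions); i++){
            var fn = meta.functions[i];
            if (compareNoCase(fn.name, arguments.method) == 0
                && structKeyExists(fn, "access")
                && fn.access == "remote"){
                allowed = true;
                break;
            }
        }
    }

    if (!allowed){
        throw(type="Application.MethodNotRemote",
              message="#arguments.method# is not remotely accessible on #arguments.cfc#");
    }

    return invoke(instance, arguments.method, arguments.args);
}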

How do Delphi web services work? (Adding a method at runtime??)

I've created a web service client in Delphi XE using the WSDL importer.
Delphi generated the module ITransmitter1.pas for me, with an ITransmitter interface and a GetITransmitter function.
To use the web service I do:
var
  Transmitter: ITransmitter;
begin
  Transmitter := GetITransmitter(True, '', nil);
  Transmitter.Transmit(Memo1.Text, OutXML);
end;
But I can't see the body of the Transmit method anywhere...
In ITransmitter.pas I see:
InvRegistry.RegisterInterface(TypeInfo(ITransmitter), 'urn:TransmitterIntf-ITransmitter', 'utf-8');
InvRegistry.RegisterDefaultSOAPAction(TypeInfo(ITransmitter), 'urn:TransmitterIntf-ITransmitter#Transmit');
If I comment out these lines I get an "interface not supported" error.
As far as I can see, Delphi is adding the method at runtime!
How does this work? Can I add a method at runtime to my own class?
If you created a web service client with the WSDL importer, the generated client code will invoke a method on the server. So in fact, the method 'body' (code) is on the web service server.
Delphi generates a SOAP request based on the WSDL, and behind the scenes RTTI (introspection) is used to encode the parameters etc. of the web service call as XML. This XML is sent to the server, which executes the method implementation and sends back a SOAP response.
Things are the opposite if you create the web service server; in that case the Delphi application of course needs to implement all the method bodies.
You're in fact calling a method defined in an interface, which in turn inherits from IInvokable, declared in System.pas.
If you check your source code you'll note that no local object in your project implements the IInvokable interface you're calling; that's because the method is executed remotely on the server.
Before that happens, there's some Pascal code used to build a proper SOAP request, send it to the server, and then wait for and interpret the server's response; consider these implementation details. If you're interested in knowing a bit more about how this works, enable the "use debug .dcus" compiler option so you can debug inside the VCL/RTL.
Then, as usual, use the StepInto (F7) command to ask the debugger to execute the Transmit method step by step... after some assembler in the TRIO.GenericStub method you'll get to the TRIO.Generic method where the packet is prepared and sent.
For the btSOAP binding I'm using to write this response, the relevant part starts at line 943 in the Rio.pas unit:
try
  FWebNode.Execute(Req, Resp);
finally
  { Clear Outbound headers }
  FHeadersOutBound.Clear;
end;
THTTPReqResp.Execute then uses wininet.dll functions to connect to the server and to send and receive the information.
There are several levels deeper you can go with this... how far you want to go depends on your interests, and the full details are far beyond the scope of my answer here... feel free to post more questions about specific things you're interested in.
I'm not sure, but details can change between Delphi versions... I'm using Delphi XE right now.
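To illustrate why no class in your project implements the interface, here is roughly what the WSDL importer generates for GetITransmitter. This is a simplified sketch, not your exact generated source; unit names, defaults (WSDL location, service and port names) and details vary by Delphi version:

uses SOAPHTTPClient; // THTTPRIO (Soap.SOAPHTTPClient in namespaced versions)

function GetITransmitter(UseWSDL: Boolean; Addr: string; HTTPRIO: THTTPRIO): ITransmitter;
var
  RIO: THTTPRIO;
begin
  Result := nil;
  if HTTPRIO = nil then
    RIO := THTTPRIO.Create(nil)
  else
    RIO := HTTPRIO;
  try
    { This cast only works because TRIO builds the interface dynamically from
      the type information registered with InvRegistry - which is why commenting
      out the RegisterInterface call gives "interface not supported". }
    Result := (RIO as ITransmitter);
    if UseWSDL then
      RIO.WSDLLocation := Addr
    else
      RIO.URL := Addr;
  finally
    if (Result = nil) and (HTTPRIO = nil) then
      RIO.Free;
  end;
end;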

How to test if a fluent service method is called

I have a security rule that a newly registered user has full permissions over their own user entity. I'm using Rhino.Security and the code works fine, but I want to create a unit test to make sure the appropriate call is made to set up the permission. Here is a simplified version of the code:
public User Register(UserRegisterTask userRegistrationTask) {
    User user = User.Create(userRegistrationTask);
    this.userRepository.Save(user);

    // Give this user permission to do operations on itself
    this.permissionsBuilderService.Allow("Domain/User")
        .For(user)
        .On(user)
        .DefaultLevel()
        .Save();

    return user;
}
I've mocked the userRepository and the permissionBuilderService but the fluent interface of the permissionBuilderService requires different objects to be returned from each method call in the chain (i.e. .Allow(...).For(...).On(...) etc). But I can't find a way to mock each of the objects in the chain.
Is there a way to test if the permissionBuilderService's Allow method is being called but ignoring the rest of the chain?
Thanks
Dan
I also ran into this and ended up wrapping the Rhino Security functionality in a service layer for two reasons:
It was making unit testing a real PITA and after spending a couple of hours hitting my head against a brick wall, this approach allowed me to mock this layer far more easily.
I started to feel that Rhino Security was being coupled very tightly to my controller (my application uses MVC). Wrapping the calls in another layer allowed me looser coupling to a specific security implementation and will allow me to easily swap it out with another - if I so choose - in the future.
Obviously, this is only one approach. But it made my life much easier...
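As a rough illustration of that wrapper, something like the following. The interface and class names are made up, and IPermissionsBuilderService stands for the Rhino.Security fluent service from the question:

// Hypothetical thin wrapper that hides the fluent chain behind one call.
public interface IUserPermissionService
{
    void AllowOperationOnSelf(User user, string operation);
}

public class RhinoUserPermissionService : IUserPermissionService
{
    private readonly IPermissionsBuilderService permissionsBuilderService;

    public RhinoUserPermissionService(IPermissionsBuilderService permissionsBuilderService)
    {
        this.permissionsBuilderService = permissionsBuilderService;
    }

    public void AllowOperationOnSelf(User user, string operation)
    {
        // The fluent chain now lives in exactly one place.
        permissionsBuilderService.Allow(operation)
            .For(user)
            .On(user)
            .DefaultLevel()
            .Save();
    }
}

Register() then calls userPermissionService.AllowOperationOnSelf(user, "Domain/User"), and the unit test only has to verify that single method call on a plain mock, with no chain to stub.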

Should my unit tests be touching an API directly when testing a wrapper for that API?

I have written a number of unit tests that test a wrapper around an FTP server API.
Both the unit tests and the FTP server are on the same machine.
The wrapper API gets deployed to our platform and is used in both remoting and web service scenarios. The wrapper API essentially takes XML messages to perform tasks such as adding/deleting/updating users, changing passwords, modifying permissions...that kinda thing.
In a unit test, say to add a user to a virtual domain, I create the XML message to send to the API. The API does its work and returns a response with status information about whether the operation was successful or failed (error codes, validation failures etc).
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API and query its store directly to see if, for example when creating a user account, the user account really did get created.
Does this smell bad?
Update 1: @Jeremy/Nick: The wrapper is the focus of the testing; the FTP server and its COM API are 3rd party products, presumably well tested and stable. The wrapper API has to parse the XML message and then invoke the FTP server's API. How would I verify (and this may be a silly case) that a particular property of the user account is set correctly by the wrapper? For example, the wrapper could set the wrong property or attribute of an FTP account due to a typo. A good example being the upload and download speed limits, which may get transposed in the wrapper code.
Update 2: thanks all for the answers. To the folks who suggested using mocks, it had crossed my mind, but the light hasn't switched on there yet and I'm still struggling to get my head round how I would get my wrapper to work with a mock of the FTP server. Where would the mocks reside and do I pass an instance of said mocks to the wrapper API to use instead of calling the COM API? I'm aware of mocking but struggling to get my head round it, mostly because I find most of the examples and tutorials are so abstract and (I'm ashamed to say) verging on the incomprehensible.
You seem to be mixing unit & component testing concerns.
If you're unit-testing your wrapper, you should use a mock FTP server and don't involve the actual server. The plus side is, you can usually achieve 100% automation like this.
If you're component-testing the whole thing (the wrapper + FTP server working together), try to verify your results at the same level as your tests i.e. by means of your wrapper API. For example, if you issue a command to upload a file, next, issue a command to delete/download that file to make sure that the file was uploaded correctly. For more complex operations where it's not trivial to test the outcome, then consider resorting to the COM API "backdoor" you mentioned or perhaps involve some manual verification (do all of your tests need to be automated?).
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API
Stop right there. You should be mocking the FTP server and the wrapper should operate against the mock.
If your test runs both the wrapper and the FTP server, you are not Unit Testing.
To test your wrapper with a mock object, you can do the following:
Write a COM object that has the same interface as the FTP server's COM API. This will be your mock object. You should be able to interchange the real FTP server and your mock object by passing the interface pointer of either to your wrapper by means of dependency injection.
Your mock object should implement hard-coded behaviour based on the methods called on its interface (which mimics FTP server API) and also based on the argument values used:
For example, if you have an UploadFile method you can blindly return a success result and perhaps store the file name that was passed in in an array of strings.
You could simulate an upload error when you encounter a file name with "error" in it.
You could simulate latency/timeout when you encounter a file name with "slow" in it.
Later on, the DownloadFile method could check the internal string array to see if a file with that name was already "uploaded".
The pseudo-code for some test cases would be:
//RealServer theRealServer;
//FtpServerIntf ftpServerIntf = theRealServer.getInterface();
// Let's test with our mock instead
MockServer myMockServer;
FtpServerIntf ftpServerIntf = myMockServer.getInterface();
FtpWrapper myWrapper(ftpServerIntf);
FtpResponse resp = myWrapper.uploadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing456");
assertEquals(FtpResponse::NOT_FOUND, resp);
resp = myWrapper.downloadFile("SimulateError");
assertEquals(FtpResponse::ERROR, resp);
I hope this helps...
I agree with Nick and Jeremy about not touching the API. I would look at mocking the API.
http://en.wikipedia.org/wiki/Mock_object
If it's .NET you can use:
Moq: http://code.google.com/p/moq/
And a bunch of other mocking libraries.
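To make that concrete, a wrapper test with Moq might look roughly like this. IFtpServer is a hypothetical interface you'd extract over the COM API, and FtpWrapper, Process and the XML shape are made-up names for the sake of the example:

using Moq;
using Xunit;

public class FtpWrapperTests
{
    [Fact]
    public void AddUser_CallsCreateUserOnTheServer()
    {
        // The mock lives entirely in the test project.
        var ftpServer = new Mock<IFtpServer>();
        ftpServer.Setup(s => s.CreateUser("bob", "domain1")).Returns(true);

        // The wrapper receives the mock through its constructor instead of the real COM object.
        var wrapper = new FtpWrapper(ftpServer.Object);
        var response = wrapper.Process("<addUser domain=\"domain1\" name=\"bob\" />");

        Assert.True(response.Success);
        ftpServer.Verify(s => s.CreateUser("bob", "domain1"), Times.Once());
    }
}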
What are you testing, the wrapper or the API? The API should work as is, so you don't need to test it, I would think. Focus your testing efforts on the wrapper and pretend the API doesn't exist; when I write a class that does file access I don't unit test the built-in StreamReader... I focus on my code.
I would say your API should be treated just like a database or a network connection when testing. Don't test it, it isn't under your control.
It doesn't sound like you're asking "Should I test the API?" — you're asking "Should I use the API to verify whether my wrapper is doing the right thing?"
I say yes. Your unit tests should assert that your wrapper passes along the information reported by the API. In the example you give, for instance, I don't know how you would avoid touching the API. So I don't think it smells bad.
The only time I can think of when it might make sense to dip into the lower-level API to verify results is if the higher-level API is write-only. For example, if you can create a user using the high-level API, then there should be a high-level API to get the user accounts, too. Use that.
Other folks have suggested mocking the lower-level API. That's good, if you can do it. If the lower-level component is mocked, checking the mocks to make sure the right state is set should be okay.