How do Delphi web services work? (Adding a method at runtime??) - web-services

I've created a web service client in Delphi XE using the WSDL importer.
Delphi generated the module ITransmitter1.pas for me, with
an ITransmitter interface and a GetITransmitter function.
To use the web service I write:
var
  Transmitter: ITransmitter;
begin
  Transmitter := GetITransmitter(True, '', nil);
  Transmitter.Transmit(Memo1.Text, OutXML);
end;
But I can't see the body of the Transmit method anywhere...
In ITransmitter.pas I see:
InvRegistry.RegisterInterface(TypeInfo(ITransmitter), 'urn:TransmitterIntf-ITransmitter', 'utf-8');
InvRegistry.RegisterDefaultSOAPAction(TypeInfo(ITransmitter), 'urn:TransmitterIntf-ITransmitter#Transmit');
If I comment out these lines I get an "interface not supported" error.
As far as I can see, Delphi is adding the method at run time!
How does it work? Can I add a method at run time to my own class?

If you created a web service client with the WSDL importer, the generated client code will invoke a method on the server. So in fact, the method 'body' (code) lives on the web service server.
Delphi generates a SOAP request based on the WSDL, and behind the scenes RTTI (introspection) is used to turn the parameters etc. of the web service call into XML. This XML is sent to the server, which executes the method implementation and sends back a SOAP response.
Things are the opposite if you create the web service server; in that case the Delphi application of course needs to implement all the method bodies.
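To make that concrete, here is a rough sketch of what the importer-generated GetITransmitter typically looks like; this is not your exact code, and the def* constants are placeholders the importer would have derived from your WSDL. The key point is that it just creates a THTTPRIO and casts it to the interface, and that cast is where the runtime magic happens:
const
  defWSDL = 'http://example.com/Transmitter.wsdl';     // placeholder
  defURL  = 'http://example.com/soap/ITransmitter';    // placeholder
  defSvc  = 'ITransmitterservice';                     // placeholder
  defPrt  = 'ITransmitterPort';                        // placeholder

// (the generated unit's uses clause pulls in InvokeRegistry, Rio and SOAPHTTPClient)
function GetITransmitter(UseWSDL: Boolean; Addr: string; HTTPRIO: THTTPRIO): ITransmitter;
var
  RIO: THTTPRIO;
begin
  Result := nil;
  if Addr = '' then
  begin
    if UseWSDL then
      Addr := defWSDL
    else
      Addr := defURL;
  end;
  if HTTPRIO = nil then
    RIO := THTTPRIO.Create(nil)
  else
    RIO := HTTPRIO;
  try
    // This cast goes through TRIO.QueryInterface, which uses the interface's
    // RTTI (IInvokable is compiled with {$M+}) plus the InvRegistry entries
    // to build a stub for each interface method at run time.
    Result := RIO as ITransmitter;
    if UseWSDL then
    begin
      RIO.WSDLLocation := Addr;
      RIO.Service := defSvc;
      RIO.Port := defPrt;
    end
    else
      RIO.URL := Addr;
  finally
    if (Result = nil) and (HTTPRIO = nil) then
      RIO.Free;
  end;
end;
So nothing is added to a class at run time: the TRIO object fabricates the interface's method stubs from RTTI, and every call funnels into the generic dispatcher described in the next answer.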

You're in fact calling a method defined in an interface, which in turn inherits from IInvokable, declared in System.pas.
If you check your source code you'll note that no local object in your project implements the IInvokable interface you're calling; that's because the method is executed remotely on the server.
Before that happens, there's some Pascal code that builds a proper SOAP request for the server, sends it, and then waits for and interprets the server's response; consider these implementation details. If you're interested in knowing a bit more about how this works, enable the "use debug .dcus" compiler option so you can debug inside the VCL/RTL.
Then, as usual, use the Step Into (F7) command to ask the debugger to execute the Transmit method step by step... after some assembler in the TRIO.GenericStub method you'll get to the TRIO.Generic method, where the packet is prepared and sent.
For the btSOAP binding I'm using while writing this answer, the relevant part starts at line 943 of the Rio.pas unit:
try
  FWebNode.Execute(Req, Resp);
finally
  { Clear Outbound headers }
  FHeadersOutBound.Clear;
end;
THTTPReqResp.Execute then uses wininet.dll functions to establish the connection and to send information to and receive it from the server.
There are several levels you can dig into here... how far you want to go depends on your interests, and the sheer amount of detail is far beyond the scope of my answer; feel free to post more questions about specific things you're interested in.
I'm not sure, but details can change between Delphi versions... I'm using Delphi XE right now.

Related

Only async methods listed when calling WCF

I am trying to call a WCF service from Windows Phone 8. It connects and returns data fine when I use the WCF Test Client, but when I reference the service using Add Service Reference and then try to access it in code, only the async methods show up in IntelliSense. I have not declared my methods as async, so how can I ensure I can access my other methods? Since I am calling a web service, does it need to be async?
// Constructor
public MainPage()
{
    InitializeComponent();

    // Sample code to localize the ApplicationBar
    //BuildLocalizedApplicationBar();

    IcuroServiceClient _db = new IcuroServiceClient();
    var json = _db.GetPersonByIdAsync(1);
}
And if so, how would I convert a method as simple as the one below to async? I am used to ASMX services and new to WCF.
public string GetListByUserId(string userId)
{
    List<curoList> myList = _db.GetAllListsByUserId(userId);
    var json = JsonConvert.SerializeObject(myList, Newtonsoft.Json.Formatting.None);
    return json;
}
It's grayed out here for me, mate. In my normal signature I'm returning a string, but the async versions don't look like they return anything, just void.
Windows Phone is based, literally and figuratively, on the Silverlight model of keeping all service calls async only. It is a two-step process where one has to think backwards. Here are the steps:
Provide a callback method whose job is to handle the resulting data or error in an appropriate fashion.
Then make the call as one normally would (but calling the async version) to start the process off on a separate thread.
Be cognizant of not loading data directly into the GUI in the callback method, for the result is on a different thread; note that loading into VM properties is fine for the most part, whereas the update of any GUI-subscribed bindings will be done on the UI thread.
How to Resolve
In the example given, before the call to _db.GetPersonByIdAsync(1), provide (find using IntelliSense) the async subscription event that takes the callback.
Depending on how the proxy was generated, it may instead be prefixed with Load or Begin for the PersonById call.
In the editor, providing such a method is usually done through IntelliSense, where one can Tab a stub into the code. That way one doesn't have to root out the parameters for the callback method by hand.
A possible example showing how to do it:
client.GetPersonByIdCompleted += MyMethodToHandleResultforGetPerson; // var personId = e.Result;
client.GetPersonByIdAsync(1);
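A fuller sketch of the same pattern, assuming the IcuroServiceClient proxy from the question; the exact event and args names (GetPersonByIdCompleted, GetPersonByIdCompletedEventArgs) depend on how the service reference was generated, and ResultTextBlock is just an invented control:
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();

        var client = new IcuroServiceClient();

        // 1. Subscribe the callback first...
        client.GetPersonByIdCompleted += OnGetPersonByIdCompleted;

        // 2. ...then start the call; it returns immediately and the result
        //    arrives later through the callback.
        client.GetPersonByIdAsync(1);
    }

    private void OnGetPersonByIdCompleted(object sender, GetPersonByIdCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            // Handle the fault (log it, show a message, etc.).
            return;
        }

        string json = e.Result;

        // Update UI elements through the Dispatcher rather than directly,
        // since the callback is not guaranteed to run on the UI thread.
        Dispatcher.BeginInvoke(() => ResultTextBlock.Text = json);
    }
}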
Why
When Silverlight was designed, having a program wait (even busy-waiting) on any thread would slow down the browser experience. Requiring all database (service) calls to be asynchronous ensures that the developer can't slow down the browser, in particular with bad code. Loosely speaking, that model carried over to Windows Phone, where Silverlight was brought over as the main application framework for the phone.
It took me a while to get used to the process, but now that I have worked on Silverlight and Windows Phone, I actually use the async style in WPF even though I don't necessarily have to; it makes sense in terms of data processing and thread management.
It is not easier for the developer per se, since everyone learns on a synchronous hello world, but it is adapted to the needs of the target platform, and once learned it becomes second nature.

When to use httptest.Server and httptest.ResponseRecorder

As the title says: when should I use httptest.Server and when httptest.ResponseRecorder?
It seems to me that I can also test that my handlers return the correct response using httptest.Server: I can simply start an httptest.Server with my handler implementations and then validate the response body.
Please correct me if I'm wrong; I am learning Go + TDD.
When you just want to check whether your http.Handler does what it should, you don't need to use httptest.Server. Just call your handler with an httptest.ResponseRecorder instance and check the output, as in the example.
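For instance, a minimal sketch of that approach; helloHandler is invented here for illustration:
package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
    "testing"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "hello")
}

func TestHelloHandler(t *testing.T) {
    req, err := http.NewRequest("GET", "/hello", nil)
    if err != nil {
        t.Fatal(err)
    }
    rec := httptest.NewRecorder()

    // Call the handler directly; no server or network involved.
    helloHandler(rec, req)

    if rec.Code != http.StatusOK {
        t.Fatalf("got status %d, want %d", rec.Code, http.StatusOK)
    }
    if rec.Body.String() != "hello" {
        t.Fatalf("got body %q, want %q", rec.Body.String(), "hello")
    }
}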
The possible uses of httptest.Server are numerous, so here are just a couple that come to my mind:
If your code depends on some external services and APIs, you can use a test server to emulate them, as in the sketch after this list. (Although I personally would isolate all code dealing with external data sources and then use them through interfaces, so that I could easily create fake objects for my tests.)
If you work on a client-server application, you can use a test server to emulate the server-side when testing the client-side.
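A small sketch of the first case, standing up a fake HTTP API for client code to talk to; the endpoint and payload are invented:
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestClientAgainstFakeAPI(t *testing.T) {
    // A throwaway server that pretends to be the external API.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        fmt.Fprint(w, `{"status":"ok"}`)
    }))
    defer srv.Close()

    // The code under test would normally take srv.URL as its base URL.
    resp, err := http.Get(srv.URL + "/status")
    if err != nil {
        t.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        t.Fatal(err)
    }
    if string(body) != `{"status":"ok"}` {
        t.Fatalf("unexpected body: %s", body)
    }
}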

WCF, Net.Tcp endpoint, code-first

I have a WCF service. This service works perfectly well on a WSHttp endpoint, running as a Windows service under ServiceHost. However, I want to move to a TCP endpoint because of scalability. For the life of me I cannot figure out how to host it correctly. Here is my host's OnStart routine, in VB:
Protected Overrides Sub OnStart(ByVal args() As String)
    If _svcHost IsNot Nothing Then
        _svcHost.Close()
    End If

    _svcHost = New ServiceHost(GetType(AutoTestsDataService), New Uri("net.tcp://" & GetIPv4Address() & ":8000"))

    Dim metaDataBehavior = _svcHost.Description.Behaviors.Find(Of ServiceMetadataBehavior)()
    If metaDataBehavior Is Nothing Then
        _svcHost.Description.Behaviors.Add(New ServiceMetadataBehavior() With {.HttpGetEnabled = False})
    Else
        metaDataBehavior.HttpGetEnabled = False
    End If

    _svcHost.AddServiceEndpoint(GetType(IMetadataExchange), MetadataExchangeBindings.CreateMexTcpBinding, "mex")
    _svcHost.AddServiceEndpoint(GetType(AutoTestsDataService), New NetTcpBinding(SecurityMode.None, False), ServiceName)

    Dim debugBehavior As ServiceDebugBehavior = _svcHost.Description.Behaviors.Find(Of ServiceDebugBehavior)()
    If debugBehavior Is Nothing Then
        _svcHost.Description.Behaviors.Add(New ServiceDebugBehavior() With {.IncludeExceptionDetailInFaults = My.Settings.flagDebug})
    Else
        debugBehavior.IncludeExceptionDetailInFaults = My.Settings.flagDebug
    End If

    Try
        _svcHost.Open()
    Catch ex As Exception
        _svcHost.Abort()
    End Try
End Sub
As is, the code compiles fine, installs fine, and the Windows service starts up fine. But there is nothing listening on port 8000. I have made sure the Net.Tcp Listener and Port Sharing services are running properly. I chose not to use the config file at all because I had lots of problems with it in the past, and to me putting this kind of setup in a config file is bad, never mind what Microsoft wants me to believe. A code-first implementation is always easier to understand than XML anyway, and to me the above code puts all the right parts in the right places. It just refuses to work. Like I said, I can stick with WSHttp, but I would prefer to understand why Net.Tcp is not working.
The best way to determine what the WCF Service listener is doing during startup is to enable WCF Tracing. Therefore, I recommend you configure tracing on the service, which should provide detailed information about any underlying exceptions occurring at startup.
WCF tracing provides diagnostic data for fault monitoring and analysis. You can use tracing instead of a debugger to understand how an application is behaving, or why it faults. You can also correlate faults and processing across components to provide an end-to-end experience, including traces for process milestones across all components of the applications, such as operation calls, code exceptions, warnings and other significant processing events.
http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx
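For reference, a minimal sketch of such a tracing setup in the host's app.config; the log path is just an example (open the resulting .svclog file with the Service Trace Viewer):
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\WcfTraces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>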
The only problem I can see here is the GetIPv4Address() method and what it returns. Usually the best choice is to use localhost, the host name, or 0.0.0.0.
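That is, as a quick check, try a fixed base address in your host code above (only the Uri changes):
_svcHost = New ServiceHost(GetType(AutoTestsDataService), New Uri("net.tcp://localhost:8000"))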

How do you decouple a web service that requires an authheader on every call?

I have a service reference to a .NET 2.0 web service. I have a reference to this service in my repository and I want to move to Ninject. I've been using DI for some time now, but haven't tried it with a web service like this.
So, in my code, the repository constructor creates two objects: the client proxy for the service, and an AuthHeader object that is the first parameter of every method in the proxy.
The AuthHeader is where I'm having friction. Because the concrete type is required as the first parameter on every call in the proxy, I believe I need to take a dependency on AuthHeader in my repository. Is this true?
I extracted an interface for AuthHeader from my reference.cs. I wanted to move to the following for my repository constructor:
[Inject]
public PackageRepository(IWebService service, IAuthHeader authHeader)
{
    _service = service;
    _authHeader = authHeader;
}
...but then I can't make calls to my service proxy like
_service.MakeSomeCall(_authHeader, "some value")
...because MakeSomeCall is expecting an AuthHeader, not an IAuthHeader.
Am I square-pegging a round hole here? Is this just an area where there isn't a natural fit (because of web service "awesomeness")? Am I missing an approach?
It's difficult to understand exactly what the question is here, but some general advice might be relevant to this situation:
Dependency injection does not mean that everything has to be an interface. I'm not sure why you would try to extract an interface from a web service proxy generated from WSDL; the types in the WSDL are contracts which you must follow. This is especially silly if the IAuthHeader doesn't have any behaviour (it doesn't seem to) and you'll never have alternate implementations.
The reason why this looks all wrong is because it is wrong; this web service is poorly designed. Information that's common to all messages (like an authentication token) should never go in the body, where it translates to a method parameter; instead it should go in the message header, where the ironically named AuthHeader clearly isn't. Headers can be intercepted by the proxy and inspected prior to executing any operation, either on the client or the service side. In WCF that's part of the behaviors (generally ClientCredentials for authentication) and in legacy WSE it's done as an extension. Although it's theoretically possible to do this with information in the message body, it's far more difficult to pull off reliably.
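For contrast, this is roughly what carrying the token in the SOAP header looks like in a classic ASMX service; the type and member names here are invented:
using System.Web.Services;
using System.Web.Services.Protocols;

// Travels in the SOAP envelope's header, not in the body.
public class AuthHeader : SoapHeader
{
    public string Username;
    public string Token;
}

public class PackageService : WebService
{
    // The ASMX stack populates this from the incoming header.
    public AuthHeader Credentials;

    [WebMethod]
    [SoapHeader("Credentials", Direction = SoapHeaderDirection.In)]
    public string MakeSomeCall(string value)
    {
        // Authenticate from Credentials here, instead of taking it as a
        // parameter of every method.
        return value;
    }
}
The generated client proxy for a service shaped like this typically exposes a header property you set once, rather than an extra parameter on every call.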
In any event, what's really important here isn't so much what your repository depends on but where that dependency comes from. If your AuthHeader is injected by the kernel as a dependency then you're still getting all the benefits of DI - in particular the ability to have this all registered in one place or substitute a different implementation (i.e. a derived class).
So design issues aside, I don't think you have a real problem in your DI implementation. If the class needs to take an AuthHeader then inject an AuthHeader. Don't worry about the exact syntax and type, as long as it takes that dependency as a constructor argument or property.
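Concretely, a minimal sketch of that idea with Ninject, keeping the concrete AuthHeader as the dependency; IWebService is the interface from the question, while the other types, members, and credential values are invented stand-ins for the generated proxy code:
using Ninject;
using Ninject.Modules;

// Stand-ins for the generated Reference.cs types (shapes invented):
public class AuthHeader
{
    public string Username { get; set; }
    public string Password { get; set; }
}

public interface IWebService
{
    string MakeSomeCall(AuthHeader header, string value);
}

public class WebServiceClientWrapper : IWebService
{
    public string MakeSomeCall(AuthHeader header, string value)
    {
        // Would delegate to the generated proxy here.
        return value;
    }
}

public class PackageRepository
{
    private readonly IWebService _service;
    private readonly AuthHeader _authHeader; // depend on the concrete contract type

    public PackageRepository(IWebService service, AuthHeader authHeader)
    {
        _service = service;
        _authHeader = authHeader;
    }

    public string GetPackage(string id)
    {
        // The proxy requires the concrete AuthHeader as its first argument.
        return _service.MakeSomeCall(_authHeader, id);
    }
}

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // The credentials are registered in one place; values here are placeholders.
        Bind<AuthHeader>().ToMethod(ctx => new AuthHeader
        {
            Username = "api-user",
            Password = "secret"
        }).InSingletonScope();

        Bind<IWebService>().To<WebServiceClientWrapper>();
    }
}
Resolving the repository then stays ordinary Ninject usage, e.g. new StandardKernel(new ServiceModule()).Get<PackageRepository>(), and the AuthHeader construction lives in exactly one place.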

Should my unit tests be touching an API directly when testing a wrapper for that API?

I have written a number of unit tests that test a wrapper around an FTP server API.
Both the unit tests and the FTP server are on the same machine.
The wrapper API gets deployed to our platform and is used in both remoting and web service scenarios. The wrapper API essentially takes XML messages to perform tasks such as adding/deleting/updating users, changing passwords, modifying permissions... that kind of thing.
In a unit test, say to add a user to a virtual domain, I create the XML message to send to the API. The API does its work and returns a response with status information about whether the operation was successful or failed (error codes, validation failures, etc.).
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API and query its store directly to see if, for example when creating a user account, the user account really did get created.
Does this smell bad?
Update 1: #Jeremy/Nick: The wrapper is the focus of the testing; the FTP server and its COM API are 3rd-party products, presumably well tested and stable. The wrapper API has to parse the XML message and then invoke the FTP server's API. How would I verify, and this may be a silly case, that a particular property of the user account is set correctly by the wrapper? For example, the wrapper might set the wrong property or attribute of an FTP account due to a typo. A good example is setting the upload and download speed limits; these may get transposed in the wrapper code.
Update 2: Thanks, all, for the answers. To the folks who suggested using mocks: it had crossed my mind, but the light hasn't switched on there yet and I'm still struggling to get my head round how I would get my wrapper to work with a mock of the FTP server. Where would the mocks reside, and do I pass an instance of said mocks to the wrapper API to use instead of calling the COM API? I'm aware of mocking but struggling to get my head round it, mostly because I find most of the examples and tutorials so abstract and (I'm ashamed to say) verging on the incomprehensible.
You seem to be mixing unit & component testing concerns.
If you're unit-testing your wrapper, you should use a mock FTP server and not involve the actual server. The plus side is that you can usually achieve 100% automation this way.
If you're component-testing the whole thing (the wrapper + FTP server working together), try to verify your results at the same level as your tests, i.e. by means of your wrapper API. For example, if you issue a command to upload a file, next issue a command to delete/download that file to make sure that the file was uploaded correctly. For more complex operations where it's not trivial to test the outcome, consider resorting to the COM API "backdoor" you mentioned, or perhaps involve some manual verification (do all of your tests need to be automated?).
To verify whether the API wrapper code really did do the right thing (if the response indicated success), I invoke the FTP server's COM API
Stop right there. You should be mocking the FTP server and the wrapper should operate against the mock.
If your test runs both the wrapper and the FTP server, you are not Unit Testing.
To test your wrapper with a mock object, you can do the following:
Write a COM object that has the same interface as the FTP server's COM API. This will be your mock object. You should be able to interchange the real FTP server and your mock object by passing the interface pointer of either to your wrapper by means of dependency injection.
Your mock object should implement hard-coded behaviour based on the methods called on its interface (which mimics the FTP server API) and also based on the argument values used:
For example, if you have an UploadFile method you can blindly return a success result and perhaps store the file name that was passed in in an array of strings.
You could simulate an upload error when you encounter a file name with "error" in it.
You could simulate latency/timeout when you encounter a file name with "slow" in it.
Later on, the DownloadFile method could check the internal string array to see if a file with that name was already "uploaded".
The pseudo-code for some test cases would be:
//RealServer theRealServer;
//FtpServerIntf ftpServerIntf = theRealServer.getInterface();
// Let's test with our mock instead
MockServer myMockServer;
FtpServerIntf ftpServerIntf = myMockServer.getInterface();
FtpWrapper myWrapper(ftpServerIntf);
FtpResponse resp = myWrapper.uploadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing123");
assertEquals(FtpResponse::OK, resp);
resp = myWrapper.downloadFile("Testing456");
assertEquals(FtpResponse::NOT_FOUND, resp);
resp = myWrapper.downloadFile("SimulateError");
assertEquals(FtpResponse::ERROR, resp);
I hope this helps...
I agree with Nick and Jeremy about not touching the API. I would look at mocking the API.
http://en.wikipedia.org/wiki/Mock_object
If it's .NET you can use:
Moq: http://code.google.com/p/moq/
And a bunch of other mocking libraries.
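A tiny sketch of what that could look like with Moq, assuming you put your own interface (an invented IFtpServerApi here) between the wrapper and the COM API, and using xUnit as the test framework; the wrapper body is deliberately simplified:
using Moq;
using Xunit;

// Invented abstraction over the FTP server's COM API.
public interface IFtpServerApi
{
    bool CreateUser(string domain, string userName, string password);
}

// The wrapper under test calls the FTP server only through the interface.
public class FtpWrapper
{
    private readonly IFtpServerApi _server;
    public FtpWrapper(IFtpServerApi server) { _server = server; }

    public bool AddUserFromXml(string xml)
    {
        // Real code would parse the XML message; values are hard-coded here for brevity.
        return _server.CreateUser("example.com", "alice", "p@ss");
    }
}

public class FtpWrapperTests
{
    [Fact]
    public void AddUser_ForwardsParsedValuesToServerApi()
    {
        var mock = new Mock<IFtpServerApi>();
        mock.Setup(s => s.CreateUser("example.com", "alice", "p@ss")).Returns(true);

        var wrapper = new FtpWrapper(mock.Object);
        var ok = wrapper.AddUserFromXml("<AddUser domain='example.com' user='alice' />");

        Assert.True(ok);
        // Verify the wrapper passed the right values to the server API,
        // which is exactly the "transposed speed limits" kind of bug from Update 1.
        mock.Verify(s => s.CreateUser("example.com", "alice", "p@ss"), Times.Once());
    }
}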
What are you testing, the wrapper or the API? The API should work as is, so you don't need to test it, I would think. Focus your testing efforts on the wrapper and pretend the API doesn't exist; when I write a class that does file access I don't unit test the built-in StreamReader... I focus on my code.
I would say your API should be treated just like a database or a network connection when testing. Don't test it, it isn't under your control.
It doesn't sound like you're asking "Should I test the API?" — you're asking "Should I use the API to verify whether my wrapper is doing the right thing?"
I say yes. Your unit tests should assert that your wrapper passes along the information reported by the API. In the example you give, for instance, I don't know how you would avoid touching the API. So I don't think it smells bad.
The only time I can think of when it might make sense to dip into the lower-level API to verify results is if the higher-level API is write-only. For example, if you can create a user using the high-level API, then there should be a high-level API to get the user accounts, too. Use that.
Other folks have suggested mocking the lower-level API. That's good, if you can do it. If the lower-level component is mocked, checking the mocks to make sure the right state is set should be okay.