What is a web service endpoint? - web-services

Let's say my web service is located at http://localhost:8080/foo/mywebservice and my WSDL is at http://localhost:8080/foo/mywebservice?wsdl.
Is http://localhost:8080/foo/mywebservice an endpoint, i.e., is it the same as the URI of my web service, or is it where the SOAP messages are received and unmarshalled?
Could you please explain to me what it is and what the purpose of it is?

This is a shorter and hopefully clearer answer...
Yes, the endpoint is the URL where your service can be accessed by a client application. The same web service can have multiple endpoints, for example in order to make it available using different protocols.

Updated answer, from Peter in the comments:
This is the "old terminology"; use the WSDL 2.0 "endpoint"
definition directly (WSDL 2.0 renamed "port" to "endpoint").
Maybe you will find an answer in this document: http://www.w3.org/TR/wsdl.html
A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitutes a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports define a service. Hence, a WSDL document uses the following elements in the definition of network services:
Types - a container for data type definitions using some type system (such as XSD).
Message - an abstract, typed definition of the data being communicated.
Operation - an abstract description of an action supported by the service.
Port Type - an abstract set of operations supported by one or more endpoints.
Binding - a concrete protocol and data format specification for a particular port type.
Port - a single endpoint defined as a combination of a binding and a network address.
Service - a collection of related endpoints.
http://www.ehow.com/info_12212371_definition-service-endpoint.html
The endpoint is a connection point where HTML files or active server pages are exposed. Endpoints provide information needed to address a Web service endpoint. The endpoint provides a reference or specification that is used to define a group or family of message addressing properties and give end-to-end message characteristics, such as references for the source and destination of endpoints, and the identity of messages to allow for uniform addressing of "independent" messages. The endpoint can be a PC, PDA, or point-of-sale terminal.

A web service endpoint is the URL that another program would use to communicate with your program. To see the WSDL you add ?wsdl to the web service endpoint URL.
Web services are for program-to-program interaction, while web pages are for program-to-human interaction.
So:
Endpoint is: http://www.blah.com/myproject/webservice/webmethod
Therefore,
WSDL is: http://www.blah.com/myproject/webservice/webmethod?wsdl
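For a concrete picture, here is a minimal JAX-WS sketch (the class name and URL are illustrative, and it assumes a JAX-WS runtime such as the one bundled with Java 8). It publishes a service at an endpoint URL; once it runs, the WSDL is served at that same URL with ?wsdl appended:
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Illustrative service implementation; the class name and URL are made up.
@WebService
public class WebMethodService {

    public String webMethod(String input) {
        return "Hello, " + input;
    }

    public static void main(String[] args) {
        // Publish the service at its endpoint URL.
        // The WSDL is then served at the same URL with "?wsdl" appended.
        Endpoint.publish("http://localhost:8080/myproject/webservice/webmethod",
                new WebMethodService());
    }
}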
To expand further on the elements of a WSDL, I always find it helpful to compare them to code:
A WSDL has 2 portions (abstract & physical/concrete).
Abstract Portion:
Definitions - variables - ex: myVar, x, y, etc.
Types - data types - ex: int, double, String, myObjectType
Operations - methods/functions - ex: myMethod(), myFunction(), etc.
Messages - method/function input parameters & return types
ex: public myObjectType myMethod(String myVar)
Port Types - classes (i.e. they are a container for operations) - ex: MyClass{}, etc.
Physical (Concrete) Portion:
Binding - connects to a port type and defines the chosen protocol for communicating with this web service
- a protocol is a form of communication (so text/SMS vs. phone vs. email, etc.).
Service - lists the address where another program can find your web service (i.e. your endpoint).

In past projects I worked on, the endpoint was a relative property. That is to say, it may or may not have been appended to, but it always contained the protocol://host:port/partOfThePath.
If the service being called had a dynamic part to it, for example a ?param=dynamicValue, then that part would get added to the endpoint. But many times the endpoint could be used as is, without having to be amended.
What's important to understand is what an endpoint is not, and how it helps. For example, an alternative way to pass the information stored in an endpoint would be to store the different parts of the endpoint in separate properties. For example:
hostForServiceA=someIp
portForServiceA=8080
pathForServiceA=/some/service/path
hostForServiceB=someIp
portForServiceB=8080
pathForServiceB=/some/service/path
Or, if the same host and port are shared across multiple services:
host=someIp
port=8080
pathForServiceA=/some/service/path
pathForServiceB=/some/service/path
In those cases the full URL would need to be constructed in your code as such:
String url = "http://" + host + ":" + port + pathForServiceA + "?" + dynamicParam + "=" + dynamicValue;
In contrast, this can be stored as a single endpoint, like so:
serviceAEndpoint=http://host:port/some/service/path?dynamicParam=
And yes, many times we stored the endpoint up to and including the '='. This led to code like this:
String url = serviceAEndpoint + dynamicValue;
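Putting the two approaches side by side, here is a small sketch (assuming the properties above live in a file named config.properties; the parameter name and value are made up):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class EndpointConfigExample {
    public static void main(String[] args) throws IOException {
        // Assumed file holding the properties shown above.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config.properties")) {
            props.load(in);
        }

        String dynamicValue = "42";

        // Approach 1: assemble the URL from separate host/port/path properties.
        String urlFromParts = "http://" + props.getProperty("host")
                + ":" + props.getProperty("port")
                + props.getProperty("pathForServiceA")
                + "?dynamicParam=" + dynamicValue;

        // Approach 2: the endpoint property already carries everything up to the '='.
        String urlFromEndpoint = props.getProperty("serviceAEndpoint") + dynamicValue;

        System.out.println(urlFromParts);
        System.out.println(urlFromEndpoint);
    }
}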
Hope that sheds some light.

Simply put, an endpoint is one end of a communication channel. When an API interacts with another system, the touch-points of this communication are considered endpoints. For APIs, an endpoint can include a URL of a server or service. Each endpoint is the location from which APIs can access the resources they need to carry out their function.
APIs work using ‘requests’ and ‘responses.’ When an API requests information from a web application or web server, it will receive a response. The place that APIs send requests to, and where the resource lives, is called an endpoint.
Reference:
https://smartbear.com/learn/performance-monitoring/api-endpoints/

An endpoint is specified as a relative or absolute URL that usually results in a response. That response is usually the result of a server-side process that could, for instance, produce a JSON string. That string can then be consumed by the application that made the call to the endpoint. So, in general, endpoints are predefined access points, used within TCP/IP networks to initiate a process and/or return a response. Endpoints can contain parameters passed within the URL as key-value pairs, with multiple pairs separated by an ampersand, allowing the endpoint to call, for example, an update/insert process. Endpoints don't always need to return a response, but a response is always useful, even if it is just to indicate the success or failure of an operation.
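As a small illustration, here is a sketch using Java's built-in HTTP client (Java 11+); the URL and its key-value parameters are made up:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EndpointCallExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint with two key-value parameters separated by '&'.
        URI endpoint = URI.create("https://api.example.com/items?category=books&limit=10");

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(endpoint).GET().build();

        // The response body (e.g. a JSON string) is what the caller consumes.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}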

An endpoint is the URL of a web service; it is the address through which a distributed API is reached.
The Simple Object Access Protocol (SOAP) endpoint is a URL. It identifies the location on the built-in HTTP service where the web services listener listens for incoming requests.
Reference: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.4/com.ibm.netcoolimpact.doc/dsa/imdsa_web_netcool_impact_soap_endpoint_c.html

Related

Genexus - Mark unrecognized procedure parameters as ignorable in webservices

I have procedures that are exposed as web services (REST):
I need them to be able to parse the request body while ignoring unrecognized fields (that are not specified
in "rules"). Right now, when a procedure tries to parse something that is not defined within the parameters, it throws the following error:
Example:
Some procedure has the following definition:
parm(in:&parm1, in:&parm2, out:&someResponse);
Then we change to:
parm(in:&parm1, in:&parm2, in:&parm3, out:&someResponse);
The web service is updated on some distributions, but some are still on the old version with 2 in parameters.
The service that consumes these web services on the different app distributions is sending the body with the second (latest) definition:
{
  "parm1": "somevalue",
  "parm2": "somevalue",
  "parm3": "somevalue"
}
Unfortunately we don't have control of the third party that is consuming our web services, so in that case, it would be a lot easier if unused parameters could be ignored...
USING GX 16 U11 - Java Generator
Unfortunately there is no way in GeneXus 16 to "catch" the request and do something prior to the object logic. In GeneXus 17 we have the new API object; there you can transform the parameters.
But not everything is lost. Taking into account that you're generating in Java, there is an "external" way to do it with Filters. I used them to log the client requests for debugging purposes.
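For illustration, a minimal sketch of such a servlet filter (the class name and the /rest/* URL pattern are assumptions, and it presumes the generated application runs in a standard javax.servlet container):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter that logs every REST request before the generated procedure runs.
@WebFilter("/rest/*")
public class RequestLoggingFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // No initialization needed for this sketch.
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) request;
        System.out.println("Incoming " + http.getMethod() + " " + http.getRequestURI()
                + " from " + request.getRemoteAddr());
        // Hand the request on to the normal processing chain.
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // Nothing to clean up.
    }
}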
If you don't want to mess with the code, there are also API gateways you could put in front of your API services to redirect requests to the right service. Bear in mind that I'm not a specialist in this topic; maybe a post on Server Fault would help.

Inter-Process communication in a microservices architecture

We are moving from a monolithic application to a microservice architecture. We're still in the planning phase and want to know the best practices for building it.
Suppose we have two services:
User
Device
getUserDevices(UserId)
addDevice(DeviceInfo, UserId)
...
Each user has multiple devices.
What is the most common, cleanest, and most proper way of asking the server to get all of a user's devices?
1- {api-url}/User/{UserId}/devices
needs another HTTP request to communicate with Device service.
for user X, get linked devices from User service.
// OR
2- {api-url}/Device/{UserId}/devices
for user X, get linked devices from Device service.
There are a lot of classic patterns available to solve such problems in microservices. You have 2 microservices - 1 for User (Microservice A) and 1 for Device (Microservice B). The fundamental principle of a microservice is to have a separate database for each microservice. If the microservices want to talk to each other (or to get data from one another), they can, but they do it through an API. Another way for 2 microservices to communicate is via events. When something happens in Microservice A, it raises an event and pushes it to a central event store or a message queue, and Microservice B subscribes to some or all of the events emitted by A.
I guess in your domain, A would have methods like Add/Update/Delete a User and B would have Add/Update/Delete a Device. Each user can have its own unique id and other data fields like Name, Address, Email etc. Each device can have its own unique id, a user id, and other data fields like Name, Type, Manufacturer, Price etc. Whenever you "Add" a device, you can send a POST request or a command (if you use CQRS) to the Device microservice with the request containing data about the device plus the user id, and it could raise an event called "DeviceAdded". It can also have events corresponding to Update and Delete, like "DeviceUpdated" and "DeviceRemoved". Microservice A can subscribe to the "DeviceAdded", "DeviceRemoved", and "DeviceUpdated" events emitted by B, and whenever any such event is raised, it handles that event and denormalizes it into its own little database of devices (which you could call UserRelationships). In the future, it can listen to events from other microservices too (so your pattern here would be extensible and scalable).
So now, to get all devices owned by a user, all you have to do is make an endpoint in the User microservice like "http://{microservice-A-host}:{port}/user/{user-id}/devices" and it will return a list of the devices by querying for the user id in its own little database of UserRelationships, which you have been maintaining through events.
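For illustration, a minimal sketch of that denormalization idea (the event class, handler, and in-memory store are hypothetical; a real system would consume the events from a message broker and persist them in the UserRelationships database):
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical event emitted by the Device microservice.
record DeviceAddedEvent(String deviceId, String userId, String name) {}

// Inside the User microservice: subscribes to device events and keeps
// a denormalized local view of which devices belong to which user.
class UserDeviceRelationshipStore {

    private final Map<String, List<String>> devicesByUser = new ConcurrentHashMap<>();

    // Called whenever a "DeviceAdded" event arrives from the queue/event store.
    public void onDeviceAdded(DeviceAddedEvent event) {
        devicesByUser
                .computeIfAbsent(event.userId(), id -> new CopyOnWriteArrayList<>())
                .add(event.deviceId());
    }

    // Backs the endpoint GET /user/{user-id}/devices in the User microservice.
    public List<String> devicesOf(String userId) {
        return devicesByUser.getOrDefault(userId, List.of());
    }
}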
Good Reference is here: https://www.nginx.com/blog/event-driven-data-management-microservices/
It may really be either way, but for my liking, I would choose to put it under /Devices/{userId}/devices, as you are looking for the devices given the user id. I hope this helps. Have a nice one!
You are requesting a resource from a service, resource being a device and service being a device service.
From a rest standpoint, you are looking for a resource and your service is providing various methods to manipulate that resource.
The following url can be used.
[GET] ../device?user_id=xyz
And device information can be fetched via ../device/{device_id}
Having said that, if you had one service that provided both user and device data, then the following would have made sense:
[GET] ../user/{userId}/device
Do note that this is just a naming convention and you can pick whatever suits you best; the thing is to pick one and stick to it.
When exposing the API, consistency is most important.
One core principle of the microservice architecture is
defining clear boundaries and responsibilities of each microservice.
I can say that it's the same Single Responsibility Principle from SOLID, but on a macro level.
Considering this principle we get:
Users service is responsible for user management/operations
Devices service is responsible for operations with devices
Your question is
..proper way of asking the server to get all user devices
It's 100% the responsibility of the Devices service; the Users service knows nothing about devices.
As far as I can see, you are thinking only in routing terms (yes, API consistency is also important).
On one side, the better and more logical URL is /api/users/{userId}/devices
- you are trying to get the user's devices, and these devices belong to the user.
On the other side, you can use routes like /api/devices/user/{userId} (/api/devices/{deviceId}), which can be more easily processed
by the routing system to send a request to the Devices service.
Taking into account other constraints you can choose the option that is right for your design.
And also a small addition to:
needs another HTTP request to communicate with Device service.
In the architecture of your solution you can create an additional, separate component that routes requests to the desired microservice; direct calls from one microservice to another are not the only possibility.
You should query the device service only,
and treat the user id like a filter in the device service. For example, you would search on userId just as you would search for a device based on device type. It's just another filter.
Eg : /devices?userid=
Also, you could cache some basic user information in the device service, to save round trips when getting user data.
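For illustration, a minimal JAX-RS sketch of that filter approach (the resource, the Device record, and the in-memory data are all hypothetical, and it assumes a javax.ws.rs runtime):
import java.util.List;
import java.util.stream.Collectors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical resource in the Device service: the user id is just one more filter.
@Path("/devices")
public class DeviceResource {

    // Hypothetical device representation; a real service would load these from its own database.
    public record Device(String id, String userId, String type) {}

    private static final List<Device> DEVICES = List.of(
            new Device("d1", "user-1", "phone"),
            new Device("d2", "user-1", "tablet"),
            new Device("d3", "user-2", "phone"));

    // GET /devices?userid=user-1 -> all devices linked to that user.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Device> findDevices(@QueryParam("userid") String userId) {
        return DEVICES.stream()
                .filter(d -> userId == null || d.userId().equals(userId))
                .collect(Collectors.toList());
    }
}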
With microservices there is nothing wrong with either option. However, the device API makes more sense, and furthermore I'd prefer
GET ../device/{userId}/devices
over
GET ../device?user_id=123
There are two reasons:
As the userId should already be there with the devices service, you'll save one call to the user service. Otherwise it would go Requester -> User service -> Device service.
You can use POST ../device/{userId}/devices to create a new device for a particular user, which looks more RESTful than a parameterized URL.

Invoking a service according to the input in a Proxy Service OSB

I have a proxy service, but according to a parameter in the request I need to invoke a different business service (e.g. if the parameter is 1 I need to invoke service A, otherwise service B), and each service has a different response, so for service A I need to transform the response but for service B it's not necessary!
I don't know how to do this in a proxy service, nor what the best way is (using Routing or a Service Callout), taking into account that this is for a system that will have a lot of transactions. What I have in the picture is the scenario for service A; I need to include both scenarios.
I think what you can do in this case is use a Routing Table: set a variable somewhere earlier in your pipeline and use an XQuery expression on it to determine which route in the table to take.

how much has web-service been used in mule (several times)

I have a web service (with a WSDL) in Mule that is used by other people.
I want to get some information about the users that call my web service, for example the IP and timestamp of each API invocation.
Also, I want to know how many times the web service has been used in Mule.
I don't think there's such statistical information. However, you could add a logger processor to the flow (assuming it's a flow), logging something like "Web Service XXX was called." The logged message would also contain the timestamp, because of the logger's formatter.
As to the IP that called the service, Mule places the calling address in the message Inbound property remoteAddress. So, you could just add this line to the flow:
<logger message="Incoming message. Caller Address: #[message.inboundProperties['remoteAddress']]"/>
This would log each access (which could be used for statistical purposes by an analyzing tool) and their respective calling address.
This sounds like a good use case for either:
A custom interceptor that takes care of storing usage statistics, or
A wire-tap that sends to a flow in charge of storing usage statistics.

Apache camel to aggregate multiple REST service responses

I'm new to Camel and wondering how I can implement the below-mentioned use case using Camel.
We have a REST web service, and let's say it has two service operations: callA and callB.
Now we have an ESB layer in front that intercepts the client requests before they hit the actual web service URLs.
Now I'm trying to do something like this -
Expose a URL in the ESB that the client will actually call. In the ESB we are using Camel's Jetty component, which just proxies this service call. So let's say this URL is /my-service/scan/
Now, on receiving this request at the ESB, I want to call these two REST endpoints (callA and callB) -> get their responses - resA and resB -> aggregate them into a single response object resScan -> return it to the client.
All I have right now is -
<route id="MyServiceScanRoute">
<from uri="jetty:http://{host}.{port}./my-service/scan/?matchOnUriPrefix=true&bridgeEndpoint=true"/>
<!-- Set service specific headers, monitoring etc. -->
<!-- Call performScan -->
<to uri="direct:performScan"/>
</route>
<route id="SubRoute_performScan">
<from uri="direct:performScan"/>
<!-- HOW DO I??
Make callA, callB service calls.
Get their responses resA, resB.
Aggregate these responses to resScan
-->
</route>
I think that you are unnecessarily complicating the solution a little bit. :) In my humble opinion, the best way to call two independent remote web services and concatenate the results is to:
call services in parallel using multicast
aggregate the results using the GroupedExchangeAggregationStrategy
The routing for the solution above may look like:
from("direct:serviceFacade")
.multicast(new GroupedExchangeAggregationStrategy()).parallelProcessing()
.enrich("http://google.com?q=Foo").enrich("http://google.com?q=Bar")
.end();
The exchange returned from direct:serviceFacade will contain the property Exchange.GROUPED_EXCHANGE set to the list of results of the calls to your services (Google Search in my example).
And this is how you could wire direct:serviceFacade to the Jetty endpoint:
from("jetty:http://0.0.0.0:8080/myapp/myComplexService").enrich("direct:serviceFacade").setBody(property(Exchange.GROUPED_EXCHANGE));
Now all HTTP requests to the service URL you exposed on the ESB using the Jetty component will generate responses concatenated from the two calls to the sub-services.
Further considerations regarding the dynamic part of messages and endpoints
In many cases using static URLs in endpoints is insufficient to achieve what you need. You may also need to prepare the payload before passing it to each web service.
Generally speaking, the type of routing used to achieve dynamic endpoints or payload parameters is highly dependent on the component you use to consume the web services (HTTP, CXFRS, Restlet, RSS, etc.). Each component varies in the degree to which, and the way in which, you can configure it dynamically.
If your endpoints/payloads need to be set dynamically, you could also consider the following options:
Preprocess the copy of the exchange passed to each endpoint using the onPrepareRef option of the Multicast endpoint. You can use it to refer to a custom processor that will modify the payload before passing it to the Multicast's endpoints. This may be a good way to combine onPrepareRef with the Exchange.HTTP_URI header of the HTTP component.
Use the Recipient List (which also offers parallelProcessing, as the Multicast does) to dynamically create the REST endpoint URLs.
Use the Splitter pattern (with parallelProcessing enabled) to split the request into smaller messages dedicated to each service. Once again, this option could work pretty well with the Exchange.HTTP_URI header of the HTTP component. This will work only if both sub-services can be defined using the same endpoint type.
As you can see, Camel is pretty flexible and offers many ways to achieve your goal. Consider the context of your problem and choose the solution that fits it best.
If you show me more concrete examples of the REST URLs you want to call on each request to the aggregation service, I could advise you on which solution I would choose and how to implement it. It is particularly important to know which part of the request is dynamic. I also need to know which service consumer you want to use (it will depend on the type of data you will receive from the services).
This looks like a good example where the Content Enricher pattern should be used. Described here
<from uri="direct:performScan"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateRequestAndA"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateAandB"/>
</route>
The aggregation strategies have to be written in Java (or perhaps some scripting language such as Scala or Groovy, but I have not tried that).
An aggregation strategy just needs to be a bean that implements org.apache.camel.processor.aggregate.AggregationStrategy, which in turn requires you to implement one method:
Exchange aggregate(Exchange oldExchange, Exchange newExchange);
So now it's up to you to merge the request with the response from the enrich service call. You have to do it twice, since you have both callA and callB. There are two predefined aggregation strategies that you might or might not find useful, UseLatestAggregationStrategy and UseOriginalAggregationStrategy. The names are quite self-explanatory.
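For illustration, a minimal sketch of such a strategy (the merging logic shown, simple string concatenation of the two bodies, is just an assumption; replace it with whatever resScan actually needs):
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

// Hypothetical strategy that merges the enriched response into the original exchange
// by concatenating the two message bodies.
public class ConcatenatingAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // First invocation: nothing to merge yet.
        if (oldExchange == null) {
            return newExchange;
        }
        String original = oldExchange.getIn().getBody(String.class);
        String enrichment = newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(original + enrichment);
        return oldExchange;
    }
}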
Good luck