Default endpoint in ESB - WSO2

I'm using WSO2 ESB 4.7.0 and WSO2 DSS 3.0.0. When I started with these servers there was little load on them, as there were few services. But now the situation has changed: there are a number of services on the server, and each proxy contains a particular address endpoint of the DSS. After calling this address endpoint, the proxy navigates to that endpoint, retrieves the information, and returns the response. This is the general scenario for all of the services.
Day by day the number of services increases, and due to the load the server is getting slower. That's why I want to create a single default endpoint from which I can call the services easily, with no need to define the address endpoint everywhere. Is this possible? How can I implement it?
For load balancing I have used Amazon AWS Elastic Load Balancer.

The question is not clear: are the endpoints different for each proxy service, or do they all call the same data service? If they are different, you need different endpoints. If all endpoints point to the same data service, then you do not need multiple copies of the same endpoint. Define one endpoint and reuse it in all proxy services: when you define a proxy service there is an option to specify its endpoint, where you can select an existing predefined endpoint.
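As a sketch, a named endpoint can be defined once and then referenced by key from every proxy service. The endpoint name, proxy name, and DSS URL below are placeholders, not values from the question:

```xml
<!-- Defined once, e.g. as a registry resource or local entry -->
<endpoint name="DSS_Endpoint" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="http://dss-host:9763/services/MyDataService"/>
</endpoint>

<!-- Each proxy service then references it by key instead of
     repeating the address endpoint inline -->
<proxy name="MyProxy" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <endpoint key="DSS_Endpoint"/>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
```

With this layout, changing the DSS address means editing one endpoint definition rather than every proxy.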

Related

WSO2 Integrator: How do Inbound Endpoints reroute requests to resources

I am trying to understand how the WSO2 micro integrator reroutes requests internally. I know that inbound endpoints basically enable services to be available on a different port. So, does it maintain a list of resources that are mapped to this inbound endpoint and simply act as a passthrough? For example:
I have API resource defined at: http://localhost:8290/healthcare/querydoctor/{category}.
Then create inbound endpoint at port 8505 with Dispatch File Pattern: /healthcare/querydoctor/.*.
At this point does it internally create a map that says http://localhost:8505/healthcare/querydoctor/.* = [http://localhost:8290/healthcare/querydoctor/{category}, ...]?
Also, I saw this in the wso2 documentation:
The HTTP inbound endpoint can bypass the inbound side axis2 layer and directly inject messages to a given sequence or API. For proxy services, messages are routed through the axis2 transport layer in a manner similar to normal transports.
What does bypassing the axis2 layer mean, and why is that being done in this case?
Basically, Axis2 is the default transport layer of MI. For example, if you invoke an API through port 8280, the request goes through the Axis2 layer and then into the integration layer. If you invoke an HTTP/S inbound endpoint, it does not go through that transport layer again; it is routed internally to the proxy or API that matches your dispatch pattern.
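In configuration terms, the inbound endpoint from the question would look roughly like this (the endpoint name is illustrative; the port and dispatch pattern are the ones given above):

```xml
<inboundEndpoint name="HealthcareInbound" protocol="http" suspend="false"
                 xmlns="http://ws.apache.org/ns/synapse">
    <parameters>
        <!-- Port this inbound endpoint listens on -->
        <parameter name="inbound.http.port">8505</parameter>
        <!-- Only requests whose path matches this pattern are
             dispatched to the deployed API -->
        <parameter name="dispatch.filter.pattern">/healthcare/querydoctor/.*</parameter>
    </parameters>
</inboundEndpoint>
```

Requests arriving on 8505 that match the pattern are handed to the matching API's sequences directly, rather than re-entering through the Axis2 transport on 8290.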
The following image will help you to understand the Inbound Endpoint architecture.

AWS HTTP API Gateway URL Based Routing

Okay so here is my requirement. I want to have end points for my customers like so:
https://customer-a.mydomain.com
https://customer-b.mydomain.com
Now, when we access the customer-a endpoint above, I expect AWS to route the request to customer A's ECS Fargate service which is load balanced by https://customer-a-elb.mydomain.com
Similarly, when we access the customer-b endpoint above, I expect AWS to route the request to customer B's ECS Fargate service which is load balanced by https://customer-b-elb.mydomain.com
The plan was, from my DNS, I would route everyone who accesses *.mydomain.com (wild card DNS entry) to the same API Gateway in AWS. And let the API Gateway determine which load balancer to route to depending on the base URL.
I was hoping this can be easily achieved using AWS API Gateway but so far I have not been able to find a solution to implement this. From what I understand, it is only possible to do path based routing (as opposed to base URL based routing which is really what I need in this case).
Any hints would be much appreciated.
CLARIFICATION:
Per my requirement, both customers need to access the same path /myservice but on different ELBs, e.g.:
https://customer-a.mydomain.com/service1 -> https://customer-a-elb.mydomain.com/service1
https://customer-b.mydomain.com/service1 -> https://customer-b-elb.mydomain.com/service1
I think path-based routing can't handle this scenario, as we can define only one route for a path.
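One common workaround is to put the host-to-backend decision in code, e.g. a small Lambda (or any shim the gateway fronts) that selects the backend ELB from the Host header and preserves the path. A minimal sketch of that mapping, using the hostnames from the question as illustrative values:

```python
# Map the incoming Host header to the per-customer backend ELB.
# Hostnames are the examples from the question, not real endpoints.
BACKENDS = {
    "customer-a.mydomain.com": "https://customer-a-elb.mydomain.com",
    "customer-b.mydomain.com": "https://customer-b-elb.mydomain.com",
}

def route(host: str, path: str) -> str:
    """Return the backend URL for a request, preserving the path.

    Raises ValueError for hosts that don't map to a known customer.
    """
    try:
        backend = BACKENDS[host.lower()]
    except KeyError:
        raise ValueError(f"unknown customer host: {host}")
    return backend + path

# Both customers request the same path but land on different ELBs:
# route("customer-a.mydomain.com", "/service1")
#   -> "https://customer-a-elb.mydomain.com/service1"
```

This sidesteps the one-route-per-path limitation, since the path no longer decides the target; the host does.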
API Gateway supports path-based routing, and you can configure which resources will receive incoming API requests based on the URL requested by the client. The linked example may help you.

how to add AWS API gateway with application load balancer for ECS?

How do I integrate API Gateway with an Application Load Balancer? I have integrated ECS with an ALB; now I want to add API Gateway in front, without Lambda, but I'm confused about how to connect API Gateway to the ALB.
What you're probably looking for is the HTTP Proxy Integration as described here
The basic idea is this:
Set up your API-Gateway with a greedy path like /{proxy+} on the ANY Method
Set the backend-endpoint to https://my-alb-endpoint.com/ecs-service-bla/{proxy}
(hopefully) success
To make this work, your backend needs to be exposed to the internet (or at least reachable for the API Gateway)!
You probably should keep your backend within a locked down VPC, but for this you're going to need to set up a private integration, which requires a Network Load balancer - this might be costlier, but would be the recommended approach.
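To illustrate what the greedy path does: API Gateway captures whatever matched {proxy+} and splices it into the backend URL. A rough Python model of that substitution, reusing the placeholder backend URL from the steps above:

```python
# Placeholder backend from the steps above, not a real endpoint.
BACKEND_TEMPLATE = "https://my-alb-endpoint.com/ecs-service-bla/{proxy}"

def forward_url(request_path: str) -> str:
    """Model of API Gateway's /{proxy+} -> {proxy} substitution.

    Everything after the leading slash is captured by the greedy
    path variable and substituted into the backend endpoint.
    """
    proxy = request_path.lstrip("/")
    return BACKEND_TEMPLATE.format(proxy=proxy)

# forward_url("/users/42/orders")
#   -> "https://my-alb-endpoint.com/ecs-service-bla/users/42/orders"
```

Because the ANY method is used with the greedy path, a single resource forwards every method and sub-path to the ALB unchanged.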
Yes, you can. Inside API Gateway, under integration type, select HTTP and then provide the complete path to the ALB with the endpoint resource.

Regional/Edge-optimized API Gateway VS Regional/Edge-optimized custom domain name

This does not make sense to me at all. When you create a new API Gateway you can specify whether it should be regional or edge-optimized. But then again, when you are creating a custom domain name for API Gateway, you can choose between the two.
Worst of all, you can mix and match them! You can have a regional custom domain name for an edge-optimized API Gateway, and it's absolutely meaningless to me!
Why these two can be regional/edge-optimized separately? And when do I want each of them to be regional/edge-optimized?
Why these two can be regional/edge-optimized separately?
Regional and Edge-Optimized are deployment options. Neither option changes anything fundamental about how the API is processed by the AWS infrastructure once the request arrives at the core of the API Gateway service or how the services behind API Gateway ultimately are accessed -- what changes is how the requests initially arrive at AWS and are delivered to the API Gateway core for execution. More about this, below.
When you use a custom domain name, your selected API stage is deployed a second time, on a second endpoint, which is why you have a second selection of a deployment type that must be made.
Each endpoint has the characteristics of its deployment type, whether regional or edge-optimized. The original deployment type of the API itself does not impact the behavior of the API if deployed with a custom domain name, and subsequently accessed using that custom domain name -- they're independent.
Typically, if you deploy your API with a custom domain name, you wouldn't continue to use the deployment endpoint created for the main API (e.g. xxxx.execute-api.{region}.amazonaws.com), so the initial selection should not matter.
And when do I want each of them to be regional/edge-optimized?
If you're using a custom domain name, then, as noted above, your original deployment selection for the API as a whole has no further impact when you use the custom domain.
Edge-optimized endpoints were originally the only option available. If you don't have anything on which to base your selection, this choice is usually a reasonable one.
This option routes incoming requests through the AWS "Edge Network," which is the CloudFront network, with its 100+ global edge locations. This does not change where the API Gateway core ultimately handles your requests -- they are still ultimately handled within the same region -- but the requests are routed from all over the world into the nearest AWS edge, and they travel from there on networks operated by AWS to arrive at the region where you deployed your API.
If the clients of your API Gateway stage are globally dispersed, and you are only deploying your API in a single region, you probably want an edge-optimized deployment.
The edge-optimized configuration tends to give you better global responsiveness, since it tends to reduce the impact of network round trips, and the quality of the transport is not subject to as many of the vagaries of the public Internet because the request covers the least amount of distance possible before jumping off the Internet and onto the AWS network. The TCP handshake and TLS are negotiated with the connecting browser/client across a short distance (from the client to the edge) and the edge network maintains keep-alive connections that can be reused, all of which usually works in your favor... but this optimization becomes a relative impairment when your clients are always (or usually) calling the API from within the AWS infrastructure, within the same region, since the requests need to hop over to the edge network and then back into the core regional network.
If the clients of your API Gateway stage are inside AWS and within the same region where you deployed the API (such as when the API is being called by other systems in EC2 within the region), then you will most likely want a regional endpoint. Regional endpoints route requests through less of the AWS infrastructure, ensuring minimal latency and reduced jitter when requests are coming from EC2 within the same region.
As a side-effect of routing through the edge network, edge-optimized endpoints also provide some additional request headers that you may find useful, such as CloudFront-Viewer-Country: XX which attempts to identify the two-digit country code of the geographic location of the client making the API request. Regional endpoints don't have these headers.
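For instance, with a Lambda proxy integration behind an edge-optimized endpoint, that header can be read from the incoming event. The handler below is a hypothetical sketch using the standard proxy event shape:

```python
def handler(event, context=None):
    """Return the caller's country code if the edge network supplied it."""
    headers = event.get("headers") or {}
    # Normalize header names to lowercase to be robust to casing.
    normalized = {k.lower(): v for k, v in headers.items()}
    country = normalized.get("cloudfront-viewer-country", "unknown")
    return {"statusCode": 200, "body": country}
```

On a regional endpoint the same handler would simply fall through to "unknown", since the header is never added.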
As a general rule, go with edge-optimized unless you find a reason not to.
What would be some reasons not to? As mentioned above, if you or others are calling the API from within the same AWS region, you probably want a regional endpoint. Edge-optimized endpoints can introduce some edge-case side-effects in more advanced or complicated configurations, because of the way they integrate into the rest of the infrastructure. There are some things you can't do with an edge-optimized deployment, or that are not optimal if you do:
if you are using CloudFront for other sites, unrelated to API Gateway, and CloudFront is configured for a wildcard alternate domain name, like *.example.com, then you can't use a subdomain from that wildcard domain, such as api.example.com, on an edge-optimized endpoint with a custom domain name, because API Gateway submits a request to the edge network on your behalf to claim all requests for that subdomain when they arrive via CloudFront, and CloudFront rejects this request since it represents an unsupported configuration when used with API Gateway, even though CloudFront supports it in some other circumstances.
if you want to provide redundant APIs that respond to the same custom domain name in multiple regions, and use Route 53 Latency-Based Routing to deliver requests to the region nearest to the requester, you can't do this with an edge-optimized custom domain, because the second API Gateway region will not be able to claim the traffic for that subdomain on the edge network, since the edge network requires exactly 1 target for any given domain name (subdomain). This configuration can be achieved using regional endpoints and Route 53 LBR, or can be achieved while leveraging the edge network by using your own CloudFront distribution, Lambda#Edge to select the target endpoint based on the caller's location, and API Gateway regional deployments. Note that this can't be achieved by any means if you need to support IAM authentication of the caller, because the caller needs to know the target region before signing and submitting the request.
if you want to use your API as part of a larger site that integrates multiple resources, is deployed behind CloudFront, and uses the path to route to different services -- for example, /images/* might route to an S3 bucket, /api/* might route to your API Gateway stage, and * (everything else) might route to an elastic load balancer in EC2 -- then you don't want to use an edge-optimized API, because this causes your requests to loop through the edge network twice (increasing latency) and causes some header values to be lost. This configuration doesn't break, but it isn't optimal. For this, a regional endpoint is desirable.

How to use API gateway to call another service running on an EC2

I have a confusing scenario. I am new to AWS. I have some services written in Java Jersey, deployed on an EC2 instance.
I am asked to use API Gateway to call these services rather than calling them directly. So, for instance, if I have a service such as:
http://domainname/article/2
I want the front end to first call the following endpoint of API gateway:
https://my-api-id.execute-api.region-id.amazonaws.com/stage-name/article
and then have the above API Gateway endpoint call my service.
What I am thinking is that there is an HTTP proxy option in the integration type when I create the API Gateway resource. I assume this fits my purpose, but I am not sure about it and I am totally confused.
Can anyone shed light on how I can achieve that?
In the API Gateway console, create a resource (e.g. /v1/user/info) and a method (e.g. GET, POST, etc.).
Select Integration Request.
You can then configure an HTTP proxy, a Lambda function, or any other AWS resource. In your case, you want this to point to your EC2-hosted URL.