I have two physical API Manager nodes configured as explained in this guide: two all-components-in-one-machine boxes behind a load balancer.
https://docs.wso2.com/display/AM220/Configuring+an+Active-Active+Deployment#ConfiguringanActive-ActiveDeployment-Step4-ConfigurethePublisherwiththeGateway
The following is my configuration for the data publisher in api-manager.xml:
<DataPublisher>
<Enabled>true</Enabled>
<ReceiverUrlGroup>{tcp://172.31.31.93:9611},{tcp://172.31.16.52:9611}</ReceiverUrlGroup>
<AuthUrlGroup>{ssl://172.31.31.93:9711},{ssl://172.31.16.52:9711}</AuthUrlGroup>
<DataPublisherPool>
<MaxIdle>1000</MaxIdle>
<InitIdleCapacity>200</InitIdleCapacity>
</DataPublisherPool>
<DataPublisherThreadPool>
<CorePoolSize>200</CorePoolSize>
<MaxmimumPoolSize>1000</MaxmimumPoolSize>
<KeepAliveTime>200</KeepAliveTime>
</DataPublisherThreadPool>
</DataPublisher>
I have set up an application with an application-level throttling policy of 10 requests per minute. When I put both instances behind a load balancer and fire requests, as long as both nodes are up I am allowed to make 20 requests per minute. However, if I bring one node down, it allows me to make only 10 requests per minute. I suspect that traffic data is not being published by the gateways to the traffic manager.
What configuration could I be missing that I need to check to enable this? The only thing that deviates from the documented configuration in my case is that I don't point both publishers to a single gateway node. So both nodes have the following APIGateway config in api-manager.xml:
<APIGateway>
<!-- The environments to which an API will be published -->
<Environments>
<!-- Environments can be of different types. Allowed values are 'hybrid', 'production' and 'sandbox'.
An API deployed on a 'production' type gateway will only support production keys
An API deployed on a 'sandbox' type gateway will only support sandbox keys
An API deployed on a 'hybrid' type gateway will support both production and sandbox keys. -->
<!-- api-console element specifies whether the environment should be listed in API Console or not -->
<Environment type="hybrid" api-console="true">
<Name>Production and Sandbox</Name>
<Description>This is a hybrid gateway that handles both production and sandbox token traffic.</Description>
<!-- Server URL of the API gateway -->
<ServerURL>https://localhost:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
<!-- Admin username for the API gateway. -->
<Username>${admin.username}</Username>
<!-- Admin password for the API gateway.-->
<Password>${admin.password}</Password>
<!-- Endpoint URLs for the APIs hosted in this API gateway.-->
<GatewayEndpoint>http://${carbon.local.ip}:${http.nio.port},https://${carbon.local.ip}:${https.nio.port}</GatewayEndpoint>
</Environment>
</Environments>
</APIGateway>
Any help will be appreciated. My goal here is to ensure that, irrespective of which gateway node a request is routed to, throttling is applied consistently across nodes as defined in the subscription.
Currently, it seems as if each node throttles individually.
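For reference, the same active-active guide also points the throttle-decision JMS client at both traffic managers. Below is a minimal sketch of how I understand that part of the ThrottlingConfigurations block should look; the 5672 port, the credentials, and the exact failover URL syntax are assumptions taken from the default api-manager.xml and the linked guide, not something I have verified:
<JMSConnectionDetails>
<Enabled>true</Enabled>
<JMSConnectionParameters>
<transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
<transport.jms.DestinationType>topic</transport.jms.DestinationType>
<java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
<!-- Both traffic manager nodes listed in the failover broker list; see the guide linked above for the exact quoting -->
<connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientid/carbon?failover='roundrobin'%26brokerlist='tcp://172.31.31.93:5672;tcp://172.31.16.52:5672'</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>
</JMSConnectionDetails>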
Quotas are associated with "Usage Plans", but the "Usage Plan" applies to all endpoints in a Stage.
Is there any way to define different quotas for different endpoints of my API using AWS API Gateway?
Example for one user with an APIKey:
POST myapi.domain.com/v1/articles --> 100 requests/month
GET myapi.domain.com/v1/articles --> 1,000 requests/month
POST myapi.domain.com/v1/comments --> 1,000 requests/month
GET myapi.domain.com/v1/comments --> no limit
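For context, this is roughly how I attach a quota today with the JavaScript SDK (a sketch; the API id, stage, and key id are placeholders), and as far as I can tell the quota is bound to the stage as a whole rather than to an individual method:
// usage-plan.js - minimal sketch with the AWS SDK for JavaScript (v2)
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

async function createMonthlyQuotaPlan() {
  // Placeholder REST API id and deployed stage.
  const plan = await apigateway.createUsagePlan({
    name: 'articles-writers',
    apiStages: [{ apiId: 'abc123defg', stage: 'v1' }],
    quota: { limit: 100, period: 'MONTH' }   // applies to every method in the stage
  }).promise();

  // Associate an existing API key with the plan.
  await apigateway.createUsagePlanKey({
    usagePlanId: plan.id,
    keyId: 'myApiKeyId',
    keyType: 'API_KEY'
  }).promise();
}

createMonthlyQuotaPlan().catch(console.error);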
I am using API Gateway to call an API on an EC2 instance. When I tested the API, I got the output below with status 200, but it should show a "Success" message. Why is this happening? Do I need to change anything to get the proper output when doing the GET request?
<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Errors><Error>
<Code>InvalidAction</Code>
<Message>The action deploy is not valid for this web service.</Message>
</Error></Errors>
<RequestID>4c2aeb89-1dfe-4a95-a9fd-7af28b49c708</RequestID>
</Response>
Steps I followed to create the API:
I created a REST API in API Gateway as shown in the figure.
After that, I added the API endpoints using resources and methods and connected them to the AWS EC2 instance as shown in the figure.
I created the execution role ARN in IAM and added all the roles shown in the figure.
Later, when I test the API endpoint by providing the query parameters, I get this error.
You are using API Gateway as a proxy to your EC2 Docker app. That is not an "AWS Service". You must choose and set up the HTTP integration type. "AWS Service" would be used if you were creating an API proxy to the actual EC2 service itself (e.g. launch an instance, stop an instance, ...).
Depending on the nature of your EC2 Docker app, if you just set up plain HTTP (http://<default-ec2-url>), traffic will go over an unencrypted (non-HTTPS) connection. Be mindful of that: if your API carries sensitive information, this is a security issue.
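If it helps, switching the method to a plain HTTP integration looks roughly like this with the JavaScript SDK (a sketch; the REST API id, resource id, and EC2 URL are placeholders, and the same change can be made on the console's Integration Request screen):
// Re-point the GET method at the EC2-hosted app with an HTTP proxy integration.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

apigateway.putIntegration({
  restApiId: 'abc123defg',          // placeholder REST API id
  resourceId: 'a1b2c3',             // placeholder resource id
  httpMethod: 'GET',
  type: 'HTTP_PROXY',               // plain HTTP integration to your own backend, not 'AWS'
  integrationHttpMethod: 'GET',
  uri: 'http://ec2-xx-xx-xx-xx.compute.amazonaws.com:8080/'
}).promise()
  .then(() => console.log('integration updated'))
  .catch(console.error);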
The following link shows how to configure a single AM with the IS.
https://docs.wso2.com/display/AM220/Configuring+WSO2+Identity+Server+as+a+Key+Manager
The <ServerURL>https://${gateway-server-host}:{port}/services/</ServerURL> parameter in the IS config.xml file defines the AM URL associated with it.
I want to link 2 different API Managers to the same Identity Server WITHOUT A LOAD BALANCER, because the 2 AMs will be handled by 2 separate teams.
Note: It works with load balancer but that is not my requirement.
Thanks for your time :)
That config is only used to clear the gateway cache when user roles etc. are updated on the IS side, so configuring it to point to only a single APIM will have a very minimal effect.
Edit: This might work. Add a new environment for the 2nd APIM node.
<Environments>
<Environment type="production" api-console="true">
<Name>Production Gateway</Name>
<Description>Production Gateway Environment</Description>
<ServerURL>https://localhost:9444/services/</ServerURL>
<Username>admin</Username>
<Password>admin</Password>
<GatewayEndpoint>http://localhost:8281,https://localhost:8244</GatewayEndpoint>
</Environment>
<Environment type="hybrid" api-console="true">
<Name>Production and Sandbox</Name>
<Description>Hybrid Gateway Environment</Description>
<ServerURL>https://localhost:9445/services/</ServerURL>
<Username>admin</Username>
<Password>admin</Password>
<GatewayEndpoint>http://localhost:8282,https://localhost:8245</GatewayEndpoint>
</Environment>
</Environments>
I'm experimenting with AWS API Gateway as an authentication solution for a private HTTP service and I'm having trouble getting it set up.
I have a dummy node.js service running within a private VPC with an internal network load balancer in front (note: it's healthy).
I then set up an API and configured a VPC link following this guide: https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-private-integration.html
The resource URL of the get method points to the DNS name of the internal network load balancer.
When I navigate to the API Gateway URL (e.g. https://ABC123.execute-api.us-west-2.amazonaws.com/prod/), I'm able to get the HTML response of the server (see below). However, the JavaScript files return a 403 Forbidden. I expect this has something to do with the path setup.
Rather than just returning a single response (i.e. a JSON API response), is there a way to configure API Gateway to return a full-blown app that includes HTML, CSS, and JS resources?
Here's the dummy code that runs the app:
// server.js
var express = require('express'),
app = express();
app.use('/', express.static('/home/ubuntu/static/'));
app.listen(8080);
The index page - this is the page that loads when I hit the API Gateway:
<!-- static/index.html -->
<html>
<body>
<h1>Hello world!</h1>
<p>This is some text.</p>
<script src="./app.js"></script>
</body>
</html>
This JS fails to load (403) and does not execute.
// static/app.js
console.log('Logging from app.js');
alert('WOOT!');
You can validate that your function is being called (and correctly configured) in the Lambda panel on AWS. If your API Gateway is correctly configured, it should appear as a trigger to your Lambda function:
After that, you can see your Lambda's stats and logs on the Monitoring tab:
You need an event handler function that forwards the API Gateway request to the corresponding resource on your Express application, which means you need to implement some sort of middle layer to correctly map API Gateway requests to your Express methods/paths/parameters/files.
AWS Lambda functions are called when an event occurs on services such as API Gateway, SNS, Alexa Skills, etc. I'd try to set up aws-serverless-express, or use some other service such as Elastic Beanstalk or ECS.
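A minimal sketch of that middle layer with aws-serverless-express (assuming server.js is changed to export the Express app instead of calling listen; the file names here are illustrative):
// lambda.js - lets API Gateway proxy events reach the Express app
const awsServerlessExpress = require('aws-serverless-express');
const app = require('./server');   // assumes server.js does module.exports = app instead of app.listen(8080)

const server = awsServerlessExpress.createServer(app);

// With a {proxy+} resource on the API, every path (index.html, app.js, ...) hits this handler,
// and the library translates the event into a normal HTTP request for Express.
exports.handler = (event, context) => awsServerlessExpress.proxy(server, event, context);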
I have configured 2 WSO2 API gateways (say gw1 and gw2) behind a load balancer (say lb1). I have configured the publisher on another node (say pub1). In pub1's /etc/hosts file, the API gateway URL is mapped to lb1. Now, whenever I update or add a new API on pub1, it does not get immediately reflected on both gw1 and gw2; it gets reflected on only one of the two. Is there a way to programmatically force API Manager to refresh the list of published APIs?
You need to use the deployment synchronizer to sync the artifacts across the gateway nodes. In your scenario, one gateway will need to be treated as the manager whilst the other acts as the worker node.
Please refer to the documentation here on how to configure the deployment synchronizer.
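For reference, a sketch of the relevant <DeploymentSynchronizer> block in carbon.xml on the manager node (the SVN URL and credentials below are placeholders; the worker node would keep AutoCommit set to false):
<DeploymentSynchronizer>
<Enabled>true</Enabled>
<AutoCommit>true</AutoCommit>
<AutoCheckout>true</AutoCheckout>
<RepositoryType>svn</RepositoryType>
<SvnUrl>http://svn.example.com/repos/depsync</SvnUrl>
<SvnUser>svnuser</SvnUser>
<SvnPassword>svnpassword</SvnPassword>
<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>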