I'd like to dynamically change some of the parameters of my inbound endpoint. More precisely, I have a RabbitMQ inbound endpoint, and I would like to dynamically specify server host name, port, queue name, etc. How can I do this?
BTW, if it cannot be done with the existing components, that's fine. It would also be great, or at least acceptable, if I could, for example, create a custom mediator that reads these properties from the message context and then somehow modifies the RabbitMQ inbound endpoint; the question is how.
Specifying inbound endpoint parameters as registry values.
Instead of specifying parameter values inline, you can also reference them as registry entries. The advantage of specifying a parameter value as a registry entry is that the same inbound endpoint configuration can be reused in different environments simply by changing the registry entry value.
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse" name="file" sequence="request" onError="fault" protocol="file" suspend="false">
    <parameters>
        ...
        <parameter name="transport.vfs.FileURI" key="conf:/repository/esb/esb-configurations/test"/>
        ...
    </parameters>
</inboundEndpoint>
Refer to [1] for a detailed explanation.
[1] - https://docs.wso2.com/display/ESB490/Working+with+Inbound+Endpoints
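Applied to the RabbitMQ inbound endpoint from the question, the same mechanism would look roughly like the sketch below. The registry paths are assumptions for illustration, and the parameter names should be checked against the WSO2 RabbitMQ inbound endpoint documentation.
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse" name="rabbitmq_inbound" sequence="request" onError="fault" protocol="rabbitmq" suspend="false">
    <parameters>
        <!-- Values are resolved from registry entries, so host, port and
             queue can differ per environment without touching this config. -->
        <parameter name="rabbitmq.server.host.name" key="conf:/rabbitmq/host"/>
        <parameter name="rabbitmq.server.port" key="conf:/rabbitmq/port"/>
        <parameter name="rabbitmq.queue.name" key="conf:/rabbitmq/queue"/>
    </parameters>
</inboundEndpoint>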
I have a custom domain set up in AWS API Gateway. My intention is to use "API mappings" to send traffic for different API versions to their respective API Gateways, e.g.:
GET https://example.com/v1/foo is sent to an API gateway "APIv1" ($default stage) via an API mapping on the custom domain with path="v1".
GET https://example.com/v2/foo is sent to an API gateway "APIv2" ($default stage) via an API mapping on the custom domain with path="v2" (not shown)
The HTTP APIs themselves are configured with a single route /{proxy+} and an integration that sends requests to a private ALB.
This setup works fine as far as routing traffic goes, but the problem is that when the request makes it to the actual application, the routes the application receives are like /v1/foo instead of just /foo, which is what the app is expecting.
I've played around with different route matching and parameter mapping (of which I can find almost no examples for my use case) to no avail.
I could change my app code to match the routes that AWS is sending, but the entire point of this was to handle versioning using my AWS stack and not app code. Do I have another option?
If you create a resource called /foo with a proxy resource inside it, then when you set up the integration you can define which path to pass, and {proxy} will contain just the part after /foo, ignoring the v1 entirely.
In the example configuration, everything before v1 is ignored and the integration path is rewritten to /api/{proxy}: a request of GET https://example.com/abc/xyz/v1/foo is forwarded to the backend as GET https://example.com/api/foo.
Update
It can't be done via a VPC Link, but you can use a public ALB almost as if it were private, as explained in the link below.
The page describes the approach for CloudFront, but the same technique is valid for API Gateway.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
It's totally possible. You just need to use parameter mappings for this. In the AWS console, add a parameter mapping on the integration that overwrites the request path with the value captured by the {proxy} path parameter.
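As a rough sketch of the same idea with the AWS CLI (the IDs are placeholders; verify the exact mapping syntax against the HTTP API parameter-mapping documentation):
# Overwrite the integration request path so the backend receives /foo
# instead of /v1/foo; $request.path.proxy is the value captured by /{proxy+}.
aws apigatewayv2 update-integration \
    --api-id <api-id> \
    --integration-id <integration-id> \
    --request-parameters 'overwrite:path=/$request.path.proxy'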
The following link shows how to configure a single AM with the IS.
https://docs.wso2.com/display/AM220/Configuring+WSO2+Identity+Server+as+a+Key+Manager
The parameter <ServerURL>https://${gateway-server-host}:{port}/services/</ServerURL> in the IS config.xml file defines the AM URL associated with it.
I want to link 2 different API managers to the same Identity Server WITHOUT LOAD BALANCER because the 2 AMs will be handled by 2 separate teams.
Note: It works with load balancer but that is not my requirement.
Thanks for your time :)
That config is only used to remove the gateway cache when user roles etc. are updated on the IS side, so configuring it to point to only a single APIM node will have minimal effect.
Edit: This might work. Add a new environment for the 2nd APIM node:
<Environments>
    <Environment type="production" api-console="true">
        <Name>Production Gateway</Name>
        <Description>Production Gateway Environment</Description>
        <ServerURL>https://localhost:9444/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://localhost:8281,https://localhost:8244</GatewayEndpoint>
    </Environment>
    <Environment type="hybrid" api-console="true">
        <Name>Production and Sandbox</Name>
        <Description>Hybrid Gateway Environment</Description>
        <ServerURL>https://localhost:9445/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://localhost:8282,https://localhost:8245</GatewayEndpoint>
    </Environment>
</Environments>
As asked in the title: does the communication between AWS services use SSL by default? For example, a Lambda function writes objects to S3, or a Lambda function reads data from Kinesis.
You can check AWS's Services and their supported protocols in the docs.
The SDK will always opt for HTTPS traffic unless told otherwise. You can override this by changing the endpoint attribute for the service you are working with; check the link above to fetch the right endpoint for the service you need.
You can see how to do so in the official docs for the SDK, like so:
// Explicitly setting an HTTPS endpoint (HTTPS is already the default protocol)
const s3 = new AWS.S3({endpoint: 'https://s3.us-east-1.amazonaws.com'});
Here's a link for S3's Javascript SDK:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
You also have the option to enable/disable SSL by setting the sslEnabled attribute to either true or false. Although the docs don't state its default value, it appears to default to true. Setting this flag only takes effect if the protocol has not already been specified in the endpoint attribute.
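For instance, a minimal sketch with the v2 JavaScript SDK (the bucket name is a placeholder):
// Assumes the AWS SDK for JavaScript v2.
const AWS = require('aws-sdk');

// sslEnabled appears to default to true; setting it explicitly makes the
// intent visible. It is ignored if the endpoint already names a protocol.
const s3 = new AWS.S3({ sslEnabled: true });

s3.putObject({ Bucket: 'my-bucket', Key: 'hello.txt', Body: 'hi' }, (err, data) => {
    if (err) console.error(err);
    else console.log('stored over HTTPS');
});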
And this is extracted from the Java SDK documentation:
Callers can pass in just the endpoint (ex: "ec2.amazonaws.com") or a full URL, including the protocol (ex: "https://ec2.amazonaws.com"). If the protocol is not specified here, the default protocol from this client's ClientConfiguration will be used, which by default is HTTPS.
I have found a thread in AWS's forums which states that SSL is always enabled for SDK requests:
If you specify AWS.config.sslEnabled = true, all endpoints will use SSL by default. There is no "fallback" to HTTP if HTTPS doesn't work or anything like that.
This can be overridden when creating an endpoint, for example by new AWS.S3({sslEnabled: false}); that is why it is only a default setting. If you do not explicitly say sslEnabled: false in your code, you can be assured that SSL is used everywhere. And finally, even when specifying sslEnabled: true, creating a new endpoint explicitly with a full URL, such as http://s3-eu-west-1.amazonaws.com, will override the sslEnabled setting. To say it another way, sslEnabled only affects whether http:// or https:// is automatically added to hosts specified without a protocol.
And the default is to use SSL for all Amazon services, so adding a bucket policy that limits connections to SSL only will just block explicit attempts to access S3 without HTTPS (or users of different SDKs or other access methods).
Hope this helps.
I have two physical nodes of API Manager configured as explained in this guide: two all-in-one boxes (all components in one machine) behind a load balancer.
https://docs.wso2.com/display/AM220/Configuring+an+Active-Active+Deployment#ConfiguringanActive-ActiveDeployment-Step4-ConfigurethePublisherwiththeGateway
Following is my configuration for the data publisher in api-manager.xml:
<DataPublisher>
    <Enabled>true</Enabled>
    <!-- Event receivers on both Traffic Manager nodes -->
    <ReceiverUrlGroup>{tcp://172.31.31.93:9611},{tcp://172.31.16.52:9611}</ReceiverUrlGroup>
    <AuthUrlGroup>{ssl://172.31.31.93:9711},{ssl://172.31.16.52:9711}</AuthUrlGroup>
    <DataPublisherPool>
        <MaxIdle>1000</MaxIdle>
        <InitIdleCapacity>200</InitIdleCapacity>
    </DataPublisherPool>
    <DataPublisherThreadPool>
        <CorePoolSize>200</CorePoolSize>
        <MaxmimumPoolSize>1000</MaxmimumPoolSize>
        <KeepAliveTime>200</KeepAliveTime>
    </DataPublisherThreadPool>
</DataPublisher>
I have set up an application with an application throttling policy of 10 requests per minute. When I put both instances behind a load balancer and fire requests, as long as both nodes are up I am allowed to make 20 requests per minute. However, if I bring one node down, it allows only 10 requests per minute. I suspect that throttle data is not being published by the gateway to the Traffic Manager.
What configuration might I be missing to enable that? The only thing that deviates from the documented setup in my case is that I don't point both publishers to a single gateway node. So both nodes have the following APIGateway config in api-manager.xml:
<APIGateway>
    <!-- The environments to which an API will be published -->
    <Environments>
        <!-- Environments can be of different types. Allowed values are 'hybrid', 'production' and 'sandbox'.
             An API deployed on a 'production' type gateway will only support production keys.
             An API deployed on a 'sandbox' type gateway will only support sandbox keys.
             An API deployed on a 'hybrid' type gateway will support both production and sandbox keys. -->
        <!-- The api-console element specifies whether the environment should be listed in API Console or not -->
        <Environment type="hybrid" api-console="true">
            <Name>Production and Sandbox</Name>
            <Description>This is a hybrid gateway that handles both production and sandbox token traffic.</Description>
            <!-- Server URL of the API gateway -->
            <ServerURL>https://localhost:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
            <!-- Admin username for the API gateway -->
            <Username>${admin.username}</Username>
            <!-- Admin password for the API gateway -->
            <Password>${admin.password}</Password>
            <!-- Endpoint URLs for the APIs hosted in this API gateway -->
            <GatewayEndpoint>http://${carbon.local.ip}:${http.nio.port},https://${carbon.local.ip}:${https.nio.port}</GatewayEndpoint>
        </Environment>
    </Environments>
</APIGateway>
Any help will be appreciated. My goal is to ensure that, irrespective of which gateway node a request is redirected to, throttling stays consistent with the subscription across nodes.
Currently, it seems as if each node throttles individually.
I am working with WSO2 ESB 4.7.0 and WSO2 DSS 3.0.0.
I have a number of data services in my DSS.
When creating a proxy service in the ESB, I generally send the payload to the address endpoint of the particular data service in the DSS and execute the proxy.
But what I want to do is create one endpoint in the ESB, configure all my DSS address endpoints in it, and use it in all my proxy services.
Some of my DSS address endpoints are shown below:
localhost:9764/services/Get_details/
localhost:9764/services/Get_geodetails/
localhost:9764/services/muser_DataService/
How can I create a default endpoint with these address endpoints in my ESB?
You can use a single proxy in the ESB and call different DSS endpoints based on the message contents. Please go through the mediators under the Filter category.
For your scenario, you could use the Switch mediator: check for different conditions and send the message to the appropriate endpoint, as sketched below.
For example, see the Content-Based Router pattern.
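A minimal sketch of such a Switch mediator in the proxy's inSequence (the XPath expression and case values are assumptions for illustration; the addresses are the DSS endpoints from the question):
<switch source="//request/serviceType">
    <!-- Route to the matching DSS data service based on message content -->
    <case regex="details">
        <send>
            <endpoint>
                <address uri="http://localhost:9764/services/Get_details/"/>
            </endpoint>
        </send>
    </case>
    <case regex="geodetails">
        <send>
            <endpoint>
                <address uri="http://localhost:9764/services/Get_geodetails/"/>
            </endpoint>
        </send>
    </case>
    <default>
        <send>
            <endpoint>
                <address uri="http://localhost:9764/services/muser_DataService/"/>
            </endpoint>
        </send>
    </default>
</switch>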
You may also use a mediator like PayloadFactory to transform the message depending on the target endpoint.
I hope this helps.