I have a distributed setup (WSO2 APIM 1.9.1) like this:
2 servers running the Store and Publisher in a cluster (servers A and B);
2 servers running the gateway workers and Key Manager in a cluster (servers C and D);
1 server working as the gateway manager (server E);
1 server running BAM (server F);
2 PostgreSQL servers in a cluster.
Everything is configured and works well, but when I register an API on "A", that API is not served by server "C" or "D".
When I call the API with curl, this is the error:
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>404</am:code>
<am:type>Status report</am:type>
<am:message>Not Found</am:message>
<am:description>
The requested resource (/test/1/ping) is not available.
</am:description>
</am:fault>
When I look in the Carbon console on "C" or "D" (Main > Metadata > List > APIs), the API is there. I don't know why this error occurs.
Did you set up the Deployment Synchronizer? See SVN-Based Deployment Synchronizer for Carbon 4.2.0-Based Products.
When you publish an API from the Publisher, the relevant Synapse configuration for handling requests to that API is created on the manager node (see AM_HOME/repository/deployment/server/synapse-configs/default/api on the manager node; you will find an XML file named after the API). Since the gateway worker nodes handle the requests, these files must also exist on the worker nodes. The Deployment Synchronizer is used to move these configurations to the worker nodes automatically. If you do not want an SVN-based synchronizer, you can do this manually by copying the contents of the synapse-configs folder from the manager node to all the worker nodes.
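For reference, the Deployment Synchronizer is configured in AM_HOME/repository/conf/carbon.xml on the gateway manager and worker nodes. A minimal sketch of that section, assuming an SVN-based synchronizer (the SVN URL and credentials are placeholders; on the worker nodes AutoCommit should be false so that only the manager commits):

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>        <!-- true on the manager, false on the workers -->
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/apim</SvnUrl>   <!-- placeholder -->
    <SvnUser>svnuser</SvnUser>                           <!-- placeholder -->
    <SvnPassword>svnpassword</SvnPassword>               <!-- placeholder -->
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>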
Specs:
The serverless Amazon MSK that's in preview.
t2.xlarge EC2 instance with Amazon Linux 2
Installed Kafka from https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
openjdk version "11.0.13" 2021-10-19 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.13+8-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.13+8-LTS, mixed mode, sharing)
Gradle 7.3.3
aws-msk-iam-auth (https://github.com/aws/aws-msk-iam-auth), built successfully.
I also tried adding IAM authentication information, as recommended by the Amazon MSK Library for AWS Identity and Access Management. It says to add the following in config/client.properties:
# Sets up TLS for encryption and SASL for authN.
security.protocol = SASL_SSL
# Identifies the SASL mechanism to use.
sasl.mechanism = AWS_MSK_IAM
# Binds SASL client implementation.
# sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required;
# Encapsulates constructing a SigV4 signature based on extracted credentials.
# The SASL client bound by "sasl.jaas.config" invokes this class.
sasl.client.callback.handler.class = software.amazon.msk.auth.iam.IAMClientCallbackHandler
# Binds SASL client implementation. Uses the specified profile name to look for credentials.
sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="kafka-client";
And kafka-client is the IAM role attached to the EC2 instance as an instance profile.
Networking: I used VPC Reachability Analyzer to confirm that the security groups are configured correctly and the EC2 instance I'm using as a Producer can reach the serverless MSK cluster.
What I'm trying to do: create a topic.
How I'm trying: bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic quickstart-events --bootstrap-server boot-zclcyva3.c2.kafka-serverless.us-east-2.amazonaws.com:9098
Result:
Error while executing topic command : Timed out waiting for a node assignment. Call: createTopics
[2022-01-17 01:46:59,753] ERROR org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
I also tried the plaintext port 9092. (9098 is the IAM-authentication port in MSK, and serverless MSK uses IAM authentication by default.)
All the other posts I found on SO about this node assignment error didn't include MSK. I tried suggestions like uncommenting the listener setting in server.properties, but that didn't change anything.
Installing kcat for troubleshooting didn't work for me either, since there's no out-of-the-box installation for the yum package manager, which Amazon Linux 2 uses, and since the build-from-source instructions failed for me at "checking for libcurl (by compile)... failed (fail)".
The Question: Any other tips on solving this "node assignment" error?
The documentation has been updated recently; I was able to follow it end to end without any issue (the IAM policy it shows is now correct):
https://docs.aws.amazon.com/msk/latest/developerguide/serverless-getting-started.html
The created properties file is not automatically used; your command needs to include --command-config client.properties, where this properties file is the one documented in the MSK docs on the linked IAM page (see the example command after the extract below).
Extract...
ssl.truststore.location=<PATH_TO_TRUST_STORE_FILE>
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
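For example, re-running the create command from the question with the properties file passed explicitly (the bootstrap server is the one from the question; adjust the path to wherever your client.properties lives):

bin/kafka-topics.sh --create --topic quickstart-events --partitions 1 --replication-factor 1 \
  --bootstrap-server boot-zclcyva3.c2.kafka-serverless.us-east-2.amazonaws.com:9098 \
  --command-config config/client.properties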
Alternatively, if the plaintext port didn't work either, then you have other networking issues.
Beyond these steps, I suggest reaching out to MSK support and asking them to update the "Create a Topic" page to no longer use ZooKeeper, keeping in mind that Kafka 3.0 is not (yet) supported.
I have working code similar to this connecting to Google IoT with the Paho client.
Since I am in a Spring Boot reactive application, I would like to use the HiveMQ MQTT client, but I can't find the right setup; I keep getting the following error message:
com.hivemq.client.mqtt.exceptions.ConnectionClosedException: Server closed connection without DISCONNECT.
The current code I use:
hiveClient = MqttClient.builder()
.identifier(UUID.randomUUID().toString())
.serverHost("mqtt.googleapis.com")
.serverPort(443)
.useMqttVersion3()
.sslWithDefaultConfig()
.simpleAuth(
Mqtt3SimpleAuth.builder()
.username("unused")
.password(StandardCharsets.UTF_8.encode("// a token string generation that works fine with Paho"))
.build()
)
.build()
.toBlocking();
hiveClient.connect(); // Error
It looks like the identifier (client ID) should be set to something other than a UUID. The documentation indicates the client ID should be formed as the following path:
projects/PROJECT_ID/locations/REGION/registries/REGISTRY_ID/devices/DEVICE_ID
Note that all of the requirements for the Google Cloud IoT Core MQTT device bridge are strict, so also verify that HiveMQ is configured as follows:
MQTT 3.1.1
TLS 1.2
Publish to /devices/DEVICE_ID/events or /devices/DEVICE_ID/state
Subscribe to /devices/DEVICE_ID/config or /devices/DEVICE_ID/commands/#
QoS 0 or 1
Note that if you do not adhere to the requirements, your device gets disconnected. Additional information on the disconnect reason may be available in the logging for your registry visible on the Cloud Console for IoT.
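Putting that together, a minimal sketch of the builder with a proper client ID (the project, region, registry, and device IDs are placeholders, and jwt stands for the same token string you already generate for Paho):

String clientId = String.format(
        "projects/%s/locations/%s/registries/%s/devices/%s",
        "my-project", "europe-west1", "my-registry", "my-device"); // placeholder IDs

Mqtt3BlockingClient hiveClient = MqttClient.builder()
        .identifier(clientId)                 // full device path, not a UUID
        .serverHost("mqtt.googleapis.com")
        .serverPort(8883)                     // 443 also works on the MQTT bridge
        .useMqttVersion3()                    // IoT Core expects MQTT 3.1.1
        .sslWithDefaultConfig()               // TLS 1.2
        .simpleAuth(
                Mqtt3SimpleAuth.builder()
                        .username("unused")
                        .password(StandardCharsets.UTF_8.encode(jwt)) // device JWT
                        .build())
        .build()
        .toBlocking();

hiveClient.connect();
hiveClient.publishWith()
        .topic("/devices/my-device/events")   // or /devices/my-device/state
        .qos(MqttQos.AT_LEAST_ONCE)
        .payload("hello".getBytes(StandardCharsets.UTF_8))
        .send();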
We are evaluating WSO2 API Manager and DAS (latest versions) for metering purposes.
Environment:
WSO2 API Manager runs as a 2-node active-active deployment using Hazelcast (4 cores, 8 GB RAM), and
DAS runs as a single node.
Both connect to MySQL as the backend RDBMS.
DAS and MySQL share the same server, which has 12 cores and 24 GB RAM. We dedicated 12 GB to MySQL.
We started the test at a rate of 750 reads/sec, and everything went well for 27 hours until the metering count reached 72 million, after which we got the errors below.
At the API Manager: [PassThroughMessageProcessor-130] WARN DataPublisher Event queue is full, unable to process the event for endpoint Group.
At DAS (about 10 minutes later): INFO {com.leansoft.bigqueue.page.MappedPageFactoryImpl} - Page file /$DAS_HOME/repository/data/index_staging_queues/4P/index/page-12.dat was just deleted. {com.leansoft.bigqueue.page.MappedPageFactoryImpl}
Have we reached a limit of the infrastructure setup, or is this a performance issue with DAS? Can you please help us?
You need to tune the server performance of both DAS and API Manager.
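As one concrete starting point (treat this as an assumption to verify against your versions): the "Event queue is full" warning means the data agent's in-memory publishing queue on the API Manager side is filling faster than DAS consumes events. In APIM/DAS releases of that era the agent queue is sized in repository/conf/data-bridge/data-agent-config.xml, so raising it, alongside DAS indexing and JVM tuning, is a typical first step; values below are illustrative only:

<!-- data-agent-config.xml (Thrift agent) - illustrative values, not defaults -->
<QueueSize>65536</QueueSize>
<BatchSize>200</BatchSize>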
I wrote these lines of code to programmatically access and modify the load-balanced endpoint configuration saved in my ESB (4.7.0) local registry. [In short, I add a new address endpoint to the load-balance endpoint's list.]
SynapseConfiguration sc = synapseMsgContext.getConfiguration();
LoadbalanceEndpoint le =(LoadbalanceEndpoint) sc.getEndpoint("test");
List<Endpoint>list = le.getChildren();
AddressEndpoint ad = new AddressEndpoint();
EndpointDefinition def = new EndpointDefinition();
def.setAddress("http://172.17.54.101:8083/RestService/rest/servizio");
def.setAddressingOn(false);
def.setTimeoutAction(100);
ad.setDefinition(def);
list.add(ad);
le.setChildren(list);
sc.updateEndpoint("test", le);
synapseMsgContext.setConfiguration(sc);
With these lines of code, the endpoint updates are held in memory and are lost when I restart the ESB, so the update lasts only until the ESB is stopped.
How can I make these updates persistent? I mean an actual update of the endpoint's XML configuration file.
You have to check the endpoint serializer and factory:
http://svn.wso2.org/repos/wso2/carbon/platform/branches/turing/dependencies/synapse/2.1.2-wso2v3/modules/core/src/main/java/org/apache/synapse/config/xml/endpoints/
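As a rough sketch of the idea (untested; the artifact path is an assumption based on where a named endpoint called "test" is normally deployed): serialize the updated endpoint with the Synapse EndpointSerializer and overwrite its XML artifact so it is reloaded on the next start.

// Serialize the in-memory endpoint back to XML and persist it to disk.
// The path is assumed; adjust it to where your "test" endpoint artifact lives.
OMElement endpointXml = EndpointSerializer.getElementFromEndpoint(le);
File artifact = new File(
        "repository/deployment/server/synapse-configs/default/endpoints/test.xml");
try (OutputStream out = new FileOutputStream(artifact)) {
    endpointXml.serialize(out);
}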
I have an avahi (zeroconf, dnssd, bonjour) service. I want the service to be able to notify the clients when it has new data so the clients can then connect and query for the updated information.
What type of message should the service publish, and how is this done with the Avahi API (the service is written in C++)?
I don't know what C++ API you are referring to, but this is how you do it in the C layer. You can use the following functions in Avahi to update the TXT record of the service:
avahi_entry_group_update_service_txt (AvahiEntryGroup *g, ...)
avahi_entry_group_update_service_txt_strlst (AvahiEntryGroup *g, ...)
Listening clients will receive a service updated event.
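A minimal sketch in C, assuming the service was registered in an entry group as "_myservice._tcp" with the name "My Service" (both placeholders). Bumping a version key in the TXT record is enough for browsing/resolving clients to notice the change and re-query your service:

#include <stdio.h>
#include <avahi-client/publish.h>
#include <avahi-common/error.h>

/* Replace the TXT record of an already-committed service; listening
 * clients receive an update event for it. */
static int notify_new_data(AvahiEntryGroup *group, unsigned version) {
    char txt[32];
    snprintf(txt, sizeof(txt), "version=%u", version);

    int ret = avahi_entry_group_update_service_txt(
        group,
        AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC, 0,
        "My Service",        /* must match the registered service name */
        "_myservice._tcp",   /* must match the registered service type */
        NULL,                /* default domain */
        txt, NULL);          /* new TXT entries, NULL-terminated */

    if (ret < 0)
        fprintf(stderr, "TXT update failed: %s\n", avahi_strerror(ret));
    return ret;
}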