WSO2 BPS: H2 DB size

I have a WSO2 BPS environment running on the default H2 database. For the past few days the DB has been growing out of control and is now over 20 GB. Is there any way to avoid this behavior, or a command to compact or clean this file?
Any assistance would be greatly appreciated.

Execute the process cleanup task by configuring the scheduler in bps.xml.
Documentation link 1: Process Instance Cleanup
When BPS executes processes, a lot of events are generated by default, which increases the database size. You can limit event generation by configuring event filtering; this can be done at the process level and the scope level.
Documentation link 2: Filtering Events
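If the file has already grown, H2 can also reclaim the dead space itself: the SHUTDOWN COMPACT statement rewrites the database file. Below is a minimal JDBC sketch of running it; the file path and the wso2carbon credentials assume a default Carbon installation (adjust them to your setup), and the BPS server must be stopped first because the embedded H2 file can only be opened by one process at a time.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Compacts the H2 database file by issuing SHUTDOWN COMPACT.
public class CompactH2 {
    public static void main(String[] args) throws Exception {
        // Path and credentials assume a default Carbon install; adjust as needed.
        String url = "jdbc:h2:/opt/wso2bps/repository/database/WSO2CARBON_DB";
        try (Connection conn = DriverManager.getConnection(url, "wso2carbon", "wso2carbon");
                Statement stmt = conn.createStatement()) {
            stmt.execute("SHUTDOWN COMPACT"); // rewrites the file, reclaiming unused pages
        }
    }
}

The same statement can also be issued from the H2 console. Compacting recovers the space already lost; the cleanup task and event filtering above should keep the file from growing back.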


Google Cloud IoT: a few config update messages go missing when sending config updates frequently from Cloud Functions to a device

I am using config updates and Cloud Functions for communication between a mobile application and an ESP32 device, following the example here. However, when I send config update messages frequently, some of them are not delivered; say, out of 5, only 3 config update messages go through. I have two questions:
1) How frequently can we send config updates to avoid missing updates?
2) Is there any alternative way to communicate between Cloud Functions and the IoT device?
According to the docs: [IoT docs]
Configuration updates are limited to 1 update per second, per device.
However, for best results, device configuration should be updated much
less often — at most, once every 10 seconds.
The update rate is calculated as the time between the most recent
server acknowledgment and the next update request.
If your operations are mostly configuration updates, I cannot think of an alternative that would perform better.
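If you stay with config updates, the practical fix is to throttle (and ideally coalesce) them on the sending side so you never exceed the documented rate. Here is a rough sketch of that pattern; sendConfigUpdate is a placeholder for your actual call (for example, the modifyCloudToDeviceConfig method of the Cloud IoT Core API).

import java.util.concurrent.TimeUnit;

// Spaces config updates at least 10 seconds apart, per the documented guidance.
public class ConfigUpdateThrottle {
    private static final long MIN_INTERVAL_MS = TimeUnit.SECONDS.toMillis(10);
    private long lastSentAt = 0;

    public synchronized void send(byte[] config) throws InterruptedException {
        long wait = lastSentAt + MIN_INTERVAL_MS - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait); // back off instead of silently dropping the update
        }
        sendConfigUpdate(config);
        lastSentAt = System.currentTimeMillis();
    }

    private void sendConfigUpdate(byte[] config) {
        // placeholder: call the Cloud IoT Core API here
    }
}

Since the device only sees the latest configuration anyway, keeping just the most recent pending update and dropping intermediate ones is usually even better than queueing all five.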

Amazon Elasticache Failover

We have been using AWS ElastiCache for about 6 months now without any issues. Every night a Java app runs that flushes DB 0 of our Redis cache and then repopulates it with updated data. However, on 3 occasions between July 31 and August 5, our DB was successfully flushed but we were then unable to write the new data to the database.
We were getting the following exception in our application:
redis.clients.jedis.exceptions.JedisDataException: redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave.
When we look at the cache events in ElastiCache we can see:
Failover from master node prod-redis-001 to replica node prod-redis-002 completed
We have not been able to diagnose the issue, and since the app was running fine for the past 6 months, I am wondering if it is related to a recent ElastiCache release from the 30th of June.
https://aws.amazon.com/releasenotes/Amazon-ElastiCache
We have always been writing to our master node and we only have 1 replica node.
If someone could offer any insight it would be much appreciated.
EDIT: This seems to be an intermittent problem. Some days it will fail; other days it runs fine.
We have been in contact with AWS support for the past few weeks and this is what we have found.
Most Redis commands are synchronous, including the flush, so it blocks all other requests. In our case we are actually flushing 19M keys and it takes more than 30 seconds.
ElastiCache performs a periodic health check, and since the flush is running, the health check is blocked, which triggers a failover.
We have been asking the support team how often the health check is performed so we can get an idea of why our flush only causes a failover 3-4 times a week. The best answer we could get is "We think it's every 30 seconds". However, our flush consistently takes more than 30 seconds and doesn't consistently fail.
They said that they may implement the ability to configure the timing of the health check, but that this would not be done anytime soon.
The best advice they could give us was:
1) Create a completely new cluster to load the new data into and, instead of flushing the previous cluster, re-point your application(s) to the new cluster and remove the old one.
2) If the data you are flushing is an updated version of the data, consider not flushing but updating and overwriting the keys instead.
3) Instead of flushing the data, set the expiry of the items to the time when you would normally flush, and let the keys be reclaimed (possibly with a random offset to avoid thundering-herd issues), and then reload the data (see the sketch below).
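As a rough illustration of option 3, the sketch below uses Jedis to walk the keyspace with SCAN and give every key a jittered TTL instead of flushing. The host, port, and TTL window are placeholders, and the exact ScanResult method names vary slightly between Jedis versions.

import java.util.Random;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

// Spreads key expiry out over time instead of one big blocking FLUSHDB.
public class JitteredExpiry {
    public static void main(String[] args) {
        Random random = new Random();
        try (Jedis jedis = new Jedis("prod-redis.example.com", 6379)) {
            ScanParams params = new ScanParams().count(1000);
            String cursor = ScanParams.SCAN_POINTER_START;
            do {
                ScanResult<String> page = jedis.scan(cursor, params);
                Pipeline pipeline = jedis.pipelined();
                for (String key : page.getResult()) {
                    // expire around the usual flush time, +/- up to 10 minutes of jitter
                    int ttl = 3600 + random.nextInt(1200) - 600;
                    pipeline.expire(key, ttl);
                }
                pipeline.sync();
                cursor = page.getCursor();
            } while (!"0".equals(cursor));
        }
    }
}

Unlike FLUSHDB, each SCAN page is a short command, so the server stays responsive to health checks throughout.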
Hope this helps :)
As of Redis 6.2, AWS ElastiCache monitors engine health from a separate thread, so the health check no longer runs on the same thread as all other Redis work. Redis can keep processing a long command or Lua script and still be considered healthy, so failovers of this kind should happen less often.

Monitoring WSO2 Message Broker via JMX

I've been attempting to monitor WSO2 MB 2.1.1 via JMX, which appears to be a valid option for other WSO2 tools such as the ESB; however, the MB does not appear to be updating any of the MBeans for the queue attributes.
Ex:
org.wso2.andes:type=VirtualHost.Queue,VirtualHost="carbon",name="testQueue"
This has a number of potentially useful attributes like the following:
ConsumerCount
ActiveConsumerCount
ReceivedMessageCount
MessageCount
However, the counters always read "0", even when there are messages in the queue, whether dropped in via the sample sender or manually.
Creating a new queue will create the following, but the issue is seen on the new queue as well.
org.wso2.andes:type=VirtualHost.Queue,VirtualHost="carbon",name="testQueue2"
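For reference, reading those attributes from a plain JMX client looks roughly like the sketch below; the service URL, port, and admin credentials are placeholders that need to match your Carbon JMX settings.

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Reads the queue counters from the org.wso2.andes MBean.
public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; match them to your Carbon instance.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "admin" });
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.wso2.andes:type=VirtualHost.Queue,VirtualHost=\"carbon\",name=\"testQueue\"");
            for (String attr : new String[] { "ConsumerCount", "ActiveConsumerCount",
                    "ReceivedMessageCount", "MessageCount" }) {
                System.out.println(attr + " = " + mbs.getAttribute(queue, attr));
            }
        } finally {
            connector.close();
        }
    }
}

If the counters are still zero when read this way, the problem is in what the broker publishes, not in how the attributes are read.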
Am I simply looking in the wrong location, or is this some other variety of user error?
Is this the intentional behavior of the application?
Do you have any suggestions which may assist in getting this data reported via JMX?
Any help would be appreciated.
Thanks
This does not appear to be functional in MB 2.1.1; it looks as if they may be moving to using CEP or BAM to provide metrics for some products.

Spring Integration Pop3MailReceiver stops polling silently without logging why

Problem
I have a very basic configuration for a Spring Integration mail adapter setup (the relevant sample is below):
<int:channel id="emailChannel">
<int:interceptors>
<int:wire-tap channel="logger"/>
</int:interceptors>
</int:channel>
<mail:inbound-channel-adapter id="popChannel"
store-uri="pop3://user:password#domain.net/INBOX"
channel="emailChannel"
should-delete-messages="true"
auto-startup="true">
<int:poller max-messages-per-poll="1" fixed-rate="30000"/>
</mail:inbound-channel-adapter>
<int:logging-channel-adapter id="logger" level="DEBUG"/>
<int:service-activator input-channel="emailChannel" ref="mailResultsProcessor" method="onMessage" />
This works fine the majority of the time, and I can see the logs showing the polling (it also hooks into my mailResultsProcessor fine when a mail is there):
2013-08-13 08:19:29,748 [task-scheduler-3] DEBUG org.springframework.integration.mail.Pop3MailReceiver - opening folder [pop3://user:password@domain.net/INBOX]
2013-08-13 08:19:29,796 [task-scheduler-3] INFO org.springframework.integration.mail.Pop3MailReceiver - attempting to receive mail from folder [INBOX]
2013-08-13 08:19:29,796 [task-scheduler-3] DEBUG org.springframework.integration.mail.Pop3MailReceiver - found 0 new messages
2013-08-13 08:19:29,796 [task-scheduler-3] DEBUG org.springframework.integration.mail.Pop3MailReceiver - Received 0 messages
2013-08-13 08:19:29,893 [task-scheduler-3] DEBUG org.springframework.integration.endpoint.SourcePollingChannelAdapter - Received no Message during the poll, returning 'false'
The problem I have is that the polling stops during the day, with no indication in the logs of why it has stopped working. The only way I can tell is that the debug output above is no longer present in the logs and e-mails build up in the e-mail account.
Questions
Has anyone seen this before and know how to resolve it?
Is there a change I can make to my configuration to capture the issue in the log? I thought the logging channel adapter set to DEBUG would have this covered.
I'm using version 2.2.3.RELEASE of Spring Integration on Tomcat 7, with log output going to the default catalina.out. It is deployed on a standard AWS Tomcat 7 instance.
Most likely the poller thread is hung someplace upstream. With your configuration, the next poll won't happen until the current poll completes.
You can use jstack or VisualVM to get a thread dump to find out what the thread is doing.
Another possibility is that you are suffering from poller thread starvation, depending on how many other polled elements you have in your application and how they are configured. The default taskScheduler bean has only 10 threads.
You can add a task executor to the <poller/> so each poll is handed off to another thread, but be aware that this can result in concurrent polls if a polled task takes longer to execute than the polling rate.
To resolve this problem specifically I used the configuration below:
<mail:inbound-channel-adapter id="popChannel"
        store-uri="pop3://***/INBOX"
        channel="emailChannel"
        should-delete-messages="true"
        auto-startup="true">
    <int:poller max-messages-per-poll="5" fixed-rate="60000" task-executor="pool"/>
</mail:inbound-channel-adapter>
<task:executor id="pool" pool-size="10" keep-alive="50"/>
Once we moved to this approach we saw no further problems; as with any use of a pool, the advantage is that any threads that become a problem are cleaned up and recreated.

django celery rabbitmq execute delay

I use Django-Celery + RabbitMQ to execute some async tasks. I define a queue 'sendmail' to execute the send-email task; sending mail is triggered by a specific task (which has its own queue). But now I encounter a problem: after the specific task finishes, the mail is sometimes sent at once and sometimes takes 5-20 minutes. I want to know what causes this.
Django-Celery packages the task name and parameters as a message to RabbitMQ when task.delay() is called.
I want to know when the message reaches RabbitMQ, but with the web management tool I can only see the total number of messages, not each message's details, especially the time the message arrived. The Django-Celery log only shows when the worker got the task from the broker and when it executed the task. I want to know all the related time points so I can determine where the time is mainly being consumed.
Django-Celery does (I believe) report task data on a per-task basis. When you sync your database, it creates a bunch of monitoring tables which are accessible via the admin. However, in order for these tasks to be recorded in those tables, you need to run the celerycam program in the Django context (python ./manage.py celerycam). The celerycam program will take "snapshots" of your tasks every second or so (by default) and record information about them. Another useful tool for monitoring is the celerymon program (which also has to run in the Django context). This is a command-line ncurses program that reports real-time information about tasks as they occur. Finally, rabbitmqctl has a bunch of options that might help with monitoring.
This is a particularly useful page in the docs:
http://celery.github.com/celery/userguide/monitoring.html
Anyway, this is what I use to monitor my tasks when using celery.