How to enable compression when sending data between two DolphinDB nodes?

With the DolphinDB Java API, I can enable compressed data transfer by setting the parameter compress to true when creating a DBConnection. I know the DolphinDB built-in function xdb can open a remote connection, but I could not find a parameter there to enable data compression. Can anyone explain how to enable compression when connecting to a DolphinDB server using DolphinDB script?

Use xdb to set up the connection first, then use remoteRunWithCompression to enable data compression for your query. The usage of remoteRunWithCompression is the same as remoteRun.
In this example we are connecting these two nodes:
192.168.0.3:8848 (server)
192.168.0.4:8848 (client)
Let's first create an in-memory table "testT" on the server (192.168.0.3:8848) and share it so that other sessions (including remote connections) can query it.
n = 1000
ID = rand(10, n)      // n random integers in [0, 10)
x = rand(1.0, n)      // n random doubles in [0, 1.0)
t = table(ID, x)
share t as testT      // share the in-memory table so other sessions can query it
From the client side (192.168.0.4:8848), set up server connection with xdb and query the shared in-memory table “testT” with compression enabled.
h = xdb("192.168.0.3", 8848, "admin", "123456")                // open a connection to the server node
result = remoteRunWithCompression(h, "select * from testT")    // the query result is sent back compressed

Related

Bypassing Cloud Run 32 MB error via HTTP/2 end-to-end solution

I have an API query that runs during a POST request on one of my views to populate my dashboard page. I know the response size is ~35 MB (greater than the 32 MB limit set by Cloud Run). I was wondering how I could bypass this.
My configuration is a Hypercorn server serving my Django web app as an ASGI app. I have 2 minimum instances, 1 GB RAM, and 2 CPUs per instance. I have run this Docker container locally; I can't reduce the amount of data required, and I also do not want to store the data due to costs, as this seems to be the cheapest route. Any pointers or ideas would be helpful. I understand that I could bypass this with an HTTP/2 end-to-end solution but I am unable to do so currently. I haven't created any additional Hypercorn configurations. Any help appreciated!
The Cloud Run HTTP response limit is 32 MB and cannot be increased.
One suggestion is to compress the response data. Django ships with GZipMiddleware, or you can compress manually with Python's gzip/zlib modules.
import gzip

data = b"Lots of content to compress"
cdata = gzip.compress(data)  # gzip-compress the payload
# return cdata in the response with a Content-Encoding: gzip header
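If you would rather not compress by hand, Django's GZipMiddleware can do it automatically whenever the client sends Accept-Encoding: gzip. A minimal settings sketch (assuming a standard settings.py, which the question does not show):

# settings.py -- enable Django's built-in gzip middleware
MIDDLEWARE = [
    "django.middleware.gzip.GZipMiddleware",  # keep this near the top of the list
    # ... the rest of your middleware ...
]

Keep in mind that the compressed body must still fit under 32 MB, so if compression alone is not enough, the streaming option below is the more general fix.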
Cloud Run supports HTTP/1.1 server-side streaming, which has unlimited response size. All you need to do is use chunked transfer encoding.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
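A minimal sketch of that streaming approach in a Django view; build_dashboard_rows, the field names, and the NDJSON content type below are placeholders, not part of the original setup:

import json
from django.http import StreamingHttpResponse

def build_dashboard_rows():
    # Stand-in for the real query that produces the ~35 MB payload;
    # yield rows one at a time instead of building one big list.
    for i in range(1_000_000):
        yield {"id": i, "value": i * 0.5}

def dashboard_data(request):
    # Each yielded chunk is sent with chunked transfer encoding under HTTP/1.1,
    # so no single buffered response ever has to hold the whole body.
    chunks = (json.dumps(row) + "\n" for row in build_dashboard_rows())
    return StreamingHttpResponse(chunks, content_type="application/x-ndjson")

The client then consumes the response line by line instead of waiting for one 35 MB JSON document.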

SQLite with in-memory and isolation

I want to create an in-memory SQLite DB. I would like to make two connections to this in-memory DB, one to make modifications and the other to read the DB. The modifier connection would open a transaction and continue to make modifications to the DB until a specific event occurs, at which point it would commit the transaction. The other connection would run SELECT queries reading the DB. I do not want the changes that are being made by the modifier connection to be visible to the reader connection until the modifier has committed (the specified event has occurred). I would like to isolate the reader's connection from the writer's connection.
I am writing my application in C++. I have tried opening two connections like the following:
int rc1 = sqlite3_open_v2("file:db1?mode=memory", &pModifyDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
int rc2 = sqlite3_open_v2("file:db1?mode=memory", &pReaderDb, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
I have created a table, added some rows and committed the transaction to the DB using 'pModifyDb'. When I try to retrieve the values using the second connection 'pReaderDb' by calling sqlite3_exec(), I receive a return code of 1 (SQLITE_ERROR).
I've tried specifying the URI as "file:db1?mode=memory&cache=shared", though I am not sure whether the cache=shared option would still preserve isolation. That did not work either: when the reader connection tried to exec a SELECT query, the return code was 6 (SQLITE_LOCKED). Maybe the shared cache option unified both connections under the hood?
If I remove the in-memory requirement from the URI by using "file:db1" instead, everything works fine. I do not want to use a file-based DB because I require high throughput, and the size of the DB won't be very large (~10 MB).
So I would like to know how to set up two isolated connections to a single SQLite in-memory DB?
Thanks in advance,
kris
This is not possible with an in-memory DB.
You have to use a database file.
To speed it up, put it on a RAM disk (if possible), and disable synchronous writes (PRAGMA synchronous=off) in every connection.
To allow a reader and a writer at the same time, you have to put the DB file into WAL mode.
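A sketch of that setup, shown with Python's sqlite3 module for brevity rather than the C API (the /dev/shm path is just one way to get "a RAM disk, if possible"; adjust for your platform):

import sqlite3

DB_PATH = "/dev/shm/db1.sqlite"  # tmpfs-backed path standing in for a RAM disk

# Writer connection: WAL journal plus synchronous=off, as suggested above.
writer = sqlite3.connect(DB_PATH, isolation_level=None)  # autocommit; we issue BEGIN/COMMIT ourselves
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("PRAGMA synchronous=OFF")
writer.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, x REAL)")

# Reader connection: in WAL mode it sees only data the writer has committed.
reader = sqlite3.connect(DB_PATH, isolation_level=None)
reader.execute("PRAGMA synchronous=OFF")

writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1, 0.5)")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone())  # (0,) -- uncommitted row is invisible
writer.execute("COMMIT")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone())  # (1,) -- visible after the commit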
This seems possible since version 3.7.13 (2012-06-11); the SQLite documentation on in-memory databases says:
Enabling shared-cache for an in-memory database allows two or more database connections in the same process to have access to the same in-memory database. An in-memory database in shared cache is automatically deleted and memory is reclaimed when the last connection to that database closes.
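For completeness, here is what the shared-cache route looks like in Python's sqlite3 (a sketch only; note that, as the asker already observed, shared cache uses table-level locks, so a reader hits SQLITE_LOCKED while the writer's transaction is open unless it gives up isolation via PRAGMA read_uncommitted):

import sqlite3

URI = "file:db1?mode=memory&cache=shared"

# Both connections attach to the same in-memory database through the shared cache.
# The database is deleted once the last connection to it closes.
writer = sqlite3.connect(URI, uri=True, isolation_level=None)
reader = sqlite3.connect(URI, uri=True, isolation_level=None)

writer.execute("CREATE TABLE t (id INTEGER, x REAL)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1, 0.5)")

try:
    reader.execute("SELECT COUNT(*) FROM t").fetchone()
except sqlite3.OperationalError as e:
    # Table-level locking in shared-cache mode: the read fails (SQLITE_LOCKED,
    # "database table is locked") while the write transaction is still open.
    print("reader blocked:", e)

writer.execute("COMMIT")
print(reader.execute("SELECT COUNT(*) FROM t").fetchone())  # (1,)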

What does External_threads_connected mean in Aurora MySQL 5.6.10

We have a limit of 3000 for max connections on an RDS db.r4.2xlarge instance, but when our application uses around 2000 connections we get a "Too many connections" error, which is quite surprising to me. When I run the query show status like '%onn%';
I get a response containing something like
'External_threads_connected', '1102'
We are using an AWS Aurora cluster with a reader and a writer.
Do I need to count this value toward my active connections, or does this value contribute to the max connection limit?

mysql lost connection error

Currently, I am working on a project that integrates MySQL with an IOCP server to collect sensor data and verify the collected data from the client.
However, MySQL occasionally loses the connection.
The queries themselves are simple: they either insert a single row of records or get an average value between two dates.
The data from each sensor flows into the DB at the same time every 5 seconds. When sensor messages arrive sporadically or overlap with a message from the client, the connection is dropped.
The message thrown is:
Lost connection to MySQL server during query
I have already changed max_allowed_packet, as well as interactive_timeout, net_read_timeout, net_write_timeout, and wait_timeout.
It seems that the error occurs when queries overlap.
Please let me know if you know the solution.
I had a similar issue on a MySQL server with very simple queries where the number of concurrent queries was high. I had to disable the query cache to solve the issue. You could try disabling the query cache using the following statements.
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
Please note that a server restart will enable the query cache again. If you need the change to persist, set query_cache_type=0 and query_cache_size=0 under the [mysqld] section of the MySQL configuration file (my.cnf / my.ini).
Can you run the command below and check the current timeouts?
SHOW VARIABLES LIKE '%timeout';
You can change a timeout if needed:
SET GLOBAL <timeout_variable>=<value>;
-- for example: SET GLOBAL net_write_timeout = 120;

How to push data from a Java class to WSO2 DAS

Is there any document or step-by-step guide on how we can use WSO2 DAS to pull data from Java class objects and display reports from this data using WSO2 Dashboards?
Any help would be really appreciated.
First, you can create an Event Stream by specifying its attributes and marking which attributes you need to persist. When events arrive at the stream, they are stored in event tables [1].
Then you can create an Event Receiver for that Event Stream [2]. When creating an event receiver you can choose a protocol such as Thrift, SOAP, HTTP, MQTT, JMS, Kafka, or WebSockets. You can write a simple Java application that publishes data to the DAS receiver you created, using the message format of the protocol you selected. For instance, if you create a SOAP receiver you send data in SOAP format, and if you create an HTTP receiver you can use JSON.
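As a rough illustration of the HTTP/JSON route, a client could publish events like the sketch below; the endpoint URL, port, credentials, stream attributes, and the exact JSON envelope are assumptions here, so check the receiver you configured in the DAS console for the real values.

import json
import requests

# Placeholder values -- substitute the host, port, and receiver endpoint configured in DAS.
RECEIVER_URL = "http://localhost:9763/endpoints/sensorReceiver"
AUTH = ("admin", "admin")

# The payload attributes must match the Event Stream definition the receiver is bound to.
event = {"event": {"payloadData": {"sensorId": "sensor-001", "temperature": 23.4}}}

resp = requests.post(RECEIVER_URL,
                     data=json.dumps(event),
                     headers={"Content-Type": "application/json"},
                     auth=AUTH)
print(resp.status_code)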
You can create a dashboard and gadgets to visualize the event table created by your persistent stream [3]. Please note that this event table contains all the events WSO2 DAS received; you can process these data using Spark SQL [4] and create several streams that can be used in the Analytics Dashboard.
[1] https://docs.wso2.com/display/DAS300/Understanding+Event+Streams+and+Event+Tables
[2] https://docs.wso2.com/display/DAS300/Configuring+Event+Receivers
[3] https://docs.wso2.com/display/DAS300/Analytics+Dashboard
[4] https://docs.wso2.com/display/DAS300/Batch+Analytics+Using+Spark+SQL
The subject of your question and its body are contradictory: the subject says push data while the body says pull data.
If pushing data is what you want to achieve, you can refer to https://docs.wso2.com/pages/viewpage.action?pageId=45952633 which uses a Thrift client to push data to DAS.
Please refer to https://docs.wso2.com/display/DAS300/Analyzing+Data for how to analyze the raw data; you can write Spark scripts for the analysis.
Finally, see https://docs.wso2.com/display/DAS300/Communicating+Results for how to present the results. You may use the REST API exposed with DAS 3.0.0 to pull data from DAS.