Postgres 11.9 DB benchmark

I am trying to benchmark PostgreSQL on a Red Hat Linux VM with a 4-core CPU and 16 GB of memory, using this documentation: https://www.postgresql.org/docs/11/pgbench.html
The transaction can be a set of SQL statements.
Below are the test results.
Test 1
1 transaction = 1 query, 1 insert, 3 updates
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 105.812 ms
tps = 94.507634 (including connections establishing)
tps = 94.507961 (excluding connections establishing)
scaling factor: 10
query mode: simple
number of clients: 4
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 40000/40000
latency average = 15.667 ms
tps = 255.315579 (including connections establishing)
tps = 255.321942 (excluding connections establishing)
Test 2
1 transaction = 1 query, 1 update, 1 insert
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average = 87.134 ms
tps = 114.766068 (including connections establishing)
tps = 114.766546 (excluding connections establishing)
scaling factor: 10
query mode: simple
number of clients: 4
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 40000/40000
latency average = 13.067 ms
tps = 306.121787 (including connections establishing)
tps = 306.131759 (excluding connections establishing)
How do we interpret the above results?
We want to know whether Postgres is capable of handling 5000 TPS, where 1 transaction = 2 simple primary-key queries + 2 updates (or inserts).
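As a rough sanity check on capacity, Little's law relates throughput, concurrency, and latency (concurrency ≈ TPS × latency). A minimal back-of-envelope sketch using the numbers above; the assumption that latency would stay near 13 ms as clients are added is optimistic and would need to be verified with larger client counts:

```python
# Little's law: concurrency = throughput * latency.
# Using the Test 2 run with 4 clients: ~306 TPS at 13.067 ms average latency.
tps = 306.13
latency_s = 13.067 / 1000

concurrency = tps * latency_s        # should come out near the 4 clients used
print(round(concurrency, 1))         # 4.0

# To sustain the 5000 TPS target, assuming latency stayed at ~13 ms (optimistic):
target_tps = 5000
clients_needed = target_tps * latency_s
print(round(clients_needed))         # 65
```

In other words, if latency held flat, on the order of 65 concurrent clients would be needed; in practice latency grows with concurrency (compare the 10-client runs above), so the real ceiling on this 4-core VM is likely much lower.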


PowerBI Hierarchy RANKX Against Everyone at Each Level

I'm having a lot of trouble figuring out how to get RANKX to behave at different levels of a hierarchy.
I have a hierarchy structure as follows:
Region
Manager
Supervisor
Agent
For simplicity's sake, let's rank each level on the total number of Requests Handled. I want to rank each level of the hierarchy against everyone at that level. For example, at the Agent level, each agent's total Requests Handled should be ranked against every other agent, regardless of which supervisor, manager, or region they are in.
I can get the agent level to work just fine with the following, but I can't figure out how to get the upper levels to rank against each other. The same RANKX statement works at any level, but only when a single level is shown in a visual; as soon as additional levels are added, it breaks.
Requests Handled Measure =
    SUMX ( 'Work Done', [Requests Handled] )

Rank Measure =
    IF (
        ISINSCOPE ( Roster[Associate Name] )
            && NOT ( ISBLANK ( [Requests Handled Measure] ) ),
        RANKX (
            ALL (
                Roster[Associate Name],
                Roster[Supervisor Name],
                Roster[Manager Name],
                Roster[Region]
            ),
            [Requests Handled Measure],
            ,
            DESC,
            Dense
        )
    )
The ideal result would be something like:
- Region 1    240 Requests   Rank 2
  - Manager A    122 Requests   Rank 2
    - Supervisor A    65 Requests   Rank 3
      - Agent A    30 Requests   Rank 9
      - Agent B    35 Requests   Rank 5
    - Supervisor B    57 Requests   Rank 4
      - Agent C    29 Requests   Rank 10
      - Agent D    28 Requests   Rank 11
  - Manager B    118 Requests   Rank 3
    - Supervisor C    65 Requests   Rank 3
      - Agent E    33 Requests   Rank 6
      - Agent F    32 Requests   Rank 7
    - Supervisor D    53 Requests   Rank 6
      - Agent G    26 Requests   Rank 13
      - Agent H    27 Requests   Rank 12
- Region 2    250 Requests   Rank 1
  - Manager C    99 Requests   Rank 4
    - Supervisor E    56 Requests   Rank 5
      - Agent I    25 Requests   Rank 14
      - Agent J    31 Requests   Rank 8
    - Supervisor F    43 Requests   Rank 7
      - Agent K    20 Requests   Rank 16
      - Agent L    23 Requests   Rank 15
  - Manager D    151 Requests   Rank 1
    - Supervisor G    78 Requests   Rank 1
      - Agent M    40 Requests   Rank 1
      - Agent N    38 Requests   Rank 2
    - Supervisor H    73 Requests   Rank 2
      - Agent O    36 Requests   Rank 4
      - Agent P    37 Requests   Rank 3
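What the expected output asks for, stated outside of DAX: at each level, sum requests per member and dense-rank those totals against every member of that same level, ignoring the hierarchy above it. A plain-Python restatement of the ranking logic (the agent names and totals are taken from the example tree above; this is an illustration of the desired semantics, not DAX):

```python
def dense_rank_desc(totals):
    """Dense rank, descending: members with equal totals share a rank."""
    distinct = sorted(set(totals.values()), reverse=True)
    rank_of = {value: i + 1 for i, value in enumerate(distinct)}
    return {name: rank_of[total] for name, total in totals.items()}

# A few agents from the example, ranked against each other regardless of
# supervisor/manager/region:
agents = {"Agent M": 40, "Agent N": 38, "Agent P": 37, "Agent O": 36}
print(dense_rank_desc(agents))
# {'Agent M': 1, 'Agent N': 2, 'Agent P': 3, 'Agent O': 4}
```

The DAX equivalent needs one such ranking per hierarchy level, each with an ISINSCOPE test for that level and an ALL() over that level's column only.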

uWebSockets HTTP server is too slow

I've cloned uWebSockets and written the following file:
#include <App.h>
#include <iostream>

int main() {
    uWS::App()
        .get("/*", [](auto *res, auto *req) {
            res->end("Hello world!");
        })
        .listen(3000, [](auto *token) {
            if (token) {
                std::cout << "Listening on port " << 3000 << std::endl;
            }
        })
        .run();

    std::cout << "Failed to listen on port 3000" << std::endl;
    return 0;
}
Built it with
make -C uWebSockets/uSockets
g++ -flto -O3 -Wconversion -std=c++17 -IuWebSockets/src -IuWebSockets/uSockets/src main.cpp -o main uWebSockets/uSockets/*.o -lz -lssl -lcrypto -luv
When I do a benchmark, this is what I get.
Concurrency Level: 100
Time taken for tests: 116.928 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 68000 bytes
HTML transferred: 12000 bytes
Requests per second: 8.55 [#/sec] (mean)
Time per request: 11692.844 [ms] (mean)
Time per request: 116.928 [ms] (mean, across all concurrent requests)
Transfer rate: 0.57 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 3 2.6 2 10
Processing: 8923 11690 921.2 11995 12012
Waiting: 0 1 1.3 0 10
Total: 8923 11693 921.7 11999 12012
Percentage of the requests served within a certain time (ms)
50% 11999
66% 12001
75% 12002
80% 12003
90% 12006
95% 12008
98% 12010
99% 12011
100% 12012 (longest request)
If I set concurrency to 1, this happens
Concurrency Level: 1
Time taken for tests: 61.655 seconds
Complete requests: 5
Failed requests: 0
Total transferred: 408 bytes
HTML transferred: 72 bytes
Requests per second: 0.08 [#/sec] (mean)
Time per request: 12330.911 [ms] (mean)
Time per request: 12330.911 [ms] (mean, across all concurrent requests)
Transfer rate: 0.01 [Kbytes/sec] received
Did I do something wrong? Did I miss something? Shouldn't it be faster?
P.S. I'm running it on a single thread.
Update: it was the Apache benchmark tool (ab) that was slow, not the program. I switched to bombardier and got this result:
Bombarding http://127.0.0.1:3000/ with 1000000 request(s) using 5000 connection(s)
1000000 / 1000000 [===============================================================================================================================================================================================] 100.00% 316582/s 3s
Done!
Statistics Avg Stdev Max
Reqs/sec 361230.80 54733.74 520938.19
Latency 14.06ms 8.81ms 358.62ms
HTTP codes:
1xx - 0, 2xx - 1000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 40.48MB/s

How to extract a time value from CloudWatch logs to perform math operations

I have logs similar to the following in AWS CloudWatch:
2020-05-04 14:45:37.453 [http-nio-8095-exec-9] INFO xxx - Execution time of Class.methodNameOne :: 23 ms
2020-05-04 14:45:37.475 [http-nio-8095-exec-7] INFO xxx - Execution time of Class.methodNameTwo :: 32 ms
2020-05-04 14:45:37.472 [http-nio-8095-exec-3] INFO xxx - Execution time of Class.methodNameOne :: 38 ms
I created a metric filter with the pattern below to extract the time value for methodNameOne only:
[..., method = class.methodNameOne, , time!=null,]
In the results I see methodNameOne in the $method column, and 23 and 38 as rows in the $time column:

lineno  $method        $time
1       methodNameOne  23
2       methodNameOne  38
Later I created an alarm on the same metric, using the maximum math function, but the alarm is not performing the max operation on the metric results.
I want to compute the maximum of the time values from the logs, and the alarm should be triggered when it crosses a specified threshold.
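For reference, the extraction the metric filter performs can be sketched outside of CloudWatch in a few lines. This is a plain-Python illustration of the parsing, not an AWS API call; the sample lines are copied from the logs above:

```python
import re

logs = [
    "2020-05-04 14:45:37.453 [http-nio-8095-exec-9] INFO xxx - Execution time of Class.methodNameOne :: 23 ms",
    "2020-05-04 14:45:37.475 [http-nio-8095-exec-7] INFO xxx - Execution time of Class.methodNameTwo :: 32 ms",
    "2020-05-04 14:45:37.472 [http-nio-8095-exec-3] INFO xxx - Execution time of Class.methodNameOne :: 38 ms",
]

# Keep only methodNameOne lines and pull out the millisecond value.
pattern = re.compile(r"Class\.methodNameOne :: (\d+) ms")
times = [int(m.group(1)) for line in logs if (m := pattern.search(line))]

print(times)       # [23, 38]
print(max(times))  # 38
```

The alarm's MAX statistic should be doing the equivalent of the final `max(times)` over each evaluation period, so if it isn't, the problem is likely in the metric filter's value assignment or the alarm's period/statistic configuration rather than in the extraction itself.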

WEKA: Print the indexes of test data instances w.r.t. the original data during cross-validation

I have a question about the indexes of the test data instances chosen by WEKA during cross-validation. How can I print the indexes of the test instances that are being evaluated?
==================================
I have chosen:
Dataset: iris.arff
Total instances: 150
Classifier: J48
Cross-validation: 10-fold
I have also set the output predictions format to "PlainText".
=============
In the output window I see something like this:
inst# actual predicted error prediction
1 3:Iris-virginica 3:Iris-virginica 0.976
2 3:Iris-virginica 3:Iris-virginica 0.976
3 3:Iris-virginica 3:Iris-virginica 0.976
4 3:Iris-virginica 3:Iris-virginica 0.976
5 3:Iris-virginica 3:Iris-virginica 0.976
6 1:Iris-setosa 1:Iris-setosa 1
7 1:Iris-setosa 1:Iris-setosa 1
....
...
...
There are 10 test sets in total (15 instances in each).
======================
Since WEKA uses stratified cross-validation, the instances in the test sets are chosen randomly.
So, how can I print the indexes of the test data with respect to the data in the original file?
i.e., for
inst# actual predicted error prediction
1 3:Iris-virginica 3:Iris-virginica 0.976
which instance in the main data (among the 50 Iris-virginica in total) does this result correspond to?
===============
After a lot of searching, I found that the YouTube video below addresses this problem.
Hope this helps any future visitor with the same question.
Weka Tutorial 34: Generating Stratified Folds (Data Preprocessing)
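The underlying idea, independent of WEKA's GUI, is to perform the fold split yourself over the original row positions, so each test fold is a list of indexes into the source file. A minimal language-agnostic sketch in Python (the toy labels stand in for the iris classes; this is not the WEKA API):

```python
from collections import defaultdict

# Toy dataset: (original_index, class_label) pairs standing in for iris.arff rows.
labels = ["setosa"] * 6 + ["virginica"] * 6
data = list(enumerate(labels))

def stratified_folds(data, k):
    """Split original indexes into k folds, keeping class proportions per fold."""
    by_class = defaultdict(list)
    for idx, label in data:
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for label, idxs in by_class.items():
        for i, idx in enumerate(idxs):   # deal each class out round-robin
            folds[i % k].append(idx)
    return folds

for fold_no, test_idxs in enumerate(stratified_folds(data, 3), start=1):
    print(f"fold {fold_no}: test instances (original rows) {sorted(test_idxs)}")
```

Because the folds are built from indexes rather than copied rows, each "inst#" in a fold's output can be mapped straight back to a row in the original file; WEKA's trainCV/testCV split can be reproduced the same way in code, though its internal shuffle order differs.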

Why is only a single key being emitted by the mapper for the following hive query?

INSERT OVERWRITE TABLE psam_0604
PARTITION (day)
select x.data, x.day
from (
    select *, rank() over (partition by `data`.dpinc order by rand()) as prk
    from test.gpsam
    where `data`.dpinc is not null
) x
where x.prk <= 3000;
The test.gpsam table is partitioned by day. Earlier, I added to test.gpsam a single day's partition from a bigger table, which is also partitioned by day, and I am running the query above against it.
The query runs in two stages, and one reducer receives all the data, which I believe means that only one key is emitted by the mapper of the second stage.
Why is only one key being generated by the mapper? Is there anything I can do to get a better distribution of data among the reducers?
The job has been stuck at 66.67% completion for the past 18 hours.
Stats of the partition added to test.gpsam from the bigger table:
Status: HEALTHY
Total size: 325812129888 B
Total dirs: 1
Total files: 696
Total symlinks: 0
Total blocks (validated): 2780 (avg. block size 117198607 B)
Minimally replicated blocks: 2780 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 1146
Number of racks: 1
There are about 5 blocks for every snappy-compressed file in the partition.
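This behavior is consistent with how MapReduce routes rows to reducers: the shuffle key for a window like `partition by `data`.dpinc` is the dpinc value itself, so if the loaded day contains only one (or very few) distinct dpinc values, every row hashes to the same reducer. A toy illustration of that routing, where `hash(key) % reducers` is a stand-in for Hadoop's default partitioner, not Hive's actual code:

```python
from collections import Counter

NUM_REDUCERS = 8

def reducer_for(key):
    # Stand-in for Hadoop's HashPartitioner: reducer = hash(key) mod reducers.
    return hash(key) % NUM_REDUCERS

# If the loaded day has a single distinct dpinc value, all rows share one key...
rows_single_key = [("dpinc_A", i) for i in range(10_000)]
load_single = Counter(reducer_for(k) for k, _ in rows_single_key)
print(len(load_single))  # 1 -> one reducer receives everything

# ...whereas many distinct dpinc values spread the load across reducers.
rows_many_keys = [(f"dpinc_{i % 500}", i) for i in range(10_000)]
load_many = Counter(reducer_for(k) for k, _ in rows_many_keys)
print(len(load_many))    # up to 8 reducers busy
```

So a first thing to check is how many distinct dpinc values the loaded partition actually contains; if it is one or a handful, the window function itself forces the skew, and restating the top-N sampling without a single-key window would be needed to parallelize it.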