Issue
Running pytest -n auto on my system with MariaDB is taking significantly longer than running plain pytest.
Image of result and time
System specs
Package versions
mariadb Ver 15.1 Distrib 10.8.3-MariaDB
python 3.9.6
pytest 7.1.3
MariaDB config file
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
back_log = 50
max_connections = 100
wait_timeout = 256
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 512M
max_heap_table_size = 512M
read_buffer_size = 64M
read_rnd_buffer_size = 64M
sort_buffer_size = 64M
join_buffer_size = 64M
thread_cache_size = 8
thread_concurrency = 8
thread_stack = 240K
query_cache_size = 128M
query_cache_limit = 2M
ft_min_word_len = 4
default-storage-engine = InnoDB
transaction_isolation = REPEATABLE-READ
tmp_table_size = 512M
log-bin=mysql-bin
binlog_format=mixed
slow_query_log
long_query_time = 2
server-id = 1
# INNODB options
innodb_buffer_pool_size = 4G
innodb_buffer_pool_instances = 8
innodb_data_file_path = ibdata1:10M:autoextend
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 1G
innodb_change_buffering = all
innodb_change_buffer_max_size = 25
innodb_log_file_size = 512M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 256
[mysqldump]
quick
max_allowed_packet = 50M
[mysql]
no-auto-rehash
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 8192
Additional info
All packages and apps are running natively, not through Rosetta.
Related
I'm unable to push the logs below via rsyslog; it is only forwarding one line of the log.
Kafka-server logs:
[2022-07-25 11:43:45,091] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = INTERNAL://0.0.0.0:9092,BROKER://0.0.0.0:9091,CLIENT://0.0.0.0:9093
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = BROKER
inter.broker.protocol.version = 2.3-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = INTERNAL:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:PLAINTEXT
listeners = INTERNAL://:9092,BROKER://:9091,CLIENT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.3-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 120
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = 0.0.0.0:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2022-07-25 11:43:45,145] ERROR Fatal error during SupportedServerStartable startup. Prepare to shutdown (io.confluent.support.metrics.SupportedKafka)
java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1492)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1460)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1114)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1094)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1091)
at kafka.server.KafkaConfig.fromProps(KafkaConfig.scala)
at io.confluent.support.metrics.SupportedServerStartable.<init>(SupportedServerStartable.java:52)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)
rsysconf.d/10kafka.conf
$InputFilePollInterval 1
input(type="imfile"
File="/var/log/kafka/server.log"
Tag="app-error"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}"
)
*.* #Fluentdvmip:5142
Can someone please guide me on how to send the complete logs from rsyslog to Fluentd?
Below is the regex I'll be using in the Fluentd configuration:
https://regex101.com/r/NaNVcr/1
Or do we need to modify the Kafka log4j properties to get proper logging?
Instead of sending the raw Kafka server logs to rsyslog, I first converted the Kafka logs into JSON format.
Created a jar using this link.
Steps Followed
Cloned the repository above into the VM where Kafka was installed
Installed the latest version of Maven
Ran mvn package inside the cloned directory
A jar was created inside the target directory: "log4j-json-layout-1.0-SNAPSHOT.jar"
Copied this jar to /usr/share/java/kafka/
Added the code below to /etc/kafka/log4j.properties and commented out the old appender configuration
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=de.thmshmm.log4j.JsonLayout
log4j.appender.kafkaAppender.layout.DatePattern=yyyy-MM-dd HH:mm:ss
Restarted the confluent-kafka service
Before logs:
After logs:
I am getting different INCAR files for ID 'mp-22590' when using MPRester.query() within pymatgen versus the Materials Project website.
Using MPRester.query():
mpr = MPRester('My_API_key')
data = mpr.query('mp-22590', ['input.incar'])[0]
The INCAR I get is:
ALGO = Normal
AMIX = 0.2
AMIX_MAG = 0.8
BMIX = 0.001
BMIX_MAG = 0.001
ENCUT = 520
IBRION = 1
ICHARG = 1
ISIF = 3
ISMEAR = 1
ISPIN = 2
LDAU = True
LDAUJ = 0 0 0
LDAUL = 0 2 0
LDAUTYPE = 2
LDAUU = 0 5.3 0
LREAL = Auto
LWAVE = True
MAGMOM = 12 * 0.6 4 * 5.0 4 * 0.6
NELM = 100
NELMDL = 6
NELMIN = 3
NPAR = 1
NSW = 200
PREC = Accurate
SIGMA = 0.2
SYSTEM = Rubyvaspy :: o fe la
whereas when accessing the VASP input files for 'mp-22590' from the Materials Project website, the INCAR is:
ALGO = Fast
EDIFF = 0.001
ENCUT = 520
IBRION = 2
ICHARG = 1
ISIF = 3
ISMEAR = -5
ISPIN = 2
LDAU = True
LDAUJ = 0 0 0
LDAUL = 0 2 0
LDAUPRINT = 1
LDAUTYPE = 2
LDAUU = 0 5.3 0
LMAXMIX = 6
LORBIT = 11
LREAL = Auto
LWAVE = False
MAGMOM = 4 * 0.6 4 * 5.0 12 * 0.6
NELM = 100
NSW = 99
PREC = Accurate
SIGMA = 0.05
Am I doing something wrong?
I would appreciate any help on this.
Thanks!
I am playing with MRTG, and I configured it to use RRD to record the performance data (a switch interface byte counter). When I use "rrdtool info" to check the RRD file, I see that ds[ds0].last_ds is a number, and it changes every time new data comes in.
# rrdtool info 10.0.3.129_24_bw.rrd
filename = "10.0.3.129_24_bw.rrd"
rrd_version = "0003"
step = 60
last_update = 1482950882
header_size = 2912
ds[ds0].index = 0
ds[ds0].type = "COUNTER"
ds[ds0].minimal_heartbeat = 600
ds[ds0].min = 0.0000000000e+00
ds[ds0].max = 1.2500000000e+08
ds[ds0].last_ds = "6332648954"
ds[ds0].value = 3.5016393443e+01
ds[ds0].unknown_sec = 0
ds[ds1].index = 1
ds[ds1].type = "COUNTER"
ds[ds1].minimal_heartbeat = 600
ds[ds1].min = 0.0000000000e+00
ds[ds1].max = 1.2500000000e+08
ds[ds1].last_ds = "32104385407"
ds[ds1].value = 5.3344262295e+01
ds[ds1].unknown_sec = 0
What is it exactly? Thanks!
last_ds is the last received value of the DS, prior to the rate calculation, at last_update time. When a new update comes in with a new DS value, it is used to compute the value for the update interval:
new_value = ( new_ds - last_ds ) / ( current_time - last_update )
This value is then assigned to one or more intervals (according to data normalisation) in order to set values in the various RRAs.
last_ds differs from value because it is captured before the rate calculation and normalisation.
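As a worked example, here is a minimal C sketch of that rate calculation, using the ds0 last_ds value from the output above and a made-up follow-up sample (the real rrdtool additionally handles counter wrap-around and normalisation across step boundaries):

#include <stdio.h>

int main(void)
{
    double last_ds     = 6332648954.0; /* raw value stored at last_update */
    long   last_update = 1482950882;
    double new_ds      = 6332700000.0; /* hypothetical next raw sample */
    long   now         = 1482950942;   /* 60 seconds later */

    /* rate for the update interval, in bytes per second */
    double rate = (new_ds - last_ds) / (double)(now - last_update);
    printf("rate = %.2f bytes/s\n", rate); /* 51046 / 60 = 850.77 */
    return 0;
}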
I recently took an online coding test and was stuck on one question:
Given a number N, determine whether it is of the form P^Q (P to the power Q). I solved it using a brute-force method (checking each candidate individually), but that timed out, so I need an efficient algorithm.
Input: 9
Output: yes
Input: 125
Output: yes
Input: 27
Output: yes
Constraints: 2 < N < 100000
If we assume non-trivial cases, the constraints are effectively:
N = <2,100000)
P > 1
Q > 1
This can be solved with a sieve that marks all powers greater than 1 up to N. The question is whether you need to optimize a single query or many of them. If you need just a single query, you do not need the sieve table in memory: you just iterate until you hit N and then stop (so in the worst case, when N is not of the form P^Q, this computes the whole sieve anyway). Otherwise, initialize the table once and then just use it. As N is small, I go for the full table.
#include <math.h>

const int n=100000;
int sieve[n]={255}; // for simplicity 1 int/number; a waste of space, 1 bit per number would do
int powers(int x) // expects 0 <= x < n
    {
    // init sieve table if not already inited (after init sieve[0] is 0, never 255)
    if (sieve[0]==255)
        {
        int i,p;
        for (i=0;i<n;i++) sieve[i]=0;           // clear sieve
        for (p=(int)sqrt((double)n);p>1;p--)    // process all non-trivial P, from sqrt(n) down to 2
         for (i=p*p;i<n;i*=p)                   // mark P^2, P^3, ... below n
          sieve[i]=p;                           // store P so it can be found later (the descending loop leaves the smallest base; with 1 bit/number just set the bit)
        }
    return sieve[x]; // 0 if x is not a non-trivial power, otherwise the smallest base P
    }
The first call took 0.548 ms on my setup; subsequent calls take unmeasurably small times.
It returns P, so if P != 0 the number is of the form P^Q and you can use the result directly as a bool. You can also easily get Q by repeated division (see the sketch below), or create another sieve holding Q to be even faster if you need both P and Q.
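A minimal sketch of that Q recovery (exponent_of is a hypothetical helper; it assumes p > 1 and that x really is a power of p, as guaranteed by a nonzero result of powers):

int exponent_of(int x,int p)
    {
    int q=0;
    while ((x>1)&&(x%p==0)) { x/=p; q++; } // strip factors of p, counting them
    return q; // e.g. exponent_of(125,5) == 3
    }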
Here are all the non-trivial powers found for N < 100000:
4 = 2^q
8 = 2^q
9 = 3^q
16 = 2^q
25 = 5^q
27 = 3^q
32 = 2^q
36 = 6^q
49 = 7^q
64 = 2^q
81 = 3^q
100 = 10^q
121 = 11^q
125 = 5^q
128 = 2^q
144 = 12^q
169 = 13^q
196 = 14^q
216 = 6^q
225 = 15^q
243 = 3^q
256 = 2^q
289 = 17^q
324 = 18^q
343 = 7^q
361 = 19^q
400 = 20^q
441 = 21^q
484 = 22^q
512 = 2^q
529 = 23^q
576 = 24^q
625 = 5^q
676 = 26^q
729 = 3^q
784 = 28^q
841 = 29^q
900 = 30^q
961 = 31^q
1000 = 10^q
1024 = 2^q
1089 = 33^q
1156 = 34^q
1225 = 35^q
1296 = 6^q
1331 = 11^q
1369 = 37^q
1444 = 38^q
1521 = 39^q
1600 = 40^q
1681 = 41^q
1728 = 12^q
1764 = 42^q
1849 = 43^q
1936 = 44^q
2025 = 45^q
2048 = 2^q
2116 = 46^q
2187 = 3^q
2197 = 13^q
2209 = 47^q
2304 = 48^q
2401 = 7^q
2500 = 50^q
2601 = 51^q
2704 = 52^q
2744 = 14^q
2809 = 53^q
2916 = 54^q
3025 = 55^q
3125 = 5^q
3136 = 56^q
3249 = 57^q
3364 = 58^q
3375 = 15^q
3481 = 59^q
3600 = 60^q
3721 = 61^q
3844 = 62^q
3969 = 63^q
4096 = 2^q
4225 = 65^q
4356 = 66^q
4489 = 67^q
4624 = 68^q
4761 = 69^q
4900 = 70^q
4913 = 17^q
5041 = 71^q
5184 = 72^q
5329 = 73^q
5476 = 74^q
5625 = 75^q
5776 = 76^q
5832 = 18^q
5929 = 77^q
6084 = 78^q
6241 = 79^q
6400 = 80^q
6561 = 3^q
6724 = 82^q
6859 = 19^q
6889 = 83^q
7056 = 84^q
7225 = 85^q
7396 = 86^q
7569 = 87^q
7744 = 88^q
7776 = 6^q
7921 = 89^q
8000 = 20^q
8100 = 90^q
8192 = 2^q
8281 = 91^q
8464 = 92^q
8649 = 93^q
8836 = 94^q
9025 = 95^q
9216 = 96^q
9261 = 21^q
9409 = 97^q
9604 = 98^q
9801 = 99^q
10000 = 10^q
10201 = 101^q
10404 = 102^q
10609 = 103^q
10648 = 22^q
10816 = 104^q
11025 = 105^q
11236 = 106^q
11449 = 107^q
11664 = 108^q
11881 = 109^q
12100 = 110^q
12167 = 23^q
12321 = 111^q
12544 = 112^q
12769 = 113^q
12996 = 114^q
13225 = 115^q
13456 = 116^q
13689 = 117^q
13824 = 24^q
13924 = 118^q
14161 = 119^q
14400 = 120^q
14641 = 11^q
14884 = 122^q
15129 = 123^q
15376 = 124^q
15625 = 5^q
15876 = 126^q
16129 = 127^q
16384 = 2^q
16641 = 129^q
16807 = 7^q
16900 = 130^q
17161 = 131^q
17424 = 132^q
17576 = 26^q
17689 = 133^q
17956 = 134^q
18225 = 135^q
18496 = 136^q
18769 = 137^q
19044 = 138^q
19321 = 139^q
19600 = 140^q
19683 = 3^q
19881 = 141^q
20164 = 142^q
20449 = 143^q
20736 = 12^q
21025 = 145^q
21316 = 146^q
21609 = 147^q
21904 = 148^q
21952 = 28^q
22201 = 149^q
22500 = 150^q
22801 = 151^q
23104 = 152^q
23409 = 153^q
23716 = 154^q
24025 = 155^q
24336 = 156^q
24389 = 29^q
24649 = 157^q
24964 = 158^q
25281 = 159^q
25600 = 160^q
25921 = 161^q
26244 = 162^q
26569 = 163^q
26896 = 164^q
27000 = 30^q
27225 = 165^q
27556 = 166^q
27889 = 167^q
28224 = 168^q
28561 = 13^q
28900 = 170^q
29241 = 171^q
29584 = 172^q
29791 = 31^q
29929 = 173^q
30276 = 174^q
30625 = 175^q
30976 = 176^q
31329 = 177^q
31684 = 178^q
32041 = 179^q
32400 = 180^q
32761 = 181^q
32768 = 2^q
33124 = 182^q
33489 = 183^q
33856 = 184^q
34225 = 185^q
34596 = 186^q
34969 = 187^q
35344 = 188^q
35721 = 189^q
35937 = 33^q
36100 = 190^q
36481 = 191^q
36864 = 192^q
37249 = 193^q
37636 = 194^q
38025 = 195^q
38416 = 14^q
38809 = 197^q
39204 = 198^q
39304 = 34^q
39601 = 199^q
40000 = 200^q
40401 = 201^q
40804 = 202^q
41209 = 203^q
41616 = 204^q
42025 = 205^q
42436 = 206^q
42849 = 207^q
42875 = 35^q
43264 = 208^q
43681 = 209^q
44100 = 210^q
44521 = 211^q
44944 = 212^q
45369 = 213^q
45796 = 214^q
46225 = 215^q
46656 = 6^q
47089 = 217^q
47524 = 218^q
47961 = 219^q
48400 = 220^q
48841 = 221^q
49284 = 222^q
49729 = 223^q
50176 = 224^q
50625 = 15^q
50653 = 37^q
51076 = 226^q
51529 = 227^q
51984 = 228^q
52441 = 229^q
52900 = 230^q
53361 = 231^q
53824 = 232^q
54289 = 233^q
54756 = 234^q
54872 = 38^q
55225 = 235^q
55696 = 236^q
56169 = 237^q
56644 = 238^q
57121 = 239^q
57600 = 240^q
58081 = 241^q
58564 = 242^q
59049 = 3^q
59319 = 39^q
59536 = 244^q
60025 = 245^q
60516 = 246^q
61009 = 247^q
61504 = 248^q
62001 = 249^q
62500 = 250^q
63001 = 251^q
63504 = 252^q
64000 = 40^q
64009 = 253^q
64516 = 254^q
65025 = 255^q
65536 = 2^q
66049 = 257^q
66564 = 258^q
67081 = 259^q
67600 = 260^q
68121 = 261^q
68644 = 262^q
68921 = 41^q
69169 = 263^q
69696 = 264^q
70225 = 265^q
70756 = 266^q
71289 = 267^q
71824 = 268^q
72361 = 269^q
72900 = 270^q
73441 = 271^q
73984 = 272^q
74088 = 42^q
74529 = 273^q
75076 = 274^q
75625 = 275^q
76176 = 276^q
76729 = 277^q
77284 = 278^q
77841 = 279^q
78125 = 5^q
78400 = 280^q
78961 = 281^q
79507 = 43^q
79524 = 282^q
80089 = 283^q
80656 = 284^q
81225 = 285^q
81796 = 286^q
82369 = 287^q
82944 = 288^q
83521 = 17^q
84100 = 290^q
84681 = 291^q
85184 = 44^q
85264 = 292^q
85849 = 293^q
86436 = 294^q
87025 = 295^q
87616 = 296^q
88209 = 297^q
88804 = 298^q
89401 = 299^q
90000 = 300^q
90601 = 301^q
91125 = 45^q
91204 = 302^q
91809 = 303^q
92416 = 304^q
93025 = 305^q
93636 = 306^q
94249 = 307^q
94864 = 308^q
95481 = 309^q
96100 = 310^q
96721 = 311^q
97336 = 46^q
97344 = 312^q
97969 = 313^q
98596 = 314^q
99225 = 315^q
99856 = 316^q
It took 62.6 ms including the first init call (and the string output to a memo control, which is much slower than the computation itself); without the string output it took just 1.25 ms.
It does not need to be a full debugger, but I need a good stack trace dump when an assert is raised. A simple list of called functions is not enough. I'm pretty happy with my Eiffel system, which gives me something like:
17 frames in current stack.
===== Displaying only top 10 frames in run-time stack =====
agent call wrapper 2
======================================
lookup_key USER_COMMAND_INVOCATION
Current = USER_COMMAND_INVOCATION#03FE5D00
[ arachno_window = #061A4AF0
window = #061A4AF0
mrec = Void
trigger = "if"
param_int = 0
param_str = Void
skip_next_keystroke = false
in_continuation_key = false
current_hook = #05EFB500
update_default_command = #05EF3578
default_command = "Editor.Modifications#cInsert Typed Text"
last_invoked_command = "Editor.Modifications#cTab Character"
show_popup_menu_action = #061B3C30
build_menu = Void
menu_line = 0
menu_offset = 0
modifiers = #03FDCAC8
code = #03FDCB88
character = #03FDCB58
count = 0
prefix_count = -1
selected_commands = #03FDCA98
cached_file_types = #03FDCA68
cached_file_context = #03FDCA38
cached_window_type_name = Void
project_context_action = #061B3C80
program_context_action = #061B3C58
window_context_action = #061B3CA8
cached_window_name_valid = false
cached_file_type_valid = false
cached_file_context_valid = false
reset_prefix_on_next_event = false
]
before = true
mod = BIT_32
[ bits = 0
]
c = 65362
str = ""
menu = Void
map = Void
key = Void
b = Void
i = 0
n = 0
m = 0
cval = 65362
char = 0
cnt = 0
s = Void
skip_flag = false
prefix_found_flag = false
ev = AGUI_EVENT_CONSTANTS
t = Void
consume = false
line 387 column 13 file x:\work_arachno\src\run\user_command_invocation.e
======================================
on_key_hook (late bind.)
======================================
on_key_hook EDITOR_KEY_STEALER_HOOK
Current = EDITOR_KEY_STEALER_HOOK#05EFB500
[ callback = POINTER#063D1570
sapi = SCRIPTING_API
]
mod = BIT_32
[ bits = 0
]
code = 65362
str = ""
Result = false
break = false
engine = POINTER#04BEE690
cmd_pair = Void
line 41 column 0 file x:\work_arachno\src\editor\editor_key_stealer_hook.e
======================================
callback SCRIPTING_API
ptr = POINTER#04BEE690
cb = POINTER#063D1570
args = 3
res = 1
Result = 0
line 481 column 16 file x:\work_arachno\src\scripting\scripting_api.e
======================================
External CECIL call.
======================================
editor__move_lines EDITOR_SCRIPTING
Current = EDITOR_SCRIPTING#05EE17A0
[ sapi = SCRIPTING_API
view = #061A47D0
positions = #05EE1780
stealer_hook = #05EFB500
html_file_dialog = Void
html_file_dialog_modified = false
]
manager = EDITOR_MANAGER#05EE0ED8
[ configuration = #03FD7F00
breakpoints = #05ED0460
folding_settings = #05ED76C0
lang_descriptors = #05EDD360
confirm_edit_while_debugging = true
use_word_wrap = false
are_foldings_visible = true
are_line_numbers_visible = true
are_modified_markers_visible = true
are_bookmarks_visible = true
are_whitespaces_visible = false
is_highlight_current_line = false
is_highlight_search_results = true
is_right_margin_visible = true
are_indent_lines_visible = true
are_line_endings_visible = false
are_end_of_line_dots_visible = false
are_breakpoints_visible = false
commands = #03FD1E70
completition = #0400A8C0
tooltip_buffer = Void
folding_background_task_id = 0
folding_file_observer = #061C6EE0
folding_rand = #05EDDE88
bracket_window = Void
bracket_window_popdown_task = 0
tracked_positions = Void
code_hinting = #05EE2F20
code_hinting_active = false
code_hinting_window_id = 0
code_hinting_file_id = 0
code_hinting_shutdown_task = 0
code_hinting_shutdown_agent = #05ED2690
code_hinting_autostart = true
code_hinting_info_display = 2
code_hinting_display_id = 0
code_hinting_sequence_id = 3
]
engine = POINTER#04BEE690
sobj = POINTER#07526740
Result = 0
first = 486
last = 486
ref = 487
before = false
line 220 column 25 file ΘÖ▓♣
======================================
move_lines (late bind.)
======================================
move_lines EDITOR_BUFFER
Current = EDITOR_BUFFER#03FE91B0
[ modified_lines = #074BD408
modified_lines_string = ""
lines = #074BD348
last_modified_position = #061D6B18
is_read_only_state = false
is_modified = true
is_special_empty = false
is_zombie = false
line_end_coding = 1
loaded_without_last_newline = true
descriptor = #05EE04C0
views = #074BD3A8
transform_position = Void
transform_array = #07552E40
forbid_update_counter = 0
inside_undo_operation = false
debug_position_list = #074BD390
]
pos = EDITOR_POSITION#061B35A0
[ help = "caret"
buffer_line = #03FF0460
offset = 18
data = #075181E0
is_anchored = false
is_transformed = false
next_transformed = Void
next = #044C0270
]
lower = 486
upper = 486
ref = 487
before = false
count = 1
src = 0
dest = 1
i = 2214
j = 2213
low = 486
up = 489
target = 486
skip = false
a = ARRAY[EDITOR_BUFFER_LINE]#075529C0
[ storage = NATIVE_ARRAY[EDITOR_BUFFER_LINE]#03FDA448
capacity = 1
upper = 1
lower = 1
]
set = SET[INTEGER]#075266E0
[ buckets = NATIVE_ARRAY[SET_NODE[INTEGER]]#0755B000
cache_user = -1
cache_node = Void
cache_buckets = 0
count = 2213
capacity = 3079
]
line 1107 column 25 file x:\work_arachno\src\editor\editor_buffer.e
======================================
This means I get a dump of all stack variables and the current (this) object.
And best of all, I can ship this to the customer, because it requires zero configuration on the client side.
Does anyone know how I can do this in C++/C, or even D or CObject (I have to switch the language anyway, and these are my four preferred targets)?
And everyone who says I should deploy GDB will get downvotes.
For Windows, look at minidumps, creatable with MiniDumpWriteDump() from dbghelp.dll.
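A minimal sketch of that approach, assuming you install an unhandled-exception filter and write the dump to a fixed path "crash.dmp" (both assumptions are for illustration; link against Dbghelp.lib):

#include <windows.h>
#include <dbghelp.h>

static LONG WINAPI write_minidump(EXCEPTION_POINTERS *info)
{
    /* open the dump file; "crash.dmp" is a placeholder path */
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        /* MiniDumpWithDataSegs also captures global data; richer types such
           as MiniDumpWithFullMemory get closer to the Eiffel-style dump */
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpWithDataSegs, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

int main(void)
{
    SetUnhandledExceptionFilter(write_minidump);
    /* ... application code; any unhandled crash now produces crash.dmp,
       which can be opened in WinDbg or Visual Studio ... */
    return 0;
}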
If you're on Linux with glibc, look into the backtrace() function.
I've used it on a PPC system to get stack traces whenever a critical error occurs.
In my implementation I just log the return addresses and use a script on the host platform (based around addr2line) to translate them back into file name/function name/line number.
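A minimal sketch of that approach with glibc (compile with -g for addr2line, and with -rdynamic if you want symbol names in the backtrace_symbols() output):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void dump_stack(void)
{
    void *frames[64];
    int count = backtrace(frames, 64);              /* collect return addresses */
    char **symbols = backtrace_symbols(frames, count);
    for (int i = 0; i < count; i++)
        fprintf(stderr, "%s\n", symbols ? symbols[i] : "?");
    free(symbols);
    /* alternatively, log only the raw addresses and feed them to
       addr2line on the host to recover file/function/line */
}

You would call dump_stack() from your assert handler (or a signal handler, with the usual async-signal-safety caveats).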
As stijn pointed out, under Windows a minidump may be sufficient. If it is, it'll undoubtedly be a whole lot easier than almost any alternative.
A minidump won't be very close to what you're getting from Eiffel though. If you need something closer, you'll probably have to write (most of) the code yourself. The Antex Stack Walker would probably be a reasonable start (look for stackwalker.cpp and stackwalker.h).
You might want to look at google-breakpad: http://code.google.com/p/google-breakpad/
It's the project formerly known as Airbag:
http://benjamin.smedbergs.us/blog/2006-09-12/deploying-the-airbag/
https://wiki.mozilla.org/Airbag
I'll admit that I haven't used it (my needs for something like this have been Windows-only), but it appears to have Mozilla behind it and seems to be pretty active.
It's a set of libraries and tools for automatically sending crash dumps to a server so you can do post-mortems. As I understand it, Firefox is currently using it, so I imagine it's pretty cross-platform. The downside is that it's not self-contained in the client program: you need some infrastructure in place for it to be useful. That might or might not be workable for your situation.
If you don't want a service that receives the reports (maybe you just want them on the client, or want them sent by email), I imagine you could have the client reporter send its information to a 'server' program that also lives on the client machine and does the reporting. No idea how much work it would be to get it to function like that.