Finding whether a number is of the form P^Q or not - C++

I recently took an online coding test and got stuck on one question: given a number N, determine whether N is of the form P^Q (P to the power Q). I solved it with a brute-force method (testing each candidate individually), but that timed out, so I need a more efficient algorithm.
Input: 9
Output: yes
Input: 125
Output: yes
Input: 27
Output: yes
Constraints: 2 < N < 100000

If we assume non-trivial cases, the constraints effectively become:
N ∈ <2, 100000)
P > 1
Q > 1
This can be solved with a sieve that marks every non-trivial power (exponent greater than 1) up to N. The question then is whether you need to answer a single query or many of them. If you need just a single query, you do not need the sieve table in memory; you just iterate until you hit N and then stop (so in the worst case, when N is not of the form P^Q, this computes the whole sieve anyway). Otherwise, initialize the table once and then just use it. As N is small, I go for the full table.
#include <cmath>

const int n = 100000;
int sieve[n] = {255}; // for simplicity 1 int per number; wasteful, could use 1 bit per number instead

int powers(int x) // x in [0, n); returns base P if x == P^Q with Q >= 2, else 0
{
    // init sieve table if not already initialized
    if (sieve[0] == 255)
    {
        int i, p;
        for (i = 0; i < n; i++) sieve[i] = 0;           // clear sieve
        for (p = (int)std::sqrt((double)n); p > 1; p--) // process all non-trivial bases P
            for (i = p * p; i < n; i *= p)              // mark P^2, P^3, ... below n
                sieve[i] = p;                           // store P so it can be recovered later; counting P down leaves the smallest base (with 1 bit/number just set the bit instead)
    }
    return sieve[x];
}
The first call took 0.548 ms on my setup; subsequent calls take unmeasurably small time.
The function returns P, so if the result is non-zero the number is of the form P^Q and you can use it directly as a bool. You can also easily recover Q by repeated division, or build a second sieve storing Q for even more speed if you need both P and Q.
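For illustration, here is a quick sketch of recovering Q from the returned P by repeated division (written in Python for brevity; the function name is mine, not part of the C++ answer above):

def recover_q(x, p):
    # recover Q from x = p**q (p > 1) by dividing out p until nothing remains
    q = 0
    while x > 1 and x % p == 0:
        x //= p
        q += 1
    return q

print(recover_q(125, 5))    # 3, since 125 = 5^3
print(recover_q(65536, 2))  # 16, since 65536 = 2^16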
Here are all the non-trivial powers found for N < 100000 (shown as N = P^q):
4 = 2^q
8 = 2^q
9 = 3^q
16 = 2^q
25 = 5^q
27 = 3^q
32 = 2^q
36 = 6^q
49 = 7^q
64 = 2^q
81 = 3^q
100 = 10^q
121 = 11^q
125 = 5^q
128 = 2^q
144 = 12^q
169 = 13^q
196 = 14^q
216 = 6^q
225 = 15^q
243 = 3^q
256 = 2^q
289 = 17^q
324 = 18^q
343 = 7^q
361 = 19^q
400 = 20^q
441 = 21^q
484 = 22^q
512 = 2^q
529 = 23^q
576 = 24^q
625 = 5^q
676 = 26^q
729 = 3^q
784 = 28^q
841 = 29^q
900 = 30^q
961 = 31^q
1000 = 10^q
1024 = 2^q
1089 = 33^q
1156 = 34^q
1225 = 35^q
1296 = 6^q
1331 = 11^q
1369 = 37^q
1444 = 38^q
1521 = 39^q
1600 = 40^q
1681 = 41^q
1728 = 12^q
1764 = 42^q
1849 = 43^q
1936 = 44^q
2025 = 45^q
2048 = 2^q
2116 = 46^q
2187 = 3^q
2197 = 13^q
2209 = 47^q
2304 = 48^q
2401 = 7^q
2500 = 50^q
2601 = 51^q
2704 = 52^q
2744 = 14^q
2809 = 53^q
2916 = 54^q
3025 = 55^q
3125 = 5^q
3136 = 56^q
3249 = 57^q
3364 = 58^q
3375 = 15^q
3481 = 59^q
3600 = 60^q
3721 = 61^q
3844 = 62^q
3969 = 63^q
4096 = 2^q
4225 = 65^q
4356 = 66^q
4489 = 67^q
4624 = 68^q
4761 = 69^q
4900 = 70^q
4913 = 17^q
5041 = 71^q
5184 = 72^q
5329 = 73^q
5476 = 74^q
5625 = 75^q
5776 = 76^q
5832 = 18^q
5929 = 77^q
6084 = 78^q
6241 = 79^q
6400 = 80^q
6561 = 3^q
6724 = 82^q
6859 = 19^q
6889 = 83^q
7056 = 84^q
7225 = 85^q
7396 = 86^q
7569 = 87^q
7744 = 88^q
7776 = 6^q
7921 = 89^q
8000 = 20^q
8100 = 90^q
8192 = 2^q
8281 = 91^q
8464 = 92^q
8649 = 93^q
8836 = 94^q
9025 = 95^q
9216 = 96^q
9261 = 21^q
9409 = 97^q
9604 = 98^q
9801 = 99^q
10000 = 10^q
10201 = 101^q
10404 = 102^q
10609 = 103^q
10648 = 22^q
10816 = 104^q
11025 = 105^q
11236 = 106^q
11449 = 107^q
11664 = 108^q
11881 = 109^q
12100 = 110^q
12167 = 23^q
12321 = 111^q
12544 = 112^q
12769 = 113^q
12996 = 114^q
13225 = 115^q
13456 = 116^q
13689 = 117^q
13824 = 24^q
13924 = 118^q
14161 = 119^q
14400 = 120^q
14641 = 11^q
14884 = 122^q
15129 = 123^q
15376 = 124^q
15625 = 5^q
15876 = 126^q
16129 = 127^q
16384 = 2^q
16641 = 129^q
16807 = 7^q
16900 = 130^q
17161 = 131^q
17424 = 132^q
17576 = 26^q
17689 = 133^q
17956 = 134^q
18225 = 135^q
18496 = 136^q
18769 = 137^q
19044 = 138^q
19321 = 139^q
19600 = 140^q
19683 = 3^q
19881 = 141^q
20164 = 142^q
20449 = 143^q
20736 = 12^q
21025 = 145^q
21316 = 146^q
21609 = 147^q
21904 = 148^q
21952 = 28^q
22201 = 149^q
22500 = 150^q
22801 = 151^q
23104 = 152^q
23409 = 153^q
23716 = 154^q
24025 = 155^q
24336 = 156^q
24389 = 29^q
24649 = 157^q
24964 = 158^q
25281 = 159^q
25600 = 160^q
25921 = 161^q
26244 = 162^q
26569 = 163^q
26896 = 164^q
27000 = 30^q
27225 = 165^q
27556 = 166^q
27889 = 167^q
28224 = 168^q
28561 = 13^q
28900 = 170^q
29241 = 171^q
29584 = 172^q
29791 = 31^q
29929 = 173^q
30276 = 174^q
30625 = 175^q
30976 = 176^q
31329 = 177^q
31684 = 178^q
32041 = 179^q
32400 = 180^q
32761 = 181^q
32768 = 2^q
33124 = 182^q
33489 = 183^q
33856 = 184^q
34225 = 185^q
34596 = 186^q
34969 = 187^q
35344 = 188^q
35721 = 189^q
35937 = 33^q
36100 = 190^q
36481 = 191^q
36864 = 192^q
37249 = 193^q
37636 = 194^q
38025 = 195^q
38416 = 14^q
38809 = 197^q
39204 = 198^q
39304 = 34^q
39601 = 199^q
40000 = 200^q
40401 = 201^q
40804 = 202^q
41209 = 203^q
41616 = 204^q
42025 = 205^q
42436 = 206^q
42849 = 207^q
42875 = 35^q
43264 = 208^q
43681 = 209^q
44100 = 210^q
44521 = 211^q
44944 = 212^q
45369 = 213^q
45796 = 214^q
46225 = 215^q
46656 = 6^q
47089 = 217^q
47524 = 218^q
47961 = 219^q
48400 = 220^q
48841 = 221^q
49284 = 222^q
49729 = 223^q
50176 = 224^q
50625 = 15^q
50653 = 37^q
51076 = 226^q
51529 = 227^q
51984 = 228^q
52441 = 229^q
52900 = 230^q
53361 = 231^q
53824 = 232^q
54289 = 233^q
54756 = 234^q
54872 = 38^q
55225 = 235^q
55696 = 236^q
56169 = 237^q
56644 = 238^q
57121 = 239^q
57600 = 240^q
58081 = 241^q
58564 = 242^q
59049 = 3^q
59319 = 39^q
59536 = 244^q
60025 = 245^q
60516 = 246^q
61009 = 247^q
61504 = 248^q
62001 = 249^q
62500 = 250^q
63001 = 251^q
63504 = 252^q
64000 = 40^q
64009 = 253^q
64516 = 254^q
65025 = 255^q
65536 = 2^q
66049 = 257^q
66564 = 258^q
67081 = 259^q
67600 = 260^q
68121 = 261^q
68644 = 262^q
68921 = 41^q
69169 = 263^q
69696 = 264^q
70225 = 265^q
70756 = 266^q
71289 = 267^q
71824 = 268^q
72361 = 269^q
72900 = 270^q
73441 = 271^q
73984 = 272^q
74088 = 42^q
74529 = 273^q
75076 = 274^q
75625 = 275^q
76176 = 276^q
76729 = 277^q
77284 = 278^q
77841 = 279^q
78125 = 5^q
78400 = 280^q
78961 = 281^q
79507 = 43^q
79524 = 282^q
80089 = 283^q
80656 = 284^q
81225 = 285^q
81796 = 286^q
82369 = 287^q
82944 = 288^q
83521 = 17^q
84100 = 290^q
84681 = 291^q
85184 = 44^q
85264 = 292^q
85849 = 293^q
86436 = 294^q
87025 = 295^q
87616 = 296^q
88209 = 297^q
88804 = 298^q
89401 = 299^q
90000 = 300^q
90601 = 301^q
91125 = 45^q
91204 = 302^q
91809 = 303^q
92416 = 304^q
93025 = 305^q
93636 = 306^q
94249 = 307^q
94864 = 308^q
95481 = 309^q
96100 = 310^q
96721 = 311^q
97336 = 46^q
97344 = 312^q
97969 = 313^q
98596 = 314^q
99225 = 315^q
99856 = 316^q
It took 62.6 ms including the first init call (and the string output to a memo, which is much slower than the computation itself); without the string output it took just 1.25 ms.

Related

Rsyslog unable to send multiline logs

I'm unable to push the logs below via rsyslog; it forwards only one line of each log entry.
Kafka-server logs:
[2022-07-25 11:43:45,091] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = INTERNAL://0.0.0.0:9092,BROKER://0.0.0.0:9091,CLIENT://0.0.0.0:9093
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = BROKER
inter.broker.protocol.version = 2.3-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = INTERNAL:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:PLAINTEXT
listeners = INTERNAL://:9092,BROKER://:9091,CLIENT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.3-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 120
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = 0.0.0.0:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2022-07-25 11:43:45,145] ERROR Fatal error during SupportedServerStartable startup. Prepare to shutdown (io.confluent.support.metrics.SupportedKafka)
java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1492)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1460)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1114)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1094)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1091)
at kafka.server.KafkaConfig.fromProps(KafkaConfig.scala)
at io.confluent.support.metrics.SupportedServerStartable.<init>(SupportedServerStartable.java:52)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)
rsysconf.d/10kafka.conf
$InputFilePollInterval 1
input(type="imfile"
File="/var/log/kafka/server.log"
Tag="app-error"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}"
)
*.* #Fluentdvmip:5142
Can someone please guide me on how to send the complete logs from rsyslog to Fluentd?
Below is the regex I'll be using in the Fluentd configuration:
https://regex101.com/r/NaNVcr/1
Or do we need to modify the Kafka log4j properties to get proper logging?
Instead of sending raw Kafka server logs into rsyslog, I first converted the Kafka logs into JSON format.
I created a jar using this link.
Steps followed:
Cloned the repository linked above on the VM where Kafka was installed.
Installed the latest version of Maven.
Ran mvn package inside the freshly cloned directory.
A jar was created inside the target directory: "log4j-json-layout-1.0-SNAPSHOT.jar".
Copied this jar to /usr/share/java/kafka/.
Added the code below to /etc/kafka/log4j.properties and commented out the older configuration.
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=de.thmshmm.log4j.JsonLayout
log4j.appender.kafkaAppender.layout.DatePattern=yyyy-MM-dd HH:mm:ss
Restarted the confluent-kafka service.
Before and after log screenshots (not included in the text).
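For intuition, here is a rough Python sketch (purely illustrative, not rsyslog code) of the grouping that startmsg.regex performs inside imfile: a line matching the start pattern completes the previous record and begins a new one, so the Java stack trace folds into its ERROR line. One assumption worth double-checking: the sample server.log lines begin with "[", which the anchored ^[[:digit:]]{4} pattern in the config above would not match, so the pattern below makes the bracket optional.

import re

START = re.compile(r"^\[?\d{4}-\d{2}")  # start-of-message pattern, leading bracket optional

def group_multiline(lines):
    # fold continuation lines into the preceding record, like startmsg.regex
    record = []
    for line in lines:
        line = line.rstrip("\n")
        if START.match(line) and record:  # a new timestamped line closes the previous record
            yield "\n".join(record)
            record = []
        record.append(line)
    if record:
        yield "\n".join(record)

sample = [
    "[2022-07-25 11:43:45,145] ERROR Fatal error during startup.",
    "java.lang.IllegalArgumentException: requirement failed",
    "    at scala.Predef$.require(Predef.scala:224)",
    "[2022-07-25 11:43:46,000] INFO next message",
]
for rec in group_multiline(sample):
    print(rec)  # two records: the three-line error, then the INFO line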

How do I optimize ORM queries for faster results?

I have 80k+ entries in the Calendar table. When I run either of the following two methods, the filter/get takes too long to execute; because of that the server eventually crashes and no new entries get added. I want to know whether there are other approaches that would solve this issue.
Method 1:
date_list = []
d1 = date(2022, 5, 1)
d2 = date(2022, 6, 30)
delta = d2 - d1
for i in range(delta.days + 1):
    date_list.append(d1 + timedelta(days=i))
profiles = Profile.objects.all()
for j in date_list:
    for i in profiles:
        try:
            Calendar.objects.get(date=j, emp_id=i.emp_id)
        except Calendar.DoesNotExist:
            e = Calendar()
            e.team = i.emp_process
            e.date = j
            e.emp_name = i.emp_name
            e.emp_id = i.emp_id
            e.emp_desi = i.emp_desi
            e.att_actual = "Unmarked"
            e.save()
Method 2:
date_list = []
d1 = date(2022, 5, 1)
d2 = date(2022, 6, 30)
delta = d2 - d1
for i in range(delta.days + 1):
    date_list.append(d1 + timedelta(days=i))
profiles = Profile.objects.all()
for j in date_list:
    for i in profiles:
        cal = Calendar.objects.filter(date=j, emp_id=i.emp_id).count()
        if cal < 1:
            e = Calendar()
            e.team = i.emp_process
            e.date = j
            e.emp_name = i.emp_name
            e.emp_id = i.emp_id
            e.emp_desi = i.emp_desi
            e.att_actual = "Unmarked"
            e.save()
Try this:
date_list = []
d1 = date(2022, 5, 1)
d2 = date(2022, 6, 30)
delta = d2 - d1
for i in range(delta.days + 1):
    date_list.append(d1 + timedelta(days=i))
for j in date_list:
    profiles = Profile.objects.exclude(emp_id__in=Calendar.objects.filter(date=j).values('emp_id'))
    calendars = []
    for i in profiles:
        e = Calendar()
        e.team = i.emp_process
        e.date = j
        e.emp_name = i.emp_name
        e.emp_id = i.emp_id
        e.emp_desi = i.emp_desi
        e.att_actual = "Unmarked"
        calendars.append(e)
    Calendar.objects.bulk_create(calendars)
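If that is still slow on 80k+ rows, a further refinement (just a sketch; it assumes the model and fields shown above and uses only standard ORM features: values_list, date__range, and bulk_create's batch_size) is to fetch all existing (emp_id, date) pairs in one query and bulk-create only the missing rows, so the whole backfill is two queries plus the inserts:

from datetime import date, timedelta

d1, d2 = date(2022, 5, 1), date(2022, 6, 30)
date_list = [d1 + timedelta(days=i) for i in range((d2 - d1).days + 1)]

# one query for every (emp_id, date) pair that already exists in the range
existing = set(Calendar.objects.filter(date__range=(d1, d2)).values_list("emp_id", "date"))

missing = [
    Calendar(team=p.emp_process, date=d, emp_name=p.emp_name,
             emp_id=p.emp_id, emp_desi=p.emp_desi, att_actual="Unmarked")
    for p in Profile.objects.all()
    for d in date_list
    if (p.emp_id, d) not in existing
]
# batch_size keeps any single INSERT statement from growing too large
Calendar.objects.bulk_create(missing, batch_size=1000)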

Grouping items together in a dictionary using a while loop

I am having trouble grouping the IDs under their parent keys, a sub (parent-child) kind of structure. I spent hours on it and could not figure out a way to accomplish this. The output I am expecting is at the end of this post. Any help would be great.
import sys
import collections

dict = collections.OrderedDict()
dict["A.1"] = {"parent_child": 0}
dict["A.1.1"] = {"parent_child": 1}
dict["A.1.1.1"] = {"parent_child": 2}
dict["A.1.1.2"] = {"parent_child": 2}
dict["A.1.1.3"] = {"parent_child": 2}
dict["A.1.2"] = {"parent_child": 1}
dict["A.1.2.1"] = {"parent_child": 2}
dict["A.1.2.2"] = {"parent_child": 2}
dict["A.1.2.2.1"] = {"parent_child": 3}
dict["A.1.2.2.2"] = {"parent_child": 3}
dict["A.1.2.3"] = {"parent_child": 2}
dict["A.1.3"] = {"parent_child": 1}
dict["A.1.4"] = {"parent_child": 1}
print(dict)

new_dict = {}
p = 0  # previous index
i = 0  # current
n = 1  # next index
current_PC = 0  # current parent_child
next_PC = 0  # next parent_child
previous_id = ""
current_id = ""
next_id = ""
change_current = True
change = True
lst = []
while(True):
    if change_current:
        current_id = dict.keys()[i]
        current_PC = dict.values()[i]["parent_child"]
        change_current = False
    try:
        next_id = dict.keys()[n]
        next_PC = dict.values()[n]["parent_child"]
    except:
        pass  # it will go out of index
    print("KEY {0}".format(current_id))
    if next_PC > current_PC:
        if next_PC - current_PC == 1:
            lst.append(next_PC)
        next_PC += 1
        print("next_PC: {0}".format(next_PC))
    if next_PC == current_PC:
        new_dict[current_id] = lst
        lst = []
    break
print(new_dict)
I am trying to make the output look like this (or something similar); new_dict should end up as:
new_dict["A.1"] = ["A.1.1", "A.1.2", "A.1.3", "A.1.4"]
new_dict["A.1.1"] = ["A.1.1.1", "A.1.1.2", "A.1.1.3"]
new_dict["A.1.1.1"] = []
new_dict["A.1.1.2"] = []
new_dict["A.1.1.3"] = []
new_dict["A.1.2"] = ["A.1.2.1", "A.1.2.2", "A.1.2.3"]
new_dict["A.1.2.1"] = []
new_dict["A.1.2.2"] = ["A.1.2.2.1", "A.1.2.2.2"]
new_dict["A.1.2.2.1"] = []
new_dict["A.1.2.2.2"] = []
new_dict["A.1.2.3"] = []
new_dict["A.1.3"] = []
new_dict["A.1.4"] = []
This gives you the output you are asking for. Since I did not see {"parent_child": ...} in your desired output, I did not carry that field over.
options = ["A.1","A.1.1","A.1.1.1","A.1.1.2","A.1.1.3","A.1.2","A.1.2.1","A.1.2.2","A.1.2.2.1","A.1.2.2.2","A.1.2.3","A.1.3","A.1.4"]
new_dict = {}
for i, key in enumerate(options):
    new_dict[key] = []
    ls = []
    for j, opt in enumerate(options):
        if (key in opt) and (len(opt) - len(key) == 2):
            new_dict[key].append(opt)
print(new_dict)
EDIT
Using the suggestion from @Ranbir Aulakh's comment:
options = ["A.1","A.1.1","A.1.1.1","A.1.1.2","A.1.1.3","A.1.2","A.1.2.1","A.1.2.2","A.1.2.2.1","A.1.2.2.2","A.1.2.3","A.1.3","A.1.4"]
new_dict = {}
for i, key in enumerate(options):
    new_dict[key] = []
    ls = []
    for j, opt in enumerate(options):
        if (key in opt) and (len(opt.split(".")) - len(key.split(".")) == 1):  # was: (len(opt)-len(key)==2)
            new_dict[key].append(opt)
print(new_dict)
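An alternative sketch for the same input (my own variant, not from the answer above): derive each key's parent directly with rsplit, which sidesteps both the substring test and the length arithmetic:

options = ["A.1", "A.1.1", "A.1.1.1", "A.1.1.2", "A.1.1.3", "A.1.2", "A.1.2.1",
           "A.1.2.2", "A.1.2.2.1", "A.1.2.2.2", "A.1.2.3", "A.1.3", "A.1.4"]
new_dict = {key: [] for key in options}
for key in options:
    parent = key.rsplit(".", 1)[0]  # "A.1.2.2.1" -> "A.1.2.2"
    if parent in new_dict:          # the root "A.1" has no parent in the list
        new_dict[parent].append(key)
print(new_dict)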

Change label values when an entry value is changed

My problem at the moment is that I am trying to change a label (label16) to the first value of entry_values[0], which isn't working. I have tried passing it in as a variable and many other things; after about an hour of research I couldn't find a solution. I think the main problem is that the label is created before the code that reads the entry runs, so it won't change. When I set it with textvariable it produces an empty string (I think), but when I use plain text it puts a 0 where I expect my number.
def sub_menu(root):
    global subpage
    subpage = Frame(root)
    button5 = Button(subpage, text="Save Generation Data",
                     command = lambda: save_entries())
    button5.grid(row = 1, column = 6, sticky = E)
    button6 = Button(subpage, text="Return To Main Page",
                     command = lambda: switch_page("main"))
    button6.grid(row = 0, column = 6, sticky = W)
    juveniles_label0 = Label(subpage, text="Juveniles")
    adults_label1 = Label(subpage, text="Adults")
    seniles_label2 = Label(subpage, text="Seniles")
    population_label3 = Label(subpage, text="Population (Thousands)")
    survival_rate_label4 = Label(subpage, text="Survival Rate (Between 0 and 1)")
    birth_rate_label5 = Label(subpage, text="Birth Rate")
    number_of_gens_label6 = Label(subpage, text="Number of Generations")
    disease_trigger_label7 = Label(subpage, text="Disease Trigger Point")
    global entry0
    entry0 = Entry(subpage)
    global entry1
    entry1 = Entry(subpage)
    global entry2
    entry2 = Entry(subpage)
    global entry3
    entry3 = Entry(subpage)
    global entry4
    entry4 = Entry(subpage)
    global entry5
    entry5 = Entry(subpage)
    global entry6
    entry6 = Entry(subpage)
    global entry7
    entry7 = Entry(subpage)
    global entry8
    entry8 = Entry(subpage)
    juveniles_label0.grid(row = 0, column = 1)
    adults_label1.grid(row = 0, column = 2)
    seniles_label2.grid(row = 0, column = 3)
    population_label3.grid(row = 1, column = 0)
    survival_rate_label4.grid(row = 2, column = 0)
    birth_rate_label5.grid(row = 3, column = 0)
    number_of_gens_label6.grid(row = 3, column = 2)
    disease_trigger_label7.grid(row = 4, column = 0)
    entry0.grid(row = 1, column = 1)
    entry1.grid(row = 1, column = 2)
    entry2.grid(row = 1, column = 3)
    entry3.grid(row = 2, column = 1)
    entry4.grid(row = 2, column = 2)
    entry5.grid(row = 2, column = 3)
    entry6.grid(row = 3, column = 1)
    entry7.grid(row = 3, column = 3)
    entry8.grid(row = 4, column = 1)
    return subpage
def save_entries():  # entry receive point
    save_page = Frame(root)
    """ if e0 < 0:
    make a check: if value is < 0 don't accept, and check whether a value was input at all using if type(string_name) == str """
    e0 = entry0.get()
    if e0 >= 0:
        entry_values[0] = (e0)
    e1 = entry1.get()
    if e0 >= 0:
        entry_values[1] = (e1)
    e2 = entry2.get()
    if e0 >= 0:
        entry_values[2] = (e2)
    e3 = entry3.get()
    if e0 >= 0:
        entry_values[3] = (e3)
    e4 = entry4.get()
    if e0 >= 0:
        entry_values[4] = (e4)
    e5 = entry5.get()
    if e0 >= 0:
        entry_values[5] = (e5)
    e6 = entry6.get()
    if e0 >= 0:
        entry_values[6] = (e6)
    e7 = entry7.get()
    if e0 >= 0:
        entry_values[7] = (e7)
    e8 = entry8.get()
    if e0 >= 0:
        entry_values[8] = (e8)
    print entry_values
    return save_page
def display_values(root):
    sub2 = Frame(root)
    global entry_values
    label8 = Label(sub2, text = "Juveniles")
    label9 = Label(sub2, text = "Adults")
    label10 = Label(sub2, text = "Seniles")
    label11 = Label(sub2, text = "Population (Thousands)")
    label12 = Label(sub2, text = "Survival Rate (Between 1 and 0)")
    label13 = Label(sub2, text = "Birth Rate")
    label14 = Label(sub2, text = "Number of Generations")
    label15 = Label(sub2, text = "Disease Trigger Point")
    label16 = Label(sub2, text = entry_values[0])
    label17 = Label(sub2, textvariable = entry_values[1])
    label18 = Label(sub2, textvariable = "")
    label19 = Label(sub2, textvariable = "")
    label20 = Label(sub2, textvariable = "")
    label21 = Label(sub2, textvariable = "")
    label22 = Label(sub2, textvariable = "")
    label23 = Label(sub2, textvariable = "")
    label24 = Label(sub2, textvariable = "")
    button7 = Button(sub2, text="Return To Main Page",
                     command = lambda: switch_page("main"))
    label8.grid(row = 0, column = 1)
    label9.grid(row = 0, column = 2)
    label10.grid(row = 0, column = 3)
    label11.grid(row = 1, column = 0)
    label12.grid(row = 2, column = 0)
    label13.grid(row = 3, column = 0)
    label14.grid(row = 3, column = 3)
    label15.grid(row = 4, column = 0)
    label16.grid(row = 1, column = 1)
    label17.grid(row = 1, column = 2)
    label18.grid(row = 1, column = 3)
    label19.grid(row = 2, column = 1)
    label20.grid(row = 2, column = 2)
    label21.grid(row = 2, column = 3)
    label22.grid(row = 3, column = 1)
    label23.grid(row = 3, column = 3)
    label24.grid(row = 4, column = 1)
    button7.grid(row = 0, column = 0)
    return sub2
In order to change the text of a label you can do:
label["text"] = textVar
or
label.config(text=textVar)
So in your above code, when the entry changes, reconfigure the label using one of the above options.
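Note also that textvariable expects a Tk variable object (StringVar, IntVar, ...), not a plain string or a list element, which is why your textvariable labels come out empty. A minimal runnable sketch (Python 3; the widget names here are illustrative, not taken from your code):

import tkinter as tk

root = tk.Tk()
value = tk.StringVar()  # textvariable must be a Tk variable, not a plain string
entry = tk.Entry(root, textvariable=value)
label = tk.Label(root, text="waiting...")

def save():
    # reconfigure the label whenever the button is pressed
    label.config(text=value.get())

entry.pack()
tk.Button(root, text="Save", command=save).pack()
label.pack()
root.mainloop()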

Embeddable Debugger for C++

It does not need to be a full debugger, but I need a good stack trace dump when an assert is raised. A simple list of called functions is not enough. I'm pretty happy with my Eiffel system, which gives me something like:
17 frames in current stack.
===== Displaying only top 10 frames in run-time stack =====
agent call wrapper 2
======================================
lookup_key USER_COMMAND_INVOCATION
Current = USER_COMMAND_INVOCATION#03FE5D00
[ arachno_window = #061A4AF0
window = #061A4AF0
mrec = Void
trigger = "if"
param_int = 0
param_str = Void
skip_next_keystroke = false
in_continuation_key = false
current_hook = #05EFB500
update_default_command = #05EF3578
default_command = "Editor.Modifications#cInsert Typed Text"
last_invoked_command = "Editor.Modifications#cTab Character"
show_popup_menu_action = #061B3C30
build_menu = Void
menu_line = 0
menu_offset = 0
modifiers = #03FDCAC8
code = #03FDCB88
character = #03FDCB58
count = 0
prefix_count = -1
selected_commands = #03FDCA98
cached_file_types = #03FDCA68
cached_file_context = #03FDCA38
cached_window_type_name = Void
project_context_action = #061B3C80
program_context_action = #061B3C58
window_context_action = #061B3CA8
cached_window_name_valid = false
cached_file_type_valid = false
cached_file_context_valid = false
reset_prefix_on_next_event = false
]
before = true
mod = BIT_32
[ bits = 0
]
c = 65362
str = ""
menu = Void
map = Void
key = Void
b = Void
i = 0
n = 0
m = 0
cval = 65362
char = 0
cnt = 0
s = Void
skip_flag = false
prefix_found_flag = false
ev = AGUI_EVENT_CONSTANTS
t = Void
consume = false
line 387 column 13 file x:\work_arachno\src\run\user_command_invocation.e
======================================
on_key_hook (late bind.)
======================================
on_key_hook EDITOR_KEY_STEALER_HOOK
Current = EDITOR_KEY_STEALER_HOOK#05EFB500
[ callback = POINTER#063D1570
sapi = SCRIPTING_API
]
mod = BIT_32
[ bits = 0
]
code = 65362
str = ""
Result = false
break = false
engine = POINTER#04BEE690
cmd_pair = Void
line 41 column 0 file x:\work_arachno\src\editor\editor_key_stealer_hook.e
======================================
callback SCRIPTING_API
ptr = POINTER#04BEE690
cb = POINTER#063D1570
args = 3
res = 1
Result = 0
line 481 column 16 file x:\work_arachno\src\scripting\scripting_api.e
======================================
External CECIL call.
======================================
editor__move_lines EDITOR_SCRIPTING
Current = EDITOR_SCRIPTING#05EE17A0
[ sapi = SCRIPTING_API
view = #061A47D0
positions = #05EE1780
stealer_hook = #05EFB500
html_file_dialog = Void
html_file_dialog_modified = false
]
manager = EDITOR_MANAGER#05EE0ED8
[ configuration = #03FD7F00
breakpoints = #05ED0460
folding_settings = #05ED76C0
lang_descriptors = #05EDD360
confirm_edit_while_debugging = true
use_word_wrap = false
are_foldings_visible = true
are_line_numbers_visible = true
are_modified_markers_visible = true
are_bookmarks_visible = true
are_whitespaces_visible = false
is_highlight_current_line = false
is_highlight_search_results = true
is_right_margin_visible = true
are_indent_lines_visible = true
are_line_endings_visible = false
are_end_of_line_dots_visible = false
are_breakpoints_visible = false
commands = #03FD1E70
completition = #0400A8C0
tooltip_buffer = Void
folding_background_task_id = 0
folding_file_observer = #061C6EE0
folding_rand = #05EDDE88
bracket_window = Void
bracket_window_popdown_task = 0
tracked_positions = Void
code_hinting = #05EE2F20
code_hinting_active = false
code_hinting_window_id = 0
code_hinting_file_id = 0
code_hinting_shutdown_task = 0
code_hinting_shutdown_agent = #05ED2690
code_hinting_autostart = true
code_hinting_info_display = 2
code_hinting_display_id = 0
code_hinting_sequence_id = 3
]
engine = POINTER#04BEE690
sobj = POINTER#07526740
Result = 0
first = 486
last = 486
ref = 487
before = false
line 220 column 25 file ΘÖ▓♣
======================================
move_lines (late bind.)
======================================
move_lines EDITOR_BUFFER
Current = EDITOR_BUFFER#03FE91B0
[ modified_lines = #074BD408
modified_lines_string = ""
lines = #074BD348
last_modified_position = #061D6B18
is_read_only_state = false
is_modified = true
is_special_empty = false
is_zombie = false
line_end_coding = 1
loaded_without_last_newline = true
descriptor = #05EE04C0
views = #074BD3A8
transform_position = Void
transform_array = #07552E40
forbid_update_counter = 0
inside_undo_operation = false
debug_position_list = #074BD390
]
pos = EDITOR_POSITION#061B35A0
[ help = "caret"
buffer_line = #03FF0460
offset = 18
data = #075181E0
is_anchored = false
is_transformed = false
next_transformed = Void
next = #044C0270
]
lower = 486
upper = 486
ref = 487
before = false
count = 1
src = 0
dest = 1
i = 2214
j = 2213
low = 486
up = 489
target = 486
skip = false
a = ARRAY[EDITOR_BUFFER_LINE]#075529C0
[ storage = NATIVE_ARRAY[EDITOR_BUFFER_LINE]#03FDA448
capacity = 1
upper = 1
lower = 1
]
set = SET[INTEGER]#075266E0
[ buckets = NATIVE_ARRAY[SET_NODE[INTEGER]]#0755B000
cache_user = -1
cache_node = Void
cache_buckets = 0
count = 2213
capacity = 3079
]
line 1107 column 25 file x:\work_arachno\src\editor\editor_buffer.e
======================================
This means I get a dump of all stack variables and the current (this) object for every frame.
And best of all, I can ship this to the customer because it requires zero configuration on the client side.
Does anyone know how I can do this in C++/C, or even D or CObject (I have to switch the language anyway, and these are my four preferred targets)?
And everyone who says I should deploy GDB will get downvotes.
For Windows, look at minidumps, creatable with MiniDumpWriteDump from dbghelp.dll.
If you're on Linux with glibc, look into the backtrace() function.
I've used it on a PPC system to get stack traces whenever a critical error occurs.
In my implementation I just log the return addresses and use a script on the host platform (based around addr2line) to translate them back into filename/function name/line number.
As stijn pointed out, under Windows a minidump may be sufficient. If it is, it'll undoubtedly be a whole lot easier than almost any alternative.
A minidump won't be very close to what you're getting from Eiffel though. If you need something closer, you'll probably have to write (most of) the code yourself. The Antex Stack Walker would probably be a reasonable start (look for stackwalker.cpp and stackwalker.h).
You might want to look at google-breakpad: http://code.google.com/p/google-breakpad/
It's the project formerly known as Airbag:
http://benjamin.smedbergs.us/blog/2006-09-12/deploying-the-airbag/
https://wiki.mozilla.org/Airbag
I'll admit that I haven't used it (my needs for something like this have been Windows-only), but it appears to have Mozilla behind it and seems to be pretty active.
It's a set of libraries and tools to automatically send crash dumps to a server so you can do post-mortems. As I understand it, Firefox currently uses it, so I imagine it's pretty cross-platform. The downside is that it's not self-contained in the client program: you need some infrastructure in place for it to be useful. That might or might not be workable for your situation.
Though if you don't want a service that receives the reports (maybe you just want them on the client, or have them sent by email), I imagine you could have the client-side reporter send its information to a 'server' program that also lives on the client machine and does the reporting. No idea how much work it would be to get it to function like that.