Embeddable Debugger for C++

It does not need to be a full debugger, but I need to get a good stack trace dump when an assert is raised. A simple list of called functions is not enough. I'm pretty happy with my Eiffel system, which gives me something like:
17 frames in current stack.
===== Displaying only top 10 frames in run-time stack =====
agent call wrapper 2
======================================
lookup_key USER_COMMAND_INVOCATION
Current = USER_COMMAND_INVOCATION#03FE5D00
[ arachno_window = #061A4AF0
window = #061A4AF0
mrec = Void
trigger = "if"
param_int = 0
param_str = Void
skip_next_keystroke = false
in_continuation_key = false
current_hook = #05EFB500
update_default_command = #05EF3578
default_command = "Editor.Modifications#cInsert Typed Text"
last_invoked_command = "Editor.Modifications#cTab Character"
show_popup_menu_action = #061B3C30
build_menu = Void
menu_line = 0
menu_offset = 0
modifiers = #03FDCAC8
code = #03FDCB88
character = #03FDCB58
count = 0
prefix_count = -1
selected_commands = #03FDCA98
cached_file_types = #03FDCA68
cached_file_context = #03FDCA38
cached_window_type_name = Void
project_context_action = #061B3C80
program_context_action = #061B3C58
window_context_action = #061B3CA8
cached_window_name_valid = false
cached_file_type_valid = false
cached_file_context_valid = false
reset_prefix_on_next_event = false
]
before = true
mod = BIT_32
[ bits = 0
]
c = 65362
str = ""
menu = Void
map = Void
key = Void
b = Void
i = 0
n = 0
m = 0
cval = 65362
char = 0
cnt = 0
s = Void
skip_flag = false
prefix_found_flag = false
ev = AGUI_EVENT_CONSTANTS
t = Void
consume = false
line 387 column 13 file x:\work_arachno\src\run\user_command_invocation.e
======================================
on_key_hook (late bind.)
======================================
on_key_hook EDITOR_KEY_STEALER_HOOK
Current = EDITOR_KEY_STEALER_HOOK#05EFB500
[ callback = POINTER#063D1570
sapi = SCRIPTING_API
]
mod = BIT_32
[ bits = 0
]
code = 65362
str = ""
Result = false
break = false
engine = POINTER#04BEE690
cmd_pair = Void
line 41 column 0 file x:\work_arachno\src\editor\editor_key_stealer_hook.e
======================================
callback SCRIPTING_API
ptr = POINTER#04BEE690
cb = POINTER#063D1570
args = 3
res = 1
Result = 0
line 481 column 16 file x:\work_arachno\src\scripting\scripting_api.e
======================================
External CECIL call.
======================================
editor__move_lines EDITOR_SCRIPTING
Current = EDITOR_SCRIPTING#05EE17A0
[ sapi = SCRIPTING_API
view = #061A47D0
positions = #05EE1780
stealer_hook = #05EFB500
html_file_dialog = Void
html_file_dialog_modified = false
]
manager = EDITOR_MANAGER#05EE0ED8
[ configuration = #03FD7F00
breakpoints = #05ED0460
folding_settings = #05ED76C0
lang_descriptors = #05EDD360
confirm_edit_while_debugging = true
use_word_wrap = false
are_foldings_visible = true
are_line_numbers_visible = true
are_modified_markers_visible = true
are_bookmarks_visible = true
are_whitespaces_visible = false
is_highlight_current_line = false
is_highlight_search_results = true
is_right_margin_visible = true
are_indent_lines_visible = true
are_line_endings_visible = false
are_end_of_line_dots_visible = false
are_breakpoints_visible = false
commands = #03FD1E70
completition = #0400A8C0
tooltip_buffer = Void
folding_background_task_id = 0
folding_file_observer = #061C6EE0
folding_rand = #05EDDE88
bracket_window = Void
bracket_window_popdown_task = 0
tracked_positions = Void
code_hinting = #05EE2F20
code_hinting_active = false
code_hinting_window_id = 0
code_hinting_file_id = 0
code_hinting_shutdown_task = 0
code_hinting_shutdown_agent = #05ED2690
code_hinting_autostart = true
code_hinting_info_display = 2
code_hinting_display_id = 0
code_hinting_sequence_id = 3
]
engine = POINTER#04BEE690
sobj = POINTER#07526740
Result = 0
first = 486
last = 486
ref = 487
before = false
line 220 column 25 file ΘÖ▓♣
======================================
move_lines (late bind.)
======================================
move_lines EDITOR_BUFFER
Current = EDITOR_BUFFER#03FE91B0
[ modified_lines = #074BD408
modified_lines_string = ""
lines = #074BD348
last_modified_position = #061D6B18
is_read_only_state = false
is_modified = true
is_special_empty = false
is_zombie = false
line_end_coding = 1
loaded_without_last_newline = true
descriptor = #05EE04C0
views = #074BD3A8
transform_position = Void
transform_array = #07552E40
forbid_update_counter = 0
inside_undo_operation = false
debug_position_list = #074BD390
]
pos = EDITOR_POSITION#061B35A0
[ help = "caret"
buffer_line = #03FF0460
offset = 18
data = #075181E0
is_anchored = false
is_transformed = false
next_transformed = Void
next = #044C0270
]
lower = 486
upper = 486
ref = 487
before = false
count = 1
src = 0
dest = 1
i = 2214
j = 2213
low = 486
up = 489
target = 486
skip = false
a = ARRAY[EDITOR_BUFFER_LINE]#075529C0
[ storage = NATIVE_ARRAY[EDITOR_BUFFER_LINE]#03FDA448
capacity = 1
upper = 1
lower = 1
]
set = SET[INTEGER]#075266E0
[ buckets = NATIVE_ARRAY[SET_NODE[INTEGER]]#0755B000
cache_user = -1
cache_node = Void
cache_buckets = 0
count = 2213
capacity = 3079
]
line 1107 column 25 file x:\work_arachno\src\editor\editor_buffer.e
======================================
This means I get a dump of all stack variables and the current (this) object.
And the best part is that I can ship this to the customer, because it requires zero configuration on the client side.
Does anyone know how I can do this in C++/C, or even D or CObject (I have to switch the language anyway, and these are my four preferred targets)?
And everyone who says I should deploy GDB will get downvotes.

For Windows, look at minidumps, creatable with MiniDumpWriteDump() from dbghelp.dll.
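As a minimal, untested sketch of that approach (MiniDumpWriteDump and the MINIDUMP_* types are the real dbghelp API; the filter function name and the dump file name are mine):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

static LONG WINAPI writeMiniDump(EXCEPTION_POINTERS *info)
{
    // open the dump file and hand the exception context to dbghelp
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        // MiniDumpNormal is small; MiniDumpWithDataSegs or
        // MiniDumpWithFullMemory capture more variable state
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

// install once at startup:
// SetUnhandledExceptionFilter(writeMiniDump);

The resulting .dmp file can then be opened in Visual Studio or WinDbg together with the matching .pdb files to recover the full stack.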

If you're on Linux with glibc, look into the backtrace() function.
I've used it on a PPC system to get stack traces whenever a critical error occurs.
In my implementation I just log the return addresses and use a script on the host platform (based around addr2line) to translate them back into filename/function name/line number.
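A minimal sketch of that (glibc-specific; dump_stack is a name I made up; link with -rdynamic if you want backtrace_symbols to resolve function names):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

void dump_stack(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);               // fill frames[] with return addresses
    char **names = backtrace_symbols(frames, n); // best-effort symbolization
    for (int i = 0; i < n; i++)
        fprintf(stderr, "%2d: %s\n", i, names ? names[i] : "?");
    free(names);
    // alternatively just log frames[i] as hex and run addr2line -e ./yourbinary offline
}

Call it from your assert handler, or from a signal handler for SIGSEGV/SIGABRT (with the usual caveats about what is safe to do inside a signal handler).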

As stijn pointed out, under Windows a minidump may be sufficient. If it is, it'll undoubtedly be a whole lot easier than almost any alternative.
A minidump won't be very close to what you're getting from Eiffel, though. If you need something closer, you'll probably have to write (most of) the code yourself. The Antex Stack Walker would probably be a reasonable start (look for stackwalker.cpp and stackwalker.h).
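If you do roll your own on Windows, the core pieces are CaptureStackBackTrace plus the dbghelp symbol APIs. A hedged sketch (these are real APIs, but error handling is omitted and print_stack is my name):

#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>
#pragma comment(lib, "dbghelp.lib")

void print_stack(void)
{
    HANDLE proc = GetCurrentProcess();
    SymInitialize(proc, NULL, TRUE);             // load symbols for all modules

    void *frames[62];
    USHORT n = CaptureStackBackTrace(0, 62, frames, NULL);

    char buf[sizeof(SYMBOL_INFO) + 256];         // SYMBOL_INFO has a trailing name buffer
    SYMBOL_INFO *sym = (SYMBOL_INFO *)buf;
    sym->SizeOfStruct = sizeof(SYMBOL_INFO);
    sym->MaxNameLen = 255;

    for (USHORT i = 0; i < n; i++)
    {
        if (SymFromAddr(proc, (DWORD64)frames[i], NULL, sym))
            printf("%2u: %s\n", i, sym->Name);
        else
            printf("%2u: %p\n", i, frames[i]);
    }
    SymCleanup(proc);
}

Getting local variables and object contents like the Eiffel dump above still needs type information (SymGetTypeInfo and friends), which is where most of the work in a full stack walker goes.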

You might want to look at google-breakpad: http://code.google.com/p/google-breakpad/
It's the project formerly known as Airbag:
http://benjamin.smedbergs.us/blog/2006-09-12/deploying-the-airbag/
https://wiki.mozilla.org/Airbag
I'll admit that I haven't used it (my needs for something like this have been Windows-only), but it appears to have Mozilla behind it and seems to be pretty active.
It's a set of libraries and tools to automatically send crash dumps to a server so you can do post-mortems. As I understand it, Firefox is currently using it, so I imagine it's pretty cross-platform. The downside is that it's not self-contained in the client program: you need some infrastructure in place for it to be useful. That might or might not be workable for your situation.
Though if you don't want a service that receives the reports (maybe you just want them on the client, or want them sent by email), I imagine you could have the client reporter send its information to a 'server' program that also exists on the client machine to do the reporting. No idea how much work it would be to get it to function like that.

Related

Rsyslog unable to send multiline logs

I'm unable to push the below logs via rsyslog; it is only forwarding one line of the log.
Kafka-server logs:
[2022-07-25 11:43:45,091] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = INTERNAL://0.0.0.0:9092,BROKER://0.0.0.0:9091,CLIENT://0.0.0.0:9093
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = BROKER
inter.broker.protocol.version = 2.3-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = INTERNAL:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:PLAINTEXT
listeners = INTERNAL://:9092,BROKER://:9091,CLIENT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.3-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 120
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = 0.0.0.0:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2022-07-25 11:43:45,145] ERROR Fatal error during SupportedServerStartable startup. Prepare to shutdown (io.confluent.support.metrics.SupportedKafka)
java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1492)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1460)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1114)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1094)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1091)
at kafka.server.KafkaConfig.fromProps(KafkaConfig.scala)
at io.confluent.support.metrics.SupportedServerStartable.<init>(SupportedServerStartable.java:52)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)
rsysconf.d/10kafka.conf
$InputFilePollInterval 1
input(type="imfile"
File="/var/log/kafka/server.log"
Tag="app-error"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}"
)
*.* #Fluentdvmip:5142
Can someone please guide me on how to send the complete logs from rsyslog to Fluentd?
Below is the regex which I'll be using in the Fluentd configuration:
https://regex101.com/r/NaNVcr/1
Or do we need to modify the Kafka log4j properties to get proper logging?
Instead of sending the Kafka server logs to rsyslog as they were, I first converted the Kafka logs into JSON format.
I created a jar using this link.
Steps followed:
Cloned the above repository in the VM where Kafka was installed
Installed the latest Maven version
Ran mvn package inside the freshly cloned directory
A jar was created inside the target directory: "log4j-json-layout-1.0-SNAPSHOT.jar"
Copied this jar to /usr/share/java/kafka/
Added the below code to /etc/kafka/log4j.properties and commented out the older code
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=de.thmshmm.log4j.JsonLayout
log4j.appender.kafkaAppender.layout.DatePattern=yyyy-MM-dd HH:mm:ss
Restarted the confluent-kafka service
Before and after logs: (screenshots in the original post)

GLPK output formats

I am new to GLPK, so my apologies in advance if I'm missing something simple!
I have a largeish LP that I am feeding through GLPK to model an energy market. I'm running the following command line to have GLPK process it:
winglpk-4.65\glpk-4.65\w64\glpsol --lp problem.lp --data ExampleDataFile.dat --output results2.txt
When I open the resulting text file I can see the outputs, which all look sensible. I have one big problem: each record is split over two rows, making it very difficult to clean the file. See an extract below:
No. Row name St Activity Lower bound Upper bound Marginal
------ ------------ -- ------------- ------------- ------------- -------------
1 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1990)_
NS 0 0 = < eps
2 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1991)_
NS 0 0 = < eps
3 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1992)_
NS 0 0 = < eps
4 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1993)_
NS 0 0 = < eps
5 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1994)_
NS 0 0 = < eps
6 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1995)_
NS 0 0 = < eps
7 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1996)_
NS 0 0 = < eps
8 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1997)_
NS 0 0 = < eps
9 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1998)_
NS 0 0 = < eps
10 c_e_SpecifiedDemand(UTOPIA_CSV_ID_1999)_
NS 0 0 = < eps
11 c_e_SpecifiedDemand(UTOPIA_CSV_ID_2000)_
NS 0 0 = < eps
12 c_e_SpecifiedDemand(UTOPIA_CSV_ID_2001)_
NS 0 0 = < eps
13 c_e_SpecifiedDemand(UTOPIA_CSV_ID_2002)_
NS 0 0 = < eps
14 c_e_SpecifiedDemand(UTOPIA_CSV_ID_2003)_
NS 0 0 = < eps
15 c_e_SpecifiedDemand(UTOPIA_CSV_ID_2004)_
NS 0 0 = < eps
I would be very grateful for any suggestions for either:
How I can get each record in the output text file onto a single row, or
Ideas on how to clean / post-process the existing text file output.
I'm sure I'm missing something simple here, but the output is in a very unhelpful format at the moment!
Thanks!
I wrote a Python parser for the GLPK output file. It is not beautiful and not safe (just bare try/except), but it is working (for pure simplex problems).
You can call it on an output file:
outp = GLPKOutput('myoutputfile')
print(outp)
val1 = outp.getCol('mycolvar','Activity')
val2 = outp.getRow('myrowname','Upper_bound') # row names should be defined
The class is as follows:
import re   # needed by the class
import sys  # needed by the class

class GLPKOutput:
    def __init__(self, filename):
        self.rows = {}
        self.columns = {}
        self.nRows = 0
        self.nCols = 0
        self.nNonZeros = 0
        self.Status = ""
        self.Objective = ""
        self.rowHeaders = []
        self.rowIdx = {}
        self.rowWidth = []
        self.Rows = []
        self.hRows = {}
        self.colHeaders = []
        self.colIdx = {}
        self.colWidth = []
        self.Cols = []
        self.hCols = {}
        self.wcols = ['Activity', 'Lower_bound', 'Upper_bound', 'Marginal']
        self.readFile(filename)

    # split columns with weird line break
    def smartSplit(self, line, type, job):
        ret = []
        line = line.rstrip()
        if type == 'ROWS':
            cols = len(self.rowHeaders)
            idx = self.rowWidth
        else:
            cols = len(self.colHeaders)
            idx = self.colWidth
        if job == 'full':
            start = 0
            for i in range(cols):
                stop = start + idx[i] + 1
                ret.append(line[start:stop].strip())
                start = stop
        elif job == 'part1':
            entries = line.split()
            ret = entries[0:2]
        elif job == 'part2':
            start = 0
            for i in range(cols):
                stop = start + idx[i] + 1
                ret.append(line[start:stop].strip())
                start = stop
            ret = ret[2:]
        # print()
        # print("SMART:", job, line.strip())
        # print("   TO:", ret)
        return ret

    def readFile(self, filename):
        fp = open(filename, "r")
        lines = fp.readlines()
        fp.close()
        i = 0
        pos = "HEAD"
        while pos == 'HEAD' and i < len(lines):
            entries = lines[i].split()
            if len(entries) > 0:
                if entries[0] == 'Rows:':
                    self.nRows = int(entries[1])
                elif entries[0] == 'Columns:':
                    self.nCols = int(entries[1])
                elif entries[0] == 'Non-zeros:':
                    self.nNonZeros = int(entries[1])
                elif entries[0] == 'Status:':
                    self.Status = entries[1]
                elif entries[0] == 'Objective:':
                    self.Objective = float(entries[3])  # ' '.join(entries[1:])
                elif re.search('Row name', lines[i]):
                    lines[i] = lines[i].replace('Row name', 'Row_name')
                    lines[i] = lines[i].replace('Lower bound', 'Lower_bound')
                    lines[i] = lines[i].replace('Upper bound', 'Upper_bound')
                    entries = lines[i].split()
                    pos = 'ROWS'
                    self.rowHeaders = entries
                else:
                    pass
            i += 1
        # formatting of row width
        self.rowWidth = lines[i].split()
        for k in range(len(self.rowWidth)):
            self.rowWidth[k] = len(self.rowWidth[k])
        # print("Row Widths:", self.rowWidth)
        i += 1
        READY = False
        FOUND = False
        while pos == 'ROWS' and i < len(lines):
            if re.match(r'^\s*[0-9]+', lines[i]):  # new record
                if len(lines[i].split()) > 2:  # no line break
                    entries = self.smartSplit(lines[i], pos, 'full')
                    READY = True
                else:  # line break
                    entries = self.smartSplit(lines[i], pos, 'part1')
                    READY = False
                    FOUND = True
            else:
                if FOUND and not READY:  # second part of record
                    entries += self.smartSplit(lines[i], pos, 'part2')
                    READY = True
                    FOUND = False
            if READY:
                READY = False
                FOUND = False
                # print("ROW:", entries)
                if re.match('[0-9]+', entries[0]):  # valid line with solution data
                    self.Rows.append(entries)
                    self.hRows[entries[1]] = len(self.Rows) - 1
                else:
                    print("wrong line format ...")
                    print(entries)
                    sys.exit()
            elif re.search('Column name', lines[i]):
                lines[i] = lines[i].replace('Column name', 'Column_name')
                lines[i] = lines[i].replace('Lower bound', 'Lower_bound')
                lines[i] = lines[i].replace('Upper bound', 'Upper_bound')
                entries = lines[i].split()
                pos = 'COLS'
                self.colHeaders = entries
            else:
                pass  # print("NOTHING: ", lines[i])
            i += 1
        # formatting of column width
        self.colWidth = lines[i].split()
        for k in range(len(self.colWidth)):
            self.colWidth[k] = len(self.colWidth[k])
        # print("Col Widths:", self.colWidth)
        i += 1
        READY = False
        FOUND = False
        while pos == 'COLS' and i < len(lines):
            if re.match(r'^\s*[0-9]+', lines[i]):  # new record
                if len(lines[i].split()) > 2:  # no line break
                    entries = self.smartSplit(lines[i], pos, 'full')
                    READY = True
                else:  # line break
                    entries = self.smartSplit(lines[i], pos, 'part1')
                    READY = False
                    FOUND = True
            else:
                if FOUND and not READY:  # second part of record
                    entries += self.smartSplit(lines[i], pos, 'part2')
                    READY = True
                    FOUND = False
            if READY:
                READY = False
                FOUND = False
                # print("COL:", entries)
                if re.match('[0-9]+', entries[0]):  # valid line with solution data
                    self.Cols.append(entries)
                    self.hCols[entries[1]] = len(self.Cols) - 1
                else:
                    print("wrong line format ...")
                    print(entries)
                    sys.exit()
            elif re.search('Karush-Kuhn-Tucker', lines[i]):
                pos = 'TAIL'
            else:
                pass  # print("NOTHING: ", lines[i])
            i += 1
        for i, e in enumerate(self.rowHeaders):
            self.rowIdx[e] = i
        for i, e in enumerate(self.colHeaders):
            self.colIdx[e] = i

    def getRow(self, name, attr):
        if name in self.hRows:
            if attr in self.rowIdx:
                try:
                    val = float(self.Rows[self.hRows[name]][self.rowIdx[attr]])
                except:
                    val = self.Rows[self.hRows[name]][self.rowIdx[attr]]
                return val
        else:
            return -1

    def getCol(self, name, attr):
        if name in self.hCols:
            if attr in self.colIdx:
                try:
                    val = float(self.Cols[self.hCols[name]][self.colIdx[attr]])
                except:
                    val = self.Cols[self.hCols[name]][self.colIdx[attr]]
                return val
        else:
            print("key error:", name, "not known ...")
            return -1

    def __str__(self):
        retString = '\n' + "=" * 80 + '\nSOLUTION\n'
        retString += "nRows:     " + str(self.nRows) + '/' + str(len(self.Rows)) + '\n'
        retString += "nCols:     " + str(self.nCols) + '/' + str(len(self.Cols)) + '\n'
        retString += "nNonZeros: " + str(self.nNonZeros) + '\n'
        retString += "Status:    " + str(self.Status) + '\n'
        retString += "Objective: " + str(self.Objective) + '\n\n'
        retString += ' '.join(self.rowHeaders) + '\n'
        for r in self.Rows:
            retString += ' # '.join(r) + ' #\n'
        retString += '\n'
        retString += ' '.join(self.colHeaders) + '\n'
        for c in self.Cols:
            retString += ' # '.join(c) + ' #\n'  # fixed: was joining r here
        return retString

How to get the right demand in the newsvendor model?

So what I'm trying to accomplish is the newsvendor problem, where the program is supposed to run and give me the demand that gives me the best chance of turning a profit, as per this link.
But the issue is that when I run the below code, it gives me the right demand along with the demand above it.
q = {5: 0.2, 6: 0.25, 7: 0.3, 8: .25}
w = 55
p = 80
s = 40
cul5 = 0.2
cul6 = 0.25
cul7 = 0.3
cul8 = .25
overage = w - s
underage = p - w
crit = overage / float((underage) + (overage))  # it's better to use floats within the parentheses in python 2
cumul_q = {}
cumulativevalue = 0
for key, value in sorted(q.iteritems()):
    cumulativevalue = cumulativevalue + value
    # print key, value
    cumul_q[key] = cumulativevalue
# print cumul_q
previous_key = None
for key, value in sorted(cumul_q.iteritems(), reverse=True):
    cumulprob = 1 - value
    cumulprob1 = float(cumulprob)
    if crit < cumulprob1:
        continue
    elif crit > cumulprob1:
        print key
        previous_key = key
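For reference, here is the critical-ratio calculation worked through with the numbers above, assuming the standard newsvendor formulation:

overage cost Co = w - s = 55 - 40 = 15
underage cost Cu = p - w = 80 - 55 = 25
critical ratio = Cu / (Cu + Co) = 25 / 40 = 0.625

The optimal order quantity is the smallest q whose cumulative probability reaches 0.625: F(5) = 0.2, F(6) = 0.45, F(7) = 0.75, so q = 7. The code uses the equivalent test crit = Co / (Cu + Co) = 0.375 against the complementary probability 1 - F(q), and while scanning downward it prints every key with 1 - F(q) below 0.375, i.e. both 8 and 7; the answer is the last key printed (7), so a break (or reading previous_key after the loop) is needed to report only that one.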

Finding whether a number is of the form P^Q or not?

I recently appeared for an online coding test and got stuck on one question:
Given a number N, determine whether it is of the form P^Q (P to the power of Q) or not. I solved it using a brute-force method (checking each candidate individually), but that resulted in a timeout, so I need an efficient algorithm.
Input: 9
Output: yes
Input: 125
Output: yes
Input: 27
Output: yes
Constraints: 2<N<100000
If we assume non-trivial cases, then the constraints would be something like this:
N = <2,100000)
P > 1
Q > 1
This can be solved by a sieve that marks all powers bigger than 1 up to N. Now the question is: do you need to optimize a single query or many of them? If you need just a single query, then you do not need the sieve table in memory; you just iterate until you hit N and then stop (so in the worst case, when N is not of the form P^Q, this would compute the whole sieve anyway). Otherwise init such a table once and then just use it. As N is small, I go for the full table.
#include <math.h>

const int n=100000;
int sieve[n]={255}; // for simplicity 1 int/number; it is a waste of space, could use 1 bit per number instead

int powers(int x)
    {
    // init sieve table if not already inited
    if (sieve[0]==255)
        {
        int i,p;
        for (i=0;i<n;i++) sieve[i]=0;        // clear sieve
        for (p=int(sqrt(double(n)));p>1;p--) // process all non trivial P
         for (i=p*p;i<n;i*=p)                // go through whole table
          sieve[i]=p;                        // store P so it can be easily found later (if using 1 bit/number then just set the bit instead)
        }
    return sieve[x];
    }
The first call took 0.548 ms on my setup; the other calls take unmeasurably small time.
It returns the P, so if P != 0 the number is of the form P^Q and you can use the result as a bool directly. You can also easily get Q by dividing (see the sketch below), or you can create another sieve holding Q to be even faster if you need both P and Q.
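As a minimal sketch of the dividing idea (assuming the powers() function and sieve from above; exponent_of is my name for it):

int exponent_of(int x)
    {
    int p=powers(x);        // P from the sieve, 0 if x is not of the form P^Q
    if (p==0) return 0;
    int q=0;
    for (;x>1;x/=p) q++;    // x is an exact power of p, so integer division is exact
    return q;               // x was p^q with q >= 2
    }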
Here are all the non-trivial powers found for N < 100000:
4 = 2^q
8 = 2^q
9 = 3^q
16 = 2^q
25 = 5^q
27 = 3^q
32 = 2^q
36 = 6^q
49 = 7^q
64 = 2^q
81 = 3^q
100 = 10^q
121 = 11^q
125 = 5^q
128 = 2^q
144 = 12^q
169 = 13^q
196 = 14^q
216 = 6^q
225 = 15^q
243 = 3^q
256 = 2^q
289 = 17^q
324 = 18^q
343 = 7^q
361 = 19^q
400 = 20^q
441 = 21^q
484 = 22^q
512 = 2^q
529 = 23^q
576 = 24^q
625 = 5^q
676 = 26^q
729 = 3^q
784 = 28^q
841 = 29^q
900 = 30^q
961 = 31^q
1000 = 10^q
1024 = 2^q
1089 = 33^q
1156 = 34^q
1225 = 35^q
1296 = 6^q
1331 = 11^q
1369 = 37^q
1444 = 38^q
1521 = 39^q
1600 = 40^q
1681 = 41^q
1728 = 12^q
1764 = 42^q
1849 = 43^q
1936 = 44^q
2025 = 45^q
2048 = 2^q
2116 = 46^q
2187 = 3^q
2197 = 13^q
2209 = 47^q
2304 = 48^q
2401 = 7^q
2500 = 50^q
2601 = 51^q
2704 = 52^q
2744 = 14^q
2809 = 53^q
2916 = 54^q
3025 = 55^q
3125 = 5^q
3136 = 56^q
3249 = 57^q
3364 = 58^q
3375 = 15^q
3481 = 59^q
3600 = 60^q
3721 = 61^q
3844 = 62^q
3969 = 63^q
4096 = 2^q
4225 = 65^q
4356 = 66^q
4489 = 67^q
4624 = 68^q
4761 = 69^q
4900 = 70^q
4913 = 17^q
5041 = 71^q
5184 = 72^q
5329 = 73^q
5476 = 74^q
5625 = 75^q
5776 = 76^q
5832 = 18^q
5929 = 77^q
6084 = 78^q
6241 = 79^q
6400 = 80^q
6561 = 3^q
6724 = 82^q
6859 = 19^q
6889 = 83^q
7056 = 84^q
7225 = 85^q
7396 = 86^q
7569 = 87^q
7744 = 88^q
7776 = 6^q
7921 = 89^q
8000 = 20^q
8100 = 90^q
8192 = 2^q
8281 = 91^q
8464 = 92^q
8649 = 93^q
8836 = 94^q
9025 = 95^q
9216 = 96^q
9261 = 21^q
9409 = 97^q
9604 = 98^q
9801 = 99^q
10000 = 10^q
10201 = 101^q
10404 = 102^q
10609 = 103^q
10648 = 22^q
10816 = 104^q
11025 = 105^q
11236 = 106^q
11449 = 107^q
11664 = 108^q
11881 = 109^q
12100 = 110^q
12167 = 23^q
12321 = 111^q
12544 = 112^q
12769 = 113^q
12996 = 114^q
13225 = 115^q
13456 = 116^q
13689 = 117^q
13824 = 24^q
13924 = 118^q
14161 = 119^q
14400 = 120^q
14641 = 11^q
14884 = 122^q
15129 = 123^q
15376 = 124^q
15625 = 5^q
15876 = 126^q
16129 = 127^q
16384 = 2^q
16641 = 129^q
16807 = 7^q
16900 = 130^q
17161 = 131^q
17424 = 132^q
17576 = 26^q
17689 = 133^q
17956 = 134^q
18225 = 135^q
18496 = 136^q
18769 = 137^q
19044 = 138^q
19321 = 139^q
19600 = 140^q
19683 = 3^q
19881 = 141^q
20164 = 142^q
20449 = 143^q
20736 = 12^q
21025 = 145^q
21316 = 146^q
21609 = 147^q
21904 = 148^q
21952 = 28^q
22201 = 149^q
22500 = 150^q
22801 = 151^q
23104 = 152^q
23409 = 153^q
23716 = 154^q
24025 = 155^q
24336 = 156^q
24389 = 29^q
24649 = 157^q
24964 = 158^q
25281 = 159^q
25600 = 160^q
25921 = 161^q
26244 = 162^q
26569 = 163^q
26896 = 164^q
27000 = 30^q
27225 = 165^q
27556 = 166^q
27889 = 167^q
28224 = 168^q
28561 = 13^q
28900 = 170^q
29241 = 171^q
29584 = 172^q
29791 = 31^q
29929 = 173^q
30276 = 174^q
30625 = 175^q
30976 = 176^q
31329 = 177^q
31684 = 178^q
32041 = 179^q
32400 = 180^q
32761 = 181^q
32768 = 2^q
33124 = 182^q
33489 = 183^q
33856 = 184^q
34225 = 185^q
34596 = 186^q
34969 = 187^q
35344 = 188^q
35721 = 189^q
35937 = 33^q
36100 = 190^q
36481 = 191^q
36864 = 192^q
37249 = 193^q
37636 = 194^q
38025 = 195^q
38416 = 14^q
38809 = 197^q
39204 = 198^q
39304 = 34^q
39601 = 199^q
40000 = 200^q
40401 = 201^q
40804 = 202^q
41209 = 203^q
41616 = 204^q
42025 = 205^q
42436 = 206^q
42849 = 207^q
42875 = 35^q
43264 = 208^q
43681 = 209^q
44100 = 210^q
44521 = 211^q
44944 = 212^q
45369 = 213^q
45796 = 214^q
46225 = 215^q
46656 = 6^q
47089 = 217^q
47524 = 218^q
47961 = 219^q
48400 = 220^q
48841 = 221^q
49284 = 222^q
49729 = 223^q
50176 = 224^q
50625 = 15^q
50653 = 37^q
51076 = 226^q
51529 = 227^q
51984 = 228^q
52441 = 229^q
52900 = 230^q
53361 = 231^q
53824 = 232^q
54289 = 233^q
54756 = 234^q
54872 = 38^q
55225 = 235^q
55696 = 236^q
56169 = 237^q
56644 = 238^q
57121 = 239^q
57600 = 240^q
58081 = 241^q
58564 = 242^q
59049 = 3^q
59319 = 39^q
59536 = 244^q
60025 = 245^q
60516 = 246^q
61009 = 247^q
61504 = 248^q
62001 = 249^q
62500 = 250^q
63001 = 251^q
63504 = 252^q
64000 = 40^q
64009 = 253^q
64516 = 254^q
65025 = 255^q
65536 = 2^q
66049 = 257^q
66564 = 258^q
67081 = 259^q
67600 = 260^q
68121 = 261^q
68644 = 262^q
68921 = 41^q
69169 = 263^q
69696 = 264^q
70225 = 265^q
70756 = 266^q
71289 = 267^q
71824 = 268^q
72361 = 269^q
72900 = 270^q
73441 = 271^q
73984 = 272^q
74088 = 42^q
74529 = 273^q
75076 = 274^q
75625 = 275^q
76176 = 276^q
76729 = 277^q
77284 = 278^q
77841 = 279^q
78125 = 5^q
78400 = 280^q
78961 = 281^q
79507 = 43^q
79524 = 282^q
80089 = 283^q
80656 = 284^q
81225 = 285^q
81796 = 286^q
82369 = 287^q
82944 = 288^q
83521 = 17^q
84100 = 290^q
84681 = 291^q
85184 = 44^q
85264 = 292^q
85849 = 293^q
86436 = 294^q
87025 = 295^q
87616 = 296^q
88209 = 297^q
88804 = 298^q
89401 = 299^q
90000 = 300^q
90601 = 301^q
91125 = 45^q
91204 = 302^q
91809 = 303^q
92416 = 304^q
93025 = 305^q
93636 = 306^q
94249 = 307^q
94864 = 308^q
95481 = 309^q
96100 = 310^q
96721 = 311^q
97336 = 46^q
97344 = 312^q
97969 = 313^q
98596 = 314^q
99225 = 315^q
99856 = 316^q
It took 62.6 ms including the first init call (and the string output to a memo, which is much slower than the computation itself); without the string output it took just 1.25 ms.

Set a collision event listener for all images in a list

I have a section that randomly spawns images from a list, places them on screen, and despawns them continuously. The spawned images are put into a list called object. I am trying to make it so that each image spawned on screen has collision detection with the user sprite; however, at the moment it only detects the very first raft that is spawned.
isOnRaft = 0

--Set log position and movement
local mRandom = math.random
local raft = {"Raft1", "Raft2"}
local objectTag = 0
local object = {}

function spawnlogright()
    objectTag = objectTag + 1
    local objIdx = mRandom(#raft)
    local objName = raft[objIdx]
    object[objectTag] = display.newImage(objName..".png")
    object[objectTag].x = 416
    object[objectTag].y = 72
    object[objectTag].name = objectTag
    transition.to(object[objectTag], {time = 10000, x = -96, onComplete = function(obj) obj:removeSelf(); obj = nil; end})
    physics.addBody(object[objectTag], "static", {isSensor = true})
end
spawnlogright()
timer.performWithDelay(3500, spawnlogright, 0)

function spawnlogright()
    objectTag = objectTag + 1
    local objIdx = mRandom(#raft)
    local objName = raft[objIdx]
    object[objectTag] = display.newImage(objName..".png")
    object[objectTag].x = 416
    object[objectTag].y = 168
    object[objectTag].name = objectTag
    transition.to(object[objectTag], {time = 10000, x = -96, onComplete = function(obj) obj:removeSelf(); obj = nil; end})
    physics.addBody(object[objectTag], "static", {isSensor = true})
end
spawnlogright()
timer.performWithDelay(3500, spawnlogright, 0)

function spawnlogleft()
    objectTag = objectTag + 1
    local objIdx = mRandom(#raft)
    local objName = raft[objIdx]
    object[objectTag] = display.newImage(objName..".png")
    object[objectTag].x = -96
    object[objectTag].y = 120
    object[objectTag].name = objectTag
    transition.to(object[objectTag], {time = 10000, x = 416, onComplete = function(obj) obj:removeSelf(); obj = nil; end})
    physics.addBody(object[objectTag], "static", {isSensor = true})
end
spawnlogleft()
timer.performWithDelay(3500, spawnlogleft, 0)

function spawnlogleft()
    objectTag = objectTag + 1
    local objIdx = mRandom(#raft)
    local objName = raft[objIdx]
    object[objectTag] = display.newImage(objName..".png")
    object[objectTag].x = -96
    object[objectTag].y = 216
    object[objectTag].name = objectTag
    transition.to(object[objectTag], {time = 10000, x = 416, onComplete = function(obj) obj:removeSelf(); obj = nil; end})
    physics.addBody(object[objectTag], "static", {isSensor = true})
end
spawnlogleft()
timer.performWithDelay(3500, spawnlogleft, 0)

--while the frog is on the log...
function raftCollide(event)
    if (event.phase == "began") then
        isOnRaft = isOnRaft + 1
        print(isOnRaft)
    elseif (event.phase == "ended") then
        isOnRaft = isOnRaft - 1
        print(isOnRaft)
    end
end
--add event for 'walking on the log'
object[objectTag]:addEventListener("collision", raftCollide)
So I need the user sprite to detect all of the rafts that are spawned and add 1 to isOnRaft; this will negate the water-death function. Is there a way to add the collision detection to all of the rafts, or to all of the entities within the object list?
Any help would be appreciated, thanks.
Just replace the last line with:
for logTag, logObject in pairs(object) do
    logObject:addEventListener("collision", raftCollide)
end
Also, just a piece of advice, but do what you like with it: try not to declare two functions with the same name in the same scope. You should give the second spawnlogright / spawnlogleft functions different names :)