Getting different INCAR files for 'mp-22590' using MPRester.query() within pymatgen and from the Materials Project website

I am getting different INCAR files for ID 'mp-22590' using MPRester.query() within pymatgen and from the Materials Project website.
Using MPRester.query():
from pymatgen.ext.matproj import MPRester

mpr = MPRester('My_API_key')
data = mpr.query('mp-22590', ['input.incar'])[0]
The INCAR I get is:
ALGO = Normal
AMIX = 0.2
AMIX_MAG = 0.8
BMIX = 0.001
BMIX_MAG = 0.001
ENCUT = 520
IBRION = 1
ICHARG = 1
ISIF = 3
ISMEAR = 1
ISPIN = 2
LDAU = True
LDAUJ = 0 0 0
LDAUL = 0 2 0
LDAUTYPE = 2
LDAUU = 0 5.3 0
LREAL = Auto
LWAVE = True
MAGMOM = 12 * 0.6 4 * 5.0 4 * 0.6
NELM = 100
NELMDL = 6
NELMIN = 3
NPAR = 1
NSW = 200
PREC = Accurate
SIGMA = 0.2
SYSTEM = Rubyvaspy :: o fe la
Accessing the VASP input files for 'mp-22590' from the Materials Project website, the INCAR is:
ALGO = Fast
EDIFF = 0.001
ENCUT = 520
IBRION = 2
ICHARG = 1
ISIF = 3
ISMEAR = -5
ISPIN = 2
LDAU = True
LDAUJ = 0 0 0
LDAUL = 0 2 0
LDAUPRINT = 1
LDAUTYPE = 2
LDAUU = 0 5.3 0
LMAXMIX = 6
LORBIT = 11
LREAL = Auto
LWAVE = False
MAGMOM = 4 * 0.6 4 * 5.0 12 * 0.6
NELM = 100
NSW = 99
PREC = Accurate
SIGMA = 0.05
Am I doing something wrong? I would appreciate any help on this.
Thanks
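As a side note, one way to pin down exactly which tags differ is to load both files with pymatgen's Incar class and diff them. A minimal sketch, assuming the two INCARs have been saved locally (the file names here are hypothetical):

from pymatgen.io.vasp.inputs import Incar

# Hypothetical local copies: one saved from the query result, one downloaded from the website.
incar_api = Incar.from_file("INCAR_api")
incar_web = Incar.from_file("INCAR_web")

# Incar.diff() reports which tags the two files share and which ones differ.
diff = incar_api.diff(incar_web)
print(diff["Different"])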

Related

Rsyslog unable to send multiline logs

I'm unable to push the logs below via rsyslog; it is forwarding only the first line of each log entry.
Kafka-server logs:
[2022-07-25 11:43:45,091] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = INTERNAL://0.0.0.0:9092,BROKER://0.0.0.0:9091,CLIENT://0.0.0.0:9093
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = BROKER
inter.broker.protocol.version = 2.3-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = INTERNAL:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:PLAINTEXT
listeners = INTERNAL://:9092,BROKER://:9091,CLIENT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.3-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 120
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = 0.0.0.0:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2022-07-25 11:43:45,145] ERROR Fatal error during SupportedServerStartable startup. Prepare to shutdown (io.confluent.support.metrics.SupportedKafka)
java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
at scala.Predef$.require(Predef.scala:224)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1492)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1460)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1114)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1094)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1091)
at kafka.server.KafkaConfig.fromProps(KafkaConfig.scala)
at io.confluent.support.metrics.SupportedServerStartable.<init>(SupportedServerStartable.java:52)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:45)
rsysconf.d/10kafka.conf
$InputFilePollInterval 1
input(type="imfile"
      File="/var/log/kafka/server.log"
      Tag="app-error"
      Severity="error"
      startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}"
)
*.* #Fluentdvmip:5142
Can someone please guide me on how to send the complete logs from rsyslog to Fluentd?
Below is the regex I'll be using in the Fluentd configuration:
https://regex101.com/r/NaNVcr/1
Or do we need to modify the Kafka log4j properties to get proper logging?
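As a sanity check, the start-of-message regex can be tested against the sample log before it goes into the rsyslog or Fluentd configuration. Note that the Kafka lines above begin with '[', so the startmsg.regex shown earlier, which is anchored on a leading digit, will not match them. A minimal Python sketch of such a test:

import re

# Equivalent of the POSIX [[:digit:]] classes in the imfile config, with an optional
# leading '[' so it matches lines such as "[2022-07-25 11:43:45,091] INFO ...".
start = re.compile(r"^\[?\d{4}-\d{2}")

lines = [
    "[2022-07-25 11:43:45,091] INFO KafkaConfig values:",
    "advertised.host.name = null",
]
for line in lines:
    # True marks the start of a new multiline record, False a continuation line.
    print(bool(start.match(line)), line)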
Instead of sending the raw Kafka server logs through rsyslog, I first converted the Kafka logs into JSON format.
I created a JAR using this link.
Steps followed:
Cloned the repository linked above on the VM where Kafka was installed
Installed the latest version of Maven
Ran mvn package inside the cloned directory
A JAR was created inside the target directory: "log4j-json-layout-1.0-SNAPSHOT.jar"
Copied this JAR to /usr/share/java/kafka/
Added the code below to /etc/kafka/log4j.properties and commented out the old configuration
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=de.thmshmm.log4j.JsonLayout
log4j.appender.kafkaAppender.layout.DatePattern=yyyy-MM-dd HH:mm:ss
Restarted the confluent-kafka service.

Need to generate serial number

I have a requirement to generate 4-character serial numbers like the ones below, in Python or shell scripting.
The serial numbers should start from 0001, 0002, ...; after reaching 0999 they should continue with A001, A002, ..., A999, then B001, and so on.
I tried the code below in Python, but it's not fully working: after a few numbers it starts generating 5 characters.
def excel_format(num):
    res = ""
    while num:
        mod = (num - 1) % 26
        res = chr(65 + mod) + res
        num = (num - mod) // 26
    return res

def full_format(num, d=3):
    set_flag = 0
    chars = num // (10**d - 1) + 1  # this becomes A..ZZZ
    if len(excel_format(chars)) >= 2:
        set_flag = 1
    if len(excel_format(chars)) > 2:
        set_flag = 2
    if set_flag == 1:
        d = 2
        chars = num // (10**d - 1) + 1  # this becomes A..ZZZ
    digit = num % (10**d - 1) + 1  # this becomes 001..999
    return excel_format(chars) + "{:0{}d}".format(digit, d)

if __name__ == '__main__':
    for i in range(1, 10001):
        unique_code = full_format(i, d=3)
        print('Unique Code is =>', unique_code)
This Python code will generate the required 4-character (Unique_code) serial numbers:
#!/usr/bin/python3
import re

for i in range(1, 10000):
    if i < 1000:
        print("i =", str(i).zfill(4))
    else:
        m = re.findall(r'(\d)(\d\d\d)', str(i))
        code = 64 + int(m[0][0])
        print("i =", i, "Unique_code =", chr(code) + m[0][1])
Excerpts from output:
i = 0001
i = 0002
...
i = 0999
i = 1000 Unique_code = A000
i = 1001 Unique_code = A001
i = 1002 Unique_code = A002
...
i = 1997 Unique_code = A997
i = 1998 Unique_code = A998
i = 1999 Unique_code = A999
i = 2000 Unique_code = B000
i = 2001 Unique_code = B001
i = 2002 Unique_code = B002
i = 2003 Unique_code = B003
...
i = 9997 Unique_code = I997
i = 9998 Unique_code = I998
i = 9999 Unique_code = I999
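Note that this output starts each letter block at A000 and holds 1000 codes per block, while the question asks for blocks of 999 codes starting at A001. A minimal sketch that follows the stated sequence exactly (0001-0999, then A001-A999, B001-B999, and so on up to Z999):

import string

def serial(n):
    # n = 1 -> "0001", n = 999 -> "0999", n = 1000 -> "A001", n = 1999 -> "B001"
    block, offset = divmod(n - 1, 999)  # 999 serials per block: digits first, then A..Z
    if block == 0:
        return "{:04d}".format(offset + 1)
    return string.ascii_uppercase[block - 1] + "{:03d}".format(offset + 1)

for n in (1, 999, 1000, 1998, 1999, 26973):
    print(n, serial(n))  # 26973 ("Z999") is the last representable code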
I'm not familiar enough with Python, but since you've tagged ksh:
#!/bin/ksh
typeset -Z3 sn                  # zero-pad sn to three digits
Letter=( 0 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z )
Index=0
while [[ $Index -lt 27 ]]; do   # 27 prefixes: "0" plus A..Z
    sn=0
    while [[ $sn -lt 1000 ]]; do
        print ${Letter[$Index]}$sn
        ((sn++))
    done
    ((Index++))
done

How to get the right demand in the newsvendor model?

What I'm trying to accomplish is the newsvendor problem, where the program is supposed to run and give me the demand that gives me the best chance of turning a profit, as per this link.
But the issue is that when I run the code below, it prints the right demand along with other demand values as well.
q = {5: 0.2, 6: 0.25, 7: 0.3, 8: .25}
w = 55
p = 80
s = 40
cul5 = 0.2
cul6 = 0.25
cul7 = 0.3
cul8 = .25

overage = w - s
underage = p - w
crit = overage / float((underage) + (overage))  # it's better to use floats within the parentheses in Python 2

cumul_q = {}
cumulativevalue = 0
for key, value in sorted(q.iteritems()):
    cumulativevalue = cumulativevalue + value
    # print key, value
    cumul_q[key] = cumulativevalue
# print cumul_q

previous_key = None
for key, value in sorted(cumul_q.iteritems(), reverse=True):
    cumulprob = 1 - value
    cumulprob1 = float(cumulprob)
    if crit < cumulprob1:
        continue
    elif crit > cumulprob1:
        print key
        previous_key = key
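For reference, the standard newsvendor rule is to order the smallest quantity q whose cumulative demand probability F(q) reaches the critical ratio Cu / (Cu + Co), where Cu = p - w is the underage cost and Co = w - s is the overage cost. A minimal Python 3 sketch of that rule on the question's data (the break is what prevents the extra demand values from printing):

q = {5: 0.2, 6: 0.25, 7: 0.3, 8: 0.25}
w, p, s = 55, 80, 40

crit = (p - w) / float((p - w) + (w - s))  # Cu / (Cu + Co) = 25 / 40 = 0.625

cumulative = 0.0
for demand, prob in sorted(q.items()):
    cumulative += prob
    if cumulative >= crit:  # first demand level whose F(q) reaches the critical ratio
        print(demand)       # prints 7, exactly once
        break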

How to improve my python code to speed it up?

Below is my current code:
import pandas as pd
import math
import csv

fund = 10000
print("investment", fund)
pval = 0
oldportfolio = []
dts = ["06 Feb 2017", "07 Feb 2017", "08 Feb 2017", "09 Feb 2017", "10 Feb 2017",
       "13 Feb 2017", "14 Feb 2017", "15 Feb 2017", "16 Feb 2017", "17 Feb 2017",
       "20 Feb 2017", "21 Feb 2017", "22 Feb 2017", "23 Feb 2017", "27 Feb 2017"]
for dt in dts:
    files = ["stocklistcustom.csv"]
    for file in files:
        df = pd.read_csv(file, header=None)
        i = 0
        filecount = len(df)
        result = []
        while i < filecount:
            # while i < 10:
            name = df[0][i]
            link = df[1][i]
            mcsym = df[2][i]
            i = i + 1
            filepath = "data/nse/his/" + mcsym + ".csv"
            try:
                sp = pd.read_csv(filepath, header=None)
                endrow = sp[sp[0] == dt].index[0] + 1
                parray = []
                tarray = []
                starray = []
                intdate = []
                p1 = 0
                p2 = 0
                p3 = 0
                p4 = 0
                j = 0
                mavg15 = ''
                mavg60 = ''
                olddiff = 0
                days = 2
                strtrow = endrow - days - 60
                for k in range(strtrow, endrow):
                    date = sp[0][k]
                    price = float(sp[4][k])
                    k = k + 1
                    parray.append(price)
                    j = j + 1
                    strtavg = j - 15
                    mavg15 = sum(parray[strtavg:j]) / 15
                    strtavg = j - 60
                    mavg60 = sum(parray[strtavg:j]) / 60
                    # buy criteria
                    if j > 59:
                        diff = mavg60 - mavg15
                        if diff < 0 and olddiff > 0:
                            trigger = 1
                            intdate.append(date)
                        else:
                            trigger = 0
                        tarray.append(trigger)
                        olddiff = diff
                    # sell criteria
                    if j == (days + 60):
                        pricep = (price - p1) * 100 / p1
                        p1p = (p1 - p2) * 100 / p2
                        p2p = (p2 - p3) * 100 / p3
                        p3p = (p3 - p4) * 100 / p4
                        if pricep < -5 or pricep > 8:
                            sell = 1
                        if price < p1 and p1 < p2 and p2 < p3:
                            sell = 1
                        else:
                            sell = 0
                    p4 = p3
                    p3 = p2
                    p2 = p1
                    p1 = price
                if sum(tarray) > 0:
                    result.append([name, mcsym, "buy", price])
                if sell > 0:
                    result.append([name, mcsym, "sell", price])
            except:
                # print(name, "not found")
                pass
        # print(result)
        output = "output/triggers/" + dt + "trigger.csv"
        with open(output, "wb") as f:
            writer = csv.writer(f)
            writer.writerows(result)
        print(output, "exported")
The above code builds a list named result and exports a CSV file of buy/sell calls for each date.
The code below then processes the data in the result list to compute the portfolio value:
# Code for calculating investment
portfolio = []
for row in result:
    if row[2] == "sell" and len(oldportfolio) > 0:
        pindex = 0
        for buys in oldportfolio:
            bindex = 0
            for stock in buys:
                if row[0] == stock[0]:
                    sellqty = stock[2]
                    sellp = row[3]
                    sellval = sellqty * sellp
                    purchasep = stock[1]
                    sellcost = purchasep * sellqty
                    print(dt, "selling", row[0], row[1], sellp, sellqty, sellval)
                    # print(oldportfolio)
                    del oldportfolio[pindex][bindex]
                    # print(oldportfolio)
                    fund = fund + sellval
                    pval = pval - sellcost
                bindex = bindex + 1
            pindex = pindex + 1
# print("op", oldportfolio)
# print(dt, "fund after selling", fund)
buycount = sum(1 for row in result if row[2] == "buy")
if buycount > 0:
    maxinvest = fund / buycount
    for row in result:
        if row[2] == "buy":
            name = row[0]
            price = row[3]
            qty = math.floor(maxinvest / price)
            if qty > 0:
                val = qty * price
                print(dt, "buying", name, row[1], price, qty, val)
                portfolio.append([name, price, qty, val])
                fund = fund - val
# print("portfolio", portfolio)
pval = pval + sum(row[3] for row in portfolio)
print(dt, "cash", fund, "portfolio value", pval, "total", fund + pval)
oldportfolio.append(portfolio)
print(oldportfolio)
It gives me the value of the portfolio for each day of trading based on certain rules, but its execution time is far too long. How can I reduce it?
Also, I need to change pval, as it is calculated incorrectly in the current code; it must be calculated based on that particular day's prices.
Your code has multiple nested loops, which is probably why it is so slow.
But your biggest problem isn't speed, it's readability. It is really hard to reason about your code; consider refactoring it.
I'm sure you'll find some bottlenecks and be able to improve your code while refactoring.
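For instance, the two moving averages can be computed in one vectorized pass with pandas instead of re-summing list slices for every row. A minimal sketch, assuming the same CSV layout as in the question (column 0 holds the date and column 4 the closing price; the file path is hypothetical):

import pandas as pd

sp = pd.read_csv("data/nse/his/EXAMPLE.csv", header=None)  # hypothetical symbol file
close = sp[4].astype(float)

# Rolling means replace the manual sum(parray[...]) / 15 and / 60 inside the loop.
mavg15 = close.rolling(15).mean()
mavg60 = close.rolling(60).mean()

# A buy trigger fires where diff crosses from positive to negative,
# mirroring the "diff < 0 and olddiff > 0" test in the original loop.
diff = mavg60 - mavg15
crossed = (diff < 0) & (diff.shift(1) > 0)
print(sp[0][crossed])  # dates on which the crossover occurred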

pro = pro*(x[ip] - x[ir]) giving "tuple index out of range" error (new to Python)

This is the code. Please help: I'm getting a "tuple index out of range" error at line 19.
import math
n = 7
x = 0.5, 1.2, 2.1, 2.9, 3.6, 4.5, 5.7
y = 3.2, 5.2, 9.3, 14.6, 20.5, 30.1, 45.2
xx = 3.4
yy = y[0]
fact = 1
for i in range(0, n):
    fact = fact*(xx - x[i])
    s = 0.0
    i1 = i + 2
    for ip in range(0, i1):
        pro = 1.0
        for ir in range(0, i1):
            if (ir == ip): continue
            pro = pro*(x[ip] - x[ir])
        s = s + y[ip]/pro
    yy = yy + s*fact
print "x=%5.2f y=%5.2f" % (xx, yy)
You're setting i1 = i + 2 in your loop. Since i runs from 0 to n - 1, i1 ends up larger than the length of x. If you put a
print i1
before the for ip loop, you can see exactly where it goes wrong.
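The loop bound is the fix: the code appears to be Newton-style divided-difference interpolation over the n tabulated points, and the divided difference at step i uses points 0 through i + 1, so the outer loop must stop at i = n - 2. A minimal Python 3 sketch of the corrected bounds, assuming that interpretation:

n = 7
x = (0.5, 1.2, 2.1, 2.9, 3.6, 4.5, 5.7)
y = (3.2, 5.2, 9.3, 14.6, 20.5, 30.1, 45.2)
xx = 3.4

yy = y[0]
fact = 1.0
for i in range(n - 1):      # was range(0, n): i1 = i + 2 then exceeded len(x)
    fact = fact * (xx - x[i])
    s = 0.0
    i1 = i + 2              # number of points used in this divided difference
    for ip in range(i1):
        pro = 1.0
        for ir in range(i1):
            if ir == ip:
                continue
            pro = pro * (x[ip] - x[ir])
        s = s + y[ip] / pro
    yy = yy + s * fact
print("x=%5.2f y=%5.2f" % (xx, yy))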