I have recently been working with Python, and I want to make a program that tells me how long ago I last entered a given input while it is running (e.g. the first thing I input is the word "foo"; 15 minutes later I input "foo" again, and the program prints that I last entered the word foo 15 minutes ago).
Any ideas on how to write such a script? (Thanks in advance.)
Do you mean it should tell you the last time you entered anything, or a specific word?
If it's a specific word, make a dictionary that uses the words as keys, and store the time there.
Record the time using time.time() along with every input() in a dictionary. When the same input is entered a second time, record the time again and take the difference. The difference is in seconds, so divmod() it by 60 to get the minutes and seconds.
import time

inputs = {}
while True:
    i = input("Type something. ")
    t = time.time()
    if i in inputs:  # the input was entered previously
        time_diff = t - inputs[i]
        minutes, seconds = divmod(time_diff, 60)
        print("You typed that", minutes, "minutes and", seconds, "seconds ago")
    inputs[i] = t
I have been trying to convert a string to a date using
${test:toString():toDate('dd-MMM-yy HH.mm.ss.SSSSSSSSS'):format('dd-MMM-yy HH.mm.ss.SSSSSSSSS')}
My value for the test attribute is like 13-MAR-20 15.50.41.396000000.
When I use the above expression to convert the string to a date, it actually changes the date as below:
test (input value):
13-MAR-20 15.50.41.396000000
time (output value):
18-Mar-20 05.50.41.000000000
Please advise!
I ran into a similar issue with a date-time encoded in ISO 8601.
The problem is that the digits after the seconds are defined as a fraction of a second, not milliseconds. If there are 3 digits, they are equivalent to milliseconds. If there are more than 3, the toDate() function parses the whole fraction as milliseconds; in your case 396000000 milliseconds ≈ 4.58 days, which is exactly the shift you are seeing.
I solved my issue with replaceAll(), cutting the fractional digits down to the first 3:
${test:replaceAll('(\.[0-9]{3})([0-9]+)','$1')}
But my value was formatted as 18-06-20T05:50:41.396000000, so you may have to adjust the regex.
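The same truncate-then-parse idea can be sketched outside NiFi as well. A minimal Python example, assuming the 13-MAR-20 15.50.41.396000000 layout from the question (strptime's %f accepts at most 6 fractional digits, so the fraction is trimmed first; the month token is title-cased because %b expects e.g. "Mar"):

```python
import re
from datetime import datetime

raw = "13-MAR-20 15.50.41.396000000"

# Trim the fractional seconds to at most 6 digits so %f can parse them
trimmed = re.sub(r"(\.\d{6})\d+", r"\1", raw)

# %b expects e.g. "Mar", so normalize the month token's case
day, month, rest = trimmed.split("-", 2)
normalized = "-".join([day, month.title(), rest])

parsed = datetime.strptime(normalized, "%d-%b-%y %H.%M.%S.%f")
print(parsed)  # 2020-03-13 15:50:41.396000
```

Note that %b is locale-dependent; this assumes an English locale for month abbreviations.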
I got a nice answer to my earlier question about de/serialization, which led me to create a method that either deserializes a defaultdict(list) from a file if it exists, or creates the dictionary itself if the file does not exist.
After implementing some simple code:
try:
    # deserialize - this takes about 6 seconds
    with open('dict.flat') as stream:
        for line in stream:
            vals = line.split()
            lexicon[vals[0]] = vals[1:]
except IOError:
    # create new - this takes about 40 seconds
    for word in lexicon_file:
        word = word.lower()
        for ngram in letter_ngrams(word):  # pseudocode: iterate over the word's letter n-grams
            lexicon[ngram].append(word)
    # serialize - about 6 seconds
    with open('dict.flat', 'w') as stream:
        stream.write('\n'.join([' '.join([k] + v) for k, v in lexicon.iteritems()]))
I was a little shocked at the amount of RAM my script takes when deserializing from a file.
(The lexicon_file contains about 620 000 words and the processed defaultdict(list) contains 25 000 keys, while each key holds a list of between 1 and 133 000 strings (average 500, median 20).
Each key is a letter bi-/trigram and its values are the words that contain that letter n-gram.)
When the script creates the lexicon anew, the whole process doesn't use much more than 160 MB of RAM - the serialized file itself is a little over 129 MB.
When the script deserializes the lexicon, the amount of RAM taken by python.exe jumps up to 500 MB.
When I try to emulate the method of creating a new lexicon in the deserialization process with
# deserialize one by one - about 15 seconds
with open('dict.flat') as stream:
    for line in stream:
        vals = line.split()
        for item in vals[1:]:
            lexicon[vals[0]].append(item)
The results are exactly the same - except this code snippet runs significantly slower.
What is causing such a drastic difference in memory consumption? My first thought was that since a lot of elements in the resulting lists are exactly the same, Python somehow creates the dictionary more memory-efficiently with references - something there is no time for when deserializing and mapping whole lists to keys. But if that is the case, why is the problem not solved by appending the items one by one, exactly as when creating a new lexicon?
edit: This topic was already discussed in this question (how have I missed it?!). Python can be forced to create the dictionary from references by using the intern() function:
# deserialize with intern - 45 seconds
with open('dict.flat') as stream:
    for line in stream:
        vals = line.split()
        for item in vals[1:]:
            lexicon[intern(vals[0])].append(intern(item))
This reduces the amount of RAM taken by the dictionary to the expected value (160 MB), but the tradeoff is that the computation time is back to the same level as creating the dict anew, which completely negates the reason for serializing in the first place.
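For reference, in Python 3 intern() moved to sys.intern(). A minimal sketch of the reference-sharing effect it relies on: equal strings built at runtime are normally distinct objects, while interning maps them to one shared object, so a dict full of repeated tokens stores each distinct string only once:

```python
import sys

# Two equal strings built at runtime are usually distinct objects
a = "".join(["ng", "ram"])
b = "".join(["ng", "ram"])
print(a == b, a is b)  # equal, but typically not the same object

# Interning returns one canonical object for each distinct string value,
# so both names now point at the same object in memory
a, b = sys.intern(a), sys.intern(b)
print(a == b, a is b)  # True True
```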
I am fairly new to Python and was wondering how to make this loop run for the number of iterations entered by the user; at the moment it is an infinite loop:
def randMod():
    import random
    heads = 0
    tails = 0
    tries = raw_input('Enter a number:')
    while True:
        runs = 0
        if tries == runs:
            break
        else:
            runs + 1
            coinFlip = random.randrange(0,1+1)
            if coinFlip == 0:
                print "Tails"
                tails + 1
            elif coinFlip == 1:
                print "Heads"
                heads + 1
    print heads
    print tails
randMod()
I am trying to make it so it will simulate a coin flip for how many times the user enters then tallies it at the end. Only problem is I am fairly new to python so I don't know if I got this right or not.
The problem I see here is that you are using raw_input() to read the user's input. That function returns the input as a string. You must convert tries to a number for this to work. As it stands, the comparison tries == runs compares a string with an int, which is never true, so the loop runs forever.
You can do the conversion with int(), e.g. tries = int(raw_input('Enter a number:')).
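Beyond the string/int mismatch, the original code has two more bugs this answer doesn't cover: runs is reset to 0 on every pass of the loop, and bare expressions like runs + 1 and heads + 1 compute a value but never store it (they need +=). A corrected sketch, written in Python 3 syntax (input() and print() instead of raw_input and print statements), with the interactive prompt replaced by a fixed value so it runs non-interactively:

```python
import random

def rand_mod(tries):
    """Flip a coin `tries` times, printing each result, and return the tallies."""
    heads = 0
    tails = 0
    for _ in range(tries):
        if random.randrange(2) == 0:  # randrange(2) gives 0 or 1
            print("Tails")
            tails += 1                # += actually stores the new count
        else:
            print("Heads")
            heads += 1
    return heads, tails

tries = int("10")  # stands in for int(input('Enter a number:'))
heads, tails = rand_mod(tries)
print(heads)
print(tails)
```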
For one task I have to create files with a timestamp attached to the filename for uniqueness. Another program reads these files. The condition for the other program to pick up a file is that the filename contains a 14-digit timestamp (YYYYMMDDHHMMSS), which I am getting through SYSTEMTIME. The issue I am facing is that when the seconds field is e.g. '10', it sometimes gets cut down in the filename and only '1' is displayed as the seconds field of the timestamp. The other program then doesn't pick up the file because the timestamp contains only 13 digits. How can I solve this issue by any method other than checking the length of the timestamp and appending a '0'?
Thanks
Mahboob
Attaching some example code would make this easier to diagnose.
I used code like the below and haven't met the problem you are facing (strftime zero-pads %S, so the seconds field is always two digits):
time_t timestamp = time(NULL);
char data[kFileNameLength + 1];
tm time_struct;
strftime(data, sizeof(data), "%Y%m%d%H%M%S", localtime_r(&timestamp, &time_struct));
return std::string(data);
I have a file containing a list of events spaced by some delay. Here is an example:
0, Hello World
0.5, Say Hi
2, Say Bye
I would like to be able to replay this sequence of events. The first column is the delta between two consecutive events (the first starts immediately, the second happens 0.5 s later, the third 2 s after that, ...).
How can I do that on Windows? Is there anything that can ensure I am very accurate on the timing? The idea is to be as close as what you would hear listening to music: you don't want your audio event to happen merely near the right time, but exactly on time.
This can be done easily using the sleep function from the time module. The code could look like this:
import time

# Change data.txt to the name of your file
data_file = open("data.txt", "r")
# Get rid of blank lines (often the last line of the file)
vals = [i for i in data_file.read().split('\n') if i]
data_file.close()

for i in vals:
    i = i.split(',', 1)   # split only on the first comma, in case the message contains one
    i[1] = i[1][1:]       # drop the space after the comma
    time.sleep(float(i[0]))
    print(i[1])
This is an imperfect algorithm, but it should give you an idea of how it can be done. We read the file, split it into a newline-delimited list, then go through each comma-delimited pair, sleeping for the number of seconds specified and printing the specified string.
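Since the question asks about timing accuracy: sleeping for each raw delta lets small overruns (sleep always waits at least as long as requested, plus the time spent handling the event) accumulate across the sequence. A sketch that instead schedules each event against an absolute monotonic clock, so lateness in one event is not carried into the next:

```python
import time

# Events from the question: (delta in seconds, message)
events = [(0.0, "Hello World"), (0.5, "Say Hi"), (2.0, "Say Bye")]

start = time.monotonic()
target = 0.0
for delta, message in events:
    target += delta
    # Sleep until the absolute target time rather than for the raw delta,
    # so per-event overruns do not accumulate
    remaining = target - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    print(message)
```

time.monotonic() is used instead of time.time() because it cannot jump backwards if the system clock is adjusted mid-playback.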
You're looking for time.sleep(...) in Python.
Load the file as a list of lines, then print the values:
import time

with open("datafile.txt", "r") as infile:
    lines = infile.read().split('\n')

for line in lines:
    if not line:                         # skip blank lines (e.g. a trailing newline)
        continue
    wait, response = line.split(',', 1)  # split only on the first comma
    time.sleep(float(wait))
    print(response)