Is it possible to accurately measure milliseconds between a few code statements using Python? - python-2.7

I am developing a Python application using Python 2.7.
The application uses the pyserial library to read data bytes from a serial port.
A while loop reads 1 byte of data in each iteration. In each iteration
I have to measure the execution time between statements, and if it is less than 10 ms, wait until 10 ms have passed before starting the next iteration. There are two questions here:
Time measurement
What would be the best way to measure time between Python statements with millisecond accuracy (an error of 1 ms or 2 ms is acceptable)?
Time delay
How can I use that measured time as a delay, in order to wait until the total time reaches 10 ms (total time = 10 ms = code execution time + delay)?
I have tried the time library, but it does not give good resolution at the millisecond level; sometimes it reports no elapsed time at all for very short durations.
For example:
import time

while uart.is_open:
    uart.read()                        # read one byte from the serial port
    start = time.time()
    # user code will go here
    end = time.time()
    execution_time = end - start
    if execution_time < 0.010:         # 10 ms, expressed in seconds
        remaining_time = 0.010 - execution_time
        time.sleep(remaining_time)     # delay for the remainder of the 10 ms

You can get a string that is minutes:seconds:microseconds using datetime like so:
import datetime
string = datetime.datetime.now().strftime("%M:%S:%f")
And then turn it into a number of microseconds to make comparisons handier:
m, s, u = string.split(":")
time_us = float(m) * 60e6 + float(s) * 1e6 + float(u)
In your example, that would look like this:
import datetime
def time_us_now():
    h, m, s, u = (float(part) for part in datetime.datetime.now().strftime("%H:%M:%S:%f").split(":"))
    return h * 3600e6 + m * 60e6 + s * 1e6 + u
start = time_us_now()
# User code will go here.
end = time_us_now()
execution_time = end - start
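To also cover the delay part of the question, the remaining time can be padded out with time.sleep() (a minimal sketch, not part of the original answer; it assumes start and end are the microsecond values computed above):
import time

TEN_MS_IN_US = 10000.0

execution_time = end - start                             # microseconds, from time_us_now()
if execution_time < TEN_MS_IN_US:
    time.sleep((TEN_MS_IN_US - execution_time) / 1e6)    # time.sleep() takes seconds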

Solution:
I have managed to get down to 1 ms resolution by doing two things:
Changed (increased) the baud rate from 19200 to 115200.
Used time.clock() rather than time.time(), as it has better accuracy and resolution.
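Putting the two points together, the pacing loop can look roughly like this (a sketch only, assuming Windows, where time.clock() on Python 2.7 is a high-resolution wall-clock timer, and reusing the uart object from the question; on Python 3, time.perf_counter() would take its place):
import time

INTERVAL = 0.010                          # 10 ms per iteration

while uart.is_open:
    start = time.clock()                  # high-resolution timer on Windows / Python 2.7
    data = uart.read()                    # read one byte at 115200 baud
    # user code goes here
    elapsed = time.clock() - start
    if elapsed < INTERVAL:
        time.sleep(INTERVAL - elapsed)    # pad the iteration out to 10 ms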

Related

How to calculate time estimate for parallel tasks?

I need to calculate the total amount of time for a certain number of tasks to be completed. Details:
5 tasks total. Time estimates (in seconds) for each: [30, 10, 15, 20, 25]
Concurrency: 3 tasks at a time
How can I calculate the total time it will take to process all tasks, given the concurrency? I know it will take at least as long as the longest task (25 seconds), but is there a formula/method to calculate a rough total estimate, that will scale with more tasks added?
If you don't mind making some approximations it could be quite simple. If the tasks take roughly the same time to complete, you could use the average of the tasks duration as a basis (here, 20 seconds).
Assuming that the system is always full of tasks, that task duration is small enough, that there are many tasks and that concurrency level is high enough, then:
estimated_duration = average_task_duration * nb_tasks / nb_workers
Where nb_workers is the number of concurrent threads.
Here is some Python code that shows the idea:
from random import random
from time import sleep, monotonic
from concurrent.futures import ThreadPoolExecutor

def task(i: int, duration: float):
    sleep(duration)

def main():
    nb_tasks = 20
    nb_workers = 3
    average_task_duration = 2.0
    expected_duration = nb_tasks * average_task_duration / nb_workers
    durations = [average_task_duration + (random() - 0.5) for _ in range(nb_tasks)]
    print(f"Starting work... Expected duration: {expected_duration:.2f} s")
    start = monotonic()
    with ThreadPoolExecutor(max_workers=nb_workers) as executor:
        for i, d in enumerate(durations):
            executor.submit(task, i, d)
    stop = monotonic()
    print(f"Elapsed: {(stop - start):.2f} s")

if __name__ == "__main__":
    main()
If these assumptions do not hold in your case, then you would be better off using a bin packing algorithm, as Jerôme suggested.
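For the original five tasks, a simple greedy simulation that always hands the next task to the worker that becomes free first (a rough longest-processing-time sketch, not the bin packing algorithm referenced above) gives a quick estimate:
import heapq

def estimate_makespan(durations, nb_workers):
    # Each entry is the time at which that worker becomes free.
    workers = [0.0] * nb_workers
    heapq.heapify(workers)
    for d in sorted(durations, reverse=True):        # longest tasks first
        free_at = heapq.heappop(workers)
        heapq.heappush(workers, free_at + d)
    return max(workers)

print(estimate_makespan([30, 10, 15, 20, 25], 3))    # 35 seconds for the example above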

Calculate timestamp difference in microseconds using Python

The timestamps are obtained by recording the current time.
The Python script I have right now is ...
import time

sec1 = float(time.time() * 1000000)
sec2 = float(time.time() * 1000000)
print(sec1)
print(sec2)
print(sec2 - sec1)
I am not sure if my approach is correct and I don't know if time.time is precise enough. Thanks :)
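A slightly more robust variant (a suggestion, not from the original post) is timeit.default_timer(), which selects the highest-resolution wall-clock timer available on the platform and avoids the large intermediate numbers:
import timeit

start = timeit.default_timer()
# ... code being measured ...
stop = timeit.default_timer()
print("elapsed: %.0f microseconds" % ((stop - start) * 1e6))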

Error in extracting time based on a format from a file using Python

13:20:06.037
13:20:06.038
13:20:06.039
I want to read the timestamps from a file using python and compare the difference between adjacent values. Below is the code I used for this.
h, m, s = str(diff).split(':')
v,w = str(s).split('.')
I tried to split diff into hours, minutes and seconds using split(':'). In s, there is both a seconds and a milliseconds value. When I try to run the second line of code, I get the error "ValueError: need more than 1 value to unpack".
If you would like to convert string records from a file, then you should try:
from datetime import datetime

# --put here Your code, that retrieves time records from file--
format = '%H:%M:%S.%f'
time_string = '09:54:11.001'
time = datetime.strptime(time_string, format)
This does the job of displaying the time, as you wanted:
*put your time variable here*.strftime("%H:%M:%S.%f")
And this code snippet shows how to get the difference between two times in your format:
time1 = '09:54:11.001'
time2 = '10:32:43.837'
format = '%H:%M:%S.%f'
difference = datetime.strptime(time2, format) - datetime.strptime(time1, format)
You can read more about time functions in Python docs: https://docs.python.org/2/library/time.html
Regards.
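Applied to the original problem of comparing adjacent timestamps, a minimal sketch could look like this (assuming a hypothetical file times.txt with one %H:%M:%S.%f timestamp per line; the names are placeholders):
from datetime import datetime

FORMAT = '%H:%M:%S.%f'

with open('times.txt') as f:                       # hypothetical input file
    stamps = [datetime.strptime(line.strip(), FORMAT) for line in f if line.strip()]

for earlier, later in zip(stamps, stamps[1:]):
    diff = later - earlier                         # a datetime.timedelta
    print(diff.total_seconds() * 1000.0)           # difference in milliseconds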

Pygame, I am trying to make a timer

I have created a game in which two cars race each other by tapping buttons faster than the other. I want it so that after the race ends it prints out how long it took the winner to reach the finish line. I have tried using pygame.time.get_ticks(), but that gives the time since pygame.init() was called.
You can use pygame.time.get_ticks().
Set a start time and an end time and measure the difference:
import pygame as py

py.init()
clock = py.time.Clock()

start_time = py.time.get_ticks()
print "started at:", start_time

for i in xrange(0, 30):    # wait 1 second (30 ticks at 30 fps)
    clock.tick(30)

end_time = py.time.get_ticks()
print "finished at:", end_time

time_taken = end_time - start_time
print "time taken:", time_taken

Django formatting multiple times

I am using the default timedelta field for Django, but as most know, the way it prints is horrific. So I have developed my own function, convert_timedelta(duration). It computes days, hours, minutes and seconds, just like the versions you might find on this website. It works like a charm on a single time value (i.e. when I was calculating an average time for one specific column). However, I am now modifying it so that it returns multiple times for a column, separating the returned times grouped by their ID rather than returning just one time. The multiple returned times work fine with the default formatting, but when I apply my function it fails, and no particularly informative errors are raised.
def convert_timedelta(duration):
    days, seconds = duration.days, duration.seconds
    hours = days * 24 + seconds // 3600
    minutes = (seconds % 3600) // 60
    seconds = seconds % 60