I have a Duck class, such that each Duck object created contains wingspan and weight fields. Each of these should be initialized randomly for every duck: wingspan to a random float in the range [80.0, 100.0] cm, and weight to a random float in the range [0.7, 1.6] kg. So far I have:
import random

class Duck:
    def __init__(self):
        self.wingspan = round(random.uniform(80.0, 100.0), 1)
        self.weight = round(random.uniform(0.7, 1.6), 2)
But the second part is asking me to write a function called makeFlock() that takes an integer parameter, n, and returns a list of n Duck objects. I'm not sure how to do this. Any suggestions?
def makeFlock(n):
    flock = []
    for _ in range(n):
        flock.append(Duck())
    return flock
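That works. If you prefer something more compact, the same thing can be written as a list comprehension; here is a small sketch (assuming the Duck class above), plus a quick usage check:

def makeFlock(n):
    # Each Duck() call gets its own random wingspan and weight from __init__.
    return [Duck() for _ in range(n)]

flock = makeFlock(5)
print([(d.wingspan, d.weight) for d in flock])  # five (wingspan, weight) pairs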
def anshu():
    a = 1 + 2
    print(a)
anshu()

def sanju():
    b = 2 + 3
    print(b)
sanju()

def bala():
    c = a + b
    print(c)
Can you explain? I assigned values inside one or more functions and I want to use those values in another function in Python.
To access a variable from one function in another function, you can either return the variable from the first function and pass it as an argument to the second function, or you can make the variable global so that it can be accessed from any function. Here's an example using the first approach:
def anshu():
    a = 1 + 2
    return a

def sanju():
    b = 2 + 3
    return b

def bala(a, b):
    c = a + b
    print(c)

anshu_result = anshu()
sanju_result = sanju()
bala(anshu_result, sanju_result)
Here's an example using the second approach:
def anshu():
    global a
    a = 1 + 2

def sanju():
    global b
    b = 2 + 3

def bala():
    c = a + b
    print(c)

anshu()
sanju()
bala()
Note that using global variables is generally not considered good practice, because it can make your code difficult to maintain and debug. It's usually better to use the first approach of passing variables as arguments to functions.
I need to fill in this visitor class so it can be used to fill nodes with their depth in a single pass.
Example:
Consider the tree as
         12
        /  \
      30    50
     /  \   /
    40  50 60
           /
          70
If I consider a tree whose preorder traversal is given as
[12, 30, 40, 50, 50, 60, 70]
the output (preorder traversal) I should get is
[0, 1, 2, 2, 1, 2, 3]
that is, the actual node values have to be replaced by their corresponding depths.
The visitor function I wrote is:
class DepthVisitor(object):
    def __init__(self):
        self.result = []

    def fill_depth(self, node):
        def count(node, level):
            if node != None:
                node.value = level
                level += 1
                count(node.left, level)
                if node.left == None:
                    count(node.right, level)
        count(node, 0)
visitor = DepthVisitor()
pre_order(root, visitor.fill_depth)
The problem is that every time a node is passed to the visitor function its value gets overwritten to 0, and hence the output comes out as
[0, 0, 0, 0, 0, 0, 0]
How can I prevent it from overwriting the already visited node values to 0 so as to get the correct output?
Or is there any alternate/better way to do the same without using recursion?
You should initialize your DepthVisitor to have a level attribute by doing self.level = 0. You can then modify your count function to use that persistent level, as it will be stored in the object scope instead of the function scope. You can increment and decrement it as appropriate across function calls.
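Here is a minimal, self-contained sketch of that idea. It assumes a simple Node class with value/left/right attributes (the names in your exercise may differ), and it has fill_depth do its own preorder walk once from the root instead of being handed every node by pre_order, so nothing gets reset to 0:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

class DepthVisitor(object):
    def __init__(self):
        self.result = []
        self.level = 0                      # persistent level, stored in object scope

    def fill_depth(self, node):
        if node is not None:
            node.value = self.level         # overwrite the value with the depth
            self.result.append(node.value)  # collect the preorder output
            self.level += 1                 # going one level down
            self.fill_depth(node.left)
            self.fill_depth(node.right)
            self.level -= 1                 # back up after both subtrees

# The example tree from the question.
root = Node(12, Node(30, Node(40), Node(50)), Node(50, Node(60, Node(70))))
visitor = DepthVisitor()
visitor.fill_depth(root)
print(visitor.result)                       # [0, 1, 2, 2, 1, 2, 3]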
Hi everyone. I was trying to use the method copy(), but I got really frustrated: it seems there is a bug within my program. When comparing c1 and c2 I was supposed to get ct=99 only instead of ct=0, but it turns out there are some extra terms behind.
Just run the code and you can immediately spot what that weird thing is. Thank you everyone.
NB: This problem is a generic programming problem and has nothing to do with Fourvector.
import numpy as np

class FourVector:
    """ This document is a demonstration of how to create a class of Four vector """
    def __init__(self, ct=0, x=1, y=5, z=2, r=None):
        self.ct = ct
        self.r = np.array(r if r else [x, y, z])

    def __repr__(self):
        return "%s(ct=%g,r=array%s)" % ("FourVector", self.ct, str(self.r))

    def copy(self):
        return FourVector(self.ct, self.r)

c1 = FourVector(ct=0, r=[1, 2, 3])  # Note: c1, c2 here are objects; we used __repr__ to make them printable
print(c1)
c2 = c1.copy()  # use method copy within object c1
c2.ct = 99
print(c2)
When you are copying, you are passing two unnamed arguments to FourVector.__init__. Python interprets them positionally, so you are effectively calling:
FourVector.__init__(new_self, ct=self.ct, x=self.r, y=5, z=2, r=None)
r is still None, so new_self.r is assigned to be np.array([self.r, y, z]). This is why the array in c2 has extra terms.
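You can see where the positional values land with a tiny standalone sketch (show() here is just a hypothetical helper with the same signature as your __init__, not part of your class):

def show(ct=0, x=1, y=5, z=2, r=None):
    # Prints which parameter each argument was bound to.
    print("ct=%r x=%r y=%r z=%r r=%r" % (ct, x, y, z, r))

show(0, [1, 2, 3])     # the list binds to x, and r stays None
show(0, r=[1, 2, 3])   # the keyword makes it bind to r instead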
Instead, you need to tell Python that the second value should be for the r argument, not just the second argument:
def copy(self):
    return FourVector(self.ct, r=self.r)
Alternatively, you could either re-order the arguments:
def __init__(self, ct, r=None, x=1, y=5, z=2):
or even remove the x, y and z arguments and provide them as the default value for r:
def __init__(self, ct, r=[1,5,2]):
How is Python's list type implemented? Is it a linked list, an array? I searched around and only found people guessing. My C knowledge isn't good enough to look at the source code.
The C code is pretty simple, actually. Expanding one macro and pruning some irrelevant comments, the basic structure is in listobject.h, which defines a list as:
typedef struct {
    PyObject_HEAD
    Py_ssize_t ob_size;

    /* Vector of pointers to list elements. list[0] is ob_item[0], etc. */
    PyObject **ob_item;

    /* ob_item contains space for 'allocated' elements. The number
     * currently in use is ob_size.
     * Invariants:
     *     0 <= ob_size <= allocated
     *     len(list) == ob_size
     *     ob_item == NULL implies ob_size == allocated == 0
     */
    Py_ssize_t allocated;
} PyListObject;
PyObject_HEAD contains a reference count and a type identifier. So, it's a vector/array that overallocates. The code for resizing such an array when it's full is in listobject.c. It doesn't actually double the array, but grows by allocating
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
new_allocated += newsize;
to the capacity each time, where newsize is the requested size (not necessarily allocated + 1 because you can extend by an arbitrary number of elements instead of append'ing them one by one).
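As a rough illustration (not the CPython source itself, just a Python sketch that mirrors the quoted rule; newer CPython versions use a slightly different formula), you can watch the capacity progress when appending one element at a time:

def grow(newsize):
    # Mirror of the quoted growth rule from listobject.c.
    new_allocated = (newsize >> 3) + (3 if newsize < 9 else 6)
    return new_allocated + newsize

allocated = 0
for size in range(1, 60):          # simulate 59 single-element appends
    if size > allocated:           # no free slot left: over-allocate
        allocated = grow(size)
        print("size=%d -> allocated=%d" % (size, allocated))
# prints capacities 4, 8, 16, 25, 35, 46, 58, ...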
See also the Python FAQ.
It's a dynamic array. Practical proof: Indexing takes (of course with extremely small differences (0.0013 µsecs!)) the same time regardless of index:
...>python -m timeit --setup="x = [None]*1000" "x[500]"
10000000 loops, best of 3: 0.0579 usec per loop
...>python -m timeit --setup="x = [None]*1000" "x[0]"
10000000 loops, best of 3: 0.0566 usec per loop
I would be astounded if IronPython or Jython used linked lists - they would ruin the performance of many many widely-used libraries built on the assumption that lists are dynamic arrays.
I would suggest Laurent Luce's article "Python list implementation". It was really useful for me because the author explains how the list is implemented in CPython and uses excellent diagrams for this purpose.
List object C structure
A list object in CPython is represented by the following C structure. ob_item is a list of pointers to the list elements. allocated is the number of slots allocated in memory.
typedef struct {
    PyObject_VAR_HEAD
    PyObject **ob_item;
    Py_ssize_t allocated;
} PyListObject;
It is important to notice the difference between allocated slots and the size of the list. The size of a list is the same as len(l). The number of allocated slots is what has been allocated in memory. Often, you will see that allocated can be greater than the size. This is to avoid having to call realloc each time a new element is appended to the list.
...
Append
We append an integer to the list: l.append(1). What happens?
We continue by adding one more element: l.append(2). list_resize is called with n+1 = 2 but because the allocated size is 4, there is no need to allocate more memory. Same thing happens when we add 2 more integers: l.append(3), l.append(4). The following diagram shows what we have so far.
...
Insert
Let’s insert a new integer (5) at position 1: l.insert(1,5) and look at what happens internally.
...
Pop
When you pop the last element: l.pop(), listpop() is called. list_resize is called inside listpop() and if the new size is less than half of the allocated size then the list is shrunk.
You can observe that slot 4 still points to the integer but the important thing is the size of the list which is now 4.
Let's pop one more element. In list_resize(), size - 1 = 4 - 1 = 3 is less than half of the allocated slots, so the list is shrunk to 6 slots and the new size of the list is now 3.
You can observe that slot 3 and 4 still point to some integers but the important thing is the size of the list which is now 3.
...
Remove
Python list object has a method to remove a specific element: l.remove(5).
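The over-allocation described above can also be observed from Python itself: sys.getsizeof reports the space for the allocated slots, so it grows in jumps rather than on every append (the exact byte counts depend on the CPython version and platform):

import sys

l = []
last = sys.getsizeof(l)
print("len=0 -> %d bytes" % last)
for i in range(20):
    l.append(i)
    size = sys.getsizeof(l)
    if size != last:               # a resize happened: more slots were allocated
        print("len=%d -> %d bytes" % (len(l), size))
        last = size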
This is implementation dependent, but IIRC:
- CPython uses an array of pointers
- Jython uses an ArrayList
- IronPython apparently also uses an array. You can browse the source code to find out.
Thus they all have O(1) random access.
In CPython, lists are arrays of pointers. Other implementations of Python may choose to store them in different ways.
According to the documentation,
Python’s lists are really variable-length arrays, not Lisp-style linked lists.
As others have stated above, the lists (when appreciably large) are implemented by allocating a fixed amount of space, and, if that space should fill, allocating a larger amount of space and copying over the elements.
To understand why the method is O(1) amortized, without loss of generality assume we have inserted a = 2^n elements and we now have to double our table to size 2^(n+1). That means we are currently doing 2^(n+1) operations. For the last copy we did 2^n operations, before that 2^(n-1), and so on all the way down to 8, 4, 2, 1. Adding these up, we get 1 + 2 + 4 + 8 + ... + 2^(n+1) = 2^(n+2) - 1 < 4*2^n = O(2^n) = O(a) total operations for the insertions, i.e. O(1) amortized time. Also, it should be noted that if the table allows deletions, shrinking the table has to be done at a different factor (e.g. 3x).
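A quick empirical sketch of that argument: simulate an array that doubles when full and count how many element copies the resizes cost in total; the copies stay proportional to the number of appends:

def simulate_appends(n):
    # Count the element copies caused by doubling resizes during n appends.
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:       # full: double the capacity, copy everything
            copies += size
            capacity *= 2
        size += 1
    return copies

for n in (2**10, 2**15, 2**20):
    copies = simulate_appends(n)
    print(n, copies, copies / n)   # the ratio stays below 1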
A list in Python is something like an array, where you can store multiple values. A list is mutable, which means you can change it. The more important thing you should know is that when we create a list, the variable holds a reference to the list object. If you assign it to another variable and change it through that variable, the main list will change too. Let's try an example:
list_one = [1,2,3,4]
my_list = list_one
#my_list: [1,2,3,4]
my_list.append("new")
#my_list: [1,2,3,4,'new']
#list_one: [1,2,3,4,'new']
We appended to my_list, but our main list changed too. That means the assignment didn't create a copy of the list; it just created another reference to the same list.
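If an independent copy is what you want, you can make a shallow copy instead; a small sketch (list.copy() or slicing both work):

list_one = [1, 2, 3, 4]
my_list = list_one.copy()    # or: my_list = list_one[:]
my_list.append("new")
# my_list:  [1, 2, 3, 4, 'new']
# list_one: [1, 2, 3, 4]   (unchanged, because my_list is a separate object)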
I found this article really helpful for understanding how lists can be implemented using Python code.
import ctypes

class OhMyList:
    def __init__(self):
        self.length = 0
        self.capacity = 8
        self.array = (self.capacity * ctypes.py_object)()

    def append(self, item):
        if self.length == self.capacity:
            self._resize(self.capacity * 2)
        self.array[self.length] = item
        self.length += 1

    def _resize(self, new_cap):
        new_arr = (new_cap * ctypes.py_object)()
        for idx in range(self.length):
            new_arr[idx] = self.array[idx]
        self.array = new_arr
        self.capacity = new_cap

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        return self.array[idx]
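For example (assuming the class above, plus the import ctypes it needs), a quick check that appends past the initial capacity and indexes back into it:

lst = OhMyList()
for i in range(20):                 # crosses the initial capacity of 8, forcing resizes
    lst.append(i)
print(len(lst), lst[0], lst[19])    # 20 0 19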
In CPython a list is implemented as a dynamic array, and therefore when we append, not just one element's worth of space is added: some extra space is allocated, so that new space does not have to be allocated on every append.
Weak as in weak references. Basically, I need a sequence of numbers where some of the elements can be deallocated (garbage-collected) when they aren't needed anymore.
scalaz.EphemeralStream is what you want.
Views provide you with a lazy collection, where each value is computed as it is needed.
One thing you could do is create an Iterable instead of a Stream. Your Iterable needs to provide an iterator method, which returns an iterator with hasNext and next methods.
When you loop over the Iterable, hasNext and next will be called to generate the elements as they are needed, but they are not stored anywhere (like a Stream does).
Simple example:
class Numbers extends Iterable[Int] {
  def iterator = new Iterator[Int] {
    private var num = -1
    def hasNext = num < 99
    def next = { num += 1; num }
  }
}