I recently upgraded my TensorFlow from Rev8 to Rev12. In Rev8 the default "state_is_tuple" flag in rnn_cell.LSTMCell is set to False, so I initialized my LSTM cell with a list; see the code below.
#model definition
lstm_cell = rnn_cell.LSTMCell(self.config.hidden_dim)
outputs, states = tf.nn.rnn(lstm_cell, data, initial_state=self.init_state)
#init_state placeholder and feed_dict
def add_placeholders(self):
    self.init_state = tf.placeholder("float", [None, self.cell_size])

def get_feed_dict(self, data, label):
    feed_dict = {self.input_data: data,
                 self.input_label: label,
                 self.init_state: np.zeros((self.config.batch_size, self.cell_size))}
    return feed_dict
In Rev12, the default "state_is_tuple" flag is set to True, so to make my old code work I had to explicitly set the flag to False. However, now I get a warning from TensorFlow saying:
"Using a concatenated state is slower and will soon be deprecated.
Use state_is_tuple=True"
I tried to initialize the LSTM cell with a tuple by changing the placeholder definition for self.init_state to the following:
self.init_state = tf.placeholder("float", (None, self.cell_size))
but now I got an error message saying:
"'Tensor' object is not iterable"
Does anyone know how to make this work?
Feeding a "zero state" to an LSTM is much simpler now using cell.zero_state. You do not need to explicitely define the initial state as a placeholder. Define it as a tensor instead and feed it if required. This is how it works,
lstm_cell = rnn_cell.LSTMCell(self.config.hidden_dim)
self.initial_state = lstm_cell.zero_state(self.batch_size, dtype=tf.float32)
outputs, states = tf.nn.rnn(lstm_cell, data, initial_state=self.initial_state)
If you wish to feed some other value as the initial state, say next_state = states[-1] for instance, calculate it in your session and pass it in the feed_dict like so:
feed_dict[self.initial_state] = next_state
In the context of your question, lstm_cell.zero_state() should suffice.
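If you do want to keep explicit placeholders for the initial state, here is a minimal sketch of my own (assuming the rnn_cell module from your snippet also exposes LSTMStateTuple, as recent releases do): create one placeholder per state part and wrap them in an LSTMStateTuple, which a cell built with state_is_tuple=True accepts as initial_state.

c_state = tf.placeholder(tf.float32, [None, self.config.hidden_dim])
h_state = tf.placeholder(tf.float32, [None, self.config.hidden_dim])
# the (c, h) tuple is what a state_is_tuple=True cell expects
self.init_state = rnn_cell.LSTMStateTuple(c_state, h_state)
outputs, states = tf.nn.rnn(lstm_cell, data, initial_state=self.init_state)

# feed zeros for both parts of the tuple
feed_dict = {c_state: np.zeros((self.config.batch_size, self.config.hidden_dim)),
             h_state: np.zeros((self.config.batch_size, self.config.hidden_dim))}

This keeps your placeholder-based workflow, but zero_state is less code for the common case.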
Unrelated, but remember that you can pass both Tensors and Placeholders in the feed dictionary! That's how self.initial_state is working in the example above. Have a look at the PTB Tutorial for a working example.
I wrote this code in Python 2.7.17:
class prt(object):
    def _print(self, x):
        self.x = x
        print x
    def Write_turtle(self, shape, move=False, text_of_show, font=('Arial', 16, 'bold')):
        try:
            x.shape(shape)
            x.write(text_of_show, move, font)
        except:
            from turtle import *
            x = Turtle()
            x.shape(shape)
            x.write(text_of_show, move, font)
And it gave me this error at the end of line 5:
SyntaxError: non-default argument follows default argument
Can anyone help me?
Thank you very much.
In your definition of Write_turtle the parameters move and font have
default values. As the error message tells you, you have to put them
at the end of the parameter list, e.g.:
def Write_turtle(self, shape, text_of_show, move=False, font=('Arial', 16, 'bold'))
The reason is that these parameters are optional. If you do not set them, the default value is used. Because text_of_show has no default value, you always have to set it. This also implies that you have to give a value to all parameters before it, so in your original ordering the default value for move could never take effect. If you call e.g.
Write_turtle((20, 10), True)
the interpreter would not know whether the True is the value for move or for text_of_show. If you rearrange the parameters correctly as mentioned above, you can call:
Write_turtle((20, 10), True)
Write_turtle((20, 10), False, True)
The first version sets move=False (its default value); the second one sets move=True.
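As a usage sketch of my own (using a standard turtle shape name instead of the (20, 10) tuple above, and assuming the corrected signature), you can also pass the optional arguments by keyword so their order no longer matters:

t = prt()
# text_of_show is the only extra required argument; move and font
# can be given by keyword in any order, or left at their defaults.
t.Write_turtle('turtle', 'hello world', move=True)
t.Write_turtle('circle', 'hi there', font=('Arial', 24, 'normal'))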
For more information have a look at this!
I am writing an extensive problem in Pyomo in which I need to activate and deactivate assets in projects. I think the easiest way to model this is to write an abstract model and then put each asset into a block. Then, every time a model is instantiated, it would be possible to activate only certain types of blocks and a certain number of each block (e.g. 3 wind turbine blocks). Therefore the blocks will be indexed. Inside these blocks I define parameters that are time dependent, but the time will be a shared index between all of them, so it won't be a local set but a general set.
Here is a short example of the typical situation I am running into:
import pyomo.environ as pm

model = pm.AbstractModel()
model.A = pm.Set()
model.T = pm.Set(ordered=True)  # the set of time

def Ablock(b, g):
    b.A_param = pm.Param(model.T)
model.A_block = pm.Block(model.A, rule=Ablock)

Amodel_dict = \
    {None: dict(
        A={None: [1, 2, 3]},
        T={None: [4, 12, 23]},
        A_block={1: dict(A_param={4: 3, 12: 4, 23: 5}),
                 2: dict(A_param={4: 5, 12: 6, 23: 7}),
                 3: dict(A_param={4: 8, 12: 9, 23: 10})}
    )}

instance = model.create_instance(data=Amodel_dict)
This gives the error:
RuntimeError: Failed to set value for param=A_block[1].A_param, index=4, value=3.
source error message="Index '4' is not valid for indexed component 'A_block[1].A_param'"
Intuitively I feel it is wrong to call model.T inside the function Ablock, as the block is not supposed to know what that refers to, but if I pass it as an argument of the function, it changes the block creation into a block indexed on time as well. Do you have any idea how to solve this?
I found a solution that is quite simple but very practical. Instead of calling model.T, one should navigate the hierarchical structure by using the methods model() or parent_block().
A solution for my problem is to call:
b.model().T or b.parent_block().T
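Applied to the example above, only the reference to T inside the block rule changes, for example:

def Ablock(b, g):
    # reach the top-level model (or, here equivalently, the enclosing block) from inside the rule
    b.A_param = pm.Param(b.model().T)   # or: pm.Param(b.parent_block().T)
model.A_block = pm.Block(model.A, rule=Ablock)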
I have the following structure in my database:
Each House has multiple Bedrooms and multiple Kitchens. Each Kitchen has multiple Cabinets.
Right now, I obtain all the Cabinets based on a given Bedroom (I know it's weird). So I pass in a Bedroom, it looks up the House, gets all the Kitchens associated with that House, then all Cabinets of those Kitchens. This is the code for it:
public function findCabinetsByBedroom(Bedroom $bedroom)
{
    return $this->createQueryBuilder('cabinet')
        ->join('cabinet.kitchen', 'kitchen')
        ->join('kitchen.house', 'house')
        ->join('house.bedroom', 'bedroom')
        ->select('cabinet')
        ->andWhere('bedroom = :bedroom')
        ->setParameter('bedroom', $bedroom)
        ->getQuery()
        ->getResult(Query::HYDRATE_OBJECT);
}
I would like to extend my code to contain the Kitchen of each Cabinet and even the House. I managed to get the Kitchen by simply adding ->addSelect('kitchen') to my code, like so:
public function findCabinetsAndTheirKitchenByBedroom(Bedroom $bedroom)
{
    return $this->createQueryBuilder('cabinet')
        ->join('cabinet.kitchen', 'kitchen')
        ->join('kitchen.house', 'house')
        ->join('house.bedroom', 'bedroom')
        ->select('cabinet')
        ->addSelect('kitchen')
        ->andWhere('bedroom = :bedroom')
        ->setParameter('bedroom', $bedroom)
        ->getQuery()
        ->getResult(Query::HYDRATE_OBJECT);
}
But trying to add the House information of the Kitchen doesn't work the same way, and I guess it has to do with the fact that the Cabinets have no direct relationship with the House. Xdebug shows the following if I use the latter method (aka findCabinetsAndTheirKitchenByBedroom):
▼$cabinet = {array} [2]
▼0 = {App\Entity\Cabinet} [4]
id = 1
►time = {DateTime} [3]
productCategory = "someCat"
▼kitchen = {App\Entity\Kitchen} [3]
▼house = {Proxies\__CG__\App\Entity\House} [7]
lazyPropertiesDefaults = {array} [0]
►__initializer__ = {Closure} [3]
►__cloner__ = {Closure} [3]
__isInitialized__ = false
*App\Entity\House*bedroom = null
*App\Entity\House*id = 555
*App\Entity\House*name = null
id = 55
country = "US"
As opposed to this, when I use the first one (aka findCabinetsByBedroom):
▼$cabinet = {array} [2]
▼0 = {App\Entity\Cabinet} [4]
id = 1
►time = {DateTime} [3]
productCategory = "someCat"
▼kitchen = {Proxies\__CG__\App\Entity\Kitchen} [7]
lazyPropertiesDefaults = {array} [0]
►__initializer__ = {Closure} [3]
►__cloner__ = {Closure} [3]
__isInitialized__ = false
*App\Entity\Kitchen*house = null
*App\Entity\Kitchen*id = 55
*App\Entity\Kitchen*country = null
So based on these results I concluded that addSelect indeed returned the Kitchen. And yes, I checked the data in the database; these are the correct results. But how would I add the House information to the Cabinet?
One more issue: even though Xdebug shows the correct Kitchen info for each Cabinet, it is for some reason not returned when testing with Postman or the browser. I get the following result:
{ "id": 1, "time": { "date": "2019-06-12 11:51:22.000000", "timezone_type": 3, "timezone": "UTC" }, "productCategory": "someCat", "kitchen": {} }
So it's empty for some reason. Any ideas on how to display the information inside the Kitchen object? I thought it had to do with it being a different object than Cabinet, but following this logic the content of time should have been empty as well, since it's a DateTime object. So that can't be it, but I have no clue why it's returned empty.
When using Doctrine, associations with other Entity objects are loaded "LAZY" by default; see the docs about Extra lazy associations:
With Doctrine 2.1 a feature called Extra Lazy is introduced for associations. Associations are marked as Lazy by default, which means the whole collection object for an association is populated the first time its accessed. [..]
(I will say, the documentation on the default fetch settings is very lacking, as this is the only spot (the upgrade docs) where I could find this stated.)
What this means: if you have something like the following annotation, the relations will not be included until they are accessed, e.g. via a getObject() or getCollection() call:
/**
 * @var Collection|Bathroom[]
 * @ORM\OneToMany(targetEntity="Entity\Bathroom", mappedBy="house")
 */
private $bathrooms;
In this case, when you get your House object, an inspection of the object during execution (i.e. with Xdebug) will show that $house does have $bathrooms; however, each instance will show something along the lines of:
{
    id: 123,
    location: null,
    house: 2,
    faucets: null,
    shower: null,
    __initialized__: false
}
This object's presence shows that Doctrine is aware of a Bathroom being associated with the House (hence the back-and-forth IDs being set on the bi-directional relation, but no other properties), however the __initialized__: false indicates that Doctrine did not fetch the object. Hence: fetch="LAZY".
To get the associations with your get*() action, they must be set to fetch="EAGER", as shown here in the docs.
Whenever you query for an entity that has persistent associations and these associations are mapped as EAGER, they will automatically be loaded together with the entity being queried and is thus immediately available to your application.
To fix your issue:
Mark the associations you wish to return immediately as fetch="EAGER"
Extra:
Have a look at the Annotations Reference, specifically for OneToOne, OneToMany, ManyToOne and ManyToMany. Those references also show the fetch options available for each association type.
I'm training my model using TensorFlow in C++. Python is used only for constructing the graph. So is there a way to save and restore the graph and its state purely in C++? I know about the Python class tf.train.Saver but as far as I understand it does not exist in C++.
The tf.train.Saver class currently exists only in Python, but (i) it is built from TensorFlow ops that you can run from C++, and (ii) it exposes the Saver.as_saver_def() method that lets you get a SaverDef protocol buffer with the names of ops that you must run to save or restore a model.
In Python, you can get the names of the save and restore ops as follows:
saver = tf.train.Saver(...)
saver_def = saver.as_saver_def()
# The name of the tensor you must feed with a filename when saving/restoring.
print saver_def.filename_tensor_name
# The name of the target operation you must run when restoring.
print saver_def.restore_op_name
# The name of the tensor you must fetch when saving.
print saver_def.save_tensor_name
In C++ to restore from a checkpoint, you call Session::Run(), feeding in the name of the checkpoint file as saver_def.filename_tensor_name, with a target op of saver_def.restore_op_name. To save another checkpoint, you call Session::Run(), again feeding in the name of the checkpoint file as saver_def.filename_tensor_name, and fetching the value of saver_def.save_tensor_name.
Recent TensorFlow versions include some helper functions to do the same in C++ without Python. These are generated from the ProtoBuf definitions in the pip package (${HOME}/.local/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/protobuf/saver.pb.h).
// save
// (here graph_def is assumed to be the loaded MetaGraphDef, which carries the SaverDef,
//  and tensor_dict is a shorthand for std::vector<std::pair<std::string, tensorflow::Tensor>>)
tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING, tensorflow::TensorShape());
checkpointPathTensor.scalar<std::string>()() = "some/path";
tensor_dict feed_dict = {{graph_def.saver_def().filename_tensor_name(), checkpointPathTensor}};
status = sess->Run(feed_dict, {}, {graph_def.saver_def().save_tensor_name()}, nullptr);

// restore
tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING, tensorflow::TensorShape());
checkpointPathTensor.scalar<std::string>()() = "some/path";
tensor_dict feed_dict = {{graph_def.saver_def().filename_tensor_name(), checkpointPathTensor}};
status = sess->Run(feed_dict, {}, {graph_def.saver_def().restore_op_name()}, nullptr);
This is based on the undocumented Python way (more details) of restoring a model:
def restore(sess, metaGraph, fn):
    restore_op_name = metaGraph.as_saver_def().restore_op_name  # u'save/restore_all'
    restore_op = tf.get_default_graph().get_operation_by_name(restore_op_name)
    filename_tensor_name = metaGraph.as_saver_def().filename_tensor_name  # u'save/Const'
    sess.run(restore_op, {filename_tensor_name: fn})
For a working and complete version see here.
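The saving counterpart in Python looks analogous (a short sketch of my own, following the SaverDef fields described above: feed the checkpoint path and fetch the save tensor):

def save(sess, saver_def, fn):
    # writing a checkpoint = fetch the save tensor while feeding the filename
    sess.run(saver_def.save_tensor_name, {saver_def.filename_tensor_name: fn})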
I'm trying to create buttons in a loop and execute a command when they are pressed.
k = 0
for row in list:
    delete_list.append("button_delete" + str(k))
    delete_list[k] = Gtk.Button(label="Delete")
    grid.attach(delete_list[k], columns + 1, k, 1, 1)
    delete_list[k].connect("clicked", globals()["on_button" + str(k) + "_clicked"])
    k += 1
The buttons are displayed correctly, but I'm having problems connecting the "clicked" signal:
delete_list[k].connect("clicked",globals()["on_button"+str(k)+"_clicked"])
KeyError: 'on_button0_clicked'
I first thought that the error was because there is no method on_button0_clicked, but I created it and I'm still getting the same error.
Also, any advice on a good way to dynamically create the methods that respond to the buttons would be great. I actually need to create a method for each button that uses the "k" counter.
To dynamically create a function which binds loop variables as locals, you need a factory function to generate them:
def callback_factory(num):
    def new_callback(widget):
        print(widget, num)
    return new_callback

for k, row in enumerate(somelist):
    button = Gtk.Button(label=str(k))
    button.connect('clicked', callback_factory(k))
This avoids a common pitfall, which is to create a function or lambda within the loop body, causing the same thing to be printed for each button click. That happens because the generated function is bound to the environment it was created in, where the k variable is mutating and shared between all callbacks. The pitfall can be observed with the following code, which does not work as you might expect:
for k, row in enumerate(somelist):
    def callback(widget):
        print(k)
    button = Gtk.Button(label=str(k))
    button.connect('clicked', callback)
PyGObject also supports unique user data per connect() call, which is passed into the callback:
def callback(widget, num):
    print(widget, num)

for k, row in enumerate(somelist):
    button = Gtk.Button(label=str(k))
    button.connect('clicked', callback, k)
This allows using the same callback with a varying argument which can be cleaner in some cases.
Side note: it is probably not a good idea to shadow the builtin "list" class by assigning your own list instance to that name.