I'm working through a pull-request review and was asked to refactor the following code. Can someone help me with this?
@start_time = (params[:start_time].to_time.hour.to_i < @room.opening_time.hour.to_i) ? @room.opening_time.hour_minutes : params[:start_time]
@end_time = (params[:end_time].to_time.hour.to_i > @room.closing_time.hour.to_i) ? @room.closing_time.hour_minutes : params[:end_time]
To start, you could throw some of those crazy long chains you're comparing into variables, like so:
opening_time = @room.opening_time.hour.to_i
closing_time = @room.closing_time.hour.to_i
starting_time = params[:start_time].to_time.hour.to_i
ending_time = params[:end_time].to_time.hour.to_i
open_minutes = @room.opening_time.hour_minutes
close_minutes = @room.closing_time.hour_minutes
Then you can do
@start_time = starting_time < opening_time ? open_minutes : params[:start_time]
@end_time = ending_time > closing_time ? close_minutes : params[:end_time]
Which is a lot more readable.
From there, I would recommend extracting the actual conditionals into two methods, to make it clearer in plain English what the code is trying to do. For example:
def get_start_time(time)
  starting_time = time.to_time.hour.to_i
  opening_time = @room.opening_time.hour.to_i
  open_minutes = @room.opening_time.hour_minutes

  starting_time < opening_time ? open_minutes : time
end
def get_end_time(time)
  ending_time = time.to_time.hour.to_i
  closing_time = @room.closing_time.hour.to_i
  close_minutes = @room.closing_time.hour_minutes

  ending_time > closing_time ? close_minutes : time
end
@start_time = get_start_time(params[:start_time])
@end_time = get_end_time(params[:end_time])
Which, while it may be more lines of code, is a lot clearer and simpler to read, and that is a huge part of refactoring, especially when working with a group.
Yesterday, I rendered the following code in an RMarkdown file:
trip_data_202208 <- read_csv(
  "/Users/mif06/Documents/Cyclistic/CSV/202208_divvy_tripdata.csv",
  col_types = cols_only(
    ride_id = col_character(),
    rideable_type = col_character(),
    started_at = col_character(),
    ended_at = col_character(),
    total_ride_time = col_time(format = ""),
    day_of_week = col_character(),
    start_station_name = col_character(),
    start_station_id = col_character(),
    end_station_name = col_character(),
    end_station_id = col_character(),
    start_lat = col_double(),
    start_lng = col_double(),
    end_lat = col_double(),
    end_lng = col_double(),
    member_casual = col_character()
  )
)
Today, I went back to add to the RMarkdown file, and I got the following error message:
Error in UseMethod("collector_value") :
no applicable method for 'collector_value' applied to an object of class "c('collector_skip', 'collector')"
I'm not sure why I'm having an issue since I rendered the same code before.
I can't even find what this error means. Thank you for your help!
Hello, I am new to GANs. I wanted to ask why we need to use detach() in the discriminator; I marked the line in the code below. It is part of the CycleGAN implementation from Aladdin Persson's GitHub. Thank you for your response.
with torch.cuda.amp.autocast():
    fake_horse = gen_H(zebra)                # generating fake horse
    D_H_real = disc_H(horse)                 # classifying real horse
    D_H_fake = disc_H(fake_horse.detach())   # <-- the detach() I am asking about
    H_reals += D_H_real.mean().item()
    H_fakes += D_H_fake.mean().item()
    D_H_real_loss = mse(D_H_real, torch.ones_like(D_H_real))
    D_H_fake_loss = mse(D_H_fake, torch.zeros_like(D_H_fake))
    D_H_loss = D_H_real_loss + D_H_fake_loss

    fake_zebra = gen_Z(horse)
    D_Z_real = disc_Z(zebra)
    D_Z_fake = disc_Z(fake_zebra.detach())
    D_Z_real_loss = mse(D_Z_real, torch.ones_like(D_Z_real))
    D_Z_fake_loss = mse(D_Z_fake, torch.zeros_like(D_Z_fake))
    D_Z_loss = D_Z_real_loss + D_Z_fake_loss

    # put it together
    D_loss = (D_H_loss + D_Z_loss) / 2
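In case it helps, here is a minimal standalone sketch (not code from that repository; the parameter names are made up) of what .detach() changes: it cuts the autograd graph at the fake image, so the discriminator's MSE loss cannot push gradients back into the generator, while the discriminator itself still receives gradients.

import torch

# Hypothetical stand-ins: gen_param plays the role of the generator's weights,
# disc_param the role of the discriminator's weights.
gen_param = torch.ones(1, requires_grad=True)
disc_param = torch.ones(1, requires_grad=True)

fake = gen_param * 2.0                        # stands in for fake_horse = gen_H(zebra)

# Discriminator-style loss on the *detached* fake: the graph is cut at `fake`.
loss_detached = (disc_param * fake.detach()).sum()
loss_detached.backward()
print(gen_param.grad)    # None          -> no gradient reaches the generator
print(disc_param.grad)   # tensor([2.])  -> the discriminator still gets its gradient

# The same loss without detach(): gradients would also flow into the generator,
# which is not wanted during the discriminator update.
disc_param.grad = None
loss_attached = (disc_param * fake).sum()
loss_attached.backward()
print(gen_param.grad)    # tensor([2.])  -> the generator would be affected too

In the full training loop, detaching also means the later generator update can reuse fake_horse without the discriminator's backward pass having already consumed that part of the graph.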
As the title says, I'm supposed to query a database (SQLAlchemy) in a sequential manner: most of the data come from checkboxes in the HTML. I know that I could just do all the queries in one line, but here is the problem: I have to query only on the values that are going to be True, and obviously I can't foresee how many that will be. So I'm asking if any of you knows a method to do this correctly; moreover, I didn't find anything about leaving out the booleans that are not True. Thanks for helping me!
Here is the code. I'm sure there are no problems in the database or in the HTML; it is just a question of how to do this.
if request.method == 'POST':
    post_city = str(request.form['city'])
    post_nation = str(request.form['nation'])
    post_tools = request.form.get('fitness_tools')
    post_pub = request.form.get('pub_presence')
    post_games = request.form.get('games')
    post_crowded = request.form.get('crowded')
    post_fountain = request.form.get('fountain')

    list_of_parks1 = Parks.query.filter_by(city=post_city, nation=post_nation).all()
    if post_tools:
        list_of_parks2 = Parks.query.filter_by(fitness_tools=post_tools).all()
    if post_pub:
        list_of_parks3 = Parks.query.filter_by(pub_presence=post_pub).all()
    if post_games:
        list_of_parks4 = Parks.query.filter_by(children_games=post_games).all()
    if post_crowded:
        list_of_parks5 = Parks.query.filter_by(crowded=post_crowded).all()
    if post_fountain:
        list_of_parks6 = Parks.query.filter_by(fountain=post_fountain).all()

    list_of_parks = Parks.query.filter_by(city=post_city, nation=post_nation).all()
    if list_of_parks:
        session['parks_list'] = list_of_parks.id_park
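Not a tested answer, just a sketch of one common approach: since each .filter_by() call on a Flask-SQLAlchemy query returns a new query object, you can start from the city/nation query and keep narrowing it only for the checkboxes that actually came back truthy. The column names below are taken from the snippet above; how the raw form values (e.g. 'on') map onto the column types is assumed to be handled elsewhere.

# Hedged sketch: one query, narrowed only by the boxes that were ticked.
query = Parks.query.filter_by(city=post_city, nation=post_nation)

optional_filters = {
    'fitness_tools': post_tools,
    'pub_presence': post_pub,
    'children_games': post_games,
    'crowded': post_crowded,
    'fountain': post_fountain,
}
for column, value in optional_filters.items():
    if value:  # unchecked boxes come back as None and are simply skipped
        query = query.filter_by(**{column: value})

list_of_parks = query.all()
if list_of_parks:
    session['parks_list'] = [park.id_park for park in list_of_parks]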
self.solver = 'adam'
if self.solver == 'adam':
    optimizer = tf.train.AdamOptimizer(self.learning_rate_init)
if self.solver == 'sgd_nestrov':
    optimizer = tf.train.MomentumOptimizer(learning_rate=self.learning_rate_init, momentum=self.momentum,
                                           use_nesterov=True)
gradients, variables = zip(*optimizer.compute_gradients(self.loss))
clipped_gradients, self.global_norm = tf.clip_by_global_norm(gradients, self.max_grad_norm)
update_ops_ = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
optimizer_op = optimizer.apply_gradients(zip(clipped_gradients, variables))
control_ops = tf.group([self.ema_op] + update_ops_)
with tf.control_dependencies([optimizer_op]):
    self.optimizer = control_ops
I call self.optimizer with the session.
The code above is not updating the gradients. However, if I change the control-dependencies part of the code to the version below, it works perfectly fine, except that it misses out on a final exponential moving average (self.ema_op) update, which is not what I want:
self.solver = 'adam'
if self.solver == 'adam':
    optimizer = tf.train.AdamOptimizer(self.learning_rate_init)
if self.solver == 'sgd_nestrov':
    optimizer = tf.train.MomentumOptimizer(learning_rate=self.learning_rate_init, momentum=self.momentum,
                                           use_nesterov=True)
gradients, variables = zip(*optimizer.compute_gradients(self.loss))
clipped_gradients, self.global_norm = tf.clip_by_global_norm(gradients, self.max_grad_norm)
update_ops_ = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
optimizer_op = optimizer.apply_gradients(zip(clipped_gradients, variables))
control_ops = tf.group([self.ema_op] + update_ops_)
# with tf.control_dependencies(optimizer_op):
#     self.optimizer = control_ops
with tf.control_dependencies([self.ema_op] + update_ops_):
    self.optimizer = optimizer.apply_gradients(zip(clipped_gradients, variables))
Please tell me what I am missing.
You need to create the TensorFlow operations under the with statement, not just assign the variable. Doing self.optimizer = control_ops has no effect, because it does not create any new TensorFlow operations.
Without fully understanding your problem, I think you want something like this:
with tf.control_dependencies([optimizer_op]):
    control_ops = tf.group([self.ema_op] + update_ops_)
    self.optimizer = control_ops
The with statement opens a block; any new ops you create in TensorFlow inside it carry a control dependency on optimizer_op, in this case.
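To make the difference concrete, here is a small standalone sketch in the same TF1 graph style (the variables are made up, not the poster's model): assigning an op that already exists inside the with block attaches no dependency, while an op created inside the block does pick it up.

import tensorflow as tf  # assumes TF 1.x graph mode

x = tf.Variable(0.0)
train_step = tf.assign_add(x, 1.0)            # stands in for optimizer_op
counter = tf.Variable(0.0)
counter_update = tf.assign_add(counter, 1.0)  # stands in for the EMA/update ops

# No effect: counter_update already exists, so it never gets the dependency,
# and running `broken` will not run train_step.
with tf.control_dependencies([train_step]):
    broken = counter_update

# Works: tf.group creates a *new* op inside the block, so that op picks up a
# control dependency on train_step; running `working` therefore also runs
# train_step along with counter_update.
with tf.control_dependencies([train_step]):
    working = tf.group(counter_update)

Running either op in a session (after initializing the variables) shows the difference: x stays at 0.0 after running broken, but increments after running working.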
The following code:
from lxml import etree

NAMESPACES = {'ns': 'http://www.starstandard.org/STAR/5', 'ns1': 'http://www.openapplications.org/oagis/9'}
ro_xml = '{}.xml'.format(6001265)
parser = etree.XMLParser(ns_clean=True)
tree = etree.parse(ro_xml)
root = etree.tostring(tree.getroot())
# print root

residence = []
residence_address = '//ns:ResidenceAddress/*'
shop_supplies_amount = tree.xpath(residence_address, namespaces=NAMESPACES)
for child in shop_supplies_amount:
    residence.append("%s: %s" % (child.tag, child.text))
print residence
When I run this, I get the namespace in front of the tag names, like so: '{http://www.starstandard.org/STAR/5}LineOne: 10757 RIVER FRONT PARKWAY'.
What am I doing incorrectly, and how do I get rid of the namespace in front of the tag names?
I figured a way around it.
NAMESPACES = {'ns': 'http://www.starstandard.org/STAR/5', 'ns1': 'http://www.openapplications.org/oagis/9'}
ro_xml = '{}.xml'.format(6001265)
parser = etree.XMLParser(ns_clean=True)
tree = etree.parse(ro_xml, parser)

vehicle = {}
vehicle_info = tree.xpath(twc.XML_VEHICLE_INFO, namespaces=NAMESPACES)
for child in vehicle_info:
    vehicle.update({child.tag: child.text})

model = vehicle['{%s}Model' % NAMESPACES['ns']]
print model
Not the cleanest I admit, but it gets me where I need to be.
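A possibly cleaner variant (just a sketch, assuming the same from lxml import etree import and the vehicle_info node list from the snippet above): etree.QName splits a Clark-notation tag like '{namespace}localname' into its parts, so the dict can be keyed by the bare tag name instead of the fully qualified one.

vehicle = {}
for child in vehicle_info:
    local_name = etree.QName(child.tag).localname  # 'Model' rather than '{...}Model'
    vehicle[local_name] = child.text

model = vehicle['Model']
print(model)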