I have a string s containing key-value pairs like the one below, and I would like to construct a data frame from it:
s <- "{'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}"
r1 <- sapply(strsplit(s, "[^0-9_]+"), as.numeric)
r2 <- sapply(strsplit(s, "[^A-Z]+"), as.character)
d <- data.frame(id = r2, value = r1)
which gives:
r1
[,1]
[1,] NA
[2,] 121
[3,] 938
[4,] 184
[5,] 338
[6,] 52
r2
[,1]
[1,] ""
[2,] "JJ"
[3,] "NN"
[4,] "DT"
[5,] "VB"
[6,] "RB"
d
  id value
1       NA
2 JJ   121
3 NN   938
4 DT   184
5 VB   338
6 RB    52
First, I would like to avoid the NA and "" entries produced by the regular expression split (the field before the first key is empty). I think it should be something like {2,}, meaning match from the second occurrence on, but I cannot get that to work in R.
Another thing I would like to do: given a data frame with a column like the one below:
m
1 {'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}
2 {'#NN': 168, '#DT': 59, '#VB': 71, '#RB': 5, '#JJ': 35}
3 {'#JJ': 18, '#NN': 100, '#DT': 23, '#VB': 52, '#RB': 11}
4 {'#NN': 156, '#JJ': 39, '#DT': 46, '#VB': 67, '#RB': 21}
5 {'#NN': 112, '#DT': 39, '#VB': 57, '#RB': 8, '#JJ': 32}
6 {'#DT': 236, '#NN': 897, '#VB': 420, '#RB': 122, '#JJ': 240}
7 {'#NN': 316, '#RB': 25, '#DT': 66, '#VB': 112, '#JJ': 81}
8 {'#NN': 198, '#DT': 29, '#VB': 85, '#RB': 37, '#JJ': 44}
9 {'#RB': 30}
10 {'#NN': 373, '#DT': 48, '#VB': 71, '#RB': 21, '#JJ': 36}
11 {'#NN': 49, '#DT': 17, '#VB': 23, '#RB': 11, '#JJ': 8}
12 {'#NN': 807, '#JJ': 135, '#DT': 177, '#VB': 315, '#RB': 69}
I would like to iterate over each row and split its numerical values into columns named by the keys, so that each key becomes a column (with NA where a key is missing from a row).
I would use something that parses JSON, which is what your data seems to be:
s <- "{'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}"
parse.one <- function(s) {
  require(rjson)
  v <- fromJSON(gsub("'", '"', s))
  data.frame(id = gsub("#", "", names(v)),
             value = unlist(v, use.names = FALSE))
}
parse.one(s)
# id value
# 1 JJ 121
# 2 NN 938
# 3 DT 184
# 4 VB 338
# 5 RB 52
For the second part of the question, I would pass a slightly modified version of the parse.one function through lapply, then let plyr's rbind.fill function align the pieces together while filling missing values with NA:
df <- data.frame(m = c(
  "{'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}",
  "{'#NN': 168, '#DT': 59, '#VB': 71, '#RB': 5, '#JJ': 35}",
  "{'#JJ': 18, '#NN': 100, '#DT': 23, '#VB': 52, '#RB': 11}",
  "{'#JJ': 12, '#VB': 5}"
))

parse.one <- function(s) {
  require(rjson)
  y <- fromJSON(gsub("'", '"', s))
  names(y) <- gsub("#", "", names(y))
  as.data.frame(y)
}
library(plyr)
rbind.fill(lapply(df$m, parse.one))
# JJ NN DT VB RB
# 1 121 938 184 338 52
# 2 35 168 59 71 5
# 3 18 100 23 52 11
# 4 12 NA NA 5 NA
For now, I'll offer a solution to the first part of your question. Clean up your string and use read.table:
s <- "{'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}"
read.table(text = gsub(",", "\n", gsub("[{}#]", "", s)),
           header = FALSE, sep = ":", strip.white = TRUE)
# V1 V2
# 1 JJ 121
# 2 NN 938
# 3 DT 184
# 4 VB 338
# 5 RB 52
For the second part, here's another alternative using concat.split from a package I wrote called "splitstackshape":
Sample data:
df <- data.frame(m = c(
"{'#JJ': 121, '#NN': 938, '#DT': 184, '#VB': 338, '#RB': 52}",
"{'#NN': 168, '#DT': 59, '#VB': 71, '#RB': 5, '#JJ': 35}",
"{'#JJ': 18, '#NN': 100, '#DT': 23, '#VB': 52, '#RB': 11}"
))
Similar cleanup as above, plus add an "id" column.
df$m <- gsub("[{}#]", "", df$m)
df$id <- 1:nrow(df)
Load the "splitstackshape" package:
# install.packages("splitstackshape")
library(splitstackshape)
df2 <- concat.split(concat.split.multiple(df, "m", ",", "long"),
                    "m", ":", drop = TRUE)
## df2 <- df2[complete.cases(df2), ] ##
## ^^ might be necessary if there are NAs in the resulting data.frame
The data are now in a "long" format that is easy to manipulate:
df2
# id time m_1 m_2
# 1 1 1 JJ 121
# 2 2 1 NN 168
# 3 3 1 JJ 18
# 4 1 2 NN 938
# 5 2 2 DT 59
# 6 3 2 NN 100
# 7 1 3 DT 184
# 8 2 3 VB 71
# 9 3 3 DT 23
# 10 1 4 VB 338
# 11 2 4 RB 5
# 12 3 4 VB 52
# 13 1 5 RB 52
# 14 2 5 JJ 35
# 15 3 5 RB 11
Here's an example of manipulating the data, using dcast from the "reshape2" package:
library(reshape2)
dcast(df2, id ~ m_1, value.var="m_2")
# id DT JJ NN RB VB
# 1 1 184 121 938 52 338
# 2 2 59 35 168 5 71
# 3 3 23 18 100 11 52
I have a flat array of numbers:
[145, 46, 200, 3, 178, 206, 73, 228, 165, 65, 6, 141, 73, 90, 181, 100]
I need to make an array of arrays with a max of 3 items per sub-array. I looked at some examples, and Enum.chunk(arr, n) seemed like a candidate.
But chunk(arr, 3) says it's deprecated ("use chunk_every(arr, 3) instead"), so I did that, and it produces a strange result compared to chunk.
For example, chunk returns:
[[145, 46, 200], [3, 178, 206], [73, 228, 165], [65, 6, 141], [73, 90, 181]]
while chunk_every returns:
[[145, 46, 200],
 [3, 178, 206],
 [73, 228, 165],
 [65, 6, 141],
 [73, 90, 181],
 'd']
The main difference is an extra element, which looks like a string. It's almost as if chunk_every took the element that chunk cuts off and converted it to a string?
Naturally, I am expecting the replacement function to have the same output given the same input. Right?
Look at the last element: 100. chunk discards that value, while chunk_every appends it as a final chunk of its own. That is the 'd' you see: in the console, Elixir tries to display lists of integers as charlists, since that is its internal representation for them, and 100 is the character code of 'd'.
As you can see in the documentation, you can pass :discard as the leftover parameter, e.g. Enum.chunk_every(arr, 3, 3, :discard), to get the behaviour of the deprecated chunk function.
https://hexdocs.pm/elixir/Enum.html#chunk_every/2
Enum.chunk_every/4 was designed for this. The extra chunk is actually a list of numbers, as you can see if you inspect it with charlists disabled:
[145, 46, 200, 3, 178, 206, 73, 228, 165, 65, 6, 141, 73, 90, 181, 100]
|> Enum.chunk_every(3, 3, [])
|> Enum.each(fn item ->
  IO.inspect item, charlists: false
end)
You can find more detail in the official discussion:
https://github.com/elixir-lang/elixir/issues/7260
Sometimes it makes sense to implement basic functionality ourselves instead of reaching for the standard library, for a precisely controlled result.
Here is a recursive implementation that discards the tail.
input =
  [145, 46, 200, 3, 178, 206, 73, 228,
   165, 65, 6, 141, 73, 90, 181, 100]

defmodule MyEnum do
  def chunk_3(input), do: do_chunk_3(input, [])

  defp do_chunk_3([e1, e2, e3 | rest], acc),
    do: do_chunk_3(rest, [[e1, e2, e3] | acc])

  defp do_chunk_3(_, acc), do: Enum.reverse(acc)
end
MyEnum.chunk_3(input)
#⇒ [[145, 46, 200],
# [3, 178, 206],
# [73, 228, 165],
# [65, 6, 141],
# [73, 90, 181]]
I'm trying to convert numbers that were previously percentages to a decimal format by dividing them by 100 in Google Sheets. Basically, I have:
<polygon points="48, 6, 43, 7, 38, 9, 34, 12, 29, 16, 24, 22, 22, 30, 22, 44, 23, 50, 23, 65, 25, 72, 28, 77, 32, 82, 35, 86, 40, 90, 43, 92, 50, 93, 55, 91, 62, 87, 70, 76, 74, 69, 75, 64, 75, 54, 74, 49, 74, 40, 74, 32, 71, 23, 66, 15, 59, 9, 53, 6" />
And I want:
<polygon points=".48, .06, .43, .07, .38, .09, .34, .12, .29, .16, .24, .22, .22, .30, .22, .44, .23, .50, .23, .65, .25, .72, .28, .77, .32, .82, .35, .86, .40, .90, .43, .92, .50, .93, .55, .91, .62, .87, .70, .76, .74, .69, .75, .64, .75, .54, .74, .49, .74, .40, .74, .32, .71, .23, .66, .15, .59, .09, .53, .06" />
Is there any way to extract the numbers, do an operation on them, and then put them back into the original string? I tried to use a regex token in REGEXREPLACE, but it doesn't seem to be supported:
=(REGEXREPLACE(A2,"[^[:digit:]]",($/10)))
You cannot apply any function to the string replacement pattern in REGEXREPLACE. In this concrete case, you may simply append a 0 before single-digit numbers and then add dots before each sequence of 1 or more digits:
=REGEXREPLACE(REGEXREPLACE(A1,"\b\d\b", "0$0"), "\d+", ".$0")
NOTES:
REGEXREPLACE(A1,"\b\d\b", "0$0") - finds a digit neither preceded nor followed by a letter, digit, or underscore, and adds a 0 in front of it ($0 is the placeholder for the whole match)
REGEXREPLACE(..., "\d+", ".$0") - prepends each chunk of one or more digits with a dot.
I want to implement the U-net CNN architecture. In this architecture, there are several concatenations in the "upsampling" part. I am using Keras 2.1.3, Python 2.7 and TensorFlow 1.4.0-rc0.
My inputs have shape (6, 128, 128) (channels first). Here is the code I have come up with:
# Imports needed for this snippet; assumes image_data_format is "channels_first"
from keras.models import Model
from keras.layers import Input, Conv2D, Activation, MaxPooling2D, UpSampling2D, Concatenate

input_shape = (6, 128, 128)

# Create U-net model
input_fields = Input(shape=input_shape)
f32 = Conv2D(32, (3,3), padding="same")(input_fields)
f32 = Activation("relu", name="f32")(f32)
s32 = Conv2D(32, (3,3), padding="same")(f32)
s32 = Activation("relu",name="s32")(s32) ## To concatenate 32
pool32_64 = MaxPooling2D((2,2), padding="same")(s32)
f64 = Conv2D(64, (3,3), padding="same")(pool32_64)
f64 = Activation("relu")(f64)
s64 = Conv2D(64, (3,3), padding="same")(f64)
s64 = Activation("relu")(s64) # To concatenate 64
pool64_128 = MaxPooling2D((2,2), padding="same")(s64)
f128 = Conv2D(128, (3,3), padding="same")(pool64_128)
f128 = Activation("relu")(f128)
s128 = Conv2D(128, (3,3), padding="same")(f128)
s128 = Activation("relu")(s128)
print "Last shape before Upsampling ", s128.get_shape()
#### vvv Upsampling Part vvv ####
up_128_64 = UpSampling2D((2,2))(s128)
up_128_64 = Conv2D(64, (2,2), padding="same")(up_128_64)
print "Conv2d up_128_64 ", up_128_64.get_shape()
m64 = Concatenate(axis=0)([s64, up_128_64]) #or concatenate([s64, up_128_64], axis=0)
f64U = Conv2D(64, (3,3), padding="same")(m64)
f64U = Activation("relu")(f64U)
#print "f64U.get_shape()", f64U.get_shape()
s64U = Conv2D(64, (3,3), padding="same")(f64U)
s64U = Activation("relu")(s64U)
up_64_32 = UpSampling2D((2,2))(s64U)
up_64_32 = Conv2D(32, (2,2), padding="same")(up_64_32)
m32 = Concatenate(axis=0)([s32, up_64_32]) # or concatenate([s32, up_64_32], axis=0)
f32U = Conv2D(32, (3,3), padding="same")(m32)
f32U = Activation("relu")(f32U)
print "f32U.get_shape()", f32U.get_shape()
s32U = Conv2D(32, (3,3), padding="same")(f32U)
s32U = Activation("relu")(s32U)
output_field = Conv2D(1, (1,1), padding="same")(s32U)
output_field = Activation("relu")(output_field)
print output_field.get_shape()
U_net = Model(input_fields, output_field)
U_net.summary()
U_net.compile(optimizer="RMSProp", loss="mse")#, metrics=["accuracy"])
U_net.fit(X_train, y_train)
The output of U_net.summary() is:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 6, 128, 128) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 128, 128) 1760 input_1[0][0]
__________________________________________________________________________________________________
f32 (Activation) (None, 32, 128, 128) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 128, 128) 9248 f32[0][0]
__________________________________________________________________________________________________
s32 (Activation) (None, 32, 128, 128) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 32, 64, 64) 0 s32[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 64, 64, 64) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 64, 64, 64) 36928 activation_1[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 64, 64, 64) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 64, 32, 32) 0 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 128, 32, 32) 73856 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 128, 32, 32) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 128, 32, 32) 147584 activation_3[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 128, 32, 32) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 128, 64, 64) 0 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 64, 64, 64) 32832 up_sampling2d_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 64, 64, 64) 0 activation_2[0][0]
conv2d_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 64, 64, 64) 36928 concatenate_1[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 64, 64, 64) 0 conv2d_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 64, 64, 64) 36928 activation_5[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 64, 64, 64) 0 conv2d_9[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 64, 128, 128) 0 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 32, 128, 128) 8224 up_sampling2d_2[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 32, 128, 128) 0 s32[0][0]
conv2d_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 32, 128, 128) 9248 concatenate_2[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 32, 128, 128) 0 conv2d_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 128, 128) 9248 activation_7[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 32, 128, 128) 0 conv2d_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 1, 128, 128) 33 activation_8[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 1, 128, 128) 0 conv2d_13[0][0]
==================================================================================================
Total params: 421,313
Trainable params: 421,313
Non-trainable params: 0
The network is built, and
X_train.shape = (576, 6, 128, 128)
y_train.shape = (576, 1, 128, 128)
but during training I receive this error:
Epoch 1/1
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
........../run_CNN_tau.py in <module>()
174 U_net.compile(optimizer="RMSProp", loss="mae")#, metrics=["accuracy"])
175
--> 176 U_net.fit(X_train, y_train)#, validation_data=(X_test, y_test), epochs=1)
177
178 #model.add(Conv2D(32, kernel_size=(16,16), padding="same", activation='relu', input_shape=input_shape))
/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1667 initial_epoch=initial_epoch,
1668 steps_per_epoch=steps_per_epoch,
-> 1669 validation_steps=validation_steps)
1670
1671 def evaluate(self, x=None, y=None,
/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
1204 ins_batch[i] = ins_batch[i].toarray()
1205
-> 1206 outs = f(ins_batch)
1207 if not isinstance(outs, list):
1208 outs = [outs]
/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.pyc in __call__(self, inputs)
2473 session = get_session()
2474 updated = session.run(fetches=fetches, feed_dict=feed_dict,
-> 2475 **self.session_kwargs)
2476 return updated[:len(self.outputs)]
2477
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
887 try:
888 result = self._run(None, fetches, feed_dict, options_ptr,
--> 889 run_metadata_ptr)
890 if run_metadata:
891 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
1118 if final_fetches or final_targets or (handle and feed_dict_tensor):
1119 results = self._do_run(handle, final_targets, final_fetches,
-> 1120 feed_dict_tensor, options, run_metadata)
1121 else:
1122 results = []
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1315 if handle is None:
1316 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1317 options, run_metadata)
1318 else:
1319 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
1334 except KeyError:
1335 pass
-> 1336 raise type(e)(node_def, op, message)
1337
1338 def _extend_graph(self):
InvalidArgumentError: Incompatible shapes: [96,1,128,128] vs. [32,1,128,128]
[[Node: training/RMSprop/gradients/loss/activation_9_loss/sub_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:#loss/activation_9_loss/sub"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/RMSprop/gradients/loss/activation_9_loss/sub_grad/Shape, training/RMSprop/gradients/loss/activation_9_loss/sub_grad/Shape_1)]]
Caused by op u'training/RMSprop/gradients/loss/activation_9_loss/sub_grad/BroadcastGradientArgs', defined at:
File "/usr/local/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
File "/usr/local/lib/python2.7/dist-packages/IPython/__init__.py", line 119, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/IPython/terminal/ipapp.py", line 355, in start
self.shell.mainloop()
File "/usr/local/lib/python2.7/dist-packages/IPython/terminal/interactiveshell.py", line 495, in mainloop
self.interact()
File "/usr/local/lib/python2.7/dist-packages/IPython/terminal/interactiveshell.py", line 486, in interact
self.run_cell(code, store_history=True)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2714, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2824, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2878, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-463-b869a174fafa>", line 1, in <module>
get_ipython().magic(u'run run_CNN_tau.py')
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2160, in magic
return self.run_line_magic(magic_name, magic_arg_s)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2081, in run_line_magic
result = fn(*args,**kwargs)
File "<decorator-gen-58>", line 2, in run
File "/usr/local/lib/python2.7/dist-packages/IPython/core/magic.py", line 188, in <lambda>
call = lambda f, *a, **k: f(*a, **k)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/magics/execution.py", line 742, in run
run()
File "/usr/local/lib/python2.7/dist-packages/IPython/core/magics/execution.py", line 728, in run
exit_ignore=exit_ignore)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2483, in safe_execfile
self.compile if kw['shell_futures'] else None)
File "/usr/local/lib/python2.7/dist-packages/IPython/utils/py3compat.py", line 289, in execfile
builtin_mod.execfile(filename, *where)
File "/home/nsaura/Documents/Git_RANNS/ML/turbo/wk/tests/python/run_CNN_tau.py", line 176, in <module>
U_net.fit(X_train, y_train)#, validation_data=(X_test, y_test), epochs=1)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1646, in fit
self._make_train_function()
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 970, in _make_train_function
loss=self.total_loss)
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 233, in get_updates
grads = self.get_gradients(loss, params)
File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 78, in get_gradients
grads = K.gradients(loss, params)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2512, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 581, in gradients
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 353, in _MaybeCompile
return grad_fn() # Exit early
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 581, in <lambda>
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_grad.py", line 727, in _SubGrad
rx, ry = gen_array_ops._broadcast_gradient_args(sx, sy)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 532, in _broadcast_gradient_args
"BroadcastGradientArgs", s0=s0, s1=s1, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
...which was originally created as op u'loss/activation_9_loss/sub', defined at:
File "/usr/local/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
[elided 16 identical lines from previous traceback]
File "/usr/local/lib/python2.7/dist-packages/IPython/utils/py3compat.py", line 289, in execfile
builtin_mod.execfile(filename, *where)
File "/home/nsaura/Documents/Git_RANNS/ML/turbo/wk/tests/python/run_CNN_tau.py", line 174, in <module>
U_net.compile(optimizer="RMSProp", loss="mae")#, metrics=["accuracy"])
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 827, in compile
sample_weight, mask)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 426, in weighted
score_array = fn(y_true, y_pred)
File "/usr/local/lib/python2.7/dist-packages/keras/losses.py", line 18, in mean_absolute_error
return K.mean(K.abs(y_pred - y_true), axis=-1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 894, in binary_op_wrapper
return func(x, y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 4636, in _sub
"Sub", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Incompatible shapes: [96,1,128,128] vs. [32,1,128,128]
[[Node: training/RMSprop/gradients/loss/activation_9_loss/sub_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:#loss/activation_9_loss/sub"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/RMSprop/gradients/loss/activation_9_loss/sub_grad/Shape, training/RMSprop/gradients/loss/activation_9_loss/sub_grad/Shape_1)]]
The point is, this U-net works fine if the concatenation layers are removed. Can someone explain how I can fix this issue?
According to the Keras documentation, the default batch size for training is 32 samples (https://keras.io/models/model/#fit). Looking at your architecture, you are essentially taking the input stream, splitting it, and then merging it along the batch dimension twice, once for each Concatenate with axis=0 (32 + 32 = 64 samples at the first merge, 64 + 32 = 96 at the second). This explains the shapes in the error message: "[96,1,128,128] vs. [32,1,128,128]".
Are you sure you want to be doing Concatenate along the batch dimension? For U-net skip connections you most likely want to concatenate feature maps along the channel axis instead, as sketched below. Hope this helps.
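For illustration, here is a minimal, self-contained sketch (not the asker's full model, and only a sketch under the channels-first assumption) of concatenating along the channel axis, which is axis 1 for tensors of shape (None, C, H, W):

# Minimal sketch: with channels_first tensors of shape (None, C, H, W),
# a skip connection concatenates along axis=1 (channels),
# not axis=0 (the batch dimension).
from keras.models import Model
from keras.layers import Input, Conv2D, Concatenate

x = Input(shape=(6, 128, 128))  # channels-first input, as in the question
a = Conv2D(32, (3, 3), padding="same", data_format="channels_first")(x)
b = Conv2D(32, (3, 3), padding="same", data_format="channels_first")(x)
m = Concatenate(axis=1)([a, b])  # shape (None, 64, 128, 128); batch dim untouched
Model(x, m).summary()

Applied to the model above, replacing both Concatenate(axis=0) calls with Concatenate(axis=1) should give concatenate_1 an output of (None, 128, 64, 64) and concatenate_2 an output of (None, 64, 128, 128), keeping the batch size at 32.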
I have recently started to work with Python 2.7.
I have some data which I am passing to Amazon's Product API. To make it a batch call, I want to pass 10 values per call, as that is the maximum number of IDs or keywords per batch call.
The question is: how do I pass only 10 values at a time to the function? I have 76 values in total (it may increase) in a list, so the last call would get 6. I can read values from the list using *args, but how can I process only 10 values at a time using a for loop (or any other loop)?
I want to do something like this
data = rows_db
count = 76
for id in data[start:count]:  # pseudocode
    ids = id  # copy 10 values or less
    foo(ids)
    start = start + 10

def foo(*ids):
    # process and retrieve values
I guess you want to do something like this:
data_copy = list(data)  # replace data_copy with data throughout if you don't mind it being changed
while data_copy:  # equivalent to: while len(data_copy) != 0:
    to = min(10, len(data_copy))  # at most 10 entries per call; fewer on the last pass
    f(data_copy[:to])   # call the function with the next chunk
    del data_copy[:to]  # delete the data we have already processed
This:
def f(x): print(x)
data = list(range(53))  # list from 0 (included) to 52 (included)
# ... followed by the while loop above ...
yields the expected output of:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
[50, 51, 52]
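As a usage note, the same chunking can also be written without copying and deleting, by stepping over the list in slices of 10. A small sketch, with a hypothetical foo standing in for the Amazon API call:

def foo(ids):
    # hypothetical stand-in for the batch call to the product API
    print(ids)

data = list(range(76))  # 76 values, as in the question
for start in range(0, len(data), 10):
    foo(data[start:start + 10])  # the final slice holds the remaining 6 items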
I have a list of the numbers 1, 2, 3 and 4.
I wish to print them out in the following manner:
1
2
3
4
11
12
13
14
21
22
23
24
31
...and so on.
How can I do this?
Thanks
from itertools import product

maximumDigits = 2
digits = '1234'

for l in range(1, maximumDigits + 1):
    for n in product(digits, repeat=l):
        print(''.join(n))
Gives you:
1
2
3
4
11
12
13
14
21
22
23
24
31
32
33
34
41
42
43
44
Non-itertools solution:
>>> digits = (1, 2, 3, 4)
>>> nums = newNums = list(digits)
# calculate 2-digit numbers
>>> newNums = [n * 10 + m for n in newNums for m in digits]
>>> nums.extend(newNums)
>>> nums
[1, 2, 3, 4, 11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34, 41, 42, 43, 44]
# calculate 3-digit numbers
>>> newNums = [n * 10 + m for n in newNums for m in digits]
>>> nums.extend(newNums)
>>> nums
[1, 2, 3, 4, 11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34, 41, 42, 43, 44, 111, 112, 113, 114, 121, 122, 123, 124, 131, 132, 133, 134, 141, 142, 143, 144, 211, 212, 213, 214, 221, 222, 223, 224, 231, 232, 233, 234, 241, 242, 243, 244, 311, 312, 313, 314, 321, 322, 323, 324, 331, 332, 333, 334, 341, 342, 343, 344, 411, 412, 413, 414, 421, 422, 423, 424, 431, 432, 433, 434, 441, 442, 443, 444]
# this repeats for each new digit you want
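If you would rather not repeat that block by hand, the same recurrence can be wrapped in a loop. A small sketch along the lines of the snippet above (maxDigits is a name introduced here, not from the original answer):

digits = (1, 2, 3, 4)
maxDigits = 3  # how many digits the longest numbers should have

nums = list(digits)
newNums = list(digits)
for _ in range(maxDigits - 1):
    # each pass appends one more digit to every number from the previous pass
    newNums = [n * 10 + m for n in newNums for m in digits]
    nums.extend(newNums)

print(nums)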