Loading JSON data via DataPortal into Parameter - pyomo

I want to load data from a JSON file into my Pyomo model.
I start with
m = AbstractModel()
m.T = RangeSet(1, 48, 1)
m.TS = Param(m.T)
dp = DataPortal()
Later I want to load my JSON file to populate the parameter TS with
dp.load(filename='data.json', param=m.TS)
The data is a time series indexed by the integers 1, 2, ..., 48, so the file looks like this:
{
  "1": 0.0001,
  "2": 0.1,
  ...,
  "48": 0
}
Then I create the instance with
instance = m.create_instance(dp)
and get the following error:
RuntimeError: Failed to set value for param=TS, index=1, value=35.676187.
source error message="Index '1' is not valid for indexed component 'TS'"

I eventually found a solution to this problem. The dictionary layout above fails because the JSON keys are strings ("1", "2", ...), which do not match the integer index set m.T; that is exactly what the error message "Index '1' is not valid" is complaining about. The JSON file should instead be written in this fashion (valid JSON, with integer indices and no trailing comma):
{
  "TS": [
    {"index": 1, "value": 1},
    {"index": 2, "value": 0},
    {"index": 3, "value": 1}
  ]
}
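Putting the pieces together, a minimal sketch of the whole workflow (hedged: the exact load call is my reading of Pyomo's documented JSON handling, where the top-level "TS" key names the parameter, and is untested here):

from pyomo.environ import AbstractModel, RangeSet, Param, DataPortal

m = AbstractModel()
m.T = RangeSet(1, 48, 1)
m.TS = Param(m.T)

# With the {"TS": [{"index": ..., "value": ...}, ...]} layout, the indices
# are read as integers and therefore match the RangeSet m.T.
dp = DataPortal()
dp.load(filename='data.json')  # the top-level "TS" key names the parameter

instance = m.create_instance(dp)
print(instance.TS[1])  # first value of the time series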

Related

Dynamically referring to JSON node in Power Query M

I have a function that extracts a node from a JSON document as follows:
...
    Json = GetJson(Url),
    Value = Json[#"values"]
values corresponds to an actual node within the JSON document.
I would like to generalize this piece of code and provide the name of the node as a variable, like:
let myFunc = (parentNodeName as text) =>
...
    Json = GetJson(Url),
    Value = Json[parentNodeName]
However, I am getting this error:
An error occurred in the ‘myFunc’ query. Expression.Error: The field 'parentNodeName' of the record wasn't found.
How can I refer to the JSON node dynamically?
Try Record.Field instead: in M, Json[parentNodeName] looks up a field literally named "parentNodeName" (which is why the error says that field wasn't found), whereas Record.Field takes the field name as a runtime value:
(Json, parentNodeName) =>
let
    ...
    Value = Record.Field(Json, parentNodeName)
in
    Value
Sample code:
let
    Json = Json.Document(Web.Contents("http://soundcloud.com/oembed?url=http%3A//soundcloud.com/forss/flickermood&format=json")),
    Value = myFunc(Json, "title")
in
    Value
and myFunc:
(Json, parentNodeName) =>
let
    Value = Record.Field(Json, parentNodeName)
in
    Value
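As a side note, if the node might be missing from the record, Record.FieldOrDefault(Json, parentNodeName) returns null instead of raising an error (an optional third argument supplies a different default).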

Terraform upgrade from 0.11 to 0.12 getting error Invalid value for "lists" parameter in join function

I am upgrading our code from Terraform 0.11 to 0.12.29.
My older code, which works on TF 0.11, in my.tf:
data "templ_file" "dep" {
  template = "$${input}"
  vars {
    input = "${join(",", var.abc)}"
  }
}
where abc is defined as:
variable "abc" {
  default = []
  type    = list
}
I updated my.tf to the following for TF 0.12.29:
...
  vars = {
    input = join(",", var.abc)
  }
But I am getting this error:
Error: Invalid function argument

  on ../modules/x/y/my.tf line 6, in data "templ_file" "dep":
   6: input = join(",", var.abc)
    |----------------
    | var.abc is list of list of dynamic with 1 element

Invalid value for "lists" parameter: incorrect list element type: string
required.
I also saw this post: https://github.com/hashicorp/terraform/issues/20705, which suggests using concat or flatten, but I could not make it work.
I am new to Terraform, so this might be a simple question, but I am unable to get it working.
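For what it's worth, the error output says var.abc is a "list of list of dynamic", i.e. a nested list, while join's "lists" parameter wants a flat list of strings. Following the flatten suggestion from the linked issue, something like input = join(",", flatten(var.abc)) should collapse the nesting first; this is a guess based on the error message, not a tested fix for this exact module.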

How to save a string as JSON in Scala Spark

I have raw strings in a log file. After many filters and other operations, I have reached the following problem: I need to convert a string into JSON format so that I can save it as a single object.
Suppose I have the following data:
val CDataTime = "20191012"
val LocationId = "12345"
val SetInstruc = "Comm=Qwe123,Elem=12345,Elem123=Test"
I am trying to create a data frame with the columns datetime|location|jsonofinstruction, where the JSON column holds the JSON form of the third value. I tried splitting the string first by comma, then by the equals sign, and looping through the pairs to build a map of keys and values, but the JSON is not created. Please help.
You can use scala.util.parsing.json.JSONObject to convert a map to JSON and then to a string.
import scala.util.parsing.json.JSONObject
import spark.implicits._  // needed for toDF and the tuple encoder used by map

val df = spark.createDataset(Seq("Comm=Qwe123,Elem=12345,Elem123=Test")).toDF("col3")

val dfWithJson = df.map { row =>
  // split "k=v" pairs into a Map[String, String]
  val insMap = row.getAs[String]("col3").split(",").map { kv =>
    val kvArray = kv.split("=")
    (kvArray(0), kvArray(1))
  }.toMap
  // serialize the map as a JSON string
  val insJson = JSONObject(insMap).toString()
  (row.getAs[String]("col3"), insJson)
}.toDF("col3", "col4")

dfWithJson.show()
Result -
+--------------------+--------------------+
| col3| col4|
+--------------------+--------------------+
|Comm=Qwe123,Elem=...|{"Comm" : "Qwe123...|
+--------------------+--------------------+
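One caveat: scala.util.parsing.json has long been deprecated (and is dropped from newer Scala standard libraries), so on current Spark versions you may prefer the built-in to_json and struct functions from org.apache.spark.sql.functions, or a dedicated JSON library.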

OrderedDict() returned from dataset.variables

I want to parse a NetCDF file using netCDF4 and Python.
My code is:
>>> from netCDF4 import Dataset
>>> dataset = Dataset('data.nc')
>>> print dataset.variables
OrderedDict()
Why is an empty OrderedDict() returned?
Actually, the NetCDF format is new to me; here is part of the file header:
group: PRODUCT {
  dimensions:
    scanline = 289 ;
    ground_pixel = 215 ;
    corner = 4 ;
    time = 1 ;
    layer = 50 ;
  variables:
    int scanline(scanline) ;
      scanline:units = "1" ;
So I want to access the variables, and I have tried everything I could think of, but every attempt failed. One of my attempts was:
>>> print dataset.variables.keys()
[]
But it returned [].
So, any idea how to access these variables?
Thanks in advance,
Hala
I found the answer in http://unidata.github.io/netcdf4-python/#netCDF4.Dataset.renameGroup: the variables live inside the PRODUCT group rather than at the root of the file, which is why the root-level dataset.variables is empty.
The answer is:
print dataset["PRODUCT"].variables['ground_pixel'][0]
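A minimal sketch of the group-based access (group and variable names taken from the header dump above):

from netCDF4 import Dataset

dataset = Dataset('data.nc')
print(dataset.groups.keys())                 # lists the groups, e.g. PRODUCT
product = dataset['PRODUCT']                 # same as dataset.groups['PRODUCT']
print(product.variables.keys())              # now non-empty: scanline, ground_pixel, ...
print(product.variables['ground_pixel'][0])  # same value as the one-liner above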
Have a nice day

How to restore variables using CheckpointReader in Tensorflow

I'm trying to restore some variables from a checkpoint file if the same variable name exists in the current model.
I found that there is a way to do this, as shown in the TensorFlow GitHub repository.
So what I want to do is check the variable names in the checkpoint file using has_tensor(v.name), as below:
...
reader = tf.train.NewCheckpointReader(ckpt_path)
for v in tf.trainable_variables():
    print v.name
    if reader.has_tensor(v.name):
        print 'has tensor'
...
But I found that v.name returns the variable name plus a colon and an output number. For example, if I have variables named W_o and b_o, then v.name returns W_o:0 and b_o:0.
However, reader.has_tensor() requires the name without the colon and number, i.e. W_o and b_o.
My question is: how do I strip the colon and number from the end of the variable name in order to read the variables?
Is there a better way to restore such variables?
You could split the name on the colon to get the tensor name:
...
reader = tf.train.NewCheckpointReader(ckpt_path)
for v in tf.trainable_variables():
    tensor_name = v.name.split(':')[0]
    print tensor_name
    if reader.has_tensor(tensor_name):
        print 'has tensor'
...
Next, let me use an example to show how I would restore every possible variable from a .ckpt file. First, let's save v2 and v3 in tmp.ckpt:
import tensorflow as tf

v1 = tf.Variable(tf.ones([1]), name='v1')
v2 = tf.Variable(2 * tf.ones([1]), name='v2')
v3 = tf.Variable(3 * tf.ones([1]), name='v3')
saver = tf.train.Saver({'v2': v2, 'v3': v3})

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    saver.save(sess, 'tmp.ckpt')
That's how I would restore every variable (belonging to a new graph) showing up in tmp.ckpt:
with tf.Graph().as_default():
    assert len(tf.trainable_variables()) == 0
    v1 = tf.Variable(tf.zeros([1]), name='v1')
    v2 = tf.Variable(tf.zeros([1]), name='v2')

    reader = tf.train.NewCheckpointReader('tmp.ckpt')
    restore_dict = dict()
    for v in tf.trainable_variables():
        tensor_name = v.name.split(':')[0]
        if reader.has_tensor(tensor_name):
            print('has tensor ', tensor_name)
            restore_dict[tensor_name] = v

    saver = tf.train.Saver(restore_dict)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        saver.restore(sess, 'tmp.ckpt')
        print(sess.run([v1, v2]))  # prints [array([ 0.], dtype=float32), array([ 2.], dtype=float32)]
Also, you may want to ensure that shapes and dtypes match.
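For instance, a sketch of such a shape check, combining the name-splitting loop above with reader.get_variable_to_shape_map() (the shape comparison itself is an assumed addition, same TF1-era API, untested):

# Only restore variables whose saved shape matches the current graph.
saved_shapes = reader.get_variable_to_shape_map()
restore_dict = dict()
for v in tf.trainable_variables():
    tensor_name = v.name.split(':')[0]
    if tensor_name in saved_shapes and v.get_shape().as_list() == saved_shapes[tensor_name]:
        restore_dict[tensor_name] = v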
tf.train.NewCheckpointReader is a nifty function that creates a CheckpointReader object. CheckpointReader has several very useful methods. The method most relevant to your question is get_variable_to_shape_map().
get_variable_to_shape_map() provides a dictionary with variable names and shapes:
saved_shapes = reader.get_variable_to_shape_map()
print 'fire9/squeeze1x1/kernels:', saved_shapes['fire9/squeeze1x1/kernels']
Please take a look at this quick tutorial below:
Loading Variables from Existing Checkpoints
Simple answer:
reader = tf.train.NewCheckpointReader(checkpoint_file)
variable1 = reader.get_tensor('layer_name1/layer_type_name')
variable2 = reader.get_tensor('layer_name2/layer_type_name')
Now, after modifying these variables, you can assign them back:
layer_name1_var.set_weights([variable1, variable2])