Python lightgbm feature_importance() error? - python-2.7

1. Environment info
Operating System: Windows
Python version: Python 2.7.13
2. Error message:
ValueError: No JSON object could be decoded
3. Code:
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
params = {
    'task': 'train',
    'boosting': 'gbdt',
    'objective': 'binary',
    'metric': {'l2', 'auc'},
    'num_leaves': 62,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 20
}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=250,
                valid_sets=lgb_eval)
print('Start predicting...')
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)
y_pred = np.round(y_pred)
print gbm.feature_importance()

See this link: https://github.com/Microsoft/LightGBM/issues/615. According to the contributor, this is a small bug: an infinite number cannot be handled by json.
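If upgrading LightGBM is an option, a minimal sketch of reading the importances afterwards could look like this (assuming a release that includes the fix from the linked issue; the tiny synthetic dataset is only there so the snippet runs on its own, and importance_type is a standard argument of Booster.feature_importance()):

# assumes a LightGBM build that already contains the fix from issue #615
import numpy as np
import lightgbm as lgb

# small synthetic binary-classification problem, just to keep the snippet self-contained
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

booster = lgb.train({'objective': 'binary', 'metric': 'auc', 'verbose': -1},
                    lgb.Dataset(X, y), num_boost_round=50)

print(booster.feature_importance(importance_type='split'))  # how often each feature is used in a split
print(booster.feature_importance(importance_type='gain'))   # total split gain attributed to each feature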

Related

Error exchanging list of floats in a topic

I think the issue is something silly.
I'd like to run the code on two computers and I need to use a list. I followed this tutorial.
I used my PC as the talker and the robot's computer as the listener.
When running the code on my PC, the output is exactly what I need:
[INFO] [1574230834.705510]: [3.0, 2.1]
[INFO] [1574230834.805443]: [3.0, 2.1]
But when running the code on the robot's computer, the output is:
Traceback (most recent call last):
  File "/home/redhwan/learn.py", line 28, in <module>
    talker()
  File "/home/redhwan/learn.py", line 23, in talker
    pub.publish(position.data)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 886, in publish
    raise ROSSerializationException(str(e))
rospy.exceptions.ROSSerializationException: <class 'struct.error'>: 'required argument is not a float' when writing 'data: [3.0, 2.1]'
Full code on my PC:
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

x = 3.0
y = 2.1

def talker():
    # if a == None:
    pub = rospy.Publisher('position', Float32, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    # rospy.init_node('talker')
    rate = rospy.Rate(10)  # 10hz
    while not rospy.is_shutdown():
        position = Float32()
        a = [x, y]
        # a = x
        position.data = list(a)
        # position.data = a
        # hello_str = [5.0 , 6.1]
        rospy.loginfo(position.data)
        pub.publish(position.data)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
Full code on the robot's computer:
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

def callback(data):
    # a = list(data)
    a = data.data
    print a

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber("position", Float32, callback)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    listener()
When using a single float, everything is OK.
I understand how to publish and subscribe to the values separately as floats, but I'd like to do it as a list.
Any ideas or suggestions would be appreciated.
When you exchange messages in ROS, it is preferable to adopt standard messages if something relatively simple fits. Of course, when you develop more sophisticated systems (or modules), you can implement your own custom messages.
So in the case of a float array, Float32MultiArray is your friend.
Populating the message on one side will look like this (just an example using a 2-element float32 array) in C++:
...
while (ros::ok())
{
    std_msgs::Float32MultiArray velocities;
    velocities.layout.dim.push_back(std_msgs::MultiArrayDimension());
    velocities.layout.dim[0].label = "velocities";
    velocities.layout.dim[0].size = 2;
    velocities.layout.dim[0].stride = 1;
    velocities.data.clear();
    velocities.data.push_back(count % 255);
    velocities.data.push_back(-(count % 255));
    velocities_demo_pub.publish(velocities);
    ros::spinOnce();
    loop_rate.sleep();
    ++count;
}
...
In Python, an example for an 8-element array would look like:
...
while not rospy.is_shutdown():
    # compose the multiarray message
    pwmVelocities = Float32MultiArray()
    myLayout = MultiArrayLayout()
    myMultiArrayDimension = MultiArrayDimension()
    myMultiArrayDimension.label = "motion_cmd"
    myMultiArrayDimension.size = 1
    myMultiArrayDimension.stride = 8
    myLayout.dim = [myMultiArrayDimension]
    myLayout.data_offset = 0
    pwmVelocities.layout = myLayout
    pwmVelocities.data = [0, 10.0, 0, 10.0, 0, 10.0, 0, 10.0]
    # publish the message and log in terminal
    pub.publish(pwmVelocities)
    rospy.loginfo("I'm publishing: [%f, %f, %f, %f, %f, %f, %f, %f]" % (
        pwmVelocities.data[0], pwmVelocities.data[1], pwmVelocities.data[2],
        pwmVelocities.data[3], pwmVelocities.data[4], pwmVelocities.data[5],
        pwmVelocities.data[6], pwmVelocities.data[7]))
    # repeat
    r.sleep()
...
and on the other side your callback (in C++) will look like:
...
void hardware_interface::velocity_callback(const std_msgs::Float32MultiArray::ConstPtr &msg) {
    //velocities.clear();
    if (velocities.size() == 0) {
        velocities.push_back(msg->data[0]);
        velocities.push_back(msg->data[1]);
    } else {
        velocities[0] = msg->data[0];
        velocities[1] = msg->data[1];
    }
    vel1 = msg->data[0];
    vel2 = msg->data[1];
    //ROS_INFO("Vel_left: [%f] - Vel_right: [%f]", vel1 , vel2);
}
...
Hope that gives you an idea... if you need something more, drop me a line!
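For reference, here is a minimal sketch of what the question's own talker and listener could look like with Float32MultiArray (topic name and rate kept from the question; treat it as an untested illustration, not the answerer's code):

#!/usr/bin/env python
# talker.py -- publisher side, sketched with Float32MultiArray
import rospy
from std_msgs.msg import Float32MultiArray

def talker():
    pub = rospy.Publisher('position', Float32MultiArray, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz
    msg = Float32MultiArray()
    while not rospy.is_shutdown():
        msg.data = [3.0, 2.1]     # a plain list of floats; the layout field may stay empty
        rospy.loginfo(msg.data)
        pub.publish(msg)          # publish the whole message, not just msg.data
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass

#!/usr/bin/env python
# listener.py -- subscriber side, sketched with Float32MultiArray
import rospy
from std_msgs.msg import Float32MultiArray

def callback(msg):
    print list(msg.data)          # e.g. [3.0, 2.1]

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('position', Float32MultiArray, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()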

Ray - RLlib - Error with Custom env - continuous action space - DDPG - offline experience training?

Error while using offline experiences for DDPG. The custom environment's dimensions (action space and state space) seem to be inconsistent with what the RLlib DDPG trainer expects.
Ubuntu, Ray 0.7 (latest Ray), DDPG example, offline dataset.
Used the sampler builder for the offline dataset.
Estimated DQN with this experience data and it ran through. After changing the environment action space to be continuous (Box(,1)), DDPG did not work.
import gym
import numpy as np
from gym.spaces import Box
import ray.rllib.agents.ddpg as ddpg
from ray.tune import run_experiments
from ray.tune.registry import register_env

TRAIN_BATCH_SIZE = 512

class mmt_ctns_offline_logs(gym.Env):
    def __init__(self):
        self.action_space = Box(0, 50, shape=(,1), dtype=np.float32)  # one dimension action space, values range 0 to 50 max
        self.observation_space = Box(-100000, 100000, shape=(,58), dtype=np.float32)  # 58 columns in state space

register_env("mmt_env_ctnaction", lambda config: mmt_ctns_offline_logs())  # register custom environment

# define the configuration. Some of these are defaults, but I have explicitly defined them for clarity (within my team)
config_dict = {"env": "mmt_env_ctnaction", "evaluation_num_episodes": 50, "num_workers": 11, "sample_batch_size": 512,
               "train_batch_size": TRAIN_BATCH_SIZE,
               "input": "<experience_replay_folder>/",
               "output": "<any_folder>", "gamma": 0.99,
               "horizon": None,
               "optimizer_class": "SyncReplayOptimizer",
               "optimizer": {"prioritized_replay": True},
               "actor_hiddens": [128, 64], "actor_hidden_activation": "relu",
               "critic_hiddens": [64, 64], "critic_hidden_activation": "relu", "n_step": 1,
               "target_network_update_freq": 500,
               "input_evaluation": [],
               "ignore_worker_failures": True, 'log_level': "DEBUG",
               "buffer_size": 50000,
               "prioritized_replay": True,
               "prioritized_replay_alpha": 0.6,
               "prioritized_replay_beta": 0.4,
               "prioritized_replay_eps": 1e-6,
               "compress_observations": False,
               "lr": 1e-3,
               "actor_loss_coeff": 0.1,
               "critic_loss_coeff": 1.0,
               "use_huber": False,
               "huber_threshold": 1.0,
               "l2_reg": 1e-6,
               "grad_norm_clipping": True,
               "learning_starts": 1500,
               }

config = ddpg.DEFAULT_CONFIG.copy()  # dqn.DEFAULT_CONFIG.copy()
for k, v in config_dict.items():
    config[k] = v
config_ddpg = config
config_ddpg

run_experiments({
    'NM_testing_DDPG_offpolicy_noIS': {
        'run': 'DDPG',
        'env': 'mmt_env_ctnaction',
        'config': config_ddpg,
        'local_dir': "/oxygen/narasimham/ray/tmp/mmt/mmt_user_27_DDPG/"
    },
})
Expected: results from DDPG iterations.
Actual error:
ray.exceptions.RayTaskError: ray_DDPGTrainer:train() (pid=89635, host=ip-10-114-53-179)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/ray/rllib/utils/tf_run_builder.py", line 49, in get
    self.feed_dict, os.environ.get("TF_TIMELINE_DIR"))
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/ray/rllib/utils/tf_run_builder.py", line 91, in run_timeline
    fetches = sess.run(ops, feed_dict=feed_dict)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 877, in run
    run_metadata_ptr)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1076, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (512,) for Tensor 'default_policy/action:0', which has shape '(?, 1)'
During handling of the above exception, another exception occurred:
Try with action space definition as follows:
self.action_space = Box(0,50,shape=(1,), dtype=np.float32)
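Building on that, here is a sketch of the environment with both spaces given valid tuple shapes; shape=(1,) and shape=(58,) are assumed to be what (,1) and (,58) were meant to say, and reset()/step() are only stubs so the class is complete:

import gym
import numpy as np
from gym.spaces import Box

class mmt_ctns_offline_logs(gym.Env):
    def __init__(self):
        # one-dimensional continuous action, values in [0, 50]
        self.action_space = Box(0.0, 50.0, shape=(1,), dtype=np.float32)
        # 58-column state vector
        self.observation_space = Box(-100000.0, 100000.0, shape=(58,), dtype=np.float32)

    def reset(self):
        # offline training replays logged experience, so a dummy state is enough here
        return np.zeros(58, dtype=np.float32)

    def step(self, action):
        return np.zeros(58, dtype=np.float32), 0.0, True, {}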

Export tensorflow graph with batchnorm to opencv dnn

First, build a network with batch_norm
net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
training = tf.Variable(False, name = 'training')
net = tf.contrib.layers.batch_norm(net, is_training = training)
net = tf.nn.relu(net)
net = tf.reshape(net, [-1, 64 * 7 * 7]) #
net = tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')
#......
#after training, save the graph and weights
sess.run(loss, feed_dict={features : train_imgs, x : real_delta, training : False})
saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')
After that, I freeze the graph -> optimize -> transform:
python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph_final.pb --input_checkpoint=reshape_final.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd
python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/optimize_for_inference.py --input frozen_graph.pb --output opt_graph.pb --frozen_graph True --input_names input --output_names regression_output/BiasAdd
~/Qt/3rdLibs/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=opt_graph.pb --out_graph=fused_graph.pb --inputs=input --outputs=regression_output/BiasAdd --transforms="fold_constants fold_batch_norms fold_old_batch_norms sort_by_execution_order"
Load the model
std::string const model("/home/ramsus/Qt/blogCodes2/deep_homography/cnn/tensorflow/fused_graph.pb");
dnn::Net net = dnn::readNetFromTensorflow(model);
if(net.empty()){
    std::cerr<<"Can't load network by using the mode file:"<<std::endl;
    std::cerr<<model<<std::endl;
    throw std::runtime_error("net is empty");
}
It throws these error messages:
BatchNorm/moments/mean:Mean(conv2d/convolution)(BatchNorm/moments/mean/reduction_indices) keep_dims:[ ] Tidx:[ ] T:0
OpenCV Error: Unspecified error (Unknown layer type Mean in op BatchNorm/moments/mean) in populateNet, file /home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 1077
/home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1077: error: (-2) Unknown layer type Mean in op BatchNorm/moments/mean in function populateNet
How could I solve this issue? Thanks.
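One way to narrow this down is to list which ops actually remain in fused_graph.pb, since the importer names the offending op (BatchNorm/moments/mean). A small diagnostic sketch using the TF 1.x GraphDef API, with the file name taken from the question:

import tensorflow as tf

# load the transformed graph and print every node that still belongs to BatchNorm
graph_def = tf.GraphDef()
with tf.gfile.GFile('fused_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if 'BatchNorm' in node.name:
        print(node.name, node.op)  # any remaining moments/Mean op is what the dnn importer rejects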

Got unexpected keyword argument

I am trying to practice examples from the PyNN website. The code is as follows:
import pyNN.brian as p
rng = p.NumpyRNG(seed = 4242)
refractory_period = p.RandomDistribution('uniform', [2.0, 3.0], rng)
ctx_parameters = {
    'cm': 0.25, 'tau_m': 20.0, 'v_rest': -60, 'v_thresh': -50,
    'tau_refrac': refractory_period, 'v_reset': -60, 'v_spike': -50.0,
    'a': 1.0, 'b': 0.005, 'tau_w': 600, 'delta_T': 2.5,
    'tau_syn_E': 5.0, 'e_rev_E': 0.0, 'tau_syn_I': 10.0, 'e_rev_I': -80
}
tc_parameters = ctx_parameters.copy()
tc_parameters.update({'a': 20.0, 'b': 0.0})
thalamocortical_type = p.EIF_cond_exp_isfa_ista(**tc_parameters)
At this point I get an error saying:
Traceback (most recent call last):
  File "/home/ruthvik/Desktop/Summer 2017/pynncheck.py", line 7, in <module>
    thalamocortical_type = p.EIF_cond_exp_isfa_ista(**tc_parameters)
TypeError: __init__() got an unexpected keyword argument 'tau_refrac'
I actually checked the PyNN GitHub page and I realized that there is in fact a class called EIF_cond_exp_isfa_ista and that it does have the parameter 'tau_refrac'. I am not very comfortable with Python classes and object orientation, so it would be a great help if someone could guide me through this.
Edit:
I defined c = p.EIF_cond_exp_isfa_ista and performed:
c.get_parameter_names()
['tau_refrac', 'a', 'tau_m', 'e_rev_E', 'i_offset', 'cm', 'delta_T', 'e_rev_I', 'v_thresh', 'b', 'tau_syn_E', 'v_reset', 'v_spike', 'tau_syn_I', 'tau_w', 'v_rest']
which gave the result above. Then I tried:
getattr(c,'cm')
Traceback (most recent call last):
  File "<pyshell#55>", line 1, in <module>
    getattr(c,'cm')
AttributeError: type object 'EIF_cond_exp_isfa_ista' has no attribute 'cm'
I see that there is a parameter called 'cm' but getattr(c,'cm') is throwing an error. I think I'm missing something here.
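On the getattr part: c = p.EIF_cond_exp_isfa_ista binds the class object itself, and a parameter such as 'cm' only gets a value once the class is instantiated, which is why getattr on the class raises AttributeError. A generic, purely illustrative (non-PyNN) sketch of that distinction:

class CellType(object):
    parameter_names = ['cm', 'tau_m']      # the *names* are known at the class level

    def __init__(self, **parameters):
        self.parameters = parameters       # the *values* exist only on an instance

c = CellType                               # the class object, like c in the question
print(c.parameter_names)                   # fine: class-level attribute
inst = CellType(cm=0.25, tau_m=20.0)
print(inst.parameters['cm'])               # 0.25
# getattr(c, 'cm') raises AttributeError, just like the traceback above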

Tcl Error: Out of Stack Space With Flask and Matplotlib in Python 2.7

Thanks for your time.
I created a Flask server that takes in variables from a form POST and outputs a pie or bar graph. While debugging, I noticed this error:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "C:\Python27\lib\atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "C:\Python27\lib\site-packages\matplotlib\_pylab_helpers.py", line 92, in destroy_all
    manager.destroy()
  File "C:\Python27\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 618, in destroy
    self.canvas._tkcanvas.after_cancel(self.canvas._idle_callback)
  File "C:\Python27\lib\lib-tk\Tkinter.py", line 616, in after_cancel
    self.tk.call('after', 'cancel', id)
TclError: out of stack space (infinite loop?)
Error in sys.exitfunc:
Traceback (most recent call last):
  File "C:\Python27\lib\atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "C:\Python27\lib\site-packages\matplotlib\_pylab_helpers.py", line 92, in destroy_all
    manager.destroy()
  File "C:\Python27\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 618, in destroy
    self.canvas._tkcanvas.after_cancel(self.canvas._idle_callback)
  File "C:\Python27\lib\lib-tk\Tkinter.py", line 616, in after_cancel
    self.tk.call('after', 'cancel', id)
_tkinter.TclError: out of stack space (infinite loop?)
This seems to cause the server to reload (successfully, for what it's worth), which is a problem. I have no clue what's going on here, other than Tkinter being upset, and I've had no luck with my Google-fu.
Flask server (with debug settings; the mapped vars are due to a project requirement):
# Flask App that functions as a graph end point replacement "DAC-780"
# Standard Library
import os
import uuid
# Third Party
from flask import Flask, request
# Local
from pie import make_pie
from bar import make_bar

app_root = os.path.dirname(os.path.abspath(__file__))
images = os.path.join(app_root, 'static/images')
app = Flask(__name__, static_folder="static")
app._static_folder = os.path.join(app_root, 'static')

@app.route('/charts/<path>', methods=['POST'])
def graph(path):
    g_data_list = []
    file_name = str(uuid.uuid4())
    # if bar graph
    if path == "chart4.asp":
        # grab vars
        g_title = str(request.form['Title'])
        x_title = str(request.form['CatTitle'])
        y_title = str(request.form['ValTitle'])
        ser1 = str(request.form['Ser1'])
        ser2 = str(request.form['Ser2'])
        cat1 = str(request.form['Cat1'])
        cat2 = str(request.form['Cat2'])
        cat3 = str(request.form['Cat3'])
        cat4 = str(request.form['Cat4'])
        cat5 = str(request.form['Cat5'])
        cat6 = str(request.form['Cat6'])
        cat7 = str(request.form['Cat7'])
        cat8 = str(request.form['Cat8'])
        cat9 = str(request.form['Cat9'])
        cat10 = str(request.form['Cat10'])
        cat11 = str(request.form['Cat11'])
        cat12 = str(request.form['Cat12'])
        cat13 = str(request.form['Cat13'])
        s1d1 = int(request.form['S1D1'])
        s1d2 = int(request.form['S1D2'])
        s1d3 = int(request.form['S1D3'])
        s1d4 = int(request.form['S1D4'])
        s1d5 = int(request.form['S1D5'])
        s1d6 = int(request.form['S1D6'])
        s1d7 = int(request.form['S1D7'])
        s1d8 = int(request.form['S1D8'])
        s1d9 = int(request.form['S1D9'])
        s1d10 = int(request.form['S1D10'])
        s1d11 = int(request.form['S1D11'])
        s1d12 = int(request.form['S1D12'])
        s1d13 = int(request.form['S1D13'])
        s2d1 = int(request.form['S2D1'])
        s2d2 = int(request.form['S2D2'])
        s2d3 = int(request.form['S2D3'])
        s2d4 = int(request.form['S2D4'])
        s2d5 = int(request.form['S2D5'])
        s2d6 = int(request.form['S2D6'])
        s2d7 = int(request.form['S2D7'])
        s2d8 = int(request.form['S2D8'])
        s2d9 = int(request.form['S2D9'])
        s2d10 = int(request.form['S2D10'])
        s2d11 = int(request.form['S2D11'])
        s2d12 = int(request.form['S2D12'])
        s2d13 = int(request.form['S2D13'])
        # vars i mapped but weren't needed for my graph lib
        g_type = str(request.form['Type'])
        g_cats = str(request.form['Cats'])
        g_series = str(request.form['Series'])
        cat_title = str(request.form['CatTitle'])
        # add data to g_data_list so we can process it
        g_data_list.append((ser1, [s1d1, s1d2, s1d3, s1d4, s1d5, s1d6, s1d7, s1d8,
                                   s1d9, s1d10, s1d11, s1d12, s1d13]))
        g_data_list.append((ser2, [s2d1, s2d2, s2d3, s2d4, s2d5, s2d6, s2d7, s2d8,
                                   s2d9, s2d10, s2d11, s2d12, s2d13]))
        x_labels = [cat1, cat2, cat3, cat4, cat5, cat6, cat7, cat8, cat9, cat10,
                    cat11, cat12, cat13]
        # make a graph to return in html
        graph = make_bar(g_title, y_title, x_labels, g_data_list, file_name, cat_title, x_title)
    else:
        # all others are probably pie graphs
        g_title = str(request.form['Title'])
        cat1 = str(request.form['Cat1'])
        cat2 = str(request.form['Cat2'])
        cat3 = str(request.form['Cat3'])
        cat4 = str(request.form['Cat4'])
        s1d1 = int(request.form['S1D1'])
        s1d2 = int(request.form['S1D2'])
        s1d3 = int(request.form['S1D3'])
        s1d4 = int(request.form['S1D4'])
        # vars that aren't needed for replications of the final product, but
        # were part of the old code
        g_type = str(request.form['Type'])
        g_cats = str(request.form['Cats'])
        g_series = str(request.form['Series'])
        cat_title = str(request.form['CatTitle'])
        val_title = str(request.form['ValTitle'])
        s1 = str(request.form['Ser1'])
        s2 = str(request.form['Ser2'])
        # add data
        g_data_list.append([cat1, s1d1])
        g_data_list.append([cat2, s1d2])
        g_data_list.append([cat3, s1d3])
        g_data_list.append([cat4, s1d4])
        # make graph to send back via html
        graph = make_pie(g_title, g_data_list, file_name)
    # make a web page with graph and return it
    html = """
    <html>
      <head>
        <title>%s</title>
      </head>
      <body>
        <img src="/static/images/%s.png" alt="An Error Occured"/>
      </body>
    </html>
    """ % (g_title, str(file_name))
    return html

if __name__ == '__main__':
    app.run(port=3456, host="0.0.0.0", debug=True)
bar.py:
# creates a bar chart based on input using matplotlib
import os
import numpy as np
import matplotlib.pyplot as plt
from pylab import rcParams

rcParams['figure.figsize'] = 6.55, 3.8
app_root = os.path.dirname(os.path.abspath(__file__))
images = os.path.join(app_root, 'static/images')

def make_bar(g_title, y_title, x_labels, data_series, file_name, cat_title,
             x_title):
    n_groups = 13
    bar_width = 0.35
    opacity = 0.4
    fig, ax = plt.subplots()
    index = np.arange(n_groups)
    error_config = {'ecolor': '0.3'}
    plt.bar(index, tuple(data_series[0][1]), bar_width,
            alpha=opacity,
            color='b',
            error_kw=error_config,
            label='{}'.format(data_series[0][0]))
    plt.bar(index + bar_width, tuple(data_series[1][1]), bar_width,
            alpha=opacity,
            color='r',
            error_kw=error_config,
            label='{}'.format(data_series[1][0]))
    box = ax.get_position()
    ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
    plt.xlabel(x_title, fontsize=10)
    plt.ylabel(y_title, fontsize=10)
    plt.title(g_title, fontsize=11)
    plt.xticks(index + bar_width, tuple(x_labels), fontsize=8)
    plt.yticks(fontsize=8)
    plt.axis('tight')
    lgd = plt.legend(fontsize=8, bbox_to_anchor=(1.15, 0.5))
    plt.tight_layout()
    plt.draw()
    plt.savefig('{}/{}.png'.format(images, file_name),
                dpi=100, format='png', bbox_extra_artists=(lgd,),
                bbox_inches='tight')
    return
pie.py:
# creates a pie chart w/ matplotlib
import os
import matplotlib.pyplot as plt
from pylab import rcParams

app_root = os.path.dirname(os.path.abspath(__file__))
images = os.path.join(app_root, 'static/images')

def make_pie(title, g_data_list, file_name):
    rcParams['figure.figsize'] = 5.75, 3
    labels = [entry[0] for entry in g_data_list]
    sizes = [entry[1] for entry in g_data_list]
    ax = plt.subplot(111)
    box = ax.get_position()
    ax.set_position([box.x0, box.y0, box.width * 0.7, box.height])
    patches, texts = ax.pie(sizes, startangle=90)
    ax.legend(patches, labels, loc='center left',
              bbox_to_anchor=(.9, 0.5), fontsize=8)
    plt.axis('equal')
    plt.suptitle(title, fontsize=12)
    plt.draw()
    plt.savefig('{}/{}.png'.format(images, file_name), dpi=100, format='png')
    return
I noticed that the function that graphed everything, when run separately, would stay running after I closed the plot window. Adding plt.clf() fixed that problem, and appears to be the solution to mine relating to Flask as well.
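A minimal sketch of where that call could go, using a trimmed-down, hypothetical variant of make_pie() (the only point being that the figure is cleared and closed right after savefig()):

import matplotlib.pyplot as plt

def make_pie_and_release(title, g_data_list, file_name):
    # hypothetical cut-down version of make_pie() from the question
    labels = [entry[0] for entry in g_data_list]
    sizes = [entry[1] for entry in g_data_list]
    ax = plt.subplot(111)
    ax.pie(sizes, labels=labels, startangle=90)
    plt.suptitle(title, fontsize=12)
    plt.savefig('{}.png'.format(file_name), dpi=100, format='png')
    plt.clf()           # clear the figure once it is written to disk...
    plt.close('all')    # ...and close any figure managers matplotlib created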
I had the same problem with seaborn. Adding
import matplotlib
matplotlib.use('Agg')
helped me.
Details: https://matplotlib.org/faq/usage_faq.html#what-is-a-backend
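If you go the backend route, the selection has to happen before pyplot is imported anywhere in the process, so a sketch of the top of pie.py/bar.py under that assumption would be:

import matplotlib
matplotlib.use('Agg')             # choose the non-interactive backend first
import matplotlib.pyplot as plt   # only import pyplot afterwards

# the rest of pie.py / bar.py can stay unchanged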