I would like to do peak detection on a .wav file signal in Python, using the Octave signal library through oct2py, on a Raspberry Pi 3 running Raspbian, but there is a problem with the octave.findpeaks function. I get this error:
findpeaks: argument 'MeanPeakHeight' did not match any valid parameter of the parser
I have installed all the packages related to Octave, so I don't understand why this happens.
This is part of my program:
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile as wav
from scipy.signal import find_peaks_cwt, butter, lfilter
from pylab import *
import os
from operator import truediv
from easygui import *
from oct2py import octave
"High and Low Frequency for the filter"
low = 100
high = 50
list_file = []
octave.eval("pkg load signal")
def display_wav(wav_file):
    samplerate, beat = wav.read('/home/pi/heartbeat_project/heartbeat_songs/%s' % wav_file)
    beat_resize = np.fromfile(open('/home/pi/heartbeat_project/heartbeat_songs/%s' % wav_file), np.int16)[4*samplerate:float(beat.shape[0])-4*samplerate]
    beat_resize = beat_resize / (2.**15)
    timeArray = arange(0, float(beat_resize.shape[0]), 1)
    timeArray = timeArray / samplerate
    ylow = butter_lowpass_filter(samplerate, 5, low, beat_resize)
    y = butter_highpass_filter(samplerate, 5, high, ylow)
    peaks, indexes = octave.findpeaks(np.array(y), 'DoubleSided', 'MeanPeakHeight', np.std(y))
findpeaks is part of the Octave-Forge signal package (source file).
This function doesn't have a 'MeanPeakHeight' parameter. I guess this is a typo and you want 'MinPeakHeight'.
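For illustration, here is a minimal sketch of the corrected call through oct2py; the random array merely stands in for your filtered heartbeat signal, and nout=2 asks oct2py to return both output arguments:

import numpy as np
from oct2py import octave

octave.eval("pkg load signal")

y = np.random.randn(1000)  # placeholder for the filtered heartbeat signal

# 'MinPeakHeight' is the valid option name; 'DoubleSided' is kept because the
# signal contains negative values; nout=2 returns both peak values and indexes
peaks, indexes = octave.findpeaks(y, 'DoubleSided', 'MinPeakHeight', np.std(y), nout=2)

Note that the returned indexes come from Octave, so they are 1-based.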
Related
" I'm new in neural networks and DL4j, and I want to train neural network with CSV and build linear regression. How can I fix these errors "Cannot resolve method'.iterations and getFeatureMatrix()'"?
"Previously I'm tried to do that, but have another error in 'seed'".
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.Updater;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.lossfunctions.LossFunctions;
import java.io.File;
public class Data {

    public static void main(String[] args) throws Exception {

Parameters:

        int seed = 3000;
        int batchSize = 200;
        double learningRate = 0.001;
        int nEpochs = 150;
        int numInputs = 2;
        int numOutputs = 2;
        int numHiddenNodes = 100;

Load data:

        //load training data
        RecordReader rr = new CSVRecordReader();
        rr.initialize(new FileSplit(new File("train.csv")));
        DataSetIterator trainIter = new RecordReaderDataSetIterator(rr, batchSize, 0, 2);

        //load test data
        RecordReader rrTest = new CSVRecordReader();
        rrTest.initialize(new FileSplit(new File("test.csv")));
        DataSetIterator testIter = new RecordReaderDataSetIterator(rrTest, batchSize, 0, 2);
Network Configuration:
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(seed)
                .iterations(1000)
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
                .learningRate(learningRate)
                .updater(Updater.NESTEROVS).momentum(0.9)
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(numInputs)
                        .nOut(numHiddenNodes)
                        .weightInit(WeightInit.XAVIER)
                        .activation(Activation.fromString("relu"))
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .weightInit(WeightInit.XAVIER)
                        .activation(Activation.fromString("softmax"))
                        .nIn(numHiddenNodes)
                        .nOut(numOutputs)
                        .build())
                .pretrain(false).backprop(true).build();
        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        model.setListeners(new ScoreIterationListener(15));

        for (int n = 0; n < nEpochs; n++) {
            model.fit(trainIter);
            System.out.println("--------------eval model");
            Evaluation eval = new Evaluation(numOutputs);
            while (testIter.hasNext()) {
                DataSet t = testIter.next();
                INDArray features = getFeatureMatrix();
                INDArray lables = t.getLabels();
                INDArray predicted = model.output(features, false);
                eval.eval(lables, predicted);
            }
            System.out.println(eval.stats());
        }
    }
}
Logs
Build
First, you should consider using more classes (for example, one for the definition of the neural network, one for the training process, etc.). That is just a best-practice remark.
I do not know which version of DL4J you are using, but getFeatureMatrix() has been removed. Also, this method should be called on a DataSet object, not "statically" as you seem to do (you should call t.getFeatureMatrix()).
It is much the same for the iterations() method of the network configuration builder; it has been removed in recent DL4J releases. You can get more information about it in this thread. You now have to find an alternative way to set the number of iterations; you can take a look at this thread. Hope this answers your question!
I'm new and I have a question. I use the RTC_DS1302 library to obtain the PC time and store it in the RTC DS1302 using a Raspberry Pi. My question is how to show the time and date in a tkinter window and update them every time they change; I have not been able to do this. Below is the code with which I get the time and date. You can find the library at this link:
https://github.com/ksaye/IoTDemonstrations/blob/master/RTC_DS1302/RTC_DS1302.py
This is the code
import RTC_DS1302
import os
import time
ThisRTC = RTC_DS1302.RTC_DS1302()
Data = ThisRTC.ReadRAM()
print("Message: " + Data)
DateTime = { "Year":0, "Month":0, "Day":0, "DayOfWeek":0, "Hour":0, "Minute":0, "Second":0 }
Data = ThisRTC.ReadDateTime(DateTime)
print("Date/Time: " + Data)
print("Year: " + format(DateTime["Year"] + 2000, "04d"))
print("Month: " + format(DateTime["Month"], "02d"))
print("Day: " + format(DateTime["Day"], "02d"))
print("DayOfWeek: " + ThisRTC.DOW[DateTime["DayOfWeek"]])
print("Hour: " + format(DateTime["Hour"], "02d"))
print("Minute: " + format(DateTime["Minute"], "02d"))
print("Second: " + format(DateTime["Second"], "02d"))
ThisRTC.CloseGPIO()
tkinter has the method after(time_in_ms, function_name), which lets you run a function after a delay. In that function you can update the text in your Labels and call after(time_in_ms, function_name) again, so the same function runs again after some time.
Example that uses after() to display the current time:
import tkinter as tk # Python 3.x
from datetime import datetime
def update_time():
    # update displayed time
    current_time = datetime.now()
    current_time_str = current_time.strftime('%Y.%m.%d %H:%M:%S')
    label['text'] = current_time_str
    # run update_time again after 1000 ms (1 s)
    root.after(1000, update_time)
# --- main ---
root = tk.Tk()
label = tk.Label(root)
label.pack()
update_time()
root.mainloop()
More of my examples with after()
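If you want the label to show the DS1302 time instead of the PC clock, the same after() pattern can call the library from your question inside update_time(). This is only a sketch and assumes RTC_DS1302.ReadDateTime() fills the dictionary exactly as in your snippet:

import tkinter as tk
import RTC_DS1302  # the library linked in the question

ThisRTC = RTC_DS1302.RTC_DS1302()

def update_time():
    # read the current date/time from the DS1302 into a dict
    DateTime = {"Year": 0, "Month": 0, "Day": 0, "DayOfWeek": 0,
                "Hour": 0, "Minute": 0, "Second": 0}
    ThisRTC.ReadDateTime(DateTime)
    # format it and put it into the label
    label['text'] = "{:04d}.{:02d}.{:02d} {:02d}:{:02d}:{:02d}".format(
        DateTime["Year"] + 2000, DateTime["Month"], DateTime["Day"],
        DateTime["Hour"], DateTime["Minute"], DateTime["Second"])
    # schedule the next refresh in 1000 ms
    root.after(1000, update_time)

root = tk.Tk()
label = tk.Label(root)
label.pack()
update_time()
root.mainloop()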
I am trying to follow various papers and tutorials to learn how to solve optimization problems for Modelica models.
In http://www.syscop.de/files/2015ss/events/opcon-thermal-systems/optimization_tool_chain_in_jmodelica.org_toivo_henningsson.pdf I found a very simple tutorial. But when I execute it I get some rather opaque error messages.
I am using Python 2.7 with Jupyter.
Here is my notebook:
from pyjmi import transfer_optimization_problem
import matplotlib.pyplot as plt
import os.path
file_path = os.path.join("D:\Studies", "Integrator.mop")
op = transfer_optimization_problem('optI', file_path)
res = op.optimize()
t = res['time']
x = res['x']
u = res['u']
plt.plot(t,x,t,u)
My modelica file:
package Integrator
  model Integrator
    Real x(start = 2, fixed = true);
    input Real u;
  equation
    der(x) = -u;
  end Integrator;

  optimization optI(objective = finalTime, objectiveIntegrand = x^2 + u^2, startTime = 0, finalTime(free = true, min = 0.5, max = 2, initialGuess = 1))
    Real x(start = 2, fixed = true);
    input Real u;
  equation
    der(x) = -u;
  constraint
    u <= 2;
    x(finalTime) = 0;
  end optI;
end Integrator;
When I execute the code I get a RuntimeError telling me that a Java error occurred and that details were printed. From the traceback I do not know what the note
This file is compatible with both classic and new-style classes
means. I know that my setup works, because I was able to run the CSTR tutorial provided by Modelon. But now that I try to use my own models, I get this error.
Runtime error description
Using the same syntax as Modelica uses for imports, e.g.
import Modelica.SIunits.Temperature;
where the package structure is part of the model identification, should resolve the issue:
op = transfer_optimization_problem('Integrator.optI', file_path)
I use tf.data.Dataset to import data into my model. I have created a simple reproducible example to show the idea. I save the trained model (please refer to the code below), and once I restore it to run on the test data I get the error that the iterator has not been initialized. Please see the error below for more details:
FailedPreconditionError (see above for traceback): GetNext() failed
because the iterator has not been initialized. Ensure that you have
run the initializer operation for this iterator before getting the
next element.
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,10],
[?,1]], output_types=[DT_FLOAT, DT_FLOAT],
_device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]
[[Node: IteratorGetNext/_39 = _Recv[client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device_incarnation=1, tensor_name="edge_7_IteratorGetNext",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
How can I address this issue? Here's the reproducible code:
import tensorflow as tf
import os
import numpy as np
import math
features=np.random.randn(100,10)
features_test=np.random.randn(10,10)
y=np.random.randn(100,1)
y_test=np.random.randn(10,1)
feature_size=features.shape[1]
state_size=5
learning_rate=0.001
graph = tf.Graph()
with graph.as_default():
    batch_size_tensor = tf.placeholder(tf.int64, name="Batch_tensor")
    X, Y = tf.placeholder(tf.float32, [None, feature_size], "X"), tf.placeholder(tf.float32, [None, 1], name="Y")
    dataset = tf.data.Dataset.from_tensor_slices((X, Y)).batch(batch_size_tensor).repeat()
    iter = dataset.make_initializable_iterator()
    x_inputs, y_outputs = iter.get_next()
    Wx = tf.Variable(tf.truncated_normal([feature_size, state_size], stddev=2.0 / math.sqrt(state_size)), name="Visual_weights_layer1")
    bx = tf.Variable(tf.zeros([state_size]), name="Visual_bias_layer1")
    x_hidden_state = tf.matmul(x_inputs, Wx) + bx
    x_hidden_state = tf.contrib.layers.batch_norm(x_hidden_state, epsilon=1e-5)
    vx = tf.nn.relu(x_hidden_state)
    W_final = tf.Variable(tf.truncated_normal([state_size, 1], stddev=2.0 / math.sqrt(state_size)), name="FinalLayer_weights")
    by = tf.Variable(tf.zeros([1]), name="FinalLayer_bias")
    predictions = tf.add(tf.matmul(vx, W_final), by, name="preds")
    loss = tf.losses.mean_squared_error(y_outputs, predictions)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
num_steps=100
batch_size=1
saver_path_model='tmp/testmodel'
export_path_model='tmp/testmodel.meta'
with tf.Session(graph=graph) as sess:
    sess.run(init)
    sess.run(iter.initializer, feed_dict={X: features, Y: y,
                                          batch_size_tensor: batch_size})
    print('initialized.')
    for step in range(num_steps):
        _, loss_val = sess.run([optimizer, loss])
        print(loss_val)
    saver.save(sess, saver_path_model)
    saver.export_meta_graph(filename=export_path_model)
sess = tf.Session()
new_saver = tf.train.import_meta_graph(export_path_model)
new_saver.restore(sess, saver_path_model)
graph = tf.get_default_graph()
feed = {"X:0": features_test,"Y:0": y_test}
predictions_test = sess.run(["preds:0"], feed_dict=feed)
I saved my model as follows
saver = tf.train.Saver()
with tf.Session() as session:
session.run(tf.global_variables_initializer())
...
# after all training
save_path = saver.save(session, "logs/trained_model.ckpt")
print("Model saved: {}".format(save_path))
Then to load it
saver = tf.train.Saver()
# Initialize a session so that we can run TensorFlow operations
with tf.Session() as session:
# here is important, you need to load weights not initialize
saver.restore(session, "logs/trained_model.ckpt")
# then evaluate
The official docs have more examples:
https://www.tensorflow.org/api_docs/python/tf/train/Saver
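Regarding the iterator error itself: with an initializable tf.data iterator, the initializer op also has to be re-run in the restored session before "preds:0" can be evaluated. One way to do that (a sketch building on the reproducible code from the question, not the only option) is to put the initializer into a collection before saving and fetch it back after restoring, since collections travel with the meta graph:

# while building the graph, before saving:
tf.add_to_collection('iter_init', iter.initializer)

# after restoring the meta graph in a new session:
sess = tf.Session()
new_saver = tf.train.import_meta_graph(export_path_model)
new_saver.restore(sess, saver_path_model)

# fetch the initializer op back from the restored collection and run it
# on the test data before asking for predictions
iter_init = tf.get_collection('iter_init')[0]
sess.run(iter_init, feed_dict={"X:0": features_test,
                               "Y:0": y_test,
                               "Batch_tensor:0": len(features_test)})
predictions_test = sess.run("preds:0")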
import csv
import sys
import serial
import time

# open the serial port the Arduino is connected to and give the board time to reset
ser = serial.Serial('COM3', baudrate=9600, timeout=1)
time.sleep(3)

dataFile = open('NdataFile.csv', 'w')

while True:
    # readline() returns an empty byte string once the 1 s timeout expires with no data
    raw = ser.readline()
    if not raw:
        break
    # decode the raw bytes to text instead of hex-encoding them, so "\n" can be detected
    arduinoData = raw.decode('utf-8', errors='ignore')
    if "\n" in arduinoData:
        dataFile.write("\n")
    else:
        dataFile.write(arduinoData + "\t")

dataFile.close()
ser.close()