I've trained Inception on a small dataset of cars.
Now I have a .meta file, a .ckpt file, and a .pbtxt file.
I want to know how to make predictions with them.
I tried to use freeze_graph.py, but it asks for an output_node_names parameter and I have absolutely no idea what it should be.
If you know how I could use my ckpt/meta/pbtxt files to make predictions, or how to freeze my graph with freeze_graph so I can use classify.py, I would be very thankful!
Thanks in advance!
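For what it's worth, one way to pick a value for output_node_names is to restore the graph from the .meta file and list its node names; the output node is usually a softmax/logits op near the end of the list. A minimal sketch, assuming TF 1.x (the file names and the example node name are placeholders):

import tensorflow as tf

# Restore the graph definition from the .meta file and print the node names;
# pick the final prediction op (often a Softmax) as the output node.
tf.train.import_meta_graph('model.ckpt.meta')  # placeholder file name
for node in tf.get_default_graph().as_graph_def().node:
    print(node.name)

# With the node name in hand, freezing looks roughly like this (using the
# .pbtxt and checkpoint you already have; names are placeholders):
#   python freeze_graph.py \
#       --input_graph=graph.pbtxt \
#       --input_binary=false \
#       --input_checkpoint=model.ckpt \
#       --output_node_names=<the softmax node you found> \
#       --output_graph=frozen_model.pb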
I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project I've been working on.
Right now I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to get food detection capabilities.
Unfortunately, the dataset comes with only the raw image files organized in sub-folders, plus an .h5 file (I'm not sure if I can feed that file type directly into SageMaker).
From what I've gathered, neither of these is a suitable way to feed the dataset into SageMaker, so I was wondering if anyone could point me in the right direction on how to prepare it properly, i.e. convert it to a .rec file or something else.
Apologies if the scope of this question is very broad; I'm still a beginner, I'm simply stuck and don't know how to proceed, so any help you can provide would be fantastic. Thanks!
If you want to use the built-in algo for image classification, you can use either the Image format or the RecordIO format, re: https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have images organized in folders.
RecordIO requires that you build files with the 'im2rec' tool, re: https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
Once your data set is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
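For illustration, here is a minimal sketch of turning a folder-per-class layout like Food-101's into an MXNet-style .lst manifest (tab-separated "index label path" lines); the directory and file names are placeholders:

import os

data_dir = 'food-101/images'  # placeholder: one sub-folder per class
classes = sorted(os.listdir(data_dir))
label_of = {name: i for i, name in enumerate(classes)}

# Write tab-separated lines: integer index, numeric class label, relative image path.
with open('train.lst', 'w') as f:
    index = 0
    for name in classes:
        for fname in sorted(os.listdir(os.path.join(data_dir, name))):
            f.write('%d\t%d\t%s\n' % (index, label_of[name], os.path.join(name, fname)))
            index += 1

# If the RecordIO format is preferred, the same list can then be fed to
# MXNet's im2rec tool (see the link above) to produce a .rec file.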
I have been struggling to efficiently add RMSE output for my premade Estimator model, like the metrics you get when using Keras' training function. I was eyeing add_metrics, but I am not even sure whether it can be used with premade estimators, and if so, how. In other words, how do I need to write the metric_fn?
The approach Google shows, calling predict and converting the result into an np.array, takes ages for me.
I'd be happy to receive any idea on how to make this work.
Thanks in advance!
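For reference, the add_metrics pattern being asked about looks roughly like this in TF 1.x; this is a sketch, and the DNNRegressor setup and the 'predictions' key are assumptions that depend on which premade estimator/head is used:

import tensorflow as tf

def rmse_metric_fn(labels, predictions):
    # metric_fn returns a dict of name -> (value_op, update_op) pairs,
    # just like eval_metric_ops in a custom model_fn.
    return {
        'rmse': tf.metrics.root_mean_squared_error(
            labels=labels,
            # Squeeze the [batch, 1] regression output so shapes line up with labels.
            predictions=tf.squeeze(predictions['predictions'], axis=-1))
    }

feature_columns = [tf.feature_column.numeric_column('x')]  # placeholder feature spec
estimator = tf.estimator.DNNRegressor(hidden_units=[32], feature_columns=feature_columns)

# Wrap the premade estimator; the extra metric is then reported by evaluate()
# without a separate predict() pass.
estimator = tf.estimator.add_metrics(estimator, rmse_metric_fn)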
I have a set of complex C++ classes.
After creating objects of the class, all the data can be saved on disk.
I want to load two such saved instances and tell if they are identical.
Any ideas on how to do this in a way that is maintainable?
I've tried generating sorted text reports that print all the data and comparing those. The problem is that fields can get added to the classes over time, and it's not possible to tell if the report is "complete".
Any way introspection or reflection can be used to accomplish this?
This is a complex problem which those wonderful guys at boost.org have already solved for you:
http://www.boost.org/doc/libs/1_58_0/libs/serialization/doc/index.html
Good day.
Apologies for my English; it's not my native language, so please forgive any errors.
I have a text file with the data to process, from which I want to generate an .arff file, the file type that Weka uses.
I don't want to generate a single file; I want to get two files, one for training the model (training) and another for testing it (test).
This can be done directly in Weka by applying a string-to-word tokenizer filter (StringToWordVector), but the problem is that when I use the second file to test, it fails, because it is not fair to test the model against a file containing words (attributes) that should not be there.
I would appreciate any help.
Thank you and best regards.
I am implementing facial expression recognition and am using an SVM to classify a given expression.
When I train, I use these lines:
svm.train(myFeatureVector, myLabels, Mat(), Mat(), myParameters);
svm.save("myClassifier.yml");
and later, when I predict, I use
response = svm.predict(incomingFeatureVector);
But when I want to train more than once (exiting the program and starting it again), it seems to overwrite my previous SVM file. Is there any way I can read the previous SVM file, add more data to it, and then re-save it? I looked through the OpenCV documentation and found nothing. However, I read on this page that there is a method called CvSVM::read; I don't know what it does or how to use it.
Hope someone can help me :(
What you are trying to do is incremental learning, but unfortunately the Support Vector Machine is a batch algorithm, so if you want to add more data you have to retrain on the whole set again.
There are online learning alternatives, like Pegasos SVM, but I am not aware of any that are implemented in OpenCV.
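For illustration, here is a minimal sketch of that retrain-on-everything approach, written against OpenCV's Python cv2.ml API (OpenCV 3+) rather than the older C++ CvSVM interface used in the question; the file names and the idea of storing past feature vectors as .npy files are assumptions:

import cv2
import numpy as np

# The .yml written by svm.save() stores only the trained model, not the training
# data, so the raw feature vectors/labels have to be kept separately (here as .npy).
old_features = np.load('old_features.npy').astype(np.float32)
old_labels = np.load('old_labels.npy').astype(np.int32)
new_features = np.load('new_features.npy').astype(np.float32)  # current session's data
new_labels = np.load('new_labels.npy').astype(np.int32)

# Stack old and new data and retrain from scratch on the combined set.
samples = np.vstack([old_features, new_features])
labels = np.concatenate([old_labels, new_labels])

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
svm.save('myClassifier.yml')  # overwrite with the model trained on everything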