Deploy a pre-compiled RStan model and avoid recompilation - shiny

I've been at this for hours now.
I compile a stan model with the following code:
model <- stan(file = "model.stan", data = data, save_dso = FALSE, chains = 4, cores = 4, iter = 2000, warmup = 1000)
When I ls the current directory, I see model.rds, which wasn't there before. All is good thus far.
Subsequently, I refit this model with new parameters via:
fit <- stan(file = "model.stan", data = new.data, chains = 1, cores = 1, iter = 4000, warmup = 2000)
This code is running in a Shiny app. Locally, all works as expected.
When I deploy this app to shinyapps.io, the model tries to recompile (resulting in a g++ error, presumably from insufficient memory). Regardless, I don't want this model to recompile in the first place: I'd like it to use the model.rds object I'd already built. Yes, I do include this object in the files uploaded to the shinyapps.io server.
I feel like I've tried everything, including loading model.rds explicitly with readRDS() and passing it into the stan(fit = model, ...) call. What am I missing?
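Roughly, the reuse pattern I've been attempting looks like this (a sketch only; model.rds is the file rstan wrote next to model.stan on the first compile):
library(rstan)
# Load the object rstan wrote alongside model.stan when it first compiled
precompiled <- readRDS("model.rds")
# Reuse it so stan() skips compilation; new.data is this app's data list
fit <- stan(fit = precompiled, data = new.data, chains = 1, cores = 1, iter = 4000, warmup = 2000)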
For completeness, below is the error I received on the shinyapps.io end. Again, I don't want this model to recompile in the first place.
Compilation ERROR, function(s)/method(s) not created! In file included from /usr/local/lib/R/site-library/BH/include/boost/config.hpp:39:0,
from /usr/local/lib/R/site-library/BH/include/boost/math/tools/config.hpp:13,
from /usr/local/lib/R/site-library/StanHeaders/include/stan/math/rev/core/var.hpp:7,
from /usr/local/lib/R/site-library/StanHeaders/include/stan/math/rev/core/gevv_vvv_vari.hpp:5,
from /usr/local/lib/R/site-library/StanHeaders/include/stan/math/rev/core.hpp:12,
from /usr/local/lib/R/site-library/StanHeaders/include/stan/math/rev/mat.hpp:4,
from /usr/local/lib/R/site-library/StanHeaders/include/stan/math.hpp:4,
from /usr/local/lib/R/site-library/StanHeaders/include/src/stan/model/model_header.hpp:4,
from file137729fa09.cpp:8:
/usr/local/lib/R/site-library/BH/include/boost/config/compiler/gcc.hpp:186:0: warning: "BOOST_NO_CXX11_RVALUE_REFERENCES" redefined [enabled by default]
# define BOOST_NO_CXX11_RVALUE_REFERENCES
^
<command-line>:0:0: note: this is the location of the previous definition
g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.
make: *** [file137729fa09.o] Error 4

Inference error with TensorFlow C++ on iOS: "Invalid argument: Session was not created with a graph before Run()!"

I am trying to run my model on iOS using TensorFlow's C++ API. The model is a SavedModel saved as a .pb file. However, calls to Session::Run() result in the error:
"Invalid argument: Session was not created with a graph before Run()!"
In Python, I can successfully run inference on the model with the following code:
with tf.Session() as sess:
    tf.saved_model.loader.load(sess, ['serve'], '/path/to/model/export')
    result = sess.run(['OutputTensorA:0', 'OutputTensorB:0'], feed_dict={
        'InputTensorA:0': np.array([5000.00] * 1000).reshape(1, 1000),
        'InputTensorB:0': np.array([300.00] * 1000).reshape(1, 1000)
    })
    print(result[0])
    print(result[1])
In C++ on iOS, I try to mimic this working snippet as follows:
tensorflow::Input::Initializer input_a(5000.00, tensorflow::TensorShape({1, 1000}));
tensorflow::Input::Initializer input_b(300.00, tensorflow::TensorShape({1, 1000}));
tensorflow::Session* session_pointer = nullptr;
tensorflow::SessionOptions options;
tensorflow::Status session_status = tensorflow::NewSession(options, &session_pointer);
std::cout << session_status.ToString() << std::endl; // prints OK
std::unique_ptr<tensorflow::Session> session(session_pointer);
tensorflow::GraphDef model_graph;
NSString* model_path = FilePathForResourceName(@"saved_model", @"pb");
PortableReadFileToProto([model_path UTF8String], &model_graph);
tensorflow::Status session_init = session->Create(model_graph);
std::cout << session_init.ToString() << std::endl; // prints OK
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status session_run = session->Run({{"InputTensorA:0", input_a.tensor}, {"InputTensorB:0", input_b.tensor}}, {"OutputTensorA:0", "OutputTensorB:0"}, {}, &outputs);
std::cout << session_run.ToString() << std::endl; // Invalid argument: Session was not created with a graph before Run()!
The methods FilePathForResourceName and PortableReadFileToProto are taken from the TensorFlow iOS sample found here.
What is the problem? I noticed that this happens regardless of how simple the model is (see my issue report on GitHub), which means the problem is not with the specifics of the model.
The primary issue here is that you are exporting your graph to a SavedModel in Python but then reading it in as a GraphDef in C++. While both have a .pb extension and are similar, they are not equivalent.
What is happening is that you are reading in the SavedModel with PortableReadFileToProto() and it is failing silently, leaving model_graph untouched. So after PortableReadFileToProto() executes, model_graph is still an empty, but valid, GraphDef, which is why the error says Session was not created with a graph before Run(). session->Create() succeeds because you successfully created a session with an empty graph.
The way to check if PortableReadFileToProto() fails is to check its return value. It returns a bool, which will be 0 if reading in the graph failed. If you wish to obtain a descriptive error here, use ReadBinaryProto(). Another way you can tell if reading the graph failed is by checking the value of model_graph.node_size(). If this is 0, then you have an empty graph and reading it in has failed.
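For instance, a sketch of that check using ReadBinaryProto() (model_path here stands for the same path string built from FilePathForResourceName()):
tensorflow::GraphDef model_graph;
// ReadBinaryProto() returns a Status with a readable message if loading fails
tensorflow::Status load_graph = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), model_path, &model_graph);
if (!load_graph.ok() || model_graph.node_size() == 0) {
    std::cout << "Graph failed to load: " << load_graph.ToString() << std::endl;
    // Bail out here instead of calling session->Create() on an empty graph
}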
While you can use TensorFlow's C API to perform inference on a SavedModel by using TF_LoadSessionFromSavedModel() and TF_SessionRun(), the recommended method is to export your graph to a frozen model using freeze_graph.py or write it to a GraphDef using tf.train.write_graph(). I will demonstrate successful inference with a model exported using tf.train.write_graph():
In Python:
# Build graph, call it g
g = tf.Graph()
with g.as_default():
    input_tensor_a = tf.placeholder(dtype=tf.int32, name="InputTensorA")
    input_tensor_b = tf.placeholder(dtype=tf.int32, name="InputTensorB")
    output_tensor_a = tf.stack([input_tensor_a], name="OutputTensorA")
    output_tensor_b = tf.stack([input_tensor_b], name="OutputTensorB")

# Save graph g
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.write_graph(
        graph_or_graph_def=sess.graph_def,
        logdir='/path/to/export',
        name='saved_model.pb',
        as_text=False
    )
In C++ (Xcode):
using namespace tensorflow;
using namespace std;
NSMutableArray* predictions = [NSMutableArray array];
Input::Initializer input_tensor_a(1, TensorShape({1}));
Input::Initializer input_tensor_b(2, TensorShape({1}));
SessionOptions options;
Session* session_pointer = nullptr;
Status session_status = NewSession(options, &session_pointer);
unique_ptr<Session> session(session_pointer);
GraphDef model_graph;
string model_path = string([FilePathForResourceName(@"saved_model", @"pb") UTF8String]);
Status load_graph = ReadBinaryProto(Env::Default(), model_path, &model_graph);
Status session_init = session->Create(model_graph);
cout << "Session creation Status: " << session_init.ToString() << endl;
cout << "Number of nodes in model_graph: " << model_graph.node_size() << endl;
cout << "Load graph Status: " << load_graph.ToString() << endl;
vector<pair<string, Tensor>> feed_dict = {
{"InputTensorA:0", input_tensor_a.tensor},
{"InputTensorB:0", input_tensor_b.tensor}
};
vector<Tensor> outputs;
Status session_run = session->Run(feed_dict, {"OutputTensorA:0", "OutputTensorB:0"}, {}, &outputs);
[predictions addObject:@(outputs[0].scalar<int>()())]; // box the scalar value as an NSNumber
[predictions addObject:@(outputs[1].scalar<int>()())];
Status session_close = session->Close();
This general method will work, but you will likely run into issues with required operations missing from the TensorFlow library you built, in which case inference will still fail. To combat this, first make sure that you have built the latest TensorFlow 1.3 by cloning the repo on your machine and running tensorflow/contrib/makefile/build_all_ios.sh from the root tensorflow-1.3.0 directory. It is unlikely that inference will work for a custom, non-canned model if you use the TensorFlow-experimental Pod, as the examples do. Once you have a static library built using build_all_ios.sh, you need to link it up in your .xcconfig by following the instructions here.
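For reference, the build step described above looks roughly like this (checking out the v1.3.0 tag is my assumption; adjust to whatever release you are targeting):
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.3.0
# Builds the static libraries for every iOS architecture; this takes a while
tensorflow/contrib/makefile/build_all_ios.sh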
Once you successfully link the static library built using the makefile with Xcode, you will likely still get errors that prevent inference. While the actual errors you get depend on your implementation, they fall into two forms:
Error #1:
OpKernel ('op: "[operation]" device_type: "CPU"') for unknown op: [operation]
Error #2:
No OpKernel was registered to support Op '[operation]' with these attrs. Registered devices: [CPU], Registered kernels: [...]
Error #1 means that the .cc file from tensorflow/core/ops or tensorflow/core/kernels for the corresponding operation (or closely associated operation) is not in the tf_op_files.txt file in tensorflow/contrib/makefile. You will have to find the .cc that contains REGISTER_OP("YourOperation") and add it to tf_op_files.txt. You must rebuild by running tensorflow/contrib/makefile/build_all_ios.sh again.
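A sketch of how one might locate and register the file ("YourOperation" and your_operation_op.cc are placeholders for whatever op the error names):
# Find the .cc that registers the missing op
grep -rl 'REGISTER_OP("YourOperation")' tensorflow/core/ops tensorflow/core/kernels
# Append the matching file(s) to the op list, then rebuild
echo "tensorflow/core/kernels/your_operation_op.cc" >> tensorflow/contrib/makefile/tf_op_files.txt
tensorflow/contrib/makefile/build_all_ios.sh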
Error #2 means that the .cc file for the corresponding operation is in your tf_op_files.txt file, but you have supplied the operation with a data type that it (a) doesn't support or (b) is stripped off to reduce the size of the build.
One "gotcha" is that if you are using tf.float64 in the implementation of your model, this is exported as TF_DOUBLE in your .pb file and this is not supported by most operations. Use tf.float32 in place of tf.float64 and then re-save your model using tf.train.write_graph().
If you are still receiving error #2 after checking you are providing the correct datatype to the operation, you will need to either remove __ANDROID_TYPES_SLIM__ in the makefile located at tensorflow/contrib/makefile or replace it with __ANDROID_TYPES_FULL__ and then rebuild.
After getting past errors #1 and #2, you will likely have successful inference.
One addition to the very comprehensive explanation above:
@jshapy8 is right in saying "You will have to find the .cc that contains REGISTER_OP("YourOperation") and add it to tf_op_files.txt", and there is a process that can simplify that a bit:
## build the print_selective_registration_header tool. Run from the tensorflow root
bazel build tensorflow/python/tools:print_selective_registration_header
bazel-bin/tensorflow/python/tools/print_selective_registration_header \
--graphs=<path to your frozen model file here>/model_frozen.pb > ops_to_register.h
This creates a .h file that lists only the ops needed for your specific model.
Now, when compiling your static libraries, follow the Build By Hand instructions here.
The instructions say to do the following:
make -f tensorflow/contrib/makefile/Makefile \
TARGET=IOS \
IOS_ARCH=ARM64
But you can pass a lot of options to the makefile specific to your needs, and I've found the following to be your best bet:
make -f tensorflow/contrib/makefile/Makefile \
TARGET=IOS IOS_ARCH=ARM64,x86_64 OPTFLAGS="-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
In particular, IOS_ARCH=ARM64,x86_64 tells it to compile for just two of the five architectures, which speeds up compile time (the full list is i386, x86_64, armv7, armv7s, and arm64, and compiling all of them obviously takes longer). Setting ANDROID_TYPES to ANDROID_TYPES_FULL tells it not to compile with ANDROID_TYPES_SLIM (which would give you the float/int casting issues referred to above). Finally, the selective-registration flags tell it to pull in all the necessary op kernel files and include them in the make process.
Update: Not sure why this wasn't working for me yesterday, but this is probably a cleaner and safer method:
build_all_ios.sh OPTFLAGS="-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
If you want to speed things up, edit compile_ios_tensorflow.sh in the makefile directory (tensorflow/contrib/makefile). Look for the following line:
BUILD_TARGET="i386 x86_64 armv7 armv7s arm64"
and change it to:
BUILD_TARGET="x86_64 arm64"

CI 3.0.4 large where_in query causes Message: preg_match(): Compilation failed: regular expression is too large at offset

I am running a query where $sale_ids could potentially contain hundreds to thousands of sale_ids. I am looking for a way to fix the regex error without modifying the core of CI 3.
This didn't happen in version 2 and is NOT considered a bug in CI 3 (I brought up the issue before).
Is there a way I can get this to work? I could change the logic of the application, but this would require days of work.
I am looking for a way to extend/override a class so I can allow this query to work. If there is no way to do this by overriding, I will have to hack the core (which I don't know how to do).
$this->db->select('sales_payments.*, sales.sale_time');
$this->db->from('sales_payments');
$this->db->join('sales', 'sales.sale_id=sales_payments.sale_id');
$this->db->where_in('sales_payments.sale_id', $sale_ids);
$this->db->order_by('payment_date');
Error is:
Severity: Warning
Message: preg_match(): Compilation failed: regular expression is too large at offset 53249
Filename: database/DB_query_builder.php
Line Number: 2354
Backtrace:
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/Sale.php
Line: 123
Function: get
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/Sale.php
Line: 48
Function: _get_all_sale_payments
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/reports/Summary_payments.php
Line: 60
Function: get_payment_data
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/controllers/Reports.php
Line: 1887
Function: getData
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/index.php
Line: 323
Function: require_once
There wasn't a good way to modify the core, so I came up with a small change to the code that builds large where_in's: start a group and create the where_in's in smaller chunks.
$this->db->group_start();
$sale_ids_chunk = array_chunk($sale_ids, 25);
foreach ($sale_ids_chunk as $sale_ids)
{
    $this->db->or_where_in('sales_payments.sale_id', $sale_ids);
}
$this->db->group_end();

Assertion Failed Error while compiling Bitcoin-QT application in QT framework?

I am facing an error while compiling the bitcoin-qt application, and I don't understand what the problem in main.cpp is.
The error:
/main.cpp:2985: bool InitBlockIndex(): Assertion `block.hashMerkleRoot
== uint256("0x7c0b21983dc5a17daeef4b6b936375b0a59f3414af7a1bf248d98209447a494b")'
failed.
The program has unexpectedly finished.
What is the problem? Please give some advice on how to resolve it.
Have you tried this solution?
https://bitcoin.stackexchange.com/questions/21303/creating-genesis-block
The first time you run the compiled code (daemon or qt), it will say
"assertion failed". Just exit the program, go to config dir (under
AppData/Roaming), open the debug.log, get the hash after
"block.GetHash() = ", copy and paste it to the beginnig of main.cpp,
hashGenesisBlock. Also get the merkle root in the same log file, paste
it to the ... position in the following code, in LoadBlockIndex()
assert(block.hashMerkleRoot == uint256("0x...")); recompile the code,
and genesis block created!
BTW, don't forget to change "txNew.vout[0].nValue = " to the coin per
block you defined, it doesn't matter to leave as 50, just be
consistent with your coin per block (do this before adjusting the hash
and m-root, otherwise they will be changed again).
check https://bitcointalk.org/index.php?topic=225690.0 for complete
info
It's for an altcoin, but it seems you have some problem with the genesis block.
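For orientation, the two spots the quoted steps refer to look roughly like this in main.cpp (the all-zero hex strings are placeholders for the values copied from your debug.log):
// Near the top of main.cpp: paste the hash printed after "block.GetHash() = "
uint256 hashGenesisBlock("0x0000000000000000000000000000000000000000000000000000000000000000");
// In LoadBlockIndex(): paste the merkle root from the same debug.log
assert(block.hashMerkleRoot == uint256("0x0000000000000000000000000000000000000000000000000000000000000000"));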

Why does "small" give an error about "char"?

I am trying to compile an open-source project with both VS2010 and VS2012, in x86 and x86_64, on a Windows platform running Qt 5.4.
A file named unit.h contains this part:
[...]
// DO NOT change noscale's value. Lots of assumptions are made based on this
// value, both in the code and (more importantly) in the database.
enum unitScale
{
noScale = -1,
extrasmall = 0,
small = 1, // Line that causes errors.
medium = 2,
large = 3,
extralarge = 4,
huge = 5,
without = 1000
};
[...]
Generates
error C2062: type 'char' unexpected
error C3805: 'type': unexpected token, expected either '}' or a ','
I have tried every trick I know to solve it. I removed every use of the "small" enumerator in the code and I still got the error. But after removing all the uses and renaming "small" to "smallo", everything compiles fine. This seems to indicate a name collision, but a file search gives me no other references in the whole project, and it's not any keyword I know of.
Got any ideas?
EDIT: Thanks to some very helpful comments, here is an even stranger version that works. Could somebody explain?
#ifdef small // Same with just straight "#if"
#pragma message("yes")
#endif
#ifndef small
#pragma message("no") // Always prints no.
#endif
#undef small
enum unitScale
{
noScale = -1,
extrasmall = 0,
small = 1,
medium = 2,
large = 3,
extralarge = 4,
huge = 5,
without = 1000
};
EDIT 2: The pragma directive was showing yes, but only in files that had previously included the windows.h header, and it was lost in the compiler output in a sea of no's.
Thanks everyone! What a quest.
small is defined in rpcndr.h. It is used as a data type for MIDL.
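A minimal reproduction sketch (the #define mirrors what rpcndr.h does; rpcndr.h is pulled in via windows.h, which also explains why the error message mentions char):
// demo.cpp: reproduces the collision without including windows.h
#define small char   // effectively what rpcndr.h does for the MIDL base types
enum unitScale
{
    noScale = -1,
    extrasmall = 0,
    small = 1,       // expands to: char = 1,  -> error C2062: type 'char' unexpected
    without = 1000
};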

C++ no 'object' file generated

This is some code to get an environment variable from inside Qt; however, it seems Qt's QProcessEnvironment::systemEnvironment() only reflects a change to an environment variable after a reboot. So I am thinking about using getenv.
However, I got "error C2220: warning treated as error - no 'object' file generated" from this:
QProcessEnvironment env = QProcessEnvironment::systemEnvironment();
const QString ENGINE_ROOT = env.value("ENGINE_ROOT", "") != "" ?
env.value("ENGINE_ROOT","") : QString(getenv("ENGINE_ROOT"));
Don't tell me to do something like disable /WX or lower W4 to W3; I don't want to hear that. I want to know exactly what causes "no 'object' file generated".
"error C2220: warning treated as error - no 'object' file generated"
The error already answers your question:
A warning was generated.
Because you have told the compiler to treat warnings as errors, an error occurred.
Because an error occurred, the compiler did not generate an object file.
If you want to know what the original warning means, then you need to ask us about that warning.
I just had this problem. The real source of the confusion is that Microsoft Visual Studio lists the
error C2220: warning treated as error - no 'object' file generated
line separately from the warnings, sometimes even before them, so it is not immediately apparent that the error is related to the listed warnings.
Fix all warnings listed to fix this problem.
I'll address the underlying question instead of the compilation problem.
Environment variables for any process are copied from those of its parent process when the new process is started. From that point on, the only thing that can modify them is the process itself.
In practical terms, this means that going to the Windows dialog box to change environment variables does not change those values for any existing processes. Those changes are applied to the explorer.exe process, and are then inherited by any new processes launched from Explorer.
There is a possible way for a Windows application to get notified of changes made to environment variables made by Explorer. See How to modify the PATH variable definitely through the command line in Windows for details.
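A minimal sketch of the point in plain C++ (ENGINE_ROOT is the variable from the question): both reads below see the snapshot the process inherited at startup, and changing the variable in the Windows dialog afterwards affects neither.
#include <cstdlib>
#include <iostream>

int main()
{
    // Reads the environment block this process inherited when it was launched
    const char* engine_root = std::getenv("ENGINE_ROOT");
    std::cout << (engine_root ? engine_root : "(not set)") << std::endl;

    // Only the process itself can change its own copy (_putenv is the MSVC CRT call)
    _putenv("ENGINE_ROOT=C:\\engine");
    std::cout << std::getenv("ENGINE_ROOT") << std::endl;
    return 0;
}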
In my case, eliminating all unused objects dealt with this error.