Problem loading packages while deploying an R Shiny app

I was trying to deploy an R Shiny app that works perfectly when I run it locally, but when I deploy the app, some of the outputs show this error message: "Error: An error has occurred. Check your logs or contact the app author for clarification."
After reading a few threads, I think it has to do with the loading of the packages.
I am using the following packages:
library(shiny)
library(magrittr)
library(shinyjs)
library(DT)
library(ggthemes)
library(shinythemes)
library(r2symbols)
library(caret)
# install.packages("PresenceAbsence")
library(glmnet)
library(tidyverse)
# install.packages("devtools")
I got the following warnings while deploying the app from a fresh new session. I'm not entirely clear on what is going wrong (since the app works in my local R session). Could you please suggest things I should try to get the app to work on the server too?
Loading required package: shiny
Warning: package ‘shiny’ was built under R version 3.6.3
Warning: package ‘shinyjs’ was built under R version 3.6.3
You can use shinyjs to call your own JavaScript functions:
https://deanattali.com/shinyjs/extend
Attaching package: ‘shinyjs’
The following object is masked from ‘package:shiny’:
runExample
The following objects are masked from ‘package:methods’:
removeClass, show
Warning: package ‘DT’ was built under R version 3.6.3
Attaching package: ‘DT’
The following objects are masked from ‘package:shiny’:
dataTableOutput, renderDataTable
Warning: replacing previous import ‘vctrs::data_frame’ by ‘tibble::data_frame’ when loading ‘dplyr’
Warning: package ‘ggthemes’ was built under R version 3.6.3
Warning: package ‘shinythemes’ was built under R version 3.6.3
Warning: package ‘r2symbols’ was built under R version 3.6.3
Attaching package: ‘r2symbols’
The following object is masked from ‘package:ggplot2’:
sym
Warning: package ‘caret’ was built under R version 3.6.3
Loading required package: lattice
Warning: package ‘lattice’ was built under R version 3.6.3
Attaching package: ‘PresenceAbsence’
The following objects are masked from ‘package:caret’:
sensitivity, specificity
Warning: package ‘glmnet’ was built under R version 3.6.3
Loading required package: Matrix
Loaded glmnet 4.0-2
Warning: package ‘tidyverse’ was built under R version 3.6.3
-- Attaching packages -------------------------------------------------------------------------- tidyverse 1.3.0 --
v tibble 3.0.4 v dplyr 1.0.0
v tidyr 1.0.2 v stringr 1.4.0
v readr 1.3.1 v forcats 0.4.0
v purrr 0.3.3
Warning: package ‘tibble’ was built under R version 3.6.3
Warning: package ‘dplyr’ was built under R version 3.6.3
-- Conflicts ----------------------------------------------------------------------------- tidyverse_conflicts() --
x tidyr::expand() masks Matrix::expand()
x tidyr::extract() masks magrittr::extract()
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
x purrr::lift() masks caret::lift()
x tidyr::pack() masks Matrix::pack()
x purrr::set_names() masks magrittr::set_names()
x dplyr::sym() masks r2symbols::sym(), ggplot2::sym()
x tidyr::unpack() masks Matrix::unpack()
Listening on http://127.0.0.1:6757
More details:
I also include below the output from the server section that does not show up when deployed to the Shiny server but works when run from my local PC. pred_prob_func() uses a random forest from the caret package and returns a predicted probability from a random forest model (ranger). I'm not even sure that package loading is the problem, but I suspect it is. However, I don't know why it would work on my PC and not on the Shiny server. Could it be a problem with versions?
Outputs:
# Heatmap predicted probabilities
output$Heatmap_predicted_probabilities <- renderPlot({
  plot(x = seq(0, 1, 0.01), y = rep(0, 101),
       main = "Heatmap of predicted probabilities",
       xlab = "The square cross symbol represents the predicted probability based on input values on the dashboard",
       ylab = "", xlim = c(0, 1), ylim = c(-0.1, 0.1), yaxt = "n", bty = "n",
       pch = 15, cex = 20, col = hsv(0.05, seq(0, 1, length.out = 101), 0.80))
  points(x = pred_prob_func(input_list())$pred_prob, y = 0, lwd = 2, pch = 7, cex = 4)
})

I found the solution and thought I'd post it here for others who may face the same problem in the future. Apparently, it wasn't a problem of versions. When I ran the app loading only the caret package, it worked fine on my local PC, but it didn't work when uploaded to the Shiny server. I realized it isn't enough to load only the caret package: I also need to load the ranger library, because I was fitting a random forest through caret that used ranger. Although it works without loading ranger separately on my local PC, the app only runs smoothly on the server when I add both of these lines:
library(ranger)
library(caret)
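The likely reason is that the deployment tooling installs only the packages it sees being loaded in the app code, so the ranger back-end is not pulled in automatically when only caret is loaded. Below is a minimal sketch of the top of the app file under that assumption; the startup check is my own, hypothetical addition and not part of the original app:
# Optional, hypothetical startup check: stop with a clear message if either
# package is missing on the server, instead of the generic Shiny error.
for (pkg in c("caret", "ranger")) {
  if (!requireNamespace(pkg, quietly = TRUE)) {
    stop(sprintf("Package '%s' is required but is not installed.", pkg))
  }
}

library(caret)   # model-training front-end
library(ranger)  # back-end actually used for the random forest (method = "ranger")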

Related

Unable to install R package requiring compiled C++ code (macOS Big Sur 11.4)

After updating to macOS Big Sur 11.4 and installing the latest versions of R (4.1.0), RStudio (1.4.1717) and Xcode (12.5.1) (for Command Line Tools), I am unable to build and install my R package from source (which relies on compiled C++ code) via devtools::build() and devtools::install().
Every time I do this in RStudio, I receive the error message:
Error: Could not find tools necessary to compile a package
Call `pkgbuild::check_build_tools(debug = TRUE)` to diagnose the problem.
When I call the above, I am prompted to select "Yes" to install the build tools. However, when I do, nothing happens.
I have checked to ensure Xcode CLT is installed, and sure enough, it is:
$ xcode-select -p
/Applications/Xcode.app/Contents/Developer
My package relies on both Rcpp and RcppArmadillo. I have installed these within RStudio. devtools is also installed.
I cannot even install my package directly from GitHub via devtools::install_github().
Any ideas on what could be going on here and how I can resolve the issue?
I followed the steps in this post:
clang-7: error: linker command failed with exit code 1 for macOS Big Sur
including altering the Makevars as per the above post.
If it helps, here is my old Makevars:
## With R 3.1.0 or later, you can uncomment the following line to tell R to
## enable compilation with C++11 (where available)
##
## Also, OpenMP support in Armadillo prefers C++11 support. However, for wider
## availability of the package we do not yet enforce this here. It is however
## recommended for client packages to set it.
##
## And with R 3.4.0, and RcppArmadillo 0.7.960.*, we turn C++11 on as OpenMP
## support within Armadillo prefers / requires it
CXX_STD = CXX11
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
Thanks!

Cannot import tensorflow in python after source build

I am trying to install tensorflow with cuda and cudnn on a linux machine. I do not have sudo access, so I am building from source. I followed instructions here: https://www.tensorflow.org/versions/master/get_started/os_setup.html#source
I got to the part where the build produces lots of output:
This tutorial iteratively calculates the major eigenvalue of a 2x2 matrix, on GPU.
The last few lines look like this.
000009/000005 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
000006/000001 lambda = 2.000000 x = [0.894427 -0.447214] y = [1.788854 -0.894427]
But after that, when I open Python and try to import tensorflow, it says there is no such module.
Thanks in advance.
You need to take a few more steps before you can import TensorFlow in a Python shell: just building //tensorflow/cc:tutorials_example_trainer does not build any of the Python front-end.
The easiest way to do this from a source installation is to follow the instructions for building a PIP package from source, and then installing it (either globally on your machine, or in a virtualenv).

rpy2 on pycharm generates segmentation fault error

I am using PyCharm as my IDE on OS X and Anaconda as my Python (2.7.10) distribution.
Recently I installed rpy2, which works quite well in a notebook, e.g.
In [5]:import rpy2.robjects as robjects
In [7]:robjects.r.pi[0]
Out[7]:3.141592653589793
But in PyCharm I get a segmentation fault error.
import rpy2.robjects as robjects
/Users/xxx/anaconda/envs/analytics3/bin/python.app: line 3: 695 Segmentation fault: 11 /Users/xxx/anaconda/envs/analytics3/python.app/Contents/MacOS/python "$#"
PyCharm support claims that this is a bug in the code.
Any ideas what that might be?
Many thanks.
Reinstalling rpy2 from
conda install -c conda.anaconda.org/rpy2
solved the issue.
If you install rpy2 via conda and also have a system installation of R on the same machine (e.g. with RStudio), the system's R installation will be used. Since that R version doesn't match the one rpy2 needs, segmentation faults occur.
1) Remove any existing system installations of R (see here), then verify that you don't have any R installation left:
$>which R
R not found
2) Define the R_HOME environment variable, either in your .rc file:
export R_HOME=/Users/<your user>/anaconda3/envs/<env name>/lib/R
or dynamically in the Python project:
import os
os.environ['R_HOME'] = '/Users/<your user>/anaconda3/envs/<env name>/lib/R'
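If you are unsure which path to use for R_HOME, one way to find it (assuming the conda environment's R is the one you launch) is to ask R itself:
# Run this inside the R installation that belongs to the conda environment;
# the printed directory is the value to use for R_HOME.
R.home()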

Installation of R-package "BH" not possible

I cannot install the R package BH, which I only need in order to install dplyr afterwards.
The download works, but something goes wrong with the processing afterwards: there is no reaction or progress whatsoever. In contrast, installing (and uninstalling) lubridate worked smoothly without any problems.
My output is:
> install.packages("BH")
Installing package into ‘.../R/win-library/3.2’
(as ‘lib’ is unspecified)
trying URL 'http://cran.univ-paris1.fr/bin/windows/contrib/3.2/BH_1.58.0-1.zip'
Content type 'application/zip' length 13846684 bytes (13.2 MB)
downloaded 13.2 MB
and then nothing happens.
Any ideas what could be causing this behavior? Are there any prerequisites for the installation of BH?
> sessionInfo()
R version 3.2.1 (2015-06-18)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=German_Germany.1252 LC_CTYPE=German_Germany.1252
[3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C
[5] LC_TIME=German_Germany.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
BH, as a sizeable subset of the Boost headers, is big, as in really big:
edd@max:~$ du -csm /usr/local/lib/R/site-library/BH/
111 /usr/local/lib/R/site-library/BH/
111 total
edd@max:~$
That is 111 megabytes.
You may simply have run out of patience if your Windows library location (a network share?) was slow in writing the files.
BH is also widely used by other CRAN packages, and there has not been a problem with it on any of the platforms used by CRAN.
So I suggest you maybe place your R package library onto a local hard drive...
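If the library does live on a slow network share, a minimal sketch of redirecting installations to a local drive (the path below is hypothetical) would be:
# Create a library folder on a local disk and put it first on the search path.
dir.create("C:/R/library", recursive = TRUE, showWarnings = FALSE)
.libPaths(c("C:/R/library", .libPaths()))

# Install BH (and later dplyr) into the local library.
install.packages("BH", lib = "C:/R/library")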
If you have an actual bug report, please consider filing an issue ticket against our BH package.
I had this problem -- there were two parts to my fix.
1/ Download the Windows binary from CRAN and save it to the hard disk. Then select the menu item: Packages >> Install package(s) from local files…
2/ Edit the utils:::unpackPkgZip function to increase the sleep time, so that the virus checker has time to scan the package. To implement this, do the following:
trace(utils:::unpackPkgZip, edit=TRUE)
Look for the line Sys.sleep(0.5), toward the bottom of the body of the function; it's a big package so I went for Sys.sleep(10).
If you are still seeing the error: Warning: unable to move temporary installation, try a longer sleep period.
Note that you won't see the edit if you print utils:::unpackPkgZip; that shows the unedited version, and the original can be restored via untrace(utils:::unpackPkgZip).
To see the edited version, use body(utils:::unpackPkgZip).
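For reference, the whole workaround above can be run from the console; here is a minimal sketch, assuming the downloaded binary was saved to a hypothetical local path:
# Open utils:::unpackPkgZip in an editor and change Sys.sleep(0.5), near the
# bottom of the function body, to something longer, e.g. Sys.sleep(10).
trace(utils:::unpackPkgZip, edit = TRUE)

# Install from the locally saved Windows binary (hypothetical path).
install.packages("C:/Downloads/BH_1.58.0-1.zip", repos = NULL, type = "win.binary")

# Restore the original function once the installation has succeeded.
untrace(utils:::unpackPkgZip)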

pandas 0.13 system error: cannot set thread affinity mask

This is my first Stack Overflow question, so please pardon any ignorance about the forum on my part.
I am using Python 2.7.6 64-bit and pandas 0.13.1-1 from Enthought Canopy 1.3.0.1715 on a Windows 7 machine. I have numpy 1.8.0-1 and numexpr 2.2.2-2.
I get inconsistent error behaviour running the following on a pandas Series of 10,000 numpy.float64 values loaded from HDF:
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
s/2.
This gives inconsistent behaviour: it sometimes works and sometimes throws:
OMP: Error #134: Cannot set thread affinity mask.
OMP: System error #87: The parameter is incorrect.
I have replicated this error on other machines, and the test case is derived from a unit test failure (with the above error) which was replicated on several machines and from a server. This came up when upgrading from pandas 0.12 to pandas 0.13.
The following consistently runs with no error:
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
s.apply(lambda x: x/2.)
and,
import pandas
s = pandas.read_hdf(r'C:\test\test.h5', 'test')
pandas.computation.expressions.set_use_numexpr(False)
s/2.
Thanks for the help.
This is very similar to the problem described in this issue and the linked issue.
It seems that only Canopy is experiencing these issues. I think it has to do with the Canopy numpy MKL build, but that is a guess, as I have not been able to reproduce this. So here are some workarounds:
try to upgrade numexpr to 2.4 (the current version), or downgrade to 2.1
try to use numpy 1.8.1 via Canopy
try to install numpy/numexpr from the binaries posted here; I don't know exactly how Canopy works, so I'm not sure if this is possible
uninstall numexpr
you can also disable numexpr support via pandas.computation.expressions.set_use_numexpr(False). Note that numexpr is REQUIRED in order to read/use HDF5 files via PyTables. However, disabling numexpr this way should only affect 'computations' (and not the HDF access).