Using Pyomo GLPK in google cloud app engine - flask

I set up a Flask service locally using the Pyomo GLPK solver, and it runs correctly on my local machine.
But when I uploaded it to Google Cloud App Engine, with the exact same virtual environment that worked locally, I got the error:
RuntimeError: Attempting to use an unavailable solver.
I've already downloaded the Windows version of GLPK from the GLPK website and used the glpsol.exe path as an argument; that worked locally, but didn't work on my GCloud App Engine.
I ran conda install -c conda-forge glpk with the virtual environment activated, which did not help.
import pandas as pd
from pyomo.opt import SolverStatus, TerminationCondition
from pyomo.environ import *
import sys
...
solver=SolverFactory('glpk', executable='venv\\Library\\bin\\glpsol.exe')
This is the relevant part of my code. I've tried different glpsol.exe paths, with no success so far.
Does anyone know how to deploy Pyomo with the GLPK solver to a GCloud App Engine environment?

You won't be able to run a Windows executable on App Engine.
App Engine instances run Linux, so there's no Windows OS available through the service.
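Since App Engine runs Linux, any GLPK binary has to be a Linux build that is actually present in the deployed image. As a stdlib-only sketch, you can look up a Linux `glpsol` on the PATH before constructing the solver, rather than hard-coding a Windows path (the commented `SolverFactory` call is the Pyomo API shown in the question):

```python
import shutil

def find_glpsol():
    """Return the path to a glpsol binary on the PATH, or None."""
    # App Engine containers run Linux, so a bundled Windows .exe
    # will never be executable here.
    return shutil.which('glpsol')

glpsol_path = find_glpsol()
if glpsol_path is not None:
    # With Pyomo installed, the solver would then be created as:
    # from pyomo.opt import SolverFactory
    # solver = SolverFactory('glpk', executable=glpsol_path)
    print('glpsol found at', glpsol_path)
else:
    print('glpsol not found on PATH; a Linux build of GLPK '
          'must be installed in the deployment image')
```

Failing early with a clear message is friendlier than letting Pyomo raise its generic "unavailable solver" RuntimeError at solve time.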

I didn't find a solution to this problem, so I decided to use another solver library.

Related

No module named 'nltk.lm' in Google colaboratory

I'm trying to import the NLTK language modeling module (nltk.lm) in a Google Colaboratory notebook without success. I've tried installing everything from NLTK, still without success.
What mistake or omission could I be making?
Thanks in advance.
Google Colab has nltk v3.2.5 installed, but nltk.lm (Language Modeling package) was added in v3.4.
In your Google Colab run:
!pip install -U nltk
In the output you will see that it downloads a new version and uninstalls the old one:
...
Downloading nltk-3.6.5-py3-none-any.whl (1.5 MB)
...
Successfully uninstalled nltk-3.2.5
...
You must restart the runtime in order to use newly installed versions.
Click the Restart runtime button shown at the end of the output.
Now it should work!
You can double check the nltk version using this code:
import nltk
print('The nltk version is {}.'.format(nltk.__version__))
You need v3.4 or later to use nltk.lm.
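To make that check programmatic, you can compare version tuples directly (a small stdlib-only sketch; the 3.4 threshold comes from the answer above):

```python
def version_tuple(version):
    """Convert a dotted version string like '3.6.5' into a tuple of ints."""
    return tuple(int(part) for part in version.split('.')[:3])

def supports_lm(nltk_version):
    """nltk.lm was added in NLTK 3.4, so require at least that version."""
    return version_tuple(nltk_version) >= (3, 4)

print(supports_lm('3.2.5'))  # the version preinstalled on Colab -> False
print(supports_lm('3.6.5'))  # the version pip installs above -> True
```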

Couldn't import cv2 in GCP ml-engine (runtime version 1.8)

When using runtime version 1.8, I'm getting this error when I try to import cv2:
/usr/lib/python2.7/dist-packages/cv2.x86_64-linux-gnu.so: undefined symbol: _ZN2cv9Algorithm7getListERSt6vectorINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESaIS7_EE
Does anyone know if there's a workaround? It looks like glib needs to be installed in the image, but it wasn't.
Cloud ML images already have the python-opencv package installed. If you are facing the issue in your local environment instead of Cloud ML, you most probably have a dependency problem, for example when two different programs modify the same package. Other similar threads that solved the issue are:
Related to PKG_CONFIG_PATH.
Related to differences between pip versions 2.7 and 3.4.
I found this tutorial that may be useful for you: Running a Spark Application with OpenCV on Cloud Dataproc.

Cannot run Google ML engine locally due to Tensorflow issues

I'm trying to run the Google Cloud ML engine locally for debugging purposes by running the command gcloud ml-engine local predict --model-dir=fasttext_cloud/ --json-instances=debug_instance.json. However, I'm getting the error: ERROR: (gcloud.ml-engine.local.predict) Cannot import Tensorflow.
This is strange as Tensorflow works fine on my machine. Even a simple example like python -c 'import tensorflow' has no issues whatsoever.
Is TensorFlow installed in a virtual environment or a non-standard location that isn't on the Python path when running from gcloud?
It's a bit kludgy, but I would do the following to check the Python path being used by gcloud. Modify the file
${GCLOUD_INSTALL_LOCATION}/google-cloud-sdk/lib/surface/ml_engine/__init__.py
At the top of the file add
import sys
print("\n".join(sys.path))
Then run
gcloud ml-engine
This should print out the Python path, and you can then check that it includes the location where TensorFlow is installed.
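Another way to compare the two environments is to ask each interpreter where it resolves a module from, without editing any SDK files. A small stdlib sketch (`json` is used here only as a stand-in for `tensorflow`, which may not be installed everywhere):

```python
import importlib.util

def module_location(name):
    """Return the file path a module would be imported from, or None."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, 'origin', None) if spec else None

# Run this under your normal Python and under the interpreter gcloud
# uses; a missing or differing path points at the import mismatch.
print(module_location('json'))
```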
Can you upgrade to the latest gcloud release (171.0.0) and retry?
To upgrade, run
$ gcloud components update

How to install Azure module in IBM Data Science Experience

I'm trying to import Azure data into DSx. I get an error when I try to import the module. When I use the command "from azure.storage.blob import BlobService" in DSx, it tells me that there's no module with that name. Do I have to do some further setup in DSx to access this module?
Please install the azure package by running the following command in your notebook:
!pip install azure
Then run this to import your library:
from azure.storage.blob import BlobService
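If you want the notebook to detect a missing package before an import fails, a small guard like this works (a stdlib-only sketch; the package name, `azure` here, is whatever your notebook needs):

```python
import importlib

def module_available(name):
    """Return True if the named module can be imported."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# In the notebook, before importing BlobService:
if not module_available('azure'):
    print('azure is not installed; run: !pip install azure')
```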
Please also refer to this article for different ways of installing libraries:
http://datascience.ibm.com/docs/content/analyze-data/importing-libraries.html
Thanks,
Charles.

Import setup module error while deploying to app engine via google cloud sdk

I am writing after a lot of searching and trial and error with no luck.
I am trying to deploy a service in app engine.
You might be aware that deploying on App Engine is usually practiced as a two-step process:
1. Deploy on the local dev app server
2. If step 1 succeeds, deploy to the cloud
My problems are with step 1 when I include third-party Python libraries such as numpy, sklearn, gcloud, etc.
I am trying to deploy a service on the local dev app server. When I import numpy or any other third-party library in my main.py script, it throws an error saying it is unable to find the module.
I am using the Cloud SDK and have two Python distributions: the default Python 2.7 and Anaconda with Python 2.7. When I change the path to look for modules in the Anaconda distribution, it fails to find the module 'setup' required by the Cloud SDK.
Is there a way to install the cloud sdk for anaconda distribution ?
Any help/pointers will be much appreciated!
When using the App Engine Python standard environment, you can install pure Python 3rd-party libs using pip by vendoring them, as explained here.
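As a sketch of the documented vendoring setup for the python27 standard environment (this is a config fragment that only runs inside App Engine): after installing dependencies into a local `lib/` directory with `pip install -t lib/ <package>`, an `appengine_config.py` at the project root registers that directory on the import path:

```python
# appengine_config.py -- executed on instance startup in the standard env
from google.appengine.ext import vendor

# Add the lib/ folder (populated with `pip install -t lib/ ...`) to sys.path
vendor.add('lib')
```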
There are also a number of libraries included in the python27 runtime which can be requested using the libraries directive in your app.yaml, as explained here.
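As an illustration of the `libraries` directive (a sketch; the available names and versions are listed on the page linked above, and numpy is one of the built-in options):

```yaml
runtime: python27
api_version: 1
threadsafe: true

libraries:
- name: numpy
  version: latest
```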
If there's a lib which is not pure Python (i.e., it uses C extensions) that you want to use in your project, and it's not part of this list, then your only option is to use a flexible VM. If you want to use Anaconda, you should consider customizing the runtime for your flexible VM.