No module named `keras` under tmux on AWS instance

I am trying to use an Amazon AWS instance to train my network. To run it with Keras, I first need to run
source activate tensorflow_p36
and it works. Unfortunately, if I do the same from inside tmux, it says it can't find the keras module.
Why does this happen, and how can I overcome it?

You can refer to the solution suggested in TMUX Session Won't Import Python Module. If you start the tmux session first and then activate the environment and import tensorflow, it should work. At least my issue was resolved when I used this sequence; otherwise I got an error saying the tensorflow module was not found.
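For example, a minimal sequence that follows this ordering (the session name is hypothetical):
tmux new -s training               # start the tmux session first
source activate tensorflow_p36     # activate the conda environment inside the session
python -c "import keras"           # the import should now succeed
The point is that a tmux session runs its own shell, so activate the environment inside it rather than relying on what you activated before launching tmux.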

Related

PySpark ModuleNotFoundError on GCP

I'm trying to run a PySpark Streaming program on GCP Dataproc. I already pip installed mmh3 over SSH; running pyspark and then typing import mmh3 caused no problem. But when I start the stream with sc.start() and send data over from another SSH terminal, it says the module is not found. Any idea why this happened or how to fix it? Thanks.
By installing the package via SSH, you're just installing it on the "driver" node. You'll need to install the package on the whole cluster (i.e. all worker nodes) as well. Try following the documentation.
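As a sketch, one documented way to do this on Dataproc is the pip-install initialization action at cluster creation time (the cluster name and region here are hypothetical, and the bucket path assumes the public initialization-actions layout):
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --initialization-actions=gs://goog-dataproc-initialization-actions-us-central1/python/pip-install.sh \
    --metadata=PIP_PACKAGES=mmh3
This runs pip install mmh3 on every node when the cluster starts, so the module is available to the workers as well as the driver.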

How can I import regex on AWS Lambda

I am getting the following error:
Unable to import module '': No module named 'regex._regex'
The AWS Lambda deployment package runs just fine without the import htmldate statement (the module I want to use), which in turn requires regex.
Also the code runs fine locally.
So this seems to be a problem running regex on AWS Lambda.
A new version of htmldate makes some of its dependencies optional, and regex is one of them. That should solve the problem. (FYI: I'm the main developer of the package.)
If it runs locally but not in the Lambda, it may be an issue with the package installation. You may want to install your requirements.txt via a Docker image replicating the Lambda environment; since the code works locally, this ensures you are reproducing the environment your Lambda runs in during installation.
This Docker image can help:
https://hub.docker.com/r/lambci/lambda/
There are some examples specified here: https://github.com/lambci/docker-lambda#build-examples
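For instance, a typical invocation (assuming a Python 3.7 runtime; pick the build image tag that matches yours) installs the requirements inside the Lambda-like container, so any compiled artifacts match what Lambda expects:
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.7 \
    pip install -r requirements.txt -t .
The resulting packages land in the current directory, ready to be zipped into the deployment package.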

install bq_helper on Datalab

I know that the BigQuery module is already installed on Datalab. I just want to use the bq_helper module because I learned it on Kaggle.
I did !pip install -e git+https://github.com/SohierDane/BigQuery_Helper#egg=bq_helper and it worked.
but I can't import bq_helper; the import fails with an error.
Please help. Thanks!
I used Python 2 on Datalab.
I am not familiar with the BigQuery Helper library you shared, but in general, in Datalab, it may happen that you need to restart the kernel in order for the libraries to be properly loaded.
I reproduced the scenario you proposed: installing the library with the command !pip install -e git+https://github.com/SohierDane/BigQuery_Helper#egg=bq_helper and then trying to import it in the notebook using:
from bq_helper import BigQueryHelper
bq_assistant = BigQueryHelper("bigquery-public-data", "github_repos")
bq_assistant.project_name
At first, it did not work and I obtained the same error as you; then I clicked on the Reset Session button and the library was loaded properly.
Some other details that may be relevant if this does not work for you are:
I am also running Python 2 (although the GitHub page of the library suggests it was only tested on Python 3.6+).
The Custom metadata parameters in the Datalab GCE instance are: created-with-datalab-version: 20180503 and created-with-sdk-version: 208.0.2.

Amazon Lambda unable to import [python windows .pyd pip]

I am trying to write to my PostgreSQL database with AWS Lambda using the python2.7 runtime. I care very little about how I do this, so if anyone has a different way that I can understand that works, I'd love to hear it.
The method I'm currently trying is to use psycopg2, as this is the only way I know. In order to do this, I need to upload the psycopg2 module to my environment on AWS Lambda. As per instructions, I've created a directory with my source and psycopg2 using pip install psycopg2 -t ..\my-project, zipped my-project, and uploaded it.
My error message is this from within the AWS Lambda console: Unable to import module 'lambda_function': No module named _psycopg
The code runs on my Windows machine. I think the issue is that when I import psycopg2 on my local Windows machine, the _psycopg module is imported from _psycopg.pyd, and .pyd files are Windows-specific. I may be wrong about this.
I'm really just looking for any way to achieve the desired result described in my first paragraph, but here's a more specific question: how do I tell Windows to pip install and compile psycopg2 without using .pyd files? Is this possible? Do I have something completely wrong?
I know the formatting of this question is a little unorthodox, I think I've given all the necessary information, let me know if there's anything else I can provide.
I solved the problem by opening an Ubuntu instance in VirtualBox, pip installing the package there, pulling the relevant folders out, and placing them in my-project before zipping and uploading to AWS Lambda.
See these instructions.
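A rough sketch of that workflow on the Ubuntu VM (paths and names are hypothetical):
pip install psycopg2 -t ~/my-project    # installs Linux-native binaries (.so, not .pyd)
cd ~/my-project
zip -r ../my-project.zip .              # zip the directory contents, then upload the zip to Lambda
Because the install runs on Linux, the compiled extension modules are the ones Lambda's runtime can actually load.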

Cannot run Google ML engine locally due to Tensorflow issues

I'm trying to run the Google Cloud ML engine locally for debugging purposes by running the command gcloud ml-engine local predict --model-dir=fasttext_cloud/ --json-instances=debug_instance.json. However, I'm getting the error: ERROR: (gcloud.ml-engine.local.predict) Cannot import Tensorflow.
This is strange, as TensorFlow works fine on my machine. Even a simple check like python -c 'import tensorflow' has no issues whatsoever.
Is TensorFlow installed in a virtual environment or a non-standard location that isn't on the Python path when running from gcloud?
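A quick way to check is to print where your working interpreter finds TensorFlow, then compare that against the path gcloud's interpreter searches (this one-liner only inspects the module location):
python -c "import tensorflow; print(tensorflow.__file__)"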
It's a bit kludgy, but I would do the following to check the Python path being used by gcloud. Modify the file
${GCLOUD_INSTALL_LOCATION}/google-cloud-sdk/lib/surface/ml_engine/__init__.py
At the top of the file add
import sys
print("\n".join(sys.path))
Then run
gcloud ml-engine
This should print out the Python path, and you can then check that it includes the location where TensorFlow is installed.
Can you upgrade to the latest gcloud release (171.0.0) and retry?
To upgrade, run
$ gcloud components update