The worker for a Google Cloud Platform task cannot find the logging library - python-2.7

I have created a simple task based on the Google Cloud Platform "update counter" push task example. All I want to do is log that it has been invoked to the Stackdriver logs.
import webapp2

from google.cloud import logging

logging_client = logging.Client()
log_name = 'service-log'
logger = logging_client.logger(log_name)


class UpdateCounterHandler(webapp2.RequestHandler):
    def post(self):
        amount = int(self.request.get('amount'))
        logger.log_text('Service startup task done.')


app = webapp2.WSGIApplication([
    ('/update_counter', UpdateCounterHandler)
], debug=True)
After deploying this and invoking it, there is an error. In the logs online it says:
from google.cloud import logging
ImportError: No module named cloud
This isn't a local version, but one that I've deployed. It's hard for me to believe that I have to manually install Python libraries into the production runtime. (I can't even imagine that I can.)

As the root readme states:
Many samples require extra libraries to be installed. If there is a requirements.txt, you will need to install the dependencies with pip.
Try adding the library as explained here.

When using logging from the Python standard library in App Engine, the logs also end up in Stackdriver. So you could use import logging instead of from google.cloud import logging.
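For illustration, here is a minimal sketch of the same handler using only the standard library (all names taken from the question):

import logging

import webapp2


class UpdateCounterHandler(webapp2.RequestHandler):
    def post(self):
        amount = int(self.request.get('amount'))
        # On App Engine, standard-library log records are forwarded to Stackdriver.
        logging.info('Service startup task done. amount=%d', amount)


app = webapp2.WSGIApplication([
    ('/update_counter', UpdateCounterHandler)
], debug=True)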
When you are specifically interested in using the google.cloud.logging library, it needs to be installed into a project folder ./lib, as referenced by Tudormi: here
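For reference, the usual vendoring pattern on the Python 2.7 standard runtime is a lib/ folder plus an appengine_config.py. A minimal sketch (the pip command in the comment assumes your requirements.txt lists google-cloud-logging):

# appengine_config.py -- loaded automatically by the App Engine runtime.
# Populate the folder first with:  pip install -t lib -r requirements.txt
from google.appengine.ext import vendor

# Add any libraries installed in the "lib" folder to the import path.
vendor.add('lib')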

Related

What changes are needed to my Django app when deploying to PythonAnywhere? The error points to nowhere

Deploying my Django website, which runs fine locally and uses S3 as storage, to PythonAnywhere gives a strange error I can't google a solution for:
"TypeError: a bytes-like object is required, not 'str'"
What am I doing wrong?
I've tried moving my environment variables (AWS keys, SECRET_KEY, etc.) out of settings.env and setting them directly in settings.py, plus every suggestion I could find, but it's still the same :(
Here's my /var/www/username_pythonanywhere_com_wsgi.py:
# +++++++++++ DJANGO +++++++++++
# To use your own Django app use code like this:
import os
import sys

from dotenv import load_dotenv

project_folder = os.path.expanduser('~/portfolio_pa/WEB')  # adjust as appropriate
load_dotenv(os.path.join(project_folder, 'settings.env'))

# assuming your Django settings file is at '/home/myusername/mysite/mysite/settings.py'
path = '/home/corebots/portfolio_pa'
if path not in sys.path:
    sys.path.insert(0, path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'WEB.settings'

## Uncomment the lines below depending on your Django version
###### then, for Django >= 1.5:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

###### or, for older Django <= 1.4:
#import django.core.handlers.wsgi
#application = django.core.handlers.wsgi.WSGIHandler()
I'd expect the website to run fine just like it does locally.
The boto library doesn't have good Python 3 support. This particular issue is known in the boto bug tracker: https://github.com/boto/boto/issues/3837
The best way of fixing this is to use boto3, which has decent Python 3 support and is generally the most supported AWS SDK for Python.
The reason it works on your local machine but not in production is that the PythonAnywhere setup seems to use a proxy, which triggers the incompatible boto code. See the actual calling code: https://github.com/boto/boto/blob/master/boto/connection.py#L747
Your error traceback confirms this.
Unfortunately, I'm not familiar with django-photologue, but a brief look doesn't suggest that it strongly depends on boto3. Maybe I'm wrong.
I still think the best way is to go with boto3. As a backup strategy, you can fork boto with a fix for this issue and install that instead of the official one from PyPI: https://github.com/boto/boto/pull/3699
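If you do switch, the storage backend has to move with it. A hypothetical settings sketch, assuming the S3 storage is wired up through django-storages (its s3boto3 backend wraps boto3); the environment variable names are placeholders:

# settings.py -- a sketch, assuming django-storages is installed
# (pip install django-storages boto3); variable names are illustrative.
import os

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_STORAGE_BUCKET_NAME']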

No module named cloud when using from google.cloud import bigquery

I have built an App Engine application to load data into a BigQuery table using the Google App Engine Launcher, but when I run it on localhost or on the cloud I get a "No module named cloud" error in the log file from the line from google.cloud import bigquery. I have installed the Google Cloud client library, but it still gives me the same error. Please see the code I am using below.
--- main.py contains:
import argparse
import time
import uuid

from google.cloud import bigquery


def load_data_from_gcs(dataset_name, table_name, source):
    bigquery_client = bigquery.Client()
    dataset = bigquery_client.dataset(dataset_name)
    table = dataset.table(table_name)
    job_name = str(uuid.uuid4())
    job = bigquery_client.load_table_from_storage(
        job_name, table, source)
    job.begin()
    wait_for_job(job)
    print('Loaded {} rows into {}:{}.'.format(
        job.output_rows, dataset_name, table_name))


def wait_for_job(job):
    while True:
        job.reload()
        if job.state == 'DONE':
            if job.error_result:
                raise RuntimeError(job.error_result)
            return
        time.sleep(1)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    # argparse takes argument *names*; values such as 'Test', 'mytable',
    # and 'gs://week/geninfo.csv' are supplied on the command line at run time.
    parser.add_argument('dataset_name')
    parser.add_argument('table_name')
    parser.add_argument('source')
    args = parser.parse_args()
    load_data_from_gcs(
        args.dataset_name,
        args.table_name,
        args.source)
--- app.yaml contains the following code:
application: mycloudproject
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /favicon\.ico
  static_files: favicon.ico
  upload: favicon\.ico

- url: .*
  script: main.app
Please let me know what is missing or what I am doing wrong here.
This can be a bit tricky. The Google Cloud libraries use the new Python namespace package format (if you look at the source you'll notice there's no __init__.py in the directory structure).
This became standard in Python 3.3 with PEP 420.
Fortunately, in Python 2.7 you can fix this easily by avoiding implicit relative imports. Just add this to the top of your file:
from __future__ import absolute_import
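To be concrete, the future import must be the first statement in the module (after any docstring); a minimal sketch:

from __future__ import absolute_import  # must come first in the file

# With absolute imports on, this resolves to the installed google/ namespace
# package on sys.path rather than to any local module named "google".
from google.cloud import bigquery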
Hope that helps.
Find the directory containing google/cloud/..., and add that directory to the PYTHONPATH so that python can find it. See this post for details on how to add to PYTHONPATH. It outlines two common ways to do it:
Here's how to do it with a bash command:
export PYTHONPATH=$PYTHONPATH:/<path_to_modules>
Or you could append it to the path in your script:
# if the google/ directory is in the directory /path/to/directory/
path_to_look_for_module = '/path/to/directory/'

import sys
if path_to_look_for_module not in sys.path:
    sys.path.append(path_to_look_for_module)
If that doesn't work, here is some code I found in one of my projects for importing Google App Engine modules:
import sys


def fixup_paths(path):
    """Adds the GAE SDK path to the system path and appends it to the google
    path if that already exists."""
    # Not all Google packages are inside namespace packages, which means
    # there might be another non-namespace package named `google` already on
    # the path, and simply appending the App Engine SDK to the path will not
    # work since the other package will get discovered and used first.
    # This emulates namespace packages by first checking whether a `google`
    # package exists by importing it, and if so appending to its module
    # search path.
    try:
        import google
        google.__path__.append("{0}/google".format(path))
    except ImportError:
        pass
    sys.path.insert(0, path)


# and then call later in your code:
fixup_paths(path_to_google_sdk)
from google.cloud import bigquery
It looks like you are trying to use the Cloud Datastore client library in the Google App Engine standard environment. As documented by Google, you should not be doing this. Instead, either use the NDB Client Library or do not use the standard environment.
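For completeness, a minimal NDB sketch (the Counter model and increment helper are hypothetical, and this only applies if Datastore rather than BigQuery is the target):

from google.appengine.ext import ndb


class Counter(ndb.Model):
    # A hypothetical entity with a single integer field.
    count = ndb.IntegerProperty(default=0)


def increment(name):
    # get_or_insert fetches the entity by key name, creating it if absent.
    counter = Counter.get_or_insert(name)
    counter.count += 1
    counter.put()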
Are you sure you've updated to the latest version of the library? The version installed by pip may be out of date. Previously, the module was imported as:
from gcloud import bigquery
If that works, you're running an older version. To install the latest, I'd recommend pulling from master in the GitHub project.
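A quick way to check which version pip actually installed (assuming the distribution is named google-cloud-bigquery):

import pkg_resources

# Prints the installed distribution's version, or raises DistributionNotFound.
print(pkg_resources.get_distribution('google-cloud-bigquery').version)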

unresolved import gcs_oauth2_boto_plugin

I am currently trying to create a bucket in Google Cloud Storage using Python and the documentation provided by Google at this link:
https://cloud.google.com/storage/docs/gspythonlibrary
I have followed the instructions and successfully installed the standalone gsutil. However, once I go into Eclipse and import gcs_oauth2_boto_plugin, it is not recognized, even though import boto is.
It was a problem with both my PYTHONPATH and Eclipse. What I ended up doing was:
import sys
sys.path.append("/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")

try:
    import boto
    from boto import connect_gs
except ImportError:
    print 'neither of the modules were imported'
This solved my problem. Updating the python path did not.
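As an alternative to hard-coding the site-packages path, you can ask the interpreter for it; a small sketch:

import sys
from distutils.sysconfig import get_python_lib

# Resolve this interpreter's site-packages directory instead of hard-coding it.
site_packages = get_python_lib()
if site_packages not in sys.path:
    sys.path.append(site_packages)

import gcs_oauth2_boto_plugin  # should now resolve if pip installed it there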

GAE import endpoints "No module named endpoints"

I'm using PyCharm to develop for App Engine. Now I'm trying to use endpoints, and in app.yaml I've put:
libraries:
- name: pycrypto
  version: latest
- name: endpoints
  version: 1.0
and then in main.py
import endpoints
But it gives me the error:
No module named endpoints
I can see the endpoints folder inside the GAE library. Anyone can help?
*EDIT: it is just a matter of the IDE (PyCharm) being unable to locate endpoints. The app runs fine both on the dev server and the cloud server. Here is a picture just to make it a bit clearer:
Thanks
You need to add {GAE_SDK}/lib/endpoints-1.0, not just the SDK itself. The reason you can import google is that it is directly under {GAE_SDK}. The libraries you specify in app.yaml are laid out differently because multiple versions are supported. I believe you also need to add {GAE_SDK}/lib/protorpc-1.0/; it's just not showing up because there's already an import error.
I'm using the new version of PyCharm Community and I had to configure this too. You need to set the Source option on each folder (such as endpoints) under File - Settings - Project:
I've run across the following code somewhere, which fixes it for me in a client script. I can't say how much of it may be unnecessary. You'd need to edit the google_appengine path for your SDK installation:
import os
import sys

sdk_path = os.path.expanduser('~/work/google-cloud-sdk/platform/google_appengine')

# If a `google` package is already importable, graft the SDK's google/
# directory onto its search path; otherwise fall through.
try:
    import google
    google.__path__.append("{0}/google".format(sdk_path))
except ImportError:
    pass

try:
    import protorpc
    protorpc.__path__.append("{0}/lib/protorpc-1.0/protorpc".format(sdk_path))
except ImportError:
    pass

sys.path.append("{0}/lib/endpoints-1.0".format(sdk_path))

Accessing Sentry models in my Django Project

I'm working on a system that has two Django projects: a server and a client. The server is responsible for managing several client instances. This system relies on Sentry/Raven to process error logging.
My problem is that Sentry needs me to create and configure each client (Sentry project) by hand. Since the number of client instances is large and I already have to do this by hand in my server project, I was trying to automate the process, so that when I create a new client on the server, it creates a new Sentry project.
Much like in this question, I tried to access the Sentry ORM directly from my project, but that turned out to be a dead end. So I wrote a Python script to do this.
In said script, I import the DJANGO_SETTINGS_MODULE from Sentry and work my way around with it until I have what I need.
sys.path.append("/sentry/")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", 'sentry_configuration_file')
from sentry.models import *
#Do my thing here
If I run the script from my shell, it works perfectly.
However, when I use subprocess to call it from inside my Django project:
from subprocess import call
call("/sentry/venv/bin/python /sentry/my_script.py", shell=True)
The script generates the following error on the "from sentry.models import *" line:
ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'configurations.settings' (Is it on sys.path?): No module named configurations.settings
You may have noticed that Sentry is installed inside a virtualenv. However, I don't need it activated when I call this script from my shell, as long as I provide the correct path to the virtualenv's python.
I'm lost here. I see no particular reason for the script to fail under subprocess.call when it runs fine from the shell.
Any pointers would be greatly appreciated.
Thanks.
If anyone ever comes across this question: I managed to solve the issue by replacing subprocess.call with subprocess.Popen.
The cool thing about Popen is that you can specify the environment of the child process with the env argument. That matters here because the subprocess otherwise inherits the Django project's environment, including its DJANGO_SETTINGS_MODULE ('configurations.settings'), which os.environ.setdefault in the script never overrides.
So
import os
from subprocess import Popen

my_env = os.environ.copy()  # copy, so the parent's environment stays untouched
my_env["DJANGO_SETTINGS_MODULE"] = "sentry_configuration_file"
result = Popen(command, shell=True, env=my_env)  # command as in the question
Worked like a charm.
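For what it's worth, subprocess.call forwards its keyword arguments to Popen, so the same env argument should work there too if you don't need the Popen object:

import os
from subprocess import call

my_env = os.environ.copy()
my_env["DJANGO_SETTINGS_MODULE"] = "sentry_configuration_file"
call("/sentry/venv/bin/python /sentry/my_script.py", shell=True, env=my_env)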