unresolved import gcs_oauth2_boto_plugin - python-2.7

I am currently trying to create a bucket in Google Cloud Storage using Python, following the documentation provided by Google at this link:
https://cloud.google.com/storage/docs/gspythonlibrary
I have followed the instructions and successfully installed the standalone gsutil. However, once I go into Eclipse and import gcs_oauth2_boto_plugin, the import is not recognized, even though import boto is.

It was a problem with both my PYTHONPATH and Eclipse. What I ended up doing was:
import sys
sys.path.append("/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")

try:
    import boto
    from boto import connect_gs
except ImportError:
    print 'neither of the modules were imported'
This solved my problem. Updating the PYTHONPATH environment variable did not.
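For reference, once the plugin imports cleanly, creating a bucket follows the pattern from the gspythonlibrary docs linked above. A minimal sketch, assuming OAuth2 credentials are already configured and using placeholder bucket/project names:
import boto
import gcs_oauth2_boto_plugin  # registers the 'gs' OAuth2 handler with boto

GOOGLE_STORAGE = 'gs'
project_id = 'your-project-id'  # placeholder

# Create the bucket; the header tells GCS which project owns it
uri = boto.storage_uri('my-new-bucket', GOOGLE_STORAGE)
uri.create_bucket(headers={'x-goog-project-id': project_id})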

Related

What changes are needed to my django app when deploying to pythonanywhere? error points to nowhere

Deploying my Django website, which runs fine locally and uses S3 as storage, to PythonAnywhere gives a strange error I can't google a solution for:
"TypeError: a bytes-like object is required, not 'str'"
What am I doing wrong?
I've tried moving my environment variables (AWS keys, SECRET_KEY, etc.) out of settings.env and setting them directly in my settings.py, plus every suggestion I could find, but it's still the same :(
here's my /var/www/username_pythonanywhere_com_wsgi.py:
# +++++++++++ DJANGO +++++++++++
# To use your own Django app use code like this:
import os
import sys
from dotenv import load_dotenv
project_folder = os.path.expanduser('~/portfolio_pa/WEB') # adjust as appropriate
load_dotenv(os.path.join(project_folder, 'settings.env'))
# assuming your Django settings file is at '/home/myusername/mysite/mysite/settings.py'
path = '/home/corebots/portfolio_pa'
if path not in sys.path:
    sys.path.insert(0, path)
os.environ['DJANGO_SETTINGS_MODULE'] = 'WEB.settings'
## Uncomment the lines below depending on your Django version
###### then, for Django >=1.5:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
###### or, for older Django <=1.4
#import django.core.handlers.wsgi
#application = django.core.handlers.wsgi.WSGIHandler()
I'd expect the website to run fine just like it does locally.
The boto library doesn't have good Python 3 support. This particular issue is known in the boto bug tracker: https://github.com/boto/boto/issues/3837
The best way of fixing this is to use boto3, which has decent Python 3 support and is generally the best-supported AWS SDK for Python.
The reason it works on your local machine and doesn't work in production is that the PythonAnywhere setup seems to use a proxy, which triggers the incompatible boto code. See the actual calling code: https://github.com/boto/boto/blob/master/boto/connection.py#L747
Your error traceback confirms this.
Unfortunately, I'm not familiar with django-photologue, but a brief look doesn't suggest that it strongly depends on boto3. Maybe I'm wrong.
I still think the best way is to go with boto3. As a backup strategy, you can fork boto with a fix for this issue and install that instead of the official one from PyPI: https://github.com/boto/boto/pull/3699
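If the site uses django-storages for its S3 backend, switching to boto3 is mostly a settings change. A minimal sketch, assuming django-storages is installed and using placeholder names:
# settings.py (sketch) - use the boto3-backed storage class
import os

INSTALLED_APPS += ['storages']

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
AWS_STORAGE_BUCKET_NAME = 'my-bucket'  # placeholder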

The worker for a Google Cloud Platform task cannot find the logging library

I have created a simple task based on the Google Cloud Platform "update counter" push task example. All I want it to do is log to the Stackdriver logs that it has been invoked.
from google.cloud import logging

logging_client = logging.Client()
log_name = 'service-log'
logger = logging_client.logger(log_name)

import webapp2

class UpdateCounterHandler(webapp2.RequestHandler):
    def post(self):
        amount = int(self.request.get('amount'))
        logger.log_text('Service startup task done.')

app = webapp2.WSGIApplication([
    ('/update_counter', UpdateCounterHandler)
], debug=True)
After deploying this and invoking it, there is an error. In the logs online it says:
from google.cloud import logging
ImportError: No module named cloud
This isn't a local version, but one that I've deployed. It's hard for me to believe that I have to actually install python libraries into the production runtime. (I can't even imagine that I can.)
As the root readme states:
Many samples require extra libraries to be installed. If there is a requirements.txt, you will need to install the dependencies with pip.
Try adding the library as explained here.
When using logging from the Python standard library on App Engine, the logs also end up in Stackdriver, so you could use import logging instead of from google.cloud import logging.
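A minimal sketch of the handler using only the standard library, so there is no extra dependency to vendor:
import logging

import webapp2

class UpdateCounterHandler(webapp2.RequestHandler):
    def post(self):
        amount = int(self.request.get('amount'))
        # On the App Engine standard runtime, stdlib logging is forwarded to Stackdriver
        logging.info('Service startup task done. amount=%s', amount)

app = webapp2.WSGIApplication([
    ('/update_counter', UpdateCounterHandler)
], debug=True)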
When you are specifically interested in using the google.cloud.logging library, it needs to be installed into a project folder ./lib, as referred to by Tudormi: here
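For completeness, the usual vendoring pattern on the Python 2.7 App Engine standard runtime looks like this (a sketch; ./lib is the conventional folder name):
pip install -t lib google-cloud-logging
Then tell the runtime to look in ./lib via appengine_config.py:
# appengine_config.py
from google.appengine.ext import vendor

# Add any libraries installed in the "lib" folder to the search path.
vendor.add('lib')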

create_engine in sqlalchemy not working in python 3.6 runtime for aws lambda

I successfully tested pandas, numpy, and sqlalchemy in an Amazon Linux Docker image using Python 3.6. I was able to import, use, and connect to a database in the virtual environment using create_engine from the sqlalchemy module.
I then exported all the dependencies and built a Python deployment package to run it in AWS Lambda, but for some reason I keep getting an error for create_engine in Lambda:
AttributeError: module 'sqlalchemy' has no attribute 'create_engine'
This is my code:
import pandas as pd
import numpy as np
import sqlalchemy
from datetime import datetime, timedelta

def lambda_handler(event, context):
    engine = sqlalchemy.create_engine("DB_URI")
    return "Hello world!"
However, if I simply comment out the line where I call create_engine, I get my "Hello world!" response.
I don't get why create_engine is not working in this environment when it worked perfectly fine in the identical docker environment. Any ideas?
I figured it out. I made a rookie mistake when zipping up my files: I didn't use the -r option, which meant only the top level of my Python module folders was getting zipped. This explains why I wasn't getting an import error even though none of the actual methods were working.
So to reiterate, the solution was adding the -r option to my zip operation to add all the files recursively:
zip -r package.zip *
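For context, a typical Lambda packaging flow that avoids this pitfall looks something like this (a sketch; requirements.txt and handler.py stand in for your own files):
# install dependencies into a build directory
pip install -r requirements.txt -t package/
# copy the handler code in alongside them
cp handler.py package/
# zip the contents recursively so nested package folders are included
cd package && zip -r ../package.zip . && cd ..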

Odoo custom module with external Python library

I created an Odoo module in Python using the Python library ujson.
I installed this library on my development server manually with pip install ujson.
Now I want to install the module on my live server. Can I somehow tell the Odoo module to install the ujson library when it is installed? Then I would just have to add the module to my addons path and install it via the Odoo web interface.
Another reason to have this automated: if I want to share my custom module, others wouldn't have to install the library manually on their server.
Any suggestions on how to configure my module that way? Or should I just include the library's directory in my module?
You should wrap the import in try-except to handle problems on Odoo server start:
try:
    from external_dependency import ClassA
except ImportError:
    pass
And for other users of your module, extend the external_dependencies in your module manifest (v9 and less: __openerp__.py; v10+: __manifest__.py), which will prompt a warning on installation:
"external_dependencies": {
'python': ['external_dependency']
},
Big thanks goes to Ivan and his Blog
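For context, the external_dependencies key sits alongside the usual manifest fields. A minimal hypothetical __manifest__.py sketch:
{
    "name": "My Module",
    "version": "1.0",
    "depends": ["base"],
    "external_dependencies": {
        "python": ["ujson"],
    },
}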
Thank you for your help, @Walid Mashal and @CZoellner, you both pointed me in the right direction.
I solved this task now with the following code added to the __init__.py of my module:
import pip

try:
    import ujson
except ImportError:
    print('\n There was no such module named -ujson- installed')
    print('xxxxxxxxxxxxxxxx installing ujson xxxxxxxxxxxxxx')
    pip.main(['install', 'ujson'])
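One caveat: pip.main was removed from pip's public API in pip 10, so on newer pip versions a subprocess call is the safer route. A sketch of the same idea:
import subprocess
import sys

try:
    import ujson
except ImportError:
    # invoke pip through the current interpreter instead of pip.main
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'ujson'])
    import ujson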
In a Python file you can install it using the following approach (it works for Odoo only). E.g., here I am going to install xlsxwriter:
import os

try:
    import xlsxwriter
except ImportError:
    os.system("pip install xlsxwriter")
    import xlsxwriter
The following is the code used in the Odoo base module report (odoo_root_folder/addons/report/models/report.py) to check for wkhtmltopdf:
import logging
import subprocess

from openerp.tools.misc import find_in_path

_logger = logging.getLogger(__name__)

def _get_wkhtmltopdf_bin():
    return find_in_path('wkhtmltopdf')

try:
    process = subprocess.Popen(
        [_get_wkhtmltopdf_bin(), '--version'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except (OSError, IOError):
    _logger.info('You need Wkhtmltopdf to print a pdf version of the reports.')
Basically, you need some Python code that tries to run the library and installs it if that fails, and you include that code in one of your .py files; that should do it.

No module named cloud while using google.cloud import bigquery

I have built an App Engine application to load data into a BigQuery table using the Google App Engine Launcher, but when I run it on localhost or on the cloud I get the "No module named cloud" error from the from google.cloud import bigquery line in the log file. I have installed the Google Cloud client library, but it still gives me the same error. Please see below the code I am using.
--- main.py file contains:
import argparse
import time
import uuid

from google.cloud import bigquery

def load_data_from_gcs(dataset_name, table_name, source):
    bigquery_client = bigquery.Client()
    dataset = bigquery_client.dataset(dataset_name)
    table = dataset.table(table_name)
    job_name = str(uuid.uuid4())
    job = bigquery_client.load_table_from_storage(
        job_name, table, source)
    job.begin()
    wait_for_job(job)
    print('Loaded {} rows into {}:{}.'.format(
        job.output_rows, dataset_name, table_name))

def wait_for_job(job):
    while True:
        job.reload()
        if job.state == 'DONE':
            if job.error_result:
                raise RuntimeError(job.error_result)
            return
        time.sleep(1)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('dataset_name')  # e.g. Test
    parser.add_argument('table_name')    # e.g. mytable
    parser.add_argument('source')        # e.g. gs://week/geninfo.csv
    args = parser.parse_args()
    load_data_from_gcs(
        args.dataset_name,
        args.table_name,
        args.source)
--- app.yaml file contains the following code:
application: mycloudproject
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /favicon\.ico
  static_files: favicon.ico
  upload: favicon\.ico

- url: .*
  script: main.app
Please let me know what is missing or whether I am doing something wrong here.
This can be a bit tricky. Google Cloud uses the new Python namespace-package format (if you look at the source you'll notice that there's no __init__.py in the directory structure).
This was changed in Python 3.3 with PEP 420.
Fortunately, in Python 2.7 you can fix this easily by avoiding implicit imports. Just add this to the top of your file:
from __future__ import absolute_import
Hope that helps.
Find the directory containing google/cloud/..., and add that directory to the PYTHONPATH so that Python can find it. See this post for details on how to add to PYTHONPATH. It outlines two common ways to do it:
Here's how to do it with a bash command:
export PYTHONPATH=$PYTHONPATH:/<path_to_modules>
Or you could append it to the path in your script:
# if the google/ directory is in the directory /path/to/directory/
path_to_look_for_module = '/path/to/directory/'
import sys
if path_to_look_for_module not in sys.path:
    sys.path.append(path_to_look_for_module)
If that doesn't work, here is some code I found in one of my projects for importing Google Appengine modules:
import sys

def fixup_paths(path):
    """Adds GAE SDK path to system path and appends it to the google path
    if that already exists."""
    # Not all Google packages are inside namespace packages, which means
    # there might be another non-namespace package named `google` already on
    # the path and simply appending the App Engine SDK to the path will not
    # work since the other package will get discovered and used first.
    # This emulates namespace packages by first searching if a `google` package
    # exists by importing it, and if so appending to its module search path.
    try:
        import google
        google.__path__.append("{0}/google".format(path))
    except ImportError:
        pass
    sys.path.insert(0, path)

# and then call later in your code:
fixup_paths(path_to_google_sdk)
from google.cloud import bigquery
It looks like you are trying to use the Cloud Datastore client library in Google App Engine's standard environment. As documented in Google's documentation, you should not be doing this. Instead, either use the NDB Client Library or do not use the standard environment.
Are you sure you've updated to the latest version of the library? The version installed by pip may be out of date. Previously, the module was imported as:
from gcloud import bigquery
If that works, you're running an older version. To install the latest, I'd recommend pulling from master in the GitHub project.
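For reference, upgrading the released package from PyPI looks like this (google-cloud-bigquery is the current package name on PyPI; adjust if you use the umbrella google-cloud package instead):
pip install --upgrade google-cloud-bigquery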