Python ConfigParser - module object is not callable in Fedora 23 - python-2.7

I am trying to read an .ini db file from Python to test a connection to a PostgreSQL database.
Python version: 2.7.11
In Fedora, I installed with
sudo dnf install python-configparser
Install 1 Package
Total download size: 41 k
Installed size: 144 k
Is this ok [y/N]: y
Downloading Packages:
python-configparser-3.5.0b2-0.2.fc23.noarch.rpm 40 kB/s | 41 kB 00:01
Total 31 kB/s | 41 kB 00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing : python-configparser-3.5.0b2-0.2.fc23.noarch 1/1
Verifying : python-configparser-3.5.0b2-0.2.fc23.noarch 1/1
Installed:
python-configparser.noarch 3.5.0b2-0.2.fc23
In my config.py I do
import configparser
When I run my script I get 'module' object is not callable.
db.ini
[test1]
host=IP address
database=db
port=5432
user=username
password=pswd
[test2]
host=localhost
database=postgres
port=7999
user=abc
password=abcd
config.py
#!/usr/bin/env python
#from configparser import ConfigParser
import configparser

def config(filename='database.ini', section='gr'):
    # create a parser
    parser = configparser()
    # read config file
    parser.read(filename)
    # get section, default to gr
    db = {}
    if parser.has_section(section):
        params = parser.items(section)
        for param in params:
            db[param[0]] = param[1]
    else:
        raise Exception('Section {0} not found in the {1} file'.format(section, filename))
    return db
testing.py
#!/usr/bin/env python
import psycopg2
from config import config

def connect():
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # read connection parameters
        params = config()
        # connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        conn = psycopg2.connect(**params)
        # create a cursor
        cur = conn.cursor()
        # execute a statement
        print('PostgreSQL database version:')
        cur.execute('SELECT version()')
        # display the PostgreSQL database server version
        db_version = cur.fetchone()
        print(db_version)
        # close the communication with the PostgreSQL
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')

if __name__ == '__main__':
    connect()
[root@svr mytest]# ./testing.py
'module' object is not callable
Any ideas? Thank you.

You have an error in your parser creation; see the ConfigParser examples:
An example of reading the configuration file again:
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('example.cfg')
In Python, a module is basically just a file, so if you want to use anything from it, you have to specify which name inside the module you want. In your case, change the lines to
# create a parser
parser = configparser.ConfigParser()
This way, you are using the class ConfigParser from the module.
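For reference, a minimal sketch of the corrected config() (same logic as the question's config.py, with only the parser creation changed):
import configparser

def config(filename='database.ini', section='gr'):
    # instantiate the ConfigParser class from the module
    parser = configparser.ConfigParser()
    parser.read(filename)
    if not parser.has_section(section):
        raise Exception('Section {0} not found in the {1} file'.format(section, filename))
    # return the section's key/value pairs as a plain dict
    return dict(parser.items(section))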

You can also use the following to import the parser:
from configparser import ConfigParser
instead of
import configparser
I had the same issue as you and this worked for me.
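With that import, the class is instantiated directly, e.g. (a short sketch):
from configparser import ConfigParser

parser = ConfigParser()
parser.read('database.ini')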

Related

Django raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet

I am trying to use pyspark to preprocess data for a prediction model.
I get an error when I try spark.createDataFrame in my preprocessing. Is there a way to check what processedRDD looks like before turning it into a DataFrame?
import findspark
findspark.init('/usr/local/spark')
import pyspark
from pyspark.sql import SQLContext
import os
import pandas as pd
import geohash2
sc = pyspark.SparkContext('local', 'sentinel')
spark = pyspark.SQLContext(sc)
sql = SQLContext(sc)
working_dir = os.getcwd()
df = sql.createDataFrame(data)
df = df.select(['starttime', 'latstart','lonstart', 'latfinish', 'lonfinish', 'trip_type'])
df.show(10, False)
processedRDD = df.rdd
processedRDD = processedRDD \
    .map(lambda row: (row, g, b, minutes_per_bin)) \
    .map(data_cleaner) \
    .filter(lambda row: row != None)
print(processedRDD)
featuredDf = spark.createDataFrame(processedRDD, ['year', 'month', 'day', 'time_cat', 'time_num', 'time_cos',
                                                  'time_sin', 'day_cat', 'day_num', 'day_cos', 'day_sin', 'weekend',
                                                  'x_start', 'y_start', 'z_start', 'location_start', 'location_end', 'trip_type'])
I am getting this error:
[Stage 1:> (0 + 1) / 1]2019-10-24 15:37:56 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
I do not understand what this has to do with importing an app.
I don't know what this script has to do with Django exactly, but adding the following lines at the top of the script will probably fix this issue:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
import django
django.setup()
Basically, you need to load your settings and populate Django’s application registry before doing anything else. You have all the information required in the Django docs.
https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage
Instead of manually running Hadoop, I am building a Python server that uses pyspark and runs heavy AI algorithms about 10 times faster on a Django server. The problem I had came from SPARK_LOCAL_IP: a different IP was being used (the one I use to connect to a remote database via an SSH tunnel). I import and use pyspark. I had to rename a file and add the correct IP:
cd /usr/local/spark/conf
touch spark-env.sh.template
mv -i spark-env.sh.template spark-env.sh
nano spark-env.sh
and paste: SPARK_LOCAL_IP="127.0.1.1"
Then I had to add sc.setLogLevel("ERROR") to my views.py to see what the real problem was. Debugging Java from Python can be problematic sometimes. A column was datetime instead of string, and I fixed it.
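To the question's side note about checking what processedRDD looks like before building the DataFrame: a small sketch, reusing the names from the question's script, that raises the log level and pulls a few records onto the driver so type mismatches (like that datetime column) show up early:
sc.setLogLevel("ERROR")        # keep the Spark output readable
sample = processedRDD.take(5)  # materialize a handful of rows on the driver
for row in sample:
    print(row)                 # inspect field order and types before createDataFrame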

PyCharm can't find 'SPARK_HOME' when imported from a different file

I have two files.
test.py
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark import SQLContext

class Connection():
    conf = SparkConf()
    conf.setMaster("local")
    conf.setAppName("Remote_Spark_Program - Leschi Plans")
    conf.set('spark.executor.instances', 1)
    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)
    print('all done.')

con = Connection()
test_test.py
from test import Connection
sparkConnect = Connection()
When I run test.py the connection is made successfully, but with test_test.py it gives
raise KeyError(key)
KeyError: 'SPARK_HOME'
The KeyError arises if SPARK_HOME is not found or is invalid. So it's better to add it to your .bashrc, then check and reload it in your code. Add this at the top of your test.py:
import os
import sys
import pyspark
from pyspark import SparkContext, SparkConf, SQLContext

# Create a variable for our root path
SPARK_HOME = os.environ.get('SPARK_HOME', None)

# Add the PySpark/py4j libraries to the Python path
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))

pyspark_submit_args = os.environ.get("PYSPARK_SUBMIT_ARGS", "")
if "pyspark-shell" not in pyspark_submit_args:
    pyspark_submit_args += " pyspark-shell"
os.environ["PYSPARK_SUBMIT_ARGS"] = pyspark_submit_args
Also add this at the end of your ~/.bashrc file (e.g. vim ~/.bashrc on any Linux-based OS):
# needed for Apache Spark
export SPARK_HOME="/opt/spark"
export IPYTHON="1"
export PYSPARK_PYTHON="/usr/bin/python3"
export PYSPARK_DRIVER_PYTHON="ipython3"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYTHONPATH="$SPARK_HOME/python/:$PYTHONPATH"
export PYTHONPATH="$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH"
export PYSPARK_SUBMIT_ARGS="--master local[2] pyspark-shell"
export CLASSPATH="$CLASSPATH:/opt/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar"
Note:
In the bashrc code above, I set my SPARK_HOME value to /opt/spark; use the location where you keep your Spark folder (the one downloaded from the website).
I am also using python3; change it to python in the bashrc if you are on a Python 2.x version.
I was using IPython for easy testing at runtime, e.g. loading the data once and testing the code many times. If you are using a plain old text editor, let me know and I will update the bashrc accordingly.
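One extra safeguard (my own suggestion, not part of the original answer): since a missing SPARK_HOME only fails later with a confusing error, you can fail fast with an explicit check; in PyCharm you can also set SPARK_HOME under Run > Edit Configurations > Environment variables for the script that fails (here test_test.py):
import os

SPARK_HOME = os.environ.get('SPARK_HOME')
if not SPARK_HOME:
    # fail with a clear message instead of a later KeyError/TypeError
    raise EnvironmentError("SPARK_HOME is not set; export it in ~/.bashrc "
                           "or set it in the PyCharm run configuration")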

How do I use flask-migrate on pythonanywhere?

Does anyone have an example of successfully using flask-migrate on pythonanywhere? Does anyone have a simple example of what an app.py using migration should look like? Something along the lines of:
from flask import Flask, request, render_template
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
app = Flask(__name__ )
app.secret_key = 'This is really unique and secret'
app.config['SQLALCHEMY_DATABASE_URI'] = '<whatever>'
db = SQLAlchemy(app)
db.create_all()
manager = Manager( app )
migrate = Migrate( app, db )
manager.add_command('db', MigrateCommand )
I find that when I try to run
python app.py db init
it fails to generate the migration repository. Could this be a file permission issue on pythonanywhere?
Author of Flask-Migrate here.
The idea is that you generate your db repository in your development environment. You have to commit your repository along with your source files to source control.
Then when you install the application on your hosting service all you need to do is run the upgrade command to get your db created and migrated to the latest revision.
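For example (a sketch; the script name and virtualenv layout are taken from the example below and may differ in your project):
# in a pythonanywhere bash console, inside the project directory
. venv/bin/activate
python app.py db upgrade   # applies the committed migrations to the server's database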
Update: based on your comments below, you want to develop an application from scratch. I just tested this myself, and was able to create a db repository, create a migration, and apply it. What I did is start the pythonanywhere bash console. Here is a copy of my complete session:
17:39 ~ $ mkdir dbtest
17:39 ~ $ cd dbtest
17:39 ~/dbtest $ virtualenv venv
New python executable in venv/bin/python2.7
Also creating executable in venv/bin/python
Installing setuptools............done.
Installing pip...............done.
17:39 ~/dbtest $ . venv/bin/activate
(venv)17:39 ~/dbtest $ pip install flask flask-migrate
...
(venv)17:42 ~/dbtest $ vi dbtest.py
... (entered flask-migrate example code, see below)
(venv)17:47 ~/dbtest $ python dbtest.py
usage: dbtest.py [-?] {shell,db,runserver} ...
positional arguments:
{shell,db,runserver}
shell Runs a Python shell inside Flask application context.
db Perform database migrations
runserver Runs the Flask development server i.e. app.run()
optional arguments:
-?, --help show this help message and exit
(venv)17:47 ~/dbtest $ python dbtest.py db init
Creating directory /home/miguelgrinberg/dbtest/migrations ... done
Creating directory /home/miguelgrinberg/dbtest/migrations/versions ... done
Generating /home/miguelgrinberg/dbtest/migrations/README ... done
Generating /home/miguelgrinberg/dbtest/migrations/alembic.ini ... done
Generating /home/miguelgrinberg/dbtest/migrations/env.py ... done
Generating /home/miguelgrinberg/dbtest/migrations/script.py.mako ... done
Generating /home/miguelgrinberg/dbtest/migrations/env.pyc ... done
Please edit configuration/connection/logging settings in '/home/miguelgrinberg/dbtest/migrations/alembic.ini' before proceeding.
(venv)17:54 ~/dbtest $ python dbtest.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'user'
Generating /home/miguelgrinberg/dbtest/migrations/versions/1c4aa671e23a_.py ... done
(venv)17:54 ~/dbtest $ python dbtest.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 1c4aa671e23a, empty message
(venv)17:55 ~/dbtest $ ls -l
total 8
-rw-r--r-- 1 miguelgrinberg registered_users 3072 Sep 28 2014 app.db
-rwxrwxr-x 1 miguelgrinberg registered_users 511 Sep 28 17:48 dbtest.py
drwxrwxr-x 3 miguelgrinberg registered_users 100 Sep 28 17:55 migrations
drwxrwxr-x 6 miguelgrinberg registered_users 52 Sep 28 17:41 venv
The example application that I used to test this is below:
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from flask.ext.script import Manager
from flask.ext.migrate import Migrate, MigrateCommand

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
db = SQLAlchemy(app)
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))

if __name__ == '__main__':
    manager.run()
Any chance you already have a migration repository created? Or a database?

no module named http.server

Here is my web server class:
import http.server
import socketserver

class WebHandler(http.server.BaseHTTPRequestHandler):

    def parse_POST(self):
        ctype, pdict = cgi.parse_header(self.headers['content-type'])
        if ctype == 'multipart/form-data':
            postvars = cgi.parse_multipart(self.rfile, pdict)
        elif ctype == 'application/x-www-form-urlencoded':
            length = int(self.headers['content-length'])
            postvars = urllib.parse.parse_qs(self.rfile.read(length),
                                             keep_blank_values=1)
        else:
            postvars = {}
        return postvars

    def do_POST(self):
        postvars = self.parse_POST()
        print(postvars)
        # reply with JSON
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Expose-Headers: Access-Control-Allow-Origin")
        self.send_header("Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept")
        self.end_headers()
        json_response = json.dumps({'test': 42})
        self.wfile.write(bytes(json_response, "utf-8"))
When I run the server I get "name 'http' is not defined"; after adding import http.server I then get "no module named http.server".
In Python earlier than v3 you need to run the HTTP server as
python -m SimpleHTTPServer 8012
#(8012 = port number on which your Python server is configured)
And for Python 3:
python3 -m http.server 7034 #(any free port number)
http.server only exists in Python 3. In Python 2, you should use the BaseHTTPServer module:
from BaseHTTPServer import BaseHTTPRequestHandler
should work just fine.
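If the same script has to run under both Python 2 and 3, a small sketch of a version-agnostic import:
try:
    # Python 3
    from http.server import BaseHTTPRequestHandler, HTTPServer
except ImportError:
    # Python 2
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer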
Like @Sami said: you need an earlier version of Python (2.7).
- If you're running on Linux you need to "apt install python-minimal".
- When launching the server, run "python2 -m SimpleHTTPServer 8082" [or any other port]. By default, Python binds it to 0.0.0.0, which you can reach at 127.0.0.1:8082 [or your specific port].
Happy hacking
PS: when you have two versions of Python, you need to specify "python2" or "python3" every time you run a command.

uwsgi: no app loaded. going in full dynamic mode

In my uwsgi config, I have these options:
[uwsgi]
chmod-socket = 777
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
callable = app
master = True
processes = 4
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans
vacuum
My app structure looks like this:
/adminserver
app.py
...
My app.py has these bits of code:
app = Flask(__name__)
...
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5003, debug=True)
The result is that when I try to curl my server, I get this error:
Wed Sep 11 23:28:56 2013 - added /adminserver/ to pythonpath.
Wed Sep 11 23:28:56 2013 - *** no app loaded. going in full dynamic mode ***
Wed Sep 11 23:28:56 2013 - *** uWSGI is running in multiple interpreter mode ***
What do the module and callable options do? The docs say:
module, wsgi Argument: string
Load a WSGI module as the application. The module (sans .py) must be
importable, ie. be in PYTHONPATH.
This option may be set with -w from the command line.
callable Argument: string Default: application
Set default WSGI callable name.
Module
A module in Python maps to a file on disk - when you have a directory like this:
/some-dir
module1.py
module2.py
If you start up a python interpreter while the current working directory is /some-dir you will be able to import each of the modules:
some-dir$ python
>>> import module1, module2
# Module1 and Module2 are now imported
Python searches sys.path (and a few other things, see the docs on import for more information) for a file that matches the name you are trying to import. uwsgi uses Python's import process under the covers to load the module that contains your WSGI application.
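Roughly, what the module option amounts to (a sketch of the idea, not uwsgi's actual code), using the question's app.py:
# uwsgi resolves the module *name* from the config at runtime,
# which is essentially a dynamic import:
import importlib
wsgi_module = importlib.import_module('app')  # finds /adminserver/app.py via the configured pythonpath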
Callable
The WSGI PEPs (333 and 3333) specify that a WSGI application is a callable that takes two arguments and returns an iterable that yields bytestrings:
# simple_wsgi.py
# The simplest WSGI application
HELLO_WORLD = b"Hello world!\n"

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
uwsgi needs to know the name of a symbol inside of your module that maps to the WSGI application callable, so it can pass in the environment and the start_response callable - essentially, it needs to be able to do the following:
wsgi_app = getattr(simple_wsgi, 'simple_app')
TL;PC (Too Long; Prefer Code)
A simple parallel of what uwsgi is doing:
# Use `module` to know *what* to import
import simple_wsgi
# construct request environment from user input
# create a callable to pass for start_response
# and then ...
# use `callable` to know what to call
wsgi_app = getattr(simple_wsgi, 'simple_app')
# and then call it to respond to the user
response = wsgi_app(environ, start_response)
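Applied to the question's setup, the config shown is missing the module option entirely. A sketch of the relevant lines, assuming app.py lives in /adminserver and the Flask object is named app (as in the question):
[uwsgi]
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
; import app.py (sans .py) from the pythonpath above
module = app
; the Flask instance defined inside app.py
callable = app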
For anyone else having this problem, if you are sure your configuration is correct, you should check your uWSGI version.
Ubuntu 12.04 LTS provides 1.0.3. Removing that and using pip to install 2.0.4 resolved my issues.
First, check whether your configuration is correct.
My uwsgi.ini configuration:
[uwsgi]
chdir=/home/air/repo/Qiy
uid=nobody
gid=nobody
module=Qiy.wsgi:application
socket=/home/air/repo/Qiy/uwsgi.sock
master=true
workers=5
pidfile=/home/air/repo/Qiy/uwsgi.pid
vacuum=true
thunder-lock=true
enable-threads=true
harakiri=30
post-buffering=4096
daemonize=/home/air/repo/Qiy/uwsgi.log
Then use uwsgi --ini uwsgi.ini to run uwsgi.
If that does not work, rm -rf the venv directory, re-initialize the venv, and retry the steps.
Re-initializing the venv solved my issue. It seems the problem appeared when I pip3 installed some packages from requirements.txt, upgraded pip, and then installed the uwsgi package. So I deleted the venv and re-initialized my virtual environment.