How do I configure alembic.ini path correctly? - flask

When I run any alembic commands, I get the following error:
FAILED: No config file 'alembic.ini' found, or file has no '[alembic]' section
According to the documentation, I need to set my path, so I did. I'm running WSL Ubuntu 18.
Am I setting the path wrong? I run the commands in my Flask project's top directory:
~/microblog$
My alembic file structure is:
~/microblog/
- migrations/
-- __pycache__/
-- versions/
-- alembic.ini
-- env.py
-- README
-- script.py.mako
alembic.ini
# A generic, single database configuration.
[alembic]
script_location = home/acanizales1/microblog/migrations/alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
... the rest below are configured correctly ...
Thank you.
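A hedged aside for anyone hitting the same error: Alembic only finds alembic.ini in the directory you run it from unless you point it at the file explicitly with -c/--config, and a migrations/ tree laid out like this is normally generated by Flask-Migrate, whose env.py expects to be driven through the flask db commands. A sketch of both invocations from ~/microblog (the commands are standard; whether raw alembic works here depends on how env.py was generated):
# Run raw alembic against the ini inside migrations/
alembic -c migrations/alembic.ini current
# Or, with Flask-Migrate, let the flask CLI wire up the config and app context
flask db current
flask db upgrade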

Related

How to use DBT with AWS Managed Airflow?

Hope you are doing well.
I wanted to check if anyone has gotten dbt up and running in AWS MWAA (Airflow).
I have tried this one and this Python package without success; they fail for one reason or another (can't find the dbt path, etc.).
Has anyone managed to use MWAA (Airflow 2) and dbt without having to build a Docker image and place it somewhere?
Thank you!
I've managed to solve this by doing the following steps:
Add dbt-core==0.19.1 to your requirements.txt
Add the dbt CLI executable into plugins.zip:
#!/usr/bin/env python3
# EASY-INSTALL-ENTRY-SCRIPT: 'dbt-core==0.19.1','console_scripts','dbt'
__requires__ = 'dbt-core==0.19.1'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('dbt-core==0.19.1', 'console_scripts', 'dbt')()
    )
And from here you have two options:
- Set the dbt_bin operator argument to /usr/local/airflow/plugins/dbt
- Add /usr/local/airflow/plugins/ to the $PATH by following the docs
Environment variable setter example:
from airflow.plugins_manager import AirflowPlugin
import os
os.environ["PATH"] = os.getenv(
"PATH") + ":/usr/local/airflow/.local/lib/python3.7/site-packages:/usr/local/airflow/plugins/"
class EnvVarPlugin(AirflowPlugin):
name = 'env_var_plugin'
The plugins zip content:
plugins.zip
├── dbt (DBT cli executable)
└── env_var_plugin.py (environment variable setter)
Using the PyPI airflow-dbt-python package has simplified the setup of dbt on MWAA for us, as it avoids needing to amend the PATH environment variable in the plugins file. However, I've yet to have a successful dbt run via either the airflow-dbt or airflow-dbt-python packages, because the MWAA worker filesystem appears to be read-only: as soon as dbt tries to compile to the target directory, the following error occurs:
File "/usr/lib64/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/usr/local/airflow/dags/dbt/target'
This is how I managed to do it:
@dag(**default_args)
def dbt_dag():
    @task()
    def run_dbt():
        import os
        from dbt.main import handle_and_check

        # Point dbt at writable paths; the DAG directory is read-only on MWAA workers.
        os.environ["DBT_TARGET_DIR"] = "/usr/local/airflow/tmp/target"
        os.environ["DBT_LOG_DIR"] = "/usr/local/airflow/tmp/logs"
        os.environ["DBT_PACKAGE_DIR"] = "/usr/local/airflow/tmp/packages"

        succeeded = True
        try:
            args = ['run', '--whatever', 'bla']
            results, succeeded = handle_and_check(args)
            print(results, succeeded)
        except SystemExit as e:
            if e.code != 0:
                raise e
        if not succeeded:
            raise Exception("DBT failed")
Note that my dbt_project.yml has the following paths; this is to avoid an OS exception when dbt tries to write to read-only paths:
target-path: "{{ env_var('DBT_TARGET_DIR', 'target') }}" # directory which will store compiled SQL files
log-path: "{{ env_var('DBT_LOG_DIR', 'logs') }}" # directory which will store dbt logs
packages-install-path: "{{ env_var('DBT_PACKAGE_DIR', 'packages') }}" # directory which will store dbt packages
Combining the answers from @Yonatan Kiron & @Ofer Helman works for me.
I just needed to fix these 3 files:
requirements.txt
plugins.zip
dbt_project.yml
In my requirements.txt I specify the versions I want, and it looks like this:
airflow-dbt==0.4.0
dbt-core==1.0.1
dbt-redshift==1.0.0
Note that, as of v1.0.0, pip install dbt is no longer supported and will raise an explicit error. Since v0.13, the PyPI package named dbt was a simple "pass-through" of dbt-core. (See https://docs.getdbt.com/dbt-cli/install/pip#install-dbt-core-only)
For my plugins.zip I add a file env_var_plugin.py that looks like this
from airflow.plugins_manager import AirflowPlugin
import os
os.environ["DBT_LOG_DIR"] = "/usr/local/airflow/tmp/logs"
os.environ["DBT_PACKAGE_DIR"] = "/usr/local/airflow/tmp/dbt_packages"
os.environ["DBT_TARGET_DIR"] = "/usr/local/airflow/tmp/target"
class EnvVarPlugin(AirflowPlugin):
    name = 'env_var_plugin'
And finally I add this in my dbt_project.yml
log-path: "{{ env_var('DBT_LOG_DIR', 'logs') }}" # directory which will store dbt logs
packages-install-path: "{{ env_var('DBT_PACKAGE_DIR', 'dbt_packages') }}" # directory which will store dbt packages
target-path: "{{ env_var('DBT_TARGET_DIR', 'target') }}" # directory which will store compiled SQL files
And, as stated in the airflow-dbt GitHub README (https://github.com/gocardless/airflow-dbt#amazon-managed-workflows-for-apache-airflow-mwaa), configure the dbt task like below:
dbt_bin='/usr/local/airflow/.local/bin/dbt',
profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',
dir='/usr/local/airflow/dags/{DBT_FOLDER}/'
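Putting those arguments in context, a minimal task definition might look like the sketch below; the operator name and its dbt_bin/profiles_dir/dir arguments come from the airflow-dbt package, while the DAG name, schedule, and the {DBT_FOLDER} placeholder are illustrative only:
from airflow import DAG
from airflow.utils.dates import days_ago
from airflow_dbt.operators.dbt_operator import DbtRunOperator

with DAG(dag_id="dbt_run_example", start_date=days_ago(1), schedule_interval=None) as dag:
    # Point the operator at the dbt binary and project/profiles locations on the MWAA worker.
    dbt_run = DbtRunOperator(
        task_id="dbt_run",
        dbt_bin="/usr/local/airflow/.local/bin/dbt",
        profiles_dir="/usr/local/airflow/dags/{DBT_FOLDER}/",
        dir="/usr/local/airflow/dags/{DBT_FOLDER}/",
    )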

How to get ansible.module_utils to resolve my custom directory

Certain variables in ansible.cfg seem to not be taking effect.
Ansible 2.2.1.0
Python 2.7.10
Mac OS Version 10.12.5
We have created a custom class that will be used in our custom Ansible module. The class lives here:
/sites/utils/local/ansible/agt_module_utils/ldapData.py
/sites/utils/local/ansible/agt_module_utils/__init__.py
The class inside ldapData.py is named:
class ldapDataClass(object):
In all cases, __init__.py is a zero-byte file.
Our Ansible module is located here:
/sites/utils/local/ansible/agt_modules/__init__.py
/sites/utils/local/ansible/agt_modules/agtWeblogic.py
The import statement in agtWeblogic looks as follows:
from ansible.module_utils.ldapData import ldapDataClass
I have also tried:
from ldapData import ldapDataClass
The config file has the following lines:
library = /sites/utils/local/ansible/att_modules
module_utils = /sites/utils/local/ansible/att_module_utils
When running our module, the modules directory IS resolved but the module_utils directory is not. When the import is "from ansible.module_utils.ldapData import ldapDataClass" the failure occurs before Ansible even connects to the remote machine. When the import is "from ldapData import ldapDataClass" the failure occurs on the remote machine. Below I am showing the failure "on the remote machine" (scroll right to see the full error):
nverkland@local>ansible ecomtest37 -m agtWeblogic -a "action=stop instances=tst37-shop-main" -vvv
ecomtest37 | FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "agtWeblogic"
},
"module_stderr": "",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_FwHCmh/ansible_module_attWeblogic.py\", line 63, in <module>\r\n from ldapData import ldapDataClass\r\nImportError: No module named ldapData\r\n",
"msg": "MODULE FAILURE"
}
If I move the ldapData.py file into the "ansible installed" module_utils directory (/Library/Python/2.7/site-packages/ansible/module_utils/ on my Mac) the module runs fine. What have I done incorrectly in my config file that has prevented the use of my "custom" module_utils directory?
Thanks.
The module_utils path has been configurable since Ansible 2.3: changelog, PR.
I guess you have to upgrade or refactor the module.
Update: working example
Project tree:
.
├── ansible
│   └── ansible.cfg
├── module_utils
│   └── mycommon.py
└── modules
    └── test_module.py
ansible.cfg:
[defaults]
library = ../modules
module_utils = ../module_utils
mycommon.py:
class MyCommonClass(object):
    @staticmethod
    def hello():
        return 'world'
test_module.py:
#!/usr/bin/python
from ansible.module_utils.basic import *
from ansible.module_utils.mycommon import MyCommonClass
module = AnsibleModule(
    argument_spec = dict()
)
module.exit_json(changed=True, hello=MyCommonClass.hello())
Execution:
$ ansible localhost -m test_module
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
localhost | SUCCESS => {
"changed": true,
"hello": "world"
}
Ansible version:
$ ansible --version
ansible 2.3.1.0
  config file = /<masked>/ansible/ansible.cfg
  configured module search path = [u'../modules']
  python version = 2.7.10 (default, Feb 7 2017, 00:08:15) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)]
Konstantin's notes helped. I managed to solve the riddle.
In my config file I had:
module_utils = /sites/utils/local/ansible/agt_module_utils
This causes Ansible to expose my class at:
import ansible.agt_module_utils.ldapData (working)
whereas until now I had been attempting to import the class from:
import ansible.module_utils.ldapData (broken)
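In code, following that finding, the working import inside the module would look like this (a sketch using the poster's path names):
# agtWeblogic.py -- pull the shared class in via the custom module_utils namespace
from ansible.agt_module_utils.ldapData import ldapDataClass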

In Rails 4, how can I have a Rake task minify my assets?

I need to export a subset of my assets to some external sites. I've created a rake task to do that:
rake build:navbar
The problem is I cannot get the asset pipeline to minify my assets. Here's the code from my task method:
desc "Build navbar assets and markup for other sites."
task navbar: :environment do
# Set environment to production so pipeline will minify assets.
Rails.env = "production"
# Some setup code removed...
# How do we coax assets into minifying files?
Rails.application.config.assets.prefix = "../build/navbar/staging"
Rails.application.config.assets.js_compressor = :uglifier
Rails.application.config.assets.css_compressor = :sass
Rails.application.config.assets.digest = false
Rails.application.config.assets.compress = true
Rails.application.config.assets.debug = false
Rails.application.config.assets.paths = [Rails.root.join('/app/assets/javascripts'),
Rails.root.join('/app/assets/stylesheets/navbar')]
Rails.application.config.assets.precompile = ['navbar.js', 'navbar.css']
# Compile now.
Rake::Task['assets:clean'].invoke
Rake::Task['assets:precompile'].invoke
# Cleanup code removed...
end
It will generate a compressed version of my assets (navbar.css.gz), but not a minified version (navbar.min.css).
I've googled this up and down and it seems like this recipe of settings should do the trick. What am I missing?
I think I've identified the underlying problem. The asset pipeline task, i.e. sprockets-rails, doesn't fully respect config settings. It seems to override some settings depending on the Rails environment, and you can't simply change the Rails environment within a rake task.
The goal, again, is to port a subset of the Rails application's assets for another project using this rake command:
rake build:navbar
Here's some sample code that shows how I worked around these issues:
namespace :build do
  desc "Build navbar assets and markup."
  task navbar: :environment do
    # Prep Builder
    builder = Navbar::Builder.new(target: target)
    builder.prep_build

    # Why this? Setting Rails.env or ENV['RAILS_ENV'] didn't work.
    system("rake build:navbar_assets RAILS_ENV=production")

    builder.generate_markup_file
    builder.move_output_files_to_build_directory
    builder.cleanup
  end

  desc "Build navbar assets."
  task navbar_assets: :environment do
    # Configure the asset pipeline to compile minified files.
    # NOTE: Sprockets only minifies files in the production environment (it won't
    # do it in development), so this assumes RAILS_ENV is set to production
    # on the command line.
    Rails.application.config.assets.prefix = "../build/navbar/staging"
    Rails.application.config.assets.paths = [Rails.root.join('app/assets/javascripts'),
                                             Rails.root.join('app/assets/stylesheets')]
    Rails.application.config.assets.precompile += ['navbar.js', 'navbar.css']

    # Let it rip.
    Rake::Task['assets:clobber'].invoke
    Rake::Task['assets:precompile'].invoke
  end
end
There were also some issues with file-path-building in the code in the question. Those have been corrected.
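For context, the minification itself is driven by the environment config that the RAILS_ENV=production run picks up, typically something along these lines in config/environments/production.rb (a sketch of the usual Rails 4 defaults, not the poster's actual file):
Rails.application.configure do
  # Compress JavaScripts and CSS.
  config.assets.js_compressor = :uglifier
  config.assets.css_compressor = :sass

  # Do not fall back to the asset pipeline if a precompiled asset is missing.
  config.assets.compile = false
end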

rspec running with development database as opposed to test database

When I run my RSpec tests, I notice that they use my development database instead of the one for the test environment.
My spec_helper.rb file is as follows:
# This file was generated by the `rails generate rspec:install` command. Conventionally, all
# specs live under a `spec` directory, which RSpec adds to the `$LOAD_PATH`.
# The generated `.rspec` file contains `--require spec_helper` which will cause this
# file to always be loaded, without a need to explicitly require it in any files.
#
# Given that it is always loaded, you are encouraged to keep this file as
# light-weight as possible. Requiring heavyweight dependencies from this file
# will add to the boot time of your test suite on EVERY test run, even for an
# individual file that may not need all of that loaded. Instead, consider making
# a separate helper file that requires the additional dependencies and performs
# the additional setup, and require it from the spec files that actually need it.
# require 'webmock/rspec'
# WebMock.disable_net_connect!(allow_localhost: true)
#
# The `.rspec` file also contains a few flags that are not defaults but that
# users commonly want.
#
# See http://rubydoc.info/gems/rspec-core/RSpec/Core/Configuration
require 'rubygems'
# require 'test/unit'
require 'redis'
ENV['RAILS_ENV'] = 'test'
require File.expand_path("../../config/environment", __FILE__)
require 'factory_girl_rails'
# Capybara.register_driver :selenium do |app|
# Capybara::Selenium::Driver.new(app, :browser => :chrome)
# end
RSpec.configure do |config|
  # rspec-expectations config goes here. You can use an alternate
  # assertion/expectation library such as wrong or the stdlib/minitest
  # assertions if you prefer.
  config.expect_with :rspec do |expectations|
    # This option will default to `true` in RSpec 4. It makes the `description`
    # and `failure_message` of custom matchers include text for helper methods
    # defined using `chain`, e.g.:
    #     be_bigger_than(2).and_smaller_than(4).description
    #     # => "be bigger than 2 and smaller than 4"
    # ...rather than:
    #     # => "be bigger than 2"
    expectations.include_chain_clauses_in_custom_matcher_descriptions = true
  end

  # rspec-mocks config goes here. You can use an alternate test double
  # library (such as bogus or mocha) by changing the `mock_with` option here.
  config.mock_with :rspec do |mocks|
    # Prevents you from mocking or stubbing a method that does not exist on
    # a real object. This is generally recommended, and will default to
    # `true` in RSpec 4.
    mocks.verify_partial_doubles = true
  end

  config.mock_with :rspec

  config.before(:all) do
    ActiveRecord::Base.skip_callbacks = true
  end

  config.after(:all) do
    ActiveRecord::Base.skip_callbacks = false
  end
end
And the rails_helper.rb file is as follows:
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'database_cleaner'
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
  # Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
  config.fixture_path = "#{::Rails.root}/spec/fixtures"

  # If you're not using ActiveRecord, or you'd prefer not to run each of your
  # examples within a transaction, remove the following line or assign false
  # instead of true.
  config.use_transactional_fixtures = false

  # RSpec Rails can automatically mix in different behaviours to your tests
  # based on their file location, for example enabling you to call `get` and
  # `post` in specs under `spec/controllers`.
  #
  # You can disable this behaviour by removing the line below, and instead
  # explicitly tag your specs with their type, e.g.:
  #
  #     RSpec.describe UsersController, :type => :controller do
  #       # ...
  #     end
  #
  # The different available types are documented in the features, such as in
  # https://relishapp.com/rspec/rspec-rails/docs
  config.infer_spec_type_from_file_location!
end
BTW, if it is any help: I just finished solving an issue with database_cleaner wiping my development db, as described in this post.
How can I restrict the tests to run only in the test environment, using only the test database?
All help is welcome, thank you.
My database.yml is as follows:
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: 5
development:
  <<: *default
  database: directory-service_development

  # The specified database role being used to connect to postgres.
  # To create additional roles in postgres see `$ createuser --help`.
  # When left blank, postgres will use the default role. This is
  # the same name as the operating system user that initialized the database.
  #username: directory-service

  # The password associated with the postgres role (username).
  #password:

  # Connect on a TCP socket. Omitted by default since the client uses a
  # domain socket that doesn't need configuration. Windows does not have
  # domain sockets, so uncomment these lines.
  #host: localhost

  # The TCP port the server listens on. Defaults to 5432.
  # If your server runs on a different port number, change accordingly.
  #port: 5432

  # Schema search path. The server defaults to $user,public
  #schema_search_path: myapp,sharedapp,public

  # Minimum log levels, in increasing order:
  #   debug5, debug4, debug3, debug2, debug1,
  #   log, notice, warning, error, fatal, and panic
  # Defaults to warning.
  #min_messages: notice
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: directory-service_test
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
#   DATABASE_URL="postgres://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
  database: directory-service_production
  username: directory-service
  password: <%= ENV['DIRECTORY-SERVICE_DATABASE_PASSWORD'] %>
Adding puts ENV["RAILS_ENV"] showed that my tests were indeed running in the test environment.
But the local foreman server that was running was serving data from the development environment.
By manually specifying that the server should run in the test environment, the tests now use data from the test environment as well.
Big thanks to @AndyWaite.
What worked for me is the following:
Stop foreman (or your server running in development)
bin/rails db:migrate RAILS_ENV=test (optional)
bin/rails db:environment:set RAILS_ENV=test (set env explicitly)
rails server -e test (in another window)
rspec ___ (start testing)
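For reference, the stock rspec-rails rails_helper.rb also pins the environment before Rails boots; something like the following at the very top (a sketch of the generated defaults) makes it much harder to hit the development database by accident:
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
# Guard against booting into production by mistake.
abort("The Rails environment is running in production mode!") if Rails.env.production?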

Running PySpark in an IDE like Spyder?

I can run PySpark from the terminal and everything works fine.
~/spark-1.0.0-bin-hadoop1/bin$ ./pyspark
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.0.0
      /_/
Using Python version 2.7.6 (default, May 27 2014 14:50:58)
However, when I try this in a Python IDE:
import pyspark
ImportError: No module named pyspark
How do I import it like other Python libraries such as numpy, scikit, etc.?
Working in the terminal is fine; I just want to work in the IDE.
I wrote this launcher script a while back expressly for that purpose. I wanted to be able to interact with the pyspark shell from within the bpython(1) code-completion interpreter and WING IDE, or any IDE for that matter, because they have code completion and provide a complete development experience. Learning Spark core by just typing 'pyspark' isn't good enough. So I wrote this. It was written in a Cloudera CDH5 environment, but with a little tweaking you can get it to work in whatever your environment is (even manually installed ones).
How to use:
NOTE: You can place all of the following in your .profile (or equivalent).
(1) linux$ export MASTER='yarn-client | local[NN] | spark://host:port'
(2) linux$ export SPARK_HOME=/usr/lib/spark # Yours will vary.
(3) linux$ export JAVA_HOME=/usr/java/latest # Yours will vary.
(4) linux$ export NAMENODE='vps00' # Yours will vary.
(5) linux$ export PYSTART=${PYTHONSTARTUP} # See in-line comments about the reason for the need for this alias to PYTHONSTARTUP.
(6) linux$ export HADOOP_CONF_DIR=/etc/hadoop/conf # Yours will vary. This one may not be necessary to set. Try and see.
(7) linux$ export HADOOP_HOME=/usr/lib/hadoop # Yours will vary. This one may not be necessary to set. Try and see.
(8) bpython -i /path/to/script/below # The moment of truth. Note that this is 'bpython' (not just plain 'python', which would not give the code completion you desire).
>>> sc
<pyspark.context.SparkContext object at 0x2798110>
>>>
Now for use with an IDE, you simply determine how to specify the equivalent of a PYTHONSTARTUP script for that IDE, and set that to '/path/to/script/below'. For example, as I described in the in-line comments below, for WING IDE you simply set the key/value pair 'PYTHONSTARTUP=/path/to/script/below' inside the project's properties section.
See in-line comments for more information.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# ===========================================================================
# Author: Noel Milton Vega (PRISMALYTICS, LLC.)
# ===========================================================================
# Start-up script for 'python(1)', 'bpython(1)', and Python IDE interpreters
# when you want a 'client-mode' SPARK Shell (i.e. interactive SPARK shell)
# environment either LOCALLY, on a SPARK Standalone Cluster, or on SPARK
# YARN cluster. The code-sense/intelligence of bpython(1) and IDEs, in
# particular will aid in learning the SPARK core API.
#
# This script basically (1) first sets up an environment to launch a SPARK
# Shell, then (2) launches the SPARK Shell using the 'shell.py' python script
# provided in the distribution's SPARK_HOME; and finally (3) imports our
# favorite Python modules (for convenience; e.g. numpy, scipy; etc.).
#
# IMPORTANT:
# DON'T RUN THIS SCRIPT DIRECTLY. It is meant to be read in by interpreters
# (similar, in that respect, to a PYTHONSTARTUP script).
#
# Thus, there are two ways to use this file:
# # We can't refer to PYTHONSTARTUP inside this file b/c that causes a recursion loop
# # when calling this from within IDEs. So in step (0) we alias PYTHONSTARTUP to
# # PYSTARTUP at the O/S level, and use that alias here (since no conflict with that).
# (0): user$ export PYSTARTUP=${PYTHONSTARTUP} # We can't use PYTHONSTARTUP in this file
# (1): user$ export MASTER='yarn-client | local[NN] | spark://host:port'
# user$ bpython|python -i /path/to/this/file
#
# (2): From within your favorite IDE, specify it as your python startup
# script. For example, from within a WINGIDE project, set the following
# variables within a WING Project: 'Project -> Project Properties':
# 'PYTHONSTARTUP=/path/to/this/very/file'
# 'MASTER=yarn-client | local[NN] | spark://host:port'
# ===========================================================================
import sys, os, glob, subprocess, random
namenode = os.getenv('NAMENODE')
SPARK_HOME = os.getenv('SPARK_HOME')
# ===========================================================================
# =================================================================================
# This function emulates the action of "source" or '.' that exists in bash(1),
# and can be used to set PYTHON environment variables (in Pythons globals dict).
# =================================================================================
def source(script, update=True):
proc = subprocess.Popen(". %s; env -0" % script, stdout=subprocess.PIPE, shell=True)
output = proc.communicate()[0]
env = dict((line.split("=", 1) for line in output.split('\x00') if line))
if update: os.environ.update(env)
return env
# ================================================================================
# ================================================================================
# Here, we get the name of our current SPARK Assembly JAR file (locally). We
# use that to create an HDFS URL that points to its location in HDFS when using
# YARN (i.e. when 'export MASTER=yarn-client'; we ignore it otherwise).
# ================================================================================
# Remember to always upload/update your distribution's current SPARK Assembly JAR
# to HDFS like this:
# $ hdfs dfs -mkdir -p /user/spark/share/lib" # Only necessary to do once!
# $ hdfs dfs -rm "/user/spark/share/lib/spark-assembly-*.jar" # Remove old version.
# $ hdfs dfs -put ${SPARK_HOME}/assembly/lib/spark-assembly-[0-9]*.jar /user/spark/share/lib/
# ================================================================================
SPARK_JAR_LOCATION = glob.glob(SPARK_HOME + '/lib/' + 'spark-assembly-[0-9]*.jar')[0].split("/")[-1]
SPARK_JAR_LOCATION = 'hdfs://' + namenode + ':8020/user/spark/share/lib/' + SPARK_JAR_LOCATION
# ================================================================================
# ================================================================================
# Update Pythons globals environment variable dict with necessary environment
# variables that the SPARK Shell will be looking for. Some we set explicitly via
# an in-line dictionary, as shown below. And the rest are set by 'source'ing the
# global SPARK environment file (although we could have included those explicitly
# here too, if we preferred not to touch that system-wide file -- and leave it as FCS).
# ================================================================================
spark_jar_opt = None
MASTER = os.getenv('MASTER') if os.getenv('MASTER') else 'local[8]'
if MASTER.startswith('yarn-'): spark_jar_opt = ' -Dspark.yarn.jar=' + SPARK_JAR_LOCATION
elif MASTER.startswith('spark://'): pass
else: HADOOP_HOME = ''
# ================================================================================
# ================================================================================
# Build '--driver-java-options' options for spark-shell, pyspark, or spark-submit.
# Many of these are set in '/etc/spark/conf/spark-defaults.conf' (and thus
# commented out here, but left here for reference completeness).
# ================================================================================
# Default UI port is 4040. The next statement allows us to run multiple SPARK shells.
DRIVER_JAVA_OPTIONS = '-Dspark.ui.port=' + str(random.randint(1025, 65535))
DRIVER_JAVA_OPTIONS += spark_jar_opt if spark_jar_opt else ''
# ================================================================================
# ================================================================================
# Build PYSPARK_SUBMIT_ARGS (i.e. the same ones shown in 'pyspark --help'), and
# apply them to the O/S environment.
# ================================================================================
DRIVER_JAVA_OPTIONS = "'" + DRIVER_JAVA_OPTIONS + "'"
PYSPARK_SUBMIT_ARGS = ' --master ' + MASTER # Remember to set MASTER on UNIX CLI or in the IDE!
PYSPARK_SUBMIT_ARGS += ' --driver-java-options ' + DRIVER_JAVA_OPTIONS # Built above.
# ================================================================================
os.environ.update(source('/etc/spark/conf/spark-env.sh', update = False))
os.environ.update({ 'PYSPARK_SUBMIT_ARGS' : PYSPARK_SUBMIT_ARGS })
# ================================================================================
# ================================================================================
# Next, adjust 'sys.path' so SPARK Shell has the python modules it needs.
# ================================================================================
SPARK_PYTHON_DIR = SPARK_HOME + '/python'
PY4J = glob.glob(SPARK_PYTHON_DIR + '/lib/' + 'py4j-*-src.zip')[0].split("/")[-1]
sys.path = [SPARK_PYTHON_DIR, SPARK_PYTHON_DIR + '/lib/' + PY4J] + sys.path
# ================================================================================
# ================================================================================
# With our environment set, we start the SPARK Shell; and then to that, we add
# our favorite Python imports (e.g. numpy, scipy; etc).
# ================================================================================
print('PYSPARK_SUBMIT_ARGS:' + PYSPARK_SUBMIT_ARGS) # For visual debug.
execfile(SPARK_HOME + '/python/pyspark/shell.py', globals()) # Start the SPARK Shell.
execfile(os.getenv('PYSTARTUP')) # Next, load our favorite Python modules.
# ================================================================================
Enjoy and good luck! =:)
Thanks to Ophir YokTon's post above, I finally managed to do it with Spark 1.4.1 + Spyder 2.3.4.
Here I would like to give a summary of all my steps, in the hope it helps people in similar situations.
Add the PYTHONPATH variable to .bashrc (of course you can put it in another relevant profile file):
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
Make it effective with:
source .bashrc
Create a copy of spyder as spyder.py in your Spyder bin directory:
cp spyder spyder.py
Start the Spyder IDE with the following command:
spark-submit spyder.py
I implemented the sample "Simple App" from Apache Spark and it passed when run in the Spyder environment. Please refer to the picture at http://i.stack.imgur.com/xTv6s.gif
pyspark probably isn't on your PYTHONPATH. Go to the location where the pyspark folder lives and add that folder to your Python path.
If you just want to import the module, adding it to the Python path is enough.
If you want to run complete scripts from the IDE, you can create a 'tool' that uses spark-submit to execute your script from the IDE (instead of the normal run).
Specifically for Spyder (or other IDEs that are written in Python) you can run the IDE from within spark-submit.
example:
spark-submit.cmd c:\Python27\Scripts\spyder.py
Note that I had to rename spyder to spyder.py; it appears spark-submit relies on the extension to distinguish between Python, Java, and Scala.
Add any required parameters to spark-submit.
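To tie the "add it to the Python path" suggestion above to concrete code, here is a minimal sketch that can sit at the top of a script (or an IDE startup file); it assumes SPARK_HOME is set and targets the Spark 1.x layout, where pyspark and the bundled py4j zip live under $SPARK_HOME/python:
import glob
import os
import sys

# Assumes SPARK_HOME points at your Spark installation (Spark 1.x layout).
spark_home = os.environ["SPARK_HOME"]
sys.path.insert(0, os.path.join(spark_home, "python"))
# The bundled py4j zip name varies by Spark version; glob for whatever shipped with yours.
sys.path.insert(0, glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip"))[0])

from pyspark import SparkContext  # should now resolve inside the IDE

sc = SparkContext("local[2]", "ide-path-test")
print(sc.parallelize(range(10)).sum())  # quick smoke test: prints 45
sc.stop()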