Below is my current /etc/init.d/celeryd script:
# Name of nodes to start, here we have a single node
#CELERYD_NODES="w1"
# or we could have three nodes:
CELERYD_NODES="w1 w2 w3"
# Where to chdir at start.
CELERYD_CHDIR="/srv/project/website"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/srv/project/logs/celery/%n.log"
CELERYD_PID_FILE="/srv/project/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="website.settings"
I now want to run periodic tasks. Adding to (or changing) my example above, how do I create the script configuration for beat mode?
Do I just add the following to the file? And what should the last line be?
# Where the Django project is.
CELERYBEAT_CHDIR="/srv/project/website"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="website.settings"
# Path to celerybeat
CELERYBEAT="/opt/project/website/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
Yes, you can add those settings to the bottom of the same file, or put them in a separate configuration file that your celerybeat init script points to.
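For example, the end of the combined file could look like this (the paths reuse the ones from your worker settings above; the exact log and pid file locations are assumptions you should adjust):
# Periodic task (beat) settings, appended below the worker settings.
CELERYBEAT="$CELERYD_CHDIR/manage.py celerybeat"
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
CELERYBEAT_LOG_FILE="/srv/project/logs/celery/beat.log"
CELERYBEAT_PID_FILE="/srv/project/celery/beat.pid"
CELERYBEAT_USER="root"
CELERYBEAT_GROUP="root"
You also need the generic celerybeat init script from the Celery distribution installed as /etc/init.d/celerybeat, since that is what reads these settings and starts the beat process.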
I followed the tutorial on http://docs.celeryproject.org/en/latest/ and I am on VirtualBox (Xubuntu 16.xx LTS) with Django 1.11.3, Celery 4.1, RabbitMQ 3.6.14 and Python 2.7.
When I started the daemon with the celerybeat init script (using the /etc/default/celeryd config file), all I got was:
[2017-11-19 01:13:00,912: INFO/MainProcess] beat: Starting...
and nothing more after that. Do you see what I could be doing wrong?
My celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'oscar.settings')
app = Celery('oscar')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Broker settings
app.conf.broker_url = 'amqp://oscar:oscar@localhost:5672/oscarRabbit'
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
some_app/tasks.py:
from __future__ import absolute_import, unicode_literals
from oscar import celery_app
from celery.schedules import crontab
from .models import HelpRequest
from datetime import datetime, timedelta
import logging
""" CONSTANTS FOR THE TIMER """
# Can be changed (by default 1 week)
WEEKS_BEFORE_PENDING = 0
DAYS_BEFORE_PENDING = 0
HOURS_BEFORE_PENDING = 0
MINUTES_BEFORE_PENDING = 1
# http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html
# for schedule : http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#crontab-schedules
@celery_app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        crontab(minute=2),
        set_open_help_request_to_pending
    )

@celery_app.task(name="HR_OPEN_TO_PENDING")
def set_open_help_request_to_pending():
    """ For timedelay idea : https://stackoverflow.com/a/27869101/6149867 """
    logging.info("RUNNING CRON TASK FOR STUDENT COLLABORATION : set_open_help_request_to_pending")
    request_list = HelpRequest.objects.filter(
        state=HelpRequest.OPEN,
        timestamp__gte=datetime.now() - timedelta(hours=HOURS_BEFORE_PENDING,
                                                  minutes=MINUTES_BEFORE_PENDING,
                                                  days=DAYS_BEFORE_PENDING,
                                                  weeks=WEEKS_BEFORE_PENDING)
    )

    if request_list:
        logging.info("FOUND %d help request(s) => PENDING", request_list.count())
        for help_request in request_list.all():
            help_request.change_state(HelpRequest.PENDING)
/etc/default/celeryd:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/jy95/Documents/oscareducation/ve/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="oscar"
# Where to chdir at start.
CELERYD_CHDIR="/home/jy95/Documents/oscareducation"
# Extra command-line arguments to the worker
# django_celery_beat for admin purposes
CELERYD_OPTS="--scheduler django_celery_beat.schedulers:DatabaseScheduler -f /var/log/celery/celery_tasks.log"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
My RabbitMQ setup:
$ sudo rabbitmqctl add_user oscar oscar
$ sudo rabbitmqctl add_vhost oscarRabbit
$ sudo rabbitmqctl set_user_tags oscar administrator
$ sudo rabbitmqctl set_permissions -p oscarRabbit oscar ".*" ".*" ".*"
The commands I run to start (and their output):
sudo rabbitmq-server -detached
sudo /etc/init.d/celerybeat start
Warning: PID file not written; -detached was passed.
/etc/init.d/celerybeat: lerybeat: not found
celery init v10.1.
Using configuration: /etc/default/celeryd
Starting celerybeat...
sudo /etc/init.d/celerybeat start
source ve/bin/activate
python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
November 19, 2017 - 01:49:22
Django version 1.11.3, using settings 'oscar.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Thanks for your answer
It looks like you've started a celerybeat process and your server, but haven't started a celery worker process.
celery -A proj worker -B
(where proj is the name of your project).
Note that you can start a celery worker with an embedded beat process rather than needing to run celerybeat separately.
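For example, with the paths and app name from your configuration (a sketch; adjust the log level and scheduler options to taste):
cd /home/jy95/Documents/oscareducation
source ve/bin/activate
celery -A oscar worker -B --loglevel=info
The -B flag embeds the beat scheduler in the worker process, so the periodic tasks get scheduled and executed by the same daemon.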
I have a website project written with Django, Celery and RabbitMQ, and a '.delay' task (which creates a new folder) is called when a button is clicked.
Everything works fine with celery (the .delay task is called, and a new folder is created) when I run celery with manage.py like:
python manage.py celeryd
However, when I ran celery as a daemon, even though there was no error, the task was not executed (no folder was created).
I was kind of following the tutorial: http://www.arruda.blog.br/programacao/django-celery-in-daemon/
My settings are:
/etc/default/celeryd:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/myproject"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS=""
# Name of the celery config module.
CELERY_CONFIG_MODULE="myproject.settings"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/w1.log"
CELERYD_PID_FILE="/var/run/celery/w1.pid"
# Workers should run as an unprivileged user.
#CELERYD_USER="root"
#CELERYD_GROUP="root"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myproject.settings"
The corresponding folders have been created too.
For the /etc/init.d/celeryd init script, I used this version:
https://raw.github.com/ask/celery/1da3aa43d1e6de525beeda398d0acb8841d5b4d2/contrib/generic-init.d/celeryd
For /var/www/myproject/myproject/settings.py, I have:
import djcelery
djcelery.setup_loader()
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
INSTALLED_APPS = (
'djcelery',
...
)
There was no error when I started celery using:
/etc/init.d/celeryd start
but no results either. Does anyone know how to fix the problem?
Celery's docs have a daemon troubleshooting section that might be helpful. Celery has a flag that lets you run your init script without actually daemonizing, and that should show what's going wrong:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
Newer versions of that init script have a dryrun command that's an easier-to-remember way to run the start command without daemonizing.
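For example, assuming a recent copy of the generic init script:
sudo /etc/init.d/celeryd dryrun
This runs the start command without daemonizing, so configuration mistakes show up directly in the terminal instead of being swallowed by the fork.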
When I run my RSpec tests, I noticed that they use my development database instead of the test database.
My spec_helper.rb file is as follow:
# This file was generated by the `rails generate rspec:install` command. Conventionally, all
# specs live under a `spec` directory, which RSpec adds to the `$LOAD_PATH`.
# The generated `.rspec` file contains `--require spec_helper` which will cause this
# file to always be loaded, without a need to explicitly require it in any files.
#
# Given that it is always loaded, you are encouraged to keep this file as
# light-weight as possible. Requiring heavyweight dependencies from this file
# will add to the boot time of your test suite on EVERY test run, even for an
# individual file that may not need all of that loaded. Instead, consider making
# a separate helper file that requires the additional dependencies and performs
# the additional setup, and require it from the spec files that actually need it.
# require 'webmock/rspec'
# WebMock.disable_net_connect!(allow_localhost: true)
#
# The `.rspec` file also contains a few flags that are not defaults but that
# users commonly want.
#
# See http://rubydoc.info/gems/rspec-core/RSpec/Core/Configuration
require 'rubygems'
# require 'test/unit'
require 'redis'
ENV['RAILS_ENV'] = 'test'
require File.expand_path("../../config/environment", __FILE__)
require 'factory_girl_rails'
# Capybara.register_driver :selenium do |app|
# Capybara::Selenium::Driver.new(app, :browser => :chrome)
# end
RSpec.configure do |config|
# rspec-expectations config goes here. You can use an alternate
# assertion/expectation library such as wrong or the stdlib/minitest
# assertions if you prefer.
config.expect_with :rspec do |expectations|
# This option will default to `true` in RSpec 4. It makes the `description`
# and `failure_message` of custom matchers include text for helper methods
# defined using `chain`, e.g.:
# be_bigger_than(2).and_smaller_than(4).description
# # => "be bigger than 2 and smaller than 4"
# ...rather than:
# # => "be bigger than 2"
expectations.include_chain_clauses_in_custom_matcher_descriptions = true
end
# rspec-mocks config goes here. You can use an alternate test double
# library (such as bogus or mocha) by changing the `mock_with` option here.
config.mock_with :rspec do |mocks|
# Prevents you from mocking or stubbing a method that does not exist on
# a real object. This is generally recommended, and will default to
# `true` in RSpec 4.
mocks.verify_partial_doubles = true
end
config.mock_with :rspec
config.before(:all) do
ActiveRecord::Base.skip_callbacks = true
end
config.after(:all) do
ActiveRecord::Base.skip_callbacks = false
end
end
And the rails_helper.rb file is as follow:
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'database_cleaner'
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
# Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
config.fixture_path = "#{::Rails.root}/spec/fixtures"
# If you're not using ActiveRecord, or you'd prefer not to run each of your
# examples within a transaction, remove the following line or assign false
# instead of true.
config.use_transactional_fixtures = false
# RSpec Rails can automatically mix in different behaviours to your tests
# based on their file location, for example enabling you to call `get` and
# `post` in specs under `spec/controllers`.
#
# You can disable this behaviour by removing the line below, and instead
# explicitly tag your specs with their type, e.g.:
#
# RSpec.describe UsersController, :type => :controller do
# # ...
# end
#
# The different available types are documented in the features, such as in
# https://relishapp.com/rspec/rspec-rails/docs
config.infer_spec_type_from_file_location!
end
BTW, if it is of any help: I just finished solving an issue with database_cleaner wiping my development db, as described in this post.
How can I restrict the test to run only in test environment, and using only the test database?
All help is welcome, thank you.
My database.yml is as follow:
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
adapter: postgresql
encoding: unicode
# For details on connection pooling, see rails configuration guide
# http://guides.rubyonrails.org/configuring.html#database-pooling
pool: 5
development:
<<: *default
database: directory-service_development
# The specified database role being used to connect to postgres.
# To create additional roles in postgres see `$ createuser --help`.
# When left blank, postgres will use the default role. This is
# the same name as the operating system user that initialized the database.
#username: directory-service
# The password associated with the postgres role (username).
#password:
# Connect on a TCP socket. Omitted by default since the client uses a
# domain socket that doesn't need configuration. Windows does not have
# domain sockets, so uncomment these lines.
#host: localhost
# The TCP port the server listens on. Defaults to 5432.
# If your server runs on a different port number, change accordingly.
#port: 5432
# Schema search path. The server defaults to $user,public
#schema_search_path: myapp,sharedapp,public
# Minimum log levels, in increasing order:
# debug5, debug4, debug3, debug2, debug1,
# log, notice, warning, error, fatal, and panic
# Defaults to warning.
#min_messages: notice
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
<<: *default
database: directory-service_test
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
# DATABASE_URL="postgres://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
<<: *default
database: directory-service_production
username: directory-service
password: <%= ENV['DIRECTORY-SERVICE_DATABASE_PASSWORD'] %>
Using puts ENV["RAILS_ENV"], I could see that my test was indeed running in the test environment.
But the local foreman server that was running was getting its data from the development environment.
By manually specifying that the server should run in the test environment, the test also uses data from the test environment.
Big thanks to @AndyWaite.
What worked for me is the following:
Stop foreman (or your server running in development)
bin/rails db:migrate RAILS_ENV=test (optional)
bin/rails db:environment:set RAILS_ENV=test (set env explicitly)
rails server -e test (in another window)
rspec ___ (start testing)
I have database-cleaner configured for my rails 4 application,
Each time I run the tests, my database gets wiped out in both the test and development environments.
My configurations are in rails_helper as follow:
ENV["RAILS_ENV"] ||= 'test'
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'database_cleaner'
Rails.env = "test"
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
# Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
# Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
config.fixture_path = "#{::Rails.root}/spec/fixtures"
# If you're not using ActiveRecord, or you'd prefer not to run each of your
# examples within a transaction, remove the following line or assign false
# instead of true.
config.use_transactional_fixtures = false
# RSpec Rails can automatically mix in different behaviours to your tests
# based on their file location, for example enabling you to call `get` and
# `post` in specs under `spec/controllers`.
#
# You can disable this behaviour by removing the line below, and instead
# explicitly tag your specs with their type, e.g.:
#
# RSpec.describe UsersController, :type => :controller do
# # ...
# end
#
# The different available types are documented in the features, such as in
# https://relishapp.com/rspec/rspec-rails/docs
config.infer_spec_type_from_file_location!
config.before(:suite) do
DatabaseCleaner.clean_with(:truncation)
end
config.before(:each) do
DatabaseCleaner.strategy = :transaction
end
config.before(:each, :js => true) do
DatabaseCleaner.strategy = :truncation
end
config.before(:each) do
DatabaseCleaner.start
end
config.after(:each) do
DatabaseCleaner.clean
end
config.mock_with :rspec
config.before(:all) do
ActiveRecord::Base.skip_callbacks = true
end
config.after(:all) do
ActiveRecord::Base.skip_callbacks = false
end
end
How can I ensure that the cleaner only wipes the db in the test environment, without touching my development database?
My database.yml is as follow:
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
adapter: postgresql
encoding: unicode
# For details on connection pooling, see rails configuration guide
# http://guides.rubyonrails.org/configuring.html#database-pooling
pool: 5
development:
<<: *default
database: directory-service_development
# The specified database role being used to connect to postgres.
# To create additional roles in postgres see `$ createuser --help`.
# When left blank, postgres will use the default role. This is
# the same name as the operating system user that initialized the database.
#username: directory-service
# The password associated with the postgres role (username).
#password:
# Connect on a TCP socket. Omitted by default since the client uses a
# domain socket that doesn't need configuration. Windows does not have
# domain sockets, so uncomment these lines.
#host: localhost
# The TCP port the server listens on. Defaults to 5432.
# If your server runs on a different port number, change accordingly.
#port: 5432
# Schema search path. The server defaults to $user,public
#schema_search_path: myapp,sharedapp,public
# Minimum log levels, in increasing order:
# debug5, debug4, debug3, debug2, debug1,
# log, notice, warning, error, fatal, and panic
# Defaults to warning.
#min_messages: notice
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
<<: *default
database: directory-service_test
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
# DATABASE_URL="postgres://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
<<: *default
database: directory-service_production
username: directory-service
password: <%= ENV['DIRECTORY-SERVICE_DATABASE_PASSWORD'] %>
I'd recommend changing
ENV["RAILS_ENV"] ||= 'test'
to
ENV["RAILS_ENV"] = 'test'
and removing
Rails.env = 'test'
as the RAILS_ENV environment variable should be sufficient for configuration.
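A minimal sketch of how the top of rails_helper.rb could look with that change applied (the abort guard is an extra precaution, not something from your original file):
# rails_helper.rb
ENV["RAILS_ENV"] = "test"   # force the test environment unconditionally
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
# Optional safety net: stop immediately if another environment was loaded anyway.
abort("Rails is running in #{Rails.env} mode!") unless Rails.env.test?
require 'rspec/rails'
require 'database_cleaner'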
If anyone is looking for another potential source of this issue: I had $DATABASE_URL defined in my .bashrc file, pointing directly to my development database. It took me a few hours to find that.
In my case it was the database connection specified in a .env file, since I was using the dotenv-rails gem. For some reason database_cleaner preferred the connection from there over the Rails application config.
Well, I'm not sure what I was doing wrong, but I undid all the configuration I had for database_cleaner:
uninstalling the database_cleaner gem
removing all related configuration from both spec_helper and rails_helper
Then, following this guide by Avdi Grimm, after re-installing the database_cleaner gem and uncommenting this line:
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
in my rails_helper, I was able to get database_cleaner working as expected again. Thank you all.
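For reference, a sketch of one conventional database_cleaner arrangement placed in spec/support/database_cleaner.rb (so the uncommented Dir[...] line picks it up); this is not the exact content of that guide, just the general shape:
# spec/support/database_cleaner.rb
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end
  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end
  config.before(:each, js: true) do
    DatabaseCleaner.strategy = :truncation
  end
  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end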
I appreciate this is an old post but I had this issue today.
I checked using pry: my ENV['RAILS_ENV'] was 'test', but my ENV['DATABASE_URL'] was set to my development db, in the form of:
postgres://localhost/my_dev_db
I added a line in the database cleaner config in rails_helper.rb to change to my test db like so:
config.before(:suite) do
ActiveRecord::Base.establish_connection(ENV['DATABASE_TEST'])
DatabaseCleaner.clean_with(:truncation)
end
where ENV['DATABASE_TEST'] was in the form of:
postgres://localhost/my_test_db
This solved the issue for me.
For me the issue was having DatabaseCleaner.clean on the top level of rails_helper instead of within config.before(:suite).
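In other words, a minimal sketch of the difference (surrounding rails_helper content omitted):
# Wrong: runs while rails_helper is still being loaded, against whatever
# database connection happens to be configured at that moment.
# DatabaseCleaner.clean

# Right: runs once the suite starts, inside the RSpec test environment.
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean
  end
end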
I can run PySpark from the command line and everything works fine:
~/spark-1.0.0-bin-hadoop1/bin$ ./pyspark
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.0.0
/_/
Using Python version 2.7.6 (default, May 27 2014 14:50:58)
However, when I try to do this in a Python IDE:
import pyspark
ImportError: No module named pyspark
How do I import it like other Python libraries such as numpy, scikit-learn, etc.?
Working in the terminal is fine; I just want to work in the IDE.
I wrote this launcher script a while back expressly for that purpose. I wanted to be able to interact with the pyspark shell from within the bpython(1) code-completion interpreter and WING IDE, or any IDE for that matter because they have code completion as well as provide a complete development experience. Learning Spark core by just typing 'pyspark' isn't good enough. So I wrote this. This was written in a Cloudera CDH5 environment, but with a little tweaking you can get this to work in whatever your environment is (even manually installed ones).
How to use:
NOTE: You can place all of the following in your .profile (or equivalent).
(1) linux$ export MASTER='yarn-client | local[NN] | spark://host:port'
(2) linux$ export SPARK_HOME=/usr/lib/spark # Yours will vary.
(3) linux$ export JAVA_HOME=/usr/java/latest # Yours will vary.
(4) linux$ export NAMENODE='vps00' # Yours will vary.
(5) linux$ export PYSTART=${PYTHONSTARTUP} # See in-line comments about the reason for the need for this alias to PYTHONSTARTUP.
(6) linux$ export HADOOP_CONF_DIR=/etc/hadoop/conf # Yours will vary. This one may not be necessary to set. Try and see.
(7) linux$ export HADOOP_HOME=/usr/lib/hadoop # Yours will vary. This one may not be necessary to set. Try and see.
(8) bpython -i /path/to/script/below # The moment of truth. Note that this is 'bpython' (not just plain 'python', which would not give the code completion you desire).
>>> sc
<pyspark.context.SparkContext object at 0x2798110>
>>>
Now for use with an IDE, you simply determine how to specify the equivalent of a PYTHONSTARTUP script for that IDE, and set that to '/path/to/script/below'. For example, as I described in the in-line comments below, for WING IDE you simply set the key/value pair 'PYTHONSTARTUP=/path/to/script/below' inside the project's properties section.
See in-line comments for more information.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# ===========================================================================
# Author: Noel Milton Vega (PRISMALYTICS, LLC.)
# ===========================================================================
# Start-up script for 'python(1)', 'bpython(1)', and Python IDE interpreters
# when you want a 'client-mode' SPARK Shell (i.e. interactive SPARK shell)
# environment either LOCALLY, on a SPARK Standalone Cluster, or on SPARK
# YARN cluster. The code-sense/intelligence of bpython(1) and IDEs, in
# particular will aid in learning the SPARK core API.
#
# This script basically (1) first sets up an environment to launch a SPARK
# Shell, then (2) launches the SPARK Shell using the 'shell.py' python script
# provided in the distribution's SPARK_HOME; and finally (3) imports our
# favorite Python modules (for convenience; e.g. numpy, scipy; etc.).
#
# IMPORTANT:
# DON'T RUN THIS SCRIPT DIRECTLY. It is meant to be read in by interpreters
# (similar, in that respect, to a PYTHONSTARTUP script).
#
# Thus, there are two ways to use this file:
# # We can't refer to PYTHONSTARTUP inside this file b/c that causes a recursion loop
# # when calling this from within IDEs. So in step (0) we alias PYTHONSTARTUP to
# # PYSTARTUP at the O/S level, and use that alias here (since no conflict with that).
# (0): user$ export PYSTARTUP=${PYTHONSTARTUP} # We can't use PYTHONSTARTUP in this file
# (1): user$ export MASTER='yarn-client | local[NN] | spark://host:port'
# user$ bpython|python -i /path/to/this/file
#
# (2): From within your favorite IDE, specify it as your python startup
# script. For example, from within a WINGIDE project, set the following
# variables within a WING Project: 'Project -> Project Properties':
# 'PYTHONSTARTUP=/path/to/this/very/file'
# 'MASTER=yarn-client | local[NN] | spark://host:port'
# ===========================================================================
import sys, os, glob, subprocess, random
namenode = os.getenv('NAMENODE')
SPARK_HOME = os.getenv('SPARK_HOME')
# ===========================================================================
# =================================================================================
# This function emulates the action of "source" or '.' that exists in bash(1),
# and can be used to set PYTHON environment variables (in Python's globals dict).
# =================================================================================
def source(script, update=True):
    proc = subprocess.Popen(". %s; env -0" % script, stdout=subprocess.PIPE, shell=True)
    output = proc.communicate()[0]
    env = dict((line.split("=", 1) for line in output.split('\x00') if line))
    if update: os.environ.update(env)
    return env
# ================================================================================
# ================================================================================
# Here, we get the file name of our current SPARK Assembly JAR (locally). We
# use that to create an HDFS URL that points to its location in HDFS when using
# YARN (i.e. when 'export MASTER=yarn-client'; we ignore it otherwise).
# ================================================================================
# Remember to always upload/update your distribution's current SPARK Assembly JAR
# to HDFS like this:
# $ hdfs dfs -mkdir -p /user/spark/share/lib" # Only necessary to do once!
# $ hdfs dfs -rm "/user/spark/share/lib/spark-assembly-*.jar" # Remove old version.
# $ hdfs dfs -put ${SPARK_HOME}/assembly/lib/spark-assembly-[0-9]*.jar /user/spark/share/lib/
# ================================================================================
SPARK_JAR_LOCATION = glob.glob(SPARK_HOME + '/lib/' + 'spark-assembly-[0-9]*.jar')[0].split("/")[-1]
SPARK_JAR_LOCATION = 'hdfs://' + namenode + ':8020/user/spark/share/lib/' + SPARK_JAR_LOCATION
# ================================================================================
# ================================================================================
# Update Pythons globals environment variable dict with necessary environment
# variables that the SPARK Shell will be looking for. Some we set explicitly via
# an in-line dictionary, as shown below. And the rest are set by 'source'ing the
# global SPARK environment file (although we could have included those explicitly
# here too, if we preferred not to touch that system-wide file -- and leave it as FCS).
# ================================================================================
spark_jar_opt = None
MASTER = os.getenv('MASTER') if os.getenv('MASTER') else 'local[8]'
if MASTER.startswith('yarn-'): spark_jar_opt = ' -Dspark.yarn.jar=' + SPARK_JAR_LOCATION
elif MASTER.startswith('spark://'): pass
else: HADOOP_HOME = ''
# ================================================================================
# ================================================================================
# Build '--driver-java-options' options for spark-shell, pyspark, or spark-submit.
# Many of these are set in '/etc/spark/conf/spark-defaults.conf' (and thus
# commented out here, but left here for reference completeness).
# ================================================================================
# Default UI port is 4040. The next statement allows us to run multiple SPARK shells.
DRIVER_JAVA_OPTIONS = '-Dspark.ui.port=' + str(random.randint(1025, 65535))
DRIVER_JAVA_OPTIONS += spark_jar_opt if spark_jar_opt else ''
# ================================================================================
# ================================================================================
# Build PYSPARK_SUBMIT_ARGS (i.e. the same ones shown in 'pyspark --help'), and
# apply them to the O/S environment.
# ================================================================================
DRIVER_JAVA_OPTIONS = "'" + DRIVER_JAVA_OPTIONS + "'"
PYSPARK_SUBMIT_ARGS = ' --master ' + MASTER # Remember to set MASTER on UNIX CLI or in the IDE!
PYSPARK_SUBMIT_ARGS += ' --driver-java-options ' + DRIVER_JAVA_OPTIONS # Built above.
# ================================================================================
os.environ.update(source('/etc/spark/conf/spark-env.sh', update = False))
os.environ.update({ 'PYSPARK_SUBMIT_ARGS' : PYSPARK_SUBMIT_ARGS })
# ================================================================================
# ================================================================================
# Next, adjust 'sys.path' so SPARK Shell has the python modules it needs.
# ================================================================================
SPARK_PYTHON_DIR = SPARK_HOME + '/python'
PY4J = glob.glob(SPARK_PYTHON_DIR + '/lib/' + 'py4j-*-src.zip')[0].split("/")[-1]
sys.path = [SPARK_PYTHON_DIR, SPARK_PYTHON_DIR + '/lib/' + PY4J] + sys.path
# ================================================================================
# ================================================================================
# With our environment set, we start the SPARK Shell; and then to that, we add
# our favorite Python imports (e.g. numpy, scipy; etc).
# ================================================================================
print('PYSPARK_SUBMIT_ARGS:' + PYSPARK_SUBMIT_ARGS) # For visual debug.
execfile(SPARK_HOME + '/python/pyspark/shell.py', globals()) # Start the SPARK Shell.
execfile(os.getenv('PYSTARTUP')) # Next, load our favorite Python modules.
# ================================================================================
Enjoy and good luck! =:)
Thanks to Ophir YokTon's post above, I finally managed to do it with Spark 1.4.1 + Spyder 2.3.4.
Here is a summary of all my steps; I hope it can help people in similar situations.
Add the PYTHONPATH variable to .bashrc (of course you can put it in another relevant profile file):
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
Make it effective by
source .bashrc
Create a copy of spyder as spyder.py in your Spyder bin directory:
cp spyder spyder.py
Start the Spyder IDE with the following command:
spark-submit spyder.py
I implemented the sample "simple app" from Apache Spark and it passed when run in the Spyder environment; please refer to the picture at http://i.stack.imgur.com/xTv6s.gif
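For reference, the "simple app" referred to is roughly the word-count example from the Spark quick start; the README path below is an assumption, and any local text file will do:
from pyspark import SparkContext

log_file = "/path/to/spark/README.md"  # assumption: any local text file works
sc = SparkContext("local", "Simple App")
log_data = sc.textFile(log_file).cache()

num_as = log_data.filter(lambda s: 'a' in s).count()
num_bs = log_data.filter(lambda s: 'b' in s).count()
print("Lines with a: %i, lines with b: %i" % (num_as, num_bs))
sc.stop()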
pyspark probably isn't on your PYTHONPATH. Go to the location of the pyspark folder and add that folder to your path.
If you just want to import the module, adding it to the Python path is enough (see the sketch after this answer).
If you want to run complete scripts from the IDE, you can create a 'tool' that uses spark-submit to execute your script from the IDE (instead of a normal run).
Specifically for Spyder (or other IDEs that are written in Python), you can run the IDE from within spark-submit.
example:
spark-submit.cmd c:\Python27\Scripts\spyder.py
Note that I had to rename spyder to spyder.py; it appears spark-submit relies on the extension to distinguish between Python, Java and Scala.
Add any required parameters to spark-submit.
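A minimal sketch of the first option, adding pyspark to sys.path by hand (the SPARK_HOME fallback reuses the path from the question; adjust it to your installation):
import glob
import os
import sys

spark_home = os.environ.get("SPARK_HOME",
                            os.path.expanduser("~/spark-1.0.0-bin-hadoop1"))
# pyspark itself lives under $SPARK_HOME/python ...
sys.path.insert(0, os.path.join(spark_home, "python"))
# ... and it needs the bundled py4j zip to talk to the JVM (raises IndexError if missing).
sys.path.insert(0, glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip"))[0])

import pyspark  # should now resolve inside the IDE as well
print(pyspark.__file__)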