Running PySpark in an IDE like Spyder? - python-2.7

I can run PySpark from the command line and everything works fine.
~/spark-1.0.0-bin-hadoop1/bin$ ./pyspark
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.0.0
      /_/
Using Python version 2.7.6 (default, May 27 2014 14:50:58)
However, when I try this in a Python IDE:
import pyspark
ImportError: No module named pyspark
How do I import it like other Python libraries such as numpy, scikit, etc.?
Working from the terminal is fine; I just want to work in the IDE.

I wrote this launcher script a while back expressly for that purpose. I wanted to be able to interact with the pyspark shell from within the bpython(1) code-completion interpreter and WING IDE, or any IDE for that matter, because they offer code completion as well as a complete development experience. Learning Spark core by just typing 'pyspark' isn't good enough. So I wrote this. It was written in a Cloudera CDH5 environment, but with a little tweaking you can get it to work in whatever your environment is (even manually installed ones).
How to use:
NOTE: You can place all of the following in your .profile (or equivalent).
(1) linux$ export MASTER='yarn-client | local[NN] | spark://host:port'
(2) linux$ export SPARK_HOME=/usr/lib/spark # Yours will vary.
(3) linux$ export JAVA_HOME=/usr/java/latest # Yours will vary.
(4) linux$ export NAMENODE='vps00' # Yours will vary.
(5) linux$ export PYSTARTUP=${PYTHONSTARTUP} # See the in-line comments for why this alias to PYTHONSTARTUP is needed.
(6) linux$ export HADOOP_CONF_DIR=/etc/hadoop/conf # Yours will vary. This one may not be necessary to set. Try and see.
(7) linux$ export HADOOP_HOME=/usr/lib/hadoop # Yours will vary. This one may not be necessary to set. Try and see.
(8) bpython -i /path/to/script/below # The moment of truth. Note that this is 'bpython' (not plain 'python', which would not give the code completion you desire).
>>> sc
<pyspark.context.SparkContext object at 0x2798110>
>>>
Now for use with an IDE, you simply determine how to specify the equivalent of a PYTHONSTARTUP script for that IDE, and set that to '/path/to/script/below'. For example, as I described in the in-line comments below, for WING IDE you simply set the key/value pair 'PYTHONSTARTUP=/path/to/script/below' inside the project's properties section.
See in-line comments for more information.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# ===========================================================================
# Author: Noel Milton Vega (PRISMALYTICS, LLC.)
# ===========================================================================
# Start-up script for 'python(1)', 'bpython(1)', and Python IDE interpreters
# when you want a 'client-mode' SPARK Shell (i.e. interactive SPARK shell)
# environment either LOCALLY, on a SPARK Standalone Cluster, or on SPARK
# YARN cluster. The code-sense/intelligence of bpython(1) and IDEs, in
# particular will aid in learning the SPARK core API.
#
# This script basically (1) first sets up an environment to launch a SPARK
# Shell, then (2) launches the SPARK Shell using the 'shell.py' python script
# provided in the distribution's SPARK_HOME; and finally (3) imports our
# favorite Python modules (for convenience; e.g. numpy, scipy; etc.).
#
# IMPORTANT:
# DON'T RUN THIS SCRIPT DIRECTLY. It is meant to be read in by interpreters
# (similar, in that respect, to a PYTHONSTARTUP script).
#
# Thus, there are two ways to use this file:
# # We can't refer to PYTHONSTARTUP inside this file b/c that causes a recursion loop
# # when calling this from within IDEs. So in step (0) we alias PYTHONSTARTUP to
# # PYSTARTUP at the O/S level, and use that alias here (since no conflict with that).
# (0): user$ export PYSTARTUP=${PYTHONSTARTUP} # We can't use PYTHONSTARTUP in this file
# (1): user$ export MASTER='yarn-client | local[NN] | spark://host:port'
# user$ bpython|python -i /path/to/this/file
#
# (2): From within your favorite IDE, specify it as your python startup
# script. For example, from within a WINGIDE project, set the following
# variables within a WING Project: 'Project -> Project Properties':
# 'PYTHONSTARTUP=/path/to/this/very/file'
# 'MASTER=yarn-client | local[NN] | spark://host:port'
# ===========================================================================
import sys, os, glob, subprocess, random
namenode = os.getenv('NAMENODE')
SPARK_HOME = os.getenv('SPARK_HOME')
# ===========================================================================
# =================================================================================
# This function emulates the action of "source" or '.' in bash(1), and can be
# used to import environment variables into Python's os.environ dict.
# =================================================================================
def source(script, update=True):
    proc = subprocess.Popen(". %s; env -0" % script, stdout=subprocess.PIPE, shell=True)
    output = proc.communicate()[0]
    env = dict((line.split("=", 1) for line in output.split('\x00') if line))
    if update: os.environ.update(env)
    return env
# ================================================================================
# ================================================================================
# Here, we get the file name of our current SPARK Assembly JAR (locally). We
# use that to create a HDFS URL that points to its location in HDFS when using
# YARN (i.e. when 'export MASTER=yarn-client'; we ignore it otherwise).
# ================================================================================
# Remember to always upload/update your distribution's current SPARK Assembly JAR
# to HDFS like this:
# $ hdfs dfs -mkdir -p /user/spark/share/lib # Only necessary to do once!
# $ hdfs dfs -rm "/user/spark/share/lib/spark-assembly-*.jar" # Remove old version.
# $ hdfs dfs -put ${SPARK_HOME}/assembly/lib/spark-assembly-[0-9]*.jar /user/spark/share/lib/
# ================================================================================
SPARK_JAR_LOCATION = glob.glob(SPARK_HOME + '/lib/' + 'spark-assembly-[0-9]*.jar')[0].split("/")[-1]
SPARK_JAR_LOCATION = 'hdfs://' + namenode + ':8020/user/spark/share/lib/' + SPARK_JAR_LOCATION
# ================================================================================
# ================================================================================
# Update Python's global environment variable dict with necessary environment
# variables that the SPARK Shell will be looking for. Some we set explicitly via
# an in-line dictionary, as shown below. And the rest are set by 'source'ing the
# global SPARK environment file (although we could have included those explicitly
# here too, if we preferred not to touch that system-wide file -- and leave it as FCS).
# ================================================================================
spark_jar_opt = None
MASTER = os.getenv('MASTER') if os.getenv('MASTER') else 'local[8]'
if MASTER.startswith('yarn-'): spark_jar_opt = ' -Dspark.yarn.jar=' + SPARK_JAR_LOCATION
elif MASTER.startswith('spark://'): pass
else: HADOOP_HOME = ''
# ================================================================================
# ================================================================================
# Build '--driver-java-options' options for spark-shell, pyspark, or spark-submit.
# Many of these are set in '/etc/spark/conf/spark-defaults.conf' (and thus
# commented out here, but left here for reference completeness).
# ================================================================================
# Default UI port is 4040. The next statement allows us to run multiple SPARK shells.
DRIVER_JAVA_OPTIONS = '-Dspark.ui.port=' + str(random.randint(1025, 65535))
DRIVER_JAVA_OPTIONS += spark_jar_opt if spark_jar_opt else ''
# ================================================================================
# ================================================================================
# Build PYSPARK_SUBMIT_ARGS (i.e. the same ones shown in 'pyspark --help'), and
# apply them to the O/S environment.
# ================================================================================
DRIVER_JAVA_OPTIONS = "'" + DRIVER_JAVA_OPTIONS + "'"
PYSPARK_SUBMIT_ARGS = ' --master ' + MASTER # Remember to set MASTER on UNIX CLI or in the IDE!
PYSPARK_SUBMIT_ARGS += ' --driver-java-options ' + DRIVER_JAVA_OPTIONS # Built above.
# ================================================================================
os.environ.update(source('/etc/spark/conf/spark-env.sh', update = False))
os.environ.update({ 'PYSPARK_SUBMIT_ARGS' : PYSPARK_SUBMIT_ARGS })
# ================================================================================
# ================================================================================
# Next, adjust 'sys.path' so SPARK Shell has the python modules it needs.
# ================================================================================
SPARK_PYTHON_DIR = SPARK_HOME + '/python'
PY4J = glob.glob(SPARK_PYTHON_DIR + '/lib/' + 'py4j-*-src.zip')[0].split("/")[-1]
sys.path = [SPARK_PYTHON_DIR, SPARK_PYTHON_DIR + '/lib/' + PY4J] + sys.path
# ================================================================================
# ================================================================================
# With our environment set, we start the SPARK Shell; and then to that, we add
# our favorite Python imports (e.g. numpy, scipy; etc).
# ================================================================================
print('PYSPARK_SUBMIT_ARGS:' + PYSPARK_SUBMIT_ARGS) # For visual debug.
execfile(SPARK_HOME + '/python/pyspark/shell.py', globals()) # Start the SPARK Shell.
execfile(os.getenv('PYSTARTUP')) # Next, load our favorite Python modules.
# ================================================================================
Enjoy and good luck! =:)

Thanks to Ophir YokTon's post above, I finally managed to do it with Spark 1.4.1 + Spyder 2.3.4.
Here is a summary of all my steps; I hope it can help people in similar situations.
Add the PYTHONPATH variable to .bashrc (of course, you can put it in another relevant profile file):
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
Make it take effect by running:
source .bashrc
Create a copy of spyder as spyder.py in your Spyder bin directory:
cp spyder spyder.py
Start the Spyder IDE with the following command:
spark-submit spyder.py
I implemented the sample "simple app" from Apache Spark and it passed the test when run in the Spyder environment. Please refer to the picture: http://i.stack.imgur.com/xTv6s.gif
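For reference, the "simple app" test is along the lines of the following; this is only a rough sketch based on the Apache Spark quick start, and the README.md path is a placeholder you would adjust:
from pyspark import SparkContext
# Count the lines containing 'a' and 'b' in Spark's README, per the quick start.
sc = SparkContext("local", "Simple App")
log_data = sc.textFile("/path/to/spark/README.md").cache()
num_as = log_data.filter(lambda s: 'a' in s).count()
num_bs = log_data.filter(lambda s: 'b' in s).count()
print("Lines with a: %i, lines with b: %i" % (num_as, num_bs))
sc.stop()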

pyspark probably isn't on your PYTHONPATH. Go to the location where the pyspark folder resides and add that folder to your PYTHONPATH.
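In code form, the idea looks roughly like this; a minimal sketch, assuming SPARK_HOME points at your Spark installation and that a bundled py4j zip exists under python/lib in your distribution:
import glob, os, sys
# Make the pyspark package (and its bundled py4j dependency) importable.
SPARK_HOME = os.environ['SPARK_HOME']  # e.g. ~/spark-1.0.0-bin-hadoop1
sys.path.insert(0, os.path.join(SPARK_HOME, 'python'))
sys.path.insert(0, glob.glob(os.path.join(SPARK_HOME, 'python', 'lib', 'py4j-*-src.zip'))[0])
import pyspark  # should now resolve inside the IDE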

If you just want to import the module, adding it to the Python path is enough.
If you want to run complete scripts from the IDE, you can create a 'tool' that uses spark-submit to execute your script from the IDE (instead of the normal run).
Specifically for Spyder (or other IDEs that are written in Python), you can run the IDE from within spark-submit.
example:
spark-submit.cmd c:\Python27\Scripts\spyder.py
note that I had to rename spyder to spyder.py - it appears spark-submit relies on the extension to distinguish between Python, Java, and Scala
add any required parameters to spark-submit


Is there a way to directly apply SMOTE to a xxx.csv in python-weka-wrapper3?

For example, I have a data set which has 18 instances (class values True:False = 12:6) and 11 attributes. Can I just use SMOTE in Python like in Weka?
How can I do it in code?
Have you looked at the API examples and the example repository? These resources explain most if not all of the available functionality.
Here is an example of loading a CSV file and applying the SMOTE filter:
import weka.core.jvm as jvm
from weka.core.classes import from_commandline
from weka.core.packages import install_missing_package, installed_package
from weka.core.converters import load_any_file
jvm.start(packages=True)
# install SMOTE if necessary
installed = installed_package("SMOTE")
if not installed:
    success, restart = install_missing_package("SMOTE")
    if restart:
        print("Please rerun script")
        jvm.stop()
        import sys
        sys.exit(0)
# load data
data = load_any_file("/some/where/iris.csv", class_index="last")
# if the default parameters for loading the CSV file don't work,
# you need to configure the CSVLoader yourself and set the
# appropriate options (or even use a 3rd-party package):
# from weka.core.converters import Loader
# loader = Loader(classname="weka.core.converters.CSVLoader", options=[])
# data = loader.load_file("/some/where/iris.csv", class_index="last")
print(data.num_instances)
# apply SMOTE
# replace the command-line with the one that you can copy/paste from
# the Weka Explorer via right-click menu
smote = from_commandline("weka.filters.supervised.instance.SMOTE -C 0 -K 5 -P 100.0 -S 1", classname="weka.filters.Filter")
smote.inputformat(data)
filtered = smote.filter(data)
print(filtered.num_instances)
jvm.stop()
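If you also want to persist the oversampled dataset, the converters module should be able to write it back out as well. A minimal sketch (the output path is hypothetical), to be run before the jvm.stop() call:
from weka.core.converters import save_any_file
# the file extension determines which saver gets used
save_any_file(filtered, "/some/where/iris-smote.csv")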

How do I configure alembic.ini path correctly?

When I run any alembic commands, I get the following error:
FAILED: No config file 'alembic.ini' found, or file has no '[alembic]' section
According to the documentation, I need to set my path, so I did. I'm running WSL Ubuntu 18.
Am I setting the path wrong? I run the commands in my Flask project's top directory:
~/microblog$
My alembic file structure is:
~/microblog/
- migrations/
-- __pycache__/
-- versions/
-- alembic.ini
-- env.py
-- README
-- script.py.mako
alembic.ini
# A generic, single database configuration.
[alembic]
script_location = home/acanizales1/microblog/migrations/alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
... the rest below are configured correctly ...
Thank you.

rspec running with development database as opposed to test database

When I run my RSpec tests, I noticed that they are using my development database as opposed to the one for the test environment.
My spec_helper.rb file is as follows:
# This file was generated by the `rails generate rspec:install` command. Conventionally, all
# specs live under a `spec` directory, which RSpec adds to the `$LOAD_PATH`.
# The generated `.rspec` file contains `--require spec_helper` which will cause this
# file to always be loaded, without a need to explicitly require it in any files.
#
# Given that it is always loaded, you are encouraged to keep this file as
# light-weight as possible. Requiring heavyweight dependencies from this file
# will add to the boot time of your test suite on EVERY test run, even for an
# individual file that may not need all of that loaded. Instead, consider making
# a separate helper file that requires the additional dependencies and performs
# the additional setup, and require it from the spec files that actually need it.
# require 'webmock/rspec'
# WebMock.disable_net_connect!(allow_localhost: true)
#
# The `.rspec` file also contains a few flags that are not defaults but that
# users commonly want.
#
# See http://rubydoc.info/gems/rspec-core/RSpec/Core/Configuration
require 'rubygems'
# require 'test/unit'
require 'redis'
ENV['RAILS_ENV'] = 'test'
require File.expand_path("../../config/environment", __FILE__)
require 'factory_girl_rails'
# Capybara.register_driver :selenium do |app|
# Capybara::Selenium::Driver.new(app, :browser => :chrome)
# end
RSpec.configure do |config|
  # rspec-expectations config goes here. You can use an alternate
  # assertion/expectation library such as wrong or the stdlib/minitest
  # assertions if you prefer.
  config.expect_with :rspec do |expectations|
    # This option will default to `true` in RSpec 4. It makes the `description`
    # and `failure_message` of custom matchers include text for helper methods
    # defined using `chain`, e.g.:
    #     be_bigger_than(2).and_smaller_than(4).description
    #     # => "be bigger than 2 and smaller than 4"
    # ...rather than:
    #     # => "be bigger than 2"
    expectations.include_chain_clauses_in_custom_matcher_descriptions = true
  end

  # rspec-mocks config goes here. You can use an alternate test double
  # library (such as bogus or mocha) by changing the `mock_with` option here.
  config.mock_with :rspec do |mocks|
    # Prevents you from mocking or stubbing a method that does not exist on
    # a real object. This is generally recommended, and will default to
    # `true` in RSpec 4.
    mocks.verify_partial_doubles = true
  end

  config.mock_with :rspec

  config.before(:all) do
    ActiveRecord::Base.skip_callbacks = true
  end

  config.after(:all) do
    ActiveRecord::Base.skip_callbacks = false
  end
end
And the rails_helper.rb file is as follows:
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'database_cleaner'
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
  # Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
  config.fixture_path = "#{::Rails.root}/spec/fixtures"

  # If you're not using ActiveRecord, or you'd prefer not to run each of your
  # examples within a transaction, remove the following line or assign false
  # instead of true.
  config.use_transactional_fixtures = false

  # RSpec Rails can automatically mix in different behaviours to your tests
  # based on their file location, for example enabling you to call `get` and
  # `post` in specs under `spec/controllers`.
  #
  # You can disable this behaviour by removing the line below, and instead
  # explicitly tag your specs with their type, e.g.:
  #
  #     RSpec.describe UsersController, :type => :controller do
  #       # ...
  #     end
  #
  # The different available types are documented in the features, such as in
  # https://relishapp.com/rspec/rspec-rails/docs
  config.infer_spec_type_from_file_location!
end
BTW: if it is any help, I just finished solving an issue with database_cleaner wiping my development db, as described in this post.
How can I restrict the tests to run only in the test environment, using only the test database?
All help is welcome, thank you.
My database.yml is as follows:
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: 5

development:
  <<: *default
  database: directory-service_development
# The specified database role being used to connect to postgres.
# To create additional roles in postgres see `$ createuser --help`.
# When left blank, postgres will use the default role. This is
# the same name as the operating system user that initialized the database.
#username: directory-service
# The password associated with the postgres role (username).
#password:
# Connect on a TCP socket. Omitted by default since the client uses a
# domain socket that doesn't need configuration. Windows does not have
# domain sockets, so uncomment these lines.
#host: localhost
# The TCP port the server listens on. Defaults to 5432.
# If your server runs on a different port number, change accordingly.
#port: 5432
# Schema search path. The server defaults to $user,public
#schema_search_path: myapp,sharedapp,public
# Minimum log levels, in increasing order:
# debug5, debug4, debug3, debug2, debug1,
# log, notice, warning, error, fatal, and panic
# Defaults to warning.
#min_messages: notice
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: directory-service_test
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
# DATABASE_URL="postgres://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
  database: directory-service_production
  username: directory-service
  password: <%= ENV['DIRECTORY-SERVICE_DATABASE_PASSWORD'] %>
Using puts ENV["RAILS_ENV"], I could see that my test was indeed running in the test environment.
But the local foreman server that was running was getting its data from the development environment.
By manually specifying that the server should run in the test environment, the test also uses data from the test environment.
Big thanks to @AndyWaite.
What worked for me is the following:
Stop foreman (or your server running in development)
bin/rails db:migrate RAILS_ENV=test (optional)
bin/rails db:environment:set RAILS_ENV=test (set env explicitly)
rails server -e test (in another window)
rspec ___ (start testing)

database_cleaner is wiping my development database

I have database_cleaner configured for my Rails 4 application.
Each time I run the tests, I discover that my database gets wiped out in both the test and development environments.
My configurations are in rails_helper as follows:
ENV["RAILS_ENV"] ||= 'test'
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
require File.expand_path("../../config/environment", __FILE__)
require 'rspec/rails'
require 'database_cleaner'
Rails.env = "test"
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
# Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
  # Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
  config.fixture_path = "#{::Rails.root}/spec/fixtures"

  # If you're not using ActiveRecord, or you'd prefer not to run each of your
  # examples within a transaction, remove the following line or assign false
  # instead of true.
  config.use_transactional_fixtures = false

  # RSpec Rails can automatically mix in different behaviours to your tests
  # based on their file location, for example enabling you to call `get` and
  # `post` in specs under `spec/controllers`.
  #
  # You can disable this behaviour by removing the line below, and instead
  # explicitly tag your specs with their type, e.g.:
  #
  #     RSpec.describe UsersController, :type => :controller do
  #       # ...
  #     end
  #
  # The different available types are documented in the features, such as in
  # https://relishapp.com/rspec/rspec-rails/docs
  config.infer_spec_type_from_file_location!

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, :js => true) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end

  config.mock_with :rspec

  config.before(:all) do
    ActiveRecord::Base.skip_callbacks = true
  end

  config.after(:all) do
    ActiveRecord::Base.skip_callbacks = false
  end
end
How can I ensure that the cleaner only wipes the db in test environment without touching my development?
My database.yml is as follows:
# PostgreSQL. Versions 8.2 and up are supported.
#
# Install the pg driver:
# gem install pg
# On OS X with Homebrew:
# gem install pg -- --with-pg-config=/usr/local/bin/pg_config
# On OS X with MacPorts:
# gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config
# On Windows:
# gem install pg
# Choose the win32 build.
# Install PostgreSQL and put its /bin directory on your path.
#
# Configure Using Gemfile
# gem 'pg'
#
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: 5

development:
  <<: *default
  database: directory-service_development
# The specified database role being used to connect to postgres.
# To create additional roles in postgres see `$ createuser --help`.
# When left blank, postgres will use the default role. This is
# the same name as the operating system user that initialized the database.
#username: directory-service
# The password associated with the postgres role (username).
#password:
# Connect on a TCP socket. Omitted by default since the client uses a
# domain socket that doesn't need configuration. Windows does not have
# domain sockets, so uncomment these lines.
#host: localhost
# The TCP port the server listens on. Defaults to 5432.
# If your server runs on a different port number, change accordingly.
#port: 5432
# Schema search path. The server defaults to $user,public
#schema_search_path: myapp,sharedapp,public
# Minimum log levels, in increasing order:
# debug5, debug4, debug3, debug2, debug1,
# log, notice, warning, error, fatal, and panic
# Defaults to warning.
#min_messages: notice
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: directory-service_test
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
# DATABASE_URL="postgres://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
  database: directory-service_production
  username: directory-service
  password: <%= ENV['DIRECTORY-SERVICE_DATABASE_PASSWORD'] %>
I'd recommend changing
ENV["RAILS_ENV"] ||= 'test'
to
ENV["RAILS_ENV"] = 'test'
and removing
Rails.env = 'test'
as the RAILS_ENV environment variable should be sufficient for configuration.
If anyone is looking for another potential source of this issue: I randomly had $DATABASE_URL defined in my .bashrc file, pointing directly to my development database. It took me a few hours to find that.
In my case it was the database connection specified in the .env file when I used the dotenv-rails gem. For some reason database_cleaner preferred the connection from there instead of the Rails application config.
Well, I'm not sure what I was doing wrong, but by undoing all the configuration I had for database_cleaner:
uninstalling the database_cleaner gem
removing all related configuration from both spec_helper and rails_helper
and then following this guide by Avdi Grimm, re-installing the database_cleaner gem, and uncommenting this line:
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }
in my rails_helper, I was able to get database_cleaner back to work as expected. Thank you all.
I appreciate this is an old post but I had this issue today.
I checked using pry: my ENV['RAILS_ENV'] was 'test', but my ENV['DATABASE_URL'] was set to my development db in the form of:
postgres://localhost/my_dev_db
I added a line to the database cleaner config in rails_helper.rb to switch to my test db, like so:
config.before(:suite) do
  ActiveRecord::Base.establish_connection(ENV['DATABASE_TEST'])
  DatabaseCleaner.clean_with(:truncation)
end
where ENV['DATABASE_TEST'] was in the form of:
postgres://localhost/my_test_db
This solved the issue for me.
For me the issue was having DatabaseCleaner.clean on the top level of rails_helper instead of within config.before(:suite).

.txt file is no longer written to by snmptrapd daemon after opening and closing with ifstream in C++

I am running Net-SNMP (the environment is a virtual machine running Linux Mint 11) and have configured it to send trap information to a text file that I have called trapd.txt.
If I reboot the VM, any trap that is generated is sent to the file, no problem. However, if I run a C++ program that uses ifstream to open and then close the file, no trap information can be written to it again until I reboot.
When I generate a trap in this state, I will sometimes even see the trapd.txt file flicker in the GUI, as if it tried to write but failed. This happens if I do a clean reboot and run the following code, and it alone:
ifstream file;
file.open("trapd.txt");
if (file)
    cout << "open" << endl;
file.close();
file.open("nothing.txt");
file.close();
exit(0);
Clearly this code is not changing permissions or the SNMP configuration files. The only reason I can think of that would prevent trap information from coming in afterwards is that the ifstream is not actually getting closed all the way.
If you have any ideas for a fix or a workaround, or any insight whatsoever, I will be extremely grateful! This is fairly important to me...
Here's my snmp.conf file:
oidOutputFormat 1
oidOutputFormat 5
logTimestamp yes
escapeQuotes yes
snmptrapd.conf:
authCommunity log,execute,net public
authCommunity log,execute,net private
outputOption auSs
logOption f /home/utd/Desktop/REPO/src/Manager/trapd.txt
snmpd.conf:
authtrapenable 1
master all
linkUpDownNotifications yes
defaultMonitors yes
trap2sink localhost public
rwcommunity private localhost
rocommunity public localhost
###############################################################################
#
# EXAMPLE.conf:
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See the 'snmpd.conf(5)' man page for details
#
# Some entries are deliberately commented out, and will need to be explicitly activated
#
###############################################################################
#
# AGENT BEHAVIOUR
#
# Listen for connections from the local system only
agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
#agentAddress udp:161,udp6:[::1]:161
###############################################################################
#
# SNMPv3 AUTHENTICATION
#
# Note that these particular settings don't actually belong here.
# They should be copied to the file /var/lib/snmp/snmpd.conf
# and the passwords changed, before being uncommented in that file *only*.
# Then restart the agent
# createUser authOnlyUser MD5 "remember to change this password"
# createUser authPrivUser SHA "remember to change this one too" DES
# createUser internalUser MD5 "this is only ever used internally, but still change the password"
# If you also change the usernames (which might be sensible),
# then remember to update the other occurances in this example config file to match.
###############################################################################
#
# ACCESS CONTROL
#
# system + hrSystem groups only
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
# Full access from the local host
# Default access to basic system info
# Full access from an example network
# Adjust this network address to match your local
# settings, change the community string,
# and check the 'agentAddress' setting above
# Full read-only access for SNMPv3
rouser authOnlyUser
# Full write access for encrypted requests
# Remember to activate the 'createUser' lines above
#rwuser authPrivUser priv
# It's no longer typically necessary to use the full 'com2sec/group/access' configuration
# r[ou]user and r[ow]community, together with suitable views, should cover most requirements
###############################################################################
#
# SYSTEM INFORMATION
#
# Note that setting these values here, results in the corresponding MIB objects being 'read-only'
# See snmpd.conf(5) for more details
sysContact Me <me@example.org>
# Application + End-to-End layers
sysServices 72
#
# Process Monitoring
#
# At least one 'mountd' process
proc mountd
# No more than 4 'ntalkd' processes - 0 is OK
proc ntalkd 4
# At least one 'sendmail' process, but no more than 10
proc sendmail 10 1
# Walk the UCD-SNMP-MIB::prTable to see the resulting output
# Note that this table will be empty if there are no "proc" entries in the snmpd.conf file
#
# Disk Monitoring
#
# 10 MB required on root disk, 5% free on /var, 10% free on all other disks
disk / 10000
disk /var 5%
includeAllDisks 10%
# Walk the UCD-SNMP-MIB::dskTable to see the resulting output
# Note that this table will be empty if there are no "disk" entries in the snmpd.conf file
#
# System Load
#
# Unacceptable 1-, 5-, and 15-minute load averages
load 12 10 5
# Walk the UCD-SNMP-MIB::laTable to see the resulting output
# Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file
###############################################################################
#
# ACTIVE MONITORING
#
# Send SNMPv1 traps
# Send SNMPv2c traps
# Send SNMPv2c INFORMs
# Note that you typically only want *one* of these three lines
# Uncommenting two (or all three) will result in multiple copies of each notification.
#
# Event MIB - automatically generate alerts
#
# Remember to activate the 'createUser' lines above
iquerySecName internalUser
rouser internalUser
# Generate traps on UCD error conditions
# Generate traps on linkUp/Down
###############################################################################
#
# EXTENDING THE AGENT
#
#
# Arbitrary extension commands
#
extend test1 /bin/echo Hello, world!
extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
#perl $debugging = \'1\';
#perl $verbose = \'1\';
#perl {$regat = \'.1.3.6.1.4.1.8072.999\'; $extenstion = \'1\'; $mibdata = \'/etc/passwd\'; $delimT=\'\'; $delimV=\':\'; do \'/etc/snmp/snmpagent.pl\';}
#perl print STDERR 'Test'
#perl $debugging = '1';
#perl $verbose = '1';
#perl $regat = '.1.3.6.1.4.8072.999';
#perl $extenstion = '1';
#perl $mibdata = '/etc/passwd';
#perl $delimT='';
#perl $delimV=':';
#perl do '/home/utd/snmpagent.pl';
#perl print STDERR 'Now loading Perl extensions...\n'
#perl $mibdata = "dick.txt";
#perl do '/home/utd/mymod.pl';
#extend-sh test3 /bin/sh /tmp/shtest
# Note that this last entry requires the script '/tmp/shtest' to be created first,
# containing the same three shell commands, before the line is uncommented
# Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table
# and nsExtendOutput2Table) to see the resulting output
# Note that the "extend" directive supercedes the previous "exec" and "sh" directives
# However, walking the UCD-SNMP-MIB::extTable should still returns the same output,
# as well as the fuller results in the above tables.
#
# "Pass-through" MIB extension command
#
#pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest
#pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl
# Note that this requires one of the two 'passtest' scripts to be installed first,
# before the appropriate line is uncommented.
# These scripts can be found in the 'local' directory of the source distribution,
# and are not installed automatically.
# Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output
#
# AgentX Sub-agents
#
# Run as an AgentX master agent
master agentx
# Listen for network connections (from localhost)
# rather than the default named socket /var/agentx/master
#agentXSocket tcp:localhost:705
perl $mibdata = "/etc/snmp/agenty.conf";
perl do "/etc/snmp/agenty.pl";
The problem actually originated from my editing and saving the file itself with gedit. While I still do not understand exactly why this causes the issue (likely the editor saves by writing a new file and renaming it over the old one, so snmptrapd keeps logging to the old, replaced file), I can work around it by not editing the file. Thanks to everyone who replied.