I am running a Dataflow job which reads a file and pushes data to Cloud SQL. It works in local mode (DirectRunner) but fails in DataflowRunner mode. I am getting the following error:
I 2021-08-24T10:08:13.866094Z ERROR: Command errored out with exit status 1:
I 2021-08-24T10:08:13.866142Z command: /usr/local/bin/python3 /tmp/pip-standalone-pip-_qnajhyd/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-idcssgqm/overlay --no-warn-script-location --no-binary :none: --only-binary :none: --no-index --find-links /var/opt/google/dataflow -- 'setuptools>=54.0' 'setuptools_scm[toml]>=5.0' 'wheel>=0.36.2' 'Cython>=0.29.22'
I 2021-08-24T10:08:13.866202Z cwd: None
I 2021-08-24T10:08:13.866213Z Complete output (15 lines):
I 2021-08-24T10:08:13.866222Z Looking in links: /var/opt/google/dataflow
I 2021-08-24T10:08:13.866232Z Processing /var/opt/google/dataflow/setuptools-57.4.0.tar.gz
I 2021-08-24T10:08:13.866244Z Installing build dependencies: started
I 2021-08-24T10:08:13.866253Z Installing build dependencies: finished with status 'error'
I 2021-08-24T10:08:13.866262Z ERROR: Command errored out with exit status 1:
I 2021-08-24T10:08:13.866272Z command: /usr/local/bin/python3 /tmp/pip-standalone-pip-_qnajhyd/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-x9grztra/overlay --no-warn-script-location --no-binary :none: --only-binary :none: --no-index --find-links /var/opt/google/dataflow -- wheel
I 2021-08-24T10:08:13.866290Z cwd: None
I 2021-08-24T10:08:13.866305Z Complete output (3 lines):
I 2021-08-24T10:08:13.866314Z Looking in links: /var/opt/google/dataflow
I 2021-08-24T10:08:13.866324Z ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)
I 2021-08-24T10:08:13.866335Z ERROR: No matching distribution found for wheel
I 2021-08-24T10:08:13.866344Z ----------------------------------------
I 2021-08-24T10:08:13.866359Z WARNING: Discarding file:///var/opt/google/dataflow/setuptools-57.4.0.tar.gz. Command errored out with exit status 1: /usr/local/bin/python3 /tmp/pip-standalone-pip-_qnajhyd/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-x9grztra/overlay --no-warn-script-location --no-binary :none: --only-binary :none: --no-index --find-links /var/opt/google/dataflow -- wheel Check the logs for full command output.
I 2021-08-24T10:08:13.866383Z ERROR: Could not find a version that satisfies the requirement setuptools>=54.0 (from versions: 57.4.0)
I 2021-08-24T10:08:13.866394Z ERROR: No matching distribution found for setuptools>=54.0
I 2021-08-24T10:08:13.866404Z ----------------------------------------
I 2021-08-24T10:08:13.869072Z WARNING: Discarding file:///var/opt/google/dataflow/pymssql-2.2.2.tar.gz. Command errored out with exit status 1: /usr/local/bin/python3 /tmp/pip-standalone-pip-_qnajhyd/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-idcssgqm/overlay --no-warn-script-location --no-binary :none: --only-binary :none: --no-index --find-links /var/opt/google/dataflow -- 'setuptools>=54.0' 'setuptools_scm[toml]>=5.0' 'wheel>=0.36.2' 'Cython>=0.29.22' Check the logs for full command output.
I 2021-08-24T10:08:13.869405Z ERROR: Could not find a version that satisfies the requirement pymssql (from versions: 2.2.2)
I 2021-08-24T10:08:13.869475Z ERROR: No matching distribution found for pymssql
I 2021-08-24T10:08:13.943732Z /usr/local/bin/pip failed with exit status 1
F 2021-08-24T10:08:13.943818Z Failed to install packages: failed to install requirements: exit status 1
I 2021-08-24T10:11:13.976686Z [topologymanager] RemoveContainer - Container ID: 6006a3d10b0289d5b69478c1a8189eef02db1fa0af2216bb5e7c57659498009c
I 2021-08-24T10:11:14.015846Z [topologymanager] RemoveContainer - Container ID: bb14dd1d3a5414c2ea02157ebff7e7ba227337e47ff86216a3f86847e261cdee
E 2021-08-24T10:11:14.016267Z Error syncing pod 5814435f816ec192ccc2709209670a6a ("dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"), skipping: failed to "StartContainer" for "python" with CrashLoopBackOff: "back-off 1m20s restarting failed container=python pod=dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"
I 2021-08-24T10:11:25.595721Z [topologymanager] RemoveContainer - Container ID: bb14dd1d3a5414c2ea02157ebff7e7ba227337e47ff86216a3f86847e261cdee
E 2021-08-24T10:11:25.596300Z Error syncing pod 5814435f816ec192ccc2709209670a6a ("dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"), skipping: failed to "StartContainer" for "python" with CrashLoopBackOff: "back-off 1m20s restarting failed container=python pod=dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"
I 2021-08-24T10:11:38.592275Z [topologymanager] RemoveContainer - Container ID: bb14dd1d3a5414c2ea02157ebff7e7ba227337e47ff86216a3f86847e261cdee
E 2021-08-24T10:11:38.592668Z Error syncing pod 5814435f816ec192ccc2709209670a6a ("dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"), skipping: failed to "StartContainer" for "python" with CrashLoopBackOff: "back-off 1m20s restarting failed container=python pod=dataflow-cloudsql-upload-2021-08-2-08240304-up8o-harness-x0f4_default(5814435f816ec192ccc2709209670a6a)"
I 2021-08-24T10:11:53.598346Z [topologymanager] RemoveContainer - Container ID: bb14dd1d3a5414c2ea02157ebff7e7ba227337e47ff86216a3f86847e261cdee
There are so many SO posts suggesting to look for conflicts among the dependencies in requirements, and even after trying multiple changes to requirements.txt (trial and error) I am unable to figure out the right dependencies. I followed this and this but was unable to debug it. Following are my code files.
dataflow.py
import csv
import datetime
import logging
import apache_beam as beam
from apache_beam.io.fileio import MatchFiles, ReadMatches
import argparse
import os
import json
from ldif3 import LDIFParser
import pymssql
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions

logging.basicConfig(level='INFO')

# Change the project_id
project_id = os.getenv('GOOGLE_CLOUD_PROJECT')


def get_db_connection():
    mssqlhost = '127.0.0.1'
    mssqluser = 'a'
    mssqlpass = 'b'
    mssqldb = 'usersdb'
    cnx = pymssql.connect(user=mssqluser, password=mssqlpass,
                          host=mssqlhost, database=mssqldb)
    return cnx
class SQLWriteDoFn(beam.DoFn):
    # Max documents to process at a time
    MAX_DOCUMENTS = 200

    def __init__(self, project):
        self._project = project

    def setup(self):
        os.system("wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy")
        os.system("chmod +x cloud_sql_proxy")
        os.system(f"./cloud_sql_proxy -instances=mydatabase-database=tcp:0.0.0.0:1433 &")

    def start_bundle(self):
        self._mutations = []
        logging.info("In start_bundle")

    def finish_bundle(self):
        logging.info("In finish_bundle")
        if self._mutations:
            self._flush_batch()

    def process(self, element, *args, **kwargs):
        logging.info("In process")
        self._mutations.append(element)
        if len(self._mutations) > self.MAX_DOCUMENTS:
            self._flush_batch()

    def _flush_batch(self):
        try:
            mssqlconn = get_db_connection()
            print("Connection Established to MS SQL server.")
            cursor = mssqlconn.cursor()
            stmt = "insert into usersdb.dbo.users_dataflow (uname, password) values (%s,%s)"
            cursor.executemany(stmt, self._mutations)
            mssqlconn.commit()
            mssqlconn.close()
        except Exception as e:
            print(e)
        self._mutations = []
def return_dictionary_element_if_present(dict_entry, element):
    if dict_entry.get(element):
        return dict_entry.get(element)[0]
    return ''
class CreateEntities(beam.DoFn):
    def process(self, file):
        parser = LDIFParser(file.open())
        arr = []
        for dn, entry in parser.parse():
            # dict1 ={}
            dict_entry = dict(entry)
            uname = return_dictionary_element_if_present(dict_entry, 'uname')
            userPassword = return_dictionary_element_if_present(dict_entry, 'userPassword')
            arr.append(tuple((uname, userPassword)))
        return arr
def dataflow(pipeline_options):
    print("starting")
    options = GoogleCloudOptions.from_dictionary(pipeline_options)
    with beam.Pipeline(options=options) as p:
        (p
         | 'Reading data from GCS' >> MatchFiles(file_pattern="gs://my_bucket_name/*.ldiff")
         | 'file match' >> ReadMatches()
         | 'Create entities' >> beam.ParDo(CreateEntities())
         # | 'print to screen' >> beam.Map(print)
         | 'Write to CloudSQL' >> beam.ParDo(SQLWriteDoFn(pipeline_options['project']))
         )
if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='dataflow options for ldif to sql')
    parser.add_argument('--project', help='Project ID',
                        default=f'{project_id}')
    parser.add_argument('--region', help='region', default='us-central1')
    parser.add_argument('--runner', help='Runner', default='DirectRunner')
    parser.add_argument('--staging_location',
                        default=f'gs://{project_id}/staging')
    parser.add_argument('--temp_location',
                        default=f'gs://{project_id}/tmp')
    args = parser.parse_args()

    JOB_NAME = 'cloudsql-upload-{}'.format(
        datetime.datetime.now().strftime('%Y-%m-%d-%H%M%S'))

    pipeline_options = {
        'project': args.project,
        'staging_location': args.staging_location,
        'runner': args.runner,
        'job_name': JOB_NAME,
        'temp_location': args.temp_location,
        'save_main_session': True,
        'requirements_file': 'requirements.txt',
        'region': args.region,
        'machine_type': 'n1-standard-8'
    }

    dataflow(pipeline_options)
requirements.txt
ldif3
pymssql
apache-beam[gcp]==2.31.0
How I execute it:
python3 dataflow.py --runner=DataflowRunner
Any help is really appreciated. Thanks in advance.
Edit 1:
I have done multiple rounds of trial and error and finally changed my requirements file to the following:
setuptools==57.4.0
wheel==0.37.0
setuptools_scm[toml]==6.0.1
Cython==0.29.24
ldif3
apache-beam[gcp]==2.31.0
But now I am getting only the following error:
ERROR: pymssql-2.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.whl is not a supported wheel on this platform.
The way I am running it now is:
python3 dataflow.py --runner=DataflowRunner --requirements_file=requirements.txt --extra_package=pymssql-2.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.whl
When you run your pipeline locally, the packages that your pipeline
depends on are available because they are installed on your local
machine. However, when you want to run your pipeline remotely, you
must make sure these dependencies are available on the remote
machines.
In the way you execute it, I see that you don't specify the requirements.txt, so you need to start by adding --requirements_file requirements.txt:
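For example, reusing your own launch command with the flag added:

python3 dataflow.py --runner=DataflowRunner --requirements_file=requirements.txt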
Ref: https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
The error
is not a supported wheel on this platform
is probably due to trying to use a Python 3.6, 32-bit (i686) wheel with a Beam pipeline that runs on a different Python version, on 64-bit Dataflow workers. Can you try to use a wheel that matches the Python version and platform your Beam pipeline is run with?
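For instance, you could fetch a 64-bit wheel built for the matching interpreter with pip download and pass that file to --extra_package instead. This is only a sketch assuming the job is launched with Python 3.7; adjust --python-version (and, if pip finds no match, the --platform tag) to your actual environment:

pip download pymssql==2.2.2 --only-binary=:all: --no-deps \
    --platform manylinux2014_x86_64 --implementation cp --python-version 37 -d .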
Related
Epoptes is lab-view software (as in a classroom lab) commonly available in Debian-based distributions. I'm trying to rebuild it for CentOS, basically as described here and here. The OpenSuse rpm can be found here, the deb here.
Either way, whether converting the deb to rpm or rebuilding the OpenSuse rpm, the stopping point is the same:
/var/tmp/rpm-tmp.leBIra: line 1: fg: no job control
error: %pre(epoptes-0.5.10-3.1.noarch) scriptlet failed, exit status 1
error: epoptes-0.5.10-3.1.noarch: install failed
error: epoptes-0.5.10-3.noarch: erase skipped
Basically, trying to install the rpm after the rebuild, e.g.,
rpmrebuild -pe epoptes-0.5.10-3.noarch.rpm
rpm -Uvh epoptes-0.5.10-3.noarch.rpm
fails, complaining about job control. I am not sure how to amend this given the workflow for rebuilding the rpm.
From OpenSuse:
Using the command rpm -qp --scripts epoptes-0.5.10-3.1.noarch.rpm returns
preinstall scriptlet (using /bin/sh):
%service_add_pre epoptes-server.service
postinstall scriptlet (using /bin/sh):
if ! getent group epoptes >/dev/null; then
    groupadd --system epoptes
fi
if ! [ -f /etc/epoptes/server.key ] || ! [ -f /etc/epoptes/server.crt ] || ! [ -s /etc/epoptes/server.crt ]; then
    if ! [ -d /etc/epoptes ]; then
        mkdir /etc/epoptes
    fi
    openssl req -batch -x509 -nodes -newkey rsa:1024 -days $(($(date --utc +%s) / 86400 + 3652)) -keyout /etc/epoptes/server.key -out /etc/epoptes/server.crt
    chmod 600 /etc/epoptes/server.key
fi
%service_add_post epoptes-server.service
preuninstall scriptlet (using /bin/sh):
%service_del_preun epoptes-server.service
postuninstall scriptlet (using /bin/sh):
%service_del_postun epoptes-server.service
Rebuilding from the Ubuntu deb, installation is successful, but launch fails with:
# epoptes
Traceback (most recent call last):
  File "/usr/bin/epoptes", line 30, in <module>
    import epoptes
ImportError: No module named epoptes
Running rpm -qlp on the RPM generated from the .deb shows:
/etc
/etc/default
/etc/default/epoptes
/etc/epoptes
/etc/init.d/epoptes
/usr
/usr/bin/epoptes
/usr/lib/python2.7
/usr/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/epoptes
/usr/lib/python2.7/dist-packages/epoptes-0.5.10_2.egg-info
/usr/lib/python2.7/dist-packages/epoptes/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/common
/usr/lib/python2.7/dist-packages/epoptes/common/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/common/config.py
/usr/lib/python2.7/dist-packages/epoptes/common/constants.py
/usr/lib/python2.7/dist-packages/epoptes/common/ltsconf.py
/usr/lib/python2.7/dist-packages/epoptes/common/xdg_dirs.py
/usr/lib/python2.7/dist-packages/epoptes/core
/usr/lib/python2.7/dist-packages/epoptes/core/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/core/lib_users.py
/usr/lib/python2.7/dist-packages/epoptes/core/structs.py
/usr/lib/python2.7/dist-packages/epoptes/core/wol.py
/usr/lib/python2.7/dist-packages/epoptes/daemon
/usr/lib/python2.7/dist-packages/epoptes/daemon/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/bashplex.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/commands.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/exchange.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/guiplex.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/uiconnection.py
/usr/lib/python2.7/dist-packages/epoptes/ui
/usr/lib/python2.7/dist-packages/epoptes/ui/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/ui/about_dialog.py
/usr/lib/python2.7/dist-packages/epoptes/ui/benchmark.py
/usr/lib/python2.7/dist-packages/epoptes/ui/client_information.py
/usr/lib/python2.7/dist-packages/epoptes/ui/execcommand.py
/usr/lib/python2.7/dist-packages/epoptes/ui/graph.py
/usr/lib/python2.7/dist-packages/epoptes/ui/gui.py
/usr/lib/python2.7/dist-packages/epoptes/ui/notifications.py
/usr/lib/python2.7/dist-packages/epoptes/ui/remote_assistance.py
/usr/lib/python2.7/dist-packages/epoptes/ui/sendmessage.py
/usr/lib/python2.7/dist-packages/twisted
/usr/lib/python2.7/dist-packages/twisted/plugins
/usr/lib/python2.7/dist-packages/twisted/plugins/epoptesd.py
/usr/share
/usr/share/applications
/usr/share/applications/epoptes.desktop
/usr/share/doc
/usr/share/doc/epoptes
/usr/share/doc/epoptes/README
/usr/share/doc/epoptes/changelog.Debian.gz
/usr/share/doc/epoptes/copyright
/usr/share/epoptes
/usr/share/epoptes/about_dialog.ui
/usr/share/epoptes/client-functions
/usr/share/epoptes/client_information.ui
/usr/share/epoptes/epoptes.ui
/usr/share/epoptes/executeCommand.ui
/usr/share/epoptes/images
/usr/share/epoptes/images/16
/usr/share/epoptes/images/16/assist.png
/usr/share/epoptes/images/16/broadcast-stop.png
/usr/share/epoptes/images/16/broadcast-windowed.png
/usr/share/epoptes/images/16/broadcast.png
/usr/share/epoptes/images/16/execute.png
/usr/share/epoptes/images/16/graph.png
/usr/share/epoptes/images/16/info.png
/usr/share/epoptes/images/16/lock-screen.png
/usr/share/epoptes/images/16/logout.png
/usr/share/epoptes/images/16/message.png
/usr/share/epoptes/images/16/mute.png
/usr/share/epoptes/images/16/observe.png
/usr/share/epoptes/images/16/poweron.png
/usr/share/epoptes/images/16/restart.png
/usr/share/epoptes/images/16/root-terminal.png
/usr/share/epoptes/images/16/shutdown.png
/usr/share/epoptes/images/16/terminal.png
/usr/share/epoptes/images/16/unlock-screen.png
/usr/share/epoptes/images/16/unmute.png
/usr/share/epoptes/images/assist.svg
/usr/share/epoptes/images/broadcast-stop.svg
/usr/share/epoptes/images/broadcast-windowed.svg
/usr/share/epoptes/images/broadcast.svg
/usr/share/epoptes/images/execute.svg
/usr/share/epoptes/images/fat.svg
/usr/share/epoptes/images/graph.png
/usr/share/epoptes/images/info.svg
/usr/share/epoptes/images/lock-screen.svg
/usr/share/epoptes/images/login.png
/usr/share/epoptes/images/logout.svg
/usr/share/epoptes/images/message.svg
/usr/share/epoptes/images/mute.svg
/usr/share/epoptes/images/observe.svg
/usr/share/epoptes/images/off.png
/usr/share/epoptes/images/offline.svg
/usr/share/epoptes/images/on.png
/usr/share/epoptes/images/poweron.svg
/usr/share/epoptes/images/restart.svg
/usr/share/epoptes/images/root-terminal.svg
/usr/share/epoptes/images/shutdown.svg
/usr/share/epoptes/images/standalone.svg
/usr/share/epoptes/images/systemgrp.png
/usr/share/epoptes/images/terminal.svg
/usr/share/epoptes/images/thin.svg
/usr/share/epoptes/images/unlock-screen.svg
/usr/share/epoptes/images/unmute.svg
/usr/share/epoptes/images/usersgrp.png
/usr/share/epoptes/netbenchmark.ui
/usr/share/epoptes/remote_assistance.ui
/usr/share/epoptes/sendMessage.ui
/usr/share/icons
/usr/share/icons/hicolor
/usr/share/icons/hicolor/scalable
/usr/share/icons/hicolor/scalable/apps
/usr/share/icons/hicolor/scalable/apps/epoptes.svg
/usr/share/locale
/usr/share/locale/af
/usr/share/locale/af/LC_MESSAGES
/usr/share/locale/af/LC_MESSAGES/epoptes.mo
/usr/share/locale/ar
/usr/share/locale/ar/LC_MESSAGES
/usr/share/locale/ar/LC_MESSAGES/epoptes.mo
/usr/share/locale/bg
/usr/share/locale/bg/LC_MESSAGES
/usr/share/locale/bg/LC_MESSAGES/epoptes.mo
/usr/share/locale/ca
/usr/share/locale/ca/LC_MESSAGES
/usr/share/locale/ca/LC_MESSAGES/epoptes.mo
/usr/share/locale/ca@valencia
/usr/share/locale/ca@valencia/LC_MESSAGES
/usr/share/locale/ca@valencia/LC_MESSAGES/epoptes.mo
/usr/share/locale/cs
/usr/share/locale/cs/LC_MESSAGES
/usr/share/locale/cs/LC_MESSAGES/epoptes.mo
/usr/share/locale/da
/usr/share/locale/da/LC_MESSAGES
/usr/share/locale/da/LC_MESSAGES/epoptes.mo
/usr/share/locale/de
/usr/share/locale/de/LC_MESSAGES
/usr/share/locale/de/LC_MESSAGES/epoptes.mo
/usr/share/locale/el
/usr/share/locale/el/LC_MESSAGES
/usr/share/locale/el/LC_MESSAGES/epoptes.mo
/usr/share/locale/en_AU
/usr/share/locale/en_AU/LC_MESSAGES
/usr/share/locale/en_AU/LC_MESSAGES/epoptes.mo
/usr/share/locale/en_GB
/usr/share/locale/en_GB/LC_MESSAGES
/usr/share/locale/en_GB/LC_MESSAGES/epoptes.mo
/usr/share/locale/es
/usr/share/locale/es/LC_MESSAGES
/usr/share/locale/es/LC_MESSAGES/epoptes.mo
/usr/share/locale/eu
/usr/share/locale/eu/LC_MESSAGES
/usr/share/locale/eu/LC_MESSAGES/epoptes.mo
/usr/share/locale/fi
/usr/share/locale/fi/LC_MESSAGES
/usr/share/locale/fi/LC_MESSAGES/epoptes.mo
/usr/share/locale/fr
/usr/share/locale/fr/LC_MESSAGES
/usr/share/locale/fr/LC_MESSAGES/epoptes.mo
/usr/share/locale/gl
/usr/share/locale/gl/LC_MESSAGES
/usr/share/locale/gl/LC_MESSAGES/epoptes.mo
/usr/share/locale/he
/usr/share/locale/he/LC_MESSAGES
/usr/share/locale/he/LC_MESSAGES/epoptes.mo
/usr/share/locale/hu
/usr/share/locale/hu/LC_MESSAGES
/usr/share/locale/hu/LC_MESSAGES/epoptes.mo
/usr/share/locale/id
/usr/share/locale/id/LC_MESSAGES
/usr/share/locale/id/LC_MESSAGES/epoptes.mo
/usr/share/locale/it
/usr/share/locale/it/LC_MESSAGES
/usr/share/locale/it/LC_MESSAGES/epoptes.mo
/usr/share/locale/lt
/usr/share/locale/lt/LC_MESSAGES
/usr/share/locale/lt/LC_MESSAGES/epoptes.mo
/usr/share/locale/ml
/usr/share/locale/ml/LC_MESSAGES
/usr/share/locale/ml/LC_MESSAGES/epoptes.mo
/usr/share/locale/ms
/usr/share/locale/ms/LC_MESSAGES
/usr/share/locale/ms/LC_MESSAGES/epoptes.mo
/usr/share/locale/nb
/usr/share/locale/nb/LC_MESSAGES
/usr/share/locale/nb/LC_MESSAGES/epoptes.mo
/usr/share/locale/nl
/usr/share/locale/nl/LC_MESSAGES
/usr/share/locale/nl/LC_MESSAGES/epoptes.mo
/usr/share/locale/oc
/usr/share/locale/oc/LC_MESSAGES
/usr/share/locale/oc/LC_MESSAGES/epoptes.mo
/usr/share/locale/pl
/usr/share/locale/pl/LC_MESSAGES
/usr/share/locale/pl/LC_MESSAGES/epoptes.mo
/usr/share/locale/pt
/usr/share/locale/pt/LC_MESSAGES
/usr/share/locale/pt/LC_MESSAGES/epoptes.mo
/usr/share/locale/pt_BR
/usr/share/locale/pt_BR/LC_MESSAGES
/usr/share/locale/pt_BR/LC_MESSAGES/epoptes.mo
/usr/share/locale/ru
/usr/share/locale/ru/LC_MESSAGES
/usr/share/locale/ru/LC_MESSAGES/epoptes.mo
/usr/share/locale/se
/usr/share/locale/se/LC_MESSAGES
/usr/share/locale/se/LC_MESSAGES/epoptes.mo
/usr/share/locale/sk
/usr/share/locale/sk/LC_MESSAGES
/usr/share/locale/sk/LC_MESSAGES/epoptes.mo
/usr/share/locale/sl
/usr/share/locale/sl/LC_MESSAGES
/usr/share/locale/sl/LC_MESSAGES/epoptes.mo
/usr/share/locale/so
/usr/share/locale/so/LC_MESSAGES
/usr/share/locale/so/LC_MESSAGES/epoptes.mo
/usr/share/locale/sr
/usr/share/locale/sr/LC_MESSAGES
/usr/share/locale/sr/LC_MESSAGES/epoptes.mo
/usr/share/locale/sr@latin
/usr/share/locale/sr@latin/LC_MESSAGES
/usr/share/locale/sr@latin/LC_MESSAGES/epoptes.mo
/usr/share/locale/sv
/usr/share/locale/sv/LC_MESSAGES
/usr/share/locale/sv/LC_MESSAGES/epoptes.mo
/usr/share/locale/tr
/usr/share/locale/tr/LC_MESSAGES
/usr/share/locale/tr/LC_MESSAGES/epoptes.mo
/usr/share/locale/uk
/usr/share/locale/uk/LC_MESSAGES
/usr/share/locale/uk/LC_MESSAGES/epoptes.mo
/usr/share/locale/vi
/usr/share/locale/vi/LC_MESSAGES
/usr/share/locale/vi/LC_MESSAGES/epoptes.mo
/usr/share/locale/zh_CN
/usr/share/locale/zh_CN/LC_MESSAGES
/usr/share/locale/zh_CN/LC_MESSAGES/epoptes.mo
/usr/share/locale/zh_TW
/usr/share/locale/zh_TW/LC_MESSAGES
/usr/share/locale/zh_TW/LC_MESSAGES/epoptes.mo
/usr/share/ltsp
/usr/share/ltsp/plugins
/usr/share/ltsp/plugins/ltsp-build-client
/usr/share/ltsp/plugins/ltsp-build-client/common
/usr/share/ltsp/plugins/ltsp-build-client/common/040-epoptes-certificate
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/epoptes.1.gz
I can't create a minion from the map file, and I have no idea what happened. A month ago my script was working correctly; right now it fails. I tried to do some research about it but I couldn't find anything. Could someone have a look at my DEBUG log? The minion is created on DigitalOcean but the master server can't connect to it at all.
So I run:
salt-cloud -P -m /etc/salt/cloud.maps.d/production.map -l debug
The master is running on Ubuntu 16.04.1 x64, the minion also.
I use the latest SaltStack repository:
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" >> /etc/apt/sources.list.d/saltstack.list
I tested both 2016.3.2 and 2016.3.3; interestingly, the same script was working correctly 4 weeks ago, so I assume something must have changed.
ERROR:
Writing /usr/lib/python2.7/dist-packages/salt-2016.3.3.egg-info
* INFO: Running install_ubuntu_git_post()
disabled
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /lib/systemd/system/salt-minion.service.
* INFO: Running install_ubuntu_check_services()
* INFO: Running install_ubuntu_restart_daemons()
Job for salt-minion.service failed because a configured resource limit was exceeded. See "systemctl status salt-minion.service" and "journalctl -xe" for details.
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
* ERROR: No init.d support for salt-minion was found
* ERROR: Fai
[DEBUG ] led to run install_ubuntu_restart_daemons()!!!
[ERROR ] Failed to deploy 'minion-zk-0'. Error: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 2293, in create_multiprocessing
    local_master=parallel_data['local_master']
  File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 1281, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py", line 481, in create
    ret = __utils__['cloud.bootstrap'](vm_, __opts__)
  File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 527, in bootstrap
    deployed = deploy_script(**deploy_kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1516, in deploy_script
    if root_cmd(deploy_command, tty, sudo, **ssh_kwargs) != 0:
  File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 2167, in root_cmd
    retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1784, in _exec_ssh_cmd
    cmd, proc.exitstatus
SaltCloudSystemExit: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_ID '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
[DEBUG ] LazyLoaded nested.output
minion-zk-0:
----------
Error:
Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
root@master-zk:/etc/salt/cloud.maps.d# salt '*' test.ping
minion-zk-0:
Minion did not return. [No response]
root@master-zk:/etc/salt/cloud.maps.d#
It is located in your cloud configuration, somewhere in /etc/salt/cloud.profiles.d/, /etc/salt/cloud.providers.d/ or /etc/salt/cloud.d/. Just figure out where, and change the value salt to your master's IP.
I currently do this in my provider settings like this:
hit-vcenter:
  driver: vmware
  user: 'foo'
  password: 'secret'
  url: 'some url'
  protocol: 'https'
  port: 443
  minion:
    master: 10.1.10.1
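The same minion block also works at the profile level. Here is a minimal sketch for a hypothetical DigitalOcean profile (the profile name, image and size are made up; only the minion block matters):

do-ubuntu-minion:
  provider: do
  image: ubuntu-16-04-x64
  size: 2gb
  minion:
    master: 10.1.10.1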
Following this tutorial from GoRails, I'm getting this error when I try to deploy on Ubuntu 16.04 on DigitalOcean.
$ cap production deploy --trace
Trace Here
** DEPLOY FAILED
** Refer to log/capistrano.log for details. Here are the last 20 lines:
DEBUG [9a2c15d9] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/system ]
DEBUG [9a2c15d9] Finished in 0.181 seconds with exit status 1 (failed).
INFO [86a233a2] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/releases/20160829222734/public/system as deployer@138.68.8.2…
DEBUG [86a233a2] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/…
INFO [86a233a2] Finished in 0.166 seconds with exit status 0 (successful).
DEBUG [07f5e5a2] Running [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [07f5e5a2] Command: [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [07f5e5a2] Finished in 0.166 seconds with exit status 1 (failed).
DEBUG [5e61eaf3] Running [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [5e61eaf3] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [5e61eaf3] Finished in 0.168 seconds with exit status 1 (failed).
INFO [52076052] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets as deployer@138.68.8.2…
DEBUG [52076052] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/…
INFO [52076052] Finished in 0.167 seconds with exit status 0 (successful).
DEBUG [2a6bf02b] Running if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'"…
DEBUG [2a6bf02b] Command: if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'…
DEBUG [2a6bf02b] Finished in 0.164 seconds with exit status 0 (successful).
INFO [f4b636e3] Running $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test --deployment --quiet as deployer@138.6…
DEBUG [f4b636e3] Command: cd /home/deployer/RMG_rodeobest/releases/20160829222734 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; $HOME/.rbenv/bin/rbenv exec bundle inst…
DEBUG [f4b636e3] bash: line 1: 3509 Killed $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test -…
My Capfile:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# If you are using rbenv add these lines:
require 'capistrano/rbenv'
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/bundler'
require 'capistrano/rails'
# require 'capistrano/passenger'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
I'm stuck and don't know why cap is aborting.
Any idea?
Hi, I am using Docker to deploy my Rails app using the phusion/passenger image. Here is my Dockerfile:
FROM phusion/passenger-ruby22:0.9.19
# set correct environment variables
ENV HOME /root
ENV RAILS_ENV production
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# Expose Nginx HTTP service
EXPOSE 80
# Start Nginx / Passenger
RUN rm -f /etc/service/nginx/down
# Remove the default site
RUN rm /etc/nginx/sites-enabled/default
# Add the nginx site and config
ADD nginx.conf /etc/nginx/sites-enabled/nginx.conf
ADD rails-env.conf /etc/nginx/main.d/rails-env.conf
# Let ensure these packages are already installed
# otherwise install them anyways
RUN apt-get update && apt-get install -y build-essential \
nodejs \
libpq-dev
# bundle gem and cache them
WORKDIR /tmp
ADD Gemfile /tmp/
ADD Gemfile.lock /tmp/
RUN gem install bundler
RUN bundle install
# Add rails app
ADD . /home/app/webapp
WORKDIR /home/app/webapp
RUN touch log/delayed_job.log log/production.log log/
RUN chown -R app:app /home/app/webapp
RUN RAILS_ENV=production rake assets:precompile
# Clean up APT and bundler when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
I am getting a permission issue for the tmp and log files.
web_1 | [ 2016-07-19 08:45:12.6653 31/7ff812726700 age/Cor/App/Implementation.cpp:304 ]: Could not spawn process for application /home/app/webapp: An error occurred while starting up the preloader.
web_1 | Error ID: 42930e85
web_1 | Error details saved to: /tmp/passenger-error-9DeJ86.html
web_1 | Message from application: Permission denied @ rb_sysopen - log/logentries.log (Errno::EACCES)
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `initialize'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `open'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `open_logfile'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:584:in `initialize'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:318:in `new'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:318:in `initialize'
web_1 | /var/lib/gems/2.2.0/gems/le-2.7.2/lib/le/host/http.rb:37:in `new'
web_1 | /var/lib/gems/2.2.0/gems/le-2.7.2/lib/le/host/http.rb:37:in `initialize'
I tried chmod -R 665/775/777 log/ and it still didn't fix the problem.
Thanks
Rearrange your lines: put RUN RAILS_ENV=production rake assets:precompile first, then RUN chown -R app:app /home/app/webapp (after your rake task), so that the files generated by the precompile also end up owned by the app user. It should be something like this:
RUN RAILS_ENV=production rake assets:precompile
RUN chown -R app:app /home/app/webapp
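In context, the tail of the Dockerfile would then look like this (a sketch reusing your original steps with just the order swapped; the stray trailing log/ in your touch line looks like a typo, so it is dropped here):

ADD . /home/app/webapp
WORKDIR /home/app/webapp
RUN touch log/delayed_job.log log/production.log
RUN RAILS_ENV=production rake assets:precompile
RUN chown -R app:app /home/app/webapp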
I installed OpenERP 7 on my 64-bit CentOS and I have this problem when starting the service:
Starting OpenERP Server Daemon (openerp-server): [ OK ]
root@****[~]# ERROR: couldn't create the logfile directory. Logging to the standard output.
2014-09-10 14:04:58,739 29029 INFO ? openerp: OpenERP version 7.0-20140804-231303
2014-09-10 14:04:58,739 29029 INFO ? openerp: addons paths: /usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/addons
2014-09-10 14:04:58,739 29029 INFO ? openerp: database hostname: localhost
2014-09-10 14:04:58,739 29029 INFO ? openerp: database port: 5432
2014-09-10 14:04:58,740 29029 INFO ? openerp: database user: openerp
Traceback (most recent call last):
  File "/usr/bin/openerp-server", line 5, in <module>
    pkg_resources.run_script('openerp==7.0-20140804-231303', 'openerp-server')
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 461, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 1194, in run_script
    execfile(script_filename, namespace, namespace)
  File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/EGG-INFO/scripts/openerp-server", line 5, in <module>
    openerp.cli.main()
  File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/__init__.py", line 61, in main
    o.run(args)
  File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 272, in run
    main(args)
  File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 252, in main
    setup_pid_file()
  File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 88, in setup_pid_file
    fd = open(config['pidfile'], 'w')
IOError: [Errno 13] Permission denied: '/var/run/openerp/openerp-server.pid'
Also, when I try to stop the service I get this error:
service openerp stop
Stopping OpenERP Server Daemon (openerp-server): cat: /var/run/openerp/openerp-server.pid: No such file or directory
[FAILED]
Can you please advise how to fix this issue?
Thank you,
Best Regards,
As Odedra said, it's a permission issue as well as missing Python modules. I found no reason why this happens in the first place, but at least I got a hint at how to solve it, which is the following:
1- I found missing Python and other modules that had to be installed on my CentOS 6.5:
yum -y install python-psycopg2 python-lxml PyXML python-setuptools libxslt-python pytz \
    python-matplotlib python-babel python-mako python-dateutil python-psycopg2 \
    pychart pydot python-reportlab python-devel python-imaging python-vobject \
    hippo-canvas-python mx python-gdata python-ldap python-openid \
    python-werkzeug python-vatnumber pygtk2 glade3 pydot python-dateutil \
    python-matplotlib pygtk2 glade3 pydot python-dateutil python-matplotlib \
    python python-devel python-psutil python-docutils make \
    automake gcc gcc-c++ kernel-devel byacc flashplugin-nonfree poppler-utils pywebdav
2- After this I fixed the permission issue by doing the following:
chown USERNAME:USERNAME /var/run/openerp/openerp-server.pid
sudo chown USERNAME:USERNAME /tmp/oe-sessions-openerp
Please note the following:
1- Be sure where you installed your OpenERP, as you will not necessarily find your path in /tmp by default.
After doing the above, my problem was solved.