awslogs-agent-setup.py not working on Ubuntu 17.10 (artful) - amazon-web-services

This works fine on Ubuntu 16.04, but not on 17.10:
+ curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 56093  100 56093    0     0  56093      0  0:00:01 --:--:-- 0:00:01 98929
+ chmod +x ./awslogs-agent-setup.py
+ ./awslogs-agent-setup.py -n -c /etc/awslogs/awslogs.conf -r us-west-2
Step 1 of 5: Installing pip ... libyaml-dev does not exist in system DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... Traceback (most recent call last):
File "./awslogs-agent-setup.py", line 1317, in <module>
main()
File "./awslogs-agent-setup.py", line 1313, in main
setup.setup_artifacts()
File "./awslogs-agent-setup.py", line 858, in setup_artifacts
self.install_awslogs_cli()
File "./awslogs-agent-setup.py", line 570, in install_awslogs_cli
subprocess.call([AWSCLI_CMD, 'configure', 'set', 'plugins.cwlogs', 'cwlogs'], env=DEFAULT_ENV)
File "/usr/lib/python2.7/subprocess.py", line 168, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1025, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I noticed earlier in the process, in the AWS boilerplate, that it failed to install libyaml-dev, but I'm not sure whether that's the only problem.

Always find the answer right after I post it...
Here's my modified CF template command:
050_install_awslogs:
command: !Sub
"/bin/bash -x\n
exec >>/var/log/cf_050_install_awslogs.log 2>&1 \n
echo 050_install_awslogs...\n
set -xe\n
# Get the CloudWatch Logs agent\n
mkdir /opt/awslogs\n
cd /opt/awslogs\n
# Needed for python3 in 17.10\n
apt-get install -y libyaml-dev python-dev \n
pip3 install awscli-cwlogs\n
# avoid it complaining about not having /var/awslogs/bin/aws binary\n
if [ ! -d /var/awslogs/bin ] ; then\n
mkdir -p /var/awslogs/bin\n
ln -s /usr/local/bin/aws /var/awslogs/bin/aws\n
fi\n
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\n
chmod +x ./awslogs-agent-setup.py\n
# Hack for python 3.6 & old awslogs-agent-setup.py\n
sed -i 's/3,6/3,7/' awslogs-agent-setup.py\n
./awslogs-agent-setup.py -n -c /etc/awslogs/awslogs.conf -r ${AWS::Region}\n
echo 050_install_awslogs end\n
"
I'm not entirely sure about the need for the directory creation, but I expect this is a temporary situation that will get resolved soon, since one still needs to fudge the Python 3.6 compatibility check.
It may be installable using Python 2.7 as well, but that felt like going backwards at this point, as my rationale for 17.10 was Python 3.6.
Credit for the yaml package and the directory creation idea goes to https://forums.aws.amazon.com/thread.jspa?threadID=265977, but I prefer to avoid easy_install.

I had a similar issue on Ubuntu 18.04.
The AWS instructions for a standalone install worked in my case.
To download and run it standalone, use the following commands and follow the prompts:
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/AgentDependencies.tar.gz -O
tar xvf AgentDependencies.tar.gz -C /tmp/
sudo python ./awslogs-agent-setup.py --region us-east-1 --dependency-path /tmp/AgentDependencies

Related

Trying to install django to my virtual env

I am new to Linux and pipenv. I tried to install django in my new environment with "pipenv install django" and this happened:
Installing django…
Adding django to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (4f9dd2)!
Installing dependencies from Pipfile.lock (4f9dd2)…
An error occurred while installing asgiref==3.2.5 --hash=sha256:3e4192eaec0758b99722f0b0666d5fbfaa713054d92e8de5b58ba84ec5ce696f --hash=sha256:c8f49dd3b42edcc51d09dd2eea8a92b3cfc987ff7e6486be734b4d0cbfd5d315! Will try again.
An error occurred while installing django==3.0.4 --hash=sha256:50b781f6cbeb98f673aa76ed8e572a019a45e52bdd4ad09001072dfd91ab07c8 --hash=sha256:89e451bfbb815280b137e33e454ddd56481fdaa6334054e6e031041ee1eda360! Will try again.
An error occurred while installing pytz==2019.3 --hash=sha256:1c557d7d0e871de1f5ccd5833f60fb2550652da6be2693c1e02300743d21500d --hash=sha256:b02c06db6cf09c12dd25137e563b31700d3b80fcc4ad23abb7a315f2789819be! Will try again.
An error occurred while installing sqlparse==0.3.1 --hash=sha256:022fb9c87b524d1f7862b3037e541f68597a730a8843245c349fc93e1643dc4e --hash=sha256:e162203737712307dfe78860cc56c8da8a852ab2ee33750e33aeadf38d12c548! Will try again.
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 4/4 — 00:00:00
Installing initially failed dependencies…
☤ ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 4/4 — 00:00:00
[pipenv.exceptions.InstallError]: File "/home/codrut/.local/lib/python3.7/site-packages/pipenv/cli/command.py", line 254, in install
[pipenv.exceptions.InstallError]: editable_packages=state.installstate.editables,
[pipenv.exceptions.InstallError]: File "/home/codrut/.local/lib/python3.7/site-packages/pipenv/core.py", line 1992, in do_install
[pipenv.exceptions.InstallError]: skip_lock=skip_lock,
[pipenv.exceptions.InstallError]: File "/home/codrut/.local/lib/python3.7/site-packages/pipenv/core.py", line 1253, in do_init
[pipenv.exceptions.InstallError]: pypi_mirror=pypi_mirror,
[pipenv.exceptions.InstallError]: File "/home/codrut/.local/lib/python3.7/site-packages/pipenv/core.py", line 862, in do_install_dependencies
[pipenv.exceptions.InstallError]: _cleanup_procs(procs, False, failed_deps_queue, retry=False)
[pipenv.exceptions.InstallError]: File "/home/codrut/.local/lib/python3.7/site-packages/pipenv/core.py", line 681, in _cleanup_procs
[pipenv.exceptions.InstallError]: raise exceptions.InstallError(c.dep.name, extra=err_lines)
[pipenv.exceptions.InstallError]: []
[pipenv.exceptions.InstallError]: ['Traceback (most recent call last):', ' File "/home/codrut/.local/share/virtualenvs/Django-B9r4LqTh/bin/pip", line 5, in <module>', ' from pip._internal.cli.main import main', "ModuleNotFoundError: No module named 'pip'"]
ERROR: ERROR: Package installation failed...
I should mention that a few minutes ago I installed django in a wrong folder and everything worked.
Please help! Thank you!
In the project's root directory, run the following command to create the environment:
$ python3 -m venv venv
This will create a folder called venv in the root, which is the virtual environment folder.
Then add the following command to activate the virtual environment:
$ source venv/bin/activate
your_project_folder/
|
|-- your_main_app_folder/
| |
| |--Folder_with_controllers/
| | settings.py
| | urls.py
| | ...
| |
| |--App_folder/
| |--Other_app_folder/
|
|--venv/
If the code works fine your bash should look like this:
(venv) <the_path_for_the_folder> your_project_folder %
After activating your environment you can install django and other packages.
PS: make sure you create and activate the virtual environment in the project root, not in your_main_app_folder.
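The original failure ("No module named 'pip'") means the broken virtualenv had no pip of its own; a freshly created venv ships with one. If in doubt, you can check from Python itself whether a virtual environment is active. A small sketch (the helper name is mine):

```python
import sys

def in_virtualenv():
    # Inside a venv (Python 3.3+), sys.prefix points at the venv
    # while sys.base_prefix points at the base system installation.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print("virtualenv active:", in_virtualenv())
```

Run it with the venv activated and again without; the two runs should print different results if activation worked.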

Can't add jars pyspark in jupyter of Google DataProc

I have a Jupyter notebook on DataProc and I need a jar to run some job. I'm aware of editing spark-defaults.conf and of using --jars=gs://spark-lib/bigquery/spark-bigquery-latest.jar to submit the job from the command line; they both work well. However, when I want to add the jar directly in the Jupyter notebook, the methods below all fail.
Method 1:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars gs://spark-lib/bigquery/spark-bigquery-latest.jar pyspark-shell'
Method 2:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('Shakespeare WordCount')\
    .config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar')\
    .getOrCreate()
They both have the same error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-1-2b7692efb32b> in <module>()
19 # Read BQ data into spark dataframe
20 # This method reads from BQ directly, does not use GCS for intermediate results
---> 21 df = spark.read.format('bigquery').option('table', table).load()
22
23 df.show(5)
/usr/lib/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
170 return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
171 else:
--> 172 return self._df(self._jreader.load())
173
174 #since(1.4)
/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o81.load.
: java.lang.ClassNotFoundException: Failed to find data source: bigquery. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: bigquery.DefaultSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:634)
... 13 more
The task I try to run is very simple:
table = 'publicdata.samples.shakespeare'
df = spark.read.format('bigquery').option('table', table).load()
df.show(5)
I understand there are many similar questions and answers, but they are either not working or not fitting my needs. There are ad-hoc jars I'll need and I don't want to keep all of them in the default configurations. I'd like to be more flexible and add jars on-the-go. How can I solve this? Thank you!
Unfortunately there isn't a built-in way to do this dynamically without effectively just editing spark-defaults.conf and restarting the kernel. There's an open feature request in Spark for this.
Zeppelin has some usability features for adding jars through the UI but even in Zeppelin you have to restart the interpreter after doing so for the Spark context to pick it up in its classloader. And also those options require the jarfiles to already be staged on the local filesystem; you can't just refer to remote file paths or URLs.
One workaround would be to create an init action which sets up a systemd service which regularly polls on some HDFS directory to sync into one of the existing classpath directories like /usr/lib/spark/jars:
#!/bin/bash
# Sets up continuous sync'ing of an HDFS directory into /usr/lib/spark/jars
# Manually copy jars into this HDFS directory to have them sync into
# ${LOCAL_DIR} on all nodes.
HDFS_DROPZONE='hdfs:///usr/lib/jars'
LOCAL_DIR='file:///usr/lib/spark/jars'
readonly ROLE="$(/usr/share/google/get_metadata_value attributes/dataproc-role)"
if [[ "${ROLE}" == 'Master' ]]; then
hdfs dfs -mkdir -p "${HDFS_DROPZONE}"
fi
SYNC_SCRIPT='/usr/lib/hadoop/libexec/periodic-sync-jars.sh'
cat << EOF > "${SYNC_SCRIPT}"
#!/bin/bash
while true; do
sleep 5
hdfs dfs -ls ${HDFS_DROPZONE}/*.jar 2>/dev/null | grep hdfs: | \
sed 's/.*hdfs:/hdfs:/' | xargs -n 1 basename 2>/dev/null | sort \
> /tmp/hdfs_files.txt
hdfs dfs -ls ${LOCAL_DIR}/*.jar 2>/dev/null | grep file: | \
sed 's/.*file:/file:/' | xargs -n 1 basename 2>/dev/null | sort \
> /tmp/local_files.txt
comm -23 /tmp/hdfs_files.txt /tmp/local_files.txt > /tmp/diff_files.txt
if [ -s /tmp/diff_files.txt ]; then
for FILE in \$(cat /tmp/diff_files.txt); do
echo "\$(date): Copying \${FILE} from ${HDFS_DROPZONE} into ${LOCAL_DIR}"
hdfs dfs -cp "${HDFS_DROPZONE}/\${FILE}" "${LOCAL_DIR}/\${FILE}"
done
fi
done
EOF
chmod 755 "${SYNC_SCRIPT}"
SERVICE_CONF='/usr/lib/systemd/system/sync-jars.service'
cat << EOF > "${SERVICE_CONF}"
[Unit]
Description=Periodic Jar Sync
[Service]
Type=simple
ExecStart=/bin/bash -c '${SYNC_SCRIPT} &>> /var/log/periodic-sync-jars.log'
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
chmod a+rw "${SERVICE_CONF}"
systemctl daemon-reload
systemctl enable sync-jars
systemctl restart sync-jars
systemctl status sync-jars
Then, whenever you need a jarfile to be available everywhere you just copy the jarfile into hdfs:///usr/lib/jars, and the periodic poller will automatically stick it into /usr/lib/spark/jars and then you simply restart your kernel to pick it up. You can add jars to that HDFS directory either by SSH'ing in and running hdfs dfs -cp directly, or simply subprocess out from your Jupyter notebook:
import subprocess
sp = subprocess.Popen(
    ['hdfs', 'dfs', '-cp',
     'gs://spark-lib/bigquery/spark-bigquery-latest.jar',
     'hdfs:///usr/lib/jars/spark-bigquery-latest.jar'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = sp.communicate()
print(out)
print(err)
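The same call can be wrapped in a small helper so the destination filename is derived from the jar URL automatically. A sketch (the helper name and the dry_run switch are mine, for illustration; subprocess.run requires Python 3.7+ for capture_output):

```python
import subprocess

def stage_jar(jar_url, hdfs_dir="hdfs:///usr/lib/jars", dry_run=False):
    """Copy a jar into the HDFS dropzone watched by the sync service."""
    jar_name = jar_url.rstrip("/").rsplit("/", 1)[-1]
    cmd = ["hdfs", "dfs", "-cp", jar_url, hdfs_dir + "/" + jar_name]
    if dry_run:
        # Just report what would run, without needing an hdfs client.
        return cmd
    return subprocess.run(cmd, capture_output=True, text=True)

print(stage_jar("gs://spark-lib/bigquery/spark-bigquery-latest.jar", dry_run=True))
```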

"no job control" or "no module" rebuilding Epoptes to CentOS

Epoptes is lab-view (as in classroom lab) software commonly available in Debian-based distributions. I'm trying to rebuild it for CentOS, basically as described here and here. The openSUSE rpm can be found here, the deb here.
Either way, rebuilding from the deb or from the openSUSE rpm, the stopping point is the same:
/var/tmp/rpm-tmp.leBIra: line 1: fg: no job control
error: %pre(epoptes-0.5.10-3.1.noarch) scriptlet failed, exit status 1
error: epoptes-0.5.10-3.1.noarch: install failed
error: epoptes-0.5.10-3.noarch: erase skipped
Basically, trying to install the rpm after rebuild, e.g.,
rpmrebuild -pe epoptes-0.5.10-3.noarch.rpm
rpm -Uvh epoptes-0.5.10-3.noarch.rpm
fails complaining about job control. I am not sure how to amend this given the workflow for rebuilding the rpm.
From OpenSuse:
Using the command rpm -qp --scripts epoptes-0.5.10-3.1.noarch.rpm returns
preinstall scriptlet (using /bin/sh):
%service_add_pre epoptes-server.service
postinstall scriptlet (using /bin/sh):
if ! getent group epoptes >/dev/null; then
groupadd --system epoptes
fi
if ! [ -f /etc/epoptes/server.key ] || ! [ -f /etc/epoptes/server.crt ] || ! [ -s /etc/epoptes/server.crt ]; then
if ! [ -d /etc/epoptes ]; then
mkdir /etc/epoptes
fi
openssl req -batch -x509 -nodes -newkey rsa:1024 -days $(($(date --utc +%s) / 86400 + 3652)) -keyout /etc/epoptes/server.key -out /etc/epoptes/server.crt
chmod 600 /etc/epoptes/server.key
fi
%service_add_post epoptes-server.service
preuninstall scriptlet (using /bin/sh):
%service_del_preun epoptes-server.service
postuninstall scriptlet (using /bin/sh):
%service_del_postun epoptes-server.service
Rebuilding from the Ubuntu deb, installation is successful, but launch fails with:
# epoptes
Traceback (most recent call last):
File "/usr/bin/epoptes", line 30, in <module>
import epoptes
ImportError: No module named epoptes
Running rpm -qlp on the RPM generated from the .deb shows:
/etc
/etc/default
/etc/default/epoptes
/etc/epoptes
/etc/init.d/epoptes
/usr
/usr/bin/epoptes
/usr/lib/python2.7
/usr/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/epoptes
/usr/lib/python2.7/dist-packages/epoptes-0.5.10_2.egg-info
/usr/lib/python2.7/dist-packages/epoptes/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/common
/usr/lib/python2.7/dist-packages/epoptes/common/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/common/config.py
/usr/lib/python2.7/dist-packages/epoptes/common/constants.py
/usr/lib/python2.7/dist-packages/epoptes/common/ltsconf.py
/usr/lib/python2.7/dist-packages/epoptes/common/xdg_dirs.py
/usr/lib/python2.7/dist-packages/epoptes/core
/usr/lib/python2.7/dist-packages/epoptes/core/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/core/lib_users.py
/usr/lib/python2.7/dist-packages/epoptes/core/structs.py
/usr/lib/python2.7/dist-packages/epoptes/core/wol.py
/usr/lib/python2.7/dist-packages/epoptes/daemon
/usr/lib/python2.7/dist-packages/epoptes/daemon/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/bashplex.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/commands.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/exchange.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/guiplex.py
/usr/lib/python2.7/dist-packages/epoptes/daemon/uiconnection.py
/usr/lib/python2.7/dist-packages/epoptes/ui
/usr/lib/python2.7/dist-packages/epoptes/ui/__init__.py
/usr/lib/python2.7/dist-packages/epoptes/ui/about_dialog.py
/usr/lib/python2.7/dist-packages/epoptes/ui/benchmark.py
/usr/lib/python2.7/dist-packages/epoptes/ui/client_information.py
/usr/lib/python2.7/dist-packages/epoptes/ui/execcommand.py
/usr/lib/python2.7/dist-packages/epoptes/ui/graph.py
/usr/lib/python2.7/dist-packages/epoptes/ui/gui.py
/usr/lib/python2.7/dist-packages/epoptes/ui/notifications.py
/usr/lib/python2.7/dist-packages/epoptes/ui/remote_assistance.py
/usr/lib/python2.7/dist-packages/epoptes/ui/sendmessage.py
/usr/lib/python2.7/dist-packages/twisted
/usr/lib/python2.7/dist-packages/twisted/plugins
/usr/lib/python2.7/dist-packages/twisted/plugins/epoptesd.py
/usr/share
/usr/share/applications
/usr/share/applications/epoptes.desktop
/usr/share/doc
/usr/share/doc/epoptes
/usr/share/doc/epoptes/README
/usr/share/doc/epoptes/changelog.Debian.gz
/usr/share/doc/epoptes/copyright
/usr/share/epoptes
/usr/share/epoptes/about_dialog.ui
/usr/share/epoptes/client-functions
/usr/share/epoptes/client_information.ui
/usr/share/epoptes/epoptes.ui
/usr/share/epoptes/executeCommand.ui
/usr/share/epoptes/images
/usr/share/epoptes/images/16
/usr/share/epoptes/images/16/assist.png
/usr/share/epoptes/images/16/broadcast-stop.png
/usr/share/epoptes/images/16/broadcast-windowed.png
/usr/share/epoptes/images/16/broadcast.png
/usr/share/epoptes/images/16/execute.png
/usr/share/epoptes/images/16/graph.png
/usr/share/epoptes/images/16/info.png
/usr/share/epoptes/images/16/lock-screen.png
/usr/share/epoptes/images/16/logout.png
/usr/share/epoptes/images/16/message.png
/usr/share/epoptes/images/16/mute.png
/usr/share/epoptes/images/16/observe.png
/usr/share/epoptes/images/16/poweron.png
/usr/share/epoptes/images/16/restart.png
/usr/share/epoptes/images/16/root-terminal.png
/usr/share/epoptes/images/16/shutdown.png
/usr/share/epoptes/images/16/terminal.png
/usr/share/epoptes/images/16/unlock-screen.png
/usr/share/epoptes/images/16/unmute.png
/usr/share/epoptes/images/assist.svg
/usr/share/epoptes/images/broadcast-stop.svg
/usr/share/epoptes/images/broadcast-windowed.svg
/usr/share/epoptes/images/broadcast.svg
/usr/share/epoptes/images/execute.svg
/usr/share/epoptes/images/fat.svg
/usr/share/epoptes/images/graph.png
/usr/share/epoptes/images/info.svg
/usr/share/epoptes/images/lock-screen.svg
/usr/share/epoptes/images/login.png
/usr/share/epoptes/images/logout.svg
/usr/share/epoptes/images/message.svg
/usr/share/epoptes/images/mute.svg
/usr/share/epoptes/images/observe.svg
/usr/share/epoptes/images/off.png
/usr/share/epoptes/images/offline.svg
/usr/share/epoptes/images/on.png
/usr/share/epoptes/images/poweron.svg
/usr/share/epoptes/images/restart.svg
/usr/share/epoptes/images/root-terminal.svg
/usr/share/epoptes/images/shutdown.svg
/usr/share/epoptes/images/standalone.svg
/usr/share/epoptes/images/systemgrp.png
/usr/share/epoptes/images/terminal.svg
/usr/share/epoptes/images/thin.svg
/usr/share/epoptes/images/unlock-screen.svg
/usr/share/epoptes/images/unmute.svg
/usr/share/epoptes/images/usersgrp.png
/usr/share/epoptes/netbenchmark.ui
/usr/share/epoptes/remote_assistance.ui
/usr/share/epoptes/sendMessage.ui
/usr/share/icons
/usr/share/icons/hicolor
/usr/share/icons/hicolor/scalable
/usr/share/icons/hicolor/scalable/apps
/usr/share/icons/hicolor/scalable/apps/epoptes.svg
/usr/share/locale
/usr/share/locale/af
/usr/share/locale/af/LC_MESSAGES
/usr/share/locale/af/LC_MESSAGES/epoptes.mo
/usr/share/locale/ar
/usr/share/locale/ar/LC_MESSAGES
/usr/share/locale/ar/LC_MESSAGES/epoptes.mo
/usr/share/locale/bg
/usr/share/locale/bg/LC_MESSAGES
/usr/share/locale/bg/LC_MESSAGES/epoptes.mo
/usr/share/locale/ca
/usr/share/locale/ca/LC_MESSAGES
/usr/share/locale/ca/LC_MESSAGES/epoptes.mo
/usr/share/locale/ca@valencia
/usr/share/locale/ca@valencia/LC_MESSAGES
/usr/share/locale/ca@valencia/LC_MESSAGES/epoptes.mo
/usr/share/locale/cs
/usr/share/locale/cs/LC_MESSAGES
/usr/share/locale/cs/LC_MESSAGES/epoptes.mo
/usr/share/locale/da
/usr/share/locale/da/LC_MESSAGES
/usr/share/locale/da/LC_MESSAGES/epoptes.mo
/usr/share/locale/de
/usr/share/locale/de/LC_MESSAGES
/usr/share/locale/de/LC_MESSAGES/epoptes.mo
/usr/share/locale/el
/usr/share/locale/el/LC_MESSAGES
/usr/share/locale/el/LC_MESSAGES/epoptes.mo
/usr/share/locale/en_AU
/usr/share/locale/en_AU/LC_MESSAGES
/usr/share/locale/en_AU/LC_MESSAGES/epoptes.mo
/usr/share/locale/en_GB
/usr/share/locale/en_GB/LC_MESSAGES
/usr/share/locale/en_GB/LC_MESSAGES/epoptes.mo
/usr/share/locale/es
/usr/share/locale/es/LC_MESSAGES
/usr/share/locale/es/LC_MESSAGES/epoptes.mo
/usr/share/locale/eu
/usr/share/locale/eu/LC_MESSAGES
/usr/share/locale/eu/LC_MESSAGES/epoptes.mo
/usr/share/locale/fi
/usr/share/locale/fi/LC_MESSAGES
/usr/share/locale/fi/LC_MESSAGES/epoptes.mo
/usr/share/locale/fr
/usr/share/locale/fr/LC_MESSAGES
/usr/share/locale/fr/LC_MESSAGES/epoptes.mo
/usr/share/locale/gl
/usr/share/locale/gl/LC_MESSAGES
/usr/share/locale/gl/LC_MESSAGES/epoptes.mo
/usr/share/locale/he
/usr/share/locale/he/LC_MESSAGES
/usr/share/locale/he/LC_MESSAGES/epoptes.mo
/usr/share/locale/hu
/usr/share/locale/hu/LC_MESSAGES
/usr/share/locale/hu/LC_MESSAGES/epoptes.mo
/usr/share/locale/id
/usr/share/locale/id/LC_MESSAGES
/usr/share/locale/id/LC_MESSAGES/epoptes.mo
/usr/share/locale/it
/usr/share/locale/it/LC_MESSAGES
/usr/share/locale/it/LC_MESSAGES/epoptes.mo
/usr/share/locale/lt
/usr/share/locale/lt/LC_MESSAGES
/usr/share/locale/lt/LC_MESSAGES/epoptes.mo
/usr/share/locale/ml
/usr/share/locale/ml/LC_MESSAGES
/usr/share/locale/ml/LC_MESSAGES/epoptes.mo
/usr/share/locale/ms
/usr/share/locale/ms/LC_MESSAGES
/usr/share/locale/ms/LC_MESSAGES/epoptes.mo
/usr/share/locale/nb
/usr/share/locale/nb/LC_MESSAGES
/usr/share/locale/nb/LC_MESSAGES/epoptes.mo
/usr/share/locale/nl
/usr/share/locale/nl/LC_MESSAGES
/usr/share/locale/nl/LC_MESSAGES/epoptes.mo
/usr/share/locale/oc
/usr/share/locale/oc/LC_MESSAGES
/usr/share/locale/oc/LC_MESSAGES/epoptes.mo
/usr/share/locale/pl
/usr/share/locale/pl/LC_MESSAGES
/usr/share/locale/pl/LC_MESSAGES/epoptes.mo
/usr/share/locale/pt
/usr/share/locale/pt/LC_MESSAGES
/usr/share/locale/pt/LC_MESSAGES/epoptes.mo
/usr/share/locale/pt_BR
/usr/share/locale/pt_BR/LC_MESSAGES
/usr/share/locale/pt_BR/LC_MESSAGES/epoptes.mo
/usr/share/locale/ru
/usr/share/locale/ru/LC_MESSAGES
/usr/share/locale/ru/LC_MESSAGES/epoptes.mo
/usr/share/locale/se
/usr/share/locale/se/LC_MESSAGES
/usr/share/locale/se/LC_MESSAGES/epoptes.mo
/usr/share/locale/sk
/usr/share/locale/sk/LC_MESSAGES
/usr/share/locale/sk/LC_MESSAGES/epoptes.mo
/usr/share/locale/sl
/usr/share/locale/sl/LC_MESSAGES
/usr/share/locale/sl/LC_MESSAGES/epoptes.mo
/usr/share/locale/so
/usr/share/locale/so/LC_MESSAGES
/usr/share/locale/so/LC_MESSAGES/epoptes.mo
/usr/share/locale/sr
/usr/share/locale/sr/LC_MESSAGES
/usr/share/locale/sr/LC_MESSAGES/epoptes.mo
/usr/share/locale/sr@latin
/usr/share/locale/sr@latin/LC_MESSAGES
/usr/share/locale/sr@latin/LC_MESSAGES/epoptes.mo
/usr/share/locale/sv
/usr/share/locale/sv/LC_MESSAGES
/usr/share/locale/sv/LC_MESSAGES/epoptes.mo
/usr/share/locale/tr
/usr/share/locale/tr/LC_MESSAGES
/usr/share/locale/tr/LC_MESSAGES/epoptes.mo
/usr/share/locale/uk
/usr/share/locale/uk/LC_MESSAGES
/usr/share/locale/uk/LC_MESSAGES/epoptes.mo
/usr/share/locale/vi
/usr/share/locale/vi/LC_MESSAGES
/usr/share/locale/vi/LC_MESSAGES/epoptes.mo
/usr/share/locale/zh_CN
/usr/share/locale/zh_CN/LC_MESSAGES
/usr/share/locale/zh_CN/LC_MESSAGES/epoptes.mo
/usr/share/locale/zh_TW
/usr/share/locale/zh_TW/LC_MESSAGES
/usr/share/locale/zh_TW/LC_MESSAGES/epoptes.mo
/usr/share/ltsp
/usr/share/ltsp/plugins
/usr/share/ltsp/plugins/ltsp-build-client
/usr/share/ltsp/plugins/ltsp-build-client/common
/usr/share/ltsp/plugins/ltsp-build-client/common/040-epoptes-certificate
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/epoptes.1.gz

Saltstack fails when creating minions from a map file on DigitalOcean

I can't create a minion from the map file and have no idea what happened. A month ago my script was working correctly; right now it fails. I tried to research the problem but couldn't find anything about it. Could someone have a look at my debug log? The minion is created on DigitalOcean, but the master server can't connect to it at all.
So I run:
salt-cloud -P -m /etc/salt/cloud.maps.d/production.map -l debug
The master is running on Ubuntu 16.04.1 x64, the minion also.
I use the latest SaltStack repository:
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" >> /etc/apt/sources.list.d/saltstack.list
I tested both 2016.3.2 and 2016.3.3. Interestingly, the same script was working correctly 4 weeks ago, so I assume something must have changed.
ERROR:
Writing /usr/lib/python2.7/dist-packages/salt-2016.3.3.egg-info
* INFO: Running install_ubuntu_git_post()
disabled
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /lib/systemd/system/salt-minion.service.
* INFO: Running install_ubuntu_check_services()
* INFO: Running install_ubuntu_restart_daemons()
Job for salt-minion.service failed because a configured resource limit was exceeded. See "systemctl status salt-minion.service" and "journalctl -xe" for details.
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
* ERROR: No init.d support for salt-minion was found
* ERROR: Failed to run install_ubuntu_restart_daemons()!!!
[ERROR ] Failed to deploy 'minion-zk-0'. Error: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 2293, in create_multiprocessing
local_master=parallel_data['local_master']
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 1281, in create
output = self.clouds[func](vm_)
File "/usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py", line 481, in create
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 527, in bootstrap
deployed = deploy_script(**deploy_kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1516, in deploy_script
if root_cmd(deploy_command, tty, sudo, **ssh_kwargs) != 0:
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 2167, in root_cmd
retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1784, in _exec_ssh_cmd
cmd, proc.exitstatus
SaltCloudSystemExit: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_ID '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
[DEBUG ] LazyLoaded nested.output
minion-zk-0:
----------
Error:
Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
root@master-zk:/etc/salt/cloud.maps.d# salt '*' test.ping
minion-zk-0:
Minion did not return. [No response]
root@master-zk:/etc/salt/cloud.maps.d#
It is located in your cloud configuration somewhere in /etc/salt/cloud.profiles.d/, /etc/salt/cloud.providers.d/ or /etc/salt/cloud.d/. Just figure out where, and change the value "salt" to your master's IP.
I currently do this in my provider settings like this:
hit-vcenter:
  driver: vmware
  user: 'foo'
  password: 'secret'
  url: 'some url'
  protocol: 'https'
  port: 443
  minion:
    master: 10.1.10.1
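For a DigitalOcean provider (as in the question) the shape is the same: the minion block nests under the provider entry. A sketch, with field names taken from the salt-cloud DigitalOcean driver as I recall them; the token, key path, and master IP are placeholders:

```yaml
do-production:
  driver: digital_ocean
  personal_access_token: <your DO API token>
  ssh_key_file: /etc/salt/keys/cloud/do.pem
  minion:
    master: 10.1.10.1
```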

OpenERP 7 service start issue

I installed OpenERP 7 on my 64-bit CentOS and I have this problem when starting the service:
Starting OpenERP Server Daemon (openerp-server): [ OK ]
root@****[~]# ERROR: couldn't create the logfile directory. Logging to the standard output.
2014-09-10 14:04:58,739 29029 INFO ? openerp: OpenERP version 7.0-20140804-231303
2014-09-10 14:04:58,739 29029 INFO ? openerp: addons paths: /usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/addons
2014-09-10 14:04:58,739 29029 INFO ? openerp: database hostname: localhost
2014-09-10 14:04:58,739 29029 INFO ? openerp: database port: 5432
2014-09-10 14:04:58,740 29029 INFO ? openerp: database user: openerp
Traceback (most recent call last):
File "/usr/bin/openerp-server", line 5, in <module>
pkg_resources.run_script('openerp==7.0-20140804-231303', 'openerp-server')
File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 461, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 1194, in run_script
execfile(script_filename, namespace, namespace)
File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/EGG-INFO/scripts/openerp-server", line 5, in <module>
openerp.cli.main()
File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/__init__.py", line 61, in main
o.run(args)
File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 272, in run
main(args)
File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 252, in main
setup_pid_file()
File "/usr/lib/python2.6/site-packages/openerp-7.0_20140804_231303-py2.6.egg/openerp/cli/server.py", line 88, in setup_pid_file
fd = open(config['pidfile'], 'w')
IOError: [Errno 13] Permission denied: '/var/run/openerp/openerp-server.pid'
Also, when I try to stop the service I get this error:
service openerp stop
Stopping OpenERP Server Daemon (openerp-server): cat: /var/run/openerp/openerp-server.pid: No such file or directory
[FAILED]
Can you please advise how to fix this issue?
Thank you,
Best Regards,
As Odedra said, it's a permission issue, as well as missing Python modules. I found no reason why this happens in the first place, but at least I got a hint how to solve it, as follows:
1- I found that missing Python and other modules had to be installed on my CentOS 6.5:
yum -y install python-psycopg2 python-lxml PyXML python-setuptools libxslt-python pytz \
python-matplotlib python-babel python-mako python-dateutil python-psycopg2 \
pychart pydot python-reportlab python-devel python-imaging python-vobject \
hippo-canvas-python mx python-gdata python-ldap python-openid \
python-werkzeug python-vatnumber pygtk2 glade3 pydot python-dateutil \
python-matplotlib pygtk2 glade3 pydot python-dateutil python-matplotlib \
python python-devel python-psutil python-docutils make \
automake gcc gcc-c++ kernel-devel byacc flashplugin-nonfree poppler-utils pywebdav
2- After this I fixed the permission issue by doing the following:
chown USERNAME:USERNAME /var/run/openerp/openerp-server.pid
sudo chown USERNAME:USERNAME /tmp/oe-sessions-openerp
Please note the following:
1- Be sure of where you installed your OpenERP, as the path will not necessarily be in /tmp by default.
After doing the above, my problem was solved.
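The traceback above dies at open(config['pidfile'], 'w'), so the underlying fix is making sure the pidfile's parent directory exists and is writable by the service user. A minimal sketch of that check, rehearsed in a scratch directory rather than /var/run/openerp (the helper name is mine; written for Python 3):

```python
import os
import tempfile

def ensure_pidfile_dir(pidfile):
    """Create the pidfile's parent directory if missing and report
    whether the current user could write the pidfile into it."""
    parent = os.path.dirname(pidfile)
    os.makedirs(parent, exist_ok=True)
    return os.access(parent, os.W_OK)

# Rehearse with a scratch path instead of /var/run/openerp/openerp-server.pid:
scratch = os.path.join(tempfile.mkdtemp(), "var", "run", "openerp", "openerp-server.pid")
print(ensure_pidfile_dir(scratch))
```

On the real server, the init script should do the equivalent (create the directory and chown it to the service user) before the daemon starts.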