No folder on ubuntu - clamd.scan/clamd.sock - django

I am trying to install a virus scanner for my Django application. It refers to /var/run/clamd.scan/clamd.sock, but I don't have that folder on Ubuntu 18+. I tried installing it with sudo apt-get install clamav, but the folder still doesn't appear. How can I install it on Ubuntu? Or is the socket perhaps in a different place?

For the socket to appear, you also need to install the daemon and restart the services:
sudo apt-get install clamav clamav-daemon
sudo service clamav-freshclam restart
sudo service clamav-daemon restart
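Note that the Ubuntu packages do not create /var/run/clamd.scan/clamd.sock at all (that layout is typical of RPM-based distros); the daemon's socket path is set in clamd.conf. A rough sketch of locating it and pointing your Django scanner at it (CLAMD_SOCKET is the django-clamd setting name and is an assumption here; use whatever setting your scanner library actually reads):
# the Debian/Ubuntu packaging configures the socket in clamd.conf
grep -i LocalSocket /etc/clamav/clamd.conf
# usually prints: LocalSocket /var/run/clamav/clamd.ctl
ls -l /var/run/clamav/clamd.ctl
# then point your Django setting at that path, e.g.
# CLAMD_SOCKET = "/var/run/clamav/clamd.ctl"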

Related

How to install apache superset in secured servers (without internet)

What is the best way of installing Apache Superset on a secured server with no internet access?
I couldn't find an answer anywhere and figured it out through a lot of trial and error, so this is my small contribution to the community for anyone facing the same issue.
Below are the commands to install Apache Superset for production/development:
sudo yum install gcc gcc-c++ libffi-devel python-devel python-pip python-wheel openssl-devel cyrus-sasl-devel openldap-devel
sudo yum install python3
sudo yum install python3-devel
Since the server has no internet access, you need to download the packages from PyPI on your local system and transfer them to the server.
Download all dependencies from: https://pypi.org/simple/
To untar: tar -xvf package.tar
Go to the untarred folder and run the command below:
sudo python3.x setup.py install   (where x is your Python 3 minor version)
If the package is a wheel (.whl), run the command below instead:
sudo python3.6 -m pip install packagename.whl
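As an alternative to building each package from source, here is a rough sketch of a wheel-based route, assuming you have a machine with internet access running the same OS and Python version as the server (the PyPI name apache-superset is an assumption; older releases were published as superset):
# on the machine with internet access: download superset plus all its dependencies
python3.6 -m pip download apache-superset -d ./superset-packages
# copy the superset-packages directory to the server, then install offline there
sudo python3.6 -m pip install --no-index --find-links ./superset-packages apache-superset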
After installing all the dependencies, run the commands below:
fabmanager create-admin --app superset
superset db upgrade
superset init
superset run -p port --with-threads --reload --debugger   (for development)
gunicorn -b IP:port superset:app   (for production use)
Hope this helps someone who is facing the same issue.
If anything has changed or I missed something, please add it.

Installing libpoco and NPM conflict

I've got Ubuntu 18.04 on my machine. I need to work on two projects: the first in C++, the second a front-end app in Angular, which uses NPM and Node.js for development.
The problem is that when I configure my environment by installing POCO, it removes NPM, and the other way round: installing NPM does the same to POCO.
sudo apt-get install libpoco-dev
sudo apt-get install libssh-dev
sudo apt-get install libssl-dev
sudo apt-get install nodejs
sudo apt-get install npm
After that I can see that the symbolic links for POCO have disappeared,
and if I try to reinstall POCO, it starts removing NPM.
The question is: is it possible to fix this, or would it be better to keep the two application environments on different servers? Thanks
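A likely culprit on 18.04 is that the repo npm pulls in libssl1.0-dev, which conflicts with the libssl-dev you install alongside POCO, so apt removes one set of packages to satisfy the other. One hedged workaround (a sketch, not verified against your exact dependency set) is to keep the C++ toolchain on apt and install Node.js/npm through nvm instead:
# keep the C++ side on apt
sudo apt-get install libpoco-dev libssh-dev libssl-dev
# install Node.js and npm via nvm so apt's libssl packages are left alone
# (the version in the URL is an example; check the nvm README for the current installer)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
. "$HOME/.nvm/nvm.sh"
nvm install --lts
node --version && npm --version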

How to fix this gRPC installation problem?

I'm following these steps to install gRPC on my freshly launched AWS EC2 instance:
https://jitpaul.blog/2018/04/18/grpc-on-aws/
When I try to execute this line:
sudo yum install libgflags-dev libgtest-dev
I get this error:
I don't want to mess up anything, please help.
Try instead:
sudo yum install gflags-dev
sudo yum install gtest-dev
That should install libgflags-dev and libgtest-dev.
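If yum still cannot find those names (development packages on yum-based distros usually carry a -devel suffix rather than -dev), a hedged way to track down the right package names is:
# search the enabled repos for the actual package names
yum search gflags gtest
# or ask yum which package ships a known header
yum provides '*/gflags/gflags.h'
# on CentOS / Amazon Linux with EPEL enabled, the names are typically:
sudo yum install gflags-devel gtest-devel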

Missing libpq header files on CentOS when attempting to install psycopg2 module

I have been searching the web for hours on end for several days and I am unable to install the psycopg2 library on my Linux machine (CentOS, 2.6.32-431.3.1.el6.x86_64 GNU/Linux).
I know the problem is that I am missing the libpq header files, since I get this message after attempting pip install psycopg2: libpq-fe.h: No such file or directory
http://initd.org/psycopg/docs/install.html#install-from-source
Almost all the articles I found pointed me to use apt-get, but apt-get is not a standard tool on CentOS 6.3, so I've been trying yum install instead.
However, every time I try to use sudo yum install, the package is reported as not available. For example:
yum install postgresql-devel
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
drivesrvr | 2.2 kB 00:00
No package postgresql-devel available.
Error: Nothing to do
I've tried this for:
yum install postgresql-server
yum install python-devel
service postgresql initdb
service postgresql start
yum install python-psycopg2
Any ideas? Without the libpq header files I can't install the psycopg2 module that is necessary for my Python program. This is for Python 2.7.12 and PostgreSQL 9.3.13.
I had this exact issue on a Fedora 2016.09 box on Amazon. I was able to install postgresql-devel via yum, but that didn't do the trick; the version seemed to be out of date.
I solved it using:
sudo yum install /usr/include/libpq-fe.h
This installs an updated version of postgresql-devel which allows psycopg2 to compile correctly when installing through pip.
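A quick sketch for verifying the header before retrying the build (the wildcard query just asks yum which package ships the file; adjust the pip command to whichever Python you target):
# find out which package provides the missing header
yum provides '*/libpq-fe.h'
# after installing it, confirm the header and pg_config are visible
ls /usr/include/libpq-fe.h /usr/pgsql-*/include/libpq-fe.h 2>/dev/null
pg_config --version
# then retry the build
sudo pip install psycopg2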

Amazon S3 + Docker - "403 Forbidden: The difference between the request time and the current time is too large"

I am trying to run my Django application in a Docker container with static files served from Amazon S3. When I run RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput in my Dockerfile, I get a 403 Forbidden error from Amazon S3 with the following response XML:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>RequestTimeTooSkewed</Code>
<Message>The difference between the request time and the current time is too large.</Message>
<RequestTime>Sat, 27 Dec 2014 11:47:05 GMT</RequestTime>
<ServerTime>2014-12-28T08:45:09Z</ServerTime>
<MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
<RequestId>4189D5DAF2FA6649</RequestId>
<HostId>lBAhbNfeV4C7lHdjLwcTpVVH2snd/BW18hsZEQFkxqfgrmdD5pgAJJbAP6ULArRo</HostId>
</Error>
My docker container is running Ubuntu 14.04... if that makes any difference.
I am also running the application using uWSGI, without nginx or Apache or any other kind of reverse-proxy server.
I also get the error at run time, when the files are being served to the site.
Attempted Solution
Other Stack Overflow questions have reported a similar error using S3 (none specifically in conjunction with Docker); they say this error occurs when your system clock is out of sync and can be fixed by running
sudo service ntp stop
sudo ntpd -gq
sudo service ntp start
so I added the following to my Dockerfile, but it didn't fix the problem.
RUN apt-get install -y ntp
RUN ntpd -gq
RUN service ntp start
I also attempted to sync the time on my local machine before building the docker image, using sudo ntpd -gq, but that did not work either.
Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
# Update system clock so S3 does not get 403 Error
# NOT WORKING
#RUN apt-get install -y ntp
#RUN ntpd -gq
#RUN service ntp start
RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev
RUN apt-get install -y python-software-properties uwsgi-plugin-python3
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
# Run django commands
# python3.4 is at /usr/bin/python3.4, but which works too
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py syncdb --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py makemigrations --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py migrate --noinput
EXPOSE 8000
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
Noted in the comments but for others who come here:
If you are using boot2docker (i.e. if on Windows or Mac), the boot2docker VM has a known time issue when you sleep your machine (see here). Since the host for your Docker container is the boot2docker VM, that's where it syncs its time.
I've had success restarting the boot2docker VM. This may cause problems with losing some state, i.e. if you had some data volumes.
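A rough sketch of checking and fixing the VM clock in that situation (boot2docker restart is the blunt fix mentioned above; the ntpclient line assumes that tool is present in the boot2docker image):
# compare the VM clock against the host clock
boot2docker ssh date
date -u
# blunt fix: restart the VM so it resyncs on boot
boot2docker restart
# or resync in place (assumes ntpclient is available inside the VM)
boot2docker ssh "sudo ntpclient -s -h pool.ntp.org"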
Docker containers share the clock with the host machine, so syncing your host machine's clock should solve the problem. To force the container's timezone to match your host machine's, you can add -v /etc/localtime:/etc/localtime:ro to docker run.
In any case, you should not start a service in a Dockerfile. A Dockerfile contains the steps and commands to build the image for your containers, and any process you run in it ends when the build finishes. To start a service you should add a run script or a process-control daemon (such as supervisord), which runs each time you start a new container.
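For example, a minimal sketch of the docker run invocation described above (the image name and port are placeholders for whatever you build from the Dockerfile in the question):
# sync the host clock first; the container reads the same kernel clock
sudo ntpd -gq
# share the host's timezone file with the container, read-only
docker run -d -p 8000:8000 -v /etc/localtime:/etc/localtime:ro vitru-image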
Restarting Docker for Mac fixes the error on my machine.