I'm trying to run Datomic Pro with the following command:
./bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d demo,"datomic:sql://jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic"
But every time I run that command it throws:
Exception in thread "main" java.sql.SQLException: No suitable driver
Any thoughts?
PS: I've already added the MySQL connector JAR to ./lib.
Gabriel,
You need to provide a database name to the peer-server command. You'll want to start a Datomic peer against your running transactor and create the database first. For this example I created the "test" db.
(require '[datomic.api :as d])
(def uri "datomic:sql://test?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic")
(d/create-database uri)
That create-database call should return true. Once the database exists, your peer-server command will look like:
./bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d demo,"datomic:sql://test?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic"
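Once the peer-server is up, you can sanity-check it from a REPL with the client library — a minimal sketch, assuming com.datomic/client-pro is on the classpath and using the "demo" alias from the -d flag:
(require '[datomic.client.api :as client])
;; access key, secret, and endpoint must match the -a and -p flags passed to peer-server;
;; :validate-hostnames false is assumed here because this is a localhost setup
(def c (client/client {:server-type :peer-server
                       :access-key "myaccesskey"
                       :secret "mysecret"
                       :endpoint "localhost:8998"
                       :validate-hostnames false}))
(client/connect c {:db-name "demo"})
If the connect call returns a connection instead of throwing, the peer-server is serving the database.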
Cheers,
Jaret
I have a backup of a PostgreSQL database (taken from Heroku) and I want to restore it to my local database, but it's not working. I tried from CMD and pgAdmin, but no luck!
What am I doing wrong?
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U nafuser -d naf latest.dump
pgAdmin issue (screenshot omitted).
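(For context: with -d, pg_restore restores into an existing database, so a typical local sequence recreates the target first. A sketch, assuming the nafuser role already exists locally:)
dropdb -h localhost -U nafuser naf
createdb -h localhost -U nafuser naf
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U nafuser -d naf latest.dump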
Given this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
&& dnf install -y \
openssh-clients \
openvpn \
slirp4netns \
&& dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Building the image with:
podman build -t peque/vpn .
If I try to run it with (note $(pwd), where the VPN configuration and credentials are stored):
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
I get the following error:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Any ideas on how I could fix this? I would not mind changing the base image if that could help (e.g. to Alpine or anything else, as long as it allows me to use openvpn for the connection).
System information
Using Podman 1.4.4 (rootless) and Fedora 30 distribution with kernel 5.1.19.
/dev/net/tun permissions
Running the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then, from the container, I can:
# ls -l /dev/ | grep net
drwxr-xr-x. 2 root root 60 Jul 23 07:31 net
I can also list /dev/net, but I get a "permission denied" error for the tun device:
# ls -l /dev/net
ls: cannot access '/dev/net/tun': Permission denied
total 0
-????????? ? ? ? ? ? tun
Trying --privileged
If I try with --privileged:
podman run -v $(pwd):/vpn:Z --privileged --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then instead of the permission-denied error (errno=13), I get a no-such-file-or-directory error (errno=2):
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
I can indeed verify there is no /dev/net/ directory when using --privileged, even if I pass the --cap-add=NET_ADMIN --device=/dev/net/tun parameters.
Verbose log
This is the log I get when configuring the client with verb 3:
OpenVPN 2.4.7 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
library versions: OpenSSL 1.1.1c FIPS 28 May 2019, LZO 2.08
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:1194
Socket Buffers: R=[212992->212992] S=[212992->212992]
UDP link local (bound): [AF_INET][undef]:0
UDP link remote: [AF_INET]xx.xx.xx.xx:1194
TLS: Initial packet from [AF_INET]xx.xx.xx.xx:1194, sid=3ebc16fc 8cb6d6b1
WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
VERIFY OK: depth=1, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email#domain.com, CN=internal-ca
VERIFY KU OK
Validating certificate extended key usage
++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
VERIFY EKU OK
VERIFY OK: depth=0, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email#domain.com, CN=ovpn.server.address
Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
[ovpn.server.address] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:1194
SENT CONTROL [ovpn.server.address]: 'PUSH_REQUEST' (status=1)
PUSH: Received control message: 'PUSH_REPLY,route xx.xx.xx.xx 255.255.255.0,route xx.xx.xx.0 255.255.255.0,dhcp-option DOMAIN server.net,dhcp-option DNS xx.xx.xx.254,dhcp-option DNS xx.xx.xx.1,dhcp-option DNS xx.xx.xx.1,route-gateway xx.xx.xx.1,topology subnet,ping 10,ping-restart 60,ifconfig xx.xx.xx.24 255.255.255.0,peer-id 1'
OPTIONS IMPORT: timers and/or timeouts modified
OPTIONS IMPORT: --ifconfig/up options modified
OPTIONS IMPORT: route options modified
OPTIONS IMPORT: route-related options modified
OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
OPTIONS IMPORT: peer-id set
OPTIONS IMPORT: adjusting link_mtu to 1624
Outgoing Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Outgoing Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Incoming Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
ROUTE_GATEWAY xx.xx.xx.xx/255.255.255.0 IFACE=tap0 HWADDR=0a:38:ba:e6:4b:5f
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Exiting due to fatal error
The error number changes depending on whether or not I run the command with --privileged.
It turns out that you are blocked by SELinux: after running the client container and trying to access /dev/net/tun inside it, you will get the following AVC denial in the audit log:
type=AVC msg=audit(1563869264.270:833): avc: denied { getattr } for pid=11429 comm="ls" path="/dev/net/tun" dev="devtmpfs" ino=15236 scontext=system_u:system_r:container_t:s0:c502,c803 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=0
To let your container configure the tunnel while staying not fully privileged and keeping SELinux enforcing, you need to customize the SELinux policy a bit. However, I did not find an easy way to do this properly.
Luckily, there is a tool called udica, which can generate SELinux policies from container configurations. It does not provide the desired policy on its own and requires some manual intervention, so I will describe how I got the openvpn container working step-by-step.
First, install the required tools:
$ sudo dnf install policycoreutils-python-utils policycoreutils udica
Create the container with the required privileges, then generate the policy for it:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --name ovpn peque/vpn
$ podman inspect ovpn | sudo udica -j - ovpn_container
Policy ovpn_container created!
Please load these modules using:
# semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
Restart the container with: "--security-opt label=type:ovpn_container.process" parameter
Here is the policy which was generated by udica:
$ cat ovpn_container.cil
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
)
Let's try this policy (note the --security-opt option, which tells podman to run the container in the newly created domain):
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Ugh. Here is the problem: the policy generated by udica still does not know about the specific requirements of our container, as they are not reflected in its configuration (well, it is probably possible to infer that you want to allow operations on tun_tap_device_t from the fact that you requested --device /dev/net/tun, but...). So, we need to customize the policy by extending it with a few more statements.
Let's disable SELinux temporarily and run the container to collect the expected denials:
$ sudo setenforce 0
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
These are:
$ sudo grep denied /var/log/audit/audit.log
type=AVC msg=audit(1563889218.937:839): avc: denied { read write } for pid=3272 comm="openvpn" name="tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:840): avc: denied { open } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:841): avc: denied { ioctl } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 ioctlcmd=0x54ca scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.947:842): avc: denied { nlmsg_write } for pid=3273 comm="ip" scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tclass=netlink_route_socket permissive=1
Or, in more human-readable form:
$ sudo grep denied /var/log/audit/audit.log | audit2allow
#============= ovpn_container.process ==============
allow ovpn_container.process self:netlink_route_socket nlmsg_write;
allow ovpn_container.process tun_tap_device_t:chr_file { ioctl open read write };
OK, let's modify the udica-generated policy by adding the advised allows to it (note that here I manually translated the syntax to CIL):
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
; This is our new stuff.
(allow process tun_tap_device_t ( chr_file ( ioctl open read write )))
(allow process self ( netlink_route_socket ( nlmsg_write )))
)
Now we turn SELinux back on, reload the module, and check that the container works correctly when we specify our custom domain:
$ sudo setenforce 1
$ sudo semodule -r ovpn_container
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
Initialization Sequence Completed
Finally, check that other containers still do not have these privileges:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Yay! We keep SELinux on and allow the tunnel configuration only to our specific container.
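As an extra sanity check (a sketch, not from the original session), the host's process listing should show openvpn running in the custom domain:
$ ps -eZ | grep openvpn
system_u:system_r:ovpn_container.process:s0:c138,c149 ... openvpn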
I'm having some issues with PostgreSQL 9.3 running on Ubuntu 14.04.5.
I had a Django web application running under NGINX/Gunicorn for a few weeks; then one day I went to use it and got an OperationalError:
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
After researching, it became apparent that the postgresql server wasn't running. I then attempted to start it, and got the following error:
* No PostgreSQL clusters exist; see "man pg_createcluster"
Alarming, but my data is backed up just fine, so I did:
pg_createcluster 9.3 main --start
and then I got:
Error: cluster configuration already exists
No problem. I'll drop it and create a new one, I thought:
pg_dropcluster 9.3 main --stop
That ran fine, so I ran:
pg_createcluster 9.3 main --start
again. This time it apparently created the cluster, but it would not start:
Creating new cluster 9.3/main ...
config /etc/postgresql/9.3/main
data /var/lib/postgresql/9.3/main
locale en_US.UTF-8
port 5432
Error: cluster_port_ready: could not find psql binary
Does anyone have any advice on how to address this? I have done apt-get remove and install for postgresql, with the same results.
Thanks in advance!
UPDATE:
So, I tried the suggestion below to run the following:
/usr/lib/postgresql/9.3/bin/initdb -D /var/lib/postgresql/data -W -A md5
which returned:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
creating configuration files ... ok
creating template1 database in /var/lib/postgresql/data/base/1 ... ok
initializing pg_authid ... ok
Enter new superuser password:
Enter it again:
setting password ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
/usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/data
or
/usr/lib/postgresql/9.3/bin/pg_ctl -D /var/lib/postgresql/data -l logfile start
So I ran:
user$ /usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/data
and got:
LOG: could not bind IPv6 socket: Address already in use
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
LOG: could not bind IPv4 socket: Address already in use
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
WARNING: could not create listen socket for "localhost"
FATAL: could not create any TCP/IP sockets
I was unable to recover the Postgres instance I had been running, so I followed the Stack Overflow article "How to purge and thoroughly uninstall postgres". As for why this happened, I am still at a loss. I read that my issue could have been caused by my locale environment variables (LANGUAGE and LC_ALL), or that it happened after an update, because I didn't specify which version of Postgres I wanted to install when I ran apt-get initially. Regardless, here are the commands I ran that got me back in business:
apt-get --purge remove postgresql\*
After this command, I got some errors complaining about an invalid data directory again, so on the advice of a post that's not marked as the answer from the link above, I also ran:
apt-get autoremove postgresql*
That seemed to go well, so I returned to the instructions as laid out by the accepted answer:
rm -r /etc/postgresql/
rm -r /etc/postgresql-common/
rm -r /var/lib/postgresql/
userdel -r postgres
groupdel postgres
That appeared to complete without errors, except that the postgres user had already been removed by the previous commands.
I then ensured that my locale environment variables were set, based on a related article titled "Solving 'no PostgreSQL clusters found' error".
I ran:
# dpkg-reconfigure locales
Once that was all finished, I did a fresh install, this time specifying the version number:
apt-get install postgresql-9.3 postgresql-contrib-9.3 postgresql-doc-9.3
After the install completed, Postgres started automatically, and it seems to be fine now.
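To confirm the server is really answering after the reinstall, a quick check like this works (a sketch; psql ships with the PostgreSQL client packages):
$ sudo -u postgres psql -c 'SELECT version();'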
Create the PostgreSQL database cluster:
$ sudo su
# mkdir /var/lib/postgresql/data
# chown -R postgres:postgres /var/lib/postgresql/data
# su postgres
$ /usr/lib/postgresql/9.3/bin/initdb -D /var/lib/postgresql/data -W -A md5
If you see this error message:
LOG: could not bind IPv4 socket: Address already in use
it means PostgreSQL is already running and you need to restart it.
You can restart the postgresql service on Ubuntu 14.04 with this command:
$ sudo sh /etc/init.d/postgresql restart
On newer Ubuntu releases with systemd, restart PostgreSQL with this command:
$ sudo systemctl restart postgresql
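As an extra check on Debian/Ubuntu, pg_lsclusters (from postgresql-common) shows the cluster status; illustrative output for a healthy 9.3 cluster:
$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory               Log file
9.3 main    5432 online postgres /var/lib/postgresql/9.3/main /var/log/postgresql/postgresql-9.3-main.log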
When I try to run a local ESP I get this error:
ERROR:Fetching service config failed(status code 403, reason Forbidden, url ***)
I have a newly created service account; this account works fine with the gcloud CLI.
System: macOS Sierra with Docker for Mac
This is the command that I use to start the container:
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s 2017-02-07r5 -v echo.endpoints.****.cloud.goog -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
UPDATE:
I found the error: I had passed the version as the service name and the service name as the version.
Now I get no error, but it still does not work; this is the console output from the container. As far as I can tell everything is fine, but I can't call the proxy at localhost:8082/***.
INFO:Constructing an access token with scope https://www.googleapis.com/auth/service.management.readonly
INFO:Service account email: aplha-api#****.iam.gserviceaccount.com
INFO:Refreshing access_token
INFO:Fetching the service configuration from the service management service
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
This is the corrected command:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
Aron, I assume:
(1) you are following this user guide: https://cloud.google.com/endpoints/docs/running-esp-localdev
(2) you do have a backend running on localhost:9000
Have you issued a curl request to localhost:8082/*** as suggested in that user guide? Does the curl command get stuck, or does it return an error message?
If you don't have a local backend running yet, I would recommend following the user guide above to run one. Note that the guide instructs you to run it on port 8080, so you'll need to change your docker run command from "-a localhost:9000" to "-a localhost:8080" as well.
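For reference, that is the command from the update above with only the -a value changed:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:8080 -k /esp/serviceaccount.json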
Also, please note this user guide is for a Linux environment; we haven't tried this setup on a Mac yet. We do know of one user who got it working on Windows Docker with extra work, by setting the backend to the IP of the Docker NIC. Note that "-a" is short for "--backend".
see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/google-cloud-endpoints/4sRaSkigPiU/KY8g46NSBgAJ
When running the following command:
cmd /c C:\sonar-runner-2.4\bin\sonar-runner.bat
(sonar runner is installed on the build machine)
I get the following errors:
ERROR: Sonar server 'http://localhost:9000' can not be reached
ERROR: Error during Sonar runner execution
ERROR: java.net.ConnectException: Connection refused: connect
ERROR: Caused by: Connection refused: connect
What can cause these errors?
Hi dinesh,
this is my sonar-runner.properties file:
sonar.projectKey=NDM
sonar.projectName=NDM
sonar.projectVersion=1.0
sonar.visualstudio.solution=NDM.sln
#sonar.sourceEncoding=UTF-8
sonar.web.host:sonarqube
sonar.web.port=9000
# Enable the Visual Studio bootstrapper
sonar.visualstudio.enable=true
# Unit Test Results
sonar.cs.vstest.reportsPaths=TestResults/*.trx
# Required only when using SonarQube < 4.2
sonar.language=cs
sonar.sources=.
As you can see, I set sonar.web.host:sonarqube and sonar.web.port=9000, but when I run sonar-runner.bat I still get "ERROR: Sonar server 'http://localhost:9000' can not be reached". Why is it still looking for localhost:9000 and not sonarqube:9000 as I set?
I saw the following line in the sonar-runner.bat log:
INFO: Work directory: D:\sTFS\26091\Sources\NDM\Source\.sonar
while my solution is in D:\sTFS\26091\Sources\NDM\Source\
Could this be the problem?
Thanks,
Guy
If you use the SonarScanner CLI with Docker, you may get this error because the SonarScanner container cannot reach the SonarQube UI container.
Note that you will have the same error with a simple curl from another container:
docker run --rm byrnedo/alpine-curl 127.0.0.1:9000
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
The solution is to connect the SonarScanner container to the same Docker network as your SonarQube instance, for instance with --network=host:
docker run --network=host -e SONAR_HOST_URL='http://127.0.0.1:9000' --user="$(id -u):$(id -g)" -v "$PWD:/usr/src" sonarsource/sonar-scanner-cli
(the other parameters of this command come from the SonarScanner CLI documentation)
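To verify, the same curl test as above should now succeed when run with the shared network namespace:
docker run --rm --network=host byrnedo/alpine-curl 127.0.0.1:9000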
I got the same issue; I changed localhost to the IP address and it worked well.
Go to System Preferences --> Network --> Advanced --> TCP/IP tab --> copy the IPv4 address.
Use that IP instead of localhost.
Hope this helps.
You should configure the sonar-runner to use your existing SonarQube server. To do so, you need to update its conf/sonar-runner.properties file and specify the SonarQube server URL, username, password, and JDBC URL as well. See https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner for details.
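For example, the relevant entries in conf/sonar-runner.properties would look something like this (all values are placeholders for your own setup):
sonar.host.url=http://your-sonarqube-host:9000
sonar.login=your-login
sonar.password=your-password
sonar.jdbc.url=jdbc:postgresql://your-db-host/sonar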
If you don't yet have an up and running SonarQube server, then you can launch one locally (with the default configuration) - it will bind to http://localhost:9000 and work with the default sonar-runner configuration. See https://docs.sonarqube.org/latest/setup/get-started-2-minutes/ for details on how to get started with the SonarQube server.
For others who run into this issue in a project that is not using a sonar-runner.properties file, you may find (as I did) that you need to tweak your pom.xml file, adding a sonar.host.url property.
For example, I needed to add the following line under the 'properties' element:
<sonar.host.url>https://sonar.my-internal-company-domain.net</sonar.host.url>
where the URL points to our internal Sonar deployment.
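In context, it sits inside the POM's properties element:
<properties>
  ...
  <sonar.host.url>https://sonar.my-internal-company-domain.net</sonar.host.url>
</properties>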
For me the issue was that the Maven Sonar plugin was using the proxy servers defined in the Maven settings.xml. I was trying to reach SonarQube on another host (not a localhost alias), so the plugin tried to go through the proxy server. I just added my alias to nonProxyHosts in settings.xml and it is working now. I did not face this issue with maven sonar plugin 3.2, only after I upgraded it.
<proxy>
  <id>proxy_id</id>
  <active>true</active>
  <protocol>http</protocol>
  <host>your-proxy-host</host>
  <port>your-proxy-port</port>
  <nonProxyHosts>localhost|127.0.*|other-non-proxy-hosts</nonProxyHosts>
</proxy>
The issue occurred for me in a slightly different way a while ago.
I had a Docker container running normally on the main network of my host machine, accessible in the browser at the usual localhost:9000. But whenever the scanner tried to connect to the server, it couldn't, despite supposedly being on the same network as the host.
I made sure they were, because in the docker run command I had specified --network=bridge.
So the trick was to point to my actual local IP instead of just writing localhost.
You can find your machine's IP by typing ipconfig on Windows or ifconfig on Linux.
So in the scanner's docker run command I pointed to the server with -Dsonar.host.url=http://192.168.1.2:9000, where 192.168.1.2 is my local address.
These were my final docker commands. To run the server:
docker run -d --name sonarqube \
--network=bridge \
-p 9000:9000 \
-e SONAR_JDBC_USERNAME=<db username> \
-e SONAR_JDBC_PASSWORD=<db password>\
-v sonarqube_data:/opt/sonarqube/data \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
sonarqube:community
and to run the scanner:
docker run \
--network=bridge \
-v "<local path of the project to scan>:/usr/src" sonarsource/sonar-scanner-cli \
-Dsonar.projectKey=<project key> \
-Dsonar.sources=. \
-Dsonar.host.url=http://<local-ip>:9000 \
-Dsonar.login=<token>
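Before running the scanner, a quick way to confirm the server is reachable from your machine is its status endpoint (a sketch; replace the IP with whatever ipconfig/ifconfig reported):
curl http://192.168.1.2:9000/api/system/status
When the server is up, this returns a small JSON document containing "status":"UP".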
In the config file there is a colon instead of an equals sign after sonar.web.host.
Is:
sonar.web.host:sonarqube
Should be
sonar.web.host=sonarqube
In the sonar.properties file in the conf folder I had hardcoded the IP of the machine where SonarQube was installed (sonar.web.host=10.9.235.22). I commented this out and it started working for me.
Please check whether Postgres (or whichever database service backs your SonarQube) is running properly.
If the firewall on your operating system is blocking port 9000, you will see "ERROR: Sonar server 'http://localhost:9000' can not be reached". Allow the port through the firewall; on Ubuntu, run sudo ufw allow 9000/tcp in a terminal, and the error will be gone the next time you click Build Now in Jenkins.