I'm trying to run Datomic Pro using a local PostgreSQL database, a transactor, and a peer.
I'm able to start both the database and the transactor without any problem:
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: starting PostgreSQL 12beta3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-storage | 2019-09-01 21:26:34.835 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-storage | 2019-09-01 21:26:34.849 UTC [18] LOG: database system was shut down at 2019-09-01 21:25:15 UTC
db-storage | 2019-09-01 21:26:34.852 UTC [1] LOG: database system is ready to accept connections
db-transactor | Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
db-transactor | Starting datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver ...
db-transactor | System started datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver
(They're all running in containers with network_mode=host.)
I think these warnings may come from the fact that I'm using datomic as both the user name and the database name, but I'm not sure.
But then, when I try to start a peer server, I'm faced with the following error:
$ ./bin/run -m datomic.peer-server -h localhost -p 8998 -a datomic-peer-user,datomic-peer-password -d datomic,datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic\&password=datomic-password
Exception in thread "main" java.lang.RuntimeException: Could not find datomic in catalog
at datomic.peer$get_connection$fn__18852.invoke(peer.clj:681)
at datomic.peer$get_connection.invokeStatic(peer.clj:669)
at datomic.peer$get_connection.invoke(peer.clj:666)
at datomic.peer$connect_uri.invokeStatic(peer.clj:763)
at datomic.peer$connect_uri.invoke(peer.clj:755)
(...)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
I've already tried changing a bunch of configurations with no success. Can someone help me?
I faced the same issue, but I found a solution after carefully exploring the docs.
The key is in this section of the documentation: https://docs.datomic.com/on-prem/overview/storage.html#connecting-to-transactor
After starting the transactor, and before you run a peer, go to the Datomic base directory and execute the following:
bin/shell
## you are now inside the Datomic shell
uri = "datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password";
Peer.createDatabase(uri);
## then terminate the Datomic shell
That's all. After that you can run the peer server exactly as you did above.
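If you would rather create the database from a Clojure REPL than from bin/shell, the peer API exposes the same operation. This is only a minimal sketch, assuming the Datomic peer library is on your classpath and reusing the storage URI from the question:

;; requires the Datomic peer library on the classpath
(require '[datomic.api :as d])

;; same storage URI the transactor and peer server use;
;; returns true if the database was created, false if it already existed
(d/create-database
  "datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password")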
When I run heroku local on my machine I get the following error:
07:44:21 web.1 | Watching for file changes with StatReloader
07:44:22 web.1 | Error: That port is already in use.
[DONE] Killing all processes with signal SIGINT
07:44:22 web.1 Exited with exit code null
When I run sudo lsof -i tcp:5000, this is what I see:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ControlCe 83303 x 19u IPv4 0x874167a5a53a48c7 0t0 TCP *:commplex-main (LISTEN)
ControlCe 83303 x 20u IPv6 0x874167a5922f00af 0t0 TCP *:commplex-main (LISTEN)
I've tried to kill the above processes using kill -9 but they don't seem to go away - I'm not sure if these are what are causing the issue either.
Any help appreciated.
It looks like port 5000 is used by "AirPlay Receiver" on macOS Monterey; you can disable AirPlay Receiver in System Preferences to free the port.
But if you don't want to disable that feature, you could also just use a different port. Django's default development port is 8000, so that might be a good choice.
Assuming you have something like this in your .env file:
PORT=5000
simply change it to
PORT=8000
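If you'd rather not touch your .env at all, the Heroku CLI also lets you override the port for a single run (check heroku local --help on your CLI version to confirm the flag):

# run the web process on port 8000 for this invocation only
heroku local web --port 8000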
After installing Neo4j on my AWS EC2 instance, the following seems to indicate that the server is up.
# bin/neo4j console
Active database: graph.db
Directories in use:
home: /usr/local/share/neo4j-community-3.3.1
config: /usr/local/share/neo4j-community-3.3.1/conf
logs: /usr/local/share/neo4j-community-3.3.1/logs
plugins: /usr/local/share/neo4j-community-3.3.1/plugins
import: /usr/local/share/neo4j-community-3.3.1/import
data: /usr/local/share/neo4j-community-3.3.1/data
certificates: /usr/local/share/neo4j-community-3.3.1/certificates
run: /usr/local/share/neo4j-community-3.3.1/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended.
See the Neo4j manual.
2017-12-01 16:03:04.380+0000 INFO ======== Neo4j 3.3.1 ========
2017-12-01 16:03:04.447+0000 INFO Starting...
2017-12-01 16:03:05.986+0000 INFO Bolt enabled on 127.0.0.1:7687.
2017-12-01 16:03:11.206+0000 INFO Started.
2017-12-01 16:03:12.860+0000 INFO Remote interface available at
http://localhost:7474/
At this point I am not able to connect. I have opened up ports 7474 and 7687, I can access port 80, I can SSH into the instance, etc.
Is this a Neo4j or an AWS problem?
Any help is appreciated.
Colin Goldberg
Set dbms.connectors.default_listen_address to 0.0.0.0, then only open the SSL port on 7473 using Amazon's EC2 security groups. Don't use 7474 if you don't have to.
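If you manage the security group from the AWS CLI, the rule could look something like this; the group ID and source CIDR below are placeholders for your own values:

# allow the HTTPS connector on 7473 from a single trusted address only
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 7473 \
    --cidr 203.0.113.10/32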
It looks like Neo4j is only listening on the localhost interface. If you run netstat -a | grep 7474 you want to see something like *:7474. If you see something like localhost:7474 then you won't be able to connect to the port from outside.
Take a look at Configuring Neo4j connectors. I believe you want dbms.connectors.default_listen_address set to 0.0.0.0.
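In Neo4j 3.3 that setting lives in conf/neo4j.conf; a minimal sketch of the change (restart Neo4j afterwards):

# conf/neo4j.conf
# listen on all interfaces instead of only localhost
dbms.connectors.default_listen_address=0.0.0.0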
And now a warning - you are opening your Neo4j to the entire planet if you do this. That may be ok but it seems unlikely that this is what you want to do. The defaults are there for a reason - you don't want the entire planet being able to try to hack into your database. Use caution if you enable this.
I am getting these errors from MailEnable; the OS is CentOS. The errors are from /var/log/maillog, as suggested by @OlegNeumyvakin.
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: handlers_stderr:$
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: SKIP during call$
Sep 8 03:33:12 localhost postfix/pickup[35664]: 66B7B21F2D4F: uid=0 from=$
Sep 8 03:33:12 localhost postfix/cleanup[38422]: 66B7B21F2D4F: message-id$
Sep 8 03:33:12 localhost postfix/qmgr[9634]: 66B7B21F2D4F: from=<root@loc$
Email can neither be sent nor received. I am trying to get this to work because the server hosts a site that needs to send and receive email.
You can check your virtual address mappings with this command:
postmap -q mail@example.tld hash:/var/spool/postfix/plesk/virtual
virtual.db is a Berkeley DB file; you can check its contents with the Berkeley DB dump utility:
# db5.1_dump -p /var/spool/postfix/plesk/virtual.db
VERSION=3
format=print
type=hash
h_nelem=4103
db_pagesize=4096
HEADER=END
drweb@example.tld\00
drweb@localhost.localdomain\00
kluser@example.tld\00
kluser@localhost.localdomain\00
mail1@example.tld\00
mail1@example.tld\00
postmaster@example.tld\00
postmaster@localhost.localdomain\00
root@dexample.tld\00
root@localhost.localdomain\00
anonymous@example.tld\00
anonymous@localhost.localdomain\00
mailer-daemon@example.tld\00
mailer-daemon@localhost.localdomain\00
DATA=END
You can install this utility with yum install libdb-utils.
Also, in case you have issues with sending mail, you can check the limitations on outgoing email messages at Tools & Settings > Mail Server Settings and, if you have enabled them, at Tools & Settings > Outgoing Mail Control.
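If you want to watch a delivery attempt end to end, a simple check is to tail the log while sending a test message from the server itself. This is just a sketch: it assumes the mailx package is installed, and the recipient address is a placeholder to replace with a real mailbox:

# in one terminal, follow the mail log
tail -f /var/log/maillog

# in another terminal, send a test message
echo "test body" | mail -s "test subject" postmaster@example.tld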
I'm installing a modern GoCD (16.7) on an Ubuntu machine with openjdk-8 (JRE and JDK). The agents (on localhost) fail to connect to the server:
[Sat Jul 30 05:58:47 UTC 2016] Starting Go Agent Bootstrapper with command:
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
-jar /usr/share/go-agent3/agent-bootstrapper.jar
-serverUrl https://127.0.0.1:8154/go/
...
java.lang.Exception: Couldn't access Go Server with base url:
https://127.0.0.1:8154/go/admin/agent-launcher.jar:
java.net.SocketException: Broken pipe
at com.thoughtworks.go.agent.launcher.ServerCall.invoke(ServerCall.java:78)
and
2016-07-30 06:00:48,790 [main ] ERROR go.agent.launcher.ServerBinaryDownloader:118
- Couldn't update admin/agent-launcher.jar. Sleeping for 1m.
Error: java.lang.Exception: Couldn't access Go Server with base url:
https://127.0.0.1:8154/go/admin/agent-launcher.jar:
java.net.SocketException: Broken pipe
(I manually wrapped those lines for readability)
The server is actually accessible. For instance:
$ curl --silent --insecure https://127.0.0.1:8154/go/ | head -2
<!-- *************************GO-LICENSE-START******************************
* Copyright 2014 ThoughtWorks, Inc.
Yes, I'm using --insecure, but GoCD ships with a self-signed cert; that's standard practice. Some of the advice I've seen says "oh, you are blocking your port", but this is localhost.
Are your GoCD server and agent using identical versions of Java? We have found they must be the same because the certificates have to match. See chatter.
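A quick way to compare is to run the exact binary each side uses and diff the output; the agent path below is taken from the bootstrapper log above, while the server-side path depends on how go-server is launched on your machine:

# on the agent
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -version

# on the server (use whichever JVM the go-server service is configured with)
java -version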
I am using Spark 0.9.0 in standalone mode.
When I try a streaming application in standalone mode, I get a connection refused exception.
I added the hostname to /etc/hosts and also tried with the IP alone. In both cases the worker registered with the master without any issues.
Is there a way to solve this issue?
14/02/28 07:15:01 INFO Master: akka.tcp://driverClient#127.0.0.1:55891 got disassociated, removing it.
14/02/28 07:15:04 INFO Master: Registering app Twitter Streaming
14/02/28 07:15:04 INFO Master: Registered app Twitter Streaming with ID app-20140228071504-0000
14/02/28 07:34:42 INFO Master: akka.tcp://spark#127.0.0.1:33688 got disassociated, removing it.
14/02/28 07:34:42 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%4010.165.35.96%3A38903-6#-1146558090] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
14/02/28 07:34:42 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster#10.165.35.96:8910] -> [akka.tcp://spark#127.0.0.1:33688]: Error [Association failed with [akka.tcp://spark#127.0.0.1:33688]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark#127.0.0.1:33688]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /127.0.0.1:33688
I had a similar issue when running Spark in cluster mode. My problem was that the master was started with the short hostname 'fluentd:7077' and not the FQDN. I edited
/sbin/start-master.sh
to reflect how my remote nodes connect, using the --ip flag:
/usr/lib/jvm/jdk1.7.0_51/bin/java -cp :/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/conf:/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip fluentd.alex.dev --port 7077 --webui-port 8080
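An alternative to editing the launch script, assuming the standard Spark 0.9 standalone scripts, is to set the master host in conf/spark-env.sh so every start script picks it up:

# conf/spark-env.sh (read by sbin/start-master.sh and sbin/start-slaves.sh)
SPARK_MASTER_IP=fluentd.alex.dev
SPARK_MASTER_PORT=7077

Drivers and workers should then connect with spark://fluentd.alex.dev:7077 rather than a 127.0.0.1 address.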
Hope this helps.