I get this error:
In AbstractMySQLDriver.php line 115:
An exception occured in driver: SQLSTATE[HY000] [2006] MySQL server has gone away
In PDOConnection.php line 47:
SQLSTATE[HY000] [2006] MySQL server has gone away
In PDOConnection.php line 43:
SQLSTATE[HY000] [2006] MySQL server has gone away
In PDOConnection.php line 43:
Warning: PDO::__construct(): MySQL server has gone away
doctrine:schema:update [--complete] [--dump-sql] [-f|--force] [--em [EM]] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-e|--env ENV] [--no-debug] [--] <command>
Check your connection parameters in parameters.yml:
database_port: port
database_name: table
database_user: root
database_password:******
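To verify those credentials quickly, you can run a trivial query through Doctrine's console (a minimal check, assuming the standard DoctrineBundle commands are installed):

php bin/console doctrine:query:sql "SELECT 1"

If this fails with the same error, the problem is the connection parameters or the server itself, not your schema.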
Related
I have a Django project in a Docker Compose container.
When I run docker-compose up, everything starts fine: all containers, including Redis, are up.
Redis logs:
2023-02-06 14:48:50 1:M 06 Feb 2023 11:48:50.519 * Ready to accept connections
But when I send a POST request to my API from Postman, I get this error:
File "c:\Users\django_env\Lib\site-packages\redis\connection.py", line 617, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 11001 connecting to redis:6379. getaddrinfo failed.
Does anyone know how to fix it? I'm working on a Windows machine. Thanks!
My docker-compose file:
https://pastebin.com/n2buBpxH
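Since the compose file is only linked, here is a hypothetical sketch of the relevant wiring; the service name redis and the REDIS_HOST variable are assumptions, not taken from your file:

version: "3.8"
services:
  web:
    build: .
    environment:
      REDIS_HOST: redis   # this hostname only resolves inside the compose network
    depends_on:
      - redis
  redis:
    image: redis:7
    ports:
      - "6379:6379"       # exposes Redis to the Windows host as localhost:6379

Note that a hostname like redis is only resolvable from containers on the same compose network, not from the Windows host itself.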
I'm trying to get Celery up and running on Heroku, as per the woeful instructions here.
Whenever I try to run heroku local, it gives me this error:
consumer: Cannot connect to redis://localhost:6379//: Error 61 connecting to 127.0.0.1:6379.
Connection refused..
Any help is much appreciated.
You have to start the Redis server:
$ redis-server
This worked for me.
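You can then confirm the server is reachable before retrying (a quick check, assuming redis-cli is installed alongside the server):

$ redis-cli ping
PONG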
I am trying to use Symfony 3.0 with a PostgreSQL database.
parameters.yml:
parameters:
    database_driver: pdo_pgsql
    database_host: 127.0.0.1
    database_port: 5432
    database_name: dmfa
    database_user: username
    database_password: password
    mailer_transport: smtp
    mailer_host: 127.0.0.1
    mailer_user: null
    mailer_password: null
    secret: a28a9e1bfefb5aa6f7f3be73a9a62c01eedf55ab
I run the following command to try and generate an entity:
php ./bin/console doctrine:generate:entity
Despite changing my driver to pdo_pgsql, I get the following errors:
[Doctrine\DBAL\Exception\DriverException]
An exception occured in driver: SQLSTATE[HY000] [2006] MySQL server has gone away
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[HY000] [2006] MySQL server has gone away
[PDOException]
SQLSTATE[HY000] [2006] MySQL server has gone away
[PDOException]
PDO::__construct(): MySQL server has gone away
It feels like I am missing a step because Symfony is still looking for a MySQL database. Please give me some insight on how to correct this error.
Double check that in your app/config/config.yml you have:
doctrine:
    dbal:
        driver: "%database_driver%"
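For reference, the full dbal block in the Symfony Standard Edition maps the remaining parameters as well; this is a sketch of the default layout, not necessarily your exact file:

doctrine:
    dbal:
        driver:   "%database_driver%"
        host:     "%database_host%"
        port:     "%database_port%"
        dbname:   "%database_name%"
        user:     "%database_user%"
        password: "%database_password%"

If driver is hardcoded to pdo_mysql anywhere in config.yml, Doctrine will keep trying to reach MySQL no matter what parameters.yml says.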
I am getting an exception when trying to connect to my local instance of Cassandra from Python. I can connect to Cassandra with no problems using cqlsh. The version I am running is Cassandra 3.0.1 on Ubuntu:
cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1 | CQL spec 3.3.1 | Native protocol v4]
The exception I obtain is below:
ERROR:cassandra.cluster:Control connection failed to connect, shutting down Cluster:
Traceback (most recent call last):
File "cassandra/cluster.py", line 840, in cassandra.cluster.Cluster.connect (cassandra/cluster.c:11146)
File "cassandra/cluster.py", line 2088, in cassandra.cluster.ControlConnection.connect (cassandra/cluster.c:36955)
File "cassandra/cluster.py", line 2123, in cassandra.cluster.ControlConnection._reconnect_internal (cassandra/cluster.c:37811)
NoHostAvailable: ('Unable to connect to any servers', {'127.0.0.1': InvalidRequest(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',), 'localhost': InvalidRequest(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',)
I have checked my cassandra.yaml file and it looks ok:
egrep 'rpc_port:|native_transport_port:' /etc/cassandra/cassandra.yaml
native_transport_port: 9042
rpc_port: 9160
Anything else I can look at? Suggestions are most welcome.
It looks like you are attempting to connect to a 3.0.1 server using an older install of cqlsh, or you are (somehow) using an older Python driver.
The error message you are getting:
(u'code=2200 [Invalid query] message="unconfigured table schema_keyspaces"',)
indicates that the client driver is attempting to get table metadata from the schema_keyspaces table which pre-dates 3.0. This information is now held in the system_schema.keyspaces table.
Use pip install --upgrade cassandra-driver to upgrade the cassandra-driver package.
You can run python -c 'import cassandra; print(cassandra.__version__)' to confirm the driver version.
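Once upgraded, a minimal connection test with the DataStax Python driver looks like this (a sketch, assuming the default localhost setup from the question):

from cassandra.cluster import Cluster

# connect on the native transport port from cassandra.yaml
cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()
for row in session.execute("SELECT release_version FROM system.local"):
    print(row.release_version)  # should print 3.0.1
cluster.shutdown()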
I've followed the instructions on the Spark website and I have 1 master and 1 slave running on Amazon EC2. However, I'm not able to connect to the master node using pyspark.
I can connect to the master node using SSH without any problem.
Here's my command:
spark-ec2 --key-pair=graph-cluster --identity-file=/Users/.ssh/pem.pem --region=us-east-1 --zone=us-east-1a launch graph-cluster
I can go to http://ec2-54-152-xx-xxx.compute-1.amazonaws.com:8080/
and see that Spark is up and running. I also see the Spark master at
spark://ec2-54-152-xx-xxx.compute-1.amazonaws.com:7077
However, when I run the command
MASTER=spark://ec2-54-152-xx-xx.compute-1.amazonaws.com:7077 pyspark
I get this error:
2015-09-16 15:39:31,800 ERROR actor.OneForOneStrategy (Slf4jLogger.scala:apply$mcV$sp(66)) -
java.lang.NullPointerException
at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$receiveWithLogging$1.applyOrElse(AppClient.scala:160)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.deploy.client.AppClient$ClientActor.aroundReceive(AppClient.scala:61)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2015-09-16 15:39:31,804 INFO client.AppClient$ClientActor (Logging.scala:logInfo(59)) - Connecting to master akka.tcp://sparkMaster@ec2-54-152-xx-xxx.compute-1.amazonaws.com:7077/user/Master...
2015-09-16 15:39:31,955 INFO util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52333.
2015-09-16 15:39:31,956 INFO netty.NettyBlockTransferService (Logging.scala:logInfo(59)) - Server created on 52333
2015-09-16 15:39:31,959 INFO storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Trying to register BlockManager
2015-09-16 15:39:31,964 INFO storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(59)) - Registering block manager xxx:52333 with 265.1 MB RAM, BlockManagerId(driver, xxx, 52333)
2015-09-16 15:39:31,969 INFO storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Registered BlockManager
2015-09-16 15:39:32,458 ERROR spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1503)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2007)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
2015-09-16 15:39:32,460 INFO spark.SparkContext (Logging.scala:logInfo(59)) - SparkContext already stopped.
Traceback (most recent call last):
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/shell.py", line 43, in <module>
sc = SparkContext(appName="PySparkShell", pyFiles=add_files)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 113, in __init__
conf, jsc, profiler_cls)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 165, in _do_init
self._jsc = jsc or self._initialize_context(self._conf._jconf)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 219, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 701, in __call__
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1503)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2007)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
spark-ec2 does not open port 7077 on the master node for incoming connections from outside the cluster.
You can check this in the AWS console under EC2 / Network & Security / Security Groups: look at the Inbound tab of the graph-cluster-master security group.
You can add a rule there to open inbound connections to port 7077.
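For example, from the AWS CLI (a sketch: spark-ec2 names the group after the cluster, and YOUR_IP is a placeholder for your public address):

aws ec2 authorize-security-group-ingress \
    --group-name graph-cluster-master \
    --protocol tcp --port 7077 \
    --cidr YOUR_IP/32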
However, it is recommended to run pyspark (essentially Spark's app driver) from the master machine in the EC2 cluster and to avoid running the driver outside the network.
The reason is increased latency and firewall configuration problems: you would need to open additional ports so the executors can connect back to the driver on your machine.
So the way to go is to log in to the cluster over SSH with this command:
spark-ec2 --key-pair=graph-cluster --identity-file=/Users/.ssh/pem.pem --region=us-east-1 --zone=us-east-1a login graph-cluster
And run the commands from the master server:
cd spark
bin/pyspark
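Once the shell comes up, you can sanity-check the cluster with a trivial job, for example:

>>> sc.parallelize(range(100)).sum()
4950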
You'll need to transfer the related files (your script and data) to the master, as shown below. I usually keep data on S3 and edit script files with vim, or start an IPython notebook.
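Transferring a script to the master can be done with plain scp (a sketch, reusing the key file and hostname from the question; my_script.py is a placeholder, and spark-ec2 AMIs typically log in as root):

scp -i /Users/.ssh/pem.pem my_script.py root@ec2-54-152-xx-xxx.compute-1.amazonaws.com:~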
BTW, running an IPython notebook on the cluster is very easy: add a rule for incoming connections from your computer's IP to port 18888 in the master's security group in the EC2 console, and then run this command on the cluster:
IPYTHON_OPTS="notebook --pylab inline --port=18888 --ip='*'" pyspark
Then you can access it at http://ec2-54-152-xx-xxx.compute-1.amazonaws.com:18888/