404 - The file you requested could not be found - icecast

My icecast2 server is running and I can see the admin interface. My config file is also set up, with a normal mount defined. My source client is Liquidsoap; this is my script:
#!/usr/bin/liquidsoap
# Log dir
set("log.file.path","/var/log/liquidsoap/basic-radio.log")
jazz = playlist("/var/www/html/stream/audio/mp3/jazz")
popular = playlist.safe("/var/www/html/stream/audio/mp3/popular-music")
radio = fallback(
[ switch(
[
({ 0h-12h }, jazz),
({ 12h01-23h59 }, popular),
]),
jazz])
#radio = random(weights=[1,5],[ jazz, radio ])
# Stream it out
output.icecast(%mp3,
host = "18.221.199.44", port = 8000,
mount = "ssp-radio",
radio)
My .liq file is inside /etc/liquidsoap/; the filename is radio.liq.
But when I try to load my stream in the browser at http://someserver.com:8000/ssp-radio, the error "404 - The file you requested could not be found!" is returned.
I also found these in my error log:
[2018-01-10 11:49:21] INFO fserve/fserve_client_create checking for file /icecast.png (/etc/icecast2/web/icecast.png)
[2018-01-10 11:49:21] WARN fserve/fserve_client_create req for file "/etc/icecast2/web/icecast.png" No such file or directory
[2018-01-10 11:49:23] INFO fserve/fserve_client_create checking for file /style.css (/etc/icecast2/web/style.css)
[2018-01-10 17:09:13] INFO fserve/fserve_client_create checking for file /style.css (/etc/icecast2/web/style.css)
[2018-01-10 17:22:26] INFO fserve/fserve_client_create checking for file /style.css (/etc/icecast2/web/style.css)
[2018-01-10 17:22:28] INFO fserve/fserve_client_create checking for file /style.css (/etc/icecast2/web/style.css)
[2018-01-10 18:16:04] INFO sighandler/_sig_die Caught signal 15, shutting down...
[2018-01-10 18:16:04] INFO main/main Shutting down
[2018-01-10 18:16:04] INFO fserve/fserve_shutdown file serving stopped
[2018-01-10 18:16:05] INFO slave/_slave_thread shutting down current relays
[2018-01-10 18:16:05] INFO slave/_slave_thread Slave thread shutdown complete
[2018-01-10 18:16:05] INFO auth/auth_shutdown Auth shutdown
[2018-01-10 18:16:05] INFO yp/yp_shutdown YP thread down
[2018-01-10 18:16:05] INFO stats/stats_shutdown stats thread finished
[2018-01-10 18:16:05] INFO auth/auth_run_thread Authenication thread shutting down
When I try to load this one: http://some-ip:8000/admin/listclients?mount=/ssp-radio
it says:
400 - Source does not exist
It keeps looping and I can't stop the server, so I have to exit the terminal.
What does this mean? No mountpoint is listed in my admin either. Please help. Thanks.
Update:
This is the output from liquidsoap:
2018/01/15 13:08:15 [popular-music:3] Successfully loaded a playlist of 23 tracks.
2018/01/15 13:08:15 [jazz:3] Prepared "/var/www/html/mediafiles/audio/jazz/1-14_Let_Me_Be_The_One.mp3" (RID 3).
2018/01/15 13:08:15 [tea-media:3] Connecting mount tea-media for source@my-server-ip-here...
2018/01/15 13:08:15 [tea-media:2] Connection failed: 403, Forbidden (HTTP/1.0)
2018/01/15 13:08:15 [tea-media:3] Will try again in 3.00 sec.
strange error flushing buffer ...
strange error flushing buffer ...
2018/01/15 13:08:15 [threads:3] Created thread "wallclock_main" (1 total).
2018/01/15 13:08:15 [clock.wallclock_main:3] Streaming loop starts, synchronized with wallclock.
2018/01/15 13:08:15 [fallback_4970:3] Switch to random_4968.
2018/01/15 13:08:15 [random_4968:3] Switch to jazz.
2018/01/15 13:08:19 [tea-media:3] Connecting mount tea-media for source@my-server-ip-here...
2018/01/15 13:08:19 [tea-media:2] Connection failed: 403, Forbidden (HTTP/1.0)
2018/01/15 13:08:19 [tea-media:3] Will try again in 3.00 sec.
strange error flushing buffer ...
strange error flushing buffer ...
2018/01/15 13:08:23 [tea-media:3] Connecting mount tea-media for source@my-server-ip-here...
2018/01/15 13:08:23 [tea-media:2] Connection failed: 403, Forbidden (HTTP/1.0)
2018/01/15 13:08:23 [tea-media:3] Will try again in 3.00 sec.
strange error flushing buffer ...
strange error flushing buffer ...
2018/01/15 13:08:27 [tea-media:3] Connecting mount tea-media for source@my-server-ip-here...
2018/01/15 13:08:27 [tea-media:2] Connection failed: 403, Forbidden (HTTP/1.0)
2018/01/15 13:08:27 [tea-media:3] Will try again in 3.00 sec.
strange error flushing buffer ...
...and so on

As you can see from the Liquidsoap log, it fails to connect:
2018/01/15 13:08:19 [tea-media:2] Connection failed: 403, Forbidden (HTTP/1.0)
So most likely you supplied the wrong username or password in Liquidsoap, and therefore Liquidsoap can't connect to the Icecast server.
Make sure you are using the correct source authentication username (usually source) and password (configured in your icecast.xml as the source password).
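For reference, a minimal sketch of the two settings that must agree (the password value here is a placeholder, not your real one). In icecast.xml:
<authentication>
    <source-password>hackme</source-password>
</authentication>
and in the Liquidsoap script, pass the same password to output.icecast:
output.icecast(%mp3,
host = "18.221.199.44", port = 8000,
password = "hackme", mount = "ssp-radio",
radio)
Note that your script above passes no password at all, so Liquidsoap falls back to its default ("hackme"); if your <source-password> is anything else, you get exactly this 403, and the mount never appears in the admin interface.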

Related

Tendermint: error when starting a tendermint node as per the sample

I tried to create an application following the guide in the Tendermint documentation.
After I started the application and the tendermint node, I got the error below. I am using go version go1.15.5 linux/amd64 and tendermint v0.34.0-rc4-148-g095e9cd.
[bc#localhost kvstore]$ TMHOME="/tmp/example" tendermint node --proxy_app=unix://example.sock
I[2020-12-01|13:16:59.697] Version info module=main software=v0.34.0-rc4-148-g095e9cd block=11 p2p=8
I[2020-12-01|13:16:59.702] Starting Node service module=main impl=Node
I[2020-12-01|13:16:59.703] Starting StateSync service module=statesync impl=StateSync
I[2020-12-01|13:16:59.738] Started node module=main nodeInfo="{ProtocolVersion:{P2P:8 Block:11 App:0} DefaultNodeID:3cf5ea6219c57fd906c042f767748988ba070db7 ListenAddr:tcp://0.0.0.0:26656 Network:test-chain-1mrgVg Version:v0.34.0-rc4-148-g095e9cd Channels:40202122233038606100 Moniker:localhost.localdomain Other:{TxIndex:on RPCAddress:tcp://127.0.0.1:26657}}"
E[2020-12-01|13:17:00.758] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=consensus
E[2020-12-01|13:17:00.758] consensus connection terminated. Did the application crash? Please restart tendermint module=proxy err="read message: EOF"
E[2020-12-01|13:17:00.758] Error in proxyAppConn.BeginBlock module=state err="read message: EOF"
E[2020-12-01|13:17:00.758] Error on ApplyBlock module=consensus err="read message: EOF"
I[2020-12-01|13:17:00.758] captured terminated, exiting... module=main
I[2020-12-01|13:17:00.758] Stopping Node service module=main impl=Node
I[2020-12-01|13:17:00.758] Stopping Node module=main
I[2020-12-01|13:17:00.760] Stopping StateSync service module=statesync impl=StateSync
I[2020-12-01|13:17:00.760] Closing rpc listener module=main listener="&{Listener:0xc00000d440 sem:0xc000039200 closeOnce:{done:0 m:{state:0 sema:0}} done:0xc000039260}"
E[2020-12-01|13:17:00.760] Error serving server module=main err="accept tcp 127.0.0.1:26657: use of closed network connection"
KVStore
[bc#localhost kvstore]$ ./example
badger 2020/12/01 13:16:55 INFO: All 0 tables opened in 0s
badger 2020/12/01 13:16:55 INFO: Replaying file id: 0 at offset: 0
badger 2020/12/01 13:16:55 INFO: Replay took: 6.807µs
badger 2020/12/01 13:16:55 DEBUG: Value log discard stats empty
I[2020-12-01|13:16:55.373] Starting ABCIServer service impl=ABCIServer
I[2020-12-01|13:16:55.405] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
I[2020-12-01|13:16:59.692] Accepted a new connection
I[2020-12-01|13:16:59.692] Waiting for new connection...
E[2020-12-01|13:17:00.758] Connection error err="error reading message: proto: wrong wireType = 2 for field Height"
E[2020-12-01|13:17:00.761] Connection was closed by client
E[2020-12-01|13:17:00.761] Connection was closed by client
E[2020-12-01|13:17:00.761] Connection was closed by client
go.mod
module github.com/me/example
go 1.15
require (
	github.com/dgraph-io/badger v1.6.2
	github.com/tendermint/tendermint v0.34.0-rc4
)
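One detail worth noting (an observation from the logs above, not a confirmed fix): the running binary reports v0.34.0-rc4-148-g095e9cd while go.mod pins the library to v0.34.0-rc4, so the node and the ABCI app may be speaking slightly different wire formats, which would be consistent with the "wrong wireType = 2 for field Height" proto error. A hedged way to align them is to pin the module to the same commit the binary was built from (the hash is taken from the version string in the log):
go get github.com/tendermint/tendermint@095e9cd
go mod tidy
Then rebuild the kvstore app and restart both processes.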

Can't connect to Spark cluster in EC2 using pyspark

I've followed the instructions on the Spark website and I have 1 master and 1 slave running on Amazon EC2. However, I'm not able to connect to the master node using pyspark.
I can connect to the master node using SSH without any problem.
Here's my command
spark-ec2 --key-pair=graph-cluster --identity-file=/Users/.ssh/pem.pem --region=us-east-1 --zone=us-east-1a launch graph-cluster
I can go to http://ec2-54-152-xx-xxx.compute-1.amazonaws.com:8080/ and see that Spark is up and running. I also see the Spark master at
spark://ec2-54-152-xx-xxx.compute-1.amazonaws.com:7077
However, when I run the command
MASTER=spark://ec2-54-152-xx-xx.compute-1.amazonaws.com:7077 pyspark
I get this error
2015-09-16 15:39:31,800 ERROR actor.OneForOneStrategy (Slf4jLogger.scala:apply$mcV$sp(66)) -
java.lang.NullPointerException
at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$receiveWithLogging$1.applyOrElse(AppClient.scala:160)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.deploy.client.AppClient$ClientActor.aroundReceive(AppClient.scala:61)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2015-09-16 15:39:31,804 INFO client.AppClient$ClientActor (Logging.scala:logInfo(59)) - Connecting to master akka.tcp://sparkMaster@ec2-54-152-xx-xxx.compute-1.amazonaws.com:7077/user/Master...
2015-09-16 15:39:31,955 INFO util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52333.
2015-09-16 15:39:31,956 INFO netty.NettyBlockTransferService (Logging.scala:logInfo(59)) - Server created on 52333
2015-09-16 15:39:31,959 INFO storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Trying to register BlockManager
2015-09-16 15:39:31,964 INFO storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(59)) - Registering block manager xxx:52333 with 265.1 MB RAM, BlockManagerId(driver, xxx, 52333)
2015-09-16 15:39:31,969 INFO storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Registered BlockManager
2015-09-16 15:39:32,458 ERROR spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1503)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2007)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
2015-09-16 15:39:32,460 INFO spark.SparkContext (Logging.scala:logInfo(59)) - SparkContext already stopped.
Traceback (most recent call last):
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/shell.py", line 43, in <module>
sc = SparkContext(appName="PySparkShell", pyFiles=add_files)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 113, in __init__
conf, jsc, profiler_cls)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 165, in _do_init
self._jsc = jsc or self._initialize_context(self._conf._jconf)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/pyspark/context.py", line 219, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 701, in __call__
File "/usr/local/Cellar/apache-spark/1.4.1/libexec/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:103)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1503)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2007)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:214)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
spark-ec2 does not open port 7077 on the master node for incoming connections from outside the cluster.
You can check this in the AWS console under EC2 / Network & Security / Security Groups, on the Inbound tab of the graph-cluster-master security group.
You can add a rule there to open inbound connections to port 7077, for example as sketched below.
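A minimal sketch with the AWS CLI (the group name comes from the launch command above; replace the example CIDR with your own public IP):
aws ec2 authorize-security-group-ingress \
  --region us-east-1 \
  --group-name graph-cluster-master \
  --protocol tcp --port 7077 \
  --cidr 203.0.113.10/32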
But it is suggested to run pyspark (essentially Spark's app driver) from the master machine in the EC2 cluster and to avoid running the driver outside the network.
The reason is increased latency and problems with firewall configuration: you would need to open additional ports so that executors can connect back to the driver on your machine.
So the way to go is to log in to the cluster over SSH with this command:
spark-ec2 --key-pair=graph-cluster --identity-file=/Users/.ssh/pem.pem --region=us-east-1 --zone=us-east-1a login graph-cluster
And run the commands from the master server:
cd spark
bin/pyspark
You'll need to transfer the related files (your script and data) to the master. I usually keep data on S3 and edit script files with vim, or start an IPython notebook.
BTW the latter is very easy: add a rule for incoming connections from your computer's IP to port 18888 to the master's security group in the EC2 console, then run this command on the cluster:
IPYTHON_OPTS="notebook --pylab inline --port=18888 --ip='*'" pyspark
Then you can access it at http://ec2-54-152-xx-xxx.compute-1.amazonaws.com:18888/

Spark 0.9.0 standalone connection refused

I am using Spark 0.9.0 in standalone mode.
When I tried a streaming application in standalone mode, I got a connection refused exception.
I added the hostname to /etc/hosts and also tried with the IP alone. In both cases the worker registered with the master without any issues.
Is there a way to solve this issue?
14/02/28 07:15:01 INFO Master: akka.tcp://driverClient@127.0.0.1:55891 got disassociated, removing it.
14/02/28 07:15:04 INFO Master: Registering app Twitter Streaming
14/02/28 07:15:04 INFO Master: Registered app Twitter Streaming with ID app-20140228071504-0000
14/02/28 07:34:42 INFO Master: akka.tcp://spark@127.0.0.1:33688 got disassociated, removing it.
14/02/28 07:34:42 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%4010.165.35.96%3A38903-6#-1146558090] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
14/02/28 07:34:42 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.165.35.96:8910] -> [akka.tcp://spark@127.0.0.1:33688]: Error [Association failed with [akka.tcp://spark@127.0.0.1:33688]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@127.0.0.1:33688]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /127.0.0.1:33688
I had a similar issue when running Spark in cluster mode. My problem was that the master was started with the short hostname 'fluentd:7077' rather than the FQDN. I edited
/sbin/start-master.sh
to reflect how my remote nodes connect, using the --ip flag:
/usr/lib/jvm/jdk1.7.0_51/bin/java -cp :/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/conf:/home/vagrant/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip fluentd.alex.dev --port 7077 --webui-port 8080
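As an alternative to editing the launch script itself, a sketch using Spark's standard configuration mechanism (assuming a stock Spark 0.9 layout): set the bind address in conf/spark-env.sh, which start-master.sh reads on startup:
export SPARK_MASTER_IP=fluentd.alex.dev
export SPARK_MASTER_PORT=7077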
Hope this helps.

Percona on EC2 timeout errors

I am setting up a new Percona node on AWS to connect to an existing cluster that is not on AWS. I have the security groups updated.
I added all the IPs to the my.cnf file and could not start Percona, so I removed them to start from scratch. Now I am getting this error:
140114 16:24:30 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():139
140114 16:24:30 [ERROR] WSREP: gcs/src/gcs_core.c:gcs_core_open():195: Failed to open backend connection: -110 (Connection timed out)
140114 16:24:30 [ERROR] WSREP: gcs/src/gcs.c:gcs_open():1289: Failed to open channel 'my_centos_cluster' at 'gcomm://10.10.25.10,10.20.4.11,10.10.20.12,10.20.4.13': -110 (Connection timed out)
140114 16:24:30 [ERROR] WSREP: gcs connect failed: Connection timed out
140114 16:24:30 [ERROR] WSREP: wsrep::connect() failed: 6
140114 16:24:30 [ERROR] Aborting
That is only a sample. Here is my.cnf:
[mysqld]
#datadir=/var/lib/mysql
datadir=/data/mysql
user=mysql
log-error=/data/mysql/mysqlerror.log
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://LOCAL_IP
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This is a recommended tuning variable for performance
innodb_locks_unsafe_for_binlog=1
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=LOCAL_IP
# SST method
wsrep_sst_method=xtrabackup
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="USER:PASS"
What am I doing wrong?
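The gcomm "Connection timed out" usually means the new node cannot reach the cluster's group-communication port at all. As an assumption worth verifying (these are the default Percona XtraDB Cluster / Galera ports, not taken from this config): TCP 3306 (MySQL), 4567 (Galera group communication, where the gcomm connect happens), 4568 (IST) and 4444 (SST, used by xtrabackup) must be open in both directions between all nodes. A quick reachability check from the new AWS node against one of the existing nodes listed in the error:
# test whether the Galera port on an existing node is reachable
nc -zv 10.10.25.10 4567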

`lein deploy clojars` gives Connection reset by peer: socket write error

I get the error "I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error" when I try to deploy a lib to Clojars. I have tried it several times over a span of 20 days.
C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\engine>lein deploy clojars
No credentials found for clojars
See `lein help deploy` for how to configure credentials.
Username: oskarkv
Password:
Wrote C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\engine\pom.xml
Created C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\engine\target\jmonkeyengine-3.0.1-SNAPSHOT.jar
Could not find metadata org.clojars.oskarkv:jmonkeyengine:3.0.1-SNAPSHOT/maven-metadata.xml in clojars (https://clojars.org/repo/)
Sending org/clojars/oskarkv/jmonkeyengine/3.0.1-SNAPSHOT/jmonkeyengine-3.0.1-20130817.134749-1.pom (2k) to https://clojars.org/repo/
Sending org/clojars/oskarkv/jmonkeyengine/3.0.1-SNAPSHOT/jmonkeyengine-3.0.1-20130817.134749-1.jar (79156k) to https://clojars.org/repo/
aug 17, 2013 3:48:24 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error
aug 17, 2013 3:48:24 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
aug 17, 2013 3:48:54 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error
aug 17, 2013 3:48:54 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
aug 17, 2013 3:49:24 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error
aug 17, 2013 3:49:24 EM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
Could not transfer artifact org.clojars.oskarkv:jmonkeyengine:jar:3.0.1-20130817.134749-1 from/to clojars (https://clojars.org/repo/): Connection reset by peer: socket write error
Failed to deploy artifacts: Could not transfer artifact org.clojars.oskarkv:jmonkeyengine:jar:3.0.1-20130817.134749-1 from/to clojars (https://clojars.org/repo/): Connection reset by peer: socket write error
Maybe size matters, because I tried to deploy a project I had just created with lein new testclojars, and that seemed to work.
C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\test\testclojars>lein deploy clojars
WARNING: please set :description in project.clj.
WARNING: please set :url in project.clj.
No credentials found for clojars
See `lein help deploy` for how to configure credentials.
Username: oskarkv
Password:
Wrote C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\test\testclojars\pom.xml
Created C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\test\testclojars\target\testclojars-0.1.0-SNAPSHOT.jar
Could not find metadata org.clojars.oskarkv:testclojars:0.1.0-SNAPSHOT/maven-metadata.xml in clojars (https://clojars.org/repo/)
Sending org/clojars/oskarkv/testclojars/0.1.0-SNAPSHOT/testclojars-0.1.0-20130817.133946-1.pom (2k) to https://clojars.org/repo/
Sending org/clojars/oskarkv/testclojars/0.1.0-SNAPSHOT/testclojars-0.1.0-20130817.133946-1.jar (2k) to https://clojars.org/repo/
Could not find metadata org.clojars.oskarkv:testclojars/maven-metadata.xml in clojars (https://clojars.org/repo/)
Sending org/clojars/oskarkv/testclojars/0.1.0-SNAPSHOT/maven-metadata.xml (1k) to https://clojars.org/repo/
Sending org/clojars/oskarkv/testclojars/maven-metadata.xml (1k) to https://clojars.org/repo/
C:\Users\oskarkv\Desktop\jmonkeyengine-read-only\test\testclojars>
Any ideas of what could be wrong?
Apparently there is a 20 MB limit on files uploaded to Clojars. That would explain why the ~79 MB jmonkeyengine jar (79156k in the log above) is rejected while the 2k test jar goes through.
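If trimming the artifact under that limit is an option, one hedged approach (a sketch; it assumes the bulk of the jar comes from bundled resources such as native libraries or assets, which is not confirmed here) is Leiningen's :jar-exclusions key in project.clj, which takes regexes of paths to leave out of the built jar:
;; in project.clj -- the patterns are illustrative, not from the original project
:jar-exclusions [#"\.dll$" #"\.so$" #"^assets/"]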