Apache Storm spout not emitting tuples - amazon-web-services
I have set up a test environment on an AWS cluster with three machines, following this guide.
I tested my code in local mode and on a local Vagrant cluster created with Wirbelsturm; both work and give the desired results.
When I now submit my code to the cluster, my spout and all of my bolts are silent. My spout reads from a CSV file, which I have copied to both the Nimbus and the supervisor machines. The Storm UI shows the topology as active and displays all bolts and my spout, but the counters are not visible. The supervisor has no used workers. The firewall is configured so that Nimbus and the supervisor accept ports 6700-6703 from each other. Does ZooKeeper talk on those ports?
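For what it's worth, here is a rough sketch of how one could check which of these ports are actually reachable from a given node; the hostname is just a placeholder (set TARGET_HOST to your own Nimbus/ZooKeeper node), and the port roles are the Storm defaults, not something verified against this setup:

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp (no nc needed).
check_port() {
  local host=$1 port=$2
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    exec 3>&-
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

# Set TARGET_HOST to the node you want to test; defaults to localhost here.
# Default port roles: 2181 = ZooKeeper client, 6627 = Nimbus Thrift,
# 6700-6703 = supervisor worker slots.
target=${TARGET_HOST:-127.0.0.1}
for p in 2181 6627 6700 6701 6702 6703; do
  check_port "$target" "$p"
done
```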
I can't seem to find my output logs on my machines either. I find the ui and nimbus logs in /usr/local/storm/logs on the Nimbus and slave machines, but other than that I do not get an error or even any logs for my spouts/bolts. The Vagrant machines show a worker-xxxx.log, but my AWS servers do not.
Is that because my code crashes with some error, or because I got a config wrong?
Update: I verified my topology against the storm-starter examples; those do not seem to work either. I used mvn package to build an uberjar.
Update 2:
I included the log from my supervisor; it doesn't show any errors, but maybe there's something in there...
2015-12-08 13:42:55.168 b.s.u.Utils [INFO] Using defaults.yaml from resources
2015-12-08 13:42:55.297 b.s.u.Utils [INFO] Using storm.yaml from resources
2015-12-08 13:42:57.434 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:host.name=ip-172-31-26-239.us-west-2.compute.internal
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.version=1.7.0_91
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.vendor=Oracle Corporation
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/jre
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.class.path=/usr/local/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/usr/local/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/usr/local/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/usr/local/apache-sto$
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.io.tmpdir=/tmp
2015-12-08 13:42:57.435 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.compiler=<NA>
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.name=Linux
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.arch=amd64
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.version=2.6.32-504.8.1.el6.x86_64
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.name=storm
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.home=/app/home/storm
2015-12-08 13:42:57.436 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.dir=/
2015-12-08 13:42:57.459 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2015-12-08 13:42:57.459 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:host.name=ip-172-31-26-239.us-west-2.compute.internal
2015-12-08 13:42:57.459 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.version=1.7.0_91
2015-12-08 13:42:57.459 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.vendor=Oracle Corporation
2015-12-08 13:42:57.459 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/jre
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.class.path=/usr/local/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/usr/local/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/usr/local/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/usr/local/ap$
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.io.tmpdir=/tmp
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:java.compiler=<NA>
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:os.name=Linux
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:os.arch=amd64
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:os.version=2.6.32-504.8.1.el6.x86_64
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:user.name=storm
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:user.home=/app/home/storm
2015-12-08 13:42:57.460 o.a.s.s.o.a.z.s.ZooKeeperServer [INFO] Server environment:user.dir=/
2015-12-08 13:42:57.774 b.s.u.Utils [INFO] Using defaults.yaml from resources
2015-12-08 13:42:57.803 b.s.u.Utils [INFO] Using storm.yaml from resources
2015-12-08 13:42:57.939 b.s.d.supervisor [INFO] Starting Supervisor with conf {"topology.builtin.metrics.bucket.size.secs" 60, "nimbus.childopts" "-Xmx1024m -Djava.net.preferIPv4Stack=true", "ui.filter.params" nil, "storm.cluster.mode" "distributed", "storm.messaging.net$
2015-12-08 13:42:57.963 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [1000] the maxSleepTimeMs [30000] the maxRetries [5]
2015-12-08 13:42:58.063 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2015-12-08 13:42:58.066 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zkserver1:2181 sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#502016b8
2015-12-08 13:42:58.081 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server zkServer1/xx.xx.xx.xx:2181. Will not attempt to authenticate using SASL (unknown error)
2015-12-08 13:42:58.089 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to zkServer1/xx.xx.xx.xx:2181, initiating session
2015-12-08 13:42:58.094 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server zkServer1/xx.xx.xx.xx:2181, sessionid = 0x15182c7ba25000d, negotiated timeout = 20000
2015-12-08 13:42:58.096 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2015-12-08 13:42:58.097 b.s.zookeeper [INFO] Zookeeper state update: :connected:none
2015-12-08 13:42:59.109 o.a.s.s.o.a.z.ClientCnxn [INFO] EventThread shut down
2015-12-08 13:42:59.110 o.a.s.s.o.a.z.ZooKeeper [INFO] Session: 0x15182c7ba25000d closed
2015-12-08 13:42:59.111 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [1000] the maxSleepTimeMs [30000] the maxRetries [5]
2015-12-08 13:42:59.116 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2015-12-08 13:42:59.116 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zkserver1:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#5edfa0aa
2015-12-08 13:42:59.121 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server zkServer1/xx.xx.xx.xx:2181. Will not attempt to authenticate using SASL (unknown error)
2015-12-08 13:42:59.122 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to zkServer1/xx.xx.xx.xx:2181, initiating session
2015-12-08 13:42:59.124 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server zkServer1/xx.xx.xx.xx:2181, sessionid = 0x15182c7ba25000e, negotiated timeout = 20000
2015-12-08 13:42:59.124 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2015-12-08 13:42:59.169 b.s.d.supervisor [INFO] Starting supervisor with id cc5e1723-cc06-4bc1-a1bf-192a1d7f5bf6 at host xxxxxxx.us-west-2.compute.internal
2015-12-08 13:43:06.059 b.s.d.supervisor [INFO] Downloading code for storm id production-topology-4-1449599549 from /app/storm/nimbus/stormdist/production-topology-4-1449599549
2015-12-08 13:43:06.075 b.s.u.StormBoundedExponentialBackoffRetry [INFO] The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
Any ideas?
Update 3:
So I did find this:
java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection timed out
at backtype.storm.security.auth.TBackoffConnect.retryNext(TBackoffConnect.java:59) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:51) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:103) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.ThriftClient.<init>(ThriftClient.java:72) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:74) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:37) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.Utils.downloadFromMaster(Utils.java:361) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.supervisor$fn__7720.invoke(supervisor.clj:581) ~[storm-core-0.10.0.jar:0.10.0]
at clojure.lang.MultiFn.invoke(MultiFn.java:241) ~[clojure-1.6.0.jar:?]
at backtype.storm.daemon.supervisor$mk_synchronize_supervisor$this__7638.invoke(supervisor.clj:465) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.event$event_manager$fn__7258.invoke(event.clj:40) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_91]
Caused by: org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection timed out
at org.apache.thrift7.transport.TSocket.open(TSocket.java:187) ~[storm-core-0.10.0.jar:0.10.0]
at org.apache.thrift7.transport.TFramedTransport.open(TFramedTransport.java:81) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:103) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:48) ~[storm-core-0.10.0.jar:0.10.0]
... 11 more
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.7.0_91]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) ~[?:1.7.0_91]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) ~[?:1.7.0_91]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) ~[?:1.7.0_91]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.7.0_91]
at java.net.Socket.connect(Socket.java:579) ~[?:1.7.0_91]
at org.apache.thrift7.transport.TSocket.open(TSocket.java:182) ~[storm-core-0.10.0.jar:0.10.0]
at org.apache.thrift7.transport.TFramedTransport.open(TFramedTransport.java:81) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:103) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:48) ~[storm-core-0.10.0.jar:0.10.0]
... 11 more
2015-12-08 14:26:41.028 b.s.util [ERROR] Halting process: ("Error when processing an event")
java.lang.RuntimeException: ("Error when processing an event")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
at backtype.storm.event$event_manager$fn__7258.invoke(event.clj:48) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_91]
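If I read this right, the ConnectException means the supervisor cannot open a Thrift connection to Nimbus (port 6627 by default), which is why downloading the topology code fails. For context, this is a sketch of the storm.yaml entries that have to agree across all nodes; the hostnames and path below are examples, not taken from my actual setup:

```yaml
# storm.yaml -- must be consistent on the Nimbus, supervisor and UI hosts.
storm.zookeeper.servers:
  - "zkserver1"            # same hostname on every node
nimbus.host: "nimbus1"     # Storm 0.10.x uses nimbus.host (nimbus.seeds came later)
storm.local.dir: "/app/storm"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
```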
I followed the same guide as you and ran into the same issue.
What solved the problem for me:
Edit the /etc/hosts files on all three of your machines (zookeeper, nimbus and slave1) in the same way:
First, remove the IPv6 line that starts with ::1; it is not supported by Apache Storm.
In the first line of the file, which contains the local aliases, place the public hostname of the local machine (the one known by the other nodes of the cluster) right after 127.0.0.1. I suppose this is the alias Storm will take into account.
Finally, as described in the guide, list all the other machines with the hostnames Storm knows them by.
In the end, my /etc/hosts (on the Nimbus machine) looks like this:
127.0.0.1 vm-matthias-02 localhost.localdomain localhost
192.168.200.48 vm-matthias-01
192.168.200.49 vm-matthias-02
192.168.200.50 vm-matthias-03
Be careful to use the same machine names when you edit the configuration files.
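A quick way to sanity-check the result on each node is to confirm that the machine's own hostname resolves locally; this is only a sketch that reads the local resolver (getent consults /etc/hosts first, then DNS):

```shell
#!/usr/bin/env bash
# On each node: check that the machine's own hostname resolves.
self=$(hostname)
entry=$(getent hosts "$self" | head -n1)
if [ -n "$entry" ]; then
  msg="OK: $self -> $entry"
else
  msg="MISSING: $self does not resolve; add it to /etc/hosts"
fi
echo "$msg"
```

Run it on every node; the reported address should match what the other nodes use to reach that machine.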