Wowza LoadBalancer

I want to implement the Wowza load balancer, so that a single Wowza server can receive all global requests and redirect them to the edge servers.
I read the PDF and went through all the examples they provide, but I still can't work out how to redirect traffic or which tags to use in Server.xml for the configuration.
Can anyone guide me through it? It would be very nice if someone could share a sample configuration from both ends.

There are two ways to go. The first is the Wowza origin-edge architecture: you publish the stream to the origin server, and the edge servers restream it from the origin with the stream repeater. In this case you need to know whether each Wowza edge is active or not. The other way, which I use and consider the proper one, needs no Wowza configuration at all: handle the load balancing with nginx or HAProxy using the upstream method, as sketched below. See the nginx RTMP load-balancing details.
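As a minimal sketch of that second approach, assuming nginx is built with the stream module and the edge host names are placeholders, TCP-level balancing of RTMP could look like this:

    stream {
        upstream wowza_edges {
            # send each new connection to the edge with the fewest active ones
            least_conn;
            server edge1.example.com:1935;
            server edge2.example.com:1935;
        }

        server {
            # RTMP is plain TCP on port 1935, so a TCP proxy is enough here
            listen 1935;
            proxy_pass wowza_edges;
        }
    }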

For future reference, as of mid-2020 there's a load balancing add-on.
It supports all protocols except WebRTC.
There's a step-by-step guide in the Readme.html inside the archive.


Can a remote server send a response to a local client on a custom port?

For the network gurus out there, I'd like to ask some questions about a unique setup where the server sends a request to a client on localhost on a certain port.
I have a cloudy understanding of some network fundamentals, so I hope you'll be able to help me out.
[Diagram: a static website hosted on AWS S3 sends a request to https://localhost:8001, which is expected to reach an nginx container inside a Docker VM on my local machine.]
Basically, there's a static website hosted on AWS S3, and at some point this website sends a request to https://localhost:8001.
I was expecting it to connect to the nginx container listening on port 8001 on my local machine, but the request fails with a 504 gateway error.
My questions are:
Is it possible for a remote server to directly send data to a client at a particular port by addressing it as localhost?
How is it possible for the static website to communicate with my local Docker container?
Thanks in advance.
In the setup you show, in the context of a Web site, localhost isn't in your picture at all. It's the desktop machine running the end user's Web browser.
More generally, you show several boxes in your diagram – "local machine", "Docker VM", "individual container", "server in Amazon's data center" – and within each of these boxes, if they make an outbound request to localhost, it reaches back to itself.
You have two basic options here:
(1) Set up a separate (Route 53) DNS name for your back-end service, and use that https://backend.example.com/... host name in your front-end application.
(2) Set up an HTTP reverse proxy that forwards /, /assets, ... to S3, and /api to the back-end service. In your front-end application use only the HTTP path with no host name at all.
The second option is more work to set up, but once you've set it up, it's much easier to develop code for. Webpack has a similar "proxy the backend" option for day-to-day development. This setup means the front-end application itself doesn't care where it's running, and you don't need to rebuild the application if the URL changes (or an individual developer needs to run it on their local system).
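For illustration, here is a minimal sketch of option (2) using nginx as the reverse proxy; the host names, port, and bucket are all placeholders:

    server {
        listen 80;
        server_name app.example.com;      # placeholder public host name

        # requests under /api/ are forwarded to the back-end service,
        # with the /api prefix kept as-is
        location /api/ {
            proxy_pass http://backend.internal:8001;
            proxy_set_header Host $host;
        }

        # everything else is fetched from the S3 static-website endpoint
        # (website endpoints are plain HTTP and need their own Host header)
        location / {
            proxy_pass http://my-bucket.s3-website-us-east-1.amazonaws.com;
            proxy_set_header Host my-bucket.s3-website-us-east-1.amazonaws.com;
        }
    }

With this in place, the front-end only ever requests relative paths like /api/..., so the same build runs unchanged on a developer laptop or in production.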

ActiveMQ Artemis http and https in bootstrap.xml

I hope you have an idea.
I am working with an ActiveMQ Artemis broker and installed a metrics plugin to use with Prometheus and Grafana (https://github.com/rh-messaging/artemis-prometheus-metrics-plugin/). As the instructions say, I added <app url="metrics" war="metrics.war"/> to the bootstrap.xml.
We're working with a vendor who provides the Grafana dashboards as long as we provide metrics they can work with. The problem is that the vendor wants to access the metrics page (https://activemq:port/metrics) via HTTP and not via the HTTPS configured in bootstrap.xml (<web bind="https://0.0.0.0:port" path="web" keyStorePath=...). Changing their system to work with HTTPS now would take disproportionate effort on their side.
Is it possible to configure the Jetty web server to serve the console etc. via HTTPS but the URL activemq:port/metrics via HTTP?
I tried to add another web container in bootstrap.xml, binding it with bind="http://0.0.0.0:port/" and adding the metrics plugin to it, but the web server wasn't happy with two web containers :/
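My attempt looked roughly like this (ports and keystore details are placeholders); the broker refused to start once the second <web> element was present:

    <!-- existing HTTPS web instance from bootstrap.xml -->
    <web bind="https://0.0.0.0:8443" path="web"
         keyStorePath="${artemis.instance}/etc/keystore.jks"
         keyStorePassword="changeit">
        <app url="console" war="console.war"/>
        <app url="metrics" war="metrics.war"/>
    </web>

    <!-- second, plain-HTTP web instance for the metrics plugin (this is what failed) -->
    <web bind="http://0.0.0.0:8161" path="web">
        <app url="metrics" war="metrics.war"/>
    </web>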
Thanks for your help :)
This is not currently possible. However, the project could be enhanced to support multiple web instances in bootstrap.xml. Contributions are always welcome.

http/2 on swisscom cloudfoundry?

I have a Nuxt.js/Node.js application hosted on the Swisscom cloud (Cloud Foundry). Unfortunately, all my files are loaded over HTTP/1.1 and not over HTTP/2.
Previously I had the application hosted on the Google cloud, and the content was delivered correctly over HTTP/2.
Now my question is: is HTTP/2 supported on Cloud Foundry at all? And if so, what do I have to do to get my content delivered over HTTP/2?
Not when using standard HTTP routes, which go through Gorouter. See this issue for more background and the future path to supporting this.
https://github.com/cloudfoundry/gorouter/issues/195
In the meantime, you can use TCP routes if you really need to use HTTP/2 on CF. This bypasses Gorouter and allows TCP traffic to go directly to your app. See these two links for more details on TCP routes.
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#http-vs-tcp-routes
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route
For what it's worth, you need to check with your CF provider/operators to determine if TCP routes are enabled. They are an optional feature. In addition, your org/space quota will need to allow you to create routes.
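As a rough sketch with the cf CLI (the app name, domain, and port are placeholders; the exact TCP domain depends on your deployment):

    # list available domains; a shared TCP domain often looks like tcp.example.com
    cf domains

    # map a TCP route on a specific port to your app
    cf map-route my-app tcp.example.com --port 61001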
Hope that helps!

What are the disadvantages of using AWS ELB directly with Gunicorn (no nginx)?

Typical setups I've found on Google to run a django application on AWS all suggest a setup like
ELB -> nginx -> gunicorn -> django
I was wondering why the nginx part is really needed here. Isn't ELB sufficient as a proxy?
In our case, we are running multiple Gunicorn/Django instances in individual Docker containers on ECS.
Without nginx it would work just fine, and you would still be safe from the majority of DDoS attacks that could bring down an exposed Gunicorn server.
I can only see nginx being a helpful addition to the stack if it serves your static files. However, it's much better to serve your static files from S3 (plus CloudFront as a bonus), since it has high availability and reliability baked in.
Sources:
http://docs.gunicorn.org/en/latest/deploy.html#nginx-configuration
https://stackoverflow.com/a/12801140
I had to search a lot to get a satisfying answer:
ELB does not save you from DDoS attacks; it is more of a general-purpose load balancer.
ELB sends the incoming request straight to the Gunicorn server. It does not receive the full request before forwarding it, i.e., if the headers or body arrive slowly because of a bad client connection or any other reason, the Gunicorn worker sits waiting for the request to complete before it can start processing it. nginx, by contrast, buffers the full request before handing it over. In general, it's bad practice to let the same server act as both web server and application server, as this hogs the resources of the application server (Gunicorn).
Nginx additionally helps by serving static files and applying gzip compression, making data transfer faster in both directions between client and server.
Additionally, even Gunicorn's own documentation recommends running nginx in front of it.
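To illustrate, here is a minimal nginx server block in front of Gunicorn, loosely following the Gunicorn deployment docs; the paths, port, and host name are assumptions:

    server {
        listen 80;
        server_name example.com;

        gzip on;                        # compress responses on the way out
        client_body_timeout 12s;        # don't let slow clients hold a connection forever

        # serve static files directly instead of going through Gunicorn
        location /static/ {
            alias /srv/app/static/;
        }

        # everything else is buffered by nginx, then handed to Gunicorn
        location / {
            proxy_pass http://127.0.0.1:8000;   # Gunicorn bind address
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }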

Webservice Endpoint - can someone externally scan all services available on a host?

Say we have hosted a few web services under https://mycompany.com/Service
e.g.
https://mycompany.com/Service/Service1
https://mycompany.com/Service/Service2
https://mycompany.com/Service/Service3
As you can see, on mycompany.com we have hosted 3 web services, each with its own distinct URL.
What we have is a JBoss instance with 3 different WAR files deployed in it. When someone hits a service, the request gets past our firewall, and then the load balancer redirects it to JBoss on port 8080 on the required path, where it gets serviced.
The 3 services are consumed by 3 different clients. My question: if, say, Client1 using Service1 is only given the URL corresponding to it, can they use some kind of scanner that also informs them that Service2 and Service3 are available on mycompany.com/Service?
Irrespective of clients: can anyone simply use a scanner tool to identify which service endpoints are exposed on the host?
Kindly note they are a mix of SOAP (WSDL) and REST-based services deployed on the same JBoss instance.
Yes, someone can scan for those endpoints. Their scanner would generate a bunch of 404s in your logs, because it would have to guess the other URLs. If you have some kind of rate limiting firewall, it might take them quite a long time. You should be checking the logs regularly anyway.
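For illustration, directory-enumeration tools do exactly this kind of guessing; a hypothetical run against your host could look like this:

    # tries every name in the wordlist; each miss shows up as a 404 in your access log
    gobuster dir -u https://mycompany.com/Service/ -w wordlist.txt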
If you expose your URL to the public internet, relying on people not finding it is just security via obscurity. You should secure each URL using application-level security, and assume that the bad guys already have the URL.
You may want to consider adding subdomains for the separate applications (e.g. service1.mycompany.com, service2.mycompany.com) - this will make firewalling easier.