How to get the IP address Jetty is running on

I'm developing a clustered Spring MVC website and emulating the cluster on one developer machine, running several instances of Jetty 9.2.2 on different local addresses:
127.0.0.10
127.0.0.11
127.0.0.12
and so on. To use the CometD clustering solution, I need to know at runtime the IP address of the Jetty server that is serving this particular instance, i.e. whether it is 127.0.0.10 or 127.0.0.12. I set this parameter in start.ini:
jetty.host=127.0.0.N
where N is different for each of the 5 instances.
So, how do I find it out at runtime?

The CometD Oort cluster supports three modes of discovering other nodes: automatic, static and manual.
The automatic way is based on multicast, so if you have multicast working on the hosts the problem should be solved.
With the static way, you just need one "well known" server to be up and running, and point all other nodes to that "well known" server.
With the manual way, you can use other discovery mechanisms (for example, lookup jetty.host in the System properties) and initialize the Oort instances with the discovered values.
It is all explained in the documentation.
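For the manual way, a rough Java sketch (assuming, as suggested above, that jetty.host is visible as a System property, and assuming a /cometd context on port 8080; adjust URLs and the node list to your setup):

import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.oort.Oort;

public class OortManualConfig {
    // Read the host this node was started with and point Oort at the other nodes explicitly.
    public static Oort startOort(BayeuxServer bayeuxServer) throws Exception {
        String host = System.getProperty("jetty.host", "127.0.0.10");
        String localUrl = "http://" + host + ":8080/cometd";

        Oort oort = new Oort(bayeuxServer, localUrl);
        oort.start();

        // Tell this node about the other instances of the emulated cluster.
        for (String other : new String[]{"127.0.0.10", "127.0.0.11", "127.0.0.12"}) {
            if (!other.equals(host)) {
                oort.observeComet("http://" + other + ":8080/cometd");
            }
        }
        return oort;
    }
}

This is only an illustration of the manual mode; the documentation linked above covers the exact lifecycle and the servlet-based static/automatic configuration.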

Related

Kubernetes: How to connect one pod to another on an arbitrary port - with or without services?

We are currently transitioning our apps to Kubernetes, and I have two apps, appP and appH, that need to communicate with each other over a port unknown at startup time.
Unlike most of our apps, we don't have a set port for them to communicate over. Before Kubernetes, a third-party app (out of my control) would tell appP to start processing an item, itemA, identified by a unique id, and it would also tell appH to handle the processed data produced by appP.
To coordinate communication between appP and appH, appH would generate a port based on the unique id and publish the host and port info to connect on to an intermediate app (IA). appP, once done with its processing, queries IA for the connection information based on the unique id and sends the data over.
Now we have to adapt this to Kubernetes. Each app runs in its own deployment, as does the IA. So how can I set up appH to accept the connection over a port without being able to specify it in the service definition?
Note: I've seen some posts say that pods should be able to communicate with any other pods in the cluster regardless of the ports specified in the service definition, but I can't find much confirming information on this, and I don't have much free time on our cluster to bang my head against it.
Would it work just fine as is regardless? My biggest worry is the IP resolution. Currently appH grabs its IP based on the host it's running on (using boost). I'm not sure how this resolves within a container.
If not, my next thought would be to set up a headless service with a selector for appH in order to allow for IP resolution. What I am unsure of then is whether I could have appP connect to <appH_Service>:<arbitrary_port>.
Would the service even have to be headless in this scenario? I mostly say headless with a selector because I saw in one post that it is the only kind that doesn't need a port in its spec. Also, I am unsure whether the connection would go through unless it was the actual pod's IP being connected to, rather than the Service's.
Any info or clarification is appreciated. For the most part, I can't really change the architecture of these apps right now; I just have to get them talking to each other as is, and I haven't found much clear information on this type of case.
Note: We use helm and coredns if anyone is curious.
The Kubernetes networking model is as follows: a Pod is a group of containers that share a single network identity (a cluster IP). Any port exposed by a container is thus automatically exposed on the Pod. The model requires that each Pod can communicate with any other Pod.
This means that your current design can work without modifications.
What Services bring to the table is a stable network identity for a group of Pods that is otherwise very volatile. I don't think that applies to your appP/appH coupling.
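To illustrate, here is a hedged Java sketch of that direct Pod-to-Pod flow (the use of InetAddress.getLocalHost() is an assumption: inside a Pod the hostname typically resolves to the Pod IP, so this usually yields the address other Pods can reach; the IA publishing step is only mocked by a println):

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PodToPodSketch {
    // appH side: discover this Pod's IP and listen on the port derived from the unique id.
    // The "ip:port" string would then be published to the intermediate app (IA).
    public static ServerSocket listen(int arbitraryPort) throws IOException {
        String podIp = InetAddress.getLocalHost().getHostAddress(); // usually the Pod IP inside a container
        ServerSocket server = new ServerSocket(arbitraryPort);      // no Service port declaration needed for direct Pod-to-Pod traffic
        System.out.println("publish to IA: " + podIp + ":" + arbitraryPort);
        return server;
    }

    // appP side: take the "ip:port" string it read back from IA and connect straight to the Pod.
    public static Socket connect(String endpoint) throws IOException {
        String[] parts = endpoint.split(":");
        return new Socket(parts[0], Integer.parseInt(parts[1]));
    }
}

A headless Service with a selector would only add a stable DNS name (resolving to the Pod IPs) on top of this; the traffic still goes straight to the Pod IP, so connecting to <appH_Service>:<arbitrary_port> should work the same way.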

How to run all Docker containers on the same local IP address in Django?

I am writing a Django-based application with Docker where there are 3 project apps running in different containers. All the Django applications run at 0.0.0.0:8000.
But when I check the containers' IP addresses to open the applications in a browser, they all run at different IP addresses:
project1 runs at 172.18.0.10:8000 can be accessed at: 172.18.0.10:8000/app1
project2 runs at 172.18.0.9:8000 can be accessed at: 172.18.0.9:8000/app2
project3 runs at 172.18.0.7:8000 can be accessed at: 172.18.0.7:8000/app3
which makes the hyperlinks of my app unusable. How do I run all the containers at one single IP, 'localhost:8000'?
Any suggestions where I am going wrong?
The design is the problem: mapping multiple containers to one IP+port is simply impossible. One port on one IP is always one listening application, whether it is a containerized application or not.
A simple proof: who would then decide which container to send the request to? All of them? Then who would decide which response is the correct one? That is what IP addresses and ports are for: to be able to send a request to a specific application on a specific machine.
I think you should reconsider whatever you are doing and do a bit more research on networking. There are several online courses on that. (I don't want to discourage you in any way, just to point you in the right direction.)
A simple solution without redesigning your app is to put a reverse proxy (e.g. nginx) in front of it. That is the answer to my rhetorical question: a reverse proxy is a middleman that can decide which application to send a request to based on something other than IP/port. The reverse proxy listens on one specific port and then, by rules you provide to it (e.g. path-based), proxies the request to a specific app/IP/port and proxies the response back.
But keep in mind that a reverse proxy in this case is more of a hack than a proper solution.

Locating the service registry in a standalone LAN (service discovery pattern)

Some background
I'm working on a project that involves a standalone LAN with a number of Linux PCs and one central Windows PC. I need to write web services (right now I have some examples working with Jersey in Java) for both the Linux PCs and the central Windows PC. I want to publish an API Gateway on the central PC, which will need to know the addresses and ports of the other PCs so it can address their REST services.
The question at hand
My question can be separated into 2 parts:
1) How will I make service discovery work? The option I know about from my research so far is:
Using etcd. It seems easy and simple, but I don't see its benefit over managing a database in the API Gateway and publishing routes on it for registering and deregistering services.
2) How will the services on the other Linux PCs know the address of the central Windows PC? I have read many articles about the service discovery pattern and failed to find a single one that addresses how exactly the services learn the address of the service registry. Let's assume that the address is fixed in the LAN and doesn't change while my system is running, but I don't know it when deploying (my clients need to deploy it in several different LANs where the address of the central station can be different, and I can't trust them to define it in a config before deploying).
Thanks a lot in advance for any assistance :)
I don't have the reputation to answer but I am interested in this question for similar reasons.
You might find this question and answer on Programmers Stack Exchange useful; it talks about a broadcast approach.
I'm researching etcd and Netflix Eureka and trying to understand whether they could be applied on a local LAN.
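To make the etcd option from part (1) a bit more concrete, here is a hedged sketch of a Linux PC registering its REST endpoint, assuming the jetcd client library and made-up addresses and keys:

import java.nio.charset.StandardCharsets;
import io.etcd.jetcd.ByteSequence;
import io.etcd.jetcd.Client;
import io.etcd.jetcd.KV;

public class EtcdRegistrationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical etcd endpoint running on the central Windows PC.
        Client client = Client.builder().endpoints("http://192.168.1.10:2379").build();
        KV kv = client.getKVClient();

        // Register this machine's REST base URL under a well-known key prefix.
        ByteSequence key = ByteSequence.from("/services/linux-pc-1", StandardCharsets.UTF_8);
        ByteSequence value = ByteSequence.from("http://192.168.1.23:8080/api", StandardCharsets.UTF_8);
        kv.put(key, value).get(); // put() returns a CompletableFuture

        client.close();
    }
}

This still leaves part (2), locating the registry itself; the broadcast idea from the linked answer (or a fixed, well-known hostname on the LAN) is what usually fills that gap.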

How to discover all other instances of my application on the local Windows network?

We want to add a 'collaborative' feature to our application, so our program should be able to automatically discover all other instances of itself that are running on the same local network, without needing any extra configuration from the users.
Our application runs on Windows, so it can use any APIs provided by the OS. We are assuming a network typical for a small business, a couple of Windows PCs, some routers, etc.
Also, will there be problems with anti-viruses, firewalls, and such? We don't want to scare our users.
You can send broadcast packets for that, but that only works within a single subnet (technically within a "broadcast domain", which is usually the subnet). If you just try every IP you can think of, you might trigger firewall pop-ups suggesting that your software is trying to hack the computer. I think the best way is to use broadcast for the current subnet and offer a user interface for adding other hosts.
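For illustration, a minimal sketch of that broadcast idea (shown here in Java purely as a language-agnostic pattern; the port number and message strings are made up, and the same approach works with any socket API on Windows):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class SubnetDiscovery {
    private static final int DISCOVERY_PORT = 45678;            // made-up port reserved for our app
    private static final String PROBE = "MYAPP_DISCOVERY_PROBE";
    private static final String REPLY = "MYAPP_HERE";

    // Every running instance answers probes so peers can learn its address.
    public static void startResponder() {
        Thread responder = new Thread(() -> {
            try (DatagramSocket socket = new DatagramSocket(DISCOVERY_PORT)) {
                byte[] buffer = new byte[256];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String message = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                    if (PROBE.equals(message)) {
                        byte[] reply = REPLY.getBytes(StandardCharsets.UTF_8);
                        socket.send(new DatagramPacket(reply, reply.length, packet.getAddress(), packet.getPort()));
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        responder.setDaemon(true);
        responder.start();
    }

    // Broadcast a probe on the local subnet and collect replies for a couple of seconds.
    public static void discoverPeers() throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(2000);
            byte[] probe = PROBE.getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(probe, probe.length,
                    InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT));

            byte[] buffer = new byte[256];
            while (true) {
                try {
                    DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                    socket.receive(reply);
                    System.out.println("Found instance at " + reply.getAddress().getHostAddress());
                } catch (SocketTimeoutException timeout) {
                    break; // no more replies within the window
                }
            }
        }
    }
}

Anything outside the broadcast domain would then be covered by the manual "add a host" user interface mentioned above.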

IIS binding and throughput, how do they work?

A consultant at work mentioned that you can have web services running on different endpoints and thus use the network more effectively if you have more than one network card with different bandwidths.
Not being too network-savvy: is he saying I can take my web service, tie it down to one network card, and make sure clients call that network card to access it, since I have more bandwidth on that card?
Can I do this without changing the clients?
Also, if my web service has a number of web methods and I want some of them to run on a different network card, would I have to split the web service so that the web methods are on different web services? In other words, would I have to write two web services?
Are you really maxing out your network so much that you need to implement something like this? I would look into bottlenecks within the application first before going down this road.
If your network is the bottleneck, then perhaps moving your web service to a completely different server might be a better solution. It will most likely be cleaner and easier to implement.
Having said that, it can probably be done, but it would be convoluted. The network cards would need to be on different networks; it wouldn't make sense if they were on the same network. Each network card will have a different IP address assigned.
In IIS, you'll need to make sure that the site which houses your web service is bound to one particular IP address.
Can I do this without changing the clients?
It depends. You will need to make sure that whoever calls your web service does so using the IP address configured in IIS. That might mean either creating a DNS record that points to that particular IP address or editing your clients to point to the right IP address.