Using Firebase on offline networks

I have an intranet network of 50 computers without an internet connection; is it possible to use Firebase to share data across these computers - for example, for a chat program? My limited knowledge in this field hints that this should be possible if the Firebase API were downloaded to the local network and referenced with local paths rather than web links - is this possible?
Thank you
Greg

Firebase is a cloud service: all data is loaded from, and synced back to, the cloud. There is currently no Firebase server that can be run locally, so it won't work without an internet connection.
Firebase does a very good job of punching through proxies/firewalls though, so if you're concerned that your company's network connection might block it, I recommend at least giving it a try first - it will probably work, unless you literally have no internet connection at all.

Related

504 Gateway Time-out from Google Cloud Platform, but only sometimes

I'm hosting a single-node Couchbase cluster in GCP and a Flask backend on an OpenShift cluster that serves an Angular frontend. The problem is that when my Angular app calls a POST endpoint in Flask, the connection to the VM (Couchbase) sometimes takes so long that Flask has to return a "504 Gateway Time-out". But this happens only sometimes; other times it works at proper speed, and I'm not able to troubleshoot it. The total data size is less than 100 MB and everything is 100% memory-resident in Couchbase, so I don't think Couchbase is the problem - just the connection latency to GCP.
My guess is that when your Flask backend connects to your VM for the first time, it takes longer than usual because it needs to establish the connection, authenticate, and possibly do other things depending on your use case.
This is a common problem when hosting your app on App Engine or something similar, and the solution there is to use "warm-up requests". A warm-up request spins up the whole connection (and, in App Engine's case, the instance) and makes a test connection, so that when the real request comes, everything is already set up.
So I suggest that you check how warm-up requests work and configure something similar between your Flask app and the VM: basically a route in Flask whose only purpose is to establish a test connection with a test payload. That way the next connection will be up to speed, with no 504 errors.
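For illustration, here is a minimal sketch of such a warm-up route in Flask. The host is a placeholder (8091 is Couchbase's default management port); a scheduler could hit /_warmup periodically so the path to the VM is already exercised before a real request needs it:

    import socket

    from flask import Flask

    app = Flask(__name__)

    COUCHBASE_HOST = "10.0.0.5"   # placeholder: your Couchbase VM's address
    COUCHBASE_PORT = 8091         # Couchbase's default management port

    @app.route("/_warmup")
    def warmup():
        # Open and close one TCP connection so DNS resolution and routing
        # to the VM are already done before a real request needs them.
        try:
            socket.create_connection((COUCHBASE_HOST, COUCHBASE_PORT), timeout=5).close()
            return "warm", 200
        except OSError:
            return "cold", 503

    if __name__ == "__main__":
        app.run()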
Try clearing the cache of the load balancer in the GCP console.
I already faced the same kind of issue and resolved it using the above technique.

How to set up communication between an Arduino, a web app, and AWS?

I'm making a project in which temperature and humidity levels are sensed by an Arduino and sent to AWS with an ESP8266-01S. At the same time, that data should also be shown in a web application (which may be in Node.js, Java, etc.).
So what I'm asking is what the architecture should be. What is the best practice? Does AWS also provide a web app that I can use both as a cloud database and as a web application, or should I build a separate web app project that connects to AWS?
I searched on Google, but the only answers I can find cover just two pieces - Arduino and AWS - without the third one connected to them, in my case the web app.
Make use of the MQTT protocol.
Components required:
The PubSubClient.h library on the ESP8266, used to publish temperature and humidity data to an MQTT broker on AWS
A Mosquitto MQTT broker set up on AWS, used to accept data from the ESP8266
A Python script that subscribes to data from the Mosquitto broker and dumps it into a database (my suggestion is InfluxDB) - see the sketch below
A graphing platform to query the database and display visual time-series graphs (my suggestion: Grafana)
Use AWS only for purchasing a virtual machine; the rest can be taken care of using open-source platforms.
Assuming you want to display graphs of temperature and humidity, using Grafana is the best practice.
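Here is a minimal sketch of the subscriber script mentioned above, assuming paho-mqtt 1.x and the influxdb Python package, and assuming the ESP8266 publishes JSON such as {"temperature": 22.5, "humidity": 41.0} to a made-up topic "sensors/esp8266"; the broker host is a placeholder:

    import json

    import paho.mqtt.client as mqtt
    from influxdb import InfluxDBClient

    BROKER_HOST = "mqtt.example.com"   # placeholder: your AWS VM
    TOPIC = "sensors/esp8266"          # placeholder topic name

    # InfluxDB running next to the broker; the database must already exist.
    influx = InfluxDBClient(host="localhost", port=8086, database="sensors")

    def on_connect(client, userdata, flags, rc):
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)
        # One InfluxDB point per MQTT message; Grafana queries these later.
        influx.write_points([{
            "measurement": "climate",
            "fields": {
                "temperature": float(reading["temperature"]),
                "humidity": float(reading["humidity"]),
            },
        }])

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER_HOST, 1883)
    client.loop_forever()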
You will not find a silver bullet here. A proper architecture for your case depends on many things, and there can be different approaches, each with its own pros and cons.
There are many aspects to cover, including connectivity, security, updates, availability, and costs.
Usually IoT devices are not connected directly to the cloud, because they don't have a constant connection, or any network connection at all. There is a hub (or middleware) that collects data from the sensors/devices and sends it to the cloud for processing.
But many cloud vendors provide complex out-of-the-box solutions here (including AWS).
These are just examples.

Locating the service registry in a standalone LAN (service discovery pattern)

Some background
I'm working on a project that involves a standalone LAN with a number of Linux PCs and one central Windows PC. I need to write web services (right now I have some examples working with Jersey in Java) for both the Linux PCs and the central Windows PC. I want to publish an API gateway on the central PC, which will need to know the addresses and ports of the other PCs so it can address their REST services.
The question at hand
My question can be separated into two parts:
1) How will I make service discovery work? The only option I know about from my research so far is:
Using etcd. It seems easy and simple, but I don't see its benefit over managing a database in the API gateway and publishing routes on it for registering and deregistering services.
2) How will the services on the other Linux PCs know the address of the central Windows PC? I have read many articles about the service discovery pattern and failed to find a single one that addresses how exactly the services learn the address of the service registry itself. Let's assume that the address is fixed in the LAN and doesn't change while my system is running, but I don't know it when deploying (my clients need to deploy the system in several different LANs where the address of the central station can differ, and I can't trust them to define it in a config file before deploying).
Thanks a lot in advance for any assistance :)
I don't have the reputation to answer properly, but I am interested in this question for similar reasons.
You might find this question and answer on Programmers Stack Exchange useful; it talks about a broadcast approach.
I'm researching etcd and Netflix Eureka and trying to understand whether they could be applied on a local LAN.
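For what it's worth, here is a minimal sketch of how the broadcast approach could look, under the assumption that the registry answers UDP probes on a made-up discovery port (50000). Services learn the registry's address at startup without any pre-shared configuration:

    import socket

    DISCOVERY_PORT = 50000  # arbitrary port chosen for this sketch

    def find_registry(timeout=3.0):
        """Broadcast a probe on the LAN; return the registry's (ip, api_port)."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(b"WHERE-IS-REGISTRY", ("255.255.255.255", DISCOVERY_PORT))
        data, addr = s.recvfrom(1024)   # the reply carries the API port
        return addr[0], int(data.decode())

    def registry_responder(api_port):
        """Run on the central Windows PC: answer probes with the API port."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", DISCOVERY_PORT))
        while True:
            data, addr = s.recvfrom(1024)
            if data == b"WHERE-IS-REGISTRY":
                s.sendto(str(api_port).encode(), addr)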

How to access a web service behind a NAT?

We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on a server at the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.
The problem comes with the installation. When we install this service, port forwarding always seems to become a problem, because the outside world needs access to Tomcat and most of the time the owner doesn't know the router password, etc.
I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.
Set up an SSH tunnel from each client office to a central server. Basically, the remote devices would connect to that central server on a port, and the traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there's really no other way to accomplish it, since I need SSL end-to-end (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
Set up UPnP to try to open the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain how exactly they work. We have access to a centralized server, which is required for authentication, if that makes it any easier.
What else should I be looking at to get this accomplished?
Is there no way this service can be hosted publicly by you or a hosting provider rather than at the customer's site?
I had a similar situation when I was developing kiosks. I never knew what type of network environment I'd have to deal with on the next installation.
I ended up creating a PPTP VPN that allowed all the kiosks to connect to one server I hosted publicly. We then created a controller web service to expose access to the kiosks, which were all connected via the VPN. I'm not sure how familiar you are with VPNs, but with the VPN connection I was able to completely circumvent the firewall in front of each kiosk by accessing it via its VPN-assigned IP.
Each kiosk node was incredibly easy to set up once I had the VPN server running. It also brought management benefits and licensing revenue I originally hadn't thought about. With this infrastructure I was easily able to roll out services accessible via mobile phones.
Best of luck!
Solutions exist to "dynamically" access software on a computer behind a NAT, but mostly for UDP communication.
The UDP hole punching technique is one of them. However, it isn't guaranteed to work in every possible situation: if both sides of the communication are behind a symmetric NAT, it won't.
You can obviously reduce the probability that a client can't communicate by using UPnP as a backup (or even primary) alternative.
I don't know web services well enough to say whether using UDP for your web service is an option (or even possible).
Using the same technique directly for TCP is likely to fail (TCP connections aren't stateless, which causes a lot of problems here).
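To make the idea concrete, here is a minimal sketch of UDP hole punching, under the assumption that both peers have already learned each other's public (ip, port) out of band - for example from your centralized authentication server acting as a rendezvous point. The peer endpoint below is a placeholder:

    import socket

    PEER = ("203.0.113.7", 40000)   # placeholder: the other side's public endpoint
    LOCAL_PORT = 40000

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LOCAL_PORT))

    # Sending to the peer creates an outbound mapping (a "hole") in our own
    # NAT. Once both sides have done this, each NAT treats the other's
    # packets as replies and lets them through - unless the NAT is symmetric.
    for _ in range(5):
        sock.sendto(b"punch", PEER)

    sock.settimeout(10)
    data, addr = sock.recvfrom(1024)
    print("received", data, "from", addr)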
An alternative using the same technique would be to set up a VPN based on UDP (such as OpenVPN), but as you stated, you'll have to manage keys, certificates, and so on. This can be automated (I did it), but still, it's not really trivial.
===EDIT===
If you really want to use TCP, you could create a simple "proxy" program on the client boxes to serve as a relay.
You would have the following schema (a sketch of the relay follows the list):
The web service on the client boxes, behind a NAT
The "proxy" program on the same boxes, establishing an outgoing (thus non-blocked) TCP connection to your company servers
Your company servers host a web service as well, which requires something like a "client identifier" to redirect each request over the corresponding established TCP connection
The proxy program interrogates the local web service and sends the response back to the company servers, which relay it to the original requester
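Here is a minimal sketch of that client-side relay, with placeholder addresses and a made-up identification header. For simplicity it handles one logical connection at a time and reconnects in a loop:

    import socket
    import threading
    import time

    COMPANY = ("relay.example.com", 9000)   # placeholder: your company server
    LOCAL = ("127.0.0.1", 8443)             # the web service behind the NAT

    def pump(src, dst):
        """Copy bytes from src to dst until either side closes."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    while True:
        try:
            upstream = socket.create_connection(COMPANY)
            upstream.sendall(b"CLIENT-ID office-42\n")   # made-up routing header
            local = socket.create_connection(LOCAL)
            t = threading.Thread(target=pump, args=(upstream, local), daemon=True)
            t.start()
            pump(local, upstream)   # blocks until this connection ends
            t.join()
        except OSError:
            time.sleep(5)           # server unreachable: back off and retry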
An alternative: you could have the proxy program connect directly to the requester to improve performance, but then you might run into the same NAT problems you're trying to avoid.
It's things like this that are the reason people are tunneling everything over HTTP now, and why certain hardware vendors charge a small fortune for Layer 7 packet filtering.
This is a tremendous amount of work to fix one problem when the customer has at least three problems. Besides the one you've identified: if they don't know their own password, then who does? An administrator who doesn't work there anymore? That's a problem.
Second, if they don't know the password, they're almost certainly far behind on firmware updates for their firewall.
I think they should seriously consider doing a PROM reset on their firewall and reconfiguring it from scratch (and upgrading the firmware while they're at it).
Three birds, one stone.
I had to do something similar in the past, and I believe the best option is the first one you proposed.
You can do it the easy way, using ssh with its -R option, public-key authentication, and a couple of scripts to check for connectivity. Don't forget the various keep-alive and timeout features of ssh.
Don't worry about the performance. Use unprivileged users and ports if you can. Don't bother setting up a CA; the public key of each remote server is easier to maintain unless you are in the thousands.
Monitoring is quite simple: each server should test the service on the central server. If that fails, either the tunnel is down or there's no connectivity. Restarting the tunnel will do no harm in any case.
Or you can do it at the network level, using IPsec (strongSwan). This can be trickier to set up, and it's the option I used, but I would use SSH next time - it would have saved me a lot of time.
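A minimal sketch of such a keep-alive wrapper around ssh -R, to be run on each office server; host names, ports, and the key path are placeholders. The -R option makes the office Tomcat (local port 8443) reachable on the central server's port 8443 while the tunnel is up, and the loop restarts ssh whenever it dies:

    import subprocess
    import time

    CMD = [
        "ssh", "-N",                      # no remote command, tunnel only
        "-R", "8443:localhost:8443",      # remote port -> local Tomcat
        "-o", "ServerAliveInterval=30",   # detect dead connections
        "-o", "ExitOnForwardFailure=yes", # fail fast if the port is taken
        "-i", "/etc/tunnel/id_ed25519",   # placeholder key for an unprivileged user
        "tunnel@central.example.com",     # placeholder central server
    ]

    while True:
        subprocess.call(CMD)   # blocks for as long as the tunnel is alive
        time.sleep(10)         # back off, then re-establish the tunnel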
+1 for going with an SSH tunnel. It's well known, widely available, and not too hard to configure.
However, as you point out, you are running SSL already, so the SSH encryption is redundant. Instead of SSH you could use a regular tunneling proxy, which provides the tunneling without the encryption. I've used this one in the past and it worked well, although I didn't load-test it - it was used with just a handful of users.
Here's a blog post from someone who used the tunneling proxy to access his webcam from outside his firewall.
Set up an Apache server in front of your Tomcat. The Apache server should be visible from the internet, while the Tomcat should not be.
Configure Apache to forward all traffic to Tomcat. This can easily be accomplished using mod_proxy (check out the ProxyPass and ProxyPassReverse directives).
Install your SSL certificate in Apache, so that all clients talk HTTPS to the Apache server, which in turn talks plain HTTP to Tomcat.
No tunneling or other nastiness, and you will be surprised how easy Apache is to configure for this.
If you want a RESTful integration with the client server, a tunnel to the central server acting as a proxy seems the best approach.
But if this is not a hard requirement, you can let the central server handle the RESTful stuff and integrate the central server and client server with other middleware. Good candidates would be RMI or JMS; for example, an RMI connection initiated by the client allows the server to make RMI calls back to the client.
You could try connecting to a PC/server and tunneling all the data via Hamachi (free VPN software). You install this tool and it creates a reverse connection (from inside your NAT to the outside), so you can connect to it.
Site: http://hamachi.cc/

Converting an existing C++ web service to a load-balanced server?

We have a C++ (SOAP-based) web service deployed using the Systinet C++ Server, with a single port for all incoming connections from a Java front end.
However, recently in the production environment, when it was tested with around 150 connections, the service went down, so I wonder: how do I achieve load balancing for a C++ SOAP-based web service?
The service is accessed as SOAP/HTTP?
Then you create several instances of your service and put some kind of router between your clients and the web service to distribute the requests across the instances. People often use dedicated hardware routers for that purpose.
Note that this is often not truly load "balancing", in that the router can be pretty dumb, for example just using a simple round-robin algorithm. Such simple approaches can be pretty effective.
I hope your services are stateless; that simplifies things. If individual clients must maintain affinity to a particular instance, things get a little trickier.
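To illustrate how dumb the router can be, here is a minimal sketch of round-robin backend selection, with made-up instance addresses; a real deployment would more likely use a hardware router or a reverse proxy in front of the service instances:

    import itertools

    # Placeholder addresses of identical, stateless service instances.
    BACKENDS = [
        "http://10.0.0.11:8080",
        "http://10.0.0.12:8080",
        "http://10.0.0.13:8080",
    ]

    _rotation = itertools.cycle(BACKENDS)

    def next_backend():
        """Hand out backends in strict rotation - no health checks, no
        weighting. This only works well because the service is stateless."""
        return next(_rotation)

    # The router would resolve one target per incoming SOAP/HTTP request:
    for _ in range(5):
        print(next_backend())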