Proxy between device and Google IoT Core using MQTT? - google-cloud-platform

I have a situation where I want to use Google IoT Core to support bi-directional communication between my devices and my existing GCP stack. The trouble is, some of my devices cannot connect to GCP's MQTT bridge because they are blocked from reaching it directly. The communication must instead go through my own hosted server. In fact, some devices will not be allowed to send or accept traffic to or from anything but my own hosted server, and this is completely out of my control.
Basically all of the suggested solutions I have found propose MQTT over WebSockets. WebSockets consume too many system resources for the server I have available, so an MQTT proxy over WebSockets is extremely undesirable and likely not even feasible for my use case. It also defeats the purpose of using a lightweight, low-bandwidth protocol like MQTT in the first place.
To make matters more complicated, the Google IoT Core documentation explicitly says that it does not support bridging MQTT brokers with its MQTT bridge. So hosting my own MQTT server seems to be out of the question.
Is it even possible to create a proxy -- either forward or reverse -- for this use case that allows for native, encrypted, full-duplex MQTT traffic? If so, what would be the recommended way to achieve this?
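One approach that avoids WebSockets entirely is a plain TCP relay (TLS passthrough) on the hosted server: the device opens its normal TLS/MQTT connection, but to the proxy's address, and the proxy simply copies bytes in both directions to mqtt.googleapis.com:8883, so the TLS session still terminates at Google and the traffic stays natively encrypted and full duplex. A minimal asyncio sketch, assuming the devices can be pointed at the proxy host and port (the listen port here is illustrative):

    # Minimal TLS-passthrough TCP relay: devices connect here, bytes are copied
    # verbatim to the Google IoT Core MQTT bridge, so TLS still ends at Google.
    import asyncio

    UPSTREAM_HOST = "mqtt.googleapis.com"   # Google IoT Core MQTT bridge
    UPSTREAM_PORT = 8883
    LISTEN_PORT = 8883                      # illustrative: port the devices may reach

    async def pump(reader, writer):
        try:
            while data := await reader.read(4096):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_device(dev_reader, dev_writer):
        up_reader, up_writer = await asyncio.open_connection(UPSTREAM_HOST, UPSTREAM_PORT)
        # Relay both directions concurrently; MQTT is full duplex.
        await asyncio.gather(pump(dev_reader, up_writer), pump(up_reader, dev_writer))

    async def main():
        server = await asyncio.start_server(handle_device, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

Because the proxy never decrypts anything, the devices still validate Google's certificate and present their credentials end to end; the proxy only needs enough resources to shuffle sockets.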

If you have a hybrid setup, meaning you have on-premises servers and a cloud server, and you want to bridge them to Google IoT Core over MQTT:
You can try the broker in this GitHub link; it has been tested against Google IoT Core, since Google IoT Core does not support third-party MQTT brokers directly.
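Whatever sits in front of the bridge, each session still has to authenticate the way Google IoT Core expects: the MQTT client ID is the full device path and the password is a short-lived JWT signed with that device's private key, which is presumably also why broker-to-broker bridging is not supported. For reference, a rough sketch of a single device session with paho-mqtt and PyJWT (project, region, registry, device, and key file names are placeholders):

    # Rough sketch of a single device session against the Google IoT Core MQTT
    # bridge; project, registry, device, and key file names are placeholders.
    import datetime
    import ssl

    import jwt                      # PyJWT
    import paho.mqtt.client as mqtt

    PROJECT, REGION = "my-project", "us-central1"
    REGISTRY, DEVICE = "my-registry", "my-device"

    def make_jwt():
        now = datetime.datetime.utcnow()
        claims = {"iat": now, "exp": now + datetime.timedelta(minutes=20), "aud": PROJECT}
        with open("rsa_private.pem") as f:
            return jwt.encode(claims, f.read(), algorithm="RS256")

    client_id = f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}/devices/{DEVICE}"
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(username="unused", password=make_jwt())  # username is ignored
    client.tls_set(ca_certs="roots.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect("mqtt.googleapis.com", 8883)
    client.publish(f"/devices/{DEVICE}/events", '{"temp": 21.5}', qos=1)
    client.loop()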

Related

AWS IoT Connection using legacy TCP Firmware

I have an existing solution for acquired IoT devices with proprietary legacy firmware that uses raw TCP.
The solution involves an application written in Node.JS running on an AWS EC2 instance. It creates a TCP server that opens a socket for each device. Every time an IoT device pings the server, the server must ping back with a default message (derived from the original message) to confirm the connection is still online.
After pinging back, it unscrambles the TCP data, parses it into JSON, and adds it to a MongoDB database, which is then used to display data to the customers.
Also, on the same EC2 instance, another application is invoked on demand via front-end API requests to send commands to the IoT devices and log them into the same DB, sometimes changing constants that are defined separately for each device.
As we are developing new devices that use the MQTT protocol, we are creating a new serverless architecture using AWS IoT Core. I'm looking for a serverless solution to integrate the legacy devices into this more robust, less demanding, and more cost-effective architecture.
As I can't change the protocol, it must still be raw TCP, so I'm looking for something to convert the TCP traffic into MQTT and then forward it to IoT Core. That way I can use the same architecture for both new and old devices, giving me time to slowly decommission the older devices without having to maintain both infrastructures.
The closest thing I've found is AWS IoT Greengrass (https://aws.amazon.com/blogs/iot/converting-industrial-protocols-with-aws-iot-greengrass/), which appears to run a TCP client in an AWS Lambda function and then forward the data as MQTT into AWS IoT Core.
We are testing this now, but I would like to know if anyone has had similar issues with legacy protocols using Greengrass or other solutions.
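Outside of Greengrass, the same conversion can be prototyped as a small protocol-adapter service: a TCP server that sends the acknowledgement back on every device message and republishes the decoded payload to AWS IoT Core over MQTT with mutual TLS. A minimal sketch, assuming a hypothetical decode() for the proprietary framing and placeholder endpoint, certificate, port, and topic names:

    # Sketch of a TCP-to-MQTT adapter: acknowledge each device message, then
    # republish the decoded payload to AWS IoT Core. decode(), the endpoint,
    # certificates, port, and topic are all placeholders.
    import json
    import socketserver

    import paho.mqtt.client as mqtt

    IOT_ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder

    mqttc = mqtt.Client()
    mqttc.tls_set(ca_certs="AmazonRootCA1.pem",
                  certfile="adapter.pem.crt", keyfile="adapter.private.key")
    mqttc.connect(IOT_ENDPOINT, 8883)
    mqttc.loop_start()

    def decode(raw: bytes) -> dict:
        # Placeholder for the proprietary unscrambling/parsing logic.
        return {"raw": raw.hex()}

    class DeviceHandler(socketserver.StreamRequestHandler):
        def handle(self):
            while True:
                raw = self.request.recv(1024)
                if not raw:
                    break
                self.request.sendall(b"ACK:" + raw[:8])   # ping back the device
                mqttc.publish("legacy/telemetry", json.dumps(decode(raw)), qos=1)

    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), DeviceHandler) as srv:
        srv.serve_forever()

The same handler could just as well run as a Greengrass component or a small ECS task; the point is that IoT Core only ever sees MQTT.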

AWS with existing TCP Server implementation

I have an existing AWS solution which includes the following: a legacy application (written in C#) running on an EC2 instance. This legacy application implements a TCP server and listens on a specific TCP port. It contains custom code to decode the data and dump it into a database. The choice of database is less important for now.
I'm after a more contemporary AWS-based solution that can potentially deprecate the existing legacy application. Most options with Amazon IoT involve HTTP and MQTT. I can't change the protocol; it must still be TCP.
The closest thing I can find is AWS IoT Greengrass (https://aws.amazon.com/blogs/iot/converting-industrial-protocols-with-aws-iot-greengrass/), which appears to involve running a TCP client in an AWS Lambda function and then forwarding the data over MQTT to AWS IoT Core.
I'm curious what other approaches may be possible.

Choosing AWS service for MQTT broker

I need to build an IoT MQTT broker that works over secure MQTT. I also need to manage the users that connect to this service and manage subscription access control. I don't need MQTT over WebSockets.
At first glance I was planning to use the EC2 service to create an Ubuntu virtual machine and install Mosquitto on it. But later I found the Internet of Things section, which contains a set of services.
Is it possible to build an MQTT service that meets my requirements using the Internet of Things services? By choosing Internet of Things I hope to get more specialized functionality.
You can use AWS IoT for this instead; it provides a managed MQTT endpoint that you can add 'things' to.
https://docs.aws.amazon.com/iot/latest/developerguide/mqtt.html
You'll be able to easily connect the endpoint to other services, as this is part of their cloud solutions.
https://docs.aws.amazon.com/iot/latest/developerguide/iot-gs.html
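On the "manage users and subscription access control" requirement: with AWS IoT, each client authenticates with an X.509 certificate, and what it may connect as, publish, or subscribe to is governed by the IoT policy attached to that certificate rather than by broker-level user accounts. A rough boto3 sketch (region, account ID, client ID, topics, and certificate ARN are placeholders):

    # Sketch: per-device access control on AWS IoT via a policy attached to the
    # device certificate. Region, account ID, names, and ARNs are placeholders.
    import json

    import boto3

    iot = boto3.client("iot")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # the device may connect only with its own client ID
                "Effect": "Allow",
                "Action": "iot:Connect",
                "Resource": "arn:aws:iot:eu-west-1:123456789012:client/sensor-001",
            },
            {   # it may publish/receive only under its own topic prefix
                "Effect": "Allow",
                "Action": ["iot:Publish", "iot:Receive"],
                "Resource": "arn:aws:iot:eu-west-1:123456789012:topic/sensors/sensor-001/*",
            },
            {   # and may subscribe only to filters under that prefix
                "Effect": "Allow",
                "Action": "iot:Subscribe",
                "Resource": "arn:aws:iot:eu-west-1:123456789012:topicfilter/sensors/sensor-001/*",
            },
        ],
    }

    iot.create_policy(policyName="sensor-001-policy", policyDocument=json.dumps(policy))
    iot.attach_policy(policyName="sensor-001-policy",
                      target="arn:aws:iot:eu-west-1:123456789012:cert/EXAMPLE_CERT_ID")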

Best architecture to deploy a TCP/IP and UDP service on Amazon AWS (without EC2 instances)

I am trying to figure out the best way to deploy a TCP/IP and UDP service on Amazon AWS.
I did some research before asking this question and could not find anything. I found other protocols like HTTP and MQTT, but no TCP or UDP.
I need to refactor a GPS tracking service currently running on Amazon EC2. The GPS devices send position data using the UDP and TCP protocols. Every time a message is received, the server has to respond with an ACKNOWLEDGE message, confirming reception to the GPS device.
The problem I am facing right now, which is the motivation to refactor, is:
When traffic increases, the server is not able to keep up with all the messages.
I tried to solve this issue with a load balancer and autoscaling, but UDP is not supported.
I was wondering if there is something like API Gateway that would give me a TCP or UDP endpoint, leave the message on an SQS queue, and process it with a Lambda function.
Thanks in advance!
Your question really doesn't make a lot of sense - you are asking how to run a service without running a server.
If you have reached the limits of a single instance, and you need to grow, look at using the AWS Network Load Balancer with an autoscaled group of EC2 instances. However, this will not support UDP - if you really need that, then you may have to look at 3rd party support in the AWS Marketplace.
Edit: Serverless architectures are designed for HTTP-based applications, where you send a request and get a response. Since your app is TCP-based and uses persistent connections, most existing serverless implementations simply won't support it. You will need to rewrite your app to support HTTP, or use traditional server-based infrastructure that can support persistent connections.
Edit #2: As of Dec. 2018, API Gateway supports WebSockets. This probably doesn't help with the original question, but it opens up other alternatives if you need to run Lambda code behind a long-running connection.
If you want to go more serverless, I think the ECS container service has instances that accept TCP and UDP. Also take a look at running Docker containers with Kubernetes. I am not sure if they support those protocols, but I believe they do.
If not, some EC2 instances with load balancing can be your best bet.
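For what it's worth, the ACK requirement itself is easy to keep while moving the heavy parsing off the ingest path: the listener replies immediately and hands the raw payload to a queue (SQS, as suggested in the question) for a Lambda or worker to decode later. A rough asyncio sketch with a placeholder port and queue URL; the scaling and load-balancing caveats from the answers above still apply to whatever runs this listener:

    # Sketch: UDP listener that acknowledges each GPS message immediately and
    # pushes the raw payload to SQS for asynchronous parsing. The port and
    # queue URL are placeholders.
    import asyncio

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/gps-ingest"  # placeholder

    class GpsProtocol(asyncio.DatagramProtocol):
        def connection_made(self, transport):
            self.transport = transport

        def datagram_received(self, data, addr):
            # Confirm reception to the device right away.
            self.transport.sendto(b"ACK", addr)
            # Defer decoding/storage to whatever consumes the queue.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=data.hex())

    async def main():
        loop = asyncio.get_running_loop()
        transport, _ = await loop.create_datagram_endpoint(
            GpsProtocol, local_addr=("0.0.0.0", 5055))
        try:
            await asyncio.Event().wait()   # run until cancelled
        finally:
            transport.close()

    asyncio.run(main())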

Kafka cluster security for IOT

I am new to Kafka and want to deploy a production Kafka cluster for IoT. We will be receiving messages from Raspberry Pis over the internet into our Kafka cluster, which we will host on AWS.
Now the concern: since we need to open the Kafka port to the public internet, we are exposing the system to threats, as opening a port to the outside world compromises security.
Please let me know what can be done to prevent malicious access through the Kafka port over the internet.
Pardon me if the question is not clear; do let me know if rephrasing is needed.
Consider using a REST Proxy in front of your Kafka brokers (such as the one from Confluent). Then you can secure your Kafka cluster just as you would secure any REST API exposed to the public internet. This architecture is proven in production for several very large IoT use cases.
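As a concrete illustration of that pattern, the device (or a small gateway) posts JSON over HTTPS to the REST Proxy instead of speaking the Kafka wire protocol, so only the proxy is exposed and the brokers stay on a private network. A hedged sketch against the Confluent REST Proxy v2 produce API (host, topic, and how the proxy is secured are placeholders):

    # Sketch: producing to Kafka through a Confluent REST Proxy over HTTPS, so
    # the brokers themselves are never exposed to the internet. Host and topic
    # are placeholders; the proxy would sit behind TLS and some form of auth.
    import requests

    REST_PROXY = "https://rest-proxy.example.com"
    TOPIC = "iot-telemetry"

    resp = requests.post(
        f"{REST_PROXY}/topics/{TOPIC}",
        headers={
            "Content-Type": "application/vnd.kafka.json.v2+json",
            "Accept": "application/vnd.kafka.v2+json",
        },
        json={"records": [{"key": "rpi-42", "value": {"temp": 21.5}}]},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())   # partition/offset information for the produced records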
There are two approaches that are most effective for Kafka security:
Implement SSL encryption for Kafka.
Implement authentication using SASL.
You can follow this guide: http://kafka.apache.org/documentation.html#security_sasl
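If you do expose the broker listener directly, the client side of that setup looks roughly like this, combining TLS encryption with SASL authentication (broker address, CA file, and credentials are placeholders):

    # Sketch: a producer connecting over SASL_SSL, matching the SSL-encryption
    # plus SASL-authentication setup described in the linked guide. Broker
    # address, CA file, and credentials are placeholders.
    from kafka import KafkaProducer   # kafka-python

    producer = KafkaProducer(
        bootstrap_servers="kafka.example.com:9094",   # public TLS listener
        security_protocol="SASL_SSL",                 # encrypt traffic on the wire
        ssl_cafile="ca.pem",                          # CA that signed the broker certs
        sasl_mechanism="PLAIN",                       # or SCRAM-SHA-256/512
        sasl_plain_username="rpi-42",
        sasl_plain_password="secret",
    )

    producer.send("iot-telemetry", b'{"temp": 21.5}')
    producer.flush()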