Implementing MQTT using Google Cloud Pub/Sub - python-2.7

I want to implement MQTT using the Pub/Sub API of Google App Engine in Python. How can I run the Pub/Sub client library in the standard environment? If I am required to run an older version of this API, can anyone provide a sample? Another issue with the latest library is that it is in alpha. Later on I will connect the MQTT client using the GCP IoT protocol.

I would strongly advise against it. Not only are you wasting your time and energy, you are also trying to use something that is not meant to be used that way. In the end, the cost is going to be huge compared to deploying an MQTT broker on your own instance.
If you are looking for a fully managed solution from GCP, you might be interested in trying out Cloud IoT Core, which is currently in private beta. More details here: https://cloud.google.com/iot-core/

I second checking out Google IoT Core.
If you have a special use case, you could always connect Google Pub/Sub to another MQTT-enabled IoT platform like Losant. Here is an example of it:
https://docs.losant.com/applications/integrations/#google-pubsub
Then, as you subscribe to messages from Pub/Sub, you could publish to MQTT topics, and vice versa.
Disclaimer: I work for Losant.

Related

When to use Firestore vs Pub/Sub

Can you elaborate on the differences between Pub/Sub and Firestore and provide some scenarios or use cases on which one to choose?
I'm not sure which one to use for building an app for a food delivery service (like UberEats) that needs real-time updates reflected as soon as data is added or changed in the database, ensuring that customers and drivers know when food is ready for pickup and when it is in transit to its destination.
The difference is quite simple:
Firestore (like the Realtime Database) is for backend-to-frontend (customers/users) communication and realtime updates.
Pub/Sub is a backend-to-backend message bus for async processing.
In your use case, you won't use Pub/Sub to send notifications to your users; use Firestore to push those updates.
Pub/Sub is like a notification system wherein you receive updates when something is added, changed or removed.
Firestore, on the other hand, is a NoSQL database for mobile (Android, iOS) and web apps that can be accessed directly via native SDKs. It supports many data types, from simple strings to complex objects, as well as whatever data structure works best for your app.
It is best to use Firestore for your app as it provides realtime updates.
You can check the detailed documentation of Pub/Sub and Firestore.
For Firestore, you can use either the mobile/web client libraries or the server client libraries.
Here's the link for Firestore, covering its benefits and key features.

C++ example code of receiving data from Google Cloud Platform

I want to find a C++ example of receiving data from Google Cloud via Google's Pub/Sub.
Here it seems that C++ isn't supported:
https://github.com/googleapis/google-cloud-cpp/issues/777
and again here:
https://stackoverflow.com/a/62573062/997112
But on the main Github page:
https://github.com/googleapis/google-cloud-cpp
It says the repository is 90.5% C++.
Can anyone help? Is it possible to receive data from Google Cloud in C++?
The Cloud Pub/Sub C++ client library recently became available. The code samples in the Pub/Sub documentation should all include C++ examples, for instance for publishing messages and receiving messages.

Convert object to ByteBuffer

My situation is that we are developing on spring.boot.version = 1.4.2 and can't upgrade our Boot version (our service is huge).
I need to use Kafka for our service.
So I implemented this feature using spring-cloud-stream-binder-kafka.
spring-cloud-stream-binder-kafka:1.1.2.RELEASE supports Spring Boot 1.4.6, so I could implement this feature.
So far, so good.
But we are using AWS for our services, and there is no Kafka in AWS, as you know.
So I tried to use spring-cloud-stream-binder-kinesis:1.0.0.RELEASE.
But unfortunately, spring-cloud-stream-binder-kinesis:1.0.0.RELEASE requires Spring Boot 2.0.0 or later.
So I have to implement this feature using the Kinesis Producer Library.
(I am referring to https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java)
I have to publish a Java object to Kinesis, so I need to pass it as the data argument of KinesisProducer.addUserRecord.
So, how can I convert a Java object to a ByteBuffer?
You need to first convert it to a byte[], then call ByteBuffer.wrap() on that array.
You could use Java serialization to do this, but I strongly recommend using some form of JSON serialization. That will make the records easily used by other consumers, which is one of the reasons to use something like Kinesis in the first place.
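As a concrete sketch of the byte[]-then-wrap approach: the OrderEvent class and its hand-rolled toJson() below are hypothetical stand-ins (a real service would use a JSON library such as Jackson or Gson rather than formatting JSON by hand).

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RecordToByteBuffer {
    // Hypothetical event class standing in for whatever object you publish.
    static class OrderEvent {
        final String orderId;
        final int quantity;

        OrderEvent(String orderId, int quantity) {
            this.orderId = orderId;
            this.quantity = quantity;
        }

        // Hand-rolled JSON keeps this sketch dependency-free; use a real
        // JSON library in production code.
        String toJson() {
            return String.format("{\"orderId\":\"%s\",\"quantity\":%d}", orderId, quantity);
        }
    }

    // Serialize to UTF-8 JSON bytes, then wrap for addUserRecord's data argument.
    static ByteBuffer toByteBuffer(OrderEvent event) {
        byte[] bytes = event.toJson().getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.wrap(bytes);
    }

    public static void main(String[] args) {
        ByteBuffer data = toByteBuffer(new OrderEvent("A-42", 3));
        // kinesisProducer.addUserRecord(streamName, partitionKey, data);
        System.out.println(StandardCharsets.UTF_8.decode(data.duplicate()));
    }
}
```

Because the payload is plain UTF-8 JSON, any consumer language can decode it without sharing your Java classes.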
Also, AWS does provide a managed Kafka service. I haven't used it, so I can't compare it to a self-managed Kafka cluster, and I don't know if it's available in all regions. But if you already have the tools and experience to use Kafka, it might be a better choice for you.

How can I monitor usage of each IoT device separately on AWS using the rules engine? Is there any other way to do the same?

We are currently using the AWS IoT messaging and shadow services, and total usage can be monitored using CloudWatch, but I want to monitor usage per device. I am new to AWS, so the only way I can think of is to create a rule that is triggered every time a message is published, extracts the thing ID from the topic, and increments a counter for that thing in DynamoDB. How can I do that step by step? I have followed this tutorial but it doesn't work. Is there a better way to do the same?
I would look into some IoT analytics software. There are a lot of companies that do this type of thing. You could even build your own with open-source software, but that would require you to learn and stand up an ELK stack, along with your own instrumentation. I work for a company (AppDynamics) which offers these capabilities along with other application performance monitoring. Have a look at our IoT solution.
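The rule-based approach described in the question is also workable: the rule's SQL can pull the thing ID out of the topic so a DynamoDB action can key a per-device counter on it. A sketch, assuming devices publish to topics like devices/<thingId>/telemetry (the topic layout and field names are assumptions):

```sql
-- AWS IoT rule: topic(2) extracts the second topic segment (the thing ID).
-- The rule's DynamoDB action can then use ${thingId} as the partition key
-- and write one item per message, or feed a Lambda that increments a counter.
SELECT topic(2) AS thingId, timestamp() AS receivedAt
FROM 'devices/+/telemetry'
```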

How to integrate AWS services for a language without an SDK

AWS provides SDKs only for some languages. How could I integrate AWS services into an application for which an official SDK is not provided, e.g. C, Scala, or Rust? I know that for Scala some AWS SDK projects are available, but as they are individual contributions (and not AWS releases), I am reluctant to use them.
All the SDKs do is wrap a minimal interface around the API calls made to the AWS servers. For any service you wish to integrate into your application, just head over to its API documentation and write your own code/wrappers.
For example, this link takes you to the API reference for the EC2 service.
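Most of the work in such a wrapper is signing each request with AWS Signature Version 4. A minimal sketch of the signing-key derivation in Java (the secret key and credential-scope values below are the illustrative examples from AWS's documentation; a real request also needs a canonical request, a string to sign, and the Authorization header format):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SigV4Sketch {
    static byte[] hmacSha256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Derives the SigV4 signing key: a chain of HMACs over date, region,
    // service, and the literal "aws4_request".
    static byte[] signingKey(String secretKey, String date, String region, String service) throws Exception {
        byte[] kDate = hmacSha256(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), date);
        byte[] kRegion = hmacSha256(kDate, region);
        byte[] kService = hmacSha256(kRegion, service);
        return hmacSha256(kService, "aws4_request");
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Example values only (AWS's documented sample credentials).
        byte[] key = signingKey("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                "20150830", "us-east-1", "iam");
        // The final signature is HMAC(signingKey, stringToSign), hex-encoded;
        // "example-string-to-sign" stands in for the real string to sign.
        System.out.println(hex(hmacSha256(key, "example-string-to-sign")));
    }
}
```

The signature (64 lowercase hex characters) then goes into the request's Authorization header along with the access key ID and credential scope.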
In the early days of AWS, we needed an SDK for C++. At that time an SDK for C++ did not exist, so I wrote one based upon the REST API. This is no easy task, as the Amazon API is huge, and by the time you complete coding for one service, you have to go back and update it with all of the AWS feature improvements and changes. It seems like a never-ending loop.
Several of the AWS SDKs were started by third-party developers and then contributed to Amazon as open-source projects. If there is a popular language that you feel others could benefit from, start an open-source project and get everyone involved. It could become an official project if there is enough demand. Ping me if you do, as I might be interested in contributing.