C++ example code for receiving data from Google Cloud Platform

I want to find a C++ example of receiving data from Google Cloud via Google's Pub/Sub.
Here it seems that C++ isn't supported:
https://github.com/googleapis/google-cloud-cpp/issues/777
and again here:
https://stackoverflow.com/a/62573062/997112
But on the main Github page:
https://github.com/googleapis/google-cloud-cpp
It says the languages are 90.5% C++.
Can anyone help? Is it possible to receive data from Google Cloud in C++?

The Cloud Pub/Sub client library in C++ recently became available. Code samples in the Pub/Sub documentation should all have C++ examples, for example, publishing messages and receiving messages.
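For reference, here is a minimal receive sketch using the google-cloud-cpp Pub/Sub client. The project and subscription IDs are placeholders, and the program needs GCP credentials and an existing subscription to actually run:

```cpp
#include "google/cloud/pubsub/subscriber.h"
#include <iostream>
#include <utility>

int main() {
  namespace pubsub = ::google::cloud::pubsub;

  // Placeholder project and subscription IDs -- replace with your own.
  auto subscriber = pubsub::Subscriber(pubsub::MakeSubscriberConnection(
      pubsub::Subscription("my-project", "my-subscription")));

  // Subscribe() invokes the callback for each delivered message; the
  // returned future is satisfied only on error or cancellation.
  auto session = subscriber.Subscribe(
      [](pubsub::Message const& m, pubsub::AckHandler h) {
        std::cout << "Received message: " << m.data() << "\n";
        std::move(h).ack();
      });

  // Block until the session ends; returns a google::cloud::Status.
  auto status = session.get();
  if (!status.ok()) std::cerr << "Subscription error: " << status << "\n";
  return 0;
}
```

Link against `google_cloud_cpp_pubsub` (available via vcpkg or CMake's FetchContent); acking a message via `std::move(h).ack()` tells Pub/Sub not to redeliver it.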

Related

How can I convert HL7 messages (from V2 to FHIR) using Google Data Fusion?

I'm using GCP and I'm new to Healthcare.
I want to convert HL7 V2 messages to HL7 FHIR messages. The presentation says that I can do it using a Google Data Fusion plugin:
Accelerate your solution development on FHIR by importing your existing FHIR data into the Cloud Healthcare API from Cloud Storage. Transform your data from CSV/HL7v2 formats into FHIR format using Cloud Data Fusion plugins and manage it in the Cloud Healthcare API.
(taken from here)
But there seems to be no concrete documentation on how to do that. Can someone point me in the right direction?
There is no public documentation available for that yet, since the API is still in Alpha. To get access to this feature, you need to sign up for the Cloud Healthcare API Trusted Tester Program. Once approved, you will be granted access to the documentation and the feature will be enabled for your project.
Also, for reference, I found this Data Fusion pipeline showing how to use the Cloud Healthcare plugins.

How to send sensor data (like temperature data from DHT11 sensor) to Google Cloud IoT Core and store it

I am working on connecting a Raspberry Pi (3B+) to Google Cloud and sending sensor data to Google IoT Core, but I couldn't find any material on this. I would be thankful if anyone could help me with it.
PS: I have already followed the interactive tutorial from Google Cloud itself and connected a simulated virtual device to the cloud and sent data. I am really looking for a tutorial that helps me connect a physical Raspberry Pi.
Thank you
You may want to try following along with this community article covering pretty much exactly what you're asking.
The article covers the following steps:
Creating a registry for your gateway device (the Raspberry Pi)
Adding a temperature / humidity sensor
Adding a light
Connecting the devices to Cloud IoT Core
Sending the data from the sensors to Google Cloud
Pulling the data back using PubSub
Create a registry in Google Cloud IoT Core and set up devices and their public/private key pairs.
You will also have to set up Pub/Sub topics for publishing device telemetry and state events while creating IoT Core registries.
Once that is done, you can create a streaming pipeline in Cloud Dataflow that will read data from the Pub/Sub subscription and sink it into BigQuery (a relational data warehouse) or Bigtable (a NoSQL wide-column database).
Dataflow is a managed service for Apache Beam, where you can create and deploy pipelines written in Java or Python.
If you are not familiar with coding, you can use Data Fusion, which lets you build your ETL pipelines with drag-and-drop functionality, similar to Talend.
You can create a Data Fusion instance in order to create a streaming ETL pipeline. The source will be Pub/Sub and the sink will be BigQuery or Bigtable, based on your use case.
For reference:
https://cloud.google.com/dataflow/docs/guides/templates/provided-streaming
This link explains how to deploy a Google-provided Dataflow template from Pub/Sub to BigQuery.
For your own custom pipeline, you can refer to the pipeline code on GitHub.
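As a quick sanity check of the pipeline above, you can publish a fake telemetry reading straight to the Pub/Sub topic. This is a sketch using the google-cloud-cpp client with placeholder project/topic names; a real device would instead publish over MQTT through IoT Core, which forwards messages to the same topic:

```cpp
#include "google/cloud/pubsub/publisher.h"
#include <iostream>

int main() {
  namespace pubsub = ::google::cloud::pubsub;

  // Placeholder names; the telemetry topic is the one configured on the
  // IoT Core registry.
  auto publisher = pubsub::Publisher(pubsub::MakePublisherConnection(
      pubsub::Topic("my-project", "device-telemetry")));

  // Publish one temperature reading as a JSON payload.
  auto id = publisher
                .Publish(pubsub::MessageBuilder{}
                             .SetData(R"({"sensor":"dht11","temp_c":21.5})")
                             .Build())
                .get();  // future<StatusOr<std::string>> -> message id
  if (!id) {
    std::cerr << "Publish failed: " << id.status() << "\n";
    return 1;
  }
  std::cout << "Published message id: " << *id << "\n";
  return 0;
}
```

If the downstream Dataflow or Data Fusion pipeline is wired up correctly, this reading should appear in the BigQuery or Bigtable sink shortly after publishing.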

Can Microsoft Cognitive Services analyze Microsoft Office documents?

I'd like to do a key phrase analysis of a Microsoft Word document. It looks like the API only takes JSON documents. Is there a route to use real life documents like Microsoft Office documents?
A recent hackathon project, Resolving Managed Metadata Madness in SharePoint, answers this question.
The developer of that project used a three-step process involving custom code. An Azure Function was written to extract the text to pass to the API. The function returns the results of the analysis back to Microsoft Flow.
A Flow attached to the Document Library calls the Azure Function that does the heavy lifting.
The Azure Function runs, extracts the text, analyzes it using Azure Cognitive Services, and then writes the info back to SharePoint Online.
Finally, it notifies the admin and the file's creator about the execution.

How to integrate AWS services for a language without an SDK

AWS provides SDKs only for some languages. How can I integrate AWS services in an application written in a language for which an official SDK is not provided, e.g. C, Scala, or Rust? I know that for Scala some AWS SDK projects are available, but as they are individual contributions (and not AWS releases), I am reluctant to use them.
All the SDKs do is wrap a minimal interface around the API calls made to the AWS servers. For any service you wish to integrate into your application, just head over to its API documentation and write your own code/wrappers.
For example, this link takes you to the API reference for the EC2 service.
In the early days of AWS, we needed an SDK for C++. At the time an SDK for C++ did not exist, so I wrote one based on the REST API. This is no easy task, as the Amazon API is huge, and by the time you complete coding for one service you have to go back and update it with all of the AWS feature improvements and changes. It seems like a never-ending loop.
Several of the AWS SDKs were started by third party developers and then contributed to Amazon as open source projects. If you have a popular language that you feel that others could benefit from, start an open source project and get everyone involved. It could become an official project if there is enough demand. Ping me if you do as I might be interested in contributing.

Implementing MQTT using Google Cloud Pub/Sub

I want to implement MQTT using the Pub/Sub API of Google App Engine in Python. How can I run the Pub/Sub client library in the standard environment? If I am required to run an older version of this API, can anyone provide a sample? One issue with the latest library is that it is an alpha version. Later on I will connect the MQTT client using the GCP IoT protocol.
I would strongly advise against it. Not only are you wasting your time and energy, you are also trying to use something in a way it is not meant to be used. In the end, the cost is going to be huge compared to deploying an MQTT broker on your own instance.
If you are looking for a fully managed solution from GCP, you might be interested in trying out Cloud IoT Core, which is currently in private beta. More details here: https://cloud.google.com/iot-core/
I second checking out Google IoT Core.
If you have a special use case, you could always connect Google PubSub to another MQTT-enabled IoT platform like Losant. Here is an example of it:
https://docs.losant.com/applications/integrations/#google-pubsub
Then, as you subscribe to messages from PubSub you could publish to MQTT topics and vice versa.
Disclaimer: I work for Losant.
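The bridging direction from Pub/Sub to MQTT can be sketched in C++ with the google-cloud-cpp subscriber. Here `publish_to_mqtt` is a hypothetical placeholder for whatever MQTT client library you pick (e.g. Eclipse Paho), and the project/subscription names are placeholders too:

```cpp
#include "google/cloud/pubsub/subscriber.h"
#include <iostream>
#include <string>
#include <utility>

// Hypothetical helper: publish `payload` to an MQTT topic using your MQTT
// client library of choice. Declaration only -- not implemented here.
void publish_to_mqtt(std::string const& topic, std::string const& payload);

int main() {
  namespace pubsub = ::google::cloud::pubsub;
  auto subscriber = pubsub::Subscriber(pubsub::MakeSubscriberConnection(
      pubsub::Subscription("my-project", "mqtt-bridge-sub")));

  // Forward each Pub/Sub message to an MQTT topic. Ack only after the
  // forward call, so a crash before that point causes redelivery.
  auto session = subscriber.Subscribe(
      [](pubsub::Message const& m, pubsub::AckHandler h) {
        publish_to_mqtt("devices/inbound", std::string(m.data()));
        std::move(h).ack();
      });
  auto status = session.get();
  if (!status.ok()) std::cerr << "Bridge stopped: " << status << "\n";
  return 0;
}
```

The reverse direction is symmetric: an MQTT subscription callback that calls a Pub/Sub `Publisher::Publish`.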