Why won't my Raspberry Pi connect to Google Cloud IoT? - google-cloud-platform

I've added my rsa_private.pem in a certs directory of my project and added the corresponding rsa_cert.pem public key to my device in the IoT Core console.
I also ran wget https://pki.google.com/roots.pem from within the certs directory.
Something I don't understand is that the roots.pem file I downloaded contains multiple blocks like
-----BEGIN CERTIFICATE-----
// RS256_X509 encoded cert here
-----END CERTIFICATE-----
for various operating CAs: Comodo Group, GoDaddy, DigiCert, Entrust Datacard, GlobalSign and Google Trust Services LLC, which is the one the original roots.pem had when I first downloaded it.
I've tried adding the roots.pem to my-registry in the console under CA CERTIFICATES but I get an error that says Certificate value is too big.
When I run node gcloud.js I either get connecting... followed by multiple close pings in the terminal if I include protocolId: 'MQIsdp' in the connectionArgs that get passed to mqtt.connect(), or I get Error: Connection refused: Not authorized if I leave it out.
gcloud.js
const fs = require('fs')
const jwt = require('jsonwebtoken')
const mqtt = require('mqtt')
const exec = require('child_process').exec

const projectId = 'my-project'
const cloudRegion = 'us-central1'
const registryId = 'my-registry'
const mqttHost = 'mqtt.googleapis.com'
const mqttPort = 8883
const privateKeyFile = './certs/rsa_private.pem'
const algorithm = 'RS256'
var messageType = 'events' // or event

function createJwt (projectId, privateKeyFile, algorithm) {
  var token = {
    iat: parseInt(Date.now() / 1000),
    exp: parseInt(Date.now() / 1000) + 86400 * 60, // 1 day
    aud: projectId
  }
  const privateKey = fs.readFileSync(privateKeyFile)
  return jwt.sign(token, privateKey, {
    algorithm: algorithm
  })
}

function fetchData () {}

async function configureDevice () {
  // Get the device's serial number
  await exec('cat /proc/cpuinfo | grep Serial', (error, stdout, stderr) => {
    if (error) {
      console.error(`exec error: ${error}`)
      return
    }
    // Concat serial number w/ project name to meet device id naming requirement starting w/ a letter
    const deviceId = `my-project${stdout.split(' ')[1].split('\n')[0]}`
    var mqttClientId = 'projects/' + projectId + '/locations/' + cloudRegion + '/registries/' + registryId + '/devices/' + deviceId
    var mqttTopic = '/devices/' + deviceId + '/' + messageType
    var connectionArgs = {
      // protocolId: 'MQIsdp' if I add this property I see "connecting..." get logged in the console
      // followed by "close" repeatedly being logged. If I don't add it I get
      // "Error: Connection refused: Not authorized"
      host: mqttHost,
      port: mqttPort,
      clientId: mqttClientId,
      username: 'unused',
      password: createJwt(projectId, privateKeyFile, algorithm),
      protocol: 'mqtts',
      secureProtocol: 'TLSv1_2_method'
    }
    console.log('connecting...')
    var client = mqtt.connect(connectionArgs)
    // Subscribe to the /devices/{device-id}/config topic to receive config updates.
    client.subscribe('/devices/' + deviceId + '/config')
    client.on('connect', function (success) {
      if (success) {
        console.log('Client connected...')
        sendData()
      } else {
        console.log('Client not connected...')
      }
    })
    client.on('close', function () {
      debugger
      console.log('close')
    })
    client.on('error', function (err) {
      debugger
      console.log('error', err)
    })
    client.on('message', function (topic, message, packet) {
      console.log(topic, 'message received: ', Buffer.from(message, 'base64').toString('ascii'))
    })
    function sendData () {
      var payload = fetchData()
      payload = JSON.stringify(payload)
      console.log(mqttTopic, ': Publishing message:', payload)
      client.publish(mqttTopic, payload, { qos: 1 })
      console.log('Transmitting in 30 seconds')
      setTimeout(sendData, 30000)
    }
  })
}

configureDevice()

So first, just a note on the roots.pem. There are multiple certs in there because the roots.pem covers itself for when Google rotates the authorizing certificate; it includes a bunch so the file stays valid for longer, that's all. If you can identify which is the active cert (it changes maybe every three or four months) you can actually delete all the other certs and leave just the one active cert. I had to do that for a while when I was working on a seriously constrained microcontroller with only 200k of space on it; the roots.pem was too big and took up most of the storage on the device. There's more info here: https://cloud.google.com/iot/docs/how-tos/mqtt-bridge#using_a_long-term_mqtt_domain about a long-term domain that we set up to enable using a smaller root cert.
And you don't want/need to add the roots.pem as the registry-level certificate; that's only needed if you want to use your own certificate authority to validate new devices being registered. It's a bit confusing, but it has nothing to do with authorizing devices after they're registered; it's about guarding against someone hacking your project and registering their own devices.
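If you do want the client to validate the broker against that roots.pem (rather than uploading it anywhere), mqtt.js hands any extra options through to tls.connect(), so a minimal sketch looks like the following. Paths and credentials are placeholders, and the long-term domain from the page linked above works the same way with its smaller root:

// Minimal sketch (not the full script from the question): pin the TLS
// connection to the downloaded root bundle via the `ca` option.
const fs = require('fs')
const mqtt = require('mqtt')

const client = mqtt.connect({
  host: 'mqtt.googleapis.com', // or the long-term domain from the docs linked above
  port: 8883,
  protocol: 'mqtts',
  ca: [fs.readFileSync('./certs/roots.pem')], // passed through to tls.connect()
  clientId: 'projects/my-project/locations/us-central1/registries/my-registry/devices/my-device',
  username: 'unused',
  password: 'JWT goes here' // see the corrected createJwt() below
})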
As for why the code isn't working: your JWT is invalid because your expiration claim is exp: parseInt(Date.now() / 1000) + 86400 * 60, // 1 day, which is longer than JWTs are allowed to be valid for. 86400 seconds * 60 is 1,440 hours, and JWTs valid for longer than 24 hours are rejected. So that's a bad error message. I mean, it's technically correct, because the password is the JWT and that's invalid, so it is an invalid password... but it could probably be better. I'll take that back to the team, but try changing that 86400 * 60 to just 86400 and it should work.
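In other words, a minimal corrected version of the question's createJwt (same fs/jwt requires as in gcloud.js above), with the expiration pulled back to 24 hours:

function createJwt (projectId, privateKeyFile, algorithm) {
  const iat = parseInt(Date.now() / 1000)
  const token = {
    iat: iat,
    exp: iat + 86400, // 24 hours, the longest lifetime Cloud IoT accepts
    aud: projectId
  }
  const privateKey = fs.readFileSync(privateKeyFile)
  return jwt.sign(token, privateKey, { algorithm: algorithm })
}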
Per the comment below: it was also the device name. Since it was dynamically generated, there was a mismatched character (a dash instead of an underscore).
Old Answer:
As for the rest, I don't see anything glaringly obvious, but what happens if you remove the async/await wrapper and run it synchronously? Also, verify that your SSL key was created and registered with the same format (RSA, with or without the X509 wrapper). I've had issues before with registering one way and having the key actually be the other way.

Related

Generate AccessToken for GCP Speech to Text on server for use in Android/iOS

Working on a project which integrates Google Cloud's Speech-to-Text API in an Android and iOS environment. I ran through the example code provided (https://cloud.google.com/speech-to-text/docs/samples), was able to get it to run, and used it as a template to add voice into my app. However, there is a serious danger in the samples, specifically in generating the AccessToken (Android snippet below):
// ***** WARNING *****
// In this sample, we load the credential from a JSON file stored in a raw resource
// folder of this client app. You should never do this in your app. Instead, store
// the file in your server and obtain an access token from there.
// *******************
final InputStream stream = getResources().openRawResource(R.raw.credential);
try {
final GoogleCredentials credentials = GoogleCredentials.fromStream(stream)
.createScoped(SCOPE);
final AccessToken token = credentials.refreshAccessToken();
This was fine to develop and test locally, but as the comment indicates, it isn't safe to ship the credential file in a production app build. So what I need to do is replace this code with a request to a server endpoint. Additionally, I need to write the endpoint that will take the request and pass back a token. Although I found some very interesting tutorials related to the Firebase Admin libraries generating tokens, I couldn't find anything related to doing a similar operation for GCP APIs.
Any suggestions/documentation/examples that could point me in the right direction are appreciated!
Note: The server endpoint will be a Node.js environment.
Sorry for the delay; I was able to get it all to work together and am now only circling back to post an extremely simplified how-to. To start, I installed the following library in the server endpoint project: https://www.npmjs.com/package/google-auth-library
The server endpoint in this case is lacking any authentication/authorization etc for simplicity's sake. I'll leave that part up to you. We are also going to pretend this endpoint is reachable from https://www.example.com/token
The expectation being, calling https://www.example.com/token will result in a response with a string token, a number for expires, and some extra info about how the token was generated:
ie:
{"token":"sometoken", "expires":1234567, "info": {... additional stuff}}
Also, for this example I used a ServiceAccountKey file, which will be stored on the server.
The suggested route is to set up a server environment variable and use https://cloud.google.com/docs/authentication/production#finding_credentials_automatically, but this is for the example's sake and is easy enough for a quick test. These files look something like the following (honor system: don't steal my private key):
ServiceAccountKey.json
{
"type": "service_account",
"project_id": "project-id",
"private_key_id": "378329234klnfgdjknfdgh9fgd98fgduiph",
"private_key": "-----BEGIN PRIVATE KEY-----\nThisIsTotallyARealPrivateKeyPleaseDontStealIt=\n-----END PRIVATE KEY-----\n",
"client_email": "project-id#appspot.gserviceaccount.com",
"client_id": "12345678901234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/project-id%40appspot.gserviceaccount.com"
}
So here it is: a simple endpoint that spits out an AccessToken and a number indicating when the token expires (so you can call for a new one later).
endpoint.js
const express = require("express");
const { GoogleAuth } = require("google-auth-library");
const serviceAccount = require("./ServiceAccountKey.json");

const googleauthoptions = {
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  credentials: serviceAccount
};

const app = express();
const port = 3000;

const auth = new GoogleAuth(googleauthoptions);

auth.getClient().then(client => {
  app.get('/token', (req, res) => {
    client
      .getAccessToken()
      .then((clientresponse) => {
        if (clientresponse.token) {
          return clientresponse.token;
        }
        return Promise.reject('unable to generate an access token.');
      })
      .then((token) => {
        return client.getTokenInfo(token).then(info => {
          const expires = info.expiry_date;
          return res.status(200).send({ token, expires, info });
        });
      })
      .catch((reason) => {
        console.log('error: ' + reason);
        res.status(500).send({ error: reason });
      });
  });

  app.listen(port, () => {
    console.log(`Server is listening on https://www.example.com:${port}`);
  });

  return;
});
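If you'd rather not bundle ServiceAccountKey.json at all, a hedged variant of the same auth setup is to lean on Application Default Credentials (the GOOGLE_APPLICATION_CREDENTIALS environment variable mentioned above, or the credentials available on GCP); the rest of the endpoint stays identical:

// Sketch only: no credentials option, so the library falls back to
// GOOGLE_APPLICATION_CREDENTIALS or the GCP metadata server.
const { GoogleAuth } = require('google-auth-library');

const auth = new GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});

auth.getClient().then(client => {
  // ...same /token route and app.listen() as above
});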
Almost done now; I'll use Android as the example. The first clip is how it originally pulled the credential from a file on the device:
public static final List<String> SCOPE = Collections.singletonList("https://www.googleapis.com/auth/cloud-platform");

final GoogleCredentials credentials = GoogleCredentials.fromStream(this.mContext.getResources().openRawResource(R.raw.credential)).createScoped(SCOPE);
final AccessToken accessToken = credentials.refreshAccessToken();
final String value = accessToken.getTokenValue();
final long expires = accessToken.getExpirationTime().getTime();

final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
fetchAccessToken();
Now we get our token from the endpoint over the internet (not shown); with the token and expires values in hand, we handle them the same way as if they had been generated on the device:
//
// let's pretend "endpoint" contains the results from our internet request against www.example.com/token
final String value = endpoint.token;
final long expires = endpoint.expires;

final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
fetchAccessToken();
Anyway hopefully that is helpful if anyone has a similar need.
===== re: AlwaysLearning comment section =====
Compared to the original file credential based solution:
https://github.com/GoogleCloudPlatform/android-docs-samples/blob/master/speech/Speech/app/src/main/java/com/google/cloud/android/speech/SpeechService.java
In my specific case I am interacting with a secured API endpoint that is unrelated to Google, via the react-native environment (which sits on top of Android and uses JavaScript).
I already have a mechanism to securely communicate with the api endpoint I created.
So conceptually, in react-native I call
MyApiEndpoint()
which gives me a token / expires pair, i.e.
token = "some token from the api" // token info returned from the api
expires = 3892389329237 // expiration time returned from the api
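For illustration, a hedged react-native sketch of that call (the names are hypothetical, and my real endpoint also carries its own auth):

// Hypothetical helper: fetch the { token, expires } pair from the endpoint
// described earlier. Real code would add whatever auth the endpoint requires.
async function myApiEndpoint () {
  const res = await fetch('https://www.example.com/token');
  const { token, expires } = await res.json();
  return { token, expires };
}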
I then pass that information from react-native down to Java, and update the Android preference with the stored information via this function (which I added to the SpeechService.java file):
public void setToken(String value, long expires) {
    final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
    prefs.edit().putString(PREF_ACCESS_TOKEN_VALUE, value).putLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, expires).apply();
    fetchAccessToken();
}
This function adds the token and expires content to the well-known shared preference location and kicks off the AccessTokenTask().
The AccessTokenTask was modified to simply pull from the preferences:
private class AccessTokenTask extends AsyncTask<Void, Void, AccessToken> {
    protected AccessToken doInBackground(Void... voids) {
        final SharedPreferences prefs = getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        String tokenValue = prefs.getString(PREF_ACCESS_TOKEN_VALUE, null);
        long expirationTime = prefs.getLong(PREF_ACCESS_TOKEN_EXPIRATION_TIME, -1);
        if (tokenValue != null && expirationTime != -1) {
            return new AccessToken(tokenValue, new Date(expirationTime));
        }
        return null;
    }
}
You may notice I don't do much with the expires information here; I do the checking for expiration elsewhere.
Here you have a couple of useful links:
Importing the Google Cloud Storage Client library in Node.js
Cloud Storage authentication

Authorize google service account using AWS Lambda/API Gateway

My express server has a credentials.json containing credentials for a google service account. These credentials are used to get a jwt from google, and that jwt is used by my server to update google sheets owned by the service account.
var jwt_client = null;

// load credentials from a local file
fs.readFile('./private/credentials.json', (err, content) => {
  if (err) return console.log('Error loading client secret file:', err);
  // Authorize a client with credentials, then call the Google Sheets API.
  authorize(JSON.parse(content));
});

// get JWT
function authorize(credentials) {
  const {client_email, private_key} = credentials;
  jwt_client = new google.auth.JWT(client_email, null, private_key, SCOPES);
}

var sheets = google.sheets({version: 'v4', auth: jwt_client });
// at this point I can call the google api and make authorized requests
The issue is that I'm trying to move from node/express to npm serverless/aws. I'm using the same code but getting 403 - forbidden.
errors:
[ { message: 'The request is missing a valid API key.',
domain: 'global',
reason: 'forbidden' } ] }
Research has pointed me to many things including: AWS Cognito, storing credentials in environment variables, custom authorizers in API gateway. All of these seem viable to me but I am new to AWS so any advice on which direction to take would be greatly appreciated.
It is late, but it may help someone else. Here is my working code.
const {google} = require('googleapis');
const KEY = require('./keys');
const _ = require('lodash');

const sheets = google.sheets('v4');

const jwtClient = new google.auth.JWT(
  KEY.client_email,
  null,
  KEY.private_key,
  [
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/drive.file',
    'https://www.googleapis.com/auth/spreadsheets'
  ],
  null
);

async function getGoogleSheetData() {
  await jwtClient.authorize();
  const request = {
    // The ID of the spreadsheet to retrieve data from.
    spreadsheetId: 'put your id here',
    // The A1 notation of the values to retrieve.
    range: 'put your range here', // TODO: Update placeholder value.
    auth: jwtClient,
  };
  return await sheets.spreadsheets.values.get(request);
}
And then call it in the Lambda handler. The one thing I don't like is storing key.json as a file in the project root; I will try to find a better place to keep it.
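As a sketch of one such place (an assumption on my part, not something I've wired into this Lambda): read the key out of an environment variable, e.g. a hypothetical GOOGLE_SA_KEY holding the service account JSON, instead of requiring key.json from the project root. Secrets Manager or SSM Parameter Store would be the more robust options.

const { google } = require('googleapis');

// GOOGLE_SA_KEY is a hypothetical Lambda environment variable containing the
// full service account JSON string.
const KEY = JSON.parse(process.env.GOOGLE_SA_KEY || '{}');

const jwtClient = new google.auth.JWT(
  KEY.client_email,
  null,
  KEY.private_key,
  ['https://www.googleapis.com/auth/spreadsheets'],
  null
);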

Basic User Authentication for Static Site using AWS & S3 Bucket

I am looking to add Basic User Authentication to a static site I will have up on AWS, so that only those with the proper username + password (which I will supply) have access to see the site. I found s3auth and it seems to be exactly what I am looking for; however, I am wondering if I will need to somehow set up the authorization for pages besides index.html. For example, I have 3 pages: index, about and contact.html. Without authentication set up for about.html, what is stopping someone from directly accessing the site via www.mywebsite.com/about.html? I am mostly looking for clarification or any resources anyone can provide to explain this!
Thank you for your help!
This is the perfect use for Lambda@Edge.
Because you're hosting your static site on S3, you can easily and very economically (pennies) add some really great features to your site by using CloudFront, AWS's content distribution network, to serve your site to your users. You can learn how to host your site on S3 with CloudFront (including 100% free SSL) here.
While your CloudFront distribution is deploying, you'll have some time to go set up the Lambda that you'll be using to do the basic user auth. If this is your first time creating a Lambda, or creating a Lambda for use @Edge, the process is going to feel really complex, but if you follow my step-by-step instructions below you'll be doing serverless basic-auth that is infinitely scalable in less than 10 minutes. I'm going to use us-east-1 for this, and it's important to know that if you're using Lambda@Edge you should author your functions in us-east-1; when they're associated with your CloudFront distribution they'll automagically be replicated globally. Let's begin...
Head over to Lambda in the AWS console, and click on "Create Function"
Create your Lambda from scratch and give it a name
Set your runtime as Node.js 8.10
Give your Lambda some permissions by selecting "Choose or create an execution role"
Give the role a name
From Policy Templates select "Basic Lambda@Edge permissions (for CloudFront trigger)"
Click "Create function"
Once your Lambda is created take the following code and paste it in to the index.js file of the Function Code section - you can update the username and password you want to use by changing the authUser and authPass variables:
'use strict';

exports.handler = (event, context, callback) => {
  // Get request and request headers
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Configure authentication
  const authUser = 'user';
  const authPass = 'pass';

  // Construct the Basic Auth string
  const authString = 'Basic ' + new Buffer(authUser + ':' + authPass).toString('base64');

  // Require Basic authentication
  if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
    const body = 'Unauthorized';
    const response = {
      status: '401',
      statusDescription: 'Unauthorized',
      body: body,
      headers: {
        'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
      },
    };
    callback(null, response);
  }

  // Continue request processing if authentication passed
  callback(null, request);
};
Click "Save" in the upper right hand corner.
Now that your Lambda is saved it's ready to attach to your CloudFront distribution. In the upper menu, select Actions -> Deploy to Lambda@Edge.
In the modal that appears, select the CloudFront distribution you created earlier from the drop down menu, leave the Cache Behavior as *, for the CloudFront Event change it to "Viewer Request", and finally select/tick "Include Body". Select/tick the Confirm deploy to Lambda@Edge and click "Deploy".
And now you wait. It takes a few minutes (15-20) to replicate your Lambda@Edge function across all regions and edge locations. Go to CloudFront to monitor the deployment of your function. When your CloudFront Distribution Status says "Deployed", your Lambda@Edge function is ready to use.
Deploying Lambda@Edge is quite difficult to replicate via the console, so I have created a CDK stack where you just add your own credentials and domain name and deploy.
https://github.com/apoorvmote/cdk-examples/tree/master/password-protect-s3-static-site
I have tested the following function with Node12.x
exports.handler = async (event, context, callback) => {
  const request = event.Records[0].cf.request
  const headers = request.headers

  const user = 'my-username'
  const password = 'my-password'

  const authString = 'Basic ' + Buffer.from(user + ':' + password).toString('base64')

  if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
    const response = {
      status: '401',
      statusDescription: 'Unauthorized',
      body: 'Unauthorized',
      headers: {
        'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
      }
    }
    callback(null, response)
  }

  callback(null, request)
}
By now this is also possible with CloudFront Functions, which I like even more because it reduces the complexity further (from what is already not too complex with Lambda). Here's my writeup on what I just did...
It's basically 3 things that need to be done:
Create a CloudFront function to add Basic Auth into the request.
Configure the Origin of the CloudFront distribution correctly in a few places.
Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
function handler(event) {
  var user = "myuser";
  var pass = "mypassword";

  function encodeToBase64(str) {
    var chars =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
    for (
      // initialize result and counter
      var block, charCode, idx = 0, map = chars, output = "";
      // if the next str index does not exist:
      //   change the mapping table to "="
      //   check if d has no fractional digits
      str.charAt(idx | 0) || ((map = "="), idx % 1);
      // "8 - idx % 1 * 8" generates the sequence 2, 4, 6, 8
      output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
    ) {
      charCode = str.charCodeAt((idx += 3 / 4));
      if (charCode > 0xff) {
        throw new Error(
          "'btoa' failed: The string to be encoded contains characters outside of the Latin1 range."
        );
      }
      block = (block << 8) | charCode;
    }
    return output;
  }

  var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);

  var match = false;
  if (event.request.headers.authorization) {
    if (event.request.headers.authorization.value === requiredBasicAuth) {
      match = true;
    }
  }

  if (!match) {
    return {
      statusCode: 401,
      statusDescription: "Unauthorized",
      headers: {
        "www-authenticate": { value: "Basic" },
      },
    };
  }

  return event.request;
}
Then you can test it directly in the UI and, assuming it works and you have customized the username and password, publish the function.
Please note that I found the individual pieces of the function above on the Internet, so this is not my own code (other than piecing it together). I wish I could still find the sources so I could quote them here, but I can't anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
Make sure your S3 bucket in the origin is configured as a REST endpoint and not a website endpoint, i.e. it must end in .s3.amazonaws.com and must not have the word "website" in the hostname.
Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below click on "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing). And select "Yes, update the bucket policy" to allow AWS to add the necessary permissions to your OAI.
Finally, open your Behavior of the CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and select your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck it's a matter of a couple of minutes (realistically more, I know), and crucially there's no additional complexity once this is all set up.
Thanks for the useful post. An alternative to listing the plain text username and password in the code, and to having base64 encoding logic, is to pre-generate the base64 encoded string. One such encoder: https://www.debugbear.com/basic-auth-header-generator
From there the script becomes simpler. The following is for 'user' / 'password':
function handler(event) {
  // base64 of "user:password"
  var base64UserPassword = "dXNlcjpwYXNzd29yZA=="

  if (event.request.headers.authorization &&
      event.request.headers.authorization.value === ("Basic " + base64UserPassword)) {
    return event.request;
  }

  return {
    statusCode: 401,
    statusDescription: "Unauthorized",
    headers: {
      "www-authenticate": { value: "Basic" },
    },
  }
}
There is already an answer here on how to use CloudFront Functions, but I want to add an improved version of the function:
Hardcoded credentials are stored as a SHA256 hash instead of plain text (or base64, which is effectively the same as plain text), which is more secure.
It is also possible to allow access from whitelisted global IP addresses:
function handler(event) {
  var crypto = require('crypto');
  var headers = event.request.headers;

  var wlist_ips = [
    "1.1.1.1",
    "2.2.2.2"
  ];

  var authString = "9c06d532edf0813659ab41d26ab8ba9ca53b985296ee4584a79f34fe9cd743a4";

  if (
    typeof headers.authorization === "undefined" ||
    crypto.createHash('sha256').update(headers.authorization.value).digest('hex') !== authString
  ) {
    if (!wlist_ips.includes(event.viewer.ip)) {
      return {
        statusCode: 401,
        statusDescription: "Unauthorized",
        headers: {
          "www-authenticate": { value: "Basic" },
          "x-source-ip": { value: event.viewer.ip }
        }
      };
    }
  }

  return event.request;
}
The command below can be used to get the correct authString hash value for username user and password password:
printf "Basic $(printf 'user:password' | base64 -w 0)" | sha256sum | awk '{print$1}'

Google IOT per device heartbeat alert using Stackdriver

I'd like to alert on the lack of a heartbeat (or 0 bytes received) from any one of a large number of Google IoT Core devices. I can't seem to do this in Stackdriver. It instead appears to let me alert only on the entire device registry, which does not give me what I'm looking for (how would I know that a particular device is disconnected?).
So how does one go about doing this?
I have no idea why this question was downvoted as 'too broad'.
The truth is Google IoT doesn't have per-device alerting; it only offers alerting on an entire device registry. If this is not true, please reply to this post. The page that clearly states this is here:
Cloud IoT Core exports usage metrics that can be monitored
programmatically or accessed via Stackdriver Monitoring. These metrics
are aggregated at the device registry level. You can use Stackdriver
to create dashboards or set up alerts.
The importance of having per-device alerting is built into the promise assumed in this statement:
Operational information about the health and functioning of devices is
important to ensure that your data-gathering fabric is healthy and
performing well. Devices might be located in harsh environments or in
hard-to-access locations. Monitoring operational intelligence for your
IoT devices is key to preserving the business-relevant data stream.
So it's not easy today to get an alert if one among many globally dispersed devices loses connectivity. One needs to build that, and depending on what one is trying to do, it would entail different solutions.
In my case I wanted to alert if the last heartbeat time or last event state publish was older than 5 minutes. For this I need to run a looping function that scans the device registry and performs this operation regularly. The usage of this API is outlined in this other SO post: Google iot core connection status
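To make that concrete, here's a rough sketch (untested, field names per the Cloud IoT v1 REST API; treat the exact client usage as an assumption) of scanning a registry with the Node googleapis client and flagging devices with nothing seen in the last 5 minutes:

const { google } = require('googleapis');

// Rough sketch: list devices in a registry and return the ones whose last
// heartbeat/event is older than 5 minutes.
async function findStaleDevices (projectId, region, registryId) {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const iot = google.cloudiot({ version: 'v1', auth });
  const parent = `projects/${projectId}/locations/${region}/registries/${registryId}`;
  const res = await iot.projects.locations.registries.devices.list({
    parent,
    fieldMask: 'lastHeartbeatTime,lastEventTime'
  });
  const cutoff = Date.now() - 5 * 60 * 1000;
  return (res.data.devices || []).filter(device => {
    const stamps = [device.lastHeartbeatTime, device.lastEventTime]
      .filter(Boolean)
      .map(t => Date.parse(t));
    const lastSeen = stamps.length ? Math.max(...stamps) : 0;
    return lastSeen < cutoff; // nothing seen in the last 5 minutes
  });
}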
For reference, here's a Firebase function I just wrote to check a device's online status, probably needs some tweaks and further testing, but to help anybody else with something to start with:
// Example code to call this function
// const checkDeviceOnline = functions.httpsCallable('checkDeviceOnline');
// Include 'current' key for 'current' online status to force update on db with delta
// const isOnline = await checkDeviceOnline({ deviceID: 'XXXX', current: true })
export const checkDeviceOnline = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
  }
  // deviceID is passed in deviceID object key
  const deviceID = data.deviceID

  const dbUpdate = (isOnline) => {
    if (('wasOnline' in data) && data.wasOnline !== isOnline) {
      db.collection("devices").doc(deviceID).update({ online: isOnline })
    }
    return isOnline
  }

  const deviceLastSeen = () => {
    // We only want to use these to determine "latest seen timestamp"
    const stamps = ["lastHeartbeatTime", "lastEventTime", "lastStateTime", "lastConfigAckTime", "deviceAckTime"]
    return stamps.map(key => moment(data[key], "YYYY-MM-DDTHH:mm:ssZ").unix()).filter(epoch => !isNaN(epoch) && epoch > 0).sort().reverse().shift()
  }

  await dm.setAuth()
  const iotDevice: any = await dm.getDevice(deviceID)

  if (!iotDevice) {
    throw new functions.https.HttpsError('failed-get-device', 'Failed to get device!');
  }

  console.log('iotDevice', iotDevice)

  // If there is no error status and there is last heartbeat time, assume device is online
  if (!iotDevice.lastErrorStatus && iotDevice.lastHeartbeatTime) {
    return dbUpdate(true)
  }

  // Add iotDevice.config.deviceAckTime to root of object
  // For some reason in all my tests, I NEVER receive anything on lastConfigAckTime, so this is my workaround
  if (iotDevice.config && iotDevice.config.deviceAckTime) iotDevice.deviceAckTime = iotDevice.config.deviceAckTime

  // If there is a last error status, let's make sure it's not a stale (old) one
  const lastSeenEpoch = deviceLastSeen()
  const errorEpoch = iotDevice.lastErrorTime ? moment(iotDevice.lastErrorTime, "YYYY-MM-DDTHH:mm:ssZ").unix() : false

  console.log('lastSeen:', lastSeenEpoch, 'errorEpoch:', errorEpoch)

  // Device should be online, the error timestamp is older than latest timestamp for heartbeat, state, etc
  if (lastSeenEpoch && errorEpoch && (lastSeenEpoch > errorEpoch)) {
    return dbUpdate(true)
  }

  // error status code 4 matches
  // lastErrorStatus.code = 4
  // lastErrorStatus.message = mqtt: SERVER: The connection was closed because MQTT keep-alive check failed.
  // will also be 4 for other mqtt errors like command not sent (qos 1 not acknowledged, etc)
  if (iotDevice.lastErrorStatus && iotDevice.lastErrorStatus.code && iotDevice.lastErrorStatus.code === 4) {
    return dbUpdate(false)
  }

  return dbUpdate(false)
})
I also created a function to use with commands, to send a command to the device to check if it's online:
export const isDeviceOnline = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
  }
  // deviceID is passed in deviceID object key
  const deviceID = data.deviceID

  await dm.setAuth()

  const dbUpdate = (isOnline) => {
    if (('wasOnline' in data) && data.wasOnline !== isOnline) {
      console.log('updating db', deviceID, isOnline)
      db.collection("devices").doc(deviceID).update({ online: isOnline })
    } else {
      console.log('NOT updating db', deviceID, isOnline)
    }
    return isOnline
  }

  try {
    await dm.sendCommand(deviceID, 'alive?', 'alive')
    console.log('Assuming device is online after successful alive? command')
    return dbUpdate(true)
  } catch (error) {
    console.log("Unable to send alive? command", error)
    return dbUpdate(false)
  }
})
This also uses my modified version of the DeviceManager; you can find all the example code in this gist (to make sure you're using the latest update, and to keep this post small):
https://gist.github.com/tripflex/3eff9c425f8b0c037c40f5744e46c319
All of this code, just to check if a device is online or not ... which could be easily handled by Google emitting some kind of event or adding an easy way to handle this. COME ON GOOGLE GET IT TOGETHER!

Why does a NotFound error occur in REST services with a Windows Phone app?

I tried to connect to a REST web service from a Windows Phone 8 application.
It was working properly for weeks, but with no change to it I now get this generic error:
System.Net.WebException: The remote server returned an error:
NotFound.
I tried to test it with online REST clients and the service works properly.
I tried to handle the exception and parse it as a WebException with this code:
var we = ex.InnerException as WebException;
if (we != null)
{
    var resp = we.Response as HttpWebResponse;
    response.StatusCode = resp.StatusCode;
}
and I get no more information; the final response code is "NotFound".
Does anyone have any idea what may cause this error?
There is already a trusted certificate installed on the server. The server's owner suggested adding a DNS entry for the server, either in the customer's DNS or in the phone's hosts file. That's what I did, and it worked for a while, but now it doesn't work, even though I've checked that nothing has changed.
This is a sample GET request; it works properly in Windows Store apps:
async Task<object> GetHttps(string uri, string parRequest, Type returnType, params string[] parameters)
{
    try
    {
        string strRequest = ConstructRequest(parRequest, parameters);
        string encodedRequest = HttpUtility.UrlEncode(strRequest);
        string requestURL = BackEndURL + uri + encodedRequest;

        HttpWebRequest request = HttpWebRequest.Create(new Uri(requestURL, UriKind.Absolute)) as HttpWebRequest;
        request.Headers["applicationName"] = AppName;
        request.Headers["applicationPassword"] = AppPassword;
        if (AppVersion > 1)
            request.Headers["applicationVersion"] = AppVersion.ToString();
        request.Method = "GET";
        request.CookieContainer = cookieContainer;

        var factory = new TaskFactory();
        var getResponseTask = factory.FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, null);
        HttpWebResponse response = await getResponseTask as HttpWebResponse;
        // string s = response.GetResponseStream().ToString();

        if (response.StatusCode == HttpStatusCode.OK)
        {
            XmlSerializer serializer = new XmlSerializer(returnType);
            object obj = serializer.Deserialize(response.GetResponseStream());
            return obj;
        }
        else
        {
            var Instance = Activator.CreateInstance(returnType);
            (Instance as ResponseBase).NetworkError = true;
            (Instance as ResponseBase).StatusCode = response.StatusCode;
            return Instance;
        }
    }
    catch (Exception ex)
    {
        return HandleException(ex, returnType);
    }
}
I tried to monitor connections from the emulator and found this error on the connection:
Authentication failed because the remote party has closed the transport stream.
You said a server-side certificate is implemented for the service. Did you have that certificate installed on the phone? That can be the cause of the NotFound error. Can you try to navigate to the service in the phone or emulator Internet Explorer prior to testing the app? If you do that, can you see the service working in the emulator/phone Internet Explorer? Maybe at that point Internet Explorer asks you about installing the certificate, and then you can open your app and it works.
Also remember if you are testing this in the emulator, every time you close it, the state is lost so you need to repeat the operation of installing the certificate again.
Hope this helps.
If you plan to use SSL in production in a general public application (not a company-distribution app), you need to ensure your certificate chains to one of the root authorities listed in SSL root certificates for Windows Phone OS 7.1.
When we had the same issue, we purchased an SSL certificate from one of those providers, and after installing it on the server we were able to make HTTPS requests to our services with no problem.
If you have a company-distribution app, you can use any certificate from the company's root CA.