I need the system to be secure.
I tried to encode the image with base64 and send the string via MQTT to IoT Core, then decode the string with a Cloud Function and finally store the decoded image in Google Cloud Storage. The problem is the limited size of an MQTT message.
Using a Cloud Function and then storing in Google Cloud Storage is not really secure either: anyone could hit that URL and I lose control of all the ESP32-CAM communication.
Am I missing something? Is there a really secure way to send files to Google Cloud Storage from IoT Core?
Thanks
IoT Core should not be used to transfer big blobs.
However, you can take advantage of the secure connection between IoT Core and the device to send credentials to the device to access GCS securely.
Create a service account with write-only access to your GCS bucket.
Pass a key for that service account to the device through IoT Core (via a configuration change, for example).
The device can then use that key to connect securely to GCS and upload the image.
Depending on your preferences and the particular use case, you can rotate the keys to access GCS whenever you want, or be as granular as you want with the permissions (one key for all the devices, one key per device, ...).
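As a sketch of the key-delivery step, assuming the Python Google API client: IoT Core device configs carry arbitrary binary data, base64-encoded in the API, so the service-account key can ride down in one. The helper below only builds the request body; the commented-out call and the `device_path`/`client` names are placeholders, not verified against your setup.

```python
import base64
import json

def build_device_config(service_account_key: dict) -> dict:
    """Wrap a service-account key JSON in the body expected by
    modifyCloudToDeviceConfig (binaryData must be base64-encoded)."""
    raw = json.dumps(service_account_key).encode("utf-8")
    return {"binaryData": base64.b64encode(raw).decode("ascii")}

# The actual push would then look roughly like (names are placeholders):
# client.projects().locations().registries().devices().modifyCloudToDeviceConfig(
#     name=device_path, body=build_device_config(key)).execute()
```

On the device side you'd base64-decode the config payload back into the key JSON before using it against GCS.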
The way I've done it is to break the image up into 256 KB packages (well, 255 KB-ish, with an 8-byte header holding an int that represents the order for reassembly on the other end, since Pub/Sub doesn't guarantee ordering).
@rbarbero's answer is another good one: send down credentials so the device can talk to GCS directly.
Another option would be to have the device talk to something local and more powerful that holds the service credential for GCS, bypassing IoT Core entirely.
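The chunking scheme described above can be sketched as follows. This is a minimal illustration, not the poster's actual code; I'm assuming a big-endian 8-byte sequence number as the header, matching the 8 bytes mentioned.

```python
import struct

HEADER = struct.Struct(">Q")          # 8-byte big-endian sequence number
CHUNK = 256 * 1024 - HEADER.size      # keep each message at or under 256 KB

def split(data: bytes):
    """Yield Pub/Sub payloads: 8-byte sequence header + data slice."""
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        yield HEADER.pack(seq) + data[off:off + CHUNK]

def reassemble(packets):
    """Reorder by the sequence header (Pub/Sub may deliver out of order)."""
    parts = sorted((HEADER.unpack(p[:HEADER.size])[0], p[HEADER.size:])
                   for p in packets)
    return b"".join(body for _, body in parts)
```

The sort on the header is what makes out-of-order delivery harmless; dropped messages would still need an acknowledgement or retry scheme on top of this.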
No need to base64-encode it, and the PubSubClient MQTT buffer can be resized.
I use:
#include <PubSubClient.h>
...
void setup() {
  ...
  boolean res = mqttClient.setBufferSize(50 * 1024); // ok for 640x480
  if (res) Serial.println("Buffer resized.");
  else Serial.println("Buffer resizing failed");
  ...
}

void sendPic() {
  ...
  if (fb->len) // send only images with size > 0
    if (mqttClient.beginPublish("test_loc/esp32-cam/pic_ms", fb->len + sizeof(long), false))
    {
      // send image data + millis()
      unsigned long m = millis();
      int noBytes;
      noBytes = mqttClient.write(fb->buf, fb->len);
      noBytes = mqttClient.write((byte *) &m, sizeof(long));
      if (!mqttClient.endPublish())
      {
        // error!
        Serial.println("\nError sending data.");
      }
    }
  ...
}
Here I send a 640x480 image and append the current millis() value at the end, for stitching the frames back into a video with ffmpeg.
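On the receiving side, a subscriber just has to strip that trailing timestamp before writing the JPEG to disk. A minimal sketch, assuming the ESP32's unsigned long is 4 bytes little-endian (which it is on that platform):

```python
import struct

def parse_pic_payload(payload: bytes):
    """Split an MQTT payload into (jpeg_bytes, millis):
    the last 4 bytes hold the device's millis() counter."""
    jpeg, tail = payload[:-4], payload[-4:]
    (millis,) = struct.unpack("<L", tail)
    return jpeg, millis
```

The recovered millis values can then drive the frame timestamps when feeding the JPEGs to ffmpeg.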
I want to stream the microphone audio from the web browser to AWS S3.
Got it working
this.recorder = new window.MediaRecorder(...);
this.recorder.addEventListener('dataavailable', (e) => {
    this.chunks.push(e.data);
});
and then, when the user clicks stop, upload the chunks (new Blob(this.chunks, { type: 'audio/wav' })) as multiple parts to AWS S3.
But the problem is that if the recording is 2-3 hours long, the upload might take exceptionally long, and the user might close the browser before the recording finishes uploading.
Is there a way we can stream the web audio directly to S3 while it's going on?
Things I tried but couldn't get a working example of:
Kinesis Video Streams: looks like it's only for real-time streaming between multiple clients, and I would have to write my own client which would then save it to S3.
I thought about using Kinesis Data Firehose but couldn't find any client-side data producer for the browser.
I even tried to find resources on AWS Lex or AWS IVS, but I think they are over-engineering for my use case.
Any help will be appreciated.
You can set the timeslice parameter when calling start() on the MediaRecorder. The MediaRecorder will then emit chunks which roughly match the length of the timeslice parameter.
You could upload those chunks using S3's multipart upload feature as you already mentioned.
Please note that you need a library like extendable-media-recorder if you want to record a WAV file since no browser supports that out of the box.
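One wrinkle with the multipart route: S3 requires every part except the last to be at least 5 MB, and MediaRecorder timeslice chunks will typically be much smaller, so you need to buffer chunks into part-sized pieces before calling upload_part. A sketch of just that buffering logic (Python here for illustration; the actual S3 calls, e.g. boto3's create_multipart_upload/upload_part, are out of scope):

```python
MIN_PART = 5 * 1024 * 1024  # S3 minimum size for every part but the last

class PartBuffer:
    """Accumulate small recorder chunks into >= 5 MB multipart parts."""

    def __init__(self):
        self.buf = bytearray()

    def add(self, chunk: bytes):
        """Buffer a chunk; return a full part when one is ready, else None."""
        self.buf.extend(chunk)
        if len(self.buf) >= MIN_PART:
            part, self.buf = bytes(self.buf), bytearray()
            return part
        return None

    def flush(self):
        """Return whatever remains as the final (possibly small) part."""
        part, self.buf = bytes(self.buf), bytearray()
        return part or None
```

Each part returned by add() (and the final flush()) maps to one upload_part call, so a browser crash only loses the data still sitting in the buffer, not the whole recording.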
I'd like to block communication with a device in a registry in Google Cloud IoT.
The gcloud command that is used to block communication: https://cloud.google.com/iot/docs/gcloud-examples#block_or_allow_communication_from_a_device
The Patch API doesn't make it clear how one can block communication with a device using the API.
So how is this achieved?
There is an example snippet for patching a device available that may be helpful for you.
Instead of sending an EC value in the patch body, you could update the device to have communication blocked.
In Python, you would do this as:
client = get_client(service_account_json)
registry_path = 'projects/{}/locations/{}/registries/{}'.format(
    project_id, cloud_region, registry_id)

patch = {
    'blocked': True  # note: a boolean, not the string 'True'
}

device_name = '{}/devices/{}'.format(registry_path, device_id)

return client.projects().locations().registries().devices().patch(
    name=device_name, updateMask='blocked', body=patch).execute()
Thanks to this community I've learned that it is possible to send AWS SNS push notifications via Lambda with node.js (as a result of the Parse migration). I am still struggling with the following:
Can this be done client to client: "x likes y's z", where x is user 1, y is user 2, and z is the object being liked? If so, it seems like Cognito is not required and that it can read directly from the database, but is that accurate?
Does anyone have an example of how this was implemented?
Again, we don't want to broadcast to all users on a schedule but rather when a client performs an action.
Thanks so much in advance!
Let's say you have Device1 which creates a piece of content. That is distributed to a number of users. Device2 receives this content and "likes" it.
Assumption:
you have registered for push notifications on the device, and created an SNS endpoint on AWS. You have stored that endpoint ARN in your database and associated it with either the Cognito Id or the content Id. If your data is normalized, then you'd typically have the SNS endpoint associated with the device.
Your Lambda will need to have access to that data source and look up that SNS endpoint to send push notifications to. This will depend on what sort of data store you are using (RDS, DynamoDB, something else). What sort of access that is, and how you secure it is a whole other topic.
From your Lambda, you fetch the ARN to send the push notification to. If you pass in the content Id from the Like, and have the Cognito Id from the device that Liked it, you can then look up the information you need. You then construct an SNS payload (I'm assuming APNS in this example), then send it off to SNS.
var message = {
    "default": "New Like!",
    // with MessageStructure "json", each protocol value must itself be
    // a JSON-encoded string, not a nested object
    "APNS": JSON.stringify({
        "aps": {
            "alert": "New Like!"
        }
    })
};

var deviceParams = {
    Message: JSON.stringify(message),
    Subject: "New Like",
    TargetArn: targetArn,
    MessageStructure: "json"
};

self.sns.publish(deviceParams, function (err) {
    if (err) {
        console.error("Error sending SNS: ", err);
    }
});
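One detail worth calling out: with MessageStructure set to "json", SNS requires the value for each protocol key ("APNS", "GCM", ...) to itself be a JSON-encoded string, not a nested object. A small helper that builds a valid publish payload (Python/boto3 naming assumed for illustration; the TargetArn is the endpoint ARN looked up earlier):

```python
import json

def build_sns_push_params(alert: str) -> dict:
    """Build kwargs for sns.publish with per-protocol payloads.
    With MessageStructure='json', each protocol value must be a
    JSON *string*, not a nested dict."""
    message = {
        "default": alert,
        "APNS": json.dumps({"aps": {"alert": alert}}),
    }
    return {
        "Message": json.dumps(message),
        "MessageStructure": "json",
        # "TargetArn": endpoint_arn,  # the device endpoint from your lookup
    }
```

Getting this wrong is a common source of "Invalid parameter: Message Structure" errors, since the outer Message parses fine but the protocol values don't.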
It's not all done for you like it might be with Parse. You need to work a lot harder on AWS, but you have near unlimited power to do what you want.
If this is a bit too much, you may want to consider Google's newly updated Firebase platform. It's very Parse-like: https://firebase.google.com/
Hope that helps you move forward a bit.
Further reading:
http://docs.aws.amazon.com/sns/latest/dg/mobile-push-apns.html
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/TheNotificationPayload.html
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SNS.html
I am creating a polling app, and each poll is going to have an associated image of the particular topic.
I am using Firebase to dynamically update polls as events occur. In Firebase, I am storing the relevant Image URL (referencing the URL in Amazon S3), and I am then using Picasso to load the image onto the client's device (see code below).
I have already noticed that I may be handling this data inefficiently, resulting in unnecessary GET requests to my files in Amazon S3. I was wondering what options I have with Picasso (I am thinking some form of caching) to pull the images for each client just once and then store them locally (I do not want them to remain on the client's device permanently, however). My goal is to minimize costs without compromising performance. Below is my current code:
mPollsRef.child(mCurrentDateString).child(homePollFragmentIndexConvertedToFirebaseReferenceImmediatelyBelowDate).addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        int numberOfPollAnswersAtIndexBelowDate = (int) dataSnapshot.child("Poll_Answers").getChildrenCount();
        Log.e("TAG", "There are " + numberOfPollAnswersAtIndexBelowDate + " poll answers at index " + homePollFragmentIndexConvertedToFirebaseReferenceImmediatelyBelowDate);
        addRadioButtonsWithFirebaseAnswers(dataSnapshot, numberOfPollAnswersAtIndexBelowDate);

        String pollQuestion = dataSnapshot.child("Poll_Question").getValue().toString();
        mPollQuestion.setText(pollQuestion);

        // This is where the image "GET" from Amazon S3 using Picasso begins;
        // the URL is stored in Firebase and then passed to Picasso.load()
        final String mImageURL = (String) dataSnapshot.child("Image").getValue();
        Picasso.with(getContext())
                .load(mImageURL)
                .fit()
                .into((ImageView) rootView.findViewById(R.id.poll_image));
    }

    @Override
    public void onCancelled(FirebaseError firebaseError) {
    }
});
First, the Picasso instance will hold a memory cache by default (or you can configure it).
Second, disk caching is done by the HTTP client. You should use OkHttp 3+ in 2016. By default, Picasso will make a reasonable default cache with OkHttp if you include OkHttp in your dependencies. You can also set the Downloader when creating the Picasso instance (make sure to set the cache on the client and use OkHttpDownloader or comparable).
Third, OkHttp will respect cache headers, so make sure the max-age and max-stale have appropriate values.