Sending UID / Device ID via Greengrass - amazon-web-services

I'm running several Greengrass Cores and they send data to an MQTT stream.
I deployed a Lambda on the GGC that reads the incoming serial port data and pushes it to the stream.
But now I want to check which device is sending the data. I tried this to read the hostname:
import socket
host = socket.gethostname()
but the call returns the value "sandbox", so I think the Lambda isn't authorized to read the host name.
The SDK has no documentation for this:
https://github.com/aws/aws-greengrass-core-sdk-python
I want to push the data to an MQTT topic like this:
response = client.publish(
    topic='customer/events/{DEVICE-ID or UID or ARN}/',
    payload=jsonData.encode())

I found something useful in another AWS Python example: thing names are registered in the system environment, so you can import os and read the thing name like this:
import os
device = os.environ['AWS_IOT_THING_NAME']
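Putting the two pieces together, a sketch of the Greengrass Lambda publish using the thing name in the topic (jsonData stands in for whatever the Lambda read from the serial port; the topic layout is the one from the question):
import os
import json
import greengrasssdk

client = greengrasssdk.client('iot-data')

# The core's thing name is exposed to the Lambda as an environment variable.
device = os.environ['AWS_IOT_THING_NAME']

# Placeholder payload; in the real Lambda this comes from the serial port.
jsonData = json.dumps({'value': 42})

response = client.publish(
    topic='customer/events/{}/'.format(device),
    payload=jsonData.encode())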

Related

Google Cloud IoT - Single MQTT client instance for all devices in a registry

I am able to publish events to a device in my Cloud IoT registry via an MQTT client created this way (using the paho Python client):
self.__client = mqtt.Client(
    client_id='projects/{}/locations/{}/registries/{}/devices/{}'.format(
        project_id, cloud_region, registry_id, device_id))
Now I'm wondering if I can create an MQTT client that is able to publish events to multiple devices by setting the client id at the registry level (i.e. not specifying the device id):
self.__client = mqtt.Client(
    client_id='projects/{}/locations/{}/registries/{}'.format(
        project_id, cloud_region, registry_id))
This client is not able to connect, even though I've added a CA certificate to the registry.
My question is: can a single MQTT Client instance publish events to a set of devices defined in a registry?
Should I use a gateway instead?
No, you can't send messages to a registry like this.
The way you'd want to do this is either 1) use a gateway, like you say: send one message, then spread it to the devices locally; or 2) grab the list of devices in the registry using the DeviceManagerClient() and iterate over it, sending the message to each device in a loop.
Check out this page for fetching the list of devices in a registry: https://cloud.google.com/iot/docs/samples/device-manager-samples#list_devices_in_a_registry
Snippet for Python:
from google.cloud import iot_v1

# project_id = 'YOUR_PROJECT_ID'
# cloud_region = 'us-central1'
# registry_id = 'your-registry-id'
print("Listing devices")
client = iot_v1.DeviceManagerClient()
registry_path = client.registry_path(project_id, cloud_region, registry_id)
devices = list(client.list_devices(request={"parent": registry_path}))
for device in devices:
    print("Device: {} : {}".format(device.num_id, device.id))
So inside that for device in devices loop you can call your code that gets the MQTT client and sends the message you want to the specified device.
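As a variation on that loop: rather than constructing an MQTT client per device, the same DeviceManagerClient can push the message from the server side with the sendCommandToDevice RPC. A sketch, with a made-up payload (the device receives it on its /devices/{device-id}/commands/# subscription):
from google.cloud import iot_v1

client = iot_v1.DeviceManagerClient()
registry_path = client.registry_path(project_id, cloud_region, registry_id)

for device in client.list_devices(request={"parent": registry_path}):
    # Build the full resource name of this device.
    device_path = client.device_path(
        project_id, cloud_region, registry_id, device.id)
    client.send_command_to_device(
        request={"name": device_path, "binary_data": b"my-event-payload"})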

Google Cloud IOT, block communication via API

I'd like to block communication with a device in a registry in Google Cloud IoT.
This is the gcloud command that is used to block communication: https://cloud.google.com/iot/docs/gcloud-examples#block_or_allow_communication_from_a_device
The Patch API doesn't make it clear how one can block communication with a device.
So how is this achieved?
There is an example snippet for patching a device available that may be helpful for you.
Instead of patching an EC key value in the patch body, as that sample does, you can update the device to have communication blocked.
In Python, you would do this as:
client = get_client(service_account_json)
registry_path = 'projects/{}/locations/{}/registries/{}'.format(
    project_id, cloud_region, registry_id)
patch = {
    'blocked': True
}
device_name = '{}/devices/{}'.format(registry_path, device_id)
return client.projects().locations().registries().devices().patch(
    name=device_name, updateMask='blocked', body=patch).execute()
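Allowing communication again should just be the mirror image, patching blocked back to False; a sketch under the same setup as the snippet above:
patch = {
    'blocked': False
}
client.projects().locations().registries().devices().patch(
    name=device_name, updateMask='blocked', body=patch).execute()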

How to send message to event hub via python

I am trying to connect lots of IoT objects to an Event Hub and save their messages to blob storage (and also an SQL database). I want to do this with Python (and I am not sure whether this is a recommended practice). The documentation about Python was confusing. I tried a few examples, but they create entries in blob storage that seem to be irrelevant.
Things like this:
Objavro.codecnullavro.schema\EC{"type":"record","name":"EventData","namespace":"Microsoft.ServiceBus.Messaging","fields":[{"name":"SequenceNumber","type":"long"}...
which is not what I send. How can I solve this?
You could use the azure-eventhub Python SDK to send messages to Event Hub; it is available on PyPI.
And there is a send sample showing how to send messages:
import os
from azure.eventhub import EventHubProducerClient, EventData

# The sample reads the connection string and hub name from environment variables.
CONNECTION_STR = os.environ['EVENT_HUB_CONN_STR']
EVENTHUB_NAME = os.environ['EVENT_HUB_NAME']

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR,
    eventhub_name=EVENTHUB_NAME
)
with producer:
    event_data_batch = producer.create_batch()
    event_data_batch.add(EventData('Single message'))
    producer.send_batch(event_data_batch)
I'm interested in the part where you say the documentation about Python was confusing and that the examples created irrelevant entries in blob storage.
Could you share your code with me? I'm wondering what the input/output for Event Hub and Storage Blob is, and how the data-processing flow works.
By the way, for Azure Storage Blob Python SDK usage, you could check the repo and the blob samples for more information.
This is the connection string format for inserting new messages into Event Hub using kafka-python. If you were using Kafka and want to switch over, you just have to change this connection string.
import json
import ssl
from kafka import KafkaProducer

KAFKA_HOST = "{your_eventhub}.servicebus.windows.net:9093"
KAFKA_ENDPOINT = "Endpoint=sb://{your_eventhub}.servicebus.windows.net/;SharedAccessKeyName=RootSendAccessKey;SharedAccessKey={youraccesskey}"

# Disable TLS 1.0 and 1.1 so the connection negotiates TLS 1.2.
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1
context.options |= ssl.OP_NO_TLSv1_1

producer = KafkaProducer(
    bootstrap_servers=KAFKA_HOST,
    connections_max_idle_ms=5400000,
    security_protocol='SASL_SSL',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
    sasl_mechanism='PLAIN',
    sasl_plain_username='$ConnectionString',
    sasl_plain_password=KAFKA_ENDPOINT,
    api_version=(0, 10),
    retries=5,
    ssl_context=context)
You can find the values for KAFKA_HOST and KAFKA_ENDPOINT in the Azure portal.
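Once the producer exists, sending a record is the usual kafka-python call; against Event Hubs, the topic name is the name of the event hub itself (the payload below is a made-up example):
# The dict is serialized to JSON by the value_serializer configured above.
producer.send('{your_eventhub}', {'deviceId': 'sensor-1', 'value': 42})
producer.flush()  # block until the broker has acknowledged the message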

RPi3 can't find ffe0 service via Bluepy python

I have an RPi2 with a GPIO HM-10 BLE module that connects and communicates with a BLE relay board (RB1). I want to replace the RPi2 with an RPi3, so I tested the RPi3 with an identical test-unit relay board (RB2); using this Python script, the RPi3 can connect and communicate with RB2. So I was ready to swap them.
I also tried connecting to both relay boards (RB1 and RB2) from the BLE scanner app on the iPhone, and I can connect and send commands just fine by writing to their characteristic.
I can connect, pair, and trust both boards from the RPi3 via bluetoothctl and see their UUID services just fine. But when I run my Python code to toggle the relays on RB2:
import bluepy.btle as btle

p = btle.Peripheral("00:0E:0B:00:75:12", "random")
s = p.getServiceByUUID("0000ffe0-0000-1000-8000-00805f9b34fb")
c = s.getCharacteristics()[0]
c.write("o".encode("utf-8"))
p.disconnect()
I get this error on RB1 only:
File "/usr/local/lib/python2.7/dist-packages/bluepy/btle.py", line 449, in getServiceByUUID
raise BTLEException(BTLEException.GATT_ERROR, "Service %s not found" % (uuid.getCommonName()))
bluepy.btle.BTLEException: Service ffe0 not found
But the service UUID is correct; here is a terminal session output. As you can see, I can connect to RB1 and see the UUID services, including the ffe0 one I need:
[bluetooth]# connect 00:0E:0B:00:75:12
Attempting to connect to 00:0E:0B:00:75:12
[CHG] Device 00:0E:0B:00:75:12 Connected: yes
Connection successful
[CHG] Device 00:0E:0B:00:75:12 UUIDs:
00001800-0000-1000-8000-00805f9b34fb
00001801-0000-1000-8000-00805f9b34fb
0000ffe0-0000-1000-8000-00805f9b34fb
[bluetooth]# info 00:0E:0B:00:75:12
Device 00:0E:0B:00:75:12
Name: BT Bee-BLE
Alias: BT Bee-BLE
Paired: no
Trusted: yes
Blocked: no
Connected: yes
LegacyPairing: no
UUID: Generic Access Profile
(00001800-0000-1000-8000-00805f9b34fb)
UUID: Generic Attribute Profile
(00001801-0000-1000-8000-00805f9b34fb)
UUID: Unknown
(0000ffe0-0000-1000-8000-00805f9b34fb)
Why is that happening? Could something be saved somewhere in the tsrb430 (RB1) that could be causing this?
After hours of inspecting the btle.py file, I noticed the getServices() function was never being called. It performs the service discovery that fills in the peripheral's service list; after adding a call to it, getServiceByUUID() can find the service:
#!/usr/bin/env python
import bluepy.btle as btle

p = btle.Peripheral("00:0E:0B:00:75:12", "random")
# Discover all services first so the service list is populated.
services = p.getServices()
for service in services:
    print(service)
s = p.getServiceByUUID("0000ffe0-0000-1000-8000-00805f9b34fb")
c = s.getCharacteristics()[0]
c.write("e".encode("utf-8"))
p.disconnect()
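If a board still misbehaves, it can also help to print what was actually discovered on the service before writing; bluepy exposes this on the Characteristic objects:
# Optional debugging: list the characteristics on the ffe0 service.
for ch in s.getCharacteristics():
    print("{}: {}".format(ch.uuid, ch.propertiesToString()))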

boto.sqs connect to non-aws endpoint

I currently need to connect to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. In Java and Node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST",
                           aws_secret_access_key="TEST",
                           port=9324, is_secure=False)
and then:
queue = conn.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance?
Here's how to create an SQS connection that connects to fake_sqs:
import boto.sqs.connection
import boto.sqs.regioninfo

region = boto.sqs.regioninfo.SQSRegionInfo(
    connection=None,
    name='fake_sqs',
    endpoint='localhost',  # or wherever fake_sqs is running
    connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
    aws_access_key_id='fake_key',
    aws_secret_access_key='fake_secret',
    is_secure=False,
    port=4568,  # or wherever fake_sqs is running
    region=region,
)
region.connection = conn
# You can now work with conn, e.g.:
# conn.create_queue('test_queue')
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs
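To sanity-check the setup against a running fake_sqs instance, a small round trip looks like this (the queue name is arbitrary):
from boto.sqs.message import Message

q = conn.create_queue('test_queue')
m = Message()
m.set_body('hello')
q.write(m)
print(conn.get_queue('test_queue'))  # should no longer be None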