I have an RPi2 with a GPIO-attached HM-10 BLE module that connects to and communicates with a BLE relay board (RB1). I want to replace the RPi2 with an RPi3, so I tested the RPi3 against an identical test-unit relay board (RB2); using the Python script below, the RPi3 can connect and communicate with RB2. So I was ready to swap them.
I also tried connecting to both relay boards (RB1 and RB2) from the BLE scanner app on the iPhone, and I can connect and send commands just fine by writing to their characteristic.
I can connect, pair, and trust both boards from the RPi3 via bluetoothctl and see their UUID services just fine. But when I run my Python code to toggle the relays:
import bluepy.btle as btle

# Connect with a random address type and look up the relay service.
p = btle.Peripheral("00:0E:0B:00:75:12", "random")
s = p.getServiceByUUID("0000ffe0-0000-1000-8000-00805f9b34fb")
c = s.getCharacteristics()[0]
# Note: write() takes (value, withResponse); passing "utf-8" as the second
# argument would silently mean withResponse=True, not an encoding.
c.write("o")
p.disconnect()
I get this error on RB1 only:
File "/usr/local/lib/python2.7/dist-packages/bluepy/btle.py", line 449, in getServiceByUUID
raise BTLEException(BTLEException.GATT_ERROR, "Service %s not found" % (uuid.getCommonName()))
bluepy.btle.BTLEException: Service ffe0 not found
But the service UUID is correct; here is a terminal session output. As you can see, I can connect to RB1 and see its UUID services, including the ffe0 service I need:
[bluetooth]# connect 00:0E:0B:00:75:12
Attempting to connect to 00:0E:0B:00:75:12
[CHG] Device 00:0E:0B:00:75:12 Connected: yes
Connection successful
[CHG] Device 00:0E:0B:00:75:12 UUIDs:
00001800-0000-1000-8000-00805f9b34fb
00001801-0000-1000-8000-00805f9b34fb
0000ffe0-0000-1000-8000-00805f9b34fb
[bluetooth]# info 00:0E:0B:00:75:12
Device 00:0E:0B:00:75:12
Name: BT Bee-BLE
Alias: BT Bee-BLE
Paired: no
Trusted: yes
Blocked: no
Connected: yes
LegacyPairing: no
UUID: Generic Access Profile
(00001800-0000-1000-8000-00805f9b34fb)
UUID: Generic Attribute Profile
(00001801-0000-1000-8000-00805f9b34fb)
UUID: Unknown
(0000ffe0-0000-1000-8000-00805f9b34fb)
Why is that happening? Could something saved somewhere in the tsrb430 (RB1) be causing this?
After hours of inspecting the btle.py file, I noticed the getServices() function was never being called. I added a call to it, and now it can find the service:
#!/usr/bin/env python
import bluepy.btle as btle

p = btle.Peripheral("00:0E:0B:00:75:12", "random")
# Enumerating services first populates bluepy's service cache, so the
# later lookup by UUID succeeds.
services = p.getServices()
for service in services:
    print service
s = p.getServiceByUUID("0000ffe0-0000-1000-8000-00805f9b34fb")
c = s.getCharacteristics()[0]
c.write("e")  # second positional argument of write() is withResponse, not an encoding
p.disconnect()
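An aside not from the original thread: this script is Python 2 (note the print statement). If it is ever moved to Python 3 with a current bluepy, write() expects a raw bytes payload, and its optional second parameter is withResponse. A minimal sketch of the same toggle under that assumption:

import bluepy.btle as btle

p = btle.Peripheral("00:0E:0B:00:75:12", "random")
s = p.getServiceByUUID("0000ffe0-0000-1000-8000-00805f9b34fb")
c = s.getCharacteristics()[0]
c.write(b"e", withResponse=True)  # bytes payload; ask the peripheral to acknowledge
p.disconnect()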
I racked my brain looking for the cause of this error (four days).
I have an OPC DA server running on a remote machine.
The OPC DA client is located on another machine.
In the client implementation I create a remote instance via CoCreateInstanceEx():
HRESULT result = ::CoCreateInstanceEx(clsid, NULL, clsctx, sinptr, 1, &mqi);
PRINT_COMERRORIF(result, "CoCreateInstanceEx failed");
It works fine, and I get a pointer to the remote OPC server (mqi.pItf).
The problem comes when I call the Advise() function of the IConnectionPoint interface.
Before calling Advise(), I find the connection point for IID_IOPCShutdown and hold a pointer to it (_MY_shutdown):
result = server_object->QueryInterface(IID_IConnectionPointContainer,
    (void**)&connection_point_container);
PRINT_COMERRORIF(result, CTXID(this) "No IConnectionPointContainer interface");
result = connection_point_container->FindConnectionPoint(IID_IOPCShutdown, &_MY_shutdown);
PRINT_COMERRORIF(result, CTXID(this) "No IOPCShutdown connection point found");
result = _MY_shutdown->Advise(_MY_shutdown_callback, &_MY_shutdown_cookie); // HERE IS THE ISSUE
PRINT_COMERRORIF(result, CTXID(this) "IOPCShutdown::Advise Failed");
And I get this error (0x80040202 is CONNECT_E_CANNOTCONNECT):
IOPCShutdown::Advise Failed : error 80040202
I've checked the DCOM settings for the "Discovery of Remote OPC Servers" configuration and did everything as described, but no luck. :(
Here is my configuration:
Server side
- OPC DA server installed and running
- Local user account is created
- DCOM settings are configured as required
- Policy settings are configured as well
Client side
- OPC DA client interface installed
- Local user accounts are created on both nodes, with the same names and passwords as on the server
- Firewall is disabled on both server and client
Have you properly configured DCOM and policy settings on the client side? As mentioned in the comments, this matters because for asynchronous connections (when the callback is invoked) your client behaves as a server and the server as a client.
It works when I change the authentication level of my OPC DA server in DCOM Config from "Connect" to "None". I don't know why, but it works (presumably because with "None" the callback connection no longer has to be authenticated). ^^
I have followed the AWS tutorial step by step: https://aws.amazon.com/premiumsupport/knowledge-center/iot-core-publish-mqtt-messages-python/
I have created the open-ended policy with the * wildcard, registered a thing and attached it to the policy, and generated, downloaded, and activated the certificates. I have tried to connect and publish to a topic using both the AWS IoT SDK for Python v2 and the original SDK, but neither works. The code I'm using is straight from AWS's demo example connection code, but it just won't connect.
While using the AWS IoT SDK for Python v2 I get this error message:
RuntimeError: 1038 (AWS_IO_FILE_VALIDATION_FAILURE): A file was read and the input did not match the expected value
While using the original SDK I get this error message:
TimeoutError: [Errno 60] Operation timed out
The Python code I'm using:
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import time as t
import json
import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
# Define ENDPOINT, CLIENT_ID, PATH_TO_CERT, PATH_TO_KEY, PATH_TO_ROOT, MESSAGE, TOPIC, and RANGE
ENDPOINT = "XXXXX-ats.iot.ap-southeast-2.amazonaws.com"
CLIENT_ID = "testDevice"
PATH_TO_CERT = "certs/XXXX-certificate.pem.crt"
PATH_TO_KEY = "certs/XXXX-private.pem.key"
PATH_TO_ROOT = "certs/root.pem"
MESSAGE = "Hello World"
TOPIC = "test/testing"
RANGE = 20
myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient(CLIENT_ID)
myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883)
myAWSIoTMQTTClient.configureCredentials(PATH_TO_ROOT, PATH_TO_KEY, PATH_TO_CERT)
myAWSIoTMQTTClient.connect()
print('Begin Publish')
for i in range(RANGE):
    data = "{} [{}]".format(MESSAGE, i + 1)
    message = {"message": data}
    myAWSIoTMQTTClient.publish(TOPIC, json.dumps(message), 1)
    print("Published: '" + json.dumps(message) + "' to the topic: " + "'test/testing'")
    t.sleep(0.1)
print('Publish End')
myAWSIoTMQTTClient.disconnect()
(I censored the endpoint and the certificate ID.)
(I'm using a MacBook Air on a public school network.)
I went home and tested it, and it works perfectly. If you have this same problem, try troubleshooting your network; I think my school blocks MQTT or something.
MQTT uses the specific port 8883, which you configure in myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883).
In one of my AWS IoT courses I learned that some network administrators block all ports that are not commonly used, to avoid unwanted traffic, and MQTT is specific to the IoT industry. This could be why it did not work on your school network but worked at home.
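To verify that theory from a restricted network, a quick reachability check helps. This is a minimal sketch using only Python's standard library; the endpoint below is a placeholder for your own ATS endpoint:

import socket

# Placeholder: substitute your account-specific ATS endpoint.
ENDPOINT = "XXXXX-ats.iot.ap-southeast-2.amazonaws.com"

try:
    # If the network filters port 8883, this typically times out or is refused.
    with socket.create_connection((ENDPOINT, 8883), timeout=5):
        print("TCP connection to port 8883 succeeded")
except OSError as exc:
    print("Could not reach port 8883:", exc)

If 8883 is blocked, newer versions of the AWS IoT device SDKs can also carry MQTT over port 443 (with ALPN), which firewalls rarely block.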
I'm running several Greengrass Cores, and they send data to an MQTT stream.
I deployed a Lambda on the GGC that reads the incoming serial port data and pushes it to the stream.
But now I want to check which device is sending the data. I tried this to read the hostname:
import socket
host = socket.gethostname()
but the call returns the value "sandbox", so I think the Lambda isn't authorized to read the host name.
The SDK has no documentation for this:
https://github.com/aws/aws-greengrass-core-sdk-python
I want to push the data to an MQTT stream like this:
response = client.publish(
    topic='customer/events/{DEVICE-ID or UID or ARN}/',
    payload=jsonData.encode())
I found something useful in another AWS Python example: thing names are registered in the system environment, so you can import os and get the thing name like this:
import os
device = os.environ['AWS_IOT_THING_NAME']
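Putting the two pieces together, here is a minimal sketch of the publish with the thing name in the topic. It assumes the Greengrass Core SDK (greengrasssdk) is available to the Lambda; the topic layout and payload fields are illustrative, not from the original post:

import json
import os

import greengrasssdk

client = greengrasssdk.client("iot-data")

# The Greengrass Lambda runtime exposes the core's thing name in the environment.
device = os.environ["AWS_IOT_THING_NAME"]

payload = json.dumps({"device": device, "reading": "serial data here"})
response = client.publish(
    topic="customer/events/{}/".format(device),
    payload=payload.encode())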
I have set up two AMI (Red Hat) instances, one with an ActiveMQ 5.11 broker, the other with some stress-testing tools (Gatling with MQTT, mqtt-stress, etc.). I stopped the iptables service (firewall) on both instances, and the AWS security groups allow port 1883 for MQTT messaging.
Running mqtt-stress.py succeeds: it makes the round trip, and I get a message back from the broker. When I spin up a Gatling simulation, all the threads move to the 'Active' state and never make it to 'Done'. I even set up a broker on the same machine as Gatling and pointed the simulation at 'localhost', with the same result. Nothing appears in the simulation.log, and tcpdump on the original server showed that no incoming connections were being made. Here is the simulation:
import io.gatling.core.Predef._
import org.fusesource.mqtt.client.QoS
import scala.concurrent.duration._
import com.github.mnogu.gatling.mqtt.Predef._

class BasicMqttSimulation extends Simulation {

  val mqttConf = mqtt
    // MQTT broker
    //.host("tcp://127.0.0.1:1883")
    .host("tcp://175.10.130.10:1883")

  val scn = scenario("MQTT Test")
    .exec(mqtt("request")
      // topic: "foo"
      // payload: "Hello"
      // QoS: AT_LEAST_ONCE
      // retain: false
      .publish("foo", "Hello", QoS.AT_LEAST_ONCE, retain = false))

  setUp(
    scn.inject(constantUsersPerSec(10) during (10 seconds)))
    .protocols(mqttConf)
}
Any tips on how to troubleshoot this further?
I rebuilt the instances and tested step by step; SELinux turned out to be the culprit:
sudo setenforce 0 # did not survive a reboot
Since setenforce does not persist across reboots, proper rules need to be added for SELinux (or the mode changed persistently, e.g. in /etc/selinux/config).
I currently need to connect to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. In Java and Node.js there are ways to specify the queue endpoint, so by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST",
                           aws_secret_access_key="TEST",
                           port=9324, is_secure=False)
and then:
queue = connAmazon.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance?
Here's how to create an SQS connection that connects to fake_sqs:
import boto.sqs.connection
import boto.sqs.regioninfo

region = boto.sqs.regioninfo.SQSRegionInfo(
    connection=None,
    name='fake_sqs',
    endpoint='localhost',  # or wherever fake_sqs is running
    connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
    aws_access_key_id='fake_key',
    aws_secret_access_key='fake_secret',
    is_secure=False,
    port=4568,  # or wherever fake_sqs is running
    region=region,
)
region.connection = conn
# You can now work with conn, e.g.:
# conn.create_queue('test_queue')
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs
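For completeness, a short usage sketch continuing from the conn created above (the queue name is illustrative, and fake_sqs may not support every operation):

from boto.sqs.message import Message

# Create a queue and round-trip a message through fake_sqs.
queue = conn.create_queue('test_queue')
msg = Message()
msg.set_body('hello from fake_sqs')
queue.write(msg)

received = queue.read()
if received is not None:
    print(received.get_body())
    queue.delete_message(received)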