WSO2 IoT Server modification

I want to modify my WSO2 IoT Server to do the following task. At the moment I have only one LED bulb, which is connected to my Raspberry Pi, and I want to increase the number of LEDs to 3. Currently the system turns on the single LED when the temperature is greater than 28 °C. I want to turn on the 2nd LED when the temperature is greater than 30 °C and the 3rd LED when the temperature is greater than 35 °C, so when the temperature is 36 °C all 3 LEDs are on.
How can I do this? Can I modify some files in the WSO2 IoT Server and the Raspberry Pi agent? If so, which files, and what should be modified?
Or can I write a new file or files to do this? If so, how?
Your help on this is very much appreciated.
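For reference, the threshold logic itself is simple regardless of where it ends up living in the agent; a minimal Python sketch of the check (the pin numbers and function name here are hypothetical illustrations, not from the WSO2 agent code):

```python
# Hypothetical GPIO pin assignments for the three LEDs -- adjust to your wiring.
LED_PINS = {28: 11, 30: 13, 35: 15}  # threshold (deg C) -> GPIO pin

def leds_to_turn_on(temperature):
    """Return the pins of every LED whose temperature threshold is exceeded."""
    return [pin for threshold, pin in sorted(LED_PINS.items()) if temperature > threshold]

# At 36 C all three thresholds (28, 30, 35) are exceeded, so all three LEDs light up.
print(leds_to_turn_on(36))  # -> [11, 13, 15]
print(leds_to_turn_on(29))  # -> [11]
```

Each threshold is checked independently, so the LEDs accumulate as the temperature rises, which matches the behaviour described above.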

Related

DJI SDK Can't start coordinated Waypoint mission

I am creating WaypointMission using DJI ROS SDK.
I want my vehicle to pass smoothly through provided waypoints, so I set enable coordinated mode like so:
waypoint_task.trace_mode = dji_sdk::MissionWaypointTask::TRACE_COORDINATED;
The problem is, no matter how many or how dense control points are, SDK always responds with error message WAYPOINT_MISSION_POINTS_NOT_ENOUGH:
Screenshot
With TRACE_POINT the mission uploads successfully, but those stops at waypoints are no good for filmmaking.
Also, where I can find information about actions I can execute on waypoints using dji_sdk/MissionWaypointAction.msg?
It's been about 6 months since I used the Onboard SDK, so it may have been fixed by now, but when I was using the ROS version, one of the bugs I found was that after you upload "too many" waypoints, the waypoint mission won't work. If I recall correctly, that number was around 28: a waypoint mission with 27 waypoints would work, but 28 wouldn't. It was also cumulative, so if you did a waypoint mission with 10 waypoints and later did one with 18, the second would fail, and you wouldn't be able to do a waypoint mission again until you restarted.
I also tried the non-ROS version of the SDK. It worked better, but it was also buggy and hard to use, and it wouldn't allow more than 99 waypoints in a mission.
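The cumulative behaviour described above can at least be guarded against client-side; a plain-Python sketch of such a guard (the ~28-waypoint limit is an assumption based on the observed behaviour, not a documented value, and the class name is hypothetical):

```python
class WaypointBudget:
    """Track the cumulative waypoint count uploaded since the last restart.

    Models the observed ROS SDK bug: roughly 28 cumulative waypoints before
    missions stop uploading until a restart.
    """
    def __init__(self, limit=28):
        self.limit = limit
        self.uploaded = 0

    def can_upload(self, n_waypoints):
        # 27 cumulative waypoints worked, 28 did not, hence strict "<".
        return self.uploaded + n_waypoints < self.limit

    def record_upload(self, n_waypoints):
        if not self.can_upload(n_waypoints):
            raise RuntimeError("would exceed cumulative waypoint budget; restart first")
        self.uploaded += n_waypoints

budget = WaypointBudget()
budget.record_upload(10)      # first mission: fine
print(budget.can_upload(18))  # 10 + 18 = 28 hits the limit -> False
```

This mirrors the 10-then-18 failure described above: the second upload is the one that trips the cumulative limit.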

How to use external power supply for a Proximity Sensor with Arduino

Is it doable to read proximity sensor values with an Arduino Uno when the sensor requires 24 VDC? Here's the sensor link,
and here's the power supply I'd like to use: this link.
This is all for learning purposes, to see how to use an external power source to power a 3-wire 24 VDC sensor.
Thanks
The power supply voltage of the sensor doesn't matter; only the signal potential matters. Arduino analog pins need the input potential to be between 0 and +5 V, so make sure the signal potential lies in that range. One thing to take care of while using an external power supply is to make the external supply's ground and the Arduino's ground common.
Since you are doing this for learning purposes, you don't have to waste money on that power supply. You can power the probe from the Arduino if you want to, but you have to take care of these things:
1) Current consumption
As per this pr12-4dp-autonics-12800368 datasheet, which is similar to the device you are using, the current requirement of the device is 10 mA, which an Arduino Uno can provide (40 mA is the max per pin for an Arduino Uno). Check your device's datasheet for its current consumption.
2) Voltage level
The Arduino Uno's power-out pins supply 5 V and 3.3 V, so you have to step the 5 V up to the required voltage with a boost converter IC (for example, one based on the LM2577). Check whether the IC you choose supports the current required by the sensor.
3) Input signal logic level from the sensor
This you can do with level-shifting ICs or a humble voltage divider circuit.
If you are using the external power supply, you only have to take care of the 3rd step.
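For the 3rd step, the divider arithmetic is quick to check; a small Python sketch (the resistor values here are assumptions for illustration, pick your own):

```python
def divider_out(v_in, r_top, r_bottom):
    """Output of a resistive voltage divider: Vout = Vin * Rb / (Rt + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# Scale a 24 V sensor signal down toward the Arduino's 0-5 V input range.
# With Rt = 39k and Rb = 10k: 24 * 10/49 = about 4.9 V, safely under 5 V.
print(round(divider_out(24.0, 39_000, 10_000), 2))  # -> 4.9
```

Pick the total resistance high enough (tens of kilohms) that the divider doesn't load the sensor's output, and verify the worst-case output stays below 5 V.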

Stream live camera feed from RPI compute module to RPI 3

I'm developing a portable hardware/software application to use 2 cameras in a stereo vision configuration, and process the raw data for information to output.
For this reason I have a Raspberry Pi Compute Module kit and a Raspberry Pi 3.
The Compute Module kit will operate the two cameras
The Pi 3 will run the code, as it has the computational power
OpenCV (C++) is the preferred CV package
As this is a portable application, internet based streaming is not a suitable option.
I've not had time to play around with the GPIO pins, or find a method of streaming the two camera feeds from the compute module to the pi 3.
How would you suggest I proceed with this? Has anyone performed such a project? What links can you provide to help me implement this?
This is for a dissertation project, and will hopefully help in the long run when developing as a full prototype.
Frame Size: 640x480
Frame Rate: 15 fps
The cameras are 5cm apart from each other
Updated Answer
I have been doing some further tests on this. Using the iperf tool and my own simple TCP connection code as well, I connected two Raspberry Pis directly to each other over wired Ethernet and measured the TCP performance.
Using the standard, built-in 10/100 interface on a Raspberry Pi 2 and a Raspberry Pi 3, you can achieve 94Mbits/s.
If, however, you put a TRENDnet USB3 Gigabit adaptor on each Pi, and repeat the test, you can get 189Mbits/s and almost 200 if you set the MTU to 4088.
Original Answer
I made a quick test - not a full answer - but more than I can add as a comment or format correctly!
I set up 2 Raspberry Pi 2s with a wired Ethernet connection. I took a 640x480 picture on one as a JPEG - and it came out at 178,000 bytes.
Then, on the receiving Pi, I set up to receive 1,000 frames, like this:
#!/bin/bash
for ((i=0;i<1000;i++)); do
echo $i
nc -l 1234 > pic-${i}.jpg
done
On the sending Pi, I set up to transmit the picture 1,000 times:
for ((i=0;i<1000;i++)) ; do nc 192.168.0.65 1234 < pipic1.jpg ;done
That took 34 seconds, so it does roughly 29 fps, but it stuttered a lot because of writing to the filesystem and therefore the SD card. So, I removed the
nc -l 1234 > pic-${i}.jpg
and didn't write the data to disk - which is what you will need, as you will be writing to the screen - as follows:
nc -l 1234 > /dev/null
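Using the numbers above, it is worth sanity-checking that the required bandwidth fits the measured link speed; a quick Python calculation:

```python
frame_bytes = 178_000  # one 640x480 JPEG, as measured above
fps = 15               # the target frame rate stated in the question
link_mbits = 94        # measured TCP throughput on the built-in 10/100 port

required_mbits = frame_bytes * 8 * fps / 1_000_000
print(required_mbits)                   # 21.36 Mbit/s per camera
print(2 * required_mbits < link_mbits)  # both cameras still fit: True
```

So even two JPEG streams at the stated frame size and rate sit comfortably inside the 94 Mbit/s the built-in Ethernet can sustain, with no need for the Gigabit adaptors.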

How to get a DHT22/AM2302 sensor to work with MCP23017 I2C Port Expander

I am building out a monitoring and automation system for my various greenhouses in the garden. Due to the number of devices that need to be controlled and monitored, rather than purchasing additional Raspberry Pis I decided to go with a GPIO port expander and make use of the existing Raspberry Pi hardware.
I have, however, managed to hit a "wall" when it comes to reading the DHT22 temperature and humidity sensor when I connect it via the MCP23017. I know I have the MCP23017 connected correctly, as I am able to make LEDs flash and relay switches turn on and off.
None of these require reading data from the connected device, though, and this is where I am running into the problem and would really appreciate any help or advice.
I am running a Revision 2 Raspberry Pi A; sudo i2cdetect -y 1 shows the MCP23017 connected at 0x20.
I am using Python 2.7, the Adafruit_DHT module, and the wiringpi2 module for addressing the MCP23017, although I am open to other suggestions with regard to using wiringpi2.
The Adafruit_DHT uses the following syntax to connect to and retrieve information from the sensor:
import Adafruit_DHT
sensor = Adafruit_DHT.DHT22
pin = 7
humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)
The above works fine when connecting directly to one of the "standard" GPIO ports. However, I have not worked out how to address the additional new GPIO ports in the same manner.
I have tried the following with no success:
import Adafruit_DHT
import wiringpi2
sensor = Adafruit_DHT.DHT22
wiringpi2.wiringPiSetupGpio()
wiringpi2.mcp23017Setup(65, 0x20)  # expander pins appear as wiringPi pins 65-80
for n in range(1, 17):
    humidity, temperature = Adafruit_DHT.read_retry(sensor, wiringpi2.digitalRead(n + 64))
    print n, humidity, temperature
So the above cycles through each new port; however, I don't get any results back. As I say, if I connect the device directly to the GPIOs on the Pi, it works fine. I suspect the issue is with how I am telling the driver which pin to use.
I would really appreciate any advice or help on this issue. As I said I am open to other ideas on how to use this DHT22 with the MCP23017.
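One thing worth double-checking is how wiringpi2 numbers the expander pins: after mcp23017Setup(65, 0x20), the MCP23017's sixteen GPIOs appear as wiringPi pins 65-80, so a pin argument should be the base plus the expander pin number, not the 0/1 value that digitalRead returns. A tiny pure-Python sketch of that mapping (the helper name is hypothetical):

```python
PIN_BASE = 65  # the base passed to wiringpi2.mcp23017Setup(65, 0x20) above

def expander_pin(n):
    """Map MCP23017 pin n (1-16, as in the loop above) to its wiringPi pin number."""
    if not 1 <= n <= 16:
        raise ValueError("the MCP23017 only exposes 16 GPIOs")
    return PIN_BASE + n - 1

print(expander_pin(1))   # -> 65
print(expander_pin(16))  # -> 80
```

Note, separately, that Adafruit_DHT takes a Broadcom GPIO number, so even a correct wiringPi expander pin number would not be understood by that driver; that may be part of the wall being hit here.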

Is there a document on using the micro usb port on Google Glass?

I just saw the post about making your own Google Glass earphone: How can I get audio output from Google Glass to a 3.5mm headphone jack?
I'm curious: is there a document for Google Glass, or for Android in general, that defines that kind of function?
Thanks
It's undocumented, but its pinout has been experimentally determined. When a 500kΩ resistor is placed between pin 4 (Sense) and pin 5 (GND) of a micro-USB plug, the system treats it as an audio device. The pinout is:
Pin 1: +5V. This charges the battery. Glass can be charged while audio is active.
Pin 2: Right audio out
Pin 3: Left audio out
Pin 4: Sense. Connect to Pin 5 with a 500kΩ resistor.
Pin 5: GND
Trying to answer my own question with some research.
From the teardown images and kernel source code, I found something.
Google Glass uses a MAX14532E chip to switch between USB, TTY, audio, or nothing.
However, the ID-resistor checking routine only checks for 500 kΩ (stereo) or 1 MΩ (mono). If a resistor of either value is detected, the micro USB port will switch to audio automatically. I still need to do more tests to see if I can switch it manually.
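Based on the routine described, the detection logic amounts to a lookup on the ID resistor value; an illustrative Python sketch of that behaviour (the function name is hypothetical; the resistor values are the ones found above):

```python
def usb_mode(id_resistor_ohms):
    """Mimic the ID-resistor check described above:
    500 kOhm -> stereo audio, 1 MOhm -> mono audio, anything else -> USB."""
    modes = {500_000: "audio-stereo", 1_000_000: "audio-mono"}
    return modes.get(id_resistor_ohms, "usb")

print(usb_mode(500_000))    # -> audio-stereo
print(usb_mode(1_000_000))  # -> audio-mono
print(usb_mode(None))       # no ID resistor fitted -> usb
```

This is why the earphone hack works with a passive resistor alone: the mode switch is driven entirely by what the kernel measures on the Sense pin.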