How can I make a MicroBit MakeCode mass code creator?

I am working on a project for my town's Maker Faire. What I'm trying to do is have a micro:bit send a message over radio, where another one picks it up and sends it on through another channel. Then another micro:bit picks that up, and so on. I have the code for the starting micro:bit that sends the first message, and for the second micro:bit that receives the first one's message and sends it out again. Each new micro:bit bumps the radio channel up by one. Is there any way to do this automatically, without having to bump it up manually for each new micro:bit?
This is my code for the second Micro:Bit:
radio.onReceivedString(function (receivedString) {
    radio.setGroup(1)
    basic.showString(receivedString)
    radio.setGroup(2)
    radio.sendString(receivedString)
})
Thanks!

The challenge here is coming up with a way for each micro:bit to know its sequence number on startup. If you're able to initialise each micro:bit with a unique sequence number (e.g. 0, 1, 2, 3, 4, 5), then you can flash the same code on each micro:bit and just use the sequence number as an offset, i.e. setGroup(sequenceNumber)... setGroup(sequenceNumber + 1).
In the case of the first micro:bit those will be groups 0 and 1 respectively, in the case of the second micro:bit groups 1 and 2, and so on.
I can think of a few ways of giving each micro:bit its own unique sequence number on startup. One way is to have them all start at 0, and then use the buttons on each micro:bit to change the sequence number. Something like this:
let sequenceNumber = 0;
radio.setGroup(sequenceNumber)
input.onButtonPressed(Button.A, function () {
    if (sequenceNumber > 0) sequenceNumber--;
    radio.setGroup(sequenceNumber)
})
input.onButtonPressed(Button.B, function () {
    sequenceNumber++;
    radio.setGroup(sequenceNumber)
})
radio.onReceivedString(function (receivedString) {
    basic.showString(receivedString)
    // Forward on the next group up, then switch back to our own
    // group so we keep listening for the previous micro:bit.
    radio.setGroup(sequenceNumber + 1)
    radio.sendString(receivedString)
    radio.setGroup(sequenceNumber)
})
The above strategy requires you to go around to each micro:bit and manually set its sequence number after flashing it. If you flash it again, you'll have to repeat the process.
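One small, untested addition that can make that process less error-prone: show the unit's current sequence number on the LED display so you can verify it after setting it (this just uses the standard input and basic blocks):
// Untested sketch: press A+B to display this unit's current
// sequence number for verification.
input.onButtonPressed(Button.AB, function () {
    basic.showNumber(sequenceNumber)
})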
Another way to approach this is to have all micro:bits run the same program, except for one, which we'll refer to as the master. This master micro:bit keeps a list of all devices it's seen (over radio on a preset group, e.g. 0), and for every new micro:bit that requests a sequence number, it assigns a unique number and sends it back. Each of the other micro:bits starts up in an initialisation phase where it continuously requests a sequence number and polls until the master has assigned it one.
Something like the following:
Master:
let masterGroupNumber = 0; // predetermined group number for the master micro:bit
let currentSequenceNumber = 1;
let devices: { [key: string]: number } = {};
radio.setGroup(masterGroupNumber);
radio.onReceivedValue(function (name: string, value: number) {
    if (name === "uid") {
        // We've received a unique id. Assign it a sequence number
        // if it has not already been assigned.
        let uid = value.toString();
        if (!devices[uid]) {
            devices[uid] = currentSequenceNumber++;
        }
        // Caveat (untested): radio value names may be truncated to a
        // handful of characters, so a long serial number may need to
        // be shortened on both ends for this reply to match.
        radio.sendValue(uid, devices[uid]);
    }
})
All other micro:bits:
// Begin the program in the initialisation phase;
// we're waiting to be assigned a sequence number.
let masterGroupNumber = 0; // predetermined group number for the master micro:bit
let sequenceNumber = 0;
let hasSequenceNumber = false;
radio.setGroup(masterGroupNumber);
let uniqueDeviceId = control.deviceSerialNumber();
radio.onReceivedValue(function (name: string, value: number) {
    if (name === uniqueDeviceId.toString()) {
        sequenceNumber = value;
        hasSequenceNumber = true;
    }
})
// Poll till we've received a sequence number
while (!hasSequenceNumber) {
    // Broadcast our unique device id.
    radio.sendValue("uid", uniqueDeviceId);
    // Wait a little
    basic.pause(500);
}
// We've established communication with the master micro:bit
// and have received a sequence number, so switch to our own group.
radio.setGroup(sequenceNumber)
radio.onReceivedString(function (receivedString) {
    basic.showString(receivedString)
    // Forward on the next group up, then switch back to our own
    // group so we keep listening for the previous micro:bit.
    radio.setGroup(sequenceNumber + 1)
    radio.sendString(receivedString)
    radio.setGroup(sequenceNumber)
})
There are probably a few other ways you could go about this, but I hope this gives you some ideas.
ps: I did not have a chance to test if this code works. Let me know how it goes :)

Related

Facebook Messenger chat bot with Watson Conversation API context variable manipulation

I am facing a problem with a Facebook Messenger chat bot; the problem is with storing and updating a context variable.
My code is divided in two parts.
part 1:
var index = 0;
Facebookcontexts.forEach(function(value) {
    console.log(value.From);
    if (value.From == sender_psid) {
        FacebookContext.context = value.FacebookContext;
        console.log("Inside foreach " + JSON.stringify(FacebookContext.context));
        contextIndex = index;
    }
    index = index + 1;
});
Here I have created an array named Facebookcontexts to store contexts for different users. This is where I get the position of the user in the Facebookcontexts array, which is used later.
part 2:
if ((FacebookContext.context == null) || (Facebookcontexts.find(v => v.From == sender_psid) == undefined)) {
    Facebookcontexts.push({ "From": sender_psid, "FacebookContext": response.context })
    console.log("I am where sender is unknown" + JSON.stringify(Facebookcontexts) + "\n" + Facebookcontexts.length);
}
else if (Facebookcontexts.find(v => v.From == sender_psid) != undefined) {
    Facebookcontexts[contextIndex].FacebookContext = response.context;
    console.log("I am at where I know the sender" + JSON.stringify(Facebookcontexts) + "\n" + Facebookcontexts.length);
}
Here I decide in the if and else whether to create a new record or update an old one.
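For clarity, here is a compact, untested sketch of the create-or-update logic I intend, written with findIndex:
// Untested sketch of the intended create-or-update logic:
var i = Facebookcontexts.findIndex(function (v) { return v.From == sender_psid; });
if (i === -1) {
    // unknown sender: create a new record
    Facebookcontexts.push({ "From": sender_psid, "FacebookContext": response.context });
} else {
    // known sender: update the stored context
    Facebookcontexts[i].FacebookContext = response.context;
}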
Issue:
My issue is that the first condition, if((FacebookContext.context==null)||(Facebookcontexts.find(v=>v.From==sender_psid)==undefined)), passes on every message, and because of that the array length is 1 all the time.
I will look for some help from you guys.
Thanks in advance

Koa middleware - generator concurrency testing

I've hit a bit of an interesting roadblock in my attempt at writing unit tests for some middleware, as I can't seem to come up with a feasible means of faking two concurrent connections for a generator function which is a piece of Koa middleware.
I have a constructor function that takes some setup options and returns a generator. This generator has access, via closure, to some variables which increment per request and decrement when they complete. Here is a subset of the code to give you an idea of what I'm trying to accomplish.
module.exports = function (options = {}) {
    let connections = 0;
    let {
        max = 100
        ...
    } = options;
    return function *() {
        connections++
        ...
        if (connections > max) {
            connections--;
            // callback here
        }
        ...
    }
}
In simple terms, I want to be able to keep track of multiple simultaneous "connections" and fire a callback when a max number of requests has been met. However, in my test I get back a single instance of this generator and can only call it once, mimicking a single request, so I can never meet the connections > max conditional:
it("Should trigger callback when max connections reached", () => {
const gen = middleware({
max: 1,
onMax: function (current, max) {
this.maxReached = true;
}
}).call(context);
gen.next();
expect(context.maxReached).to.be.true;
});
Sometimes you just need a good night's sleep to dream up your answer. This was simply a matter of calling the same generator with two different contexts representing two different requests, and storing a value to test against on the latter. The counter still increments because I never return back up the middleware chain (the response) in order to decrement, so it's more of a fake concurrency.
const middleware = limiter({
    max: 1,
    onMax: function (current, max) {
        this.maxReached = true;
    }
});
middleware.call(reqContext).next();
middleware.call(secondReqContext).next();
expect(secondReqContext.maxReached).to.be.true;

Schedule/batch for large number of webservice callouts?

I'm new to Apex and I have to call a web service for every account (for some thousands of accounts).
Usually a single web service request takes 500 to 5000 ms.
As far as I know, schedulable and batchable classes are required for this task.
My idea was to group the accounts by country code (Europe only) and start a batch for every group.
The first batch is started by the schedulable class; the next ones start in the batch finish method:
global class AccValidator implements Database.Batchable<sObject>, Database.AllowsCallouts {

    private List<String> countryCodes;
    private Integer countryIndex;

    global AccValidator(List<String> countryCodes, Integer countryIndex) {
        this.countryCodes = countryCodes;
        this.countryIndex = countryIndex;
        ...
    }

    // Get accounts for the current country code
    global Database.QueryLocator start(Database.BatchableContext bc) {...}

    global void execute(Database.BatchableContext bc, List<Account> myAccounts) {
        for (Integer i = 0; i < myAccounts.size(); i++) {
            // Callout for every account
            HttpRequest request ...
            Http http = new Http();
            HttpResponse response = http.send(request);
            ...
        }
    }

    global void finish(Database.BatchableContext bc) {
        if (this.countryIndex < this.countryCodes.size() - 1) {
            // start the next batch
            Database.executeBatch(new AccValidator(this.countryCodes, this.countryIndex + 1), 200);
        }
    }

    global static List<String> getCountryCodes() {...}
}
And my schedulable class:
global class AccValidatorSchedule implements Schedulable {
    global void execute(SchedulableContext sc) {
        List<String> countryCodes = AccValidator.getCountryCodes();
        Id AccAddressID = Database.executeBatch(new AccValidator(countryCodes, 0), 200);
    }
}
Now I'm stuck with Salesforce's execution governors and limits:
For nearly all callouts I get the exceptions "Read timed out" or "Exceeded maximum time allotted for callout (120000 ms)".
I also tried asynchronous callouts, but they don't work with batches.
So, is there any way to schedule a large number of callouts?
Have you tried limiting your execute method's scope to 100 records? Salesforce only allows 100 callouts per transaction. I.e.
Id AccAddressID = Database.executeBatch(new AccValidator(countryCodes, 0), 100);
Perhaps this might help you:
https://salesforce.stackexchange.com/questions/131448/fatal-errorsystem-limitexception-too-many-callouts-101
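On top of the smaller scope size, a defensive check inside execute can keep you under the callout governor limit. This is a rough, untested sketch using the standard Limits methods; myAccounts is the batch scope from your execute method:
// Rough sketch (untested): stop before exceeding the callout
// limit for the current transaction.
for (Account acc : myAccounts) {
    if (Limits.getCallouts() >= Limits.getLimitCallouts()) {
        // Out of callouts for this transaction; choose a smaller
        // batch scope size so this branch is never reached.
        break;
    }
    HttpRequest request = new HttpRequest();
    // ... configure the request for this account ...
    HttpResponse response = new Http().send(request);
}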

How to report invalid data while processing data with Google dataflow?

I am looking at the documentation and the provided examples to find out how I can report invalid data while processing data with Google's dataflow service.
Pipeline p = Pipeline.create(options);
p.apply(TextIO.Read.named("ReadMyFile").from(options.getInput()))
 .apply(new SomeTransformation())
 .apply(TextIO.Write.named("WriteMyFile").to(options.getOutput()));
p.run();
In addition to the actual input/output, I want to produce a second output file that contains records which are considered invalid (e.g. missing data, malformed data, values too high). I want to troubleshoot those records and process them separately.
Input: gs://.../input.csv
Output: gs://.../output.csv
List of invalid records: gs://.../invalid.csv
How can I redirect those invalid records into a separate output?
You can use PCollectionTuples to return multiple PCollections from a single transform. For example,
TupleTag<String> mainOutput = new TupleTag<>("main");
TupleTag<String> missingData = new TupleTag<>("missing");
TupleTag<String> badValues = new TupleTag<>("bad");
Pipeline p = Pipeline.create(options);
PCollectionTuple all = p
    .apply(TextIO.Read.named("ReadMyFile").from(options.getInput()))
    .apply(new SomeTransformation());
all.get(mainOutput)
    .apply(TextIO.Write.named("WriteMyFile").to(options.getOutput()));
all.get(missingData)
    .apply(TextIO.Write.named("WriteMissingData").to(...));
...
PCollectionTuples can either be built up directly out of existing PCollections, or emitted from ParDo operations with side outputs, e.g.
PCollectionTuple partitioned = input.apply(ParDo
    .of(new DoFn<String, String>() {
        public void processElement(ProcessContext c) {
            if (checkOK(c.element())) {
                // Shows up in partitioned.get(mainOutput).
                c.output(...);
            } else if (hasMissingData(c.element())) {
                // Shows up in partitioned.get(missingData).
                c.sideOutput(missingData, c.element());
            } else {
                // Shows up in partitioned.get(badValues).
                c.sideOutput(badValues, c.element());
            }
        }
    })
    .withOutputTags(mainOutput, TupleTagList.of(missingData).and(badValues)));
Note that in general the various side outputs need not have the same type, and data can be emitted any number of times to any number of side outputs (rather than the strict partitioning we have here).
Your SomeTransformation class could then look something like
class SomeTransformation extends PTransform<PCollection<String>,
                                            PCollectionTuple> {
    public PCollectionTuple apply(PCollection<String> input) {
        // Filter into good and bad data.
        PCollectionTuple partitioned = ...
        // Process the good data.
        PCollection<String> processed =
            partitioned.get(mainOutput)
                .apply(...)
                .apply(...)
                ...;
        // Repackage everything into a new output tuple.
        return PCollectionTuple.of(mainOutput, processed)
            .and(missingData, partitioned.get(missingData))
            .and(badValues, partitioned.get(badValues));
    }
}
Robert's suggestion of using sideOutputs is great, but note that this will only work if the bad data is identified by your ParDos. There currently isn't a way to identify bad records hit during initial decoding (where the error is hit in Coder.decode). We've got plans to work on that soon.

Creating a cached counter path for large data lists in Firebase

Using Firebase to count the total records is done this way:
var table = new Firebase('http://beta.firebase.com/user/tablename');
table.on('value', function(snapshot) {
    var count = 0;
    snapshot.forEach(function() {
        count++;
    });
    // count is now safe to use.
});
Is there a way to avoid enumeration by having a cached counter in a different path?
I was thinking of some "counter" object which keeps the history of changes and the last computed value:
counter:
{
    value: 672,
    history:
    {
        +2, -4, +1, +1, +1
    }
}
Then, in a transaction: pick one history item, update the value, and remove the history item.
Also, who would be responsible for doing this?
Here's an example that combines the idea of a counter with an incremental, numeric ID. For your use case, you could skip the ID portion, but the principles are still the same.
The core of this is a transaction that adds one to your counter whenever you create a new record:
var fb = new Firebase('http://beta.firebase.com');
// stores an incremental id before adding the record
function incRecord(data) {
    // increment the counter
    fb.child('counter/value').transaction(function(currentValue) {
        return (currentValue || 0) + 1;
    }, function(err, committed, ss) {
        if( err ) {
            console.error(err);
        }
        else if( committed ) {
            // if you want to pass the counter into the data,
            // just use ss.val() here to fetch it
            addRecord(data);
            // you could also store an audit history about changes to
            // the counter (assuming we had a user ID or something to
            // that effect), but you don't need it to increment:
            // fb.child('counter/history/' + ss.val()).set(userId);
        }
    });
}
// creates the new record with whatever value was passed in
function addRecord(data) {
    fb.child('records').push(data, function(err) {
        err && console.error(err);
    });
}
Then invoke it by calling something like this:
incRecord({ hello: 'world' });
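And whenever you need the total later, you can read the cached counter directly instead of enumerating the records (a small sketch using the same API as above):
// Read the cached count without enumerating all the records.
fb.child('counter/value').on('value', function(snapshot) {
    var count = snapshot.val() || 0;
    // count is now safe to use
});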