Observe multiple resources of a Contiki device with CoAP - web-services

This question concerns the CoAP protocol and the CoRE link format as implemented in Contiki-OS.
Suppose a Contiki device shares several resources:
Sensors
Temperature
Humidity
Motion
Battery voltage
Solar panel voltage
A client can access each of these resources at its respective URL, for example:
REQ: GET /sensors/humidity
Each of these resources is periodic (except Motion, obviously) and observable, but the default configuration limits the maximum number of allowed observers to 3.
We could increase this number to match the number of observable resources and create an observer for each of them (I don't know what the consequences would be). We could also create a global "Sensors" periodic resource and share all the resources at once.
But is there a better way to do this? Does the standard provide a mechanism that allows combining several periodic resources under a single observation?
Thanks.

The CoAP observe draft says:
If multiple subjects are of interest to an observer, the observer must register separately for all of them.
So, to answer your question: no, there is no standard way. At most you could add another URL that returns all the sensors at once (but IMHO that is a very bad solution).
Instead, I would just override the default maximum-observers setting. In fact, the observe draft doesn't say anything about a maximum number of observers; it's simply up to you to set one so that your device doesn't crash because of memory allocation.
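In Contiki's er-coap implementation the observer limit is a compile-time constant, so overriding it is a one-line change in your project configuration. A minimal sketch, with the caveat that the exact macro names vary between Contiki versions (COAP_MAX_OBSERVERS and COAP_MAX_OPEN_TRANSACTIONS here are assumptions; verify them against er-coap-observe.h in your tree):

/* project-conf.h -- a sketch only. Check er-coap-observe.h in your Contiki
 * tree for the exact macro names; some versions derive the observer limit
 * from COAP_MAX_OPEN_TRANSACTIONS instead of exposing it directly. */
#ifndef PROJECT_CONF_H_
#define PROJECT_CONF_H_

/* One observer slot per observable resource (five periodic resources here). */
#undef  COAP_MAX_OBSERVERS
#define COAP_MAX_OBSERVERS          5

/* Each pending notification also needs a transaction slot. */
#undef  COAP_MAX_OPEN_TRANSACTIONS
#define COAP_MAX_OPEN_TRANSACTIONS  6

#endif /* PROJECT_CONF_H_ */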

Related

Algorithm or data structure for broadcast messages in 3D

Let's say some threads produce data, and every piece of data has an associated 3D coordinate. Other threads consume these data, and every consumer thread has a cubic volume of interest described by a center and a "radius" (the size of the cube). Consumer threads can update their cube-of-interest parameters (e.g. move the cube) from time to time. Every piece of data is broadcast: a copy of it should be received by every thread whose cube of interest includes that coordinate.
What multi-threaded data structure can be used for this with the best performance? I am using C++, but a pointer to a generic algorithm is fine too.
Bonus: it would be nice if the algorithm could generalize to multiple network nodes (some nodes produce data and some consume it, under the same rules as for threads).
Extra information: there are more consumers than producers, and there are many more data broadcasts than cube-of-interest changes (cube size changes are very rare, but moving the cube is quite a common event). It's okay if a consumer starts receiving data from the new cube of interest after some delay following the change (but until then it should continue receiving data from the previous cube).
Your terminology is problematic. A cube by definition does not have a radius; a sphere does. A broadcast by definition is received by everyone, not only by those who are interested; that would be a multicast.
I have encountered this problem in the development of an MMORPG. The approach taken in the development of that MMORPG was a bit wacky, but in the decade that followed my thinking has evolved so I have a much better idea of how to go about it now.
The solution is a bit involved, but it does not require any advanced notions like space partitioning, and it is reusable for all kinds of information that the consumers will inevitably need besides just 3D coordinates. Furthermore, it is reusable for entirely different projects.
We begin by building a light-weight data modelling framework which allows us to describe, instantiate, and manipulate finite, self-contained sets of inter-related observable data known as "Entities" in memory and perform various operations on them in an application-agnostic way.
Description can be done in simple object-relational terms. ("Object-relational" means relational with inheritance.)
Instantiation means that given a schema, the framework creates a container (an "EntitySpace") to hold, during runtime, instances of entities described by the schema.
Manipulation means being able to read and write properties of those entities.
Self-contained means that although an entity may contain a property which is a reference to another entity, the other entity must reside within the same EntitySpace.
Observable means that when the value of a property changes, a notification is issued by the EntitySpace, telling us which property of which entity has changed. Anyone can register for notifications from an EntitySpace, and receives all of them.
Once you have such a framework, you can build lots of useful functionality around it in an entirely application-agnostic way. For example:
Serialization: you can serialize and de-serialize an EntitySpace to and from markup.
Filtering: you can create a special kind of EntitySpace which does not contain storage, and instead acts as a view into a subset of another EntitySpace, filtering entities based on the values of certain properties.
Mirroring: You can keep an EntitySpace in sync with another, by responding to each property-changed notification from one and applying the change to the other, and vice versa.
Remoting: You can interject a transport layer between the two mirrored parts, thus keeping them mirrored while they reside on different threads or on different physical machines.
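To make the "observable" part concrete, here is a minimal C++ sketch of such an EntitySpace. All names are illustrative (nothing here comes from an existing library): entities are plain IDs with named properties, every write goes through the space, and the space notifies every registered listener of every change.

#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative sketch: a single kind of "property changed" notification.
struct PropertyChanged {
    std::uint64_t entityId;
    std::string   property;
    double        newValue;   // a real framework would support more value types
};

class EntitySpace {
public:
    using Listener = std::function<void(const PropertyChanged&)>;

    std::uint64_t createEntity() { return nextId_++; }

    // Every write goes through the space, so it can notify observers.
    void setProperty(std::uint64_t id, const std::string& name, double value) {
        entities_[id][name] = value;
        const PropertyChanged change{id, name, value};
        for (const auto& listener : listeners_)
            listener(change);
    }

    double getProperty(std::uint64_t id, const std::string& name) const {
        return entities_.at(id).at(name);
    }

    // Anyone can register and receives all notifications; filtering comes later.
    void addListener(Listener l) { listeners_.push_back(std::move(l)); }

private:
    std::uint64_t nextId_ = 1;
    std::unordered_map<std::uint64_t,
                       std::unordered_map<std::string, double>> entities_;
    std::vector<Listener> listeners_;
};

int main() {
    EntitySpace space;
    const auto id = space.createEntity();
    space.addListener([](const PropertyChanged& c) {
        std::cout << "entity " << c.entityId << " " << c.property
                  << " -> " << c.newValue << "\n";
    });
    space.setProperty(id, "x", 10.0);   // prints one notification
}

Mirroring and remoting then fall out of this: a listener that re-applies each notification to a second EntitySpace (possibly on the other end of a socket) keeps the two in sync.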
Every node in the network must have a corresponding "agent" object running inside every node that it needs data from. If you have a centralized architecture (and I will continue under this hypothesis), this means that within the server you will have one agent object for each client connected to that server. The agent represents the client, so the fact that the client is remote becomes irrelevant. The agent is only responsible for filtering and sending data to the client that it represents, so multi-threading becomes irrelevant, too.
An agent registers for notifications from the server's EntitySpace and filters them based on whatever criteria you choose. One such criterion, for an entity which contains a 3D-coordinate property, can be whether that coordinate lies within the client's area of interest. The center-of-sphere-and-radius approach will work; the center-of-cube-and-size approach will probably work even better. (No need to calculate squares.)
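The cube-of-interest test mentioned above is just three absolute-value comparisons. A small sketch, with illustrative names:

#include <cmath>

struct Vec3 { double x, y, z; };

// Axis-aligned cube described by its center and half its edge length.
struct CubeOfInterest {
    Vec3   center;
    double halfSize;

    bool contains(const Vec3& p) const {
        return std::abs(p.x - center.x) <= halfSize &&
               std::abs(p.y - center.y) <= halfSize &&
               std::abs(p.z - center.z) <= halfSize;
    }
};

// An agent would call contains() on every position notification and forward
// only the matching ones to the client it represents.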

ECS and appropriate usage in games

I've been reading about Entity-Component-Systems and I think I understand the basic concept:
Entities are just IDs with their Components stored in Arrays to reduce cache misses. Systems then iterate over one or more of these Arrays and process the data contained in the Components.
But I don't quite understand how these systems are supposed to efficiently and cleanly interact with one another.
1: If my entity has a health component, how would I go about damaging it?
Just doing health -= damage wouldn't account for dying when health drops to or below 0. But adding a damage() function to the component would defeat the point of components being data only. Basically: how do systems process components that need to respond to changes and modify other components based on those changes? (Without copy-and-pasting the damage code into every system that can possibly inflict damage.)
2: Components are supposed to be data-only structs with no functions. How do I best approach entity-specific behaviour like exploding on death? It seems impractical to fill the Health component with memory-wasting data like explodesOnDeath=false when only one or two out of many entities will actually explode on death. I am not sure how to solve this elegantly.
Is there a common approach to these problems?
Ease of modification (for example with Lua scripts) and a good chance of compatibility are important to me, as I really like games with high modding potential. :)
Used Language: C++
I am also new to the field, but here are my experiences with ECS models:
How do systems process components which need to respond to their changes and change other components based on their changes?
As you correctly pointed out, the components are just containers of data, so don't give them functions. All the logic is handled by the systems, and each new piece of logic is handled by a different system. So it's a good choice to separate the logic of "dealing damage" from "killing an entity". The communication between the DamageSystem and the DeathSystem (in other words, deciding when an entity should be killed) can then be based on the HealthComponent.
Possible implementation:
You typically have one system (the DamageSystem) that calculates the new health of an entity. For this purpose it can use all sorts of information (components) about the entity (maybe your entities have some shield to protect them, etc.). If the health falls to or below 0, the DamageSystem does not care, as its only purpose is to contain the logic of dealing damage.
Besides the DamageSystem, you also want some sort of DeathSystem that checks for each entity whether its health is at or below 0. If that is the case, some action is taken. As every entity does something on its death (which is why your explodesOnDeath=false is not a bad idea), it is useful to have a DeathComponent that stores some kind of enum for the death animation (e.g. exploding or just vanishing), a path to a sound file (e.g. a fancy exploding sound), and whatever else you need.
With this approach, all the damage calculation lives in one place and is separated from, for example, the logic handling the death of an entity.
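Here is a minimal C++ sketch of that split. The component store is a toy stand-in for a real ECS registry, and all names (World, DamageEvent, and so on) are illustrative:

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using EntityId = std::uint32_t;

// Components are plain data.
struct HealthComponent { int health = 100; };

enum class DeathAnimation { Vanish, Explode };
struct DeathComponent {
    DeathAnimation animation = DeathAnimation::Vanish;
    std::string    soundFile;          // e.g. "explosion.wav"
};

// Toy component store standing in for a real ECS registry.
struct World {
    std::unordered_map<EntityId, HealthComponent> health;
    std::unordered_map<EntityId, DeathComponent>  death;
};

// The DamageSystem only knows how to apply damage; it does not care about death.
struct DamageEvent { EntityId target; int amount; };

struct DamageSystem {
    void update(World& world, const std::vector<DamageEvent>& events) {
        for (const auto& e : events) {
            auto it = world.health.find(e.target);
            if (it != world.health.end())
                it->second.health -= e.amount;   // shields, armour, etc. would go here
        }
    }
};

// The DeathSystem reacts to the state the DamageSystem left behind.
struct DeathSystem {
    void update(World& world) {
        for (auto& [id, hp] : world.health) {
            if (hp.health > 0) continue;
            auto it = world.death.find(id);
            if (it != world.death.end() &&
                it->second.animation == DeathAnimation::Explode)
                std::cout << "entity " << id << " explodes ("
                          << it->second.soundFile << ")\n";
            else
                std::cout << "entity " << id << " vanishes\n";
            // a real system would also queue the entity for destruction
        }
    }
};

int main() {
    World world;
    world.health[1] = {};                                                // a plain entity
    world.health[2] = {};
    world.death[2]  = DeathComponent{DeathAnimation::Explode, "boom.wav"};  // only this one explodes

    DamageSystem damage;
    DeathSystem  death;
    damage.update(world, {{1, 120}, {2, 150}});
    death.update(world);
}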
Hope this helps!

Is it necessary to include GameObjects whose physics are deterministic in worldUpdate?

In order to reduce the data transfer size and the computational time of serializing world objects for each worldUpdate, I was wondering whether it is possible to omit syncs for objects whose physics can be entirely and faithfully simulated by the client-side gameEngine (they are not playerObjects, so playerInput does not affect them directly, and their physics are entirely deterministic). Interactions with these GameObjects would be handled entirely by GameEvents, which are much less frequent. I feel like this should be possible if the client runs the same physics as the server and has access to the same initial conditions.
When I try to omit GameObjects from subsequent worldUpdates, I see that their motion becomes more choppy and they move faster than if they were not omitted; however, when I stop the game server while keeping the client open, their motion is more like what I would expect if I hadn't omitted them. This is all on my local machine with extrapolation synchronization.
The short answer is that the latest version of Lance (1.0.8 at the time of this writing) doesn't support user-controlled omission of game objects from world updates, but it does implement a diffing mechanism that omits objects from an update if their netScheme properties haven't changed, saving bandwidth.
This means that if you have static objects, such as walls, they will only get transmitted once per player. Not transmitting them at all would be an interesting feature to have.
If objects you're referring to are not static, then there is no real way to know their position deterministically. You might have considered using the world step count, but different clients process different world steps at different times due to the web's inherent latency. A client can't know what is the true step being handled by the server at a given point in time, so it cannot deterministically decide on such an object's position. This is why Lance uses the Authoritative server model - to allow one single source of truth, and make sure clients are synched up.
If you still want to manually avoid sending updates for an object, you can edit its netScheme so that it doesn't return anything but its id, for example:
static get netScheme() {
    return {
        id: { type: Serializer.TYPES.INT32 }
    };
}
This is not a typical use, for the reasons mentioned above, so if you encounter specific sync issues and this is still a feature you're interested in, it's best to submit a feature request in the Lance issue tracker. Make sure to include details on your use case to promote a healthy discussion.

Generating a Hardware-ID on Windows

What is the best way to generate a unique hardware ID on Microsoft Windows with C++ that is not easily spoofable (for example by changing the MAC address)?
Windows stores a unique GUID per machine in the registry at:
HKEY_LOCAL_MACHINE\Software\Microsoft\Cryptography\MachineGuid
This used to be the CPU serial number, but today there are many types of motherboards and that factor is no longer accurate. The MAC address can easily be forged. That leaves us with the internal hard drive's serial number. See also: http://www.codeproject.com/Articles/319181/Haephrati-Searching-for-a-reliable-Hardware-ID
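For completeness, a minimal C++ sketch that reads this value with the Win32 registry API (error handling kept to a bare minimum; KEY_WOW64_64KEY matters when a 32-bit build runs on 64-bit Windows):

#include <windows.h>
#include <iostream>
#include <string>

// Reads HKLM\SOFTWARE\Microsoft\Cryptography\MachineGuid, or "" on failure.
std::string ReadMachineGuid()
{
    HKEY key = nullptr;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Cryptography",
                      0, KEY_READ | KEY_WOW64_64KEY, &key) != ERROR_SUCCESS)
        return {};

    char buffer[64] = {};
    DWORD size = sizeof(buffer);
    DWORD type = 0;
    std::string guid;
    if (RegQueryValueExA(key, "MachineGuid", nullptr, &type,
                         reinterpret_cast<LPBYTE>(buffer), &size) == ERROR_SUCCESS &&
        type == REG_SZ)
        guid.assign(buffer);

    RegCloseKey(key);
    return guid;
}

int main()
{
    std::cout << "MachineGuid: " << ReadMachineGuid() << "\n";   // link with Advapi32.lib
}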
There are a variety of "tricks", but the only real "physical answer" is "no, there is no solution".
A "machine" is nothing more than a passive bus with some hardware around.
Although each piece of hardware can provide a somewhat usable identifier, every piece of hardware can be replaced by the user, for good or bad reasons you can never be fully aware of (so if you base your functionality on this, you create problems for your user, and hence for yourself, every time a piece of hardware has to be replaced, reinitialized, reconfigured, and so on).
Now, if your problem is to identify a machine in a context where many machines have to interoperate, that role is well played by MAC addresses, IP addresses, or hostnames. But be prepared for the fact that they are not necessarily constant over long time periods (so avoid hard-coding them; instead, "discover" them on every start-up).
If your problem is instead to identify a software instance or a licence, you would probably do better to concentrate on another kind of solution: you sell licences to "users" (it is the user who has the money, not their computer!), not to their "machines" (which users must be free to change whenever they need or like, without your permission, since you didn't license the hardware or the OS...). Hence your problem is not to identify a machine but a USER (consider that the same machine can host many users and the same user can work on a variety of machines; you cannot assume or impose a 1:1 relation without running into problems sooner or later, when that assumption is found to no longer fit).
The idea should be to register the users on a site they can reach, give them keys you generate, and check that the same user/key pair is not used concurrently more than an agreed number of times within a given time period. When violations exceed the limit, or keys become old, just block and wait for the user to renew.
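As an illustration of that check, here is a hypothetical in-memory sketch; in practice this logic would live in your licence server and be backed by a database rather than a map:

#include <chrono>
#include <deque>
#include <string>
#include <unordered_map>

// Hypothetical sketch: track activations per licence key and refuse a new one
// if more than `maxConcurrent` happened within the sliding time window.
class LicenceGuard {
public:
    LicenceGuard(std::size_t maxConcurrent, std::chrono::seconds window)
        : maxConcurrent_(maxConcurrent), window_(window) {}

    bool tryActivate(const std::string& userKey) {
        const auto now = std::chrono::steady_clock::now();
        auto& recent = activations_[userKey];
        // Drop activations that fell out of the window.
        while (!recent.empty() && now - recent.front() > window_)
            recent.pop_front();
        if (recent.size() >= maxConcurrent_)
            return false;          // violation: block and ask the user to renew
        recent.push_back(now);
        return true;
    }

private:
    std::size_t maxConcurrent_;
    std::chrono::seconds window_;
    std::unordered_map<std::string,
                       std::deque<std::chrono::steady_clock::time_point>> activations_;
};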
As you can see, the answer depends mostly on the reason behind your question rather than on the question itself.
There are various IDs assigned to hardware that can be read and combined to form a machine key. For example, you could get the ID of the hard drive where the software is stored, the processor ID, etc. Some of these can be spoofed more easily than others, but part of the strength is in combining multiple pieces that are not necessarily strong enough by themselves.
Here is a program (also available as DLL) that can read and show your computer/hardware ID: http://www.soft.tahionic.com/download-hdd_id/index.html
Use Win32 System HDS APIs.
Don't read the registry; it makes no sense at all.

concurrency issue :reduce credit for account

Hi, we are trying to implement a process where, when a user does something, his company's credit is deducted accordingly.
But there is a concurrency issue when multiple users in one company participate in the process, because the credit gets deducted incorrectly.
Can anyone point me in the right direction for such an issue?
Thanks very much.
This is a classic problem that is entirely independent of the implementation language(s).
You have a shared resource that is maintaining a persistent data store. (This is typically a database, likely an RDBMS).
You also have a (business) process that uses and/or modifies the information maintained in the shared data store.
When this process can be performed concurrently by multiple actors, the issue of informational integrity arises.
The most common way to address this is to serialize access to the shared resource, so that operations against it occur in sequence.
This serialization can happen at the actor level or at the shared resource itself, and it can take many forms, such as queuing actions, using messaging, or using transactions at the shared resource. It is here that considerations such as the type of system, the application, and the platforms and systems in use become important and determine the design of the overall system.
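As a minimal in-process illustration of that serialization, the read-check-write sequence can be guarded by a lock so two users in the same company can never deduct from a stale balance (a sketch with hypothetical names; in a real system the same role is usually played by a database transaction or an atomic conditional UPDATE):

#include <mutex>
#include <stdexcept>

// Sketch: serialize the read-check-write so two users in the same company
// cannot both deduct from a stale balance at the same time.
class CompanyCredit {
public:
    explicit CompanyCredit(long initial) : balance_(initial) {}

    // Returns false (and changes nothing) if there is not enough credit.
    bool tryDeduct(long amount) {
        if (amount < 0) throw std::invalid_argument("negative amount");
        std::lock_guard<std::mutex> lock(mutex_);
        if (balance_ < amount)
            return false;
        balance_ -= amount;
        return true;
    }

    long balance() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return balance_;
    }

private:
    mutable std::mutex mutex_;
    long balance_;
};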
Take a look at the Wikipedia article on database transactions, and then Google your way to more technical content on this topic. You may also wish to look at messaging systems, and if you are feeling adventurous, read up on software transactional memory.