OMNeT++ and MiXiM - C++

I am trying to run simulations for 802.15.4a devices.
For this reason I am using MiXiM, which provides very useful modules.
More specifically, I want to start with a first (very simple) configuration in which two Host802154A hosts communicate with each other.
I then created a network.ned as follows:
package eval;

import inet.physicallayer.ieee802154.bitlevel.Ieee802154UWBIRRadioMedium;
import org.mixim.modules.node.Host802154A;

//
// TODO documentation
//
network env
{
    @display("bgb=639,446");
    submodules:
        dev1: Host802154A {
            @display("p=128,166");
        }
        dev2: Host802154A {
            @display("p=402,166");
        }
        ieee802154Medium: Ieee802154UWBIRRadioMedium {
            @display("p=513,37");
        }
}
I have checked many guides on running experiments, but I am not sure I understand how to 'start'.
I know I need an omnetpp.ini file, but what should it contain?
Do I have to define two .cc files for dev1 and dev2?
I just want to have the two devices exchanging messages, nothing more than that.

Indeed, you are going to need an omnetpp.ini file.
Check the OMNeT++ manual and the most important tutorial, the TicToc Tutorial.
A (somewhat outdated) quick-start guide is available here: https://omnetpp.org/pmwiki/index.php?n=Main.OmnetppInNutshell
As for additional .cc files: if you rely on the standard host definitions from MiXiM, you won't need any.
The basic MiXiM examples provide more insight into how MiXiM hosts and classes are instantiated and used, while the omnetpp.ini provides the correct parametrization.
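For a first experiment, a minimal omnetpp.ini could look like the sketch below. The [General] keys are standard OMNeT++ options; the commented lines are placeholders only, since the real parameter names (addresses, traffic settings, PHY options) depend on what Host802154A actually declares in its NED files - copy them from one of the MiXiM 802.15.4A examples.

[General]
network = eval.env       # the network defined in network.ned (package eval)
sim-time-limit = 100s

# Placeholder parameters -- check Host802154A's NED definition for the
# actual names before using anything like these:
# **.dev1.nic.address = 1
# **.dev2.nic.address = 2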

Get ScriptOrigin from v8::Module

It seems trivial, but I've searched far and wide.
I'm using this resource to make v8 run with ES Modules and I'm trying to implement my own search/load algorithm. Thus far, I've managed to make a simple system which loads a file from a known location; however, I'd like to implement external modules. This means that the known location is actually unknown throughout the application. Take the following directory tree as an example:
~/
  - index.js
      import 'module1_index'; // This is successfully resolved to /libs/module1/module1_index.js
/libs/module1/
  - module1_index.js
      export * from './lib.js' // This import fails because it is looking for ./lib.js in ~/source
  - lib.js
      export /* literally anything */
The above example begins by executing the index.js file from ~. When module1_index.js is executed, lib.js is looked for from ~ and consequently fails. In order to address this, files must be looked for relative to the file currently being executed; however, I have not found a means to do this.
First Attempt
I'm given the opportunity to look for the file in the callResolve method (main.cpp:280):
v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context, v8::Local<v8::String> specifier, v8::Local<v8::Module> referrer)
or in loadModule (main.cpp:197)
v8::MaybeLocal<v8::Module> loadModule(char code[], char name[], v8::Local<v8::Context> cx)
however, as mentioned, I have found no function by which to extract the ScriptOrigin from the module. I should mention that when files are successfully resolved, the ScriptOrigin is initialized with the exact path to the file, and is reliable.
Second Attempt
I set up a stack to keep track of the current file being executed. Every import that is made is pushed onto the stack; once the file has finished executing, it is popped. This also did not work, as there was no way to reliably determine when the file had finished executing.
It seems that the loadModule function does just that: loads. It does not execute, so I cannot pop after the module has loaded, as the imports are not yet fully resolved. The checkModule/execModule functions are only invoked on dynamic imports, making them useless for determining the completion of a static import.
I'm at a loss. I'm not familiar enough with v8 to know where to look, although I have dug through some NodeJS source code looking for an implementation, to no avail.
Any pointers are greatly appreciated.
Thanks.
Jake.
I don't know much about module resolution, but looking at V8's sources, I can see an example mapping a v8::Module to a std::string absolute_path, which sounds like what you're looking for. I'm not copying the whole code here, because the way it uses custom metadata is a bit involved; the short story is that it keeps a std::unordered_map to keep data about each module's source on the side. (I wonder if it would be possible to use Module::ScriptId() as that map's key, for simplification.)
Code search finds a bunch more example uses of InstantiateModule, mostly in tests. Tests often serve as useful examples/documentation :-)
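Building on that idea, here is a rough C++ sketch of such a side table, adapted to the loadModule/callResolve signatures from the question. Everything here is an assumption rather than the linked project's actual code: readFile() stands in for however the embedder loads sources, name is assumed to arrive as an absolute path, and both the ScriptOrigin constructor arguments and the availability of Module::ScriptId() vary between V8 versions.

#include <string>
#include <unordered_map>
#include <v8.h>

std::string readFile(const std::string& path);  // assumed embedder helper

// Side table: script id of a compiled module -> absolute path of its source.
static std::unordered_map<int, std::string> modulePaths;

v8::MaybeLocal<v8::Module> loadModule(char code[], char name[],
                                      v8::Local<v8::Context> cx) {
    v8::Isolate* isolate = cx->GetIsolate();
    // Constructor arguments differ between V8 versions; the last argument
    // here marks the source as a module.
    v8::ScriptOrigin origin(
        v8::String::NewFromUtf8(isolate, name).ToLocalChecked(),
        v8::Local<v8::Integer>(), v8::Local<v8::Integer>(),
        v8::Local<v8::Boolean>(), v8::Local<v8::Integer>(),
        v8::Local<v8::Value>(), v8::Local<v8::Boolean>(),
        v8::Local<v8::Boolean>(), v8::True(isolate) /* is_module */);
    v8::ScriptCompiler::Source source(
        v8::String::NewFromUtf8(isolate, code).ToLocalChecked(), origin);
    v8::Local<v8::Module> module;
    if (!v8::ScriptCompiler::CompileModule(isolate, &source).ToLocal(&module))
        return v8::MaybeLocal<v8::Module>();
    // Remember where this module's source lives, keyed by its script id.
    modulePaths[module->ScriptId()] = name;
    return module;
}

v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context,
                                       v8::Local<v8::String> specifier,
                                       v8::Local<v8::Module> referrer) {
    // Resolve './lib.js' against the referrer's directory, not the cwd.
    // (The lookup of bare specifiers such as 'module1_index' is elided.)
    std::string base = ".";
    auto it = modulePaths.find(referrer->ScriptId());
    if (it != modulePaths.end())
        base = it->second.substr(0, it->second.find_last_of('/'));
    v8::String::Utf8Value spec(context->GetIsolate(), specifier);
    std::string path = base + "/" + *spec;  // naive join; no '..' handling
    std::string code = readFile(path);
    return loadModule(&code[0], &path[0], context);
}

Because each module is compiled once per isolate, the map never needs invalidation; Module::GetIdentityHash() could serve as the key instead where ScriptId() is unavailable in your V8 version.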

How to specify a remote preprocessor include path like 192.0.2.17://usr/include

Is it possible to specify a C/C++ include path to a remote preprocessor server?
The point here is to have one central location for header files. This makes upgrades, version consistency, and a host of other things much better than people running all willy-nilly including different versions of things.
Minimal, Complete, and Verifiable Example
A typical include follows. On Linux, it would default to /usr/include/ or the like; in Windows VS, to a location specified in the $(IncludePath) variable.
#include <iostream>

int main() {
    std::cout << "hello, world" << std::endl;
    return 0;
}
Now imagine that we set our include path as follows:
C_INCLUDE_PATH=192.0.2.17://usr/include;/usr/include;
The above would first check the remote server at 192.0.2.17 to see if the iostream library existed. Failing this, /usr/include would be checked.
This is a bit of a stretch to illustrate the point:
#include <192.0.2.17://iostream>

int main() {
    std::cout << "hello, world" << std::endl;
}
Thanks, Keith :^)
Since you want version control anyway, you could just use git (like thousands of other projects), so each user has a local clone of everything needed.
To answer the original question: No. I'm not aware of any preprocessor supporting such an include scheme.
I'm not aware of any compiler that retrieves include files or libraries remotely, so this is not something you can do directly.
The best you can do is have these dependencies on an NFS share that you can mount and then add that path to your include path.
I wouldn't put references to this in the code like that, and as dbush said, you'd have to enhance the preprocessor.
But there might be cute ways to do this within the Make system. That is, if you're using Make (for instance), you could add steps to the Makefile that force a refresh of data.
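For instance (a rough sketch, assuming the header server allows rsync access; the address and paths are this question's placeholders), a Makefile could mirror the central headers into a local directory that then goes on the include path:

# Recipe lines below must be indented with a TAB.
REMOTE_HEADERS = 192.0.2.17:/usr/include
LOCAL_HEADERS  = ./remote-include

.PHONY: fetch-headers
fetch-headers:
	rsync -az $(REMOTE_HEADERS)/ $(LOCAL_HEADERS)/

# Every build refreshes the mirrored headers first.
hello: fetch-headers hello.cpp
	$(CXX) -I$(LOCAL_HEADERS) hello.cpp -o hello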
However, I would suggest this is WRONG because it's not just the include files that need to be fresh. If an include has changed, the related code has probably also changed, and you would need those changes, too. Your magic #include stuff isn't going to do a thing to make sure people have the right code / libraries that the includes are for.
I'm not sure why proper use of source code repositories doesn't already handle this for you.

Can I use all the functionality I see exposed by the .TLH file?

Background:
I have existing code that uses functionality provided by Microsoft to post XML data over HTTP. Specifically, IServerXMLHTTPRequest (included in MSXML3 and up) from msxml4.dll (COM). I am moving to msxml6.dll, as msxml4.dll is not supported anymore (superseded by MSXML6). More information about MSXML versions.
Code:
#import "msxml6.dll"
using namespace MSXML2;
…
IServerXMLHTTPRequestPtr spIXMLHTTPRequest = NULL;
hr = spIXMLHTTPRequest.CreateInstance(__uuidof(ServerXMLHTTP40));
Problem:
When building my app with msxml4.dll as well as msxml6.dll the following is included in the msxml4.tlh and msxml6.tlh respectively:
struct __declspec(uuid("88d969c6-f192-11d4-a65f-0040963251e5"))
ServerXMLHTTP40;
// [ default ] interface IServerXMLHTTPRequest2
As I understand, looking at msxml6.tlh, I can use ServerXMLHTTP40 (and not change the code to ServerXMLHTTP60) with msxml6.dll (same for DOMDocument40, FreeThreadedDOMDocument40, XMLSchemaCache40 etc.).
Now, searching the registry in a fresh Windows 7 Ultimate installation, I cannot find the uuid above. As a result, this code fails on this machine:
hr = spIXMLHTTPRequest.CreateInstance(__uuidof(ServerXMLHTTP40));
Questions:
If msxml6 is exposing ServerXMLHTTP40, why is it that I cannot find it in the registry? Can I use ServerXMLHTTP40 when msxml6 is installed (msxml4 is not installed)?
Need additional information? Just let me know. Thank you!
A .TLH file (a product of #import from a .TLB, which is in turn a compiled version of an .IDL file) is a description of the interfaces, structures, methods etc. which one uses to talk through COM to another object. There is no guarantee or promise that the other party implementing these interfaces is installed or otherwise available, or even exists at all.
Yes, you have the signatures defined for your convenience, but you might still need to install the runtime that implements the functionality. MSXML 4 might need a separate install regardless of where you obtained the development details from.
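As a hedged sketch (not something the answer above prescribes): you could try the 6.0 coclass first, since a machine with only msxml6.dll registers that one, and fall back to ServerXMLHTTP40 only where MSXML 4 is actually installed:

#import "msxml6.dll"
using namespace MSXML2;

IServerXMLHTTPRequestPtr spRequest;
// Registered by msxml6.dll itself, so this works on an MSXML6-only machine.
HRESULT hr = spRequest.CreateInstance(__uuidof(ServerXMLHTTP60));
if (FAILED(hr))
    // Only registered when MSXML 4 is separately installed.
    hr = spRequest.CreateInstance(__uuidof(ServerXMLHTTP40));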

Dojo build to single file

I want to build my Dojo JavaScript code, which I have carefully structured into packages, into a single JavaScript file. I'm a little confused as to how to do it.
For now I have this:
var profile = {
    ...
    layers: {
        'app': {
            include: [
                'dojo/module1',
                'dojo/module2',
                ...,
                'dojo/moduleN',
                'package2/module1',
                'package2/module2',
                ...,
                'package2/moduleN'
            ]
        }
    }
    ...
};
Do I really have to manually add all the modules to the app layer? Can't I just say "all", or better yet, "all referenced"? I don't want to include the dojo/something module if I don't use it. Also, in my release folder, one file is all I would like to have.
So - can this even be achieved? Clean Dojo automatic build of only referenced modules into a single (minified and obfuscated of course) JavaScript file?
Take a look at the examples in the Layers section of this build tutorial:
It’s also possible to create a custom build of dojo.js; this is particularly relevant when using AMD, since by default (for backwards compatibility), the dojo/main module is added automatically by the build system to dojo.js, which wastes space by loading modules that your code may not actually use. In order to create a custom build of dojo.js, you simply define it as a separate layer, setting both customBase and boot to true:
var profile = {
    layers: {
        "dojo/dojo": {
            include: [ "dojo/dojo", "app/main" ],
            customBase: true,
            boot: true
        }
    }
};
You can include an entire "app" in a single layer by including the root of that app (or module). Note that if a module in that app is not explicitly required by that app, it would have to be included manually. See the second example in the Layers section in the above tutorial for an illustration of that.
You can also define packages to include in your layers, if you want to change or customize the layout of your project:
packages: [
    { name: 'dojo', location: 'other/dojotoolkit/location/dojo' },
    /* ... */
],
layers: {
    'dojo/dojo': { include: ['dojo/dojo'] },
    /* ... */
}
You don't have to specify all the modules if the module you add already has dependencies on others. For example, if you include 'app/MainApplication' in a layer, the builder will include all the modules that app/MainApplication depends on. If your MainApplication.js touches everything in your project, everything will be included.
During the build of a layer, Dojo parses the require() and define() calls in every module and builds the dependency tree from them. NLS resources are also included.
You should name your layer after a file in an existing package; in my build, naming a layer with a single word caused errors. That is, write:

var profile = {
    layers: {
        'existingPackage/fileName': {
            ...
        }
    }
};
If you want to end up with exactly one file, you have to include 'dojo/dojo' in your layer and specify the customBase and boot flags.
Dojo always builds every package before building layers, so you will always have dojo and dijit folders in your release directory containing minified versions of the Dojo files.
Just copy the layer file you need and delete everything else.
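Putting the two answers together, a complete single-file profile might look like this sketch, where 'app' and 'app/main' are placeholders for your own package and entry module:

var profile = {
    basePath: '.',
    releaseDir: 'release',
    packages: [
        { name: 'dojo', location: 'dojo' },
        { name: 'app',  location: 'app' }
    ],
    layers: {
        // The loader plus everything app/main transitively requires ends
        // up in release/dojo/dojo.js -- the one file you keep.
        'dojo/dojo': {
            include: [ 'dojo/dojo', 'app/main' ],
            customBase: true,  // don't implicitly pull in dojo/main
            boot: true         // the layer bootstraps itself
        }
    }
};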

Joomla: Plugins for a plugin - how to 'advertise' capabilities?

I am working on a plugin which will have its own plugins to handle various events.
Now I'm thinking of enabling these plugins to add their own "commands", but I wonder how to handle that most efficiently. I have a list of my own commands which I search for in the article anyway. Should I just trigger a DoWhatYouWant($article) event - or, since I do the searching (and parsing of params) anyway, could I build a global command list and then trigger an ExecuteCommand($article, $cmd, $params) event? The latter sounds nicer, but then (I think) I'd have to build this command list so that my program knows what to look for, which means every plugin would have to somehow 'advertise' the names of the commands it can handle - and I have no idea how that could be done.
Or is there a better (more standardized?) approach?
If you import your plugins through the plugin helper
JPluginHelper::importPlugin('mycmdplugins');
then you can get all the commands supported by your sub plugins like this:
$cmds = JDispatcher::getInstance()->trigger('onMyAwesomeCmds');
With the $cmds variable you now know which commands the sub plugins support, and you can parse the article for them. Then you can do:
foreach ($cmds as $cmd) {
    preg_match_all("{".$cmd."*}", $article->text, $matches, PREG_SET_ORDER);
    if (!empty($matches)) {
        JDispatcher::getInstance()->trigger('onMyAwesome'.ucfirst($cmd), array($article, $params));
    }
}
To eliminate more repetition, I suggest having the additional plugins extend a common base class in your plugins folder.
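For illustration, a sub plugin might look like the sketch below; the group name mycmdplugins and the event names follow the snippets above, while the 'weather' command and its output are made up:

// plugins/mycmdplugins/weather/weather.php (hypothetical)
class PlgMycmdpluginsWeather extends JPlugin
{
    // Answers the discovery trigger; JDispatcher collects each sub
    // plugin's return value into the $cmds array.
    public function onMyAwesomeCmds()
    {
        return 'weather';
    }

    // Called as 'onMyAwesome' . ucfirst('weather') when the main plugin
    // finds the command in an article.
    public function onMyAwesomeWeather($article, $params)
    {
        $article->text = str_replace('{weather}', 'Sunny, 25 °C', $article->text);
    }
}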