I am working on a project using
inet 3.6.6
veins 5 with veins_inet3
I need to set up a scenario that uses .poly.xml files generated by SUMO to model the obstacles in a city. I also need to use the IEEE 802.11p protocol available in Veins. I'm facing two problems:
I don't know how to include obstacles in the .poly.xml format when looking at the veins_inet example. I know that in INET (on its own) I can include obstacles using the *.physicalEnvironment.config variable in the .ini file. However, when I tried to include the .poly files that way it didn't work. Do I need to include an obstacles module?
I need to use the IEEE 802.11p protocol, but I'm not completely sure how to do that. What I think could work is to include the veinsMobility module in my node so that the channel gets updated. Also, I'm not sure whether I need to include the ConnectionManager module and the BaseWorldUtility.
I'd really appreciate any thoughts on this.
The Veins radio stack (global modules world, connectionManager, obstacles, and vehicleObstacles, and per-node modules appl, nic, and veinsmobility) is entirely separate from the INET Framework radio stack (global modules radioMedium and physicalEnvironment, and per-node modules app, udp, ipv4, wlan, and mobility). Both can be configured to model many variants of IEEE 802.11 WLAN, though each focuses on different aspects. In particular, the INET Framework can be configured to closely approximate a WLAN station using IEEE 802.11 OCB mode at 5.9 GHz ("802.11p"). The Veins module documentation goes into some more detail here.
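For the 802.11p part specifically: you would not add Veins' ConnectionManager or BaseWorldUtility to an INET-based node; you configure INET's own wlan module for OCB mode instead, and mobility comes from the veins_inet mobility module (called VeinsInetMobility in the veins_inet subproject) rather than from Veins' veinsmobility. A rough omnetpp.ini sketch of the relevant lines, in the style of the veins_inet example configuration (parameter names should be double-checked against your INET 3.6.6 installation):

*.node[*].wlan[0].opMode = "p"
*.node[*].wlan[0].radio.bandName = "5.9 GHz"
*.node[*].wlan[0].radio.channelNumber = 3
*.node[*].wlan[0].radio.transmitter.power = 20mW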
The Veins radio stack (as of Veins 5.1) by default instantiates every SUMO polygon that matches a given type (the Veins 5.1 example configuration uses polygon type building) as a radio obstacle. This is done at run time by querying the list of all existing polygons from the running SUMO simulation. In brief: there is typically no need to supply a manually-generated file with obstacle definitions; radio obstacles can be configured to simply "automatically" appear (again, as in the Veins 5.1 example configuration).
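For reference, the relevant pieces of the Veins 5.1 example configuration look roughly like this (the attenuation values shown are the ones the example uses, not universal constants; check examples/veins in your copy for the exact contents):

# omnetpp.ini: hand the obstacle definitions to the ObstacleControl module
*.obstacles.obstacles = xmldoc("config.xml")

<!-- config.xml: treat every SUMO polygon of type "building" as a radio obstacle -->
<obstacles>
    <type id="building" db-per-cut="9" db-per-meter="0.4" />
</obstacles>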
The INET Framework (as of version 4.3.0) does not allow dynamically creating radio obstacles at run time, so there is no "automatic" way to read and instantiate radio obstacles from SUMO polygons. Here, you would need to manually convert the SUMO .poly.xml file to an obstacles.xml definition file (or write a converter script to do that for you, which is straightforward to do).
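As a starting point, here is a minimal converter sketch in Python. It assumes SUMO polygons of type building carrying shape="x1,y1 x2,y2 ..." attributes, and INET's prism shape syntax; the fixed building height, the material, and the coordinate handling are assumptions you would need to adapt (in particular, check how INET's position attribute interacts with absolute SUMO coordinates and any network offset):

#!/usr/bin/env python3
# Sketch: convert a SUMO .poly.xml file into an INET physicalenvironment XML file.
import sys
import xml.etree.ElementTree as ET

BUILDING_HEIGHT = 10.0  # assumed constant height; SUMO polygons are 2D

def convert(poly_file, out_file):
    environment = ET.Element("environment")
    for poly in ET.parse(poly_file).getroot().iter("poly"):
        if poly.get("type") != "building":
            continue  # only polygons marked as buildings become obstacles
        # "x1,y1 x2,y2 ..." -> flat list of coordinate strings
        points = [c for pair in poly.get("shape").split() for c in pair.split(",")]
        ET.SubElement(environment, "object", {
            "position": "min 0 0 0",  # check INET's placement semantics for your data
            "shape": "prism %s %s" % (BUILDING_HEIGHT, " ".join(points)),
            "material": "brick",  # assumed; use a material your INET version defines
            "fill-color": "128 128 128",
        })
    ET.ElementTree(environment).write(out_file)

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])

The resulting file can then be passed to the node via the *.physicalEnvironment.config = xmldoc(...) mechanism you already found.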
Related
We have an OpenCart plugin that supports only version 3.0. We have been asked to add support for the previous version, OpenCart 2.3. Is there any way to do this in one plugin, or do we need to create a separate plugin for each version?
Yes, there are ways to do this. I think it's a huge pain to maintain fully, though, and it may cause you a huge support headache. It will require extra files: code to detect the appropriate version of OpenCart first, and then functions within those files to point to the version-specific folder structures with the appropriate files. You'll also have to accept that you are making people carry two sets of folder/file structures in their OpenCart directory when only one set is ever used by the version the plugin runs on. As an example, the marketplace and extension folders differ between the two versions you mention. These are some things to consider.
You'd have to set a global variable of some sort to detect and store the OC version first, something along the lines of:
global $oc_version;
$oc_version = (int)str_replace('.','',VERSION);
Then you would have a whole bunch of functions telling OpenCart what to do with your module depending on the version detected, such as specifying the path to run the module folder from, and whether to render twig or tpl templates. Something along the lines of:
if ($data['oc_version'] == 2300) {
    // Do stuff
} elseif ($data['oc_version'] == 3000) {
    // Do other stuff
}
However, the issue you'll encounter with my examples is that if the version someone is using is, say, 3.0.2.0 (and not 3.0.0.0) and there are no changes that actually affect your module, then matching on an exact OC version won't work. You'd have to change your operators to range checks, put more thought into it, etc. Hence, you'll have to keep re-modifying parts of the same code with every minor patch/version release. I don't see how this route saves you any work.
It is theoretically possible using https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=31589 with small modifications to the controller files, but I would prefer to convert the tpl templates to twig.
I'm writing a C++ library for an existing networking protocol (one with a document specifying the exact packet layout). As there are a considerable number of packet definitions, rather than writing all the serialization/deserialization methods manually, are there any serialization libraries capable of working from an explicit packet-layout specification?
I've been looking at things like Google Protobuf and Apache Thrift, but they seem to be focused on developing a server and client in tandem, where the packet layout does not matter as long as it is consistent across a single release of the software. I need to serialize to an existing specification, so I need to control the field ordering, lengths, endianness, etc. explicitly. Is there anything that can help make this less of a chore?
There is a library/toolset called PADS which should be ideal for this. See this SO answer here, the project home page here, and some GitHub-ish stuff here. There also seems to be some Haskell-related work here. I've just tried and succeeded in downloading PADS/C from the home page (note that the download server's username and password are given at the bottom of their license agreement).
It's a bit like writing a Google Protocol Buffer schema, except you're specifying bits/bytes in an arbitrary data stream, which is what you have.
I tried to get PADS/ML downloaded from https://github.com/yitzhakm/PADS-ML working some time ago, but ran into a lot of trouble and ultimately failed.
As you're working in C++ (and C is about as close to C++ as you're going to get), you might try the PADS/C library.
Can anyone explain whether the DLNA standard makes it possible to pass information about available external subtitles (.srt files) when playing media files (videos), without transcoding the video file?
If it is possible, can anyone show me where this is explained in the DLNA standard, or how it can be implemented?
I'm trying to implement this using the Platinum library and don't know how to do it, or whether it is possible at all.
Thanks
Possible? Yes. Standardized? No. Reliable? Absolutely not. There is no specification of how to do subtitles right in either UPnP or DLNA. It ultimately is a question of how the DMR wants the subtitles to be served by the DMS, so it largely depends on the specific DMR you want to use. Some DMRs require a specific nonstandard DIDL-Lite field in the media description (Samsung TVs seem to be promoting <sec:CaptionInfoEx>); some DMRs are happy with a somewhat standard-like <res protocolInfo="http-get:*:text/srt:*">. Both cases enclose the URL of your SRT file, of course. It may well be that your DMR does not support subtitles at all; there is no such requirement in either UPnP or DLNA (have I already said that?).
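To make that concrete, the two variants would sit in the DIDL-Lite item description roughly as follows (a fragment with namespace declarations omitted; the URLs are placeholders, and the sec: attribute details are merely what Samsung devices have been observed to accept, so treat this as an illustration rather than a spec):

<item id="video-1" parentID="0" restricted="1">
    <dc:title>Movie</dc:title>
    <res protocolInfo="http-get:*:video/x-matroska:*">http://dms.example/movie.mkv</res>
    <!-- variant 1: somewhat standard-like subtitle resource -->
    <res protocolInfo="http-get:*:text/srt:*">http://dms.example/movie.srt</res>
    <!-- variant 2: Samsung-specific nonstandard field -->
    <sec:CaptionInfoEx sec:type="srt">http://dms.example/movie.srt</sec:CaptionInfoEx>
</item>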
So Platinum does not have any subtitle support out of the box. You can create the <res> tag with existing logic: set up a PLT_ProtocolInfo with a ContentType of text/srt and assign it to a PLT_MediaItemResource whose m_Uri is the URL of your SRT file (served by your DMS).
Adding a new field is trickier: PLT_Didl has a fixed set of fields, which you would have to extend along with PLT_MediaObject::ToDidl, which is pretty fixed in its operation. I consider this part of Platinum somewhat rushed, compared to the visible design effort put into the rest of the framework.
In any case, your DMS must also be ready to act as an HTTP server for your subtitles, which means handing that job to whatever class you have as the implementation of PLT_MediaServerDelegate::ProcessFileRequest.
What does it take to build a Native Client app from scratch? I have looked into the documentation and fiddled with several apps; however, I am now moving on to making my own app, and I don't see anything about creating the foundation of a Native Client app.
Depending on the version of the SDK you want to use, you have a couple of options.
Pepper 16 and 17: use init_project.py or use an example as a starting point
If you are using pepper_16 or pepper_17, you will find a Python script init_project.py in the project_templates directory of the SDK. It will set up a complete set of files (.cc, .html, .nmf) with comments indicating where you need to add code. Run python init_project.py -h to see what options it accepts. Additional documentation can be found at https://developers.google.com/native-client/pepper17/devguide/tutorial.
Pepper 18 and newer: use an example as the starting point
If you are using pepper_18 or newer, init_project.py is no longer included. Instead, you can copy a very small example from the examples directory (e.g., hello_world_glibc or hello_world_newlib for C, or hello_world_interactive for C++) and use that as a starting point.
Writing completely from scratch
If you want to write your app completely from scratch, first ensure that the SDK is working by compiling and running a few of the examples. A good next step is then to look at the classes pp::Module and pp::Instance, which your app will need to implement.
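A minimal sketch of that boilerplate, mirroring what the hello_world examples do (the class names here are placeholders):

#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include "ppapi/cpp/var.h"

// One pp::Instance is created per <embed> element on the page.
class MyInstance : public pp::Instance {
 public:
  explicit MyInstance(PP_Instance instance) : pp::Instance(instance) {}
  // Called when JavaScript posts a message to the module.
  virtual void HandleMessage(const pp::Var& message) {}
};

// The pp::Module singleton creates Instances on request.
class MyModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new MyInstance(instance);
  }
};

namespace pp {
// Factory function the browser calls once to create your module.
Module* CreateModule() { return new MyModule(); }
}  // namespace pp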
On the HTML side, write a simple page with the EMBED element for the Native Client module. Then add JavaScript event handlers for loadstart, progress, error, abort, load, loadend, and message, and have the handlers write the event data to, e.g., the JavaScript console, so that it is possible to tell what went wrong if the Native Client module didn't load. The load_progress example shows how to do this.
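A bare-bones sketch of that page (module name and .nmf path are placeholders; note the true passed for useCapture, since these events do not bubble up from the embed):

<div id="listener">
  <embed id="hello" src="hello.nmf" type="application/x-nacl" width="0" height="0" />
</div>
<script>
  var listener = document.getElementById('listener');
  ['loadstart', 'progress', 'error', 'abort', 'load', 'loadend', 'message']
    .forEach(function (type) {
      listener.addEventListener(type, function (event) {
        console.log('NaCl module event: ' + event.type);  // crude debugging aid
      }, true);  // capture phase: load events do not bubble from the embed
    });
</script>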
Next, create the manifest file (.nmf). From pepper_18 onwards you can use the generate_nmf.py script found in the tools/ directory for this. If you want to write it from scratch, the examples directory shows the format both for newlib and for glibc (the two C standard libraries currently supported); see hello_world_newlib/ and hello_world_glibc/, respectively.
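For a newlib build, a hand-written .nmf is just a small JSON file mapping architectures to .nexe files, roughly like this (file names are placeholders; glibc builds additionally need entries for the shared libraries, which is why generate_nmf.py is the safer route there):

{
  "program": {
    "x86-32": {"url": "hello_world_x86_32.nexe"},
    "x86-64": {"url": "hello_world_x86_64.nexe"}
  }
}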
If you haven't used a gcc-family compiler before, it is also a good idea to look at the Makefile of some of the examples to see what compiler and linker flags to use. Compiling for both 32-bit and 64-bit right from the beginning is recommended.
Easiest way is to follow the quick start doc at https://developers.google.com/native-client/pepper18/quick-start, in particular steps 5-7 of the tutorial ( https://developers.google.com/native-client/pepper18/devguide/tutorial ) which seems to be what you are asking about.
I have a substantial body of source code (OOFILE) which I'm finally putting up on Sourceforge. I need to decide if I should go with a monolithic include directory or keep the header files with the source tree.
I want to make this decision before pushing to the svn repo on SourceForge. I expect a lot of people who use it after that move will keep a working copy checked out directly from SF, so they won't want the structure to change.
The full source tree has about 262 files in 25 folders. There are a lot more classes than that suggests: to conform to 8.3 file names (yes, it dates back to Win3.1), many classes share a single file. As I used to develop with ObjectMaster, that never bothered me, but I will be splitting it up to conform to the more recent trend of minimising the number of classes per file. From a quick skim of the class list, there are about 600 classes.
OOFILE is a cross-platform product expected to be built on Mac, Windows and assorted Unix platforms. As it started life on Mac, with compilers that point to include trees rather than flat include dirs, headers were kept with the source.
Later, mainly to keep some Visual Studio users happy, a build was reorganised with a single include directory. I'm trying to choose between those models.
The entire OOFILE product covers quite a few domains:
database front-end
range of database backends
simple 2D graphing engine for Mac and Windows
simple character-mode report-writer for trivial HTML and text listings
very rich banding report-writer with Mac and Windows Preview and Printing and cross-platform generation of text, RTF, HTML and XML reports
forms integration engine for easy CRUD forms binding to the database, with implementations on PowerPlant and MFC
cross-platform utility classes
file and directory manipulation
strings
arrays
XML and tag generation
Many people only want to use it on a single platform, and some of those code areas are pure legacy (e.g. the PowerPlant UI framework on classic Mac). It therefore seems people would appreciate not having headers from those unwanted areas dumped into their monolithic include directory.
I started thinking about having an include directory split up into a few of the domains above and then realised that was sounding more like the original structure.
In summary, the choices seem to be:
1. Keep the original model, all headers adjacent to source - maximum flexibility, at the cost of some complex include paths in projects.
2. One include directory with everything inside.
3. Split includes by domain, so there may be about six directories for someone using the lot, but a pure database user would probably have a single directory (see the sketch after this list).
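For illustration, option 3 might group the domains above along these lines (purely hypothetical names, not a final layout):

include/
  database/    - front-end plus the database backends
  reports/     - graphing and both report-writers
  forms/       - CRUD forms integration (PowerPlant, MFC)
  core/        - utility classes: files/directories, strings, arrays, XML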
From a Unix build perspective, the recommended structure has been 2. My situation is complicated by needing to keep Visual Studio and Xcode users happy (sniff, CodeWarrior, how I doth miss thee!).
Edit - the chosen solution:
I went with four subdirectories in include. I started trying to divide them up further by platform but it just got very noisy very quickly.
Personally I would go with 2, or 3 if really pushed.
But whichever you choose, please make it crystal clear in the build instructions how to set up the include paths. Nothing dooms an open source project more than being really difficult to build - developers want a quick out-of-the-box experience, and if it involves faffing around with many undocumented environment variables (or whatever), most will simply go away.