Using the Measuring Channel Throughput tutorial, I am unable to find how to change the interval ([s]) and numValueLimit parameters. When I set them in the INI file, I get the following error: Entry potentially does not match any parameters.
The tutorial states:
Channel throughput is a statistic of transmitter modules, such as the PacketTransmitter in LayeredEthernetPhy. Throughput is measured with a sliding window. By default, the window is 0.1s or 100 packets, whichever comes first. The parameters of the window, such as the window interval, are configurable from the ini file, as module.statistic.parameter. For example:
*.host.eth[0].phyLayer.transmitter.throughput.interval = 0.2s
When I run the tutorial out of the box and allow ~50 packet transmissions, the throughput:vector recorded at ChannelThroughputMeasurementShowcase.destination.eth[0].phyLayer.receiver has only one entry, ~44 Mbps.
From what I can tell, this is an average from multiple measurements based on this sliding window.
What I would like to do is change this sliding window so that I can get more values in the vector based on the described settings.
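For reference, this is roughly what I tried in omnetpp.ini (the module path is my guess from the showcase network; both lines trigger the error above):
*.source.eth[0].phyLayer.transmitter.throughput.interval = 0.2s
*.source.eth[0].phyLayer.transmitter.throughput.numValueLimit = 100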
Have these values been deprecated in this version of OMNeT++/INET?
I'm using INET version inet-4.4.1-302861f35c along with OMNeT++ version 6.0, Build id: 220413-71d8fab425.
I'm trying to adjust a button's width depending on the current language selected in a comboBox. For some languages, like Spanish, the text is simply too long to fit into the PUSHBUTTON. All controls are in a .rc file. To calculate the PUSHBUTTON width I'm using its RECT (rect.right - rect.left), and to calculate the text width I'm using GetTextExtentPoint32W, but unfortunately this method gives me different values depending on which PC it runs on. On my laptop, where the resolution is 1920x1080 and the scaling is 100% (recommended is 125%), the text width is around 25% bigger than on a PC with the same configuration.
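For reference, this is roughly how I measure (hBtn stands for the PUSHBUTTON's HWND; error handling omitted):

#include <windows.h>

void MeasureButton(HWND hBtn)
{
    // Button width from its client rectangle.
    RECT rect;
    GetClientRect(hBtn, &rect);
    int buttonWidth = rect.right - rect.left;

    // Caption width via GetTextExtentPoint32W on the button's DC.
    wchar_t text[256];
    GetWindowTextW(hBtn, text, 256);
    HDC hdc = GetDC(hBtn);
    SIZE sz = {0};
    GetTextExtentPoint32W(hdc, text, lstrlenW(text), &sz);
    ReleaseDC(hBtn, hdc);
    // sz.cx is what differs by ~25% between the two PCs; it is compared
    // against buttonWidth to decide whether the button must be widened.
}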
It depends on the Device Context which may be different across PCs.
Also, you need to be DPI-aware (since your scaling is not 100%), and GDI isn't.
Suggestion: move to Direct2D.
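A quick sanity check before moving to Direct2D: measure with the font the button actually uses, and opt the process into per-monitor DPI awareness (normally declared in the application manifest; the call below is a sketch for Windows 10 1703+). Reusing hBtn/hdc/text/sz from the snippet in the question:

// Opt in to per-monitor-v2 DPI awareness so GDI works in real pixels
// instead of DPI-virtualized ones.
SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

// Select the button's own font into the DC before measuring; otherwise
// GetTextExtentPoint32W measures with the default system font.
HFONT hFont = (HFONT)SendMessageW(hBtn, WM_GETFONT, 0, 0);
HGDIOBJ oldFont = SelectObject(hdc, hFont);
GetTextExtentPoint32W(hdc, text, lstrlenW(text), &sz);
SelectObject(hdc, oldFont);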
I'm currently getting strange behavior from my GLFW-based application, running under 64-bit Windows 10 Enterprise (8 cores/16 threads, 32 GB, RTX 2080) with two external 4K monitors.
The application is a plain vanilla glfw loop (with some imgui), except that I place the window with glfwSetWindowMonitor(...) during startup, right after calling glfwCreateWindow(...). This somehow halves the frame rate (a little below 30 fps) when the window is placed in the area of the two external monitors. If I move the window just one pixel, the frame rate quickly doubles to the monitors' frame rate (60 fps).
Is this something others have seen? Can I somehow shake the app out of its slumber?
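For reference, the startup sequence looks roughly like this (window size and monitor index are illustrative; it assumes at least two monitors, and the render loop is omitted):

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow* win = glfwCreateWindow(1280, 720, "app", nullptr, nullptr);

    // Place the window on the second (external) monitor right after creation.
    // Passing NULL as the monitor keeps it windowed; the position comes from
    // the monitor's origin on the virtual desktop.
    int count = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&count);
    int mx = 0, my = 0;
    glfwGetMonitorPos(monitors[1], &mx, &my);
    glfwSetWindowMonitor(win, nullptr, mx, my, 1280, 720, GLFW_DONT_CARE);

    // ... plain vanilla glfw/imgui render loop follows ...
    glfwTerminate();
    return 0;
}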
I have configured a Basler camera (acA1920-40um) connected to a USB port, and I get duplicate frames when I use the PylonViewer software to store a sequence of still images. Which parameters should I change to prevent this from happening?
The parameters I set after connecting the camera to the PC are:
Enable acquisition frame rate = active.
Acquisition frame rate (fps) = 25; trigger = off; exposure auto = off; exposure time = 1000.
In the next step, I grabbed frames using OpenCV and C++ with code similar to the following link, which again gives me duplicate frames.
Convert images from Pylon to Opencv in c++
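My grab loop is essentially the following sketch (Mono8 pixel format assumed, as in the linked answer; error handling trimmed):

#include <pylon/PylonIncludes.h>
#include <opencv2/opencv.hpp>

int main()
{
    Pylon::PylonInitialize();
    Pylon::CInstantCamera camera(Pylon::CTlFactory::GetInstance().CreateFirstDevice());
    camera.StartGrabbing();

    Pylon::CGrabResultPtr result;
    while (camera.IsGrabbing())
    {
        camera.RetrieveResult(5000, result, Pylon::TimeoutHandling_ThrowException);
        if (result->GrabSucceeded())
        {
            // Wrap the pylon buffer in a cv::Mat without copying.
            cv::Mat frame((int)result->GetHeight(), (int)result->GetWidth(),
                          CV_8UC1, (void*)result->GetBuffer());
            cv::imshow("camera", frame);
            if (cv::waitKey(1) == 27)  // Esc quits
                break;
        }
    }
    Pylon::PylonTerminate();
    return 0;
}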
I had the same problem and contacted Basler customer service about it. The issue you are running into is likely due to how you have the recording options set in PylonViewer.
Go to the Recording Settings and set 'Record a frame every' to 1 and select 'Frame(s)' from the drop-down list.
[screenshot: PylonViewer recording settings]
This worked for me. It was not at all intuitive that those settings applied to 'Video'; I thought they only related to the 'Sequence of still images' option, given the layout of the UI.
I have an application that can control another application's window position. On start it gets the current monitors' layout and dimensions. These are used to calculate the proper position of the window on a secondary monitor with different (non-default) scaling. But when the user changes the monitor layout, the application keeps using the initial values.
Is there any way to capture a scaling change or a monitor-layout change?
You need to receive and handle the WM_DPICHANGED and WM_DISPLAYCHANGE messages.
I suggest you read MSDN's documentation about High DPI Desktop Application Development and Multiple Display Monitors for more details.
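A minimal sketch of handling both messages in a window procedure (RefreshMonitorLayout is a hypothetical helper that re-enumerates the monitors and their scaling):

#include <windows.h>

void RefreshMonitorLayout();  // hypothetical: re-query monitor layout/DPI

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DPICHANGED:
    {
        // LOWORD(wParam) holds the new DPI; lParam points to the window
        // rectangle Windows suggests for the new scale factor.
        const RECT* r = (const RECT*)lParam;
        SetWindowPos(hWnd, nullptr, r->left, r->top,
                     r->right - r->left, r->bottom - r->top,
                     SWP_NOZORDER | SWP_NOACTIVATE);
        RefreshMonitorLayout();
        return 0;
    }
    case WM_DISPLAYCHANGE:
        // Sent when the display resolution or monitor arrangement changes.
        RefreshMonitorLayout();
        return 0;
    }
    return DefWindowProcW(hWnd, msg, wParam, lParam);
}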
I'm writing a C++ application whose main window needs to receive real-time data from a server and draw plots and histograms in realtime based on this data. I'm using GTK3 (actually its C++ binding gtkmm) and Cairo.
In particular, data is received from the network every second, and a refresh happens every time data arrives, thus every second. The refresh is done by calling the invalidate_rect() method on the entire drawing area, whose on_draw() event handler redraws all figures and plots using the newly received data.
Now, the application works, but it's extremely unreliable. In particular, it freezes very often, especially as CPU load increases, even though my application's own CPU and memory usage are very low. Suddenly the window turns grey and unresponsive, and I need to kill it with Ctrl-C, since even clicking the window close icon doesn't work.
I'm wondering: is it the wrong approach to call invalidate_rect() in the scenario above? What is a better way, using GTKMM/Cairo, to obtain smooth graphics in a reliable way?
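For reference, a minimal sketch of this kind of redraw path, assuming the network read runs in a worker thread and a Glib::Dispatcher marshals the wake-up onto the GTK main loop (receive_from_server() is hypothetical; GTK may only be touched from the main thread):

#include <gtkmm.h>
#include <thread>
#include <chrono>

class PlotArea : public Gtk::DrawingArea
{
public:
    PlotArea()
    {
        dispatcher_.connect(sigc::mem_fun(*this, &PlotArea::on_new_data));
        // Worker thread stands in for the ~1 s network read; a real
        // receive_from_server() would fill a data snapshot under a mutex.
        std::thread([this] {
            while (true) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                dispatcher_.emit();  // safe to call from any thread
            }
        }).detach();
    }

protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
    {
        // Redraw all plots/histograms from the latest snapshot
        // (placeholder: just clear to white).
        cr->set_source_rgb(1.0, 1.0, 1.0);
        cr->paint();
        return true;
    }

private:
    void on_new_data() { queue_draw(); }  // runs on the GTK main thread

    Glib::Dispatcher dispatcher_;
};

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.plots");
    Gtk::Window window;
    PlotArea area;
    window.add(area);
    window.show_all();
    return app->run(window);
}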