How can I show print statements in debug mode of OPNET Modeler? - c++

I'm writing C++ code in OPNET Modeler.
I simulate my scenario in debugger mode and need to trace a function that I wrote. I want to see the print statements that I put in my code.
In the debugger I used ***ltr function_name()*** followed by ***c***.
But the result looks like:
Type 'help' for Command Summary
ODB> ltr enqueue_packet()
Added trace #0: trace on label (enqueue_packet())
ODB> c
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 52 sec.); Events (500,002) |
| Speed: Average (82,575 events/sec.); Current (82,575 events/sec.) |
| Time : Elapsed (6.1 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 55 sec.); Events (1,000,002) |
| Speed: Average (69,027 events/sec.); Current (59,298 events/sec.) |
| Time : Elapsed (14 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 59 sec.); Events (1,500,002) |
| Speed: Average (51,464 events/sec.); Current (34,108 events/sec.) |
| Time : Elapsed (29 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Simulation Completed - Collating Results. |
| Events: Total (1,591,301); Average Speed (48,803 events/sec.) |
| Time : Elapsed (33 sec.); Simulated (2 min. 0 sec.) |
| DES Log: 29 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Reading network model. |
|-----------------------------------------------------------------------------|
I need to see the print statements from my code.
Where should they appear?
Is there any step I should take before running the simulation to ensure that the OPNET debugger uses Visual Studio and steps through my code?

OPNET Modeler provides the following commands to print trace output:
op_prg_odb_print_major(): prints a sequence of strings to the standard output device, in the format of ODB trace statements, starting at the major indentation level.
op_prg_odb_print_minor(): prints a sequence of strings to the standard output device, in the format of ODB trace statements, at the minor indentation level.
op_prg_text_output(): prints a sequence of user-defined strings to the standard output device.
For example:
if (op_prg_odb_ltrace_active ("tcp_window")) {
    /* A trace is enabled; output the window-related variables. */
    char str0[128], str1[128], str2[128];
    sprintf (str0, "rcv requests pending : (%d)", num_rcvs_allowed);
    sprintf (str1, "local receive window : (%d)", receive_window);
    sprintf (str2, "remote receive window : (%d)", remote_window);
    op_prg_odb_print_major ("Window-Related Variables", str0, str1, str2, OPC_NIL);
    sprintf (str0, "send unacked : (%d)", send_unacked);
    sprintf (str1, "send_next : (%d)", send_next);
    sprintf (str2, "receive next : (%d)", receive_next);
    op_prg_odb_print_minor (str0, str1, str2, OPC_NIL);
}
Example output as it appears on the standard output device:
| Window-Related Variables
| rcv requests pending : (3)
| local receive window : (6400)
| remote receive window : (10788)
| send unacked : (4525)
| send_next : (5000)
| receive_next : (1200)
[Code taken from OPNET Modeler documentation.]
Note: I am guessing that you are modifying the standard models and are using the stdmod repository. If that is the case, your code is not being compiled, and you will not see any print statements in the debugger. Check the preference "Network simulation Repositories" to see whether you are using a repository instead of compiling your own code.

I don't know much about what you're trying to do, but I think you can output statements directly to a debugger from C++ code using
OutputDebugStringA("Your string here");
or just
OutputDebugString("Your string here");
Hope this helps!

Related

qemu-system-riscv32 -M sifive_u cannot be debugged using GDB if started with OpenSBI (-bios default)

I am working on a kind of custom assembler for RISC-V, and am testing it with qemu.
The best way that I could find to get some raw machine code into qemu-system-riscv32 was to create an ELF file and place the entrypoint at 0x80000000 and use the -machine sifive_u, which also has memory-mapped UART that I can use to print to the serial console.
By adding the -S -s flags, I could attach gdb and step through the machine code (layout asm; ni).
At some point, presumably after updating qemu, this stopped working until I set -bios none. When I found that solution I was happy, but I was also curious about the benefits of the "default BIOS", which seems to be OpenSBI.
Through trial and error I figured out that with OpenSBI, I can put the code anywhere between 0x80020000 and 0x90000000 and it will jump to the entrypoint set in the ELF file. Before jumping to my code it will print out a POST message like this:
OpenSBI v0.9
[OpenSBI ASCII-art logo]
Platform Name : SiFive HiFive Unleashed A00
Platform Features : timer,mfdeleg
Platform HART Count : 2
Firmware Base : 0x80000000
Firmware Size : 108 KB
Runtime SBI Version : 0.2
Domain0 Name : root
Domain0 Boot HART : 1
Domain0 HARTs : 0*,1*
Domain0 Region00 : 0x80000000-0x8001ffff ()
Domain0 Region01 : 0x00000000-0xffffffff (R,W,X)
Domain0 Next Address : 0x80440000
Domain0 Next Arg1 : 0x87000000
Domain0 Next Mode : S-mode
Domain0 SysReset : yes
Boot HART ID : 1
Boot HART Domain : root
Boot HART ISA : rv32imafdcsu
Boot HART Features : scounteren,mcounteren
Boot HART PMP Count : 16
Boot HART PMP Granularity : 4
Boot HART PMP Address Bits: 32
Boot HART MHPM Count : 0
Boot HART MHPM Count : 0
Boot HART MIDELEG : 0x00000222
Boot HART MEDELEG : 0x0000b109
However, if I now start qemu with -S -s and continue using c, it seems that qemu gets stuck in the OpenSBI code at address 0x80000978 on the instruction csrr a5,mip. Entering c or ni at this point simply freezes until I interrupt with CTRL-C. However, as soon as I detach GDB, the program continues running as normal.
How can I get past this instruction and/or start debugging at my entrypoint? I would prefer to test my code after OpenSBI runs, since that gives me access to the device tree, which I would like to use in the future.
EDIT: I've managed to port my code to use the virt riscv32 machine, which has a different Serial device, and have no problems with debugging there. At this point this is perhaps more of a bug report, but I will leave the question up for a bit just in case.
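For anyone who hits the same hang: I can't verify this against every qemu/OpenSBI combination, but since OpenSBI already prints where it will jump ("Domain0 Next Address" in the banner, 0x80440000 above), a hardware breakpoint at that address is worth trying instead of single-stepping through the firmware. A sketch of the commands (prog.elf stands in for your ELF file):

```shell
# start qemu as before, halted and listening for gdb
qemu-system-riscv32 -M sifive_u -bios default -kernel prog.elf -S -s

# then, in a riscv32-capable gdb:
#   (gdb) target remote :1234
#   (gdb) hbreak *0x80440000     # hardware breakpoint at OpenSBI's jump target
#   (gdb) continue
```

gdb's hbreak uses the hart's debug triggers rather than patching the instruction, so it survives firmware that gdb cannot comfortably step through.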

How do I quote the Windows counter path \Processor(_Total)\ from C++?

When I run this directly in PowerShell, it works:
(get-counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 1 -MaxSamples 5 |
select -ExpandProperty countersamples |
select -ExpandProperty cookedvalue |
Measure-Object -Average).average
Result: 1,75677
but when I translate this request to C++:
string commande="get-counter -Counter \"\\Processor(_Total)\\% Processor Time\" -SampleInterval 300 -MaxSamples 3 | select -ExpandProperty countersamples | select -ExpandProperty cookedvalue | Measure-Object -Average).average";
Result:
The term '_Total' is not recognized as the name of a cmdlet, function, script file, or operable program
It works when, from C++, I use only:
string commande="get-counter"
Result:
Timestamp CounterSamples
--------- --------------
23/08/2019 11:30:33 \\win2008\network interface(realtek rtl8139c+ fast ethernet nic _4)\bytes total/sec : 0
I tried:
//Processor(_Total)//% Processor
\Processor(_Total)\% Processor
///Processor(_Total)///% Processor
Nothing works. Do you know why? Thanks.

LibVncClient get operation system info

I use libvncclient to build a viewer, into which I am trying to integrate hotkeys that do a bit of scripting, exposed as menu options such as 'enable task manager', 'run cmd' for Windows, 'open terminal', 'update repos', etc. I need to detect the operating system of the remote side, but I don't see anything in the RFB protocol to get this info from.
rfbClient *client = rfbGetClient(8, 3, 4);
if(!ConnectToRFBServer(client,client->serverHost,client->serverPort))
return FALSE;
if (!InitialiseRFBConnection(client))
return FALSE;
I looked through rfbclient.h, and the rfbClient structure doesn't hold any callback or field that stores this info; there also seem to be no APIs for it. But the RFC has this: https://www.rfc-editor.org/rfc/rfc6143#section-7.3.2
After receiving the ClientInit message, the server sends a ServerInit
message. This tells the client the width and height of the server's
framebuffer, its pixel format, and the name associated with the
desktop:
+--------------+--------------+------------------------------+
| No. of bytes | Type [Value] | Description |
+--------------+--------------+------------------------------+
| 2 | U16 | framebuffer-width in pixels |
| 2 | U16 | framebuffer-height in pixels |
| 16 | PIXEL_FORMAT | server-pixel-format |
| 4 | U32 | name-length |
| name-length | U8 array | name-string |
+--------------+--------------+------------------------------+
But it seems that libvnc doesn't handle that, is there any way that this info could be taken?

How to take the output of Sys.command as string in OCaml?

In OCaml, I have this piece of code:
let s = Sys.command ("minisat test.txt | grep 'SATIS'");;
I want to capture the output of minisat test.txt | grep 'SATIS', which is SATISFIABLE/UNSATISFIABLE, in the string s.
I am getting the following output:
SATISFIABLE
val s : int = 0
So, how can I capture the output of this command as a string?
Also, is it possible to capture the time as well?
This is the output I get when I run minisat test.txt in a terminal:
WARNING: for repeatability, setting FPU to use double precision
============================[ Problem Statistics ]=============================
| |
| Number of variables: 5 |
| Number of clauses: 3 |
| Parse time: 0.00 s |
| Eliminated clauses: 0.00 Mb |
| Simplification time: 0.00 s |
| |
============================[ Search Statistics ]==============================
| Conflicts | ORIGINAL | LEARNT | Progress |
| | Vars Clauses Literals | Limit Clauses Lit/Cl | |
===============================================================================
===============================================================================
restarts : 1
conflicts : 0 (-nan /sec)
decisions : 1 (0.00 % random) (inf /sec)
propagations : 0 (-nan /sec)
conflict literals : 0 (-nan % deleted)
Memory used : 8.00 MB
CPU time : 0 s
SATISFIABLE
If you use just Sys, you can't.
However, you can create a temporary file (see the documentation of the Filename module) and tell the command to write its output into it:
let string_of_command () =
  let tmp_file = Filename.temp_file "" ".txt" in
  let _ = Sys.command ("minisat test.txt | grep 'SATIS' > " ^ tmp_file) in
  let chan = open_in tmp_file in
  let s = input_line chan in
  close_in chan;
  s
Note that this function is a rough draft: you should properly handle the errors that can happen. Anyway, you can adapt it to your needs, I guess.
You can avoid the temporary file trick by using the Unix library or more advanced libraries.
You have to use Unix.open_process_in or Unix.create_process, if you want to capture the output.
Or better use a higher level wrapper like 'shell' (from ocamlnet):
http://projects.camlcity.org/projects/dl/ocamlnet-4.0.2/doc/html-main/Shell_intro.html
But I wouldn't pipe it to grep (not portable). Parse the output with your favorite regex library inside OCaml.
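To illustrate the Unix.open_process_in route mentioned above, here is a rough sketch (compile with the unix library; error handling is again left out, and command_output is a made-up helper name):

```ocaml
(* Drain a command's standard output into a string.
   Build with: ocamlfind ocamlopt -package unix -linkpkg ... *)
let command_output cmd =
  let ic = Unix.open_process_in cmd in
  let buf = Buffer.create 64 in
  (try
     while true do
       Buffer.add_channel buf ic 1   (* read one byte; raises End_of_file at EOF *)
     done
   with End_of_file -> ());
  ignore (Unix.close_process_in ic);
  String.trim (Buffer.contents buf)

let () =
  let s = command_output "minisat test.txt | grep 'SATIS'" in
  print_endline s
```

Here s is "SATISFIABLE" or "UNSATISFIABLE" directly, with no temporary file involved.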

websocket fin bit in c++

I've lately been having a little problem with the WebSocket FIN bit and my C++ server. Whenever I try to use FIN = 0, the host drops the connection for no apparent reason. Here is the part of my code that computes the first byte:
std::string ret;
ret += (unsigned char)(((fin & 1) << 7) | (opcode & 63));
When I use FIN = 1, the first byte of the frame is 129, which is correct, and the user gets the correct answer. With FIN = 0 the first byte is 1, which also seems right, and then the connection drops after sending. I tried sending the same packets of data with both flags, and only FIN = 0 fails.
Why do I try to use FIN = 0? Well, I'm trying to make a little three.js + WebSocket game. I'd like the server to send all the models through the WebSocket for every player, so I expect a heavy load, which I'd like to control.
I'd be happy to provide any additional information.
Thanks in advance.
I have no idea about C++, but I know a bit about WebSockets.
Which value do you have in the other byte? When you send a FIN=0 frame, you still need to send the frame options in it. Subsequent frames must use the opcode "Continuation", and nothing else. As far as I remember, continuation frames cannot even have RSV bits different from 0.
If you send a frame with FIN=0 without a type (text or binary), it will probably fail. Sending FIN=1 with a type other than "Continuation" after a FIN=0 will also fail.
So the key is: what are you sending in the second byte? Also, it would be great if you tried with Google Chrome and checked in the console why the connection is being shut down.
OPCODE:
+--------+------------------------+
| Opcode | Meaning                |
+--------+------------------------+
| 0      | Continuation Frame     |
| 1      | Text Frame             |
| 2      | Binary Frame           |
| 8      | Connection Close Frame |
| 9      | Ping Frame             |
| 10     | Pong Frame             |
+--------+------------------------+