I've been having a little problem lately with the WebSocket FIN bit and my C++ server. Whenever I try to use FIN = 0, the host drops the connection for no apparent reason. Here is the part of my code that builds the first byte:
string ret;
// FIN goes in bit 7, RSV1-3 must be 0, and the opcode in the low 4 bits
ret += (char)(((fin & 1) << 7) | (opcode & 0x0F));
When I use FIN = 1, the first byte in my frame is 129, which is correct, and the user gets the correct answer. With FIN = 0 the first byte is 1, which also seems right, but after sending it the connection drops. I tried sending the same packets of data with both flags and only FIN = 0 fails.
Why do I try to use FIN = 0? Well, I'm trying to make a little three.js + WebSocket game, and I'd like the server to send all the models through the WebSocket for every player, so I expect a heavy load which I'd like to control.
I'd be happy to provide any additional information.
Thanks in advance.
I have no idea about C++, but I know a bit about WebSockets.
Which value do you have in the other byte? When you send a FIN=0 frame, you still need to send the frame options in it. Subsequent frames must use the "Continuation" opcode, and nothing else. As far as I remember, continuation frames cannot even have RSV bits different from 0.
If you send a frame with FIN=0 without a type (text or binary), it will probably fail. If you send a FIN=1 frame with a type other than "Continuation" after a FIN=0 frame, it will also fail.
So the key is: what are you sending in the second byte? Also, it would be great if you tried with Google Chrome and checked in the console why the connection is being shut down.
OPCODE:
| Opcode | Meaning                |
|--------|------------------------|
| 0      | Continuation Frame     |
| 1      | Text Frame             |
| 2      | Binary Frame           |
| 8      | Connection Close Frame |
| 9      | Ping Frame             |
| 10     | Pong Frame             |
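To illustrate: when fragmenting one text message, only the first frame carries the Text opcode, and every later fragment must be a Continuation. A minimal sketch of the first bytes (first_byte is a hypothetical helper, built like the code in the question):

// First byte of a WebSocket frame: FIN in bit 7, RSV1-3 zero,
// opcode in the low 4 bits (RFC 6455).
unsigned char first_byte(int fin, int opcode) {
    return (unsigned char)(((fin & 1) << 7) | (opcode & 0x0F));
}

// A text message split into three frames:
//   first_byte(0, 1) == 0x01  (FIN=0, Text)
//   first_byte(0, 0) == 0x00  (FIN=0, Continuation)
//   first_byte(1, 0) == 0x80  (FIN=1, Continuation)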
I have a group of people who started receiving a specific type of social benefit called BenefitA, and I am interested in knowing what (if any) social benefits the people in the group received immediately before they started receiving BenefitA.
My ideal result would be a table with the number of people who were receiving, respectively, BenefitB, BenefitC, or no benefit ("BenefitNon") immediately before they started receiving BenefitA.
My data is organized as a relational database with a fact table containing an ID for each person in my data and several dimension tables connected to the fact table. The important ones here are DimDreamYdelse (showing the type of benefit received) and DimDreamTid (showing week and year). Here is an example of the raw data:
Data Example
I'm not sure how to approach this in Power BI as I am fairly new to the program. Any advice is most welcome.
I have tried to solve the problem in SQL, but as I need this as part of a running report I need to do it in Power BI. This bit of code might however give some context for what I want to do.
USE FLISDATA_Beskaeftigelse;

SELECT dbo.FactDream.DimDreamTid, dbo.FactDream.DimDreamBenefit,
       dbo.DimDreamTid.Aar, dbo.DimDreamTid.UgeIAar, dbo.DimDreamYdelse.Ydelse
FROM dbo.FactDream
INNER JOIN dbo.DimDreamTid ON dbo.FactDream.DimDreamTid = dbo.DimDreamTid.DimDreamTidID
INNER JOIN dbo.DimDreamYdelse ON dbo.FactDream.DimDreamBenefit = dbo.DimDreamYdelse.DimDreamBenefitID
WHERE (dbo.DimDreamYdelse.Ydelse LIKE 'Benefit%') AND (dbo.DimDreamTid.Aar = '2019')
ORDER BY dbo.DimDreamTid.Aar, dbo.DimDreamTid.UgeIAar
I suggest using Power Query to transform your table into a form more suitable for your analysis. Things would be much easier if each row of the table represented a "change" of benefit plan, like this:
| Person ID | Benefit From | Benefit To | Date |
|-----------|--------------|------------|------------|
| 15 | BenefitNon | BenefitA | 2019-07-01 |
| 15 | BenefitA | BenefitNon | 2019-12-01 |
| 17 | BenefitC | BenefitA | 2019-06-01 |
| 17 | BenefitA | BenefitB | 2019-08-01 |
| 17 | BenefitB | BenefitA | 2019-09-01 |
| ...
Then you can simply count the rows with COUNTROWS(BenefitChanges), filtering/slicing on both Benefit From and Benefit To.
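For example, a measure along these lines would count the people who came to BenefitA from BenefitB (a sketch; BenefitChanges is the name I'm assuming for the transformed table above):

People From BenefitB =
CALCULATE (
    COUNTROWS ( BenefitChanges ),
    BenefitChanges[Benefit To] = "BenefitA",
    BenefitChanges[Benefit From] = "BenefitB"
)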
I use libvncclient to build a viewer, in which I try to integrate specific hotkeys that do a bit of scripting, exposed as menu options, such as 'enable task manager' and 'run cmd' for Windows, and 'open terminal', 'update repos', etc. I need to detect the operating system info, but I don't see anything in the RFB protocol to get this info from.
rfbClient *client = rfbGetClient(8, 3, 4); /* 8 bits per sample, 3 samples, 4 bytes per pixel */
if(!ConnectToRFBServer(client,client->serverHost,client->serverPort))
return FALSE;
if (!InitialiseRFBConnection(client))
return FALSE;
I looked through rfbclient.h and the rfbClient structure doesn't hold any callback or field that stores this info, and there seem to be no APIs for it either. But in the RFC there is this: https://www.rfc-editor.org/rfc/rfc6143#section-7.3.2
After receiving the ClientInit message, the server sends a ServerInit
message. This tells the client the width and height of the server's
framebuffer, its pixel format, and the name associated with the
desktop:
+--------------+--------------+------------------------------+
| No. of bytes | Type [Value] | Description |
+--------------+--------------+------------------------------+
| 2 | U16 | framebuffer-width in pixels |
| 2 | U16 | framebuffer-height in pixels |
| 16 | PIXEL_FORMAT | server-pixel-format |
| 4 | U32 | name-length |
| name-length | U8 array | name-string |
+--------------+--------------+------------------------------+
But it seems that libvnc doesn't expose that. Is there any way this info could be obtained?
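To make the quoted layout concrete, decoding a raw ServerInit buffer would look roughly like this (illustrative only, not a libvnc API; note the name-string is just the desktop name, which may or may not hint at the OS):

#include <cstdint>
#include <cstring>
#include <string>

/* Illustrative decoder for the ServerInit layout quoted above.
   All integers are big-endian on the wire (RFC 6143). */
struct ServerInitMsg {
    uint16_t width, height;
    uint8_t pixel_format[16];
    std::string name; /* the desktop name, not OS information */
};

static uint16_t be16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); }
static uint32_t be32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) | ((uint32_t)p[2] << 8) | p[3];
}

ServerInitMsg parse_server_init(const uint8_t *buf) {
    ServerInitMsg si;
    si.width = be16(buf);
    si.height = be16(buf + 2);
    std::memcpy(si.pixel_format, buf + 4, 16);
    uint32_t name_len = be32(buf + 20);
    si.name.assign((const char *)(buf + 24), name_len);
    return si;
}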
I figured out how to read files into my pyspark shell (and script) from an S3 directory, e.g. by using:
rdd = sc.wholeTextFiles('s3n://bucketname/dir/*')
But, while that's great in letting me read all the files in ONE directory, I want to read every single file from all of the directories.
I don't want to flatten them or load everything at once, because I will have memory issues.
Instead, I need it to automatically go load all the files from each sub-directory in a batched manner. Is that possible?
Here's my directory structure:
S3_bucket_name -> year (2016 or 2017) -> month (max 12 folders) -> day (max 31 folders) -> sub-day folders (max 30; basically just partitioning each day's collection).
Something like this, except it'll go for all 12 months and up to 31 days...
BucketName
|
|
|---Year(2016)
| |
| |---Month(11)
| | |
| | |---Day(01)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| | |---Day(02)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| |---Month(12)
|
|---Year(2017)
| |
| |---Month(1)
| | |
| | |---Day(01)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| | |---Day(02)
| | | |
| | | |---Sub-folder(01)
| | | |
| | | |---Sub-folder(02)
| | | |
| |---Month(2)
Each arrow above represents a fork. e.g. I've been collecting data for 2 years, so there are 2 years in the "year" fork. Then for each year, up to 12 months max, and then for each month, up to 31 possible day folders. And in each day, there will be up to 30 folders just because I split it up that way...
I hope that makes sense...
I was looking at another post (read files recursively from sub directories with spark from s3 or local filesystem) where I believe they suggested using wildcards, so something like:
rdd = sc.wholeTextFiles('s3n://bucketname/*/data/*/*')
But the problem with that is it tries to find a common folder among the various subdirectories - in this case there are no guarantees and I would just need everything.
However, on that line of reasoning, I thought what if I did..:
rdd = sc.wholeTextFiles('s3n://bucketname/*/*/*/*/*')
But the issue is that now I get OutOfMemory errors, probably because it's loading everything at once and freaking out.
Ideally, what I would be able to do is this:
Go to the sub-directory level of the day and read those in, so e.g.
First read in 2016/12/01, then 2016/12/02, up until 2016/12/31, and then 2017/01/01, then 2017/01/02, ... 2017/01/31 and so on.
That way, instead of using five wildcards (*) as I did above, I would somehow have it know to look through each sub-directory at the level of "day".
I thought of using a python dictionary to specify the file path to each of the days, but that seems like a rather cumbersome approach. What I mean by that is as follows:
file_dict = {
0:'2016/12/01/*/*',
1:'2016/12/02/*/*',
...
30:'2016/12/31/*/*',
}
basically for all the folders, and then iterating through them and loading them in using something like this:
sc.wholeTextFiles('s3n://bucketname/' + file_dict[i])
But I don't want to manually type out all those paths. I hope this made sense...
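Generating those patterns programmatically might be one way around typing them out by hand; a rough sketch of what I mean (bucket name and date range are placeholders):

from datetime import date, timedelta

def day_patterns(start, end):
    """Yield one day-level wildcard pattern per day, start through end."""
    d = start
    while d <= end:
        yield 's3n://bucketname/%04d/%02d/%02d/*/*' % (d.year, d.month, d.day)
        d += timedelta(days=1)

for pattern in day_patterns(date(2016, 11, 1), date(2017, 2, 28)):
    # one day's batch at a time; days with no data may need a try/except
    rdd = sc.wholeTextFiles(pattern)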
EDIT:
Another way of asking the question is: how do I read the files from a nested sub-directory structure in a batched way? How can I enumerate all the possible folder names in my S3 bucket in Python? Maybe that would help...
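Something like this with boto3 might be what I need for the enumeration part (a sketch; assumes credentials are configured, and ignores pagination):

import boto3

s3 = boto3.client('s3')

def list_prefixes(bucket, prefix=''):
    """Return the immediate sub-'folders' (common prefixes) under prefix."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, Delimiter='/')
    return [p['Prefix'] for p in resp.get('CommonPrefixes', [])]

# e.g. list_prefixes('bucketname')          -> ['2016/', '2017/']
#      list_prefixes('bucketname', '2016/') -> ['2016/11/', '2016/12/']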
EDIT2:
The structure of the data in each of my files is as follows:
{json object 1},
{json object 2},
{json object 3},
...
{json object n},
For it to be "true JSON", it either just needed to be like the above without the trailing comma at the end, or something like this (note the square brackets and the lack of a final trailing comma):
[
{json object 1},
{json object 2},
{json object 3},
...
{json object n}
]
The reason I did it entirely in PySpark as a submitted script is that I had to handle this formatting quirk manually. If I used Hive/Athena, I am not sure how I would deal with it.
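The fix-up itself is easy in Python once a file's whole text is in hand, which is what wholeTextFiles gives you; a sketch of what I mean:

import json

def parse_almost_json(text):
    """Wrap '{...},{...},...' in brackets and drop the final trailing comma."""
    return json.loads('[' + text.rstrip().rstrip(',') + ']')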
Why don't you use Hive, or even better, Athena? Both deploy tables on top of file systems to give you access to all the data, and you can then pull this into Spark.
Alternatively, I believe you can also use HiveQL in Spark to set up a temp table on top of your file system location, and it'll register it all as a Hive table which you can execute SQL against. It's been a while since I've done that, but it is definitely doable.
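In older Spark versions that would look something like this (a sketch from memory; note the trailing-comma quirk from the question would still need cleaning first, since read.json expects one valid JSON object per line):

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)

# Read every JSON file under the bucket into a DataFrame,
# then expose it to SQL as a temporary table.
df = sqlContext.read.json('s3n://bucketname/*/*/*/*/*')
df.registerTempTable('events')
sqlContext.sql('SELECT COUNT(*) FROM events').show()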
I'm writing C++ code in OPNET Modeler.
I try to simulate my scenario in debugger mode, and I need to trace a function that I wrote and see the print statements that I put in my code.
In debugger mode I used ltr function_name() and then c.
But the result looks like:
Type 'help' for Command Summary
ODB> ltr enqueue_packet()
Added trace #0: trace on label (enqueue_packet())
ODB> c
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 52 sec.); Events (500,002) |
| Speed: Average (82,575 events/sec.); Current (82,575 events/sec.) |
| Time : Elapsed (6.1 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 55 sec.); Events (1,000,002) |
| Speed: Average (69,027 events/sec.); Current (59,298 events/sec.) |
| Time : Elapsed (14 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Progress: Time (1 min. 59 sec.); Events (1,500,002) |
| Speed: Average (51,464 events/sec.); Current (34,108 events/sec.) |
| Time : Elapsed (29 sec.) |
| DES Log: 28 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Simulation Completed - Collating Results. |
| Events: Total (1,591,301); Average Speed (48,803 events/sec.) |
| Time : Elapsed (33 sec.); Simulated (2 min. 0 sec.) |
| DES Log: 29 entries |
|-----------------------------------------------------------------------------|
|-----------------------------------------------------------------------------|
| Reading network model. |
|-----------------------------------------------------------------------------|
I need to see the print statements from my code. Where should they appear?
Is there any step to take before running the simulation to ensure that the OPNET debugger uses Visual Studio and steps through my code?
OPNET Modeler provides the following functions to print trace output:
op_prg_odb_print_major(): prints a sequence of strings to the standard output device, in the format of ODB trace statements, starting at the major indentation level.
op_prg_odb_print_minor(): prints a sequence of strings to the standard output device, in the format of ODB trace statements at the minor indentation level.
op_prg_text_output(): prints a sequence of user-defined strings to the standard output device.
For example:
if (op_prg_odb_ltrace_active ("tcp_window")) {
/* a trace is enabled, output Window-Related Variables */
char str0[128], str1[128], str2[128];
sprintf (str0, "rcv requests pending : (%d)", num_rcvs_allowed);
sprintf (str1, "local receive window : (%d)", receive_window);
sprintf (str2, "remote receive window : (%d)", remote_window);
op_prg_odb_print_major ("Window-Related Variables", str0, str1, str2, OPC_NIL);
sprintf (str0, "send unacked : (%d)", send_unacked);
sprintf (str1, "send_next : (%d)", send_next);
sprintf (str2, "receive next : (%d)", receive_next);
op_prg_odb_print_minor (str0, str1, str2, OPC_NIL);
}
Example output as it appears on the standard output device:
| Window-Related Variables
| rcv requests pending : (3)
| local receive window : (6400)
| remote receive window : (10788)
| send unacked : (4525)
| send_next : (5000)
| receive_next : (1200)
[Code taken from OPNET Modeler documentation.]
Note: I am guessing that you are modifying the standard models and are using the stdmod repository. If this is the case, your code is not being compiled and you will not see any print statements in the debugger. See the preference "Network simulation Repositories" to check whether you are using a repository instead of compiling your own code.
I don't have much idea about what you're trying to do, but I think you can output statements directly to a debugger for C++ code using
OutputDebugStringA("Your string here");
or just
OutputDebugString("Your string here");
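A minimal sketch (Windows only; the text shows up in the debugger's Output window in Visual Studio, or in a tool like DebugView):

#include <windows.h>

int main() {
    // Visible in the attached debugger's Output window.
    OutputDebugStringA("enqueue_packet: called\n");
    return 0;
}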
Hope this helps!
The SqlServer datetime data type is used to hold timestamps and it is 64 bits long - http://msdn.microsoft.com/en-us/library/ms187819.aspx
I am looking for a sane way to work with it in C++, perhaps something in the Boost library?
Thanks.
EDIT
I would settle for being able to do these two operations:
Display the timestamp in some human readable format, like 2012-01-15 16:54:13.123
Parse a string like 2012-01-15 16:54:13.123 into the respective SqlServer datetime value.
EDIT2
Here is what I know until now. I have a table with a datetime column. When I select rows from it, I get this column back with the data type of DBTYPE_DBTIMESTAMP. According to http://msdn.microsoft.com/en-us/library/ms187819.aspx it should be an 8 byte value, however, I get back a 16 byte value, for instance:
00070015000c07db 00000000001f0007
I could not find any description of this format, but examining it reveals the following structure:
0007 0015 000c 07db 00000000 001f 0007
^    ^    ^    ^             ^    ^
|    |    |    |             |    |
|    |    |    |             |    +--- minutes (7)
|    |    |    |             +-------- seconds (31)
|    |    |    +---------------------- year (2011)
|    |    +--------------------------- month (12)
|    +-------------------------------- day (21)
+------------------------------------- hour (7)
Which corresponds to 2011-12-21 07:07:31. So this appears to be easy, but where is the documentation? Are DBTYPE_DBTIMESTAMP values always reported in this format? Is it SqlServer CE specific, or do the Express and other flavours work the same? Can it contain milliseconds?
BTW, I am using OLEDB to access the database.
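For what it's worth, the layout decoded above appears to match the OLE DB DBTIMESTAMP structure declared in oledb.h (year, month, day, hour, minute, second, plus a 32-bit fraction field holding billionths of a second). Interpreting the buffer through that struct makes formatting straightforward; a sketch, error handling omitted:

#include <oledb.h>  // DBTIMESTAMP
#include <cstdio>

// Format a DBTIMESTAMP as "YYYY-MM-DD hh:mm:ss.mmm".
// fraction holds billionths of a second, so divide by 1,000,000 for ms.
void format_dbtimestamp(const DBTIMESTAMP &ts, char *buf, size_t len) {
    snprintf(buf, len, "%04d-%02d-%02d %02d:%02d:%02d.%03lu",
             ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second,
             (unsigned long)(ts.fraction / 1000000));
}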
Why don't you handle the thing on the side of the SQL Server?
You could write a view which gives you the DBTYPE_DBTIMESTAMP back as datetime or varchar. You can define whichever format you want.
Use the CAST() or CONVERT() function.
To write back values, you could write a little function, also on the SQL Server side.
You could use the function:
CONVERT(datetime, [value], [style])
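For example, style 121 (ODBC canonical) matches the format from the question; MyTable and MyDate are placeholder names here:

-- datetime -> 'yyyy-mm-dd hh:mi:ss.mmm'
SELECT CONVERT(varchar(23), MyDate, 121) AS MyDateText FROM MyTable;

-- 'yyyy-mm-dd hh:mi:ss.mmm' -> datetime
SELECT CONVERT(datetime, '2012-01-15 16:54:13.123', 121);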