I'm trying to use the new event log API to get the oldest record number from a Windows event log, but cannot get the API to return the same answer as Event Viewer displays (looking at the EventRecordID in the details view). Some sample code I'm using is below:
// Open the channel and read the oldest-record-number property
EVT_HANDLE log = EvtOpenLog(NULL, _logName, EvtOpenChannelPath);
EVT_VARIANT buf;
DWORD need = 0;
DWORD vlen = sizeof(EVT_VARIANT);
ZeroMemory(&buf, vlen);
if (!EvtGetLogInfo(log, EvtLogOldestRecordNumber, vlen, &buf, &need))
    wprintf(L"EvtGetLogInfo failed: %lu\n", GetLastError());
UINT64 old = buf.UInt64Val;
EvtClose(log);
What the API appears to be doing is returning the record number of the oldest event ever written to the log, not the oldest accessible event. What I mean by that is: let's say you have 10 records in your log, numbered 1-10, and you clear the log. The next 10 events inserted will be numbered 11-20. If you use the API, it returns 1, not 11 as Event Viewer displays. If you try to retrieve event 1 using EvtQuery/EvtNext it fails and returns no event -- as I would expect.
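For completeness, here is an untested sketch of how I check whether a given record is still retrievable (the XPath filter by EventRecordID is illustrative):
// Query for one specific record by EventRecordID
WCHAR query[128];
swprintf_s(query, L"*[System[(EventRecordID=%llu)]]", old);
EVT_HANDLE results = EvtQuery(NULL, _logName, query, EvtQueryChannelPath);
EVT_HANDLE events[1];
DWORD returned = 0;
if (!EvtNext(results, 1, events, INFINITE, 0, &returned))
    wprintf(L"Record not retrievable: %lu\n", GetLastError()); // fails for cleared records
EvtClose(results);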
Does anyone have experience with this method? What am I doing wrong? I have used it successfully with other properties (e.g. EvtLogNumberOfLogRecords), but cannot get EvtLogOldestRecordNumber to behave as expected.
http://msdn.microsoft.com/en-us/library/aa385385(v=VS.85).aspx
I was not able to get the new API to work for the oldest record number and had to revert to the legacy API to retrieve it.
msdn.microsoft.com/en-us/library/aa363665(VS.85).aspx
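A minimal sketch of the legacy route (assuming the channel also exists as a classic log; most error handling omitted):
// GetOldestEventLogRecord returns the number of the oldest record actually
// present, which matches what Event Viewer shows after a clear.
HANDLE h = OpenEventLog(NULL, _logName); // classic log name, e.g. L"Application"
DWORD oldest = 0;
if (h != NULL && GetOldestEventLogRecord(h, &oldest))
    wprintf(L"Oldest accessible record: %lu\n", oldest);
if (h != NULL)
    CloseEventLog(h);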
I'm new to Power Query and trying to do the following:
Get updates from the server.
Transform them.
Post the data back.
While the code works just fine, I'd like it to run every N minutes until the application closes.
Also, the LastMessageId variable should be re-evaluated after each call of GetUpdates(), and I need to somehow call GetUpdates() again with it.
I've tried Function.InvokeAfter but couldn't figure out how to run it more than once.
Recursion blows the stack, of course.
The only solution I see is to use List.Generate, but I struggle to understand how it can be used with a delay (see the sketch after the query below).
let
    // Get the list of records (stubbed here; the real call hits the server)
    GetUpdates = (optional offset as number) as list => {[update_id = 1]},
    Updates = GetUpdates(),
    // Store the last update_id
    LastMessageId = List.Last(Updates)[update_id],
    // Prepare and post the response for one record (body stubbed)
    Process = (item as record) as record => item,
    // Map the Process function over each item in the list of records
    Map = List.Transform(Updates, each Process(_))
in
    Map
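Here is an untested sketch of what I'm imagining with List.Generate plus Function.InvokeAfter (assuming the GetUpdates stub above and a fixed poll count):
let
    // each step waits one minute, then calls GetUpdates again with the last id
    Polls = List.Generate(
        () => [i = 0, updates = GetUpdates()],
        (state) => state[i] < 10, // fixed number of polls; "until closure" isn't expressible
        (state) => [
            i = state[i] + 1,
            updates = Function.InvokeAfter(
                () => GetUpdates(List.Last(state[updates])[update_id]),
                #duration(0, 0, 1, 0)) // one-minute delay between calls
        ],
        (state) => state[updates]
    )
in
    Polls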
Power BI does not support continuous automatic re-loading of data in the desktop.
Online, you can enforce a refresh as often as every 15 minutes using DirectQuery [1].
Alternative methods:
You could do this in Excel and use VBA to re-execute the query on a schedule.
Streaming data in Power BI [2]
Streaming data with Flow and Power BI [3]
[1]: Supported DirectQuery Sources
[2]: Real-time Streaming in Power BI
[3]: Streaming data with Flow
[4]: Don't forget to enable historic logging!
Please consider the following scenario:
IgniteUI 16.1 igGrid powered with the igGridUpdating feature and a RESTDataSource.
The user creates a new record through a modal dialog.
A POST request is initiated with the form data.
The server processes the create request and returns an object populated with the correct ID.
In the success handler on the client side, the newly added row in the grid has to be found and updated with the correct ID returned from the server.
The ID column serves as the grid's primary key and is hidden.
What happens when a new row is being added?
Looking at infragistics.lob-16.1.js:
In _dialogOpening(), row 68167, _originalValues are computed via $.extend(this._originalValues, values, this._originalValues), where values = _getDefaultValues(), or in other words values.id = this._pkVal. _pkVal is a counter that is incremented each time a new row appears.
Keeping that in mind: later, _endEditDialog() is called, where newValues, representing the data entered by the user, are merged with the default values of the input form: newValues = this._getNewValuesForRow(colElements) followed by newValues = $.extend({}, prevValues, newValues), and prevValues are the same _originalValues from above.
Then _addRow() is called, which in turn calls grid.dataSource.addRow(), and a transaction is created.
My point here is that the updating feature generates the ID for the new row automatically, and ID = CurrentRowsCount + 1.
So, if the grid contains 8 records, the newly created record will automatically be assigned ID = 9. Now imagine one of the existing records already has ID = 9: igGridUpdating's updateRow(rowId, values) will then update both rows, the existing one and the new one. And I really want to call this method in order to update the row with the data returned from the server.
How could I intervene in the whole picture and accomplish the update of the new row?
The auto-generated primary keys are only meant to cover the most basic scenarios. If your app supports row deletion, you should replace them with something that will stay unique, using the generatePrimaryKeyValue event.
Using updateRow after receiving the permanent keys from the server is the way to go, however, remember to pop the transaction from the allTransactions array so the update doesn't go to the server on the next saveChanges call.
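A rough sketch of both suggestions (the grid selector and the success-handler wiring here are assumptions):
$("#grid").igGridUpdating({
    // replace the auto-generated key with something guaranteed unique
    generatePrimaryKeyValue: function (evt, ui) {
        ui.value = "temp-" + Date.now();
    }
});

// in the success handler of the create request:
function onCreateSuccess(tempId, serverRow) {
    // swap in the permanent ID returned by the server
    $("#grid").igGridUpdating("updateRow", tempId, serverRow);
    // drop the resulting transaction so the next saveChanges doesn't resend it
    $("#grid").data("igGrid").dataSource.allTransactions().pop();
}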
I have an Analytics pipeline added just before the standard one in the pipelines configuration, to delete duplicate triggered page events before submitting them all to the database, so I can have unique triggered events; there seems to be a bug on Android/iOS devices that triggers several events within a few seconds of each other.
In this custom pipeline I need to get the list of all goals/events the current user has triggered in his session, so I can compare them with the values in the dataset obtained from the args parameter and delete the ones already triggered.
args.DataSet.Tables["PageEvents"] only returns the set about to be submitted to the database, which doesn't help, since it changes each time this pipeline runs. I also tried Sitecore.Analytics.Tracker.Visitor.DataSet, but I get a null value for these properties.
Does anyone know a way to get a list of all goals the user has triggered so far in his session without requesting it directly from the database?
Some code:
public class CommitUniqueAnalytics : CommitDataSetProcessor
{
    public override void Process(CommitDataSetArgs args)
    {
        Assert.ArgumentNotNull(args, "args");
        var table = args.DataSet.Tables["PageEvents"];
        if (table != null)
        {
            // Sitecore.Analytics.Tracker.Visitor.DataSet.PageEvents - this list is always empty
            // ...
        }
    }
}
I had a similar question.
In Sitecore 7.5 I found that this worked:
Tracker.Current.Session.Interaction.Pages.SelectMany(x=>x.PageEvents)
However, I'm a little worried that this will be inefficient if the Pages collection is very large.
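For example, something along these lines inside the processor above (untested; using PageEventDefinitionId as the comparison key is an assumption):
// Collect the definition IDs of every event already triggered in this session
var triggered = new HashSet<Guid>(
    Tracker.Current.Session.Interaction.Pages
        .SelectMany(p => p.PageEvents)
        .Select(e => e.PageEventDefinitionId));
// ...then drop rows from args.DataSet.Tables["PageEvents"] whose
// definition ID is already in the set.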
In my app, a user can create a message and send it. When the user sends the message, the message is created with createRecord and the server replies with 201 Created if successful.
Also, the user can receive messages from other users through a websocket. When a message arrives, I push it into the store with pushPayload.
var parsedData = JSON.parse(data);
this.store.pushPayload('message', parsedData);
The problem is that when a user sends a message and saves it, they also get it back from the websocket, and even though both objects have the same id, the store ends up with duplicate messages.
How can I tell the store that when I push or save something with the same id as an already existing element, it should overwrite it?
Simply perform a check to see whether the model is already in the store before adding it:
var parsedData = JSON.parse(data);
if (this.store.hasRecordForId('message', parsedData.id)) {
    // the record is already in the store; find returns a promise
    this.store.find('message', parsedData.id).then(function (record) {
        // perform updates using the returned record here
    });
} else {
    this.store.pushPayload('message', parsedData);
}
The only method I found to avoid this problem is to run my update in a new run loop. If the delay in milliseconds is long enough, the problem won't occur.
It seems that receiving the update from the websocket and the response to the save request at nearly the same time creates a race condition in Ember Data.
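A minimal sketch of that workaround (the 50 ms delay is arbitrary and may need tuning):
var parsedData = JSON.parse(data);
var store = this.store;
// defer the push to a later run loop so it can't race the in-flight save
Ember.run.later(this, function () {
    store.pushPayload('message', parsedData);
}, 50);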
Problem: How can I get liveliness notifications for both publisher connect and disconnect?
Background:
I'm working with an OpenDDS implementation where I have a publisher and a subscriber of a data type (dt), using the same topic, located on separate computers.
The reader on the subscriber side has overridden implementations of on_data_available(...) and on_liveliness_changed(...). My subscriber is started first, resulting in a callback to on_liveliness_changed(...) which says that there are no writers available. When the publisher is started I get a new callback telling me there is a writer available, and when the publisher publishes, on_data_available(...) is called. So far everything is working as expected.
The writer on the publisher side has an overridden implementation of on_publication_matched(...). When the publisher starts, on_publication_matched(...) gets called, since we already have a subscriber running.
The problem is that when the publisher disconnects, I get no callback to on_liveliness_changed(...) on the reader side, nor do I get a new callback when the publisher is started again.
I have tried changing the readerQos by setting readerQos.liveliness.lease_duration, but the result is that on_data_available(...) never gets called, and the only callback to on_liveliness_changed(...) is at startup, telling me that there are no publishers:
DDS::DataReaderQos readerQos;
DDS::StatusMask mask = DDS::DATA_AVAILABLE_STATUS | DDS::LIVELINESS_CHANGED_STATUS | DDS::LIVELINESS_LOST_STATUS;
m_subscriber->get_default_datareader_qos(readerQos);
DDS::Duration_t t = { 3, 0 };  // request a 3-second liveliness lease
readerQos.liveliness.lease_duration = t;
// narrow the generic DataReader to the type-specific reader
m_binary_Reader = binary::binary_tdatareader::_narrow(
    m_subscriber->create_datareader(m_Sender_Topic, readerQos, this, mask));
/Kristofer
Ok, guess there aren't many DDS users here.
After some research I found that a reader/writer match occurs only if this compatibility criterion is satisfied: offered lease_duration <= requested lease_duration
The solution was to set the writer QoS to offer the same liveliness. There is probably a way of checking whether the requested reader QoS can be supplied by the corresponding writer and, if not, falling back to a "lower" QoS, though I haven't tried it yet.
In the on_liveliness_changed callback method I simply evaluated the alive_count from the LivelinessChangedStatus.
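A minimal sketch of the writer-side QoS change (m_publisher is assumed, mirroring the reader-side members above):
// Offered lease must satisfy: offered lease_duration <= requested lease_duration
DDS::DataWriterQos writerQos;
m_publisher->get_default_datawriter_qos(writerQos);
DDS::Duration_t lease = { 3, 0 };  // matches the reader's 3-second request
writerQos.liveliness.lease_duration = lease;
// create the writer with the adjusted QoS (listener/mask as in your existing code)
DDS::DataWriter_var writer = m_publisher->create_datawriter(
    m_Sender_Topic, writerQos, NULL, DDS::STATUS_MASK_NONE);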
/Kristofer