How to monitor clipboard changes in X11? - C++

I have pretty much exhausted all the possibilities of finding an X11 API to do the following.
I have a thread which is trying to monitor for an event or notification to know when anything is copied into the clipboard by any X11 client. I do not want to monitor a specific target Atom (clipboard format); I am generally looking for changes to the clipboard.
Once I know that something has changed in the clipboard, I can dive in and perform XConvertSelection() on all the target formats (I want to request that the server give me all the possible ways to convert the copied data), and further process them from the resulting SelectionNotify events.
Again, I generally want to request all the formats (I was thinking of enumerating Atoms from 1 to 1000 to check the target Atoms), not register for changes in one specific format. Based on the response from the server, if a particular atom is absent I can check for None as the property member, or else store the other target Atom names in a list.
Can anyone help me with how to monitor for changes in the clipboard? Also, will iterating from 1 to 1000 guarantee an exhaustive search of all possible formats? Or is there a better way to do just that?

To monitor changes, use XFixes. With XCB it is used like:
// Enable XFixes (the extension data is cached by XCB -- do not free it!)
auto xfixes = xcb_get_extension_data(connection, &xcb_xfixes_id);
ev_selection_change_notify = xfixes->first_event + XCB_XFIXES_SELECTION_NOTIFY;
// The extension version must be negotiated before any XFixes request is used
auto *version = xcb_xfixes_query_version_reply(
    connection,
    xcb_xfixes_query_version(connection, XCB_XFIXES_MAJOR_VERSION, XCB_XFIXES_MINOR_VERSION),
    nullptr);
free(version);
// Subscribe to clipboard notifications
xcb_xfixes_select_selection_input(connection, root, clipboard,
                                  XCB_XFIXES_SELECTION_EVENT_MASK_SET_SELECTION_OWNER);
// Event loop:
auto *event = xcb_poll_for_event(connection);
int etype = event->response_type & 0x7f;
if (etype == ev_selection_change_notify) {
    auto *notification = reinterpret_cast<xcb_xfixes_selection_notify_event_t *>(event);
    ...
}
free(event);
...
In Xlib it should be similar.
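For illustration, a rough Xlib sketch of the same idea (assuming an already-open Display *dpy and a window win you own; error checking omitted) might look like this:
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int xfixes_event_base, xfixes_error_base;
XFixesQueryExtension(dpy, &xfixes_event_base, &xfixes_error_base);

// Ask to be notified whenever the CLIPBOARD selection gets a new owner
Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
XFixesSelectSelectionInput(dpy, win, clipboard, XFixesSetSelectionOwnerNotifyMask);

// Event loop:
XEvent event;
XNextEvent(dpy, &event);
if (event.type == xfixes_event_base + XFixesSelectionNotify) {
    auto *notify = reinterpret_cast<XFixesSelectionNotifyEvent *>(&event);
    // notify->selection is the selection that changed, notify->owner its new owner
}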
To check the list of available targets, don’t loop to 1000! Simply query the TARGETS target; it should give you the list of valid targets for the clipboard content.
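For illustration, a minimal XCB sketch of requesting TARGETS (here clipboard, targets and dest are atoms you would intern yourself with xcb_intern_atom, and win is a window you own that receives the converted data):
// Ask the selection owner to write its list of targets into a property on our window
xcb_convert_selection(connection, win, clipboard, targets, dest, XCB_CURRENT_TIME);
xcb_flush(connection);

// Later, when the matching XCB_SELECTION_NOTIFY event arrives:
auto cookie = xcb_get_property(connection, 0 /* don't delete */, win,
                               dest, XCB_ATOM_ATOM, 0, 1024);
auto *reply = xcb_get_property_reply(connection, cookie, nullptr);
if (reply) {
    auto *atoms = static_cast<xcb_atom_t *>(xcb_get_property_value(reply));
    for (uint32_t i = 0; i < reply->value_len; ++i) {
        // each atoms[i] is one format the owner can convert to;
        // xcb_get_atom_name() turns it into a readable name
    }
    free(reply);
}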
There is a caveat, though: instead of “the” clipboard, X11 allows applications to use “selections”, which can be tagged by arbitrary atoms. Of those, CLIPBOARD is of primary interest, but PRIMARY and the (rarely used) SECONDARY are also there, as well as arbitrary selections for “private communication”.
Reference: https://www.x.org/releases/X11R7.7/doc/xorg-docs/icccm/icccm.html#Use_of_Selection_Atoms

Understanding the point of supply blocks (on-demand supplies)

I'm having trouble getting my head around the purpose of supply {…} blocks/the on-demand supplies that they create.
Live supplies (that is, the types that come from a Supplier and get new values whenever that Supplier emits a value) make sense to me – they're a version of asynchronous streams that I can use to broadcast a message from one or more senders to one or more receivers. It's easy to see use cases for responding to a live stream of messages: I might want to take an action every time I get a UI event from a GUI interface, or every time a chat application broadcasts that it has received a new message.
But on-demand supplies don't make a similar amount of sense. The docs say that
An on-demand broadcast is like Netflix: everyone who starts streaming a movie (taps a supply), always starts it from the beginning (gets all the values), regardless of how many people are watching it right now.
Ok, fair enough. But why/when would I want those semantics?
The examples also leave me scratching my head a bit. The Concurrency page currently provides three examples of a supply block, but two of them just emit the values from a for loop. The third is a bit more detailed:
my $bread-supplier = Supplier.new;
my $vegetable-supplier = Supplier.new;
my $supply = supply {
    whenever $bread-supplier.Supply {
        emit("We've got bread: " ~ $_);
    };
    whenever $vegetable-supplier.Supply {
        emit("We've got a vegetable: " ~ $_);
    };
}
$supply.tap( -> $v { say "$v" });
$vegetable-supplier.emit("Radish"); # OUTPUT: «We've got a vegetable: Radish␤»
$bread-supplier.emit("Thick sliced"); # OUTPUT: «We've got bread: Thick sliced␤»
$vegetable-supplier.emit("Lettuce"); # OUTPUT: «We've got a vegetable: Lettuce␤»
There, the supply block is doing something. Specifically, it's reacting to the input of two different (live) Suppliers and then merging them into a single Supply. That does seem fairly useful.
… except that if I want to transform the output of two Suppliers and merge their output into a single combined stream, I can just use
my $supply = Supply.merge:
    $bread-supplier.Supply.map({ "We've got bread: $_" }),
    $vegetable-supplier.Supply.map({ "We've got a vegetable: $_" });
And, indeed, if I replace the supply block in that example with the map/merge above, I get exactly the same output. Further, neither the supply block version nor the map/merge version produce any output if the tap is moved below the calls to .emit, which shows that the "on-demand" aspect of supply blocks doesn't really come into play here.
At a more general level, I don't believe the Raku (or Cro) docs provide any examples of a supply block that isn't either in some way transforming the output of a live Supply or emitting values based on a for loop or Supply.interval. None of those seem like especially compelling use cases, other than as a different way to transform Supplys.
Given all of the above, I'm tempted to mostly write off the supply block as a construct that isn't all that useful, other than as a possible alternate syntax for certain Supply combinators. However, I have it on fairly good authority that
while Supplier is often reached for, many times one would be better off writing a supply block that emits the values.
Given that, I'm willing to hazard a pretty confident guess that I'm missing something about supply blocks. I'd appreciate any insight into what that might be.
Given you mentioned Supply.merge, let's start with that. Imagine it wasn't in the Raku standard library, and we had to implement it. What would we have to take care of in order to reach a correct implementation? At least:
Produce a Supply result that, when tapped, will...
Tap (that is, subscribe to) all of the input supplies.
When one of the input supplies emits a value, emit it to our tapper...
...but make sure we follow the serial supply rule, which is that we only emit one message at a time; it's possible that two of our input supplies will emit values at the same time from different threads, so this isn't an automatic property.
When all of our supplies have sent their done event, send the done event also.
If any of the input supplies we tapped sends a quit event, relay it, and also close the taps of all of the other input supplies.
Make very sure we don't have any odd races that will lead to breaking the supply grammar emit* [done|quit].
When a tap on the resulting Supply we produce is closed, be sure to close the tap on all (still active) input supplies we tapped.
Good luck!
So how does the standard library do it? Like this:
method merge(*@s) {
    @s.unshift(self) if self.DEFINITE; # add if instance method
    # [I elided optimizations for when there are 0 or 1 things to merge]
    supply {
        for @s {
            whenever $_ -> \value { emit(value) }
        }
    }
}
The point of supply blocks is to greatly ease correctly implementing reusable operations over one or more Supplys. The key risks they aim to remove are:
Not correctly handling concurrently arriving messages in the case that we have tapped more than one Supply, potentially leading us to corrupt state (since many supply combinators we might wish to write will have state too; merge is so simple as not to). A supply block promises us that we'll only be processing one message at a time, removing that danger.
Losing track of subscriptions, and thus leaking resources, which will become a problem in any longer-running program.
The second is easy to overlook, especially when working in a garbage-collected language like Raku. Indeed, if I start iterating some Seq and then stop doing so before reaching the end of it, the iterator becomes unreachable and the GC eats it after a while. If I'm iterating over the lines of a file and there's an implicit file handle there, I risk the file not being closed in a very timely way and might run out of handles if I'm unlucky, but at least there's some path to it getting closed and the resources released.
Not so with reactive programming: the references point from producer to consumer, so if a consumer "stops caring" but hasn't closed the tap, then the producer will retain its reference to the consumer (thus causing a memory leak) and keep sending it messages (thus doing throwaway work). This can eventually bring down an application. The Cro chat example that was linked illustrates this:
my $chat = Supplier.new;
get -> 'chat' {
    web-socket -> $incoming {
        supply {
            whenever $incoming -> $message {
                $chat.emit(await $message.body-text);
            }
            whenever $chat -> $text {
                emit $text;
            }
        }
    }
}
What happens when a WebSocket client disconnects? The tap on the Supply we returned using the supply block is closed, causing an implicit close of the taps of the incoming WebSocket messages and also of $chat. Without this, the subscriber list of the $chat Supplier would grow without bound, and in turn keep alive an object graph of some size for each previous connection too.
Thus, even in this case where a live Supply is very directly involved, we'll often have subscriptions to it that come and go over time. On-demand supplies are primarily about resource acquisition and release; sometimes, that resource will be a subscription to a live Supply.
A fair question is whether we could have written this example without a supply block. And yes, we can; this probably works:
my $chat = Supplier.new;
get -> 'chat' {
    web-socket -> $incoming {
        my $emit-and-discard = $incoming.map(-> $message {
            $chat.emit(await $message.body-text);
            Supply.from-list()
        }).flat;
        Supply.merge($chat, $emit-and-discard)
    }
}
Note that it takes some effort in Supply-space to map into nothing. I personally find that less readable - and this didn't even avoid a supply block, it's just hidden inside the implementation of merge. Trickier still are cases where the number of supplies that are tapped changes over time, such as in recursive file watching, where new directories to watch may appear. I don't really know how I'd express that in terms of combinators that appear in the standard library.
I spent some time teaching reactive programming (not with Raku, but with .Net). Things were easy with one asynchronous stream, but got more difficult when we started getting to cases with multiple of them. Some things fit naturally into combinators like "merge" or "zip" or "combine latest". Others can be bashed into those kinds of shapes with enough creativity - but it often felt contorted to me rather than expressive. And what happens when the problem can't be expressed in the combinators? In Raku terms, one creates output Suppliers, taps input supplies, writes logic that emits things from the inputs into the outputs, and so forth. Subscription management, error propagation, completion propagation, and concurrency control have to be taken care of each time - and it's oh so easy to mess it up.
Of course, the existence of supply blocks doesn't stop us from taking the fragile path in Raku too. This is what I meant when I said:
while Supplier is often reached for, many times one would be better off writing a supply block that emits the values
I wasn't thinking here about the publish/subscribe case, where we really do want to broadcast values and are at the entrypoint to a reactive chain. I was thinking about the cases where we tap one or more Supply, take the values, do something, and then emit things into another Supplier. Here is an example where I migrated such code towards a supply block; here is another example that came a little later on in the same codebase. Hopefully these examples clear up what I had in mind.

How to create a stop-filter (instead of a pass-filter) when reading CAN messages? [C++, Linux]

I am using SocketCAN to access the CAN bus.
I have successfully created pass-filters like this:
struct can_filter m_Filter;
// ... setting up m_Filter
setsockopt(m_CanSockId, SOL_CAN_RAW, CAN_RAW_FILTER, &m_Filter,
           sizeof(struct can_filter));
This instructs the socket to let CAN messages pass when they match the filter settings.
Now I want to create a stop-filter but I do not know how to do it.
For example: I wish to let all CAN messages pass except the ones with ID 0x18DAF101.
Does anybody know how to do it?
You have to set the CAN_INV_FILTER bit in the can_id of your filter to invert the filter logic.
From the documentation behind the link you have provided:
The filter can be inverted in this semantic, when the CAN_INV_FILTER
bit is set in can_id element of the can_filter structure.
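For example, a minimal sketch (reusing m_CanSockId from the question) that drops only the extended ID 0x18DAF101 and lets everything else pass could look like this:
// Match the extended-frame ID 0x18DAF101 exactly, then invert the match,
// so everything EXCEPT that ID is received
struct can_filter stopFilter;
stopFilter.can_id   = (0x18DAF101 | CAN_EFF_FLAG) | CAN_INV_FILTER;
stopFilter.can_mask = CAN_EFF_MASK | CAN_EFF_FLAG; // compare all 29 ID bits plus the EFF flag
setsockopt(m_CanSockId, SOL_CAN_RAW, CAN_RAW_FILTER,
           &stopFilter, sizeof(stopFilter));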

Wowza: Modifying a Stream as it is playing?

It seems like this must be happening in many different contexts, such as adding subtitles. What I want to do is grab a frame, change some feature within it and then "put it back" so that the end user sees this change. I think I know how to grab and modify the frame, but I do not see how to re-insert it into the stream. I would appreciate a link or code.
On a live stream, there are a few things to consider depending on what the end goal might be. If it's true packet/frame-level manipulation, you would likely need to make the modification and send the output to a new stream (the source remains unscathed, but the new stream has the modification). Modifying the stream inline will be very problematic.
Packet level modification using IMediaStreamLivePacketNotify
You can implement the IMediaStreamLivePacketNotify interface to handle new packets and modify them as necessary. Example implementation:
private class PacketListener implements IMediaStreamLivePacketNotify
{
    @Override
    public void onLivePacket(IMediaStream stream, AMFPacket packet)
    {
        // handle packet modifications
    }
}
Upon modifying the packet, you could publish it to a secondary stream through the Publisher object.
Publisher.createInstance(vhost, appName, appInstName);
The publisher contains functionality to add A/V data to your new stream:
switch (packet.getType())
{
    case IVHost.CONTENTTYPE_AUDIO:
        publisher.addAudioData(packet.getData(), packet.getAbsTimecode());
        break;
    case IVHost.CONTENTTYPE_VIDEO:
        publisher.addVideoData(packet.getData(), packet.getAbsTimecode());
        break;
    case IVHost.CONTENTTYPE_DATA:
    case IVHost.CONTENTTYPE_DATA3:
        publisher.addDataData(packet.getData(), packet.getAbsTimecode());
}
The Duplicate Streams module contains similar functionality and gives a broader look at this kind of implementation.
Packet level modification using getPlayPackets()
You could also look at the IMediaStream object and leverage the IMediaStream.getPlayPackets() functionality. Then you can obtain the packets and modify as needed in a corresponding thread that continually processes the inbound stream. Thereafter, you could use the Publisher object to publish the new stream (similar to the above).
Metadata injection
However, if you are just looking to inject some metadata the process becomes much more basic. You can modify the AMFDataList within the source stream to include the new meta information.
Adding onto the stream
If you are looking to add data onto the inline stream (vs. modifying it), you could simply add it via the IMediaStream object:
IMediaStream.addAudioData(..)

Composing Flow Graphs

I've been playing around with Akka Streams and get the idea of creating Flows and wiring them together using FlowGraphs.
I know this part of Akka is still under development, so some things may not be finished and some other bits may change, but is it possible to create a FlowGraph that isn't "complete" - i.e. isn't attached to a Sink - and pass it around to different parts of my code to be extended by adding Flows to it and finally completed by adding a Sink?
Basically, I'd like to be able to compose FlowGraphs but don't understand how... Especially if a FlowGraph has split a stream by using a Broadcast.
Thanks
The next week (December) will be documentation writing for us, so I hope this will help you to get into Akka Streams more easily! Having said that, here's a quick answer:
Basically you need a PartialFlowGraph instead of a FlowGraph. In those we allow the usage of UndefinedSink and UndefinedSource, which you can then "attach" afterwards. In your case, we also provide a simple helper builder to create graphs which have exactly one "missing" sink – those can be treated exactly as if they were a Source, see below:
// for akka-streams 1.0-M1
val source = Source() { implicit b ⇒
  // prepare an undefined sink, which can be replaced by a proper sink afterwards
  val sink = UndefinedSink[Int]
  // build your processing graph
  Source(1 to 10) ~> sink
  // return the undefined sink which you mean to "fill in" afterwards
  sink
}
// use the partial graph (source) multiple times, each time with a different sink
source.runWith(Sink.ignore)
source.runWith(Sink.foreach(x ⇒ println(x)))
Hope this helps!

Understanding RProperty IPC communication

I'm studying this source base. Basically, it is an Anim Server client for Symbian 3rd Edition, for the purpose of grabbing input events without consuming them in a reliable way.
If you look at this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
Inside this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will at some point Subscribe to the property, passing a TRequestStatus reference. The server will signal that request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented in the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In that case, the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and quite few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...
void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // avoid interference with other properties by defining the category
    // as a secure ID of your process (perhaps it's the only allowed value)
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel) User::LeaveIfError(iStatus.Int());
    // forward the error to RunError
    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from docs)
    iProperty.Subscribe(iStatus);
    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound) User::LeaveIfError(err);
    SetActive();
}
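Not shown above, but required for any CActive subclass: a DoCancel() that cancels the outstanding request so that Cancel() can complete. A minimal sketch:
void CMyActive::DoCancel()
{
    // cancel the outstanding Subscribe() request on the property
    iProperty.Cancel();
}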