The DPDK documentation says that rte_eal_remote_launch can only be called from the master lcore. Does that mean I can only launch extra threads at runtime from the master thread? Can I assign it to slave threads?
P.S. There is another question. The documentation also says things like:
Note: This function is not designed to offer optimum performance. It
is just a practical way to launch a function on another lcore at
initialization time.
What does it mean? Is there another, more effective way to launch a thread?
Based on the logical cores specified in the EAL arguments, one can choose a single core or multiple cores as DPDK lcores. The DPDK API rte_eal_remote_launch starts a specific function, with arguments, on a specific lcore under the DPDK context. Until it is invoked through this API, each worker thread sits in a wait state (created and waiting to run a user function). The API should be invoked from the context of the master lcore to launch new functions that run under the DPDK EAL context.
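For illustration, here is a minimal sketch of the usual pattern (pre-20.11 API names, matching the master/slave terminology above; error handling trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* Function run on each slave lcore. */
    static int lcore_worker(void *arg)
    {
        printf("worker running on lcore %u\n", rte_lcore_id());
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        unsigned lcore_id;
        /* Must be called from the master lcore: launch the worker on
         * every slave lcore, then wait for all of them to return. */
        RTE_LCORE_FOREACH_SLAVE(lcore_id)
            rte_eal_remote_launch(lcore_worker, NULL, lcore_id);

        rte_eal_mp_wait_lcore();
        return 0;
    }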
Can I assign it to slave threads?
[Answer] One can also run alternative or worker threads, other than the DPDK lcore threads, by using
Service cores, using the rte_service API
non-DPDK (non-EAL) threads, using rte_thread_register (see the sketch after this list)
Please refer to the DPDK documentation for a sample, with explanation, of rte_eal_remote_launch usage.
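As a rough sketch of the rte_thread_register route (available in DPDK 20.11 and later; error handling trimmed), a plain pthread can register itself with the EAL before calling DPDK APIs:

    #include <stdio.h>
    #include <pthread.h>
    #include <rte_lcore.h>

    /* A plain (non-EAL) pthread that registers itself with the EAL so
     * it can safely use DPDK APIs from this thread. */
    static void *non_eal_worker(void *arg)
    {
        if (rte_thread_register() != 0)
            return NULL;
        printf("registered as lcore %u\n", rte_lcore_id());
        /* ... use DPDK APIs here ... */
        rte_thread_unregister();
        return NULL;
    }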
I need to run an eBPF security app over DPDK, but there are no maps to do it with.
Without DPDK, the eBPF app parses each incoming packet's 5-tuple and other parameters and compares them against an eBPF map containing ACLs. Packets matching the ACLs are dropped. The map is dynamically updated with ACLs by a user-space app.
However, in the DPDK implementation of eBPF there are no maps.
Is there an alternative way to feed the eBPF app a list of ACL rules and update them dynamically?
There are no BPF maps in the DPDK implementation of eBPF and no other way to persist state between two runs of the BPF VM. So your best bet is indeed to encode that information into the program itself.
[EDIT-1] For the question Is there an alternative way to feed the ebpf app with a list of ACL rules and update them dynamically?
[Answer] Yes, there is; please refer to Method-1 below for running eBPF code in the DPDK thread context to get access to tables, ACLs, and other huge-page context.
There are 2 instances of eBPF in DPDK:
running eBPF code in the DPDK thread context
running eBPF with AF_XDP to inject packets into the DPDK process
eBPF in the DPDK thread context is intended to let the user create sandboxed logic that runs in DPDK threads or callbacks, achieving goals like dynamic debugging, debug counters, special rules, etc.
eBPF with AF_XDP is not part of the DPDK logic, but part of kernel packet processing via XDP. Packets marked to be sent to DPDK are passed through AF_XDP sockets (selected via a BPF map) to the DPDK application. The advantage of this approach is that the NIC interface need not be bound to UIO drivers like uio_pci_generic, vfio-pci, or igb_uio.
Since the question I need to run an ebpf security app over dpdk does not make the intended approach clear, please find answers for both:
Method-1 (eBPF context on DPDK threads): In this approach, you rewrite and adapt the XDP eBPF program to accept the mbuf struct data (or the mtod of the mbuf) to access the Ethernet header and payload. The eBPF 5-tuple maps need to be converted to a lookup, either with a DPDK API or a custom API, to compare against the packet's 5-tuple. To mimic the behaviour of eBPF maps being updated by a user application, you will need to make use of DPDK multi-process (a secondary application) to update the 5-tuple table entries.
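For instance, the eBPF map lookup could become an rte_hash table keyed on the 5-tuple. A minimal sketch (the struct layout and names are illustrative, and error handling is omitted):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>
    #include <rte_lcore.h>

    /* Illustrative 5-tuple key for the ACL table. */
    struct five_tuple {
        uint32_t src_ip;
        uint32_t dst_ip;
        uint16_t src_port;
        uint16_t dst_port;
        uint8_t  proto;
    } __rte_packed;

    static struct rte_hash *acl_table;

    static void acl_table_init(void)
    {
        struct rte_hash_parameters params = {0};
        params.name = "acl_5tuple";
        params.entries = 1024;
        params.key_len = sizeof(struct five_tuple);
        params.hash_func = rte_jhash;
        params.socket_id = rte_socket_id();
        acl_table = rte_hash_create(&params);
    }

    /* Returns nonzero when the packet's 5-tuple matches an ACL entry,
     * i.e. the packet should be dropped. */
    static int acl_match(const struct five_tuple *key)
    {
        return rte_hash_lookup(acl_table, key) >= 0;
    }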
Method-2 (eBPF context in kernel XDP hooks): build the binary from the existing eBPF security app. Using DPDK AF_XDP, load the eBPF binary onto the interface. Update the 5-tuple lookup table via a BPF API wrapper (Python, C, or Go) as appropriate, as you have done previously.
Note:
If you are using Method-1: to achieve native performance, my recommendation is to rewrite the eBPF code with the DPDK API (since you want to use DPDK). One can leverage HW offloads, multiple queues, PTYPES, and RTE_FLOW, and you avoid the overhead of running the sandboxed eBPF in the DPDK thread context.
If you are using Method-2: since the action on a 5-tuple match is drop, I have to assume packets that are not dropped are injected back into the kernel; hence a DPDK thread is not required, unless you have specific packets that must be processed by DPDK user space.
The shared user-space table can be created with rte_malloc, whose memory can be accessed by multiple DPDK processes.
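Since rte_malloc memory comes from the shared huge pages, a secondary process can reach it; a named memzone is one convenient way to publish its location. A rough sketch (the name and size are illustrative):

    #include <rte_memzone.h>
    #include <rte_lcore.h>

    /* Primary process: reserve a named region in huge-page memory. */
    static void *acl_shared_create(void)
    {
        const struct rte_memzone *mz =
            rte_memzone_reserve("acl_entries", 4096, rte_socket_id(), 0);
        return mz ? mz->addr : NULL;
    }

    /* Secondary process: attach to the same region by name; both
     * processes now see the same memory. */
    static void *acl_shared_attach(void)
    {
        const struct rte_memzone *mz = rte_memzone_lookup("acl_entries");
        return mz ? mz->addr : NULL;
    }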
Does the gRPC server/client have any concept of a thread pool for connections? That is, is it possible to reuse threads, pre-allocate threads, queue requests when a thread limit is reached, etc.?
If not, how does it work: does it just allocate/destroy a thread whenever it needs one, without any limit and/or reuse? If yes, is it possible to configure it?
It depends on whether you are using the sync or async API.
For a sync client, your RPC call blocks the calling thread, so a thread pool is not really relevant. For a sync server, there is an internal threadpool handling all incoming requests; you can set a grpc::ResourceQuota on the ServerBuilder to limit the maximum number of threads used by that threadpool.
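For example (a sketch against the C++ API; the quota name and limit are arbitrary):

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/resource_quota.h>

    int main()
    {
        grpc::ResourceQuota quota("rpc_quota");
        quota.SetMaxThreads(16);  // cap the sync server's internal threadpool

        grpc::ServerBuilder builder;
        builder.SetResourceQuota(quota);
        // ... RegisterService(), AddListeningPort(), BuildAndStart() ...
        return 0;
    }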
For an async client and server, gRPC uses the CompletionQueue as the way for users to define their own threading model. A common way of building clients and servers is to use a user-provided threadpool that runs CompletionQueue::Next in each thread. Once Next returns a tag, you cast it to a user-defined type and run some method to advance that call's state. In this case, the user has full control of the threads being used.
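A minimal sketch of such a user-owned pool (CallData and Proceed are hypothetical names for your per-call state, not part of gRPC):

    #include <grpcpp/grpcpp.h>
    #include <thread>
    #include <vector>

    // Each pool thread drains the same CompletionQueue.
    void drain(grpc::CompletionQueue *cq)
    {
        void *tag;
        bool ok;
        while (cq->Next(&tag, &ok)) {
            // Cast the tag back to your own per-call state and advance it:
            // static_cast<CallData *>(tag)->Proceed(ok);  // hypothetical
        }
    }

    int main()
    {
        grpc::CompletionQueue cq;
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back(drain, &cq);

        // ... issue async RPCs whose tags land on 'cq' ...

        cq.Shutdown();              // Next() returns false once drained
        for (auto &t : pool)
            t.join();
        return 0;
    }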
Note that gRPC does create some internal threads, but they should not be used for the majority of the RPC work.
In the DPDK Keep Alive Sample Application, each slave core accesses the global rte_global_keepalive_info to mark itself as still alive.
Consider the case when you have a master application that uses core 1, and a slave application that uses core 2. The master application needs to regularly check if the slave application is still alive. So the master creates rte_global_keepalive_info and expects that the slave will regularly call rte_keepalive_mark_alive() using this variable.
If, however, the master and slave applications cannot share global variables as they are distinct processes with separate memory allocations, how is it possible for the slave application to "mark alive" the rte_global_keepalive_info created by the master application? Should the master still use rte_keepalive_create() to create the rte_global_keepalive_info variable?
Basically, both processes should use some form of inter-process communication; for example, shared memory created with shm_open(3).
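A bare-bones sketch of that (the shared-memory name and size are illustrative):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Master process: create and map a shared region for the keepalive
     * state. The slave opens the same name (without O_CREAT) and mmaps
     * it, so both processes see the same state. */
    void *create_keepalive_shm(void)
    {
        int fd = shm_open("/dpdk_keepalive", O_CREAT | O_RDWR, 0666);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, 4096) != 0)
            return NULL;
        return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    }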
There are examples of that: please have a look at the keepalive shared memory management and the keepalive Agent example.
I am developing a multi-threaded application and using Cassandra for the back-end.
Earlier, I created a separate session for each child thread and closed the session before terminating the thread after its execution. But then I thought that might be expensive, so I redesigned it: I have a single session, opened at server start-up, and any number of clients can use that session for querying.
Question: I just want to know if this is correct, or is there a better way to do it? I know connection pooling is an option, but is that really needed in this scenario?
It's certainly thread safe in the Java driver, so I assume the C++ driver is the same.
You are encouraged to only create one session and have all your threads use it so that the driver can efficiently maintain a connection pool to the cluster and process commands from your client threads asynchronously.
If you create multiple sessions on one client machine or keep opening and closing sessions, you would be forcing the driver to keep making and dropping connections to the cluster, which is wasteful of resources.
Quoting this Datastax blog post about 4 simple rules when using the DataStax drivers for Cassandra:
Use one Cluster instance per (physical) cluster (per application lifetime)
Use at most one Session per keyspace, or use a single Session and explicitly specify the keyspace in your queries
If you execute a statement more than once, consider using a PreparedStatement
You can reduce the number of network roundtrips and also have atomic operations by using Batches
The C/C++ driver is definitely thread safe at the session and future levels.
The CassSession object is used for query execution. Internally, a session object also manages a pool of client connections to Cassandra and uses a load balancing policy to distribute requests across those connections. An application should create a single session object per keyspace as a session object is designed to be created once, reused, and shared by multiple threads within the application.
They actually have a section called Thread Safety:
A CassSession is designed to be used concurrently from multiple threads. CassFuture is also thread safe. Other than these exclusions, in general, functions that might modify an object’s state are NOT thread safe. Objects that are immutable (marked ‘const’) can be read safely by multiple threads.
They also have a note about freeing objects, which is not thread safe, so you have to make sure all your threads are done before you free objects:
NOTE: The object/resource free-ing functions (e.g. cass_cluster_free, cass_session_free, … cass_*_free) cannot be called concurrently on the same instance of an object.
Source:
http://datastax.github.io/cpp-driver/topics/
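Putting those rules together for the C/C++ driver, a minimal sketch of one shared session for the whole application (no error handling; contact point and query are illustrative):

    #include <cassandra.h>

    int main(void)
    {
        /* One cluster + one session for the whole application; the
         * session is safe to share across threads. */
        CassCluster *cluster = cass_cluster_new();
        cass_cluster_set_contact_points(cluster, "127.0.0.1");

        CassSession *session = cass_session_new();
        CassFuture *connect = cass_session_connect(session, cluster);
        cass_future_wait(connect);
        cass_future_free(connect);

        /* Any thread may now execute statements on the shared session. */
        CassStatement *stmt =
            cass_statement_new("SELECT release_version FROM system.local", 0);
        CassFuture *result = cass_session_execute(session, stmt);
        cass_future_wait(result);
        cass_statement_free(stmt);
        cass_future_free(result);

        /* Free objects only after every thread is done with them:
         * the cass_*_free calls are not thread safe. */
        cass_session_free(session);
        cass_cluster_free(cluster);
        return 0;
    }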
I understand that Akka actors should not block in order to stay reactive to messages, but how do I structure my service where I want to monitor a process running for an indefinite period of time?
For example, we are using the Amazon Kinesis Connector library. You create a connector with a given configuration, which inherits from Runnable, and then call the Run() method. The connector simply runs indefinitely, pulling data from Kinesis, and writing it to Amazon S3. In fact, if the runnable returns, then that is an error, and it needs to be restarted.
Approach (1) would be to simply create a child actor for each Kinesis Connector running, and if the Run() method returns, you throw an exception, the Supervising Actor notices the exception and restarts the child actor. One connector per child actor per thread.
Approach (2) would be for the child actor to wrap the Kinesis Connector in a Future, and if the future returns, the actor would restart the Connector in another Future. Conceivably a single actor could manage multiple Connectors, but does this mean each Future is executing in a separate thread?
Which approach would be most in line with the philosophy of Akka, or is there some other approach people recommend? In general, I want to catch any problems with any Connector, and restart it. In total there would not be more than a half dozen Connectors running in parallel.
I would take approach 1. It should be noted, though, that actors do not have a dedicated thread by default; they share a thread pool (the so-called dispatcher, see: http://doc.akka.io/docs/akka/2.3.6/scala/dispatchers.html). This means that blocking is inherently dangerous, because it exhausts the threads of the pool, not letting other, non-blocked actors run (blocked actors do not put their thread back into the pool).

Therefore you should isolate blocking calls in a fixed-size set of dedicated actors and assign those actors a PinnedDispatcher. This latter step ensures that these actors do not interfere with each other (each has its own dedicated thread) and that they do not interfere with the rest of the system (all of the other actors run on other dispatchers, usually on the default-dispatcher). Be sure, though, to limit the number of actors running on the PinnedDispatcher, since the number of threads used grows with the number of actors on that dispatcher.
Of your two options, I'd say 1 is the more appropriate. No. 2 suffers from the fact that, in order to exit the Future monad's world, you need to call Await somewhere, and there you need to specify a maximum duration, which in your case does not make sense.
Maybe you could look into other options before going for it, though. A few keywords that may inspire you: streams and distributed channels.