How to do a deep copy of rte_lpm? - dpdk

I wanted to create a clone of one existing rte_lpm structure. It looks like there is no function available in DPDK to do this directly.
Docs: https://doc.dpdk.org/api/rte__lpm_8h.html
Is there any way to clone it manually?

Unfortunately, there is no such functionality. There is also no functionality to iterate over the rules. But there are a few options:
We can keep the original routing table alongside the LPM structure, for example as a simple list. This list can also be used to show the routing table to a user.
There are a few examples in DPDK; have a look at l3fwd: https://doc.dpdk.org/guides/sample_app_ug/l3_forward.html
So, instead of creating a deep copy of the LPM, we can just create a new one and populate it with the rules from this list, as sketched below.
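A minimal sketch of this approach, assuming a simple fixed-size array as the shadow table (struct route_entry, routes, and lpm_clone are made-up names for illustration; rte_lpm_create(), rte_lpm_add(), and rte_lpm_free() are the real DPDK calls):

#include <rte_lpm.h>
#include <rte_lcore.h>

#define MAX_ROUTES 1024

/* Hypothetical shadow table kept alongside the LPM. */
struct route_entry {
    uint32_t ip;
    uint8_t  depth;
    uint32_t next_hop;
};

static struct route_entry routes[MAX_ROUTES];
static unsigned int nb_routes;

/* "Deep copy": create a fresh LPM and replay the shadow table into it. */
static struct rte_lpm *
lpm_clone(const char *name, const struct rte_lpm_config *cfg)
{
    struct rte_lpm *copy = rte_lpm_create(name, rte_socket_id(), cfg);
    if (copy == NULL)
        return NULL;

    for (unsigned int i = 0; i < nb_routes; i++) {
        if (rte_lpm_add(copy, routes[i].ip, routes[i].depth,
                        routes[i].next_hop) < 0) {
            rte_lpm_free(copy);
            return NULL;
        }
    }
    return copy;
}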
We can create two LPMs at the very beginning and add rules to both of them. This might slow down the add/delete path, but once you need a copy -- you always have it.
Another downside is that you will have just one copy, i.e. if you need several, this approach will not work.
So instead of creating a deep copy of the LPM, we just always keep two tables in sync, for example with a helper like the one below.
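A sketch of keeping the pair consistent (add_route_both is a made-up helper; rte_lpm_add() and rte_lpm_delete() are the real calls):

/* Add a rule to both LPMs; roll back the first on failure of the second. */
static int
add_route_both(struct rte_lpm *lpm_a, struct rte_lpm *lpm_b,
               uint32_t ip, uint8_t depth, uint32_t next_hop)
{
    int ret = rte_lpm_add(lpm_a, ip, depth, next_hop);
    if (ret < 0)
        return ret;

    ret = rte_lpm_add(lpm_b, ip, depth, next_hop);
    if (ret < 0)
        rte_lpm_delete(lpm_a, ip, depth);  /* keep the pair consistent */

    return ret;
}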

Is there a way to get the scroll state from a LazyRow

I have made a LazyRow that I now want to get the scroll position from. From what I understand, ScrollableRow has been deprecated (correct me if I'm wrong). The thing is that I can't make a ScrollableRow, so I thought: let's make a lazy one then. But I have no clue how to get the scroll position from the LazyRow. I know how to get the index, but not the position, if that even exists. Here is what I have tried:
val scrollState = rememberScrollState()
LazyRow(scrollState = scrollState) {
}
For lazy scrollers, there are separate lazy states.
In fact, I think there's just one: rememberLazyListState().
Pass that as the state parameter of the row and then you can access all kinds of info. For example, you can get the index of the first visible item, as well as its offset. There are direct properties for this stuff on the object returned by the above initialisation. You can also perform more complex operations using the lazyListState.layoutInfo property.
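A minimal sketch (MyRow and the item contents are placeholders; note that LazyRow's parameter is named state, not scrollState):

@Composable
fun MyRow() {
    // Survives recomposition and exposes the scroll position.
    val listState = rememberLazyListState()

    LazyRow(state = listState) {
        items(50) { index ->
            Text("Item $index")
        }
    }

    // Index of the first visible item and its scroll offset in pixels.
    val index = listState.firstVisibleItemIndex
    val offsetPx = listState.firstVisibleItemScrollOffset
}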
Also, ScrollableRow may be deprecated as a @Composable, but it has just been refactored a bit. Now, you can use the horizontalScroll() and verticalScroll() Modifiers, both of which accept a scrollState parameter, which expects the same object as the one you've created in the question.
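Roughly like this (PlainScrollableRow is a made-up name; scrollState.value is the scroll offset in pixels):

@Composable
fun PlainScrollableRow() {
    val scrollState = rememberScrollState()

    // A non-lazy Row made scrollable with the horizontalScroll Modifier.
    Row(modifier = Modifier.horizontalScroll(scrollState)) {
        repeat(50) {
            Text("Item $it")
        }
    }
}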
Usually, you'd use lazy scrollers since they're not tough to implement and are also super-performant, but the general idea is that they are used with large datasets, whereas non-lazy scrollers are okay for small lists. This is because the lazy ones compose only a small fraction of the entire list at a time, keeping your UI performant, which is not something regular scrollers do, and not a problem for small datasets.
They're like equivalents of RecyclerView from the View System

Repository pattern: isn't getting the entire domain object bad behavior (read method)?

The repository pattern is there to abstract away the actual data source, and I do see a lot of benefits in that. But a repository should not expose IQueryable, to prevent leaking DB information, and it should always return domain objects, not DTOs or POCOs, and it is this last part I have trouble getting my head around.
If a repository always has to return a domain object, doesn't that mean it fetches far too much data most of the time? Let's say it returns an employee domain object with forty properties, while the service and view layers consuming that object only use five of them.
It means the database has fetched a lot of unnecessary data and pumped it across the network. Doing that with one object is hardly noticeable, but if millions of records are pushed across that way and a lot of the data is thrown away every time, is that not considered bad behavior?
Yes, when adding, editing, or deleting the object you will use the entire object, but reading the entire object and pushing it to another layer that uses only a fraction of it is not utilizing the underlying database and network in the most optimal way. What am I missing here?
There's nothing preventing you from having a separate read model (which could be a separately stored projection of the domain, or a query-time projection) and separating out the command and query concerns -- CQRS.
If you then put something like GraphQL in front of your read side then the consumer can decide exactly what data they want from the full model down to individual field/property level.
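For illustration, a hypothetical GraphQL query against such a read side, pulling only five of the employee's forty properties (the schema and field names are made up):

query {
  employee(id: 42) {
    id
    firstName
    lastName
    department
    hireDate
  }
}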
Your commands still interact with the full domain model as before (except where it's a performance no-brainer to use set based operations).

Glimmer.js how to reset tracked property to initial value without using the constructor

In Glimmer.js, what is the best way to reset a tracked property to an initial value without using the constructor?
Note: Cannot use the constructor because it is only called once on initial page render and never called again on subsequent page clicks.
There are two parts to this answer, but the common theme between them is that they emphasize switching from an imperative style (explicitly setting values in a lifecycle hook) to a declarative style (using true one-way data flow and/or using decorators to clearly indicate where you’re doing some kind of transformation of local state based on arguments).
Are you sure you need to do that? A lot of times people think they do and they should actually just restructure their data flow. For example, much of the time in Ember Classic, people reached for a pattern of "forking" data using hooks like didInsertElement or didReceiveAttrs. In Glimmer components (whether in Ember Octane or in standalone Glimmer.js), it's idiomatic instead to simply manage your updates in the owner of the data: really doing data-down-actions-up.
Occasionally, it does actually make sense to create local copies of tracked data in a component—for example, when you want to have a clean separation between data coming from your API and the way you handle data in a form (because user interfaces are API boundaries!). In those scenarios, the @localCopy and @trackedReset decorators from tracked-toolbox are great solutions.
@localCopy does roughly what its name suggests. It creates a local copy of data passed in via arguments, which you can change locally via actions, but which also switches back to the argument if the argument value changes.
@trackedReset creates some local state which resets when an argument updates. Unlike @localCopy, the state is not a copy of the argument; it just needs to reset when the argument updates. A sketch follows below.
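A minimal sketch, assuming an Ember Octane app with tracked-toolbox installed (NameFormComponent and its @name argument are made up for illustration):

import Component from '@glimmer/component';
import { localCopy } from 'tracked-toolbox';

export default class NameFormComponent extends Component {
  // Editable local copy of the @name argument; when the argument
  // itself changes upstream, the local value resets to match it.
  @localCopy('args.name') name;

  updateName = (event) => {
    this.name = event.target.value;
  };
}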
With either of these approaches, you end up with a much more “declarative” data flow than in the old Ember Classic approach: “forking” the data is done via decorators (the second approach), and much of the time you don't end up forking it at all because you just push the changes back up to the owner of the original data (the first approach).

Perform UPDATE without SELECT in EclipseLink

Is it possible (without writing custom SQL) to have EclipseLink trust me as to whether to perform an UPDATE or INSERT on a merge, rather than performing a SELECT, then an UPDATE or INSERT? If so, how?
In my mind, I'd like to use a transient flag and a custom if statement to determine whether the item is already in the database or not, and instruct EclipseLink to perform the query required. I understand Hibernate provides this as update() and save().
A few notable points:
I have a large amount of objects that are being batch-merged, so persist() is not suitable for me (also, the objects do not exist in the library except when they are passed in for the merge anyway)
Because there are so many objects being merged, it is unlikely that there will be any cache hits, so EclipseLink cannot tell from the cache whether it has seen an object before
Because of the amount of objects going in (to a non-local database, in this case), the SELECTs are a problem, especially given that I can tell which will be required before the operation occurs
I'd really rather not switch to Hibernate
I'd really rather not switch to Hibernate
Thanks. Perhaps I am missing something obvious!
What you are looking for is called existence checking in EclipseLink, and can be configured using the @ExistenceChecking annotation as described here:
http://eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_existencechecking.htm
Try specifying @ExistenceChecking(ExistenceType.CHECK_CACHE): while the docs state that CHECK_CACHE is the default, that is for native EclipseLink projects. JPA projects use CHECK_DATABASE as the default, to conform to the JPA specification's requirement that merge calls merge in data from the database if necessary. Using CHECK_CACHE will prevent EclipseLink from querying at all, so you can query yourself based on your own criteria. Existing objects will be required to be in the cache though; otherwise there is nothing to merge into, and EclipseLink will have to perform an INSERT.
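For example (Item is a made-up entity; the annotation comes from org.eclipse.persistence.annotations):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.ExistenceChecking;
import org.eclipse.persistence.annotations.ExistenceType;

@Entity
@ExistenceChecking(ExistenceType.CHECK_CACHE)  // skip the existence SELECT
public class Item {
    @Id
    private long id;

    // ... other fields, getters, and setters
}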
Another option is to use a customizer to define the DoesExistQuery used for each class. This could allow you to override the checkEarlyReturn method to perform as needed to determine existence.
The above options still use the JPA merge, and so still require getting the existing data to merge into -- so they will still require SELECTs for existing objects not in the cache. If all you are after is an update-all type statement that will update the object or insert the object as is, without tracking only what has changed, you might try looking at native EclipseLink functionality, such as the UnitOfWork API. Using something like ((EntityManagerImpl)em.getDelegate()).getUnitOfWork().updateObject(entity), or using the UnitOfWork to execute your own UpdateObjectQuery, would avoid the SELECTs for existing objects, at the cost of no longer sending only the changes.
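An untested sketch of the UpdateObjectQuery variant (em and entity are assumed to exist, inside an active transaction; unwrapping to EclipseLink's native Session is an assumption here):

import org.eclipse.persistence.queries.UpdateObjectQuery;
import org.eclipse.persistence.sessions.Session;

// Push the object's state as an UPDATE without the prior existence SELECT.
Session session = em.unwrap(Session.class);
UpdateObjectQuery query = new UpdateObjectQuery(entity);
session.executeQuery(query);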
I got the SELECT call before INSERT/UPDATE skipped in EclipseLink JPA using the existence-checking extension described here:
https://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_existencechecking.htm

Remove keys from cache

I'm using memcached and C++. I want to remove all keys from the server using the C++ API. It would be better to remove them without a list of the keys.
There are two functions in the documentation: memcached_dump and memcached_delete. The first one returns the list of keys, and the second one removes them.
But here is a quote from the docs of the first function:
memcached_dump() is used to get a list of keys found in memcached(1) servers. Because memcached(1) does not guarantee to dump all keys you can not assume you have fetched all keys from the server.
The first question: is there any way to fetch ALL keys? And the second: how do I use these functions at all? There aren't any examples in the documentation.
Thanks.
Sounds like you want memcached_flush?
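A minimal sketch using libmemcached (the server address is a placeholder; memcached_flush with an expiration of 0 invalidates every key immediately):

#include <libmemcached/memcached.h>
#include <stdio.h>

int main(void)
{
    memcached_st *memc = memcached_create(NULL);
    memcached_server_add(memc, "localhost", 11211);

    /* Expire every key on the server right away. */
    memcached_return_t rc = memcached_flush(memc, 0);
    if (rc != MEMCACHED_SUCCESS)
        fprintf(stderr, "flush failed: %s\n", memcached_strerror(memc, rc));

    memcached_free(memc);
    return 0;
}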
An elegant way to remove memcached keys would be to use the basic delete command.
But as we don't know which keys to delete, you ought to keep a log of the data being set in memcached. You could dump these logs, along with their timestamps, in any data store. With this procedure you would be able to delete keys according to certain rules, giving you better control over the delete operation.
Logging keys is a useful way of managing cache data when you need to be able to delete a bunch of keys. In addition, using a prefix can provide a way of managing the cached data as a whole.
// PHP-style pseudocode: the cache_* helpers and $application_prefix are
// application-defined wrappers around the memcached client.
function save($key, $data, $group) {
    cache_log_key($group, $key);                  // record the key under its group
    cache_save($application_prefix . $key, $data);
}

function deleteGroup($group) {
    $loggedKeys = cache_get_log($group);          // every key logged for this group
    foreach ($loggedKeys as $key) {
        cache_delete($application_prefix . $key);
    }
    cache_delete_log($group);                     // clear the log itself
}