Has anyone tried working with the DeleteCommand and UpdateCommand of SPDataSource used in conjunction with SPGridView? If so, can you share a working example?
There is no way to do this without code. The DeleteCommand only specifies the CAML query used to retrieve the item(s) to be deleted; you in turn have to add a handler that deletes the retrieved items in code. Not very clean, I know, but the SPDataSource is basically only usable for selecting items. CAML is a retrieval-only language and has no command verbs of its own (SELECT is implicit; DELETE, INSERT, and UPDATE do not exist in CAML).
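A minimal sketch of such a handler in C#, assuming an SPGridView ("grid") whose DataKeyNames is set to the item ID and whose rows come from a list-scoped SPDataSource; the "Tasks" list name is a placeholder:

using System.Web.UI.WebControls;
using Microsoft.SharePoint;

// Wire this up where the grid is created, e.g. in CreateChildControls().
grid.RowDeleting += delegate(object sender, GridViewDeleteEventArgs e)
{
    // The DeleteCommand only retrieved the item; we delete it in code.
    int itemId = (int)grid.DataKeys[e.RowIndex].Value;
    SPList list = SPContext.Current.Web.Lists["Tasks"];
    list.GetItemById(itemId).Delete();

    e.Cancel = true; // the delete has been handled here
    grid.DataBind();
};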
I wanted to create a clone of an existing rte_lpm structure. It looks like there is no function available in DPDK to do this directly.
Docs: https://doc.dpdk.org/api/rte__lpm_8h.html
Is there any way to clone it manually?
Unfortunately, there is no such functionality. There is also no functionality to iterate over the rules. But there are a few options:
1. Keep the original routing table alongside the LPM structure, for example as a simple list. This list can also be used to show the routing table to a user. There are a few examples in DPDK; have a look at the l3fwd sample: https://doc.dpdk.org/guides/sample_app_ug/l3_forward.html
So instead of creating a deep copy of the LPM, we just create a new one and populate it with the rules from this list (see the sketch after this list).
2. Create two LPMs at the very beginning and add rules to both of them. This might slow down the add/delete rule path, but once you need a copy, you always have it. Another con is that you will have just one copy, i.e. if you need a few, this will not work.
So instead of creating a deep copy of the LPM, we just always keep two tables.
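A minimal sketch of option 1 in C: struct route_rule and lpm_clone() are names I made up, while the rte_lpm_create/rte_lpm_add/rte_lpm_free calls follow the API documented at the link above (recent releases use a uint32_t next hop):

#include <rte_lpm.h>

/* Shadow copy of one rule, maintained alongside the LPM on every
 * rte_lpm_add()/rte_lpm_delete(). */
struct route_rule {
    uint32_t ip;
    uint8_t  depth;
    uint32_t next_hop;
};

/* "Clone" an LPM by creating a fresh one and replaying the rule list. */
static struct rte_lpm *
lpm_clone(const char *name, int socket_id, const struct rte_lpm_config *cfg,
          const struct route_rule *rules, unsigned int n_rules)
{
    struct rte_lpm *copy = rte_lpm_create(name, socket_id, cfg);
    unsigned int i;

    if (copy == NULL)
        return NULL;

    for (i = 0; i < n_rules; i++) {
        if (rte_lpm_add(copy, rules[i].ip, rules[i].depth,
                        rules[i].next_hop) < 0) {
            rte_lpm_free(copy);
            return NULL;
        }
    }
    return copy;
}

The same list also drives option 2: on every route change you simply call rte_lpm_add/rte_lpm_delete on both tables.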
Is it possible (without writing custom SQL) to have EclipseLink trust me as to whether to perform an update or insert on a merge, rather than performing a select and then an update or insert? If so, how?
In my mind I'd like to use a transient flag and a custom if statement to determine whether the item is already in the database or not, and instruct EclipseLink to perform the query required. I understand Hibernate provides this as update() and save().
A few notable points:
I have a large number of objects that are being batch-merged, so persist() is not suitable for me (the objects also do not exist in the library except when they are passed in for the merge anyway).
Because so many objects are being merged, it is unlikely that there will be any cache hits, so EclipseLink cannot tell via the cache whether it has seen an object before.
Because of the number of objects going in (to a non-local database, in this case), the SELECTs are a problem, especially given that I can tell which will be required before the operation occurs.
I'd really rather not switch to Hibernate.
Thanks. Perhaps I am missing something obvious!
What you are looking for is called existence checking in EclipseLink, and it can be configured using the @ExistenceChecking annotation as described here:
http://eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_existencechecking.htm
Try specifying @ExistenceChecking(ExistenceType.CHECK_CACHE). While the docs state that CHECK_CACHE is the default, that is true only for native EclipseLink projects; JPA projects default to CHECK_DATABASE to conform to the JPA specification, which requires that merge calls pull in data from the database when necessary. Using CHECK_CACHE prevents EclipseLink from querying at all, so you can query yourself based on your own criteria. Existing objects will need to be in the cache, though; otherwise there is nothing to merge into, and EclipseLink will have to perform an insert.
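For example (a minimal sketch; the Item entity is hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.ExistenceChecking;
import org.eclipse.persistence.annotations.ExistenceType;

// With CHECK_CACHE, merge() decides insert vs. update from the shared
// cache alone and never issues an existence SELECT.
@Entity
@ExistenceChecking(ExistenceType.CHECK_CACHE)
public class Item {
    @Id
    private long id;
    // other fields, getters and setters elided
}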
Another option is to use a customizer to configure the DoesExistQuery used for each class. There you could override the checkEarlyReturn method to determine existence however you need.
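A sketch of such a customizer. Rather than overriding checkEarlyReturn, this variant uses the DoesExistQuery's built-in assume-existence policy, which likewise skips the existence SELECT; ExistenceCustomizer is a made-up name, attached to an entity with EclipseLink's @Customizer annotation:

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;

public class ExistenceCustomizer implements DescriptorCustomizer {
    @Override
    public void customize(ClassDescriptor descriptor) {
        // Treat every object passed to merge as already existing, so no
        // existence SELECT is issued and an UPDATE is generated.
        descriptor.getQueryManager().getDoesExistQuery()
                  .assumeExistenceForDoesExist();
    }
}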
The above options still use the JPA merge, and so still require the existing data to merge into, which means selects for any existing objects that are not in the cache. If all you are after is an update-all type statement that updates or inserts the object as is, without tracking only what has changed, have a look at native EclipseLink functionality such as the UnitOfWork API. Something like ((EntityManagerImpl)em.getDelegate()).getUnitOfWork().updateObject(entity), or using the UnitOfWork to execute your own UpdateObjectQuery, avoids the selects for existing objects, at the cost of no longer sending only the changes.
This fix for skipping the SELECT call before insert/update in EclipseLink JPA worked for me:
https://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_existencechecking.htm
My scenario is that I have an item in the cache with several existing tags, and I want to update the item using the PutAndUnlock method. Do I need to first retrieve any existing tags and pass them to PutAndUnlock in order to preserve them?
Related to that, when adding new tags, would I have to get the existing tags and append the new ones before passing them all to PutAndUnlock?
In Microsoft.ApplicationServer.Caching.MultiDirectoryHashtable.PreProcess(AMDHObjectNode oldObjectNode, ref AMDHObjectNode newObjectNode, ref MDHOperationInfo operationInfo), the switch (operationInfo.OperationType) case for MDHOperationType.PUT_AND_UNLOCK is where the MDHOperationInfo determines whether the existing item is null (or throws), or whether it should do the put.
PreProcess is called by MultiDirectoryHashtable.PutNodeInSlot(ref MDHOperationInfo operationInfo, MDHDirectoryNode dir, int slotIndex).
The return from PreProcess is true, so it should end up calling dir.PutNodeInSlot(slotIndex, newObjectNode), which holds what I believe is the final answer.
As far as I can tell, that just does a simple assignment into the MDHNode slot, without any comparison of the Tags. It assigns a new version using operationInfo.AssignNewInternalCacheItemVersion(), but I can't see it ever doing anything with the Tags.
If you look at usages for the NewCacheItem property on Microsoft.ApplicationServer.Caching.MDHOperationInfo, they don't seem to be touching the Tags.
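If that reading is right, tags are not preserved and you have to re-send them yourself. A hedged sketch, assuming the tag-taking overloads (which in AppFabric require a named region); cache, key, newValue, lockHandle, and the region name are placeholders:

using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

// Fetch the current item so its existing tags survive the PutAndUnlock.
DataCacheItem existing = cache.GetCacheItem(key, "myRegion");
var tags = new List<DataCacheTag>(existing.Tags);
tags.Add(new DataCacheTag("new-tag")); // plus any tags you want to add

cache.PutAndUnlock(key, newValue, lockHandle, tags, "myRegion");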
I'm using memcached and C++. I want to remove all keys from the server using the C++ API, preferably without a list of the keys.
The documentation mentions two functions: memcached_dump and memcached_delete. The first one returns the list of keys, and the second one removes them.
But here is a quote from the docs of the first function:
memcached_dump() is used to get a list of keys found in memcached(1)
servers. Because memcached(1) does not guarentee to dump all keys you
can not assume you have fetched all keys from the server.
So the first question is: is there any way to fetch ALL keys? And the second: how are these functions used at all? There aren't any examples in the documentation.
Thanks.
Sounds like you want memcached_flush?
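A minimal sketch with libmemcached (the host and port are assumptions); memcached_flush marks every item on all configured servers as expired in one call, so no key list is needed:

#include <iostream>
#include <libmemcached/memcached.h>

int main() {
    memcached_st *memc = memcached_create(NULL);
    memcached_server_add(memc, "localhost", 11211);

    // Invalidate every key on all configured servers at once;
    // an expiration of 0 means "flush immediately".
    memcached_return_t rc = memcached_flush(memc, 0);
    if (rc != MEMCACHED_SUCCESS)
        std::cerr << memcached_strerror(memc, rc) << std::endl;

    memcached_free(memc);
    return rc == MEMCACHED_SUCCESS ? 0 : 1;
}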
An elegant way to remove memcached keys would be to use the basic delete command.
But since we don't know which keys to delete, you have to keep a log of the data being set in memcached. You could dump this log, along with timestamps, into any data store. That way you can delete keys according to rules of your choosing, which gives you much better control over the delete operation.
Logging keys is a useful way of managing cache data when you need to be able to delete a bunch of keys. In addition, using a prefix provides a way of managing the cached data as a whole.
// Pseudocode: the cache_* helpers stand in for your own cache layer.
define('APPLICATION_PREFIX', 'myapp_');

function save($key, $data, $group) {
    cache_log_key($group, $key);                  // remember the key under its group
    cache_save(APPLICATION_PREFIX . $key, $data); // store with the app-wide prefix
}

function deleteGroup($group) {
    $loggedKeys = cache_get_log($group);          // every key logged for this group
    foreach ($loggedKeys as $key) {
        cache_delete(APPLICATION_PREFIX . $key);
    }
    cache_delete_log($group);                     // clear the log itself
}
Can't seem to rename an existing Verity collection in ColdFusion without deleting, recreating, and rebuilding the collection. Problem is, I have some very large collections I'd rather not delete and rebuild from scratch. Anyone have a handy trick for this conundrum?
I don't believe there is an easy way to rename a Verity collection, but you can always use
<cfcollection action="map" ...>
to assign an alias to an existing collection, provided you do not need to re-use the original name.
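For example, a minimal sketch (both collection names and the path are placeholders):

<cfcollection action="map"
              collection="newName"
              path="C:\ColdFusion\verity\collections\oldName">

Your cfsearch calls can then use collection="newName" while the files on disk keep the old name.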
For the Verity part (without considering ColdFusion), it's easy enough to detach a collection, rename it, and reattach it again:
rcadmin> indexdetach
Server Alias:YourDocserver
Index Alias:CollectionName
Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c
Save changes? [y|n]:y
<<Return>> SUCCESS
rcadmin> collpurge
Collection alias:CollectionName
Admin Alias:AdminServer
Save changes? [y|n]:y
<<Return>> SUCCESS
rcadmin> adminsignal
Admin Alias:AdminServer
Type of signal (Shutdown=2,WSRefresh=3,RestartAllServers=4):4
Save changes? [y|n]:y
<<Return>> SUCCESS
Now you can rename the collection directory, and re-attach. (If you are unsure of any of these values, check them with collget before you take it offline).
rcadmin> collset
Admin Alias:AdminServer
Collection Alias:NewCollectionName
Modify Type (Update=0, Insert=1):1
Path:
Gateway[(o)dbc|(n)otes|(e)xchange|(d)ocumentum|(f)ilesys|(w)eb|o(t)her]:
Style Alias:
Document Access (Public=0,Secure=1,Anonymous=2):
Query Parser [(s)imple|(b)oolPlus|(f)reeText|(o)ldFreeText|O(l)dSimple|O(t)her]:
Description:
Max. Search Time(msecs):
Save changes? [y|n]:y
rcadmin> indexattach
Index Alias:NewCollectionName
Index Type [(c)ollection,(t)ree,(p)arametric,(r)ecommendation]:c
Server Alias:YourDocserver
Modify Type (Update=0, Insert=1):1
Index State (offline=0,hidden=1,online=2):2
Threads (default=3):
Save changes? [y|n]:y
<<Return>> SUCCESS
It should now show up again in the 'hierarchyview'.
You can also use the "merge" utility to copy content from one collection to another, with a new name.
Looks like this is not possible. Deleting and re-creating the collection with the desired name appears to be the only approach available.