I have a requirement where I need to maintain a list in memory, e.g.:
List<Product>
Every user runs the application and adds, updates, or removes products from the list,
and all the changes should be reflected in that list.
I am trying to store the list in the ObjectCache,
but while the application creates products on the first run, the moment I run it a second time the list is no longer in the cache.
I need some help.
Following is the code:
public class ProductManagement
{
    List<Product> _productList;
    ObjectCache cache = MemoryCache.Default;

    public int CreateProduct(int id, string productName)
    {
        if (cache.Contains("Productlist"))
        {
            _productList = (List<Product>)cache.Get("Productlist");
        }
        else
        {
            _productList = new List<Product>();
        }

        Product pro = new Product();
        pro.ID = id;
        pro.ProductName = productName;
        _productList.Add(pro);

        // Set overwrites any existing entry; AddOrGetExisting would keep the old value.
        // InfiniteAbsoluteExpiration avoids the DateTimeOffset overflow DateTime.MaxValue can cause.
        cache.Set("Productlist", _productList, ObjectCache.InfiniteAbsoluteExpiration);
        return id;
    }

    public Product GetProductById(int id)
    {
        if (cache.Contains("Productlist"))
        {
            _productList = (List<Product>)cache.Get("Productlist");
        }
        else
        {
            _productList = new List<Product>();
        }

        var product = _productList.Single(i => i.ID == id);
        return product;
    }
}
How can I make sure that the list persists even when the application is not running? Is this possible?
Also, how do I keep the cache alive and retrieve the same cache the next time?
Many thanks for the help.
Memory comes in different kinds (http://en.wikipedia.org/wiki/Computer_memory). MemoryCache stores information in process memory, which exists only while the process exists.
If your requirement is to maintain the list in process memory, your task is already done; in fact you do not even need ObjectCache cache = MemoryCache.Default; you can just keep the list as a field of ProductManagement.
If you need to keep the list between application launches, you need to do additional work: write the list to a file when you close the application and read it back when you open the application.
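For example, a minimal sketch of that save/load step using System.Text.Json (the file name is an assumption; on older .NET Framework, Json.NET or XmlSerializer works the same way):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public static class ProductStore
{
    private const string FilePath = "products.json"; // hypothetical location

    // Call this on application shutdown.
    public static void Save(List<Product> products) =>
        File.WriteAllText(FilePath, JsonSerializer.Serialize(products));

    // Call this on application startup; returns an empty list on the first run.
    public static List<Product> Load() =>
        File.Exists(FilePath)
            ? JsonSerializer.Deserialize<List<Product>>(File.ReadAllText(FilePath)) ?? new List<Product>()
            : new List<Product>();
}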
We are using a Spark job with the emr-dynamodb-connector to load data from S3 files into DynamoDB.
https://github.com/awslabs/emr-dynamodb-connector
But if a document is already present in DynamoDB, my code replaces it.
Is there a way to avoid updating existing records (based on id) if they are present in DynamoDB? If an id is already present, I simply don't want to update it; just skip that id and write the rest of the records. The code I am using is:
JobConf ddbConf = new JobConf(spark.sparkContext().hadoopConfiguration());
ddbConf.set("dynamodb.output.tableName", tableName);
ddbConf.set("dynamodb.throughput.write.percent", "50");
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat");
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat");

JavaPairRDD<Text, DynamoDBItemWritable> ddbInsertFormattedRDD = finalDatasetToBeSaved.toJavaRDD().mapToPair(new PairFunction<Row, Text, DynamoDBItemWritable>() {
    @Override
    public Tuple2<Text, DynamoDBItemWritable> call(Row row) throws Exception {
        Map<String, AttributeValue> ddbMap = new HashMap<String, AttributeValue>();
        for (int i = 0; i < schemaDdb.length; i++) {
            Object value = row.get(i);
            if (value != null) {
                AttributeValue att = new AttributeValue();
                if (schemaDdb[i]._2.toString().equalsIgnoreCase("IntegerType")) {
                    att.setN(value.toString());
                } else {
                    att.setS(value.toString());
                }
                ddbMap.put((String) schemaDdb[i]._1, att);
            }
        }
        DynamoDBItemWritable item = new DynamoDBItemWritable();
        item.setItem(ddbMap);
        return new Tuple2<Text, DynamoDBItemWritable>(new Text(""), item);
    }
});
ddbInsertFormattedRDD.saveAsHadoopDataset(ddbConf);
By saying "Is there a way to avoid updating existing records (based on id) if they are already present", do you want to add another document instead of replacing/updating it?
If yes, I am afraid that won't be possible with the primary key, since it must be unique and is what distinguishes one item from another. You would need to make the key non-primary in order to do this.
If you want to ignore the insertion when the item already exists, you can use the condition expression attribute_not_exists(your-key) as defined in the documentation.
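For illustration, a minimal sketch of such a conditional write with the plain AWS SDK for Java. Note this is a per-item PutItem call outside the connector (DynamoDBOutputFormat itself does not expose condition expressions), and the key name id is an assumption:

import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

public class ConditionalWriter {
    private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

    // Writes the item only if no item with the same "id" key exists yet.
    public void putIfAbsent(String tableName, Map<String, AttributeValue> item) {
        try {
            client.putItem(new PutItemRequest()
                    .withTableName(tableName)
                    .withItem(item)
                    .withConditionExpression("attribute_not_exists(id)"));
        } catch (ConditionalCheckFailedException e) {
            // An item with this id already exists; skip it.
        }
    }
}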
I have a List of items to be inserted into a DynamoDB table. The size of the list may vary from 100 to 10k. I am looking for an optimised way to batch-write all the items using BatchWriteItemEnhancedRequest (Java SDK v2). What is the best way to add the items to the WriteBatch builder and then write the request using BatchWriteItemEnhancedRequest?
My current code:
WriteBatch.Builder<T> builder = WriteBatch.builder(itemType).mappedTableResource(getTable());
items.forEach(builder::addPutItem);
BatchWriteResult batchWriteResult =
        DynamoDB.enhancedClient().batchWriteItem(getBatchWriteItemEnhancedRequest(builder));
do {
    // Check for unprocessed items, which can happen if you exceed
    // provisioned throughput
    List<T> unprocessedItems = batchWriteResult.unprocessedPutItemsForTable(getTable());
    if (!unprocessedItems.isEmpty()) {
        unprocessedItems.forEach(builder::addPutItem);
        batchWriteResult = DynamoDB.enhancedClient().batchWriteItem(getBatchWriteItemEnhancedRequest(builder));
    }
} while (!batchWriteResult.unprocessedPutItemsForTable(getTable()).isEmpty());
I am looking for batching logic and a better way to execute the BatchWriteItemEnhancedRequest.
I came up with a utility class to deal with that. The batches-of-batches approach in v2 is overly complex for most use cases, especially since each request is still limited to 25 items overall.
import java.util.List;

import com.google.common.collect.Lists;

import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.model.BatchWriteItemEnhancedRequest;
import software.amazon.awssdk.enhanced.dynamodb.model.WriteBatch;

public class DynamoDbUtil {
    // AWS rejects the request if you try to include more than 25 items in a batch or sub-batch
    private static final int MAX_DYNAMODB_BATCH_SIZE = 25;

    /**
     * Writes the list of items to the specified DynamoDB table.
     */
    public static <T> void batchWrite(Class<T> itemType, List<T> items, DynamoDbEnhancedClient client, DynamoDbTable<T> table) {
        List<List<T>> chunksOfItems = Lists.partition(items, MAX_DYNAMODB_BATCH_SIZE);
        chunksOfItems.forEach(chunkOfItems -> {
            List<T> unprocessedItems = batchWriteImpl(itemType, chunkOfItems, client, table);
            while (!unprocessedItems.isEmpty()) {
                // some failed (provisioning problems, etc.), so write those again
                unprocessedItems = batchWriteImpl(itemType, unprocessedItems, client, table);
            }
        });
    }

    /**
     * Writes a single batch of (at most) 25 items to DynamoDB.
     * Note that the overall limit of items in a batch is 25, so you can't have nested batches
     * of 25 each that would exceed that overall limit.
     *
     * @return those items that couldn't be written due to provisioning issues, etc., but were otherwise valid
     */
    private static <T> List<T> batchWriteImpl(Class<T> itemType, List<T> chunkOfItems, DynamoDbEnhancedClient client, DynamoDbTable<T> table) {
        WriteBatch.Builder<T> subBatchBuilder = WriteBatch.builder(itemType).mappedTableResource(table);
        chunkOfItems.forEach(subBatchBuilder::addPutItem);
        BatchWriteItemEnhancedRequest.Builder overallBatchBuilder = BatchWriteItemEnhancedRequest.builder();
        overallBatchBuilder.addWriteBatch(subBatchBuilder.build());
        return client.batchWriteItem(overallBatchBuilder.build()).unprocessedPutItemsForTable(table);
    }
}
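Usage is then a one-liner. The bean class Product and table name "products" below are assumptions for illustration:

DynamoDbEnhancedClient client = DynamoDbEnhancedClient.create();
DynamoDbTable<Product> table = client.table("products", TableSchema.fromBean(Product.class));
DynamoDbUtil.batchWrite(Product.class, products, client, table);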
I have created a workflow in my Sitecore project, and on the final state (Approved) I want to auto-publish to a particular database.
Where should I make the changes to point to that database?
Thanks
In order to perform automatic publishing, your final state should contain a workflow action that does the job for you. You may take a look at the Sample Workflow (which comes by default with Sitecore), specifically its Approved state. It contains a child item, Auto Publish, that has two fields.
Type string:
Sitecore.Workflows.Simple.PublishAction, Sitecore.Kernel
This sets the class that actually does the publishing. You may inherit from that class and implement your own behavior, supply extra parameters, etc. I would advise you to take dotPeek or Reflector and look up this class's implementation so that you can adjust your own code.
Parameters:
deep=0
This controls whether child items are published recursively (0 = no, 1 = yes).
Update: let's take a look at the decompiled class behind the Sample Workflow's Auto Publish action:
public class PublishAction
{
    public void Process(WorkflowPipelineArgs args)
    {
        Item dataItem = args.DataItem;
        Item innerItem = args.ProcessorItem.InnerItem;
        Database[] targets = this.GetTargets(dataItem);
        // decompiler artifacts cleaned up: the last two arguments are deep and compareRevisions
        PublishManager.PublishItem(dataItem, targets, new Language[1]
        {
            dataItem.Language
        }, this.GetDeep(innerItem), false);
    }

    private bool GetDeep(Item actionItem)
    {
        return actionItem["deep"] == "1" || WebUtil.ParseUrlParameters(actionItem["parameters"])["deep"] == "1";
    }

    private Database[] GetTargets(Item item)
    {
        using (new SecurityDisabler())
        {
            Item obj = item.Database.Items["/sitecore/system/publishing targets"];
            if (obj != null)
            {
                ArrayList arrayList = new ArrayList();
                foreach (BaseItem baseItem in obj.Children)
                {
                    string name = baseItem["Target database"];
                    if (name.Length > 0)
                    {
                        Database database = Factory.GetDatabase(name, false);
                        if (database != null)
                            arrayList.Add((object)database);
                        else
                            Log.Warn("Unknown database in PublishAction: " + name, (object)this);
                    }
                }
                return arrayList.ToArray(typeof(Database)) as Database[];
            }
        }
        return new Database[0];
    }
}
The GetTargets() method in the default example above publishes to all targets specified under the /sitecore/system/publishing targets path. As I mentioned above, you may create your own class with your own implementation and reference it from the workflow action definition item.
You can look into the Sample workflow's Auto publish action. But in general you can create a Workflow Action with the type Sitecore.Workflows.Simple.PublishAction, Sitecore.Kernel and set its parameters to something like deep=1&related=1&targets=somedb,web&alllanguages=1.
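If you do go the custom-class route, here is a minimal sketch of an action that always publishes to one named database. The class name and the target name "somedb" are assumptions; the PublishManager.PublishItem overload mirrors the decompiled code above:

using Sitecore.Configuration;
using Sitecore.Data;
using Sitecore.Publishing;
using Sitecore.Workflows.Simple;

public class PublishToDatabaseAction
{
    public void Process(WorkflowPipelineArgs args)
    {
        var item = args.DataItem;
        // Point this at your particular target database instead of
        // enumerating /sitecore/system/publishing targets.
        Database target = Factory.GetDatabase("somedb");
        // deep = true publishes child items recursively; compareRevisions = false mirrors the default action.
        PublishManager.PublishItem(item, new[] { target }, new[] { item.Language }, true, false);
    }
}

Reference this class from the workflow action item's Type string field, just as the default Auto Publish action does.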
I am a complete code noob and need help writing a test class for a trigger in Salesforce. Any help would be greatly appreciated.
Here is the trigger:
trigger UpdateWonAccounts on Opportunity (before update) {
    Set<Id> accountIds = new Set<Id>();
    // Collect end-user account Ids from won Opportunities
    for (Opportunity o : Trigger.new) {
        if (o.IsWon && o.EndUserAccountName__c != null) {
            accountIds.add(o.EndUserAccountName__c);
        }
    }
    List<Account> lstAccount = new List<Account>();
    // Iterate and collect all the end-user account records
    for (Account a : [SELECT Id, Status__c FROM Account WHERE Id IN :accountIds]) {
        lstAccount.add(new Account(Id = a.Id, Status__c = true));
    }
    // If there are any accounts, update the records
    if (!lstAccount.isEmpty()) {
        update lstAccount;
    }
}
Read An Introduction to Apex Code Test Methods and How To Write A Trigger Test.
Basically, you want to create a new test method that updates (inserts, deletes, undeletes, etc., depending on your trigger conditions) a record or sObject.
It looks somewhat like this:
@isTest
private class myClass {
    static testMethod void myTest() {
        // Add test method logic to insert and update a new Opportunity here
    }
}
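For this particular trigger, a sketch of such a test might look like the following. The custom fields EndUserAccountName__c and Status__c come from your trigger; the 'Closed Won' stage name and any required fields beyond Name, StageName, and CloseDate depend on your org's configuration, so adjust as needed:

@isTest
private class UpdateWonAccountsTest {
    @isTest
    static void updatesAccountStatusWhenOpportunityIsWon() {
        // Arrange: an Account and an Opportunity pointing at it
        Account acc = new Account(Name = 'Test Account');
        insert acc;
        Opportunity opp = new Opportunity(
            Name = 'Test Opp',
            StageName = 'Prospecting',
            CloseDate = Date.today(),
            EndUserAccountName__c = acc.Id // custom lookup the trigger reads
        );
        insert opp;

        // Act: move the Opportunity to a won stage, firing the before-update trigger
        Test.startTest();
        opp.StageName = 'Closed Won';
        update opp;
        Test.stopTest();

        // Assert: the trigger should have flagged the Account
        Account updated = [SELECT Status__c FROM Account WHERE Id = :acc.Id];
        System.assertEquals(true, updated.Status__c);
    }
}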
In my application I have two views, a Main view and a Contacts view, and I have a list of saved contact IDs.
When my second view loads, it should access the default contacts database using my saved ID list and get each contact's information (title, etc.). It leaves, because one of the contacts has been deleted.
So, how can I check that a contact whose ID I hold still exists, before I try to access its fields and cause a leave?
CContactDatabase* contactsDb = CContactDatabase::OpenL();
CleanupStack::PushL(contactsDb);
for (TInt i = 0; i < CsIDs.Count(); i++) // looping through contacts
{
    TRAPD(err, contactsDb->ReadContactL(CsIDs[i])); // CsIDs is an array that holds IDs
    if (KErrNotFound == err)
    {
        CsIDs.Remove(i);
        i--; // stay at this index: the next element has shifted down
    }
}
CleanupStack::PopAndDestroy(1, contactsDb);
Thanks Abhijith for your help. I figured out the reason behind this issue: I shouldn't call ReadContactL directly under TRAPD inside the for loop. So I created a function that checks whether an ID is valid and called that under TRAPD; now my Contacts List view loads well, and invalid IDs are removed from my saved ID list.
The solution is to follow the Symbian C++ rules when dealing with "Leave":
void LoadContactsL()
{
    CContactDatabase* contactsDb = CContactDatabase::OpenL();
    CleanupStack::PushL(contactsDb);
    for (TInt i = 0; i < CsIDs.Count(); i++) // looping through contacts
    {
        TRAPD(err, CheckValidContactIdL(i)); // calling the ID-checking function under TRAPD
        if (KErrNotFound == err)
        {
            CsIDs.Remove(i);
            i--; // stay at this index: the next element has shifted down
        }
    }
    CleanupStack::PopAndDestroy(1, contactsDb);
}

// A function that checks for invalid IDs.
// Important Symbian rule: return "void" from functions that "Leave" under a TRAP harness.
void CheckValidContactIdL(TInt index)
{
    CPbkContactEngine* pbkEngine = CPbkContactEngine::NewL(&iEikonEnv->FsSession());
    CleanupStack::PushL(pbkEngine);
    // ReadContactLC leaves with KErrNotFound if the contact no longer exists.
    // Unlike OpenContactL, it does not lock the contact, and it pushes the
    // returned item onto the cleanup stack so it is not leaked here.
    CPbkContactItem* item = pbkEngine->ReadContactLC(CsIDs[index]);
    CleanupStack::PopAndDestroy(2); // item, pbkEngine
}