How to find and restore deleted items? - targetprocess

We converted a user story into a task, and all tasks of that user story were automatically deleted.
How can we restore these tasks?
I couldn't find anything on this subject in the documentation.

At the time of writing this answer, there is no way to restore items from the UI.
However, the Targetprocess team has an idea to implement it in the future:
Undo / Undelete action / Ability to retrieve / restore deleted entities / Improved History Audit / Recycle bin
For now you can contact support@targetprocess.com, and they are able to restore / fix / reassign the deleted tickets for you.

Related

Postman save and lock (prevent editing) request / collection

In Postman, once you have created a request or collection and have fine-tuned it, is there any way to lock it and make it read-only so that it can't be accidentally altered?
Obviously I would need something to toggle it back to editable again!
Thanks in advance
I don't think there is an option to make a collection read-only for the admin (the creator of the collection). A few ways of avoiding unnecessary changes are:
If you can edit the rights of other users within the workspace, make it view-only for selected users inside the workspace.
Create a fork of the collection so that you can revert back.
Create a copy of the collection.
Download the collection as a JSON file.
Personally I prefer downloading the collection as JSON, as this keeps the workspace clean and tidy.
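If you go the JSON-file route, the export can also be scripted against the Postman API instead of clicked through the UI. A minimal sketch in Python, assuming the requests library; the API key and collection UID below are placeholders you would replace with your own:

import requests

POSTMAN_API = "https://api.getpostman.com"
API_KEY = "PMAK-xxxxxxxx"        # placeholder: your Postman API key
COLLECTION_UID = "1234567-abcd"  # placeholder: the collection's UID

# Fetch the full collection definition from the Postman API
resp = requests.get(
    f"{POSTMAN_API}/collections/{COLLECTION_UID}",
    headers={"X-Api-Key": API_KEY},
)
resp.raise_for_status()

# Write the export to disk as a point-in-time backup
with open("collection-backup.json", "w") as f:
    f.write(resp.text)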
Otherwise, I prefer creating an in-progress workspace and a final workspace, sharing the completed collection to the final workspace, and deleting it from the in-progress workspace every time I finish something.
If changes are required, I work in the in-progress workspace by creating a copy.

aws delete reports group history

I'm attempting to use the AWS CLI to delete the history of a codebuild reports-group. (Context: It was muddied when we were initially setting up these reports.)
I notice that it's possible to just delete the entire reports-group, but I only want to clear the history. Is there an easy way to delete the history without destroying the entire reports-group?
The man page gives options for deleting an individual report, but there are possibly 500+ of them, and I have neither a way nor the intention of running that command that many times.
My man page diving so far has landed me here:
aws codebuild delete-reports help
So far I have also found batch-delete-builds, but there's no batch-delete-reports that I can tell. Should I just delete the reports-group or is there a command that just isn't named as expected?
There is no such API; you can only delete the whole report group: https://docs.aws.amazon.com/cli/latest/reference/codebuild/delete-report-group.html
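That said, if deleting the whole report group is not an option, the per-report deletion is easy to script rather than run 500+ times by hand. A minimal sketch using boto3; the report group ARN is a placeholder:

import boto3

codebuild = boto3.client("codebuild")
REPORT_GROUP_ARN = "arn:aws:codebuild:us-east-1:123456789012:report-group/example"  # placeholder

# Page through every report in the group and delete them one by one
next_token = None
while True:
    kwargs = {"reportGroupArn": REPORT_GROUP_ARN}
    if next_token:
        kwargs["nextToken"] = next_token
    page = codebuild.list_reports_for_report_group(**kwargs)
    for report_arn in page["reports"]:
        codebuild.delete_report(arn=report_arn)
    next_token = page.get("nextToken")
    if not next_token:
        break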

Refresh Sitecore index to include CDs

I've written some code to refresh an index when an item is programmatically added to Sitecore. The live system is made up of 1 CM and 2 CD servers, so I need my code to also trigger the index refresh on the CD servers (unfortunately my dev machine is a single box, so I can't test this fully). I've looked online but can't find anything about this in the context of triggering a re-index programmatically.
So the question is: do I need to write code for this, or does Sitecore do it by default? And if I do need to write code, does anyone have ideas on how to go about it? My current code is below.
// Get the web index, fetch the newly added item from the web database,
// and refresh the index entry for that item on the local server
ISearchIndex index = ContentSearchManager.GetIndex("GeorgeDrexler_web_index");
Sitecore.Data.Database database = Sitecore.Configuration.Factory.GetDatabase("web");
Item item = database.GetItem("/sitecore/content/GeorgeDrexler/Global/Applications");
index.Refresh(new SitecoreIndexableItem(item));
My config for the index has the remoteRebuild strategy enabled:
<strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/remoteRebuild" />
As @Hishaam Namooya pointed out in his comment, publishing from master to web should trigger the web index updates out of the box, unless you've disabled something in the configurations.
Note that items won't publish unless they are in a final workflow state, so if you want a completely automated process that creates the item, updates the local index, and then immediately updates the web index, you will also need to update the workflow state to your final approved state and then trigger a publish of the item.

How long does it take for AWS S3 to save and load an item?

S3 FAQ mentions that "Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES." However, I don't know how long it takes to get eventual consistency. I tried to search for this but couldn't find an answer in S3 documentation.
Situation:
We have a website that consists of 7 steps. When the user clicks save in each step, we want to save a JSON document (containing the information of all 7 steps) to Amazon S3. Currently we plan to:
Create a single S3 bucket to store all json documents.
When the user saves step 1, we create a new object in S3.
When the user saves steps 2-7, we overwrite the existing object.
After the user saves a step and refreshes the page, he should be able to see the information he just saved, i.e. we want to make sure that we always read after write.
The full json document (all 7 steps completed) is around 20 KB.
After the user clicks the save button, we can freeze the page for some time so they cannot make other changes until the save is finished.
Question:
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
Is there a function to calculate save/load time based on item size?
Is the save/load time going to be different if I choose another S3 region? If so, which is the best region for Seattle?
I wanted to add to @error2007s's answer.
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
It's not only that you will not find the exact time anywhere - there's actually no such thing as an exact time. That's just what "eventual consistency" is all about: consistency will be achieved eventually. You can't know when.
If somebody gave you an upper bound for how long a system would take to achieve consistency, then you wouldn't call it "eventually consistent" anymore. It would be "consistent within X amount of time".
The problem now becomes, "How do I deal with eventual consistency?" (instead of trying to "beat it")
To really find the answer to that question, you need to first understand what kind of consistency you truly need, and how exactly the eventual consistency of S3 could affect your workflow.
Based on your description, I understand that you would write a total of 7 times to S3, once for each step you have. For the first write, as you correctly quoted from the FAQs, you get strong consistency for any reads after that. For all the subsequent writes (which are really "replacing" the original object), you might observe eventual consistency - that is, if you try to read the overwritten object, you might get the most recent version, or you might get an older version. This is what is referred to as "eventual consistency" on S3 in this scenario.
A few alternatives for you to consider:
don't write to S3 on every single step; instead, keep the data for each step on the client side, and then only write 1 single object to S3 after the 7th step. This way, there's only 1 write, no "overwrites", so no "eventual consistency". This might or might not be possible for your specific scenario; you need to evaluate that.
alternatively, write to S3 objects with different names for each step. E.g., something like: after step 1, save that to bruno-preferences-step-1.json; then, after step 2, save the results to bruno-preferences-step-2.json; and so on, then save the final preferences file to bruno-preferences.json, or maybe even bruno-preferences-step-7.json, giving yourself the flexibility to add more steps in the future. Note that the idea here is to avoid the overwrites that could cause eventual consistency issues: using this approach, you only write new objects, you never overwrite them (see the sketch after this list).
finally, you might want to consider Amazon DynamoDB. It's a NoSQL database; you can securely connect to it directly from the browser or from your server. It provides you with replication, automatic scaling, and load distribution (just like S3). You also have the option to tell DynamoDB that you want to perform strongly consistent reads (the default is eventually consistent reads; you have to change a parameter to get strongly consistent reads). DynamoDB is typically used for "small" records, and 20 kB is definitely within the range -- the maximum size of a record is 400 kB as of today. You might want to check this out: DynamoDB FAQs: What is the consistency model of Amazon DynamoDB?
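To make the second alternative concrete, here is a minimal sketch using boto3; the bucket name and key pattern are placeholder assumptions, not anything prescribed by S3:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-preferences-bucket"  # placeholder bucket name

def save_step(user_id, step, data):
    # Each step writes a brand-new key, so every PUT is a "new object"
    # with read-after-write consistency -- nothing is ever overwritten.
    key = f"{user_id}-preferences-step-{step}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(data))

def load_step(user_id, step):
    # Read back the object for the step the user last saved
    key = f"{user_id}-preferences-step-{step}.json"
    resp = s3.get_object(Bucket=BUCKET, Key=key)
    return json.loads(resp["Body"].read())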
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
You will not find the exact time anywhere; if you ask AWS, they will give you approximate timings. Your file is 20 KB, so based on my experience with S3, the time will be more or less 60-90 seconds.
Is there a function to calculate save/load time based on item size?
No, there is no function with which you can calculate this.
Is the save/load time gonna be different if I choose another S3 region? If so which is the best region for Seattle?
For Seattle, US West (Oregon) will work with no problem.
You can also take a look at this experiment for comparison https://github.com/andrewgaul/are-we-consistent-yet

How to disable/deactivate a data pipeline?

I have just created a data pipeline and activated it. But while running, it showed WAITING_ON_DEPENDENCIES for my EC2Resource. I suspect that this might be due to some permissions issue.
So, I now want to edit the pipeline. But when I open the pipeline, it says "Pipeline is active." and many of the fields are no longer editable. Is there any way to deactivate and/or edit the pipeline?
Regards.
I encountered the same problem.
The only way that I was able to proceed was to clone the pipeline into a new one and then edit the new one. All the fields are editable there. The old one I deleted.
The limitations on editing an active pipeline are here.
Before you activate a pipeline, you can make any changes to it. After you activate a pipeline, you can edit the pipeline with the following restrictions. The changes you make apply to new runs of the pipeline objects after you save them and then activate the pipeline again.
You can't remove an object
You can't change the schedule period of an existing object
You can't add, delete, or modify reference fields in an existing object
You can't reference an existing object in an output field of a new object
You can't change the scheduled start date of an object (instead, activate the pipeline with a specific date and time)
Basically, you can't change the schedule or the execution graph. That still leaves a lot of non-"ref" properties you can edit (S3 paths, etc.) Otherwise, you will need to use the Clone-and-Edit trick.
You can also deactivate a pipeline in order to pause the execution schedule, either to edit the pipeline or for something like a DB maintenance window.
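For completeness, deactivation doesn't have to happen in the console. A minimal sketch using boto3, with a placeholder pipeline ID:

import boto3

datapipeline = boto3.client("datapipeline")
PIPELINE_ID = "df-EXAMPLE123456"  # placeholder: your pipeline ID

# Deactivate the pipeline; cancelActive=True cancels objects that are
# currently running instead of waiting for them to finish.
datapipeline.deactivate_pipeline(pipelineId=PIPELINE_ID, cancelActive=True)

# ... edit the pipeline (console or put_pipeline_definition) ...

# Reactivate so that new runs pick up the saved changes
datapipeline.activate_pipeline(pipelineId=PIPELINE_ID)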