Limiting execution time for a called CFC - ColdFusion

We have a ColdFusion validation routine that checks MX records to soft-validate email addresses. We occasionally get long responses from the DNS servers and need to restrict how long we wait for a response before giving up.
The called component does the actual lookup; the code reproduced below is part of a larger validation script.
What I want to do is limit how long I will wait for the lookup component to run, and then handle the error when it exceeds that time. This code appears correct to me, but it just blows right past the timeout (I have a sleep in the component to mimic a slow response).
try {
    requesttimeout="10";
    MyNewArray[1] = right(MyEmail, RightChars);
    mycnt = new common.functions.mxLookup(MyNewArray);
    Caller.EmailReturnCode = iif(mycnt gt 0, '"0"', '"2"');
}
catch ("coldfusion.runtime.RequestTimedOutException" a) {
    Caller.EmailReturnCode = "2";
}
catch (any e) {
    Caller.EmailReturnCode = "2";
}

thread action="run" name="doLookup" {
    variables.mycnt = new common.functions.mxLookup(MyNewArray);
}
thread action="join" name="doLookup" timeout="20000";
if (doLookup.Status is "RUNNING") {
    thread action="terminate" name="doLookup";
    throw("MX Lookup of #MyEmail# timed out", "toException", "MX Lookup exceeded allowed time of 20 seconds at #timeFormat(now(),'HH:mm:ss')#");
}
Caller.EmailReturnCode = iif(mycnt gt 0, '"0"', '"2"');
Thanks to Alex. This solution works perfectly.
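For completeness, a minimal sketch combining the thread approach with the original try/catch, so the thrown timeout is handled rather than left to bubble up (same component and variables as above; the custom catch type follows the quoted-type syntax used in the question):
try {
    thread action="run" name="doLookup" {
        variables.mycnt = new common.functions.mxLookup(MyNewArray);
    }
    thread action="join" name="doLookup" timeout="20000";
    if (doLookup.Status is "RUNNING") {
        // lookup is still running after 20 seconds: kill it and bail out
        thread action="terminate" name="doLookup";
        throw("MX Lookup of #MyEmail# timed out", "toException");
    }
    Caller.EmailReturnCode = iif(mycnt gt 0, '"0"', '"2"');
}
catch ("toException" t) {
    // DNS lookup exceeded the allowed time; treat as a soft failure
    Caller.EmailReturnCode = "2";
}
catch (any e) {
    Caller.EmailReturnCode = "2";
}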

Attempting to put a delay in a loop in Postman

I'm trying to add a 1-second delay using setTimeout(() => {}, 1000) in the Pre-request Script of a Postman POST call.
var moment = require('moment');
var tap1TimeStr = pm.environment.get("now");
var tap1TimeMoment = moment(tap1TimeStr, "YYYY-MM-DDTHH:mm:ss");
var expTap2Time = tap1TimeMoment.add(2, 'minutes').format("YYYY-MM-DDTHH:mm:ss");
console.log("Tap 2 timestamp should be: " + expTap2Time);
var timestamp;
var timecheck = false;
while (!timecheck) {
    setTimeout(() => {}, 1000);
    timecheck = moment.utc().isSame(expTap2Time);
    console.log("timecheck: " + timecheck);
    timestamp = moment.utc().format("YYYY-MM-DDTHH:mm:ss");
}
console.log("Timestamp is now: " + timestamp);
pm.environment.set("now", timestamp);
But it doesn't seem to work: I can see the console.log line being printed far more frequently than once per second. The exercise here is to send the "Tap 2" POST exactly 2 minutes after the first POST (tracked by the 'now' variable). Also, Postman seems to take a fair bit of time before it even starts executing this particular script.
Edit: The main requirement here is to send the "Tap 2" POST request exactly 2 minutes AFTER the "Tap 1" POST request. How best to implement that, especially since setTimeout() is non-blocking and thus probably can't be used in a loop?
Anyone have any ideas?
setTimeout() takes a callback function which is executed after the specified delay, so only what happens inside the callback happens after that delay.
setTimeout(() => {
    console.log("This will be executed after 1 second");
}, 1000);
console.log("This will immediately be executed");
setTimeout() is asynchronous and non-blocking, so JavaScript calls setTimeout() but does not wait one second for it to return; it immediately moves on to the next instruction. Only after that second has passed is the callback passed to setTimeout() scheduled and executed. Have a look at this YouTube video for a good explanation of what's going on.
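Since pre-request scripts offer no blocking sleep, one way to meet the 2-minute requirement is a crude busy-wait until the target time. A minimal sketch, reusing the moment instance and the expTap2Time computed above (treating the stored timestamp as UTC, as the original loop does):
// Busy-wait until the target time; this blocks the pre-request script.
var target = moment.utc(expTap2Time, "YYYY-MM-DDTHH:mm:ss");
while (moment.utc().isBefore(target)) {
    // intentionally empty: spin until the target time is reached
}
var timestamp = moment.utc().format("YYYY-MM-DDTHH:mm:ss");
console.log("Timestamp is now: " + timestamp);
pm.environment.set("now", timestamp);
Unlike the isSame() check, isBefore() guarantees the loop terminates even if the clock never lands exactly on the target second.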

Round shipping cost in Shopware 5 based on condition

I need to round the shipping costs, from the moment an article is added to the cart until the order is completed, whenever a certain condition is active. I have already tried subscribing to the following events:
'sOrder::getOrderById::after' => 'afterGetOrder',
'Shopware_Modules_Order_SaveOrder_FilterParams' => 'filterOrderParams'
with the following implementation:
public function filterOrderParams(\Enlight_Event_EventArgs $args)
{
    $subject = $args->getSubject(); // the sOrder instance firing the event
    $return = $args->getReturn();
    $return['invoice_shipping'] = ceil($return['invoice_shipping']);
    $return['invoice_shipping_net'] = ceil($return['invoice_shipping_net']);
    $subject->sShippingcostsNumeric = ceil($subject->sShippingcostsNumeric);
    $subject->sShippingcostsNumericNet = ceil($subject->sShippingcostsNumericNet);
    $args->setReturn($return);
}
public function afterGetOrder(\Enlight_Hook_HookArgs $args)
{
    if (!$this->checkIfActive()) {
        return;
    }
    $return = $args->getReturn();
    $return['invoice_shipping'] = ceil($return['invoice_shipping']);
    $return['invoice_shipping_net'] = ceil($return['invoice_shipping_net']);
    $args->setReturn($return);
}
But it does not seem to be working. I also tried
Shopware()->Modules()->Order()->sShippingcosts = ceil($sBasket['sShippingcosts']); but nothing changed. I know there is an event on order save that I can hook into to change the parameters, but that is too late: the cost in the cart and checkout would not be rounded up until the order is completed.
So, is there a way to round shipping costs in Shopware 5?
EDIT: Ideally the changes would also be stored directly in the database, so any further manipulation of the order shows/updates the correct rounded values.
For anyone looking: I ended up changing the template variable in a Frontend_Checkout event subscriber and letting Shopware_Modules_Order_SaveOrder_FilterParams change the value in the database. Hope it helps someone; it took me a while to figure this out. The sOrder::getOrderById::after hook was not needed at all, and doesn't seem to work either.
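For reference, a minimal sketch of the template-variable half of that approach (my reconstruction, not the poster's exact code; Enlight_Controller_Action_PostDispatchSecure_Frontend_Checkout is the standard Shopware 5 checkout post-dispatch event, and checkIfActive() stands in for the condition):
public static function getSubscribedEvents()
{
    return [
        'Enlight_Controller_Action_PostDispatchSecure_Frontend_Checkout' => 'onCheckout',
        'Shopware_Modules_Order_SaveOrder_FilterParams' => 'filterOrderParams',
    ];
}

public function onCheckout(\Enlight_Event_EventArgs $args)
{
    if (!$this->checkIfActive()) {
        return;
    }
    $view = $args->getSubject()->View();
    $basket = $view->getAssign('sBasket');
    if (!empty($basket['sShippingcosts'])) {
        // round the shipping cost shown in the cart and checkout pages
        $basket['sShippingcosts'] = ceil($basket['sShippingcosts']);
        $basket['sShippingcostsNet'] = ceil($basket['sShippingcostsNet']);
        $view->assign('sBasket', $basket);
    }
}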

Strange behavior trying to perform data migration in DynamoDB

We're trying to make a simple data migration in one of our tables in DDB.
Basically we're adding a new field and we need to backfill all the Documents in one of our tables.
This table has around 700K documents.
The process we follow is quite simple:
Manually trigger a Lambda that scans the table and, for each document, updates it, continuing until it is close to the 15-minute limit; at that point it
puts the LastEvaluatedKey into SQS to trigger a new Lambda execution that uses that key to continue scanning.
The process goes on spawning Lambdas sequentially as needed until there are no more documents.
The problem we found is as follows...
Once the migration is done, we noticed that the number of documents updated is far lower than the total number of documents in the table. It's a random value, not always the same, ranging from tens of thousands to hundreds of thousands (the worst case we've seen was a 300K difference).
This is obviously a problem, because if we scan the documents again it is clear some documents were not migrated. We thought at first this was because of clients updating/inserting new documents, but the throughput on that table is not large enough to justify such a big difference, so it is not that new documents are being added while we run the migration.
We tried a second approach: scanning first, because if we only scan, the number of scanned documents equals the count of documents in the table. So we dumped the IDs of the documents into another table, then scanned that table and updated the items again. Funny thing: the same problem happens with this new table of just IDs; there are far fewer than the count in the table we want to update, so we're back to square one.
We thought about using parallel scans, but I don't see how that would help, and I don't want to compromise the table's read capacity while running the migration.
Can anybody with experience in DynamoDB data migrations shed some light here? We're not able to figure out what we're doing wrong.
UPDATE: Sharing the function that is triggered and actually scans and updates
@Override
public Map<String, AttributeValue> migrateDocuments(String lastEvaluatedKey, String typeKey) {
    LOG.info("Migrate Documents started {}", lastEvaluatedKey);
    int noOfDocumentsMigrated = 0;
    Map<String, AttributeValue> docLastEvaluatedKey = null;
    DynamoDBMapperConfig documentConfig = new DynamoDBMapperConfig.TableNameOverride("KnowledgeDocumentMigration").config();
    if (lastEvaluatedKey != null) {
        docLastEvaluatedKey = new HashMap<String, AttributeValue>();
        docLastEvaluatedKey.put("base_id", new AttributeValue().withS(lastEvaluatedKey));
        docLastEvaluatedKey.put("type_key", new AttributeValue().withS(typeKey));
    }
    // stop ~14 minutes in, before hitting the 15-minute Lambda limit
    Instant endTime = Instant.now().plusSeconds(840);
    LOG.info("Migrate Documents endTime:{}", endTime);
    try {
        do {
            ScanResultPage<Document> docScanList = documentDao.scanDocuments(docLastEvaluatedKey, documentConfig);
            docLastEvaluatedKey = docScanList.getLastEvaluatedKey();
            LOG.info("Migrate Docs- docScanList Size: {}", docScanList.getScannedCount());
            LOG.info("lastEvaluatedKey:{}", docLastEvaluatedKey);
            // split the page into chunks of 25, the TransactWriteItems maximum
            final int chunkSize = 25;
            final AtomicInteger counter = new AtomicInteger();
            final Collection<List<Document>> docChunkList = docScanList.getResults().stream()
                    .collect(Collectors.groupingBy(it -> counter.getAndIncrement() / chunkSize)).values();
            List<List<Document>> docListSplit = docChunkList.stream().collect(Collectors.toList());
            docListSplit.forEach(docList -> {
                TransactionWriteRequest documentTx = new TransactionWriteRequest();
                for (Document document : docList) {
                    LOG.info("Migrate Documents- docList Size: {}", docList.size());
                    LOG.info("Migrate Documents- Doc Id: {}", document.getId());
                    if (!StringUtils.isNullOrEmpty(document.getType()) && document.getType().equalsIgnoreCase("Faq")) {
                        if (docIdsList.contains(document.getId())) {
                            LOG.info("this doc already migrated:{}", document);
                        } else {
                            docIdsList.add(document.getId());
                        }
                        if (!StringUtils.isNullOrEmpty(document.getFaq().getQuestion())) {
                            LOG.info("doc FAQ {}", document.getFaq().getQuestion());
                            document.setTitle(document.getFaq().getQuestion());
                            document.setTitleSearch(document.getFaq().getQuestion().toLowerCase());
                            documentTx.addUpdate(document);
                        }
                    } else if (StringUtils.isNullOrEmpty(document.getType())) {
                        if (!StringUtils.isNullOrEmpty(document.getTitle())) {
                            if (!StringUtils.isNullOrEmpty(document.getQuestion())) {
                                document.setTitle(document.getQuestion());
                                document.setQuestion(null);
                            }
                            LOG.info("title {}", document.getTitle());
                            document.setTitleSearch(document.getTitle().toLowerCase());
                            documentTx.addUpdate(document);
                        }
                    }
                }
                if (documentTx.getTransactionWriteOperations() != null
                        && !documentTx.getTransactionWriteOperations().isEmpty() && docList.size() > 0) {
                    LOG.info("DocumentTx size {}", documentTx.getTransactionWriteOperations().size());
                    documentDao.executeTransaction(documentTx, null);
                }
            });
            // note: this counts scanned documents, not documents actually updated
            noOfDocumentsMigrated = noOfDocumentsMigrated + docScanList.getScannedCount();
        } while (docLastEvaluatedKey != null && (endTime.compareTo(Instant.now()) > 0));
        LOG.info("Migrate Documents execution finished at:{}", Instant.now());
        if (docLastEvaluatedKey != null && docLastEvaluatedKey.get("base_id") != null)
            sqsAdapter.get().sendMessage(docLastEvaluatedKey.get("base_id").toString(), docLastEvaluatedKey.get("type_key").toString(),
                    MIGRATE, MIGRATE_DOCUMENT_QUEUE_NAME);
        LOG.info("No Of Documents Migrated:{}", noOfDocumentsMigrated);
    } catch (Exception e) {
        LOG.error("Exception", e);
    }
    return docLastEvaluatedKey;
}
Note: I would have added this speculation as a comment, but my reputation does not allow it.
I think the issue you're seeing here could be caused by the Scans not being ordered. As long as your Scan is executed in a single Lambda, I'd expect you to see everything handled fine. However, as soon as you hit the Lambda runtime limit and start a new one, your Scan essentially gets a new "ScanID", which might come in a different order. Based on the different order, you're now skipping a certain set of entries.
I haven't tried to replicate this behavior, and sadly there is no clear indication in the AWS documentation whether a Scan request can be resumed in a new session/application.
I think @Charles' suggestion might help you in this case, as you can simply run the entire migration in one process.
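In case it helps, a minimal single-process sketch of that suggestion (AWS SDK for Java v1, low-level client; the table name is illustrative), resuming each page with ExclusiveStartKey until the scan is exhausted:
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

import java.util.Map;

public class SingleProcessMigration {
    public static void main(String[] args) {
        AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
        Map<String, AttributeValue> lastKey = null;
        do {
            ScanResult result = ddb.scan(new ScanRequest()
                    .withTableName("KnowledgeDocument")  // illustrative table name
                    .withExclusiveStartKey(lastKey));    // null on the first page
            for (Map<String, AttributeValue> item : result.getItems()) {
                // apply the same update logic as in migrateDocuments() here
            }
            lastKey = result.getLastEvaluatedKey();      // null once the scan is exhausted
        } while (lastKey != null);
    }
}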

Spring JPA @QueryHint ignored for lock timeout?

I have a query in a Spring Data JpaRepository like so:
@Lock(value = LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "70000")})
Collection<AnyClass> findBy ...
However, in my test, if I run the transaction (which uses this query as its first query) in two concurrent threads, I get an SQL lock timeout (SQL Error: 50200, SQLState: HYT00) after one second, which is the default for the in-memory H2 database.
If the transaction is faster than one second, everything works as expected.
A possible workaround for this issue is a simple retry on the enclosing method:
@Retryable(backoff = @Backoff(value = 500, random = true))
void getSomethingLocked() {
    findById(id);
}
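If H2 is indeed ignoring the per-query hint, another option for tests might be to raise the lock timeout at the engine level instead. A sketch, assuming a Spring Boot in-memory H2 datasource (LOCK_TIMEOUT is an H2 database setting, in milliseconds):
# src/test/resources/application.properties (illustrative)
spring.datasource.url=jdbc:h2:mem:testdb;LOCK_TIMEOUT=70000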

Meteor regex find() far slower than in MongoDB console

I've been researching A LOT for the past 2 weeks and can't pinpoint the exact reason my Meteor app returns results so slowly.
Currently I have only a single collection in my Mongo database, with around 200,000 documents. To search it I use Meteor subscriptions based on a given keyword. Here is my query:
db.collection.find({$or: [
    {title: {$regex: ".*java.*", $options: "i"}},
    {company: {$regex: ".*java.*", $options: "i"}}
]})
When I run above query in mongo shell, the results are returned instantly. But when I use it in Meteor client, the results take almost 40 seconds to return from server. Here is my meteor client code:
Template.testing.onCreated(function () {
    var instance = this;
    // initialize the reactive variables
    instance.loaded = new ReactiveVar(0);
    instance.limit = new ReactiveVar(20);
    instance.autorun(function () {
        // get the limit
        var limit = instance.limit.get();
        var keyword = Router.current().params.query.k;
        var searchByLocation = Router.current().params.query.l;
        var startDate = Session.get("startDate");
        var endDate = Session.get("endDate");
        // subscribe to the posts publication
        var subscription = instance.subscribe('sub_testing', limit, keyword, searchByLocation, startDate, endDate);
        // if subscription is ready, set limit to newLimit
        $('#searchbutton').val('Searching');
        if (subscription.ready()) {
            $('#searchbutton').val('Search');
            instance.loaded.set(limit);
        } else {
            console.log("> Subscription is not ready yet. \n\n");
        }
    });
    instance.testing = function () {
        return Collection.find({}, {sort: {id: -1}, limit: instance.loaded.get()});
    };
});
And here is my meteor server code:
Meteor.publish('sub_testing', function (limit, keyword, searchByLocation, startDate, endDate) {
    Meteor._sleepForMs(200);
    var pat = ".*" + keyword + ".*";
    var pat2 = ".*" + searchByLocation + ".*";
    return Jobstesting.find({
        $or: [
            {title: {$regex: pat, $options: "i"}},
            {company: {$regex: pat, $options: "i"}},
            {description: {$regex: pat, $options: "i"}},
            {location: {$regex: pat2, $options: "i"}},
            {country: {$regex: pat2, $options: "i"}}
        ],
        $and: [{date_posted: {$gte: endDate, $lt: startDate}}]
    }, {sort: {date_posted: -1}, limit: limit, skip: limit});
});
One point I'd also like to mention: I use "Load More" pagination, and by default the limit parameter gets 20 records. On each "Load More" click I increment the limit by 20, so on the first click it is 20, on the second 40, and so on...
Any help with where I'm going wrong would be appreciated.
But when I use it in Meteor client, the results take almost 40 seconds to return from server.
You may be misunderstanding how Meteor is accessing your data.
Queries run on the client are processed on the client.
Meteor.publish - Makes data available on the server
Meteor.subscribe - Downloads that data from the server to the client.
Collection.find - Looks through the data on the client.
If you think the Meteor side is slow, you should time it server side (print time before/after) and file a bug.
If you're implementing a pager, you might try a Meteor method instead, or a pager package.
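A minimal sketch of the Meteor-method route (collection and field names as in the question; the method name searchJobs is illustrative, and the regex fields are trimmed to title/company for brevity):
// server: run the search server side and return plain results
Meteor.methods({
    searchJobs: function (keyword, limit) {
        check(keyword, String);
        check(limit, Number);
        var pat = new RegExp(keyword, "i");
        return Jobstesting.find(
            {$or: [{title: pat}, {company: pat}]},
            {sort: {date_posted: -1}, limit: limit}
        ).fetch();
    }
});

// client: fetch results without a subscription
Meteor.call('searchJobs', 'java', 20, function (err, results) {
    if (!err) {
        console.log('Got ' + results.length + ' results');
    }
});
This avoids shipping the whole matching set through the publish/subscribe merge box, which is often the slow part for large result sets.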