Change header values each time a user is injected in a Gatling scenario - amazon-web-services

I want to change the header values every time a user is injected in Gatling, because my code fails while running with the error: "Signature expired: 20200124T170359Z is now earlier than 20200124T170552Z (20200124T172052Z - 15 min.)".
My code is:
val signer: AwsSigner = AwsSigner(AwsCredentialsProviderWithSession, region, Service, clock)
val signedHeaders = signer.getSignedHeaders(Uri, PostMethod, queryParams, headers, emptyPayload)

val scen = scenario("Home page").repeat(100) {
  exec(
    http("Custom headers")
      .get("Url" + "?Action=SendMessage&MessageBody=" + queryEnc)
      .headers(signedHeaders)
  )
}

setUp(
  sendLoadToAws.scen.inject(rampUsersPerSec(10) to 15 during (60))
)

You need to use a feeder; there is documentation with more information at https://gatling.io/docs/current/session/feeder/ .
So, first create this feeder:
val signedHeaders = {
  val signedHeaders = signer.getSignedHeaders(Uri, PostMethod, queryParams, headers, emptyPayload)
  Iterator.continually(Map(
    "header_1" -> signedHeaders("header_1"),
    "header_2" -> signedHeaders("header_2")
  ))
}
Now add the feeder to your scenario:
  .feed(signedHeaders)
  .exec(
    http("Custom headers")
      .get("url" + "?Action=SendMessage&MessageBody=" + queryEnc)
And finally add the headers to the request:
  .feed(signedHeaders)
  .exec(
    http("Custom headers")
      .get("url" + "?Action=SendMessage&MessageBody=" + queryEnc)
      .header("header_name_1", "${header_1}")
      .header("header_name_2", "${header_2}")
  )

I still had a problem refreshing the header values within the 15-minute window: when my code ran for longer than 15 minutes it failed with SignatureDoesNotMatch or "signature expired". So I corrected my code with the hint from @Amerousful and used the Session like this:
val scen = scenario("Home page")
  .exec(session => session.set("authorization", signedHeaders("Authorization"))
    .set("host", signedHeaders("Host"))
    .set("x-amz-date", signedHeaders("x-amz-date"))
    .set("x-amz-security-token", signedHeaders("x-amz-security-token"))
  )
  .exec(
    http("Custom headers")
      .get("url" + "?Action=SendMessage&MessageBody=" + message)
      .header("Authorization", "${authorization}")
      .header("Host", "${host}")
      .header("x-amz-date", "${x-amz-date}")
      .header("x-amz-security-token", "${x-amz-security-token}")
      //.header("header", "${signer}")
  )

setUp(
  scen.inject(nothingFor(5), constantUsersPerSec(80) during (3600)))
After running this code, the header values refresh every time a user is injected and a new signature is generated, and the scenario runs for the configured duration.

Related

AWS LEX: Slot update, intent update and then new publishing bot through a Lambda function

I am writing a Lambda function that has an array of words that I want to put into a slotType, updating it every time. Here is how it goes: initially, the slotType has the values ['car', 'bus']. The next time I run the Lambda function, the values get updated to ['car', 'bus', 'train', 'flight'], which is the result of appending a new array to the old one.
I want to know how to publish the bot every time the Lambda function gets invoked, so the next time I hit the Lex bot from the front-end it uses the latest slotType in the intent and the newly published bot alias. Yep, also the alias!
I know for a fact that put_slot_type() is working, because the slot type is getting updated in the bot.
Here is the function, which takes in the new labels as a parameter.
def lex_extend_slots(new_labels):
    print('entering lex model...')
    lex = boto3.client('lex-models')
    slot_name = 'keysDb'
    intent_name = 'searchKeys'
    bot_name = 'photosBot'
    res = lex.get_slot_type(
        name = slot_name,
        version = '$LATEST'
    )
    current_labels = res['enumerationValues']
    latest_checksum = res['checksum']
    arr = [x['value'] for x in current_labels]
    labels = arr + new_labels
    print('arry: ', arr)
    print('new_labels', new_labels)
    print('labels in lex: ', labels)
    labels = list(set(labels))
    enumerationList = [{'value': label, 'synonyms': []} for label in labels]
    print('getting ready to push enum..: ', enumerationList)
    res_slot = lex.put_slot_type(
        name = slot_name,
        description = 'updated slots...',
        enumerationValues = enumerationList,
        valueSelectionStrategy = 'TOP_RESOLUTION',
    )
    res_build_intent = lex.create_intent_version(
        name = intent_name
    )
    res_build_bot = lex.create_bot_version(
        name = bot_name,
        checksum = latest_checksum
    )
    return current_labels
It looks like you're using Version 1 of the Lex Models API on Boto3.
You can use the put_bot method in the lex-models client to effectively create or update your Lex bot.
The put_bot method expects the full list of intents to be used for building the bot.
It is worth mentioning that you will first need to use put_intent to update your intents to ensure they use the latest version of your updated slotType.
Here's the documentation for put_intent.
The appropriate methods for creating and updating aliases are contained in the same link that I've shared above.
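Very roughly (and untested), the whole flow with Boto3 might look like the sketch below. The alias name 'prod' is an assumption, and since put_intent replaces the entire $LATEST intent you would also copy across any other fields you rely on (confirmation prompt, rejection statement, and so on):
import boto3

lex = boto3.client('lex-models')

# 1. Update the $LATEST intent so it picks up the updated slot type,
#    then snapshot it as a new numbered version.
intent = lex.get_intent(name='searchKeys', version='$LATEST')
updated_intent = lex.put_intent(
    name='searchKeys',
    checksum=intent['checksum'],                  # checksum of the $LATEST version being overwritten
    slots=intent['slots'],                        # reuse/adjust the existing slot definitions
    sampleUtterances=intent['sampleUtterances'],
    fulfillmentActivity=intent['fulfillmentActivity']
)
intent_version = lex.create_intent_version(
    name='searchKeys',
    checksum=updated_intent['checksum']
)['version']

# 2. Rebuild the bot against the new intent version.
#    processBehavior='BUILD' starts a build; in practice you may need to
#    poll get_bot until its status is READY before versioning/aliasing.
bot = lex.get_bot(name='photosBot', versionOrAlias='$LATEST')
updated_bot = lex.put_bot(
    name='photosBot',
    checksum=bot['checksum'],
    intents=[{'intentName': 'searchKeys', 'intentVersion': intent_version}],
    locale=bot['locale'],
    childDirected=bot['childDirected'],
    abortStatement=bot['abortStatement'],
    clarificationPrompt=bot['clarificationPrompt'],
    processBehavior='BUILD'
)
bot_version = lex.create_bot_version(
    name='photosBot',
    checksum=updated_bot['checksum']
)['version']

# 3. Point the alias at the new bot version.
#    (If the alias doesn't exist yet, call put_bot_alias without a checksum.)
alias = lex.get_bot_alias(name='prod', botName='photosBot')
lex.put_bot_alias(
    name='prod',
    botName='photosBot',
    botVersion=bot_version,
    checksum=alias['checksum']
)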

Attempting to put a delay in a loop in Postman

I'm trying to put a 1 second delay using setTimeout(()=>{},1000) in the Pre-request Script for a Postman POST call.
var moment = require('moment');
var tap1TimeStr = pm.environment.get("now");
var tap1TimeMoment = moment(tap1TimeStr,"YYYY-MM-DDTHH:mm:ss");
var expTap2Time = tap1TimeMoment.add(2, 'minutes').format("YYYY-MM-DDTHH:mm:ss");
console.log("Tap 2 timestamp should be: " + expTap2Time);
var timestamp;
var timecheck = false;
while(!timecheck)
{
  setTimeout(() => {}, 1000);
  timecheck = moment.utc().isSame(expTap2Time);
  console.log("timecheck: " + timecheck);
  timestamp = moment.utc().format("YYYY-MM-DDTHH:mm:ss");
}
console.log("Timestamp is now: " + timestamp);
pm.environment.set("now", timestamp);
But it doesn't seem to work, and I can see that the console.log line is being printed far more frequently than once per second. The exercise here is to send the "Tap 2" POST exactly 2 minutes after the first POST (tracked by the 'now' variable). Also, it seems like Postman takes a fair bit of time before it even starts executing this particular script.
Edit: The main requirement here is to send the "Tap 2" POST request exactly 2 minutes AFTER the "Tap 1" POST request. How best to implement that? Especially if setTimeout() is non-blocking and thus probably can't be used in a loop.
Does anyone have any ideas?
setTimeout() takes a callback function which is executed after the specified delay, so only what happens inside that callback will happen after the delay.
setTimeout(() => {
  console.log("This will be executed after 1 second");
}, 1000);
console.log("This will immediately be executed");
setTimeout() is asynchronous and non-blocking, so JavaScript will call setTimeout but not wait 1 second for it to return; it immediately moves on to the next instruction. Only after that 1 second has passed will the callback passed to setTimeout() be scheduled and executed. Have a look at this YouTube video for a good explanation of what's going on.

Meteor regex find() far slower than in MongoDB console

I've been researching A LOT for the past 2 weeks and can't pinpoint the exact reason why my Meteor app returns results so slowly.
Currently I have only a single collection in my Mongo database, with around 200,000 documents. To search it I am using Meteor subscriptions based on a given keyword. Here is my query:
db.collection.find({$or: [
  {title: {$regex: ".*java.*", $options: "i"}},
  {company: {$regex: ".*java.*", $options: "i"}}
]})
When I run the above query in the mongo shell, the results are returned instantly. But when I use it in the Meteor client, the results take almost 40 seconds to come back from the server. Here is my Meteor client code:
Template.testing.onCreated(function () {
  var instance = this;
  // initialize the reactive variables
  instance.loaded = new ReactiveVar(0);
  instance.limit = new ReactiveVar(20);
  instance.autorun(function () {
    // get the limit
    var limit = instance.limit.get();
    var keyword = Router.current().params.query.k;
    var searchByLocation = Router.current().params.query.l;
    var startDate = Session.get("startDate");
    var endDate = Session.get("endDate");
    // subscribe to the posts publication
    var subscription = instance.subscribe('sub_testing', limit, keyword, searchByLocation, startDate, endDate);
    // if subscription is ready, set limit to newLimit
    $('#searchbutton').val('Searching');
    if (subscription.ready()) {
      $('#searchbutton').val('Search');
      instance.loaded.set(limit);
    } else {
      console.log("> Subscription is not ready yet. \n\n");
    }
  });
  instance.testing = function() {
    return Collection.find({}, {sort: {id: -1}, limit: instance.loaded.get()});
  };
});
And here is my meteor server code:
Meteor.publish('sub_testing', function(limit, keyword, searchByLocation, startDate, endDate) {
  Meteor._sleepForMs(200);
  var pat = ".*" + keyword + ".*";
  var pat2 = ".*" + searchByLocation + ".*";
  return Jobstesting.find({
    $or: [{ title: { $regex: pat, $options: "i" } }, { company: { $regex: pat, $options: "i" } },
          { description: { $regex: pat, $options: "i" } }, { location: { $regex: pat2, $options: "i" } },
          { country: { $regex: pat2, $options: "i" } }],
    $and: [{ date_posted: { $gte: endDate, $lt: startDate } }]
  }, { sort: { date_posted: -1 }, limit: limit, skip: limit });
});
One point I'd also like to mention here is that I use "Load More" pagination, and by default the limit parameter gets 20 records. On each "Load More" click I increment the limit parameter by 20, so on the first click it is 20, on the second click 40, and so on...
Any help on where I'm going wrong would be appreciated.
But when I use it in Meteor client, the results take almost 40 seconds to return from server.
You may be misunderstanding how Meteor is accessing your data.
Queries run on the client are processed on the client.
Meteor.publish - Makes data available on the server
Meteor.subscribe - Downloads that data from the server to the client.
Collection.find - Looks through the data on the client.
If you think the Meteor side is slow, you should time it server side (print time before/after) and file a bug.
If you're implementing a pager, you might try a Meteor method instead, or a pager package.

Update SharedWith using Sharepoint 2013 API

I am trying to update a specific property of a SharePoint 2013 list item, i.e. 'SharedWithUsersId'. Although I get response code 204 (indicating success), when I check the list item again I don't see any change in SharedWithUsersId. When I tried changing other properties, like the title, it worked like a charm. So I want to know whether the 'SharedWithUsersId' property can be edited via this particular API call at all.
Cheers, Z
It can be changed, but not directly. You can do it via the REST API. For example:
function setNewPermissionsForGroup() {
  var endpointUri = appweburl + "/_api/SP.AppContextSite(@target)/web/lists/getbytitle('";
  endpointUri += listTitle + "')/roleassignments/addroleassignment(principalid=" + groupId;
  endpointUri += ",roledefid=" + targetRoleDefinitionId + ")?@target='" + hostweburl + "'";
  executor.executeAsync({
    url: endpointUri,
    method: 'POST',
    headers: { 'X-RequestDigest': $('#__REQUESTDIGEST').val() },
    success: successHandler,
    error: errorHandler
  });
}
You can find more details here

ColdFusion HTTP POST large strings

Has anyone noticed that if you try to post a string that exceeds 1,000,000 characters, it simply does not include the field with the request?
...and doesn't throw()!
e.g.
<cfscript>
var h = new http( url = "http://...", method = "post" );
h.addParam( type = "formField", name = "a", value = repeatString("a",5000) );
h.addParam( type = "formField", name = "b", value = repeatString("b",1000000) );
h.addParam( type = "formField", name = "c", value = repeatString("c",1000001) );
var p = h.send().getPrefix();
writeDump( var = p, abort = true );
</cfscript>
The "a" and "b" fields are present in the form scope of the recipient page.
The "c" field is missing!
ColdFusion 9,0,1,274733 + chf9010002.jar, Mac OS X 10.6.8, Java 1.6.0_31
Edit: It now works as expected!
Not sure what has changed; my CF admin configuration remains the same. The only possible candidate I can come up with is a recent Apple Java update. Could that be it?
You may need to specify
enctype="multipart/form-data"
The maximum POST size is a setting within the CF administrator.
In ColdFusion 9 (this setting has existed for a while, but may live elsewhere in other versions):
Click on the "Server Settings" group to expand it, then click the "Settings" link (the top link). On the Settings page:
Maximum size of post data 100 MB (default)
Limits the amount of data that can be posted to the server in a single request. ColdFusion rejects requests larger than the specified limit.
It's interesting that you're hitting a limit right at 1,000,000; sounds like someone got a little lazy with the "bytes" computation. :) At any rate, I'd try tinkering with this setting.
Just an FYI: You'll encounter a similar issue with data truncation on data inserts/updates unless you set your datasource to allow "Long Text Buffer (chr)" greater than the 64,000 default limit.