/*
typedef struct _HRFS_VOLUME_CONTROL_BLOCK
{
    FSRTL_ADVANCED_FCB_HEADER VolumeFileHeader;
    ULONG nodeType;
    FAST_MUTEX AdvancedFcbHeaderMutex;
    ...
} HRFS_VOLUME_CONTROL_BLOCK, *HRFS_VOLUME_CONTROL_BLOCK_PTR;
*/
DumpFileObject(*(pVolDev->fileObject));

Vcb = (HRFS_VOLUME_CONTROL_BLOCK_PTR)ExAllocatePool(PagedPool, sizeof(HRFS_VOLUME_CONTROL_BLOCK));
pVolDev->fileObject->SectionObjectPointer =
    (PSECTION_OBJECT_POINTERS)ExAllocatePool(PagedPool, sizeof(SECTION_OBJECT_POINTERS));
pVolDev->fileObject->WriteAccess = TRUE;
pVolDev->fileObject->ReadAccess = TRUE;
pVolDev->fileObject->DeleteAccess = TRUE;
pVolDev->fileObject->FsContext = &HrfsData.gVolume;
pVolDev->fileObject->Vpb = Vpb;

CC_FILE_SIZES fileSize;
fileSize.AllocationSize.QuadPart = fileSize.FileSize.QuadPart = sizeof(PACKED_BOOT_SECTOR);
fileSize.ValidDataLength.QuadPart = 0xFFFFFFFFFFFFFFFF;

CcInitializeCacheMap(pVolDev->fileObject,
                     &fileSize,
                     TRUE,
                     &HrfsData.CacheManagerNoOpCallbacks,
                     Vcb);
In this code segment, a crash occurred when I called the CcInitializeCacheMap function.
The FILE_OBJECT and the dump information are as follows:
fileObject.Size : d8
fileObject.DeviceObject : c2221670
fileObject.Vpb : c39302e0
fileObject.FsContext : 32166f0
fileObject.FsContext2 : 0
fileObject.SectionObjectPointer : 0
fileObject.PrivateCacheMap : 0
fileObject.FinalStatus : 0
fileObject.RelatedFileObject : 0
fileObject.LockOperation : 0
fileObject.DeletePending : 0
fileObject.ReadAccess : 1
fileObject.WriteAccess : 1
fileObject.DeleteAccess : 1
fileObject.SharedRead : 0
fileObject.SharedWrite : 0
fileObject.SharedDelete : 0
fileObject.Flags : 40100
fileObject.FileName : 247bb70
fileObject.CurrentByteOffset : 0
fileObject.Waiters : 0
fileObject.Busy : 0
fileObject.LastLock : 0
fileObject.FileObjectExtension : 0
The stack text is as follows:
fffff880`0247bac0 fffff880`03241c78 : fffff880`00000000 00000000`00000000 00000000`00000001 fffff880`032166c8 : nt!CcInitializeCacheMap+0xd3
fffff880`0247bba0 fffff880`0323e095 : fffffa80`c303b010 fffffa80`c2222040 fffffa80`c39302e0 fffffa80`c3d56a40 : fastfatDemo!FatMountVolume+0xaf8 [G:\BaiduNetdiskDownload\fastfat_V1G13\fastfat_File_System_Driver\FsCtrl.c # 1460]
fffff880`0247c2f0 fffff880`0323ecb7 : fffffa80`c303b010 fffffa80`c259bb40 00000000`00000065 00000000`00000003 : fastfatDemo!FatCommonFileSystemControl+0xe5 [G:\BaiduNetdiskDownload\fastfat_V1G13\fastfat_File_System_Driver\FsCtrl.c # 1053]
fffff880`0247c340 fffff880`0113d4bc : fffffa80`c3d56a40 fffffa80`c259bb40 00000000`00000000 00000000`00000000 : fastfatDemo!FatFsdFileSystemControl+0x127 [G:\BaiduNetdiskDownload\fastfat_V1G13\fastfat_File_System_Driver\FsCtrl.c # 969]
fffff880`0247c3a0 fffff880`01138971 : fffffa80`c3d56450 00000000`00000000 fffffa80`c3024200 fffffa80`c3129cb0 : fltmgr!FltpFsControlMountVolume+0x28c
fffff880`0247c470 fffff800`04334e6b : fffffa80`c3d56450 00000000`00000000 fffffa80`c3d56450 fffffa80`c259bb40 : fltmgr!FltpFsControl+0x101
fffff880`0247c4d0 fffff800`040789e7 : fffff880`0247c7c0 fffff880`0247c701 fffffa80`c2221600 00000000`00000000 : nt!IopMountVolume+0x28f
fffff880`0247c590 fffff800`044fac6d : 00000000`00000025 00000000`00000000 fffff880`0247c7c0 fffff880`0247c768 : nt!IopCheckVpbMounted+0x1b7
fffff880`0247c600 fffff800`044229a4 : fffffa80`c2221670 00000000`00000000 fffffa80`c31dbb10 fffff8a0`00000001 : nt!IopParseDevice+0xb4d
fffff880`0247c760 fffff800`042fd756 : 00000000`00000000 fffff880`0247c8e0 00000000`00000040 fffffa80`c15c07b0 : nt!ObpLookupObjectName+0x784
fffff880`0247c860 fffff800`044c9d88 : fffffa80`c3d20cb0 00000000`00000000 00000000`00000401 fffff800`043fdef6 : nt!ObOpenObjectByName+0x306
fffff880`0247c930 fffff800`0435d7f4 : fffffa80`c629f870 fffff8a0`80100080 00000000`0029f4f8 00000000`0029f448 : nt!IopCreateFile+0xa08
fffff880`0247c9e0 fffff800`040b4bd3 : fffffa80`c3539b00 00000000`00000001 fffffa80`c629f870 fffff800`042fe1e4 : nt!NtCreateFile+0x78
fffff880`0247ca70 00000000`77629dda : 000007fe`fd3760d6 00000000`00000000 00000000`80000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x13
00000000`0029f428 000007fe`fd3760d6 : 00000000`00000000 00000000`80000000 00000000`00000000 00000000`000c0000 : ntdll!ZwCreateFile+0xa
00000000`0029f430 00000000`773b0add : 00000000`0034bec0 00000000`80000000 00000000`00000003 00000000`0029f892 : KERNELBASE!CreateFileW+0x2cd
00000000`0029f590 000007fe`f1971c1e : 00000000`00000000 00000000`00000000 00000000`01d14280 00000000`0029f830 : kernel32!CreateFileWImplementation+0x7d
00000000`0029f5f0 00000000`00000000 : 00000000`00000000 00000000`01d14280 00000000`0029f830 00000000`00000003 : FVEAPI+0x1c1e
I traced the address to nt!CcInitializeCacheMap+0xd3 and found a compare instruction there.
So what in my program caused the crash in CcInitializeCacheMap?
The problem is that this code should not allocate from PagedPool; the SECTION_OBJECT_POINTERS structure (and the VCB handed to the Cache Manager) must come from nonpaged pool.
// Error code:
Vcb = (HRFS_VOLUME_CONTROL_BLOCK_PTR)ExAllocatePool(PagedPool, sizeof(HRFS_VOLUME_CONTROL_BLOCK));
pVolDev->fileObject->SectionObjectPointer =
    (PSECTION_OBJECT_POINTERS)ExAllocatePool(PagedPool, sizeof(SECTION_OBJECT_POINTERS));
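A minimal sketch of the corrected allocations (NonPagedPool instead of PagedPool; the NULL checks, zeroing, and STATUS_INSUFFICIENT_RESOURCES failure path are additions for illustration, not part of the original code):
Vcb = (HRFS_VOLUME_CONTROL_BLOCK_PTR)ExAllocatePool(NonPagedPool, sizeof(HRFS_VOLUME_CONTROL_BLOCK));
if (Vcb == NULL)
{
    return STATUS_INSUFFICIENT_RESOURCES;
}
RtlZeroMemory(Vcb, sizeof(HRFS_VOLUME_CONTROL_BLOCK));

pVolDev->fileObject->SectionObjectPointer =
    (PSECTION_OBJECT_POINTERS)ExAllocatePool(NonPagedPool, sizeof(SECTION_OBJECT_POINTERS));
if (pVolDev->fileObject->SectionObjectPointer == NULL)
{
    ExFreePool(Vcb);
    return STATUS_INSUFFICIENT_RESOURCES;
}
RtlZeroMemory(pVolDev->fileObject->SectionObjectPointer, sizeof(SECTION_OBJECT_POINTERS));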
I started running DynamoDB locally, and below is the PowerShell script I'm using to perform a test insert. From the output I can see it is able to read the table, but it fails on the PutItem operation.
I'd appreciate any suggestions or references on this; I'm not sure what I'm missing here.
$localEndpoint = 'http://localhost:8000'
$region = 'us-west-2'
$table = Get-DDBTable -TableName test01 -EndpointUrl $localEndpoint
$table
$client = New-Object Amazon.DynamoDBv2.AmazonDynamoDBClient($localEndpoint, $region)
$client
$req = New-Object Amazon.DynamoDBv2.Model.PutItemRequest
$req.TableName = 'test01'
$req.Item = New-Object 'system.collections.generic.dictionary[string,Amazon.DynamoDBv2.Model.AttributeValue]'
$valObj = New-Object Amazon.DynamoDBv2.Model.AttributeValue
$valObj.S = 'MyName'
$req.Item.Add('Name',$valObj)
$req
$client.PutItem($req)
Results with exception:
ArchivalSummary :
AttributeDefinitions : {Name}
BillingModeSummary :
CreationDateTime : 3/11/2022 11:49:44 PM
GlobalSecondaryIndexes : {}
GlobalTableVersion :
ItemCount : 0
KeySchema : {Name}
LatestStreamArn :
LatestStreamLabel :
LocalSecondaryIndexes : {}
ProvisionedThroughput : Amazon.DynamoDBv2.Model.ProvisionedThroughputDescription
Replicas : {}
RestoreSummary :
SSEDescription :
StreamSpecification :
TableArn : arn:aws:dynamodb:ddblocal:000000000000:table/test01
TableId :
TableName : test01
TableSizeBytes : 0
TableStatus : ACTIVE
Config : Amazon.DynamoDBv2.AmazonDynamoDBConfig
ConditionalOperator :
ConditionExpression :
Expected : {}
ExpressionAttributeNames : {}
ExpressionAttributeValues : {}
Item : {[Name, Amazon.DynamoDBv2.Model.AttributeValue]}
ReturnConsumedCapacity :
ReturnItemCollectionMetrics :
ReturnValues :
TableName : test01
StreamUploadProgressCallback :
RequestState : {}
UseSigV4 : False
Exception calling "PutItem" with "1" argument(s): "Credential must have exactly 5 slash-delimited elements, e.g. keyid/date/region/service/term, got 'http://localhost:8000/20220312/us-west-2/dynamodb/aws4_request'"
At line:19 char:1
+ $client.PutItem($req)
+ ~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : AmazonDynamoDBException
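The credential scope in the exception ('http://localhost:8000/20220312/us-west-2/dynamodb/aws4_request') shows the endpoint URL being signed as if it were an access key: New-Object with two strings resolves to the (accessKeyId, secretKey) constructor overload of AmazonDynamoDBClient, not an endpoint/region one. A sketch of the likely fix, pointing the client at DynamoDB Local through an AmazonDynamoDBConfig instead (the dummy credentials are placeholders; DynamoDB Local doesn't validate them):
$config = New-Object Amazon.DynamoDBv2.AmazonDynamoDBConfig
$config.ServiceURL = $localEndpoint     # endpoint belongs on the config, not in the constructor
$client = New-Object Amazon.DynamoDBv2.AmazonDynamoDBClient('dummyAccessKey', 'dummySecretKey', $config)
$client.PutItem($req)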
My code looks like this:
__try
{
    RaiseException(1, 0, 0, NULL);
}
__except(EXCEPTION_EXECUTE_HANDLER)
{
    ...
}
However, the __except block never runs and the crash occurs anyway.
32-bit has no problem; 64-bit has the problem.
Next are the contents of the dump:
EXCEPTION_RECORD: (.exr -1)
ExceptionAddress: 00007ffc6461a308 (KERNELBASE!RaiseException+0x0000000000000068)
ExceptionCode: 00000001
ExceptionFlags: 00000000
NumberParameters: 0
STACK_TEXT:
00000000`07936cd0 00000000`c0ba5ee4 : 00000000`179f8470 00007ffc`676a0efc 00000000`179f8470 00000000`00000246 : KERNELBASE!RaiseException+0x68
00000000`07936db0 00000000`179f8470 : 00007ffc`676a0efc 00000000`179f8470 00000000`00000246 00000000`00000000 : 0xc0ba5ee4
00000000`07936db8 00007ffc`676a0efc : 00000000`179f8470 00000000`00000246 00000000`00000000 00000000`00000000 : 0x179f8470
00000000`07936dc0 00000000`c0ba4648 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`079383e1 : ntdll!RtlpRemoveVectoredHandler+0xf4
00000000`07936e00 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`079383e1 00000000`00000000 : 0xc0ba4648
I don't understand why the __except handler doesn't run.
I am trying to figure out post-processing code to extract data from the output of an indexed variable in Pyomo. The output of the variable is given below.
x : Size=24, Index=x_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
('alum', 'Face 1') : 0 : 0.0 : 1 : False : False : Binary
('alum', 'Face 2') : 0 : 0.0 : 1 : False : False : Binary
('alum', 'Face 3') : 0 : 0.0 : 1 : False : False : Binary
('alum', 'Face 4') : 0 : 0.0 : 1 : False : False : Binary
('alum', 'Face 5') : 0 : 0.0 : 1 : False : False : Binary
('alum', 'Face 6') : 0 : 0.0 : 1 : False : False : Binary
('copper', 'Face 1') : 0 : 1.0 : 1 : False : False : Binary
('copper', 'Face 2') : 0 : 1.1025499604398013e-08 : 1 : False : False : Binary
('copper', 'Face 3') : 0 : 0.7535049595290465 : 1 : False : False : Binary
('copper', 'Face 4') : 0 : 1.0003766762453678e-08 : 1 : False : False : Binary
('copper', 'Face 5') : 0 : 1.0265826814190929e-08 : 1 : False : False : Binary
('copper', 'Face 6') : 0 : 1.0 : 1 : False : False : Binary
('steel', 'Face 1') : 0 : 0.0 : 1 : False : False : Binary
('steel', 'Face 2') : 0 : 0.0 : 1 : False : False : Binary
('steel', 'Face 3') : 0 : 0.0 : 1 : False : False : Binary
('steel', 'Face 4') : 0 : 0.0 : 1 : False : False : Binary
('steel', 'Face 5') : 0 : 0.0 : 1 : False : False : Binary
('steel', 'Face 6') : 0 : 0.0 : 1 : False : False : Binary
('zinc', 'Face 1') : 0 : 1.0461836921235404e-08 : 1 : False : False : Binary
('zinc', 'Face 2') : 0 : 1.0 : 1 : False : False : Binary
('zinc', 'Face 3') : 0 : 0.24649506011873923 : 1 : False : False : Binary
('zinc', 'Face 4') : 0 : 1.0 : 1 : False : False : Binary
('zinc', 'Face 5') : 0 : 1.0 : 1 : False : False : Binary
('zinc', 'Face 6') : 0 : 9.618909950291308e-09 : 1 : False : False : Binary
The expected output is a dictionary, as shown below:
selected_materials = {'Face 1': 'copper',
                      'Face 2': 'zinc',
                      'Face 3': 'copper',
                      'Face 4': 'zinc',
                      'Face 5': 'zinc',
                      'Face 6': 'copper'}
The idea is to choose a material for each face. The selection criterion is the maximum value obtained in the output variable x for each key (combination of material and face). E.g. for Face 1, the value is compared among the 4 materials and the one with the highest value is chosen.
My attempt:
I wrote code to find the highest value among the materials for an individual face, as shown below:
max([pyo.value(mdl.x[m, 'Face 1']) for m in materials])
where materials is a list (defined in the initial step of the algorithm):
materials = ['steel', 'alum', 'copper', 'zinc']
But finding the material corresponding to the highest value seems challenging. If someone has an idea, kindly help me. I would also appreciate any suggestions for a better approach.
There are a couple of ways to do this. The first thing I'd do is pull the values out of the variables into a plain old Python data structure, which makes them a bit easier to work with. There are probably a couple of variants of the example below that you could implement, depending on how comfortable you are with comprehensions, etc.
import pyomo.environ as pyo
from collections import defaultdict
from operator import itemgetter

matls = ['steel', 'wood']
faces = ['Face 1', 'Face 2']

some_values = {('steel', 'Face 1'): 1,
               ('wood',  'Face 1'): 2.2,
               ('steel', 'Face 2'): 3.5,
               ('wood',  'Face 2'): 1.1}

# PYOMO MODEL
m = pyo.ConcreteModel()

# Sets
m.M = pyo.Set(initialize=matls)
m.F = pyo.Set(initialize=faces)

# Variables
# Initializing just for the purposes of sorting later....normally NOT needed
m.X = pyo.Var(m.M, m.F, domain=pyo.NonNegativeReals, initialize=some_values)

m.pprint()

# let's pull the values out into a list of tuples.
# this isn't totally necessary, but it is pretty clear, and a good starting place
res = [(face, matl, m.X[matl, face].value) for face in m.F for matl in m.M]

for item in res:
    print(item)

# ok... let's gather the results by face and then get the max
# (you could also sort the results or whatever...)
choices = defaultdict(list)
for face, matl, score in res:
    choices[face].append((matl, score))

# pick the max
for face in choices:
    matl, score = max(choices[face], key=itemgetter(1))
    print(f'for face {face} the best material is: {matl} with score {score:.2f}')
Yields:
1 Var Declarations
X : Size=4, Index=X_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
('steel', 'Face 1') : 0 : 1 : None : False : False : NonNegativeReals
('steel', 'Face 2') : 0 : 3.5 : None : False : False : NonNegativeReals
('wood', 'Face 1') : 0 : 2.2 : None : False : False : NonNegativeReals
('wood', 'Face 2') : 0 : 1.1 : None : False : False : NonNegativeReals
4 Declarations: M F X_index X
('Face 1', 'steel', 1)
('Face 1', 'wood', 2.2)
('Face 2', 'steel', 3.5)
('Face 2', 'wood', 1.1)
for face Face 1 the best material is: wood with score 2.20
for face Face 2 the best material is: steel with score 3.50
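If you're comfortable with comprehensions, the gather-and-max steps above can be collapsed into a single dictionary build (a variant sketch using the same model m):
selected = {face: max(m.M, key=lambda matl: m.X[matl, face].value)
            for face in m.F}
print(selected)    # {'Face 1': 'wood', 'Face 2': 'steel'}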
Quick summary: C++ app loading data from SQL Server using OTL4, writing to Mongo using mongocxx bulk_write; the strings seem to be getting mangled somehow, so they don't work in the aggregation pipeline (but appear fine otherwise).
I have a simple Mongo collection which doesn't seem to behave as expected with an aggregation pipeline when I'm projecting multiple fields. It's a trivial document, no nesting, fields are just doubles and strings.
The first 2 queries work as expected:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
> db.TemporaryData.aggregate( [ { $project : { Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622 }
But combining them doesn't return both fields as expected.
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
It seems to be specific to the ParametersId field; for instance, if I choose 2 other fields it's OK.
> db.TemporaryData.aggregate( [ { $project : { Col1:1, Col2:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575, "Col2" : "1101-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579, "Col2" : "1103-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622, "Col2" : "1400-3" }
For some reason, when I include the ParametersId field, all hell breaks loose in the pipeline:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col2:1, Col1:1, Col3:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617, "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617, "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617, "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617, "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617, "Col1" : 622 }
DB version and the data:
> db.version()
4.0.2
> db.TemporaryData.find()
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 575, "Col2" : "1101-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 579, "Col2" : "1103-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 616, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 36, "Col1" : 617, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 622, "Col2" : "1400-3", "Col3" : "CHF" }
Update: quoting the field names makes no difference. I'm typing all the above into the mongo.exe command line, but I see the same behavior in my C++ application with a slightly more complex pipeline (projecting all fields to guarantee order).
This same app is actually creating the data in the first place - does anyone know of anything that can go wrong? It's all done with the mongocxx lib.
Update: it turns out there's something going wrong with my handling of strings. Without the string fields in the data, it's all fine. So I've knackered my strings somehow; even though they look and behave correctly in other ways, they don't play nice with the aggregation pipeline. I'm using mongocxx::collection.bulk_write to write standard std::strings which are being loaded from SQL Server through the OTL4 header. In between there's a strncpy_s when they get stored internally. I can't seem to create a simple reproducible example.
Just to be safe that there is no conflict with anything else, try using the projection with strictly formatted JSON (add quotes to the keys):
db.TemporaryData.aggregate( [ { $project : { "ParametersId":1, "Col1":1 } } ] )
Finally found the issue: corrupt documents. Because I was using bulk_write for the insert, they were getting into the database anyway and causing this strange behavior. I switched to insert_many, which flagged the corrupt document, and then I could track down the bug.
The docs were corrupt because I was writing the same field-value data multiple times, which seems to break the bsoncxx::builder::stream::document I was using to construct them.
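For what it's worth, a guess at a minimal version of the bug (illustrative only; the original builder code isn't shown here). The stream builder will happily emit the same key twice, and the resulting duplicate-key document is something the server may accept on insert while still misbehaving in the pipeline:
#include <bsoncxx/builder/stream/document.hpp>
#include <bsoncxx/json.hpp>
#include <iostream>

int main()
{
    bsoncxx::builder::stream::document doc{};
    // appending the same field-value data twice: the builder raises no error
    doc << "ParametersId" << 526988617 << "Col1" << 575 << "Col1" << 575;
    // prints a document containing "Col1" twice
    std::cout << bsoncxx::to_json(doc.view()) << std::endl;
}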
I want to extract the addressId for a given housenumber in a response with a long array. The array response looks like this (snippet):
: : "footprint":null,
: : "type":null,
: : "addressId":"0011442239",
: : "streetName":"solitudestr.",
: : "streetNrFirstSuffix":null,
: : "streetNrFirst":null,
: : "streetNrLastSuffix":null,
: : "streetNrLast":null,
: : "houseNumber":"25",
: : "houseName":null,
: : "city":"stuttgart",
: : "postcode":"70499",
: : "stateOrProvince":null,
: : "countryName":null,
: : "poBoxNr":null,
: : "poBoxType":null,
: : "attention":null,
: : "geographicAreas":
: : [
: : ],
: : "firstName":null,
: : "lastName":null,
: : "title":null,
: : "region":"BW",
: : "additionalInfo":null,
: : "properties":
: : [
: : ],
: : "extAddressId":null,
: : "entrance":null,
: : "district":null,
: : "addressLine1":null,
: : "addressLine2":null,
: : "addressLine3":null,
: : "addressLine4":null,
: : "companyName":null,
: : "contactName":null,
: : "houseNrExt":null,
: : "derbyStack":false
: },
: {
: : "footprint":null,
: : "type":null,
: : "addressId":"0011442246",
: : "streetName":"solitudestr.",
: : "streetNrFirstSuffix":null,
: : "streetNrFirst":null,
: : "streetNrLastSuffix":null,
: : "streetNrLast":null,
: : "houseNumber":"26",
: : "houseName":null,
: : "city":"stuttgart",
: : "postcode":"70499",
: : "stateOrProvince":null,
: : "countryName":null,
: : "poBoxNr":null,
: : "poBoxType":null,
: : "attention":null,
: : "geographicAreas":
: : [
: : ],
: : "firstName":null,
: : "lastName":null,
: : "title":null,
: : "region":"BW",
: : "additionalInfo":null,
: : "properties":
: : [
: : ],
: : "extAddressId":null,
: : "entrance":null,
: : "district":null,
: : "addressLine1":null,
: : "addressLine2":null,
: : "addressLine3":null,
: : "addressLine4":null,
: : "companyName":null,
: : "contactName":null,
: : "houseNrExt":null,
: : "derbyStack":false
: },
I only show 2 house numbers in this response as an example, but the original response is bigger.
Q: How can I match the addressId for a specific houseNumber (I have these house numbers in my CSV dataset)? I could do a regex which extracts all addressIds, but then I'd have to use the correct matching no. in JMeter. However, I cannot assume that the ordering of these will remain the same in the different environments we test the script against.
I would recommend reconsidering using regular expressions to deal with JSON data.
Starting from JMeter 3.0 you have a JSON Path PostProcessor. Using it you can execute arbitrary JSONPath queries, so extracting the addressId for the given houseNumber would be as simple as:
`$..[?(@.houseNumber == '25')].addressId`
You can use a JMeter variable instead of the hard-coded 25 value like:
$..[?(@.houseNumber == '${houseNumber}')].addressId
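In the JSON Path PostProcessor itself, that maps onto the standard fields roughly like this (the variable name addressId is an assumption):
Names of created variables: addressId
JSON Path expressions: $..[?(@.houseNumber == '${houseNumber}')].addressId
Match No.: 1
The extracted value is then available as ${addressId} in subsequent samplers.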
If for some reason you have to use JMeter < 3.0, you can still have JSONPath post-processing capabilities using the JSON Path Extractor via JMeter Plugins.
See the Advanced Usage of the JSON Path Extractor in JMeter article, in particular the Conditional Select chapter, for more information.
You may use a regex that captures the digits after addressId and before a specific houseNumber, using an unrolled tempered greedy token in between (for better efficiency) to make sure the regex engine does not overflow into another record.
"addressId":"(\d+)"(?:[^\n"]*(?:\n(?!: +: +\[)[^\n"]*|"(?!houseNumber")[^\n"]*)*"houseNumber":"25"|$)
See the regex demo (replace 25 with the necessary house number)
Details:
"addressId":" - literal string
(\d+) - Group 1 ($1$ template value) capturing 1+ digits
" - a quote
(?:[^\n"]*(?:\n(?!: +: +\[)[^\n"]*|"(?!houseNumber")[^\n"]*)*"houseNumber":"25"|$) - a non-capturing group with 2 alternatives: either $ (end of string), or:
[^\n"]* - zero or more chars other than newline and "
(?: - then come 2 alternatives:
\n(?!: +: +\[)[^\n"]* - a newline not followed by a ": : [" style string, then 0+ chars other than a newline and "
| - or
"(?!houseNumber")[^\n"]* - a " not followed by houseNumber", then 0+ chars other than a newline and "
)* - all of which may repeat 0 or more times
"houseNumber":"25" - the house number literal string.