VMware: output the network HealthCheck to a CSV file - vmware

I have a script that works perfectly, and everything shows up on my PowerShell console screen. But I'm trying to figure out how to export the output to a CSV file.
Script:
foreach($vds in Get-VDSwitch)
{
$vds.ExtensionData.Runtime.HostMemberRuntime | %{
$_.HealthCheckResult | where{$_ -is [VMware.Vim.VMwareDVSVlanHealthCheckResult]} |
Select @{N='vdSwitch';E={$vds.Name}},
UplinkPortKey,
@{N='TrunkedVLAN';E={
($_.TrunkedVLAN | %{
if($_.Start -eq $_.End){
"{0}" -f $_.Start
}
else{
"{0}-{1}" -f $_.Start,$_.End
}
}) -join ','
}}
}
}
The output on screen looks like this:
VsanEnabled : False
VsanDiskClaimMode : Manual
HATotalSlots : 3099
HAUsedSlots : 22
HAAvailableSlots : 1527
HASlotCpuMHz : 32
HASlotMemoryMb : 328
HASlotMemoryGB : 0.3203125
HASlotNumVCpus : 1
ParentId : Folder-group-h28
ParentFolder : host
HAEnabled : True
HAAdmissionControlEnabled : True
HAFailoverLevel : 1
HARestartPriority : Medium
HAIsolationResponse : DoNothing
VMSwapfilePolicy : WithVM
DrsEnabled : True
DrsMode : FullyAutomated
DrsAutomationLevel : FullyAutomated
EVCMode : intel-nehalem
Name : mac01dmzp01
CustomFields : {}
ExtensionData : VMware.Vim.ClusterComputeResource
Id : ClusterComputeResource-domain-c12033
Uid : /VIServer=cn\t175726#mac01vcp02.cn.ca:443/Cluster=ClusterComputeResource-domain-c12033/
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 78
TrunkedVLAN : 11-17,396,500
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 79
TrunkedVLAN : 11-17,396,500
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 82
TrunkedVLAN : 11-17,396,500
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 83
TrunkedVLAN : 11-17,396,500
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 358
TrunkedVLAN : 11-17,396,500
vdSwitch : vds-toronto-mac01-2-ports-10Gbe
UplinkPortKey : 359
TrunkedVLAN : 11-17,396,500
a lot more ......

I found that the way to do it is with a function.
#####################################################
# vSphere 6.5
# Get ESX HealthCheck Network Config from VDS
#
# by Gerald Begin (Nov.20 2018)
#################################
##### Set Script Location
Set-Location T:\___Main-Script___\_VDS-vLANs_
##### Add VMWare Module.
Get-Module -Name VMware* -ListAvailable | Import-Module
##### Output Path
$Desti = 'T:\___Main-Script___\_VDS-vLANs_\Output'
Import-Module -Name "T:\__Script_Functions__\Connect2All.ps1" -Force:$true # Function to Connect to ALL vCenters
$Clster = "mac01dmzp01"
#### --------------------------------------
function GetInfo {
###################################################
foreach($vds in Get-VDSwitch)
{
$vds.ExtensionData.Runtime.HostMemberRuntime | %{
$_.HealthCheckResult | where{$_ -is [VMware.Vim.VMwareDVSVlanHealthCheckResult]} |
Select @{N='vdSwitch';E={$vds.Name}},
UplinkPortKey,
@{N='TrunkedVLAN';E={
($_.TrunkedVLAN | %{
if($_.Start -eq $_.End){
"{0}" -f $_.Start
}
else{
"{0}-{1}" -f $_.Start,$_.End
}
}) -join ','
}}
}
}
}
Get-Cluster -Name $Clster | GetInfo | Export-Csv -Path $Desti\Results.csv -NoTypeInformation
Disconnect-VIServer * -Confirm:$false
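For reference, the function isn't strictly required: the output of the original loop can be collected into a variable and piped straight to Export-Csv. A minimal, untested sketch along those lines (reusing the same $Desti path from the script above):
$Results = foreach($vds in Get-VDSwitch)
{
    # Same health-check extraction as above, just captured into a variable
    $vds.ExtensionData.Runtime.HostMemberRuntime | ForEach-Object {
        $_.HealthCheckResult | Where-Object {$_ -is [VMware.Vim.VMwareDVSVlanHealthCheckResult]} |
        Select-Object @{N='vdSwitch';E={$vds.Name}},
            UplinkPortKey,
            @{N='TrunkedVLAN';E={
                ($_.TrunkedVLAN | ForEach-Object {
                    if($_.Start -eq $_.End){ "{0}" -f $_.Start }
                    else{ "{0}-{1}" -f $_.Start,$_.End }
                }) -join ','
            }}
    }
}
# Export the collected objects in one go
$Results | Export-Csv -Path "$Desti\Results.csv" -NoTypeInformation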

Related

Getting exception for putItem when accessing dynamoDB (local) on windows

I started running DynamoDB locally, and I'm using the PowerShell script below to perform a test insert. From the output I can see it is able to read the table, but it fails on the PutItem operation.
I'd appreciate any suggestions or references on this; I'm not sure what I'm missing here.
$localEndpoint = 'http://localhost:8000'
$region = 'us-west-2'
$table = Get-DDBTable -TableName test01 -EndpointUrl $localEndpoint
$table
$client = New-Object Amazon.DynamoDBv2.AmazonDynamoDBClient($localEndpoint, $region)
$client
$req = New-Object Amazon.DynamoDBv2.Model.PutItemRequest
$req.TableName = 'test01'
$req.Item = New-Object 'system.collections.generic.dictionary[string,Amazon.DynamoDBv2.Model.AttributeValue]'
$valObj = New-Object Amazon.DynamoDBv2.Model.AttributeValue
$valObj.S = 'MyName'
$req.Item.Add('Name',$valObj)
$req
$client.PutItem($req)
Results with exception:
ArchivalSummary :
AttributeDefinitions : {Name}
BillingModeSummary :
CreationDateTime : 3/11/2022 11:49:44 PM
GlobalSecondaryIndexes : {}
GlobalTableVersion :
ItemCount : 0
KeySchema : {Name}
LatestStreamArn :
LatestStreamLabel :
LocalSecondaryIndexes : {}
ProvisionedThroughput : Amazon.DynamoDBv2.Model.ProvisionedThroughputDescription
Replicas : {}
RestoreSummary :
SSEDescription :
StreamSpecification :
TableArn : arn:aws:dynamodb:ddblocal:000000000000:table/test01
TableId :
TableName : test01
TableSizeBytes : 0
TableStatus : ACTIVE
Config : Amazon.DynamoDBv2.AmazonDynamoDBConfig
ConditionalOperator :
ConditionExpression :
Expected : {}
ExpressionAttributeNames : {}
ExpressionAttributeValues : {}
Item : {[Name, Amazon.DynamoDBv2.Model.AttributeValue]}
ReturnConsumedCapacity :
ReturnItemCollectionMetrics :
ReturnValues :
TableName : test01
StreamUploadProgressCallback :
RequestState : {}
UseSigV4 : False
Exception calling "PutItem" with "1" argument(s): "Credential must have exactly 5 slash-delimited elements, e.g. keyid/date/region/service/term, got 'http://localhost:8000/20220312/us-west-2/dynamodb/aws4_request'"
At line:19 char:1
+ $client.PutItem($req)
+ ~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : AmazonDynamoDBException
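The exception text hints that the endpoint URL is being used as the signing credential: with New-Object, the two positional strings ($localEndpoint, $region) most likely bind to the (accessKeyId, secretAccessKey) constructor overload, so the URL ends up where an access key ID is expected. A hedged, untested sketch of pointing the client at DynamoDB Local through a config object instead (the key values below are placeholders; DynamoDB Local does not validate them):
$config = New-Object Amazon.DynamoDBv2.AmazonDynamoDBConfig
$config.ServiceURL = 'http://localhost:8000'   # local endpoint belongs on the config, not the constructor
$client = New-Object Amazon.DynamoDBv2.AmazonDynamoDBClient('dummyAccessKey', 'dummySecretKey', $config)
$client.PutItem($req)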

How to match float value in mongoquery?

This is a sample of my MongoDB documents collection. I've listed the expected result below; can anyone guide me toward a solution?
/* 1 */
{
"_id" : 1.0,
"user_id" : "5c592f6716209a24f4125001",
"tu_code" : "5"
}
/* 2 */
{
"_id" : 2.0,
"user_id" : "5c592f6716209a24f4125002",
"tu_code" : "50"
}
/* 3 */
{
"_id" : 3.0,
"user_id" : "5c592f6716209a24f4125003",
"tu_code" : "50.21"
}
/* 4 */
{
"_id" : 4.0,
"user_id" : "5c592f6716209a24f4125004",
"tu_code" : "50.22"
}
/* 5 */
{
"_id" : 5.0,
"user_id" : "5c592f6716209a24f4125005",
"tu_code" : "52"
}
/* 6 */
{
"_id" : 3.0,
"user_id" : "5c592f6716209a24f4125006",
"tu_code" : "5.1"
}
/* 7 */
{
"_id" : 6.0,
"user_id" : "5c592f6716209a24f4125007",
"tu_code" : "5.1.1"
}
/* 8 */
{
"_id" : 7.0,
"user_id" : "5c592f6716209a24f4125008",
"tu_code" : "5.2"
}
/* 9 */
{
"_id" : 8.0,
"user_id" : "5c592f6716209a24f4125009",
"tu_code" : "5.2.1"
}
Here is my mongo aggregate query:
[
{ "$project": {
"tu_code": { "$toLower": "$tu_code" },
}},
{ "$match": { "tu_code": {"$regex": "^5.*"}}}
]
The above query produces the result below:
| _id| tu_code |
| ---| --------|
| 1 | 5 |
| 2 | 50 |
| 3 | 50.21 |
| 4 | 50.22 |
| 5 | 52 |
| 6 | 53 |
| 7 | 5.1 |
| 8 | 5.1.1 |
| 9 | 5.2 |
| 10 | 5.2.1 |
But the expected result I want is given below:
| _id| tu_code |
| ---| --------|
| 1 | 5 |
| 7 | 5.1 |
| 8 | 5.1.1 |
| 9 | 5.2 |
| 10 | 5.2.1 |
Can anyone help me solve this? I don't know how to get my expected result.
Your ^5.* pattern matches anything that starts with 5, including 50 and 52. You can use the \b (word boundary) operator at the end of the search expression so that only codes whose first segment is exactly 5 match:
[
{
"$match": {
"tu_code": {
"$regex": "^5\\b"
}
}
}
]

C++ writing to mongo, string fields not working in aggregation pipeline

**Quick summary:** C++ app loading data from SQL Server using OTL4, writing to Mongo using mongocxx bulk_write; the strings seem to be getting mangled somehow, so they don't work in the aggregation pipeline (but appear fine otherwise).
I have a simple Mongo collection which doesn't seem to behave as expected with an aggregation pipeline when I'm projecting multiple fields. It's a trivial document, no nesting, fields are just doubles and strings.
First 2 queries work as expected:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
> db.TemporaryData.aggregate( [ { $project : { Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622 }
But combining them doesn't return both fields as expected.
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
It seems to be specific to the ParametersId field, for instance if I choose 2 other fields it's OK.
> db.TemporaryData.aggregate( [ { $project : { Col1:1, Col2:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575, "Col2" : "1101-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579, "Col2" : "1103-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622, "Col2" : "1400-3" }
For some reason, when I include the ParametersId field, all hell breaks loose in the pipeline:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col2:1, Col1:1, Col3:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617, "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617, "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617, "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617, "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617, "Col1" : 622 }
DB version and the data:
> db.version()
4.0.2
> db.TemporaryData.find()
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 575, "Col2" : "1101-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 579, "Col2" : "1103-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 616, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 36, "Col1" : 617, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 622, "Col2" : "1400-3", "Col3" : "CHF" }
Update: quoting the field names makes no difference. I'm typing all of the above in the mongo.exe command line, but I see the same behavior in my C++ application with a slightly more complex pipeline (projecting all fields to guarantee order).
This same app is actually creating the data in the first place - does anyone know of anything that could go wrong? All of this uses the mongocxx lib.
** Update **
It turns out there's something going wrong with my handling of strings. Without the string fields in the data, it's all fine. So I've knackered my strings somehow; even though they look and behave correctly in other ways, they don't play nice with the aggregation pipeline. I'm using mongocxx::collection.bulk_write to write standard std::strings which are being loaded from SQL Server through the OTL4 header. In between there's a strncpy_s when they get stored internally. I can't seem to create a simple reproducible example.
Just to be safe that there is no conflict with anything else, try using the projection with strictly formatted JSON (add quotes to the keys):
db.TemporaryData.aggregate( [ { $project : { "ParametersId":1, "Col1":1 } } ] )
I finally found that the issue was corrupt documents, which, because I was using bulk_write for the insert, were getting into the database and causing this strange behavior. I switched to using insert_many, which reported that the document was corrupt, and then I could track down the bug.
The docs were corrupt because I was writing the same field-value data multiple times, which seems to break the bsoncxx::builder::stream::document I was using to construct them.

journalctl to search metadata with regex

Is it possible to search journalctl metadata with patterns? What I am doing right now is searching like journalctl CONTAINER_NAME=cranky.hello --lines=100 -f. What I want to achieve is to match everything after that '.', with a search pattern like journalctl CONTAINER_NAME=cranky.* --lines=100 -f, which would also match CONTAINER_NAME metadata like:
cranky.world
cranky.alive
Below are examples of output when journalctl is executed:
journalctl CONTAINER_NAME=cranky.hello --lines=100 -f
Oct 17 14:33:35 lottery-staging docker[55587]: chdir: /usr/src/app
Oct 17 14:33:35 lottery-staging docker[55587]: daemon: False
Oct 17 14:33:35 lottery-staging docker[55587]: raw_env: []
Oct 17 14:33:35 lottery-staging docker[55587]: pidfile: None
Oct 17 14:33:35 lottery-staging docker[55587]: worker_tmp_dir: None
journalctl CONTAINER_NAME=cranky.hello --lines=100 -f -o json
{ "__CURSOR" : "s=d98b3d664a71409d9a4d6145b0f8ad93;i=731e;b=2f9d75ec91044d52b8c5e5091370bcf7;m=285b067a063;t=55bbf0361352a;x=64b377c33c8fba96", "__REALTIME_TIMESTAMP" : "1508250837136682", "__MONOTONIC_TIMESTAMP" : "2773213487203", "_BOOT_ID" : "2f9d75ec91044d52b8c5e5091370bcf7", "CONTAINER_TAG" : "", "_TRANSPORT" : "journal", "_PID" : "55587", "_UID" : "0", "_GID" : "0", "_COMM" : "docker", "_EXE" : "/usr/bin/docker", "_CMDLINE" : "/usr/bin/docker daemon -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --userland-proxy=false --tlscert /etc/dockercloud/agent/cert.pem --tlskey /etc/dockercloud/agent/key.pem --tlscacert /etc/dockercloud/agent/ca.pem --tlsverify --log-driver journald", "_SYSTEMD_CGROUP" : "/", "_SELINUX_CONTEXT" : [ 117, 110, 99, 111, 110, 102, 105, 110, 101, 100, 10 ], "_MACHINE_ID" : "0a80624bd4c45a792b0a857c59a858d6", "_HOSTNAME" : "lottery-staging", "PRIORITY" : "6", "MESSAGE" : "Running migrations:", "CONTAINER_ID_FULL" : "c8f60546e9d50f034f364259c409760b3390d979d57a773eccd8d852e1c3553f", "CONTAINER_NAME" : "ghost-1.lottery-staging-stack.c6118be4", "CONTAINER_ID" : "c8f60546e9d5", "_SOURCE_REALTIME_TIMESTAMP" : "1508250837135650" }
{ "__CURSOR" : "s=d98b3d664a71409d9a4d6145b0f8ad93;i=731f;b=2f9d75ec91044d52b8c5e5091370bcf7;m=285b067a2a2;t=55bbf0361376a;x=6c87fea4ea155d00", "__REALTIME_TIMESTAMP" : "1508250837137258", "__MONOTONIC_TIMESTAMP" : "2773213487778", "_BOOT_ID" : "2f9d75ec91044d52b8c5e5091370bcf7", "CONTAINER_TAG" : "", "_TRANSPORT" : "journal", "_PID" : "55587", "_UID" : "0", "_GID" : "0", "_COMM" : "docker", "_EXE" : "/usr/bin/docker", "_CMDLINE" : "/usr/bin/docker daemon -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --userland-proxy=false --tlscert /etc/dockercloud/agent/cert.pem --tlskey /etc/dockercloud/agent/key.pem --tlscacert /etc/dockercloud/agent/ca.pem --tlsverify --log-driver journald", "_SYSTEMD_CGROUP" : "/", "_SELINUX_CONTEXT" : [ 117, 110, 99, 111, 110, 102, 105, 110, 101, 100, 10 ], "_MACHINE_ID" : "0a80624bd4c45a792b0a857c59a858d6", "_HOSTNAME" : "lottery-staging", "PRIORITY" : "6", "MESSAGE" : " No migrations to apply.", "CONTAINER_ID_FULL" : "c8f60546e9d50f034f364259c409760b3390d979d57a773eccd8d852e1c3553f", "CONTAINER_NAME" : "ghost-1.lottery-staging-stack.c6118be4", "CONTAINER_ID" : "c8f60546e9d5", "_SOURCE_REALTIME_TIMESTAMP" : "1508250837135667" }
journalctl does not accept patterns for anything other than unit names (in the -u argument). Depending on your needs, you could perform some filtering using JSON output and grep, as in:
journalctl -u docker -o json -n1000 | grep 'CONTAINER_NAME.*cranky\.'

sed regex find & replace (awk solutions welcome)

I'm working on a JSON file (for MongoDB) and need to convert a field to a database reference. I'm attempting to do it via sed (though I'm open to solutions using awk, etc.), but I'm a complete noob with the tool and am struggling.
Input:
...
"FECTransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID" : "C00465971",
"RecipCode" : "RW",
"Amount" : 500,
....
Output needed:
...
"FECTransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID" : {
"ref" : "Cmtes",
"$id" : "C00278101",
"$db" : "OpenSecrets"
},
"RecipCode" : "RW",
"Amount" : 500,
....
My sed command attempt is:
sed -r 's/\"CmteID\" \: \(\"[\w\d]\{9\}\",\)/\"CmteID\" : { \
\"ref\" : \"Cmtes\", \
\"$id\" : \1 \
\"$db\" : \"OpenSecrets\" \
}/' <IN_FILE >OUT_FILE
but I get this error when I run it:
sed: -e expression #1, char 198: invalid reference \1 on `s' command's RHS
Any help would be appreciated. Thanks.
An awk approach:
awk '$1=="\"CmteID\"" {$3="{\n\t\"ref\" : \"Cmtes\",\
\n\t\"\$id\" : "$3"\
\n\t\"\$db\" : \"OpenSecrets\"\n},"}1' infile
Explanation
When the first field is matched ($1=="\"CmteID\"") we change the third field to the expected string; the only variable part is the CmteID value, assigned in: \n\t\"\$id\" : "$3"
Line breaks added (escape char \) to improve the clarity of the code.
Results
"FECTransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID" : {
"ref" : "Cmtes",
"$id" : "C00465971",
"$db" : "OpenSecrets"
},
"RecipCode" : "RW",
"Amount" : 500,
sed is for simple substitutions on individual lines, that is all. This problem is not like that, so this is not a job for sed.
$ cat tst.awk
BEGIN { FS=OFS=" : " }
$1 == "\"CmteID\"" {
print $1, "{"
print " \"ref\"", "\"Cmtes\""
print " \"$id\"", $2
print " \"$db\"", "\"OpenSecrets\""
$0 = "},"
}
{ print }
$ awk -f tst.awk file
...
TransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID" : {
"ref" : "Cmtes"
"$id" : "C00465971",
"$db" : "OpenSecrets"
},
"RecipCode" : "RW",
"Amount" : 500,
....
awk to the rescue!
$ awk '$1=="\"CmteID\""{print $1 ": {";
print "\t\"ref\" : \"Cmtes\",";
print "\t\"$id\" : "$3;
print "\t\"$db\" : \"OpenSecrets\",";
print "},";
next}1' jsonfile
...
"FECTransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID": {
"ref" : "Cmtes",
"$id" : "C00465971",
"$db" : "OpenSecrets",
},
"RecipCode" : "RW",
"Amount" : 500,
....
with some cleanup
$ awk -v NT="\n\t" 'function q(x) {return "\""x"\"";};
$1==q("CmteID") {$3 = " {"
NT q("ref") " : " q("Cmtes") ","
NT q("$id") " : " $3
NT q("$db") " : " q("OpenSecrets")
",\n},"}1' jsonfile
...
"FECTransID" : 4030720141206780377,
"CID" : "N00031103",
"CmteID" : {
"ref" : "Cmtes",
"$id" : "C00465971",
"$db" : "OpenSecrets",
},
"RecipCode" : "RW",
"Amount" : 500,
....
Many languages have built-in JSON parsers. PHP is one of them:
#!/usr/bin/php
<?php
$infile = $argv[1];
$outfile = $argv[2];
$data = json_decode(file_get_contents($infile), true); // true => decode into an associative array
$id = $data["CmteID"];
$data["CmteID"] = array("ref"=>"Cmtes", "\$id"=>$id, "\$db"=>"OpenSecrets");
file_put_contents($outfile, json_encode($data));
Untested but it should work. Make it executable and call ./myscript.php IN_FILE OUT_FILE.
My main point being: JSON is not plain text, and using text replacement on it can lead to problems, just as with other structured data formats like XML!
This might work for you (GNU sed):
sed -r 's/"CmteID" : (.*)/"CmteID" : { \
"ref" : "Cmtes", \
"$id" : \1 \
"$db" : "OpenSecrets" \
},/' fileIn >fileOut
This was a case of over-quoting. The parentheses grouping the $id had been escaped unnecessarily, since the -r (extended regex) option was in force.