Doctrine2 get index of current item in collection - doctrine-orm

How can I get the index of the current item in a loop over a Doctrine2 collection? I tried the following, but it always returns the total number of items:
$items = $element->getItems();
echo $items->count(); // 12
foreach ($items as $item) {
    echo $items->key(); // 12
}

Try using the indexOf method:
foreach ($items as $item) {
    echo $items->indexOf($item); // the index
}
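Alternatively, since foreach in PHP exposes each element's key, you can read the position directly instead of calling indexOf() on every pass. A minimal sketch, assuming $items is an ordinary list-indexed Doctrine collection (ArrayCollection or PersistentCollection):
foreach ($items as $index => $item) {
    echo $index; // 0, 1, 2, ... for a numerically indexed collection
}
Note that for collections mapped with indexBy, the key is the mapped field value rather than a numeric position.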

Related

How to instantiate an object and set member fields with PowerShell?

I'm trying to work out how to create objects, set fields on those objects, and then add them to a collection.
Specifically, how can I create a $newPerson whose Name field is "joe" and whose Attributes field is an array of "phone1", "phone2", "phone3"? Similarly, "sue" would get an array of "cell4" and so on, and "alice" her own attributes. Ultimately, how do I put these three objects into an array of objects, $collectionOfPeople?
output:
thufir@dur:~/flwor/csv$
thufir@dur:~/flwor/csv$ pwsh import.ps1
people name
joe name
phone1 attribute
phone2 attribute
phone3 attribute
sue name
cell4 attribute
home5 attribute
alice name
atrib6 attribute
x7 attribute
y9 attribute
z10 attribute
thufir@dur:~/flwor/csv$
code:
$tempAttributes = @()
$collectionOfPeople = @()

function attribute([string]$line) {
    Write-Host $line "attribute"
    $tempAttributes += $line
}

function name([string]$line) {
    Write-Host $line "name"
    #is a $newPerson ever instantiated?
    $newPerson = [PSCustomObject]@{
        Name       = $line
        Attributes = $tempAttributes
    }
    $newPerson #empty? no output
    $collectionOfPeople += $newPerson
    $tempAttributes = @()
}

$output = switch -regex -file people.csv {
    '\d'    { attribute($_) ; $_ }
    default { name($_); $_ }
}

#how to read the file from top to bottom?
#[array]::Reverse($output)
#$output

$collectionOfPeople #empty???
input from a CSV file:
people
joe
phone1
phone2
phone3
sue
cell4
home5
alice
atrib6
x7
y9
z10
In the above CSV there's no marker between records, so I'm using an if statement on the assumption that every attribute has digits while the names never have digits.
You could do something like the following, which keeps closely to your current logic. It does assume the order of data in your file is exactly as you have shown.
switch -regex -file people.csv {
    '\d' { $Attributes.Add($_) }
    default {
        if ($Attributes -and $Name) {
            [pscustomobject]@{Name = $Name; Attributes = $Attributes}
        }
        $Name = $_
        $Attributes = [Collections.Generic.List[string]]@()
    }
}

# Output last set of attributes
[pscustomobject]@{Name = $Name; Attributes = $Attributes}

# Cleanup
Remove-Variable Name
Remove-Variable Attributes
If there are no attributes for a name, that name is ignored. That behaviour can be changed by adding an elseif ($Name) { [pscustomobject]@{Name = $Name; Attributes = $null} } branch within the default block.
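To answer the original "#empty???" comment about $collectionOfPeople: the objects the switch emits go to the pipeline, so you can collect them simply by assigning the whole switch statement to a variable. A minimal sketch of that idea, assuming people.csv is in the current directory and reusing the logic above:
$collectionOfPeople = @(
    switch -regex -file people.csv {
        '\d' { $Attributes.Add($_) }
        default {
            if ($Attributes -and $Name) {
                [pscustomobject]@{Name = $Name; Attributes = $Attributes}
            }
            $Name = $_
            $Attributes = [Collections.Generic.List[string]]@()
        }
    }
    # emit the final record once the file has been read
    if ($Attributes -and $Name) {
        [pscustomobject]@{Name = $Name; Attributes = $Attributes}
    }
)
$collectionOfPeople | Format-Table -AutoSize
This also sidesteps the scoping problem in the original functions: $collectionOfPeople += $newPerson inside a function creates a new local variable instead of updating the script-level one, which is why the array appeared empty afterwards.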

I have an indexed array that looks like this:

$values[0]['myname']['myheight']['height'];
$values[1]['myname']['myheight']['height'];
$values[2]['myname']['myheight']['height'];
$values[3]['myname']['myheight']['height'];
I want to loop through the array to check 'height' and skip the other values if they are set, especially the first key in my $values array.
I tried something like
$values[$key];
if (is_numeric($key)) {
    // do this
}
but this doesn't seem to achieve it
Try this:
foreach ($values as $key => $val) {
    if (isset($val['myname']['myheight']['height'])) {
        // do this
    }
}
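For illustration, here is a small self-contained sketch of that loop; the $values data is made up to mirror the shape in the question:
// hypothetical sample data shaped like the question's array
$values = [
    ['myname' => ['myheight' => ['height' => 170]]],
    ['myname' => ['myheight' => []]],                 // no height, gets skipped
    ['myname' => ['myheight' => ['height' => 182]]],
];

$heights = [];
foreach ($values as $key => $val) {
    if (isset($val['myname']['myheight']['height'])) {
        $heights[$key] = $val['myname']['myheight']['height'];
    }
}
print_r($heights); // keys 0 and 2 remain, key 1 was skipped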

Retrieve selected items from a list

Is there a way to get items from a list according to some function?
I know there is a way to get items by regular expression using lsearch -regexp, but it's not what I need.
In Tcl 8.6, you can use the lmap command to do this by using continue to skip the items you don't want (or break to indicate that you've done enough processing):
set items {0 1 2 3 4 5 6 7 8 9 10}
set filtered [lmap i $items {if {$i==sqrt($i)**2} {set i} else continue}]
# Result: 0 1 4 9
This can obviously be extended into a procedure that takes a lambda term and a list.
proc filter {list lambda} {
    lmap i $list {
        if {[apply $lambda $i]} {
            set i
        } else {
            continue
        }
    }
}
set filtered [filter $items {i { expr {$i == sqrt($i)**2} }}]
It's possible to do something similar in Tcl 8.5 with foreach, though you'll need to do more work yourself to build the list of result items with lappend:
proc filter {list lambda} {
    set result {}
    foreach i $list {
        if {[apply $lambda $i]} {
            lappend result $i
        }
    }
    return $result
}
Usage is identical. (Tcl 8.4 and before — now unsupported — don't support the apply command.)
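As a quick usage illustration of either filter proc above, with a made-up predicate that keeps only the even numbers:
set items {0 1 2 3 4 5 6 7 8 9 10}
set evens [filter $items {n { expr {$n % 2 == 0} }}]
puts $evens   ;# prints: 0 2 4 6 8 10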

Count unique occurrences of a match in a line

I have a file with entries like:
(----) Manish Garg 74163: V2.0.1_I3_SIT: KeyStroke Logger decrypted file for key stroke displayed a difference of 4 hours from CCM time. - 74163: KeyStroke Logger decrypted file for key stroke displayed a difference of 4 hours from CCM time. 2014/07/04
I want the unique count of an id such as "74163" in a line.
Currently it gives the output as:
updated_workitem value> "74163"
Count> "2"
But I want the count value to be 1 (I don't want to include duplicate entries in the count).
My code is:
my $workitem;
$file = new IO::File;
$file->open("<compare.log") or die "Cannot open compare.log";
@file_list = <$file>;
$file->close;

foreach $line (@file_list) {
    while ($line =~ m/(\d{4,}[,|:])/g) {
        @temp = split(/[:|,]/, $1);
        push @work_items, $temp[0];
    }
}

my %count;
my @wi_to_built;
map { $count{$_}++ } @work_items;

foreach $workitem (sort keys (%count)) {
    chomp($workitem);
    print "updated_workitem value> \"$workitem\"\n";
    print "Count> \"$count{$workitem}\"\n";
}
Use a hash to track unique ids found in a particular line:
foreach my $line (@file_list) {
    my %line_ids;
    while ($line =~ m/(\d{4,})[,|:]/g) {
        $line_ids{$1} = 1;  # record unique ids
    }
    push @work_items, keys %line_ids;  # save the ids
}
Note, I've changed your regex slightly so you don't need to split to a temporary array.
You can also remove duplicates from the array before doing map { $count{$_}++ } @work_items;
@work_items = uniq(@work_items);

sub uniq {
    my %seen;
    grep !$seen{$_}++, @_;
}
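Putting the per-line hash idea together, here is a self-contained sketch; it assumes compare.log is in the current directory and keeps the original output format:
#!/usr/bin/perl
use strict;
use warnings;

my %count;
open my $fh, '<', 'compare.log' or die "Cannot open compare.log: $!";
while (my $line = <$fh>) {
    my %line_ids;
    while ($line =~ m/(\d{4,})[,|:]/g) {
        $line_ids{$1} = 1;        # each id counted at most once per line
    }
    $count{$_}++ for keys %line_ids;
}
close $fh;

foreach my $workitem (sort keys %count) {
    print "updated_workitem value> \"$workitem\"\n";
    print "Count> \"$count{$workitem}\"\n";
}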

Problems with pushing data into an array, declaring a hash and conditional statements in Perl

Can anyone help? I'm having problems with my Perl script. I want to push a 3-column input data file into an array, select the ID numbers and names, declare a hash using each ID as the key and the third column as the value, and then run an if-else conditional statement to select the key-value pairs whose value is greater than 2.
Here's an example of the input.txt data file, where column 1 is the ID number, column 2 is the ID name and column 3 is the value associated with columns 1 and 2.
ENSG00000251791 SCARNA6 2.5
ENSG00000238862 SNORD19B 6.3
ENSG00000238527 SN-112 -3
ENSG00000222373 RNY.5P5 1.3
I can get the first part (pushing the data into an array) working, but I can't get the rest of it to work. I've created two hashes that contain ID number:value and ID name:value pairs, as I'd like both columns in the output file:
ENSG00000251791 SCARNA6 2.5
ENSG00000238862 SNORD19B 6.3
Here's the code:
use strict;
use warnings;

my $input = 'input.txt';
my @input_vars;

open my $input_file_handle, '<', $input or die $!;
while (<$input_file_handle>) {
    chomp $_;
    push @input_vars, $_;
}
close $input_file_handle;

# regex to select ID name, ID number and value
my %id;
foreach (@input_vars) {
    my $regex = '/\w+\s[\w-]+\s\d+\.\d+/';
    while ($_ =~ m/$regex/g) {
        my $id1{$1} = $3;
        my $id2{$2} = $3;
    }
}

foreach (@input_vars) {
    print "$_ ";
    if ($id1{$_} >= 2) {
        print "$id1{$_}";
    } else {
        print "N/A";
    }
    if ($id2{$_} >= 2) {
        print "$id2{$_}";
    } else {
        print "N/A";
        print "\n";
    }
}
I think I have over-complicated it by creating a regex to select ID numbers and names, so if there's a simpler, more efficient way, that would be great.
Change the first foreach loop to:
foreach (@input_vars) {
    if (/(\w+)\s([\w-]+)\s(\d+\.\d+)$/) {
        $id1{$1} = $3;
        $id2{$2} = $3;
    }
}
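As for the simpler approach the question asks about, splitting on whitespace avoids the regex entirely. A minimal sketch, assuming input.txt always has exactly three whitespace-separated columns; note that split also copes with integer and negative values such as -3, which the \d+\.\d+ pattern would miss:
#!/usr/bin/perl
use strict;
use warnings;

open my $in, '<', 'input.txt' or die $!;
while (my $line = <$in>) {
    chomp $line;
    my ($id_number, $id_name, $value) = split ' ', $line;
    next unless defined $value;
    # keep only rows whose value is at least 2
    print "$id_number $id_name $value\n" if $value >= 2;
}
close $in;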