I have these two files, and I would like to find the lines in the second file that match
the entries from the first file, then print out the matched results.
Here is the first file,
switch1 Ethernet2 4 4 5
switch1 Ethernet4 4 4 5
switch1 Ethernet7 4 4 5
switch1 Ethernet9 4 4 5
switch1 Ethernet10 4 4 5
switch1 Ethernet13 1 4 5
switch1 Ethernet14 1 4 5
switch1 Ethernet15 1 4 5
switch2 Ethernet5 4 4 5
switch2 Ethernet6 1 4 5
switch2 Ethernet7 4 5
switch2 Ethernet8 1 4 5
switch1 Ethernet1 2002 2697 2523
switch1 Ethernet17 2002 2576 2515
switch1 Ethernet11 2002 2621 2617
switch2 Ethernet1 2001 2512 2577
And here is the second file,
switch1 Ethernet1 1 4 5 40 2002 2523 2590 2621 2656 2661 2665 2684 2697 2999
switch1 Ethernet2 1 4 5 40 2002 2504 2505 2508 2514 2516 2517 2531 2533 2535 2544 2545 2547 2549 2555 2566 2571 2575 2589 2590 2597 2604 2611 2626 2629 2649 2666 2672 2684 2691 2695
switch1 Ethernet3 40
switch1 Ethernet4 1 4 5 40 2002 2504 2516 2517 2535 2547 2549 2571 2579 2590 2597 2604 2605 2620 2624 2629 2649 2684 2695
switch1 Ethernet5 1 4 5 40 55 56 2000 2002 2010 2037 2128 2401 2409 2504 2526 2531 2540 2541 2548 2562 2575 2578 2579 2588 2590 2597 2604 2606 2608 2612 2615 2616 2621 2638 2640 2645 2650 2666 2667 2669 2670 2674 2678 2684 2690 2696
switch1 Ethernet6 40
switch1 Ethernet7 1 4 5 40 2037 2128 2174 2401 2409 2463 2526 2535 2540 2541 2544 2562 2578 2579 2590 2616 2621 2625 2631 2645 2659 2667 2670 2674 2678 2682 2684 2690 2696
switch1 Ethernet8 1 4 5 40 2037 2128 2396 2401 2409 2420 2531 2619 2640 2653 2658 2669 2677 2683 2684
switch1 Ethernet9 1 4 5 40 2128 2169 2396 2401 2409 2420 2504 2515 2531 2553 2578 2597 2619 2621 2640 2658 2669 2677 2683 2684 2694
switch1 Ethernet10 1 4 5 40 2079 2128 2169 2378 2396 2453 2509 2578 2591 2597 2621 2634 2641 2657
switch1 Ethernet11 1 4 5 40 2002 2128 2169 2396 2453 2509 2512 2520 2526 2549 2552 2564 2571 2575 2589 2591 2597 2611 2617 2621 2634 2641 2657 2671 2676 2686 2694
switch1 Ethernet12 1 4 5 40 2079 2378 2396 2453 2515 2531 2553 2597 2619 2621 2640 2657 2669 2677 2684 2694
switch1 Ethernet13 1 4 5 40 2174 2396 2453 2463 2508 2524 2531 2536 2546 2567 2597 2629 2640 2657 2669 2674 2684
switch1 Ethernet14 1 4 5 40 2524 2536 2544 2567 2575 2582 2628 2640 2659 2674 2681 2689
switch1 Ethernet15 1 4 5 40 2515 2553 2575 2582 2621 2628 2640 2681 2689 2694
switch1 Ethernet16 1 4 5 40 2513 2539 2544 2553 2561 2573 2575 2582 2619 2640 2670 2681
switch1 Ethernet17 1 4 5 40 2002 2508 2513 2515 2531 2538 2539 2544 2547 2553 2561 2570 2573 2575 2576 2582 2586 2601 2608 2619 2621 2640 2658 2670 2681
the results should display,
switch1 Ethernet13 1 4 5
switch1 Ethernet14 1 4 5
switch1 Ethernet15 1 4 5
switch1 Ethernet8 1 4 5
switch1 Ethernet16 1 4 5
switch1 Ethernet1 2002 2697 2523
switch1 Ethernet17 2002 2576 2515
The challenging part for me is that I don't know how to compare a pattern from one line of the first file against all the lines of the second file. In this case there are five fields to match, and the second file doesn't have
consistent fields to work with: some lines are shorter than others, and the matching fields may not be in the same positions as on other lines.
Any idea or suggestion would be greatly appreciated.
Thanks much in advance.
Something like this should work, but there's probably a better way:
#!/bin/bash
# Read file1 line by line on file descriptor 4. For each line, replace
# every run of whitespace with ' \+.* ' so extra tokens may appear between
# the fields, then use the result as a grep pattern against file2.
patternfile='file1'
otherfile='file2'
exec 4<"$patternfile"
while read -u4 pattern; do
    grep ".*$(echo "$pattern" | sed 's/\s\+/ \\+.* /g').*" "$otherfile"
done
P.S. This doesn't output all of the lines you said you wanted:
Ethernet8 is switch2 in one file and switch1 in the other. Fixing that in the regex would be really ugly, but you could use awk to get only the records you want.
Ethernet16 isn't even in the first file; I think that line is a mistake.
Ethernet1 and Ethernet17 have their fields out of order, which my solution can't deal with.
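If order-independent matching is what you need, a short awk script avoids the regex altogether. This is only a sketch, under the assumption that a "match" means every field of a file1 line appears somewhere on the file2 line for the same switch and interface:
# match.awk -- run as: awk -f match.awk file2 file1
NR == FNR {                        # first pass: index the fields of file2
    key = $1 SUBSEP $2
    for (i = 3; i <= NF; i++)
        seen[key, $i] = 1
    next
}
{                                  # second pass: test each file1 line
    key = $1 SUBSEP $2
    ok = 1
    for (i = 3; i <= NF; i++)
        if (!((key, $i) in seen))
            ok = 0
    if (ok)
        print
}
Because it compares field sets rather than a left-to-right pattern, it handles the out-of-order Ethernet1 and Ethernet17 lines and only matches records on the same switch. Note that under this definition more lines than your expected output may qualify (for example the '4 4 5' entries).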
The following measure shows the average per WERKS and MATNR, but it is static: it is computed over all values in the table, regardless of any date filter.
CALCULATE(
AVERAGEX(
SUMMARIZE( LieferscheineUnique, LieferscheineUnique[WERKS], LieferscheineUnique[MATNR] ),
CALCULATE( AVERAGE(LieferscheineUnique[fci_zu_PA] ) )
),
ALLEXCEPT( LieferscheineUnique, LieferscheineUnique[WERKS], LieferscheineUnique[MATNR] )
)
I have the following table:
DATUM       WERKS  ID  MATNR  fci_zu_PA  Average_fci_per_Werks&MATNR
08.04.2021  H006    1  10009  41,7       35,84
12.04.2021  H006    2  10009  43,3       35,84
14.04.2021  H006    3  10009  43,5       35,84
08.04.2021  H100    4  10009  43,3       38,20
22.04.2021  H100    5  10009  43,3       38,20
22.04.2021  H100    6  10010  24,5       35,01
Now I want the average per WERKS and MATNR displayed in each row according to the date filter.
The desired output would look like this:
DATUM       WERKS  ID  MATNR  fci_zu_PA  Average_fci_per_Werks&MATNR
08.04.2021  H006    1  10009  41,7       42,83
12.04.2021  H006    2  10009  43,3       42,83
14.04.2021  H006    3  10009  43,5       42,83
08.04.2021  H100    4  10009  43,7       43,50
22.04.2021  H100    5  10009  43,3       43,50
22.04.2021  H100    6  10010  24,5       24,50
It would be great if someone knows how to achieve this.
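One common approach (a sketch, untested against your model, and assuming your date slicer filters LieferscheineUnique[DATUM] directly) is to keep the ALLEXCEPT but restore the user's date selection with ALLSELECTED:
CALCULATE(
    AVERAGE( LieferscheineUnique[fci_zu_PA] ),
    ALLEXCEPT( LieferscheineUnique, LieferscheineUnique[WERKS], LieferscheineUnique[MATNR] ),
    ALLSELECTED( LieferscheineUnique[DATUM] )
)
ALLEXCEPT removes every filter except WERKS and MATNR, so the average spans all rows of the current plant and material, while ALLSELECTED(LieferscheineUnique[DATUM]) re-applies the dates chosen in the slicer; the average then follows the date filter instead of staying static.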
I need to generate the variable sum, which cumulatively adds up the changes in TA_envi_tot across reporter-partner pairs and years. reporter_iso and partner_iso are string variables, and id is generated by egen id = group(reporter_iso partner_iso).
I tried the following code, but it doesn't generate the values shown in the sum column below:
bysort id (year): gen sum=TA_envi_tot[_n] + TA_envi_tot[_n+1] if TA_envi_tot[_n]!=TA_envi_tot[_n-1]
bysort id (year): replace sum = sum[_n-1] if missing(sum)
id reporter_iso partner_iso year TA_envi_tot sum
3271 ATG DEU 1981 0 0
3271 ATG DEU 1982 0 0
3271 ATG DEU 1983 0 0
3271 ATG DEU 1984 36 36
3271 ATG DEU 1985 36 36
3271 ATG DEU 1986 36 36
3271 ATG DEU 1987 67 103
3271 ATG DEU 1988 67 103
3271 ATG DEU 1989 67 103
4217 BDI BEL 1981 3 3
4217 BDI BEL 1982 3 3
4217 BDI BEL 1983 3 3
4217 BDI BEL 1984 35 38
4217 BDI BEL 1985 35 38
4217 BDI BEL 1986 35 38
4217 BDI BEL 1987 35 38
4217 BDI BEL 1988 36 74
4217 BDI BEL 1989 36 74
4217 BDI BEL 1990 36 74
A reproducible version of your example, followed by a one-line solution:
clear
input id str3 (reporter_iso partner_iso) year TA_envi_tot sum
3271 ATG DEU 1981 0 0
3271 ATG DEU 1982 0 0
3271 ATG DEU 1983 0 0
3271 ATG DEU 1984 36 36
3271 ATG DEU 1985 36 36
3271 ATG DEU 1986 36 36
3271 ATG DEU 1987 67 103
3271 ATG DEU 1988 67 103
3271 ATG DEU 1989 67 103
4217 BDI BEL 1981 3 3
4217 BDI BEL 1982 3 3
4217 BDI BEL 1983 3 3
4217 BDI BEL 1984 35 38
4217 BDI BEL 1985 35 38
4217 BDI BEL 1986 35 38
4217 BDI BEL 1987 35 38
4217 BDI BEL 1988 36 74
4217 BDI BEL 1989 36 74
4217 BDI BEL 1990 36 74
end
bysort id (year) : gen wanted = sum(TA_envi_tot * (TA_envi_tot != TA_envi_tot[_n-1]))
Here sum() is Stata's running-sum function, and the indicator (TA_envi_tot != TA_envi_tot[_n-1]) is 1 whenever the value differs from the previous observation within the pair (including the first observation, where [_n-1] is missing), so each new level of TA_envi_tot enters the running total exactly once: for the ATG-DEU pair it adds 0, then 36, then 67, giving 0, 36, 103.
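A quick check that the new variable reproduces your hand-computed column:
assert wanted == sum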
I am trying to reshape a variable to wide but can't find the proper way to do so.
I have a dataset of day-wise counts for each SSUID, and I would like to reshape day to wide so that each SSUID's counts appear on a single row.
Dataset:
ssuid day count
1226 1 3
1226 2 7
1226 3 5
1226 4 7
1226 5 7
1226 6 6
1227 1 3
1227 2 6
1227 3 7
1227 4 4
1228 1 4
1228 2 4
1228 3 6
1228 4 7
1228 5 5
1229 1 3
1229 2 6
1229 3 6
1229 4 6
1229 5 5
I tried the following code but get this error:
count variable not constant within SSUID variable
My code:
reshape wide day, i(ssuid) j(count)
I would like to get the following result:
ssuid day1 day2 day3 day4 day5 day6
1226 3 7 5 7 7 6
1227 3 6 7 4 . .
1228 4 4 6 7 5 .
1229 3 6 6 6 5 .
The following works for me. Note that j() names the variable whose values become the wide suffixes (day here), while the variable listed after reshape wide is the one that gets spread out (count); your command had the two roles swapped:
clear
input ssuid day count
1226 1 3
1226 2 7
1226 3 5
1226 4 7
1226 5 7
1226 6 6
1227 1 3
1227 2 6
1227 3 7
1227 4 4
1228 1 4
1228 2 4
1228 3 6
1228 4 7
1228 5 5
1229 1 3
1229 2 6
1229 3 6
1229 4 6
1229 5 5
end
reshape wide count, i(ssuid) j(day)
rename count# day#
list
+-------------------------------------------------+
| ssuid day1 day2 day3 day4 day5 day6 |
|-------------------------------------------------|
1. | 1226 3 7 5 7 7 6 |
2. | 1227 3 6 7 4 . . |
3. | 1228 4 4 6 7 5 . |
4. | 1229 3 6 6 6 5 . |
+-------------------------------------------------+
I have a text file in which some lines aren't formatted the same way as the others. To be more precise, in this type of line fields are written on 9 characters, whereas on the other lines fields are 7 characters long. The format of a (misformatted) line is:
a first field of 40 characters, then 4 spaces, then 1 character (a field named note, for example the letter 'a', a space ' ', or the letter 'b'), then 4 spaces. The sequence 4 spaces + 1 character + 4 spaces (example below) is repeated a number of times:
a
Such a line would for example be:
a a a a a c b a a a a a
Due to the specificity of these lines, I get an offset between lines:
26 26 26 26 26 26 26 26 26 26 26
a a a a a c b a a a a
I would like to get rid of the 2 unwanted spaces, so that on this line each field becomes 7 characters wide, as in the others:
a first field of 40 characters, then 3 spaces, then 1 character (the note field, for example the letter 'a'), then 3 spaces.
I can find such a line using the regex search in Notepad++:
Find what : ^ {40}(( {4})(?<note>.)( {4}))+
But how do I replace all occurrences of 4 spaces + 1 character + 4 spaces with 3 spaces + 1 character + 3 spaces, so that the fields on every line have the same length?
The desired format, after substitution, would for example be:
26 26 26 26 26 26 26 26 26 26 26
a a a a a c b a a a a
Here is a larger extract of the offset file:
20/03/2018
H0917 26_LAV 0 Semaine 2 En service le 04 Septembre 2017 - TAD Partiel 11/07/2017 16:09 H0917
Vertical Notes
Montferrier-sur-Lez - Cirad de Baillargu Montferrier-sur-Lez - Cirad de Baillarguet
Montpellier Occitanie Montpellier Occitanie
26 Occitanie - Montferrier-sur-Lez 26 Montferrier-sur-Lez - Cirad de BaillarguMontpellier Occitanie Montferrier-sur-Lez - Cirad de Baillarguet
########## VOYAGES
0 Semaine Montferrier-sur-Lez - Cirad de Baillarguet
26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26
a a a a a c b a a a a a a a a a a
Occitanie 6:50 7:20 7:50 8:20 8:50 9:30 10:10 10:40 11:10 11:40 12:10 12:10 12:40 13:10 13:40 14:10 14:40 15:10 15:40 16:10 16:40 17:10 17:40 18:10 18:40 19:10 19:40 20:10 20:40
Lycée Frédéric Bazille 6:58 7:28 7:58 8:28 8:58 9:37 10:17 10:47 11:17 11:47 12:17 12:17 12:47 13:17 13:47 14:18 14:48 15:18 15:48 16:18 16:49 17:19 17:49 18:19 18:49 19:18 19:47 20:17 20:47
La Lironde 7:02 7:32 8:02 8:35 9:05 9:42 10:22 10:52 11:22 11:52 12:23 12:23 12:52 13:22 13:52 14:23 14:53 15:23 15:53 16:23 16:56 17:26 17:56 18:26 18:56 19:23 19:52 20:22 20:52
Chemin Neuf 7:07 7:37 8:07 8:40 9:10 9:47 10:27 10:57 11:27 11:57 12:28 12:28 12:57 13:27 13:57 14:28 14:58 15:28 15:58 16:28 17:01 17:31 18:01 18:31 19:01 19:28 19:57 20:27 20:57
La Grand Font 7:08 7:38 8:08 8:41 9:11 9:48 10:28 10:58 11:28 11:58 12:29 12:29 12:58 13:28 13:58 14:29 14:59 15:29 15:59 16:29 17:02 17:32 18:02 18:32 19:02 19:29 19:58 20:28 20:58
Picadou 7:09 7:39 8:09 8:42 9:12 9:49 10:29 10:59 11:29 11:59 12:30 12:30 12:59 13:29 13:59 14:30 15:00 15:30 16:00 16:30 17:03 17:33 18:03 18:33 19:03 19:30 19:59 20:29 20:59
Distillerie 7:11 7:41 8:11 8:44 9:14 9:51 10:31 11:01 11:31 12:01 12:32 12:32 13:01 13:31 14:01 14:32 15:02 15:32 16:02 16:32 17:05 17:35 18:05 18:35 19:05 19:32 20:01 20:31 21:01
Cirad de Baillarguet 7:14 7:44 8:14 8:47 9:17 9:54 10:34 11:04 11:34 12:04 12:35 12:35 13:04 13:34 14:04 14:35 15:05 15:35 16:05 16:35 17:08 17:38 18:08 18:38 19:08 19:35 20:04 20:34 21:04
Correct file should be:
20/03/2018
H0917 26_LAV 0 Semaine 2 En service le 04 Septembre 2017 - TAD Partiel 11/07/2017 16:09 H0917
Vertical Notes
Montferrier-sur-Lez - Cirad de Baillargu Montferrier-sur-Lez - Cirad de Baillarguet
Montpellier Occitanie Montpellier Occitanie
26 Occitanie - Montferrier-sur-Lez 26 Montferrier-sur-Lez - Cirad de BaillarguMontpellier Occitanie Montferrier-sur-Lez - Cirad de Baillarguet
########## VOYAGES
0 Semaine Montferrier-sur-Lez - Cirad de Baillarguet
26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26 26
a a a a a c b a a a a a a a a a a
Occitanie 6:50 7:20 7:50 8:20 8:50 9:30 10:10 10:40 11:10 11:40 12:10 12:10 12:40 13:10 13:40 14:10 14:40 15:10 15:40 16:10 16:40 17:10 17:40 18:10 18:40 19:10 19:40 20:10 20:40
Lycée Frédéric Bazille 6:58 7:28 7:58 8:28 8:58 9:37 10:17 10:47 11:17 11:47 12:17 12:17 12:47 13:17 13:47 14:18 14:48 15:18 15:48 16:18 16:49 17:19 17:49 18:19 18:49 19:18 19:47 20:17 20:47
La Lironde 7:02 7:32 8:02 8:35 9:05 9:42 10:22 10:52 11:22 11:52 12:23 12:23 12:52 13:22 13:52 14:23 14:53 15:23 15:53 16:23 16:56 17:26 17:56 18:26 18:56 19:23 19:52 20:22 20:52
Chemin Neuf 7:07 7:37 8:07 8:40 9:10 9:47 10:27 10:57 11:27 11:57 12:28 12:28 12:57 13:27 13:57 14:28 14:58 15:28 15:58 16:28 17:01 17:31 18:01 18:31 19:01 19:28 19:57 20:27 20:57
La Grand Font 7:08 7:38 8:08 8:41 9:11 9:48 10:28 10:58 11:28 11:58 12:29 12:29 12:58 13:28 13:58 14:29 14:59 15:29 15:59 16:29 17:02 17:32 18:02 18:32 19:02 19:29 19:58 20:28 20:58
Picadou 7:09 7:39 8:09 8:42 9:12 9:49 10:29 10:59 11:29 11:59 12:30 12:30 12:59 13:29 13:59 14:30 15:00 15:30 16:00 16:30 17:03 17:33 18:03 18:33 19:03 19:30 19:59 20:29 20:59
Distillerie 7:11 7:41 8:11 8:44 9:14 9:51 10:31 11:01 11:31 12:01 12:32 12:32 13:01 13:31 14:01 14:32 15:02 15:32 16:02 16:32 17:05 17:35 18:05 18:35 19:05 19:32 20:01 20:31 21:01
Cirad de Baillarguet 7:14 7:44 8:14 8:47 9:17 9:54 10:34 11:04 11:34 12:04 12:35 12:35 13:04 13:34 14:04 14:35 15:05 15:35 16:05 16:35 17:08 17:38 18:08 18:38 19:08 19:35 20:04 20:34 21:04
Thanks for helping,
Julien
Your question is a bit unclear, but this got things aligned for me:
Find what : " ([^ ]) " (without the quotes; note there is one space before and one after the parentheses)
Replace with : \1
This removes one space before and one after each single non-space character. It ignores the 26s, since those are two non-space characters, so Replace All works.
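If you'd rather do this outside Notepad++, the same substitution can be run from the command line. A sketch with sed (timetable.txt is a placeholder for your file name):
# drop one space on each side of any single non-space character
sed 's/ \([^ ]\) /\1/g' timetable.txt > fixed.txt
Like the Notepad++ replace, this touches every isolated single character on every line, not just the note fields, so check the result on lines where the 40-character first field could contain a single character surrounded by spaces.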
Correlation Loading Plot from Proc PLS in SAS
Hi All,
I used Proc PLS to do a multivariate analysis and got the plot attached. How can I remove the green colored points in the picture? I think they are the observations' correlation values: I have 90 observations, and each of them has a loading value on factor1 and factor2, so there are 90 green points shown in the picture. Can anyone tell me which option suppresses them?
For example, the data is like this:
par1 par2 par3 par4 par5 par6 par7 location
2680 0.546089996 237 1 0.172 2.25 305 5
3750 0.54836587 140 1.55 0.111 1.06 425 5
3590 0.54878718 168 1.27 0.131 0.969 516 5
2390 0.549510935 183 1.07 0.096 1.84 260 5
3780 0.549631747 140 1.12 0.118 1.98 472 5
2790 0.549934008 200 1.1 0.221 2.13 321 5
2880 0.5499945 227 1.14 0.185 1.54 439 5
2910 0.550357733 259 1.31 0.116 1.31 289 5
2420 0.550842789 177 1.32 0.044067423 1.95 260 5
3850 0.550964187 128 1.41 0.117 1.08 471 5
3530 0.552425146 165 1.23 0.11 1.57 494 5
2730 0.552913856 223 1.03 0.17 2 330 5
3130 0.553158535 252 1.02 0.174 2.13 322 5
3040 0.553709856 272 1.21 0.155 1.97 317 5
3830 0.554139421 153 1.27 0.137 1.47 455 5
3930 0.554569654 164 1.17 0.116 1.5 481 5
2430 0.554569654 136 1.3 0.198 2.11 226 8
3630 0.555247085 137 1.17 0.1 1.75 413 5
2490 0.555432126 176 1.06 0.113 1.39 236 5
3490 0.555555556 166 1.28 0.044444444 1.65 465 5
3840 0.556173526 164 1.23 0.0949 1.66 470 5
2480 0.556173526 239 1.28 0.102 2.2 238 5
3760 0.556173526 191 1.33 0.131 2.12 447 5
3850 0.556173526 174 1.35 0.241 2.42 381 3
3410 0.557413601 174 1.14 0.107 1.48 419 5
2960 0.559284116 229 1.08 0.165 1.99 304 5
3410 0.559284116 137 1.19 0.291 2.17 375 8
3300 0.560538117 121 1.13 0.153 1.82 352 8
3090 0.560538117 134 1.16 0.167 1.17 416 4
3210 0.560538117 124 1.09 0.172 0.82 390 4
3950 0.560538117 130 1.29 0.199 1.89 440 4
3300 0.561167228 131 1.06 0.242 2.45 367 8
2210 0.561167228 162 0.885 0.288 3.32 208 4
3170 0.561797753 126 1.3 0.151 1.31 388 4
2740 0.561797753 96.1 1.22 0.245 0.827 254 3
3750 0.561797753 144 1.08 0.257 2.62 366 3
3640 0.562429696 120 1.32 0.159 1.63 347 8
3210 0.563063063 148 1.29 0.206 2.18 352 8
2300 0.563697858 179 0.936 0.181 2.29 223 2
3410 0.564334086 141 0.856 0.136 2.03 370 8
3500 0.564334086 126 1.38 0.177 1.45 355 8
3470 0.564334086 101 0.989 0.222 1.84 349 3
2260 0.564334086 171 0.942 0.224 2.08 219 2
2220 0.564334086 180 0.956 0.281 1.84 219 4
2340 0.564971751 165 1.05 0.228 2.25 240 8
2380 0.564971751 161 0.976 0.287 1.6 214 4
3220 0.56561086 148 1.21 0.121 0.568 520 6
3920 0.566251416 176 1.08 0.045300113 2.26 637 6
3830 0.566251416 137 1.48 0.203 1.23 387 3
2510 0.566251416 152 1.24 0.222 1.84 223 8
2760 0.566251416 168 0.994 0.282 1.31 280 4
2640 0.566251416 154 0.979 0.345 1.52 291 4
3570 0.566893424 165 1.33 0.155 2.18 505 6
3170 0.566893424 126 1.08 0.162 1.41 341 4
3700 0.566893424 159 1.3 0.17 1.64 449 4
3250 0.566893424 104 1.32 0.2 1.37 372 8
3740 0.566893424 159 1.23 0.216 1.69 409 1
3380 0.566893424 163 1.53 0.245 2.19 367 3
3240 0.56753689 136 1.07 0.153 1.88 383 4
3400 0.56753689 109 1.36 0.161 1.16 420 4
3760 0.56753689 150 0.93 0.169 1.68 537 4
3560 0.56753689 123 1.03 0.193 2.32 374 8
2360 0.56753689 163 0.697 0.235 1.94 243 8
2430 0.56753689 166 0.762 0.247 2.31 231 8
3330 0.568181818 148 1.11 0.174 2 393 4
3080 0.568181818 139 1.13 0.188 2.08 349 8
3230 0.568181818 116 1.23 0.199 1.77 328 8
2180 0.568181818 144 1.01 0.215 2.13 207 8
2520 0.568181818 128 0.809 0.369 1.65 306 4
3320 0.568828214 152 1.15 0.14 1.65 395 4
2300 0.568828214 134 0.908 0.221 1.56 233 8
3730 0.568828214 141 1.58 0.238 1.96 405 3
3800 0.568828214 160 1.24 0.241 2.2 402 3
2440 0.568828214 153 1.03 0.258 1.89 223 4
3910 0.568828214 209 1.26 0.275 2.26 350 3
4010 0.569476082 139 1.28 0.045558087 1.7 602 6
2340 0.570125428 167 1.1 0.18 1.57 208 2
2360 0.570125428 176 0.704 0.2 1.6 219 2
3490 0.570776256 171 1.43 0.269 2.4 360 3
2620 0.571428571 132 1.09 0.202 1.8 224 8
3740 0.571428571 172 1.27 0.256 1.92 355 3
3600 0.57208238 128 1.16 0.17 1.94 434 4
3360 0.57208238 150 1.18 0.171 1.81 353 1
3620 0.57208238 131 1.28 0.177 2.24 360 3
3560 0.57208238 139 1.15 0.229 1.9 366 3
2740 0.572737686 277 0.876 0.171 1.71 290 10
2340 0.572737686 148 0.964 0.231 1.18 250 6
2760 0.572737686 168 0.905 0.303 2.1 264 4
2890 0.572737686 204 0.857 0.331 2.32 272 2
The code is:
proc pls data=check method=rrr;
class location;
model par1-par7=location;
run;
In general, I don't think there's a simple way to do what you're looking for. You may want to construct your own graph.
You can get the template for the graph; I'll paste it below. Unfortunately, all of the data on the graph is drawn by a single statement, so it's not as simple as commenting out one line: if you comment out the scatterplot x=CORRX y=CORRY statement, you remove all of the data, not just the observation points. I also don't see that the ODS Graphics Editor can do this.
You would probably be best off constructing your own chart using this template as a base, calling it from PROC SGRENDER so you can control how the data comes in.
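For what it's worth, the SGRENDER call itself is short; the real work is capturing the plot's data and saving an edited copy of the template. A hypothetical skeleton, where MyCorrLoadPlot is your edited template (with the CORRX/CORRY scatterplot split so the observation group can be dropped) and work.corrdata is the dataset you feed it:
proc sgrender data=work.corrdata template=MyCorrLoadPlot;
    dynamic xLabel="Factor 1" yLabel="Factor 2";
run;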
Here's the template, and you'll see the spot I'm talking about:
proc template;
define statgraph Stat.PLS.Graphics.CorrLoadPlot;
dynamic Radius1 Radius2 Radius3 Radius4 xLabel xShortLabel yLabel
yShortLabel CorrX CorrXLab TraceX CorrY CorrYLab TraceY _byline_
_bytitle_ _byfootnote_;
BeginGraph /;
entrytitle "Correlation Loading Plot";
layout overlayequated / equatetype=square commonaxisopts=(
tickvaluelist=(-1.0 -0.75 -0.5 -0.25 0 0.25 0.5 0.75 1.0) viewmin=
-1 viewmax=1) xaxisopts=(label=XLABEL shortlabel=XSHORTLABEL
offsetmin=0.05 offsetmax=0.05 gridDisplay=auto_off) yaxisopts=(
label=YLABEL shortlabel=YSHORTLABEL offsetmin=0.05 offsetmax=0.05
gridDisplay=auto_off);
ellipseparm semimajor=RADIUS1 semiminor=RADIUS1 slope=0.0 xorigin=
0.0 yorigin=0.0 / clip=true display=(outline) outlineattrs=(
pattern=dash) datatransparency=0.75;
scatterplot x=XCIRCLE1LABEL y=YCIRCLE1LABEL / markercharacter=
CIRCLE1LABEL datatransparency=0.75 primary=true;
ellipseparm semimajor=RADIUS2 semiminor=RADIUS2 slope=0.0 xorigin=
0.0 yorigin=0.0 / clip=true display=(outline) outlineattrs=(
pattern=dash) datatransparency=0.75;
scatterplot x=XCIRCLE2LABEL y=YCIRCLE2LABEL / markercharacter=
CIRCLE2LABEL datatransparency=0.75 primary=true;
ellipseparm semimajor=RADIUS3 semiminor=RADIUS3 slope=0.0 xorigin=
0.0 yorigin=0.0 / clip=true display=(outline) outlineattrs=(
pattern=dash) datatransparency=0.75;
scatterplot x=XCIRCLE3LABEL y=YCIRCLE3LABEL / markercharacter=
CIRCLE3LABEL datatransparency=0.75 primary=true;
ellipseparm semimajor=RADIUS4 semiminor=RADIUS4 slope=0.0 xorigin=
0.0 yorigin=0.0 / clip=true display=(outline) outlineattrs=(
pattern=dash) datatransparency=0.75;
scatterplot x=XCIRCLE4LABEL y=YCIRCLE4LABEL / markercharacter=
CIRCLE4LABEL datatransparency=0.75 primary=true;
scatterplot x=CORRX y=CORRY / group=CORRGROUP Name="ScatterVars"
markercharacter=CORRLABEL rolename=(_id1=_ID1 _id2=_ID2 _id3=
_ID3 _id4=_ID4 _id5=_ID5) tip=(y x group markercharacter _id1
_id2 _id3 _id4 _id5) tiplabel=(y=CORRXLAB x=CORRYLAB group=
"Corr Type" markercharacter="Corr ID");
SeriesPlot x=TRACEX y=TRACEY / tip=(y x) tiplabel=(y=CORRYLAB x=
CORRXLAB);
endlayout;
if (_BYTITLE_)
entrytitle _BYLINE_ / textattrs=GRAPHVALUETEXT;
else
if (_BYFOOTNOTE_)
entryfootnote halign=left _BYLINE_;
endif;
endif;
EndGraph;
end;
run;
I would consider posting this on communities.sas.com and seeing if one of the developers can give you more specific information; Sanjay and Dan often post there and may well be able to give you a simpler answer.