Currently when looking at the summary page of a build pipeline in Azure DevOps I can only see up to 11 warnings per project. Is there a way to show the total count of warnings? I don't need to see what the warnings are, just a count of the total number of warnings.
I'm not sure why the count caps at 11 on your side, because it can display 14 warnings here.
I'm afraid you may need to double-check whether 11 is the actual total count for that pipeline.
In any case, there is another way: check the actual number programmatically. Just run the script below from a PowerShell terminal on your local machine:
# Personal access token (PAT) with permission to read builds
$token = "{token}"
$url = "https://dev.azure.com/{org name}/{project name}/_build/results?buildId={the build id you want to check}&__rt=fps&__ver=2"
# Encode the token for Basic authentication
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Get
Write-Host "results = $($response.fps.dataProviders.data.'ms.vss-build-web.run-details-data-provider'.issues.Count | ConvertTo-Json -Depth 100)"
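If you would rather not depend on that internal __rt=fps endpoint, the documented Build Timeline REST API also reports warning counts per timeline record, so you can sum them. A rough sketch of the same idea (the warningCount property is part of the public timeline response, but verify it against your api-version before relying on it):
$token = "{token}"
$base64 = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$timelineUrl = "https://dev.azure.com/{org name}/{project name}/_apis/build/builds/{build id}/timeline?api-version=6.0"
$timeline = Invoke-RestMethod -Uri $timelineUrl -Headers @{Authorization = "Basic $base64"} -Method Get
# Sum the warning counts reported by each timeline record (stage/job/task)
$totalWarnings = ($timeline.records | Where-Object { $_.warningCount } | Measure-Object -Property warningCount -Sum).Sum
Write-Host "total warnings = $totalWarnings"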
I have a set of servers (150) and a command (to get disk space). How can I execute this command on each server?
If the script takes 1 minute to get the command's report from a single server, how can I send a report for all the servers every 10 minutes?
use strict;
use warnings;
use Net::SSH::Perl;
use Filesys::DiskSpace;

# I have more than 100 servers...
my %hosts = (
    'localhost' => {
        user     => "z",
        password => "qumquat",
    },
    '129.221.63.205' => {
        user     => "z",
        password => "aardvark",
    },
);

# file system /home or /dev/sda5
my $dir = "/home";
my $cmd = "df $dir";

foreach my $host (keys %hosts) {
    my $ssh = Net::SSH::Perl->new($host, port => 22, debug => 1, protocol => '2,1');
    $ssh->login($hosts{$host}{user}, $hosts{$host}{password});
    my ($out) = $ssh->cmd($cmd);
    print "$out\n";
}
It has to report the disk space output for each server.
Is there a reason this needs to be done in Perl? There is an existing tool, dsh, which provides precisely this functionality: using ssh to run a shell command on multiple hosts and report the output from each. It also has the ability, with the -c (concurrent) switch, to run the command on all hosts at the same time rather than waiting for each one to complete before going on to the next, which you would need if you want to monitor 150 machines every 10 minutes when each host takes 1 minute to check.
To use dsh, first create a file in ~/.dsh/group/ containing a list of your servers. I'll put mine in ~/.dsh/group/test-group with the content:
galera-1
galera-2
galera-3
Then I can run the command
dsh -g test-group -c 'df -h /'
And get back the result:
galera-3: Filesystem Size Used Avail Use% Mounted on
galera-3: /dev/mapper/debian-system 140G 36G 99G 27% /
galera-1: Filesystem Size Used Avail Use% Mounted on
galera-1: /dev/mapper/debian-system 140G 29G 106G 22% /
galera-2: Filesystem Size Used Avail Use% Mounted on
galera-2: /dev/mapper/debian-system 140G 26G 109G 20% /
(They're out of order because I used -c, so the command was sent to all three servers at once and the results were printed in the order the responses were received. Without -c, they would appear in the same order the servers are listed in the group file, but then it would wait for each response before connecting to the next server.)
But, really, with the talk of repeating this check every 10 minutes, it sounds like what you really want is a proper monitoring system such as Icinga (a high-performance fork of the better-known Nagios), rather than just a way to run commands remotely on multiple machines (which is what dsh provides). Unfortunately, configuring an Icinga monitoring system is too involved for me to provide an example here, but I can tell you that monitoring disk space is one of the checks that are included and enabled by default when using it.
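If you do stay with dsh rather than a full monitoring system, scheduling the check every 10 minutes is a one-line cron job. A minimal sketch, assuming the group file above and a log path of your choosing:
# run the disk check every 10 minutes and append the results to a log
*/10 * * * * dsh -g test-group -c 'df -h /' >> /var/log/disk-check.log 2>&1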
There is a ready-made tool called Ansible for exactly this purpose. There you can define your list of servers, group them, and execute commands on all of them.
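As a minimal sketch (the hostnames, group name, and file name here are placeholders): put the servers in an inventory file, then run a single ad-hoc command against the whole group.
# inventory.ini
[diskcheck]
server01.example.com
server02.example.com
Then run the check on every host in the group, 10 connections in parallel:
ansible diskcheck -i inventory.ini -f 10 -m command -a "df -h /home"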
I am retrieving a list of files in a folder with tens of thousands of files. I've let the script execute for a few hours but it never seems to progress after this part. Is there a way to diagnose what's going on or the progress of the list? It used to take a few minutes to load the list, so I am not sure why it's all of a sudden hanging now.
Code
function EchoDump($echoOut = 0)
{
    if ($echoOut)
        echo str_repeat("<!-- Agent Smith -->", 1000);
    else
        return str_repeat("<!-- Agent Smith -->", 1000);
}

$sftp_connection = new Net_SFTP($website, 2222);
$login_result = $sftp_connection->login($credentials->login(), $credentials->password());
echo "<pre>Login Status: " . ($login_result ? "Success" : "Failure") . "</pre>";

if ($sftp_connection->chdir($photosFolder))
    echo "<pre>Changed to $photosFolder</pre>";
else
    echo "<pre>Failed to change to $photosFolder</pre>";

echo "<pre>Downloading List.</pre>";
EchoDump(1);

$rawlist = $sftp_connection->rawlist();

echo "<pre>List downloaded.</pre>";
EchoDump(1);
Output
Login Status: Success
Changed to /vspfiles/photos
Downloading List.
And then it will just sit with Downloading List as the last output forever.
Real-time logging might help provide some insight. You can enable that by adding define('NET_SSH2_LOGGING', 3); to the top of your script. Once you've got that, maybe post it on pastebin.org for dissection.
That said, I will say that rawlist() doesn't return directory info on demand. It'll return everything. It's not the most efficient of designs, but back in the phpseclib 1.0 days Iterators didn't exist; phpseclib worked with PHP 4, so it had to work with what it had. The current master branch requires PHP 5.6 and is under active development, so maybe this behavior will change, but we'll see.
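For illustration, enabling the real-time log just means adding the define before phpseclib is loaded; something like this (a sketch assuming phpseclib 1.0, where 3 corresponds to NET_SSH2_LOG_REALTIME):
// Must be defined before Net/SFTP.php is included
define('NET_SSH2_LOGGING', 3); // 3 = NET_SSH2_LOG_REALTIME: dump packets as they arrive

include 'Net/SFTP.php';

$sftp_connection = new Net_SFTP($website, 2222);
$sftp_connection->login($credentials->login(), $credentials->password());
// Packet traffic will now be echoed in real time, so you can see
// whether rawlist() is still receiving data or has genuinely stalled.
$rawlist = $sftp_connection->rawlist();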
Well, first of all, I am learning SharePoint 2013 and I have been following a few tutorials. So far I have just set up a farm, and everything seems to be working properly except for this error, which is being logged to the event viewer every 5 minutes:
The Execute method of job definition
Microsoft.Office.Server.UserProfiles.LMTRepopulationJob (ID
1e573155-b7f6-441b-919b-53b2f05770f7) threw an exception. More
information is included below.
Unexpected exception in FeedCacheService.BulkLMTUpdate: Unable to
create a DataCache. SPDistributedCache is probably down..
I found out that this is a job that is configured to execute every 5 minutes.
Regarding the assumption that the SPDistributedCache is probably down: I already verified it, and it is actually running. I also checked the host cache via the SharePoint PowerShell (Get-CacheHost and Get-CacheClusterHealth), and still all seems fine.
Yet when I execute the command Get-Cache I get only the default cache, whereas from what I have read there should be other cache types listed, such as:
DistributedAccessCache_XXXXXXXXXXXXXXXXXXXXXXXXX
DistributedBouncerCache_XXXXXXXXXXXXXXXXXXXXXXXX
DistributedSearchCache_XXXXXXXXXXXXXXXXXXXXXXXXX
DistributedServerToAppServerAccessTokenCache_XXXXXXX
DistributedViewStateCache_XXXXXXXXXXXXXXXXXXXXXXX
among others, which I think should probably include a DataCache.
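For reference, the checks mentioned above amount to something like this in the SharePoint 2013 Management Shell (assuming Use-CacheCluster has been run first so the AppFabric cmdlets target the local cluster):
Use-CacheCluster
Get-CacheHost            # each host should report Service Status UP
Get-CacheClusterHealth   # per-cache health statistics
Get-Cache                # lists the named caches; here only "default" appears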
So far I have tried a few workarounds, but without success:
Restart-Service AppFabricCachingService
Remove-SPDistributedCacheServiceInstance
Add-SPDistributedCacheServiceInstance
Restart-CacheCluster
I even tried this script, which seems to work in many cases to repair the AppFabric Caching Service:
$SPFarm = Get-SPFarm
$cacheClusterName = "SPDistributedCacheCluster_" + $SPFarm.Id.ToString()
$cacheClusterManager = [Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheClusterInfoManager]::Local
$cacheClusterInfo = $cacheClusterManager.GetSPDistributedCacheClusterInfo($cacheClusterName);
$instanceName ="SPDistributedCacheService Name=AppFabricCachingService"
$serviceInstance = Get-SPServiceInstance | ? {($_.Service.Tostring()) -eq $instanceName -and ($_.Server.Name) -eq $env:computername}
$serviceInstance.Delete()
Add-SPDistributedCacheServiceInstance
$cacheClusterInfo.CacheHostsInfoCollection
If anyone has any suggestions, I will appreciate it very much. Thank you in advance!
This is a generic error message, meaning that the real issue isn't known (hence the word "Probably").
I believe that the key to solving this problem, when it is not the "Probably", is to look in the ULS log for the events that occurred just before it. Events of type "Unexpected" do not appear in the event log and are often seen before a generic error like this one.
In many cases you might see something like "File not Found". This usually means that the noted file is not in the assembly cache. Since the Distributed Cache utilizes AppFabric, which is outside of SharePoint, the only way for SharePoint to find its file is to look in the assembly cache. The SharePoint prerequisite installer should have put the files there, but it might have failed, or maybe someone uninstalled AppFabric and reinstalled it manually, which would have removed the files from the assembly cache without putting them back.
Before Restart-CacheCluster you could specify the connection to your SharePoint database (the catalog name may not be the same):
Use-CacheCluster -ConnectionString "Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True" -ProviderType System.Data.SqlClient
NOTE: This change does not persist permanently.
NOTE 2: If you don't have a named instance on the DB server, just put the name of your server without the "\".
If you don't have a catalog, you could follow this script:
Remove-Cache default
New-Cache SharePointCache
Get-CacheConfig SharePointCache
Set-CacheConfig SharePointCache -NotificationsEnabled True
New-CacheCluster -Provider System.Data.SqlClient -ConnectionString "Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True" -Size Small
Register-CacheHost -Provider System.Data.SqlClient -ConnectionString "Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True" -Account "Domain\spservices_account" -CachePort 22233 -ClusterPort 22234 -ArbitrationPort 22235 -ReplicationPort 22236 -HostName [Name_of_your_server]
Add-CacheHost -Provider System.Data.SqlClient -ConnectionString "Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True" -Account "Domain\spservices_account"
Add-CacheAdmin -Provider System.Data.SqlClient -ConnectionString "Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True"
Use-CacheCluster
You could specify or check your database configuration in regedit
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppFabric\V1.0\Configuration
Look for ConnectionString string value, and set your connection string
Data Source=(SharePoint DB Server)\(Optional Instance);Initial Catalog=CacheClusterConfigurationDB;Integrated Security=True
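To read that value from PowerShell instead of regedit, something along these lines should work (same registry path as above):
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AppFabric\V1.0\Configuration' | Select-Object -ExpandProperty ConnectionString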
To query the status of the server you could use:
Get-SPServiceInstance | ? {($_.Service.ToString()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
Get-SPServer | ? {($_.ServiceInstances | % TypeName) -contains "Distributed Cache"} | % Address
Get-AFCache | Format-Table -AutoSize
Get-CacheHost
Additional:
If you need to change your service account, you could do this procedure:
$f = Get-SPFarm
$svc = $f.Services | ? {$_.Name -eq "AppFabricCachingService"}
$acc = Get-SPManagedAccount -Identity "Domain\spservices_account"
$svc.ProcessIdentity.CurrentIdentityType = "SpecificUser"
$svc.ProcessIdentity.ManagedAccount = $acc
$svc.ProcessIdentity.Update()
$svc.ProcessIdentity.Deploy()
Have you changed the distributed cache account from the farm account?
What build number are you on?
Is this a single-server farm?
Off the top of my head, the only thing left is this:
Grant-CacheAllowedClientAccount -Account "domain\ProfileserviceWebAppIdentity"
I would do an iisreset and restart the OWSTimer service after you run this command.
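For completeness, that reset would look something like this from an elevated prompt (SPTimerV4 is the Windows service name behind OWSTIMER.EXE):
iisreset
Restart-Service SPTimerV4   # the SharePoint Timer Service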
When I run unit tests for my application, the first tests are successful, but at around 100 tests they start to fail due to a PDOException (Too many connections). I have already searched for this problem, but was not able to solve it.
My config is as follows:
<phpunit
backupGlobals = "false"
backupStaticAttributes = "false"
colors = "true"
convertErrorsToExceptions = "true"
convertNoticesToExceptions = "true"
convertWarningsToExceptions = "true"
processIsolation = "false"
stopOnFailure = "false"
syntaxCheck = "false"
bootstrap = "bootstrap.php.cache" >
If I change processIsolation to "true", all tests generate an error (E):
Caused by ErrorException: unserialize(): Error at offset 0 of 79 bytes
For that I tried setting "detect_unicode = Off" inside the php.ini file.
If I run tests in smaller batches, like with "--group something", all tests are successful.
Can someone help me solve the issue when running all the tests at once? I really want to get rid of the PDOException.
Thanks in advance!
You should increase the maximum number of concurrent connections in your DB server.
If you're using MySQL, edit /etc/mysql/my.cnf and set the max_connections parameter to the number of concurrent connections you need. Then restart the MySQL server.
Keep in mind: in theory, the physical limits are very high. But if your queries cause high CPU load or memory consumption, your DB server could eat up the resources required for other processes. This means you could run out of memory, or your system could become overloaded.
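To see where you stand before changing anything, a quick check from the mysql client (these are standard MySQL statements; note that SET GLOBAL only lasts until the server restarts):
-- current limit and the high-water mark of connections actually used
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';
-- raise the limit on the running server to confirm the diagnosis
SET GLOBAL max_connections = 500;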
For people who are having the same issue, here are more specific steps to configure the my.cnf file.
If you are sure you are editing the right my.cnf file, put max_connections = 500 (the default is 151) in the [mysqld] section of my.cnf. Don't put it in the [client] section.
To make sure you are on the right my.cnf when you have multiple mysqld installations (e.g. from Homebrew or XAMPP), find the right mysqld; for XAMPP, run /Applications/XAMPP/xamppfiles/sbin/mysqld --verbose --help | grep -A 1 "Default options" and you will get something like this:
Default options are read from the following files in the given order:
/Applications/XAMPP/xamppfiles/etc/xampp/my.cnf /Applications/XAMPP/xamppfiles/etc/my.cnf ~/.my.cnf
Normally it's at /Applications/XAMPP/xamppfiles/etc/my.cnf.
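The relevant section of my.cnf would then look like this (500 is just the example value from above):
[mysqld]
# raise the connection limit from the default of 151
max_connections = 500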
I need to detect when a computer is idle for a certain time period. My definition of idleness is:
No users logged in, either by remote methods or on the local machine
X server inactivity, with no movement of mouse or key presses
TTY keyboard inactivity (hopefully)
Since the majority of distros have now moved to logind, I should be able to use its DBUS interface to find out if users are logged in, and also to monitor logins/logouts. I have used xautolock to detect X idleness before, and I could continue using that, but xscreensaver is also available. Preferably, however, I want to move away from specific dependencies like the screensaver, since different desktop environments use different components.
Ideally, I would also be able to base idleness on TTY keyboard inactivity, though this isn't my biggest concern. According to this answer, I should be able to directly query the /dev/input/* interfaces; however, I have no clue how to go about this.
My previous attempts at making such a monitor have used Bash, due to the ease of changing a plain text script file; however, I am happy to use C++ in case more advanced methods are required to accomplish this.
From a purely shell standpoint (since you tagged this bash), you can get really close to what you want.
#!/bin/sh

users_are_logged_in() {
    who | grep -q .
    return $?
}

x_is_blanked() {
    local DISPLAY=:0
    if xscreensaver-command -time | grep -q 'screen blanked'; then
        return 0 # we found a blanked xscreensaver: return true
    fi
    # no blanked xscreensaver. Look for DPMS modes
    xset -q | awk '
        /DPMS is Enabled/ { dpms = 1 }    # DPMS is enabled
        /Monitor is On$/  { monitor = 1 } # The monitor is on
        END { if (dpms && !monitor) { exit 0 } else { exit 1 } }'
    return $? # true when DPMS is enabled and the monitor is not on
}

nobody_here() {
    ! users_are_logged_in && x_is_blanked
    return $?
}

if nobody_here; then
    sleep 2m
    if nobody_here; then
        : # ...
    fi
fi
This assumes that a user can log in within the two minutes and that, otherwise, there is no TTY keyboard activity.
You should verify that the who | grep works on your system (i.e. that who prints no headers). I had originally grepped for / but that won't work on FreeBSD. If your who has headers, maybe try [ $(who | grep -c .) -gt 1 ], which tells you whether who outputs more than one line.
I share your worry about the screensaver part; xscreensaver likely isn't running in the login manager (any other form of X would involve a user logged in, which who would detect), e.g. GDM uses gnome-screensaver, whose syntax would be slightly different. The DPMS part may be good enough, giving a far larger buffer for graphical logins than the two minutes for console login.
Using return $? in the last line of a function is redundant. I used it to clarify that we're actually using the return value from the previous line. nobody_here short circuits, so if no users are logged in, there is no need to run the more expensive check for the status of X.
Side note: Be careful about using the term "idle" as it more typically refers to resource (hardware, that is) consumption (e.g. CPU load). See the uptime command for load averages for the most common way of determining system (resource) idleness. (This is why I named my function nobody_here instead of e.g. is_idle)
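Since the question mentions logind: on systemd-based distros, users_are_logged_in could instead ask logind itself via loginctl, which covers local and remote sessions without parsing who output. A sketch, assuming loginctl is available:
users_are_logged_in() {
    # any session listed by logind means someone is logged in
    [ "$(loginctl list-sessions --no-legend | wc -l)" -gt 0 ]
}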