I am building blueprints in vRealize Automation 7.2 and I need to be able to execute code from a remote location as part of the process. I know I can use encrypted properties to provide the credentials of a user and then execute scripts in a different user context, but is that my only option? I see in vRealize Orchestrator that I can change the credential of the user executing a workflow, but I'm not sure that's my best option either.
I found a way: mapping a drive on the deployed machine to the network location using PowerShell scripts.
$driveLetter = "U"
$networkPath = "\\network\share"
$userName = "domain\username"
$password = "password"
# -ErrorAction SilentlyContinue prevents an error being written when the drive doesn't exist yet
$psDrive = Get-PSDrive -Name $driveLetter -ErrorAction SilentlyContinue
if ($psDrive)
{
    Remove-PSDrive $psDrive
}
$token = $password | ConvertTo-SecureString -AsPlainText -Force
$credentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $userName, $token -ErrorAction Stop
$output = New-PSDrive -Name $driveLetter -Root $networkPath -PSProvider FileSystem -Credential $credentials -Persist -Scope Global
$driveLetter = $output.Name + ":"
I was then able to map the resulting drive letter to other steps' install locations using vRealize Automation software components, binding their properties to this component's property by checking the property's Binding box and setting the Value to the ConnectionStep_1~driveLetter property.
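For illustration, a later software-component step can then consume the bound value directly (a minimal sketch; the installer path and arguments are assumptions, not part of the original blueprint, and $driveLetter is populated by the property binding):
# $driveLetter arrives via the bound property, e.g. "U:"
$installerPath = "$driveLetter\installers\setup.exe"
# Run the installer from the mapped network share and wait for it to finish
Start-Process -FilePath $installerPath -ArgumentList "/quiet" -Wait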
We would like to hand development over to another company via Azure DevOps, and we are asking ourselves whether this pipeline can do more than push new releases: could data also be downloaded from the productive environment via an Azure DevOps or AWS DevOps pipeline?
I researched this myself but found nothing about it.
Does anyone have more information on this?
Thank you
Is it possible to download files/data during the build pipeline on Azure DevOps?
In Azure DevOps, there isn't a built-in task to download files/data, but you can use the PowerShell task to connect to an FTP server and download files.
For detailed information, you can refer to this similar question.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      #FTP Server Information - SET VARIABLES
      $ftp = "ftp://XXX.com/"
      $user = 'UserName'
      $pass = 'Password'
      $folder = 'FTP_Folder'
      $target = "C:\Folder\Folder1\"
      #SET CREDENTIALS
      $credentials = New-Object System.Net.NetworkCredential($user, $pass)
      function Get-FtpDir ($url, $credentials) {
          $request = [Net.WebRequest]::Create($url)
          $request.Method = [System.Net.WebRequestMethods+FTP]::ListDirectory
          if ($credentials) { $request.Credentials = $credentials }
          $response = $request.GetResponse()
          $reader = New-Object IO.StreamReader $response.GetResponseStream()
          while (-not $reader.EndOfStream) {
              $reader.ReadLine()
          }
          #$reader.ReadToEnd()
          $reader.Close()
          $response.Close()
      }
      #SET FOLDER PATH
      $folderPath = $ftp + "/" + $folder + "/"
      $files = Get-FtpDir -url $folderPath -credentials $credentials
      $files
      $webclient = New-Object System.Net.WebClient
      $webclient.Credentials = New-Object System.Net.NetworkCredential($user, $pass)
      $counter = 0
      foreach ($file in ($files | Where-Object {$_ -like "*.txt"})) {
          $source = $folderPath + $file
          $destination = $target + $file
          $webclient.DownloadFile($source, $destination)
          #PRINT FILE NAME AND COUNTER
          $counter++
          $counter
          $source
      }
The script comes from: PowerShell Connect to FTP server and get files.
You should use artifacts when the data lives inside your own "environment".
Otherwise you can use normal command-line tools like git, curl, or wget; which ones are available depends on your build agent (see the sketch below).
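For example, a download step using curl might look like this (a minimal sketch; the URL and output path are placeholders, not part of the original answer):
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Download a file from a remote location into the build workspace
      curl -fSL "https://example.com/data/input.zip" -o "$(Build.SourcesDirectory)/input.zip"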
So, I've got the following PowerShell script to find inactive AD users and disable their accounts, creating a log file containing a list of which accounts have been disabled:
Import-Module ActiveDirectory
# Set the number of days since last logon
$DaysInactive = 60
$InactiveDate = (Get-Date).Adddays(-($DaysInactive))
# Get AD Users that haven't logged on in xx days
$Users = Get-ADUser -Filter { LastLogonDate -lt $InactiveDate -and Enabled -eq $true } -Properties LastLogonDate |
    Select-Object @{ Name="Username"; Expression={$_.SamAccountName} }, Name, LastLogonDate, DistinguishedName
# Export results to CSV
$Users | Export-Csv C:\Temp\InactiveUsers.csv -NoTypeInformation
# Disable Inactive Users
ForEach ($Item in $Users){
    $DistName = $Item.DistinguishedName
    Disable-ADAccount -Identity $DistName
    Get-ADUser -Filter { DistinguishedName -eq $DistName } | Select-Object @{ Name="Username"; Expression={$_.SamAccountName} }, Name, Enabled
}
The script works and is doing everything it should. What I am trying to figure out is how to automate this in an AWS environment.
I'm guessing I need to use a Lambda function in AWS to trigger this script to run on a schedule but don't know where to start.
Any help greatly appreciated.
I recommend creating a Lambda function with the dotnet environment: https://docs.aws.amazon.com/lambda/latest/dg/lambda-powershell.html
Use a CloudWatch Event on a Scheduled basis to trigger the function:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
Alternatively, if you would like a more pipeline-style execution, you could use CodePipeline and CodeBuild to run the script. Again, use CloudWatch to trigger the CodePipeline on a scheduled basis!
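As a rough sketch of the Lambda route (the script and function names here are placeholders; this assumes the AWSLambdaPSCore module covered in the linked documentation):
# One-time setup: install the tooling and scaffold a PowerShell Lambda script
Install-Module AWSLambdaPSCore -Scope CurrentUser
New-AWSPowerShellLambda -ScriptName DisableInactiveUsers -Template Basic
# Copy the AD logic into the generated DisableInactiveUsers.ps1, then publish it
Publish-AWSPowerShellLambda -ScriptPath .\DisableInactiveUsers\DisableInactiveUsers.ps1 -Name DisableInactiveUsers -Region us-east-1
Keep in mind the function would need VPC connectivity to reach your domain controllers, and the ActiveDirectory module is not part of the default Lambda PowerShell runtime, so it would have to be bundled with the deployment package.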
We have previously generated a list of Google's API endpoints utilised by the SDK by grepping the source repo. Now that the repo no longer seems to be available, has anyone else found a way of obtaining such a list? We need to be able to whitelist these endpoints on our corporate firewall/proxy.
Thanks!
PART 1
If your objective is to whitelist URLs for your firewall, the URL *.googleapis.com will cover 99% of everything you need. There are only a few endpoints left:
bookstore.endpoints.endpoints-portal-demo.cloud.goog
cloudvolumesgcp-api.netapp.com
echo-api.endpoints.endpoints-portal-demo.cloud.goog
elasticsearch-service.gcpmarketplace.elastic.co
gcp.redisenterprise.com
payg-prod.gcpmarketplace.confluent.cloud
prod.cloud.datastax.com
PART 2
List the Google API endpoints that are available for your project with this command:
gcloud services list --available --format json | jq -r ".[].config.name"
https://cloud.google.com/sdk/gcloud/reference/services/list
Refer to PART 5 for a PowerShell script that produces a similar list.
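The command's output is a flat list of service names, one per line, for example (a truncated illustration):
compute.googleapis.com
storage.googleapis.com
bigquery.googleapis.com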
PART 3
Process the Discovery Document, which provides machine-readable information:
Google API Discovery Service
curl https://www.googleapis.com/discovery/v1/apis | jq -r ".items[].discoveryRestUrl"
Once you have a list of discovery documents, process each document and extract the rootUrl key.
curl https://youtubereporting.googleapis.com/$discovery/rest?version=v1 | jq -r ".rootUrl"
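Putting the two steps together, a small shell loop can extract the rootUrl from every discovery document (a quick sketch; documents that return 404 will just produce errors you can ignore):
for url in $(curl -s https://www.googleapis.com/discovery/v1/apis | jq -r ".items[].discoveryRestUrl"); do
  curl -s "$url" | jq -r ".rootUrl"
done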
PART 4
PowerShell script to process the Discovery Document and generate an API endpoint list:
Copy this code to a file named list_google_apis.ps1. Run the command as follows:
powershell ".\list_google_apis.ps1 | Sort-Object -Unique | Out-File -Encoding ASCII -FilePath apilist.txt"
There will be some errors displayed as some of the discovery document URLs result in 404 (NOT FOUND) errors.
$url_discovery = "https://www.googleapis.com/discovery/v1/apis"
$params = @{
    Uri = $url_discovery
    ContentType = 'application/json'
}
$r = Invoke-RestMethod @params
foreach ($item in $r.items) {
    $url = $item.discoveryRestUrl
    try {
        $p = @{
            Uri = $url
            ContentType = 'application/json'
        }
        $doc = Invoke-RestMethod @p
        $doc.rootUrl
    } catch {
        Write-Host "Failed:" $url -ForegroundColor Red
    }
}
PART 5
Here is a PowerShell script that I wrote a while back that produces output similar to gcloud services list.
Documentation for the API:
https://cloud.google.com/service-usage/docs/reference/rest/v1/services/list
<#
.SYNOPSIS
This program displays a list of Google Cloud services
.DESCRIPTION
Google Service Management allows service producers to publish their services on
Google Cloud Platform so that they can be discovered and used by service consumers.
.NOTES
This program requires that the Google Cloud SDK CLI is installed and set up.
https://cloud.google.com/sdk/docs/quickstarts
.LINK
PowerShell Invoke-RestMethod
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-5.1
Google Cloud CLI print-access-token Documentation
https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token
Google Cloud API Documentation
https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest
https://cloud.google.com/service-usage/docs/reference/rest/v1/services
https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest/v1/services/list
#>
function Get-AccessToken {
# Get an OAuth Access Token
$accessToken=gcloud auth print-access-token
return $accessToken
}
function Display-ServiceTable {
Param([array][Parameter(Position = 0, Mandatory = $true)] $serviceList)
if ($serviceList.Count -lt 1) {
Write-Output "No services were found"
return
}
# Display as a table
$serviceList.serviceConfig | Select name, title | Format-Table -Wrap | more
}
function Get-ServiceList {
Param([string][Parameter(Position = 0, Mandatory = $true)] $accessToken)
# Build the url
# https://cloud.google.com/service-infrastructure/docs/service-management/reference/rest/v1/services/list
$url="https://servicemanagement.googleapis.com/v1/services"
# Build the Invoke-RestMethod parameters
# https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-5.1
$params = @{
    Headers = @{
        Authorization = "Bearer " + $accessToken
    }
    Method = 'Get'
    ContentType = "application/json"
}
# Create an array to store the API output which is an array of services
$services = @()
# Google APIs page the output
$nextPageToken = $null
do {
if ($nextPageToken -eq $null)
{
$uri = $url
} else {
$uri = $url + "?pageToken=$nextPageToken"
}
try {
# Get the list of services
$output = Invoke-RestMethod @params -Uri $uri
} catch {
Write-Host "Error: REST API failed." -ForegroundColor Red
Write-Host "URL: $url" -ForegroundColor Red
Write-Host $_.Exception.Message -ForegroundColor Red
return $services
}
# Debug: Display as JSON
# $output | ConvertTo-Json
# Append services to list
$services += $output.services
$nextPageToken = $output.nextPageToken
} while ($nextPageToken -ne $null)
return $services
}
############################################################
# Main Program
############################################################
$accessToken = Get-AccessToken
$serviceList = Get-ServiceList $accessToken
Display-ServiceTable $serviceList
The examples above use the command-line tool jq.
I'm trying to rename the hostname of a spot instance and add it to AD. It is a simple PowerShell script. I've read in the docs that by default user data is disabled after it executes once, and that if <persist>true</persist> is used it will not be disabled.
I think I saw somewhere that this (enabling it to run at each startup) is done via Task Scheduler, but I can't find the link.
Can someone point me to the Task Scheduler job, or to a way to manually disable the user data once my if conditions are met?
<powershell>
Set-ExecutionPolicy unrestricted -Force
$instanceName = "test-name5"
$username = "domain\username"
$password = "password" | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object -typename System.Management.Automation.PSCredential($username, $password)
Start-Sleep -s 5
$hostname = hostname
$domain = (Get-WmiObject win32_computersystem).Domain
if (!($hostname -eq $instanceName)) {
    Rename-Computer -NewName $instanceName -Restart -Force
} elseif (!($domain -eq 'my.domain.local')) {
    Start-Sleep -s 5
    Add-Computer -DomainName my.domain.local -OUPath "OU=Windows,OU=QAServers,OU=Servers,DC=my,DC=domain,DC=local" -Credential $cred -Force -Restart -ErrorAction 'stop'
} else {
    #### code to disable the running of userdata once the above conditions are met ####
}
</powershell>
<persist>true</persist>
It's worth reading the ec2config-service documentation, as the setting you want is referenced in there.
You want the Ec2HandleUserData setting, which is configured in the Config.xml.
PowerShell can easily update this setting:
$path = 'C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml'
$xml = [xml](Get-Content $path)
$state = $xml.Ec2ConfigurationSettings.Plugins.Plugin | where {$_.Name -eq 'Ec2HandleUserData'}
$state.State = 'Disabled'
$xml.Save($path)
I use this code when creating custom AMIs to re-enable userdata handling ($state.State = 'Enabled').
EDIT: The above is for ec2config, not ec2launch, which is what the OP is using. I'd missed this originally.
In this case I think you need to change the way your script runs. Rather than using <persist> and then trying to disable its functionality, I would remove the persist tag and call InitializeInstance.ps1 -Schedule (documentation link) in the if branches for the conditions under which you want the userdata to re-run:
if ($hostname -ne $instanceName) {
    & C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
    Rename-Computer -NewName $instanceName -Restart -Force
} elseif ($domain -ne 'my.domain.local') {
    & C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
    Add-Computer -DomainName my.domain.local -OUPath "OU=Windows,OU=QAServers,OU=Servers,DC=my,DC=domain,DC=local" -Credential $cred -Force -Restart -ErrorAction 'stop'
}
As I said in the comments on the previous answer, I had 3 options, and since I found the AWS scheduled task I went with the last option. Answering my own question since that will make the code easy to spot.
<powershell>
Set-ExecutionPolicy unrestricted -Force
#Enter instance hostname here
$instanceName = "test-name8"
$username = "domain\username"
#Using ssm parameter store to avoid having the password in plaintext
$password = (Get-SSMParameterValue -Name AD-Password -WithDecryption $True -Region us-east-1).Parameters[0].Value | ConvertTo-SecureString -asPlainText -Force
Start-Sleep -s 3
$cred = New-Object -typename System.Management.Automation.PSCredential($username, $password)
Start-Sleep -s 5
$hostname = hostname
$domain = (Get-WmiObject win32_computersystem).Domain
if ($hostname -ne $instanceName) {
    Rename-Computer -NewName $instanceName -Restart -Force
} elseif ($domain -ne 'my.domain.local') {
    Start-Sleep -s 5
    Add-Computer -DomainName my.domain.local -OUPath "OU=Windows,OU=QAServers,OU=Servers,DC=my,DC=domain,DC=local" -Credential $cred -Force -Restart -ErrorAction 'stop'
} else {
    Disable-ScheduledTask -TaskName "Amazon Ec2 Launch - Userdata Execution"
    # -Confirm:$false suppresses the confirmation prompt so this can run unattended
    Unregister-ScheduledTask -TaskName "Amazon Ec2 Launch - Userdata Execution" -Confirm:$false
}
</powershell>
<persist>true</persist>
Note: an IAM role with SSM policies must be attached when launching the server for this SSM parameter command to work.
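For completeness, the parameter referenced above could be created ahead of time like this (a minimal sketch using the AWS Tools for PowerShell; the parameter name matches the one the script reads):
# Store the AD join password as an encrypted SecureString parameter
Write-SSMParameter -Name "AD-Password" -Type SecureString -Value "YourPasswordHere" -Region us-east-1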
I was solving a similar issue: I had to change a Windows Server 2016 hostname and enroll it in an Elastic server fleet, and I could not allow the instance to be rebooted. I used the code below to solve this.
NB: I understand this is not the direct way of doing it and it has numerous drawbacks, but in my circumstances the goal was achieved without negative impact.
<powershell>
$ComputerName = "MyPCRandomName"
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -name "Hostname" -value $ComputerName
elastic-agent enroll --enrollment-token 123 --url=321
</powershell>
I have a PowerShell script I'm working on for post-migration SSRS report administration tasks.
In this particular scenario we have a DEV environment (where I've been primarily testing) which hosts a single instance of SSRS, and a Prod environment which is a scaled out deployment across 4 nodes.
I'm new to PowerShell (just discovered it 2 days ago...) and the script I have is pretty simple:
Clear-Host
$Username = "domain\myUsername"
$Password = "myPassword"
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList @($Username,(ConvertTo-SecureString -String $Password -AsPlainText -Force))
# Dev Connection String
$webServiceUrl = 'http://DEVwebServer.domain.com/reportserver/reportservice2010.asmx?WSDL'
# Prod Connection String
# $webServiceUrl = 'http://PRODwebServerNode1.domain.com/reportserver/reportservice2010.asmx?WSDL'
$rs = New-WebServiceProxy -Uri $webServiceUrl -Credential $Cred
$reports = $rs.ListChildren("/Some Folder Under Root", $true) | Where-Object { $_.TypeName -eq "Report" }
$type = $rs.GetType().Namespace;
$schedDefType = "{0}.ScheduleDefinition" -f $type;
$schedDef = New-Object ($schedDefType)
$warning = @();
foreach ($report in $reports) {
$sched = $rs.GetExecutionOptions($report.Path, [ref]$schedDef);
$snapShotExists = $rs.ListItemHistory($report.Path);
if($sched -eq "Snapshot") {
Write-Host "Following report is configured to run from Snapshot:" -ForegroundColor Yellow
Write-Host ("Report Name: {0}`nReport Path: {1}`nExecution Type: {2}`n" -f $report.Name, $report.Path, $sched)
if ($snapShotExists) {
Write-Host "Does Snapshot Exist..?`n" -ForegroundColor Yellow
Write-Host "Yes!`tNumber of Snapshots: " $snapShotExists.Count -ForegroundColor Green
$snapShotExists.CreationDate
Write-Host "`n------------------------------------------------------------"
}
elseif (!$snapShotExists) {
Write-Host "Does Snapshot Exist..?`n" -ForegroundColor Yellow
Write-Host ("No!`n") -ForegroundColor Red
Write-Host "Creating Snapshot.......`n" -ForegroundColor Yellow
$rs.CreateItemHistorySnapshot($report.Path, [ref]$warning);
Write-Host "Snapshot Created!`n" -ForegroundColor Green
$snapShotExists.CreationDate
Write-Host "`n------------------------------------------------------------"
}
}
}
The purpose of the script is simply to recursively iterate over all of the reports for the given folder in the $reports variable, check to see if the execution type is set to "Snapshot", if it is check to see if a "History Snapshot" exists, and if one does not exist, create one.
When I run this in Dev it works just fine, but when I run in PROD I get the following error repeated for each $report in my foreach loop:
Any ideas on why this would work in one and not the other and how to overcome this error?
I was able to get this working on the Prod instance by making some adjustments using this answer as a guide:
By updating my call to New-WebServiceProxy to add a Class and Namespace flag, I was able to update the script in the following ways:
...
# Add Class and Namespace flags to New-WebServiceProxy call
$rs = New-WebServiceProxy -Class 'RS' -Namespace 'RS' -Uri $webServiceUrl -Credential $Cred
$reports = $rs.ListChildren("/Some Folder Under Root", $true) | Where-Object { $_.TypeName -eq "Report" }
# Declare a new ScheduleDefinition object using the Class declared in the New-WebServiceProxy call
$schedDef = New-Object RS.ScheduleDefinition
$warning = @();
foreach ($report in $reports) {
# Referencing the "Item" property from the ScheduleDefinition
$execType = $rs.GetExecutionOptions($report.Path, [ref]$schedDef.Item)
...
I don't think adding the Class and Namespace flags to the New-WebServiceProxy call was exactly what did it; I think it's just a cleaner way to ensure you're getting the proper namespace from the web service. Maybe just a little sugar.
I think the key change was making sure to pass the "Item" property from the schedule definition object, although I'm not sure why it was working in Dev without doing so...