Unable to set tags for azure virtual machine from Azure automation runbook - azure-virtual-machine

I am using the code below to set tags on my Azure virtual machine. The code works when I run it on my laptop (the VMs get tagged). However, when I run the same code from an Azure Automation runbook, the virtual machines do not get tagged. No errors or warnings are observed after the runbook executes.
Code:
$resource_group = "agentinstall-poc"
$tags = (Get-AzureRmResource -ResourceGroupName $resource_group -Name "client-2").Tags
$tags += @{manju="rao"}
Set-AzureRmResource -ResourceGroupName $resource_group -Name "client-2" -ResourceType "Microsoft.Compute/VirtualMachines" -Tag $tags -Force -ApiVersion '2015-06-15'

The problem was that the PowerShell modules in the Azure Automation account are not updated by default (they are roughly v1.0 when the account is created). I had to update the modules, and then the tagging started working.
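For reference, here is a minimal sketch of the same tagging operation using the newer Az module (Get-AzResource / Update-AzTag) instead of AzureRM; the resource group and VM name are the ones from the question, and it assumes the Automation account has the Az.Resources module imported:
# Sketch only: assumes Az.Resources is available in the Automation account
$resource_group = "agentinstall-poc"
$vm = Get-AzResource -ResourceGroupName $resource_group -Name "client-2" -ResourceType "Microsoft.Compute/virtualMachines"
# Merge the new tag into the existing tags rather than replacing them
Update-AzTag -ResourceId $vm.ResourceId -Tag @{ manju = "rao" } -Operation Merge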

Related

Checking for the result of the AWS CLI 'run-task' command: did the task stop successfully or from an error?

I'm currently moving an application off of static EC2 servers to ECS, as until now the release process has been SSHing into the server to git pull and migrate the database.
I've created everything I need using Terraform to deploy my code from my organisation's Elastic Container Registry. I have a cluster, some services, and task definitions.
I can deploy the app successfully for any given version now, however my main problem is finding a way to run migrations.
My approach so far has been to split the application into three services: a 'web' service which handles all HTTP traffic (serving the frontend, responding to API requests), a 'cron' service which handles things like sending emails/push notifications at specific times/events, and a 'migrate' service, which is just the 'cron' service with the container's entryPoint overridden to only run the migrations (as I don't need any of the apache2 stuff for this container, and I didn't see a reason to build another one just for migrations).
The problem I had with this was that the 'migrate' service would constantly try to schedule more tasks for migrating the database, even though it only needed to run once. So I've scrapped it as a service but kept it as a task definition, so that I can still run it in my cluster.
As part of the deploy process I'm writing, I run that task inside the cluster via a bash script so I can wait until the migrations finish before deciding whether to take the application out of maintenance mode (if the migrations fail) or to deploy the new 'web'/'cron' containers once the migration has been completed.
Currently this is inside a shell script (run by GitHub Actions) that looks like this:
#!/usr/bin/env bash
CLUSTER_NAME=$1
echo $CLUSTER_NAME
OUTPUT=`aws ecs run-task --cluster ${CLUSTER_NAME} --task-definition saas-app-migrate`
if [ $? -ne 0 ]; then
    >&2 echo "$OUTPUT"
    exit 1
fi
TASKS=`echo $OUTPUT | jq '.tasks[].taskArn' | jq @sh | sed -e "s/'//g" | sed -e 's/"//g'`
for task in $TASKS
do
# check for task to be done
done
Because $TASKS contains the taskArn of any tasks spawned by this, I can freely query the tasks; however, I don't know what information I'm looking for.
The AWS documentation says I should use the 'describe-tasks' command to find out why a task has reached the 'STOPPED' status, as it provides 'stopCode' and 'stoppedReason' properties in the response. However, it doesn't say what these values would be if the task stopped successfully. I don't want to have to introduce a manual step in my deployment where I wait until the migrations are done - with the application not being usable - and then tell my release process to continue.
Is there a link to documentation I might have missed with the values I'm searching for, or an alternate way to handle this case?
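Not part of the original post, but one way to fill in the "check for task to be done" placeholder is to wait for the task to reach STOPPED and then inspect the container's exit code via describe-tasks; a rough sketch using the same cluster variable and task ARNs as the script above:
for task in $TASKS
do
    # Block until the task reaches the STOPPED status
    aws ecs wait tasks-stopped --cluster ${CLUSTER_NAME} --tasks ${task}
    # For a task whose container simply finished, the useful signal is the
    # container's exitCode: 0 means the migration command itself succeeded
    EXIT_CODE=`aws ecs describe-tasks --cluster ${CLUSTER_NAME} --tasks ${task} --query 'tasks[0].containers[0].exitCode' --output text`
    if [ "$EXIT_CODE" != "0" ]; then
        >&2 echo "Migration task ${task} failed with exit code ${EXIT_CODE}"
        exit 1
    fi
done
The wait subcommand polls describe-tasks internally, so this avoids writing the polling loop by hand.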

Dynamodb local web shell does not load

I am running DynamoDB locally using the instructions here. To rule out potential Docker networking issues I am using the "Download Locally" version of the instructions. Before running DynamoDB locally I run aws configure to set some fake values for the AWS access key, secret key, and region; here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running DynamoDB locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
I can confirm that DynamoDB is running locally by listing tables with the AWS CLI:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": []
}
but when I visit http://localhost:8000/shell in my browser, I get an error and the page does not load.
I tried running curl on the shell to see if I can get a more useful error message:
$ curl http://localhost:8000/shell
{
"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
"Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}%
I tried looking up the error above, but there isn't much setup I can do when the shell runs purely in the browser. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: MacOS Big Sur 11.6.2 (20G314)
DynamoDB Local Web Shell was deprecated with version 1.16.X and is not available any longer from 1.17.X to latest. There are no immediate plans for a new Web Shell to be introduced.
You can download an old version of DynamoDB Local < 1.17.X should you wish to use the Web Shell.
Available versions:
aws s3 ls s3://dynamodb-local-frankfurt/
Download most recent working version with Web Shell:
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
The next release of DynamoDB Local will have an updated README indicating its deprecation
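Putting the commands above together, a sketch of downloading and running that older build (assuming the archive unpacks to DynamoDBLocal.jar and the DynamoDBLocal_lib directory, like the current distribution does) could look like:
# Download the last build that still ships the Web Shell, unpack it, and start it
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
tar -xzf dynamodb_local_2021-04-27.tar.gz
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb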
As I answered in "DynamoDB local http://localhost:8000/shell", this appears to be a regression in new versions of DynamoDB Local, where the shell mysteriously stopped working, whereas in versions from a year ago it does work.
Somebody should report it to Amazon. If there is some flag that new versions require you to set to enable the shell, it isn't documented anywhere that I can find.
Update Java to the latest version and voilà, it works!

GCE Windows startup script

I am facing a strange problem where my windows-startup-script-ps1 is not running at startup, as described in the official documentation. As the name indicates, this corresponds to a PowerShell script. Looking further into the serial port logs, I found that the error is:
The term 'gs://mybucket/metadata.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify
that the path is correct and try again.
I tried giving it the Cloud Storage API object path, but I'm still getting this error. My Compute Engine VM has read-only Cloud Storage access, which I verified by running gsutil inside the VM. Can anyone shed some light on this issue?
Function Write-SerialPort ([string] $message) {
    $port = New-Object System.IO.Ports.SerialPort COM1,9600,None,8,one
    $port.Open()
    $port.WriteLine($message)
    $port.Close()
}

Write-SerialPort ("STARTING GCE Startup Script")
$IsInstalled = ((Get-WindowsFeature -Name Web-Server).Installed)
if ($IsInstalled -eq $false) {
    Install-WindowsFeature Web-Server -IncludeManagementTools -IncludeAllSubFeature -Confirm:$false
    Enable-WindowsOptionalFeature -Online -FeatureName IIS-ASPNET45
    Write-SerialPort ("Installation Complete")
}
else {
    Write-SerialPort ("IIS Server (WebServer) is already installed")
}
Write-SerialPort ("FINISHING GCE Startup Script")
From what I could test on my end, it looks like you're putting the Cloud Storage path of the PowerShell script in the windows-startup-script-ps1 metadata key, which is meant for local startup scripts previously added to the VM's image, instead of the windows-startup-script-url key, which supports Cloud Storage objects:
$ gcloud compute instances create your-windows-instance --scopes storage-ro --metadata windows-startup-script-url=gs://mybucket/metadata.ps1
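If the instance already exists, the same key can presumably be set in place rather than at creation time; a sketch reusing the bucket path from the question (the VM still needs read access to that bucket), after which the script should run on the next boot:
$ gcloud compute instances add-metadata your-windows-instance --metadata windows-startup-script-url=gs://mybucket/metadata.ps1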

Can't run remote PowerShell commands in custom CloudFormation AMI (WinServer 2012)

The scenario I'm going to describe works OK on a stock Windows Server 2012 AMI from Amazon; I'm only facing issues with a custom AMI.
I created a custom AMI for Windows Server 2012 by creating an image from an EC2 machine.
Just before creating the custom AMI, I used the Ec2ConfigServiceSetting.exe to make sure:
The instance receives a new machine name based on its IP.
The password of the user is changed on boot.
The instance is provisioned using the script I have in place in UserData.
I also shut down the instance using Sysprep from the Ec2ConfigServiceSetting before creating the image for the custom AMI.
However, when I run a remote PowerShell command (from C# code, if it matters), it doesn't work. From C#-land, the command gets executed OK, but nothing happens in the machine.
Let's say my remote PS command launches a program in the remote machine (agent.exe). My script looks a little bit like:
Set-Location C:\path\in\disk
$env:Path = "C:\some\thing;" + $env:Path
C:\path\to\agent.exe --daemon
Once I log into the Ec2 instance, agent.exe --daemon is NOT running. However, if I first log into the instance, then run the remote PowerShell command, agent.exe --daemon DOES run.
This works perfectly with a stock AMI from Amazon, so I can only assume there's some configuration I'm missing for this to work (and, why does it work if I first log in using RDesktop?)
In the past we found some issues regarding SSL initialization without a user profile, so in our provisioning script (UserData) we do some things someone might consider shenanigans:
net user Administrator hardcoded-password
net user ec2-user hardcoded-password /add
$pwd = (ConvertTo-SecureString 'hardcoded-password' -AsPlainText -Force)
$cred = New-Object System.Management.Automation.PSCredential('Administrator', $pwd)
Start-Process cmd -LoadUserProfile -Credential $cred

Azure DevOps for Azure PostgreSQL

I did some research and found that Azure DevOps does not have any out-of-the-box implementations to support CI/CD for Azure PostgreSQL.
Does anyone have any idea how we can configure Azure DevOps for the PaaS offering of Azure PostgreSQL Database?
Please help.
As of today, there is no out-of-the-box Azure DevOps template to deploy to the PaaS version of Azure Postgres.
I'm not sure if I understand OP's question correctly, and it's been 9 months since OP posted the question. But this appears to be the right answer.
At least one Microsoft-hosted agent on Azure DevOps has PostgreSQL built in; it's just not enabled by default. Enabling and using it is simple.
The "Microsoft-hosted agents" page, https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml, has a table listing the available agents. You can click on the link in the rightmost column to see a list of included software. The link for the agent vs2017-win2016 points to a list of all the software pre-installed on the Microsoft Windows Server 2016 Datacenter. Scrolling down the page, or searching for "postgres", reveals the pertinent information for PostgreSQL.
This example Azure Pipelines job can be used to start PostgreSQL.
- job: foo-postgresql-bar
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - powershell: |
      echo 'PGBIN is ' $env:PGBIN
      echo 'PGDATA is ' $env:PGDATA
      echo 'PGROOT is ' $env:PGROOT
      echo 'Contents of PGBIN'
      ls $env:PGBIN
      Set-Service postgresql-x64-13 -StartupType manual
      Start-Service postgresql-x64-13
      Get-CimInstance win32_service | Where-Object Name -eq "postgresql-x64-13"
    displayName: 'Setup PostgreSQL'
Only the Set-Service and Start-Service commands are required; the rest of the PowerShell script is optional.
The echo and ls commands simply verify the information from the table. The Set-Service command enables the service, the Start-Service command starts the service, and the Get-CimInstance command verifies that it's running.
In a production environment, you may read the return code after Start-Service, instead of using the Get-CimInstance command, in order to verify that the service is running.
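For example, a minimal sketch of that check; note that the -PassThru switch, which makes Start-Service return a service object, is my own variation rather than something from the original answer:
# Enable and start the service, then fail the step if it did not reach the Running state
Set-Service postgresql-x64-13 -StartupType manual
$service = Start-Service postgresql-x64-13 -PassThru
if ($service.Status -ne 'Running') {
    Write-Error "postgresql-x64-13 failed to start"
    exit 1
}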