Azure DevOps for Azure PostgreSQL - postgresql-11

I did some research and found that Azure DevOps does not have any out-of-the-box implementations to support CI/CD for Azure PostgreSQL.
Does anyone have any idea how we can configure Azure DevOps for the PaaS offering of Azure PostgreSQL Database?
Please help.

As of today, there is no out-of-the-box Azure DevOps template to deploy to the PaaS version of Azure Postgres.

I'm not sure if I understand OP's question correctly, and it's been 9 months since OP posted the question. But this appears to be the right answer.
At least one Microsoft-hosted agent on Azure DevOps has PostgreSQL built in, just not enabled by default. Enabling and using it is simple.
The "Microsoft-hosted agents" page, https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml, has a table listing the available agents. You can click on the link in the rightmost column to see a list of included software. The link for the agent vs2017-win2016 points to a list of all the software pre-installed on the Microsoft Windows Server 2016 Datacenter. Scrolling down the page, or searching for "postgres", reveals the pertinent information for PostgreSQL.
This example Azure Pipelines job can be used to start PostgreSQL.
- job: foo_postgresql_bar
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - powershell: |
      echo 'PGBIN is ' $env:PGBIN
      echo 'PGDATA is ' $env:PGDATA
      echo 'PGROOT is ' $env:PGROOT
      echo 'Contents of PGBIN'
      ls $env:PGBIN
      Set-Service postgresql-x64-13 -StartupType manual
      Start-Service postgresql-x64-13
      Get-CimInstance win32_service | Where-Object Name -eq "postgresql-x64-13"
    displayName: 'Setup PostgreSQL'
Only the Set-Service and Start-Service commands are required; the rest of the PowerShell script is optional.
The echo and ls commands simply verify the information from the table. The Set-Service command enables the service, the Start-Service command starts the service, and the Get-CimInstance command verifies that it's running.
In a production environment, you might check for an error from Start-Service, instead of using the Get-CimInstance command, in order to verify that the service is running.
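For example, here is a minimal sketch of that production-style check (the service name comes from the answer above; the exact error handling is an assumption, not part of the original answer):
Set-Service postgresql-x64-13 -StartupType manual
try {
    # -ErrorAction Stop turns a failed start into a terminating error we can catch
    Start-Service postgresql-x64-13 -ErrorAction Stop
} catch {
    Write-Error "PostgreSQL failed to start: $_"
    exit 1
}
# Belt and braces: confirm the reported status before the pipeline continues
if ((Get-Service postgresql-x64-13).Status -ne 'Running') { exit 1 }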

Related

DynamoDB local web shell does not load

I am running DynamoDB locally using the instructions here. To rule out potential Docker networking issues, I am using the "Download Locally" version of the instructions. Before running DynamoDB locally, I run aws configure to set some fake values for the AWS access key, secret, and region; here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running DynamoDB locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
I can confirm that DynamoDB is running locally by listing tables using the AWS CLI:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": []
}
But when I visit http://localhost:8000/shell in my browser, the page does not load and I get an error.
I tried running curl on the shell endpoint to see if I could get a more useful error message:
$ curl http://localhost:8000/shell
{
"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
"Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}%
I tried looking up the error above, but there isn't much setup I can do when running the shell merely in the browser. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: MacOS Big Sur 11.6.2 (20G314)
DynamoDB Local Web Shell was deprecated with version 1.16.X and is no longer available from 1.17.X onward. There are no immediate plans for a new Web Shell to be introduced.
You can download an old version of DynamoDB Local (< 1.17.X) should you wish to use the Web Shell.
Available versions:
aws s3 ls s3://dynamodb-local-frankfurt/
Download the most recent working version with the Web Shell:
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
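From there, a sketch of unpacking and starting the old build (the archive layout is an assumption; the java invocation mirrors the one from the question):
tar -xzf dynamodb_local_2021-04-27.tar.gz
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb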
The next release of DynamoDB Local will have an updated README indicating its deprecation
As I answered in DynamoDB local http://localhost:8000/shell, this appears to be a regression in new versions of DynamoDB Local: the shell mysteriously stopped working, whereas versions from a year ago work.
Somebody should report it to Amazon. If there is some flag that new versions require you to set to enable the shell, it isn't documented anywhere that I can find.
Update Java to the latest version and voilà, it works!

Connect Google Cloud Build to Google Cloud SQL

Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project, that runs in a Container on Google Cloud Run. Pushing my code to Cloud Build (installing the stuff, generating static pages and putting everything in a Container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine because the project can connect to my Cloud SQL proxy. It should also work on Cloud Run, since Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy in that build step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
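With the proxy listening on tcp:5432, whatever runs as <YOUR SCRIPT> can reach the database at 127.0.0.1:5432 as if it were local. As a sketch, the script portion might look like this (the sleep, the psql client, and the DB_PASS variable, user, and database name are assumptions, not part of the original answer):
sleep 2   # give the proxy a moment to establish the tunnel
PGPASSWORD="$$DB_PASS" psql -h 127.0.0.1 -p 5432 -U postgres -d mydb -c 'SELECT 1'
Note that Cloud Build treats $$ as an escaped literal $, so DB_PASS is expanded by the shell rather than by Cloud Build substitutions.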
App Engine has an exec wrapper which has the benefit of proxying your Cloud SQL connection for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it will be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for Cloud Build to access Cloud SQL.
steps:
- id: 'Connect to DB using appengine wrapper to help'
  name: gcr.io/google-appengine/exec-wrapper
  args:
    [
      '-i', # The image you want to connect to the db from
      '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
      '-s', # The postgres instance
      '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
      '-e', # Get your secrets here...
      'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
      '--', # And then the command you want to run, in my case a database migration
      'python',
      'manage.py',
      'migrate',
    ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* unless you're willing to pay more and get very stung by Beta software, in which case you can use cloud build workers (at the time of writing are in Beta, anyway... I'll come back and update if they make it into production and fix the issues)
The env vars (including DB connection details) are not available during build steps.
However, you can use Docker's ENTRYPOINT to run commands when the container runs (after the build steps complete).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and by pointing ENTRYPOINT at a file/command I was able to run migrations (which require DB connection details that are not available during the build process).
"How to" part is pretty brief and is located here : https://stackoverflow.com/a/69088911/867451

How do I continue working with Amplify on a new machine?

I'm using React Native for my project. On my old machine, when I ran amplify status, I had Auth, Api and Storage services listed.
I moved to my new machine, installed node, watchman, brew, etc., then navigated to my React Native project and ran react-native run-ios, and voilà, my app is running. All the calls to my AWS Api, Auth and Storage are working perfectly.
Now I can run some amplify commands, such as amplify status. I tried amplify env add; here's what I got:
Users-MBP-2:projectname username$ amplify env add
Note: It is recommended to run this command from the root of your app directory
? Do you want to use an existing environment? Yes
? Choose the environment you would like to use: dev
Using default provider awscloudformation
✖ There was an error initializing your environment.
init failed
Error: ENOENT: no such file or directory, open '/Users/username/.aws/credentials'
at Object.openSync (fs.js:462:3)
at Proxy.readFileSync (fs.js:364:35)
at Object.readFileSync (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:95:26)
at IniLoader.parseFile (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:6:47)
at IniLoader.loadFrom (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:56:30)
at Config.region (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/node_loader.js:100:36)
at Config.set (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:507:39)
at Config.<anonymous> (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:342:12)
at Config.each (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:507:32)
at new Config (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:341:19) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/Users/username/.aws/credentials'
}
Do you think the credentials info needs to be brought over and configured on my new machine?
When I run amplify configure project, it's like doing an amplify init and building a project from scratch. I'm being asked:
? Enter a name for the project: ProjectName
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using (Use arrow keys)
angular
ember
ionic
react
❯ react-native
vue
none
etc....
I also already have a region, username, access key, secret access key, etc.
I do not want to replace or ruin anything in my current backend or current project! What's going on?
Ensure amplify-cli is installed and you're logged in with your AWS details.
npm install -g @aws-amplify/cli
amplify configure
Running amplify configure mainly gives the CLI knowledge of your AWS account so subsequent commands have access to it.
If you get amplify: command not found errors, try restarting your terminal. If you still have no luck, check that amplify has been added to your PATH variable.
Run amplify env add, but choose an existing environment. This lets you choose the environment you created on your other machine and pull those settings down to your new machine.
amplify env add
? Do you want to use an existing environment? Yes
Production
Follow up with:
amplify pull
You don't need to run amplify add auth again or anything. All of that will pull down automatically after you've done the above.
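Putting the whole flow together on the new machine (the project path and environment name are placeholders):
npm install -g @aws-amplify/cli
amplify configure        # one-time: point the CLI at your AWS account and credentials
cd /path/to/your/project
amplify env add          # answer Yes to "use an existing environment" and pick yours
amplify pull             # sync the backend config down to this machine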
You DO NOT need to redo all of the configuration, but you do need some of it:
Install the Amplify CLI: npm install -g @aws-amplify/cli
Use amplify pull:
https://docs.amplify.aws/cli/start#amplify-pull
Follow the rest of the steps:
-- provide the accessKeyId and secretAccessKey
-- provide the region
-- select the Amplify project
and then the rest of the app-related settings like IDE, directory, etc.
I tried every solution, then I found this (on a MacBook):
% sudo -i
Password:
~ root# npm install -g @aws-amplify/cli
-- Ctrl+D to exit from the root user
% amplify pull --appId xxxx --envName yyyy
Note: To get --appId xxxx --envName yyyy:
Log in to the AWS console. Choose AWS Amplify. Click your app. Go to Backend environments. Find the backend environment you wish to pull. Click Edit backend, then at the top right click 'Local setup instructions' (amplify pull --appId YOUR_APP_ID --envName YOUR_ENV_NAME).
Wait until it asks you to verify your Amplify login.
✔ Successfully received Amplify Studio tokens.
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
✔ Synced UI components.
? Do you plan on modifying this backend? Yes
⠴ Building resource api/xxxx
✅ GraphQL schema compiled successfully.
Edit your schema at ....
✔ Successfully pulled backend environment yyyy from the cloud.
✅ Successfully pulled backend environment staging from the cloud.
Run 'amplify pull' to sync future upstream changes.
% amplify pull
% npm install
% npm start
Hope this helps everyone!!
Happy Coding :)

Unable to set tags for azure virtual machine from Azure automation runbook

I am using the code below to set tags on my Azure virtual machine. The code works when I run it on my laptop (the VMs get tagged). However, when I run the same code from an Azure Automation runbook, the virtual machines do not get tagged. No errors or warnings are observed after the runbook execution.
Code:
$resource_group = "agentinstall-poc"
# Read the VM's existing tags, append a new one, and write them back
$tags = (Get-AzureRmResource -ResourceGroupName $resource_group -Name "client-2").Tags
$tags += @{manju="rao"}
Set-AzureRmResource -ResourceGroupName $resource_group -Name "client-2" -ResourceType "Microsoft.Compute/VirtualMachines" -Tag $tags -Force -ApiVersion '2015-06-15'
The problem was that the PowerShell modules in the Azure Automation account are not updated by default (they are roughly v1.0 when the account is created). I had to update the modules, and then the tagging started working.
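After updating the modules, a quick way to confirm from the runbook that the tags were applied is to read them back, e.g. with this sketch using the same AzureRM cmdlets as the question:
$check = Get-AzureRmResource -ResourceGroupName $resource_group -Name "client-2" -ResourceType "Microsoft.Compute/VirtualMachines"
$check.Tags   # should now include manju = rao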

MUPX Production Logs MeteorJS

I have a MeteorJS app deployed using Mupx, the less stable version of Mup that uses Docker to deploy. Now that I have deployed it, I am wondering how to get access to the server logs.
In the non-Docker version, you apparently just run mup logs -f, but it's not properly documented how to do this with the Docker variant.
Any suggestions?
UPDATE:
I have since discovered you can use docker commands directly:
docker ps will show the ID of your application's container.
docker logs -f ${id} will tail the logs.
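If the app's container is the only one running, the two steps can be combined into one command (a sketch; docker ps -q prints just the container IDs):
docker logs -f $(docker ps -q)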
The Mupx GitHub page claims that mupx logs -f should work.