List Power BI workspace collection keys from an ARM template

When deploying various Azure components with ARM templates you can use template functions. One of them, listKeys, lets you return through the outputs the keys that were created during the deployment, for example when deploying a storage account.
Is there a way to get the keys when deploying a Power BI workspace collection?

According to the link you mentioned, if we want to use the listKeys function we need to know the resource name and the API version.
From the Azure Power BI workspace collection Get Access Keys API, we can get the resource name
Microsoft.PowerBI/workspaceCollections/{workspaceCollectionName} and the API version "2016-01-29".
So please try the following code; it works correctly for me.
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
Check the created Power BI service in the Azure portal.
The whole ARM template I used:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceCollections_tompowerBItest": {
      "defaultValue": "tomjustforbitest",
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.PowerBI/workspaceCollections",
      "sku": {
        "name": "S1",
        "tier": "Standard"
      },
      "tags": {},
      "name": "[parameters('workspaceCollections_tompowerBItest')]",
      "apiVersion": "2016-01-29",
      "location": "South Central US"
    }
  ],
  "outputs": {
    "exampleOutput": {
      "value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
      "type": "object"
    }
  }
}
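Once the deployment finishes, the keys are available in the deployment outputs. As a rough sketch (the resource group and deployment names below are placeholders), you could read them with a recent Azure CLI:
# Read the listKeys output from the finished deployment (names are placeholders).
az deployment group show \
  --resource-group myResourceGroup \
  --name myDeployment \
  --query properties.outputs.exampleOutput.value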

Callback url value in email for verifying account

This is an extension of the issue Unable to validate account confirmation in WSO2 version 6.0.
I have the same regex pattern in my self-registration section. But when I create users using the REST API, the link I get in the email is
https://localhost:9443/accountrecoveryendpoint/confirmregistration.do?confirmation=ce790759-1086-4870-a673-35b5927351d8&userstoredomain=PRIMARY&username=samyu&tenantdomain=carbon.super&callback={{callback}}
and when I create the user manually, the link I get is
https://localhost:9443/accountrecoveryendpoint/confirmregistration.do?confirmation=dff024e7-d7e7-48ef-bb60-1c1c4d6f3b1c&userstoredomain=PRIMARY&username=sam&tenantdomain=carbon.super&callback=https%3A%2F%2Flocalhost%3A9443%2Fmyaccount.
So the difference between these two links is the callback. What configuration should I make in order to get the callback value?
When you are trying this from the recovery portal, the callback value is set automatically. If you are trying with the REST API, you need to include it in the request. The following is a sample JSON payload:
{
  "user": {
    "username": "kim",
    "realm": "PRIMARY",
    "password": "Password12!",
    "claims": [
      {
        "uri": "http://wso2.org/claims/givenname",
        "value": "kim"
      },
      {
        "uri": "http://wso2.org/claims/emailaddress",
        "value": "kimAndie@gmail.com"
      },
      {
        "uri": "http://wso2.org/claims/lastname",
        "value": "Anderson"
      },
      {
        "uri": "http://wso2.org/claims/mobile",
        "value": "+947729465558"
      }
    ]
  },
  "properties": [
    {
      "key": "callback",
      "value": "https://localhost:9443/myaccount"
    }
  ]
}
Notice how you need to send the callback as a property when using the REST API.
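For reference, a rough sketch of posting that payload with cURL (the self-registration endpoint path, admin credentials and file name below are assumptions; adjust them to your deployment):
# Sketch: self-register a user with the callback property included.
# payload.json contains the JSON payload shown above; credentials are placeholders.
curl -k -X POST https://localhost:9443/api/identity/user/v1.0/me \
  -H "Content-Type: application/json" \
  -u admin:admin \
  -d @payload.json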

In Azure DevOps, is it possible to enumerate children pipeline build artifacts recursively with API?

In Azure DevOps, I want to get a recursive list of artifact elements from a pipeline build. It would be nice if I didn't have to download the whole artifact root object. Does anyone know how to do this with the current API?
The portal already supports this feature in the pipeline artifacts view. You can open and browse child artifacts, with the ability to download. The API, however, does not seem to support this use case.
Current API
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/Artifacts/List?view=azure-devops-rest-6.0#buildartifact
I was able to find a request for the feature, but I'm not sure if it will be implemented soon.
https://developercommunity.visualstudio.com/idea/1300697/api-list-artifacts-enumerate-recursively-same-as-w.html
Has anyone else been able to work around this?
This is not documented, but you can use the same API call that the Azure DevOps portal makes. So it would be:
POST https://dev.azure.com/{org}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview
Minimal JSON payload:
{
  "contributionIds": [
    "ms.vss-build-web.run-artifacts-data-provider"
  ],
  "dataProviderContext": {
    "properties": {
      "artifactId": 111, // obtain this from https://dev.azure.com/{org}/{proj}/_apis/build/builds/####/artifacts
      "buildId": 1234,
      "sourcePage": {
        "routeValues": {
          "project": "[ADOProjectNameHere]"
        }
      }
    }
  }
}
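As a sketch, you can call this endpoint with cURL and a Personal Access Token (the organization placeholder, token variable and payload file name are assumptions):
# Sketch: call the HierarchyQuery endpoint with basic auth and a PAT.
curl -X POST "https://dev.azure.com/{org}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview" \
  -H "Content-Type: application/json" \
  -u ":$AZURE_DEVOPS_PAT" \
  -d @payload.json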
In my case it was:
https://dev.azure.com/thecodemanual/_apis/Contribution/HierarchyQuery/project/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2?api-version=5.0-preview.1
With a payload similar to this one:
{
  "contributionIds": [
    "ms.vss-build-web.run-artifacts-data-provider"
  ],
  "dataProviderContext": {
    "properties": {
      "artifactId": 1158,
      "buildId": 7875,
      "sourcePage": {
        "url": "https://dev.azure.com/thecodemanual/DevOps%20Manual/_build/results?buildId=7875&view=artifacts&pathAsName=false&type=publishedArtifacts",
        "routeId": "ms.vss-build-web.ci-results-hub-route",
        "routeValues": {
          "project": "DevOps Manual",
          "viewname": "build-results",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "be1a2b52-5ed1-4713-8508-ed226307f634 (thecodemanual)"
        }
      }
    }
  }
}
You would get a response like this:
{
  "dataProviderSharedData": {},
  "dataProviders": {
    "ms.vss-web.component-data": {},
    "ms.vss-web.shared-data": null,
    "ms.vss-build-web.run-artifacts-data-provider": {
      "buildId": 7875,
      "buildNumber": "20201114.2",
      "definitionId": 72,
      "definitionName": "kmadof.hadar",
      "items": [
        {
          "artifactId": 1158,
          "name": "/hadar.zip",
          "sourcePath": "/hadar.zip",
          "size": 1330975,
          "type": "file",
          "items": null
        },
        {
          "artifactId": 1158,
          "name": "/scripts",
          "sourcePath": "/scripts",
          "size": 843,
          "type": "directory",
          "items": [
            {
              "artifactId": 1158,
              "name": "/scripts/check-hadar-settings.ps1",
              "sourcePath": "/scripts/check-hadar-settings.ps1",
              "size": 336,
              "type": "file",
              "items": null
            },
            {
              "artifactId": 1158,
              "name": "/scripts/check-webapp-settings.ps1",
              "sourcePath": "/scripts/check-webapp-settings.ps1",
              "size": 507,
              "type": "file",
              "items": null
            }
          ]
        }
      ]
    }
  }
}
You need to use a fully scoped Personal Access Token (PAT) to authorize your request.
You can try the steps below:
Execute the "Artifacts - Get Artifact" endpoint of the Artifacts API. From the response body, you can see the value of "downloadUrl", which looks like this:
https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip
This URL is used to download (GET) the whole artifact as a ZIP file.
If you want to download a specific sub-folder or file in the artifact:
To download a specific sub-folder, you can execute the following endpoint.
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath={/path/to/the/folder}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath=/ef-tools
This will download the folder "ef-tools" and its content as a ZIP file from your artifact "drop".
To download a specific file in the artifact, you can execute the following endpoint.
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath={/path/to/the/file}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath=/ef-tools/migrate.exe
This will download the file "ef-tools/migrate.exe" from your artifact "drop".
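As a rough sketch, the download itself can also be scripted with cURL and a PAT (the URL segments below are the placeholders from the downloadUrl above; the token variable is an assumption):
# Sketch: download a single file from the artifact using basic auth with a PAT.
curl -u ":$AZURE_DEVOPS_PAT" -o migrate.exe \
  "https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath=/ef-tools/migrate.exe"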

GCP Dataproc has Druid available in alpha. How to load segments?

The Dataproc page describing Druid support has no section on how to load data into the cluster. I've been trying to do this using Google Cloud Storage, but I don't know how to set up a spec for it that works. I'd expect the "firehose" section to have some Google-specific reference to a bucket, but there are no examples of how to do this.
What is the method to load data into Druid running on GCP Dataproc straight out of the box?
I haven't used the Dataproc version of Druid, but I have a small cluster running on a Google Compute Engine VM. The way I ingest data into it from GCS is by using the Google Cloud Storage Druid extension: https://druid.apache.org/docs/latest/development/extensions-core/google.html
To enable the extension you need to add it to the extension list in your Druid common.properties file:
druid.extensions.loadList=["druid-google-extensions", "postgresql-metadata-storage"]
To ingest data from GCS I send an HTTP POST request to http://druid-overlord-host:8081/druid/indexer/v1/task
The POST request body contains a JSON ingestion spec (see the ["ioConfig"]["firehose"] section):
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "daily_xport_test",
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "MONTH",
        "queryGranularity": "NONE",
        "rollup": false
      },
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": {
            "column": "dateday",
            "format": "auto"
          },
          "dimensionsSpec": {
            "dimensions": [{
                "type": "string",
                "name": "id",
                "createBitmapIndex": true
              },
              {
                "type": "long",
                "name": "clicks_count_total"
              },
              {
                "type": "long",
                "name": "ctr"
              },
              "deleted",
              "device_type",
              "target_url"
            ]
          }
        }
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "firehose": {
        "type": "static-google-blobstore",
        "blobs": [{
          "bucket": "data-test",
          "path": "/sample_data/daily_export_18092019/000000000000.json.gz"
        }],
        "filter": "*.json.gz$"
      },
      "appendToExisting": false
    },
    "tuningConfig": {
      "type": "index_parallel",
      "maxNumSubTasks": 1,
      "maxRowsInMemory": 1000000,
      "pushTimeout": 0,
      "maxRetry": 3,
      "taskStatusCheckPeriodMs": 1000,
      "chatHandlerTimeout": "PT10S",
      "chatHandlerNumRetries": 5
    }
  }
}
Example cURL command to start an ingestion task in Druid (spec.json contains the JSON from the previous section):
curl -X 'POST' -H 'Content-Type:application/json' -d @spec.json http://druid-overlord-host:8081/druid/indexer/v1/task
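The overlord responds with a task ID; as a rough sketch (host and task ID are placeholders), you can then poll the task status:
# Sketch: check the status of the submitted ingestion task.
curl http://druid-overlord-host:8081/druid/indexer/v1/task/{taskId}/status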

Create tagged vm instance snapshots

I'm trying to create snapshots of my VM instance on Google Cloud Platform with a custom tag, but it's currently not working as expected. I'm sending the following POST request body to the API, referring to this documentation: Google docs
{
  "name": "<SnapshotName>",
  "labels": {
    "<LabelKey>": "<LabelValue>"
  }
}
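(For reference, the full request is roughly the following, assuming the disks.createSnapshot endpoint; the project, zone, disk and auth token are placeholders.)
# Sketch of the request being sent; placeholders for project, zone, disk and token.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"name": "<SnapshotName>", "labels": {"<LabelKey>": "<LabelValue>"}}' \
  "https://compute.googleapis.com/compute/v1/projects/<project>/zones/<zone>/disks/<disk>/createSnapshot"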
This gives me a positive 200 OK response, but no label appears.
{
  "kind": "compute#operation",
  "id": "<id>",
  "name": "<name>",
  "zone": "<Zone Link>",
  "operationType": "createSnapshot",
  "targetLink": "<Target Link>",
  "targetId": "<Target ID>",
  "status": "PENDING",
  "user": "<User>",
  "progress": 0,
  "insertTime": "<Time>",
  "selfLink": "<Self Link>"
}
Additionally, I tried to use the syntax described in the "Labeling Resources" documentation: Google Labeling Resources
{
  "name": "<SnapshotName>",
  "labels": [{
    "<Key>": "<LabelKey>",
    "<Value>": "<LabelValue>"
  }]
}
This gave me the same result.
In the web interface it's possible to create snapshots and label them manually, but I would like to create them with a custom label via the API.
Am I doing something wrong, or is it just broken?

Amazon Redshift - Unload to S3 - Dynamic S3 file name

I have been using the UNLOAD statement in Redshift for a while now; it makes it easier to dump the file to S3 and then allow people to analyse it.
The time has come to try to automate it. We have Amazon Data Pipeline running for several tasks and I wanted to run a SQLActivity to execute UNLOAD automatically. I use a SQL script hosted in S3.
The query itself is correct, but what I have been trying to figure out is how I can dynamically assign the name of the file. For example:
UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
doesn't work and of course I suspect that you can't execute functions (to_char) in the "TO" line. Is there any other way I can do it?
And if UNLOAD is not the way, do I have any other options for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?
The only thing I thought could work (but I'm not sure) is, instead of using the script file, to copy the script into the Script option in SQLActivity (at the moment it points to a file) and reference {#ScheduleStartTime}.
Why not use RedshiftCopyActivity to copy from Redshift to S3? Input is RedshiftDataNode and output is S3DataNode, where you can specify an expression for directoryPath.
You can also specify the transformSql property in RedshiftCopyActivity to override the default query, which is select * from the input Redshift table.
Sample pipeline:
{
  "objects": [{
    "id": "CSVId1",
    "name": "DefaultCSV1",
    "type": "CSV"
  }, {
    "id": "RedshiftDatabaseId1",
    "databaseName": "dbname",
    "username": "user",
    "name": "DefaultRedshiftDatabase1",
    "*password": "password",
    "type": "RedshiftDatabase",
    "clusterId": "redshiftclusterId"
  }, {
    "id": "Default",
    "scheduleType": "timeseries",
    "failureAndRerunMode": "CASCADE",
    "name": "Default",
    "role": "DataPipelineDefaultRole",
    "resourceRole": "DataPipelineDefaultResourceRole"
  }, {
    "id": "RedshiftDataNodeId1",
    "schedule": {
      "ref": "ScheduleId1"
    },
    "tableName": "orders",
    "name": "DefaultRedshiftDataNode1",
    "type": "RedshiftDataNode",
    "database": {
      "ref": "RedshiftDatabaseId1"
    }
  }, {
    "id": "Ec2ResourceId1",
    "schedule": {
      "ref": "ScheduleId1"
    },
    "securityGroups": "MySecurityGroup",
    "name": "DefaultEc2Resource1",
    "role": "DataPipelineDefaultRole",
    "logUri": "s3://myLogs",
    "resourceRole": "DataPipelineDefaultResourceRole",
    "type": "Ec2Resource"
  }, {
    "myComment": "This object is used to control the task schedule.",
    "id": "DefaultSchedule1",
    "name": "RunOnce",
    "occurrences": "1",
    "period": "1 Day",
    "type": "Schedule",
    "startAt": "FIRST_ACTIVATION_DATE_TIME"
  }, {
    "id": "S3DataNodeId1",
    "schedule": {
      "ref": "ScheduleId1"
    },
    "directoryPath": "s3://my-bucket/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
    "name": "DefaultS3DataNode1",
    "dataFormat": {
      "ref": "CSVId1"
    },
    "type": "S3DataNode"
  }, {
    "id": "RedshiftCopyActivityId1",
    "output": {
      "ref": "S3DataNodeId1"
    },
    "input": {
      "ref": "RedshiftDataNodeId1"
    },
    "schedule": {
      "ref": "ScheduleId1"
    },
    "name": "DefaultRedshiftCopyActivity1",
    "runsOn": {
      "ref": "Ec2ResourceId1"
    },
    "type": "RedshiftCopyActivity"
  }]
}
Are you able to SSH into the cluster? If so, I would suggest writing a shell script where you can create variables and whatnot, then pass those variables into a connection's statement query.
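A rough sketch of that approach, assuming psql access to the cluster (the endpoint, database, table and bucket are placeholders):
#!/bin/bash
# Build a date-stamped S3 prefix, then run the UNLOAD through psql.
S3_PATH="s3://my-bucket/$(date +%Y-%m-%d)/"
psql "host=my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com port=5439 dbname=mydb user=myuser" \
  -c "UNLOAD ('select * from my_table')
      TO '${S3_PATH}'
      WITH CREDENTIALS '<credentials>'
      ALLOWOVERWRITE
      PARALLEL OFF"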
You can also do this by using a Redshift procedural wrapper around the UNLOAD statement and dynamically deriving the S3 path name.
Execute the dynamic query: in your job, call the procedure that dynamically creates the UNLOAD statement and executes it.
This way you can avoid the other services, but it depends on what kind of use case you are working on.
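A minimal sketch of that idea, assuming a Redshift stored procedure (the procedure name, table and credentials are placeholders, not the poster's actual code):
-- Sketch: build the UNLOAD statement with a dynamic S3 path and run it via EXECUTE.
CREATE OR REPLACE PROCEDURE unload_daily_export()
AS $$
DECLARE
  sql_text VARCHAR(65535);
BEGIN
  sql_text := 'UNLOAD (''select * from my_table'') '
           || 'TO ''s3://my-bucket/' || to_char(current_date, 'YYYY-MM-DD') || '/'' '
           || 'WITH CREDENTIALS ''<credentials>'' '
           || 'ALLOWOVERWRITE PARALLEL OFF';
  EXECUTE sql_text;
END;
$$ LANGUAGE plpgsql;

-- The scheduled job then only needs to run:
CALL unload_daily_export();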