How can I install the sample AdventureWorksDW database on SQL DW using an ARM script?

I can create a SQL DW using ARM with no problem. However, the portal supports an option to also install a sample database, e.g. AdventureWorksDW. How can I do the equivalent using an ARM script?
BTW, I clicked on "automation options" in the portal and it shows an ARM script with an extension that is probably the piece that installs the sample database, but it asks for some parameters (e.g. storageKey, storageUri) that I don't know.
Here's what I think is the relevant portion of the ARM JSON:
"name": "PolybaseImport",
"type": "extensions",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[concat('Microsoft.Sql/servers/', parameters('serverName'), '/databases/', parameters('databaseName'))]"
],
"properties": {
"storageKeyType": "[parameters('storageKeyType')]",
"storageKey": "[parameters('storageKey')]",
"storageUri": "[parameters('storageUri')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"operationMode": "PolybaseImport"
}
More specifically, looking at the ARM deploy script generated from the portal, here are the key elements that I need to know in order to auto deploy using my own ARM script:
…
"storageKey": {
"value": null  <- without knowing this, I can’t deploy.
},
"storageKeyType": {
"value": "SharedAccessKey"
},
"storageUri": {
"value": https://sqldwsamplesdefault.blob.core.windows.net/adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml  <- this is not a public blob, so can’t look at it
},
…
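For illustration, here's roughly how I'm passing those parameters when deploying (a minimal Python sketch using azure-identity and azure-mgmt-resource; the subscription, resource group, deployment name, and template file name are placeholders, and storageKey remains the unknown):

import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The template exported from the portal's "automation options".
with open("azuredeploy.json") as f:
    template = json.load(f)

client.deployments.begin_create_or_update(
    "my-resource-group",    # placeholder
    "sqldw-sample-deploy",  # placeholder
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {
                # Other template parameters (serverName, databaseName,
                # administratorLogin, ...) omitted for brevity.
                "storageKeyType": {"value": "SharedAccessKey"},
                "storageKey": {"value": "<unknown>"},  # the missing piece
                "storageUri": {"value": "https://sqldwsamplesdefault.blob.core.windows.net/adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml"},
            },
        }
    },
).result()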

AFAIK that's currently not possible. The portal kicks off a workflow that provisions the new DW resources, generates the sample DW schema, and then loads the data. The sample is stored in a non-public blob, so you won't be able to access it.
I don't think it would be hard to make it available publicly, but it does take some work, so perhaps you should add a suggestion here: https://feedback.azure.com/forums/307516-sql-data-warehouse
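You can confirm the blob is private yourself: an anonymous GET against the manifest URL fails. A minimal sketch with the requests library:

import requests

# The manifest URL from the generated template. Azure returns 404
# (ResourceNotFound) for anonymous requests to blobs in private
# containers rather than revealing whether the blob exists.
url = ("https://sqldwsamplesdefault.blob.core.windows.net/"
       "adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml")
print(requests.get(url).status_code)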

Related

Streaming Dataflow Power BI

I am trying to do a very simple streaming dataflow as described here:
https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-streaming
I was able to get it to work briefly with some manually typed-in data, but now it isn't populating any more. I am dumping data into a blob directory, and the data is not showing up in the preview. I am getting notifications that the refresh is failing:
Streaming model definition is syntactically or semantically incorrect.
I keep dumping data into the directory, and nothing shows up in the preview. I've tried turning the dataflow on and off; it makes no difference. Nothing shows up in Power BI: not in the data preview, not in the input box, not in the output box.
The data is of the form:
[
    {
        "id": "1",
        "amount": "3"
    },
    {
        "id": "2",
        "amount": "4"
    }
]
Although it also fails with data of the form:
{
    "id": "1",
    "amount": "3"
}
What would cause such an error message?

substitution variable $BRANCH_NAME gives nothing while building

I'm building Docker images using a Cloud Build trigger. Previously, $BRANCH_NAME was working, but now it's giving null.
Thanks in advance.
I will post my comment as an answer, as it is too long for the comment section.
According to this documentation, you should be able to use the $BRANCH_NAME default substitution for builds invoked by triggers.
In the same documentation it is stated that:
"If a default substitution is not available (such as with sourceless builds, or with builds that use storage source), then occurrences of the missing variable are replaced with an empty string."
I assume this might be the reason you are receiving NULL.
Have you performed any changes? Could you please provide some further information, such as your .yaml/.json file, your trigger configuration and the error you are receiving?
The problem was not with $BRANCH_NAME; I was using the resulting JSON to fetch the branch name, like:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"branchName": "integration"
}
}
and
I was using build_details['source']['repoSource']['branchName']
but now it's giving something like
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"commitSha": "ght8939jj5jd9jfjfjigk0949jh8wh4w"
}
},
so now I'm using build_details['substitutions']['BRANCH_NAME'] and it's working fine.
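For anyone hitting the same thing, here's that lookup as a minimal sketch, assuming build_details is the parsed JSON of a Cloud Build "Build" resource:

def get_branch_name(build_details):
    # Preferred: the BRANCH_NAME default substitution, which is
    # populated for builds invoked by triggers.
    branch = build_details.get('substitutions', {}).get('BRANCH_NAME')
    if branch:
        return branch
    # Fallback: the trigger's repoSource, which may carry branchName for
    # branch-triggered builds but only commitSha for others.
    return build_details.get('source', {}).get('repoSource', {}).get('branchName')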

How to lock files with TFVC REST API?

I'm trying to create a C++ library that uses the TFVC REST API so that I can support TFS from within a program.
I've been successful using rapidjson and chilkat to build and send requests for a lot of the functionality so far: add, delete, rename, etc.
My issue is that I cannot seem to apply any locks. I want users to be able to check out a file, and to do so a lock must be applied.
This is for a TFS 2017 server. Here's a link to the TFVC REST API docs:
https://learn.microsoft.com/en-us/rest/api/azure/devops/tfvc/changesets/create?view=azure-devops-rest-5.0#versioncontrolchangetype
Here's my test:
{
    "changes": [
        {
            "changeType": "lock",
            "item": {
                "contentMetadata": {
                    "contentType": "rawText",
                    "encoding": 1200
                },
                "path": "$/TFStestAT/TextFile1.txt",
                "version": "131"
            }
        }
    ],
    "comment": "(sample) Locking a file via Advanced REST Client"
}
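(For reference, my actual code builds this JSON with rapidjson and sends it with chilkat, but the request can be reproduced with a minimal Python sketch like this; the server URL, collection, and credentials are placeholders:)

import requests

# Hypothetical on-prem TFS 2017 endpoint; adjust server, collection,
# and api-version to match your instance.
url = ("http://tfs-server:8080/tfs/DefaultCollection"
       "/_apis/tfvc/changesets?api-version=3.2")
payload = {
    "changes": [{
        "changeType": "lock",
        "item": {
            "contentMetadata": {"contentType": "rawText", "encoding": 1200},
            "path": "$/TFStestAT/TextFile1.txt",
            "version": "131",
        },
    }],
    "comment": "(sample) Locking a file via Advanced REST Client",
}
# TFS 2017 accepts a personal access token via basic auth.
resp = requests.post(url, json=payload, auth=("user", "<personal-access-token>"))
print(resp.status_code, resp.text)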
Here's the response:
{
    "$id": "1",
    "innerException": null,
    "message": "The specified change type Lock is not supported.",
    "typeName": "System.ArgumentException, mscorlib",
    "typeKey": "ArgumentException",
    "errorCode": 0,
    "eventId": 0
}
With no "checkout" changeType and Lock not being supported, how should I go about locking?
Any tips would be greatly appreciated!

Is there any way to determine what IAM permissions I actually need for a CloudFormation template?

Just wondering what's the best practice for determining what permissions I should give my CloudFormation template?
After some time trying to grant it the minimal permissions it requires, I find that that's really time-consuming and error-prone. I note that depending on the state of my stack (brand new vs. some updates vs. delete), I will need different permissions.
I guess it should be possible for there to be some parser that, given a CloudFormation template, can determine the minimum set of permissions it requires?
Maybe I can give ec2:* access to resources tagged Cost Center: My Project Name? Is this OK? But I wonder what happens when I change my project name, for example.
Alternatively, is it OK to give, say, ec2:* access based on the assumption that the CloudFormation parts are usually only executed off CodeCommit/GitHub/CodePipeline and are not something that is likely to be public or easy to hack? Though this sounds like a flawed statement to me...
In the short term, you can use aws-leastprivilege. But it doesn't support every resource type.
For the long term: as mentioned in this 2019 re:Invent talk, CloudFormation is working towards open-sourcing and migrating most of its resource types to a new public resource schema. One of the benefits of this is that you'll be able to see the permissions required to perform each operation.
E.g. for AWS::ImageBuilder::Image, the schema says:
"handlers": {
"create": {
"permissions": [
"iam:GetRole",
"imagebuilder:GetImageRecipe",
"imagebuilder:GetInfrastructureConfiguration",
"imagebuilder:GetDistributionConfiguration",
"imagebuilder:GetImage",
"imagebuilder:CreateImage",
"imagebuilder:TagResource"
]
},
"read": {
"permissions": [
"imagebuilder:GetImage"
]
},
"delete": {
"permissions": [
"imagebuilder:GetImage",
"imagebuilder:DeleteImage",
"imagebuilder:UnTagResource"
]
},
"list": {
"permissions": [
"imagebuilder:ListImages"
]
}
}
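Once the schemas are published, you could assemble a starting-point policy from them. A minimal sketch, assuming the schema documents (one JSON file per resource type, like the AWS::ImageBuilder::Image example above) have been downloaded into a local schemas/ directory:

import glob
import json

def collect_permissions(schema_dir="schemas", ops=("create", "update", "delete")):
    """Gather the IAM actions declared by each resource type's handlers."""
    perms = set()
    for path in glob.glob(f"{schema_dir}/*.json"):
        with open(path) as f:
            handlers = json.load(f).get("handlers", {})
        for op in ops:
            perms.update(handlers.get(op, {}).get("permissions", []))
    return sorted(perms)

# Print a starting-point policy; scope Resource down before real use.
print(json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": collect_permissions(),
        "Resource": "*",
    }],
}, indent=2))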

Dataflow Datastore to GCS Template, javascriptTextTransformFunctionName Error

I am using the Cloud Datastore to Cloud Storage Text template from Cloud Dataflow.
My Python code correctly submits the request and uses javascriptTextTransformFunctionName to run the correct function from my Google Cloud Storage bucket.
Here is a minimized version of the function that is running:
function format(inJson) {
    var output = {};
    output.administrator = inJson.properties.administrator.keyValue.path[0].id;
    return output;
}
And here is the JSON I am looking to format, cut down to omit the other children of "properties":
"properties": {
"administrator": {
"keyValue": {
"path": [
{
"kind": "Kind",
"id": "5706504271298560"
}
]
}
}
}
}
And I am getting this exception:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException:
javax.script.ScriptException: TypeError: Cannot read property "keyValue" from undefined in <eval> at line number 5
I understand what the error is saying, but I don't know why it's happening. If you take the format function and that JSON and run them through your browser console, you can easily test and see that it pulls out and returns an object with "administrator" equal to "5706504271298560".
I have not found the solution to your problem, but I hope this is of some help:
I found this post and this one with the same issue. The first one was fixed by installing the NodeJS library, the second by changing the kind of quotes used for Java.type().
The official Nashorn docs say to call Java.type with a fully qualified Java class name, and then call the returned function to instantiate the class from JavaScript.