How do you model an action in RAML?

I would like to understand what is the best approach for modeling an action on a resource using RAML.
E.g. I have the following resource definition in RAML:
/orders:
  type: collection
  get:
    description: Gets all orders
  post:
    description: Creates a new order
  /{orderId}:
    type: element
    get:
      description: Gets an order
    put:
      description: Updates an order
    delete:
      description: Deletes an order
Now for an order I would like to model an "approve" action. Is there a best practice for doing this with RAML?

You could PUT or PATCH to set some "Approval" field to true in your model.
You could also think about the approval as a resource in its own right. For example:
/orders:
  type: collection
  get:
  post:
  /{orderId}:
    type: element
    get:
    put:
    delete:
    /approval:
      post:
      get:
      ...
It's not a RAML best practice as such; it's more about how you represent your model in REST.

You could use a PATCH request with a "patch document" that raises the approved flag on an order.
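For example, assuming the order representation exposes an approved field (the field name here is hypothetical), such a patch document could be expressed as JSON Patch (RFC 6902):
PATCH /orders/123 HTTP/1.1
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/approved", "value": true }
]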

Creating Batch Operations with AWS Amplify [GraphQL, DataStore, AppSync]

I've currently been handling batch operations with a for loop, but obviously, this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will take 1000+ putItems.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following the steps mentioned there, I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  company: String
  phone: String
  email: String
}

type Mutation {
  batchDelete(ids: [ID]): [Client]
}
I then create two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
  $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Clients": $utils.toJson($clientsdata)
  }
}
and then as per the tutorial a "simple pass through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, when I now run the batchDelete mutation, nothing is returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the table name is actually different per environment: your "Client" table wouldn't be recognized as "Clients" as you have stated it in the request mapping template, but rather goes by the name it is given on amplify push, per environment, e.g. Client-<some alphanumeric number>-envName.
Add the full name of the table to your request and response mapping templates.
Also, your #foreach statement should iterate over the array that is actually passed as the argument to the mutation (ids in your schema):
#foreach($item in ${ctx.args.ids})
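Putting both fixes together, a corrected request mapping template might look roughly like this (the environment-specific table suffix below is a placeholder; check the actual table name in the DynamoDB console):
#set($clientsdata = [])
#foreach($item in ${ctx.args.ids})
  ## build the DynamoDB key map for each id passed to the mutation
  $util.qr($clientsdata.add($util.dynamodb.toMapValues({ "id": $item })))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Client-<your table suffix>-dev": $util.toJson($clientsdata)
  }
}
The response mapping template would then also reference your table rather than Posts, e.g. $util.toJson($ctx.result.data["Client-<your table suffix>-dev"]).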
Hope this helps.

DM create bigquery view then authorize it on dataset

Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery, then authorize the view on one or more datasets it queries, which are sometimes in different projects and were not created/managed by Deployment Manager? Creating a dataset with a view wasn't too challenging. Here is the jinja template, named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja

resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above, files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at the project/folder/org level, but my need is at the dataset level, not the project level. Looking at the resource representation of a dataset, it seems I can update access.view with the newly created view, but I am a bit lost on how to do that without removing existing access entries, and for datasets in projects other than the one the new view is created in. Any help appreciated.
Edit:
I tried adding the dataset which needs the view authorized like so, then deployed in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
  message: |-
    Manifest expansion encountered the following errors: mapping values are not allowed here
     in "<unicode string>", line 26, column 7:
        type: gcp-types/bigquery-v2:datasets
        ^ Resource: config
This is strange to me; it's hard to make much sense of that error, since the line/column it points to is formatted exactly the same as the other dataset in the config. Maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.
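(For reference, the dataset resource representation mentioned above models access as a list of entries, so a view grant expressed in Deployment Manager YAML would look roughly like the sketch below; the identifiers are hypothetical. Note that supplying access replaces the entire list, so any existing entries have to be re-specified alongside the new view.)
properties:
  datasetReference:
    datasetId: source_dataset
  access:
  # existing entries must be repeated here, since "access" replaces the list
  - role: OWNER
    userByEmail: someone@example.com
  - view:
      projectId: my-project
      datasetId: territory_na
      tableId: files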

AWS CloudFormation & Service Catalog - Can I require tags with user values?

Our problem seems very basic, and I would expect it to be common.
We have tags that must always be applied (for billing). However, the tag values are only known at the time the stack is deployed... We don't know what the tag values will be when developing the stack, or when creating the product in the Service Catalog...
We don't want to wait until AFTER the resource is deployed to discover the tag is missing, so as cool as AWS Config may be, we don't want to rely on its rules if we don't have to.
So things like TagOptions don't work, because they appear to expect that we know the tag value months prior to a deployment (which isn't the case).
Is there any way to mandate tags be used for a cloudformation template when it is deployed? Better yet, can we have service catalog query for a tag value when deploying? Tags like "system" or "project", for instance, come and go over time and are not known up-front for many types of cloudformation templates we develop.
Isn't this a common scenario?
I am worried that I am missing something very, very simple and basic which mandates tags be used up-front, but I can't seem to figure out what. Thank you in advance. I really did Google a lot before asking, without finding a satisfying answer.
I don't know anything about Service Catalog, but you can create Conditions and then use them to conditionally create (or even fail) your resource creation. See Conditional Resource Creation, e.g.:
Parameters:
  ResourceTag:
    Type: String
    Default: ''

Conditions:
  isTagEmpty:
    !Equals [!Ref ResourceTag, '']
  isTagNotEmpty:
    !Not [!Equals [!Ref ResourceTag, '']]

Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Condition: isTagNotEmpty
    Properties:
      DBInstanceClass: <DB Instance Type>
Here the RDS DB instance will only be created if the tag is non-empty, but CloudFormation will still return success either way.
Alternatively, you can make the resource creation fail outright:
Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceClass: !If [isTagEmpty, !Ref "AWS::NoValue", <DB instance type>]
I haven't tried this, but it should fail, since the DB instance type will be missing if the tag is empty.
Edit: You can also create your stack using the createStack CFN API. Write some code to read and validate the input (e.g. read from Service Catalog) and call the createStack API. I am doing the same from Lambda (Node.js), reading some input from Parameter Store. Sample code:
// Assumes the AWS SDK for JavaScript v2, available in the Lambda Node.js runtime.
const AWS = require('aws-sdk');

const ssm = new AWS.SSM();
const cfn = new AWS.CloudFormation();

module.exports.create = async (event, context, callback) => {
  let request = JSON.parse(event.body);

  // Read the input to validate from Parameter Store.
  let subnetids = await ssm.getParameter({
    Name: '/vpc/public-subnets'
  }).promise();
  let securitygroups = await ssm.getParameter({
    Name: '/vpc/lambda-security-group'
  }).promise();

  let params = {
    StackName: request.customerName, /* required */
    Capabilities: [
      'CAPABILITY_IAM',
      'CAPABILITY_NAMED_IAM',
      'CAPABILITY_AUTO_EXPAND',
      /* more items */
    ],
    ClientRequestToken: 'qwdfghjk3912',
    EnableTerminationProtection: false,
    OnFailure: request.onfailure,
    Parameters: [
      {
        ParameterKey: 'SubnetIds',
        ParameterValue: subnetids.Parameter.Value,
      },
      {
        ParameterKey: 'SecurityGroupIds',
        ParameterValue: securitygroups.Parameter.Value,
      },
      {
        ParameterKey: 'OpsPoolArnList',
        ParameterValue: request.userPoolList,
      },
      /* more items */
    ],
    TemplateURL: request.templateUrl,
  };

  cfn.config.region = request.region;
  let result = await cfn.createStack(params).promise();
  console.log(result);
};
Another option: add an AWS Custom Resource backed by Lambda. Check for tags in its handler and return a failure if they don't satisfy the constraints. Make all other resource creation depend on this resource (so that everything else is only created if your checks pass). The link also contains an example. You will also have to add handling for stack update and deletion (e.g. a default success). I think this is your best bet as of now.
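A minimal sketch of such a tag-validating custom resource handler might look like this (Node.js; the Tags property shape and the mandated keys "system"/"project" are assumptions for illustration, not part of the original answer):
// Hypothetical Lambda handler for a tag-validating custom resource.
// Assumes the template passes the tags to check as ResourceProperties.Tags.
const https = require('https');
const url = require('url');

exports.handler = async (event) => {
  let status = 'SUCCESS';
  let reason = '';

  // Only validate on create/update; let deletes succeed so rollbacks work.
  if (event.RequestType === 'Create' || event.RequestType === 'Update') {
    const tags = event.ResourceProperties.Tags || {};
    for (const key of ['system', 'project']) {
      if (!tags[key]) {
        status = 'FAILED';
        reason = `Missing required tag: ${key}`;
      }
    }
  }

  // Signal the result back to CloudFormation via the pre-signed S3 URL.
  const body = JSON.stringify({
    Status: status,
    Reason: reason,
    PhysicalResourceId: event.PhysicalResourceId || event.LogicalResourceId,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  });
  const { hostname, path } = url.parse(event.ResponseURL);
  await new Promise((resolve, reject) => {
    const req = https.request(
      { hostname, path, method: 'PUT', headers: { 'content-length': Buffer.byteLength(body) } },
      resolve
    );
    req.on('error', reject);
    req.end(body);
  });
};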

Error when trying to create a serviceaccount key in deployment manager

The error is below:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544517871651-57cbb1716c8b8-4fa66ff2-9980028f]: errors:
- code: MISSING_REQUIRED_FIELD
  location: /deployments/infrastructure/resources/projects/resources-practice/serviceAccounts/storage-buckets-backend/keys/json->$.properties->$.parent
  message: |-
    Missing required field 'parent' with schema:
    {
      "type" : "string"
    }
Below is my jinja template content:
resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    name: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}/keys/json
    privateKeyType: enum(TYPE_GOOGLE_CREDENTIALS_FILE)
    keyAlgorithm: enum(KEY_ALG_RSA_2048)
P.S.
My reference for the properties is based on https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys
I will post the response of @John as the answer for the benefit of the community.
The parent field was missing; it needs to reference an existing service account:
projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}
where the ACCOUNT value can be the email or the uniqueId of the service account.
Regarding the template, please remove the enum() wrapping privateKeyType and keyAlgorithm.
The above deployment creates service account credentials for an existing service account; to retrieve the downloadable JSON key file, expose the privateKeyData property via outputs and base64-decode it.
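Putting those fixes together, a corrected template might look roughly like this (the serviceAccountEmail property is hypothetical; substitute whatever property holds your account's email or uniqueId):
resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    parent: projects/{{ properties["projectID"] }}/serviceAccounts/{{ properties["serviceAccountEmail"] }}
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    keyAlgorithm: KEY_ALG_RSA_2048

outputs:
- name: key-data
  value: $(ref.{{ name }}-keys.privateKeyData)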

JSON API response and ember model names

A quick question about the JSON API response key "type" matching up with an Ember model name.
If I have a model, say "models/photo.js", and a route like "/photos", my JSON API response looks like this:
{
  "data": [{
    "id": "298486374",
    "type": "photos",
    "attributes": {
      "name": "photo_name_1.png",
      "description": "A photo!"
    }
  }, {
    "id": "298434523",
    "type": "photos",
    "attributes": {
      "name": "photo_name_2.png",
      "description": "Another photo!"
    }
  }]
}
I'm under the assumption that my model name should be singular, but this error pops up:
Assertion Failed: You tried to push data with a type 'photos' but no model could be found with that name
This is, of course, because my model is named "photo".
Now in the JSON API spec there is a note that reads "This spec is agnostic about inflection rules, so the value of type can be either plural or singular. However, the same value should be used consistently throughout an implementation."
So,
tl;dr Is the "Ember way" of doing things to have both the model names and the JSON API response key "type" both be singular? or does it not matter as long as they match?
The JSON API serializer expects a plural type (see the payload example from the guides).
Since the modelNameFromPayloadKey function singularizes the key, it works with a singular type as well:
// as is
modelNameFromPayloadKey: function(key) {
  return singularize(normalizeModelName(key));
}
but the inverse operation, payloadKeyFromModelName, pluralizes the model name and should be changed if you use a singular type in your backend:
// as is
payloadKeyFromModelName: function(modelName) {
  return pluralize(modelName);
}
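For instance, a minimal application serializer override for a backend that uses singular types might look like this (a sketch, assuming an Ember 2.x app layout):
// app/serializers/application.js
import DS from 'ember-data';

export default DS.JSONAPISerializer.extend({
  payloadKeyFromModelName(modelName) {
    // keep the model name singular instead of pluralizing it
    return modelName;
  }
});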
It is important to note that the internal Ember Data JSON API format differs a bit from the one used by JSONAPISerializer: store.push expects a singular type, while the JSON API serializer expects a plural one.
From discussion:
"...ED uses camelCased attributes and singular types internally, regardless of what adapter/serializer you're using.
When you're using the JSON API adapter/serializer we want users to be able to use the examples available on jsonapi.org and have it just work. Most users never have to care about the internal format since the serializer handles the work for them.
This is documented in the guides, http://guides.emberjs.com/v2.0.0/models/pushing-records-into-the-store/
..."
Depending on your use case, you might try pushPayload instead of push. As the documentation suggests, it does some normalization; and in my case it covered the "plural vs. singular" problem.
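A quick sketch of the difference (payload shape borrowed from the question; unlike store.push, pushPayload runs the payload through the serializer's normalization):
// store.pushPayload normalizes the plural "photos" to the "photo" model
this.store.pushPayload({
  data: [{
    id: '298486374',
    type: 'photos',
    attributes: { name: 'photo_name_1.png', description: 'A photo!' }
  }]
});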