Difference between Organizations.OrdererOrg.Policies and Orderer.Policies in the configtx.yaml file (Hyperledger Fabric)

I was looking at the configtx.yaml file in the test-network of the Hyperledger Fabric samples and trying to understand what is going on. I find that there are Policies under 'Organizations' for the orderer org and each peer org, but there are also separate Policies for Orderer, as well as under Channel and Application.
I am very confused about what exactly the differences between these are, in terms of what they are trying to specify.
Here are a few places it occurs, as an example:
Organizations:
  # .... .... .... omit previous code
  - &OrdererOrg
    # Policies defines the set of policies at this level of the config tree
    # For organization policies, their canonical path is usually
    #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"
    OrdererEndpoints:
      - orderer.example.com:7050
And also here:
Orderer: &OrdererDefaults
  # .... .... .... omit previous code
  # Policies defines the set of policies at this level of the config tree
  # For Orderer policies, their canonical path is
  #   /Channel/Orderer/<PolicyName>
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
    # BlockValidation specifies what signatures must be included in the block
    # from the orderer for the peer to validate it.
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"

For Organizations: this section defines the organizations themselves, i.e. the organizational identities (ref); OrdererOrg is one such organization. Each entry includes the org's Name, ID, where its certificates are located, who can read, who can write, and who the admins of that org are.
For the Orderer section: it defines how the ordering service works (sample config): the consensus type, the conditions under which a block is cut, and who can read, write, or administer this part of the channel configuration.
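The practical difference between the two policy types can be sketched in plain Python. This is a simplified model, not Fabric's actual evaluator: Signature rules are reduced to a single OR over MSP roles, and ImplicitMeta just aggregates the results of the same-named sub-policy across the member orgs.

```python
# Toy model of Fabric's two policy types (illustration only, not the real evaluator).

def eval_signature(rule_roles, signer_roles):
    """Signature policy (org level): satisfied if the signer holds any of
    the listed MSP roles, e.g. rule_roles = {"OrdererMSP.member"}."""
    return bool(rule_roles & signer_roles)

def eval_implicit_meta(rule, sub_policy_results):
    """ImplicitMeta policy (Channel/Orderer/Application level): aggregates
    the results of the same-named sub-policy of each child org group."""
    word, _name = rule.split()          # e.g. "MAJORITY Admins"
    hits = sum(sub_policy_results)
    if word == "ANY":
        return hits >= 1
    if word == "ALL":
        return hits == len(sub_policy_results)
    if word == "MAJORITY":
        return hits > len(sub_policy_results) // 2
    raise ValueError(word)

# Org-level policy: evaluated against concrete signatures.
org_admins_ok = eval_implicit_meta  # placeholder to keep names visible
org_admins_ok = eval_signature({"OrdererMSP.admin"}, {"OrdererMSP.admin"})

# Channel/Orderer-level policy: evaluated against the org-level results.
# With a single orderer org, "MAJORITY Admins" needs that one org's
# Admins policy to pass.
orderer_admins_ok = eval_implicit_meta("MAJORITY Admins", [org_admins_ok])
```

So the org-level Signature policies are the leaves (who counts as a Reader/Writer/Admin of that org), while the Orderer-level ImplicitMeta policies combine those leaves across orgs.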


AWS S3: How to archive files to a folder with date as the prefix?

There is a requirement to archive files inside a bucket folder (i.e. put them under a prefix) when their last modified date exceeds a particular age (say 7 days), moving them to a subfolder named with the date:
Sample folder structure:
a.txt
b.txt
20210826/
    c.txt (last modified over 1 week ago)
20210819/
    d.txt (last modified over 2 weeks ago)
Any idea how this can be achieved? There seems to be no readily available archiving policy for this.
The only way I can think of is a Lambda function (with a scheduled trigger) to:
Scan all the files' timestamps to see which are older than 1 week
Move the matched files under a date prefix (e.g. 20210826/c.txt)
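A scheduled Lambda along those lines could be sketched as follows. The selection logic is pure Python so it can be tested offline; the actual copy/delete calls via boto3 are only indicated in comments, and the bucket and variable names are placeholders, not part of the question:

```python
from datetime import datetime, timedelta, timezone

def archive_moves(objects, now, max_age=timedelta(days=7)):
    """Given [(key, last_modified), ...] for the bucket's objects, return
    [(old_key, new_key), ...] for top-level objects older than max_age.
    The new key is prefixed with the object's last-modified date,
    e.g. 'c.txt' -> '20210826/c.txt'."""
    moves = []
    for key, last_modified in objects:
        if "/" in key:                  # already under a (date) prefix, skip
            continue
        if now - last_modified > max_age:
            moves.append((key, last_modified.strftime("%Y%m%d") + "/" + key))
    return moves

# In the Lambda handler you would then do something like (boto3 sketch):
#   s3 = boto3.client("s3")
#   for old_key, new_key in archive_moves(listing, datetime.now(timezone.utc)):
#       s3.copy_object(Bucket=bucket, Key=new_key,
#                      CopySource={"Bucket": bucket, "Key": old_key})
#       s3.delete_object(Bucket=bucket, Key=old_key)
```

Remember that S3 has no real "move": a move is a copy followed by a delete of the original key.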
Another question is about purging. If files are put under a date prefix, how can we configure the LifecycleConfiguration Rule in the CloudFormation template?
LifecycleConfiguration:
  Rules:
    - Id: DeletionRule
      Prefix: ''   # how to set this to cater for different dates as the key?
      Status: Enabled
      ExpirationInDays: !FindInMap [EnvironmentsMap, !Ref env, S3FileRetentionIndays]
You can configure a lifecycle rule that transitions the objects after x time to a different storage class, and then capture that operation with EventBridge, based on the CloudTrail API call: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-ct-api-tutorial.html
Finally, EventBridge triggers a Lambda function that moves the S3 object to a subfolder with the date prefix.
Regarding the expiration configuration, you'll need to create a parent folder for all the purged files and set that as the prefix.
Your CloudFormation template will look like this:
LifecycleConfiguration:
  Rules:
    - Id: DeletionRule
      Prefix: 'purged-files/'
      Status: Enabled
      ExpirationInDays: !FindInMap [EnvironmentsMap, !Ref env, S3FileRetentionIndays]

CloudFront ForwardedValues ambiguous documentation

I'm configuring my CloudFront using CloudFormation, and on the AWS documentation page for the ForwardedValues property, we can see the following statement:
If you specify true for QueryString and you don't specify any values for QueryStringCacheKeys, CloudFront forwards all query string parameters to the origin and caches based on all query string parameters.
The word in bold (caches) is causing some confusion, as the meaning of this sentence depends entirely on whether caches is a verb or a noun:
Verb: CloudFront will cache based on the query parameters
Noun: CloudFront will forward the query parameters to the cache, but will not cache based on them
If I don't specify the QueryStringCacheKeys, what is the behaviour of CloudFront?
I fixed it by specifying CachePolicyId
as in https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html
Type: "AWS::CloudFront::Distribution"
Properties:
  DistributionConfig:
    DefaultCacheBehavior:
      # Name: Managed-CachingOptimized
      # need either this parameter or 'ForwardedValues'
      CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6
If you don't specify the QueryStringCacheKeys, but only this:
ForwardedValues:
  QueryString: true
CloudFront will forward all and cache based on all, which means that the request will be cached based on URL + query string, and that the query string is forwarded to the underlying system.
You can read more on this in the AWS documentation here.
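In other words, caches is a verb here: with QueryString: true and no QueryStringCacheKeys, the full query string becomes part of the cache key. A toy model of that behaviour (an illustration of the cache-key logic, not CloudFront's real implementation):

```python
def cache_key(url_path, query_string, query_string_cache_keys=None):
    """Toy model of which request parts identify a cached object.
    query_string_cache_keys=None models "QueryString: true with no
    QueryStringCacheKeys": cache on the full query string."""
    if query_string_cache_keys is None:
        return (url_path, query_string)
    # Otherwise only the listed parameters participate in the cache key.
    kept = "&".join(p for p in query_string.split("&")
                    if p.split("=")[0] in query_string_cache_keys)
    return (url_path, kept)
```

So two requests for the same path with different query strings occupy different cache entries unless you narrow the key with QueryStringCacheKeys (or, nowadays, a cache policy).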

Google Deployment Manager - Project creation permission denied

I am getting a 403 PERMISSION_DENIED response from GCP when running Deployment Manager to create a deployment that creates a project and two service accounts, and sets the IAM policy for the project using the Cloud Resource Manager API.
- code: RESOURCE_ERROR
location: /deployments/test-deployment/resources/dm-test-project
message: '{"ResourceType":"cloudresourcemanager.v1.project","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/dm-test-project","httpMethod":"GET"}}'
Before that, I created a project 'DM Project Creation', enabled some APIs, assigned the Billing Account to it, and then created a Service Account.
I already had an Organization node, so I added the created Service Account to the org node and gave it the following IAM roles:
- Project Creator
- Billing Account User
I was actually following these examples from Google Cloud Platform:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/project_creation
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/blob/master/community/cloud-foundation/templates/project/README.md
I ran the following command to authenticate with the Service Account:
gcloud auth activate-service-account dm-project-creation@dm-creation-project-0.iam.gserviceaccount.com --key-file=/Users/famedina/Downloads/dm-creation-project-0-f1f92dd070ce.json
Then I ran Deployment Manager, passing the configuration file (config.yaml):
gcloud deployment-manager deployments create test-deployment --config config.yaml
imports:
- path: project.py

resources:
# The "name" property below will be the ID of the new project
# If you want your project to have a different name, use the "project-name"
# property.
- name: dm-test-project
  type: project.py
  properties:
    # Change this to your organization ID.
    organization-id: "<MY_ORG_ID>"
    # You can also create the project in a folder.
    # If both organization-id and parent-folder-id are provided,
    # the project will be created in parent-folder-id.
    #parent-folder-id: "FOLDER_ID"
    # Change the following to your organization's billing account
    billing-account-name: billingAccounts/<MY_BILLING_ACC_ID>
    # The apis to enable in the new project.
    # To see the possible APIs, use: gcloud services list --available
    apis:
    - compute.googleapis.com
    - deploymentmanager.googleapis.com
    - pubsub.googleapis.com
    - storage-component.googleapis.com
    - monitoring.googleapis.com
    - logging.googleapis.com
    # The service accounts you want to create in the project
    service-accounts:
    - my-service-account-1
    - my-service-account-2
    bucket-export-settings:
      create-bucket: true
      # If using an already existing bucket, specify this
      # bucket: <my bucket name>
    # Makes the service account that Deployment Manager would use in the
    # generated project when making deployments in this new project a
    # project owner.
    set-dm-service-account-as-owner: true
    # The patches to apply to the project's IAM policy. Note that these are
    # always applied as a patch to the project's current IAM policy, not as a
    # diff with the existing properties stored in DM. This means that removing
    # a binding from the 'add' section will not remove the binding on the
    # project during the next update. Instead it must be added to the 'remove'
    # section.
    iam-policy-patch:
      # These are the bindings to add.
      add:
      - role: roles/owner
        members:
        # NOTE: The DM service account that is creating this project will
        # automatically be added as an owner.
        - serviceAccount:98765432100@cloudservices.gserviceaccount.com
      - role: roles/viewer
        members:
        - user:iamtester@deployment-manager.net
      # The bindings to remove. Note that these are idempotent, in the sense
      # that any binding here that is not actually on the project is considered
      # to have been removed successfully.
      remove:
      - role: roles/owner
        members:
        # This is already not on the project, but in case it shows up, let's
        # remove it.
        - serviceAccount:1234567890@cloudservices.gserviceaccount.com
I ran into this as well, and the error message does not actually explain the underlying problem.
The key thing is that this is a GET operation, not an attempt to create the project. It is there to verify the global uniqueness of the requested project ID, and if the ID is not unique, PERMISSION_DENIED is thrown.
- code: RESOURCE_ERROR
location: /deployments/test-deployment/resources/dm-test-project
message: '{"ResourceType":"cloudresourcemanager.v1.project","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/dm-test-project","httpMethod":"**GET**"}}'
A lot of room for improvement in this error message for the end user.

GCP project creation via deploymentmanager

So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)

Organization:
  IAM:
    {Seed-Project-Number}@cloudservices.gserviceaccount.com:
    - Compute Network Admin
    - Compute Shared VPC Admin
    - Organization Viewer
    - Project Creator
# DeploymentManager Resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the Deployment Manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
create MyNewDeployment \
--config config.yaml \
--verbosity=debug
Error message:
- code: RESOURCE_ERROR
location: /deployments/MyNewDeployment/resources/MyNewProject
message: '{"ResourceType":"cloudresourcemanager.v1.project",
"ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method. The 'Compute Shared VPC Admin (roles/compute.xpnAdmin)' role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case. What's going on?
Edit
I'd like to add some additional information gathered from debugging efforts.
These are the API requests from the deployment manager (from the seed project).
You can see that the caller is an anonymous service account; this isn't what I'd expect to see. (I'd expect {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
- path: composite_types/project/project.py
  name: project.py

resources:
- name: MyNewProject
  type: project.py
  properties:
    parent:
      type: folder
      id: "{folder1-id}"
    billingAccountId: billingAccounts/REDACTED
    activateApis:
    - compute.googleapis.com
    - deploymentmanager.googleapis.com
    - pubsub.googleapis.com
    serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. It is there to verify the global uniqueness of the requested project ID, and if the ID is not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question), and it turned out that the Billing Account User role was not assigned, contrary to what I had thought. Granting it and running again worked.

Serverless Framework S3 Event Rule

I am creating an S3 listener using the Serverless Framework. A user has requested a specific file format for the S3 event trigger.
I currently have
functions:
  LambdaTrigger:
    name: ${self:service}-${self:custom.environment}
    description: lambda_trigger
    handler: handler.lambda_handler
    tags:
      project: ""
      owner: ""
      environment: ${self:custom.environment}
    events:
      - existingS3:
          bucket: ${self:custom.listener_bucket_name}
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.listener_prefix}
            - suffix: ${self:custom.listener_suffix}
My user is requesting that the Lambda only be triggered when the file is of the form
/ID1_ID2_ID3.tar
I have handled the prefix and suffix conditions in the function above, but I am wondering how, or even if, it is possible to construct a rule that only triggers when the file has the format ID1_ID2_ID3, where each ID is N integers.
According to the docs, an earlier question, and my own experience in the matter, the prefix and suffix parameters cannot be wildcards or regular expressions, so I'm afraid that unless you find some clever way to circumvent the restriction, it's not possible to do what you want.
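A common workaround (a sketch, not something the answer above prescribes) is to keep the coarse prefix/suffix rules in serverless.yml and do the fine-grained match inside the handler itself, silently skipping keys that don't fit. The pattern below assumes each ID is a run of digits, and `process` is a hypothetical downstream function:

```python
import re

# Assumed naming convention: three runs of digits joined by underscores,
# with a .tar suffix, optionally under some prefix.
KEY_PATTERN = re.compile(r"^(?:.*/)?\d+_\d+_\d+\.tar$")

def lambda_handler(event, context):
    """Process only the S3 records whose object key matches the convention."""
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if not KEY_PATTERN.match(key):
            continue                    # not ID1_ID2_ID3.tar: ignore the event
        process(key)

def process(key):
    # Hypothetical placeholder for the real work.
    print(f"processing {key}")
```

The trade-off is that the Lambda is still invoked (and billed) for non-matching uploads within the prefix/suffix; the regex only prevents them from being processed.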