Cluster-autoscaler cannot autodiscover ASGs: InvalidClientTokenId

I installed cluster-autoscaler in my Kubernetes v1.17.5 cluster,
and I get the following error during deployment:
E0729 15:09:09.661938 1 aws_manager.go:261] Failed to regenerate ASG cache: cannot autodiscover ASGs: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: a2b12..........
F0729 15:09:09.661961 1 aws_cloud_provider.go:376] Failed to create AWS Manager: cannot autodiscover ASGs: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: a2b12da3-.........
My values.yaml:
autoDiscovery:
  # Only cloudProvider `aws` and `gce` are supported by auto-discovery at this time
  # AWS: Set tags as described in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup
  clusterName: my-cluster
  tags:
    - k8s.io/cluster-autoscaler/enabled
    - k8s.io/cluster-autoscaler/my-cluster
    - kubernetes.io/cluster/my-cluster
autoscalingGroups: []
# At least one element is required if not using autoDiscovery
# - name: asg1
#   maxSize: 2
#   minSize: 1
# - name: asg2
#   maxSize: 2
#   minSize: 1
autoscalingGroupsnamePrefix: []
# At least one element is required if not using autoDiscovery
# - name: ig01
#   maxSize: 10
#   minSize: 0
# - name: ig02
#   maxSize: 10
#   minSize: 0
# Required if cloudProvider=aws
awsRegion: "eu-west-2"
awsAccessKeyID: "xxxxxxxxxxx"
awsSecretAccessKey: "xxxxxxxxxxxxx"
# Required if cloudProvider=azure
# clientID/ClientSecret with contributor permission to Cluster and Node ResourceGroup
azureClientID: ""
azureClientSecret: ""
# Cluster resource Group
azureResourceGroup: ""
azureSubscriptionID: ""
azureTenantID: ""
# if using AKS azureVMType should be set to "AKS"
azureVMType: "AKS"
azureClusterName: ""
azureNodeResourceGroup: ""
# if using MSI, ensure subscription ID and resource group are set
azureUseManagedIdentityExtension: false
# Currently only `gce`, `aws`, `azure` & `spotinst` are supported
cloudProvider: aws
# Configuration file for cloud provider
cloudConfigPath: ~/.aws/credentials
image:
  repository: k8s.gcr.io/cluster-autoscaler
  tag: v1.17.1
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistrKeySecretName
tolerations: []
## Extra ENV passed to the container
extraEnv: {}
extraArgs:
  v: 4
  stderrthreshold: info
  logtostderr: true
  # write-status-configmap: true
  # leader-elect: true
  # skip-nodes-with-local-storage: false
  expander: least-waste
  scale-down-enabled: true
  # balance-similar-node-groups: true
  # min-replica-count: 2
  # scale-down-utilization-threshold: 0.5
  # scale-down-non-empty-candidates-count: 5
  # max-node-provision-time: 15m0s
  # scan-interval: 10s
  scale-down-delay-after-add: 10m
  scale-down-delay-after-delete: 0s
  scale-down-delay-after-failure: 3m
  # scale-down-unneeded-time: 10m
  # skip-nodes-with-local-storage: false
  # skip-nodes-with-system-pods: true
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## affinity: {}
podDisruptionBudget: |
  maxUnavailable: 1
  # minAvailable: 2
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
podLabels: {}
replicaCount: 1
rbac:
  ## If true, create & use RBAC resources
  ##
  create: true
  ## If true, create & use Pod Security Policy resources
  ## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
  pspEnabled: false
  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # The name of the ServiceAccount to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""
  ## Annotations for the Service Account
  ##
  serviceAccountAnnotations: {}
resources:
  limits:
    cpu: 100m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 300Mi
priorityClassName: "system-node-critical"
# Defaults to "ClusterFirst". Valid values are
# 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'
# autoscaler does not depend on cluster DNS, recommended to set this to "Default"
dnsPolicy: "ClusterFirst"
## Security context for pod
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
  runAsGroup: 1001
## Security context for container
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
containerSecurityContext:
  capabilities:
    drop:
      - all
## Deployment update strategy
## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
# updateStrategy:
#   rollingUpdate:
#     maxSurge: 1
#     maxUnavailable: 0
#   type: RollingUpdate
service:
  annotations: {}
  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  servicePort: 8085
  portName: http
  type: ClusterIP
spotinst:
  account: ""
  token: ""
  image:
    repository: spotinst/kubernetes-cluster-autoscaler
    tag: 0.6.0
    pullPolicy: IfNotPresent
## Are you using Prometheus Operator?
serviceMonitor:
  enabled: true
  interval: "10s"
  # Namespace Prometheus is installed in
  namespace: cattle-prometheus
  ## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/helm/charts/tree/master/stable/prometheus-operator#tldr)
  ## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
  ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
  selector:
    prometheus: kube-prometheus
  # The metrics path to scrape - autoscaler exposes /metrics (standard)
  path: /metrics
## String to partially override cluster-autoscaler.fullname template (will maintain the release name)
nameOverride: ""
## String to fully override cluster-autoscaler.fullname template
fullnameOverride: ""
# Allow overridding the .Capabilities.KubeVersion.GitVersion (useful for "helm template" command)
kubeTargetVersionOverride: ""
## Priorities Expander
## If extraArgs.expander is set to priority, then expanderPriorities is used to define cluster-autoscaler-priority-expander priorities
## https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md
expanderPriorities: {}

How are you handling IAM roles for that pod?
The cluster autoscaler needs an IAM role with permissions to perform operations on your Auto Scaling groups: https://github.com/helm/charts/tree/master/stable/cluster-autoscaler#iam
You need to create an IAM role, and the Helm chart then takes care of creating a service account that uses that role, as explained here: https://github.com/helm/charts/tree/master/stable/cluster-autoscaler#iam-roles-for-service-accounts-irsa
Once you have the IAM role configured, you then need to pass
rbac.serviceAccountAnnotations."eks.amazonaws.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName
via --set when installing.
This is a pretty good explanation of how it works (although it can be a bit dense if you are just starting out): https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
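For example, with the old stable chart referenced above, the install could look roughly like this (a sketch; the release name, namespace, chart repo and role ARN are placeholders, and note the escaped dots that --set requires inside the annotation key):
# Hypothetical install using an IRSA-annotated service account instead of static keys
helm upgrade --install cluster-autoscaler stable/cluster-autoscaler \
  --namespace kube-system \
  --set cloudProvider=aws \
  --set awsRegion=eu-west-2 \
  --set autoDiscovery.clusterName=my-cluster \
  --set rbac.serviceAccountAnnotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName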

The following parameters are filled in with the correct keys:
# Required if cloudProvider=aws
awsRegion: "eu-west-2"
awsAccessKeyID: "xxxxxxxxxxx"
awsSecretAccessKey: "xxxxxxxxxxxxx"
so I don't understand why my config is wrong.
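If you stay with static keys, two quick sanity checks help narrow this down (a sketch; it assumes the AWS CLI is available locally, and the deployment name below is only a guess based on the chart's default naming):
# 1. Are the keys themselves valid? An InvalidClientTokenId here means the key pair
#    is wrong, deactivated, or belongs to a different account/partition.
AWS_ACCESS_KEY_ID=xxxxxxxxxxx AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxx \
  aws sts get-caller-identity
# 2. Do the credentials actually reach the running container?
kubectl -n kube-system exec deploy/cluster-autoscaler-aws-cluster-autoscaler -- env | grep -i aws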

Related

How to connect from a public GKE pod to a GCP Cloud SQL using a private connection

I have a Java application running in a Docker container, deployed to my GKE cluster. I'd like it to connect to a Cloud SQL instance via private IP, but I have been struggling for two days to get it working. I followed this guide:
https://cloud.google.com/sql/docs/mysql/configure-private-services-access#gcloud_1
I managed to create a private services connection and assigned the allocated address range to my Cloud SQL instance. As far as I understood, this should be sufficient for the pod to connect to the Cloud SQL instance.
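For reference, that flow boils down to commands roughly like these (a sketch reconstructed from the outputs below; the exact flags I ran may have differed):
# Reserve the range used by Google-managed services (10.77.0.0/16 below)
gcloud compute addresses create google-managed-services-default \
  --global --purpose=VPC_PEERING --prefix-length=16 \
  --addresses=10.77.0.0 --network=default
# Peer the default network with servicenetworking using that range
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-default --network=default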
However, it just does not work. I pass the Cloud SQL private IP as the host for the Java application's JDBC (database) connection and get:
2022-02-14 22:03:31.299 WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Here are some details on the problem.
The address allocation:
➜ google-cloud-sdk gcloud compute addresses list
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
google-managed-services-default 10.77.0.0/16 INTERNAL VPC_PEERING default RESERVED
The VPC peering connection:
➜ google-cloud-sdk gcloud services vpc-peerings list --network=default
---
network: projects/1071923183712/global/networks/default
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
service: services/servicenetworking.googleapis.com
Here is my Cloud SQL info. Please note that the private IP address is 10.77.0.5, which matches the 10.77.0.0/16 range allocated above, so I guess this part is working.
➜ google-cloud-sdk gcloud sql instances describe alpha-3
backendType: SECOND_GEN
connectionName: barbarus:europe-west4:alpha-3
createTime: '2022-02-14T19:28:02.465Z'
databaseInstalledVersion: MYSQL_5_7_36
databaseVersion: MYSQL_5_7
etag: 758de240b161b946689e5732d8e71d396c772c0e03904c46af3b61f59b1038a0
gceZone: europe-west4-a
instanceType: CLOUD_SQL_INSTANCE
ipAddresses:
- ipAddress: 34.90.174.243
  type: PRIMARY
- ipAddress: 10.77.0.5
  type: PRIVATE
kind: sql#instance
name: alpha-3
project: barbarus
region: europe-west4
selfLink: https://sqladmin.googleapis.com/sql/v1beta4/projects/barbarus/instances/alpha-3
serverCaCert:
  cert: |-
    -----BEGIN CERTIFICATE-----
    //...
    -----END CERTIFICATE-----
  certSerialNumber: '0'
  commonName: C=US,O=Google\, Inc,CN=Google Cloud SQL Server CA,dnQualifier=d495898b-f6c7-4e2f-9c59-c02ccf2c1395
  createTime: '2022-02-14T19:29:35.325Z'
  expirationTime: '2032-02-12T19:30:35.325Z'
  instance: alpha-3
  kind: sql#sslCert
  sha1Fingerprint: 3ee799b139bf335ef39554b07a5027c9319087cb
serviceAccountEmailAddress: p1071923183712-d99fsz@gcp-sa-cloud-sql.iam.gserviceaccount.com
settings:
  activationPolicy: ALWAYS
  availabilityType: ZONAL
  backupConfiguration:
    backupRetentionSettings:
      retainedBackups: 7
      retentionUnit: COUNT
    binaryLogEnabled: true
    enabled: true
    kind: sql#backupConfiguration
    location: us
    startTime: '12:00'
    transactionLogRetentionDays: 7
  dataDiskSizeGb: '10'
  dataDiskType: PD_HDD
  ipConfiguration:
    allocatedIpRange: google-managed-services-default
    ipv4Enabled: true
    privateNetwork: projects/barbarus/global/networks/default
    requireSsl: false
  kind: sql#settings
  locationPreference:
    kind: sql#locationPreference
    zone: europe-west4-a
  pricingPlan: PER_USE
  replicationType: SYNCHRONOUS
  settingsVersion: '1'
  storageAutoResize: true
  storageAutoResizeLimit: '0'
  tier: db-f1-micro
state: RUNNABLE
The problem I see is with the pod's IP address: it is 10.0.5.3, which is not in the 10.77.0.0/16 range, so (as I understand it) the pod can't see the Cloud SQL instance.
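A quick way to test that reachability assumption from inside the cluster (a sketch; the throwaway pod name, the alpine image and MySQL port 3306 are only illustrative choices):
# Try to open a TCP connection to the Cloud SQL private IP from a temporary pod
kubectl run sql-test --rm -it --image=alpine --restart=Never -- \
  sh -c 'apk add --no-cache netcat-openbsd && nc -zv -w 5 10.77.0.5 3306'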
Here is the pod's info:
Name: game-server-5b9dd47cbd-vt2gw
Namespace: default
Priority: 0
Node: gke-barbarus-node-pool-1a5ea7d5-bg3m/10.164.15.216
Start Time: Tue, 15 Feb 2022 00:33:56 +0100
Labels: app=game-server
app.kubernetes.io/managed-by=gcp-cloud-build-deploy
pod-template-hash=5b9dd47cbd
Annotations: <none>
Status: Running
IP: 10.0.5.3
IPs:
IP: 10.0.5.3
Controlled By: ReplicaSet/game-server-5b9dd47cbd
Containers:
game-server:
Container ID: containerd://57d9540b1e5f5cb3fcc4517fa42377282943d292ba810c83cd7eb50bd4f1e3dd
Image: eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
Image ID: eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Feb 2022 00:36:48 +0100
Finished: Tue, 15 Feb 2022 00:38:01 +0100
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Feb 2022 00:35:23 +0100
Finished: Tue, 15 Feb 2022 00:36:35 +0100
Ready: False
Restart Count: 2
Environment:
SQL_CONNECTION: <set to the key 'SQL_CONNECTION' of config map 'game-server'> Optional: false
SQL_USER: <set to the key 'SQL_USER' of config map 'game-server'> Optional: false
SQL_DATABASE: <set to the key 'SQL_DATABASE' of config map 'game-server'> Optional: false
SQL_PASSWORD: <set to the key 'SQL_PASSWORD' of config map 'game-server'> Optional: false
LOG_LEVEL: <set to the key 'LOG_LEVEL' of config map 'game-server'> Optional: false
WORLD_ID: <set to the key 'WORLD_ID' of config map 'game-server'> Optional: false
WORLD_SIZE: <set to the key 'WORLD_SIZE' of config map 'game-server'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sknlk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sknlk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m7s default-scheduler Successfully assigned default/game-server-5b9dd47cbd-vt2gw to gke-barbarus-node-pool-1a5ea7d5-bg3m
Normal Pulling 4m6s kubelet Pulling image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141"
Normal Pulled 3m55s kubelet Successfully pulled image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" in 11.09487284s
Normal Created 75s (x3 over 3m54s) kubelet Created container game-server
Normal Started 75s (x3 over 3m54s) kubelet Started container game-server
Normal Pulled 75s (x2 over 2m41s) kubelet Container image "eu.gcr.io/barbarus/game-server@sha256:72d518a53652d32d0d438d2a5443c44cc8e12bb15cb1a59c843ce72466900141" already present on machine
Warning BackOff 1s (x2 over 87s) kubelet Back-off restarting failed container
Finally, this is what gcloud container clusters describe gives me:
➜ google-cloud-sdk gcloud container clusters describe --region=europe-west4 barbarus
addonsConfig:
  gcePersistentDiskCsiDriverConfig:
    enabled: true
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
autopilot: {}
autoscaling:
  autoscalingProfile: BALANCED
binaryAuthorization: {}
clusterIpv4Cidr: 10.0.0.0/14
createTime: '2022-02-14T19:34:03+00:00'
currentMasterVersion: 1.21.6-gke.1500
currentNodeCount: 3
currentNodeVersion: 1.21.6-gke.1500
databaseEncryption:
  state: DECRYPTED
endpoint: 34.141.141.150
id: 39e7249b48c24d23a8b70b0c11cd18901565336b397147dab4778dc75dfc34e2
initialClusterVersion: 1.21.6-gke.1500
initialNodeCount: 1
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
- https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
ipAllocationPolicy:
  useRoutes: true
labelFingerprint: a9dc16a7
legacyAbac: {}
location: europe-west4
locations:
- europe-west4-a
- europe-west4-b
- europe-west4-c
loggingConfig:
  componentConfig:
    enableComponents:
    - SYSTEM_COMPONENTS
    - WORKLOADS
loggingService: logging.googleapis.com/kubernetes
maintenancePolicy:
  resourceVersion: e3b0c442
masterAuth:
  clusterCaCertificate: // ...
masterAuthorizedNetworksConfig: {}
monitoringConfig:
  componentConfig:
    enableComponents:
    - SYSTEM_COMPONENTS
monitoringService: monitoring.googleapis.com/kubernetes
name: barbarus
network: default
networkConfig:
  defaultSnatStatus: {}
  network: projects/barbarus/global/networks/default
  serviceExternalIpsConfig: {}
  subnetwork: projects/barbarus/regions/europe-west4/subnetworks/default
nodeConfig:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS_CONTAINERD
  machineType: e2-medium
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/cloud-platform
  preemptible: true
  serviceAccount: default@barbarus.iam.gserviceaccount.com
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true
nodeIpv4CidrSize: 24
nodePoolDefaults:
  nodeConfigDefaults: {}
nodePools:
- config:
    diskSizeGb: 100
    diskType: pd-standard
    imageType: COS_CONTAINERD
    machineType: e2-medium
    metadata:
      disable-legacy-endpoints: 'true'
    oauthScopes:
    - https://www.googleapis.com/auth/cloud-platform
    preemptible: true
    serviceAccount: default@barbarus.iam.gserviceaccount.com
    shieldedInstanceConfig:
      enableIntegrityMonitoring: true
  initialNodeCount: 1
  instanceGroupUrls:
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-a/instanceGroupManagers/gke-barbarus-node-pool-e291e3d6-grp
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-b/instanceGroupManagers/gke-barbarus-node-pool-5aa35c39-grp
  - https://www.googleapis.com/compute/v1/projects/barbarus/zones/europe-west4-c/instanceGroupManagers/gke-barbarus-node-pool-380645b7-grp
  locations:
  - europe-west4-a
  - europe-west4-b
  - europe-west4-c
  management:
    autoRepair: true
    autoUpgrade: true
  name: node-pool
  podIpv4CidrSize: 24
  selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus/nodePools/node-pool
  status: RUNNING
  upgradeSettings:
    maxSurge: 1
  version: 1.21.6-gke.1500
notificationConfig:
  pubsub: {}
releaseChannel:
  channel: REGULAR
selfLink: https://container.googleapis.com/v1/projects/barbarus/locations/europe-west4/clusters/barbarus
servicesIpv4Cidr: 10.3.240.0/20
shieldedNodes:
  enabled: true
status: RUNNING
subnetwork: default
zone: europe-west4
I have no idea how to give the pod a reference to the address allocation I made for the private services connection.
What I tried was to spin up a GKE cluster with a cluster default pod address range of 10.77.0.0/16, which sounded logical since I want the pods to appear in the same address range as the Cloud SQL instance. However, GCP gives me an error when I try to do that:
(1) insufficient regional quota to satisfy request: resource "CPUS": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus (2) insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES": request requires '9.0' and is short '5.0'. project has a quota of '4.0' with '4.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=hait-barbarus.
So I am not able to give the pods the proper address range for the private services connection. How can they ever discover the Cloud SQL instance?
EDIT #1: The GKE cluster's service account has the SQL Client role.
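For reference, the route exchange on that peering can be inspected directly (a sketch; the peering name is taken from the vpc-peerings output above, and the available flags may vary by gcloud version):
# Is the servicenetworking peering present and ACTIVE on the cluster's network?
gcloud compute networks peerings list --network=default
# Which routes does that peering import into the default network?
gcloud compute networks peerings list-routes servicenetworking-googleapis-com \
  --network=default --region=europe-west4 --direction=INCOMING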

Error creating peer channel on Amazon Managed Blockchain (Hyperledger Fabric v1.4)

I hope someone can help me with the following problem.
I am using Amazon Managed Blockchain with the Hyperledger Fabric v1.4 framework, and I followed this documentation: https://docs.aws.amazon.com/managed-blockchain/latest/hyperledger-fabric-dev/get-started-create-channel.html
This is the error I get when I try to create the channel with the command below:
Command line:
docker exec cli peer channel create -c mychannel -f /opt/home/mychannel.pb -o $ORDERER --cafile /opt/home/managedblockchain-tls-chain.pem --tls
Error:
2022-01-17 10:34:47.356 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
The admin certificate is in a folder "admin-msp".
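For context, this is roughly the layout I would expect inside that folder when following the AWS guide (a sketch; only the directory names matter, the actual file names will differ):
ls -R /home/ec2-user/admin-msp
# admin-msp/admincerts   <- admin certificate; needed to satisfy 'Admins' signature policies
# admin-msp/cacerts      <- CA certificate of the member
# admin-msp/keystore     <- admin private key
# admin-msp/signcerts    <- admin signing certificate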
My configtx.yaml (I did not get any error executing the previous step with "docker exec cli configtxgen -outputCreateChannelTx /opt/home/mychannel.pb -profile OneOrgChannel -channelID mychannel --configPath /opt/home/"):
Organizations:
  - &Org1
    Name: m-Q37N3LRUKNFDXBZ7GARMYFBYIE
    ID: m-Q37N3LRUKNFDXBZ7GARMYFBYIE
    Policies: &Org1Policies
      Readers:
        Type: Signature
        Rule: "OR('Org1.member')"
        # If your MSP is configured with the new NodeOUs, you might
        # want to use a more specific rule like the following:
        # Rule: "OR('Org1.admin', 'Org1.peer', 'Org1.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1.member')"
        # If your MSP is configured with the new NodeOUs, you might
        # want to use a more specific rule like the following:
        # Rule: "OR('Org1.admin', 'Org1.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1.admin')"
    # MSPDir is the filesystem path which contains the MSP configuration.
    MSPDir: /opt/home/admin-msp
    # AnchorPeers defines the location of peers which can be used for
    # cross-org gossip communication. Note, this value is only encoded in
    # the genesis block in the Application section context.
    AnchorPeers:
      - Host: 127.0.0.1
        Port: 7051
Capabilities:
  Channel: &ChannelCapabilities
    V1_4_3: true
    V1_3: false
    V1_1: false
  Orderer: &OrdererCapabilities
    V1_4_2: true
    V1_1: false
  Application: &ApplicationCapabilities
    V1_4_2: true
    V1_3: false
    V1_2: false
    V1_1: false
Channel: &ChannelDefaults
  Policies:
    # Who may invoke the 'Deliver' API
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    # Who may invoke the 'Broadcast' API
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    # By default, who may modify elements at this config level
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ChannelCapabilities
Application: &ApplicationDefaults
  Policies: &ApplicationDefaultPolicies
    LifecycleEndorsement:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Endorsement:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ApplicationCapabilities
Profiles:
  OneOrgChannel:
    <<: *ChannelDefaults
    Consortium: AWSSystemConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - <<: *Org1
My docker-compose-cli.yaml file:
version: '2'
services:
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:1.4
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=info # Set logging level to debug for more verbose logging
      - CORE_PEER_ID=cli
      - CORE_CHAINCODE_KEEPALIVE=10
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/home/managedblockchain-tls-chain.pem
      - CORE_PEER_LOCALMSPID=$Member
      - CORE_PEER_MSPCONFIGPATH=/opt/home/admin-msp
      - CORE_PEER_ADDRESS=$MyPeerNodeEndpoint
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - /home/ec2-user/fabric-samples/chaincode:/opt/gopath/src/github.com/
      - /home/ec2-user:/opt/home
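For what it's worth, the channel creation transaction is signed with whatever MSP the cli container points at; an equivalent invocation with those values made explicit would look like this (a sketch mirroring the compose file above, not a different configuration):
docker exec \
  -e CORE_PEER_LOCALMSPID=m-Q37N3LRUKNFDXBZ7GARMYFBYIE \
  -e CORE_PEER_MSPCONFIGPATH=/opt/home/admin-msp \
  cli peer channel create -c mychannel -f /opt/home/mychannel.pb \
  -o $ORDERER --cafile /opt/home/managedblockchain-tls-chain.pem --tls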
Thanks in advance :).

Unable to access Cloud Endpoints on GCP Endpoints Portal

I created the API using GKE and Cloud Endpoints gRPC. Everything works fine, but when I try to access my API from the Endpoints Portal it does not work.
On the Endpoints Portal page for the API, enter any id in ayah_id and try to execute; you will see the error:
ENOTFOUND: Error resolving domain "https://quran.endpoints.utopian-button-227405.cloud.goog"
I don't know why this is not working, even though my API runs successfully at http://34.71.56.199/v1/image/ayah/ayah-1. I'm using HTTP transcoding; the actual gRPC service runs on 34.71.56.199:81.
I think I missed some configuration steps. Can someone please let me know what I missed?
Update
api_config.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
usage:
  rules:
    # Allow unregistered calls for all methods.
    - selector: "*"
      allow_unregistered_calls: true
#
# API title to appear in the user interface (Google Cloud Console).
#
title: Quran gRPC API
apis:
  - name: quran.Audio
  - name: quran.Ayah
  - name: quran.Edition
  - name: quran.Image
  - name: quran.Surah
  - name: quran.Translation
Update 2
api_config.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
endpoints:
  - name: quran.endpoints.utopian-button-227405.cloud.goog
    target: "34.71.56.199"
usage:
  rules:
    # Allow unregistered calls for all methods.
    - selector: "*"
      allow_unregistered_calls: true
#
# API title to appear in the user interface (Google Cloud Console).
#
title: Quran gRPC API
apis:
  - name: quran.Audio
  - name: quran.Ayah
  - name: quran.Edition
  - name: quran.Image
  - name: quran.Surah
  - name: quran.Translation
api_config_http.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
#
# Http Transcoding.
#
# HTTP rules define translation from HTTP/REST/JSON to gRPC. With these rules
# HTTP/REST/JSON clients will be able to call the Quran service.
#
http:
  rules:
    #
    # Image Service transcoding
    #
    - selector: quran.Image.CreateImage
      post: '/v1/image'
      body: '*'
    - selector: quran.Image.FindImageByAyahId
      get: '/v1/image/ayah/{id}'
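For completeness, this is roughly how the configs get deployed and how the DNS entry that the endpoints block should create can be checked (a sketch; api_descriptor.pb stands for the compiled proto descriptor, whatever it is named locally):
# Deploy the service config(s) together with the compiled descriptor
gcloud endpoints services deploy api_descriptor.pb api_config.yaml api_config_http.yaml
# The endpoints target should eventually resolve for the .cloud.goog name
nslookup quran.endpoints.utopian-button-227405.cloud.goog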

TerminationProtected flag ignored for EMR cluster

I am setting up EMR cluster using CloudFormation template. Even though, I am passing TerminationProtected: false cluster is still created with termination protection turned on. Is this possible that the global company policy is overwriting this value? Here's a snippet of my YAML template
Type: AWS::EMR::Cluster
Properties:
  Name: !Sub ${AWS::StackName}-cluster
  Applications:
    - Name: Hadoop
    - Name: ZooKeeper
    - Name: Spark
    - Name: Ganglia
  Instances:
    MasterInstanceGroup:
      ...
    CoreInstanceGroup:
      ...
    Ec2SubnetId: ...
    Ec2KeyName: ...
    TerminationProtected: False
    AdditionalMasterSecurityGroups:
      - ...
  BootstrapActions:
    - ...
  JobFlowRole: ...
  LogUri: ...
  ServiceRole: ...
  ReleaseLabel: ...
  VisibleToAllUsers: ...
  Configurations:
    - Classification: hdfs-site
      ConfigurationProperties:
        dfs.replication: '2'
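One way to confirm what the launched cluster actually ended up with, independent of the template (a sketch; the cluster ID is a placeholder):
# Inspect the effective flag on the running cluster
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX \
  --query 'Cluster.TerminationProtected'
# If it comes back true, CloudTrail can show whether a SetTerminationProtection
# call was made after the CloudFormation CREATE (e.g. by company-wide automation).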

Backup job of the Jenkins Helm chart failing for no reason

I am using the official Jenkins Helm chart.
I have enabled backup and also provided backup credentials.
Here is the relevant config in values.yaml:
## Backup cronjob configuration
## Ref: https://github.com/maorfr/kube-tasks
backup:
  # Backup must use RBAC
  # So by enabling backup you are enabling RBAC specific for backup
  enabled: true
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-backup"
  # Schedule to run jobs. Must be in cron time format
  # Ref: https://crontab.guru/
  schedule: "0 2 * * *"
  labels: {}
  annotations: {}
  # Example for authorization to AWS S3 using kube2iam
  # Can also be done using environment variables
  # iam.amazonaws.com/role: "jenkins"
  image:
    repository: "maorfr/kube-tasks"
    tag: "0.2.0"
  # Additional arguments for kube-tasks
  # Ref: https://github.com/maorfr/kube-tasks#simple-backup
  extraArgs: []
  # Add existingSecret for AWS credentials
  existingSecret: {}
  # gcpcredentials: "credentials.json"
  ## Example for using an existing secret
  # jenkinsaws:
  ## Use this key for AWS access key ID
  awsaccesskey: "AAAAJJJJDDDDDDJJJJJ"
  ## Use this key for AWS secret access key
  awssecretkey: "frkmfrkmrlkmfrkmflkmlm"
  # Add additional environment variables
  # jenkinsgcp:
  ## Use this key for GCP credentials
  env: []
  # Example environment variable required for AWS credentials chain
  # - name: "AWS_REGION"
  #   value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can added. Visit this repository for details
  # Ref: https://github.com/maorfr/skbn
  destination: "s3://jenkins-data/backup"
However the backup job fails as follows:
2020/01/22 20:19:23 Backup started!
2020/01/22 20:19:23 Getting clients
2020/01/22 20:19:26 NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
What is missing?
You must create a secret that looks like this:
kubectl create secret generic jenkinsaws --from-literal=jenkins_aws_access_key=ACCESS_KEY --from-literal=jenkins_aws_secret_key=SECRET_KEY
then consume it like this:
existingSecret:
  jenkinsaws:
    awsaccesskey: jenkins_aws_access_key
    awssecretkey: jenkins_aws_secret_key
where jenkins_aws_access_key / jenkins_aws_secret_key are the key names inside the secret.
backup:
  enabled: true
  destination: "s3://jenkins-pumbala/backup"
  schedule: "15 1 * * *"
  env:
    - name: "AWS_ACCESS_KEY_ID"
      value: "AKIDFFERWT***D36G"
    - name: "AWS_SECRET_ACCESS_KEY"
      value: "5zGdfgdfgdf***************Isi"