I am using the official Jenkins Helm chart.
I have enabled backups and also provided backup credentials.
Here is the relevant config in values.yaml:
## Backup cronjob configuration
## Ref: https://github.com/maorfr/kube-tasks
backup:
# Backup must use RBAC
# So by enabling backup you are enabling RBAC specific for backup
enabled: true
# Used for label app.kubernetes.io/component
componentName: "jenkins-backup"
# Schedule to run jobs. Must be in cron time format
# Ref: https://crontab.guru/
schedule: "0 2 * * *"
labels: {}
annotations: {}
# Example for authorization to AWS S3 using kube2iam
# Can also be done using environment variables
# iam.amazonaws.com/role: "jenkins"
image:
repository: "maorfr/kube-tasks"
tag: "0.2.0"
# Additional arguments for kube-tasks
# Ref: https://github.com/maorfr/kube-tasks#simple-backup
extraArgs: []
# Add existingSecret for AWS credentials
existingSecret: {}
# gcpcredentials: "credentials.json"
## Example for using an existing secret
# jenkinsaws:
## Use this key for AWS access key ID
awsaccesskey: "AAAAJJJJDDDDDDJJJJJ"
## Use this key for AWS secret access key
awssecretkey: "frkmfrkmrlkmfrkmflkmlm"
# Add additional environment variables
# jenkinsgcp:
## Use this key for GCP credentials
env: []
# Example environment variable required for AWS credentials chain
# - name: "AWS_REGION"
# value: "us-east-1"
resources:
requests:
memory: 1Gi
cpu: 1
limits:
memory: 1Gi
cpu: 1
# Destination to store the backup artifacts
# Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
# Additional support can added. Visit this repository for details
# Ref: https://github.com/maorfr/skbn
destination: "s3://jenkins-data/backup"
However, the backup job fails as follows:
2020/01/22 20:19:23 Backup started!
2020/01/22 20:19:23 Getting clients
2020/01/22 20:19:26 NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
What is missing?
You must create a secret that looks like this:
kubectl create secret generic jenkinsaws --from-literal=jenkins_aws_access_key=ACCESS_KEY --from-literal=jenkins_aws_secret_key=SECRET_KEY
Then consume it like this:
existingSecret:
jenkinsaws:
awsaccesskey: jenkins_aws_access_key
awssecretkey: jenkins_aws_secret_key
where jenkins_aws_access_key/jenkins_aws_secret_key are the keys inside that secret.
Alternatively, you can pass the credentials directly as environment variables:
backup:
enabled: true
destination: "s3://jenkins-pumbala/backup"
schedule: "15 1 * * *"
env:
- name: "AWS_ACCESS_KEY_ID"
value: "AKIDFFERWT***D36G"
- name: "AWS_SECRET_ACCESS_KEY"
value: "5zGdfgdfgdf***************Isi"
I have stored a key in GCP Secret Manager and I'm trying to use that secret in my cloudbuild.yaml, but every time I get this error:
ERROR: (gcloud.functions.deploy) argument --set-secrets: Secrets value configuration must match the pattern 'SECRET:VERSION' or 'projects/{PROJECT}/secrets/{SECRET}:{VERSION}' or 'projects/{PROJECT}/secrets/{SECRET}/versions/{VERSION}' where VERSION is a number or the label 'latest' [ 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]]
My Cloud Build file looks like this:
steps:
- id: installing-dependencies
name: 'python'
entrypoint: pip
args: ["install", "-r", "src/requirements.txt", "--user"]
- id: deploy-function
name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
args:
- gcloud
- functions
- deploy
- name_of_my_function
- --region=us-central1
- --source=./src
- --trigger-topic=name_of_my_topic
- --runtime=python37
- --set-secrets=[ SECRETKEY = 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]
waitFor: [ "installing-dependencies" ]
I have been reading the documentation, but I don't have any other clue that could help me.
As mentioned by al-dann, there should not be any spaces in the --set-secrets line, as you can see in the documentation.
The final corrected flag:
--set-secrets=[SECRETKEY='projects/gcp-project/secrets/SECRETKEY/versions/latest']
For more information, you can refer to the Stack Overflow thread and blog post where Secret Manager is explained in more detail.
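For reference, here is the deploy-function step from the question with only the --set-secrets flag changed (everything else is unchanged):
- id: deploy-function
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args:
    - gcloud
    - functions
    - deploy
    - name_of_my_function
    - --region=us-central1
    - --source=./src
    - --trigger-topic=name_of_my_topic
    - --runtime=python37
    - --set-secrets=[SECRETKEY='projects/gcp-project/secrets/SECRETKEY/versions/latest']
  waitFor: ["installing-dependencies"]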
I installed cluster-autoscaler in my Kubernetes v1.17.5 cluster, and I get an error during deployment:
E0729 15:09:09.661938 1 aws_manager.go:261] Failed to regenerate ASG cache: cannot autodiscover ASGs: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: a2b12..........
F0729 15:09:09.661961 1 aws_cloud_provider.go:376] Failed to create AWS Manager: cannot autodiscover ASGs: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: a2b12da3-.........
My values.yaml:
autoDiscovery:
# Only cloudProvider `aws` and `gce` are supported by auto-discovery at this time
# AWS: Set tags as described in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup
clusterName: my-cluster
tags:
- k8s.io/cluster-autoscaler/enabled
- k8s.io/cluster-autoscaler/my-cluster
- kubernetes.io/cluster/my-cluster
autoscalingGroups: []
# At least one element is required if not using autoDiscovery
# - name: asg1
# maxSize: 2
# minSize: 1
# - name: asg2
# maxSize: 2
# minSize: 1
autoscalingGroupsnamePrefix: []
# At least one element is required if not using autoDiscovery
# - name: ig01
# maxSize: 10
# minSize: 0
# - name: ig02
# maxSize: 10
# minSize: 0
# Required if cloudProvider=aws
awsRegion: "eu-west-2"
awsAccessKeyID: "xxxxxxxxxxx"
awsSecretAccessKey: "xxxxxxxxxxxxx"
# Required if cloudProvider=azure
# clientID/ClientSecret with contributor permission to Cluster and Node ResourceGroup
azureClientID: ""
azureClientSecret: ""
# Cluster resource Group
azureResourceGroup: ""
azureSubscriptionID: ""
azureTenantID: ""
# if using AKS azureVMType should be set to "AKS"
azureVMType: "AKS"
azureClusterName: ""
azureNodeResourceGroup: ""
# if using MSI, ensure subscription ID and resource group are set
azureUseManagedIdentityExtension: false
# Currently only `gce`, `aws`, `azure` & `spotinst` are supported
cloudProvider: aws
# Configuration file for cloud provider
cloudConfigPath: ~/.aws/credentials
image:
repository: k8s.gcr.io/cluster-autoscaler
tag: v1.17.1
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistrKeySecretName
tolerations: []
## Extra ENV passed to the container
extraEnv: {}
extraArgs:
v: 4
stderrthreshold: info
logtostderr: true
# write-status-configmap: true
# leader-elect: true
# skip-nodes-with-local-storage: false
expander: least-waste
scale-down-enabled: true
# balance-similar-node-groups: true
# min-replica-count: 2
# scale-down-utilization-threshold: 0.5
# scale-down-non-empty-candidates-count: 5
# max-node-provision-time: 15m0s
# scan-interval: 10s
scale-down-delay-after-add: 10m
scale-down-delay-after-delete: 0s
scale-down-delay-after-failure: 3m
# scale-down-unneeded-time: 10m
# skip-nodes-with-local-storage: false
# skip-nodes-with-system-pods: true
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## affinity: {}
podDisruptionBudget: |
maxUnavailable: 1
# minAvailable: 2
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
podLabels: {}
replicaCount: 1
rbac:
## If true, create & use RBAC resources
##
create: true
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
pspEnabled: false
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
## Annotations for the Service Account
##
serviceAccountAnnotations: {}
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
priorityClassName: "system-node-critical"
# Defaults to "ClusterFirst". Valid values are
# 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'
# autoscaler does not depend on cluster DNS, recommended to set this to "Default"
dnsPolicy: "ClusterFirst"
## Security context for pod
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
runAsNonRoot: true
runAsUser: 1001
runAsGroup: 1001
## Security context for container
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
containerSecurityContext:
capabilities:
drop:
- all
## Deployment update strategy
## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
# updateStrategy:
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
# type: RollingUpdate
service:
annotations: {}
## List of IP addresses at which the service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 8085
portName: http
type: ClusterIP
spotinst:
account: ""
token: ""
image:
repository: spotinst/kubernetes-cluster-autoscaler
tag: 0.6.0
pullPolicy: IfNotPresent
## Are you using Prometheus Operator?
serviceMonitor:
enabled: true
interval: "10s"
# Namespace Prometheus is installed in
namespace: cattle-prometheus
## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/helm/charts/tree/master/stable/prometheus-operator#tldr)
## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
selector:
prometheus: kube-prometheus
# The metrics path to scrape - autoscaler exposes /metrics (standard)
path: /metrics
## String to partially override cluster-autoscaler.fullname template (will maintain the release name)
nameOverride: ""
## String to fully override cluster-autoscaler.fullname template
fullnameOverride: ""
# Allow overridding the .Capabilities.KubeVersion.GitVersion (useful for "helm template" command)
kubeTargetVersionOverride: ""
## Priorities Expander
## If extraArgs.expander is set to priority, then expanderPriorities is used to define cluster-autoscaler-priority-expander priorities
## https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md
expanderPriorities: {}
How are you handling IAM roles for that pod?
The cluster autoscaler needs an IAM role with permissions to do some operations on your autoscaling group: https://github.com/helm/charts/tree/master/stable/cluster-autoscaler#iam
You need to create an IAM role and then the helm template should take care of creating a service account for you that uses that role. Just like they explain here: https://github.com/helm/charts/tree/master/stable/cluster-autoscaler#iam-roles-for-service-accounts-irsa
Once you have the IAM role configured, you would then need to --set
rbac.serviceAccountAnnotations."eks.amazonaws.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName
when installing.
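Equivalently, instead of --set you can put the annotation into a values override file; a minimal sketch, assuming the chart version above where serviceAccountAnnotations sits under rbac (the role ARN is a placeholder):
rbac:
  serviceAccountAnnotations:
    # IRSA: the IAM role this service account should assume
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/MyRoleName
With IRSA in place you can leave awsAccessKeyID and awsSecretAccessKey empty, since the pod obtains temporary credentials from the role instead of static keys.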
This is a pretty good explanation on how it works (although it could be a bit dense if you are starting): https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
The following parameters are filled in with correct keys:
# Required if cloudProvider=aws
awsRegion: "eu-west-2"
awsAccessKeyID: "xxxxxxxxxxx"
awsSecretAccessKey: "xxxxxxxxxxxxx"
so I don't understand why my config is wrong.
Hi all, I created an OpenAPI spec in YAML and deployed an endpoint that maps two Cloud Functions, using path templating to route the calls; the Google SDK CLI reports no errors.
Now I call POST https://myendpointname-3p5hncu3ha-ew.a.run.app/v1/setdndforrefcli/12588/dnd?key=[apikey], which is mapped by the OpenAPI spec below, and it replies "Path does not match any requirement URI template.".
I don't know why the path template in the endpoint does not work. I added path_translation: APPEND_PATH_TO_ADDRESS to stop Google from using the CONSTANT_ADDRESS default, which appends the id to the query string as [name of cloud function]?GETid=12588 and overwrites query parameters with the same name.
Can somebody tell me how I can debug the endpoint or the error in the OpenAPI spec (which has the green check icon in Endpoints)?
# [START swagger]
swagger: '2.0'
info:
description: "Get data "
title: "Cloud Endpoint + GCF"
version: "1.0.0"
host: myendpointname-3p5hncu3ha-ew.a.run.app
# [END swagger]
basePath: "/v1"
#consumes:
# - application/json
#produces:
# - application/json
schemes:
- https
paths:
/setdndforrefcli/{id}/dnd:
post:
summary:
operationId: setdndforrefcli
parameters:
- name: id # is the id parameter in the path
in: path # is the parameter where is in query for rest or path for restful
required: true
type: integer
format: int64
minimum: 1
security:
- api_key: []
x-google-backend:
address: https://REGION-PROJECT-ID.cloudfunctions.net/mycloudfunction
path_translation: APPEND_PATH_TO_ADDRESS
protocol: h2
responses:
'200':
description: A successful response
schema:
type: string
# [START securityDef]
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "query"
# [END securityDef]
I had the same error, but after doing some tests I was able to successfully use path templating (/endpoint/{id}). I resolved the issue as follows:
1. gcloud endpoints services deploy openapi-functions.yaml \
--project project
Here you will get a new service configuration ID that you will use in the next steps.
2.
chmod +x gcloud_build_image
./gcloud_build_image -s SERVICE \
-c NEWSERVICECONFIGURATION -p project
It is very important to use the new service configuration ID with every new deployment of the managed service.
3. gcloud run deploy SERVICE \
--image="gcr.io/PROJECT/endpoints-runtime-serverless:SERVICE-NEW_SERVICE_CONFIGURATION" \
--allow-unauthenticated \
--platform managed \
--project=PROJECT
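If you are unsure which service configuration ID to pass in step 2, you can list the configurations of the managed service (SERVICE and PROJECT are placeholders):
# The newest configuration ID is the one to use as NEWSERVICECONFIGURATION
gcloud endpoints configs list --service=SERVICE --project=PROJECT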
I am trying to delete all resources in my AWS account, but the directions for aws-nuke say I need to create a config file:
First you need to create a config file for aws-nuke. This is a minimal one:
regions:
- eu-west-1
- global
account-blacklist:
- "999999999999" # production
accounts:
"000000000000": {} # aws-nuke-example
With this config we can run aws-nuke:
My question is, how do I create this config file that deletes everything associated with an account and leaves me with a blank account? Thanks!
If you want to completely nuke everything associated with an account, you just have to replace the zeros with the account number you want to erase, as in your example. The {} means all resource types. Save the file in YAML format and then issue the command like this:
aws-nuke -c config/example.yaml --profile demo
Check my example config/example.yaml file here:
regions:
#Regions where the resources are
- "global"
- "eu-central-1"
- "eu-west-1"
- "eu-west-2"
- "eu-east-1"
- "eu-east-2"
- "us-east-1"
- "us-east-2"
- "us-west-1"
- "us-west-2"
account-blocklist:
#Accounts you dont want to change
- 123456789101 # e.g production account
resource-types: #not mandatory
targets:
# Specific resources you want to remove
- S3Object
- S3Bucket
- EC2Volume
excludes: #not mandatory
# Specific resources you don't want to remove
- IAMUser
accounts:
943725333913: {}
# the {} means all resources associated with this account
# instead of {}, you can use filters, for example:
# 943725333913:
#   filters:
#     S3Bucket:
#       - "s3://my-bucket"
#     S3Object:
#       - type: "glob"
#         value: "s3://my-bucket/*"
I am trying to run aws-nuke to delete all the resources in my account. I am running this command:
aws-nuke -c config/example.yaml --profile demo
config/example.yaml
---
regions:
- "global" # This is for all global resource types e.g. IAM
- "eu-west-1"
account-blacklist:
- "999999999999" # production
# optional: restrict nuking to these resources
resource-types:
targets:
- IAMUser
- IAMUserPolicyAttachment
- IAMUserAccessKey
- S3Bucket
- S3Object
- Route53HostedZone
- EC2Instance
- CloudFormationStack
accounts:
555133742123#demo:
filters:
IAMUser:
- "admin"
IAMUserPolicyAttachment:
- property: RoleName
value: "admin"
IAMUserAccessKey:
- property: UserName
value: "admin"
S3Bucket:
- "s3://my-bucket"
S3Object:
- type: "glob"
value: "s3://my-bucket/*"
Route53HostedZone:
- property: Name
type: "glob"
value: "*.zone.loc."
CloudFormationStack:
- property: "tag:team"
value: "myTeam"
The errors are in the screenshot below. What is missing?
Disclaimer: I am an author of aws-nuke.
This is not a configuration problem with your YAML file, but a missing setting in your AWS account.
The IAM alias is a globally unique name for your AWS account. aws-nuke requires it as a safety guard so that you do not accidentally destroy your production accounts. The idea is that every production account's alias contains at least the substring prod.
Demanding this might sound a bit unnecessary, but we are very keen on not nuking any production account.
You can follow the docs to specify the alias via the web console, or you can use the CLI:
aws iam create-account-alias --profile demo --account-alias my-test-account-8gmst3
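You can verify that the alias has been set with:
aws iam list-account-aliases --profile demo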
I guess we need to improve the error message.