I am creating a build configuration with the following YAML and then triggering the build manually with oc, so the following commands are run:
oc create -f mybuildconfig.yaml
oc start-build bc/ns-bc-myproject --wait
Build configuration YAML:
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    build: myproject
  name: ns-bc-myproject
  namespace: ns
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'ns-is-myproject:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: dev_1.0
      uri: 'https://github.com/ns/myproject.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: 'nodejs:10'
        namespace: openshift
    type: Source
  successfulBuildsHistoryLimit: 5
The build never goes through; it keeps failing with the message "Invalid output reference". What is missing?
You need to create an image stream in the namespace where your build config is pushing the image to.
Something like this will work for you:
apiVersion: v1
kind: ImageStream
metadata:
  labels:
    application: ns-is-myproject
  name: ns-is-myproject
  namespace: ns
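Assuming the snippet above is saved as imagestream.yaml (the filename is arbitrary), you can create the image stream and retry the build:
oc create -f imagestream.yaml -n ns
oc start-build bc/ns-bc-myproject -n ns --wait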
In a self-managed Argo CD, Argo CD itself is defined as an Application installed as a Helm chart with parameters.
How do I add dex.config to the Helm parameters inside the Application definition?
This does NOT work!
It fails with an error saying that dex.config comes in as a map, not as a string.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: argocd
  source:
    chart: argo-cd
    repoURL: https://argoproj.github.io/argo-helm
    targetRevision: 5.19.1
    helm:
      parameters:
        - name: configs.cm."timeout\.reconciliation"
          value: "120s"
        - name: configs.cm."dex\.config"
          value: |
            logger:
              level: debug
              format: json
            connectors:
              - type: saml
                id: saml
                name: AzureAD
                config:
                  entityIssuer: https://argocd.example.com/api/dex/callback
                  ssoURL: https://login.microsoftonline.com/xxx/saml2
                  caData: |
                    ...
                  redirectURI: https://argocd.example.com/api/dex/callback
                  usernameAttr: email
                  emailAttr: email
                  groupsAttr: Group
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
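One possible approach (a sketch only, not verified against chart version 5.19.1): keep scalar settings in helm.parameters, but pass the multi-line dex.config through spec.source.helm.values, which accepts a raw YAML string; the configs.cm nesting below mirrors the chart values layout:
    helm:
      parameters:
        - name: configs.cm."timeout\.reconciliation"
          value: "120s"
      values: |
        configs:
          cm:
            dex.config: |
              logger:
                level: debug
                format: json
              connectors:
                - type: saml
                  id: saml
                  name: AzureAD
                  # ... same connector config as in the question ...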
I've been trying to run the Google Kubernetes Engine deploy action for my GitHub repo.
I got a GitHub workflow job running, and everything works just fine except the deploy step.
Here is my error:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my YAML files are at fault. I'm fairly new to this, so I took them from the internet and just edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
- deployment.yaml
- service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
      - name: the-container
        image: monopole/hello:1
        command: ["/video-render",
                  "--port=8080",
                  "--enableRiskyFeature=$(ENABLE_RISKY)"]
        ports:
        - containerPort: 8080
        env:
        - name: ALT_GREETING
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: altGreeting
        - name: ENABLE_RISKY
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8666
    targetPort: 8080
I'm using an Ubuntu 20.04 image, and the repo is C++ code.
For anyone wondering why this happens:
You have to edit this line in the workflow file to the name of an existing deployment:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name
to:
DEPLOYMENT_NAME: existing-deployment-name
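In this case, the Deployment defined in deployment.yaml is named the-deployment, so the env block of the stock GKE deploy workflow would look roughly like this (every value except DEPLOYMENT_NAME is a placeholder you have to adapt):
env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}  # placeholder secret name
  GKE_CLUSTER: my-cluster                 # your cluster name
  GKE_ZONE: europe-west1-b                # your cluster zone
  DEPLOYMENT_NAME: the-deployment         # must match metadata.name in deployment.yaml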
I have a deployment on Kubernetes (AWS EKS), with several environment variables defined in the deployment .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myApp
  name: myAppName
spec:
  replicas: 2
  (...)
    spec:
      containers:
      - env:
        - name: MY_ENV_VAR
          value: "my_value"
        image: myDockerImage:prodV1
      (...)
If I want to upgrade the pods to another version of the docker image, say prodV2, I can perform a rolling update which replaces the pods from prodV1 to prodV2 with zero downtime.
However, if I add another env var, say MY_ENV_VAR_2: "my_value_2", and perform the same rolling update, I don't see the new env var in the container. The only solution I found to get both env vars was to manually execute:
kubectl delete deployment myAppName
kubectl create -f myDeploymentFile.yaml
As you can see, this is not zero downtime, as deleting the deployment will terminate my pods and introduce a downtime until the new deployment is created and the new pods start.
Is there a way to better do this? Thank you!
Here is an example you might want to test yourself.
Notice I used spec.strategy.type: RollingUpdate.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_value"
Apply:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl exec -it nginx-<hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_value
Notice the env var is set as in the YAML.
Now we edit the env in deployment.yaml:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_new_value"
apply and wait for it to update:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl get po --watch
# after it updated use Ctrl+C to stop the watch and run:
➜ ~ kubectl exec -it nginx-<new_hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_new_value
As you should see, the env changed. That is pretty much it.
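Any change under spec.template (image, env vars, and so on) triggers a new rollout on kubectl apply, so there is no need to delete the Deployment. You can also wait for the rollout to finish with, for example:
➜  ~ kubectl rollout status deployment/nginx
deployment "nginx" successfully rolled out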
I have a WorkflowTemplate "nyc-test-template" which I trigger via Argo Events and PubSub. So, if I publish a message {} to the PubSub topic "argo-events-nyc", the template specified via a workflowTemplateRef is started. That works just fine. Now I want to parameterize the template that gets started.
My non-working draft looks as follows:
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: pubsub-event-source-nyc
spec:
  template:
    serviceAccountName: argo-events
  pubSub:
    examplex:
      jsonBody: true
      topic: argo-events-nyc
      subscriptionID: argo-events-nyc-sub
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: pubsub-sensor-nyc
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: pubsub-event-source-dep
      eventSourceName: pubsub-event-source-nyc
      eventName: examplex
  triggers:
    - template:
        name: argo-workflow-trigger
        argoWorkflow:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: nyc-test-template-
                namespace: argo
              spec:
                arguments:
                  parameters:
                    - name: wft
                      value: nyc-test-template
                workflowTemplateRef:
                  # I'm pretty sure this inputs block is useless. But leaving it out
                  # and instead referencing arguments.parameters.wft won't work either.
                  inputs:
                    parameters:
                      - name: wft
                  name: "{{inputs.parameters.wft}}"
          parameters:
            - src:
                dependencyName: pubsub-event-source-dep
                dataKey: body.wft
              dest: spec.arguments.parameters.0.value
What I would like to happen is this:
an empty message {} would trigger "nyc-test-template"
the message {"wft": "my-template"} would trigger "my-template"
Instead, publishing an empty message causes the Sensor to throw an error:
2021-03-29T15:31:16.386441528Z 2021/03/29 15:31:16 Failed to parse workflow: error unmarshaling JSON: while decoding JSON: json: unknown field "inputs"
Frankly speaking, the YAML above took crude inspiration from this example. It's not really an educated guess, as I still don't understand the mechanics of how parameters, arguments, and inputs interact.
You can use when to toggle which template to use depending on a parameter.
Suppose I have two simple WorkflowTemplates like these:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: t1
spec:
  templates:
  - name: whalesay-template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: [t1]
---
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: t2
spec:
  templates:
  - name: whalesay-template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: [t2]
I can choose to execute one template or the other from these WorkflowTemplates depending on an argument passed to a Workflow (either manually or from an Argo Events setup).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: switch-
spec:
  entrypoint: pick
  arguments:
    parameters:
    - name: which
  templates:
  - name: pick
    steps:
    - - name: t1
        when: "{{workflow.parameters.which}} == t1"
        templateRef:
          name: t1
          template: whalesay-template
      - name: t2
        when: "{{workflow.parameters.which}} == t2"
        templateRef:
          name: t2
          template: whalesay-template
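For example, submitting this Workflow manually with the Argo CLI (assuming the manifest is saved as switch.yaml, a name I made up) runs t1's whalesay-template:
argo submit switch.yaml -p which=t1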
For top-level arguments to a Workflow, you can use workflow.parameters.SOMETHING.
Building on the above, you can use a JSON parsing tool like jq to retrieve the toggle value and then choose your template based on that value.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: switch-
spec:
  entrypoint: pick
  arguments:
    parameters:
    - name: json
  templates:
  - name: pick
    steps:
    - - name: parse
        template: parse
    - - name: t1
        when: "{{steps.parse.outputs.result}} == a"
        templateRef:
          name: t1
          template: whalesay-template
      - name: t2
        when: "{{steps.parse.outputs.result}} == b"
        templateRef:
          name: t2
          template: whalesay-template
  - name: parse
    container:
      image: jorgeandrada/alpine-jq
      command: [sh, -c]
      env:
      - name: JSON
        value: "{{workflow.parameters.json}}"
      args: [echo "$JSON" | jq -j '.test']
I should mention that using jq is a bit heavy-handed. Future versions of Argo (3.1+) will have tools to inspect JSON more directly, but this solution is nicely backward-compatible.
Credit goes to Derek Wang.
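As a quick check (the manifest filename is again made up), submitting the Workflow above with a JSON argument picks the matching template:
argo submit switch-json.yaml -p json='{"test": "a"}'   # runs t1
argo submit switch-json.yaml -p json='{"test": "b"}'   # runs t2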
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: pubsub-sensor-nyc
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: pubsub-event-source-dep
      eventSourceName: pubsub-event-source-nyc
      eventName: examplex
  triggers:
    - template:
        name: argo-workflow-trigger
        argoWorkflow:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: nyc-test-template-
                namespace: argo
              spec:
                workflowTemplateRef:
                  name: nyc-test-template
          parameters:
            - src:
                dependencyName: pubsub-event-source-dep
                dataKey: body.wft
                value: nyc-test-template # default value
              dest: spec.workflowTemplateRef.name # <- this
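With this Sensor, an empty message falls back to the default nyc-test-template (the value field is used when dataKey body.wft is missing), while a message carrying a wft field overrides the template name. For example, with the gcloud CLI and the topic from the question:
gcloud pubsub topics publish argo-events-nyc --message='{}'                      # starts nyc-test-template
gcloud pubsub topics publish argo-events-nyc --message='{"wft": "my-template"}'  # starts my-template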
I have the following Deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: socket-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: socket-server
    spec:
      containers:
      - name: socket-server
        image: gcr.io/project-haswell-recon/socket-server:production-production-2
        env:
        - name: PORT
          value: 80
        ports:
        - containerPort: 80
But I get the following error when I run kubectl create -f ./scripts/deployment.yml --namespace production
Error from server (BadRequest): error when creating "./scripts/deployment.yml": Deployment in version "v1beta1" cannot be handled as a Deployment: [pos 321]: json: expect char '"' but got char '8'
I pretty much copied and pasted this deployment from a previous working one and altered a few details, so I'm at a loss as to what this could be.
The problem is the number 80. Here it's in an EnvVar context, so it has to be of type string, not int.
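Quoting the value fixes the parse error (containerPort stays an integer, since that field really is an int):
        env:
        - name: PORT
          value: "80"
        ports:
        - containerPort: 80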