PCF - x509: certificate has expired or is not yet valid - cloud-foundry

I am trying to set up PCF on Ubuntu 20. While setting it up, it is not able to create the VM and I get:
x509: certificate has expired or is not yet valid
The deploy log file is below. Can someone help me out?
deploy-bosh.log
Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-warden-boshlite-ubuntu-xenial-go_agent/170.16'... Skipped [Stemcell already uploaded] (00:00:00)
Started deploying
Deleting VM '654e6637-3333-4879-a5a7-26a6066585ab'... Finished (00:00:14)
Creating VM for instance 'bosh/0' from stemcell '211465a3-381f-4fdd-83ba-9591803442f9'... Finished (00:00:05)
Waiting for the agent on VM '7091e293-6518-4008-b337-cbbf2d273eae' to be ready... Failed (00:00:04)
Failed deploying (00:00:23)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)
Deploying:
Creating instance 'bosh/0':
Waiting until instance is ready:
Post https://mbus:<redacted>#10.144.0.2:6868/agent: x509: certificate has expired or is not yet valid
Exit code 1

Related

The error occurs when creating an OpenShift cluster on AWS with IPI

We are trying to create an OpenShift cluster following this guide:
https://keithtenzer.com/openshift/openshift-4-aws-ipi-installation-getting-started-guide/
We run "create cluster" but installation is failed.
The Error is following.
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects:
Get "https://api.xxxxx.openshift.yyyyy.private:6443/apis/config.openshift.io/v1/clusteroperators":
dial tcp: lookup api.xxxxx.openshift.yyyyy.private on aa.bbb.c.d:ee: no such host
ERROR Bootstrap failed to complete: Get "https://api.xxxxx.openshift.yyyyy.private:6443/version":
dial tcp: lookup api.xxxxx.openshift.yyyy.private on aa.bbb.c.d:ee: no such host
ERROR Failed waiting for Kubernetes API. T
We created openshift.yyyyy.private as a public hosted zone in Route 53 before the installation, but it seems that name resolution for api.xxxxx.openshift.yyyyy.private fails.
What should we do to complete the installation?
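A quick way to check whether the record resolves at all from the installer host is a plain DNS lookup (a diagnostic sketch only, using the same redacted hostname as in the error above):
# If this prints nothing, the api record is missing from the zone, or the
# zone is not resolvable from where the installer runs.
dig +short api.xxxxx.openshift.yyyyy.private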

Unable to start the Amazon SSM Agent - failed to start message bus

When registering the Amazon SSM Agent, it registers successfully in the SSM Managed Instances console, but the connection status shows "Connection Lost".
When I try to start the service manually, I get the following error:
Error occurred fetching the seelog config file path: open /etc/amazon/ssm/seelog.xml: no such file or directory
Initializing new seelog logger
New Seelog Logger Creation Complete
2020-12-09 10:20:01 ERROR error occurred when starting amazon-ssm-agent: failed to start message bus, failed to start health channel: failed to listen on the channel: ipc:///var/lib/amazon/ssm/ipc/health, address in use
How exactly do I solve this? I've tried to restart the service a few times but no luck.
I was able to fix this issue by stopping the agent and purging the /var/lib/amazon/ssm/ipc directory
service amazon-ssm-agent stop
rm -rf /var/lib/amazon/ssm/ipc
service amazon-ssm-agent start
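If the instance manages the agent with systemd, the equivalent sequence would presumably be the following (the unit name amazon-ssm-agent is the one the agent package installs on most distributions; check yours, as it differs for the snap install):
sudo systemctl stop amazon-ssm-agent
sudo rm -rf /var/lib/amazon/ssm/ipc    # the agent recreates the IPC sockets on startup
sudo systemctl start amazon-ssm-agent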

CockroachDB on AWS EKS cluster - [n?] no stores bootstrapped

I am attempting to deploy CockroachDB v2.1.6 to a new AWS EKS cluster. Everything deploys successfully; the StatefulSet, services, PVs and PVCs are created. The AWS EBS volumes are created successfully too.
The issue is the pods never get to a READY state.
pod/cockroachdb-0 0/1 Running 0 14m
pod/cockroachdb-1 0/1 Running 0 14m
pod/cockroachdb-2 0/1 Running 0 14m
If I 'describe' the pods I get the following:
Normal Pulled 46s kubelet, ip-10-5-109-70.eu-central-1.compute.internal Container image "cockroachdb/cockroach:v2.1.6" already present on machine
Normal Created 46s kubelet, ip-10-5-109-70.eu-central-1.compute.internal Created container cockroachdb
Normal Started 46s kubelet, ip-10-5-109-70.eu-central-1.compute.internal Started container cockroachdb
Warning Unhealthy 1s (x8 over 36s) kubelet, ip-10-5-109-70.eu-central-1.compute.internal Readiness probe failed: HTTP probe failed with statuscode: 503
If I examine the logs of a pod I see this:
I200409 11:45:18.073666 14 server/server.go:1403 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
W200409 11:45:18.076826 87 vendor/google.golang.org/grpc/clientconn.go:1293 grpc: addrConn.createTransport failed to connect to {cockroachdb-0.cockroachdb:26257 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup cockroachdb-0.cockroachdb on 172.20.0.10:53: no such host". Reconnecting...
W200409 11:45:18.076942 21 gossip/client.go:123 [n?] failed to start gossip client to cockroachdb-0.cockroachdb:26257: initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: lookup cockroachdb-0.cockroachdb on 172.20.0.10:53: no such host"
I came across this comment from the CockroachDB forum (https://forum.cockroachlabs.com/t/http-probe-failed-with-statuscode-503/2043/6)
Both the cockroach_out.log and cockroach_output1.log files you sent me (corresponding to mycockroach-cockroachdb-0 and mycockroach-cockroachdb-2) print out no stores bootstrapped during startup and prefix all their log lines with n?, indicating that they haven’t been allocated a node ID. I’d say that they may have never been properly initialized as part of the cluster.
I have deleted everything, including the PVs, PVCs and AWS EBS volumes, via kubectl delete and reapplied the manifests, but I hit the same issue.
Any thoughts would be very much appreciated. Thank you.
I was not aware that you had to initialize the CockroachDB cluster after creating it. I did the following to resolve my issue (substitute your own namespace; note the -- separating the shell command from the kubectl flags):
kubectl exec -it cockroachdb-0 -n <namespace> -- /bin/sh
/cockroach/cockroach init
See here for more details - https://www.cockroachlabs.com/docs/v19.2/cockroach-init.html
After this the pods started running correctly.
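As a usage note, the same one-time initialization can be run without an interactive shell. This is a sketch assuming the standard insecure StatefulSet config and a placeholder <namespace>:
# run cockroach init once against any one node of the new cluster
kubectl exec -it cockroachdb-0 -n <namespace> -- /cockroach/cockroach init --insecure
# the pods should then report 1/1 READY
kubectl get pods -n <namespace>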

Cloudfoundry restage terminal signals

Currently I have a microservice running in Cloud Foundry. I am trapping SIGTERM and SIGHUP. I'm trying to verify which signal is sent when a cf restage is performed. I've seen the termination signals documented for a lot of other commands, but not for this one. I would appreciate it if somebody could point me to documentation, or just share what they know, about the signal sent to the process on a cf restage. Thank you.
The signal that your app is sent shouldn't differ between cf actions (i.e. stop, restart, restage, or even if your app is restarted due to foundation maintenance): it will always get a SIGTERM, then ten seconds to shut down gracefully, followed by a SIGKILL.
https://docs.pivotal.io/pivotalcf/2-6/devguide/deploy-apps/app-lifecycle.html#shutdown
I did a little test on Pivotal Web Service to confirm this for cf restage, using an app that catches and logs SIGTERM. You can see right in the middle of the output below where the SIGTERM is caught by the app. It's just a little harder to spot in this case because the staging logs are coming through at the same time.
Hope that helps!
2019-08-25T22:02:02.90-0400 [CELL/0] OUT Cell 65a71ce1-e630-4765-8f60-adebfa730268 stopping instance a91e593b-d9b6-42aa-7021-b8cd
2019-08-25T22:02:02.98-0400 [API/9] OUT Creating build for app with guid f58e6aae-783d-4a28-bd30-54c20d314ef4
2019-08-25T22:02:03.87-0400 [STG/0] OUT Downloading binary_buildpack...
2019-08-25T22:02:03.91-0400 [APP/PROC/WEB/0] OUT running
2019-08-25T22:02:03.94-0400 [STG/0] OUT Downloaded binary_buildpack
2019-08-25T22:02:03.94-0400 [STG/0] OUT Cell 9aa90abe-6a8f-4485-90d1-71da907de9a3 creating container for instance 4cd508ee-3ce3-4e61-a9b7-5a997ca5583e
2019-08-25T22:02:05.36-0400 [STG/0] OUT Cell 9aa90abe-6a8f-4485-90d1-71da907de9a3 successfully created container for instance 4cd508ee-3ce3-4e61-a9b7-5a997ca5583e
2019-08-25T22:02:05.72-0400 [STG/0] OUT Downloading app package...
2019-08-25T22:02:05.72-0400 [STG/0] OUT Downloading build artifacts cache...
2019-08-25T22:02:05.77-0400 [STG/0] ERR Downloading build artifacts cache failed
2019-08-25T22:02:05.92-0400 [STG/0] OUT Downloaded app package (651.6K)
2019-08-25T22:02:06.57-0400 [STG/0] OUT -----> Binary Buildpack version 1.0.33
2019-08-25T22:02:06.83-0400 [STG/0] OUT Exit status 0
2019-08-25T22:02:06.83-0400 [STG/0] OUT Uploading droplet, build artifacts cache...
2019-08-25T22:02:06.83-0400 [STG/0] OUT Uploading droplet...
2019-08-25T22:02:06.83-0400 [STG/0] OUT Uploading build artifacts cache...
2019-08-25T22:02:06.97-0400 [STG/0] OUT Uploaded build artifacts cache (215B)
2019-08-25T22:02:07.02-0400 [API/2] OUT Creating droplet for app with guid f58e6aae-783d-4a28-bd30-54c20d314ef4
2019-08-25T22:02:08.12-0400 [APP/PROC/WEB/0] OUT SIGTERM caught, exiting
2019-08-25T22:02:08.13-0400 [CELL/SSHD/0] OUT Exit status 0
2019-08-25T22:02:08.20-0400 [APP/PROC/WEB/0] OUT Exit status 134
2019-08-25T22:02:08.28-0400 [CELL/0] OUT Cell 65a71ce1-e630-4765-8f60-adebfa730268 destroying container for instance a91e593b-d9b6-42aa-7021-b8cd
2019-08-25T22:02:08.91-0400 [PROXY/0] OUT Exit status 137
2019-08-25T22:02:09.16-0400 [CELL/0] OUT Cell 65a71ce1-e630-4765-8f60-adebfa730268 successfully destroyed container for instance a91e593b-d9b6-42aa-7021-b8cd
2019-08-25T22:02:10.07-0400 [STG/0] OUT Uploaded droplet (653.1K)
2019-08-25T22:02:10.07-0400 [STG/0] OUT Uploading complete
2019-08-25T22:02:11.24-0400 [STG/0] OUT Cell 9aa90abe-6a8f-4485-90d1-71da907de9a3 stopping instance 4cd508ee-3ce3-4e61-a9b7-5a997ca5583e
2019-08-25T22:02:11.24-0400 [STG/0] OUT Cell 9aa90abe-6a8f-4485-90d1-71da907de9a3 destroying container for instance 4cd508ee-3ce3-4e61-a9b7-5a997ca5583e
2019-08-25T22:02:11.68-0400 [CELL/0] OUT Cell e9fa9dcc-6c6e-4cd4-97cd-5781aa4c64e6 creating container for instance f2bc9aaa-64cf-4331-53b5-bd5f
2019-08-25T22:02:11.95-0400 [STG/0] OUT Cell 9aa90abe-6a8f-4485-90d1-71da907de9a3 successfully destroyed container for instance 4cd508ee-3ce3-4e61-a9b7-5a997ca5583e
2019-08-25T22:02:13.28-0400 [CELL/0] OUT Cell e9fa9dcc-6c6e-4cd4-97cd-5781aa4c64e6 successfully created container for instance f2bc9aaa-64cf-4331-53b5-bd5f
2019-08-25T22:02:14.43-0400 [CELL/0] OUT Downloading droplet...
2019-08-25T22:02:14.78-0400 [CELL/0] OUT Downloaded droplet (653.1K)
2019-08-25T22:02:16.07-0400 [APP/PROC/WEB/0] OUT running
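For reference, the handler in the test app was essentially just a trap on SIGTERM. A minimal sketch (illustrative only, assuming a plain bash start command rather than the exact app used above):
#!/bin/bash
# Cloud Foundry sends SIGTERM, waits ~10 seconds, then sends SIGKILL.
trap 'echo "SIGTERM caught, exiting"; exit 0' TERM
echo "running"
while true; do
  sleep 1
done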

How to deploy to Cloud Foundry with Docker Image from Private Docker Registry and Self-Signed SSL certificate?

How to add self-signed certificate to Cloud Foundry (PCFDev), so I would be able to deploy with Docker Image from private Docker Registry?
For this example I'm using PCFDev:
user#work:(0):~/Documents/$ cf push app-ui -o nexus-dev/app/app-ui:latest
Creating app app-ui in org pcfdev-org / space pcfdev-space as user...
OK
Creating route app-ui.local.pcfdev.io...
OK
Binding app-ui.local.pcfdev.io to app-ui...
OK
Starting app app-ui in org pcfdev-org / space pcfdev-space as user...
Creating container
Successfully created container
Staging...
Staging process started ...
Failed to talk to docker registry: Get https://nexus-dev/v2/: x509: certificate signed by unknown authority
Failed getting docker image by tag: Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body bgcolor=\"whit
e\">\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx/1.10.0 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n"
Staging process failed: Exit trace for group:
builder exited with error: failed to fetch metadata from [app/app-ui] with tag [latest] and insecure registries [] due to Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>
400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx/1.10.0
(Ubuntu)</center>\r\n</body>\r\n</html>\r\n"
Exit status 2
Staging Failed: Exited with status 2
Destroying container
Successfully destroyed container
FAILED
Error restarting application: StagingError
TIP: use 'cf logs app-ui --recent' for more information
You can start PCF Dev with the -r option to whitelist the registry as insecure, e.g.:
cf dev start -r host.pcfdev.io:5000
See the "Insecure Docker Registries" section of the PCF Dev documentation for details.
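For example, assuming the Nexus registry from the question is reachable at nexus-dev (add the port if it listens on a non-standard one), restarting PCF Dev with that registry whitelisted should let the original push go through:
cf dev start -r nexus-dev
cf push app-ui -o nexus-dev/app/app-ui:latest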