Get OpenStack-Helm running in a VM behind a proxy (VirtualBox)

I have been working on getting an OpenStack-Helm deployment running on a VM for some time. I have found conflicting claims about whether this is even possible because of the virtualization layers, but here it goes.
I am working with a VirtualBox Ubuntu 16.04 VM with 16 GB of RAM,
following the guide at https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
I am also working behind a proxy and I have added all the relevant proxy settings in accordance with https://docs.openstack.org/openstack-helm/latest/troubleshooting/proxy.html
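For reference, the kind of settings involved are the standard proxy environment variables plus a matching no_proxy list; a rough sketch with a placeholder proxy address (the exact values depend on your environment):
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc.cluster.local
# Docker itself also needs the proxy (e.g. via a systemd drop-in such as
# /etc/systemd/system/docker.service.d/http-proxy.conf), and no_proxy must cover the
# cluster service and pod CIDRs so in-cluster traffic is not sent to the proxy.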
When I run the command make dev-deploy setup-host, it passes, and docker ps then shows the following:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6859822cc9a gcr.io/google_containers/kube-apiserver-amd64 "kube-apiserver --se…" 17 minutes ago Up 17 minutes k8s_kube-apiserver_kube-apiserver-ram_kube-system_9b1ce48429c89e5c30202699d11086af_2
c18ec1473790 gcr.io/google_containers/kube-scheduler-amd64 "kube-scheduler --ad…" 17 minutes ago Up 17 minutes k8s_kube-scheduler_kube-scheduler-ram_kube-system_65a679e8f744d3d257f72713d3790c3b_2
2b631d5fb2a3 gcr.io/google_containers/etcd-amd64 "etcd --listen-clien…" 17 minutes ago Up 17 minutes k8s_etcd_etcd-ram_kube-system_7278f85057e8bf5cb81c9f96d3b25320_2
5a8b005119e3 gcr.io/google_containers/kube-controller-manager-amd64 "kube-controller-man…" 17 minutes ago Up 17 minutes k8s_kube-controller-manager_kube-controller-manager-ram_kube-system_2150e730dce733115de72022e9130f4c_2
a1917416a6e1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-controller-manager-ram_kube-system_2150e730dce733115de72022e9130f4c_2
caa82c2f2e24 gcr.io/google_containers/pause-amd64:3.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-scheduler-ram_kube-system_65a679e8f744d3d257f72713d3790c3b_2
183a2a436c5e gcr.io/google_containers/pause-amd64:3.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD_etcd-ram_kube-system_7278f85057e8bf5cb81c9f96d3b25320_3
a3fbddd01f99 gcr.io/google_containers/pause-amd64:3.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-apiserver-ram_kube-system_9b1ce48429c89e5c30202699d11086af_2
But when I run the command make dev-deploy k8s, it fails with:
TASK [deploy-kubeadm-aio-master : deploying kubernetes on master node] *********************************************
TASK [deploy-kubeadm-aio-common : performing deploy-kube action] ***************************************************
fatal: [local]: FAILED! => {"changed": false, "msg": "+ '[' xdeploy-kube == xdeploy-kubelet ']'\n+ '[' xdeploy-kube == xdeploy-kube ']'\n+ '[' x '!=' x ']'\n+ '[' xdocker0 '!=' x ']'\n++ echo '{' '\"my_container_name\":' '\"kubeadm-deploy-kube\",' '\"user\":' '{' '\"uid\":' 1000, '\"gid\":' 1000, '\"home\":' '\"/home/nicolas\"' '},' '\"cluster\":' '{' '\"cni\":' '\"calico\"' '},' '\"kubelet\":' '{' '\"container_runtime\":' '\"docker\",' '\"net_support_linuxbridge\":' true, '\"pv_support_nfs\":' true, '\"pv_support_ceph\":' true '},' '\"helm\":' '{' '\"tiller_image\":' '\"gcr.io/kubernetes-helm/tiller:v2.7.2\"' '},' '\"k8s\":' '{' '\"kubernetesVersion\":' '\"v1.9.3\",' '\"imageRepository\":' '\"gcr.io/google_containers\",' '\"certificatesDir\":' '\"/etc/kubernetes/pki\",' '\"selfHosted\":' '\"false\",' '\"api\":' '{' '\"bindPort\":' 6443 '},' '\"networking\":' '{' '\"dnsDomain\":' '\"cluster.local\",' '\"podSubnet\":' '\"192.168.0.0/16\",' '\"serviceSubnet\":' '\"10.96.0.0/12\"' '}' '}' '}'\n++ jq '.k8s.api += {\"advertiseAddressDevice\": \"docker0\"}'\n+ PLAYBOOK_VARS='{\n \"my_container_name\": \"kubeadm-deploy-kube\",\n \"user\": {\n \"uid\": 1000,\n \"gid\": 1000,\n \"home\": \"/home/nicolas\"\n },\n \"cluster\": {\n \"cni\": \"calico\"\n },\n \"kubelet\": {\n \"container_runtime\": \"docker\",\n \"net_support_linuxbridge\": true,\n \"pv_support_nfs\": true,\n \"pv_support_ceph\": true\n },\n \"helm\": {\n \"tiller_image\": \"gcr.io/kubernetes-helm/tiller:v2.7.2\"\n },\n \"k8s\": {\n \"kubernetesVersion\": \"v1.9.3\",\n \"imageRepository\": \"gcr.io/google_containers\",\n \"certificatesDir\": \"/etc/kubernetes/pki\",\n \"selfHosted\": \"false\",\n \"api\": {\n \"bindPort\": 6443,\n \"advertiseAddressDevice\": \"docker0\"\n },\n \"networking\": {\n \"dnsDomain\": \"cluster.local\",\n \"podSubnet\": \"192.168.0.0/16\",\n \"serviceSubnet\": \"10.96.0.0/12\"\n }\n }\n}'\n+ exec ansible-playbook /opt/playbooks/kubeadm-aio-deploy-master.yaml --inventory=/opt/playbooks/inventory.ini --inventory=/opt/playbooks/vars.yaml '--extra-vars={\n \"my_container_name\": \"kubeadm-deploy-kube\",\n \"user\": {\n \"uid\": 1000,\n \"gid\": 1000,\n \"home\": \"/home/nicolas\"\n },\n \"cluster\": {\n \"cni\": \"calico\"\n },\n \"kubelet\": {\n \"container_runtime\": \"docker\",\n \"net_support_linuxbridge\": true,\n \"pv_support_nfs\": true,\n \"pv_support_ceph\": true\n },\n \"helm\": {\n \"tiller_image\": \"gcr.io/kubernetes-helm/tiller:v2.7.2\"\n },\n \"k8s\": {\n \"kubernetesVersion\": \"v1.9.3\",\n \"imageRepository\": \"gcr.io/google_containers\",\n \"certificatesDir\": \"/etc/kubernetes/pki\",\n \"selfHosted\": \"false\",\n \"api\": {\n \"bindPort\": 6443,\n \"advertiseAddressDevice\": \"docker0\"\n },\n \"networking\": {\n \"dnsDomain\": \"cluster.local\",\n \"podSubnet\": \"192.168.0.0/16\",\n \"serviceSubnet\": \"10.96.0.0/12\"\n }\n }\n}'\n\nPLAY [all]
*********************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok:
[/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : storing node hostname] ***************************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : setup directorys on host] ************************\nok: [/mnt/rootfs] => (item=/etc/kubernetes)\nchanged: [/mnt/rootfs] => (item=/etc/kubernetes/pki)\n\nTASK [deploy-kubeadm-master : generating initial admin token] ******************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : storing initial admin token] *********************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : kubelet | copying config to host] ****************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | ca] ********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | apiserver] *************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | apiserver-kubelet-client] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | sa] ********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-ca] ********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-client] ****\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | admin] ************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | kubelet] **********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | controller-manager] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | scheduler] ********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : generating etcd static manifest] *****************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | apiserver] ******\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | controller-manager] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | scheduler] ******\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : wait for kube api] *******************************\nFAILED - RETRYING: wait for kube api (120 retries left).\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : wait for node to come online] ********************\nFAILED
and then:
TASK [deploy-kubeadm-aio-common : dumping logs for deploy-kube action] *********************************************
ok: [local] => {
"out.stdout_lines": [
"",
"PLAY [all] *********************************************************************",
"",
"TASK [Gathering Facts] *********************************************************",
"ok: [/mnt/rootfs]",
"",
"TASK [deploy-kubeadm-master : storing node hostname] ***************************",
"ok: [/mnt/rootfs]",
"",
"TASK [deploy-kubeadm-master : setup directorys on host] ************************",
"ok: [/mnt/rootfs] => (item=/etc/kubernetes)",
"changed: [/mnt/rootfs] => (item=/etc/kubernetes/pki)",
"",
"TASK [deploy-kubeadm-master : generating initial admin token] ******************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : storing initial admin token] *********************",
"ok: [/mnt/rootfs]",
"",
"TASK [deploy-kubeadm-master : kubelet | copying config to host] ****************",
"changed: [/mnt/rootfs]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | ca] ********************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | apiserver] *************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | apiserver-kubelet-client] ***",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | sa] ********************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-ca] ********",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-client] ****",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | kubeconfig | admin] ************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | kubeconfig | kubelet] **********",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | kubeconfig | controller-manager] ***",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | kubeconfig | scheduler] ********",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : generating etcd static manifest] *****************",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | controlplane | apiserver] ******",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | controlplane | controller-manager] ***",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : master | deploy | controlplane | scheduler] ******",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : wait for kube api] *******************************",
"FAILED - RETRYING: wait for kube api (120 retries left).",
"changed: [/mnt/rootfs -> 127.0.0.1]",
"",
"TASK [deploy-kubeadm-master : wait for node to come online] ********************",
"FAILED - RETRYING: wait for node to come online (120 retries left).",
"FAILED - RETRYING: wait for node to come online (119 retries left).",
...
"FAILED - RETRYING: wait for node to come online (2 retries left).",
"FAILED - RETRYING: wait for node to come online (1 retries left).",
"fatal: [/mnt/rootfs -> 127.0.0.1]: FAILED! => {\"attempts\": 120, \"changed\": true, \"cmd\": \"kubectl get node \\\"Ram\\\" --no-headers | gawk '{ print $2 }' | grep -q '\\\\(^Ready\\\\)\\\\|\\\\(^NotReady\\\\)'\", \"delta\": \"0:00:01.188128\", \"end\": \"2018-04-05 17:06:51.647344\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-04-05 17:06:50.459216\", \"stderr\": \"Error from server (NotFound): nodes \\\"Ram\\\" not found\", \"stderr_lines\": [\"Error from server (NotFound): nodes \\\"Ram\\\" not found\"], \"stdout\": \"\", \"stdout_lines\": []}",
"\tto retry, use: --limit #/opt/playbooks/kubeadm-aio-deploy-master.retry",
"",
"PLAY RECAP *********************************************************************",
"/mnt/rootfs : ok=21 changed=18 unreachable=0 failed=1 "
]
}
TASK [deploy-kubeadm-aio-common : exiting if deploy-kube action failed] ********************************************
fatal: [local]: FAILED! => {"changed": false, "cmd": "exit 1", "msg": "[Errno 2] No such file or directory", "rc": 2}
Any pointers would be greatly appreciated.

I had the same issue at this step of the Ansible playbook when deploying an AIO OpenStack with Helm on a VM.
At this step a number of Docker containers are created; you can verify this with "docker ps" in another terminal (it takes some time):
ubuntu#xxx:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07ee2d56038a ac5f7ee9ae7e "/tiller" 44 seconds ago Up 43 seconds k8s_tiller_tiller-deploy-7df7fbdd9c-dwzs8_kube-system_28952c16-1d8d-11e9-9ac4-fa163e98e40b_0
e49ef4eaee2c k8s.gcr.io/pause-amd64:3.1 "/pause" 46 seconds ago Up 45 seconds k8s_POD_tiller-deploy-7df7fbdd9c-dwzs8_kube-system_28952c16-1d8d-11e9-9ac4-fa163e98e40b_0
282cbd3089a6 fed89e8b4248 "/sidecar --v=2 --lo…" About a minute ago Up About a minute k8s_sidecar_kube-dns-7485c77cc-wr88r_kube-system_1c2a3761-1d8d-11e9-9ac4-fa163e98e40b_0
b93e2ab03f6c 459944ce8cc4 "/dnsmasq-nanny -v=2…" About a minute ago Up About a minute k8s_dnsmasq_kube-dns-7485c77cc-wr88r_kube-system_1c2a3761-1d8d-11e9-9ac4-fa163e98e40b_0
4148856fcdd4 512cd7425a73 "/kube-dns --domain=…" About a minute ago Up About a minute k8s_kubedns_kube-dns-7485c77cc-wr88r_kube-system_1c2a3761-1d8d-11e9-9ac4-fa163e98e40b_0
14b8cd3cd089 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-dns-7485c77cc-wr88r_kube-system_1c2a3761-1d8d-11e9-9ac4-fa163e98e40b_0
3bab5bea7b6e 6f1a824b2c81 "/usr/bin/kube-contr…" About a minute ago Up About a minute k8s_calico-kube-controllers_calico-kube-controllers-6bccb8d477-qbk55_kube-system_089abbd3-1d8d-11e9-9ac4-fa163e98e40b_0
f6916bba8ccb k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_calico-kube-controllers-6bccb8d477-qbk55_kube-system_089abbd3-1d8d-11e9-9ac4-fa163e98e40b_0
80cfd3f75475 a89b45f36d5e "start_runit" About a minute ago Up About a minute k8s_calico-node_calico-node-d5n8s_kube-system_0890d2d2-1d8d-11e9-9ac4-fa163e98e40b_0
9b8c7f4e9655 58c02f00d03b "/usr/local/bin/etcd…" About a minute ago Up About a minute k8s_calico-etcd_calico-etcd-d658p_kube-system_088f6c7e-1d8d-11e9-9ac4-fa163e98e40b_0
d78105adb690 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_calico-node-d5n8s_kube-system_0890d2d2-1d8d-11e9-9ac4-fa163e98e40b_0
7a58c798a05a k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_calico-etcd-d658p_kube-system_088f6c7e-1d8d-11e9-9ac4-fa163e98e40b_0
5a8b67e4f12d 3eb53757e3db "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-fc557_kube-system_01792fcc-1d8d-11e9-9ac4-fa163e98e40b_0
0cf582074588 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-fc557_kube-system_01792fcc-1d8d-11e9-9ac4-fa163e98e40b_0
c51f0e9ab095 a8706603e59d "kube-scheduler --ad…" 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-openstack---helm_kube-system_4cfd2774e591ce0cf177c635d6ca6850_0
c1bdb76ab9cf 52920ad46f5b "etcd --listen-clien…" 2 minutes ago Up 2 minutes k8s_etcd_etcd-openstack---helm_kube-system_1a6fe3b688c5f1cbaa41d4e4e0dc702b_0
1d39d3d9cb39 a0e5065bdee0 "kube-controller-man…" 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-openstack---helm_kube-system_7f18c2dbce121d9cbf120054643f96cf_0
e6cf7228793e f8187c0f74c8 "kube-apiserver --fe…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-openstack---helm_kube-system_ef303a0f16f6d02a535b073b1ce77bd1_0
16ca266396fc 18be53808701 "dnsmasq --keep-in-f…" 2 minutes ago Up 2 minutes k8s_osh-dns-redirector_osh-dns-redirector-openstack---helm_kube-system_b4dcd4cd9d071cf5c953824f3630b532_0
ff4850d1148d k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-scheduler-openstack---helm_kube-system_4cfd2774e591ce0cf177c635d6ca6850_0
093826e65165 k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-controller-manager-openstack---helm_kube-system_7f18c2dbce121d9cbf120054643f96cf_0
498e4e03c00b k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-apiserver-openstack---helm_kube-system_ef303a0f16f6d02a535b073b1ce77bd1_0
d31b3720f9d2 k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_etcd-openstack---helm_kube-system_1a6fe3b688c5f1cbaa41d4e4e0dc702b_0
fb5c6c8adec0 k8s.gcr.io/pause-amd64:3.1 "/pause" 2 minutes ago Up 2 minutes k8s_POD_osh-dns-redirector-openstack---helm_kube-system_b4dcd4cd9d071cf5c953824f3630b532_0
When I retried the deployment on another VM with more resources, the playbook finished without errors:
TASK [deploy-kubeadm-aio-common : performing deploy-kubelet action] ************************************************************************************************************************************
changed: [local]
TASK [deploy-kubeadm-aio-common : removing container for deploy-kubelet action] ************************************************************************************************************************
changed: [local]
TASK [deploy-kubeadm-aio-master : deploying kubernetes on master node] *********************************************************************************************************************************
TASK [deploy-kubeadm-aio-common : performing deploy-kube action] ***************************************************************************************************************************************
changed: [local]
TASK [deploy-kubeadm-aio-common : removing container for deploy-kube action] ***************************************************************************************************************************
changed: [local]
[WARNING]: Could not match supplied host pattern, ignoring: nodes
PLAY [nodes] *******************************************************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************************************************************************
local : ok=18 changed=13 unreachable=0 failed=0
I'm not 100% sure, but it could be a resource issue: maybe the containers aren't created fast enough on your VM, so the playbook runs out of retries and reports failures like:
"FAILED - RETRYING: wait for node to come online (1 retries left).",
As you can see in my output, I have more containers running.
Working VM specs:
8 vCPUs
16 GB RAM
60 GB disk
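If you want to check whether your deployment is merely slow rather than stuck, you can watch from a second terminal while the playbook is in its retry loop; a small sketch, assuming kubectl was already configured on the host by the earlier steps:
# count the kube-system containers as they come up
watch -n 5 'sudo docker ps --format "{{.Names}}" | wc -l'
# check whether the node has registered with the API server yet
kubectl get nodes --no-headers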

I had the same error.
The problem is simple: your hostname doesn't match the one Ansible wants to run the job against.
E.g.:
root@k8s1:~# hostname
k8s1
So in /etc/hosts it has:
XXX.XXX.XXX.XXX k8s1.XXX.test k8s1
Solution:
echo "k8s1.XXX.test" > /etc/hostname
hostname "k8s1.XXX.test"
reboot
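To check whether you are hitting this (in the question above the playbook is querying a node named "Ram"), compare the host's name with what actually got registered; a quick sketch, assuming the admin kubeconfig is usable on the host:
hostname
kubectl get nodes --no-headers | awk '{ print $1 }'
# the playbook's "kubectl get node <name>" only succeeds if these two agree
# (kubelet normally registers the node under the lower-cased hostname)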

Related

[Amazon](500150) SocketTimeoutException. Works locally but not on Lambda [Redshift, Spring Data JDBC, Spring Boot]

I am working on a simple API to query Redshift, and I am encountering nothing but problems. The current one is that I am getting a SocketTimeoutException when I deploy it to Lambda. Googling this exception turns up tons of recommendations to add the "client CIDR/IP address to the VPC security group". However, my credentials (and IP) work fine for accessing the Redshift DB from my DB client (DBeaver), and when I run my Spring Boot application locally and call it from Postman. But once it is on Lambda I get the SocketTimeoutException.
I am reaching out to the team to check whether I do need to whitelist an IP, but the headache I was having before this was about Spring Boot taking too long to start up, causing other kinds of timeouts, and I have a feeling that this issue has more to do with Spring Boot than with my Redshift connection.
Reasons I suspect this:
1. As I mentioned, I have had timeout issues for days, but it only switched to the socket timeout when I went from variations of the suggested:
public StreamLambdaHandler() throws ContainerInitializationException {
    long startTime = Instant.now().toEpochMilli();
    handler = new SpringBootProxyHandlerBuilder()
            .defaultProxy()
            .asyncInit(startTime)
            .springBootApplication(Application.class)
            .buildAndInitialize();
}
to what a different API my company is using has:
private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

static {
    try {
        handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
    } catch (ContainerInitializationException e) {
        e.printStackTrace();
        throw new RuntimeException("Could not initialize Spring Boot application", e);
    }
}
2. My company deploys a much heavier API (with many endpoints, service classes, etc.) that is only 60 KB, whereas my single-endpoint API is packaged as a shaded JAR with all its dependencies, which puts it at a whopping 19.6 MB! I am guessing this might be affecting the load time?
3. It takes 4.227 seconds to load locally.
The full stack trace is really long, but here is the bit I think is most relevant:
2023-02-06T07:13:30.139-06:00 INIT_START Runtime Version: java:11.v15 Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:blahhalb
2023-02-06T07:13:30.715-06:00 13:13:30.711 [main] INFO com.amazonaws.serverless.proxy.internal.LambdaContainerHandler - Starting Lambda Container Handler
*****Starts app at 7:13:31*****
2023-02-06T07:13:31.634-06:00 . ____ _ __ _ _
2023-02-06T07:13:31.634-06:00 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
2023-02-06T07:13:31.634-06:00 ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2023-02-06T07:13:31.634-06:00 \\/ ___)| |_)| | | | | || (_| | ) ) ) )
2023-02-06T07:13:31.634-06:00 ' |____| .__|_| |_|_| |_\__, | / / / /
2023-02-06T07:13:31.634-06:00 =========|_|==============|___/=/_/_/_/
2023-02-06T07:13:31.638-06:00 :: Spring Boot ::
2023-02-06T07:13:31.834-06:00 2023-02-06 13:13:31.833 INFO 9 --- [ main] lambdainternal.AWSLambda : Starting AWSLambda using Java 11.0.14.1 on 169.254.10.245 with PID 9 (/var/runtime/lib/aws-lambda-java-runtime-0.2.0.jar started by sbx_user1051 in /var/task)
2023-02-06T07:13:31.835-06:00 2023-02-06 13:13:31.835 INFO 9 --- [ main] lambdainternal.AWSLambda : No active profile set, falling back to default profiles: default
2023-02-06T07:13:32.722-06:00 2023-02-06 13:13:32.722 INFO 9 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode.
2023-02-06T07:13:32.787-06:00 2023-02-06 13:13:32.787 INFO 9 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 58 ms. Found 1 JDBC repository interfaces.
2023-02-06T07:13:33.194-06:00 2023-02-06 13:13:33.194 INFO 9 --- [ main] c.a.s.p.i.servlet.AwsServletContext : Initializing Spring embedded WebApplicationContext
2023-02-06T07:13:33.194-06:00 2023-02-06 13:13:33.194 INFO 9 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1281 ms
2023-02-06T07:13:33.587-06:00 2023-02-06 13:13:33.585 INFO 9 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-02-06T07:13:40.762-06:00 13:13:40.758 [main] INFO
***** After failing to make connection after 7 seconds, restarts app*****
com.amazonaws.serverless.proxy.internal.LambdaContainerHandler - Starting Lambda Container Handler
2023-02-06T07:13:41.613-06:00 . ____ _ __ _ _
2023-02-06T07:13:41.613-06:00 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
2023-02-06T07:13:41.613-06:00 ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2023-02-06T07:13:41.613-06:00 \\/ ___)| |_)| | | | | || (_| | ) ) ) )
2023-02-06T07:13:41.613-06:00 ' |____| .__|_| |_|_| |_\__, | / / / /
2023-02-06T07:13:41.613-06:00 =========|_|==============|___/=/_/_/_/
2023-02-06T07:13:41.616-06:00 :: Spring Boot ::
2023-02-06T07:13:41.807-06:00 2023-02-06 13:13:41.805 INFO 12 --- [ main] lambdainternal.AWSLambda : Starting AWSLambda using Java 11.0.14.1 on 169.254.10.245 with PID 12 (/var/runtime/lib/aws-lambda-java-runtime-0.2.0.jar started by sbx_user1051 in /var/task)
2023-02-06T07:13:41.807-06:00 2023-02-06 13:13:41.807 INFO 12 --- [ main] lambdainternal.AWSLambda : No active profile set, falling back to default profiles: default
2023-02-06T07:13:42.699-06:00 2023-02-06 13:13:42.699 INFO 12 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode.
2023-02-06T07:13:42.762-06:00 2023-02-06 13:13:42.761 INFO 12 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 56 ms. Found 1 JDBC repository interfaces.
2023-02-06T07:13:43.160-06:00 2023-02-06 13:13:43.160 INFO 12 --- [ main] c.a.s.p.i.servlet.AwsServletContext : Initializing Spring embedded WebApplicationContext
2023-02-06T07:13:43.160-06:00 2023-02-06 13:13:43.160 INFO 12 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1277 ms
2023-02-06T07:13:43.549-06:00 2023-02-06 13:13:43.548 INFO 12 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-02-06T07:14:14.685-06:00 2023-02-06 13:14:14.684 ERROR 12 --- [ main]
*****Tries to make a connection for 31 seconds before giving me the SocketTimeoutException*****
com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
2023-02-06T07:14:14.685-06:00 java.sql.SQLException: [Amazon](500150) Error setting/closing connection: SocketTimeoutException.
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.client.PGClient.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.client.PGClient.<init>(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.core.PGJDBCConnection.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.jdbc.common.AbstractDriver.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[task/:na]
Is it possible that this is a Spring Boot startup timeout, or is it much more likely that it is in fact a Redshift connection issue?
So your use case is to write an AWS Lambda function that can perform CRUD operations on a Redshift cluster? If so, you can implement it using the Java Lambda runtime API:
com.amazonaws.services.lambda.runtime.RequestHandler
To perform Redshift data CRUD operations from Lambda, you can use software.amazon.awssdk.services.redshiftdata.RedshiftDataClient.
Once you set up your Lambda function correctly, you can use the Redshift Data client to modify the data. For example:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.redshiftdata.RedshiftDataClient;
import software.amazon.awssdk.services.redshiftdata.model.ExecuteStatementRequest;
import software.amazon.awssdk.services.redshiftdata.model.RedshiftDataException;

private RedshiftDataClient getClient() {
    Region region = Region.US_WEST_2;
    RedshiftDataClient redshiftDataClient = RedshiftDataClient.builder()
            .region(region)
            .build();
    return redshiftDataClient;
}

public void delPost(String id) {
    try {
        RedshiftDataClient redshiftDataClient = getClient();
        // clusterId, database and dbUser are fields holding your cluster's identifiers
        String sqlStatement = "DELETE FROM blog WHERE idblog = '" + id + "'";
        ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
                .clusterIdentifier(clusterId)
                .database(database)
                .dbUser(dbUser)
                .sql(sqlStatement)
                .build();
        redshiftDataClient.executeStatement(statementRequest);
    } catch (RedshiftDataException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
Also, because your Lambda function invokes Amazon Redshift, the IAM role that the Lambda function uses must have a policy that allows it to invoke this AWS service from the Lambda function.
To conclude, you can use RedshiftDataClient as opposed to Spring APIs to insert/modify/delete Redshift data from an AWS Lambda function.

Getting deployment error while installing PCF Dev

I am trying to install PCF Dev on a local machine running Windows 10, using the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I am getting the error below in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
            L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs of the failed jobs routing-api, cloud_controller_clock, and credhub?
You need to install the bosh cli first. https://bosh.io/docs/cli-v2-install/
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment; in this case the name is cf-66ade9481d314315358c:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh, using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.
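For example, with the archive downloaded above, something along these lines (the jobs named in the error live under the control instance):
mkdir cf-logs
tar -xzf cf-66ade9481d314315358c-20200509-195443-771607.tgz -C cf-logs
# the archive typically contains one nested .tgz per instance; the control instance
# holds the logs for routing-api, cloud_controller_clock and credhub
ls cf-logs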

Docker: Running Celery in detached -d mode

I'm not sure if I am missing something, or not quite understanding the nuances of containerization, but I have provisioned celery and celery-beat services that are working in attached mode, e.g., using:
docker-compose -f docker-compose.production.yml up --build
But they are not beating when run in detached mode:
docker-compose -f docker-compose.production.yml up --build -d
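(In detached mode the output is no longer attached to my terminal, so I am reading it through the compose logs command, along these lines:)
docker-compose -f docker-compose.production.yml logs -f celery-beat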
The only output I can see is the following:
celery-beat_1 | celery beat v4.4.0 (cliffs) is starting.
celery-beat_1 | __ - ... __ - _
celery-beat_1 | LocalTime -> 2020-01-24 16:32:57
celery-beat_1 | Configuration ->
celery-beat_1 | . broker -> redis://redis:6379/0
celery-beat_1 | . loader -> celery.loaders.app.AppLoader
celery-beat_1 | . scheduler -> celery.beat.PersistentScheduler
celery-beat_1 | . db -> celerybeat-schedule
celery-beat_1 | . logfile -> [stderr]#%INFO
celery-beat_1 | . maxinterval -> 5.00 minutes (300s)
Whereas the output I want is to contain the following:
celery-beat_1 | [2020-01-24 14:56:08,325: DEBUG/MainProcess] Current schedule:
celery-beat_1 | <ScheduleEntry: Wave.celery.dispatch_surfers_waivers_periodic_task() Wave.celery.dispatch_surfers_waivers_periodic_task() <crontab: 6,16,26,36,46,56 * * * * (m/h/d/dM/MY)>
I'm not sure what I am missing here.

Cf-deployment fails on bosh-lite VM on Openstack

I have set up a bosh-lite VM on OpenStack and now want to deploy CF.
cf-deployment fails with the following errors:
Task 28 | 06:16:17 | Creating missing vms: router/96e38261-9287-4a69-b526-eb361eb36d84 (0) (00:00:01)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{0d5412ff-4140-4c10-9bcc-b82f8b8595ca}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method
Task 28 | 06:16:17 | Creating missing vms: tcp-router/0a492310-b1d7-4092-9f12-7bbdaeafa51f (0) (00:00:01)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{e2c6f33e-46b6-4600-a32e-8d113d327dd7}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method
Task 28 | 06:16:25 | Creating missing vms: database/eb7319a4-83af-4463-8cb9-455c2f3689c9 (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: nats/96b398f8-cde3-4c7f-840a-d9abbe66bf4c (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: adapter/29400d43-6d55-49b7-8368-591f4f6357cd (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: cc-worker/fe3e01ce-26ff-45cd-97cd-8b771af7ba7d (0) (00:00:09)
Task 28 | 06:16:26 | Creating missing vms: scheduler/59b98563-5b97-41a3-ac17-9b65998d5091 (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: singleton-blobstore/233fa4c4-7ee0-4b03-83e1-2b958014bad5 (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: doppler/880250b2-9418-4c02-9423-15bf0abe01fb (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: consul/d362ee07-bfc2-4569-a702-6aa9b2806c2b (0) (00:00:10)
Task 28 | 06:16:27 | Creating missing vms: log-api/7387cb18-e24e-4129-be2b-7fecfb2e3170 (0) (00:00:11)
Task 28 | 06:16:27 | Creating missing vms: api/4806fa7e-d2be-4403-b9cc-f4c3cd32269d (0) (00:00:11)
Task 28 | 06:16:27 | Creating missing vms: diego-cell/91a70e59-e815-495b-8f17-f34bfaabb3b2 (0) (00:00:11)
Task 28 | 06:16:28 | Creating missing vms: uaa/c2fb065b-84b7-42fe-85fd-400947ca48f6 (0) (00:00:12)
Task 28 | 06:16:28 | Creating missing vms: diego-api/737edb1d-4e81-48f6-9c22-169e64a3c8bb (0) (00:00:12)
Task 28 | 06:16:28 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{0d5412ff-4140-4c10-9bcc-b82f8b8595ca}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method

Docker on Mac error message: can't connect to Docker endpoint

I have tried the Vagrant devenv for a multi-peer network and it worked fine. Now I am trying to do the same thing on a Mac, but I get the following error message:
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04c Error building images: cannot connect to Docker endpoint
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04d Image Output:
vp_1 | ********************
vp_1 |
vp_1 | ********************
vp_1 | 07:21:42.553 [dockercontroller] Start -> ERRO 05b start-could not recreate container cannot connect to Docker endpoint
vp_1 | 07:21:42.553 [container] unlockContainer -> DEBU 05c container lock deleted(dev-jdoe-04233c6dd8364b9f0749882eb6d1b50992b942aa0a664182946f411ab46802a88574932ccd75f8c75e780036e363d52dd56ccadc2bfde95709fc39148d76f050)
vp_1 | 07:21:42.553 [chaincode] Launch -> ERRO 05d launchAndWaitForRegister failed Error starting container: cannot connect to Docker endpoint
Below is my compose file:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
I have also tried setting the endpoint to "unix:///var/run/docker.sock", and then I get this other error message:
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 045 Error building images: dial unix /var/run/docker.sock: connect: no such file or directory
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 046 Image Output:
When CORE_VM_ENDPOINT is set to unix:///var/run/docker.sock, please make sure that /var/run/docker.sock exists on your host, and mount it into the peer container if it is not already mounted.
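A sketch of what that mount could look like for the vp service in the compose file above (assuming Docker for Mac exposes the socket at the default location; other settings stay as before):
vp:
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
  volumes:
    # bind-mount the host's Docker socket into the peer container
    - /var/run/docker.sock:/var/run/docker.sock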
Also, refer to the following question: Hyperledger Docker endpoint not found