In a fresh VMware PKS Kubernetes cluster, the secret created for the private Docker registry works as expected, but kubectl is not pulling images from the public registry "https://registry-1.docker.io/v2/".
I am connected to the corporate network, and http_proxy and https_proxy are set so I can reach the internet.
docker login and docker pull work, but images are not pulled when deployments are created with kubectl. The public image "dduportal/bats:0.4.0" is the one failing. The kubectl describe output is copied to a path in GitHub.
I tried adding the secret for the public Docker registry separately, like the private one. Someone pointed out that secrets should be kept separate in case images are pulled from multiple private registries. In my case the registry is public, but I still kept them separate.
kubectl create secret docker-registry regcred-public --docker-server=registry-1.docker.io --docker-username=<public-user> --docker-password=<token> --docker-email=<myemail>
kubectl create secret docker-registry regcred-private --docker-server=private-registry --docker-username=<private-user> --docker-password=password --docker-email=<myemail>
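For completeness, a minimal sketch of how a pod would reference these secrets (the pod name is made up for illustration; the image is the one that fails):
apiVersion: v1
kind: Pod
metadata:
  name: bats-test                 # hypothetical name, for illustration
spec:
  containers:
    - name: bats
      image: dduportal/bats:0.4.0
  imagePullSecrets:
    - name: regcred-public        # the secret created above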
What could be the issue? How do I make my Kubernetes cluster pull images from the public repository when docker pull from the command line works without any issues?
There is no clue except the message that the pull from the public registry failed. It would help if the cluster gave any more detail.
Are there any rules or configuration required on the cluster end?
Failed to pull image "dduportal/bats:0.4.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The problem may lie in an incorrect HTTP proxy setup. Note that when kubectl creates a deployment, the image is pulled by the Docker daemon on the worker node, not by your shell, so the http_proxy/https_proxy variables in your session do not apply; the daemon needs its own proxy configuration.
First, create a systemd drop-in directory for the Docker service:
mkdir /etc/systemd/system/docker.service.d
Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Verify that the configuration has been loaded:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Restart Docker:
$ sudo systemctl restart docker
Link to the official Docker documentation on configuring an HTTP proxy:
docker-http.
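With the daemon configured, a quick sanity check on the worker node itself, using the image from the question, should now succeed through the proxy:
sudo docker pull dduportal/bats:0.4.0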
GitLab version: 13.8.1-ee (installed with Helm)
GKE version: 1.16.15-gke.6000
I installed gitlab and gitlab-runner on GKE in a private cluster.
I also have nginx-ingress-controller for the firewall rule, following these docs:
https://gitlab.com/gitlab-org/charts/gitlab/blob/70f31743e1ff37bb00298cd6d0b69a0e8e035c33/charts/nginx/index.md
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges:
        ["IP","ADDRESSES"]
With this setting, the gitlab-runner pod shows this error:
couldn't execute POST against https://gitlab.my-domain.com/api/v4/runners: Post https://gitlab.my-domain.com/api/v4/runners: dial tcp [my-domain's-IP]: i/o timeout
The issue is the same as this one:
Gitlab Runner can't access Gitlab self-hosted instance
But I have already set up Cloud NAT and a Cloud Route, and I also added the Cloud NAT IP address to loadBalancerSourceRanges in GitLab's values.yaml.
To check whether Cloud NAT was working, I exec'd into the pod and checked its outgoing IP:
$ kubectl exec -it gitlab-gitlab-runner-xxxxxxxx /bin/sh
wget -qO- httpbin.org/ip
and it showed the Cloud NAT IP address.
So the request to the URL below must be using the Cloud NAT IP as its source IP:
https://gitlab.my-domain.com/api/v4/runners
What can I do to solve this?
It worked when I added the Kubernetes pods' internal IP range to loadBalancerSourceRanges. Both stable/nginx and https://kubernetes.github.io/ingress-nginx worked.
gitlab-runner called https://my-domain/api/v4/runners. I thought the request would go through the public network, so I added only the Cloud NAT IP, but apparently it did not.
Still, it's a little bit odd.
The first time, I set 0.0.0.0/0 in loadBalancerSourceRanges and added only the Cloud NAT IP in the firewall, and https://my-domain/api/v4/runners worked.
So loadBalancerSourceRanges may be used in two places: one is the firewall rule we can see on GCP; the other is hidden.
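For reference, a values.yaml sketch of the workaround described above (both addresses are placeholders; substitute your Cloud NAT IP and your cluster's actual pod CIDR):
nginx-ingress:
  controller:
    service:
      loadBalancerSourceRanges:
        - "203.0.113.10/32"   # placeholder: Cloud NAT IP
        - "10.8.0.0/14"       # placeholder: the cluster's pod CIDR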
I'm new to DevOps. I want to install Jenkins on AWS EC2 with Docker.
I installed Jenkins with this command:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
In the AWS security group, I have enabled ports 8080 and 50000. I also enabled port 22 for SSH, 27017 for Mongo, and 3000 for Node.
I can see the Jenkins container when I run docker ps. However, when I open https://xxxx.us-east-2.compute.amazonaws.com:8080, the Jenkins setup page does not appear; the browser shows the error ERR_SSL_PROTOCOL_ERROR.
Does anyone know what's wrong here? Should I install Nginx as well? I haven't installed it yet.
The error occurs because you are using https:
https://xxxx.us-east-2.compute.amazonaws.com:8080
From your description it does not seem that you've set up any kind of SSL termination for your instance, so you should connect using http only:
http://xxxx.us-east-2.compute.amazonaws.com:8080
But this is not good practice, as you communicate in plain text. A common solution is to access the Jenkins web UI through an SSH tunnel. That way the connection is encrypted and you don't have to expose any Jenkins port in your security groups.
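A minimal sketch of such a tunnel (the key path, user, and hostname are placeholders for your own values):
# Forward local port 8080 to Jenkins on the instance over SSH
ssh -i ~/.ssh/my-key.pem -N -L 8080:localhost:8080 ec2-user@xxxx.us-east-2.compute.amazonaws.com
# Then browse to http://localhost:8080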
I have a Squid proxy installed on an AWS EC2 instance and a pod running in a Kubernetes cluster.
I have added environment variables to the deployment.yaml file to point at the Squid proxy load balancer, as below:
env:
  - name: http_proxy
    value: "http://XXXX:3128"
  - name: https_proxy
    value: "http://XXXX:3128"
I can see an access-denied response if I make a curl request from the Kubernetes pod's console:
curl -k google.com
But the request is not routed to the Squid proxy when it comes from the application running in the pod.
Can anyone suggest where I am going wrong? How do I route all requests from the application running in the pod to the Squid proxy?
You can try the following:
1) Fix this at the docker.service.d level by creating
/etc/systemd/system/docker.service.d/http-proxy.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://XXXX:3128"
Environment="HTTPS_PROXY=http://XXXX:3128"
Don't forget to run afterwards:
systemctl daemon-reload
systemctl restart docker
2) If you use your own image, you can build it with the lines below in the Dockerfile. With this approach, only that container will use your Squid proxy:
ENV http_proxy="http://XXXX:3128"
ENV https_proxy="http://XXXX:3128"
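For example, a minimal Dockerfile sketch (the base image is just an assumption for illustration):
FROM alpine:3.18                  # assumed base image
ENV http_proxy="http://XXXX:3128"
ENV https_proxy="http://XXXX:3128"
# ... the rest of your image build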
3) Another way is to look into /etc/default/docker (for Ubuntu):
cat /etc/default/docker
...
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
This way you will set up the proxy for ALL containers, not only a chosen one.
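4) On the pod side, one more thing worth checking (an assumption about your application, since curl behaves differently from many runtimes): curl honors the lowercase http_proxy/https_proxy variables, but some runtimes only read the uppercase variants, so you can set both forms in the deployment:
env:
  - name: http_proxy
    value: "http://XXXX:3128"
  - name: HTTP_PROXY              # some runtimes only read the uppercase form
    value: "http://XXXX:3128"
  - name: https_proxy
    value: "http://XXXX:3128"
  - name: HTTPS_PROXY
    value: "http://XXXX:3128"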
I have also found a kubernetes-squid solution on GitHub. Please take a look, though I suspect it is not quite what you need.
Hope it helps.
I have set up a basic 2-node Kubernetes cluster on AWS using kOps. I have had issues connecting to and interacting with the cluster using kubectl, and I keep getting the error
The connection to the server api.euwest2.dev.avi.k8s.com was refused - did you specify the right host or port?
when trying to run any kubectl command.
I have done a basic kops export kubecfg --name xyz.hhh.kjh.k8s.com --config=~$KUBECONFIG to export the kubeconfig for the cluster I created. I'm not sure what else I'm missing to make a successful connection to the kube-apiserver so that kubectl works.
Sounds like one of the following:
Your kube-apiserver is not running.
Check with docker ps -a | grep apiserver on your Kubernetes master.
api.euwest2.dev.avi.k8s.com is resolving to an IP address where nothing is listening (208.73.210.217?).
You have the wrong port configured for your kube-apiserver in your ~/.kube/config:
server: https://api.euwest2.dev.avi.k8s.com:6443?
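Two quick checks for the last two possibilities (the hostname and port are the ones from this answer; adjust them to your cluster):
# Show which server the current kubeconfig points at
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Check whether anything is listening on that host and port
nc -vz api.euwest2.dev.avi.k8s.com 6443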
How do I configure a Python script to run as a service (re-launched on system restart, restarted on failure) on an Amazon EC2 instance?
You can create a systemd service on the EC2 instance to achieve this. The steps are:
Create a service definition file:
sudo vi /lib/systemd/system/mypythonservice.service
Add the systemd unit file definition. You can check this or the systemd reference guide for more details:
[Unit]
Description=My Python Service
After=multi-user.target
[Service]
Type=idle
ExecStart=/usr/bin/python /home/myuser/mypythonproject.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
Set the necessary permissions on the file:
sudo chmod 644 /lib/systemd/system/mypythonservice.service
Reload the systemd daemon:
sudo systemctl daemon-reload
Enable the service to start on reboot:
sudo systemctl enable mypythonservice.service
And of course you can add all of this to an EC2 instance user data script to configure it automatically at instance launch.
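A sketch of such a user data script, reusing the unit file and paths from the steps above (adjust them to your own script; user data runs as root, so sudo is not needed):
#!/bin/bash
# Write the unit file, then enable and start the service
cat > /lib/systemd/system/mypythonservice.service <<'EOF'
[Unit]
Description=My Python Service
After=multi-user.target

[Service]
Type=idle
ExecStart=/usr/bin/python /home/myuser/mypythonproject.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
chmod 644 /lib/systemd/system/mypythonservice.service
systemctl daemon-reload
systemctl enable --now mypythonservice.service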
Configure a Python script as a service on AWS EC2
After much unsuccessful research into running a Python API on custom port 8080 on Amazon's Linux AMI (AWS), I decided to solve this dilemma and share the solution with all of you.
See the solution in this link.