I have a Squid proxy installed on one of my AWS EC2 instances and a pod running in a Kubernetes cluster.
I have added the environment variables below to the deployment.yaml file to point at the Squid proxy load balancer:
env:
  - name: http_proxy
    value: "http://XXXX:3128"
  - name: https_proxy
    value: "http://XXXX:3128"
If I do a curl request from the Kubernetes pod's console, I can see the access-denied response from Squid:
curl -k google.com
But requests are not routed to the Squid proxy when they come from the application running in the pod.
Can anyone suggest where I am going wrong?
How can I route all requests from the application running in the pod to the Squid proxy?
You can try the following:
1) Fix this at the docker.service.d level by creating
/etc/systemd/system/docker.service.d/http-proxy.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://XXXX:3128"
Environment="HTTPS_PROXY=http://XXXX:3128"
Don't forget to run afterwards:
systemctl daemon-reload
systemctl restart docker
2) If you use your own image, you can build it with the following in the Dockerfile. With this approach only that container will use your Squid proxy (a pod-level variant is sketched after this list):
ENV http_proxy="http://XXXX:3128"
ENV https_proxy="http://XXXX:3128"
3) Another way is to look into /etc/default/docker (for Ubuntu):
cat /etc/default/docker
...
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
This way you set up the proxy for ALL containers, not only for a chosen one.
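Since the question already sets the proxy at the pod level, here is a minimal sketch of that variant (XXXX is the placeholder from the question; the no_proxy entries are an assumption for typical in-cluster addresses you would not want to send through Squid):
env:
  - name: http_proxy
    value: "http://XXXX:3128"
  - name: https_proxy
    value: "http://XXXX:3128"
  # assumption: keep cluster-internal traffic off the proxy
  - name: no_proxy
    value: "localhost,127.0.0.1,.svc,.cluster.local"
Note that these variables only take effect if the application itself honours http_proxy/https_proxy (curl does, but many runtimes expect the uppercase HTTP_PROXY/HTTPS_PROXY or need explicit proxy configuration), which may be why curl works from the pod console while the application's requests bypass Squid.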
I have also found a kubernetes-squid solution on GitHub. Please take a look, though I feel it may not be exactly what you need.
Hope it helps
GitLab version: 13.8.1-ee (installed with Helm)
GKE version: 1.16.15-gke.6000
I installed GitLab and gitlab-runner on GKE, in a private cluster.
I also have the nginx-ingress controller set up for the firewall rule, following these docs:
https://gitlab.com/gitlab-org/charts/gitlab/blob/70f31743e1ff37bb00298cd6d0b69a0e8e035c33/charts/nginx/index.md
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges: ["IP","ADDRESSES"]
With this setting, the gitlab-runner pod gets this error:
couldn't execute POST against https://gitlab.my-domain.com/api/v4/runners: Post https://gitlab.my-domain.com/api/v4/runners: dial tcp [my-domain's-IP]: i/o timeout
The issue is the same as this one:
Gitlab Runner can't access Gitlab self-hosted instance
But I have already set up Cloud NAT and a Cloud Router, and I added the Cloud NAT IP address to loadBalancerSourceRanges in GitLab's values.yaml.
To check whether Cloud NAT worked, I exec'd into the pod and checked its IP:
$ kubectl exec -it gitlab-gitlab-runner-xxxxxxxx /bin/sh
wget -qO- httpbin.org/ip
and it showed the Cloud NAT IP address.
So the request must be made with the Cloud NAT IP as its source IP:
https://gitlab.my-domain.com/api/v4/runners
What can I do to solve it ?
It worked when I added the Kubernetes pods' internal IP range to loadBalancerSourceRanges. Both stable/nginx-ingress and https://kubernetes.github.io/ingress-nginx worked.
gitlab-runner calls https://my-domain/api/v4/runners. I thought the request would go over the public network, so I added only the Cloud NAT IP, but apparently it does not.
Still, it's a little bit weird.
The first time, I set 0.0.0.0/0 in loadBalancerSourceRanges and allowed only the Cloud NAT IP in the firewall, and https://my-domain/api/v4/runners worked.
So loadBalancerSourceRanges may be applied in two places: one is the firewall rule we can see on GCP, the other is hidden.
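For reference, the working values.yaml fragment would look roughly like this (a sketch; the pod CIDR shown is an assumption and must match your cluster's actual pod range):
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges:
        - "CLOUD-NAT-IP/32"  # the Cloud NAT address, as before
        - "10.8.0.0/14"      # assumption: the cluster's pod CIDR, so in-cluster calls to the LB are allowed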
I'm new to DevOps. I want to install Jenkins on AWS EC2 with Docker.
I have installed Jenkins with this command:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
In the AWS security group, I have enabled ports 8080 and 50000. I also enabled port 22 for SSH, 27017 for Mongo, and 3000 for Node.
I can see the Jenkins container when I run docker ps. However, when I open https://xxxx.us-east-2.compute.amazonaws.com:8080, the Jenkins setup page does not appear and the browser shows ERR_SSL_PROTOCOL_ERROR.
Does anyone know what's wrong here? Should I install Nginx as well? I haven't installed it yet.
The error is due to the fact that you are using https:
https://xxxx.us-east-2.compute.amazonaws.com:8080
From your description it does not seem that you've set up any kind of SSL/TLS on your instance, so you should connect using http only:
http://xxxx.us-east-2.compute.amazonaws.com:8080
But this is not good practice, as you communicate in plain text. A common solution is to access the Jenkins web UI through an SSH tunnel. This way the connection is encrypted and you don't have to expose any Jenkins ports in your security groups.
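A minimal sketch of such a tunnel (the key file name and the ec2-user login are assumptions, adjust them to your instance):
# forward local port 8080 to Jenkins running on the instance
ssh -i your-key.pem -L 8080:localhost:8080 ec2-user@xxxx.us-east-2.compute.amazonaws.com
# then browse to http://localhost:8080 on your workstation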
In a fresh VMware PKS Kubernetes cluster, the secret created for the private Docker registry works as expected. But the cluster is not pulling images from the public registry "https://registry-1.docker.io/v2/".
I am connected to the corporate network, and http_proxy and https_proxy are set to reach the internet.
docker login and docker pull work, but images are not pulled when deployments are created with kubectl. The public image "dduportal/bats:0.4.0" is failing. The kubectl describe output is copied to a path on GitHub.
I tried to add a secret for the public Docker registry separately, like the private one. Someone pointed out that secrets should be kept separate when pulling images from multiple private registries. In my case it is public, but I still kept it separate.
kubectl create secret docker-registry regcred-public --docker-server=registry-1.docker.io --docker-username=<public-user> --docker-password=<token> --docker-email=<myemail>
kubectl create secret docker-registry regcred-private --docker-server=private-registry --docker-username=<private-user> --docker-password=password --docker-email=<myemail>
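For reference, the deployment references the public image and secret roughly like this (a sketch; the container name bats is just illustrative, the other names come from the commands above):
# pod spec fragment using the secret created above
spec:
  containers:
    - name: bats
      image: dduportal/bats:0.4.0
  imagePullSecrets:
    - name: regcred-public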
What could be the issue?
How can I make my cluster pull images from the public repository when
docker pull from the command line works without any issues?
There is no clue except the message that it failed to pull from the
public registry. Any suggestion from the Kubernetes side would help.
Are there any rules or configuration required on the cluster end?
Failed to pull image "dduportal/bats:0.4.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The problem may lie in an incorrect HTTP proxy setup.
First, create a systemd drop-in directory for the Docker service:
mkdir /etc/systemd/system/docker.service.d
Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Verify that the configuration has been loaded:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Restart Docker:
$ sudo systemctl restart docker
Link to the official Docker documentation for HTTP proxy configuration:
docker-http.
I'm trying to install the Kubernetes dashboard on an AWS Linux image, but I'm getting JSON output in the browser. I have run the dashboard commands and provided the token, but it did not work.
Kubernetes 1.14+
1) Open a terminal on your workstation (standard SSH tunnel to port 8002):
$ ssh -i "aws.pem" -L 8002:localhost:8002 ec2-user@ec2-50-50-50-50.eu-west-1.compute.amazonaws.com
2) When you are connected, type:
$ kubectl proxy -p 8002
3) Open the following link with a web browser to access the dashboard endpoint: http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Try this:
$ kubectl proxy
Open the following link with a web browser to access the dashboard endpoint:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
More info
I had a similar issue with reaching the dashboard following your linked tutorial.
One way of approaching your issue is to change the type of the service to LoadBalancer:
Exposes the service externally using a cloud provider’s load balancer.
NodePort and ClusterIP services, to which the external load balancer
will route, are automatically created.
For that use:
kubectl get services --all-namespaces
kubectl edit service kubernetes-dashboard -n kube-system -o yaml
and change the type to LoadBalancer.
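The relevant part of the edited spec would end up looking roughly like this:
# fragment of the kubernetes-dashboard Service after editing
spec:
  type: LoadBalancer   # changed from ClusterIP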
Wait till the ELB gets spawned (it takes a couple of minutes) and then run
kubectl get services --all-namespaces again; you will see the address of your dashboard service and be able to reach it under the external address (EXTERNAL-IP) column.
As for the tutorial you have posted, it is from 2016, and it turns out something went wrong with the /ui part of the URL; you can read more about it in this GitHub issue. There is a claim that you should use /ui after authentication, but that does not work either.
With the default ClusterIP setting you will be able to reach the dashboard at this address:
‘YOURHOSTNAME’/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Another option is to delete the old dashboard:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install the official one:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and reach it on localhost using:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview
I have a Jenkins instance running inside a Docker container that's listening on port 8181.
Example URL of the Jenkins instance:
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com/
I have a Tomcat Docker instance that's listening on port 8383, running inside the Jenkins Docker container.
I can access the Jenkins instance from my local browser. Is there any way I can access my Docker Tomcat instance from my local browser?
Here is my docker run command:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -p 8181:8080 jenkins-dsl
Please provide your suggestions.
It sounds like your docker run command simply needs to expose the port that your nested Tomcat server is running on.
To do this, you need to pass a -p argument to your command. The -p argument binds a host port to the Docker container's port:
-p <host_port>:<container_port>
You can pass in as many -p arguments as you want to bind multiple ports.
So if the docker tomcat server is running on port 8383 within the Jenkins docker container, then you can do something like this:
-p 8383:8080
Full command example:
docker run -d -it -p 8383:8080 --name tomcatServer docker-tomcat
I would assume this would allow you to access the Tomcat server using the example URL provided, like so:
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com:8383
However, you'd have to ensure your AWS Security Group will allow traffic to port 8383.
EDIT: Updated answer to reflect the resolution we discussed in the comments.
Edited
I was able to launch Tomcat by specifying the port in the URL and opening the port on the EC2 instance.
http://ec2-34-155-164-97.us-west-2.compute.amazonaws.com:8383
The latest Docker installation guide for Tomcat clearly says you will get this error when you launch it for the first time:
You can then go to http://localhost:8888 or http://host-ip:8888 in a browser (noting that it will return a 404 since there are no webapps loaded by default).
It's because you do not have any apps in Tomcat's default webapps folder. The latest Tomcat Docker image has the default apps in the "webapps.dist" folder, and you have to copy them to the "webapps" folder. Run the following commands:
# docker exec -it tomcat-container /bin/bash
# cd webapps.dist
# cp -R * ../webapps
"tomcat-container" is your container name.
Now refresh your browser and you will get it. If not, let me know.