Change JWT token expirationSeconds from IstioOperator resource

We use an IstioOperator resource to generate the manifests via the istioctl manifest generate command. We later apply those manifests to our cluster.
We would like to change the default 43200 seconds value for the third-party JWTs to a different value, but we can't find a way to do that from the IstioOperator manifest itself.
So the question is: how can the expirationSeconds for the Istio deployments and mutating webhook configurations be changed from the default 43200 seconds to a different value, by defining those values in the IstioOperator resource used to deploy Istio?
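For reference, the generate-and-apply workflow looks roughly like this (file names are illustrative):

# Render the manifests from the IstioOperator definition, then apply them to the cluster
istioctl manifest generate -f istio-operator.yaml > istio-manifests.yaml
kubectl apply -f istio-manifests.yaml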

How to automatically add vpc and region parameters into YAML file for deploying ingress controller?

Context:
I have been successful in setting up an EKS cluster using eksctl with the help of AWS's documentation. The subsections that were most useful to me are:
- Clusters > Creating a cluster
- Networking > Pod networking
- Networking > Installing the AWS Load Balancer Controller
- Workloads > Application load balancing
Now I am trying to use the same commands that I learned from those subsections to write a shell script and automate the entire setup process, up until the sample app deployment.
I am stuck where I have to find out the VPC ID and region code created via eksctl and fill them into the v2_4_3_full.yaml file. (This file creates all the components necessary to provision an ingress component under the working namespace in Kubernetes.)
I feel totally blank when I think of ways to do that automatically instead of manually looking up the VPC and region IDs.
Below is the part of the YAML file where that has to be done. Not only do the values have to be filled in automatically, but the corresponding parameters also have to be added in; those parameters will be the last two under args below. I haven't a clue how to achieve that.
spec:
  containers:
  - args:
    - --cluster-name=your-cluster-name
    - --ingress-class=alb
    - --aws-vpc-id=vpc-xxxxxxxx
    - --aws-region=region-code
I am not sure what ways exist, if they do. But I imagine it would be some shell command that I don't know, or some Python package that does that. Nonetheless, any suggestion is greatly appreciated.
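One approach that might work is to query the values with the AWS CLI and splice them into the manifest with sed. This is only a sketch, assuming GNU sed, the AWS CLI, and an illustrative cluster name:

# Look up the region and the VPC that eksctl created for the cluster
CLUSTER_NAME=my-cluster                      # illustrative name
REGION=$(aws configure get region)           # or hard-code the region eksctl used
VPC_ID=$(aws eks describe-cluster --name "${CLUSTER_NAME}" \
  --query "cluster.resourcesVpcConfig.vpcId" --output text)

# Insert the two extra args right after the --ingress-class=alb line,
# reusing that line's indentation (\1) for the new lines
sed -i "s/^\( *\)- --ingress-class=alb\$/&\n\1- --aws-vpc-id=${VPC_ID}\n\1- --aws-region=${REGION}/" v2_4_3_full.yaml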

Is it possible to track the number of Docker pulls in Google Artifact Registry?

I'd like to measure the number of times a Docker image has been downloaded from a Google Artifact Registry repository in my GCP project.
Is this possible?
Interesting question.
I think this would be useful too.
I think there aren't any Cloud Monitoring metrics (no artifactregistry resource type is listed, nor are any metrics listed).
However, you can use Artifact Registry audit logs; you'll need to explicitly enable Data Access logs and then look for, e.g., Docker-GetManifest entries.
NOTE: I'm unsure whether this can be achieved from gcloud.
Monitoring developer tools, I learned that Audit Logs are configured in Project Policies using AuditConfigs. I still don't know whether this functionality is available through gcloud (anyone?), but evidently you can effect these changes directly using API calls, e.g. projects.setIamPolicy:
gcloud projects get-iam-policy ${PROJECT}
auditConfigs:
- auditLogConfigs:
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: artifactregistry.googleapis.com
bindings:
- members:
  - user:me
  role: roles/owner
etag: BwXanQS_YWg=
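If it can be done from gcloud, a sketch of the round trip might look like this (assuming the policy file format accepts the auditConfigs block shown above):

# Dump the current policy, add the auditConfigs entry for artifactregistry.googleapis.com,
# then write the edited policy back to the project
gcloud projects get-iam-policy ${PROJECT} --format=yaml > policy.yaml
# ...edit policy.yaml to add the auditConfigs block...
gcloud projects set-iam-policy ${PROJECT} policy.yaml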
Then, pull something from the repo and query the logs:
PROJECT=[[YOUR-PROJECT]]
REGION=[[YOUR-REGION]]
REPO=[[YOUR-REPO]]
FILTER="
logName=\"projects/${PROJECT}/logs/cloudaudit.googleapis.com%2Fdata_access\"
protoPayload.methodName=\"Docker-GetManifest\"
"
gcloud logging read "${FILTER}" \
--project=${PROJECT} \
--format="value(timestamp,protoPayload.methodName)"
Yields:
2022-03-20T01:57:16.537400441Z Docker-GetManifest
You ought to be able to create a logs-based metric for these too.
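A minimal sketch, reusing the filter from above (the metric name is illustrative):

# Create a logs-based metric that counts Docker-GetManifest entries
gcloud logging metrics create docker-pulls \
  --project=${PROJECT} \
  --description="Docker pulls from Artifact Registry" \
  --log-filter="${FILTER}"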
We do not yet have platform logs for Artifact Registry unfortunately, so using the Cloud Audit Logs (CALs) is the only way to do this today. You can also turn the CALs into log-based metrics and get graphs and metrics that way too.
The recommendation to filter by 'Docker-GetManifest' is also correct: it's the only request type of which a Docker pull always produces exactly one. There will be a lot of other requests that are related but don't match 1:1. The logs will have all requests (Docker-Token, 0 or more layer pulls), including API requests like ListRepositories, which is called by the UI in every AR region when you load the page.
Unfortunately, the theory about public requests not appearing is correct. CALs are about logging authentication events, and when a request has no authentication whatsoever, CALs are not generated.

Elastic Beanstalk: Can't change environment type to load balanced in GUI?

Elastic Beanstalk (created with CodeStar). Java. Spring Boot. EC2.
The documentation states that we can change the environment type within the GUI to load balanced from a single instance:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html
However, when I tried it out, the changes took hold and my environment was load balanced at first, but upon further deployments my changes appear to have been reset and the environment type is now shown to be single instance.
Upon examination of the service package, there is CloudFormation that states the environment type.
template.yml:
EBConfigurationTemplate:
  Description: The AWS Elastic Beanstalk configuration template to be created for this project, which defines configuration settings used to deploy different versions of an application.
  Type: AWS::ElasticBeanstalk::ConfigurationTemplate
  Properties:
    ApplicationName: !Ref 'EBApplication'
    Description: The name of the sample configuration template.
    OptionSettings:
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: SingleInstance
This must have overwritten any GUI changes made by hand upon further deployments.
I presume such changes (anything already in CloudFormation) must then be done through CloudFormation and not by hand in the GUI? (What changes can be done via the GUI then? Only changes related to .ebextensions files?)
Bonus: Has anyone seen a GitHub project that uses Elastic Beanstalk with EC2 and hence must have done this in its CloudFormation?
I presume such changes (anything already in CloudFormation) must then be done through CloudFormation and not by hand in the GUI?
Yes. It is a bad practice to manually modify resources created and managed by CFN, as this can lead to issues. From the docs:
changes made outside of CloudFormation can complicate stack update or deletion operations.
What happens when you do this is so-called drift. This means that CFN is simply not aware of any outside changes made to its resources. So once you start developing your solutions through CFN, every change to them should also be done through CFN.
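For example, to make this particular environment load balanced through CFN, you would change the option setting in the template shown in the question (LoadBalanced is the documented alternative to SingleInstance for this option):

OptionSettings:
  - Namespace: aws:elasticbeanstalk:environment
    OptionName: EnvironmentType
    Value: LoadBalanced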
Sadly I'm not sure what you are after in the bonus question, so I can't comment on it.

Istio-pilot Consul Support

It's been a little unclear to me what the requirements are for Istio-pilot with the Consul adapter. I am trying to set this up and have the istio-pilot discovery service act as a pure Envoy xDS server. However, in one of the examples where Consul is used (from the Istio source), it does install a kube-apiserver (and etcd, for that matter). I would like to use Envoy as the data plane (or the istio-pilot agent, for that matter), but leverage Consul for service discovery and not integrate with Kubernetes. Does istio-pilot require Kubernetes anyway for that use case?
Istio supports several different so-called ServiceDiscovery implementations.
Kubernetes is one of them, which discovers services from Kubernetes Services.
But this is really just one of the possible ways to run Istio Pilot, and you can use other ServiceDiscovery mechanisms like Consul via the command line argument --registries Consul.
See https://archive.istio.io/v1.4/docs/reference/commands/pilot-discovery/ for a detailed description of the command line arguments.
Once you run Pilot with that configuration it should load Services exclusively from Consul. These should be pushed to the data plane under the usual name <service name>.service.consul.
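A minimal invocation might look roughly like this (a sketch; the flag for the Consul server address and the address itself are assumptions, so check the reference linked above for your Istio version):

pilot-discovery discovery \
  --registries Consul \
  --consulserverURL http://consul-server:8500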
UPDATE:
From your comment below it seems that you not only want to avoid loading services from Kubernetes, but want to run completely without it.
While this indeed doesn't seem to be possible with 1.4 (watching Istio resources is always started), it seems to work with 1.5.
To achieve that you need to start Pilot with --disable-install-crds and --configDir <config path>, where <config path> points to a directory containing the YAMLs for the Istio-specific resources that you might still need, like Sidecars, MeshPolicy, EnvoyFilter etc.
If --configDir is not defined, Pilot will still try to get these resources from Kubernetes, so it is essential to add this argument even if the directory is empty.
Finally, you should make sure that the MeshConfig that you pass to Pilot via --meshConfig meshconfig.yaml does not point to a URL of Galley, by commenting this out in case you copied an existing file /etc/istio/config/mesh from a running instance of Pilot:
configSources:
#- address: istio-galley.istio-system.svc:9901
#  tlsSettings:
#    mode: ISTIO_MUTUAL
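Putting the above together, a Pilot invocation that runs without Kubernetes might look roughly like this (a sketch; paths are illustrative):

pilot-discovery discovery \
  --registries Consul \
  --disable-install-crds \
  --configDir ./istio-config \
  --meshConfig ./meshconfig.yaml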

How to avoid downtime when updating a stage variable on an API Gateway deployment?

I have an API example_api currently deployed on stage DEV in AWS API Gateway.
I want to update one of its stage variables and make sure the change is deployed. The API is provisioned by CloudFormation and the stage variables are mapped to template parameters.
I update the stack with boto3 and CloudFormation (using the UsePreviousTemplate flag) and provide the new value.
I then use boto3 to call create_deployment for example_api on DEV (to update the already deployed example_api on DEV).
At this point, my API becomes unavailable for around 15-20 seconds. I keep receiving {"message":"Missing Authentication Token"} responses.
I guess I am doing something wrong here. How do I avoid such downtime and make sure the new API is available ASAP?
Note: my API is accessed through a custom domain name in API Gateway. The base path is mapped to the DEV stage.
Thanks
The problem was that the CloudFormation template had created the stage using the StageDescription property of the Deployment resource, and I did not understand the deployment/stage relationship properly.
The Stage resource DEV was initially bound to the Deployment Named000.
My first update_stack call would update the stage variable but also rebind stage DEV to the initial deployment (Named000), losing any changes applied since (any new routes).
I was able to update stage variables and deploy properly with no downtime by creating a Deployment resource and appending a timestamp to its name, so that a new resource is created every time the stack is generated with Troposphere. Updating the stack with new stage variables then keeps the stage bound to the latest deployment and avoids introducing downtime.
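A sketch of what the generated template might contain (logical names, the timestamp suffix and the variable are illustrative; the actual template is produced by Troposphere, so the suffix changes on every run):

# A Deployment resource with a fresh logical name on every generation
ExampleApiDeployment20220320120000:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleApi

# The stage always points at the most recently generated deployment
DevStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    RestApiId: !Ref ExampleApi
    DeploymentId: !Ref ExampleApiDeployment20220320120000
    StageName: DEV
    Variables:
      exampleStageVariable: new-value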