I've applied some updates to my Istio Policy and DestinationRule resources (basically Istio configuration) around my services, which are exposed by the Ingress Gateway. Somehow the configuration is not being reflected.
After running the istioctl proxy-status command, I see that many of the Istio proxies are stale.
PROXY CDS LDS EDS RDS PILOT VERSION
data-devops-istio-ingressgateway-66bddd766d-75gjz.data-devops SYNCED SYNCED SYNCED (100%) STALE istio-pilot-6cd95f9cc4-8pcw4 1.0.0
data-devops-istio-ingressgateway-66bddd766d-fjkcr.data-devops SYNCED SYNCED SYNCED (100%) STALE istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-egressgateway-868bb74854-lzhqt.istio-system SYNCED SYNCED SYNCED (100%) NOT SENT istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-egressgateway-868bb74854-p25p2.istio-system SYNCED SYNCED SYNCED (100%) NOT SENT istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-ingressgateway-f86f68645-887dd.istio-system SYNCED STALE SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-ingressgateway-f86f68645-bnrvt.istio-system SYNCED STALE SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-ingressgateway-f86f68645-g6s9g.istio-system SYNCED STALE SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-ingressgateway-f86f68645-gf4nq.istio-system SYNCED STALE SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
istio-ingressgateway-f86f68645-xzfth.istio-system SYNCED STALE SYNCED (100%) STALE istio-pilot-6cd95f9cc4-8pcw4 1.0.0
java-maven-app-canary-5b9f57b475-r9lfc.data-devops SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
java-maven-app-stable-56b9c47c9-nhmhd.data-devops SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-6cd95f9cc4-8pcw4 1.0.0
Is there any way to solve the stale issue? Also, what do CDS, LDS, EDS, and RDS mean?
CDS - Cluster Discovery Service
LDS - Listener Discovery Service
EDS - Endpoint Discovery Service
RDS - Route Discovery Service
They are all explained well in the Envoy documentation.
Coming back to your problem, I would look at both the Pilot and Istio proxy logs to see what's happening. It's also worth increasing the Pilot replicas, if you haven't already, to see if there is any improvement. I have 2 or 3 Pilot pods running depending on the CPU utilisation. Hope this helps. A rough sketch of the commands is below.
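For example, something along these lines should surface xDS push errors and let you scale Pilot. The label selector, container names and deployment name below match a default Istio 1.0 install and may differ in your setup; the gateway pod name is taken from your proxy-status output:

# Pilot (discovery) logs, to look for xDS push errors
kubectl -n istio-system logs -l istio=pilot -c discovery --tail=200

# Proxy logs on one of the stale gateways
kubectl -n data-devops logs data-devops-istio-ingressgateway-66bddd766d-75gjz -c istio-proxy --tail=200

# Add Pilot replicas to see if the proxies catch up
kubectl -n istio-system scale deployment istio-pilot --replicas=3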
I tried to use AWS Migration Hub to practice a migration from Azure to AWS.
I used the AWS Application Migration Service (MGN) for the practice.
The migration was unsuccessful and I aborted it before discovery was completed and before replication was initiated.
I've cleaned up all the source servers, but I'm unable to delete or remove the replicated servers from Migration Hub.
They are showing up under the "Data collectors" tab. I've attached screenshots for reference.
Any help here would be great.
For a few days now, some tasks have been throwing an error at the start of every DAG run. It seems the log file cannot be found when retrieving the logs from the task.
*** 404 GET https://storage.googleapis.com/download/storage/v1/b/europe-west1-ventis-brand-f-65ab79d1-bucket/o/logs%2Fimport_ebay_snapshot_feeds_ES%2Fstart%2F2021-11-30T08%3A00%3A00%2B00%3A00%2F3.log?alt=media: No such object: europe-west1-ventis-brand-f-65ab79d1-bucket/logs/import_ebay_snapshot_feeds_ES/start/2021-11-30T08:00:00+00:00/3.log: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
I've upgraded to the latest version of Cloud Composer and the tasks run on Python 3.
This is the environment configuration:
Resources:
Workloads configuration:
Scheduler:
1 vCPU, 2 GB memory, 2 GB storage
Number of schedulers: 2
Web server:
1 vCPU, 2 GB memory, 2 GB storage
Worker:
1 vCPU, 2 GB memory, 1 GB storage
Number of workers: autoscaling between 1 and 4 workers
Core infrastructure:
Environment size: Small
GKE cluster:
projects/*********************************
There are no related issues regarding this error in the Cloud Composer changelog.
How can this be fixed?
It's a bug in the Cloud Composer environment and has already been reported. You can track this conversation: Re: Log not visible via Airflow web log, and other similar forums. To fix the issue it's recommended to update your Composer environment or use a stable version.
There are some suggested workarounds (you can try them in this order; they are independent of each other; a rough sketch of the commands follows the list):
Remove the logs from the /logs folder in the Composer GCS bucket and archive them somewhere else (outside of the /logs folder).
or
Manually update the web server configuration to read logs directly from a new bucket in your project. You would first need to grant viewer roles (like roles/storage.legacyBucketReader and roles/storage.legacyObjectReader) on the bucket to the service account running the web server.
Then edit /home/airflow/gcs/airflow.cfg and set remote_base_log_folder = <newbucket>, with the proper permissions granted as described above.
or
If you don't have DRS (Domain Restricted Sharing) enabled, which I believe you don't, you can create a new Composer environment, this time through the v1 Composer API or without Beta features enabled in the Cloud Console. This way Composer will create the environment without the DRS-compliant setup, i.e. without the bucket-to-bucket synchronization.
The problem is that you would need to migrate your DAGs and data to the new environment.
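As a rough sketch of the first two workarounds: the archive bucket, new log bucket and web server service account below are placeholders, and the Composer bucket name is the one from your error message:

# Workaround 1: archive the existing task logs outside the /logs prefix, then remove them
gsutil -m cp -r gs://europe-west1-ventis-brand-f-65ab79d1-bucket/logs gs://my-archive-bucket/composer-logs-backup
gsutil -m rm -r gs://europe-west1-ventis-brand-f-65ab79d1-bucket/logs

# Workaround 2: grant the web server's service account read access to the new log bucket
gsutil iam ch serviceAccount:WEB_SERVER_SA@PROJECT.iam.gserviceaccount.com:roles/storage.legacyBucketReader gs://my-new-log-bucket
gsutil iam ch serviceAccount:WEB_SERVER_SA@PROJECT.iam.gserviceaccount.com:roles/storage.legacyObjectReader gs://my-new-log-bucket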
I'm installing Istio following the docs at "https://istio.io/docs/setup/platform-setup/openshift/".
How can we configure webhook admission in an OpenShift 4.1 cluster on AWS? The file (/etc/origin/master/master-config.yaml) should be present on the master instance, but it's not.
OpenShift 4.1 is based on Kubernetes 1.13, and MutatingAdmissionWebhook is enabled by default in Kubernetes 1.13. See the "Updating the Master Configuration" section in those docs, where it is mentioned that master configuration updates are not necessary if you are running OpenShift Container Platform 4.1.
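If you want to double-check on your cluster, something like this should confirm the admission webhook API is available (kubectl is used here; oc behaves the same way):

# Confirm the admission registration API is served by the cluster
kubectl api-versions | grep admissionregistration.k8s.io
# List mutating webhook configurations (Istio's sidecar injector shows up here once installed)
kubectl get mutatingwebhookconfigurations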
After installing SAP HANA Vora 1.2 on MapR 5.1, I got the error message below and vora-catalog seems to be down.
Can anybody help solve this problem?
2016-10-19 21:29:48,341 ERROR com.mapr.warden.service.baseservice.Service$ServiceMonitorRun run [vora-catalog_monitor]: Monitor command: [/opt/mapr/vora/warden-control.sh, catalog, check]can not determine if service: vora-catalog is running. Retrying. Retrial #1. Total retries count is: 3
The following steps solve this issue:
Shut down all Vora services on all nodes.
Ensure that no Vora processes are still running at the OS level.
Start the Vora discovery service first on all nodes.
Check that the discovery service is up and running; its UI is available on port 8500 (a command-line check is sketched below).
Start the vora-dlog service on all nodes where it is installed.
Check via the discovery service UI that the service is available.
Start the vora-catalog service.
Check via the discovery service UI that the service is available.
Start the other Vora services.
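Assuming the Vora discovery service is the bundled Consul agent listening on its default port 8500, and that the catalog registers itself under the name used below (both assumptions; adjust to what your landscape actually registers), you can query the standard Consul HTTP API instead of the UI:

# List all services registered with the discovery service
curl http://localhost:8500/v1/catalog/services
# Show health-check status for the catalog service
curl http://localhost:8500/v1/health/service/vora-catalog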
I deployed a Governance Registry, a master Data Services Server and a slave Data Services Server according to this tutorial (Strategy B with JDBC):
http://wso2.org/library/tutorials/2010/04/sharing-registry-space-across-multiple-product-instances#CO_JDBC_Strategy_B
Now, how can I add my data services (.dbs files) to Data Services Servers from Governance Registry?
Now that you have the master and slave nodes, the initial data services have to be put into the master node at the standard data services deployment directory, which is $SERVER_ROOT/repository/deployment/server/dataservices/ (a sketch is shown below). Once all the data services are there, you can use the new deployment synchronizer tool that is shipped with DSS 2.6.0 (or any Carbon 3.2.0 based product). The deployment synchronizer can be used to conveniently sync the deployment artifacts between the registry and the file system.
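For example, copying a .dbs file into that directory on the master node looks roughly like this; the file name is a placeholder and $SERVER_ROOT should point at your DSS installation:

# Hypothetical data service artifact; hot deployment picks it up from this directory
cp MyDataService.dbs $SERVER_ROOT/repository/deployment/server/dataservices/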
In the master node, simply go to the deployment synchronizer tool in the main menu and check in the data. After you do that, from the slave nodes you can simply check out the deployment artifacts, which will copy the data services to the file system, and they will be deployed. For more information, read the section under "Deployment Synchronizer" here.