I know that with the latest Istio version, the metadata-exchange extension is integrated into the Envoy proxy, so that when pods communicate with each other, Envoy will potentially use the "x-envoy-peer-metadata" header to exchange pod metadata and save it in the metadata-exchange extension cache.
So my questions are:
Is there a way to list the existing data in this metadata-exchange extension cache?
I know the Envoy proxy supports DYNAMIC_METADATA in its access log format; is it related to the metadata-exchange extension?
Finally, how can I query the metadata in the extension and log it in the Envoy access log? For example, besides DOWNSTREAM_REMOTE_ADDRESS, I would also like to output the downstream pod name.
Related
In https://istio.io/v1.7/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes, it says:
A given Istio deployment has a fixed vocabulary of attributes that it understands. The specific vocabulary is determined by the set of attribute producers being used in the deployment. The primary attribute producer in Istio is Envoy, although specialized Mixer adapters can also generate attributes.
I'd like to know how Istio gets these data (the attribute vocabulary) from Envoy (or from Mixer adapters), and how Envoy exports these data in detail.
This is because I want to develop a WASM plugin for logging, and I need to define custom log data equivalent to the Telemetry V1 logging in Istio...
I'd like to know how Istio gets these data (the attribute vocabulary) from Envoy (or from Mixer adapters), and how Envoy exports these data in detail.
According to banzaicloud
Because Istio Telemetry V2 lacks a central component (Mixer) with access to K8s metadata, the proxies themselves require the metadata necessary to provide rich metrics. Additionally, features provided by Mixer had to be added to the Envoy proxies to replace the Mixer-based telemetry. Istio Telemetry V2 uses two custom Envoy plugins to achieve just that.
In-proxy service-level metrics in Telemetry V2 are provided by two custom plugins, metadata-exchange and stats.
By default, in Istio 1.5, Telemetry V2 is enabled as compiled in Istio proxy filters, mainly for performance reasons. The same filters are also compiled to WebAssembly (WASM) modules and shipped with Istio proxy.
You can find more useful information in the documentation linked above.
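If you want to confirm that these compiled-in filters are actually active on one of your sidecars, a quick check along the following lines should work. It is only a sketch: the pod name is a placeholder, and the filter names istio.metadata_exchange and istio.stats are what recent Istio releases use, so verify them against your own version.

# Look for the metadata-exchange and stats filters in a sidecar's listener config.
istioctl proxy-config listeners <pod-name>.<namespace> -o json \
    | grep -E 'istio.metadata_exchange|istio.stats'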
Because I want to develop a WASM plugin for logging and I need to define custom log data equivalent to the Telemetry V1 logging in Istio
I've found this documentation on the Envoy site; it lists all the attributes you might use.
There is an example there of how to read these values through the Wasm ABI.
Path expressions allow access to inner fields in structured attributes via a sequence of field names, map, and list indexes following an attribute name. For example, get_property({"node", "id"}) in Wasm ABI extracts the value of id field in node message attribute, while get_property({"request", "headers", "my-header"}) refers to the comma-concatenated value of a particular request header.
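On the access-log part of the original question, here is a minimal sketch of one way to surface the exchanged peer metadata next to DOWNSTREAM_REMOTE_ADDRESS. It assumes the metadata-exchange filter stores the peer info in Envoy filter state under the key wasm.downstream_peer; that key name has varied between Istio releases, so treat it as an assumption and check your proxy's config_dump first.

# Hedged sketch: enable proxy access logs and extend the mesh-wide log format
# with the peer metadata kept in filter state by the metadata-exchange filter.
# In a real setup you would normally put this into meshConfig YAML instead,
# so the format string can end with a proper newline.
istioctl install \
    --set meshConfig.accessLogFile=/dev/stdout \
    --set meshConfig.accessLogFormat='%DOWNSTREAM_REMOTE_ADDRESS% %FILTER_STATE(wasm.downstream_peer:PLAIN)%'

The DYNAMIC_METADATA operator mentioned in the question is a separate access log operator that reads Envoy's dynamic metadata; in the Istio versions I have looked at, the metadata-exchange filter keeps its peer data in filter state rather than dynamic metadata, which is why FILTER_STATE is used above.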
I tried uploading a custom JAR as a CDAP plugin and it has a few errors in it. I want to delete that particular plugin and upload a new one. What is the process for that? I tried looking for documentation, but it was not very informative.
Thanks in advance!
You can click on the hamburger menu, then click on Control Center at the bottom of the left panel. In the Control Center, click on Filter by and select the checkbox for Artifacts. After that, you should see the artifact listed in the Control Center, and you can then delete it.
Alternatively, we suggest that while developing, the version of the artifact should be suffixed with -SNAPSHOT (e.g. 1.0.0-SNAPSHOT). Any -SNAPSHOT version can be overwritten simply by re-uploading. This way, you don't have to delete first before deploying a patched plugin JAR.
Each Data Fusion instance actually runs in a GCP tenant project inside a fully isolated area, keeping all orchestration actions, pipeline lifecycle management tasks, and coordination as part of GCP-managed scenarios. You can therefore perform user-defined actions either from the dedicated Data Fusion UI or against the execution environment via CDAP REST API HTTP calls.
The purpose of the Data Fusion UI is to create a visual design for data pipelines and control ETL data processing through the different phases of execution; you can achieve the same by calling the corresponding CDAP API directly.
Looking into the original CDAP documentation, you can find the Artifact HTTP RESTful API, which offers a set of HTTP methods you can use to manage custom plugin operations.
Referencing the GCP documentation, there are a few simple steps to prepare the environment, supplying the CDAP_ENDPOINT variable for the target Data Fusion instance so that you can invoke API functions via HTTP calls against the CDAP endpoint, i.e.:
export INSTANCE_ID=your-instance-id
export CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe \
    --location=us-central1 \
    --format="value(apiEndpoint)" \
    ${INSTANCE_ID})
# The curl call below also needs an OAuth access token:
export AUTH_TOKEN=$(gcloud auth print-access-token)
When you are done with the above steps, you can issue the HTTP call that performs the specific action you need. For plugin deletion, invoke the HTTP DELETE method:
curl -X DELETE -H "Authorization: Bearer ${AUTH_TOKEN}" "${CDAP_ENDPOINT}/v3/namespaces/system/artifacts/<artifact-name>/versions/<artifact-version>"
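If you are not sure about the exact artifact name and version (or whether the plugin lives in the system or the default namespace), you can list the deployed artifacts first. A hedged sketch, reusing the CDAP_ENDPOINT and AUTH_TOKEN variables from above:

# List artifacts to confirm the name/version before deleting. User-uploaded
# plugins typically live in the "default" namespace rather than "system".
curl -H "Authorization: Bearer ${AUTH_TOKEN}" "${CDAP_ENDPOINT}/v3/namespaces/default/artifacts"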
I have a database on a Google Cloud SQL instance. I want to connect the database to pgBadger, which is used to analyse queries. I have tried various methods, but they all ask for the log file location.
I believe there are two major limitations preventing an easy setup that would allow you to use pgBadger with logs generated by a Cloud SQL instance.
The first is the fact that Cloud SQL logs are processed by Stackdriver and can only be accessed through it. It is actually possible to export logs from Stackdriver; however, the resulting format and destination will still not meet the requirements for using pgBadger, which leads to the second major limitation.
Cloud SQL does not allow changes to all of the required configuration directives. The major one is log_line_prefix, which currently does not follow the required format, and it is not possible to change it. You can see which flags are supported in Cloud SQL in the Supported flags documentation.
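For reference, on a self-managed PostgreSQL the pgBadger documentation recommends a prefix roughly along these lines, which is exactly the kind of setting Cloud SQL does not let you change:

# Typical pgBadger-friendly postgresql.conf setting (not available on Cloud SQL):
#   log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '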
In order to use pgBadger you would need to reformat the log entries, while exporting them to a location where pgBadger could do its job. Stackdriver can stream the logs through Pub/Sub, so you could develop an app to process and store them in the format you need.
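As a starting point, here is a hedged sketch of how the export side could be wired up: create a Pub/Sub topic and a Stackdriver log sink that routes the PostgreSQL entries to it. The project name is a placeholder and the log filter is an assumption, so adjust it to match the entries you actually see in the Logs Viewer.

# Create a topic and route Cloud SQL PostgreSQL log entries to it.
gcloud pubsub topics create cloudsql-pg-logs
gcloud logging sinks create cloudsql-pg-sink \
    pubsub.googleapis.com/projects/YOUR_PROJECT/topics/cloudsql-pg-logs \
    --log-filter='resource.type="cloudsql_database" AND logName:"postgres.log"'
# A consumer subscribed to the topic would then rewrite each entry into a
# pgBadger-friendly line and append it to a file pgBadger can read.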
I hope this helps.
I am looking into migrating my parse.com app to parse server with either AWS or Heroku.
The primary frustration I encountered with Parse in the past has been the resource limits
https://parse.com/docs/cloudcode/guide#cloud-code-resource-limits
Am I correct in assuming that following a migration the resource limits will be dependent on the new host (i.e. AWS or Heroku)?
Yes. Parse Server is simply a Node.js module, which means that wherever you choose to host your Node.js app determines which resource limits will be imposed. You might also be able to set them yourself.
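For reference, a minimal sketch of what "just a Node.js module" means in practice, roughly following the parse-server README (the app ID, master key, and database URI below are placeholders):

# Install the module globally and start a local Parse Server against MongoDB.
npm install -g parse-server
parse-server --appId YOUR_APP_ID --masterKey YOUR_MASTER_KEY \
    --databaseURI mongodb://localhost:27017/parse

Once it runs under your own account, the only hard limits are whatever the machine or dyno you run it on imposes.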
I recently moved it to AWS, so yes, as stated in a comment, it's just a Node.js module, so you have complete control over it. The main constraints here will be the CPU, I/O, and network of AWS. I would suggest reading the documentation provided at https://github.com/ParsePlatform/parse-server, where they have also mentioned which EC2 instances you should use so that you can scale Node and Mongo properly.
We have a "mavenized" project with several containers (wso2esb, wso2dss, tomcat) and many components to deploy to them.
We are trying to find a way to deploy the datasource configuration for all our DSS services, but I noticed it is stored in its own DB (H2).
Do you know if there is any way to declare something like an XML file in order to create the datasource in the DSS in an automated way?
I looked at the documentation but did not find anything useful for automated deployment (meaning without using the admin pages).
Yeah, you can use the Carbon data source configuration file, datasources.properties, to provide this information. This file should be located at $SERVER_ROOT/repository/conf.
A sample for this configuration file can be found in BPS sources.
After the data sources are defined this way, you can refer to them from your data services using the "carbon data source" data source type.
You can easily deploy artifacts with the hot deployment functionality in WSO2 Servers by simply copying them to a specific directory in the server.
For the Data Services Server you can copy the .dbs files (in your case with the help of Maven) to the $WSO2DSS_HOME/repository/deployment/server/dataservices directory. Similarly, for BPELs it's $WSO2BPS_HOME/repository/deployment/server/bpel.
For CAR files created with Carbon Studio, it's $WSO2CARBON_HOME/repository/deployment/server/carbonapps.
For ESB configs, it's $WSO2ESB_HOME/repository/deployment/server/synapse-configs.
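As an illustration of the hot-deployment approach, a minimal sketch; the artifact name and paths are placeholders, and in a mavenized build this copy step would typically be wired into the build itself (for example with the maven-antrun or maven-resources plugin):

# Copy a built data service artifact into the DSS hot-deployment directory;
# the running server picks it up automatically.
cp target/MyDataService.dbs ${WSO2DSS_HOME}/repository/deployment/server/dataservices/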