WSO2 API Manager analytics dashboard shows API names but no usage - wso2

I quickly configured API Manager Analytics. It shows the API names, but API usage does not show anything: there is "no data available" even after invoking the API many times through the try-out in the Developer Portal.
There are no errors in the logs or the console. The subscription service returns the right values: the APIs I subscribed to in the portal show up in the analytics dashboard, but the API usage service returns no values. I think I am doing something wrong that keeps it from showing. I simply invoke the REST API I designed, but API usage shows nothing.
Could you please guide me on what the problem is?
API Manager deployment.toml
[database.shared_db]
type = "oracle"
url = "jdbc:oracle:thin:@172.24.64.116:1521/orcl"
username = "shared_db"
password = "shared_db"
driver = "oracle.jdbc.driver.OracleDriver"
validationQuery = "SELECT 1 FROM DUAL"
[database.apim_db]
type = "oracle"
url = "jdbc:oracle:thin:@172.24.64.116:1521/orcl"
username = "apim_db"
password = "apim_db"
driver = "oracle.jdbc.driver.OracleDriver"
validationQuery = "SELECT 1 FROM DUAL"
Dashboard deployment.yaml
#Data source for APIM Analytics
- name: APIM_ANALYTICS_DB
  description: Datasource used for APIM Analytics
  jndiConfig:
    name: jdbc/APIM_ANALYTICS_DB
  definition:
    type: RDBMS
    configuration:
      jdbcUrl: 'jdbc:oracle:thin:@172.24.64.116:1521:orcl'
      username: apim_analytics_db
      password: apim_analytics_db
      driverClassName: oracle.jdbc.driver.OracleDriver
      maxPoolSize: 50
      idleTimeout: 60000
      connectionTestQuery: SELECT 1 FROM DUAL
      validationTimeout: 30000
      isAutoCommit: false
      connectionInitSql: alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
#Main datasource used in API Manager
- name: AM_DB
  description: Main datasource used by API Manager
  jndiConfig:
    name: jdbc/AM_DB
  definition:
    type: RDBMS
    configuration:
      jdbcUrl: 'jdbc:oracle:thin:@172.24.64.116:1521:orcl'
      username: apim_db
      password: apim_db
      driverClassName: oracle.jdbc.driver.OracleDriver
      maxPoolSize: 10
      idleTimeout: 60000
      connectionTestQuery: SELECT 1 FROM DUAL
      validationTimeout: 30000
      isAutoCommit: false
Worker deployment.yaml
- name: APIM_ANALYTICS_DB
  description: "The datasource used for APIM statistics aggregated data."
  jndiConfig:
    name: jdbc/APIM_ANALYTICS_DB
  definition:
    type: RDBMS
    configuration:
      jdbcUrl: 'jdbc:oracle:thin:@172.24.64.116:1521:orcl'
      username: 'apim_analytics_db'
      password: 'apim_analytics_db'
      driverClassName: oracle.jdbc.driver.OracleDriver
      maxPoolSize: 50
      idleTimeout: 60000
      connectionTestQuery: SELECT 1 FROM DUAL
      validationTimeout: 30000
      isAutoCommit: false

If you are using Oracle schemas to configure Analytics in your environment, add the following datasource configuration under ANALYTICS_DB in the Dashboard's <analytics>/conf/dashboard/deployment.yaml:
connectionInitSql: alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
A complete configuration will look like the following:
- name: APIM_ANALYTICS_DB
  description: Datasource used for APIM Analytics
  jndiConfig:
    name: jdbc/APIM_ANALYTICS_DB
  definition:
    type: RDBMS
    configuration:
      ...
      connectionInitSql: alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
Apply the above configuration, restart the Dashboard nodes, and verify the behavior.
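For reference, the analytics toggle on the API Manager side lives in the same deployment.toml shown in the question; a minimal sketch of that block, assuming an APIM 3.x setup with a separate Analytics worker (the asker's actual block is not shown, so treat this as an assumption):

[apim.analytics]
enable = true
# The worker's receiver/auth URLs are configured alongside this block;
# see the WSO2 APIM Analytics installation docs for the exact keys.

If this block is missing or enable is false, the Gateway never publishes usage events to the worker, which would also leave the usage graphs empty.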


Custom Check for GCP Cloud SQL Database Flags

I have been working with tfsec for about a week, so I am still figuring things out. So far the product is pretty awesome. That being said, I'm having a bit of trouble getting this custom check for Google Cloud SQL to work as expected. The goal of the check is to ensure the database flag for remote access is set to "off". The TF code below should pass the custom check, but it does not. Instead I get an error (see below).
I figured maybe I am not using subMatch/predicateMatchSpec correctly, but no matter what I do the check keeps failing. There is a similar check that is included as a standard check for GCP. I ran the custom check through a YAML checker and it came back okay, so I can rule out any YAML-specific syntax errors.
TF Code (Pass example)
resource "random_id" "db_name_suffix" {
byte_length = 4
}
resource "google_sql_database_instance" "instance" {
provider = google-beta
name = "private-instance-${random_id.db_name_suffix.hex}"
region = "us-central1"
database_version = "SQLSERVER_2019_STANDARD"
root_password = "#######"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.private_network.id
require_ssl = true
}
backup_configuration {
enabled = true
}
password_validation_policy {
min_length = 6
reuse_interval = 2
complexity = "COMPLEXITY_DEFAULT"
disallow_username_substring = true
password_change_interval = "30s"
enable_password_policy = true
}
database_flags {
name = "contained database authentication"
value = "off"
}
database_flags {
name = "cross db ownership chaining"
value = "off"
}
database_flags {
name = "remote access"
value = "off"
}
}
}
Tfsec Custom Check:
---
checks:
- code: SQL-01 Ensure Remote Access is disabled
  description: Ensure Remote Access is disabled
  impact: Prevents locally stored procedures from being run remotely
  resolution: configure remote access = off
  requiredTypes:
  - resource
  requiredLabels:
  - google_sql_database_instance
  severity: HIGH
  matchSpec:
    name: settings
    action: isPresent
    subMatchOne:
    - name: database_flags
      action: isPresent
      predicateMatchSpec:
      - name: name
        action: equals
        value: remote access
      - name: value
        action: equals
        value: off
  errorMessage: DB remote access has not been disabled
  relatedLinks:
  - http://testcontrols.com/gcp
Error Message
Error: invalid option: failed to load custom checks from ./custom_checks: Check did not pass the expected schema. yaml: unmarshal errors:
line 15: cannot unmarshal !!map into []custom.MatchSpec
I was finally able to get this working last night. This worked for me:
---
checks:
- code: SQL-01 Ensure Remote Access is disabled
  description: Ensure Remote Access is disabled
  impact: Prevents locally stored procedures from being run remotely
  resolution: configure remote access = off
  requiredTypes:
  - resource
  requiredLabels:
  - google_sql_database_instance
  severity: HIGH
  matchSpec:
    name: settings
    action: isPresent
    predicateMatchSpec:
    - name: database_flags
      action: isPresent
      subMatch:
        name: name
        action: equals
        value: remote access
    - action: and
      subMatch:
        name: value
        action: equals
        value: off
  errorMessage: DB remote access has not been disabled
  relatedLinks:
  - http://testcontrols.com/gcp
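To sanity-check that the rule fires, a hypothetical failing counterpart of the passing example from the question (trimmed to the relevant blocks; the resource name is made up) would look like this:

# Hypothetical resource that the SQL-01 check above should flag
resource "google_sql_database_instance" "bad_instance" {
  name             = "public-instance"
  region           = "us-central1"
  database_version = "SQLSERVER_2019_STANDARD"

  settings {
    tier = "db-f1-micro"

    database_flags {
      name  = "remote access"
      value = "on" # not "off", so tfsec should report: DB remote access has not been disabled
    }
  }
}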

Google Workflows: insert a BigQuery job that queries a federated Google Drive table

I am working on an ELT pipeline using Workflows. So far so good. However, one of my tables is based on a Google Sheet, and that job fails with "Access Denied: BigQuery BigQuery: Permission denied while getting Drive credentials."
I know I need to add the https://www.googleapis.com/auth/drive scope to the request, and the service account used by the workflow needs access to the sheet. The access is correct, and if I do an authenticated insert using curl it works fine.
My logic is that I should add the Drive scope; however, I do not know where/how to add it. Am I missing something?
The step in the Workflow:
call: googleapis.bigquery.v2.jobs.insert
args:
  projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
  body:
    configuration:
      query:
        query: select * from `*****.domains_sheet_view`
        destinationTable:
          projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
          datasetId: ***
          tableId: domains
        create_disposition: CREATE_IF_NEEDED
        write_disposition: WRITE_TRUNCATE
        allowLargeResults: true
        useLegacySql: false
AFAIK, for connectors you cannot customize the scope parameter, but you can if you put together the HTTP call yourself:
add the service account as a viewer on the Google Sheet, then run the workflow.
Here is my program:
#workflow entrypoint
main:
  steps:
  - initialize:
      assign:
      - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
  - makeBQJob:
      call: BQJobsInsertJobWithSheets
      args:
        project: ${project}
        configuration:
          query:
            query: SELECT * FROM `ndc.autoritati_publice` LIMIT 10
            destinationTable:
              projectId: ${project}
              datasetId: ndc
              tableId: autoritati_destination
            create_disposition: CREATE_IF_NEEDED
            write_disposition: WRITE_TRUNCATE
            allowLargeResults: true
            useLegacySql: false
      result: res
  - final:
      return: ${res}

#subworkflow definitions
BQJobsInsertJobWithSheets:
  params: [project, configuration]
  steps:
  - runJob:
      try:
        call: http.post
        args:
          url: ${"https://bigquery.googleapis.com/bigquery/v2/projects/"+project+"/jobs"}
          headers:
            Content-type: "application/json"
          auth:
            type: OAuth2
            scope: ["https://www.googleapis.com/auth/drive","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/bigquery"]
          body:
            configuration: ${configuration}
        result: queryResult
      except:
        as: e
        steps:
        - UnhandledException:
            raise: ${e}
      next: queryCompleted
  - pageNotFound:
      return: "Page not found."
  - authError:
      return: "Authentication error."
  - queryCompleted:
      return: ${queryResult.body}
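As a usage sketch for the two steps above (the workflow and service account names below are placeholders, not from the question): grant the workflow's service account viewer access on the sheet, deploy the workflow with that service account, then run it with the standard gcloud tooling.

# hypothetical names; the service account must be a viewer on the Google Sheet
gcloud workflows deploy bq-sheets-elt \
  --source=workflow.yaml \
  --service-account=my-workflow-sa@my-project.iam.gserviceaccount.com

# start an execution and wait for its result
gcloud workflows run bq-sheets-elt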

DM: create a BigQuery view, then authorize it on a dataset

Using Google Deployment Manager, has anybody found a way to first create a view in BigQuery, then authorize one or more datasets used by the view, which are sometimes in different projects and were not created/managed by Deployment Manager? Creating a dataset with a view wasn't too challenging. Here is the jinja template, named inventoryServices_bigquery_territory_views.jinja:
resources:
- name: territory-{{properties["OU"]}}
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: territory_{{properties["OU"]}}
- name: files
  type: gcp-types/bigquery-v2:tables
  properties:
    datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
    tableReference:
      tableId: files
    view:
      query: >
        SELECT DATE(DAY) DAY, ou, email, name, mimeType
        FROM `{{properties["files_table_id"]}}`
        WHERE LOWER(SPLIT(ou, "/")[SAFE_OFFSET(1)]) = "{{properties["OU"]}}"
      useLegacySql: false
The deployment configuration references the above template like this:
imports:
- path: inventoryServices_bigquery_territory_views.jinja

resources:
- name: inventoryServices_bigquery_territory_views
  type: inventoryServices_bigquery_territory_views.jinja
In the example above, files_table_id is the project.dataset.table that needs the newly created view authorized.
I have seen some examples of managing IAM at the project/folder/org level, but my need is at the dataset level, not the project level. Looking at the resource representation of a dataset, it seems like I can update access.view with the newly created view, but I am a bit lost on how I would do that without removing existing access entries, and for datasets in projects other than the one the new view is created in. Any help appreciated.
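For reference, in the datasets resource representation the access property is a list, and an entry that authorizes a view carries no role, just the view reference; a rough sketch with placeholder values (the surrounding role entries stand in for whatever grants already exist on the dataset):

access:
- role: READER
  specialGroup: projectReaders
- view:
    projectId: my-project
    datasetId: territory_example
    tableId: files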
Edit:
I tried adding the dataset which needs the view authorized like so, then deployed in preview mode just to see how it interprets the config:
-name: files-source
  type: gcp-types/bigquery-v2:datasets
  properties:
    datasetReference:
      datasetId: {{properties["files_table_id"]}}
    access:
      view:
        projectId: {{env['project']}}
        datasetId: $(ref.territory-{{properties["OU"]}}.datasetReference.datasetId)
        tableId: $(ref.territory_files.tableReference.tableId)
But when I deploy in preview mode it throws this error:
errors:
- code: MANIFEST_EXPANSION_USER_ERROR
  location: /deployments/inventoryservices-bigquery-territory-views-us/manifests/manifest-1582283242420
  message: |-
    Manifest expansion encountered the following errors: mapping values are not allowed here
      in "<unicode string>", line 26, column 7:
            type: gcp-types/bigquery-v2:datasets
           ^ Resource: config
This is strange to me; it's hard to make much sense of that error, since the line/column it points to is formatted exactly the same as the other dataset in the config. Maybe it doesn't like that the files-source dataset already exists and was created outside of Deployment Manager.

Error when trying to create a service account key in Deployment Manager

The error is below:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1544517871651-57cbb1716c8b8-4fa66ff2-9980028f]: errors:
- code: MISSING_REQUIRED_FIELD
  location: /deployments/infrastructure/resources/projects/resources-practice/serviceAccounts/storage-buckets-backend/keys/json->$.properties->$.parent
  message: |-
    Missing required field 'parent' with schema:
    {
      "type" : "string"
    }
Below is my jinja template content:
resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    name: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}/keys/json
    privateKeyType: enum(TYPE_GOOGLE_CREDENTIALS_FILE)
    keyAlgorithm: enum(KEY_ALG_RSA_2048)
P.S.
My reference for the properties is based on https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts.keys
I will post the response of @John as the answer for the benefit of the community.
The parent field was missing; it needs to reference an existing service account:
projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}
where the ACCOUNT value can be the email or the uniqueID of the service account.
Regarding the template, please remove the enum() wrapping the privateKeyType and keyAlgorithm.
The above deployment creates service account credentials for an existing service account. To retrieve the downloadable JSON key file, it can be exposed via outputs using the publicKeyData property and then base64-decoded.
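Putting the answer together, a corrected version of the template would look roughly like this (a sketch only; projectID and the serviceAccount reference are the placeholders from the original template):

resources:
- name: {{ name }}-keys
  type: iam.v1.serviceAccounts.key
  properties:
    # 'parent' must point at the existing service account the key is created for
    parent: projects/{{ properties["projectID"] }}/serviceAccounts/{{ serviceAccount["name"] }}
    # plain enum values, without the enum(...) wrapping
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    keyAlgorithm: KEY_ALG_RSA_2048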

Composite index required does not exist, yet defined in index.yaml

I have some IoT devices that are sending some data into a Google Cloud Datastore.
The Datastore is set up as Cloud Firestore in Datastore mode.
Each row has the following fields:
Name/ID
current_temperature
data
device_id
event
gc_pub_sub_id
published_at
target_temperature
And these are all under the ParticleEvent kind.
I wish to run the following query: select current_temperature, target_temperature from ParticleEvent where device_id = 'abc123' order by published_at desc
I get the below error when I try to run that query:
GQL query error: Your Datastore does not have the composite index (developer-supplied) required for this query.
So I set up an index.yaml file with the following contents:
indexes:
- kind: ParticleEvent
  properties:
  - name: data
  - name: device_id
  - name: published_at
    direction: desc
- kind: ParticleEvent
  properties:
  - name: current_temperature
  - name: target_temperature
  - name: device_id
  - name: published_at
    direction: desc
I used the gcloud tool to send this up to the Datastore successfully, and I can see both indexes in the Indexes tab.
However, I still get the above error when I try to run the query.
What do I need to add or change in my indexes to get this query to work?
Though in the comments I simply suggested select * (which I do think is the best approach), there is a way to make your query work:
- kind: ParticleEvent
  properties:
  - name: device_id
  - name: published_at
    direction: desc
  - name: current_temperature
  - name: target_temperature
The reason is that the projection (the select) is applied at the end, so current_temperature and target_temperature need to sit at a lower level of the index, after the filter and sort properties.
The reason I don't suggest this approach is that, as your data grows and you need more index combinations just to project specific columns, your index size will grow exponentially.
But if you are sure you will always query the data exactly like this, feel free to index it this way.
Or do so if the connection bandwidth between your computer and Google Cloud is so small that downloading more data causes noticeable lag.
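For completeness, the suggestion above as a full index.yaml (deployed the same way the asker already did with the gcloud tool) would be:

indexes:
- kind: ParticleEvent
  properties:
  # equality filter first, then the sort property, then the projected properties
  - name: device_id
  - name: published_at
    direction: desc
  - name: current_temperature
  - name: target_temperature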