I am trying to collect statistics in WSO2 AM using WSO2 BAM. When the API request completes, no errors are reported and the pass-through works.
However, when I try to access the statistics, I get the following error:
Table "API_REQUEST_SUMMARY" not found; SQL Statement:
SELECT time, year,month,day FROM API_REQUEST_SUMMARY order by time ASC limit 1 [42102-140]
There are no errors in the BAM logs, and the Hadoop job runs every few seconds without errors.
Any suggestions on what to look at?
OS is Linux
AM is 1.4.0
BAM is 2.3.0
I followed the instructions here for combining the two:
http://docs.wso2.org/wiki/display/AM140/Monitoring+Using+WSO2+BAM
Thanks,
Chris
The exact same exception can happen if you forgot to copy the file /statistics/API_Manager_Analytics.tbox to the directory /repository/deployment/server/bam-toolbox.
In that case, only APIMGTSTATS_DB.db will be created, without any tables.
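As a sketch of that copy step (AM_HOME and BAM_HOME are placeholders for the two installation roots, not paths from the original instructions):

# Deploy the API Manager analytics toolbox into BAM's deployment directory
cp "$AM_HOME"/statistics/API_Manager_Analytics.tbox \
   "$BAM_HOME"/repository/deployment/server/bam-toolbox/

BAM deploys toolboxes from that directory, and the toolbox's Hive scripts should be what create the summary tables (including API_REQUEST_SUMMARY) on the next analytics run.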
Thanks
Ajith.
I am new to dbt. I am currently trying to access an S3 bucket that contains Parquet files via Glue and Athena. I set up the configuration as per the dbt documentation, and after running dbt run it reports how many models and tasks it will run, so up to that point everything looks good. After that, however, it hangs and eventually times out. Checking dbt.log, I found the query below, which runs for a long time before timing out. I am not sure why it runs or which configuration I should check. I suspected it comes from a macro, but there is no macro of mine that runs this query. Please let me know if you have any pointers. Thank you.
This is the query that runs by default after dbt run; I am not sure where it comes from:
select table_catalog,table_schema,
case when table_type='BASE_TABLE' then 'table'
when table_type='VIEW' then 'view'
end as table_type
from information_schema.table
where regexp_like(table_schema,'(?i)\A\A')
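Judging only from the query shape (an information_schema scan with a regexp_like schema filter), this looks like the adapter's own metadata introspection rather than one of your models or macros. One way to trace which step issues it is to rerun with debug logging:

# Rerun with debug-level output; dbt prints each SQL statement
# together with the step that issued it (also written to logs/dbt.log)
dbt --debug run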
I am attempting to clone a database. I was previously able to clone it in the console, but now I want to create a small script to automate this, and it fails with the following error message:
(gcloud.sql.instances.clone) [ERROR_RDBMS] unable to update the following flags: cloudsql.enable_password_validation
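For context, the failing step in the script boils down to the standard clone invocation (instance names here are placeholders):

# Clone a Cloud SQL instance (source and destination names are placeholders)
gcloud sql instances clone my-instance my-instance-clone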
If I attempt to clone it in the console, I get the same error shown above.
I looked up the documentation and enable_password_validation does not seem to be in the list of supported flags, which would explain why it can't update it.
If I run gcloud sql instances describe my-instance, I don't see the flag in question.
But running on the source instance:
SELECT * FROM pg_settings
yields this row in particular:
name            | cloudsql.enable_password_validation
setting         | off
unit            | NULL
category        | Customized Options
short_desc      | Sets whether to enable Cloud SQL password validation.
extra_desc      | NULL
context         | superuser
vartype         | bool
source          | configuration file
min_val         | NULL
max_val         | NULL
enumvals        | NULL
boot_val        | on
reset_val       | off
sourcefile      | /pgsql/data/postgresql.auto.conf
sourceline      | 3
pending_restart | False
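For reference, a narrower check than the full SELECT * dump might look like this, run with psql against the source instance (connection details are placeholders):

# Query just the offending flag instead of the whole pg_settings view
psql -h <source-instance-ip> -U postgres -d postgres -c \
  "SELECT name, setting, source FROM pg_settings WHERE name = 'cloudsql.enable_password_validation';"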
Any advice on how to solve this?
There is an ongoing issue with password validation in Cloud SQL Postgres instances. It involves the exact flag that is giving you problems, cloudsql.enable_password_validation:
Diagnosis: Affected postgres instances from a recent release have the following flag set and are unable to remove or disable this flag: cloudsql.enable_password_validation=on. This flag does not appear in Cloud Console, and attempting to disable flag via gcloud returns error where the flag is not recognized or supported. Password validation occurs on every new client connection but is limited to 50 QPS, and thus higher rates will return errors.
When did this issue start occurring, and have you attempted the clone again since then? I ask because the incident has received several updates. If you continue experiencing issues, you could open a support case with GCP, as the status page recommends.
EDIT (2/24/2022)
I wanted to update this answer. The issue seems to be resolved as shown in the status page of Google Cloud:
The issue with Cloud SQL has been resolved for all affected instances as of Tuesday, 2022-02-22 14:30 US/Pacific. We thank you for your patience while we worked on resolving the issue.
If you still see this error, you can update the question confirming that it was not resolved as part of the outage resolution.
I'm having trouble with a job I've set up on Dataflow.
Here is the context: I created a dataset on BigQuery using the following path:
bi-training-gcp:sales.sales_data
In the properties I can see that the data location is "US"
Now I want to run a job on Dataflow, so I enter the following command into the Google Cloud Shell:
gcloud dataflow sql query ' SELECT country, DATE_TRUNC(ORDERDATE , MONTH),
sum(sales) FROM bi-training-gcp.sales.sales_data group by 1,2 ' --job-name=dataflow-sql-sales-monthly --region=us-east1 --bigquery-dataset=sales --bigquery-table=monthly_sales
The query is accepted by the console, which returns a sort of confirmation message.
After that I go to the Dataflow dashboard. I can see the new job queued, but after 5 minutes or so it fails and I get the following error messages:
Error
2021-09-29T18:06:00.795Z Invalid/unsupported arguments for SQL job launch: Invalid table specification in Data Catalog: Could not resolve table in Data Catalog: bi-training-gcp.sales.sales_data
Error
2021-09-29T18:10:31.592036462Z Error occurred in the launcher container: Template launch failed. See console logs.
My guess is that it cannot find my table, maybe because I specified the wrong location/region. Since my table's location is "US", I thought it would be on a US server (which is why I specified us-east1 as the region), but I tried all US regions with no success...
Does anybody know how I can solve this?
Thank you
This error occurs if the Dataflow service account doesn't have access to the Data Catalog API. To resolve this issue, enable the Data Catalog API in the Google Cloud project that you're using to write and run queries. Alternatively, assign the roles/datacatalog.viewer role to the account that launches the queries.
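A sketch of both options (the project ID and member are placeholders; substitute whichever account actually launches the job):

# Option 1: enable the Data Catalog API in the project used to run queries
gcloud services enable datacatalog.googleapis.com --project=my-project

# Option 2: grant the Data Catalog viewer role to the launching account
gcloud projects add-iam-policy-binding my-project \
    --member="user:you@example.com" \
    --role="roles/datacatalog.viewer"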
I'm trying to set up Redmine with the following products:
redmine-4.0.7
Rails 5.2.4.2
Phusion Passenger 6.0.7
Apache/2.4.6
mysql Ver 14.14
I expected there would be an initialization page; however, I got "Internal Error" from http://mydomain/redmine/.
I can see the following messages in log/production.log:
Completed 500 Internal Server Error in 21ms (ActiveRecord: 1.5ms)
ActiveRecord::StatementInvalid (Mysql2::Error: Can't find file: './redmine/settings.frm' (errno: 13 - Permission denied): SHOW FULL FIELDS FROM `settings`):
It seems I need ./redmine/settings.frm, but it doesn't exist.
Does anyone know how to create ./redmine/settings.frm and what its content should be?
The error is thrown by your database server (i.e. MySQL). It seems that MySQL does not have the required permission to access the files where it stores the table data.
Usually, those files are handled (i.e. created, updated, and eventually deleted) entirely by MySQL, which requires specific access patterns to ensure consistent data. Because of that, you should strongly avoid manually changing any files under MySQL's control. Instead, you should only use SQL commands to update table structures and table data.
To fix this issue now, you need to fix the permissions of your MySQL data files so that MySQL can properly access them. What exactly is required here is unfortunately not simple to explain, since there can be various causes. If you have just set up your MySQL server, it might be best to start over entirely.
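As one minimal sketch, assuming a stock Linux layout where the data directory is /var/lib/mysql and the server runs as the mysql user (both assumptions; check the datadir setting in your my.cnf first):

# Restore ownership of the data files to the MySQL service user
sudo chown -R mysql:mysql /var/lib/mysql
# Restart the server; the unit may be named mysql or mariadb on your distro
sudo systemctl restart mysqld

Since the errno is 13 (Permission denied) rather than 2 (No such file), the settings.frm file is most likely present but unreadable to the server process, so creating it by hand is not the right fix.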
I am trying to invoke a campaign via a SOAP request, which is supposed to insert data into the data warehouse, but it's not happening. I am using DS2 code for it.
When I run the same campaign in test mode in SAS CI, it works.
I want the log that gets generated via the SOAP request. Can anyone let me know the path for it?
Thanks in advance.
You need to check the log files on your application & mid-tier servers. Search the log files for the time you ran your request.
Start by checking the ObjectSpawner log to see if any connections were made to the SAS environment at that time, e.g. C:\SASHome\Lev1\ObjectSpawner\Logs
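If your mid-tier runs on Unix instead, a quick scan of the spawner logs around the request time might look like the following (the log path and timestamp string are placeholders; on Windows, findstr is the rough equivalent):

# Search the ObjectSpawner logs for entries near the request time
grep "10:15" /opt/sas/config/Lev1/ObjectSpawner/Logs/*.log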