Intermittent crash in API Manager Analytics Dashboard - WSO2

I’m running API Manager 3.0 along with API Manager Analytics. My Analytics cluster is a single-node deployment, and I'm using only a few dashboard widgets to monitor some APIs.
I'm facing intermittent outages of the analytics dashboard, and the dashboard homepage sometimes goes blank. What could be the possible reason for this?
I don't see any errors in the log.

In my opinion, this has something to do with your backend MSSQL database. Microsoft SQL Server has a common issue of excessive tempdb growth, especially when the same instance is being used for multiple applications.
You can try the options below:
Try truncating the MSSQL databases using the guide below [1]:
[1] How do you truncate all tables in a database using TSQL?
Alternatively, try purging the Analytics data as well to free up some space by removing historical data [2]:
[2] https://apim.docs.wso2.com/en/3.0.0/learn/analytics/purging-analytics-data/
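For illustration, a minimal sketch of what guide [1] boils down to, driven from Python via pyodbc rather than raw T-SQL; the connection string and database name are hypothetical, and it assumes no foreign-key constraints block TRUNCATE:

```python
# Hypothetical sketch: truncate every user table in the analytics database.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=analytics-db.example.com;DATABASE=ANALYTICS_DB;"  # hypothetical names
    "UID=wso2;PWD=changeme",
    autocommit=True,
)
cursor = conn.cursor()

# List every user table together with its schema.
cursor.execute(
    "SELECT s.name, t.name "
    "FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id"
)
tables = cursor.fetchall()

# TRUNCATE fails on tables referenced by foreign keys; fall back to DELETE there.
for schema, table in tables:
    try:
        cursor.execute(f"TRUNCATE TABLE [{schema}].[{table}]")
    except pyodbc.Error:
        cursor.execute(f"DELETE FROM [{schema}].[{table}]")

conn.close()
```

Take a backup first either way; truncating removes the historical analytics data rather than archiving it.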

Related

BigQuery API Listed Twice in APIs & Services Dashboard

Does anyone happen to know why the BigQuery API would be listed twice in the APIs & Services Dashboard in Google Cloud Platform?
BigQuery seems to be functioning properly; I just thought it was strange that this is the only API that seems to be listed twice. I don't think it could be enabled twice, as both links lead to the same overview page and all the metrics are the same.
Duplicate Bigquery API listed in dashboard
This behavior is apparently caused by the fact that bigquery-json.googleapis.com is an alias for bigquery.googleapis.com.
The BigQuery engineering team is aware of this issue and is working on resolving it. All further updates should occur on this public report.

Stackdriver Logging Client Libraries - What happens during Google Downtime?

If you embed the Stackdriver client library in your application and the Google Stackdriver API has downtime (Google documentation indicates 99.95% availability, i.e. up to 21.92 minutes of downtime per month):
My question is: What will happen to my application during the downtime? Will logging info build up in memory? Will it cause application errors or will it discard the log data and continue on?
Logging API downtime can have different root causes and consequences. Google System Engineers have mechanisms in place to track outages and take mitigation actions so that the downtime and its consequences are minimal, but Google cannot guarantee prevention of data loss for every Logging API outage.
Hopefully your application and pipeline can withstand the expected downtime of up to 21.56 minutes per month (99.95% SLA), as per the internal SLOs and SLAs of GCP.
The three scenarios you listed are all plausible. During such a period, the application sending the logs may receive 500 responses from the API, so it has to be able to deal with this kind of issue.
If the logging data manages to reach Google's platform but an outage prevents the data from being accessible, then Google's team will do their best to release backlogs, repopulate data, etc. They will post a general notice on https://status.cloud.google.com/
If the issue is caused by the logging agent not sending data to Google's platform, then the logging data may not be retrievable. It could still be an infrastructure outage with one of the GCP products, or it could be linked to something other than an outage, such as your application or its underlying host running out of resources, or the logging agent being corrupted; these cases are not covered by the GCP Stackdriver SLA [1].
If the pipeline that ingests data from the Logging API is backlogged, it could cause an outage, but the GCP team will do their best to make the data accessible after the outage ends.
If you suspect the Logging API is malfunctioning, contact support, file a new incident in the issue tracker [2], or inspect the open issues [3], where Google's product team will provide live updates. Links below:
[1] https://cloud.google.com/stackdriver/sla#sla_exclusions
[2] Create a new incident: https://issuetracker.google.com/issues/new?component=187203&template=0
[3] Open issues: https://issuetracker.google.com/savedsearches/559764
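To make the above concrete, here is a minimal sketch (not an official Google pattern) of how an application using the Python google-cloud-logging client might tolerate transient Logging API errors by buffering log lines locally; the log name, buffer size, and flush strategy are assumptions:

```python
# Hypothetical sketch: buffer log lines locally when the Logging API is unavailable.
from collections import deque

from google.api_core.exceptions import GoogleAPICallError, RetryError
from google.cloud import logging as cloud_logging

LOG_NAME = "my-app-log"          # hypothetical log name
client = cloud_logging.Client()  # uses Application Default Credentials
logger = client.logger(LOG_NAME)

# Bounded in-memory buffer so lines don't grow without limit during an outage.
pending = deque(maxlen=10_000)

def safe_log(text: str) -> None:
    """Send a log line, keeping it buffered locally if the Logging API errors out."""
    pending.append(text)
    try:
        while pending:
            logger.log_text(pending[0])
            pending.popleft()
    except (GoogleAPICallError, RetryError):
        # 5xx responses or exhausted retries: keep the remaining lines buffered
        # and let a later call (or a background flusher) try again.
        pass

safe_log("request handled")  # usage: call this instead of logger.log_text(...)
```

A bounded buffer trades completeness for memory safety: during an outage longer than the buffer can hold, the oldest lines are dropped rather than exhausting the host's memory.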

Using Amazon Redshift for analytics for a Django app with Postgresql as the database

I have a working Django web application that currently uses PostgreSQL as the database. Moving forward, I would like to perform some analytics on the data and also generate reports, etc. I would like to use Amazon Redshift as the data warehouse for these goals.
In order not to affect the performance of the existing Django web application, I was thinking of writing a NEW Django application that would essentially read from a READ-ONLY replica of the PostgreSQL database and continuously write data from the read-only replica to Amazon Redshift. My thinking is that perhaps the NEW Django application can be used to handle some or all of the Extract, Transform and Load (ETL) functions.
My questions are as follows:
1. Does the Django ORM work well with Amazon Redshift? If yes, how does one handle the model schema translations? Any pointers in this regard would be greatly appreciated.
2. Is there any better alternative to achieve the goals listed above?
Thanks in advance.
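For illustration only, a minimal sketch of the ETL loop described in the question, using plain psycopg2 against both the read replica and Redshift (Redshift speaks the PostgreSQL wire protocol); the connection strings, table, and column names are hypothetical:

```python
# Hypothetical sketch: copy new rows from a PostgreSQL read replica into Redshift.
import psycopg2

REPLICA_DSN = (
    "host=replica.example.com dbname=appdb user=readonly password=secret"  # hypothetical
)
REDSHIFT_DSN = (
    "host=cluster.abc123.us-east-1.redshift.amazonaws.com port=5439 "
    "dbname=analytics user=etl password=secret"  # hypothetical
)

def sync_orders(last_id: int) -> int:
    """Copy orders newer than last_id into Redshift; return the new high-water mark."""
    with psycopg2.connect(REPLICA_DSN) as src, psycopg2.connect(REDSHIFT_DSN) as dst:
        with src.cursor() as read_cur, dst.cursor() as write_cur:
            # Extract: read only rows that have not been loaded yet.
            read_cur.execute(
                "SELECT id, customer_id, total, created_at "
                "FROM app_order WHERE id > %s ORDER BY id",
                (last_id,),
            )
            rows = read_cur.fetchall()
            # Load: insert into a pre-created Redshift table.
            if rows:
                write_cur.executemany(
                    "INSERT INTO analytics.orders (id, customer_id, total, created_at) "
                    "VALUES (%s, %s, %s, %s)",
                    rows,
                )
    return rows[-1][0] if rows else last_id
```

In practice this would run on a schedule (cron, Celery beat, etc.), and for larger volumes Redshift's S3 + COPY bulk-load path is usually preferred over row-by-row INSERTs. As for question 1, the Django ORM targets PostgreSQL semantics, so using it directly against Redshift typically requires a third-party database backend and a separately maintained schema rather than a straight reuse of the existing models.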

Fetching data from database using SSRS web service

Is there a way to fetch data from the database by executing a query, without running any report, using the SSRS web service?
No, this is not possible.
You may need to explain what you want to do and why, in order to get a better response.
If you have access to the server to deploy a report, the report could be a very simple table that could be exported to Excel.
If you can't deploy a report to use in this manner, then maybe there's a good reason for that; security would certainly be high on the list.
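For illustration, a minimal sketch of the workaround suggested above: once a simple table report has been deployed, its data can be pulled out as an Excel file through SSRS URL access rather than the web service; the server name, report path, and use of NTLM authentication are assumptions:

```python
# Hypothetical sketch: render a deployed SSRS report to Excel via URL access.
import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests-ntlm

REPORT_URL = (
    "http://reportserver.example.com/ReportServer"
    "?/Exports/SimpleTable"                      # hypothetical deployed report
    "&rs:Command=Render&rs:Format=EXCELOPENXML"  # Excel (.xlsx) export format
)

response = requests.get(REPORT_URL, auth=HttpNtlmAuth("DOMAIN\\svc_reports", "password"))
response.raise_for_status()

with open("simple_table.xlsx", "wb") as f:
    f.write(response.content)
```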

Bare minimum for a Sitecore content delivery set-up

We currently have a single installation multi-site setup, hosted in Europe, and are looking to move content delivery for a single site to China. This is partly for SEO purposes and partly to improve content delivery performance there. Content management performance isn't an issue.
Given that we'll have to transfer data between two separate hosting companies, we'd like to limit both how much gets sent and, if possible, not send any data we wouldn't be happy to publish.
We have Sitecore analytics enabled, so this might be a complicating factor.
I've read the scaling guide, which suggests we'll need a minimum of both web and core databases in the new CD environment. It does suggest that if there is no extranet security configured, it is possible to do without the core database in a pure CD environment.
Does anyone have any experience with this? What are the benefits/pitfalls? What is the bare minimum installation we can get away with?
Edit: Sitecore.NET 6.4.1 (rev. 111003)
As divamatrix said, knowing the version number is essential.
But even though the older versions can run without the Core, I would stick to an installation that includes the Core so you will have less trouble upgrading in the future.
What you need on the Content Delivery side is:
Web database
Core database
Analytics database
Then on the Content Management side you need your usual:
Master database
Web database
Core database
Analytics database
Then set up SQL replication between the Core databases.
Analytics can be configured to run reports using data from the CD side and store them on the CM side.
You also need to set up Web Deployment for file replication between the instances.
Besides all this, you need some extra configuration, as explained in the Scaling Guide.
If you are not using Sitecore 6.4 or higher, I would recommend upgrading first. Once you have this set up properly, it will work like a charm!
To answer your question, older versions of Sitecore worked without the Core database. You didn't say which version of Sitecore you're using, but if it's anything current, the answer is that you need both a web database and a core database. Also, having analytics enabled is definitely a consideration you need to look at. You should probably set up your analytics database local to your CD hosting, as this database can see a lot of traffic depending on the traffic of your site. You can either have publishing set up to publish to a local web database and then replicate, or just let publishing handle the transfer of data between your CM and CD environments.