How to activate Audit Trail in Siebel 8 for local databases?
There's an article about this on Siebel Unleashed called Configuring Siebel Audit Trail. I believe it's about Siebel 8. Hope that helps.
Siebel Remote only applies/generates audit trail records when syncing with the server.
Thus, your local audit trail will always be empty. Also, the audit trail table tends to get very big, which is not a good thing for remote clients.
I'm running API Manager 3.0 along with API Manager Analytics. My Analytics cluster is a single-node deployment, and I'm using only a few dashboard widgets to monitor some APIs.
I'm facing intermittent outages of the analytics dashboard, and the dashboard home page sometimes goes blank. What could be the possible reason for this?
I don't see any errors in the logs.
In my opinion, this has something to do with your backend MSSQL database. Microsoft SQL Server has a common issue of excessive tempdb growth, especially when the same instance is used by multiple applications.
You can try the options below:
Try truncating the MSSQL tables using the guide below [1]; a hedged sketch of the same idea follows this list:
How do you truncate all tables in a database using TSQL?
Alternatively, try purging the Analytics data as well and free up some space by removing historical data [2]: https://apim.docs.wso2.com/en/3.0.0/learn/analytics/purging-analytics-data/
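Since the linked guide is T-SQL-centric, here is a minimal sketch of the same idea in Python with pyodbc; the connection string and database name are placeholders, and the fall-back-to-DELETE behaviour is an assumption to adapt to your environment:

```python
# Sketch: truncate every user table in a SQL Server database via pyodbc.
# The connection string and database name below are placeholders.
# Caution: TRUNCATE TABLE fails on tables referenced by foreign keys,
# so this sketch falls back to DELETE for those.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=WSO2AM_STATS_DB;UID=user;PWD=secret"
)
conn.autocommit = True
cursor = conn.cursor()

# Enumerate user tables from the catalog views.
cursor.execute(
    "SELECT s.name, t.name "
    "FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id"
)
for schema, table in cursor.fetchall():
    try:
        cursor.execute(f"TRUNCATE TABLE [{schema}].[{table}]")
    except pyodbc.Error:
        # Fall back to DELETE when foreign keys block TRUNCATE.
        cursor.execute(f"DELETE FROM [{schema}].[{table}]")
```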
I have just been given admin access to a Google Analytics portal that tracks the corporate website's activity. The tracked data are to be moved to Amazon S3 via AppFlow.
I followed the official AWS documentation on how to set up the connection between GA and AWS. We created the connection successfully, but I came across an issue I can't find an answer to:
The Subobject field is empty. There are already ~4 months' worth of data, so I don't think it's a case of empty data. This issue does not let me proceed with creating the flow, as it is a required field. Any thoughts?
Note: the client and the team are new to AWS, so we are setting it up as we go and learning along the way. Thank you for the help!
Found the answer! The Google Analytics account should have a Universal Analytics property available. Here are a few links, followed by a quick way to check the connection from the AWS SDK:
https://docs.aws.amazon.com/appflow/latest/userguide/google-analytics.html
https://support.google.com/analytics/answer/6370521?hl=en
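As a hedged sanity check (the connector profile name below is a placeholder), you can ask AppFlow to enumerate the Google Analytics objects it can see; an empty result lines up with an empty Subobject field in the console:

```python
# Sketch: list the objects AppFlow can see for a Google Analytics connection.
# "my-ga-connection" is a placeholder connector profile name.
import boto3

appflow = boto3.client("appflow")
resp = appflow.list_connector_entities(
    connectorProfileName="my-ga-connection",
    connectorType="Googleanalytics",
)
print(resp["connectorEntityMap"])  # empty => no Universal Analytics property exposed
```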
Suppose you embed the Stackdriver client library in your application and the Google Stackdriver API has downtime (Google documentation indicates 99.95% availability, or roughly 21.9 minutes of downtime per month).
My question is: what will happen to my application during the downtime? Will logging info build up in memory? Will it cause application errors, or will it discard the log data and continue on?
Logging API downtime can have different root causes and consequences. Google's systems engineers have mechanisms in place to track outages and take mitigation actions so that the downtime and its consequences are minimal, but Google cannot guarantee that no data is lost in every Logging API outage.
Hopefully your application and pipeline can withstand the expected downtime of up to roughly 21.9 minutes a month (99.95% SLA; an average month has about 43,830 minutes, and 0.05% of that is about 21.9 minutes), as per the internal SLOs and SLAs of GCP.
All three scenarios you listed are plausible. During an outage, your application may get 500 responses when sending logs, so it has to be able to deal with that kind of failure; a sketch of one way to do so follows the scenarios below.
If the logging data manages to reach Google's platform but an outage prevents the data from being accessible, then Google's team will try their best to release backlogs, repopulate data, etc. They will post a general notice on https://status.cloud.google.com/
If the issue is caused by the logging agent not sending data to the platform, then the logging data may not be retrievable. It could still be an infrastructure outage in one of the GCP products, or something other than an outage, such as your application or its underlying host running out of resources, or a corrupted logging agent; those cases are not covered by the GCP Stackdriver SLA [1].
If the pipeline that ingests data from the Logging API is backlogged, it can cause an outage, but the GCP team will try their best to make the data accessible after the outage ends.
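As a rough illustration (the names here are illustrative, not part of any Google client library), here is a sketch of a logger that buffers entries in memory, retries with exponential backoff on 5xx-style failures, and caps the buffer so memory cannot grow without bound:

```python
# Sketch: a tolerant logging wrapper. `send_fn` stands in for whatever
# client call actually ships a log entry (e.g. a Cloud Logging write).
import time
from collections import deque

class BufferedLogger:
    def __init__(self, send_fn, max_buffer=10_000):
        self.send_fn = send_fn
        # Bounded buffer: when full, the oldest entries are dropped,
        # trading bounded memory for possible loss during long outages.
        self.buffer = deque(maxlen=max_buffer)

    def log(self, entry):
        self.buffer.append(entry)
        self.flush()

    def flush(self, retries=3):
        while self.buffer:
            entry = self.buffer[0]
            for attempt in range(retries):
                try:
                    self.send_fn(entry)       # may raise on 500 responses
                    self.buffer.popleft()
                    break
                except IOError:
                    time.sleep(2 ** attempt)  # exponential backoff
            else:
                # Backend still down: keep entries buffered and return, so
                # the application never blocks indefinitely or crashes.
                return
```

With a policy like this, logs build up in memory only up to the cap, the application keeps running without errors, and data is lost only if an outage outlasts the buffer.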
If you suspect the Logging API is malfunctioning, please contact support, file an issue in the issue tracker, or inspect the open issues, where Google's product team provides live updates. Links below:
[1] https://cloud.google.com/stackdriver/sla#sla_exclusions
[2] Create a new incident: https://issuetracker.google.com/issues/new?component=187203&template=0
[3] Open issues: https://issuetracker.google.com/savedsearches/559764
A simple question, yet I couldn't find much information on the subject. How is business activity monitoring related to business analytics? I always thought business analytics was a subsystem of activity monitoring systems, but that's only my limited view, so I was wondering. In that train of thought, how do, for instance, WSO2 BAM and Google Analytics compare to each other?
Initially, WSO2 BAM 2.x.x was just a data analytics framework that processed big data offline (as batch jobs with Apache Hadoop) and could also receive and visualize data.
From BAM 2.4.0, it also comprises WSO2 Complex Event Processing (CEP) features that can monitor events in real time and process and visualize them at relatively low latency, according to [1].
In Google Analytics, most analytics and dashboards are available out of the box, but with WSO2 BAM you may need to write some Hive queries and build dashboards to come up with a great solution.
WSO2 BAM is open source (Apache License) and you can use it as you wish with great flexibility, although it lacks some out-of-the-box features compared to Google Analytics.
From BAM 2.4.0, it also comes with a built-in Activity Monitoring feature [2] that is based on the concept of an Activity ID. This can be used out of the box when your business process is properly configured for the activity monitoring use case.
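To make the Activity ID concept concrete, here is a toy sketch (plain Python, not the WSO2 BAM publisher API): every event emitted while handling one business activity carries the same correlation ID, so the monitoring side can group the events later:

```python
# Toy illustration of the Activity ID concept: all events from one
# business activity share a correlation ID. emit_event is a stand-in
# for publishing to BAM or any other event sink.
import uuid

def emit_event(event):
    print(event)  # placeholder for a real event publisher

def handle_order(order_id):
    activity_id = str(uuid.uuid4())  # one ID per business activity
    for step in ("received", "validated", "shipped"):
        emit_event({"activity_id": activity_id,
                    "step": step,
                    "order_id": order_id})

handle_order("ORD-1001")
```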
[1] https://docs.wso2.org/display/BAM240/Realtime+Analytics
[2] https://docs.wso2.org/display/BAM240/Activity+Monitoring+Dashboard
On some developer PCs at my organisation (which have local installs of AppFabric server and the underlying monitoring database), AppFabric is failing to populate the ASEventSourcesTable table, with the result that no events arrive in the ASWcfEventsTable table.
If I manually insert what is required into the ASEventSourcesTable table (going off another AppFabric install where ASEventSourcesTable is populated automatically), then events arrive and are visible through the dashboard, suggesting all the moving parts are working (service, SQL Agent, etc.).
Any ideas on what would stop AppFabric 'parsing' IIS to determine what is a valid event source? Something in the config?
In fact, the process is a bit different. The Event Collection service (installed on the hosting server) captures the WCF ETW event data and writes it to the staging table (ASStagingTable) in the monitoring database. A SQL Agent job runs continuously, checks for new event records in the staging table, parses the event data, and moves it to the long-term-storage WCF event table.
So, first check ASStagingTable, and check that every AppFabric monitoring client has access to the monitoring database (network and connection string). The event logs can also give you more information. A quick sanity check follows:
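Here is a hedged sketch of that check in Python with pyodbc (the server, database name, and schema are placeholders; the table names are the ones mentioned above):

```python
# Sketch: count rows in the AppFabric monitoring tables to see where the
# pipeline stalls. If ASStagingTable stays empty, the Event Collection
# service is not writing at all; if it fills up but ASWcfEventsTable does
# not, suspect the SQL Agent job that moves events onward.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=AppFabricMonitoring;Trusted_Connection=yes"
)
cursor = conn.cursor()
for table in ("ASStagingTable", "ASEventSourcesTable", "ASWcfEventsTable"):
    # Adjust the schema prefix to match your install.
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    print(table, cursor.fetchone()[0])
```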
I would highly suggest you read this article.