I am having some trouble with Monitoring System Statistics in WSO2 Identity Server 5.3.3. There is no activity being reported in the Service Summary. We are about to go live with WSO2 Identity Server in a couple of weeks, so I really want to keep an eye on the response times and counts.
Our current production system is not heavily used, but it shows Total Response Count: 0, when in fact I tested several logins earlier today.
I know the monitoring was updating a few weeks ago, but something has happened since. Do I need to enable this via a setting, or is it possible it was turned off?
This is the official documentation entry point on setting up and using statistics/analytics for the latest WSO2 Identity Server version. Please make sure you have followed all the steps accurately. You will have to enable it separately and configure it in the files below. After enabling it, you should be able to view the stats.
<IS_HOME>/repository/conf/identity/identity.xml
<IS_HOME>/repository/deployment/server/eventpublishers/IsAnalytics-Publisher-wso2event-*
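As a rough illustration (a hedged sketch only; listener names, ports, and defaults vary between IS releases, so verify against your own files), enabling the authentication data publishers involves an event listener entry in identity.xml and a receiver URL in each IsAnalytics publisher file:

```xml
<!-- identity.xml (illustrative): enable the authentication data publisher proxy -->
<EventListener type="org.wso2.carbon.identity.core.handler.AbstractIdentityMessageHandler"
               name="org.wso2.carbon.identity.data.publisher.application.authentication.AuthnDataPublisherProxy"
               orderId="11" enable="true"/>

<!-- IsAnalytics-Publisher-wso2event-*.xml (illustrative): point the publisher at your analytics node -->
<to eventAdapterType="wso2event">
    <property name="receiverURL">tcp://localhost:7612</property>
    <property name="protocol">thrift</property>
    <property name="username">admin</property>
    <property name="password">admin</property>
    <property name="publishingMode">non-blocking</property>
</to>
```

If the listener is disabled or the receiver URL points at a node that isn't running, logins will still succeed but no events reach the analytics side, which would match the Total Response Count: 0 symptom.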
I developed and support a client's mobile app that uses Firebase services.
Google Cloud Platform logged this event yesterday at 4:17 am:
'<my account email> has executed
google.api.serviceusage.v1.ServiceUsage.EnableService
on stackdriver.googleapis.com'
I was sleeping at the time and a review of Google Admin Console Login Audit Log does not show a login event around that same time.
Immediately, errors were being reported at 100% for 'compute'.
A look at the Stackdriver API overview page gave no indication of activity.
My question, and my concern: how and why did this service get activated, and what activity is driving the compute errors at 100%?
During my efforts to understand, I clicked on Compute Engine API in the API library, which enabled the API (but no VMs, disks, etc. were created).
A short time later, Google Cloud Platform logged several entries:
google.devtools.cloudbuild.v1.CloudBuild.ListBuilds
was executed on builds
Number of returned items 1000
The 'compute' errors stopped.
When I disabled the Compute Engine API, the ListBuilds logs stopped, but the compute errors returned to 100%.
I have not found a definitive answer to my question.
It's clear that the Stackdriver API was enabled, but I don't know why.
When enabled, 100% compute errors were being reported (the orange line on the graph) without any details.
While customizing the Google Cloud Platform Dashboard for this account, I toggled/enabled the Compute Engine card/graph, hoping that might reveal some clues regarding the 100% errors. That action initialized the Compute Engine API. Almost immediately the compute errors ended, but there was a surge of activity that has continued. Reviewing many resources, I found information that suggests this is normal behavior.
I would still like to fully understand how Stackdriver was enabled, why it was enabled, what value it provides, and whether I can simply disable it and the Compute Engine API, as this project will never require VM compute services.
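For anyone chasing a similar mystery, the Cloud SDK can answer most of these questions from the command line (a sketch; my-project is a hypothetical project ID):

```sh
# List the APIs currently enabled on the project
gcloud services list --enabled --project my-project

# Search the audit logs for who (or what) enabled a service
gcloud logging read \
  'protoPayload.methodName="google.api.serviceusage.v1.ServiceUsage.EnableService"' \
  --project my-project --limit 10

# Disable an API the project will never need
gcloud services disable compute.googleapis.com --project my-project
```

Note that enabling one API often enables its dependencies automatically, which may explain service activations that have no corresponding login event.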
I have read through the various questions involving throttling on Stack Overflow. However, I didn't find anyone with an issue similar to what I'm seeing. I have gone through the tutorials and the setup process on the WSO2 site regarding throttling.
This is what I have done:
- Set up an additional tier to allow 5 calls per minute at the following levels: Advanced Throttling, Application Throttling, and Subscription Throttling.
- Edited the API and set the subscription tier to the new custom tier.
- Set the Application to the new tier level.
- Set the Advanced Throttling Policy to apply to the API, then saved and published.
- Ran 1100 HTTP requests from an application that calls the API on a one-second interval (see the sketch after this list). Every request made was processed successfully, without any throttling.
- Installed version 1.9 of API Manager and set up the very same rules; there the requests were throttled correctly.
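For reference, the test traffic was generated roughly like this (a sketch with a hypothetical gateway URL and token; depending on the API Manager version, throttled calls come back as HTTP 429 or 503):

```sh
#!/bin/sh
# Fire one request per second; with a 5-per-minute tier, calls beyond the
# fifth in any given minute should be rejected by the gateway.
for i in $(seq 1 1100); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    "https://gateway.example.com:8243/myapi/1.0/ping"
  sleep 1
done
```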
Any help would be greatly appreciated; I'm not really sure if it is a bug or a configuration issue on my end.
Regards
After much digging in the WSO2 documentation, I have found that in order to use the advanced throttling policies (which are enabled by default), you must use the Traffic Manager (which is disabled by default).
There are instructions on how to use the Traffic Manager in the WSO2 documentation. If advanced throttling is disabled, the basic throttling works as expected.
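For what it's worth, on the build I tested the relevant switch lives in <APIM_HOME>/repository/conf/api-manager.xml (an illustrative sketch; element names and ports may differ by release):

```xml
<!-- api-manager.xml (illustrative): advanced throttling needs a reachable Traffic Manager -->
<ThrottlingConfigurations>
    <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
    <DataPublisher>
        <Enabled>true</Enabled>
        <!-- Default Traffic Manager ports; adjust for any port offset -->
        <ReceiverUrlGroup>tcp://localhost:9611</ReceiverUrlGroup>
        <AuthUrlGroup>ssl://localhost:9711</AuthUrlGroup>
    </DataPublisher>
</ThrottlingConfigurations>
```

Setting EnableAdvanceThrottling to false falls back to the basic tiers, which matches the behavior described above.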
It took some time to discover this, as the documentation doesn't make the distinction clear.
I hope this helps someone having a similar issue.
We are working on setting up the WSO2 production environment, and the question came up about the importance of having high-availability databases on the Identity Server side. We have concerns regarding access tokens. Does the IDS manage all that information, or is it shared among the other DBs? Also, if the DB happens to go down on the IDS side, will it cause all of WSO2 to crash? Will APIs no longer be available for use? I can't seem to find much documentation on the matter.
Thank you.
Database high availability is needed for WSO2 products. Tokens are saved in the database, so if the database goes down, the Identity Server will not function properly.
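As one common mitigation (a sketch only; host names, database names, and credentials below are hypothetical), the token datasource in <IS_HOME>/repository/conf/datasources/master-datasources.xml can point at a replicated database pair through a failover JDBC URL:

```xml
<!-- master-datasources.xml (illustrative): failover URL across two MySQL nodes -->
<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-node1:3306,db-node2:3306/apim_db?autoReconnect=true</url>
            <username>apimuser</username>
            <password>changeme</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```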
I want to deploy an event-processing stack based on WSO2, but I can't figure out the feature installation process.
I've downloaded the latest Carbon (4.0.2) and probably want to install ESB, BRS, CEP, and BAM, and maybe later API Management.
I've connected to the Turing feature repository.
Two questions:
- In the available features list I don't see BAM or BRS, although ESB, CEP, and API are there. What do I need to do to see these other parts?
- When I select CEP and ESB for installation, I get an "install modified" message and no features are selected. I imagine this has something to do with feature version incompatibility.
- If I just select ESB, the installation seems to proceed, but the server won't restart (it hangs waiting for one of the Synapse services).
It feels like I have the wrong process for determining what set of features/versions I need. How should I proceed?
Carbon does not like to play well with its other components. I've never been able to successfully use Carbon to manage any WSO2 stack. Each time I've set up or deployed a WSO2 stack, I've ended up manually configuring the separate components' config files individually, usually starting with the ESB first, then adding in the CEP, then the BAM.
You must also make sure they start in the correct order and that the config files don't stomp on each other (make sure your port offsets are set; see the snippet below).
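The offset lives in each product's carbon.xml (a sketch; the element is the same across Carbon-based products of that era):

```xml
<!-- <PRODUCT_HOME>/repository/conf/carbon.xml: shift all ports so co-located products don't collide -->
<Ports>
    <Offset>1</Offset>
</Ports>
```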
You don't need Carbon to run any instance of the WSO2 stack. Simply 'install' it (unzip the wso2X.zip file), then make sure the service starts (call wso2X/bin/wso2server.sh start), and that's about it for the general setup. After that, you need to configure each component to play nicely with every other component (meaning you need to hook your BAM and CEP into your ESB, etc.). There isn't a lot of 'auto' config or discovery, so it's usually easier to go the manual route with WSO2.
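In practice the per-product setup amounts to something like this (illustrative paths and versions):

```sh
# Unpack each product standalone and start it as a background service
unzip wso2esb-4.x.zip -d /opt/wso2
/opt/wso2/wso2esb-4.x/bin/wso2server.sh start

# Tail the log to confirm a clean start before bringing up the next product
tail -f /opt/wso2/wso2esb-4.x/repository/logs/wso2carbon.log
```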
Also note that WSO2 products are Java extensions (essentially wrappers) around other Apache products (like Tomcat/Synapse), so usually if you are having a problem with WSO2, it's because the underlying system (Tomcat/Synapse) was not properly configured (though that is no fault of your own, as the WSO2 documentation makes no mention of ensuring the base system is configured properly).
Also note that in my testing of WSO2 products, they consume huge amounts of memory (I could not run more than the ESB and BAM on one machine because of the 8GB+ of memory eaten by each), and a trouble ticket had to be put in to rectify a memory leak found in WSO2's Java modules; I'm not sure if that was ever fixed.
Not trying to denigrate WSO2, but just be warned that it's not a pretty undertaking, and you might fare better with other 'cloud' options if you have a choice.
edit: I've had to test out different 'cloud' stacks (with different types of 'plugins', or web services if you will) and how interoperable they were; as it turns out, they're pretty interoperable if YOU have total control over the individual stacks. Otherwise, the biggest downfall of any of the stacks I tested was simply documentation... I don't care if a program has bugs or issues, as long as they are properly documented with possible workarounds (if any) so that I am aware of what is happening on my stack. Since WSO2's products were just Java wrappers for the Apache versions of their offerings (i.e. WSO2's ESB == Apache Synapse), any problems that occurred were usually solved in Apache's documentation (what little they had for certain problems), while WSO2's documentation had a lot of copy/paste issues (if they had any documentation beyond version 1). It was usually easier to download and install the actual Apache offerings instead of WSO2's, and afterwards install WSO2's products and point them at the valid Apache configs/installs.
I did some testing with the Microsoft stack on Azure and the general IIS/.NET offerings of equivalent services (the IIS/.NET equivalents of an ESB/CEP/BAM/etc., for what could be found). On the MS side, the documentation was enough (and there are enough people buying into the hype of cloud right now) that I could stand up most of the services semi-easily. I say semi-easily because of the misnomer (or my misunderstanding) of the 'ease of use' of 'cloud' services. I also found a product called Neuron ESB, which is a .NET ESB offering, though I didn't do anything with it during my testing, so I can't speak to it.
Testing Amazon's offerings turned out to be some of the easiest in terms of setup and configuration; the biggest issue with what I was testing on AWS was general internet latency.
Most of this is personal conjecture and I highly recommend you evaluate each as the 'cloud' space is constantly changing and each cloud platform has something slightly different to offer.
TLDR: the cloud space has a lot to offer, and one should really consider what one is trying to achieve in the long run, then evaluate each platform's offerings to see which fits. That being said, documentation and internal vendor interoperability (i.e. the ability of a vendor's products to easily communicate with each other) definitely help a product's 're-usability' factor.
The Turing feature repository is not compatible with Carbon kernel 4.0.2. You can download Carbon kernel 4.2.0 and connect to the Turing feature repository from there.
We currently have a single installation multi-site setup, hosted in Europe, and are looking to move content delivery for a single site to China. This is partly for SEO purposes and partly to improve content delivery performance there. Content management performance isn't an issue.
Given that we'll have to transfer data between two separate hosting companies, we'd like to limit both how much gets sent and, if possible, not send any data we wouldn't be happy to publish.
We have Sitecore analytics enabled, so this might be a complicating factor.
I've read the scaling guide, which suggests we'll need a minimum of both the web and core databases in the new CD environment. It does suggest that if no extranet security is configured, it is possible to do without the Core database in a pure CD environment.
Does anyone have any experience with this? What are the benefits/pitfalls? What is the bare minimum installation we can get away with?
Edit: Sitecore.NET 6.4.1 (rev. 111003)
Like divamatrix said, knowing the version number is essential.
But even though older versions can run without the Core, I would stick to an installation that includes the Core so you will have less trouble upgrading in the future.
What you need on the Content Delivery side is:
Web database
Core database
Analytics database
Then on the Content Management side you need your usual:
Master database
Web database
Core database
Analytics database
Then set up SQL replication between the Core databases.
Analytics can be configured to run reports using data from CD and store them on CM.
You also need to set up Web Deployment for file replication between the instances.
Besides all this, you need some extra configuration, as explained in the Scaling Guide.
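As a rough sketch of the bare CD configuration (server and database names here are hypothetical), Sitecore reads these from App_Config/ConnectionStrings.config, and a pure CD instance needs no master entry:

```xml
<!-- App_Config/ConnectionStrings.config on the CD instance (illustrative) -->
<connectionStrings>
  <add name="core"
       connectionString="Data Source=cd-sql01;Initial Catalog=Sitecore_Core;User ID=sc_user;Password=changeme" />
  <add name="web"
       connectionString="Data Source=cd-sql01;Initial Catalog=Sitecore_Web;User ID=sc_user;Password=changeme" />
  <add name="analytics"
       connectionString="Data Source=cd-sql01;Initial Catalog=Sitecore_Analytics;User ID=sc_user;Password=changeme" />
</connectionStrings>
```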
If you are not using Sitecore 6.4 or higher, I would recommend upgrading first. Once you have this set up properly, it will work like a charm!
To answer your question, older versions of Sitecore worked without the Core database. You didn't say which version of Sitecore you're using, but if it's anything current, the answer is going to be that you need a web database and a core database. Also, having analytics enabled is definitely a consideration: you should probably look at setting up an analytics database local to your CD hosting, as this database can see a lot of traffic depending on the traffic of your site. You can have publishing set up to publish to a local web database and then replicate, or you can just let publishing handle the transfer of data between your CM and CD environments.