Sitecore Azure Deployment

I have an Azure subscription located in India. When using the Sitecore Azure module, the list of locations for farms does not contain India. Can I still use this subscription for a Sitecore Azure deployment via the Sitecore Azure module?

The data centers appear to be defined in the /App_Config/AzureVendors/Microsoft.xml file. You can add a data center node to that XML something like this:
<vendor name="Microsoft">
<datacenters>
...
<datacenter name="West India">
<coordinates left="66%" top="56.5%" />
</datacenter>
</datacenters>
...
</vendor>
Note, however, that according to https://azure.microsoft.com/en-us/regions/#services, the India regions do not support all Cloud Service sizes.
I tested adding a new data center: it appears on the map, and the menu seems to work. However, I have not actually tried deploying, so proceed with caution.

Related

AWS AppSync searchItems query returns data while the table is empty

I deleted all the items in the DataTemplate table, but when I query again with the searchDataTemplates endpoint (in the app or in the AppSync console) it returns the old data. When I use listDataTemplates it returns nothing, which is correct. I need to repopulate the data in the table.
[Screenshots: data template table, search endpoint, list endpoint]
When I updated items individually it worked just fine, but when I deleted all the items from the console (around 700 items) the search endpoint stopped working. Just the search endpoint.
UPDATE:
I repopulated the data hoping it would reset, but now listDataTemplates shows the new data while the search still shows the old data. Is there some cache that needs to be reset?
SECOND UPDATE:
I removed the table and the AppSync functions were gone; however, when I recreated the table (with no data), testing the function still returns the old data. I'm guessing the OpenSearch index hasn't been updated?
If you are using AppSync with the Amplify CLI, @searchable will automatically create the following:
An OpenSearch Domain
A Lambda Function that is attached to the DynamoDB stream and pushes the changes (create/update/delete) over to your OpenSearch Domain.
The problem you're facing is most likely that this Lambda Function failed to push the changes from DynamoDB Streams to OpenSearch. A quick suggestion is to check the created Lambda Function first (a rough way to do that from code is sketched below).
Reference: @searchable
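If you want to script that first check instead of clicking through the console, here is a rough sketch using the AWS SDK for .NET. The log group name follows the usual /aws/lambda/&lt;function-name&gt; convention, but the function name itself is a placeholder: use whatever name Amplify generated for the @searchable streaming Lambda in your environment.

// Rough sketch: scan the streaming Lambda's recent CloudWatch logs for errors.
using System;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var logs = new AmazonCloudWatchLogsClient();

var response = await logs.FilterLogEventsAsync(new FilterLogEventsRequest
{
    // Placeholder name: substitute the Lambda that Amplify created for @searchable.
    LogGroupName = "/aws/lambda/YOUR-SEARCHABLE-STREAMING-FUNCTION",
    FilterPattern = "?ERROR ?Exception",   // match error-ish lines only
    StartTime = DateTimeOffset.UtcNow.AddHours(-24).ToUnixTimeMilliseconds()
});

foreach (var logEvent in response.Events)
{
    Console.WriteLine(logEvent.Message);
}

If the Lambda is throwing (for example, permission errors against the OpenSearch domain), the delete events from DynamoDB Streams never reach the index, which would explain the stale search results.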
This issue can only happen if caching is enabled in your application.
I am not sure what infrastructure you are using, so I will go ahead with an educated guess. Please feel free to correct me if I have it wrong.
From your description, you have AppSync as the API layer and DynamoDB as the primary database.
If these are the only two resources you have, please check the AppSync cache configuration.
Open the AppSync console
From the left panel select APIs -> your API -> Caching
Validate that Caching behavior is set to None (a scripted equivalent is sketched just below)
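If you'd rather verify (and, if needed, flush) the cache from code, here is a rough sketch using the AWS SDK for .NET. I'm assuming the AWSSDK.AppSync package; the API id is a placeholder you can copy from the AppSync console.

// Rough sketch: read the AppSync API cache configuration and flush it if one exists.
using System;
using Amazon.AppSync;
using Amazon.AppSync.Model;

var appSync = new AmazonAppSyncClient();
const string apiId = "YOUR_APPSYNC_API_ID";   // placeholder

try
{
    var cache = await appSync.GetApiCacheAsync(new GetApiCacheRequest { ApiId = apiId });
    Console.WriteLine($"Caching behavior: {cache.ApiCache.ApiCachingBehavior}, TTL: {cache.ApiCache.Ttl}s");

    // Drop whatever is cached right now so stale results cannot be served from this layer.
    await appSync.FlushApiCacheAsync(new FlushApiCacheRequest { ApiId = apiId });
    Console.WriteLine("Cache flushed.");
}
catch (NotFoundException)
{
    Console.WriteLine("No cache is configured for this API (equivalent to 'None').");
}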
If you have AWS OpenSearch enabled for search queries (I could be wrong, but I'm picking this up from the previous comment), then validate the cluster configuration.
Open the AWS OpenSearch Service console
From the left panel select Domains and click on the OpenSearch domain that you are using
Scroll to the bottom right, look for Advanced cluster settings, and ensure the attribute Fielddata cache allocation is set to 0
If Fielddata cache allocation is not 0, update the cluster configuration and set it to 0 (a scripted equivalent follows).
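The same change can be scripted. Below is a rough sketch using the AWS SDK for .NET (I'm assuming the AWSSDK.OpenSearchService package); the domain name is a placeholder, and indices.fielddata.cache.size is, as far as I know, the advanced option behind the console's Fielddata cache allocation field.

// Rough sketch: set the fielddata cache allocation to 0 through the service API instead of the console.
using System;
using System.Collections.Generic;
using Amazon.OpenSearchService;
using Amazon.OpenSearchService.Model;

var opensearch = new AmazonOpenSearchServiceClient();

await opensearch.UpdateDomainConfigAsync(new UpdateDomainConfigRequest
{
    DomainName = "your-search-domain",   // placeholder
    AdvancedOptions = new Dictionary<string, string>
    {
        ["indices.fielddata.cache.size"] = "0"
    }
});

Console.WriteLine("Advanced options update submitted; give the domain a few minutes to apply it.");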
Wait for a few minutes (I would suggest 5 minutes) and then retry your use-case.
I hope this helps resolve your issue.

CloudFront Top Referrers Report - ALL referrer URLs

In AWS I can find this under:
CloudFront >> Reports & Analytics >> Top Referrers (CloudFront Top Referrers Report)
There I get the top 25 items. How can I get ALL of them?
I have turned on logging for my bucket, but it seems that the referrer is not part of the log file. Any idea how Amazon collects its top 25, and how I can get the whole list accordingly?
Thanks in advance for your help.
Amazon's built-in analytics are, as you've noticed, rather basic. The data you're looking for all lives in the log files that you can set CloudFront up to export (in the cs(Referer) field). If you know what you're looking for, you can set up a little pipeline to download logs, pull out the numbers you care about and generate reports.
Amazon also makes it easy[1] to set up Athena or Redshift to look directly at CloudFront or S3 log files in their target bucket. After a one-time setup, you could query them directly for the numbers you need.
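To make the Athena route concrete, here is a rough sketch using the AWS SDK for .NET. It assumes you have already created a cloudfront_logs table following AWS's documented CREATE TABLE statement; the database, table, column and output-bucket names are placeholders and may differ in your setup.

// Rough sketch: ask Athena for every referrer seen in the CloudFront access logs.
using System;
using Amazon.Athena;
using Amazon.Athena.Model;

var athena = new AmazonAthenaClient();

// Column/table names follow AWS's sample CloudFront table; adjust to your own definition.
var query = @"SELECT referrer, COUNT(*) AS hits
              FROM cloudfront_logs
              GROUP BY referrer
              ORDER BY hits DESC";

var start = await athena.StartQueryExecutionAsync(new StartQueryExecutionRequest
{
    QueryString = query,
    QueryExecutionContext = new QueryExecutionContext { Database = "default" },   // placeholder database
    ResultConfiguration = new ResultConfiguration
    {
        OutputLocation = "s3://your-athena-results-bucket/"   // placeholder bucket
    }
});

Console.WriteLine($"Query started: {start.QueryExecutionId}");
// Poll GetQueryExecution until the state is SUCCEEDED, then read the full result set with
// GetQueryResults (or just open the query in the Athena console).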
There are also paid services built to fill in the holes in Amazon's default reports. S3stat (https://www.s3stat.com/), for example, will give you a Top 200 Referrer list in its reports, with the ability to export complete lists.
[1] "easy", using Amazon's definition of the word, meaning really really hard.

Grafana - Configure Custom Metrics from CloudWatch

I am new to Grafana. I am setting it up to view data from CloudWatch for a custom metric. The custom metrics namespace is JVMStats, the metric is JVMHeapUsed, and the dimension is the instance ID. When I configure these values, I am not able to get the graph. Can you please advise me on how to get the data?
Regards
Karthik
I want to do the same.
As far as I can tell, it's not possible out of the box with the latest Grafana (2.6 at time of writing). See this issue.
This pull request implements it. It's currently tagged as 3.0-beta1. So I expect we'll both be able to do what we want come version 3.0.
EDIT: proof of 3.0-beta-1 working
I installed 3.0-beta-1 and was able to use Custom Metrics (screenshot omitted).
I managed to add my custom metrics now; the only issue I had was that I listed my custom metrics in the "Data Source" configuration with commas and spaces:
Custom1, Custom2
but it must be only commas:
Custom1,Custom2
And it works for me. The preview in the text box shows this, but I missed it.
Another option is to configure an AWS CloudWatch job to collect data into Axibase Time Series Database, where the CustomMetrics namespace is enabled out of the box.
Disclosure: I work for Axibase.

Using New Relic with Sitecore

I am testing New Relic with Sitecore CMS. All of the New Relic web transactions are being named after the item's layout file, so I am unable to drill into item-level details in New Relic.
I am trying to use the New Relic API to call SetTransactionName and set it to the item's URL, but I can't seem to make it work. I have created an httpRequestBegin pipeline processor, and I have put it right at the end, right after:
<processor type="Sitecore.Pipelines.HttpRequest.ExecuteRequest, Sitecore.Kernel"/>
I have the New Relic API assembly installed, and it is also in my bin folder. Here is the line of code that I am trying to run.
NewRelic.Api.Agent.NewRelic.SetTransactionName("Custom", Sitecore.Context.RawUrl);
Any ideas what I am possibly doing wrong? All web transactions still show up as the item's layout file.
I'm setting the transaction name in the httpRequestProcessed pipeline, and that works; a rough sketch of such a processor is included after the list below. I started out using httpRequestBegin, but I found that it was not working every time. Also remember that your request must take longer than 500 ms to execute before New Relic picks it up.
Additional integration points I did with Sitecore:
Log4Net Appender that reports to NewRelic using NoticeError
HttpModule picking up Application_Error and reporting to NewRelic using NoticeError
Use the item path to name transactions and use AddCustomParameter to add Language, Database, User, etc.
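To make the pipeline approach concrete, here is a rough sketch of what such a processor can look like; the namespace and class names are just placeholders.

// Rough sketch of an httpRequestProcessed processor that renames the New Relic transaction.
using Sitecore.Pipelines.HttpRequest;

namespace MySite.Pipelines
{
    public class SetNewRelicTransactionName : HttpRequestProcessor
    {
        public override void Process(HttpRequestArgs args)
        {
            var item = Sitecore.Context.Item;
            if (item == null)
            {
                return;   // nothing resolved for this request, leave the default transaction name
            }

            // Name the transaction after the item instead of its layout file.
            NewRelic.Api.Agent.NewRelic.SetTransactionName("Pages", item.Paths.FullPath);

            // Extra context that helps when drilling into a slow transaction.
            NewRelic.Api.Agent.NewRelic.AddCustomParameter("RawUrl", Sitecore.Context.RawUrl);
            NewRelic.Api.Agent.NewRelic.AddCustomParameter("Database", Sitecore.Context.Database == null ? "" : Sitecore.Context.Database.Name);
        }
    }
}

// Registered at the end of the httpRequestProcessed pipeline (web.config or an include file):
// <processor type="MySite.Pipelines.SetNewRelicTransactionName, MySite" />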
There is a module on the marketplace that sorts all this out:
http://marketplace.sitecore.net/en/Modules/New_Relic.aspx
We had similar issues when we started using New Relic with our Sitecore application about 18 or so months back. Unfortunately nobody was using New Relic with Sitecore at the time. What we settled on was to add the following code to a base Page class that every page in our site inherits:
// --- Set custom transaction name for New Relic.
NewRelic.Api.Agent.NewRelic.SetTransactionName("Pages", Sitecore.Context.Item.Template.FullName);
// --- Set custom parameter to store raw url to assist with diagnostics.
NewRelic.Api.Agent.NewRelic.AddCustomParameter("RawUrl", Request.RawUrl);
For our application, template names are enough to distinguish trends, and we added the custom parameter to store the entire RawUrl (we noticed oddities at the time where New Relic wasn't capturing the complete URL for us; that might not be the case any longer).

Simple question about Sitecore's publish:end:remote event

Anyone have luck with the publish:end:remote Sitecore event or can shed some light on how it's supposed to work? I simply cannot get it to fire.
From what I understand, it's an event that will trigger after a successful publish to a remote instance of Sitecore. The trouble is, there appears to be no documentation on which server(s) this event is fired on (master or slave) or which server should contain the config setting.
I have the "History Engine" enabled on both of my servers for all databases like so:
<Engines.HistoryEngine.Storage>
<obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
<param connectionStringName="$(id)">
</param>
</obj>
</Engines.HistoryEngine.Storage>
As a test, I added a custom class to the publish:end:remote event on both servers. The class simply logs "Hello World" via Log.Info(), but nothing shows up.
I am using Sitecore 6.4.1 (rev. 101221).
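For reference, the test class and its wiring look roughly like this (the namespace and class names are just placeholders I made up for this post):

using Sitecore.Diagnostics;

namespace MySite.Events
{
    public class PublishEndRemoteHandler
    {
        // Standard Sitecore event handler signature.
        public void OnPublishEndRemote(object sender, System.EventArgs args)
        {
            Log.Info("Hello World from publish:end:remote", this);
        }
    }
}

// Wired up in the <events> section on both servers:
// <event name="publish:end:remote">
//   <handler type="MySite.Events.PublishEndRemoteHandler, MySite" method="OnPublishEndRemote" />
// </event>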
UPDATE 1
I have read the latest Scaling guide and instituted all of the required configuration changes. Both our single Staging/CM server and (2) Prod/CD servers have EnableEventQueues set to true and the ScalabilitySettings.config is in place on all instances. That said, I believe the issue is that Sitecore is storing these queued events in the core database. Our CD servers are isolated from the staging core database and they are only linked to Staging via the "web" database. Should I be storing these queued events in the production 'web' database like so...
/eventing/providers/add[@name="sitecore"]
... and set the following attribute: systemDatabaseName="coreweb"
UPDATE 2
I have set the eventing provider to use the (shared) production 'web' database, and I now see event queues pouring into the EventQueue table. There are around 60 records for the "PublishEndRemoteEvent" event in that table at any given time. All of these events have the "InstanceName" set to my Staging instance name. "RaiseLocally" is set to FALSE and "RaiseGlobally" is set to TRUE. Oddly, the "Created" date for new events is somehow 7 hours in the future. Our Staging server is located only 3 hours ahead of where I work. I'm thinking this time difference might be the culprit.
Be sure you have the "EnableEventQueues" setting set to true in both web.config files. You'll find it in the /sitecore/settings section of the web.config.
See my post in this thread on the SDN forum for more details:
http://sdn.sitecore.net/forum//ShowPost.aspx?PostID=34284
You may also want to check out the Scaling Guide document on SDN (it was recently updated):
http://sdn.sitecore.net/upload/sitecore6/64/scaling_guide_sc63-64-usletter.pdf
The time you are looking at is stored in UTC. Because of this, you shouldn't have problems even if your servers are situated on different continents.