I have some components in my Sitecore installation that can be added to any one of multiple placeholders on a page. The datasource location of these components' renderings can change based on which placeholder they are added to. I started creating a processor like:
<getRenderingDatasource>
  <processor patch:after="*[@type='custom']" type="custom" />
</getRenderingDatasource>
The class looks like this:
public class GetDynamicDataSourceLocations : GetDatasourceLocation
{
    public void Process(GetRenderingDatasourceArgs args)
    {
        ...
    }
}
I can't get the placeholder that I'm trying to attach the rendering to. Is there any way I can get the placeholder, or at least the parent item where the component is being added?
Thanks
It's a very good idea you have here, but the GetRenderingDatasourceArgs can't provide you with the data you need if you're configuring the allowed datasource locations on the placeholder.
I've searched through querystring & form variables and context items as well, but there is no reference to the placeholder available in the getRenderingDatasource pipeline.
I did come up with something that could be the solution, although it's a bit hacky.
1. Create a processor for the getPlaceholderRenderings pipeline. The GetPlaceholderRenderingsArgs will provide you with the placeholder key.
2. Store the key in a session variable (I don't know of another way to transfer data between pipelines at this point).
3. Retrieve the key from the session in your getRenderingDatasource processor.
This is the code I used to test it:
// Add to the getPlaceholderRenderings pipeline.
public class GetPlaceholderKey
{
    public void Process(GetPlaceholderRenderingsArgs args)
    {
        System.Web.HttpContext.Current.Session["Placeholder"] = args.PlaceholderKey;
    }
}
// Add to the getRenderingDatasource pipeline.
public class GetAllowedDatasources
{
    public void Process(GetRenderingDatasourceArgs args)
    {
        Debug.WriteLine(System.Web.HttpContext.Current.Session["Placeholder"]);
    }
}
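For completeness, a minimal patch sketch for registering the two processors. The namespace/assembly names are placeholders and the patch positions are assumptions, so adjust them to your solution:

<getPlaceholderRenderings>
  <processor patch:before="*[1]" type="MyAssembly.Pipelines.GetPlaceholderKey, MyAssembly" />
</getPlaceholderRenderings>
<getRenderingDatasource>
  <processor patch:before="*[1]" type="MyAssembly.Pipelines.GetAllowedDatasources, MyAssembly" />
</getRenderingDatasource>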
This works when you add a rendering to a placeholder, but I have not tested other scenarios.
I can imagine it would not work when you set the datasource of a rendering that is already placed in the placeholder.
I want users to be able to create my ScriptableObject assets outside of play mode so they can bind them to their objects in the scene. Since I don't want Zenject creating the ScriptableObjects, a factory solution is not what I'm looking for. Therefore, I must somehow get the instances of these ScriptableObjects to the Zenject installer and use 'QueueForInject'. Here are the two approaches I've found:
A) Manually adding these script objects to the Installer via the Inspector Window. Example:
public class GameInstaller : MonoInstaller<GameInstaller>
{
    // Visible in the Inspector window so the user can add ScriptableObjects to the list.
    public List<ScriptableObject> myScriptObjectsToInject;

    public override void InstallBindings()
    {
        foreach (ScriptableObject scriptObject in myScriptObjectsToInject)
        {
            Container.QueueForInject(scriptObject);
        }
    }
}
B) Use Unity's AssetDatabase and Resources APIs to find all ScriptableObject instances, then iterate through the list to QueueForInject. Example:
// Search for all ScriptableObject assets in the project (not only those under 'Resources').
string[] guids = AssetDatabase.FindAssets("t:ScriptableObject");
foreach (string guid in guids)
{
    // Retrieve the path to the asset on disk, relative to the Unity project folder.
    string assetPath = AssetDatabase.GUIDToAssetPath(guid);
    // Retrieve the type of the asset (i.e. the class deriving from ScriptableObject).
    System.Type myType = AssetDatabase.GetMainAssetTypeAtPath(assetPath);
    // Find the path of the asset relative to its 'Resources' folder.
    string resourcesDirectoryName = "/Resources/";
    int indexOfLastResourceDirectory = assetPath.LastIndexOf(resourcesDirectoryName) + resourcesDirectoryName.Length;
    int indexOfExtensionPeriod = assetPath.LastIndexOf(".");
    string assetPathRelative = assetPath.Substring(indexOfLastResourceDirectory, indexOfExtensionPeriod - indexOfLastResourceDirectory);
    // Grab the instance of the ScriptableObject; Resources.Load only succeeds
    // for assets that live under a 'Resources' folder.
    ScriptableObject scriptObject = Resources.Load(assetPathRelative, myType) as ScriptableObject;
    if (scriptObject == null)
    {
        Debug.LogWarning(
            "ScriptableObject asset found, but it is not in a 'Resources' folder. Current folder = " +
            assetPath);
        continue;
    }
    Container.QueueForInject(scriptObject);
}
With option A) the end user must remember to manually add every new ScriptableObject they create to the installer's list. I'd rather find an automated way so the user doesn't have to know about or remember this extra manual step.
With option B) I get an automated solution, which is great. But the end user must remember to store each ScriptableObject asset file in a directory called "Resources", because that's the only way the Resources.Load API will find the instance. I can warn the user if an asset isn't found, which is nice, but they are still forced to comply to remove the warnings.
I can live with option B if I must, but I'd really like to take it a step further and make it completely invisible to the end user. Has anyone come up with a craftier automated solution for ScriptableObjects that exist outside of play mode?
I created a custom date field on the opportunity screen. The field shows up just as expected. I enter a date, save the record, go out, go back in, and the date is no longer there. However, when I query the database, the column is there and there is data in it. So even though the date is being stored in the database, it is not displayed on the screen, and when I added that value to a generic inquiry it was also blank. I am not a programmer, but it seems like this should have been an easy thing to do. I found some references on the web to this type of problem, but I was hoping for a more straightforward fix than those appeared to offer, as they involved more programming than I am comfortable with at this point.
I found many web pages explaining how to do this seemingly simple thing using the customization tools. Not sure what I'm missing.
I am running a fairly recent version of 2019R1.
Any help would be appreciated!
Response from an Acumatica support person. Evidently they changed some things with 2018R1. See the comments/answers below from support. An image of the change I made in the customization tool is also below. After making this change, the custom field worked as desired.
After reviewing it in more detail, it sounds like it could be related to the new implementation of PXProjection on that DAC.
Unlike in later versions, in ver. 2017 R2 DACs like PX.Objects.CR.CROpportunity were implemented as regular Data Access Classes:
[System.SerializableAttribute()]
[PXCacheName(Messages.Opportunity)]
[PXPrimaryGraph(typeof(OpportunityMaint))]
[CREmailContactsView(typeof(Select2<Contact,
LeftJoin<BAccount, On<BAccount.bAccountID, Equal<Contact.bAccountID>>>,
Where2<Where<Optional<CROpportunity.bAccountID>, IsNull, And<Contact.contactID, Equal<Optional<CROpportunity.contactID>>>>,
Or2<Where<Optional<CROpportunity.bAccountID>, IsNotNull, And<Contact.bAccountID, Equal<Optional<CROpportunity.bAccountID>>>>,
Or<Contact.contactType, Equal<ContactTypesAttribute.employee>>>>>))]
[PXEMailSource]//NOTE: for assignment map
public partial class CROpportunity : PX.Data.IBqlTable, IAssign, IPXSelectable
{
...
}
In version 2018 R1 (and later), the PX.Objects.CR.CROpportunity DAC is a projection over the PX.Objects.CR.Standalone.CROpportunity and PX.Objects.CR.Standalone.CROpportunityRevision DACs:
[System.SerializableAttribute()]
[PXCacheName(Messages.Opportunity)]
[PXPrimaryGraph(typeof(OpportunityMaint))]
[CREmailContactsView(typeof(Select2<Contact,
LeftJoin<BAccount, On<BAccount.bAccountID, Equal<Contact.bAccountID>>>,
Where2<Where<Optional<CROpportunity.bAccountID>, IsNull, And<Contact.contactID, Equal<Optional<CROpportunity.contactID>>>>,
Or2<Where<Optional<CROpportunity.bAccountID>, IsNotNull, And<Contact.bAccountID, Equal<Optional<CROpportunity.bAccountID>>>>,
Or<Contact.contactType, Equal<ContactTypesAttribute.employee>>>>>))]
[PXEMailSource]//NOTE: for assignment map
[PXProjection(typeof(Select2<Standalone.CROpportunity,
InnerJoin<Standalone.CROpportunityRevision,
On<Standalone.CROpportunityRevision.opportunityID, Equal<Standalone.CROpportunity.opportunityID>,
And<Standalone.CROpportunityRevision.revisionID, Equal<Standalone.CROpportunity.defRevisionID>>>>>), Persistent = true)]
public partial class CROpportunity : IBqlTable, IAssign, IPXSelectable
{
...
}
Because of that change, it's now required to declare two extension classes: one for Standalone.CROpportunity (the normal DAC) and one for CROpportunity (the PXProjection).
On the PXProjection DAC extension, remember to add BqlField pointing to the corresponding field on the Standalone DAC, e.g. BqlField = typeof(CROpportunityStandaloneExt.usrTest):
public class CROpportunityExt : PXCacheExtension<PX.Objects.CR.CROpportunity>
{
    #region UsrTest
    [PXDBDecimal(BqlField = typeof(CROpportunityStandaloneExt.usrTest))]
    [PXUIField(DisplayName = "Test Field")]
    public virtual Decimal? UsrTest { get; set; }
    public abstract class usrTest : IBqlField { }
    #endregion
}
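For reference, the matching extension on the Standalone DAC might look like the sketch below. The class is only implied by the support answer (via the BqlField reference above), so treat the exact attribute set as an assumption:

public class CROpportunityStandaloneExt : PXCacheExtension<PX.Objects.CR.Standalone.CROpportunity>
{
    #region UsrTest
    // The Standalone DAC owns the physical column, so no BqlField mapping is needed here.
    [PXDBDecimal]
    [PXUIField(DisplayName = "Test Field")]
    public virtual Decimal? UsrTest { get; set; }
    public abstract class usrTest : IBqlField { }
    #endregion
}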
Please find more information in the article below:
Custom field on CROpportunity doesn't display saved value since upgrading from 6.10 or 2017R2 to 2018R1
[Image: change made in the customization tool]
Is it possible to automatically generate Sitecore templates just by coding models? I'm using Sitecore 8.0 and I saw the Glass Mapper Code First approach, but I can't find more information about it.
Not sure why there isn't much info about it, but you can definitely model/code first! I do it a lot using the attribute configuration approach, like so:
[SitecoreType(true, "{generated guid}")]
public class ExampleModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.SingleLineText)]
    public virtual string Title { get; set; }
}
Now, how this works: the 'true' value for the first SitecoreType parameter indicates the type may be used for code first. There is a GlassCodeFirstDataprovider which has an Initialize method, executed in Sitecore's Initialize pipeline. This method collects all configurations marked for code first and creates them in the SQL data provider. The sections and fields are stored in memory. It also takes inheritance into account (base templates).
I think you first need to uncomment some code in the GlassMapperScCustom class you get when you install the project via NuGet. The PostLoad method contains the few lines that execute the Initialize method of each code-first data provider:
var dbs = global::Sitecore.Configuration.Factory.GetDatabases();
foreach (var db in dbs)
{
    var provider = db.GetDataProviders().FirstOrDefault(x => x is GlassDataProvider) as GlassDataProvider;
    if (provider != null)
    {
        using (new SecurityDisabler())
        {
            provider.Initialise(db);
        }
    }
}
Furthermore, I would advise using code first in development only. You can create packages or serialize the templates as usual and deploy them to other environments, so you don't need the data provider (and its potential risks) there.
You can. But it's not going to be Glass related.
Code first is exactly what Sitecore.Pathfinder is looking to achieve. There's not a lot of info publicly available on it yet, however.
Get started here: https://github.com/JakobChristensen/Sitecore.Pathfinder
If a page or component class has an instance field which is a non-synchronized object, e.g. an ArrayList, and the application has code that structurally modifies this field, should access to this field be synchronized?
For example:
public class MyPageOrComponent
{
    @Persist
    private List<String> myList;

    void setupRender()
    {
        if (this.myList == null)
        {
            this.myList = new ArrayList<>();
        }
    }

    void afterRender(MarkupWriter writer)
    {
        // Should this be synchronized?
        if (someCondition)
        {
            this.myList.add(something);
        }
        else
        {
            this.myList.remove(something);
        }
    }
}
I'm asking because I seem to understand that Tapestry creates only one instance of a page or component class and it uses this instance for all the connected clients (but please correct me if this is not true).
In short the answer is no, you don't have to, because Tapestry does this for you. Tapestry will transform your pages and classes at runtime in such a way that wherever you interact with your fields, you will not actually be working on the instance variable but on a managed variable that is thread safe. The full inner workings are beyond me, but a brief reference to the transformation can be found here.
One warning: don't instantiate your page/component variables at declaration. I have seen some strange behaviour around this. So don't do this:
private List<String> myList = new ArrayList<String>();
Tapestry uses some runtime bytecode magic to transform your pages and components. Pages and components are singletons, but the properties are transformed so that they are backed by a PerThreadValue. This means that each request gets its own copy of the value, so no synchronization is required.
As suggested by @joostschouten, you should never initialize a mutable property in the field declaration. The strange behaviour he discusses is caused because the initializer is only fired once for the page/component singleton, so the value would be shared by all requests. Mutable fields should instead be initialized in a render method (e.g. @SetupRender).
First, a little back story on what I am trying to accomplish.
I am in the process of creating a custom HTTP module whose purpose is to intercept messages to multiple (15+) different ArcGIS REST web services. The intercepted requests and/or responses will be stripped of any restricted information based on the current user.
For instance, a call that returns multiple layers might have certain layers stripped out.
Unmodified Response:
"layers" : [
{
"id" : 0,
"name" : "Facilities",
"parentLayerId" : -1,
"defaultVisibility" : true,
"subLayerIds" : [1, 2, 3]
},
{
"id" : 1,
"name" : "Hazardous Sites",
"parentLayerId" : 0,
"defaultVisibility" : true,
"subLayerIds" : null
},
]
Modified Response:
"layers" : [
{
"id" : 0,
"name" : "Facilities",
"parentLayerId" : -1,
"defaultVisibility" : true,
"subLayerIds" : [1, 2, 3]
}
]
There are numerous services available, all uniquely identified via a URL. Each service returns very different information and so needs to be filtered differently. Additionally, each service may return the data in a variety of formats (HTML, JSON, etc.).
As such, I will need to create a multitude of different filters to apply to HttpRequest.Filters and/or HttpResponse.Filters.
Example:
// Request for layers and the format is JSON
IPolicy policy = GetPolicy(userContext);
Filter filter = new LayerJsonResponseFilter(Response.Filter, policy);
Response.Filter = filter;
Request and response filters are implemented by inheriting from Stream (or another class that inherits from Stream such as MemoryStream). I want to be able to easily create new filters without reimplementing Stream for each filter.
A potential solution is described here: http://www.west-wind.com/weblog/posts/72596.aspx
However, I want to simplify the solution without losing the flexibility to specify many different transformations, and without reimplementing the stream each time. I think I can accomplish this by:
1. Inheriting from MemoryStream so as to reduce the reimplementation of methods.
2. Always operating on the full content, rather than chunked content.
3. Replacing the events with an abstract method (e.g., Filter()).
I have considered two potential solutions.
Solution 1: Create Multiple Filters Inheriting from ResponseFilter
In this scenario, each filter contains the logic for performing the filtration. There would be 15+ filters, all inheriting from a common ResponseFilter abstract base class, like so:
// All filters will inherit from ResponseFilter.
public abstract class ResponseFilter : MemoryStream
{
    public ResponseFilter(Stream stream, Policy policy) { }

    // Must be overridden in a derived class with specific filter logic.
    public abstract string Filter(string content);

    // Overridden to cache content.
    public override void Write(byte[] buffer, int offset, int count) { }

    // Overridden to perform the filter/transformation before the content is written.
    public override void Flush()
    {
        // Get the cached stream content as a string (omitted),
        // run it through Filter(), and write the result to the wrapped stream.
        ...
    }
}
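For illustration, a concrete filter under this design might look like the following sketch. The class name is taken from the usage example below; the pass-through body is a hypothetical stand-in for real policy-based JSON stripping:

// Hypothetical concrete filter for a map service's JSON responses.
public class MapServiceJsonResponseFilter : ResponseFilter
{
    public MapServiceJsonResponseFilter(Stream stream, Policy policy)
        : base(stream, policy) { }

    public override string Filter(string content)
    {
        // Real logic would remove restricted layers from the JSON based on
        // the policy; returning the content unchanged keeps the sketch minimal.
        return content;
    }
}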
This would be used in the following way.
// Example
var policy = GetPolicy();
var filter = new MapServiceJsonResponseFilter(response.Filter, policy);
response.Filter = filter;
The advantage of this option is that the number of classes is kept to a minimum. However, it becomes difficult to reuse the filter logic anywhere else in the application should that become necessary. Additionally, unit testing the filters would require mocking the Stream, another disadvantage.
Solution 2: Create Multiple Filters, Inject into a Common ResponseFilter
In this scenario, a single response filter is created. The actual filter logic or algorithm is injected into the filter. All filters inherit from an abstract base class FilterBase.
// Represents an HttpResponse filter. Renamed to avoid confusion with
// the filter algorithm.
public class ResponseFilterStream : MemoryStream
{
    private readonly FilterBase _filter;

    public ResponseFilterStream(Stream stream, FilterBase filter)
    {
        _filter = filter;
    }

    // Overridden to cache content.
    public override void Write(byte[] buffer, int offset, int count) { }

    // Overridden to perform the filter/transformation before the content is written.
    public override void Flush()
    {
        // Get the cached stream content as a string (omitted),
        // run it through _filter.Filter(), and write the result to the wrapped stream.
        ...
    }
}

// All filter algorithms inherit from FilterBase and must implement
// the Filter method.
public abstract class FilterBase
{
    protected FilterBase(Policy policy) { }

    // Overridden to perform the filter/transformation.
    public abstract string Filter(string content);
}
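Under this design the same filter carries no stream plumbing at all. Again a sketch, with a hypothetical pass-through body:

// Hypothetical filter algorithm; note there is no Stream dependency to mock.
public class MapServiceJsonResponseFilter : FilterBase
{
    public MapServiceJsonResponseFilter(Policy policy) : base(policy) { }

    public override string Filter(string content)
    {
        // Real logic would remove restricted layers from the JSON based on the policy.
        return content;
    }
}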
This would be used in the following way.
// Example
var policy = GetPolicy();
var filter = new MapServiceJsonResponseFilter(policy);
ResponseFilterStream responseFilter = new ResponseFilterStream(response.Filter, filter);
response.Filter = responseFilter;
The advantage of this solution is that the filtration logic is completely independent of any classes that implement Stream. The logic can be more easily reused if necessary. Unit testing is a little simpler as well, as I do not need to mock the stream.
However, there are more classes (exactly one more), and the usage is a little more complex, though not terribly so.
Note: I'll probably want to rename FilterBase to avoid confusion with ResponseFilter. Perhaps TransformBase.
I have several goals that I want to meet with either solution.
1. The solution must be highly testable. Unit testing will be used to check the correctness of the filters. It is imperative that testing be as simple as possible.
2. The solution must easily support the creation of multiple filters (15+).
3. The solution should be readable (i.e., easy to maintain).
I think that solution 2 is the best solution for this given scenario. I can test the filtration logic completely independently of Stream with minimal additional complexity. Either solution will support #2 and #3, so testing gets the edge.
What other considerations might there be? Are there better alternatives?
Solution 2 is obviously preferable. However, it seems that the major crux of the problem lies in the construction of the filters themselves. Hopefully there is a lot of reusable composition within the filter implementations. Can a new filter be "configured" from composite parts?
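For instance, a composite filter could chain smaller transforms. The CompositeFilter below is a hypothetical sketch, not part of the original design:

// Hypothetical composite: applies a sequence of smaller transforms in order.
public class CompositeFilter : FilterBase
{
    private readonly IEnumerable<FilterBase> _parts;

    public CompositeFilter(Policy policy, IEnumerable<FilterBase> parts) : base(policy)
    {
        _parts = parts;
    }

    public override string Filter(string content)
    {
        // Each part sees the output of the previous one.
        foreach (var part in _parts)
        {
            content = part.Filter(content);
        }
        return content;
    }
}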