SubSonic GetPaged method - different index needed in unit test

I'm using SubSonic 3, and am using the ActiveRecord approach. I have a simple query in my controller:
var posts = from p in Post.GetPaged(page ?? 0, 20)
orderby p.Published descending
select p;
The "page" variable is defined as an nullable int (int?).
Here's the problem, when I run the following test, it works fine and passes:
[Test]
public void IndexWithNullPageShouldLoadFirstPageOfRecentPosts()
{
//arrange
var testPosts = new List<Post>()
{
new Post() {PostID = 1, BlogID = 1, Published = null, Body = "1"},
new Post() {PostID = 2, BlogID = 1, Published = DateTime.Now, Body = "2"},
new Post() {PostID = 3, BlogID = 1, Published = DateTime.Now.AddDays(1), Body = "3"}
};
Post.Setup(testPosts);
//act
var result = controller.Index(null) as ViewResult;
//assert
Assert.IsNotNull(result);
var home = result.ViewData.Model as HomeViewModel;
Assert.IsInstanceOf(typeof(HomeViewModel), home, "Wrong ViewModel");
Assert.AreEqual(3, home.Posts.Count());
//make sure the sort worked correctly
Assert.AreEqual(3, home.Posts.ElementAt(0).PostID);
Assert.AreEqual(2, home.Posts.ElementAt(1).PostID);
Assert.AreEqual(1, home.Posts.ElementAt(2).PostID);
}
However, when I launch the website, it doesn't return any records (and yes, there are records in the live database). In both cases the "page" variable is null. I found that if I change the index passed to "GetPaged" to 1 instead of 0, records are returned on the website... however, as soon as I do that, my tests no longer pass. All the documentation I've seen shows that the GetPaged index is zero-based, so I'm a bit confused here.
Any ideas?
Thanks,
Chad

This seems to be a bug/inconsistency in the way GetPaged works with ActiveRecord. As you've already worked out, it uses a 1-based index instead of a 0-based index in the ActiveRecord repository. Can you please log this as an issue on GitHub?
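Until that is fixed, one possible stop-gap is to keep the paging call in a single place so only one line needs changing once the repository is corrected. This is just a sketch: the PagedPosts helper and the PageIndexBase constant are invented names, and it does not reconcile the 1-based live repository with a 0-based test repository, it only localises the adjustment:
// Sketch only: PagedPosts and PageIndexBase are local names, not SubSonic API.
// The live ActiveRecord repository currently behaves as 1-based, so the offset
// lives in one constant until the inconsistency is resolved.
private const int PageIndexBase = 1;
private IEnumerable<Post> PagedPosts(int? page, int pageSize)
{
    return from p in Post.GetPaged((page ?? 0) + PageIndexBase, pageSize)
           orderby p.Published descending
           select p;
}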


How to get the correct count for a Lucid Model's Paginate when joining with additional tables

I have 2 Lucid models, Ad and Campaign, which are associated via a many-to-many relationship. The relationship is managed by a pivot table that carries additional information, so my table structure is as follows:
ads
id
...
campaign_ads
campaign_id
ad_id
spend
sent
clicks
leads
ftds
campaigns
id
...
I am trying to fetch the results of a paginate query using the Ad model's query function, but in addition to the Ad model's fields, I would also like to fetch the sums of spend, sent, clicks, leads and ftds from the related Campaign models' pivot rows.
I have come up with the following code, which returns the correct information in the collection but returns an incorrect value for the count:
const Ad = use('App/Models/Ad');
const query = Ad.query()
.leftJoin('campaign_ads', 'ads.id', 'campaign_ads.ad_id')
.select('ads.*')
.sum('campaign_ads.spend as spend')
.sum('campaign_ads.sent as sent')
.sum('campaign_ads.clicks as clicks')
.sum('campaign_ads.leads as leads')
.sum('campaign_ads.ftds as ftds')
.groupBy('ads.id')
.paginate()
I assume that this is related to how the paginate function rewrites or performs the query, but I have no idea how to fix it.
Here is some example usage based on the answer:
const Ad = use('App/Models/Ad');
const query = Ad.query()
.leftJoin('campaign_ads', 'ads.id', 'campaign_ads.ad_id')
.select('ads.*')
.sum('campaign_ads.spend as spend')
.sum('campaign_ads.sent as sent')
.sum('campaign_ads.clicks as clicks')
.sum('campaign_ads.leads as leads')
.sum('campaign_ads.ftds as ftds')
.groupBy('ads.id')
const paginate = async (query, page = 1, perPage = 20) => {
// Types of statements which are going to filter from the count query
const excludeAttrFromCount = ['order', 'columns', 'limit', 'offset', 'group']
// Clone the original query which we are paginating
const countByQuery = query.clone();
// Cast page and perPage to Numbers
page = Number(page)
perPage = Number(perPage)
// Filter the statements from the array above so we have a query which can run cleanly for counting
countByQuery.query._statements = _.filter(countByQuery.query._statements, (statement) => {
return excludeAttrFromCount.indexOf(statement.grouping) < 0
})
// Since in my case I'm working with a left join, ensure we only count the unique models
countByQuery.countDistinct([Ad.table, 'id'].join('.'));
const counts = await countByQuery.first()
const total = parseInt(counts.count);
let data;
// If we get a count of 0, there's no point in delaying processing for an additional DB query
if (0 === total) {
data = [];
}
// Use the query's native `fetch` method, which already creates instances of the models and eager loads any relevant data
else {
const {rows} = await query.forPage(page, perPage).fetch();
data = rows;
}
// Create the results object that you would normally get
const result = {
total: total,
perPage: perPage,
page: page,
lastPage: Math.ceil(total / perPage),
data: data
}
// Create the meta data which we will pass to the pagination hook + serializer
const pages = _.omit(result, ['data'])
if (Ad.$hooks) {
await Ad.$hooks.after.exec('paginate', data, pages)
}
// Create and return the serialized versions
const Serializer = Ad.resolveSerializer()
return new Serializer(data, pages);
}
paginate(query, 1, 20)
.then(results => {
// do whatever you want to do with the results here
})
.catch(error => {
// do something with the error here
})
So, as I noted above, the problem I was having was caused by how Lucid's query builder handles the paginate function, so I was forced to "roll my own". Here's what I came up with:
async paginate (query, page = 1, perPage = 20) {
// Types of statements which are going to filter from the count query
const excludeAttrFromCount = ['order', 'columns', 'limit', 'offset', 'group']
// Clone the original query which we are paginating
const countByQuery = query.clone();
// Cast page and perPage to Numbers
page = Number(page)
perPage = Number(perPage)
// Filter the statements from the array above so we have a query which can run cleanly for counting
countByQuery.query._statements = _.filter(countByQuery.query._statements, (statement) => {
return excludeAttrFromCount.indexOf(statement.grouping) < 0
})
// Since in my case I'm working with a left join, ensure we only count the unique models
countByQuery.countDistinct([this.#model.table, 'id'].join('.'));
const counts = await countByQuery.first()
const total = parseInt(counts.count);
let data;
// If we get a count of 0, there's no point in delaying processing for an additional DB query
if (0 === total) {
data = [];
}
// Use the query's native `fetch` method, which already creates instances of the models and eager loads any relevant data
else {
const {rows} = await query.forPage(page, perPage).fetch();
data = rows;
}
// Create the results object that you would normally get
const result = {
total: total,
perPage: perPage,
page: page,
lastPage: Math.ceil(total / perPage),
data: data
}
// Create the meta data which we will pass to the pagination hook + serializer
const pages = _.omit(result, ['data'])
// this.#model references the Model (not the instance). I reference it like this because this function is part of a larger class
if (this.#model.$hooks) {
await this.#model.$hooks.after.exec('paginate', data, pages)
}
// Create and return the serialized versions
const Serializer = this.#model.resolveSerializer()
return new Serializer(data, pages);
}
I only use this version of pagination when I detect a group by in my query. It follows Lucid's own paginate function pretty closely and returns identical output. While it's not a 100% drop-in solution, it's good enough for my needs.

How do I query multiple IDs via the ContentSearchManager?

When I have an array of Sitecore IDs, for example TargetIDs from a MultilistField, how can I query the ContentSearchManager to return all the SearchResultItem objects?
I have tried the following, which gives an "Only constant arguments is supported." error.
using (var s = Sitecore.ContentSearch.ContentSearchManager.GetIndex("sitecore_master_index").CreateSearchContext())
{
rpt.DataSource = s.GetQueryable<SearchResultItem>().Where(x => f.TargetIDs.Contains(x.ItemId));
rpt.DataBind();
}
I suppose I could build up the Linq query manually with multiple OR queries. Is there a way I can use Sitecore.ContentSearch.Utilities.LinqHelper to build the query for me?
Assuming I got this technique to work, is it worth using it for only, say, 10 items? I'm just starting my first Sitecore 7 project and I have it in mind that I want to use the index as much as possible.
Finally, does the Page Editor support editing fields somehow with a SearchResultItem as the source?
Update 1
I wrote this function which utilises the predicate builder as dunston suggests. I don't know yet if this is actually worth using (instead of Items).
public static List<T> GetSearchResultItemsByIDs<T>(ID[] ids, bool mustHaveUrl = true)
where T : Sitecore.ContentSearch.SearchTypes.SearchResultItem, new()
{
Assert.IsNotNull(ids, "ids");
if (!ids.Any())
{
return new List<T>();
}
using (var s = Sitecore.ContentSearch.ContentSearchManager.GetIndex("sitecore_master_index").CreateSearchContext())
{
var predicate = PredicateBuilder.True<T>();
predicate = ids.Aggregate(predicate, (current, id) => current.Or(p => p.ItemId == id));
var results = s.GetQueryable<T>().Where(predicate).ToDictionary(x => x.ItemId);
var query = from id in ids
let item = results.ContainsKey(id) ? results[id] : null
where item != null && (!mustHaveUrl || item.Url != null)
select item;
return query.ToList();
}
}
It forces the results to be in the same order as supplied in the IDs array, which in my case is important. (If anybody knows a better way of doing this, I would love to know; one alternative is sketched below.)
It also, by default, ensures that the Item has a URL.
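As for preserving the supplied order, one hedged alternative (plain LINQ, nothing Sitecore-specific, using the same s, predicate, ids and mustHaveUrl as in the function above) is to index the ids once and sort the materialised results in memory:
// Build a lookup from each id to its position in the input array, then order
// the fetched results by that position after materialising them.
var position = ids.Select((id, index) => new { id, index })
                  .ToDictionary(x => x.id, x => x.index);
return s.GetQueryable<T>()
        .Where(predicate)
        .ToList() // run the index query first; ordering happens in memory
        .Where(x => position.ContainsKey(x.ItemId) && (!mustHaveUrl || x.Url != null))
        .OrderBy(x => position[x.ItemId])
        .ToList();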
My main code then becomes:
var f = (Sitecore.Data.Fields.MultilistField) rootItem.Fields["Main navigation links"];
rpt.DataSource = ContentSearchHelper.GetSearchResultItemsByIDs<SearchResultItem>(f.TargetIDs);
rpt.DataBind();
I'm still curious how the Page Editor copes with SearchResultItem or POCOs in general (my second question); I'm going to continue researching that now.
Thanks for reading,
Steve
You need to use the predicate builder to create multiple OR queries, or AND queries.
The code below should work.
using (var s = Sitecore.ContentSearch.ContentSearchManager.GetIndex("sitecore_master_index").CreateSearchContext())
{
var predicate = PredicateBuilder.True<SearchResultItem>();
foreach (var targetId in f.TargetIDs)
{
var tempTargetId = targetId;
predicate = predicate.Or(x => x.ItemId == tempTargetId);
}
rpt.DataSource = s.GetQueryable<SearchResultItem>().Where(predicate);
rpt.DataBind();
}

mock datareader failing on second call

In the test below, the mocked data reader returns the desired value for the first index, but then returns the same value when the index should be 1.
Am I misusing the dataReader or Rhino stub syntax? What is the fix?
Cheers,
Berryl
failing test
[Test]
public void NullSafeGet_GetsBothProperties()
{
var sessionImplementor = MockRepository.GenerateStub<ISessionImplementor>();
var userType = new DateRangeUserType();
var reader = MockRepository.GenerateStub<IDataReader>();
var start = new DateTime(2011, 6, 1);
var end = new DateTime(2011, 7, 1);
reader.Stub(x => x[0]).Return(start);
reader.Stub(x => x[1]).Return(end); // <==== returns Jun 1 instead of Jul 1
var result = userType.NullSafeGet(reader, userType.PropertyNames, sessionImplementor, null);
Assert.That(result, Is.EqualTo(new DateRange(start, end, DateRange.MaxSupportedPrecision)));
}
Expected: <6/1/2011 12:00 AM - 7/1/2011 12:00 AM>
But was: <6/1/2011 12:00 AM - 6/1/2011 12:00 AM>
SUT (NHib CompositeUserType method)
public override object NullSafeGet(IDataReader dr, string[] names, ISessionImplementor session, object owner) {
if (dr == null) return null;
var foundStart = (DateTime)NHibernateUtil.DateTime.NullSafeGet(dr, names[0], session, owner);
var foundEnd = (DateTime)NHibernateUtil.DateTime.NullSafeGet(dr, names[1], session, owner);
var precision = DateRange.MaxSupportedPrecision;
var startDp = _getDatePoint(foundStart, precision);
var endDp = _getDatePoint(foundEnd, precision);
return new DateRange(startDp, endDp, precision);
}
You are not mocking everything that is called by NHibernate. This is roughly what NHibernate does with a reader:
...
int index = reader.GetOrdinal(name);
...
if (reader.IsDBNull(index)) {
return null;
} else {
...
val = reader[index];
...
}
The stub generated by Rhino will return 0 in response to both GetOrdinal calls, and this is why it returns June 1 both times. You can try to fix it by mocking GetOrdinal as well as the indexer, like this:
var reader = MockRepository.GenerateStub<IDataReader>();
var start = new DateTime(2011, 6, 1);
var end = new DateTime(2011, 7, 1);
reader.Stub(x => x.GetOrdinal(userType.PropertyNames[0])).Return(0);
reader.Stub(x => x.GetOrdinal(userType.PropertyNames[1])).Return(1);
reader.Stub(x => x[0]).Return(start);
reader.Stub(x => x[1]).Return(end);
But it might be worth reconsidering whether you really need to unit test the UserType. It does not have a lot of responsibility other than calling NHibernate. Unit testing this class requires you to mock a type you don't own (the IDataReader), and what's even worse is that this mock is then consumed by another third party (NHibernate). Essentially you need to look at the NHibernate source code (which is what I did) to create a correct stub. Take a look at this article; it goes into a lot more detail about why you should avoid mocking types you don't own. You may be better off writing an integration test for this class, using an in-memory SQLite database.
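For reference, a minimal sketch of what such an integration setup could look like, assuming the standard NHibernate Configuration API; Booking is a placeholder for whatever entity is mapped with the DateRangeUserType:
// Minimal in-memory SQLite fixture sketch; adjust mapping registration
// (AddAssembly / AddFile / Fluent) to match your project.
var cfg = new NHibernate.Cfg.Configuration()
    .SetProperty(NHibernate.Cfg.Environment.Dialect,
                 typeof(NHibernate.Dialect.SQLiteDialect).AssemblyQualifiedName)
    .SetProperty(NHibernate.Cfg.Environment.ConnectionDriver,
                 typeof(NHibernate.Driver.SQLite20Driver).AssemblyQualifiedName)
    .SetProperty(NHibernate.Cfg.Environment.ConnectionString,
                 "Data Source=:memory:;Version=3;New=True;")
    .SetProperty(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
cfg.AddAssembly(typeof(Booking).Assembly); // Booking is hypothetical
var factory = cfg.BuildSessionFactory();
using (var session = factory.OpenSession())
{
    // Export the schema onto this session's open connection so the tables
    // live in the same in-memory database the test uses.
    new NHibernate.Tool.hbm2ddl.SchemaExport(cfg)
        .Execute(false, true, false, session.Connection, null);
    // Save an entity whose DateRange spans Jun 1 - Jul 1, flush, Clear(),
    // reload it and assert both dates round-trip through a real IDataReader.
}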

NHibernate unit test cases 101

I found what I thought was a great article by Ayende on creating a simple base test fixture for NHib unit testing with SQLite.
My question here is about the code for a test case in a concrete test fixture. In EX_1 below, Ayende wraps both the save and the fetch in transactions which he commits, and has a Session.Clear in between. This works, of course, but so does EX_2.
All things being equal I'd prefer the more compact, readable EX_2. Any thoughts on why the additional code in EX_1 is worth the bit of clutter?
Cheers,
Berryl
==== EX_1 =====
[Fact]
public void CanSaveAndLoadBlog_EX_1()
{
object id;
using (var tx = session.BeginTransaction())
{
id = session.Save(new Blog
{
AllowsComments = true,
CreatedAt = new DateTime(2000,1,1),
Subtitle = "Hello",
Title = "World",
});
tx.Commit();
}
session.Clear();
using (var tx = session.BeginTransaction())
{
var blog = session.Get<Blog>(id);
Assert.Equal(new DateTime(2000, 1, 1), blog.CreatedAt);
Assert.Equal("Hello", blog.Subtitle);
Assert.Equal("World", blog.Title);
Assert.True(blog.AllowsComments);
tx.Commit();
}
}
==== EX_2 =====
[Fact]
public void CanSaveAndLoadBlog_EX_2()
{
var id = session.Save(new Blog
{
AllowsComments = true,
CreatedAt = new DateTime(2000, 1, 1),
Subtitle = "Hello",
Title = "World",
});
var fromDb = session.Get<Blog>(id);
Assert.Equal(new DateTime(2000, 1, 1), fromDb.CreatedAt);
Assert.Equal("Hello", fromDb.Subtitle);
Assert.Equal("World", fromDb.Title);
Assert.True(fromDb.AllowsComments);
}
I believe with NHibernate it is encouraged to use transactions even when you're only querying. Check this article: http://nhprof.com/Learn/Alerts/DoNotUseImplicitTransactions.
Also, your EX_2 code might not hit the database at all, depending on what type of primary key you're using. If you're using an identity key that auto-increments, NHibernate will hit the database to get a primary key on Save, but if you're using guid, guid.comb, or hilo, the Save won't hit the database. So your Get would be grabbing what NHibernate has cached in memory, unless you commit the changes and then clear the session so you know nothing is left in memory (which is exactly what EX_1 does).

Subsonic 3 Save() then Update()?

I need to get the primary key for a row and then insert it into one of the other columns in a string.
So I've tried to do it something like this:
newsObj = new news();
newsObj.name = "test"
newsObj.Save();
newsObj.url = String.Format("blah.aspx?p={0}",newsObj.col_id);
newsObj.Save();
But it doesn't treat it as the same data object, so newsObj.col_id always comes back as zero. Is there another way of doing this? I tried this on another page and, to get it to work, I had to set newsObj.SetIsLoaded(true);
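In other words, something along these lines (a sketch of that same workaround applied to the snippet above; SetIsLoaded is a method the generated record already exposes):
// Mark the record as loaded after the insert so the follow-up change is
// treated as dirty and the second Save issues an UPDATE.
newsObj = new news();
newsObj.name = "test";
newsObj.Save();
newsObj.SetIsLoaded(true); // compensates for the generated Add() not setting this
newsObj.url = String.Format("blah.aspx?p={0}", newsObj.col_id);
newsObj.Save();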
This is the actual block of code:
page p;
if (pageId > 0)
p = new page(ps => ps.page_id == pageId);
else
p = new page();
if (publish)
p.page_published = 1;
if (User.IsInRole("administrator"))
p.page_approved = 1;
p.page_section = staticParent.page_section;
p.page_name = PageName.Text;
p.page_parent = parentPageId;
p.page_last_modified_date = DateTime.Now;
p.page_last_modified_by = (Guid)Membership.GetUser().ProviderUserKey;
p.Add();
string urlString = String.Empty;
if (parentPageId > 0)
{
urlString = Regex.Replace(staticParent.page_url, "(.aspx).*$", "$1"); // We just want the static page URL (blah.aspx)
p.page_url = String.Format("{0}?p={1}", urlString, p.page_id);
}
p.Save();
If I hover over p.Save(); I can see the correct values in the object, but the DB is never updated and there is no exception.
Thanks!
I faced the same problem with that:
po oPo = new po();
oPo.name = "test";
oPo.Save(); // till now it works
oPo.name = "test2";
oPo.Save(); // not really working: it's not saving the data, since isLoaded is set to false and the columns are not considered dirty
It's a bug in the ActiveRecord.tt template for version 3.0.0.3.
In the method public void Add(IDataProvider provider), immediately after SetIsNew(false); there should be a SetIsLoaded(true);.
The reason the save is not working the second time is that the object can't become dirty if it is not loaded. After adding SetIsLoaded(true) to ActiveRecord.tt, running the custom tool will regenerate the .cs files correctly.
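For reference, a sketch of what the patched generated method ends up looking like (the surrounding generated code is abbreviated; only the SetIsLoaded(true) line is new):
// Generated by the patched ActiveRecord.tt (abbreviated)
public void Add(IDataProvider provider)
{
    // ... generated insert logic ...
    SetIsNew(false);
    SetIsLoaded(true); // added: without this the record is never marked loaded,
                       // so later property changes are not tracked as dirty
    // ... rest of the generated method ...
}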