Subsonic 3: How do I generate a predefined set of stored procedures?

I have a large database with thousands of stored procedures, but I need to use only some of them (say about 25 stored procs out of 1000). What should I change in the T4 templates to limit code generation to only the stored procedures I need? Otherwise Subsonic 3 will generate all of them...
Thanks,
Zohrab.

Here's what I did:
Added the following to SQLServer.ttinclude:
// Customize the SP list here.
// Note: the parentheses are needed so the OR chain stays inside the
// ROUTINE_TYPE = 'PROCEDURE' filter.
const string SP_SQL = @"SELECT *
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE' AND
      (ROUTINE_NAME = 'My Stored Procedure Name' OR
       ROUTINE_NAME = 'My other Stored Procedure Name')
";
Replaced the List<SP> GetSPs() method in SQLServer.ttinclude with the following:
List<SP> GetSPs(){
    var result = new List<SP>();
    // [Z.B.] pull the stored procedures in a reader
    using(IDataReader rdr = GetReader(SP_SQL)){
        while(rdr.Read()){
            var sp = new SP();
            sp.Name = rdr["ROUTINE_NAME"].ToString();
            sp.CleanName = CleanUp(sp.Name);
            sp.Parameters = GetSPParams(sp.Name);
            result.Add(sp);
        }
    }
    return result;
}
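If the list of procedures grows, a chain of ORs gets unwieldy. As a sketch of an alternative (same template, same INFORMATION_SCHEMA query), an IN clause keeps the filter readable:
const string SP_SQL = @"SELECT *
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'
AND ROUTINE_NAME IN ('My Stored Procedure Name', 'My other Stored Procedure Name')
";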

Related

SuiteScript 2.0: Are there any search result limitations when executing a saved search via "getInputData" stage of map/reduce script?

I am currently building a map/reduce script in NetSuite which passes the results of a saved search from the getInputData stage to the map stage. This is done by first running a do-while loop in the getInputData stage to obtain the internal id of each entry, inserting them into an array, and then passing that over to the map stage, like so:
// run saved search - unlimited rows from saved search.
do {
    var subresults = invoiceSearch.run().getRange({ start: start, end: start + pageSize });
    results = results.concat(subresults);
    count = subresults.length;
    start += pageSize; // getRange's end index is exclusive, so the next page starts here; adding 1 would skip a row
} while (count == pageSize);
var invSearchArray = [];
if (invoiceSearch) {
    // NOTE: .run().each has a limit of 4,000 results, hence the do-while loop above.
    for (var i = 0; i < results.length; i++) {
        var invObj = new Object();
        invObj['invID'] = results[i].getValue({ name: 'internalid' });
        invSearchArray.push(invObj);
    }
}
return invSearchArray;
I implemented it this way because I feared there would be result restrictions, just as the ".run().each" function has (limited to 4000 results).
I made the assumption that passing the search object directly from getInputData to map would also be restricted to 4,000 results. Can someone offer clarity on whether there are such restrictions? Am I right to fear the script halting prematurely because search results cannot be processed beyond 4,000 in the getInputData stage of a map/reduce script?
Any example to aid me in understanding how a search object is processed in a map/reduce script would be most appreciated.
Thanks
If you simply return the Search instance, all results will be passed along to map, beyond the 1000 or 4000 limits of the getRange and each methods.
If the Search has 8500 results, all 8500 will get passed to map.
function getInputData() {
    return search.load(...); // alternatively search.create(...)
}
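For illustration, a minimal map/reduce skeleton built on this answer (the script id 'customsearch_invoices' is a hypothetical placeholder):
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
define(['N/search'], function (search) {
    function getInputData() {
        // Return the search object itself; the framework pages through
        // every result, beyond the 4,000-row limit of run().each().
        return search.load({ id: 'customsearch_invoices' });
    }
    function map(context) {
        // context.value holds one search result row as a JSON string
        var result = JSON.parse(context.value);
        var invId = result.id; // internal id of the invoice
        // ... process one invoice per map invocation ...
    }
    return { getInputData: getInputData, map: map };
});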

How to track SyntaxNodes across Workspace.TryApplyChanges()

I'd like to track SyntaxNodes and SyntaxTrivias across different versions of a Solution/Workspace.
I tried annotating some nodes with SyntaxAnnotations. This works well as long as I don't update the workspace.
Calling Workspace.TryApplyChanges (successfully) seems to remove all SyntaxAnnotations. This surprised me. Why does this happen? How can I track SyntaxNodes across workspace updates?
Example code follows:
var workspace = new AdhocWorkspace();
var project = workspace.AddProject("TestProject", LanguageNames.CSharp);
var klass = SyntaxFactory
.ClassDeclaration("Klass")
.WithAdditionalAnnotations(new SyntaxAnnotation("Foo"));
var compUnit = SyntaxFactory.CompilationUnit().AddMembers(klass);
var document = project.AddDocument("TestFile.cs", compUnit);
var docId = document.Id;
var solution = document.Project.Solution;
var root1 = document.GetSyntaxRootAsync().Result;
var klass1 = root1.GetAnnotatedNodes("Foo").FirstOrDefault();
var eq1 = klass1.IsEquivalentTo(klass); // returns true
var apply = workspace.TryApplyChanges(solution); // returns true
var root2 = workspace.CurrentSolution.GetDocument(docId).GetSyntaxRootAsync().Result;
var klass2 = root2.GetAnnotatedNodes("Foo").FirstOrDefault(); // returns null, why?
This happens because TryApplyChanges doesn't actually re-use your nodes as-is. Instead it "replays" your changes as textual changes to the actual solution, and then lets the parser re-parse.
This happens for a few reasons:
To avoid having annotations pile up over time in the trees and interfere with each other (consider formatting or rename annotations used in CodeFixes still being present after the fix was applied).
To protect against trees that don't round-trip from showing up in CurrentSolution. It is possible to construct trees that the parser would never generate (consider changing operator precedence for example).
To ensure the changes are actually applied: applying a change requires updating the original representation - the files on disk or the text buffers in memory - not just swapping the new trees into the workspace.
You could consider using something like the SyntaxPath type from the Roslyn sources to try to find an equivalent node.
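A minimal sketch of that idea (not Roslyn's internal SyntaxPath type, just the same principle), continuing the question's example: record the node's span before applying, then re-find the node by position in the re-parsed tree:
// klass1 and root2 as in the question; TryApplyChanges did not alter
// the document text here, so the old span still lines up.
var spanBefore = klass1.Span;
var equivalent = root2.FindNode(spanBefore);
// The annotation is gone, but the node has the same syntactic shape:
var sameShape = equivalent.IsEquivalentTo(klass1);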

<Binary> in sql

I want to select all the binary data from a column of a SQL database (SQL Server Enterprise) using a C++ query. I'm not sure what is in the binary data; all it shows is <Binary>.
I tried this (it's been passed on to me to study from), and I honestly don't 100% understand the code in some parts, as I commented:
SqlConnection^ cn = gcnew SqlConnection();
SqlCommand^ cmd;
SqlDataAdapter^ da;
DataTable^ dt;
cn->ConnectionString = "Server = localhost; Database=portable; User ID = glitch; Pwd = 1234";
cn->Open();
cmd = gcnew SqlCommand("SELECT BinaryColumn FROM RawData", cn);
da = gcnew SqlDataAdapter(cmd);
dt = gcnew DataTable("BinaryTemp"); // I'm confused about this piece of code: does it create a new table in the database, or a temporary one in the code?
da->Fill(dt);
for (int i = 0; i < dt->Rows->Count; i++) // the original "Count - 1" bound also skipped the last row
{
    String^ value_string;
    value_string = dt->Rows[i]->ToString();
    Console::WriteLine(value_string);
}
cn->Close();
Console::ReadLine();
but it only returns a lot of "System.Data.DataRow".
Can someone help me?
(I need to put it into a matrix form after I extract the binary data, so if anyone could provide help for that part as well, it'd be highly appreciated!)
dt->Rows[i] is indeed a DataRow^. To extract a specific field from it, use its indexer; the value comes back as Object^, so cast it to a byte array:
array<Byte>^ blob = safe_cast<array<Byte>^>(dt->Rows[i][0]);
This extracts the first column (since you have only one) and returns an array representation of it.
To answer the question in your code, the way SqlDataAdapter works is like this:
you build a DataTable to hold the data to retrieve. You can fill in its columns, but you're not required to. Neither are you required to give it a name.
you build the adapter object, giving it a query and a connection object
you call the Fill method on the adapter, giving it the previously created DataTable to fill with whatever your query returns.
and you're done with the adapter. At this point you can dispose of it (for example inside a using statement if you're using C#).
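Putting it together, here is a sketch of the corrected loop (the matrix layout depends on what the bytes actually encode, so that part is only hinted at):
for (int i = 0; i < dt->Rows->Count; i++)
{
    array<Byte>^ blob = safe_cast<array<Byte>^>(dt->Rows[i][0]);
    Console::WriteLine("Row {0}: {1} bytes", i, blob->Length);
    // Once the layout is known, copy the bytes into a rectangular
    // array, e.g. array<Byte, 2>^ matrix = gcnew array<Byte, 2>(rows, cols);
}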

Getting odd behavior from $query->setMaxResults()

When I call setMaxResults on a query, it seems to want to treat the max number as "2", no matter what its actual value is.
function findMostRecentByOwnerUser(\Entities\User $user, $limit)
{
    echo "2: $limit<br>";
    $query = $this->getEntityManager()->createQuery('
        SELECT t
        FROM Entities\Thread t
        JOIN t.messages m
        JOIN t.group g
        WHERE g.ownerUser = :owner_user
        ORDER BY m.timestamp DESC
    ');
    $query->setParameter("owner_user", $user);
    $query->setMaxResults(4);
    echo $query->getSQL()."<br>";
    $results = $query->getResult();
    echo "3: ".count($results);
    return $results;
}
When I comment out the setMaxResults line, I get 6 results. When I leave it in, I get the 2 most recent results. When I run the generated SQL code in phpMyAdmin, I get the 4 most recent results. The generated SQL, for reference, is:
SELECT <lots of columns, all from t0_>
FROM Thread t0_
INNER JOIN Message m1_ ON t0_.id = m1_.thread_id
INNER JOIN Groups g2_ ON t0_.group_id = g2_.id
WHERE g2_.ownerUser_id = ?
ORDER BY m1_.timestamp DESC
LIMIT 4
Edit:
While reading the DQL "Limit" documentation, I came across the following:
If your query contains a fetch-joined collection specifying the result limit methods are not working as you would expect. Set Max Results restricts the number of database result rows, however in the case of fetch-joined collections one root entity might appear in many rows, effectively hydrating less than the specified number of results.
I'm pretty sure that I'm not doing a fetch-joined collection. I'm under the impression that a fetch-join is something like SELECT t, m FROM Entities\Thread t JOIN t.messages m. Am I incorrect in my understanding of this?
An update: with Doctrine 2.2+ you can use the Paginator: http://docs.doctrine-project.org/en/latest/tutorials/pagination.html
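A sketch of what that looks like for the query above ($em stands in for $this->getEntityManager() from the question):
use Doctrine\ORM\Tools\Pagination\Paginator;

$query = $em->createQuery('
    SELECT t
    FROM Entities\Thread t
    JOIN t.messages m
    JOIN t.group g
    WHERE g.ownerUser = :owner_user
    ORDER BY m.timestamp DESC
');
$query->setParameter('owner_user', $user);
$query->setMaxResults(4);

// The Paginator limits by root entity (Thread), not by joined SQL rows
$paginator = new Paginator($query, $fetchJoinCollection = false);
foreach ($paginator as $thread) {
    // up to 4 distinct Thread entities, regardless of join duplication
}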
Using ->groupBy('your_entity.id') seems to solve the issue!
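For example, a QueryBuilder sketch of the question's query (the HIDDEN aggregate alias is only there so the ordering survives the GROUP BY):
$threads = $em->createQueryBuilder()
    ->select('t')
    ->addSelect('MAX(m.timestamp) AS HIDDEN latest')
    ->from('Entities\Thread', 't')
    ->join('t.messages', 'm')
    ->join('t.group', 'g')
    ->where('g.ownerUser = :owner_user')
    ->groupBy('t.id')
    ->orderBy('latest', 'DESC')
    ->setParameter('owner_user', $user)
    ->setMaxResults(4)
    ->getQuery()
    ->getResult();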
I solved the same issue by fetching only the contents of the master table and having all joined tables fetched with fetch="EAGER", which is defined in the Entity (described here: http://www.doctrine-project.org/docs/orm/2.1/en/reference/annotations-reference.html?highlight=eager#manytoone).
class VehicleRepository extends EntityRepository
{
    /**
     * @var integer
     */
    protected $pageSize = 10;

    public function page($number = 1)
    {
        return $this->_em->createQuery('SELECT v FROM Entities\VehicleManagement\Vehicles v')
            ->setMaxResults($this->pageSize)
            ->setFirstResult(($number - 1) * $this->pageSize)
            ->getResult();
    }
}
In my example repo you can see I only fetched the vehicle table to get the correct result amount. But all properties (like make, model, category) are fetched immediately.
(I also iterated over the Entity-contents because I needed the Entity represented as an array, but that shouldn't matter afaik.)
Here's an excerpt from my entity:
class Vehicles
{
    ...
    /**
     * @ManyToOne(targetEntity="Makes", fetch="EAGER")
     * @var Makes
     */
    public $make;
    ...
}
It's important that you map every Entity correctly, otherwise it won't work.

MongoDB MapReduce update in place how to

Basically I'm trying to order objects by their score over the last hour.
I'm trying to generate an hourly vote sum for objects in my database. Votes are embedded in each object. The object schema looks like this:
{
_id: ObjectId
score: int
hourly-score: int <- need to update this value so I can order by it
recently-voted: boolean
votes: {
"4e4634821dff6f103c040000": { <- Key is __toString of voter ObjectId
"_id": ObjectId("4e4634821dff6f103c040000"), <- Voter ObjectId
"a": 1, <- Vote amount
"ca": ISODate("2011-08-16T00:01:34.975Z"), <- Created at MongoDate
"ts": 1313452894 <- Created at timestamp
},
... repeat ...
}
}
This question is actually related to a question I asked a couple of days ago: Best way to model a voting system in MongoDB
How would I (can I?) run a MapReduce command to do the following:
Only run on objects with recently-voted = true OR hourly-score > 0.
Calculate the sum of the votes created in the last hour.
Update hourly-score = the sum calculated above, and recently-voted = false.
I also read here that I can perform a MapReduce on the slave DB by running db.getMongo().setSlaveOk() before the M/R command. Could I run the reduce on a slave and update the master DB?
Are in-place updates even possible with Mongo MapReduce?
You can definitely do this. I'll address your questions one at a time:
1.
You can specify a query along with your map-reduce, which filters the set of objects which will be passed into the map phase. In the mongo shell, this would look like (assuming m and r are the names of your mapper and reducer functions, respectively):
> db.coll.mapReduce(m, r, {query: {$or: [{"recently-voted": true}, {"hourly-score": {$gt: 0}}]}})
2.
Step #1 will let you use your mapper on all documents with at least one vote in the last hour (or with recently-voted set to true), but not all the votes will have been in the last hour. So you'll need to filter the list in your mapper, and only emit those votes you wish to count:
function m() {
    // "ts" is a Unix timestamp in seconds, so compare in seconds
    var hour_ago = Math.floor(Date.now() / 1000) - 3600;
    // votes is a map keyed by voter id, not an array, so iterate its keys
    for (var voter in this.votes) {
        var vote = this.votes[voter];
        if (vote.ts > hour_ago) {
            emit(/* your key */, vote.a);
        }
    }
}
And to reduce:
function r(key, values) {
    var sum = 0;
    values.forEach(function (value) { sum += value; });
    return sum;
}
3.
To update the hourly scores collection, you can use the reduce output mode of map-reduce ({out: {reduce: ...}}), which will call your reducer with both the newly emitted values and the previously saved value in the output collection (if any). The result of that pass is saved into the output collection. This looks like:
> db.coll.mapReduce(m, r, {query: ..., out: {reduce: "output_coll"}})
In addition to re-reducing output, you can use merge, which overwrites documents in the output collection with newly created ones (leaving behind any documents whose _id differs from the _ids created by your M/R job); replace, which is effectively a drop-and-create operation and is the default; or {inline: 1}, which returns the results directly to the shell or to your driver. Note that when using {inline: 1}, your results must fit in the size allowed for a single document (16MB in recent MongoDB releases).
(4.)
You can run map-reduce jobs on secondaries ("slaves"), but since secondaries cannot accept writes (that's what makes them secondary), you can only do this when using inline output.
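Putting the pieces together, a minimal end-to-end sketch in the mongo shell (m and r as defined above; the output collection name "hourly_scores" is hypothetical). Map-reduce itself won't flip flags on the source documents, so recently-voted is reset with a separate update afterwards:
db.coll.mapReduce(m, r, {
    query: {$or: [{"recently-voted": true}, {"hourly-score": {$gt: 0}}]},
    out: {reduce: "hourly_scores"}
});

// Reset the flag on everything the job just processed
db.coll.update(
    {"recently-voted": true},
    {$set: {"recently-voted": false}},
    false, // upsert
    true   // multi: update all matching documents
);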