Can I extend ClojureQL using protocols or multimethods?

I would like to add an "add-watch" method to ClojureQL tables. Is it possible to just use a multimethod to do this?

You have to think about how you want this to work first. If a change happens to the SQL table, how will you detect it? The SQL database doesn't call you to inform you of the change, so you either have to poll at an interval or track the functions in CQL that can update a table, i.e. conj!, disj!, update-in!.
If you opt for monitoring those functions, you will currently have to modify the source and add a call in those 3 functions that alerts your watcher - it can simply be a plain function; there's no need for multimethods or protocols. The general shape of that wrapper idea is sketched below.
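To make the wrapper idea concrete - sketched here in C++ purely as a language-neutral illustration (patching CQL itself would of course be done in Clojure, and every name below is invented) - each mutating operation simply notifies whatever watchers have been registered:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: a table whose mutating operations alert watchers.
class WatchedTable {
public:
    using Watcher = std::function<void(const std::string& op)>;

    void addWatch(Watcher w) { watchers_.push_back(std::move(w)); }

    // Stands in for CQL's conj!: perform the change, then notify.
    void conj(const std::string& row) {
        rows_.push_back(row);
        notify("conj!");
    }

private:
    void notify(const std::string& op) {
        for (const auto& w : watchers_) w(op);
    }

    std::vector<std::string> rows_;
    std::vector<Watcher> watchers_;
};

int main() {
    WatchedTable table;
    table.addWatch([](const std::string& op) {
        std::cout << "table changed via " << op << "\n";
    });
    table.conj("some-row");  // prints: table changed via conj!
}

disj! and update-in! would get the same one-line notify call.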
Think hard about when/where this is useful and how you as a user would best be served by CQL. Then, if you come up with something brilliant, open an issue on GitHub and let us know about it.
Thanks,
Lau

Related

Is it possible to create a constraint using a regex in Neo4j?

I would like to ask if it is possible to create a constraint in Neo4j (Cypher) using a regex.
To be specific, I have a lot of nodes that represent IPs, and I would like to ensure that each node's ip_address property holds a properly formatted IP address.
If the answer is no, is there any workaround? The only one that currently comes to mind is to check every node in the programming language before adding it to Neo4j.
This isn't currently available in an easy-to-apply constraint form.
While the recommended approach when you need specific formatting is to handle it at the application layer, you could create a trigger that checks whether a newly added node of the given label has the correct formatting and fails the transaction if not.
This does take some additional work and testing however.
Triggers are implemented with TransactionEventHandlers; the TransactionEventHandler Java interface is what you'll need to implement.
Alternatively, you can use triggers in APOC Procedures to implement this with Cypher.
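For the application-layer route, here is a minimal sketch (C++ with std::regex; the function name and the IPv4-only pattern are my own assumptions - adjust if you also need IPv6) of validating ip_address before the node is ever sent to Neo4j:

#include <iostream>
#include <regex>
#include <string>

// Illustrative check: accept only dotted-quad IPv4 with each octet 0-255.
bool isValidIPv4(const std::string& s) {
    static const std::regex ipv4(
        R"(^((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)$)");
    return std::regex_match(s, ipv4);
}

int main() {
    std::cout << isValidIPv4("192.168.0.1") << "\n";  // 1 - would be written
    std::cout << isValidIPv4("999.1.1.1") << "\n";    // 0 - would be rejected
}

A trigger-based check would apply the same kind of pattern test, just inside the TransactionEventHandler before the transaction commits.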

Set BatchSize for specific function of a WebJob

Is it possible to set the batch size at the function level within a WebJob?
I have multiple functions in a WebJob; some of them depend on external APIs which do not allow a high degree of parallelization.
I have seen only the Singleton attribute which is not exactly what I am looking for.
Just figured out that this is possible with the custom QueueProcessorFactory I already use.
An example from MS is here:
https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/MiscOperations/CustomQueueProcessorFactory.cs
Having attributes for this would be nice ;-)
Alex
Yeah, custom QueueProcessor instances were designed to be the "escape hatch" allowing you full control in advanced scenarios. We want to keep the mainline paths simple and easy to use, while allowing you to drop down and deeply customize when needed. Adding a bunch of override options on QueueTriggerAttribute itself would be possible, but could also complicate the programming model.
If you would like to suggest a change, I suggest you log issues in the public repo: https://github.com/Azure/azure-webjobs-sdk/issues
Thanks :)

Is there a way to detect from which source an API is being called?

Is there any method to identify from which source an API is called? Source refers to an iOS application or a web application, like a page or button click (Ajax calls, etc.).
Saving a flag such as ?source=ios or ?source=webapp when calling the API can be done, but I just wanted to know whether there is any better option to accomplish this.
I also feel this requirement is odd, because in general an app or a web application is used by any number of users, so it is difficult to monitor that many API calls.
Please share your suggestions.
There is no perfect way to solve this. Designating a special flag won't solve your problem, because the consumer can put in whatever she wants and you cannot be sure if it is legit or not. The same holds true if you issue different API keys for different consumers - you never know if they decide to switch them up.
The only option that comes to my mind is to analyze the HTTP headers and see what you can deduce from them. As you probably know, a typical HTTP request looks something like this (the values below are purely illustrative):
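GET /api/orders HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) AppleWebKit/605.1.15
Accept: application/json
Referer: https://www.example.com/orders
X-Requested-With: XMLHttpRequest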
You can try and see how the requests from all sources differ in your case and decide if you can reliably differentiate between them. If you have the luxury of developing the client (i.e. this is not a public API), you can set your custom User-Agent strings for different sources.
But keep in mind that the Referer header is not mandatory and thus not very reliable, and the User-Agent can also be spoofed. So it is a solution that is better than nothing, but it's not 100% reliable.
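As a toy illustration of that differentiation (a C++ sketch; the category names and the substrings - including the custom "MyApp-iOS" token you would bake into your own client - are all invented assumptions):

#include <iostream>
#include <string>

// Hypothetical classifier: substring checks on the User-Agent value.
// Remember: all of this can be spoofed by the caller.
std::string classifySource(const std::string& userAgent) {
    if (userAgent.find("MyApp-iOS") != std::string::npos) return "ios-app";
    if (userAgent.find("iPhone") != std::string::npos ||
        userAgent.find("Android") != std::string::npos) return "mobile-browser";
    if (userAgent.find("Mozilla") != std::string::npos) return "web-browser";
    return "unknown";
}

int main() {
    std::cout << classifySource("MyApp-iOS/2.1 CFNetwork/978.0.7") << "\n";  // ios-app
    std::cout << classifySource("Mozilla/5.0 (Windows NT 10.0)") << "\n";    // web-browser
}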
Hope this helps. Good luck!

Facebook-style like system in MODX CMS (PHP)

I'm trying to build a simple like system in MODX (which uses PHP snippets of code). I just need a button that logged-in users can press which adds a 'like' to a resource.
Would it be best to update a custom table or a TV? My thought is that if it is a template variable, I can use getResources to sort by the number of likes.
Any thoughts on the best way to approach or build this would help. My PHP knowledge is limited.
It depends on how you are going to use it afterwards and whether you are storing more data than just a 'like' count. TVs are expensive on resources [even more so if you are going to whip through the entire resource set with getResources], so if you are going to do a lot of processing after the fact, I would either look at a custom table ~or~ explore using property sets on your pages [I think it should be pretty easy to write a plugin that updates a page property].
I'd definitely go for a custom table.
While you could simply increment a numeric TV to count the number of likes, you would end up in a situation where anyone can keep liking a resource without limit - you didn't specify the exact concept, but that can hardly be desired. With a custom table you could add a relational reference to the ID of the user who liked the resource, add a timestamp so you know when it happened, and let your imagination run wild on the additional features that are now open to you.
While not a hard requirement for custom tables, you will probably want to take the time to learn xPDO, which is the database abstraction layer MODX is based on. There's a great tutorial on the RTFM which walks you through it.

Query building in a database agnostic way

In a C++ application that can use just about any relational database, what would be the best way of generating queries that can be easily extended to allow for a database engine's eccentricities?
In other words, the code may need to retrieve data in a way that is not consistent among the various database engines. What's the best way to design the client-side code so that supporting a new database engine is a relatively painless affair?
For example, if I have (MFC) code that looks like this:
CString query = "SELECT id FROM table";
results = dbConnection->Query(query);
and we decide to support some database that uses, um, "AVEC" instead of "FROM". Now whenever the user uses that database engine, this query will fail.
Options so far:
Worst option: have the code making the query check the database type.
Better option: Create a query request method on the db connection object that takes a unique query "code" and returns the appropriate query based on the database engine in use.
Betterer option: Create a query builder class that allows the caller to construct queries without using any SQL directly. Once the query is complete, the caller can invoke a "Generate" method which returns a query string appropriate for the active database engine.
Best option: ??
Note: The database engine itself is abstracted away through some thin layers of our own creation. The queries themselves are the only remaining problem.
Solution:
I've decided to go with the "better" option (query "selector") for two reasons.
Debugging: As mentioned below, debugging is going to be slightly easier with the selector approach since the queries are pre-built and listed out in a readable form in code.
Flexibility: It occurred to me that some databases might have vastly better and completely different ways of solving a particular query. For example, with Access I perform a complicated query on multiple tables each time because I have to, but on SQL Server I'd like to set up a view. Selecting from the view and selecting from several tables are completely different queries (I think), and this query selector would handle it easily.
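A minimal sketch of that selector (C++; the enum values and both SQL strings are invented examples): all hand-written queries live in one lookup keyed by engine and query code, so supporting a new engine means adding entries rather than touching call sites.

#include <map>
#include <stdexcept>
#include <string>
#include <utility>

enum class Engine { Access, SqlServer };
enum class QueryCode { SelectIds };

// The "selector": returns the pre-built SQL for the active engine.
std::string queryFor(Engine engine, QueryCode code) {
    static const std::map<std::pair<Engine, QueryCode>, std::string> queries = {
        {{Engine::Access, QueryCode::SelectIds},
         "SELECT id FROM t1 INNER JOIN t2 ON t1.fk = t2.id"},  // multi-table query, as described above
        {{Engine::SqlServer, QueryCode::SelectIds},
         "SELECT id FROM ids_view"},                           // same data via a view
    };
    const auto it = queries.find({engine, code});
    if (it == queries.end())
        throw std::runtime_error("no query registered for this engine/code");
    return it->second;
}

Because each entry is plain SQL, the strings stay readable and individually testable - the debugging benefit mentioned above.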
You need your own query-writing object, which database-specific implementations can inherit from.
So you would do something like:
std::unique_ptr<DbAgnosticQueryObject> query(new PostgresSQLQuery());
query->setFrom("foo");
query->setSelect("id");
// and so on
CString queryString = query->toString();
It can get pretty complicated in there once you go past simple selects from a single table. There are already ORM packages out there that deal with a lot of these nuances; it may be worth looking at them instead of writing your own.
Best option: Pick a database, and code to it.
How often are you going to up and swap out the database on the back end of a production system? And even if you did, you'd have a lot more to worry about than just minor syntax issues. (Major stuff like join syntax and even datatypes can differ widely between databases.)
Now, if you are designing a commercial application where you want the customer to be able to use one of several back-end options when they implement it, then you may have to specify "we support Oracle, MS SQl, or MYSQL" and code to those specific options.
All of your options can be reduced to
Worst option: have the code making the query check the database type.
It's just a matter of where you're putting the logic to check the database type.
The option that I've seen work best in practice is
Better option: Create a query request method on the db connection object that takes a unique query "code" and returns the appropriate query based on the database engine in use.
In my experience it is much easier to test queries independently from the rest of your code. It gets a lot harder if you have objects that are piecing together queries from bits of syntax, because then you have to test the query-creation code and the query itself.
If you pull all of your SQL out into separate files that are written and maintained by hand, you can have someone who is an expert in SQL write them (you can still automate the testing of these queries). If you try to write query-generating functions you'll essentially have a C++ expert writing SQL.
Choose an ORM, and start mapping.
If you are to support more than one DB, your problem is only going to get worse.
And just think of the databases that are coming - cloud DBs with no (or close to no) SQL, and object databases.
Take your queries outside the code - put them in the DB or in a resource file and allow overrides for different database engines.
If you use SPs it's potentially even easier, since the SPs abstract away your database differences.
I would think that what you would want to do, if you needed the ability to support multiple databases, would be to create a data provider interface (or abstract class) and associated concrete implementations. The data provider would need to support your standard query operators and other common, supported functionality required to support your query operations (have a look at the IEnumerable extension methods in .NET 3.5). Each concrete provider would then translate these into specific queries based on the target database engine.
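A bare-bones C++ sketch of that provider idea (all names invented; no particular library implied):

#include <string>
#include <vector>

// Abstract provider: callers say *what* they want; each concrete
// provider decides *how* to ask its particular engine.
class DataProvider {
public:
    virtual ~DataProvider() = default;
    virtual std::vector<int> fetchAllIds(const std::string& table) = 0;
};

class SqlServerProvider : public DataProvider {
public:
    std::vector<int> fetchAllIds(const std::string& table) override {
        // Would build and execute SQL Server-specific SQL here,
        // e.g. "SELECT id FROM " + table, via the real connection layer.
        return {};
    }
};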
Essentially, what you do is create a database abstraction layer and have your code interact with it. If you can find one of these for C++, it would probably be worth buying instead of writing. You may also want to look for Inversion of Control (IoC) containers for C++ that would basically do this and more. I know of several for Java and C#, but I'm not familiar with any for C++.