The project that we worked on consists of three tiers: the presentation tier, the business logic tier, and the data tier. I will call them here the front, mid, and back.
The front is written in PHP and communicates with the mid via web services (XML-RPC, SOAP, etc.). Users can also write their own clients to talk to the mid. The mid is developed in Java; it performs the business logic and provides data to the front, and it may also throw exceptions to the front.
The question I am having is: if I want to have multi-lingual support in the future, where should I implement i18n? It makes sense for it to live in the front, because of all the text the front contains, but what about exceptions and other messages coming from the mid?
If a user develops their own client and the mid has multi-lingual support, the messages coming from it (like the exceptions mentioned above) can be in the user's selected language. That's the advantage I'm seeing. I just don't like the idea of having two layers with i18n code, and of having to handle i18n while I am handling an exception.
It depends a lot on your application.
If you are thinking about UI localization, the presentation tier is definitely affected.
I would say that the middle tier should not generate any messages.
Exceptions are intended for developers, not for users. So capture the exception in the presentation tier and present it to the user in a localized way, saying something like "Fatal error 12313 occurred, please send this report to ..."
(Maybe even nicer: you don't show the exception text at all, but offer a "Send a crash report" button, with a "Show report" button for the user to see that you are not sending any private data.)
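A minimal sketch of that idea on the presentation side, in Java for illustration (the bundle name, the message key, and the stand-in business call are all hypothetical):

import java.util.Locale;
import java.util.ResourceBundle;

public class CheckoutPage {

    // Stand-in for a real call into the business tier
    private String checkout() throws Exception {
        throw new Exception("NullPointerException in OrderServiceImpl:212");
    }

    public String render(Locale userLocale) {
        try {
            return checkout();
        } catch (Exception e) {
            e.printStackTrace(); // full details go to the developer log, never to the user
            // The user only sees a short message in their own language,
            // loaded from ui_en.properties, ui_de.properties, ...
            ResourceBundle msgs = ResourceBundle.getBundle("ui", userLocale);
            return msgs.getString("error.fatal");
        }
    }
}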
But if you think about stuff beyond the UI, then the other tiers might also be affected.
The business logic might be affected (for instance, the way tax systems work differs from country to country). And that is independent of the UI (Canada and Australia have different tax systems than the US, even if the UI is still English).
So you might want to design this layer to be very modular.
The content of the database might also be affected. Imagine you have products that are not available (or are banned) in certain countries. So you might need extra fields (or tables) to carry that info.
So in the end the answer is "you have to think about i18n at every level!" and keep asking yourself "what if?".
I would ask you a question: would the i18n data be handled in the back layer (data tier)?
If you say yes, then you've got it; but if you say no, then I would put it in the mid layer (business tier), because medium and larger projects tend to interact with i18n all over the place (exceptions, currencies, message formats, time zones, charsets, etc.).
For smaller projects, I would put it in the front layer (presentation tier).
Regards.
If you want to be completely internationalized, exceptions and other messages from the middle tier should not include text. You should specify a code that the client must look up in a table to understand.
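Sketched in Java (the class name and the example code value are invented, just to show the shape of the contract):

// Mid tier: exceptions carry a stable code plus raw parameters, no prose.
public class BusinessException extends Exception {
    private final int code;        // e.g. 4711 = "insufficient stock"
    private final Object[] params; // raw values the client may interpolate
    public BusinessException(int code, Object... params) {
        this.code = code;
        this.params = params;
    }
    public int getCode()        { return code; }
    public Object[] getParams() { return params; }
}

Each client, including user-written ones, then ships its own code-to-text table for the languages it cares about, and the mid tier stays language-neutral.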
I am working on a module which requires submitting a form with an insane number of parameters (8k-10k). I am not sure whether this is a good idea or not, but that's the way it is. I have changed the settings in the neo-runtime.xml file as mentioned in this link, as below:
<var name='postParametersLimit'><number>10000.0</number></var>
and restarted the server, but it made no difference: CF still throws a 500 error, and we cannot see any useful information about it. I am working on CF 9.0.2 and we are using IIS 7.5. Is there anything else I need to do?
"We gave our client a dynamic form where he can add his own form fields and now we have this problem. There was a mismatch between clients expectations and our thinking of the way client wants it."
Unfortunately, you're going to have to tell the client they can't have it how they want it. That post processing limit is there for security reasons and if you raise it too high, then you're re-opening your server to a denial of service attack using a hash algorithm collision.
We have tens of thousands of forms in our workflow system and work with banking and government clients. Once this update was applied (in development first), we had to raise the default to a certain value and stick with it. We made sure to note this limitation to the entire business team and add it to our coding standards document, to ensure that all new development was done in accordance with the standard. After reworking a handful of existing forms to account for the limitation, we were able to push the security update to production without a problem.
Just tell them that there is a security restriction on the number of fields in a single form and they cannot cross that line. If you need to gather that much data, they'll have to break it up into multiple forms.
You can use a cfgrid instead of a long form with a huge amount of data to take input from the user.
cfgrid allows you to load only a limited amount of data from the database.
Using it, you can prevent posting and loading a huge amount of data at once.
And if you are not a great supporter of cfgrid or other cfajax features, you can still use pagination or something similar that will let you load a limited amount of data into your form and, in turn, post less data. But the latter will require you to build the logic yourself.
Start with the CF server limits first. This blog post should give you a pointer to where limits can be adjusted:
http://www.cutterscrossing.com/index.cfm/2012/3/27/ColdFusion-Security-Hotfix-and-Big-Forms
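For reference, the limits that the hotfix adds to neo-runtime.xml look roughly like this (the values here are illustrative, and if I remember the hotfix notes correctly there is a separate postSizeLimit, in MB, that also has to allow your request; a known gotcha is to edit the file only while the CF service is stopped, because ColdFusion rewrites it on shutdown):

<var name='postParametersLimit'><number>10000.0</number></var>
<var name='postSizeLimit'><number>100.0</number></var>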
In REST, URIs should be opaque to the client.
But when you build an interactive, JavaScript-based web client for your application, you actually have two clients! One for interaction with the server, and another one for the users (the actual GUI). Of course you will want friendly URIs, good enough to answer the question "where am I now?".
It's easier when a server just responds with HTML: people can simply click on links and not care about structure. The server provides URIs, the server receives URIs.
It's easier with a desktop client. The same stuff: just a "show the resource" button, and the user doesn't care what the URI is.
It's complicated with browser clients. There is the address bar. This leads to the low-level part of the web client relying on the URI structure of the server. Which is not RESTful.
It seems like the space between frontend and backend of the application is too tight for REST.
Does it mean that REST is not a good choice for reactive interactive js-based browser clients?
I think you're a little confused...
First of all, your initial assumption is flawed. URI opacity doesn't mean URIs have to be cryptic. It only means that clients should not rely on URI semantics for interaction. Friendly URIs are not only allowed, they are encouraged for the exact same reason you talk about: it's easier for developers to know what's going on.
Roy Fielding made that clear in the REST mailing list years ago, but it seems like that's a myth that won't go away easily:
REST does not require that a URI be opaque. The only place where the
word opaque occurs in my dissertation is where I complain about the
opaqueness of cookies. In fact, RESTful applications are, at all
times, encouraged to use human-meaningful, hierarchical identifiers in
order to maximize the serendipitous use of the information beyond what
is anticipated by the original application.
Second, you say it's easier when a server just responds with HTML, so people can just follow links and not care about structure. Well, that's exactly what REST is supposed to do. REST is merely a more formal and abstract definition of the architectural style of the web itself. Do some research on REST and HATEOAS.
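To make the HATEOAS point concrete, here is a hedged sketch of a client that treats URIs as opaque and navigates by link relation instead. The JSON shape and the example URL are invented, and the string-based link extraction is only there to keep the sketch dependency-free (Java 11+, java.net.http):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LinkFollowingClient {

    private static final HttpClient http = HttpClient.newHttpClient();

    static String get(String uri) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(uri)).build();
        return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Naive lookup of {"_links":{"<rel>":{"href":"..."}}}; a real client
    // would use a JSON parser.
    static String linkFor(String json, String rel) {
        int i = json.indexOf("\"" + rel + "\"");
        int start = json.indexOf("\"href\":\"", i) + 8;
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) throws Exception {
        // The only URI the client hard-codes is the entry point; everything
        // else is discovered from the representations the server returns.
        String root = get("https://api.example.com/");
        System.out.println(get(linkFor(root, "orders")));
    }
}

The client never builds a URI itself, so the server is free to restructure its URI space without breaking anyone.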
Finally, to answer your question, whether REST is a good choice for you is not determined by client implementation details like that. You can have js-based clients, no problem, but the need to do that isn't reason enough to worry too much about getting REST right. REST is a good choice if you have projects with long term integration, maintainability and evolution goals. If you need something quick, that won't change a lot, or won't be integrated with a lot of different clients and services, don't worry too much about REST.
I'm very curious to know how this process works. These sites (http://www.sharkscope.com and http://www.pokertableratings.com) data-mine thousands of hands per day from secure poker networks such as PokerStars and Full Tilt.
Do they have a farm of servers running applications that open hundreds of tables (windows) and then somehow spider/datamine the hands that are being played?
How does this work, programming wise?
There are a few options. I've been researching it since I wanted to implement some of this functionality in a web app I'm working on. I'll use PokerStars for example, since they have, by far, the best security of any online poker site.
First, realize that there is no way for a developer to rip real time information from the PokerStars application itself. You can't access the API. You can, though, do the following:
Screen Scraping/OCR
PokerStars does its best to sabotage screen/text scraping of their application (by doing simple things like pixel level color fluctuations) but with enough motivation you can easily get around this. Google AutoHotkey combined with ImageSearch.
API Access and XML Feeds
PokerStars doesn't offer public access to its API. But it does offer an XML feed to developers who are pre-approved. This XML feed offers:
PokerStars Site Summary - shows player, table, and tournament counts
PokerStars Current Tournament data - files with information about upcoming and active tournaments. The data is provided in two files:
PokerStars Static Tournament Data - provides tournament information that does not change frequently, and
PokerStars Dynamic Tournament Data - provides frequently changing tournament information
PokerStars Tournament Results - provides information about completed tournaments. The data is provided in two files:
PokerStars Tournament Results – provides basic information about completed tournaments, and
PokerStars Tournament Expanded Results – provides expanded information about completed tournaments.
PokerStars Tournament Leaders Board - provides information about top PokerStars players ranked using PokerStars Tournament Ranking System
PokerStars Tournament Leaders Board BOP - provides information about top PokerStars players ranked using PokerStars Battle Of Planets Ranking System
Team PokerStars – provides information about Team PokerStars players and their online activity
It's highly unlikely that these sites have access to the XML feed (or an improved one which would provide all the functionality they need) since PokerStars isn't exactly on good terms with most of these sites.
This leaves two options: scraping the network connection for said data, which I think is borderline impossible (I don't have experience with this, but I've heard it's highly encrypted and not easy to tinker with), and, as mentioned above, screen scraping/OCR.
Option #2 is easy enough to implement and, with some work, can avoid detection. From what I've been able to gather, this is the only way they could be doing such massive data mining of PokerStars (I haven't looked into other sites but I've heard security on anything besides PokerStars/Full Tilt is quite horrendous).
[edit]
Reread your question and realized I didn't unambiguously answer it.
Yes, they likely have a massive number of servers running, watching all currently running tables, tournaments, etc. Realize that there is a decent amount of money in what they're doing.
This, for instance, could be how they do it (speculation):
Said bot applications watch the tables and data mine all information that gets "posted" to the chat log. They do this by already having a table of images that correspond to, for example, all letters of the alphabet (since PokerStars doesn't post their text as... text. All text in their software is actually an image). So, the bot then rips an image of the chat log, matches it against the store, converts the data to a format they can work with, and throws it in a database. Done.
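Purely to illustrate that "table of images" idea (this whole pipeline is speculation, so the file names and the exact-match shortcut below are invented; the pixel-level color fluctuations mentioned earlier are exactly why a real matcher would compare with a tolerance instead of exactly):

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.imageio.ImageIO;

public class GlyphTable {

    // Pre-captured images of every character the client can render,
    // e.g. glyphs/a.png, glyphs/b.png, ... (hypothetical files)
    static Map<Character, BufferedImage> loadGlyphs() throws Exception {
        Map<Character, BufferedImage> glyphs = new HashMap<>();
        for (char c = 'a'; c <= 'z'; c++) {
            glyphs.put(c, ImageIO.read(new File("glyphs/" + c + ".png")));
        }
        return glyphs;
    }

    // Exact pixel comparison of a slice of the chat-log screenshot
    // against one stored glyph.
    static boolean matches(BufferedImage slice, BufferedImage glyph) {
        if (slice.getWidth() != glyph.getWidth()
                || slice.getHeight() != glyph.getHeight()) return false;
        for (int y = 0; y < slice.getHeight(); y++)
            for (int x = 0; x < slice.getWidth(); x++)
                if (slice.getRGB(x, y) != glyph.getRGB(x, y)) return false;
        return true;
    }
}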
[edit]
No, the data isn't sold to them by the poker sites themselves. That would be a PR nightmare if it ever got out, which it would. And it wouldn't account for the functionality of these sites (OPR, Sharkscope, etc.), which appears to be instantaneous. There are, without a doubt, applications running that rip the data in real time from the poker software, likely using the methods listed above.
Maybe I can help.
I play poker, run a HUD, look at the stats and am a software developer.
I've seen a few posts on this suggesting it's done by OCR software grabbing the screen. Well, that's really difficult and processor hungry, so a programmer wouldn't choose to do that unless there were no other options.
Also, because you can open multiple windows, the poker window can be hidden or partially obscured by other things on the screen, so you couldn't guarantee being able to capture the screen.
In short, they read the log files that are output by the poker software.
When you install a HUD like Sharkscope or Jivaro, it runs client software on your PC. It reads the log files and updates its own servers with every hand you play.
Most poker software is similar, but let's start with PokerStars, as that's where I play. The poker software writes local log files for every action you/it makes. It shows your cards, any opponents' cards that you see, plus what you do, e.g. which button you pressed, how much you/they bet, etc. It posts these updates in near real time and timestamps the log file.
You can look at your own files to see this in action.
On a PC, do this (not sure what you do on a Mac, but it will be similar):
1. Load File Explorer
2. Select VIEW from the menu
3. Select HIDDEN ITEMS so that you can see the hidden data files
4. Go to C:\Users\Dave\AppData\Local\PokerStars.UK (you may not be called DAVE...)
5. Open the PokerStars.log.0 file in NOTEPAD
6. In Notepad, SEARCH for updateMyCard
7. It will show your cards numerically:
3c for 3 of Clubs
14d for Ace of Diamonds
You can see your opponents cards only where you saw them at the table.
Here are a few example lines from the log file.
OnTableData() round -2
:::TableViewImpl::updateMyCard() 8s (0) [2A0498]
:::TableViewImpl::updateMyCard() 13h (1) [2A0498]
:::TableViewImpl::updatePlayerCard() 7s (0) [2A0498]
:::TableViewImpl::updatePlayerCard() 14s (1) [2A0498]
[2015/12/13 12:19:34]
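If you want to play with this yourself, a quick-and-dirty parser for those lines could look like the following Java sketch (the path is the one from step 4 above, so adjust the user name; a real HUD would tail the file rather than re-read it):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PokerLogReader {

    static final Pattern CARD =
        Pattern.compile("update(My|Player)Card\\(\\) (\\d+)([cdhs])");
    static final String[] RANKS =
        {"2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"};

    // 3c -> "3 of Clubs", 14d -> "A of Diamonds" (11=J, 12=Q, 13=K, 14=A)
    static String pretty(int rank, char suit) {
        String s = switch (suit) {
            case 'c' -> "Clubs";
            case 'd' -> "Diamonds";
            case 'h' -> "Hearts";
            default  -> "Spades";
        };
        return RANKS[rank - 2] + " of " + s;
    }

    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get(
                "C:/Users/Dave/AppData/Local/PokerStars.UK/PokerStars.log.0"))) {
            Matcher m = CARD.matcher(line);
            if (m.find()) {
                String who = m.group(1).equals("My") ? "me:   " : "them: ";
                System.out.println(who
                        + pretty(Integer.parseInt(m.group(2)), m.group(3).charAt(0)));
            }
        }
    }
}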
cheers, hope this helps
Dave
I've thought about this, and have two theories:
The "sniffer" sites have every table open, AND:
Are able to pull the hand data from the network stream. (or:)
Are obtaining the hand data from the GUI (screen scraping, pulling stuff out via the GUI API).
Alternatively, they may have developed/modified clients to log everything for them, but I think one of the above solutions is likely simpler.
Well, they have two choices:
They spider/grab the data without consent. Then they risk being shut down at any time: the poker site can easily detect such monitoring at this scale and block it. They even risk a lawsuit for breach of the terms of service, which probably disallow the use of robots.
They pay to get the data directly. This saves a lot of bandwidth (e.g. not having to load full pages, do extraction, keep up with HTML changes, etc.) and makes their business much less risky, both legally and technically.
Guess which one they more likely chose; at least if the site has been around for some time without being shut down every now and then.
I'm not sure how it works, but I have an application ID and a key, which you get as a gold or silver subscriber. Sign up for a month and send them an email, and you will get access and the API documentation.
At my company we develop prefabricated web applications. While our applications work as-is in many cases, we often receive complex customization requests. We are having a problem trying to handle these in a structured way: generic functionality should not be influenced by customizations. At the moment we are looking into Spring Web Flow, and it looks like it can handle part of what we need.
For example, we have an online shopping application, and a client requests that at the moment the shopping basket is checked out, the order has to be written to a proprietary logging system.
With SWF, it is possible to have a ClientX Checkout Flow inherit from our Generic Checkout Flow and extend it with the states necessary to perform the custom log write. This scenario seems to be handled well: we can keep our Generic Checkout Flow as-is and extend it with custom functionality, according to the open/closed principle. Over time our team can add functionality to the Generic Checkout Flow, and this can be distributed to a client without modifying the extension.
However, sometimes clients request that our pages be customized. For example, in our online shopping app a client requests a multiple-currencies feature. In this case, you need to modify the View as well as the Flow (Controller). Is there a technology that would let me extend the Generic View without modifying it? So far, with the majority of template-based view technologies (JSP, Struts, Velocity, etc.), the only two solutions seem to be:
to have a specific version of the view for each client, which obviously leads to implementation explosion;
to make the application configurable depending on parameters (if multipleCurrency then ...), which leads to code explosion: a number of configuration conditions have to be checked in each page.
What would be the best solution in this case? There are probably other customization cases I am not able to recall. Is there maybe a component-based view technology that would let me extend a certain base view, and does that even make sense?
What are typical solutions to a problem of configurable web applications?
Each customization point implies some level of conditionality.
Where possible, folks tend to use style sheets to control some aspects. The display of a currency selector, for example, could perhaps be done that way.
Another thought for that currency example: one is the limiting case of many. So the model provides the list of currencies; the view displays a selector if there are many, and a fixed field if there is only one. Quite well-defined behaviour: easy to test, and reusable for other scenarios.
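As a toy illustration of "one is the limiting case of many" (rendering inlined as strings purely to keep it self-contained; in practice this would live in your JSP/Velocity template):

import java.util.List;

public class CurrencyField {

    // The model decides how many currencies exist; the view just reacts.
    static String render(List<String> currencies) {
        if (currencies.size() == 1) {
            return "<span>" + currencies.get(0) + "</span>"; // fixed field
        }
        StringBuilder sb = new StringBuilder("<select name='currency'>");
        for (String c : currencies) {
            sb.append("<option>").append(c).append("</option>");
        }
        return sb.append("</select>").toString();
    }

    public static void main(String[] args) {
        System.out.println(render(List.of("EUR")));               // generic client
        System.out.println(render(List.of("EUR", "USD", "GBP"))); // customized client
    }
}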
Go bounty!
This question has earned me a tumbleweed badge (7 views in 7 days!), which is a strong confirmation that Navision has a very limited market share, and which, I suspect, also hints that Navision is not all that great a piece of software...
But hey... that's what we've got as a back-end, so I am ready to fight with this. :-O
If there is some daring Navision developer out there who is able to shed light on this... the bounty is there for you! :)
Original Post
I have recently implemented a rather complex e-commerce system that interacts with a legacy back-end based on Navision 5. So far the exchange of data between the two platforms has happened via XML files, but this method is quite clumsy and very much prone to mishaps.
Our needs are:
To expose certain elements of the business logic of each platform to the other one (for example: "what's the total amount ever purchased by this customer?", "what are the products currently on offer?", "how many new customers have registered on the website?", etc...).
To have feedback/validation mechanisms for the various transactions (for example: "Here's a new order from customer X" ... "OK, got it, the order will now start to be processed" ... "OK, copy that, bye!").
If possible, to avoid playing around with files and keep all of this happening in terms of calls/ports/services...
The most natural way I could think of would be to integrate the two systems via webservice, but Navision 5 does not support this natively. So I did my "due diligence" and found a few things on MSDN including this article and this other one.
According to these articles it should not be that difficult to create a webservice on Navision 5, but when I suggested this solution to the team in charge of the legacy system, they told us that it is "pure theory" and they do not know of anybody who ever implemented it.
I have no reason to doubt their word, but mileage can vary... and I thought that maybe in the SO community there are professionals from other countries who actually implemented something similar and are available to share their experience.
So, my question is twofold:
Is there anybody who has tried this at home and would be available to share a bit about what the greatest difficulties were, whether the final result is reliable, whether they think the outcome is worth the effort, etc.?
Is there anybody who faced a similar problem but solved it with a different approach and that would be available to present their solution ("I never did it myself, but if I had to do it I would do it like this..." type of answers are also welcome)?
Thank you in advance for your time! :)
I too will chime in with a not-too-helpful answer about Nav 6 :)
I've just completed a project using Nav 6. Surprisingly, the web services are VERY easy to expose and consume. It's really a trivial matter to find an object in the web services interface and tick a box to tell it to expose itself.
Unsurprisingly, the web services don't work as you'd expect: you often have to use some trial and error to get objects and properties to persist, as it's really touchy about the sequence of events you use to save an object. And each object seems to work slightly differently. E.g., to create a customer, I eventually found out that you have to create and save a blank customer, massage this new record with a codeunit, then fetch the record, and only then write the customer's attributes and save again. I expected to just create a new customer(), set the attributes and save in one quick swoop.
I guess you're not too keen on upgrading to Nav 6, but off the top of my head, here is how you could simulate web services:
Sharepoint can already consume and expose webservices, so that tier isn't a problem.
Nav 5 doesn't have them 'naturally', but you could cook up your own program that acts as a web service 'broker'. You're already getting info into and out of Nav, mostly via XML, so you could build this broker to take the XML files as input and massage them for use in a web service call. You could even forgo the XML and read and write directly from the DB, as all the Nav info is stored there anyway. So here's what I'm thinking:
NAV <-> SQL SERVER <-> New 'broker' webservice <-> Sharepoint
Or, if you already have the NAV API down pat and want to reuse your XML:
NAV <-> XML files <-> New 'broker' webservice <-> Sharepoint
If you are using XML and file watchers, the latency shouldn't be too bad; usually file watchers pick up on a drop or change within milliseconds.
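The file watcher half of that broker can be very small. Here's a rough Java sketch (the directory path is invented, and the XML parsing / reply plumbing is left out):

import java.nio.file.*;

public class NavDropFolderWatcher {

    public static void main(String[] args) throws Exception {
        // The folder Nav already drops its XML files into (hypothetical path)
        Path dropDir = Paths.get("C:/nav-exchange/out");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dropDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something is dropped
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = dropDir.resolve((Path) event.context());
                // parse the XML here and complete the pending web service call
                System.out.println("New file from Nav: " + file);
            }
            key.reset();
        }
    }
}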
Hmm, but I'm thinking you are probably supposed to use BizTalk for stuff like this:
NAV <-> BizTalk <-> Sharepoint
But I don't know how easy it would be to set up BizTalk to communicate with Nav. I'll bet that it's pretty straightforward to get comms working, but this is speculation.
In any case, I don't know how helpful this post is, but maybe it gives you a few ideas to go with.
Cheers,
Lance
Where I work, we were able to use one of the web services from NAV 6 to integrate with SharePoint, so you can look up a customer or record and show it in a web part in SharePoint. I know your question is about NAV 5 in particular, but I have only seen this working on NAV 6. And I wasn't the developer who worked on this, so I don't have any more specifics I'm afraid.
Did you try asking on mibuso.com? They're much more Navision-focused.
When you say expose business logic, does this include executing C/AL code (e.g. a codeunit)? If you only need to perform queries against the database, you could use NODBC & System.Data.Odbc, or the CFront .NET API. Either of these can easily be wrapped in a .NET web service, and both support the native NAV database. To pass messages back you would still need to use COM, as described in the first article you mention.
Any of the above are entirely possible, and relatively easy depending on your proficiency in .NET, COM and NAV.
The second article you linked to describes using the NAS. I'm no expert on this, but I think it might require a special license granule. Before spending time implementing anything, it's worth checking whether your license includes the NAS, CFront or NODBC.
You can actually do a "technical upgrade" of NAV 5 to NAV 2009 and then use the native web services. This means replacing the exe files and the entire application (but not the objects, which will still be version 5), plus a couple of other tricks. But it works, and you then have 2009 web service functionality on your NAV 5 :-)
For the benefit of anyone googling this: if you're still on Navision 6 with the native database, a good way of connecting to it is via the Windows message queue.
You can write your own web service that puts XML requests into the queue and then waits for the reply to pop up. On the Navision side, you have one client that polls the queue and puts answers into the reply queue. There's a special non-GUI client called NAS for that.
I've been using this approach for 15 years to connect booking engines to our Navision back end. It works very well and has the advantage that you can peek at requests before they reach Navision, so you can protect your back end from excessive or faulty requests.
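MSMQ itself is driven through COM/.NET rather than Java, but the request/reply pattern described here is the classic one. As a stand-in, here is the same shape with JMS and ActiveMQ as the provider (queue names and the XML payload are invented, purely to show the flow):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NavRequestReply {

    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requests = session.createQueue("nav.requests");
        Queue replies  = session.createTemporaryQueue();

        // Drop the XML request into the queue, tagged with where to reply
        TextMessage msg = session.createTextMessage("<getCustomer id='C-1001'/>");
        msg.setJMSReplyTo(replies);
        session.createProducer(requests).send(msg);

        // Block until the NAS-side consumer answers, or give up after 10s
        Message reply = session.createConsumer(replies).receive(10_000);
        System.out.println(reply == null
                ? "timeout" : ((TextMessage) reply).getText());
        conn.close();
    }
}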
The only problem with this approach is that the COMMIT command within Navision is super expensive, which makes it difficult to serve high volumes of requests. As soon as the back end needs committing, you are down to just a few requests per second. That may be fine for low volume.
For high volume, you need to implement caching or move some of the business logic out. For me it was getting hit by price comparison websites, so the solution was to serve those from web services written in Python and only pass the requests on to Navision when someone actually buys something...