I am trying to get data from Mango Automation into Power BI

I work at an energy company and we want to introduce Power BI into our business after countless recommendations. Currently we are using a tool called Mango; it does offer dashboarding, but the tools it provides are very limited and allow little customization.
I am currently trying to get data from Mango into Power BI but am having no luck. I understand Mango probably isn't that well known, so here is a bit about it that may prove helpful.
"Mango collects its data by reading from its various data sources every so often. This period varies depending on how critical the data from each data source is deemed. PLC’s controlling generators are normally every 30 seconds or sooner, but other devices might only be read every 5 or 10 minutes. The data that Mango collects is stored in its database based on specific settings customised for each data source and even further customised for each data point within each data source.
Mango SCADA units are primarily designed to go and get data from other devices rather than send data out to other devices, and as such they support a lot of different protocols over ethernet and serial interfaces to get that data, including the main one we use Modbus IP."
They do have a very limited “Publisher” feature that can send their data to other devices. The types supported are Modbus, BACnet, and HTTP Sender.
I don't want this to come across as if I haven't tried to research this; my main role just isn't technology focused, so a lot of the terminology is foreign to me. I would appreciate any help offered, even if it is just pointing me in the right direction. Thanks heaps!

I am not clear on whether you have access to the API directly or if you need to access the data in another way.
A description of the API can be found here: https://docs-v3.mango-os.com/explore-the-api
Then in Power BI you use "Get Data" and choose "Web".
This will generate something like:
Json.Document(Web.Contents("[Your API URL]"))
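If it helps to sanity-check the API before wiring it into Power BI, here is a minimal Python sketch of the same call (it needs the requests package). The host, endpoint path, and token below are placeholders I made up, not real Mango paths; check the documentation linked above for the actual URL and authentication scheme your instance expects.

    import requests

    # Hypothetical values -- substitute the real host, endpoint path, and
    # credentials for your Mango instance (see the API docs linked above).
    MANGO_HOST = "https://your-mango-server"
    ENDPOINT = "/rest/latest/point-values/XID_OF_YOUR_DATA_POINT"   # placeholder path
    TOKEN = "your-api-token"                                        # placeholder credential

    response = requests.get(
        MANGO_HOST + ENDPOINT,
        headers={"Authorization": "Bearer " + TOKEN},  # auth scheme is an assumption
        timeout=30,
    )
    response.raise_for_status()

    # Print the JSON so you can see the field names you will reference in Power BI.
    for record in response.json():
        print(record)

Once the raw JSON looks sensible, the same URL goes into the Web.Contents call above and you can expand the returned records into columns in Power Query.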

Related

Is there a way to retrieve all targetable cities in the Ads API?

The autocomplete API allows us to retrieve lists of all countries, regions, and locales by leaving out the query string and setting the result limit to a large number, but this feature isn't available at the city level.
Is there a way that we can retrieve a full list of all targetable cities and their IDs? If not, can we cache the autocomplete data for cities to build up such a list?
That functionality is probably not supported because of the massive amount of return data that would result from fetching all the cities in the world, even with paging. Limiting the response data by country (using country_list=["ca"]) and then fetching all cities doesn't sound too far-fetched, but it is not implemented either.
To me, it sounds like you have two options.
Create a bug report using our bug tool to request a wishlist feature (this doesn't guarantee anything, but at least we can track it if we choose to implement it, and it can serve as a way to gauge interest in the feature)
IANAL, but part 2 of section 2 of the FB Platform Policies states:
You may cache data you receive through use of the Facebook API in order to improve your application’s user experience, but you should try to keep the data up to date. This permission does not give you any rights to such data.
This sounds like you can cache the autocomplete data, since it will improve the UX of your app; just remember that you do not have the rights to the data. I would be cautious about this, as it would really suck to work hard to get all the caching functionality built in only to have FB say that it's not allowed. I would consult some experts before pursuing this path.
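If you do decide to cache, here is a rough Python sketch of one way to keep a per-country city cache with a refresh window, so the data stays reasonably "up to date" as the policy asks. The endpoint, parameters, and response shape are hypothetical stand-ins, not the real Ads API.

    import json, time, requests

    CACHE_FILE = "city_cache.json"
    MAX_AGE_SECONDS = 7 * 24 * 3600  # refresh weekly; pick whatever "up to date" means to you

    def fetch_cities(country):
        # Hypothetical endpoint and parameters -- substitute the real autocomplete call.
        resp = requests.get(
            "https://example.com/autocomplete",
            params={"type": "adcity", "country_list": json.dumps([country]), "limit": 10000},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("data", [])

    def cached_cities(country):
        try:
            with open(CACHE_FILE) as f:
                cache = json.load(f)
        except FileNotFoundError:
            cache = {}
        entry = cache.get(country)
        if entry and time.time() - entry["fetched_at"] < MAX_AGE_SECONDS:
            return entry["cities"]
        cities = fetch_cities(country)
        cache[country] = {"fetched_at": time.time(), "cities": cities}
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)
        return cities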

How exactly does Sharkscope or PTR data mine all those hands?

I'm very curious to know how this process works. These sites (http://www.sharkscope.com and http://www.pokertableratings.com) data mine thousands of hands per day from secure poker networks, such as PokerStars and Full Tilt.
Do they have a farm of servers running applications that open hundreds of tables (windows) and then somehow spider/datamine the hands that are being played?
How does this work, programming wise?
There are a few options. I've been researching it since I wanted to implement some of this functionality in a web app I'm working on. I'll use PokerStars as an example, since they have, by far, the best security of any online poker site.
First, realize that there is no way for a developer to rip real time information from the PokerStars application itself. You can't access the API. You can, though, do the following:
Screen Scraping/OCR
PokerStars does its best to sabotage screen/text scraping of its application (by doing simple things like pixel-level color fluctuations), but with enough motivation you can easily get around this. Google AutoHotkey combined with ImageSearch.
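For illustration only, the same image-search idea can be sketched in Python rather than AutoHotkey; this is a toy version of the technique, and the template image is something you would have to capture yourself.

    import pyautogui  # pip install pyautogui (plus opencv-python if you want the confidence option)

    # Look for a previously captured template image (e.g. a card glyph or a button)
    # anywhere on the screen. Returns a bounding box, or nothing if it isn't found.
    try:
        match = pyautogui.locateOnScreen("card_template.png", confidence=0.9)
    except pyautogui.ImageNotFoundException:  # newer pyautogui raises instead of returning None
        match = None

    if match is not None:
        # Grab just that region so it can be fed to template matching / OCR later.
        region = pyautogui.screenshot(region=(match.left, match.top, match.width, match.height))
        region.save("matched_region.png")
        print("Template found at", match.left, match.top)
    else:
        print("Template not found; the client may have shifted colours or layout.")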
API Access and XML Feeds
PokerStars doesn't offer public access to its API. But it does offer an XML feed to developers who are pre-approved. This XML feed offers:
PokerStars Site Summary - shows player, table, and tournament counts
PokerStars Current Tournament data - files with information about upcoming and active tournaments. The data is provided in two files:
PokerStars Static Tournament Data - provides tournament information that does not change frequently, and
PokerStars Dynamic Tournament Data - provides frequently changing tournament information
PokerStars Tournament Results - provides information about completed tournaments. The data is provided in two files:
PokerStars Tournament Results – provides basic information about completed tournaments, and
PokerStars Tournament Expanded Results – provides expanded information about completed tournaments.
PokerStars Tournament Leaders Board - provides information about top PokerStars players ranked using PokerStars Tournament Ranking System
PokerStars Tournament Leaders Board BOP - provides information about top PokerStars players ranked using PokerStars Battle Of Planets Ranking System
Team PokerStars – provides information about Team PokerStars players and their online activity
It's highly unlikely that these sites have access to the XML feed (or an improved one which would provide all the functionality they need) since PokerStars isn't exactly on good terms with most of these sites.
This leaves two options: scraping the network connection for said data, which I think is borderline impossible (I don't have experience with this; I've heard the traffic is highly encrypted and not easy to tinker with, but I'm not sure), and, as mentioned above, screen scraping/OCR.
Option #2 is easy enough to implement and, with some work, can avoid detection. From what I've been able to gather, this is the only way they could be doing such massive data mining of PokerStars (I haven't looked into other sites but I've heard security on anything besides PokerStars/Full Tilt is quite horrendous).
[edit]
Reread your question and realized I didn't unambiguously answer it.
Yes, they likely have a massive number of servers running, watching all currently running tables, tournaments, etc. Realize that there is a decent amount of money in what they're doing.
This, for instance, could be how they do it (speculation):
Said bot applications watch the tables and data mine all information that gets "posted" to the chat log. They do this by already having a table of images that correspond to, for example, all letters of the alphabet (since PokerStars doesn't post their text as... text. All text in their software is actually an image). So, the bot then rips an image of the chat log, matches it against the store, converts the data to a format they can work with, and throws it in a database. Done.
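Purely as an illustration of that "table of images" idea (this is speculation about their approach, not a description of it), a rough Python sketch with Pillow might look like the following. It assumes you have already saved one reference image per character and that a chat-log line can be cropped into fixed-width glyph cells, which is a big simplification.

    from PIL import Image, ImageChops
    import glob, os

    # Load reference glyphs saved earlier, e.g. glyphs/A.png, glyphs/B.png, glyphs/7.png ...
    TEMPLATES = {
        os.path.splitext(os.path.basename(path))[0]: Image.open(path).convert("L")
        for path in glob.glob("glyphs/*.png")
    }

    def match_glyph(cell):
        """Return the template character whose pixels differ least from this cell."""
        cell = cell.convert("L")
        best_char, best_score = "?", float("inf")
        for char, template in TEMPLATES.items():
            if template.size != cell.size:
                continue
            diff = ImageChops.difference(cell, template)
            score = sum(diff.getdata())  # total pixel difference; 0 means identical
            if score < best_score:
                best_char, best_score = char, score
        return best_char

    def read_line(chat_line_img, glyph_width):
        """Split a cropped chat-log line into fixed-width cells and decode each one."""
        w, h = chat_line_img.size
        cells = [chat_line_img.crop((x, 0, x + glyph_width, h)) for x in range(0, w, glyph_width)]
        return "".join(match_glyph(c) for c in cells)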
[edit]
No, the data isn't sold to them by the poker sites themselves. This would be a PR nightmare if it ever got out, which it would. And it wouldn't account for the functionality of these sites (OPR, Sharkscope, etc.), which appears to be instantaneous. There are, without a doubt, applications running that are ripping the data in real time from the poker software, likely using the methods I listed.
Maybe I can help.
I play poker, run a HUD, look at the stats and am a software developer.
I've seen a few posts on this suggesting it's done by OCR software grabbing the screen. Well, that's really difficult and processor hungry, so a programmer wouldn't choose to do that unless there were no other options.
Also, because you can open multiple windows, the poker window can be hidden or partially obscured by other things on the screen, so you couldn't guarantee to be able to capture the screen.
In short, they read the log files that are output by the poker software.
When you install a HUD like Sharkscope or Jivaro, it runs client software on your PC. That software reads the log files and updates the HUD's servers with every hand you play.
Most poker software is similar, but let's start with PokerStars, as that's where I play. The poker software writes to local log files for every action you/it makes. It shows your cards, any opponents' cards that you see, plus what you do, e.g. which button you have pressed, how much you/they bet, etc. It posts these updates in near real time and timestamps the log file.
You can look at your own files to see this in action.
On a PC do this (not sure what you do on a Mac, but it will be similar):
1. Load File Explorer
2. Select VIEW from the menu
3. Select HIDDEN ITEMS so that you can see the hidden data files
4. Go to C:\Users\Dave\AppData\Local\PokerStars.UK (you may not be called DAVE...)
5. Open the PokerStars.log.0 file in NOTEPAD
6. In Notepad, SEARCH for updateMyCard
7. It will show your card numerically
3c for 3 of Clubs
14d for Ace of Diamonds
You can see your opponents cards only where you saw them at the table.
Here are a few example lines from the log file.
OnTableData() round -2
:::TableViewImpl::updateMyCard() 8s (0) [2A0498]
:::TableViewImpl::updateMyCard() 13h (1) [2A0498]
:::TableViewImpl::updatePlayerCard() 7s (0) [2A0498]
:::TableViewImpl::updatePlayerCard() 14s (1) [2A0498]
[2015/12/13 12:19:34]
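If you want to play with those lines programmatically, here is a small Python sketch that parses them. It assumes the format shown above (rank as a number, suit as a letter) is stable; I haven't checked it against every client version.

    import re

    RANKS = {11: "Jack", 12: "Queen", 13: "King", 14: "Ace"}
    SUITS = {"c": "Clubs", "d": "Diamonds", "h": "Hearts", "s": "Spades"}

    CARD_LINE = re.compile(r"update(?:My|Player)Card\(\)\s+(\d+)([cdhs])")

    def card_name(rank, suit):
        return f"{RANKS.get(rank, rank)} of {SUITS[suit]}"

    def parse_log(path):
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                m = CARD_LINE.search(line)
                if m:
                    rank, suit = int(m.group(1)), m.group(2)
                    who = "me" if "updateMyCard" in line else "opponent"
                    print(who, card_name(rank, suit))

    # Example: parse_log(r"C:\Users\Dave\AppData\Local\PokerStars.UK\PokerStars.log.0")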
cheers, hope this helps
Dave
I've thought about this, and have two theories:
The "sniffer" sites have every table open, AND:
Are able to pull the hand data from the network stream. (or:)
Are obtaining the hand data from the GUI (screen scraping, pulling stuff out via the GUI API).
Alternately, they may have developed/modified clients to log everything for them, but I think one of the above solutions is likely simpler.
Well, they have two choices:
they spider/grab the data without consent. Then they risk being shut down at any time. The poker site can easily detect such monitoring at this scale and block it. They also risk a lawsuit for breach of the terms of service, which probably disallow the use of robots.
they pay to get the data directly. This saves a lot of bandwidth (e.g. not having to load the full pages, do the extraction, or keep up with HTML changes) and makes their business much less risky (legally and technically).
Guess which one they more likely chose; at least if the site has been around for some time without being shut down every now and then.
I'm not sure how it works, but I have an application ID and a key, which you get as a gold or silver subscriber. Sign up for a month and send them an email, and you will get access and the API documentation.

Case studies or examples of high-throughput services with highly dynamic data

I'm looking for some architecture ideas on a problem at work that I may have to solve.
the problem.
1) our enterprise LDAP has become a "contact master" filled with years of stale data and unused and unmaintained attributes.
2) management has decided that LDAP will no longer serve as a company phone book. it is for authorization purposes only.
3) the company has contact type data about people in hundreds of different sources. we need to scrub all the junk out of LDAP and give the other applications a central repo to store all this data about a person.
the ideal goal
1) have a single source to store all the various attributes about a person
2) the company probably has info on 500k people (read 500K rows)
3) I estimate there could be 500 to 1000 optional attributes on these people (read 500+ columns)
4) data would primarily be set/get via xml over jms (this infrastructure is already in place)
5) individual groups within the company could "own" columns. only they would be allowed to write to their columns, they would be responsible for keeping the data clean.
6) a single record lookup should be returned in sub-second time
7) system should support 1 million requests per hour at peak.
8) the primary goal is to serve real time data to the enterprise, reporting is a secondary goal.
9) we are a Java, Oracle, Teradata shop. We are your typical big IT shop.
my thoughts:
1) originally I thought LDAP might work, but it doesn't scale when new columns are added.
2) my next thought was some kind of NoSQL solution, but from what I have read, I don't think I can get the performance I need, and it's still relatively new. I'm not sure I can get my manager to sign off on something like that for such a critical project.
3) I think there will be a meta-data component to the solution that will track who owns the columns, what each column represents, and the original source system.
Thanks for reading, and thanks in advance for any thoughts.
SQL
With Teradata-grade tools an SQL-based solution may be feasible. I came across an article on database design a while ago that discussed "anchor modeling".
Basically, the idea is to create a single, dumb, synthetic primary key table, while all real or meta data lives in other tables (subsets) and is attached by way of a foreign key + join.
I see the benefit of this design being two-fold. First, you can more easily compartmentalize data storage either for organizational or performance reasons. Second, you only create additional rows for records that have data in any given subset, so you use less space and indexing and searching are faster.
Subsets might be based on maintainer or some other criteria. XML set/get would be per-subset/record (rather than global record). All subsets for a given records can be composited and cached. Additional subsets can be created for metadata, search indexes, etc., and these can be queried independently.
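To make the anchor-modeling idea concrete, here is a tiny illustrative sketch. It uses Python's built-in sqlite3 purely so it runs anywhere; in your shop the same shape would be Oracle or Teradata DDL, and the table and column names are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- The anchor: nothing but a synthetic key, one row per person.
        CREATE TABLE person_anchor (
            person_id INTEGER PRIMARY KEY
        );

        -- One subset table per owning group; rows exist only for people
        -- that group actually has data for.
        CREATE TABLE hr_contact (
            person_id INTEGER PRIMARY KEY REFERENCES person_anchor(person_id),
            desk_phone TEXT,
            office_location TEXT
        );

        CREATE TABLE payroll_info (
            person_id INTEGER PRIMARY KEY REFERENCES person_anchor(person_id),
            cost_center TEXT
        );
    """)

    conn.execute("INSERT INTO person_anchor (person_id) VALUES (42)")
    conn.execute("INSERT INTO hr_contact VALUES (42, '555-0100', 'Building 7')")

    # A full record is just the anchor joined to whichever subsets have rows.
    row = conn.execute("""
        SELECT a.person_id, h.desk_phone, h.office_location, p.cost_center
        FROM person_anchor a
        LEFT JOIN hr_contact h ON h.person_id = a.person_id
        LEFT JOIN payroll_info p ON p.person_id = a.person_id
        WHERE a.person_id = ?
    """, (42,)).fetchone()
    print(row)  # payroll columns come back NULL because that subset has no row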
NoSQL
NoSQL seems similar to LDAP (in theory, at least) but the benefit of a good NoSQL tool would include greater abstraction of metadata, versioning, and organization. In fact, from what I've read it seems that NoSQL datastores are designed to address some of the issues you've raised with respect to scaling and loosely structured data. There's a good question on SO regarding datastores.
Production NoSQL
Off-hand, there are a handful of large companies using NoSQL in massively-scaled environments, such as Google's Bigtable. It seems like the perfect tool for:
6) a single record lookup should be returned in sub-second time
7) system should support 1 million requests per hour at peak.
Bigtable is only available (to my knowledge) through AppEngine. Other, similar technologies are listed here.
Other Thoughts
The bigger picture view looks more or less the same regardless of the technology you decide to use. E.g. compartmentalize storage, composite views, cache views, stick metadata somewhere so you can find things.
The performance characteristics you're targeting are going to require some kind of caching and/or optimization based on real-world usage patterns. Regardless of the solution you choose, you probably can't resolve that in the design phase.
A couple of thoughts:
1) our enterprise LDAP has become a "contact master" filled with years of stale data and unused and unmaintained attributes.
This isn't really a technological problem. You will have this problem with a new system as well, LDAP or not.
"LDAP ... doesn't scale"
There are lots of huge LDAP systems out there. LDAP is surely a dark art, but I'd be willing to bet that it scales better than any SQL equivalent in this situation. Not to mention that LDAP is a standard for this kind of info, and as such it is accessible from zillions of different kinds of systems.
Maybe what you're looking for is a new LDAP system that's easier to manage / has better admin tools?
You may want to look into Len Silverston's Party Model. Here's a link to his book: http://www.amazon.com/Data-Model-Resource-Book-Vol/dp/0471380237.
I have no experience building something on that scale, though I think that modeling it as 500k rows x 500-1000 columns sounds a bit ridiculous.

Webservice on Navision / Microsoft Dynamics version 5... or else?

Go bounty!
This question has earned me a tumbleweed badge (7 views in 7 days!), which is a pretty strong confirmation that Navision has a very limited market share, and, I suspect, a hint that Navision isn't all that great a piece of software either...
But hey... that's what we got as a back-end, so I am ready to fight with this. :-O
If there is some daring Navision developer who is able to shed some light on this... the bounty is there for you! :)
Original Post
I have recently implemented a rather complex e-commerce system that interacts with a legacy back-end based on Navision 5. So far the exchange of data between the two platforms has happened via XML files, but this method is quite clumsy and very much prone to mishaps.
Our needs are:
To expose certain elements of the business logic of each platform to the other one (for example: "what's the total amount ever purchased by this customer?", "what are the products currently on offer?", "how many new customers have registered on the website?", etc...).
To have mechanisms of feedback/validation for the various transactions (for example: "Here's a new order from customer X" ... "Ok, got it, the order will now start to be processed" ... "Ok, copy that, bye!").
If possible, to avoid playing around with files and keep all of this happening in terms of calls/ports/services...
The most natural way I could think of would be to integrate the two systems via webservice, but Navision 5 does not support this natively. So I did my "due diligence" and found a few things on MSDN including this article and this other one.
According to these articles it should not be that difficult to create a webservice on Navision 5, but when I suggested this solution to the team in charge of the legacy system, they told us that it is "pure theory" and they do not know of anybody who ever implemented it.
I have no reason to doubt their word, but mileage can vary... and I thought that maybe in the SO community there are professionals from other countries who actually implemented something similar and are available to share their experience.
So, my question is twofold:
Is there anybody who has tried this at home and would be available to share a bit about what the greatest difficulties were, whether the final result is reliable, whether they think the outcome is worth the effort, etc.?
Is there anybody who faced a similar problem but solved it with a different approach and would be available to present their solution ("I never did it myself, but if I had to do it I would do it like this..." type of answers are also welcome)?
Thank you in advance for your time! :)
I too will chime in with a not-too-helpful answer about Nav 6 :)
I've just completed a project using Nav 6. Surprisingly, the webservices are VERY easy to expose and consume. It's really a trivial matter to go find an object in the webservices interface and tick a box to tell it to expose itself.
Unsurprisingly, the webservices don't work as you'd expect; you often have to use some trial and error to get objects and properties to persist, as it's really touchy about the sequence of events you use to save an object. And each object seems to work slightly differently. E.g. to create a customer, I eventually found out that you have to create and save a blank customer, massage this new record with a codeunit, then fetch the record and then write the customer's attributes and save again. I expected to just create a new customer(), set the attributes and save in one quick swoop.
I guess you're not too keen on upgrading to Nav 6, but off the top of my head, here is how you could simulate web services:
Sharepoint can already consume and expose webservices, so that tier isn't a problem.
Nav 5 doesn't have them 'naturally', but you could cook up your own program that acts as a webservice 'broker'. You're already getting info into and out of Nav, mostly via XML. You could build this broker to take the XML files as input and massage them for use in a webservice call. You could even forgo the XML and write and read directly from the DB, as all the Nav info is stored there anyway. So here's what I'm thinking:
NAV <-> SQL SERVER <-> New 'broker' webservice <-> Sharepoint
Or if you already have the NAV API down pat and want to reuse your XML:
NAV <-> XML files <-> New 'broker' webservice <-> Sharepoint
If you are using XML and filewatchers, the latency shouldn't be too bad; filewatchers usually pick up a drop or change in milliseconds.
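Here is a hedged sketch of that 'broker' idea in Python, using simple directory polling instead of a real filewatcher API. The folder names and the forwarding step are placeholders for whatever your NAV export and SharePoint endpoint actually look like.

    import time, shutil
    from pathlib import Path
    import xml.etree.ElementTree as ET

    INBOX = Path("nav_outbox")        # where NAV drops its XML files (placeholder)
    PROCESSED = Path("processed")     # where the broker archives handled files
    PROCESSED.mkdir(exist_ok=True)

    def forward(order):
        # Placeholder: here you would call the SharePoint/webservice side,
        # e.g. with an HTTP POST. Printing stands in for that call.
        print("forwarding order", order.get("id"), "for customer", order.findtext("customer"))

    def poll_forever(interval=1.0):
        while True:
            for xml_file in sorted(INBOX.glob("*.xml")):
                try:
                    order = ET.parse(xml_file).getroot()
                    forward(order)
                except ET.ParseError:
                    print("skipping malformed file", xml_file.name)
                shutil.move(str(xml_file), PROCESSED / xml_file.name)
            time.sleep(interval)

    # poll_forever()  # uncomment to run; latency is roughly the polling interval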
Hmm, but I'm thinking you are probably supposed to use BizTalk for stuff like this:
NAV <-> BizTalk <-> Sharepoint
But I don't know how easy it would be to set up BizTalk to communicate with Nav. I'll bet that it's pretty straightforward to get comms working, but this is speculation.
In any case, I don't know how helpful this post is, but maybe it gives you a few ideas to go with.
Cheers,
Lance
Where I work, we were able to use one of the web services from NAV 6 to integrate with SharePoint, so you can look up a customer or record and show it in a web part in SharePoint. I know your question is about NAV 5 in particular, but I have only seen this working on NAV 6. And I wasn't the developer who worked on this, so I don't have any more specifics I'm afraid.
Did you try asking on mibuso.com? They're much more Navision-focused.
When you say expose business logic, does this include executing AL code (e.g. a CodeUnit)? If you only need to perform queries against the database you could use NODBC & System.Data.Odbc or the CFront .NET API. Either of these can easily be wrapped using a .NET web service, and both support the native NAV database. To pass messages back you would still need to use COM, as described in the first article you mention.
Any of the above are entirely possible, and relatively easy depending on your proficiency in .NET, COM and NAV.
The second article you linked to describes using the NAS. I'm no expert on this, but I think this might require a special license granule. Before spending time implementing anything, it's worth checking whether your license includes the NAS, CFront or NODBC.
You can actually do a "technical upgrade" of NAV 5 to NAV 2009 and then use the native webservices. Meaning replacing the exe-files and the entire application (but not the objects, which will still be version 5), and a couple of other tricks. But it works, and thus you have 2009-webservice-functionality on your NAV 5 :-)
For the benefit of anyone googling this, if you're still on Navision 6 with the native database, a good way of connecting it is via the Windows message queue.
You can write your own web service that puts XML requests into the queue and then waits for the reply to pop up. On the Navision side, you have one client that polls the queue and puts answers into the reply queue. There's a special non-GUI client called NAS for that.
I've been using this approach for 15 years to connect booking engines to our Navision back end. It works very well and has the advantage that you can peek at requests before they reach Navision, so you can protect your back end from too many or faulty requests.
The only problem with this approach is that the COMMIT command within Navision is super expensive. It's difficult to serve high volumes of requests. As soon as the back end needs committing, you are down to just a few requests per second. This may be fine for low volume.
For high volume you need to implement caching or move some of the business logic out. In my case it was getting hit by price comparison websites, so the solution was to serve those from web services written in Python and only pass requests on to the back end if someone buys something...
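Just to illustrate the request/reply shape described above (not the actual Windows queue plumbing), here is a Python sketch using in-process queues and a correlation ID, with the NAS side stubbed out.

    import queue, threading, uuid

    request_q = queue.Queue()   # stands in for the request message queue
    reply_q = queue.Queue()     # stands in for the reply message queue

    def nas_worker():
        """Stub for the non-GUI NAV client (NAS): pull a request, produce a reply."""
        while True:
            msg = request_q.get()
            # ... here the real NAS would run NAV business logic and COMMIT (the slow part) ...
            reply_q.put({"correlation_id": msg["correlation_id"], "status": "OK"})

    threading.Thread(target=nas_worker, daemon=True).start()

    def call_nav(payload, timeout=5.0):
        """Web-service side: drop a request on the queue and wait for the matching reply."""
        correlation_id = str(uuid.uuid4())
        request_q.put({"correlation_id": correlation_id, "payload": payload})
        while True:
            reply = reply_q.get(timeout=timeout)
            if reply["correlation_id"] == correlation_id:
                return reply
            reply_q.put(reply)  # not ours; put it back for another caller

    print(call_nav("<order><customer>X</customer></order>"))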

What is data mining from a developer's perspective?

I can find the technical explanation of what data mining is in a book or on Wikipedia, but I'm wondering what sort of development it actually involves. Is it more about using tools or more about writing tools? Is it really that different from other domains when it comes to R&D?
Data Mining is the process of discovering interesting patterns in large amounts of data. It is not querying data, which is just what user Treb describes (sorry Treb).
To understand DM from a developer's perspective, you should read the book Programming Collective Intelligence by Toby Segaran.
In my experience (I'm a former data miner :-)), it's a mixture of using tools and writing tools. A lot of the time, the tools you need to analyse the particular data set don't exist, so you have to write them yourself first. It can be very interesting but you often need quite a different approach to the sort of programming I do now (embedded wireless), for example.
You really ought to change the accepted answer on this question so it doesn't mislead those who come across it.
Saying that querying a database IS data mining because "[h]ow would you discover any pattern in your data without querying first?" is like saying opening your car door is driving because "how else would you be able to drive somewhere without opening the car door first."
You can read your data out of a text file if you want. My first data mining assignment used data sets from the UCI repository and those are almost all text files.
If you want to learn about data mining, start by looking up clustering and classification. Learn about decision trees and rule-based classification. Then look at k-nearest-neighbor and k-means. After that, if you really want to see what data mining is all about, look at Chameleon, DBSCAN, and Support Vector Machines. Don't necessarily learn the minutiae of the last three (they're pretty complex and math-heavy), but understanding the abstract idea of what happens will tell you all you need to know in order to use the many tools and libraries that are available for each strategy.
These are only the algorithms that popped into my head just now. There are so many others that I don't recall or don't even know yet.
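If you want a feel for how this looks with an off-the-shelf library, here is a tiny k-means example with scikit-learn; the data is synthetic, generated on the spot just to show the fit-then-inspect workflow.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Synthetic 2-D data with three "hidden" groups, standing in for a real data set
    # (e.g. one of the UCI text files mentioned above after you parse it).
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    model = KMeans(n_clusters=3, n_init=10, random_state=0)
    labels = model.fit_predict(X)

    print("cluster centers:\n", model.cluster_centers_)
    print("first ten assignments:", labels[:10])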
Data mining is about searching large quantities of data for hidden patterns. Web 2.0 example: News Corp uses its site myspace.com as a large data mine to determine what movies and products to promote. They write software to identify trends in the data that its users post to the site. News Corp does this to gather information useful for advertising campaigns and market predictions. It's different from other domains of R&D in that, from a data giver's perspective, it's passive. Rather than going out on the street and asking people in person what movies they are likely to see this summer and other such questions, the data mining tools sort out these things by analyzing data given by users voluntarily.
Wikipedia actually does have a pretty good article on it:
- http://en.wikipedia.org/wiki/Data_mining
Data mining, as I'd put it, is finding patterns or trends in given data. A developer's perspective might be in applications like anti-money laundering, where, given a pattern, you search the data for that pattern. One other use is in projection software, where you project a future result or outcome against a heuristic by studying and recognizing the current trend in the data.
I think it's more about using off-the-shelf tools rather than developing your own. An academic example of that kind of tool might be WEKA. Of course, you still have to know what algorithms to use, how to preprocess the data (this part is very important), etc.
As for R&D, I don't have much idea, but it should be like almost everything else: maths, statistics, more maths...
On the development level, data mining is just another database application, but with a huge amount of data.
The mining itself is done by running specific queries on the database. It's in the creation of the queries where the important work is done. They of course depend on the data model, and on the hypotheses, what sort of trends the customer expects to find.
Therefore, the fine tuning of the queries usually can't be done in development, but only once the system is live and you have live data. Then the user can test his hypotheses and adapt the queries to show him the trends he is looking for.
So from a dev point of view, data mining is about
Managing large sets of data in your client (one query may return 100,000 rows of data; see the sketch after this list)
Providing the user (who may know nothing about SQL or relational databases in general) with an effective way to modify his queries and view the results.
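On the first point, a common pattern is to stream the result set in chunks rather than pulling all 100,000 rows into memory at once. A rough sketch using Python's DB-API (the table and query are throwaway examples):

    import sqlite3  # any DB-API driver (cx_Oracle, psycopg2, ...) works the same way

    def fetch_in_chunks(conn, sql, params=(), chunk_size=5000):
        """Yield rows chunk by chunk so the client never holds the full result in memory."""
        cur = conn.cursor()
        cur.execute(sql, params)
        while True:
            rows = cur.fetchmany(chunk_size)
            if not rows:
                break
            for row in rows:
                yield row

    # Example with a throwaway in-memory table:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE hits (id INTEGER, region TEXT)")
    conn.executemany("INSERT INTO hits VALUES (?, ?)", [(i, "EU") for i in range(100000)])

    count = sum(1 for _ in fetch_in_chunks(conn, "SELECT id, region FROM hits WHERE region = ?", ("EU",)))
    print(count, "rows streamed")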