I am currently working on a C++/COM project using ArcEngine (from ESRI). Aside from the fact that there is little to no support in terms of documentation (the SDK is there), I am wondering if anyone here has experience making the initialization process of ArcEngine faster. Right now it takes 30-35 seconds just to initialize the engine, and we are going to be running several of these applications. Does anyone have any experience with this?
It's a very weird and odd task, but ESRI's developer forums are no help, and I couldn't find anything on Google.
Any ideas?
It's been almost a decade since I last played with ESRI stuff, so I can't help you with anything specific to ArcEngine.
Maybe you can pool instances? In the best-case scenario you would be able to reuse ArcEngine instances, returning an instance to the pool after you're done with it.
If that's not possible, you could at least try to have a number of instances ready to roll, although whether that is possible and/or useful depends a lot on the specifics of your app.
Is it really COM? In that case, ArcEngine will be exposing a set of COM interfaces. COM interfaces are not magic, and they are not uniquely bound to one program. In fact, COM has explicit support for proxying; this is used by DCOM, for example: you get a local proxy for the remote server.
In this case, it should be possible to write a custom COM proxy that fakes the initialization stuff but forwards everything else. Towards your client, the proxy's COM interface is identical, just faster; towards ArcEngine, your proxy can afford to wait quite a while between calls.
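To make that concrete, here's a minimal C++ sketch of the idea, assuming a hypothetical IEngine interface (the real ArcEngine interfaces, methods and IIDs differ): the proxy reports initialization as done immediately and pays the real, slow cost lazily on the first working call.

```cpp
#include <windows.h>

// Hypothetical ArcEngine-style interface; the real interfaces and IIDs differ.
struct IEngine : public IUnknown {
    virtual HRESULT STDMETHODCALLTYPE Initialize() = 0;
    virtual HRESULT STDMETHODCALLTYPE DoWork(int job) = 0;
};

// Proxy that reports Initialize() as instantaneous and pays the real
// (slow) initialization cost lazily, on the first working call.
class LazyEngineProxy : public IEngine {
    IEngine* inner_;            // the real engine
    bool initialized_ = false;
    LONG refs_ = 1;
public:
    explicit LazyEngineProxy(IEngine* inner) : inner_(inner) { inner_->AddRef(); }

    // IUnknown boilerplate (simplified: a real proxy must wrap QueryInterface too).
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, void** out) override {
        return inner_->QueryInterface(iid, out);
    }
    ULONG STDMETHODCALLTYPE AddRef() override { return InterlockedIncrement(&refs_); }
    ULONG STDMETHODCALLTYPE Release() override {
        ULONG r = InterlockedDecrement(&refs_);
        if (r == 0) { inner_->Release(); delete this; }
        return r;
    }

    // Towards the client this returns immediately; the 30s hit is deferred.
    HRESULT STDMETHODCALLTYPE Initialize() override { return S_OK; }

    HRESULT STDMETHODCALLTYPE DoWork(int job) override {
        if (!initialized_) {
            HRESULT hr = inner_->Initialize();   // the real, slow call
            if (FAILED(hr)) return hr;
            initialized_ = true;
        }
        return inner_->DoWork(job);
    }
};
```

The catch, of course, is that any call path that genuinely needs a fully initialized engine will still block the first time it runs.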
Something that I have found useful with getting ESRI products to start faster (not necessarily ArcEngine, but this probably applies) is to specify the port number (generally 27004) in the registry where the license server is defined.
HKEY_LOCAL_MACHINE\SOFTWARE\ESRI\License\LICENSE_SERVER
HKEY_LOCAL_MACHINE\SOFTWARE\ESRI\ArcInfo\Workstation\8.0\LICENSE_SERVER
When you set this during installation or through the Desktop Administrator, it is generally something like: @yourserver.name
Change this to 27004@yourserver.name
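For reference, the same change expressed as a .reg file, with a placeholder server name (the exact key path depends on your product and version, as above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ESRI\License]
"LICENSE_SERVER"="27004@yourserver.name"
```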
Again this may not solve your issue, but if you're not doing it, it's worth a try. I've found it to speed things up in our environment, both using a license manager on a network and with a hardware dongle on the local machine.
Well, from my understanding, ArcEngine initialization sets up a special COM environment.
You don't ever get any sort of real handle to the initialized environment. Can you somehow store a COM environment and pass it to other programs? My current idea is:
a Windows service running in the background with an initialized ArcEngine; a program somehow queries the service, and the service returns the COM environment. Is this even possible?
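In general COM terms this is possible: the service can run as an out-of-process COM server, and clients attach to the object it hosts via CLSCTX_LOCAL_SERVER, so the expensive initialization happens only once, in the service. A compileable sketch with made-up CLSID/IID values (whether ArcEngine's objects survive cross-process marshaling, and what each marshaled call costs, is a separate question):

```cpp
#include <windows.h>
#include <objbase.h>

// Made-up IDs: a real holder service would define and register its own coclass.
static const CLSID CLSID_EngineHolder =
    {0x11111111, 0x2222, 0x3333, {0x44,0x44,0x55,0x55,0x55,0x55,0x55,0x55}};
static const IID IID_IEngineHolder =
    {0x66666666, 0x7777, 0x8888, {0x99,0x99,0xAA,0xAA,0xAA,0xAA,0xAA,0xAA}};

int main() {
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    // CLSCTX_LOCAL_SERVER tells COM to connect to an out-of-process server
    // (here: the service), so the expensive engine init happens once, there.
    IUnknown* holder = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_EngineHolder, nullptr,
                                  CLSCTX_LOCAL_SERVER, IID_IEngineHolder,
                                  reinterpret_cast<void**>(&holder));
    if (SUCCEEDED(hr)) holder->Release();

    CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}
```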
I had a lot of grief with ESRI forums providing very little help. It feels like Arc* developers are largely on their own.
Using ArcEngine + .NET, the initialization time for an application has been trivial (maybe 1 second?) in our environment -- are you using a slow remote server, or is this JUST the engine, with no network or maps being loaded?
Whenever I've had to deal with large data sets, though, ESRI has been a pig.
Good to see some discussion on SO of ESRI products! Not a lot here yet...
Exactly what line is taking 30-35 seconds? If I had to do some psychic debugging, I would guess that you are running into a problem with your license server.
Check that first.
Suppose an application will never have an internet connection during its lifetime: how can you prevent piracy of the software?
A single product-key check during installation is not enough because, once the application is installed legitimately, anybody can copy the installation and redistribute it.
So every time the application runs it should check for something and crash if the check fails.
Now what could it possibly check?
Initially I thought keeping an encrypted binary file would do the job, but as answered here, that seems to offer negligible protection.
Any hacker can modify the executable so that instead of crashing when the check fails it should continue running.
So no matter how difficult the check is, the cracked application will always run.
Now I cannot see any possible solution to this problem.
PS: I am a single independent developer who is developing productivity software for a very low price. Seeing this question I believe I just have to let it go. Sigh....
EDIT: I would like to thank all the contributors in this discussion for letting me know the grim reality...
What I understand now is that you are indirectly shipping the source code of your application in the form of the target executable. The executable can be modified by anybody using a debugger, thus ANY method of preventing piracy through checks in your application is useless. The only possible solution is to keep your legitimate customers happy by providing them services (apart from the software) and keeping your price below their expectations.
I had been thinking about solving this problem for the past 3 days, and now none of it seems worthwhile - but I still learnt a lot in the process, which I wouldn't have otherwise...
The only standalone thing I've seen that is semi-effective is hardware keys that come with the boxed software. They used to attach to a parallel port or a serial port and get checked when you started the program.
AutoCAD and similar programs used to do this, but it is a BIG PAIN for your customers. Any time the key doesn't read, or a key goes bad, customer productivity suffers. It hurts your legitimate customers far more than those who end up pirating it anyway, and a sufficiently motivated pirate can make a VM that will overcome it. Modern versions of this use USB.
My recommendation is to trust people. Upon install, make them click a "I promise I paid for this" button and be done with it. If they click "I didn't pay for this" show them a small paragraph about how to help keep good software coming and prevent customer-harming DRM schemes by simply contributing to the success of good software authors.
You could generate a unique copy for each user, create a database, and check it against copies you find online, if you like playing the biggest game of whack-a-mole ever.
I have an interesting conundrum. I have been challenged with identifying the most suitable process in which to create a "browser front-end" to an existing multi-user application built within the TigerLogic/Pick D3 environment. My research indicates there are many ways to do this, but I am struggling to decide which method is best or where to start. I have "played" with a few technologies, but a commitment to one is needed to get started.
These methods include:
Creation of a complex web service using the MVS Toolkit, and engineering a client from WSDL either from scratch or using maven/wsimport. Tests indicate there is a lot more to this process than originally thought for a simple WSDL.
Development of a Java-based web app that harnesses the MVSP Java API - I am not a Java developer, so this means learning a new language. Development would most likely take place in Eclipse.
Using TigerLogic's FlashCONNECT - resulting in additional expenditure for clients, so not preferable - and more or less ruled out.
There is also the .NET option - but I have ruled this out on the basis of needing portability.
My question is, has anyone else out there done anything like this and could you share your experience? My first task is to build a web-app that will reliably give me the D3 TCL prompt in a browser that I can customise.
I am not sure there is a definitive answer here but would like peoples thoughts and will label the most useful as the answer.
What path you choose depends in some part on your existing skill-set and whether that fits in with your portability needs. It is very difficult to give you a concrete answer because we don't know in which part of the chain you need the portability.
It is however possible to develop a web-browser front-end using .NET which will run on Linux or Windows, so I don't see an issue with portability here. Your web server will have to be windows based but it shouldn't matter whether D3 is running on Linux or Windows at the server-end, or whether the client desktops are running Linux or Windows.
You could try TigerLogics MVSP .NET API but I do not know if it has the power to deliver based on your needs. I believe you may find mv.NET from Bluefinity could fulfill your needs. This is in my opinion the leading product on the MultiValue market for achieving the goals you have in mind. This will mean spending money of course. For this you will get a very powerful set of tools. Also, the cost of investing in a good tool could end up smaller than the cost in terms of time, effort and potential complications of trying to do this piecemeal without spending any extra money. I am sure Flashconnect would also do the job. You would have to weigh up the cost of the different options to find out which one is right for you both technically and financially.
Not knowing whether or not you have .NET in your skill-set, I don't know whether the .NET option would be easier for you. It is however technically possible.
I would suggest using Rocket's D3 (formerly TigerLogic D3) .NET APIs to create a Web API RESTful service that you can consume with JavaScript or any other web technology, and if you need to call from a D3 subroutine, use the MVS Toolkit.
It does require D3 9.0 or later, though.
I've used all of the technologies described, and many more, to interface with D3. I agree with @Glenn and will add... I understand you're edging away from .NET. That's fine, you don't need it. But consider that most LAMP implementations separate the DBMS servers from the web servers. That topology introduces short delays between the tiers but decouples them in case you want to use multiple web servers or multiple databases - a common topology even with D3 / MV.
I have a client where we have a Java/Grails front-end over Linux, with all data queries filtered through a single, elegant data provider class that's abstracted from application logic. That uses a web service call which I wrote in Java, calling to a .NET web service. The service is easily generated/modified, as is the client from the WSDL. From there IIS carries inbound queries to D3 via mv.NET, and at this point it doesn't matter if the D3 DBMS is in Linux or Windows. My web service could have as easily been in Linux with Java but it would then lack a pooling mechanism - see below.
If you want all Linux then you can go with the MVSP Java library. TigerLogic (now acquired by Rocket Software) committed to a PHP binding for MVSP some months ago. Rather than wait, one of my clients created a PHP wrapper around mv.NET, though MVSP is as easy. So the resulting application is essentially LAMP, but with the M = Multivalue. I have written code like this too - we can write a wrapper in any language which exposes a useful API and abstracts both connectivity method and OS dependencies. In other words it doesn't matter what languages we want to use or what OS's are involved. That part is rather trivial and subject to change later. It's better to focus on the application than the communications.
You can also go off the menu, so to speak, and create your own Java/PHP wrapper around the OS-level d3tcl command, which is a script/wrapper around the d3 executable. This allows you to open a connection yourself and pass in commands.
Whatever option you select, you need to consider that opening and closing a DBMS connection is a slow process. You do not want to script a login around every data request. You do want to open a connection and keep that open persistently, while your client code accesses and releases that persistent connection as required. This is why we like mv.NET and FlashCONNECT. With MVSP and other mechanisms you need to create your own persistence model. You'll also need to manage a pool of connection resources - what happens when you get 10 simultaneous queries, or just 1 short one after one long one? You don't want queries to back up, you don't want to reject or timeout connections, and you don't want to fire up a connection for every client. You do want the proper number of DBMS sessions waiting for inbound connections. mv.NET and FlashCONNECT do this for you, the others do not.
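As a rough illustration of that persistence model, here is a minimal fixed-size pool sketch in C++, where D3Connection is a stand-in for whatever session object your connectivity layer (MVSP, mv.NET, a d3tcl wrapper) actually gives you:

```cpp
#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <queue>
#include <string>

// Stand-in for whatever session object MVSP / mv.NET hands you.
struct D3Connection {
    void execute(const std::string& tclCommand) { /* send to D3 */ }
};

// Minimal fixed-size pool: every connection is logged in once, up front;
// callers block until a connection is free instead of logging in per request.
class ConnectionPool {
    std::queue<std::unique_ptr<D3Connection>> idle_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    explicit ConnectionPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            idle_.push(std::make_unique<D3Connection>()); // slow login happens here
    }
    std::unique_ptr<D3Connection> acquire() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !idle_.empty(); }); // queue, don't reject
        auto c = std::move(idle_.front());
        idle_.pop();
        return c;
    }
    void release(std::unique_ptr<D3Connection> c) {
        { std::lock_guard<std::mutex> lock(m_); idle_.push(std::move(c)); }
        cv_.notify_one();
    }
};
```

A production pool would also need timeouts, health checks and re-login for dead sessions, which is exactly the machinery mv.NET and FlashCONNECT already provide.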
Personally I'd shy away from FlashCONNECT. I was there for its initial development and testing and for years of end-user implementations. It's not as widely used as the other options and is more a tool for those who aren't familiar with other options. If you're talking about Java then you're probably not inclined to use FlashCONNECT. That said, if you have developers who are not familiar with anything outside D3 then FlashCONNECT is a decent server-side tool for them while someone else is focusing on the client-side with other technologies. Everyone should use their best skillset.
Finally, (already?) if someone is not familiar with external technologies, and more intimate with D3, then other options exist like DesignBAIS and Viságe, mostly removing the burden of communications and allowing developers to work on the client-side features and back-end rules in BASIC.
I discuss all of these topics plus mobile and telephony on my blog.
HTH
How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use something like ADO.NET. This is more of an async/services infrastructure. So if your data was exposed via services, then of course you could connect that way. There are some other light-weight methods you could use for local storage as well using things like the Windows.Storage namespace (which is similar to Isolated Storage in .NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web. Why don't you configure your WCF service to return data as JSON, and use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a wrapper to communicate with the classic SQLite engine...which uses APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and other REST clients - but this only addresses the online scenario. The "structured" part of Windows.Storage is a very limited model, essentially limited to name/value pairs, insufficient for all but the most basic scenarios. Yes there is local file storage, which is great of course. But forcing every app developer out there to build her own DBMS on top of local file storage will simply not cut it, especially with all of System.Data having been removed from the profile. If local file storage was sufficient for most device apps, then things like SQLCE would have no purpose today already. And SQLCE clearly has a purpose, and has played a very important role for occasionally connected device apps for a very long time.

There is also a tremendous need for synchronization with a server-side database such as SQL Azure, mostly to be able to roam data between devices. Yes there is the roaming storage model in WinRT, but it shares the same limitations of local storage mentioned above, and on top of this is very limited in capacity (currently 30KB if memory serves). It is simply insufficient for all but the simplest roaming data needs. Again, forcing every app developer to design and implement her own synchronization solution is very bad. You can do much better to enable developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for the moment they want to force you to abstract to middleware for database connectivity - I don't agree, but I can quasi understand a rationale for that. I can even see pathways for developing like that. But no System.Data.... at all? Do you even understand what you've done to us?

What System.Data can do, outside of just having providers for Sql, OleDb and other custom providers like Oracle, is provide a rich abstraction of XML datasets that allow you to very quickly build a data oriented Service Oriented Architecture.

For instance, I can easily create a web service using SOAP or WCF that returns DataSets or DataTables, and then consume those objects easily and directly. Being able to do this allows very rapid construction of n-tier architectures, even without direct data connections available.

Without System.Data, and the power of DataViews, DataTables, etc. this gets a lot harder. Sure you can custom create structs, put data in there, and serve up structs, and use Linq to do whatever sorting, filtering, etc. you want to do.... but it ends up being twice the work, and makes code reuse a lot harder. And it means using our existing service oriented architecture is impossible (without a big overhaul.)

The withdrawal of System.Data is as big a thing for developers to deal with as the loss of the Printer object in VB6 to VB.NET 1.0 was. What is harder to understand in this case is why it is necessary - re-enabling it in the Metro profile can't possibly be a technical difficulty of the product, can it?

It is valuable enough that I would seriously consider including Mono's System.Data classes as part of any app I create (which would obviously have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether "Obviously... this needs to support the disconnected scenario" is actually true depends very much on the context in which the application is running - if the app is an internal corporate app then quite possibly not, in which case no DB == doesn't work.
Secondly, you could look (hmm, rash... one assumes you could look, this could be a bad assumption) at database synchronisation between a local SQL database and the remote DB, and so on and so forth.
Taking a step back... yes - you're absolutely right: look at it as being the same as phone or Silverlight development (although I don't know if there is RIA support yet). But the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?
There are different "Application memory" options (like 80MB...200MB) at a Django-friendly host called WebFaction, and I'm confused about deciding which one I should buy.
Could someone please walk me through the ideas on how to figure out how much memory my project might require (excluding the operating system, the main Apache server and the database servers' memory requirements)? I understand that in theory I'll need to perform some kind of load testing, but I thought there might be ways to estimate it in advance with some simple, relatively easy-to-understand approach.
I don't know how strictly they enforce the application memory usage limit, and another question is: what will happen if more users come to the site and more threads start than I expected? Will the application crash? Or will delays just become uncomfortable?
And - no, the application is not ready yet (I can't measure anything right now). The development environment, if it matters, is Windows 7, 64-bit. The hosting itself is some kind of Linux, I think.
(Sorry if it's not a stackoverflow question.)
Sorry, but until you have the application completely developed, you can't say anything about the amount of memory it'll use. I recommend that you take their "lowest" plan and upgrade it to fit your needs, or better still: get hosting after you finish developing the application.
On the other hand, if you had the application ready, you could just run it in Apache with your host's config and some sample data to get a rough estimate...
I agree that you can't tell much before your app is ready.
As a vague estimate, consider that your host is supposed to be "Django-friendly", so some "basic" application should run without problems. Try it and upgrade later if that's easily possible.
Also consider the type of data that is processed by your app; e.g. I once ran into trouble when I had to process really large image uploads that made the whole site crash.
Also keep in mind whether you need some RAM for additional processes, e.g. memcache!
WebFaction are indeed a Django-friendly host, and your application will certainly not crash if it starts needing more memory than you have paid for. What will happen is that you will be allowed to use small amounts of extra memory, but if you consistently go over the limit they will send you a polite email requesting that you either reduce the load or pay for more.
I have a program that is using a configuration file.
I would like to tie the configuration file to the PC, so that copying the file to another PC with the same configuration won't work.
I know that the Windows Activation mechanism monitors hardware to detect changes, and that it can tolerate some minor changes to the hardware.
Is there any library that can help me doing that?
My other option is to use WMI to get the hardware configuration and to program my own tolerance mechanism.
Thanks a lot,
Nicolas
Microsoft Software Licensing and Protection Services has functionality to bind a license to hardware. It might be worth looking into. Here's a blog posting that might be of interest to you as well.
If you wish to restrict the use of data to a particular PC you'll have to implement this yourself, or find a third-party solution that can do this. There are no general Windows API's that offer this functionality.
You'll need to define what you currently call a "machine."
If I replace the CPU, memory, and hard drive, is it still the same computer? Network adaptor, video card?
What defines a machine?
There are many, many licensing libraries out there that do this for you, but almost all are paid-for (because, ostensibly, you'd only ever want to protect commercial software this way). Check out what RSA, Verisign, and even Microsoft have to offer. The Windows API does not expose this, ostensibly to prevent hacking.
Alternately, do it yourself. It's not hard to do, the difficult part is defining what you believe a machine to be.
If you decide to track 5 things (HD, Network card, Video card, motherboard, memory sticks) and you allow 3 changes before requiring a new license, then users can duplicate the hard drive, take out two of the above, put them in a new machine, replace them with new parts in the old machine and run your program on the two separate PCs.
So it does require some thought.
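A minimal sketch of that k-of-n tolerance check, assuming the component fingerprints (hypothetical strings here) are gathered elsewhere, e.g. via WMI classes such as Win32_Processor and Win32_BaseBoard:

```cpp
#include <array>
#include <cstddef>
#include <string>

// k-of-n tolerance: the license stores a fingerprint per tracked component
// (HD, NIC, video card, motherboard, RAM); the machine is accepted if at
// least kRequiredMatches still match, tolerating a couple of upgrades.
constexpr std::size_t kComponents = 5;
constexpr std::size_t kRequiredMatches = 3;

bool machineMatches(const std::array<std::string, kComponents>& stored,
                    const std::array<std::string, kComponents>& current) {
    std::size_t matches = 0;
    for (std::size_t i = 0; i < kComponents; ++i)
        if (!stored[i].empty() && stored[i] == current[i])
            ++matches;
    return matches >= kRequiredMatches;
}
```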
-Adam
If the machine has a network card you could always check its MAC address. This is supposed to be unique, and checking it as part of the program's startup routine should guarantee that it only works in one machine at a time... even if you remove the network card and put it in another machine, it will then only work in that machine. This will prevent network card upgrades, though.
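A small Win32 sketch of reading adapter MAC addresses with GetAdaptersInfo; a startup check would compare (or hash) one of these rather than print them. Bear in mind MAC addresses can be spoofed, so this is a deterrent, not a guarantee:

```cpp
#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "iphlpapi.lib")

// Print each adapter's MAC address; a real check would compare a stored
// value or hash instead of printing.
int main() {
    ULONG size = 0;
    GetAdaptersInfo(nullptr, &size);              // first call: get buffer size
    std::vector<char> buf(size);
    auto* info = reinterpret_cast<IP_ADAPTER_INFO*>(buf.data());
    if (GetAdaptersInfo(info, &size) != ERROR_SUCCESS)
        return 1;
    for (auto* a = info; a != nullptr; a = a->Next) {
        for (UINT i = 0; i < a->AddressLength; ++i)
            std::printf("%02X%c", a->Address[i],
                        i + 1 < a->AddressLength ? '-' : '\n');
    }
    return 0;
}
```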
Maybe you could just keep something in the registry? Like the last modification timestamp for this file - if there's no entry in the registry or the timestamps do not match, then fall back to defaults - would that work? (There's more than one way to skin a cat ;) )
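A minimal sketch of that registry idea, with made-up key and value names: store the config file's last-write time, and treat a missing or mismatched stamp as "fall back to defaults".

```cpp
#include <windows.h>

// Made-up key/value names. On startup: if the stored stamp is missing or
// doesn't match the config file's last-write time, fall back to defaults.
bool configMatchesRegistry(const wchar_t* configPath) {
    WIN32_FILE_ATTRIBUTE_DATA fad;
    if (!GetFileAttributesExW(configPath, GetFileExInfoStandard, &fad))
        return false;

    ULONGLONG stored = 0;
    DWORD size = sizeof(stored);
    if (RegGetValueW(HKEY_CURRENT_USER, L"Software\\MyApp", L"ConfigStamp",
                     RRF_RT_REG_QWORD, nullptr, &stored, &size) != ERROR_SUCCESS)
        return false;   // no registry entry on this machine

    ULONGLONG actual = (ULONGLONG(fad.ftLastWriteTime.dwHighDateTime) << 32)
                       | fad.ftLastWriteTime.dwLowDateTime;
    return stored == actual;
}
```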