How to directly access the AppFabric cache without a proxy?

I'm starting to learn and use AppFabric cache.
The whitepaper at http://msdn.microsoft.com/en-us/library/gg186017%28v=azure.10%29.aspx says:
Bulk get calls result in better network utilization. Direct cache access is much faster than proxies (ASP.NET, WCF).
I am not sure what this means. What is a proxy in the AppFabric world?
We build websites based on ASP.NET MVC, so if we write some logic to access our AppFabric cluster, will it be called from the ASP.NET MVC code?
Many Thanks

If you look at the document referenced by that page, it explains what is meant by a proxy:
In some cases, the cache client is wrapped and accessed via a proxy
with additional application or domain logic. Oftentimes, performance
of such applications is much different from the Windows Server
AppFabric Cache cluster itself. The goal of the tests in this category
is to show performance of a middle tier application with additional
logic and compare it with performance of direct access to the cache.
To accomplish the goal, a simple WCF application was implemented that
provided access to the cache and contained additional logic of
populating the cache from an external data source if the requested
object is not yet in the cache.
The document contains details on how this affected performance, but if you need more detail, the source code used is available.
Using the DataCacheFactory (and/or AppFabric Session provider) from your MVC site will access the cache cluster directly, once you've granted access to the Application Pool user.
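To make the "proxy" idea concrete, here is a minimal sketch of the populate-on-miss logic such a wrapper adds (shown in Java purely as illustration, since AppFabric's actual client API is .NET; the CacheProxy class and loadFromDatabase method are hypothetical stand-ins). The point of the whitepaper's comparison is that this wrapper adds an extra service hop and extra domain logic on every call, whereas the cache client talks to the cluster directly.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative stand-in for the kind of "proxy" the whitepaper measured:
    // a middle-tier service that wraps the cache client with extra logic.
    public class CacheProxy {
        private final Map<String, Object> cache = new ConcurrentHashMap<>();

        // Check the cache first; on a miss, populate it from an external
        // data source. In the whitepaper's test this sat behind a WCF
        // service, so every call also paid an extra network hop.
        public Object get(String key) {
            Object value = cache.get(key);
            if (value == null) {
                value = loadFromDatabase(key); // hypothetical backend lookup
                cache.put(key, value);
            }
            return value;
        }

        private Object loadFromDatabase(String key) {
            return "value-for-" + key;
        }
    }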

Related

How to point a URL to static HTML through a GCP API?

I am pretty new to Google Cloud Platform and am considering using it.
Can I route a URL to an HTML string through an API?
The idea is to create a static cache mechanism for heavy pages.
I would also need to be able to remove or update the cache.
Maybe I'm not understanding what you need, but wouldn't it make more sense to use their already available Cloud Memorystore?
What it's good for
Cloud Memorystore for Redis provides a fast, in-memory store for use cases that require fast, real-time processing of data. From simple caching use cases to real time analytics, Cloud Memorystore for Redis provides the performance you need.
Caching: Cache is an integral part of modern application
architectures. Cloud Memorystore for Redis provides low latency access
and high throughput for heavily accessed data, compared to accessing
the data from a disk based backend store. Session management,
frequently accessed queries, scripts, and pages are common examples of
caching.
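For the original goal (caching rendered HTML for heavy pages, with the ability to update or remove entries), a Memorystore instance is just a Redis endpoint reachable over a private IP from your GCP project. A minimal sketch using the Jedis client follows; the host address, key scheme, and TTL are assumptions:

    import redis.clients.jedis.Jedis;

    public class PageCache {
        public static void main(String[] args) {
            // Hypothetical private IP of the Memorystore instance, default Redis port.
            try (Jedis jedis = new Jedis("10.0.0.3", 6379)) {
                String key = "page:/heavy-report"; // assumed key scheme
                String html = "<html>...rendered page...</html>";

                jedis.setex(key, 3600, html);   // cache the page for one hour
                String cached = jedis.get(key); // serve this on later requests
                jedis.del(key);                 // remove the cached page
                jedis.setex(key, 3600, html);   // or simply overwrite to update
            }
        }
    }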

Accessing my local BaseX DB from another machine

I use BaseX on my machine to simplify how I interact with some XML data; I run it using the BaseX HTTP service and access it via REST at a localhost address.
I don't really have any network experience, and I want to know how I would go about accessing this data from another machine. Would it be possible using the current configuration, or do I need to do something to route the external requests?
I hope this question is clear. Like I said, I have little to no experience dealing with these sorts of networking issues.
BaseX (or, to be more specific, the embedded web server basexhttp) listens on port 8984 by default, available to all other computers that can reach your machine. Provided no firewall (or NAT) prevents access, you should already be able to reach your machine at http://[ip-address]:8984. More in-depth reference is available in the BaseX Wiki: general information, configuration options and startup options.
In other words: if you didn't change any configuration, you will already be able to access the service.
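To sanity-check this from the other machine, a plain HTTP GET against the REST endpoint is enough. Here is a minimal sketch using Java 11's built-in HTTP client; the IP address and database name are placeholders, and depending on your setup BaseX may require HTTP Basic authentication with your BaseX credentials:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class BaseXRestCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder IP and database name; default basexhttp REST endpoint.
            String url = "http://192.0.2.10:8984/rest/mydb";
            // BaseX ships with an admin account; adjust to your own credentials.
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }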
If you want to offer web services using BaseX, consider adding a reverse proxy like nginx. This has several advantages:
configurable caching
serving static resources directly, without going through BaseX
reducing BaseX's exposure to the internet (nginx and similar products have a much broader user base, and are thus analyzed for security issues in more depth)
providing TLS-encryption
providing web applications from different application servers, like a website powered by BaseX, and others using PHP
and probably quite a few more I haven't considered right now

Synchronize Infinispan cache entries with database

I want to know whether I can use Infinispan to keep cached data synchronized with an Oracle database. This is my scenario: I have two main applications. One is a highly concurrent application and the second is an admin module. Since the first is highly concurrent, I want to reduce database connections (load the entities into a read-write cache and use them from there without calling the database). But at the same time I want to update the database according to the cache changes, because the admin module uses the database directly. Can that update process (cache to database) be handled at the entity level without involving the application? Please let me know whether Infinispan supports this scenario. If it does, please share ideas.
Yes, it is possible. Infinispan supports this use case.
This should be a simple configuration "problem". All you need is a properly configured CacheStore with passivation disabled. It will keep your cache (used by the highly concurrent application) synchronized with the database.
What exactly does this do?
When passivation is disabled, whenever an element is modified, added
or removed, then that modification is persisted in the backend store
via the cache loader. There is no direct relationship between eviction
and cache loading. If you don't use eviction, what's in the persistent
store is basically a copy of what's in memory.
"Memory" here means the cache.
If you want to know even more about this and about other interesting options please see: https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores#CacheLoadersandStores-cachepassivation
It may also be worth considering the aforementioned eviction, and whether to enable or disable it; that depends mainly on the load generated by your highly concurrent application.
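As a sketch of what that looks like with programmatic configuration in recent Infinispan versions (the table layout, column types, and connection settings below are assumptions to adapt to your Oracle schema):

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.persistence.jdbc.configuration.JdbcStringBasedStoreConfigurationBuilder;

    public class CacheSetup {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.persistence()
                .passivation(false) // write-through: every change is persisted to the store
                .addStore(JdbcStringBasedStoreConfigurationBuilder.class)
                    .table()
                        .tableNamePrefix("ISPN_ENTRIES") // assumed table name
                        .idColumnName("ID").idColumnType("VARCHAR(255)")
                        .dataColumnName("DATA").dataColumnType("BLOB")
                        .timestampColumnName("TS").timestampColumnType("NUMBER(19)")
                    .connectionPool()
                        .connectionUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL") // placeholder
                        .username("app").password("secret")
                        .driverClass("oracle.jdbc.OracleDriver");
            Configuration cfg = builder.build();

            DefaultCacheManager manager = new DefaultCacheManager();
            manager.defineConfiguration("entities", cfg);
            // manager.getCache("entities") is now kept in sync with the Oracle table.
        }
    }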
Actually, this only works when the admin module uses Infinispan in the same cluster. If you load A into memory with Infinispan and then change A to something else in the database directly with the admin module, Infinispan will not know that A has been updated.

How to 'web enable' a legacy C++ application

I am working on a system that splits users by organization. Each user belongs to an organization. Each organization stores its data in its own database which resides on a database server machine. A db server may manage databases for 1 or more organizations.
The existing (legacy) system assumes there is only one organization; however, I want to 'scale' the application by running an 'instance' of it (tied to one organization) and running several instances on the server machine (i.e. running multiple instances of the 'single organization' application, one instance for each organization).
I will provide a RESTful API for each instance that is running on the server, so that a thin client can be used to access the services provided by the instance running on the server machine.
Here is a simple schematic that demonstrates the relationships:
Server 1 -> N databases (each organization has one database)
Organization 1 -> N users
My question relates to how to 'direct' RESTful requests from a client to the appropriate instance handling requests for that organization's users.
More specifically, when I receive a RESTful request, it will be from a user (who belongs to an organization); how (or indeed, what is the best way) should I 'route' the request to the appropriate application instance running on the server?
From what I can gather, this is essentially a sharding problem. Regardless of how you split the instances at a hardware level (using VMs, multiple servers, all on one powerful server, etc), you need a central registry and brokering layer in your overall architecture that maps given users to the correct destination instance per request.
There are many ways to implement this, of course, so just choose one that you know, that is fast, and that will scale, as all requests will come through it. I would suggest a lightweight stateless web application backed by a simple read-only database that does the appropriate client identifier -> instance mapping, which you would load into memory/cache. To add flexibility in hardware and instance location, use (assuming Java) JNDI to store the hardware/port/etc. information for each instance, and in your identifier mapping, map the client identifier to the appropriate JNDI lookup key.
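A rough sketch of that brokering layer follows; the registry contents, the JNDI keys, and the "host:port" address format are all hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class InstanceBroker {
        // Client identifier -> JNDI lookup key; in practice loaded at startup
        // from the read-only mapping database and cached in memory.
        private final Map<String, String> clientToJndiKey = new ConcurrentHashMap<>();

        public InstanceBroker() {
            clientToJndiKey.put("org-foo", "instances/org-foo"); // hypothetical entries
            clientToJndiKey.put("org-bar", "instances/org-bar");
        }

        // Resolve the destination instance for a request and hand it to
        // whatever forwarding mechanism you use.
        public String resolve(String clientId) throws NamingException {
            String jndiKey = clientToJndiKey.get(clientId);
            if (jndiKey == null) {
                throw new IllegalArgumentException("Unknown client: " + clientId);
            }
            InitialContext ctx = new InitialContext();
            return (String) ctx.lookup(jndiKey); // e.g. "app-server-3:7331"
        }
    }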
Letting the public API only specify the user sounds a little fragile to me. I would change the public API so that requests specify organization as well as user, and then have something trivial server-side that maps organizations to instances (eg. organization foo -> instance listening on port 7331).
That is a very tough question indeed; simply because there are many possible answers, and which one is the best can only be determined by you and your environment.
I would write an apache module in C++ to do that. Using this book, I managed to start writing very efficient modules.
To be able to give you more solutions (maybe just setting up a Squid proxy?), you'll need to specify how you can determine which server the client should be redirected to: by IP, through a GET parameter, through a POST XML parameter (as with SOAP), etc.
As the other answer says, there are many ways to approach this issue. Let's assume that you DON'T have access to the legacy software's source code, which means you cannot modify it to listen on different ports for different instances.
Writing an Apache module seems VERY extreme for solving this issue (and as someone who actually just finished writing a production Apache module, I suggest avoiding it unless you are making serious money).
The approach can be as esoteric as you like. For instance, if your legacy software runs on normal Intel architecture and you have the hardware capacity, there are VM solutions: you should be able to create thin virtual machines, each running a single instance of the software, with a multiplexer to tie them all together.
If, on the other hand, you are running something like HP-UX, well :-) there are other approaches. How about you give a bit more detail?
Ahmed.

Offline web application

I’m thinking about building an offline-enabled web application.
The architecture I’m considering is as follows:
Web server (remote) <--> Web server/cache (local) <--> Browser/Prism
The advantages I envision for this model are:
Deployment is web-based, with all the advantages of this approach
Offline-enabled
UI (html/js) synchronization is a non-issue
Data synchronization can be mostly automated as long as I stay within a RESTful paradigm; I can break this as required, but manual synchronization would largely remain surgical
The local web server is started as a service; I can run arbitrary code, including behind-the-scene data synchronization
I have complete control of the data (location, no size limit, no possibility of the user deleting it unknowingly)
Prism with an extension could allow to keep the javascript closed source
Any thoughts on this architecture? Why should I / shouldn’t I use it? I'm particularly looking for success/horror stories.
The long version
Notes:
Users are not very computer-literate. For instance, even superficially explaining how Gears works is totally out of the question.
I WILL be held liable if data is lost, even if it's really the user's fault (short of them deleting random directories on their machine)
I can require users to install something on their machine. It doesn’t have to be 100% web-based and/or run in a sandbox
The common solutions to this problem don’t feel adequate somehow. Here is a short analysis of each.
Gears/HTML5:
no control over data; it can be deleted by users without any warning
no control over the location of data (not uniform across browsers and platforms)
users need to open application in browser for synchronization to happen; no automatic, behind-the-scene synchronization
different browsers are treated differently, no uniform view of data on a single machine
limited disk space available
synchronization is completely manual; SQL-based storage makes this a pain (it would be less complicated if the SQL tables were completely replicated, but that is not the case for me). This is a very complex problem.
my code would be almost completely open sourced (html/js)
Adobe AIR:
some of the above
no server-side includes (!)
can run in the background, but not windowless
manual synchronization
web caching seems complicated
feels like a kludge somehow; I've had trouble installing it on some machines
My requirements are:
Web-based (must), for a number of reasons; sharing data between users, for instance.
Offline (must). The application must be fully usable offline (w/ some rare exceptions).
Quick development (must). I’m a single developer going against players with far more business resources.
Closed source (nice to have). Yes, I understand the open source model. However, at this point I don’t want competitors to copy me too easily. Again, they have more resources so they could take my hard work and make it better in less time than I could myself. Obviously, they can still copy me developing their own code -- that is fine.
Horror stories from a CRM product:
If your application is heavily used, storing a complete copy of its data on a user's machine is unfeasible.
If your application features data that can be updated by many users, replication is not simple. If three users with local changes synch, who wins?
In reality, this isn't really what users want. They want real-time access to the most current data from anywhere. We had better luck offering a mobile interface to a single source of truth.
The part about running the local Web server as a service appears unwise. Besides the fact that you are tied to certain operating environments that are available in the client, you are also imposing an additional burden of managing the server, on the end user. Additionally, the local Web server itself cannot be deployed in a Web-based model.
All in all, I am not too thrilled by the prospect of a real "local Web server". No doubt there is a certain bias to that opinion, since I have proposed embedded Web servers that run inside a Web browser as part of my own proposal for seamless off-line Web storage. See BITSY 0.5.0 (http://www.oracle.com/technology/tech/feeds/spec/bitsy.html)
I wonder how essential your requirement to prevent data loss at any cost is. What happens when you are offline and the disk crashes? Or the device is lost? In general, you want the local cache to be only minimally ahead of the server, and you should be prepared to tolerate losing whatever data the server has not yet received from the client. This may involve some amount of contractual negotiation or training. In practice this may not be a deal-breaker.
The only way to do this reliably is to offer some sort of "check out and lock" at the record level. When a user is going remote, they must check out the records they want to work with. This check-out copies the data to a local DB and prevents the record in the central DB from being modified while the record is checked out.
When the roaming user reconnects and checks their locked records back in, the data is updated in the central DB and the records are unlocked.
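A minimal sketch of that check-out/check-in flow using plain JDBC; the records table and its columns are assumptions, and error handling is elided:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class RecordCheckout {
        // Atomically lock the record in the central DB; returns false if
        // someone else already holds it. Copying the row to the local DB
        // happens only after a successful lock.
        public boolean checkOut(Connection central, long recordId, String user)
                throws SQLException {
            String sql = "UPDATE records SET locked_by = ? " +
                         "WHERE id = ? AND locked_by IS NULL";
            try (PreparedStatement ps = central.prepareStatement(sql)) {
                ps.setString(1, user);
                ps.setLong(2, recordId);
                return ps.executeUpdate() == 1; // 1 row updated = lock acquired
            }
        }

        // Write the locally edited data back and release the lock in one statement.
        public void checkIn(Connection central, long recordId, String user,
                            String newData) throws SQLException {
            String sql = "UPDATE records SET data = ?, locked_by = NULL " +
                         "WHERE id = ? AND locked_by = ?";
            try (PreparedStatement ps = central.prepareStatement(sql)) {
                ps.setString(1, newData);
                ps.setLong(2, recordId);
                ps.setString(3, user);
                ps.executeUpdate();
            }
        }
    }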