I have an Access 2000 database created over a decade ago. The original creators are long gone. It's stored in a shared folder on a file server on the local network. The file is over 250 MB in size and is accessed by anywhere from a dozen to two dozen local users on the network. We still use Access 2000 loaded on each machine to open the file. I've also already split the database, using Access's split tool, into front-end (FE) and back-end (BE) files to try to speed things up. We're currently using the FE/BE approach, but everyone opens a single shared FE file stored on the server.
I'm trying to figure out what else I can do to optimize it. Move the BE into SQL Server? Put a local copy of the FE on each machine? Set myself on fire and hope for the best?
We are looking to replace the DB, but those plans are at least a year down the pipeline at best.
I'm an IT person supporting researchers who use SAS. We have recently migrated most users' storage from on-premises SMB shares to MS Teams. The question has arisen whether it's possible to keep SAS data sets in Teams storage (a SharePoint library), then access them via the synced library.
Are there any pitfalls to this approach? Any steps that could/should be taken to ensure there are no problems?
It is possible, but not ideal. SAS 9.4 accesses data through a concept called a libname: a named reference to a location where data that SAS can access is stored. SAS 9.4 stores its own data in .sas7bdat files, but it can also access a wide variety of other databases natively.
If your users can map SharePoint as a shared network disk, SAS can work with it as if it were local. If not, your users will need to download the .sas7bdat files to their systems, then re-upload them to SharePoint using the SharePoint REST API; SAS can do this through code, as sketched below.
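For the download/re-upload route, the two REST calls look roughly like the following sketch. It's written in Java purely for illustration (in SAS itself you'd typically issue the same requests with PROC HTTP), and the site URL, library path, file name, and bearer token are all placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class SharePointRoundTrip {
        public static void main(String[] args) throws Exception {
            String site  = "https://contoso.sharepoint.com/sites/research"; // placeholder
            String token = args[0]; // OAuth bearer token, obtained elsewhere
            HttpClient http = HttpClient.newHttpClient();

            // Download the raw bytes of the data set from the document library.
            HttpRequest download = HttpRequest.newBuilder()
                .uri(URI.create(site + "/_api/web/GetFileByServerRelativeUrl("
                    + "'/sites/research/Shared%20Documents/study.sas7bdat')/$value"))
                .header("Authorization", "Bearer " + token)
                .build();
            http.send(download, HttpResponse.BodyHandlers.ofFile(Path.of("study.sas7bdat")));

            // ... SAS works on the local copy through an ordinary local libname ...

            // Re-upload the file, overwriting the copy in the library.
            HttpRequest upload = HttpRequest.newBuilder()
                .uri(URI.create(site + "/_api/web/GetFolderByServerRelativeUrl("
                    + "'/sites/research/Shared%20Documents')/Files/add("
                    + "url='study.sas7bdat',overwrite=true)"))
                .header("Authorization", "Bearer " + token)
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("study.sas7bdat")))
                .build();
            http.send(upload, HttpResponse.BodyHandlers.discarding());
        }
    }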
There really is no issue with it other than the convenience factor. It's not as good as a shared disk or database access, but it can work in theory.
If they decide to mount it as a network drive, I would add the caveat that they should not use SharePoint as a place to store temporary data with heavy read/write traffic. In fact, I'd make the share read-only to prevent them from doing so. If they need to pull the data locally, they can still do that through libname access.
I know about S3 storage, but I am trying to see if things can work out using only the filesystem.
The main reason to use a service like S3 is scalability. Imagine that you use the file system of a single server to store files. That means everyone who visits your site and wants a file has to hit that same server. With enough visitors, this will eventually render the system unresponsive.
Scalable storage services store the same data on multiple servers so they can keep serving content as the number of requests grows. Furthermore, a user normally hits a server close to their own location, which minimizes the delay in fetching a file.
Finally, such storage services are more reliable. If you use a single disk to store all the files, that disk may eventually fail, losing all the data. By storing the data in multiple locations, it is far less likely that the files are completely lost.
I have a Spring Boot application which downloads around 300 MB of data at startup and saves it to the path /app/local/mydata. Currently I have just one dev environment with a single node, so this is not a problem. However, once I create a prod instance with (say) 10 nodes, it would be a waste of bandwidth for each node to individually download the same 300 MB. It would also put a lot of stress on the service the data is downloaded from, and there is a cost associated with data flowing in and out of EC2.
I can build logic around a touchfile (a marker file) to make sure that only one box downloads the data and the others just wait until the download is complete. However, I don't know where to put the data so that the other nodes can read it too.
Any suggestions?
Download it once and stage it in S3 if you want to keep it in a file, but it sounds like you might need to put the data in a database (RDS) or maybe cache it in Redis (ElastiCache).
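A minimal sketch of the S3 staging with the AWS SDK for Java v2 (the bucket and key names are made up):

    import java.nio.file.Path;
    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class SharedDataStore {
        private static final String BUCKET = "my-app-shared-data"; // hypothetical bucket
        private static final String KEY    = "startup/mydata";     // hypothetical key

        // Called once, by the node that downloaded the 300 MB from the upstream service.
        static void publish(S3Client s3, Path localFile) {
            s3.putObject(PutObjectRequest.builder().bucket(BUCKET).key(KEY).build(),
                         RequestBody.fromFile(localFile));
        }

        // Called by every other node instead of hitting the upstream service again.
        static void fetch(S3Client s3, Path localFile) {
            s3.getObject(GetObjectRequest.builder().bucket(BUCKET).key(KEY).build(),
                         localFile); // writes the object out to localFile
        }
    }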
I'm not sure what a "touchfile" is, but I assume you mean some sort of file-lock mechanism. I don't see that as the best option for coordinating this across multiple servers. I would probably use a DynamoDB table with consistent reads and conditional writes as a distributed locking mechanism, along these lines.
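(A sketch with the AWS SDK for Java v2; the table name and lock key are made up, and real code would add a TTL or lease expiry so a crashed node can't hold the lock forever.)

    import java.util.Map;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
    import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
    import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

    public class DownloadLock {
        // Returns true if this node won the right to perform the download.
        static boolean tryAcquire(DynamoDbClient ddb) {
            try {
                ddb.putItem(PutItemRequest.builder()
                    .tableName("app-locks")            // hypothetical table
                    .item(Map.of("LockId",
                            AttributeValue.builder().s("startup-data-download").build()))
                    // The write only succeeds if no item with this key exists yet,
                    // so exactly one node's put goes through.
                    .conditionExpression("attribute_not_exists(LockId)")
                    .build());
                return true;   // we own the lock: download, then publish to S3
            } catch (ConditionalCheckFailedException lost) {
                return false;  // another node won: poll S3 until the data appears
            }
        }
    }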
How often does the data you are downloading change? Perhaps you could just schedule a Lambda function to refresh the data periodically and update a database or something?
In general, you need to stop thinking about using the web server's local file system for this sort of thing.
I am writing a time-tracking Windows application in C++ that uses the sqlite3 engine to store its data. For my purposes it would be nice to share the database file across the local network (in a Windows network share folder) among several copies of my application, so that multiple users of the software could share data.
Is there a mechanism to do that with SQLite?
"nice to share the database file across the local network" You really don't want to do that. It will end up being more trouble than it's worth. In ideal circumstances it works, although the performance sucks a bit. In non-ideal circumstances, it will block forever without giving you any idea why and what's at fault.
It's much easier to partition your system into a server and a client. They can both run within the same application. When the application starts, it checks if there are any servers on the local network, and if there aren't, it starts one. It then connects to the server.
That's what Filemaker at least used to do 20 years ago, and it worked pretty well. Should be a breeze to implement using modern frameworks today (say Qt or boost).
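A minimal sketch of that discover-or-become-the-server step (in Java for brevity, though your app is C++; the host name, port, and timeout are arbitrary). The key property is that only the server process ever opens the SQLite file:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TrackerNode {
        static final String HOST = "tracker-box"; // hypothetical well-known host
        static final int PORT = 5599;             // arbitrary port the app agrees on

        public static void main(String[] args) throws IOException {
            try (Socket probe = new Socket()) {
                probe.connect(new InetSocketAddress(HOST, PORT), 500); // 500 ms timeout
                runAsClient(); // a server is already up; talk to it
            } catch (IOException nobodyHome) {
                // No server answered, so become one. If two nodes race here, the
                // loser gets a BindException and should fall back to client mode.
                try (ServerSocket server = new ServerSocket(PORT)) {
                    while (true) {
                        try (Socket client = server.accept()) {
                            // Handle one request: run it against the local SQLite
                            // file and write the reply back. Clients never touch
                            // the database file directly.
                        }
                    }
                }
            }
        }

        static void runAsClient() { /* application protocol goes here */ }
    }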
We have two ColdFusion applications that share a common database. There are three instances of each application. (One instance of each application runs on each of three servers.)
I can see that the three instances of a given application should share a client variable store. (Load-balancing can cause a single user session to bounce between the three instances.) My question is: Is there any danger to having all instances of both applications share the same data store? Or should only one application be pointing at a given data store?
You can use the same client data store. The CDATA table has an 'app' column that stores the ColdFusion application name; that column keeps each application's data separate.
I'm working at an enterprise-level ColdFusion shop with multiple CF applications running on the same server, all pointed at the same client variable store. The only concern within the organization is how the client variable store affects regular backups, and that falls under the data team's purview. We don't have any problems with the different apps actually using the same client variable storage.
Related, from the ColdFusion documentation:
Some browsers allow only 20 cookies to be set from a particular host. ColdFusion uses two of these cookies for the CFID and CFToken identifiers, and also creates a cookie named cfglobals to hold global data about the client, such as HitCount, TimeCreated, and LastVisit. This limits you to 17 unique applications per client-host pair.
I guess this deals more with how many applications you actually run than with whether they all share the same client data store, but it does suggest that there is a hard limit on the total number of apps you can run per client-host pair. I'd recommend splitting across hosts (or just using a different domain name) if you're planning on more than 16 apps!
As Eric stated above, running multiple apps off of one data store is fine. One warning: these databases can fill up fast if you're not careful to block spiders and search engines from using them. CF creates client variables on each request that starts a new session, and a search engine gets a new set every time because it never sends back its old cookies, so CF thinks each crawl is a new user who needs a new set of client variables. Also, be absolutely certain to check "Disable global client variable updates" in the CF Administrator; this will save you a lot of unnecessary overhead.
I would think that multiple applications sharing the same data store would open up the possibility of users from one application gaining access to the other applications. While it may not be likely, the possibility could exist. (I don't have any facts to back that up; it just seems like a logical conclusion.)
The question is: are you comfortable with that possibility, or do you absolutely have to make sure each application is secure?