This is my first application using Microsoft Sync Framework 2.1, so assume I don't know anything about it.
The problem is that I need to synchronize tables, and the number of tables to synchronize increases or decreases as the database changes.
Likewise, the direction of synchronization is also variable: sometimes it is upload, download, or bidirectional, and even the rules vary.
Since we have a large number of clients/distributors, the set of tables to synchronize for UserA may be different from UserB's, and so may the direction.
We need to create scopes, and from what I have found, we need a new scope for every change and for every user's tables. Is that right?
So, for example, if we have 100 tables, 10 users, and 3 directions, the number of possible scopes will be above 3,000.
How does the number of scopes affect database performance?
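As a quick sanity check on the scope arithmetic above, here is a minimal sketch assuming the worst case of one scope per table, per user, per direction (real deployments can group many tables into a single scope, which cuts the count dramatically):

```python
# Worst-case scope count: one scope per table, per user, per direction.
# The figures come from the example in the question above.
tables = 100
users = 10
directions = 3  # upload, download, bidirectional

total_scopes = tables * users * directions
print(total_scopes)  # 3000
```

This is exactly why grouping related tables into shared scopes matters: a single scope covering all 100 tables per user and direction would need only 30 scopes in this example.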
I also don't know how to remove scopes for tables that are deleted from the DB, or that I choose not to synchronize, and likewise for a user.
I found out there is something called deprovisioning, but I don't know how to use it.
Moreover, I need to apply filters to the tables as well; in that case, do I need to create a new scope again or not? I don't know how to create filters, as the samples I downloaded do not include any filter examples.
Any help/sample/link is highly appreciated.
A scope is a collection of tables that are synchronized together in a single sync session. How many tables to include is up to you.
Have a look at this link for some guidance: Sync Framework Scope and SQL Azure Data Sync Dataset Considerations
I suggest you go through the documentation and the tutorials/walkthroughs first. The documentation actually gets installed with the framework.
If you have trouble finding them, here are the corresponding links:
How to: Use Synchronization Scopes
How to: Provision and Deprovision Synchronization Scopes and Templates (SQL Server)
How to: Filter Data for Database Synchronization (SQL Server)
If you want to understand further what provisioning actually does, have a look at this: Sync Framework Provisioning
You might also want to specify which databases you are actually synchronizing.
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.
What should you do?
A. Use a different database
B. Choose larger instances for your database
C. Create snapshots of your database more regularly
D. Implement routinely scheduled failovers of your databases
I feel that the answer should be 'C'.
Explanation:
Take regular snapshots of your database system.
If your database system lives on a Compute Engine persistent disk, you can take snapshots of your system each time you upgrade. If your database system goes down or you need to roll back to a previous version, you can simply create a new persistent disk from your desired snapshot and make that disk the boot disk for a new Compute Engine instance. Note that, to avoid data corruption, this approach requires you to freeze the database system's disk while taking a snapshot.
Reference: https://cloud.google.com/solutions/disaster-recovery-cookbook
However, there are so many varied answers from other sources, and now I am confused.
Can someone please help? Thanks a zillion.
To select the best answer you must first determine what the question is asking. Questions often contain key items that affect the best answer.
What is the question?
The question is how to avoid database crashes.
What are the key items in the question?
High traffic
Only during a portion of the day
There is a replica
The replica is not promoted
Do all of the key items apply? Sometimes key points are included that are not relevant to the question, just to test your understanding. In this case, the replica is that item: none of the answers involve a replica. That leaves you with two key points:
High traffic
Only during a portion of the day
Of the four answers, eliminate the ones that do not address the key points of the question:
Use a different database. Changing the database could mean significant changes to the application design. In most cases, this is not a good answer.
Create snapshots of your database more regularly. Snapshots are for backup and recovery; they do not prevent database crashes. In fact, if snapshots are performed too often, for example while the database is under heavy load, you are more likely to make the problem worse.
Implement routinely scheduled failovers of your databases. This will not prevent a database from failing; it will help you recover after a failure.
That leaves one answer:
Choose larger instances for your database
Most database systems are not auto-scaling. That means you must select an instance size that can handle peak traffic loads. Only one of the answers provides for that fundamental requirement.
The question being asked is how to avoid a replica not being promoted to primary, NOT how to avoid a crash.
The crux of the problem here is that the replica was not promoted.
Testing failover would ensure that replicas are in fact able to assume the primary role.
Very good analysis for cracking sometimes-confusing exam questions.
However, the question doesn't seem to be about avoiding crashes. It is about avoiding the situation where a replica is not promoted to master after the crash. So in this case only D makes sense.
In my application, I have a bunch of service providers in my database offering various services. I need a user to be able to search through these service providers by either name, location, or both. I also need a user to be able to filter the providers by different criteria, based on multiple attributes.
I am trying to decide if I could implement this simply with database queries or if a more robust solution (i.e. a search engine) would better suit my needs.
Please let me know the pros and cons of either solution, and which you think would be best to go with.
I am writing my application in Django 1.7, using a PostGIS database, and would use django-haystack with elasticsearch if a search engine is the way to go here.
It seems that you are working on a search-intensive application. My opinion in this regard is as follows:
1) If you run search-intensive queries directly against the database, overhead will automatically be very high, because each criteria-based query has to be built with separate parameters in Django and fired at the backend database engine. As a consequence, you become highly dependent on the availability of the database server. Things get even worse if the database server is located somewhere remote, since network latency adds further overhead.
2) You should consider a server-side caching system like Redis, an in-memory NoSQL database (sometimes also called a data structure server), which addresses the problems I discussed in the previous point.
3) To power up your search, read about Apache Solr, a Lucene-based search library that will take your search to the next level.
4) Last but not least, look at case studies from big players like Facebook and Twitter on how they manage their infrastructure. You will get an even better idea.
Any doubts or suggestions? Kindly comment. Cheers :-)
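To give a concrete feel for the plain-database-query option the question asks about, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for PostGIS. The table, columns, and data are made up for illustration; the point is that name/location search plus attribute filters reduce to composing a parameterized WHERE clause:

```python
import sqlite3

# In-memory stand-in for the real PostGIS database. The schema is
# hypothetical, just enough to show name/location search plus filters.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE provider (
    name TEXT, city TEXT, service TEXT, rating REAL)""")
conn.executemany(
    "INSERT INTO provider VALUES (?, ?, ?, ?)",
    [("Acme Plumbing", "Austin", "plumbing", 4.5),
     ("Best Electric", "Austin", "electrical", 3.9),
     ("Acme Electric", "Dallas", "electrical", 4.8)])

def search(name=None, city=None, **filters):
    """Build a parameterized query from whichever criteria were supplied."""
    clauses, params = [], []
    if name:
        clauses.append("name LIKE ?")
        params.append(f"%{name}%")
    if city:
        clauses.append("city = ?")
        params.append(city)
    for column, value in filters.items():
        # Column names cannot be bound as parameters, so they must be
        # whitelisted in real code to avoid SQL injection.
        clauses.append(f"{column} = ?")
        params.append(value)
    sql = "SELECT name FROM provider"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return [row[0] for row in conn.execute(sql, params)]

print(search(name="Acme"))                        # both Acme providers
print(search(city="Austin", service="plumbing"))  # just Acme Plumbing
```

In Django this is what chained `.filter()` calls on a QuerySet give you for free. Where a search engine like Elasticsearch (via django-haystack) earns its keep is fuzzy matching, relevance ranking, and faceting, which plain `LIKE`/equality queries handle poorly.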
I am a web developer. I got a project to build a 'build your own site' concept platform, for example buildabazaar or bigcommerce.
I tried Magento and AxisCart, but they were not user friendly or suited to customer needs. OpenCart is the one that meets all my customers' requirements, and it is easily modifiable.
As I am building this for multiple customers, can you guide me on how to configure OpenCart to handle multiple customers, each with an admin panel that has all the admin panel options?
This is very easy. OpenCart supports multi-store: multiple stores on different domains. You can even limit a user to a particular store, i.e. create users for that store's admin panel only. However, it is a bit tricky and can get messy if not done properly. Can you explain exactly what you wish to do, and I will produce a step-by-step guide?
The other thing: although OpenCart is blazingly fast, scaling needs to be considered. We use multi-store OpenCart for some of our projects, on Rackspace cloud servers with 4-6 OpenCart instances per load-balanced server, a CDN for storage of static files, and of course DNS load balancing. The memory I would recommend for each store is 2 GB, with a separate database cloud instance: one master DB at 16 GB and 4-5 load-balanced slaves at 4-8 GB for traffic.
This depends on traffic and how it grows. The current setup we have can easily handle up to 4,000 orders per day. We manage around 3,500 on average, so our application setup is always ready for high demand. We could essentially host OpenCart stores for users, as you want to do, without affecting our network.
Create a user registration form
Create and update nginx/htaccess with the subdomain info
Create and copy that info to an installer script based on that subdomain
Run the OpenCart install and config based on that subdomain and data
Send the user an email about CNAME records so they can have store.com rather than substore3434.yourecomservice.com
It would be something like this, with a lot of validation, cURL usage, and cron jobs. I would personally use a cPanel/WHM Apache server (I know people are going to downvote me for this), since cPanel automates install scripts using Fantastico, whereas using Nginx here would be a nightmare. You can move to Nginx later if your project is a success.
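The vhost-generation step of that flow could be sketched roughly like this. The template, paths, and domain are all hypothetical placeholders; a real installer would also write the OpenCart config and reload the web server:

```python
# Sketch of the "update nginx with subdomain info" step above.
# The template, root path, and domain are hypothetical placeholders.
NGINX_TEMPLATE = """server {{
    listen 80;
    server_name {sub}.yourecomservice.com;
    root /var/www/stores/{sub}/public;
    index index.php;
}}
"""

def render_vhost(subdomain: str) -> str:
    # Subdomains come from a public signup form, so validate strictly.
    if not subdomain.isalnum():
        raise ValueError("invalid subdomain")
    return NGINX_TEMPLATE.format(sub=subdomain)

print(render_vhost("substore3434"))
```

The same validate-then-template pattern applies to the installer script and the CNAME email; the strict whitelist on the subdomain is what keeps user input from ending up raw inside a server config.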
I really don't see the point of making EJBs into web services. The first thing that comes to mind is security: how do I keep the entire world from using my business methods? How would you authenticate a user of a service? Second, it seems hard to pass objects, and I'm not sure it's even possible to pass lists into a web service.
I can see some justifications, such as having services for multiple applications that use the same methods. But why not just have a library, or deploy an EAR with all the business methods?
Thanks for your help.
How would you deploy your Java package to a .NET client? Or better, how would you deploy your Java package to 1,000,000 iPhones?
Web services exist for interoperability. You use them to pass data between processes without making the processes dependent on a concrete technology. The communication depends only on interoperable, broadly supported protocols and on data structures represented in XML or another interchangeable format (like JSON).
If you need advanced protocols for transaction flow, message-level security, or reliable messaging, you build a SOAP service. If you need a lightweight service for a broad variety of clients, you build a REST service.
Web services have their place, and once you have to build an interoperable application, or logic consumed by non-Java code, you will find them useful.
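To make the interoperability point concrete, here is a toy sketch (in Python rather than Java, purely for illustration): the wire format carries no trace of the technology that produced it, so any client that can parse JSON can consume it.

```python
import json

# A "business object" serialized to a language-neutral wire format.
# Nothing in the payload reveals, or requires, the producing technology:
# a Java EJB, a .NET client, or an iPhone app can all read the same bytes.
order = {"id": 42, "items": ["widget", "gadget"], "total": 19.95}

wire = json.dumps(order)     # what a REST endpoint would return
received = json.loads(wire)  # what any client, in any language, does

assert received == order
print(wire)
```

This is also the answer to the "can I pass objects or lists?" worry in the question: objects become maps and lists become arrays in the interchange format, and every mainstream platform can round-trip both.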
I have the following requirement. I have a database containing the contact and address details of at least 2,000 members of my school alumni organization. We want to store all that information in a relational model so that:
This data can be created and edited on demand.
This data is always backed up and should be simple to restore in case the master copy becomes unusable.
All sensitive personal information residing in this database is guaranteed to be available only to authorized users.
This database won't be online for the first 6 months. It will go online only after a website is built on top of it.
I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the $100 to $150 monthly fees this service demands.
So my question is: are there any less costly alternatives to Amazon's RDS?
In your case of just contact and address data, I would choose Amazon SimpleDB. I know SimpleDB might not be suitable for a large number of tables with relationships and all, but for your kind of data I think SimpleDB is sufficient, and it costs much, much less than Amazon RDS.
I also wanted to use RDS, but the smallest DB size costs $80 per month.
Without a bit more info I may be way off base here, but 2,000 names, addresses, etc. is not a large DB, and I would have thought that using Amazon's RDS was overkill, to say the least.
Depending on how (and by whom) you want it viewed and edited, there are a number of free or almost-free alternatives.
One option is to set up or use a hosting package that offers something like phpMyAdmin linked to a MySQL DB. That way it is possible to access and edit the DB without a website front end. Not pretty (like a website front end would be), but practical. A good host should also back it up for you.
Another is Google Documents. OK, not really a database, more of a spreadsheet along the lines of Excel, but you can share Google Docs with invited people and even set up a small website via Google Docs. This is a free method, but may not be that practical depending on your needs.
Have you taken a look at Microsoft SQL Azure? You can use it free for something like 90 days, and then if you only need a 1 GB DB it would only be about $10 a month.
You mentioned backups, so I thought I would cover that as well. The way SQL Azure works is that it automatically creates two additional copies of your database on different machines in the data center. If one of the machines or DBs becomes unavailable, it automatically fails over to one of the other copies.
If you need anything beyond that, you can also use the copy command to back up the database.
You can check
http://www.enciva.com/postgresql9-hosting.htm
and
http://www.acugis.com/postgresql-hosting.htm
They work for Postgres and MySQL.
For a frankly tiny DB of that size I'd seriously look at http://www.sqlite.org/
It's in-process, easy to regularly .dump off to S3, and you can use update hooks to keep checkpoints after updates.
Backups/restores are almost the equivalent of Windows batch files and wgets.
You get good encryption using http://sqlcipher.net/
Standard OS filesystem and user-level ACLs control security.
Running a file-backed DB also makes sense given the fragility of a normal EC2-backed RDBMS to EBS gremlins.
There are exclusions from SQL-92 (no real showstoppers), but given the project's cost sensitivity and the RPO and RTO of an alumni database, I reckon it's a good bet.
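To show how simple those dump-and-restore backups are, here is a sketch using Python's standard sqlite3 module (the schema and file choices are illustrative; in-memory databases stand in for files so the example is self-contained):

```python
import sqlite3

# A tiny alumni database, created in memory for demonstration.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE member (name TEXT, email TEXT)")
src.execute("INSERT INTO member VALUES ('Ada', 'ada@example.org')")
src.commit()

# Option 1: a plain-text dump (the .dump equivalent), ready to gzip
# and push to S3 from a one-line cron job.
dump_sql = "\n".join(src.iterdump())

# Option 2: an online binary backup into another database connection,
# which works even while the source is in use.
dest = sqlite3.connect(":memory:")
src.backup(dest)

# Restoring from the text dump is just replaying the SQL.
restored = sqlite3.connect(":memory:")
restored.executescript(dump_sql)
print(restored.execute("SELECT name FROM member").fetchone()[0])  # Ada
```

With file-backed databases the same code applies: point the connections at paths instead of `":memory:"`, ship `dump_sql` offsite, and restore by replaying it into a fresh file.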