I am using Sync Framework 2.1.
What I am doing is changing the sync direction continuously.
For example: first I set bidirectional, then maybe upload, and then download.
I create a new scope whenever any change happens and deprovision the existing scope.
Now, after I set bidirectional and then switch to the upload direction, upload does not work at all.
When I then change it back to bidirectional, all the local changes are overridden by the server.
While uploading, all the records conflict with LocalInsertRemoteInsert.
There is also no scope overlap; I found there are no scopes for this table in scope_info.
I also referred to this: LocalInsertRemoteInsert conflicts on initial sync
Any help is appreciated.
If you previously provisioned and synced the databases, then each copy already contains data. When you deprovision, Sync Fx removes the sync metadata, including the information on what was previously synced, but not the data itself.
So when you reprovision and try to sync, since the previous information on what was synced has already been wiped out by deprovisioning, Sync Fx has no idea that the replicas already contain the same set of rows.
When you sync, it will try to send the rows from one replica to the other; since the data already exists on the other side, you get a conflict (a duplicate PK error when inserting rows).
The AWS S3 docs state that:
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel
The timespan until full consistency is reached can vary. During this period, GET requests may return the previous object or the updated object.
My question is:
When is the last-modified timestamp updated? Is it updated immediately after the overwrite PUT succeeds but before full consistency is reached, or is it only updated after full consistency is achieved?
I suspect the former but I can't find any documentation which clearly states this.
The Last-Modified timestamp should match the Date value returned in the response headers from the successful PUT request.
To my knowledge, this is not explicitly documented, but it can be derived from what is documented.
When you overwrite an object, it's not the overwriting itself that may be delayed by the eventual consistency model -- it's the availability of the overwritten content at a given S3 node (S3 is replicated to multiple nodes within the S3 region).
The Last-Modified timestamp, like the rest of the metadata, is established at the time of object creation and is immutable thereafter.
It is, in fact, not the "modification" time of the object at all, it is the creation time of the object. The explanation may sound pedantic, but it is accurate in the strictest sense: S3 objects and their metadata cannot in fact be modified at all, they can only be overwritten. When you "overwrite" an object in S3, what you are actually doing is creating a new object, reusing the old object's key (path+file name). The availability of this new object at a given S3 node (replication) is what may be delayed by the eventual consistency model... not the actual creation of the new object that overwrites the old one... hence there would be no reason for Last-Modified to be impacted by the replication delay (assuming there is a replication delay -- eventual consistency can at times be indistinguishable from immediate consistency).
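To my knowledge there is no documented way to observe a replication-delayed Last-Modified, but you can check the claim empirically. A minimal sketch using boto3 (the bucket and key names are hypothetical):

```
# Sketch only: bucket/key names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Overwrite the object; the Date header in the response reflects when the
# new object (same key, new content) was created.
put_response = s3.put_object(
    Bucket="my-bucket",
    Key="path/to/file.txt",
    Body=b"new content",
)
put_date = put_response["ResponseMetadata"]["HTTPHeaders"]["date"]

# Last-Modified on a subsequent HEAD should correspond to that creation time,
# regardless of whether replication of the new content has completed yet.
head_response = s3.head_object(Bucket="my-bucket", Key="path/to/file.txt")
print("PUT Date:      ", put_date)
print("Last-Modified: ", head_response["LastModified"])
```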
This is something S3 does that is absolutely terrible.
Basically, in Linux you have the mtime, which is the time the file was last modified on the filesystem. Any S3 client could gather the mtime and set the Last-Modified time on S3 so that it would reflect when things were actually last modified.
Instead, Amazon just bases this on object creation, and this is effectively a massive problem if you ever want to use the data as data outside of the original application that put it there.
So if you download a file from S3, your client would likely set the modified time, and if it was uploaded to S3 immediately as it was created, then you would at least have a nearly correct timestamp. But the reality is that you might take a picture and it might not get from your phone, through the app, through the stack, and to S3 for days!
This is not even considering re-uploading the file to S3, which compounds the problem: you might re-upload it years later, and S3 will just act as if Last-Modified is years later, when the file was not actually modified.
They really need to allow you to set it, but the documentation remains ambiguous here while being exhaustive in other areas, which makes this hard to figure out.
https://github.com/s3tools/s3cmd/issues/524
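The usual workaround is to carry the real mtime yourself as user-defined object metadata at upload time (I believe s3cmd's --preserve option does something along these lines). A minimal sketch, assuming boto3 and hypothetical bucket/key names:

```
# Sketch only: stores the local mtime as user-defined metadata,
# since S3 itself provides no way to set Last-Modified.
import os
import boto3

s3 = boto3.client("s3")
path = "photo.jpg"                       # hypothetical local file
mtime = os.path.getmtime(path)           # seconds since epoch

with open(path, "rb") as f:
    s3.put_object(
        Bucket="my-bucket",
        Key="photos/photo.jpg",
        Body=f,
        Metadata={"mtime": str(mtime)},  # stored as x-amz-meta-mtime
    )

# On download, read it back and restore it to the local file:
head = s3.head_object(Bucket="my-bucket", Key="photos/photo.jpg")
restored = float(head["Metadata"]["mtime"])
# os.utime(local_path, (restored, restored)) after writing the file locally
```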
We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS server. We have many scheduled tasks on each. I have taken each of the neo-cron.xml files and copied the var XML elements, from within the struct type='coldfusion.server.ConfigMap' XML element, and pasted them within that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all showed as expected.
My problem is, when I try to update any of the tasks I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task, it tells me a task with that name does not exist. So it seems to me that the task information must also be stored elsewhere. When I try to update a task, the record doesn't exist in the secondary location, so ColdFusion tries to add it as new to the neo-cron.xml file, which causes an error because it already exists. And when trying to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess, though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and also causing me to be unable to delete jobs. I believe it has something to do with encoding the parentheses because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
You will have to copy the neo-cron.xml from C:\ColdFusion10\lib of one server to the other. After that, restart the server to make the changes effective. Log into the CF Admin and check the functionality.
This should work.
Note: Please take a backup of the existing neo-cron.xml before making the changes.
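For example, a minimal sketch of the backup-then-copy step (the UNC source path is hypothetical; adjust both paths to your install):

```
# Back up the target server's neo-cron.xml, then copy the source server's file over it.
# Paths assume a default ColdFusion 10 install; the source path is hypothetical.
import shutil

target = r"C:\ColdFusion10\lib\neo-cron.xml"
source = r"\\oldserver\c$\ColdFusion10\lib\neo-cron.xml"

shutil.copy2(target, target + ".bak")  # backup first
shutil.copy2(source, target)
# Then restart the ColdFusion service and verify the tasks in CF Admin.
```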
I am working on doing some simple analytics on a Django website (v1.4.1). Seeing as this data will be gathered on pretty much every server request, I figured the right way to do this would be with a piece of custom middleware.
One important metric for the site is how often given images are accessed. Since each image is its own object, I thought about using django-hitcount, but figured that was unnecessary for what I was trying to do. If it proves easier, I may use it though.
The current conundrum I face is that I don't want to query the database and look for a given object for every HttpRequest that occurs. Instead, I would like to wait until a successful response (indicated by an HttpResponse status of 200 or whatever), and then query the server and update a hit field for the corresponding image. The reason is that the only way to access the path of the image is in process_request, while the only way to access the status code is in process_response.
So, what do I do? Is it as simple as creating a class variable that can hold the path and then looking up the file once a response code of 200 is returned, or should I just use django-hitcount? A sketch of what I have in mind is below.
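To make it concrete, here is a minimal sketch of the middleware I am considering (the Image model, its fields, and the URL prefix are all hypothetical; I stash the path on the request object rather than on the class, so concurrent requests don't clobber each other):

```
# Sketch only: Image model, 'hits' field, and '/images/' prefix are hypothetical.
from django.db.models import F

from myapp.models import Image  # hypothetical app/model


class ImageHitMiddleware(object):
    def process_request(self, request):
        # Stash the path on the request itself (not on the class) so that
        # concurrent requests don't overwrite each other's state.
        request._hit_path = request.path
        return None

    def process_response(self, request, response):
        path = getattr(request, '_hit_path', None)
        if response.status_code == 200 and path and path.startswith('/images/'):
            # F() makes the increment a single UPDATE, avoiding a
            # read-modify-write race on the counter.
            Image.objects.filter(path=path).update(hits=F('hits') + 1)
        return response
```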
Thanks for your help.
Set up a cron task to parse your Apache/Nginx/whatever access logs on a regular basis, perhaps with something like pylogsparser.
You could use memcache to store the counters and then periodically persist them to the database. There are risks that memcache will evict the value before it's been persisted but this could be acceptable to you.
This article provides more information and highlights a risk arising when using hosted memcache with keys distributed over multiple servers. http://bjk5.com/post/36567537399/dangers-of-using-memcache-counters-for-a-b-tests
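For illustration, a minimal sketch of that pattern using Django's cache framework (assuming a memcached backend; the model and key names are hypothetical, and the eviction caveat above still applies):

```
# Sketch only: counter keys and the Image model are hypothetical.
from django.core.cache import cache
from django.db.models import F

from myapp.models import Image  # hypothetical model


def record_hit(image_id):
    key = 'image_hits:%s' % image_id
    cache.add(key, 0)   # no-op if the key already exists
    cache.incr(key)


def flush_hits_to_db(image_ids):
    # Run periodically (e.g. from a cron job) to persist the counters.
    for image_id in image_ids:
        key = 'image_hits:%s' % image_id
        count = cache.get(key) or 0
        if count:
            Image.objects.filter(pk=image_id).update(hits=F('hits') + count)
            cache.decr(key, count)  # subtract what was just persisted
```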
I have a system in which I get a lot of messages. Each message has a unique ID, but it can also receive updates during its lifetime. As the time between a message being sent and handled can be very long (weeks), the messages are stored in S3. For each message, only the last version is needed. My problem is that occasionally two messages with the same ID arrive together, but in two versions (older and newer).
Is there a way for S3 to have a conditional PutObject request where I can declare "put this object unless I have a newer version in S3"?
I need an atomic operation here.
That's not the use-case for S3, which is eventually-consistent. Some ideas:
You could try to partition your messages - all messages that start with A-L go to one box, M-Z go to another box. Then each box locally checks that there are no duplicates.
Your best bet is probably some kind of database. Depending on your use case, you could use a regular SQL database, or maybe a simple RAM-only database like Redis. Write to multiple Redis DBs at once to avoid SPOF.
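To sketch the Redis variant: a small Lua script keeps the "only if newer" check and the write atomic on the Redis side. This is a sketch only (the key naming is hypothetical), and you would PUT to S3 only when the script returns 1:

```
# Sketch only: key names are hypothetical (redis-py).
# The version check and the write happen atomically inside one Lua script.
import redis

r = redis.StrictRedis()

# KEYS[1] = version key, ARGV[1] = incoming version
WRITE_IF_NEWER = r.register_script("""
local current = tonumber(redis.call('GET', KEYS[1]))
local incoming = tonumber(ARGV[1])
if current == nil or incoming > current then
    redis.call('SET', KEYS[1], incoming)
    return 1
end
return 0
""")

def should_write(message_id, version):
    # Returns 1 if this version is the newest seen; only then PUT to S3.
    return WRITE_IF_NEWER(keys=['msg_version:%s' % message_id], args=[version])
```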
There is SWF, which can make a unique processing queue for each item, but that would probably mean more HTTP requests than just checking in S3.
David's idea about turning on versioning is interesting. You could have a daemon that periodically trims off the old versions. When reading, you would have to do "read repair" where you search the versions looking for the newest object.
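A sketch of what that read repair could look like with boto3 (bucket/key names hypothetical; note this picks the newest version by write time, so if your messages carry their own version numbers, you would store those as object metadata and compare them instead):

```
# Sketch only: bucket/key names are hypothetical; versioning must be enabled.
import boto3

s3 = boto3.client("s3")


def read_newest(bucket, key):
    # List all stored versions of the key and fetch the most recently written one.
    resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
    versions = [v for v in resp.get("Versions", []) if v["Key"] == key]
    if not versions:
        return None
    newest = max(versions, key=lambda v: v["LastModified"])
    obj = s3.get_object(Bucket=bucket, Key=key, VersionId=newest["VersionId"])
    return obj["Body"].read()
```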
Couldn't this be solved by using tags, and using a Condition on that when using PutObject? See "Example 3: Allow a user to add object tags that include a specific tag key and value" here: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html#tagging-and-policies
I am trying to load my data using a separate query to the server after the records get dirty in the store. The updated values are sent to the server and the relevant actions are performed using a custom Ajax call, handled on the server side to update all the related records. But when the data is loaded again, I get the above-mentioned error.
The possible reason could be that, since the records are dirty in the store and I am trying to load the data again without committing the store, it gives me the error. So I tried to do an "Application.defaultTransaction.rollback()". It removes those records from the updated bucket, but the "key" in the updated bucket (the object type) still exists, and I still get the error. Can anyone help me with this?
In short: is there a way to force clean the store or move all the objects in created/updated/inflight buckets to clean bucket?
Application.store.get('defaultTransaction').rollback() will remove any dirty objects in the store and take it to the initial state.
There is an open issue for store.rollback() too, which might be an alternative once merged to master.
https://github.com/emberjs/data/pull/350#issuecomment-9578563