Did VS Code 1.14 remove Compare?

I only noticed this because of a project I was working on with a client's CSS file. I was using Compare in 1.13 to check existing CSS against some changes I was making for her. When I upgraded to 1.14, the Compare function was wonky: it let me tag one file for Compare but then wouldn't let me right-click the second (or any other) file. This led me to believe it had been removed, but I didn't see any mention of it in the release notes.
I've since gone back to 1.13. I rarely use Compare, but this one instance alerted me that there may be other issues I'm unaware of, especially if right-clicking file names in the File Explorer doesn't respond.

The Compare feature was untouched in 1.14. This does not mean, however, that the feature isn't subject to current bugs on your platform. On Mac OS X the feature is working as intended.
But as a long-time user I've found the Compare feature to be unintuitive.
I would suggest either using the Diff feature (F7 or Shift+F7) or perhaps switching to git and its diffing tool.

Managed document not working in MarkLogic 10 after xdmp:document-insert

1. Manage the document for the first time using dls:document-insert-and-manage.
2. Update the same document using xdmp:document-insert.
3. The document gets lost from the DLS latest-version collection, i.e. it no longer shows up in cts:search(/scopedIntervention/id, dls:documents-query()).
First time managing the document:

```
(: URI and retain-history flag are illustrative :)
dls:document-insert-and-manage(
  "/scopedIntervention/someId12345.xml",
  fn:true(),
  <scopedIntervention>
    <id>someId12345</id>
    <scopedInterventionName>First Name</scopedInterventionName>
    <forTestOnly>true</forTestOnly>
    <inactive>true</inactive>
  </scopedIntervention>)
```
**Document inserted with versioning**
Verify the document is present in the latest-documents collection:
cts:search(/scopedIntervention/id, dls:documents-query())
The document is present in the managed "latest" collection.
Update the same document:

```
(: URI is illustrative; same URI as the managed document above :)
xdmp:document-insert(
  "/scopedIntervention/someId12345.xml",
  <scopedIntervention>
    <id>someId12345</id>
    <scopedInterventionName>Updated Name</scopedInterventionName>
    <forTestOnly>true</forTestOnly>
    <inactive>true</inactive>
  </scopedIntervention>)
```
**Update document to same URI using xdmp:document-insert**
Again verify whether the document is present in the latest-documents collection:
cts:search(/scopedIntervention/id, dls:documents-query())
The document is NOT present in the managed "latest" collection (it has been lost from the collection).
After applying the DLS upgrade with the following step, the same document shows up in the list again:
```
xquery version "1.0-ml";
import module namespace dls = "http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

dls:set-upgrade-status(fn:false()),
dls:start-upgrade(),
fn:doc("http://marklogic.com/dls/upgrade-task-status.xml"),
dls:latest-validation-results(),
dls:set-upgrade-status(fn:true())
```
> Update the same document using xdmp:document-insert

You are most likely removing the DLS Latest collection at this step. Further, version history is not preserved when you do this.
Instead of using xdmp:document-insert, you should use dls:document-checkout-update-checkin.
Please read to the end -- if you did NOT do a DLS Upgrade on an upgraded ML version, STOP NOW and follow the upgrade instructions. Not doing so will leave DLS in an unstable state, and anything else you do will make things much harder to repair.
+1 Rob. Regardless of whether it 'worked' or appeared to 'work' in V7, DLS was not designed to handle the case you describe. The DLS architecture depends on encapsulating all changes to documents within the checkin/checkout semantics. If you bypass that, you might as well bypass DLS entirely, because it won't work. The fact that it was 'working' in V7 is misleading: it may simply not have misbehaved in ways your application cared about, or your code may have coincidentally done sufficiently similar work to the internals. You might get lucky and find a way to do so again, but I encourage you to consider how to work within the defined behaviour of the library, or to refactor the parts of your code that are not 'DLS friendly' so that they operate between checkout/checkin windows -- not all updates have to use checkout-update-checkin; you can check out, do whatever you need, then check in.
As a migration workaround you MIGHT be able to make use of the upgrade functions added to dls on an ongoing basis.
See https://docs.marklogic.com/dls:start-upgrade
In V9 (I believe), significant non-backwards-compatible changes were made to DLS internals that require running this code one time.
The assumption was an in-place update from the prior DLS to the current one. However, the code may also happen to work on an ongoing basis, depending on the details of exactly what your application code is doing that the DLS code doesn't know about.
The 'new' DLS code adds an internal collection to optimize the common case of searching for 'latest' documents -- if that is dropped then those documents will not show up on DLS searches (for 'latest').
You mention your code consists of 'migration scripts'. If these are migrating from V7 to V10, then you could run your code before the V10 update, then run the V10 update, then run the DLS upgrade. After that the documents should be in good shape -- as long as you don't do anything else that is not defined behaviour for managed documents.

Is there an update/change flag in the current version (v4) of gspread?

Is there an indicator or flag in gspread that indicates whether or not a change has been made to a sheet or worksheet? This appears to have been present as an attribute called updated before version 2.0, or maybe that served a different purpose?
You're looking for the Detect Changes guide in the Drive API.
> For Google Drive apps that need to keep track of changes to files, the Changes collection provides an efficient way to detect changes to all files, including those that have been shared with a user. The collection works by providing the current state of each file, if and only if the file has changed since a given point in time.

There's a github code demo for testing purposes.

Should these auxiliary files be under Git version control?

I decided to start using the Git version control system for my C++ project. I'm new to version control. For the trunk, things are simple: I just commit all the project versions I have. I kept each version as a separate folder because I knew I'd very soon use Git. But I encountered a problem with my branches.
At some stage of the development, I decided there's one class I want to develop in a branch. Without version control, I had to make a "manual" branch. I copied the most recent header file and source file of that class to a separate folder and started working there. I made several versions there to work with simultaneously. One version was the first prototype of the class according to the plan (for which I made the "branch"). Then I added another file, in which I copied the first one but removed things that seemed unnecessary. This way I have two versions: one with all my ideas and features, and the other with just what I really use in my code, without what's not in use at the moment.
But then I added more. As development went on, I decided it might be a good idea to make that class a template. So I added a third version, which is just like the second one, but where some functionality previously implemented using polymorphism is implemented using a template. I can't yet say which version is best, as it's too early to tell, so I want to keep all three together.
Then I made another special file: a copy of the third version's header file, in which each line can be marked or not marked. Marked means I use that specific method or I'm sure it's going to be in use very soon; otherwise the line isn't marked.
Then, some time later, I started a new branch. And for that branch I needed a new version of that class developed in the first branch. So I just copied one of the versions to the new branch's folder and started working there. Now again I had some kind of auxiliary file: I had 2 files, one from which I delete class methods I use, and one into which I write new methods I need to have.
Now I want to start using Git and I wonder: for all the project's text files, plans, diagrams, etc., it's obvious - I keep them outside the Git repo. Whenever collaborative editing is needed I can set up a wiki or something like that. But for all those copies of the same header file, and for those auxiliary "marked" files, what do I do with them? I mean, it's fine by me to have them all in a branch, but what happens when I merge a branch into the trunk? I don't want to have all these copies and versions and lists, just the one final class file I've made.
On one hand, these are C++ source files used while coding. On the other hand, they're not part of the pure source code of the software package; they just help me while I work but will never get compiled, because in the end there's just the final version of the class which I chose to merge, and all the other aux files, lists, etc. are kept just for reference.
What would be the best thing to do?
Thanks for reading my long story :)
EDIT: It's a local repo on my personal computer
Always keep documentation in the same repository as the source code. If you do not, your documentation will rot. This is because documentation is written against some version of your software, so it has to develop the same way the software develops.
If your documentation is automatically generated or compiled into another format, commit only the source data, the makefile, and the generator configuration, just like you do with source code.
What you describe is the normal use of branches: you have your master branch ("official", as it were) and a branch to develop a new feature (it doesn't really have to live in a separate directory, if I understand you correctly). Periodically you synchronize the feature branch with the master, either by rebasing it on the master or merging its changes in. In turn, you can well have subordinate branches in which you try out approaches to developing the feature, handled with respect to the feature branch just as that one is handled with respect to the master. But in that case you have to be careful whenever you rebase.
You should keep any data that isn't easy to recreate in the repository, be it source code, documentation or even design sketches. Stuff that can be recreated (object code, automatically formatted documentation, ...) should be kept out (any change there will create a difference to be checked in). Your repository (particularly unpublished branches) is your own workspace; it can be as messy as you like.
Take a look at the book mentioned at the git homepage.
Well, that’s clearly documentation and not source code, so you should separate it from your source code. As your documentation seems to be branch dependent, you should still check it into the repo, but in a separate doc directory.
About the merging: How a merge works is up to you in the end. Git just has a default merge strategy which is what most people want most of the time. But if you say that a merge into the main branch should just bring the code and not the docu, then that’s fine. Just merge that way:
git merge mybranch --no-commit
rm -rf docu-dir
git add -A
git commit

Design approaches to read/write different version of same config file

In our project we have an application that uses an external configuration file (say server.xml). Now we need to design a setup-tool GUI in C++/Qt to read and edit such configuration files, and it should be able to handle all the different versions of the file. The user will choose the file version and then proceed with the editing. From one version to the next not much changes: maybe there is a new XML tag, a tag with a different name, or a tag in a different position.
What's the best design approach for this? We are planning to go for a standard MVC design pattern, but how do we deal with all the different configuration versions without rewriting the same GUI code again and again?
Here is the sample config file:
<?xml version="1.0" encoding="utf-8"?>
<Server_configuration ver="11">
  <core>
    <enable-tms>true</enable-tms>
    <enable-gui-messages>true</enable-gui-messages>
    <waiting-for-config-timeout>10000</waiting-for-config-timeout>
    <remoting>
      <port>50000</port>
      <join-timeout>5000</join-timeout>
      <ismultithread>true</ismultithread>
      <maxconcurrentrequests>20</maxconcurrentrequests>
    </remoting>
  </core>
  <content>
    <ftp>
      <ip>192.168.0.227</ip>
      <port>21</port>
      <userid>******</userid>
      <passwd>******</passwd>
    </ftp>
    <library>
      <ip>192.168.0.227</ip>
      <port>50023</port>
    </library>
    <local>
      <asset-root>/assetroot</asset-root>
      <kdm-expiration-warning>172800000</kdm-expiration-warning>
    </local>
    <hula-store-daemon>
      <ip>127.0.0.1</ip>
      <port>5567</port>
    </hula-store-daemon>
  </content>
</Server_configuration>
This is by no means a drop-in solution, but here are some things to do and consider. Every situation will differ.
Have an explicit version identifier in your config files. Fingerprinting them is a real (error-prone) pain.
Consider having a tool that will update from version to version (see the sketch below). It will be easier than reading old versions and trying to apply them.
It may be easier to do every version step individually, but this can make the conversions lossier. A happy hybrid is to do minor updates from version to version but have "checkpoint" major upgrades that jump right to the latest (or the latest "checkpoint"). This is kind of like incremental backups with full backup snapshots every once in a while.
Keep the user informed. A sysadmin won't be happy if you are changing their settings. You might want to make the process interactive, or put comments into the file for every added/moved/removed setting. I would also recommend keeping removed settings in some section of the file for user reference. (Put a note explaining why they are there as well.)
Back up the old file. Your script will crash, and it will eat data. Do something like naming the current file ${oldname}.old-${ver}~. Saving the settings in a different section of the file won't always be enough, and this will save your users a lot of heartache.
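A minimal sketch of that version-to-version upgrade chain, assuming a hypothetical in-memory Config struct with illustrative field names (none of these names come from the original question):

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical in-memory representation of the parsed server.xml.
struct Config {
    int version = 11;
    int remotingPort = 50000;
    int joinTimeoutMs = 5000;
    bool enableTms = true;
};

// Each upgrader transforms a Config from version N to version N+1.
using Upgrader = std::function<void(Config&)>;

const std::map<int, Upgrader>& upgraders() {
    static const std::map<int, Upgrader> table = {
        // 11 -> 12: example of a setting that gained a sane minimum.
        {11, [](Config& c) { c.joinTimeoutMs = std::max(c.joinTimeoutMs, 1000); c.version = 12; }},
        // 12 -> 13: example of a behavioural default change.
        {12, [](Config& c) { c.enableTms = true; c.version = 13; }},
    };
    return table;
}

// Apply upgraders one step at a time until the target version is reached.
void upgradeTo(Config& cfg, int targetVersion) {
    while (cfg.version < targetVersion) {
        auto it = upgraders().find(cfg.version);
        if (it == upgraders().end())
            throw std::runtime_error("No upgrade path from version " + std::to_string(cfg.version));
        it->second(cfg);
    }
}

int main() {
    Config cfg;          // pretend this was parsed from an old server.xml
    upgradeTo(cfg, 13);  // bring it up to the version the GUI understands
    std::cout << "Now at config version " << cfg.version << "\n";
}
```

The GUI then only ever has to render the newest in-memory model; differences between file versions are absorbed by the upgrade chain, optionally with "checkpoint" upgraders that jump several versions at once.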
Versioning should always be designed to be as robust and as simple as possible. It is crucial for you to determine whether each version of your application must be compatible with each version of the setup tool (which is rare), or whether you can, for example, meet your needs if any newer setup tool works with any same or older application, but not vice versa.
One way compatibility
One possibility to design for the latter is to add a version attribute to the XML file but try to keep it at the same fixed value forever by always only changing the structure and semantics of the XML file in backward compatible ways. For example, adding an element is backward compatible as long as the setup tool can interpret its absence the same way both the old setup tool and the application would behave. It does not hurt that the new setup tool always writes an (equivalent) value to the new element, because two-way compatibility with the old application is not required.
Once the day comes when you cannot maintain backward compatibility on input, you just change the value of the version attribute and start special casing it in the setup tool.
If you validate the XML against an XSD, notice that XSD can actually do one frequently useful thing for you: assign default attribute values. This way, your setup tool's source code may not even actually notice that the underlying document was missing a recently added attribute!
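As an illustration of treating an absent element as the old default, a Qt/QDom sketch might look like the following; the textOr helper and the log-level element are made up for the example:

```cpp
#include <QDomDocument>
#include <QFile>
#include <QDebug>

// Return the text of a child element, or a fallback if the element is absent
// (i.e. the file was written by an older version of the application).
static QString textOr(const QDomElement& parent, const QString& tag, const QString& fallback) {
    const QDomElement child = parent.firstChildElement(tag);
    return child.isNull() ? fallback : child.text();
}

int main() {
    QFile file("server.xml");
    if (!file.open(QIODevice::ReadOnly))
        return 1;

    QDomDocument doc;
    if (!doc.setContent(&file))
        return 1;

    const QDomElement core = doc.documentElement().firstChildElement("core");
    // "log-level" is a hypothetical element added in a newer config version;
    // older files simply don't have it, so we fall back to the old behaviour.
    const QString logLevel = textOr(core, "log-level", "info");
    qDebug() << "log level:" << logLevel;
    return 0;
}
```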
Two way compatibility
Strict versioning is needed. A schema definition (XSD, RELAX NG, ...) should be defined for each version of the XML file, and the file should be validated against it whenever it is read by the setup tool, written by the setup tool, or read by the application. The schema definition may be identical for several consecutive versions if only the interpretation of the same XML has changed, so when in doubt, always increase the version number.
Do what you can to educate everyone that they cannot just edit the latest schema and be done with it. Unreliable versioning is worse than no versioning.

Scan for changed files

I'm looking for a good, efficient method for scanning a directory structure for changed files on Windows XP+. Something like how git does it is exactly what I'm looking for: when running git status it displays all modified files, all new (untracked) files, and deleted files very quickly, which is exactly what I would like to do.
I have a basic model up and running which performs an initial scan and stores all filenames, sizes, dates, and attributes.
On a subsequent scan it checks whether the size, attributes, or date have changed and marks the file as changed.
My issue now comes in detecting moved and deleted files. Is there a tried and tested method for this sort of thing? I'm struggling to come up with a good method.
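A rough sketch of that kind of snapshot comparison in standard C++17 (the move heuristic here, matching size and last write time between a disappeared and a newly appeared path, is just one possible choice):

```cpp
#include <cstdint>
#include <filesystem>
#include <iostream>
#include <string>
#include <unordered_map>

namespace fs = std::filesystem;

struct Entry {
    std::uintmax_t size;
    fs::file_time_type mtime;
};

using Snapshot = std::unordered_map<std::string, Entry>;

// Walk the tree and record size + last write time for every regular file.
Snapshot takeSnapshot(const fs::path& root) {
    Snapshot snap;
    for (const auto& e : fs::recursive_directory_iterator(root)) {
        if (e.is_regular_file())
            snap[e.path().string()] = {e.file_size(), e.last_write_time()};
    }
    return snap;
}

// Compare two snapshots: report modified, new, moved and deleted files.
// A deleted entry matching a new entry by size + mtime is treated as a move.
void diff(const Snapshot& before, const Snapshot& after) {
    Snapshot added, removed;

    for (const auto& [path, entry] : after) {
        auto it = before.find(path);
        if (it == before.end())
            added[path] = entry;
        else if (it->second.size != entry.size || it->second.mtime != entry.mtime)
            std::cout << "modified: " << path << "\n";
    }
    for (const auto& [path, entry] : before)
        if (!after.count(path))
            removed[path] = entry;

    for (const auto& [newPath, entry] : added) {
        bool moved = false;
        for (auto it = removed.begin(); it != removed.end(); ++it) {
            if (it->second.size == entry.size && it->second.mtime == entry.mtime) {
                std::cout << "moved: " << it->first << " -> " << newPath << "\n";
                removed.erase(it);
                moved = true;
                break;
            }
        }
        if (!moved) std::cout << "new: " << newPath << "\n";
    }
    for (const auto& [path, entry] : removed)
        std::cout << "deleted: " << path << "\n";
}
```

Size and write time can collide, so a content hash captured during the initial scan is a safer, if slower, tiebreaker for move detection.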
I should mention that it will eventually use ReadDirectoryChangesW to monitor files and alert the user when something changes so a full scan is really a last resort after the initial scan.
Thanks,
J
EDIT: I think I may have described the problem badly. The issue I'm facing is not so much detecting the changes - I have ReadDirectoryChangesW() using IOCP on multiple threads to detect when a change happens - the issue is more what to do with the information. For example, a moved file is reported as a delete followed by a create, and a rename comes in 2 parts: old name, followed by new name. So what I'm asking is how to differentiate between a delete that is part of a move and an actual delete. I'm guessing buffering the changes and processing them in batches would be an option, but it feels messy.
In native code, FileSystemWatcher is replaced by ReadDirectoryChangesW. Using this properly is not simple; there is a good baseline to build off here.
I have used this code in a previous job and it worked pretty well. The Win32 API itself (and FileSystemWatcher) are prone to problems that are described in the docs and also discussed in various places online, but the impact of those will depend on your use cases.
EDIT: the exact change is indicated in the FILE_NOTIFY_INFORMATION structure that you get back - adds, removals, rename data including old and new name.
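A sketch of walking that buffer and pairing the two halves of a rename might look like this (Windows-only; how the buffer is obtained from ReadDirectoryChangesW and the IOCP plumbing are not shown):

```cpp
#include <windows.h>
#include <iostream>
#include <string>

// Process one buffer returned by ReadDirectoryChangesW. Rename events arrive
// as FILE_ACTION_RENAMED_OLD_NAME immediately followed by
// FILE_ACTION_RENAMED_NEW_NAME, so we remember the old name and pair it up.
void processChanges(const BYTE* buffer, DWORD bytesReturned) {
    if (bytesReturned == 0) return;  // buffer overflowed: fall back to a rescan

    std::wstring pendingOldName;
    const FILE_NOTIFY_INFORMATION* info =
        reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(buffer);

    for (;;) {
        // FileName is NOT null-terminated; FileNameLength is in bytes.
        std::wstring name(info->FileName, info->FileNameLength / sizeof(WCHAR));

        switch (info->Action) {
        case FILE_ACTION_ADDED:
            std::wcout << L"created: " << name << L"\n"; break;
        case FILE_ACTION_REMOVED:
            std::wcout << L"deleted: " << name << L"\n"; break;
        case FILE_ACTION_MODIFIED:
            std::wcout << L"modified: " << name << L"\n"; break;
        case FILE_ACTION_RENAMED_OLD_NAME:
            pendingOldName = name; break;
        case FILE_ACTION_RENAMED_NEW_NAME:
            std::wcout << L"renamed: " << pendingOldName << L" -> " << name << L"\n";
            pendingOldName.clear(); break;
        }

        if (info->NextEntryOffset == 0) break;
        info = reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(
            reinterpret_cast<const BYTE*>(info) + info->NextEntryOffset);
    }
}
```

A move between watched locations still arrives as a remove followed by an add, so telling it apart from a real delete generally means holding the remove in a short-lived queue and matching it against adds that show up within a small window (by size, or by a hash captured during the initial scan).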
I voted Liviu M. up. However, another option, if you don't want to use the .NET framework for some reason, would be to use the basic Win32 API call FindFirstChangeNotification.
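For completeness, a minimal FindFirstChangeNotification loop looks roughly like this; note that, unlike ReadDirectoryChangesW, it only signals that something under the directory changed, not what changed, so you still have to rescan or diff a snapshot (the path is just an example):

```cpp
#include <windows.h>
#include <iostream>

int main() {
    // Watch a directory tree for file-name, directory-name and last-write changes.
    HANDLE watch = FindFirstChangeNotificationW(
        L"C:\\some\\dir", TRUE /* watch subtree */,
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME |
        FILE_NOTIFY_CHANGE_LAST_WRITE);
    if (watch == INVALID_HANDLE_VALUE) return 1;

    for (;;) {
        if (WaitForSingleObject(watch, INFINITE) != WAIT_OBJECT_0) break;

        // Something changed somewhere under the directory; rescan or diff here.
        std::cout << "change detected, rescanning...\n";

        if (!FindNextChangeNotification(watch)) break;
    }
    FindCloseChangeNotification(watch);
    return 0;
}
```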
You can use USN journaling if you are up to it; that is pretty low-level (NTFS-level) stuff.
Here you can find detailed information and source code included. It is written in C# but most of it is PInvoking C/C++ functions.