One of our new standards is to eliminate all VS warnings prior to PR submission, and one that consistently gets missed is the missing-XML-comment warning on migrations. Our official policy is to add the suppression for that warning to the migration file (as there's no real purpose in commenting it).
e.g.
#pragma warning disable CS1591
Is there a way to automate this so that whenever a migration file is generated for our projects, the suppression is added to it automatically? Similarly, is there a way to do this for the db snapshot, as it also loses the suppression when it is regenerated during a migration.
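For reference, this is roughly what the top of one of our generated migration files looks like after we add the suppression manually (the namespace and class name here are just placeholders):
#pragma warning disable CS1591
using Microsoft.EntityFrameworkCore.Migrations;

namespace MyProject.Migrations              // placeholder namespace
{
    public partial class AddWidgetTable : Migration   // placeholder migration name
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // generated schema changes go here
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // generated rollback goes here
        }
    }
}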
I am very new to Siebel and I want to perform a repository migration from one environment to another.
The command I am using is something like this on the target server:
./srvrupgwiz /m master_Test2Prod.ucf
So my question is: what happens if the repository migration fails in the middle and is unable to continue?
Will the target environment become corrupted? Is there a way to recover?
I am thinking there must be a way to take a backup of the current repository on the target environment and somehow restore it later.
If this is true, then how do I do that?
thanks
By default, the Siebel Repository you are replacing in the target environment will be renamed to "SS Temp Siebel Repository". You are prompted to supply the name for the newly imported repository (which will default to "Siebel Repository"). When a new repository row is being imported, its ROW_ID value is appended to the end of the name you provided. Once it is successfully committed, that suffixed value is removed. Therefore you can always tell when a repository has only been partially imported. If something fails, it's perfectly safe to delete the partial one (or leave it there; the next attempt will result in an entirely new one with yet another ROW_ID value suffixed to the end). You can recover the old one simply by renaming it. You can see the exact steps followed by the Database Configuration utility's Migrate Repository process by looking in the UCF files that drive it (e.g. master_dev2prod.ucf and driver_dev2prod.ucf).
In all fairness, the Siebel version and database system have little influence on the type of solution most people will put in place, which is a reversal of the database changes.
Now, Oracle, Microsoft and IBM (the only supported brands) each have their own approaches, and I'm most familiar with Oracle's. Many Oracle implementations support flashback. This is a rolling log of all changes which allows one to 'travel back in time' by undoing the statements. This includes deletes as well. The maximum size of this log is something to pay attention to, as the Siebel DB is quite a large volume of data to be imported. I'm sure that Microsoft and IBM systems have similar technologies.
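For illustration only (the table and time window are made up, and your flashback retention has to cover the window), a flashback query and a table-level flashback in Oracle look roughly like this:
-- view the data as it was an hour ago (flashback query)
SELECT * FROM siebel.s_repository AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
-- or roll the table itself back (requires ALTER TABLE ... ENABLE ROW MOVEMENT beforehand)
FLASHBACK TABLE siebel.s_repository TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);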
In any case, the old-fashioned export to disk works on all systems.
You can back up the existing repository by going to the Repository object type in the Object Explorer in Siebel Tools and renaming the existing repository.
In case the repository import fails, you just need to change the name of the backed-up repository back to "Siebel Repository".
Also use /l log_file_name in the command to capture the logs of the import process.
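For example (the log file name is just an illustration):
./srvrupgwiz /m master_Test2Prod.ucf /l rep_migration.log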
Your command is fine for a repository migration using an answer file. However, you can split the repository migration out into individual commands rather than using the unattended upgrade wizard. One of these commands is (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe
You can use this executable to import or export repositories. It is often used as a means to back up existing repositories, which tends to be referred to as "exprep". Rather than spending additional time during a release doing a full export from the source and then an import into the target, the export from the source can be done in advance, writing out a .dat file which represents the entire repository. This file can then be read in as part of a repository import, which can save time.
In order to perform an export/backup of your current repository, you can use a command like the one below (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe /A E /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_exprep.log
Once you have the exported .dat file, you can run a repository import referring to this file, rather than a database with your repository inside. You do this the same way using an answer file like in your original command, but the answer file will reference the .dat file. You can step through the Siebel wizard in order to write out this answer file if you are not confident editing it manually.
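For completeness, an import driven directly by repimexp would mirror the export command above with the action switched to import - something along these lines (this is a sketch, not a tested command; verify the exact switches against your Siebel version):
%SIEBSRVR_HOME%\bin\repimexp.exe /A I /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_imprep.log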
Our product (a C++ Windows application, Google Test as the testing framework, VS2015 as the IDE) has a number of file-based interfaces to external products, i.e., we generate a file which is then imported into an external product. For testing these interfaces, we have chosen a golden file approach:
1. Invoke the code that produces an interface file and save the resulting file for later reference (this is our golden file - we assume here that the current state of the interface code is correct).
2. Commit the golden file to the TFS repository.
3. Make changes to the interface code.
4. Invoke the code and compare the resulting file with the corresponding golden file.
5. If the files are equal, the test passes (the change was a refactoring). Otherwise:
   - Enable the refresh mode, which makes sure that the golden file is overwritten by the file resulting from invoking the interface code.
   - Invoke the interface code (thus refreshing the golden file).
   - Investigate the outgoing changes in VS's Team Explorer. If the changes are as intended by our code changes from step 3, commit the code changes and the golden file. Otherwise, go back to step 3.
This approach works great for us, but it has one drawback: VS only recognizes that the golden files have changed (and thus allows us to investigate the changes) if we use a local workspace. If we use a server workspace and programmatically remove the read-only flag from the golden files before refreshing them as described above, VS still does not recognize that the files have changed.
So my question is: Is there any way to make our golden file testing approach work with server workspaces, e.g. by telling VS that some files have changed?
I can think of two ways.
The first approach is to run tf checkout instead of removing the Read-Only attribute.
This has an intrinsic risk, as one may inadvertently check in the generated file; this should be prevented by restricting check-in permissions on those files. You may also need to run tf undo to clean up the local state, as in the sketch below.
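A rough sketch of that first approach (the path to the golden files is made up):
rem check the golden files out so TFS tracks the local edits
tf checkout tests\golden\*.*
rem ... run the tests in refresh mode and inspect the diffs ...
rem discard the pending changes without checking anything in
tf undo tests\golden\*.*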
Another approach would be to map the golden files to a different directory and use a local diff tool instead of relying on Visual Studio's built-in tool. This is less risky than the other solution, but may be cumbersome. Do not forget that you can "clone" a workspace (e.g. Import Visual Studio TFS workspaces).
I am currently investigating Flyway as an alternative to Liquibase, but was unable to find an answer to the following question in the documentation:
Assume a migration X is found to contain a bug after deployment in production. In retrospect, X should never have been executed as is, but it's already too late. However, we'd like to replace the migration X with a fixed version X', such that databases that are populated from scratch do not suffer from the same bug.
In Liquibase, you would fix the original changeset and use the <validChecksum> tag to notify Liquibase that the change was made on purpose. Is there an equivalent of <validChecksum> in Flyway, or an alternative mechanism that achieves the same?
Although it is a violation of Flyway's API, the following approach has worked fine for us:
Write a beforeValidate.sql that fixes the checksum to match the expected value, so that when Flyway actually validates the checksum, everything seems fine.
An example:
-- The script xyz/V03_201808230839__Faulty_migration.sql was modified to fix a critical bug.
-- However, at this point there were already production systems with the old migration file.
-- On these systems, no additional statements need to be executed to reflect the change,
-- BUT we need to repair the Flyway checksum to match the expected value during the 'validate' command.
UPDATE schema_version
SET checksum = -842223670
WHERE (version, checksum) = ('03.201808230839', -861395806);
This has the advantage of only targeting one specific migration, unlike Flyway's repair command.
Depending on how big the mess is, you could also:
simply have a follow-up migration to correct it (a typo in a new column name, ..)
if that is not an option, manually fix both the migration and the DB and issue Flyway.repair() to realign the checksums (http://flywaydb.org/documentation/command/repair.html) - see the sketch below.
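A minimal sketch of the repair call, assuming the fluent configuration API of newer Flyway versions (the connection details are placeholders); the command-line client offers the same thing as flyway repair:
import org.flywaydb.core.Flyway;

public class RepairChecksums {
    public static void main(String[] args) {
        // placeholder connection details
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/mydb", "user", "password")
                .load();
        // realign the stored checksums with the migration scripts on disk
        flyway.repair();
    }
}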
What is the point of hiding this bad change if it has already reached production? Is it expensive to replay it every time on empty databases (I assume for CI runs)? Make a new DB baseline with that migration already included.
In SAS, using the SASMSTORE option I can specify a place where the SASMACR catalog will exist. Some macros will reside in this catalog.
At some point I may need to change a macro, and this may happen while the macro (and therefore the catalog) is in use by another user. The catalog will then be locked and unavailable for modification.
How can I avoid such a situation?
If you're using a SAS Macro catalog as a public catalog that is shared among colleagues, a few options exist.
First, use SVN or a similar source control option so that you and your colleagues each have a local copy of the macro catalog. This is my preferred option. I'd do this, and also probably not use stored compiled macros (SCMs) - I'd just set it up as autocall macros, personally - because that makes it easy to resolve conflicts (as you have separate files for each macro). With a compiled catalog you won't be able to resolve conflicts, so you'll have to make sure everyone is very well behaved about always downloading the newest copy before making any changes, and discusses any changes so you don't have two competing changes made at about the same time. If SCMs are important for your particular use case, you could version control the macro source files that create the SCM and build the SCM yourself every time you refresh your local copy of the sources.
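For the autocall route, the setup is just an option statement pointing at the shared source folder (the path is hypothetical), with each macro in its own .sas file named after the macro:
/* append the shared folder to the autocall search path */
options mautosource sasautos=("/shared/sas/macros", sasautos);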
Second, you could and should separate development from production here. Even if you have a shared library located on a shared network folder, you should have a development copy as well that is explicitly not locked by anyone except when developing a new macro for it (or updating a currently used macro). Then make your changes there, and on a consistent schedule push them out once they've been tested and verified (preferably in a test environment, so you have the classic three: dev, test, and prod environments). Something like this:
Changes in Dev are pushed to Test on Wednesdays. Anyone who's got something ready to go by Wednesday 3pm puts it in a folder (the macro source code, that is), and it's compiled into the test SCM automatically.
Test is then verified Thursday and Friday. Anything that is verified in Test by 3pm Friday is pushed to the Prod source code folder at that time, paying attention to any potential conflicts with other new code in Test (nothing is pushed to Prod if something currently in Test but not yet verified could conflict with it).
Production then is run at 3pm Friday. Everyone has to be out of the SCM by then.
I suggest not using Friday for prod if you have something that runs over the weekend, of course, as it risks you having to fix something over the weekend.
Create two folders, e.g. maclib1 and maclib2, and a dataset which stores the current library number.
When you want to rebuild your library, query the current number, increment (or reset to 1 if it's already 2), assign your macro library path to the corresponding folder, compile your macros, and then update the dataset with the new library number.
When it comes to assigning your library, query the current library number from the dataset, and assign the library path accordingly.
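A rough sketch of the rebuild side, assuming a control dataset CTRL.MACLIB_CURRENT with one numeric variable LIBNUM and the two folders described above (all names are made up; the macro definitions themselves need the / store option):
/* find the library currently in use and pick the other one */
proc sql noprint;
   select libnum into :current trimmed from ctrl.maclib_current;
quit;
%let next = %eval(3 - &current);   /* 1 -> 2, 2 -> 1 */

/* compile the macros into the folder that is NOT currently in use */
libname newlib "/shared/maclib&next";
options mstored sasmstore=newlib;
%include "/shared/macro_source/all_macros.sas";   /* hypothetical source file */

/* flip the pointer so new sessions pick up the fresh catalog */
proc sql;
   update ctrl.maclib_current set libnum = &next;
quit;
Sessions that use the macros read LIBNUM first and issue the corresponding libname and options sasmstore= statements, so they never open the catalog that is being rebuilt.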
Today I ran into an error and have no clue how to fix it.
Error: App with label XYZ could not be found. Are you sure your INSTALLED_APPS setting is correct?
Where XYZ stands for the app name that I am trying to reset. This error shows up every time I try to reset it (manage.py reset XYZ). Showing all the SQL code, however, works.
Even manage.py validate shows no error.
I have already commented out every single line of code in models.py that I touched in the last three months (function by function, model by model), and even with no models left I get this error.
Here http://code.djangoproject.com/ticket/10706 I found a bug report about this error. I also applied one of the patches to locate the error; it raises an exception so you get a traceback, but even there, there is no indication of which of my files the error occurred in.
I don't want to paste my code right now, because the file I edited the most is nearly 1000 lines of code.
If any of you have had the same error, please tell me where I can look for the problem. In that case I can post the important part of the source. Otherwise it would be too much at once.
Thank you for helping!!!
I had a similar problem; I only got it working after creating an empty models.py file.
I was running Django 1.3.
Try cleaning up all your build artifacts: build files, temporary files, and so on. Also, ./manage.py test XYZ will show you a stack trace. Then try running Python with the -m pdb option and step through the code to see where it fails and why.
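For example (run from the project directory; type c at the pdb prompt to continue until the exception is raised, which drops you into a post-mortem session at the failing frame):
python -m pdb manage.py reset XYZ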
You don't specify which server you're using. With Apache you'll almost certainly need a restart for the changes to take effect. If you're using the development server, try restarting that. If this doesn't work, you may need to give us some more details.
I'd also check your paths, as you may have edited one file but be using a different one.
Plus check what's still in your database, as some of your previous versions may be interfering.
Finally, as a last resort, I'd try a clean install (on another Django instance) and see if that goes cleanly. If it does, then I'd know I'd got a conflict; if not, then the problem's in the code.