I'm looking to have Visual Studio's SQL Schema Compare generate the delta SQL to update a production database, including a column rename on a table with system versioning on (temporal table).
Using refactoring within a Visual Studio SQL project will indeed create an entry in the refactorlog file, and will generate the correct sp_rename SQL for the renamed field. However, the associated history table doesn't get the field renamed - it gets a drop and add instead, which will not work well once there is actual data in the table, and it also leaves the two tables out of sync with each other.
Maybe I could modify the refactorlog XML to indicate it is a field with a history table attached? The XML below shows the ElementType is "SqlSimpleColumn". Are there any other options I could explore?
System: Visual Studio Community 2017
SQL server: Azure SQL Database V12
<Operation Name="Rename Refactor" Key="48b6ef58-988b-48bf-9606-4048b0c51bf2" ChangeDateTime="04/03/2017 21:19:32">
<Property Name="ElementName" Value="[dbo].[xyz2].[newafterreset]" />
<Property Name="ElementType" Value="SqlSimpleColumn" />
<Property Name="ParentElementName" Value="[dbo].[xyz2]" />
<Property Name="ParentElementType" Value="SqlTable" />
<Property Name="NewName" Value="[updateafterreset]" />
</Operation>
Manually using sp_rename on a system-versioned history table results in the following error:
Msg 13759, Level 16, State 1, Procedure sp_rename, Line 316
Renaming a column failed on history table '<database name>.dbo.<table name>History' because it is not a supported operation on system-versioned tables.
Consider setting SYSTEM_VERSIONING to OFF and trying again.
Is it possible for you to do what it recommends?
Yes, you can set the system versioning off before making changes to the table.
ALTER TABLE [dbo].[TABLE_NAME] SET ( SYSTEM_VERSIONING = OFF )
GO
This will allow you to make the changes to both the current table and the history table.
BEWARE: A word of caution - when you execute sp_rename to rename a column on a system-versioned (temporal) table, it breaks the version history for that entity.
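Putting it all together, here is a minimal sketch of the full round trip, using the table and column names from the refactorlog above and assuming the history table is named [dbo].[xyz2History] (substitute your actual history table name):
ALTER TABLE [dbo].[xyz2] SET ( SYSTEM_VERSIONING = OFF )
GO
-- Rename the column on the current table and on the history table
EXEC sp_rename '[dbo].[xyz2].[newafterreset]', 'updateafterreset', 'COLUMN'
EXEC sp_rename '[dbo].[xyz2History].[newafterreset]', 'updateafterreset', 'COLUMN'
GO
-- Re-enable versioning, pointing back at the same history table
ALTER TABLE [dbo].[xyz2] SET ( SYSTEM_VERSIONING = ON ( HISTORY_TABLE = [dbo].[xyz2History] ) )
GO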
While trying to hash the owner_name column with the following Liquibase script, as suggested by Corda here:
I am using a PostgreSQL database.
<changeSet author="My_Company" id="replace owner_name with owner_name_hash">
<addColumn tableName="iou_states">
<column name="owner_name_hash" type="nvarchar(130)"/>
</addColumn>
<update tableName="iou_states">
<column name="owner_name_hash" valueComputed="hash(owner_name)"/>
</update>
<dropColumn tableName="iou_states" columnName="owner_name"/>
</changeSet>
I was getting the following error:
Reason: liquibase.exception.DatabaseException: Invalid parameter count for "HASH", expected count: "2..3"; SQL statement
What I am not understanding is what other parameters we need to give to the hash function.
Can anyone help me with this and provide a correct script that will hash the column in the table? Many thanks in advance.
Liquibase is correctly doing what you're telling it to do, but the syntax is not valid because the HASH call needs an additional parameter. The PostgreSQL docs don't have a lot of information about the available parameters, but here are a couple of links that have more details.
https://www.postgresql.org/docs/14/functions-binarystring.html
https://www.postgresql.org/docs/14/functions-string.html
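As an illustration only, here is a minimal sketch of one way to compute the hash on PostgreSQL 11 or later, assuming a hex-encoded SHA-256 is acceptable for your use case (the table and column names come from the question; sha256(), convert_to() and encode() are built-in PostgreSQL functions):
-- Hypothetical replacement for valueComputed="hash(owner_name)"
UPDATE iou_states
SET owner_name_hash = encode(sha256(convert_to(owner_name, 'UTF8')), 'hex');
In the changeSet, the same expression would go into the valueComputed attribute.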
I'm trying to figure out the criteria for a Bulk Deletion Job in Dynamics 365 to enforce GDPR.
Essentially, I need to delete Contact records that haven't been used for more than 13 months.
The main part of the Criteria I've been using is where 'Modified On is Older Than 13 months'.
The problem with this is that the 'Modified On' field only updates when you change the details of the Contact record.
For example, a Contact might've been created in 2019 and the details have remained the same since then - therefore the Modified On date is 2019. However, this Contact only just emailed us last week and so their Contact is still in use but this isn't reflected in the Modified On date.
The criteria I have come up with to try and get around this is as follows:
Unfortunately, this is still returning Contacts that don't match the criteria I need.
Is it possible to get this criteria using the advanced find functionality, or will it require something external?
First of all, the usual recommendation is not to delete the records: you'll lose historic data, and you will end up with records in the system with broken relationships.
If you have already backed up the historic data, or it does not have any value to the business, then, unless someone can correct me, I think what you are trying to do won't work with Advanced Find. This is because of the limitation in Advanced Find with related entities and the fact that we cannot add conditions to the "Does not contain data" relationship option.
Your criteria are not giving the results you expect because they request the following (assuming you have selected the Contact table in Advanced Find):
Select Contacts whose modifiedOn date is older than 13 months and that
are linked as the customer of a case whose modifiedOn date is
older than 13 months, and these cases have a related email whose
modifiedOn date is older than 13 months
So what you are retrieving are old Contacts that had at least one Case some time ago, not Contacts with no recent activity (a Contact can have an old Case, but may or may not have a recent one).
Another thing to consider is that you are using the "Customer" column [Cases(Customer)] from the Case table. Depending on how your organization is handling this column, and since the Customer column can hold Accounts or Contacts, you might want to use "primarycontactid" [Cases(Contact)] or another custom column (I've seen some designs where a custom column is used to track the Contact).
Last year I had a request from an organization to automatically merge thousands of contacts following some rules. What I ended up doing was a console application, and one of the steps was to validate whether the Contacts had any interactions (Leads, Opportunities, Cases and Activities) and count them; that way, when the merge was performed, I chose the Contact with more related records as the main Contact. A similar approach could be used in your scenario.
You can create an efficient query with QueryExpressions, but if you are not used to them you can use this FetchXML in the console application:
<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
<entity name="contact">
<attribute name="fullname" />
<attribute name="telephone1" />
<attribute name="contactid" />
<order attribute="fullname" descending="false" />
<link-entity name="incident" from="primarycontactid" to="contactid" link-type="outer" alias="case">
<attribute name="incidentid" />
<filter type="and">
<condition attribute="modifiedon" operator="last-x-months" value="13" />
</filter>
</link-entity>
</entity>
</fetch>
It will give you a list of Contacts; if a Contact had a Case that was modified in the last 13 months, you'll get a GUID in the case.incidentid column. If case.incidentid is null, there are no recent Cases and the Contact is a candidate for deletion.
Keep in mind that:
You might need to update the FetchXML to your needs.
You'll need to handle paging on the query results if there are more than 5,000 Contacts (see the sketch after this list).
Depending on the number of Contacts in the system you'll want to create different batches to process them because it can take a while to complete.
It would be a good idea to create a report to validate the Contacts before deleting them.
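For the paging point above, a minimal sketch of the paged variant, assuming a page size of 5,000 (page and count are standard FetchXML attributes; after each page, read the paging cookie from the response and pass it back via the paging-cookie attribute on the next request):
<!-- Page 1: request the first 5,000 Contacts -->
<fetch version="1.0" output-format="xml-platform" mapping="logical"
       distinct="true" page="1" count="5000">
  <!-- same <entity> body as the query above -->
</fetch>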
My task is to automate testing of OBIEE report data. The main step is to get report's logical SQL.
I have a dashboard with reports. Every report has a named filter (not an inline one) attached. So, I'd like to find a way to set up filter values, programmatically run generation of the report SQL (so that the WHERE clause is filled in with my values), execute it and retrieve the data. I have tried the following approaches:
OBIEE web services. First I use generateReportSQL, then call executeSQLQuery. This approach works fine for inline filters; I managed to set them up in filterExpressions. But I cannot get it working with saved filters. How do I generate the report with values set for columns in an attached saved filter? I found no information in the documentation or on the internet.
Generate a dashboard URL with all prompts set, run it, and then read the usage tracking tables to retrieve the SQL queries. But this seems a rather roundabout approach; I believe there must be a simpler way to do the task. Moreover, usage tracking does not record report executions in its DB immediately - there is some delay. Is there a way to avoid it?
runcat.sh + nqcmd - still, I have not found a way to set values for a saved filter.
So, my question is: how do I generate a report's logical SQL with prompt values set for an attached saved filter?
Thanks in advance,
Jol
UPDATE
Some examples:
XML of my usage tracking analysis contains the following:
<saw:filter>
<sawx:expr xsi:type="sawx:logical" op="and">
<sawx:expr xsi:type="sawx:special" op="prompted">
<sawx:expr xsi:type="sawx:sqlExpression">"S_NQ_ACCT"."START_DT"</sawx:expr>
</sawx:expr>
<sawx:expr xsi:type="sawx:special" op="prompted">
<sawx:expr xsi:type="sawx:sqlExpression">"S_NQ_ACCT"."USER_NAME"</sawx:expr>
</sawx:expr>
<sawx:expr xsi:type="sawx:special" op="prompted">
<sawx:expr xsi:type="sawx:sqlExpression">"S_NQ_ACCT"."SAW_DASHBOARD_PG"</sawx:expr>
</sawx:expr>
</sawx:expr>
</saw:filter>
I can use the filterExpressions tag of generateReportSQL to create logical SQL that includes my values in the WHERE clause. Everything is OK if the filter tag is included in the analysis's XML (the inline-filter case, as in the example above):
<v7:generateReportSQL>
<v7:reportRef>
<v7:reportPath>report path</v7:reportPath>
</v7:reportRef>
<v7:reportParams>
<!--Zero or more repetitions:-->
<v7:filterExpressions>
<![CDATA[<sawx:expr xsi:type="sawx:string" op="equal" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:saw="com.siebel.analytics.web/report/v1.1" xmlns:sawx="com.siebel.analytics.web/expression/v1.1" subjectArea="Usage Tracking">
<sawx:expr xsi:type="sawx:sqlExpression">"S_NQ_ACCT"."USER_NAME"</sawx:expr>
<sawx:expr xsi:type="sawx:string">testuser</sawx:expr></sawx:expr>
]]>
</v7:filterExpressions>
.............................
</v7:reportParams>
<v7:sessionID>...</v7:sessionID>
</v7:generateReportSQL>
XML of my test analysis contains the following:
<saw:filter>
<sawx:expr xsi:type="sawx:savedFilter" path="/shared/myproject/_filters/myroject/my saved filter" name="my saved filter" /></saw:filter>
'my saved filter' has 'is prompted' columns that I'd like to set to my values before running the analysis to get the dataset. But how do I do that?
If web services are useless here, what could be used instead?
Since those are normally used for completely dynamic population of content (instantiated variables pulled from user profiles, values obtained from prompts, etc.), you won't get them in the LSQL.
tl;dr - Robin wrote a nice post about load testing with LSQL: https://www.rittmanmead.com/blog/2014/03/built-in-obiee-load-testing-with-nqcmd/
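For reference, a typical nqcmd invocation along the lines of that post looks like this (the DSN, credentials and file names here are placeholders to adapt):
nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s report.lsql -o results.txt
Here -d is the ODBC DSN for the BI Server, -u/-p the credentials, -s the file containing the logical SQL to execute, and -o the output file.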
Edit: Baseline Validation Tool (BVT) was proposed and is the answer.
I am trying to get stock option chains from Yahoo using YQL. I have tried this command in the YQL console:
select * from yahoo.finance.options
However, I get this error XML:
<?xml version="1.0" encoding="UTF-8"?>
<error xmlns:yahoo="http://www.yahooapis.com/v1/base.rng" yahoo:lang="en-US">
<diagnostics>
<publiclyCallable>true</publiclyCallable>
</diagnostics>
<description>No definition found for Table yahoo.finance.options</description>
</error>
It looks like this table doesn't exist anymore. Does anyone know what the correct table is?
You have to provide at least one where clause to make this query work. Like this:
select * from yahoo.finance.options where symbol='MMM'
or,
select * from yahoo.finance.options where symbol='A'
or,
select * from yahoo.finance.options where symbol='YHOO'
All the above queries work fine. If you want more specific data, you have to provide more conditions to filter your desired data out of the whole result set.
yahoo.finance.options is a community table. To read about community tables check here. From the link I just posted:
In order to use YQL with the community tables, you must pass in the datatables env file. You can do this on the YQL console as part of a YQL statement, or by passing in a query parameter.
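A sketch of what that looks like as a REST call, passing the env file as a URL-encoded query parameter (this was the standard public YQL endpoint at the time):
https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.options%20where%20symbol%3D'MMM'&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys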
The YQL console to test this can be found here
Something I just figured out today is that you can simply use this URL to get the option data from Yahoo:
https://query2.finance.yahoo.com/v7/finance/options/SPXS?straddle=true
YQL has had this problem since the middle of 2014, so the option chain can instead be retrieved from Yahoo Finance by scraping; I have done this with both Python and Matlab.
I've got 8 worksheets within an Excel workbook that I'd like to import into separate tables within a SQL Server DB.
I'd like to import each of the 8 worksheets into a separate table, ideally with table names coinciding with the worksheet tab names, but initially I just want to get the data into the tables, so arbitrary table names work for the time being too.
The format of the data in each of the worksheets (and tables by extension) is the same (and will be identical), so I'm thinking some kind of loop could be used to do this.
Data looks like this:
Universe Date Symbol Shares MktValue Currency
SMALLCAP 6/30/2011 000360206 27763 606361.92 USD
SMALLCAP 6/30/2011 000361105 99643 2699407.52 USD
SMALLCAP 6/30/2011 00081T108 103305 810926.73 USD
SMALLCAP 6/30/2011 000957100 57374 1339094.76 USD
And table format in SQL would/should be consistent with the following:
CREATE TABLE dbo.[market1] (
[Universe_ID] char(20),
[AsOfDate] smalldatetime,
[Symbol] nvarchar(20),
[Shares] decimal(20,0),
[MktValue] decimal(20,2),
[Currency] char(3)
)
I'm open to doing this using either SQL/VBA/C++ or some combination (as these are the languages I know and have access to). Any thoughts on how to best go about this?
You could use SSIS or DTS packages to import them. Here are a couple of references to get you going:
Creating a DTS Package - pre 2005
Creating a SSIS Package - 2005 forward
For an Excel file (2007 or 2010) with an .xlsx extension, I have renamed it to .zip, extracted its contents into a directory, and used SQL XML Bulk Load to import the sheets and reference tables. Once I have all the data in SQL Server, I use basic SQL queries to extract/transform the data needed into designated worksheets. This keeps the "digestion" logic in SQL and uses minimal external VBScript or C# development.
Link to SQL Bulk Load of XML data: http://support.microsoft.com/kb/316005
In SQL Management Studio, right click on a database, then click Tasks, then Import Data. This will take you through some screens and create an SSIS package to import the file. At some point in the process it will ask you if you want to save the package (I would run it a few times as well to make sure it imports your data the way you want it). Save it and then you can schedule the package to be run as a Job via the SQL Server Agent. (The job type will be Sql Server Integration Services).
You can use the following script:
SELECT * INTO XLImport3 FROM OPENDATASOURCE('Microsoft.Jet.OLEDB.4.0',
'Data Source=C:\test\xltest.xls;Extended Properties=Excel 8.0')...[Customers$]
SELECT * INTO XLImport4 FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\test\xltest.xls', [Customers$])
SELECT * INTO XLImport5 FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\test\xltest.xls', 'SELECT * FROM [Customers$]')
Or you can use the following, assuming you have already set up a linked server named EXCELLINK pointing at the workbook:
SELECT * INTO XLImport2 FROM OPENQUERY(EXCELLINK,
'SELECT * FROM [Customers$]')
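Since the question mentions looping over the 8 worksheets, here is a hedged sketch of doing that with dynamic SQL. The sheet names are placeholders, it assumes the Jet provider and ad hoc distributed queries are enabled, and each sheet lands in a table named after its tab:
-- List the 8 tab names here (placeholders)
DECLARE @sheets TABLE (name sysname);
INSERT INTO @sheets (name) VALUES ('Sheet1'), ('Sheet2'); -- ... add all 8

DECLARE @name sysname, @sql nvarchar(max);
DECLARE sheet_cursor CURSOR FOR SELECT name FROM @sheets;
OPEN sheet_cursor;
FETCH NEXT FROM sheet_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- SELECT ... INTO creates the target table named after the sheet's tab
    SET @sql = N'SELECT * INTO ' + QUOTENAME(@name) +
               N' FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
                 ''Excel 8.0;Database=C:\test\xltest.xls'',
                 ''SELECT * FROM [' + @name + N'$]'')';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM sheet_cursor INTO @name;
END
CLOSE sheet_cursor;
DEALLOCATE sheet_cursor;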