How to do multiple parallel readers for data export using Google Spanner?

External Backups/Snapshots for Google Cloud Spanner recommends using queries with timestamp bounds to create snapshots for export. At the bottom of the Timestamp Bounds documentation it states:
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as version GC. By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at a read timestamp more than one hour in the past.
So any export would need to complete within an hour. A single reader (i.e. a single select * from table using timestamp X) would not be able to export the entire table within an hour.
How can multiple parallel readers be implemented in Spanner?
Note: It is mentioned in one of the comments that support for Apache Beam is coming, but it looks like that uses a single reader:
/** A simplest read function implementation. Parallelism support is coming. */
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/NaiveSpannerReadFn.java#L26
Is there a way to do the parallel read that Beam requires today using existing APIs? Or will Beam need to use something that isn't released yet on Google Spanner?

It is possible to read data in parallel from Cloud Spanner with the BatchClient class. See the read_data_in_parallel documentation for more information.
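For illustration, a minimal sketch of a partitioned read with the BatchClient (types come from the com.google.cloud.spanner client library; the project, instance, database and query are placeholders):
Spanner spanner = SpannerOptions.newBuilder().build().getService();
BatchClient batchClient = spanner.getBatchClient(
        DatabaseId.of("my-project", "my-instance", "my-database"));
try (BatchReadOnlyTransaction txn =
        batchClient.batchReadOnlyTransaction(TimestampBound.strong()))
{
    // Split the query into partitions; each partition can be handed to a
    // separate worker thread or process and read independently, all at the
    // same snapshot timestamp.
    List<Partition> partitions = txn.partitionQuery(
            PartitionOptions.getDefaultInstance(),
            Statement.of("SELECT * FROM my_table"));
    for (Partition partition : partitions)
    {
        try (ResultSet rs = txn.execute(partition))
        {
            while (rs.next())
            {
                // process the row
            }
        }
    }
}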
If you are looking to export data from Cloud Spanner, I'd recommend you use Cloud Dataflow (see the integration details here), as it provides higher-level abstractions and takes care of data-processing details like scaling and failure handling.
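With Beam's SpannerIO connector, such a read can be expressed in a few lines (a sketch assuming an existing Pipeline p; the instance, database and query are placeholders):
PCollection<Struct> rows = p.apply("Read from Spanner",
        SpannerIO.read()
                .withInstanceId("my-instance")
                .withDatabaseId("my-database")
                .withQuery("SELECT * FROM my_table"));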

Edit 2018-03-30 - The example project has been updated to use the BatchClient offered by Google Cloud Spanner
After the release of the BatchClient for reading/downloading large amounts of data, the example project below has been updated to use the new batch client instead of the standard database client. The basic idea behind the project is still the same: copy data to/from Cloud Spanner and any other database using standard JDBC functionality. The following code snippet sets the JDBC connection to batch read mode:
if (source.isWrapperFor(ICloudSpannerConnection.class))
{
    ICloudSpannerConnection con = source.unwrap(ICloudSpannerConnection.class);
    // Make sure no transaction is running
    if (!con.isBatchReadOnly())
    {
        if (con.getAutoCommit())
        {
            con.setAutoCommit(false);
        }
        else
        {
            con.commit();
        }
        con.setBatchReadOnly(true);
    }
}
When the connection is in 'batch read only mode', the connection will use the BatchClient of Google Cloud Spanner instead of the standard database client. When one of the Statement#execute(String) or PreparedStatement#execute() methods is called (these methods allow multiple result sets to be returned), the JDBC driver will create a partitioned query instead of a normal query. The results of this partitioned query will be a number of result sets (one per partition) that can be fetched with the Statement#getResultSet() and Statement#getMoreResults(int) methods.
Statement statement = source.createStatement();
boolean hasResults = statement.execute(select);
int workerNumber = 0;
while (hasResults)
{
    ResultSet rs = statement.getResultSet();
    PartitionWorker worker = new PartitionWorker("PartitionWorker-" + workerNumber, config, rs, tableSpec, table, insertCols);
    workers.add(worker);
    hasResults = statement.getMoreResults(Statement.KEEP_CURRENT_RESULT);
    workerNumber++;
}
The result sets returned by Statement#execute(String) are not executed until the first call to ResultSet#next(). Passing these result sets to separate worker threads ensures parallel download and copying of the data, as sketched below.
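A minimal sketch of how the workers could then be run (assuming the workers list built above and that PartitionWorker implements Runnable; the timeout is an arbitrary example and InterruptedException handling is omitted):
ExecutorService executor = Executors.newFixedThreadPool(workers.size());
for (PartitionWorker worker : workers)
{
    // Each worker consumes one partition's result set in parallel
    executor.submit(worker);
}
executor.shutdown();
executor.awaitTermination(60, TimeUnit.MINUTES);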
Original answer:
This project was initially created for conversion in the other direction (from a local database to Cloud Spanner), but as it uses JDBC for both source and destination, it can also be used the other way around: converting a Cloud Spanner database to a local PostgreSQL database. Large tables are converted in parallel using a thread pool.
The project uses this open source JDBC driver instead of the JDBC driver supplied by Google. The source Cloud Spanner JDBC connection is set to read-only mode with autocommit=false. This ensures that the connection automatically creates a read-only transaction using the current time as the timestamp the first time you execute a query. All subsequent queries within the same (read-only) transaction will use the same timestamp, giving you a consistent snapshot of your Google Cloud Spanner database.
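A minimal sketch of that connection setup (the URL details are placeholders for your own project, instance, database and key file, following the open source driver's URL format):
Connection source = DriverManager.getConnection(
    "jdbc:cloudspanner://localhost;Project=my-project;Instance=my-instance;Database=my-database;PvtKeyPath=/path/to/key.json");
// Read-only + no autocommit: the first query starts a read-only transaction
// at the current timestamp, and all queries in it see the same snapshot.
source.setReadOnly(true);
source.setAutoCommit(false);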
It works as follows:
Set the source database to read-only transactional mode.
The convert(String catalog, String schema) method iterates over all tables in the source database (Cloud Spanner).
For each table the number of records is determined, and depending on the size of the table, the table is copied using either the main thread of the application or a worker pool.
The class UploadWorker is responsible for the parallel copying. Each worker is assigned a range of records from the table (for example rows 1 to 2,400). The range is selected by a select statement in this format: 'SELECT $COLUMNS FROM $TABLE ORDER BY $PRIMARY_KEY LIMIT $BATCH_SIZE OFFSET $OFFSET' (the same placeholders that are filled in by the worker code below).
Commit the read-only transaction on the source database after ALL tables have been converted.
Below is a code snippet of the most important parts.
public void convert(String catalog, String schema) throws SQLException
{
    int batchSize = config.getBatchSize();
    destination.setAutoCommit(false);
    // Set the source connection to transaction mode (no autocommit) and read-only
    source.setAutoCommit(false);
    source.setReadOnly(true);
    try (ResultSet tables = destination.getMetaData().getTables(catalog, schema, null, new String[] { "TABLE" }))
    {
        while (tables.next())
        {
            String tableSchema = tables.getString("TABLE_SCHEM");
            if (!config.getDestinationDatabaseType().isSystemSchema(tableSchema))
            {
                String table = tables.getString("TABLE_NAME");
                // Check whether the destination table is empty.
                int destinationRecordCount = getDestinationRecordCount(table);
                if (destinationRecordCount == 0 || config.getDataConvertMode() == ConvertMode.DropAndRecreate)
                {
                    if (destinationRecordCount > 0)
                    {
                        deleteAll(table);
                    }
                    int sourceRecordCount = getSourceRecordCount(getTableSpec(catalog, tableSchema, table));
                    if (sourceRecordCount > batchSize)
                    {
                        convertTableWithWorkers(catalog, tableSchema, table);
                    }
                    else
                    {
                        convertTable(catalog, tableSchema, table);
                    }
                }
                else
                {
                    if (config.getDataConvertMode() == ConvertMode.ThrowExceptionIfExists)
                        throw new IllegalStateException("Table " + table + " is not empty");
                    else if (config.getDataConvertMode() == ConvertMode.SkipExisting)
                        log.info("Skipping data copy for table " + table);
                }
            }
        }
    }
    source.commit();
}
private void convertTableWithWorkers(String catalog, String schema, String table) throws SQLException
{
    String tableSpec = getTableSpec(catalog, schema, table);
    Columns insertCols = getColumns(catalog, schema, table, false);
    Columns selectCols = getColumns(catalog, schema, table, true);
    if (insertCols.primaryKeyCols.isEmpty())
    {
        log.warning("Table " + tableSpec + " does not have a primary key. No data will be copied.");
        return;
    }
    log.info("About to copy data from table " + tableSpec);
    int batchSize = config.getBatchSize();
    int totalRecordCount = getSourceRecordCount(tableSpec);
    int numberOfWorkers = calculateNumberOfWorkers(totalRecordCount);
    int numberOfRecordsPerWorker = totalRecordCount / numberOfWorkers;
    if (totalRecordCount % numberOfWorkers > 0)
        numberOfRecordsPerWorker++;
    int currentOffset = 0;
    ExecutorService service = Executors.newFixedThreadPool(numberOfWorkers);
    for (int workerNumber = 0; workerNumber < numberOfWorkers; workerNumber++)
    {
        int workerRecordCount = Math.min(numberOfRecordsPerWorker, totalRecordCount - currentOffset);
        UploadWorker worker = new UploadWorker("UploadWorker-" + workerNumber, selectFormat, tableSpec, table,
                insertCols, selectCols, currentOffset, workerRecordCount, batchSize, source,
                config.getUrlDestination(), config.isUseJdbcBatching());
        service.submit(worker);
        currentOffset = currentOffset + numberOfRecordsPerWorker;
    }
    service.shutdown();
    try
    {
        service.awaitTermination(config.getUploadWorkerMaxWaitInMinutes(), TimeUnit.MINUTES);
    }
    catch (InterruptedException e)
    {
        log.severe("Error while waiting for workers to finish: " + e.getMessage());
        throw new RuntimeException(e);
    }
}
public class UploadWorker implements Runnable
{
    private static final Logger log = Logger.getLogger(UploadWorker.class.getName());
    private final String name;
    private String selectFormat;
    private String sourceTable;
    private String destinationTable;
    private Columns insertCols;
    private Columns selectCols;
    private int beginOffset;
    private int numberOfRecordsToCopy;
    private int batchSize;
    private Connection source;
    private String urlDestination;
    private boolean useJdbcBatching;

    UploadWorker(String name, String selectFormat, String sourceTable, String destinationTable, Columns insertCols,
            Columns selectCols, int beginOffset, int numberOfRecordsToCopy, int batchSize, Connection source,
            String urlDestination, boolean useJdbcBatching)
    {
        this.name = name;
        this.selectFormat = selectFormat;
        this.sourceTable = sourceTable;
        this.destinationTable = destinationTable;
        this.insertCols = insertCols;
        this.selectCols = selectCols;
        this.beginOffset = beginOffset;
        this.numberOfRecordsToCopy = numberOfRecordsToCopy;
        this.batchSize = batchSize;
        this.source = source;
        this.urlDestination = urlDestination;
        this.useJdbcBatching = useJdbcBatching;
    }

    @Override
    public void run()
    {
        try (Connection destination = DriverManager.getConnection(urlDestination))
        {
            log.info(name + ": " + sourceTable + ": Starting copying " + numberOfRecordsToCopy + " records");
            destination.setAutoCommit(false);
            String sql = "INSERT INTO " + destinationTable + " (" + insertCols.getColumnNames() + ") VALUES \n";
            sql = sql + "(" + insertCols.getColumnParameters() + ")";
            PreparedStatement statement = destination.prepareStatement(sql);
            int lastRecord = beginOffset + numberOfRecordsToCopy;
            int recordCount = 0;
            int currentOffset = beginOffset;
            while (true)
            {
                int limit = Math.min(batchSize, lastRecord - currentOffset);
                String select = selectFormat.replace("$COLUMNS", selectCols.getColumnNames());
                select = select.replace("$TABLE", sourceTable);
                select = select.replace("$PRIMARY_KEY", selectCols.getPrimaryKeyColumns());
                select = select.replace("$BATCH_SIZE", String.valueOf(limit));
                select = select.replace("$OFFSET", String.valueOf(currentOffset));
                try (ResultSet rs = source.createStatement().executeQuery(select))
                {
                    while (rs.next())
                    {
                        int index = 1;
                        for (Integer type : insertCols.columnTypes)
                        {
                            Object object = rs.getObject(index);
                            statement.setObject(index, object, type);
                            index++;
                        }
                        if (useJdbcBatching)
                            statement.addBatch();
                        else
                            statement.executeUpdate();
                        recordCount++;
                    }
                    if (useJdbcBatching)
                        statement.executeBatch();
                }
                destination.commit();
                log.info(name + ": " + sourceTable + ": Records copied so far: " + recordCount + " of "
                        + numberOfRecordsToCopy);
                currentOffset = currentOffset + batchSize;
                if (recordCount >= numberOfRecordsToCopy)
                    break;
            }
        }
        catch (SQLException e)
        {
            log.severe("Error during data copy: " + e.getMessage());
            throw new RuntimeException(e);
        }
        log.info(name + ": Finished copying");
    }
}

Related

File to DB load using Apache Beam

I need to load a file into my database, but before that I have to verify whether certain data from the file is already present in the database. For instance, if I have 5 records in a file, then I have to check the database 5 times, once for each record.
So how can I pass this value dynamically? We have to pass a dynamic value instead of the hard-coded "2" in the line preparedStatement.setString(1, "2");
Here we are creating a Dataflow pipeline which loads data into the database using Apache Beam. We create a pipeline object and build the pipeline, and we store the data into the database using a PCollection.
Pipeline p = Pipeline.create(options);
p.apply("Reading Text", TextIO.read().from(options.getInputFile()))
    .apply(ParDo.of(new FilterHeaderFn(csvHeader)))
    .apply(ParDo.of(new GetRatePlanID()))
    .apply("Format Result", MapElements.into(TypeDescriptors.strings())
        .via((KV<String, Integer> ABC) -> ABC.getKey() + "," + ABC.getValue()))
    .apply("Write File", TextIO.write()
        .to(options.getOutputFile())
        .withoutSharding());
// Retrieving data from database
PCollection<String> data =
    p.apply(JdbcIO.<String>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost:3306/XYZ")
            .withUsername("root")
            .withPassword("root1234"))
        .withQuery("select * from xyz where z = ?")
        .withCoder(StringUtf8Coder.of())
        .withStatementPreparator(new JdbcIO.StatementPreparator() {
            private static final long serialVersionUID = 1L;

            @Override
            public void setParameters(PreparedStatement preparedStatement) throws Exception {
                preparedStatement.setString(1, "2");
            }
        })
        .withRowMapper(new JdbcIO.RowMapper<String>() {
            private static final long serialVersionUID = 1L;

            @Override
            public String mapRow(ResultSet resultSet) throws Exception {
                return "Symbol: " + resultSet.getInt(1) + "\nPrice: " + resultSet.getString(2) +
                        "\nCompany: " + resultSet.getInt(3);
            }
        }));
As suggested, the most efficient approach would probably be to load the whole file into a temporary table and then run a query to update the requisite rows.
If that can't be done, you could instead read the table into Dataflow (i.e. "select * from xyz") and then do a join/CoGroupByKey to match records with those found in your file. If you expect the existing database to be very large compared to the files you're hoping to upload into it, you could have a DoFn that makes queries to your database directly using JDBC (possibly caching the connection in the DoFn's setUp method) rather than using JdbcIO.
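A minimal sketch of that last approach, reusing the connection details from the question (the class name is hypothetical, and the output formatting is just an example):
// Looks up each input element in the database over JDBC, caching the
// connection per DoFn instance in @Setup as suggested above.
class JdbcLookupFn extends DoFn<String, String> {
    private transient Connection connection;

    @Setup
    public void setUp() throws SQLException {
        connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/XYZ", "root", "root1234");
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement("select * from xyz where z = ?")) {
            ps.setString(1, c.element()); // the dynamic value comes from the input element
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    c.output("Symbol: " + rs.getInt(1) + "\nPrice: " + rs.getString(2));
                }
            }
        }
    }

    @Teardown
    public void tearDown() throws SQLException {
        if (connection != null) {
            connection.close();
        }
    }
}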

Unit Test for Apex Trigger that Concatenates Fields

I am trying to write a test for a before trigger that takes fields from a custom object and concatenates them into a custom Key__c field.
The trigger works in the sandbox, and now I am trying to get it into production. However, whenever I create a purchase and perform DML in a test, a System.assert/assertEquals on the value of Key__c always returns null. I am aware I could do this with a flow/process, but I am trying to solve it with code for my own edification. How can I get the fields to concatenate and return properly in the test? (The commented-out asserts are what I have tried so far; they fail when run.)
trigger Composite_Key on Purchases__c (before insert, before update) {
    if (Trigger.isBefore) {
        for (Purchases__c purchase : Trigger.new) {
            String eventName = String.isBlank(purchase.Event_name__c) ? '' : purchase.Event_name__c + '-';
            String section = String.isBlank(purchase.section__c) ? '' : purchase.section__c + '-';
            String row = String.isBlank(purchase.row__c) ? '' : purchase.row__c + '-';
            String seat = String.isBlank(String.valueOf(purchase.seat__c)) ? '' : String.valueOf(purchase.seat__c) + '-';
            String numseats = String.isBlank(String.valueOf(purchase.number_of_seats__c)) ? '' : String.valueOf(purchase.number_of_seats__c) + '-';
            String adddatetime = String.isBlank(String.valueOf(purchase.add_datetime__c)) ? '' : String.valueOf(purchase.add_datetime__c);
            purchase.Key__c = eventName + section + row + seat + numseats + adddatetime;
        }
    }
}
@isTest
public class CompositeKeyTest {
    public static testMethod void testPurchase() {
        // Create a purchase to fire the trigger
        Purchases__c purchase = new Purchases__c(Event_name__c = 'test', section__c = 'test', row__c = 'test',
                seat__c = 1.0, number_of_seats__c = 'test', add_datetime__c = 'test');
        insert purchase;
        //System.assert(purchases__c.Key__c.getDescribe().getName() == 'testesttest1testtest');
        //System.assertEquals('testtesttest1.0testtest',purchase.Key__c);
    }

    static testMethod void testbulkPurchase() {
        List<Purchases__c> purchaseList = new List<Purchases__c>();
        for (Integer i = 0; i < 10; i++) {
            Purchases__c purchaserec = new Purchases__c(Event_name__c = 'test', section__c = 'test', row__c = 'test',
                    seat__c = i + 1.0, number_of_seats__c = 'test', add_datetime__c = 'test');
            purchaseList.add(purchaserec);
        }
        insert purchaseList;
        //System.assertEquals('testtesttest5testtest',purchaseList[4].Key__c,'Key is not Valid');
    }
}
You need to re-query the records after inserting them to get the updated data from the trigger; the in-memory records you inserted are not refreshed with the field values set by the before trigger.
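For example, a minimal sketch for the first test above (the expected string assumes the trigger shown in the question, where seat__c = 1.0 renders as '1.0' and each field except the last appends a trailing '-'):
purchase = [SELECT Key__c FROM Purchases__c WHERE Id = :purchase.Id];
System.assertEquals('test-test-test-1.0-test-test', purchase.Key__c);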

PowerBi Api - How to get GroupId and DatasetId of Dashboard via API

I have been reading https://powerbi.microsoft.com/en-us/blog/announcing-data-refresh-apis-in-the-power-bi-service/
In this post, it mentions "To get the group ID and dataset ID, you can make a separate API call".
Does anybody know how to do this from the dashboard URL, or do I have to embed the group id and dataset id in my app alongside the dashboard URL?
To get the group ID and dataset ID, you can make a separate API call.
This sentence isn't related to a dashboard, because in one dashboard you can put visuals showing data from many different datasets. These separate API calls are Get Groups (to get the list of groups, find the one you want, and read its id) and Get Datasets In Group (to find the dataset you are looking for and read its id).
But you should already know the groupId anyway, because the dashboard is in the same group.
Alternatively, you can get the datasetId from a particular tile using Get Tiles In Group, but I do not know a way to list the tiles in a dashboard using the REST API.
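For reference, both lookups are plain GET requests against the REST API ({groupId} is a placeholder for the id returned by the first call):
GET https://api.powerbi.com/v1.0/myorg/groups
GET https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets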
This is C# code to get the dataset id from Power BI. Use the method below to call the 'Get' API and fetch the dataset id:
public void GetDatasetDetails()
{
    HttpResponseMessage response = null;
    HttpContent responseContent = null;
    string strContent = "";
    PowerBIDataset ds = null;
    string serviceURL = "https://api.powerbi.com/v1.0/myorg/admin/datasets";
    Console.WriteLine("");
    Console.WriteLine("- Retrieving data from: " + serviceURL);
    response = client.GetAsync(serviceURL).Result;
    Console.WriteLine(" - Response code received: " + response.StatusCode);
    try
    {
        responseContent = response.Content;
        strContent = responseContent.ReadAsStringAsync().Result;
        if (strContent.Length > 0)
        {
            Console.WriteLine(" - De-serializing DataSet details...");
            // Parse the JSON string into objects
            JavaScriptSerializer js = new JavaScriptSerializer();
            js.MaxJsonLength = 2147483647; // Set the maximum json document size to the max
            ds = js.Deserialize<PowerBIDataset>(strContent);
            if (ds != null)
            {
                if (ds.value != null)
                {
                    foreach (PowerBIDatasetValue item in ds.value)
                    {
                        string datasetID = "";
                        string datasetName = "";
                        string datasetWeburl = "";
                        if (item.id != null)
                        {
                            datasetID = item.id;
                        }
                        if (item.name != null)
                        {
                            datasetName = item.name;
                        }
                        if (item.qnaEmbedURL != null)
                        {
                            datasetWeburl = item.qnaEmbedURL;
                        }
                        // Output the dataset data
                        Console.WriteLine("");
                        Console.WriteLine("----------------------------------------------------------------------------------");
                        Console.WriteLine("");
                        Console.WriteLine("Dataset ID: " + datasetID);
                        Console.WriteLine("Dataset Name: " + datasetName);
                        Console.WriteLine("Dataset Web Url: " + datasetWeburl);
                    }
                }
            }
        }
        else
        {
            Console.WriteLine(" - No content received.");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(" - API Access Error: " + ex.ToString());
    }
}
Points to remember:
Make sure these classes exist in your project:
PowerBIDataset is a class with a List of PowerBIDatasetValue as its value member.
PowerBIDatasetValue is a class with id, name and webUrl (all string data type) data members; the code above also reads a qnaEmbedURL member.
Provide the below constants in your project class:
const string ApplicationID = "747d78cd-xxxx-xxxx-xxxx-xxxx"; // Native Azure AD App ClientID -- put your client id here
const string UserName = "user2@xxxxxxxxxxxx.onmicrosoft.com"; // Put your Active Directory / Power BI username here (note this is not a secure place to store it!)
const string Password = "xyxxyx"; // Put your Active Directory / Power BI password here (note this is not secure; this is a sample only)
Call the GetDatasetDetails() method in the Main method of your project class.
And finally, use the below 'Get' API to get the group id:
https://api.powerbi.com/v1.0/myorg/groups

Passing Side Input in PCollection Partition

I want to pass a side input to the Partition transform of a PCollection, and on the basis of that side input I need to divide my PCollection. Is there any way to do this?
PCollectionList<TableRow> part = merged.apply(Partition.of(/* PCollection count function called */, new PartitionFn<TableRow>() {
    @Override
    public int partitionFor(TableRow arg0, int arg1) {
        return 0;
    }
}));
Is there any other way through which I can partition my PCollection? This is what I currently have, writing to a partitioned BigQuery table without DynamicDestinations:
merge.apply("write into target", BigQueryIO.writeTableRows()
.to(new SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>() {
#Override
public TableDestination apply(ValueInSingleWindow<TableRow> value) {
TableRow row = value.getValue();
TableReference reference = new TableReference();
reference.setProjectId("XYZ");
reference.setDatasetId("ABC");
System.out.println("date of row " + row.get("authorized_transaction_date_yyyymmdd").toString());
LOG.info("date of row "+
row.get("authorized_transaction_date_yyyymmdd").toString());
String str = row.get("authorized_transaction_date_yyyymmdd").toString();
str = str.substring(0, str.length() - 2) + "01";
System.out.println("str value " + str);
LOG.info("str value " + str);
reference.setTableId("TargetTable$" + str);
return new TableDestination(reference, null);
}
}).withFormatFunction(new SerializableFunction<TableRow, TableRow>() {
#Override
public TableRow apply(TableRow input) {
LOG.info("format function:"+input.toString());
return input;
}
})
.withSchema(schema1).withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Now I have to use DynamicDestinations instead of this, and have to do the partitioning. Any solution?
Based on seeing TableRow in your code, I suspect that you want to write a PCollection to BigQuery, sending different elements to different BigQuery tables. BigQueryIO.write() already provides a method to do that, using BigQueryIO.write().to(DynamicDestinations). See Writing different values to different BigQuery tables in Apache Beam.
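A minimal sketch of that approach, adapted to the date-based routing from the question (schema1, project XYZ and dataset ABC are taken from the question; using String as the destination key is one possible choice):
merge.apply("write into target", BigQueryIO.writeTableRows()
    .to(new DynamicDestinations<TableRow, String>() {
        @Override
        public String getDestination(ValueInSingleWindow<TableRow> element) {
            // Derive the partition (first day of the month) from the row, as above
            String str = element.getValue().get("authorized_transaction_date_yyyymmdd").toString();
            return str.substring(0, str.length() - 2) + "01";
        }

        @Override
        public TableDestination getTable(String partition) {
            return new TableDestination("XYZ:ABC.TargetTable$" + partition, null);
        }

        @Override
        public TableSchema getSchema(String partition) {
            return schema1;
        }
    })
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));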

Using the Reporting Services Web Service, how do you get the permissions of a particular user?

Using the SQL Server Reporting Services Web Service, how can I determine the permissions of a particular domain user for a particular report? The user in question is not the user that is accessing the Web Service.
I am accessing the Web Service using a domain service account (let's say MYDOMAIN\SSRSAdmin) that has full permissions in SSRS. I would like to programmatically find the permissions of a domain user (let's say MYDOMAIN\JimBob) for a particular report.
The GetPermissions() method on the Web Service will return a list of permissions that the current user has (MYDOMAIN\SSRSAdmin), but that is not what I'm looking for. How can I get this same list of permissions for MYDOMAIN\JimBob? I will not have the user's domain password, so using their credentials to call the GetPermissions() method is not an option. I am, however, accessing this from an account that has full permissions, so I would think that theoretically the information should be available to it.
SSRS gets the NT groups from the user's NT login token. This is why, when you are added to a new group, you are expected to log out and back in. The same applies to most Windows checks (SQL Server, shares, NTFS, etc.).
If you know the NT group(s)...
You can query the ReportServer database directly. I've lifted this almost directly out of one of our reports, which we use to check folder security (C.Type = 1). Filter on U.UserName.
SELECT
    R.RoleName,
    U.UserName,
    C.Path
FROM
    ReportServer.dbo.Catalog C WITH (NOLOCK) -- Parent
    JOIN ReportServer.dbo.Policies P WITH (NOLOCK) ON C.PolicyID = P.PolicyID
    JOIN ReportServer.dbo.PolicyUserRole PUR WITH (NOLOCK) ON P.PolicyID = PUR.PolicyID
    JOIN ReportServer.dbo.Users U WITH (NOLOCK) ON PUR.UserID = U.UserID
    JOIN ReportServer.dbo.Roles R WITH (NOLOCK) ON PUR.RoleID = R.RoleID
WHERE
    C.Type = 1
look into "GetPolicies Method" you can see at the following link.
http://msdn.microsoft.com/en-us/library/reportservice2010.reportingservice2010.getpolicies.aspx
Hopefully this will get you started. I use it when copying folder structures and reports from an old server to a new server, when I want to 'migrate' my SSRS items from the source to the destination server. It is a method to get the security policies for an item on one server and then set the security policies for an identical item on another server, after I have copied the item from the source server to the destination server. You have to set your own source and destination server names.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Web.Services.Protocols; // <=== required for SoapException

namespace SSRS_WebServices_Utility
{
    internal static class TEST
    {
        internal static void GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(string itemPath)
        {
            string sSourceServer = "SOURCE-ServerName";
            Source_ReportService2010.ReportingService2010 sourceRS = new Source_ReportService2010.ReportingService2010();
            sourceRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            sourceRS.Url = @"http://" + sSourceServer + "/reportserver/reportservice2010.asmx";
            string sDestinationServer = "DESTINATION-ServerName";
            Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
            DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";
            Boolean val = true;
            Source_ReportService2010.Policy[] curPolicy = null;
            Destination_ReportService2010.Policy[] newPolicy = null;
            try
            {
                curPolicy = sourceRS.GetPolicies(itemPath, out val); // e.g. of itemPath: "/B2W/001_OLD_PuertoRicoReport"
                int iCounter = 0;
                newPolicy = new Destination_ReportService2010.Policy[curPolicy.Length];
                foreach (Source_ReportService2010.Policy p in curPolicy)
                {
                    // Create the policy
                    Destination_ReportService2010.Policy pNew = new Destination_ReportService2010.Policy();
                    pNew.GroupUserName = p.GroupUserName;
                    // Create the role, which is part of the policy
                    Destination_ReportService2010.Role rNew = new Destination_ReportService2010.Role();
                    rNew.Description = p.Roles[0].Description;
                    rNew.Name = p.Roles[0].Name;
                    pNew.Roles = new Destination_ReportService2010.Role[1];
                    pNew.Roles[0] = rNew;
                    newPolicy[iCounter] = pNew;
                    iCounter += 1;
                }
                DestinationRS.SetPolicies(itemPath, newPolicy);
                Debug.Print("whatever");
            }
            catch (SoapException ex)
            {
                Debug.Print("SoapException: " + ex.Message);
            }
            catch (Exception Ex)
            {
                Debug.Print("NON-SoapException: " + Ex.Message);
            }
            finally
            {
                if (sourceRS != null)
                    sourceRS.Dispose();
                if (DestinationRS != null)
                    DestinationRS.Dispose();
            }
        }
    }
}
To invoke it, use the following:
TEST.GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination("/FolderName/ReportName");
where you have to put your own SSRS folder name and report name, i.e. the path to the item.
In fact, I use a method that loops through all the items in the destination folder and then calls the method, like this:
internal static void CopyTheSecurityPolicyFromSourceToDestinationForAllItems_2010()
{
    string sDestinationServer = "DESTINATION-ServerName";
    Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
    DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
    DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";
    // Return a list of catalog items in the report server database
    Destination_ReportService2010.CatalogItem[] items = DestinationRS.ListChildren("/", true);
    // For each item, debug print some properties and copy its security policy
    foreach (Destination_ReportService2010.CatalogItem ci in items)
    {
        Debug.Print("START----------------------------------------------------");
        Debug.Print("Object Name: " + ci.Name);
        Debug.Print("Object Type: " + ci.TypeName);
        Debug.Print("Object Path: " + ci.Path);
        Debug.Print("Object Description: " + ci.Description);
        Debug.Print("Object ID: " + ci.ID);
        Debug.Print("END----------------------------------------------------");
        try
        {
            GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(ci.Path);
        }
        catch (SoapException e)
        {
            Debug.Print("SoapException START----------------------------------------------------");
            Debug.Print(e.Detail.InnerXml);
            Debug.Print("SoapException END----------------------------------------------------");
        }
        catch (Exception ex)
        {
            Debug.Print("ERROR START----------------------------------------------------");
            Debug.Print(ex.GetType().FullName);
            Debug.Print(ex.Message);
            Debug.Print("ERROR END----------------------------------------------------");
        }
    }
}