Power BI API - How to get the GroupId and DatasetId of a dashboard via the API

I have been reading https://powerbi.microsoft.com/en-us/blog/announcing-data-refresh-apis-in-the-power-bi-service/
In this post, it mentions "To get the group ID and dataset ID, you can make a separate API call".
Does anybody know how to do this from the dashboard URL, or do I have to embed the group ID and dataset ID in my app alongside the dashboard URL?

"To get the group ID and dataset ID, you can make a separate API call."
This sentence isn't related to a dashboard, because in one dashboard you can put visuals showing data from many different datasets. The separate API calls are Get Groups (to get the list of groups, find the one you want and read its id) and Get Datasets In Group (to find the dataset you are looking for and read its id).
But you should already know the groupId anyway, because the dashboard is in the same group.
Alternatively, you can get the datasetId from a particular tile using Get Tiles In Group, but I do not know of a way to list the tiles in a dashboard using the REST API.
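For illustration only, here is a minimal sketch of those two calls in Java. It assumes you already have an Azure AD access token for the Power BI REST API (token acquisition is not shown), and the group id is a placeholder you would take from the first response:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PowerBiIds {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("PBI_ACCESS_TOKEN"); // assumed to be obtained elsewhere
        HttpClient client = HttpClient.newHttpClient();

        // 1. Get Groups: list the workspaces (groups) and pick the id of the one you want.
        HttpRequest groups = HttpRequest.newBuilder(
                URI.create("https://api.powerbi.com/v1.0/myorg/groups"))
                .header("Authorization", "Bearer " + token)
                .GET().build();
        System.out.println(client.send(groups, HttpResponse.BodyHandlers.ofString()).body());

        // 2. Get Datasets In Group: list the datasets in that group and pick the dataset id.
        String groupId = "<group-id-from-step-1>"; // placeholder
        HttpRequest datasets = HttpRequest.newBuilder(
                URI.create("https://api.powerbi.com/v1.0/myorg/groups/" + groupId + "/datasets"))
                .header("Authorization", "Bearer " + token)
                .GET().build();
        System.out.println(client.send(datasets, HttpResponse.BodyHandlers.ofString()).body());
    }
}
Both responses are JSON with a value array; each element has an id and a name you can match against what you see in the Power BI portal.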

This is C# code from a project that gets dataset IDs from Power BI.
Use the method below to call the 'Get' API and fetch the dataset IDs.
public void GetDatasetDetails()
{
    HttpResponseMessage response = null;
    HttpContent responseContent = null;
    string strContent = "";
    PowerBIDataset ds = null;
    string serviceURL = "https://api.powerbi.com/v1.0/myorg/admin/datasets";
    Console.WriteLine("");
    Console.WriteLine("- Retrieving data from: " + serviceURL);
    response = client.GetAsync(serviceURL).Result;
    Console.WriteLine(" - Response code received: " + response.StatusCode);
    try
    {
        responseContent = response.Content;
        strContent = responseContent.ReadAsStringAsync().Result;
        if (strContent.Length > 0)
        {
            Console.WriteLine(" - De-serializing DataSet details...");
            // Parse the JSON string into objects and store in DataTable
            JavaScriptSerializer js = new JavaScriptSerializer();
            js.MaxJsonLength = 2147483647; // Set the maximum json document size to the max
            ds = js.Deserialize<PowerBIDataset>(strContent);
            if (ds != null)
            {
                if (ds.value != null)
                {
                    foreach (PowerBIDatasetValue item in ds.value)
                    {
                        string datasetID = "";
                        string datasetName = "";
                        string datasetWeburl = "";
                        if (item.id != null)
                        {
                            datasetID = item.id;
                        }
                        if (item.name != null)
                        {
                            datasetName = item.name;
                        }
                        if (item.qnaEmbedURL != null)
                        {
                            datasetWeburl = item.qnaEmbedURL;
                        }
                        // Output the dataset data
                        Console.WriteLine("");
                        Console.WriteLine("----------------------------------------------------------------------------------");
                        Console.WriteLine("");
                        Console.WriteLine("Dataset ID: " + datasetID);
                        Console.WriteLine("Dataset Name: " + datasetName);
                        Console.WriteLine("Dataset Web Url: " + datasetWeburl);
                    } // foreach
                } // ds.value
            } // ds
        }
        else
        {
            Console.WriteLine(" - No content received.");
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(" - API Access Error: " + ex.ToString());
    }
}
Points to remember:
Make sure these classes exist in your project:
PowerBIDataset is a class with a List<PowerBIDatasetValue> member named value.
PowerBIDatasetValue is a class with id, name and qnaEmbedURL (all string data type) data members.
Provide the below constants in your project class:
const string ApplicationID = "747d78cd-xxxx-xxxx-xxxx-xxxx";
// Native Azure AD App ClientID -- Put your Client ID here
const string UserName = "user2@xxxxxxxxxxxx.onmicrosoft.com";
// Put your Active Directory / Power BI username here (note this is not a secure place to store it!)
const string Password = "xyxxyx";
// Put your Active Directory / Power BI password here (note this is not a secure place to store it! this is a sample only)
Call the GetDatasetDetails() method in the Main method of your project class.
And finally, use the below 'Get' API to get the group ID:
https://api.powerbi.com/v1.0/myorg/groups


How to do multiple parallel readers for data export using Google Spanner?

External Backups/Snapshots for Google Cloud Spanner recommends using queries with timestamp bounds to create snapshots for export. At the bottom of the Timestamp Bounds documentation it states:
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as version GC. By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at a read timestamp more than one hour in the past.
So any export would need to complete within an hour. A single reader (i.e. select * from table; using timestamp X) would not be able to export the entire table within an hour.
How can multiple parallel readers be implemented in spanner?
Note: It is mentioned in one of the comments that support for Apache Beam is coming, but it looks like that uses a single reader:
/** A simplest read function implementation. Parallelism support is coming. */
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/NaiveSpannerReadFn.java#L26
Is there a way to do the parallel reading that Beam requires today using existing APIs? Or will Beam need to use something that isn't released yet on Google Spanner?
It is possible to read data in parallel from Cloud Spanner with the BatchClient class. Follow read_data_in_parallel for more information.
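Purely as an illustration of the BatchClient approach, a minimal sketch with the Java client library might look like the following; the project, instance, database and table names are placeholders, not anything from the original answer:
import com.google.cloud.spanner.*;
import java.util.List;

public class BatchReadExample {
    public static void main(String[] args) {
        Spanner spanner = SpannerOptions.newBuilder().setProjectId("my-project").build().getService();
        BatchClient batchClient = spanner.getBatchClient(
                DatabaseId.of("my-project", "my-instance", "my-db"));
        // A batch read-only transaction pins one read timestamp for all partitions.
        BatchReadOnlyTransaction txn =
                batchClient.batchReadOnlyTransaction(TimestampBound.strong());
        List<Partition> partitions = txn.partitionQuery(
                PartitionOptions.getDefaultInstance(),
                Statement.of("SELECT * FROM my_table"));
        // Each partition can be executed on its own worker thread.
        partitions.parallelStream().forEach(partition -> {
            try (ResultSet rs = txn.execute(partition)) {
                while (rs.next()) {
                    // process the row
                }
            }
        });
        spanner.close();
    }
}
Because every partition is read within the same batch transaction, all workers see a consistent snapshot at the transaction's read timestamp.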
If you are looking to export data from Cloud Spanner, I'd recommend you use Cloud Dataflow (see the integration details here), as it provides higher-level abstractions and takes care of data processing details like scaling and failure handling.
Edit 2018-03-30 - The example project has been updated to use the BatchClient offered by Google Cloud Spanner
After the release of the BatchClient for reading/downloading large amounts of data, the example project below has been updated to use the new batch client instead of the standard database client. The basic idea behind the project is still the same: copy data to/from Cloud Spanner and any other database using standard JDBC functionality. The following code snippet puts the JDBC connection into batch read mode:
if (source.isWrapperFor(ICloudSpannerConnection.class))
{
    ICloudSpannerConnection con = source.unwrap(ICloudSpannerConnection.class);
    // Make sure no transaction is running
    if (!con.isBatchReadOnly())
    {
        if (con.getAutoCommit())
        {
            con.setAutoCommit(false);
        }
        else
        {
            con.commit();
        }
        con.setBatchReadOnly(true);
    }
}
When the connection is in 'batch read only' mode, it will use the BatchClient of Google Cloud Spanner instead of the standard database client. When one of the Statement#execute(String) or PreparedStatement#execute() methods is called (as these allow multiple result sets to be returned), the JDBC driver will create a partitioned query instead of a normal query. The results of this partitioned query are a number of result sets (one per partition) that can be fetched with the Statement#getResultSet() and Statement#getMoreResults(int) methods.
Statement statement = source.createStatement();
boolean hasResults = statement.execute(select);
int workerNumber = 0;
while (hasResults)
{
    ResultSet rs = statement.getResultSet();
    PartitionWorker worker = new PartitionWorker("PartionWorker-" + workerNumber, config, rs, tableSpec, table, insertCols);
    workers.add(worker);
    hasResults = statement.getMoreResults(Statement.KEEP_CURRENT_RESULT);
    workerNumber++;
}
The result sets returned by Statement#execute(String) are not executed directly, but only after the first call to ResultSet#next(). Passing these result sets to separate worker threads ensures parallel downloading and copying of the data.
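The PartitionWorker class itself is not included in the snippets here; purely as a rough sketch (my own illustration, not the project's actual code, with invented field and constructor names), such a worker could simply drain its result set and write the rows to the destination connection:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class PartitionWorker implements Runnable
{
    private final ResultSet rs;            // one partition of the batch query
    private final Connection destination;  // destination JDBC connection
    private final String insertSql;        // e.g. "INSERT INTO t (c1, c2) VALUES (?, ?)"

    PartitionWorker(ResultSet rs, Connection destination, String insertSql)
    {
        this.rs = rs;
        this.destination = destination;
        this.insertSql = insertSql;
    }

    @Override
    public void run()
    {
        try (PreparedStatement insert = destination.prepareStatement(insertSql))
        {
            int columnCount = rs.getMetaData().getColumnCount();
            // The partitioned query is only sent to Cloud Spanner on the first call to next().
            while (rs.next())
            {
                for (int col = 1; col <= columnCount; col++)
                {
                    insert.setObject(col, rs.getObject(col));
                }
                insert.addBatch();
            }
            insert.executeBatch();
            destination.commit();
        }
        catch (SQLException e)
        {
            throw new RuntimeException(e);
        }
    }
}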
Original answer:
This project was initially created for conversion in the other direction (from a local database to Cloud Spanner), but as it uses JDBC for both source and destination it can also be used the other way around: Converting a Cloud Spanner database to a local PostgreSQL database. Large tables are converted in parallel using a thread pool.
The project uses this open source JDBC driver instead of the JDBC driver supplied by Google. The source Cloud Spanner JDBC connection is set to read-only mode and autocommit=false. This ensures that the connection automatically creates a read-only transaction using the current time as timestamp the first time you execute a query. All subsequent queries within the same (read-only) transaction will use the same timestamp giving you a consistent snapshot of your Google Cloud Spanner database.
It works as follows:
Set the source database to read-only transactional mode.
The convert(String catalog, String schema) method iterates over all tables in the source database (Cloud Spanner).
For each table the number of records is determined, and depending on the size of the table, the table is copied using either the main thread of the application or a worker pool.
The class UploadWorker is responsible for the parallel copying. Each worker is assigned a range of records from the table (for example rows 1 to 2,400). The range is selected by a select statement in this format: 'SELECT * FROM $TABLE ORDER BY $PK_COLUMNS LIMIT $BATCH_SIZE OFFSET $CURRENT_OFFSET'
Commit the read-only transaction on the source database after ALL tables have been converted.
Below is a code snippet of the most important parts.
public void convert(String catalog, String schema) throws SQLException
{
    int batchSize = config.getBatchSize();
    destination.setAutoCommit(false);
    // Set the source connection to transaction mode (no autocommit) and read-only
    source.setAutoCommit(false);
    source.setReadOnly(true);
    try (ResultSet tables = destination.getMetaData().getTables(catalog, schema, null, new String[] { "TABLE" }))
    {
        while (tables.next())
        {
            String tableSchema = tables.getString("TABLE_SCHEM");
            if (!config.getDestinationDatabaseType().isSystemSchema(tableSchema))
            {
                String table = tables.getString("TABLE_NAME");
                // Check whether the destination table is empty.
                int destinationRecordCount = getDestinationRecordCount(table);
                if (destinationRecordCount == 0 || config.getDataConvertMode() == ConvertMode.DropAndRecreate)
                {
                    if (destinationRecordCount > 0)
                    {
                        deleteAll(table);
                    }
                    int sourceRecordCount = getSourceRecordCount(getTableSpec(catalog, tableSchema, table));
                    if (sourceRecordCount > batchSize)
                    {
                        convertTableWithWorkers(catalog, tableSchema, table);
                    }
                    else
                    {
                        convertTable(catalog, tableSchema, table);
                    }
                }
                else
                {
                    if (config.getDataConvertMode() == ConvertMode.ThrowExceptionIfExists)
                        throw new IllegalStateException("Table " + table + " is not empty");
                    else if (config.getDataConvertMode() == ConvertMode.SkipExisting)
                        log.info("Skipping data copy for table " + table);
                }
            }
        }
    }
    source.commit();
}
private void convertTableWithWorkers(String catalog, String schema, String table) throws SQLException
{
    String tableSpec = getTableSpec(catalog, schema, table);
    Columns insertCols = getColumns(catalog, schema, table, false);
    Columns selectCols = getColumns(catalog, schema, table, true);
    if (insertCols.primaryKeyCols.isEmpty())
    {
        log.warning("Table " + tableSpec + " does not have a primary key. No data will be copied.");
        return;
    }
    log.info("About to copy data from table " + tableSpec);
    int batchSize = config.getBatchSize();
    int totalRecordCount = getSourceRecordCount(tableSpec);
    int numberOfWorkers = calculateNumberOfWorkers(totalRecordCount);
    int numberOfRecordsPerWorker = totalRecordCount / numberOfWorkers;
    if (totalRecordCount % numberOfWorkers > 0)
        numberOfRecordsPerWorker++;
    int currentOffset = 0;
    ExecutorService service = Executors.newFixedThreadPool(numberOfWorkers);
    for (int workerNumber = 0; workerNumber < numberOfWorkers; workerNumber++)
    {
        int workerRecordCount = Math.min(numberOfRecordsPerWorker, totalRecordCount - currentOffset);
        UploadWorker worker = new UploadWorker("UploadWorker-" + workerNumber, selectFormat, tableSpec, table,
                insertCols, selectCols, currentOffset, workerRecordCount, batchSize, source,
                config.getUrlDestination(), config.isUseJdbcBatching());
        service.submit(worker);
        currentOffset = currentOffset + numberOfRecordsPerWorker;
    }
    service.shutdown();
    try
    {
        service.awaitTermination(config.getUploadWorkerMaxWaitInMinutes(), TimeUnit.MINUTES);
    }
    catch (InterruptedException e)
    {
        log.severe("Error while waiting for workers to finish: " + e.getMessage());
        throw new RuntimeException(e);
    }
}
public class UploadWorker implements Runnable
{
    private static final Logger log = Logger.getLogger(UploadWorker.class.getName());
    private final String name;
    private String selectFormat;
    private String sourceTable;
    private String destinationTable;
    private Columns insertCols;
    private Columns selectCols;
    private int beginOffset;
    private int numberOfRecordsToCopy;
    private int batchSize;
    private Connection source;
    private String urlDestination;
    private boolean useJdbcBatching;

    UploadWorker(String name, String selectFormat, String sourceTable, String destinationTable, Columns insertCols,
            Columns selectCols, int beginOffset, int numberOfRecordsToCopy, int batchSize, Connection source,
            String urlDestination, boolean useJdbcBatching)
    {
        this.name = name;
        this.selectFormat = selectFormat;
        this.sourceTable = sourceTable;
        this.destinationTable = destinationTable;
        this.insertCols = insertCols;
        this.selectCols = selectCols;
        this.beginOffset = beginOffset;
        this.numberOfRecordsToCopy = numberOfRecordsToCopy;
        this.batchSize = batchSize;
        this.source = source;
        this.urlDestination = urlDestination;
        this.useJdbcBatching = useJdbcBatching;
    }

    @Override
    public void run()
    {
        // Connection source = DriverManager.getConnection(urlSource);
        try (Connection destination = DriverManager.getConnection(urlDestination))
        {
            log.info(name + ": " + sourceTable + ": Starting copying " + numberOfRecordsToCopy + " records");
            destination.setAutoCommit(false);
            String sql = "INSERT INTO " + destinationTable + " (" + insertCols.getColumnNames() + ") VALUES \n";
            sql = sql + "(" + insertCols.getColumnParameters() + ")";
            PreparedStatement statement = destination.prepareStatement(sql);
            int lastRecord = beginOffset + numberOfRecordsToCopy;
            int recordCount = 0;
            int currentOffset = beginOffset;
            while (true)
            {
                int limit = Math.min(batchSize, lastRecord - currentOffset);
                String select = selectFormat.replace("$COLUMNS", selectCols.getColumnNames());
                select = select.replace("$TABLE", sourceTable);
                select = select.replace("$PRIMARY_KEY", selectCols.getPrimaryKeyColumns());
                select = select.replace("$BATCH_SIZE", String.valueOf(limit));
                select = select.replace("$OFFSET", String.valueOf(currentOffset));
                try (ResultSet rs = source.createStatement().executeQuery(select))
                {
                    while (rs.next())
                    {
                        int index = 1;
                        for (Integer type : insertCols.columnTypes)
                        {
                            Object object = rs.getObject(index);
                            statement.setObject(index, object, type);
                            index++;
                        }
                        if (useJdbcBatching)
                            statement.addBatch();
                        else
                            statement.executeUpdate();
                        recordCount++;
                    }
                    if (useJdbcBatching)
                        statement.executeBatch();
                }
                destination.commit();
                log.info(name + ": " + sourceTable + ": Records copied so far: " + recordCount + " of "
                        + numberOfRecordsToCopy);
                currentOffset = currentOffset + batchSize;
                if (recordCount >= numberOfRecordsToCopy)
                    break;
            }
        }
        catch (SQLException e)
        {
            log.severe("Error during data copy: " + e.getMessage());
            throw new RuntimeException(e);
        }
        log.info(name + ": Finished copying");
    }
}

Alfresco WS Client API - WSSecurityException when using fetchMore method

Can someone tell me what's wrong with my code here... I'm always getting this exception on the first call to contentService.read(...) after the first fetchMore has occurred.
org.apache.ws.security.WSSecurityException: The security token could not be authenticated or authorized
// Here we're setting the endpoint address manually, this way we don't need to use
// webserviceclient.properties
WebServiceFactory.setEndpointAddress(wsRepositoryEndpoint);
AuthenticationUtils.startSession(wsUsername, wsPassword);
// Set the batch size in the query header
int batchSize = 5000;
QueryConfiguration queryCfg = new QueryConfiguration();
queryCfg.setFetchSize(batchSize);
RepositoryServiceSoapBindingStub repositoryService = WebServiceFactory.getRepositoryService();
repositoryService.setHeader(new RepositoryServiceLocator().getServiceName().getNamespaceURI(), "QueryHeader", queryCfg);
ContentServiceSoapBindingStub contentService = WebServiceFactory.getContentService();
String luceneQuery = buildLuceneQuery(categories, properties);
// Call the repository service to do search based on category
Query query = new Query(Constants.QUERY_LANG_LUCENE, luceneQuery);
// Execute the query
QueryResult queryResult = repositoryService.query(STORE, query, true);
String querySession = queryResult.getQuerySession();
while (querySession != null) {
    ResultSet resultSet = queryResult.getResultSet();
    ResultSetRow[] rows = resultSet.getRows();
    for (ResultSetRow row : rows) {
        // Read the content from the repository
        Content[] readResult = contentService.read(new Predicate(new Reference[] { new Reference(STORE, row.getNode().getId(), null) },
                STORE, null), Constants.PROP_CONTENT);
        Content content = readResult[0];
        [...]
    }
    // Get the next batch of results
    queryResult = repositoryService.fetchMore(querySession);
    // process subsequent query results
    querySession = queryResult.getQuerySession();
}

Titanium Alloy Caching in Android/iOS? Or Preserving old views

Can we cache dynamically created lists or views until the web services are called in the background? I want to achieve something like the Facebook app does. I know it's possible in Android core, but I wanted to try it in Titanium (Android and iOS).
To explain further:
Consider I have an app which has a list. When I open it for the first time, it will obviously hit the web service and create a dynamic list.
Now I close the app and open it again. The old list should be visible until the web service provides any data.
Yes, Titanium can do this. You should use a global variable like Ti.App.myList if it is just an array / a list / a variable. If you need to store more complex data like images or databases, you should use the built-in file system. There is really good documentation on the Appcelerator website.
The procedure for you would be as follows:
Load your data for the first time.
Store your data in your preferred way (global variable, file system).
During future app starts, read out your local list / data and display it until your sync is successful.
You should consider implementing some variable to check whether any update is needed, to minimize network use (it saves energy and provides a better user experience if the user's internet connection is slow).
if (response.state == "SUCCESS") {
    Ti.API.info("Themes successfully checked");
    Ti.API.info("RESPONSE TEST: " + response.value);
    // Create a map of the layout names (as keys) and the corresponding urls (as values).
    var newImageMap = {};
    for (var key in response.value) {
        var url = response.value[key];
        var filename = key + ".jpg"; // EDIT your type of the image
        newImageMap[filename] = url;
    }
    if (Ti.App.imageMap.length > 0) {
        // Check for removed layouts
        for (var image in Ti.App.imageMap) {
            if (image in newImageMap) {
                Ti.API.info("The image " + image + " is already in the local map");
                // Do nothing
            } else {
                // Delete the removed layout
                Ti.API.info("The image " + image + " is deleted from the local map");
                delete Ti.App.imageMap[image];
            }
        }
        // Check for new images
        for (var image in newImageMap) {
            if (image in Ti.App.imageMap) {
                Ti.API.info("The image " + image + " is already in the local map");
                // Do nothing
            } else {
                Ti.API.info("The image " + image + " is put into the local map");
                // Put new image in local map
                Ti.App.imageMap[image] = newImageMap[image];
            }
        }
    } else {
        Ti.App.imageMap = newImageMap;
    }
    // Check whether the file already exists
    for (var key in response.value) {
        var url = response.value[key];
        var filename = key + ".png"; // EDIT YOUR FILE TYPE
        Ti.API.info("URL: " + url);
        Ti.API.info("FILENAME: " + filename);
        imagesOrder[imagesOrder.length] = filename.match(/\d+/)[0]; // THIS SAVES THE FIRST NUMBER IN YOUR FILENAME AS ID
        // Case 1: download a new image
        var file = Ti.Filesystem.getFile(Ti.Filesystem.resourcesDirectory, "/media/" + filename);
        if (file.exists()) {
            // Do nothing
            Titanium.API.info("File " + filename + " exists");
        } else {
            // Create the HTTP client to download the asset.
            var xhr = Ti.Network.createHTTPClient();
            xhr.onload = function() {
                if (xhr.status == 200) {
                    // On successful load, take that image file we tried to grab before and
                    // save the remote image data to it.
                    Titanium.API.info("Successfully loaded");
                    file.write(xhr.responseData);
                    Titanium.API.info(file);
                    Titanium.API.info(file.getName());
                }
            };
            // Issuing a GET request to the remote URL
            xhr.open('GET', url);
            // Finally, sending the request out.
            xhr.send();
        }
    }
}
In addition to this code, which should be placed in a success method of an API call, you need a global variable Ti.App.imageMap to store the map of keys and the corresponding urls. You will probably have to change the code a bit to fit your needs and your project, but it should give you a good starting point.

How to get the large picture from feed with graph api?

When loading the Facebook feed from a page, if a picture exists in a feed item, I want to display the large picture.
How can I get it with the Graph API? The picture link in the feed is not the large one.
Thanks.
The Graph API photo object has a picture connection (similar to the one the user object has):
"The album-sized view of the photo. […] Returns: HTTP 302 redirect to the URL of the picture."
So requesting https://graph.facebook.com/{object-id-from-feed}/picture will redirect you to the album-sized version of the photo immediately. (Useful not only for displaying it in a browser, but also if, for example, you want to download the image to your server, using cURL with the follow_redirect option set.)
Edit:
Beginning with API v2.3, the /picture edge for feed posts is deprecated.
However, the picture can still be requested as a field, but it will be a small one.
But full_picture is available as well.
So /{object-id-from-feed}?fields=picture,full_picture can be used to request those, or they can be requested directly with the rest of the feed data, like this: /page-id/feed?fields=picture,full_picture,… (additional fields, such as message etc., must be specified the same way.)
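As a rough sketch of that fields request (my illustration, not part of the original answer), this is roughly what it could look like in Java, assuming you already have a page access token; the page id is a placeholder and the JSON response is just printed rather than parsed:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FeedPictures {
    public static void main(String[] args) throws Exception {
        String accessToken = System.getenv("FB_ACCESS_TOKEN"); // assumed to be obtained elsewhere
        String pageId = "<your-page-id>"; // placeholder
        // Ask for both the small picture and full_picture for every feed post.
        String url = "https://graph.facebook.com/v2.3/" + pageId
                + "/feed?fields=picture,full_picture,message&access_token=" + accessToken;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Each element of the "data" array contains picture (small) and full_picture (large).
        System.out.println(response.body());
    }
}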
What worked for me: getting the picture link from the feed and replacing "_s.jpg" with "_n.jpg".
OK, I found a better way. When you retrieve a feed with the graph API, any feed item with a type of photo will have a field called object_id, which is not there for plain status type items. Query the Graph API with that ID, e.g. https://graph.facebook.com/1234567890. Note that the object ID isn't an underscore-separated value like the main ID of that feed item is.
The result of the object_id query will be a new JSON dictionary, where you will have a source attribute containing a URL for an image that has so far been big enough for my needs.
There is additionally an images array that contains more image URLs for different sizes of the image, but the sizes there don't seem to be predictable, and don't all actually correspond to the physical dimensions of the image behind that URL.
I still wish there was a way to do this with a single Graph API call, but it doesn't look like there is one.
For high res image links from:
Link posts
Video posts
Photo posts
I use the following:
Note: The reason I give the _s -> _o hack precedence over the object_id/picture approach is because the object_id approach was not returning results for all images.
var picture = result.picture;
if (picture) {
    if (result.type === 'photo') {
        if (picture.indexOf('_s') !== -1) {
            console.log('CONVERTING');
            picture = picture.replace(/_s/, '_o');
        } else if (result.object_id) {
            picture = 'https://graph.facebook.com/' + result.object_id + '/picture?width=9999&height=9999';
        }
    } else {
        var qps = result.picture.split('&');
        for (var i = 0; i < qps.length; i++) {
            var qp = qps[i];
            var matches = qp.match(/(url=|src=)/gi);
            if (matches && matches.length > 0) picture = decodeURIComponent(qp.split(matches[0])[1]);
        }
    }
}
This is a new method to get a big image. It was born after the previous method stopped working:
/**
 * Return a big image url from Facebook.
 * Works only for type PHOTO.
 * @param picture
 * @param link is a post of type link
 * @return url of image
 */
@Transactional
public String getBigImageByFacebookPicture(String picture, Boolean link) {
    if (link && picture.contains("url=http")) {
        String url = picture.substring(picture.indexOf("url=") + 4);
        try {
            url = java.net.URLDecoder.decode(url, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            StringBuffer sb = new StringBuffer("Big image for Facebook link not found: ");
            sb.append(link);
            loggerTakePost.error(sb.toString());
            return null;
        }
        return url;
    } else {
        try {
            Document doc = Jsoup.connect(picture).get();
            return doc.select("#fbPhotoImage").get(0).attr("src");
        } catch (Exception e) {
            StringBuffer sb = new StringBuffer("Big image for Facebook link not found: ");
            sb.append(link);
            loggerTakePost.error(sb.toString());
            return null;
        }
    }
}
Enjoy your large image :)
Actually, you need two different solutions to fully fix this.
1] https://graph.facebook.com/{object_id}/picture
This solution works fine for images and videos posted to Facebook, but sadly it returns small images when the original image file was not uploaded to Facebook directly (when posting a link to another site on your page, for example).
2] The Facebook Graph API provides a way to get the full images in the feed itself for those external links. If you add 'full_picture' to the fields, like in the example below, when calling the API you will be provided a link to the higher-resolution version.
https://graph.facebook.com/your_facebook_id/posts?fields=id,link,full_picture,description,name&access_token=123456
Combining these two solutions I ended up filtering the input in PHP as follows:
if ( isset( $post['object_id'] ) ) {
    $image_url = 'https://graph.facebook.com/'.$post['object_id'].'/picture';
} else if ( isset( $post['full_picture'] ) ) {
    $image_url = $post['full_picture'];
} else {
    $image_url = '';
}
See: http://api-portal.anypoint.mulesoft.com/facebook/api/facebook-graph-api/docs/reference/pictures
Just put "?type=large" after the URL to get the big picture, e.g. https://graph.facebook.com/{object-id}/picture?type=large.
Thanks to @mattdlockyer for the JS solution. Here is a similar thing in PHP:
$posts = $facebook->api('/[page]/posts/', 'get');
foreach ($posts['data'] as $post)
{
    if (stristr(@$post['picture'], '_s.'))
    {
        $post['picture'] = str_replace('_s.', '_n.', @$post['picture']);
    }
    if (stristr(@$post['picture'], 'url='))
    {
        parse_str($post['picture'], $picturearr);
        if ($picturearr['url'])
            $post['picture'] = $picturearr['url'];
    }
    //do more stuff with $post and $post['picture'] ...
}
After a positive comment from @Lachezar Todorov, I decided to post my current approach (including paging and using Json.NET ;):
try
{
    FacebookClient fbClient = new FacebookClient(HttpContext.Current.Session[SessionFacebookAccessToken].ToString());
    JObject posts = JObject.Parse(fbClient.Get(String.Format("/{0}/posts?fields=message,picture,link,attachments", FacebookPageId)).ToString());
    JArray newsItems = (JArray)posts["data"];
    List<NewsItem> result = new List<NewsItem>();
    while (newsItems.Count > 0)
    {
        result.AddRange(GetItemsFromJsonData(newsItems));
        if (result.Count > MaxNewsItems)
        {
            result.RemoveRange(MaxNewsItems, result.Count - MaxNewsItems);
            break;
        }
        JToken paging = posts["paging"];
        if (paging != null)
        {
            if (paging["next"] != null)
            {
                posts = JObject.Parse(fbClient.Get(paging.Value<String>("next")).ToString());
                newsItems = (JArray)posts["data"];
            }
        }
    }
    return result;
}
// (catch/finally blocks omitted in the original snippet)
And the helper method to retrieve individual items:
private static IEnumerable<NewsItem> GetItemsFromJsonData(IEnumerable<JToken> items)
{
    List<NewsItem> newsItems = new List<NewsItem>();
    foreach (JToken item in items.Where(item => item["message"] != null))
    {
        NewsItem ni = new NewsItem
        {
            Message = item.Value<String>("message"),
            DateTimeCreation = item.Value<DateTime?>("created_time"),
            Link = item.Value<String>("link"),
            Thumbnail = item.Value<String>("picture"),
            // http://stackoverflow.com/questions/28319242/simplify-looking-up-nested-json-values-with-json-net/28359155#28359155
            Image = (String)item.SelectToken("attachments.data[0].media.image.src") ?? (String)item.SelectToken("attachments.data[0].subattachments.data[0].media.image.src")
        };
        newsItems.Add(ni);
    }
    return newsItems;
}
NewsItem class I use:
public class NewsItem
{
    public String Message { get; set; }
    public DateTime? DateTimeCreation { get; set; }
    public String Link { get; set; }
    public String Thumbnail { get; set; }
    public String Image { get; set; }
}

Using the Reporting Services Web Service, how do you get the permissions of a particular user?

Using the SQL Server Reporting Services Web Service, how can I determine the permissions of a particular domain user for a particular report? The user in question is not the user that is accessing the Web Service.
I am accessing the Web Service using a domain service account (let's say MYDOMAIN\SSRSAdmin) that has full permissions in SSRS. I would like to programmatically find the permissions of a domain user (let's say MYDOMAIN\JimBob) for a particular report.
The GetPermissions() method on the Web Service will return a list of permissions that the current user has (MYDOMAIN\SSRSAdmin), but that is not what I'm looking for. How can I get this same list of permissions for MYDOMAIN\JimBob? I will not have the user's domain password, so using their credentials to call the GetPermissions() method is not an option. I am, however, accessing this from an account that has full permissions, so I would think that theoretically the information should be available to it.
SSRS gets the NT groups from the user's NT login token. This is why, when you are added to a new group, you are expected to log out and back in. The same applies to most Windows checks (SQL Server, shares, NTFS, etc.).
If you know the NT group(s)...
You can query the ReportServer database directly. I've lifted this almost directly out of one of our reports, which we use to check folder security (C.Type = 1). Filter on U.UserName.
SELECT
    R.RoleName,
    U.UserName,
    C.Path
FROM
    ReportServer.dbo.Catalog C WITH (NOLOCK) --Parent
    JOIN ReportServer.dbo.Policies P WITH (NOLOCK) ON C.PolicyID = P.PolicyID
    JOIN ReportServer.dbo.PolicyUserRole PUR WITH (NOLOCK) ON P.PolicyID = PUR.PolicyID
    JOIN ReportServer.dbo.Users U WITH (NOLOCK) ON PUR.UserID = U.UserID
    JOIN ReportServer.dbo.Roles R WITH (NOLOCK) ON PUR.RoleID = R.RoleID
WHERE
    C.Type = 1
Look into the "GetPolicies" method, which you can see at the following link:
http://msdn.microsoft.com/en-us/library/reportservice2010.reportingservice2010.getpolicies.aspx
Hopefully this will get you started. I use it when copying folder structure and reports from an old server to a new server, when I want to 'migrate' my SSRS items from the source to the destination server. It is a method to get the security policies for an item on one server and then set the security policies for an identical item on another server, after I have copied the item from the source server to the destination server. You have to set your own source and destination server names.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Web.Services.Protocols; // <=== required for SoapException

namespace SSRS_WebServices_Utility
{
    internal static class TEST
    {
        internal static void GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(string itemPath)
        {
            string sSourceServer = "SOURCE-ServerName";
            Source_ReportService2010.ReportingService2010 sourceRS = new Source_ReportService2010.ReportingService2010();
            sourceRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            sourceRS.Url = @"http://" + sSourceServer + "/reportserver/reportservice2010.asmx";

            string sDestinationServer = "DESTINATION-ServerName";
            Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
            DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";

            Boolean val = true;
            Source_ReportService2010.Policy[] curPolicy = null;
            Destination_ReportService2010.Policy[] newPolicy = null;
            try
            {
                curPolicy = new Source_ReportService2010.Policy[1];
                curPolicy = sourceRS.GetPolicies(itemPath, out val); //e.g. of itemPath: "/B2W/001_OLD_PuertoRicoReport"
                //DestinationRS.SetPolicies(itemPath, newPolicy);
                int iCounter = 0;
                //int iMax = curPolicy.Length;
                newPolicy = new Destination_ReportService2010.Policy[curPolicy.Length];
                foreach (Source_ReportService2010.Policy p in curPolicy)
                {
                    //create the Policy
                    Destination_ReportService2010.Policy pNew = new Destination_ReportService2010.Policy();
                    pNew.GroupUserName = p.GroupUserName;
                    pNew.GroupUserName = p.GroupUserName;
                    Destination_ReportService2010.Role rNew = new Destination_ReportService2010.Role();
                    rNew.Description = p.Roles[0].Description;
                    rNew.Name = p.Roles[0].Name;
                    //create the Role, which is part of the Policy
                    pNew.Roles = new Destination_ReportService2010.Role[1];
                    pNew.Roles[0] = rNew;
                    newPolicy[iCounter] = pNew;
                    iCounter += 1;
                }
                DestinationRS.SetPolicies(itemPath, newPolicy);
                Debug.Print("whatever");
            }
            catch (SoapException ex)
            {
                Debug.Print("SoapException: " + ex.Message);
            }
            catch (Exception Ex)
            {
                Debug.Print("NON-SoapException: " + Ex.Message);
            }
            finally
            {
                if (sourceRS != null)
                    sourceRS.Dispose();
                if (DestinationRS != null)
                    DestinationRS.Dispose();
            }
        }
    }
}
To invoke it, use the following:
TEST.GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination("/FolderName/ReportName");
Where you have to put your own SSRS folder name and report name, i.e. the path to the item.
In fact, I use a method that loops through all the items in the destination folder and then calls the method like this:
internal static void CopyTheSecurityPolicyFromSourceToDestinationForAllItems_2010()
{
    string sDestinationServer = "DESTINATION-ServerName";
    Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
    DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
    DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";
    // Return a list of catalog items in the report server database
    Destination_ReportService2010.CatalogItem[] items = DestinationRS.ListChildren("/", true);
    // For each item, debug print some properties and copy its security policy
    foreach (Destination_ReportService2010.CatalogItem ci in items)
    {
        Debug.Print("START----------------------------------------------------");
        Debug.Print("Object Name: " + ci.Name);
        Debug.Print("Object Type: " + ci.TypeName);
        Debug.Print("Object Path: " + ci.Path);
        Debug.Print("Object Description: " + ci.Description);
        Debug.Print("Object ID: " + ci.ID);
        Debug.Print("END----------------------------------------------------");
        try
        {
            GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(ci.Path);
        }
        catch (SoapException e)
        {
            Debug.Print("SoapException START----------------------------------------------------");
            Debug.Print(e.Detail.InnerXml);
            Debug.Print("SoapException END----------------------------------------------------");
        }
        catch (Exception ex)
        {
            Debug.Print("ERROR START----------------------------------------------------");
            Debug.Print(ex.GetType().FullName);
            Debug.Print(ex.Message);
            Debug.Print("ERROR END----------------------------------------------------");
        }
    }
}