Unable to delete a dataset because it's included in a published app - but it isn't (Power BI)

I am trying to delete a dataset from one of our Premium workspaces and am getting an error saying it's included in the published app. However, the dataset in question (Construction Daily Report) is not included in the app, and no reports reference it. I also tried deleting it using PowerShell, but that didn't work either. Has anyone run into this same issue?

I have sometimes experienced a significant lag between removing content from an app and republishing it, and actually being allowed to remove the dataset from the workspace.
If you removed it from the app very recently, simply try waiting a bit until all systems are fully up to date with the currently published app contents. If a significant amount of time has already passed, consider contacting Microsoft support directly.

Related

Google Cloud Platform adding OAuth Client ID says Requested entity already exists

I created an OAuth 2 client ID in Google Cloud Platform (GCP) in our production application. However, it was only for internal use, so I removed it and tried to add it again in our development GCP project.
However, when trying to add it, it says:
Save failed
Requested entity already exists
Tracking number: xxx
What am I doing wrong? Do I need to take some extra steps to completely remove the OAuth 2 client ID? I removed it around a month ago already, so it really should be gone by now.
It seems that after one month the problem resolved itself automatically. I assume the client ID is only soft-deleted when you press delete, and then hard-deleted one month later. Pretty annoying system.
You can also remove the entire project to get rid of unwanted ghost clients, but obviously you then lose all configuration.

How can I resolve this fatal error in Power BI whenever I try to add a new data source?

I am working on a Power BI project.
I started using a MySQL database that had a small amount of data. I managed to create the schema and a very basic dashboard.
After this, I tried to change the data source to a new MySQL database with a much larger amount of data in order to see how it performs. The tables are the same; the only things that change are the name of the database and the name of the schema.
The thing is that whenever I try to do this an error always pops up:
Fatal error encountered during data read.
Microsoft.Mashup.Evaluator.Interface.ErrorException
True
I don't know why this happens. I tried to follow some suggestions I saw in the official forums, but they didn't work for me.
I also cleared the cache, but nothing changed (File -> Options and settings -> Options -> Data Load -> Clear Cache).
Any suggestions would be appreciated; I am new to Power BI and, to be honest, I am quite lost with this error.
Are you using MySQL in a hosted environment, like an AWS RDS database?
I previously had a similar issue, getting a fatal error when importing data via a MySQL view.
The problem was that the processor used by the MySQL database was not powerful enough and was running at 100% CPU usage.
So I had to upgrade to a more powerful and efficient instance, and I made some changes to the query to make it more efficient.
In your case, try adding indexes to the tables, and if you are using a hosted MySQL connection, try upgrading to an instance that can handle the load.
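For illustration only, here is a minimal sketch of what that indexing step could look like; the table and column names are made-up placeholders, not from the original question:

-- Hypothetical example: index the columns the import query filters or joins on,
-- so MySQL does not have to scan the whole table on every refresh.
CREATE INDEX idx_daily_report_project_date
    ON daily_report (project_id, report_date);

-- Then check that the import query actually uses the index instead of a full scan.
EXPLAIN
SELECT project_id, report_date, hours_worked
FROM daily_report
WHERE report_date >= '2023-01-01';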

Application behavior on shinyapps.io

I wrote a Shiny application that includes the option to add comments. To make them available in the application at all times (also after the end of the session), I use a CSV file, to which I write the added comments and from which I then read them for display. I put the entire application, along with the mentioned CSV file, on shinyapps.io and shared it with users.
Unfortunately, I noticed that the data is reset daily. During the day (ECT time zone) comments are saved and displayed correctly. When I start the app the next day, it turns out that the comments from the previous day are gone and only the current day's are there.
I suspect that the shinyapps.io server has been set up to reset the application to its original state, but unfortunately I have not found any information on this topic.
Does anyone know more - is there such a mechanism? Do you know how I could work around it?
When your app reaches the maximum idle limit, it goes into sleep mode. After that, if you open it back up, it may be started on a new server, so all the data written to the local file system will be gone.
The solution is to use persistent storage. The easiest option is to save the data in Google Sheets or Dropbox; you can read more about how to do that in the two links below.
( https://docs.rstudio.com/shinyapps.io/Storage.html | https://shiny.rstudio.com/../persistent-data-storage )

About Sitecore Backup

I am trying to backup a whole Sitecore website.
I know that the package designer can do part of the job, but not entirely.
Having a backup is always a good idea in case the site breaks accidentally.
Is there a way or a tool to backup the whole Sitecore website?
I am new to Sitecore, so any advice is welcome.
Thank you!
We've got a SQL job running to back-up the databases nightly.
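In case it helps, here is a rough sketch of the kind of T-SQL such a nightly job step might run; the database names and backup paths are assumptions, so adjust them to your own instance:

-- Nightly full backups of the Sitecore content databases (names and paths are placeholders).
BACKUP DATABASE [Sitecore_Master]
    TO DISK = N'D:\Backups\Sitecore_Master_Full.bak'
    WITH INIT, COMPRESSION, CHECKSUM;

BACKUP DATABASE [Sitecore_Web]
    TO DISK = N'D:\Backups\Sitecore_Web_Full.bak'
    WITH INIT, COMPRESSION, CHECKSUM;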
Apart from that, when I deploy code and it's a small change, I usually end up backing up only the parts I'm going to replace. If it's a big code deploy, I just back up the whole website (code-wise, anyway) before deploying the code package.
Apart from that we also run scheduled backups of the code (although I don't know the intervals), and of course we've got source control if everything else fails.
If you've got an automated deployment tool you could also automate the above of course.
Before a major deploy of content or code, I typically backup the master database and zip everything in the website directory minus the App_Data and temp directories. That way if the deploy goes wrong, I can restore the code and database fairly quickly and be back to the previous state.
I have no knowledge of a tool that can do this for you, but there are a few ways you can handle this in an easy way:
1) You can create a database backup of the master database, but this only contains content and no files, such as media files that are saved on disk or your complete built solution. It is always a good idea to schedule your database backup every night and keep the backups for at least a week or more.
2) With the package designer you can create dynamic packages that contain all your content, media files, and solution files on disk. This is an easy way to deploy the site onto a new Sitecore installation all at once, but it requires a manual backup every time.
3) Another option is to serialize your entire content tree to an XML format on disk from the Developer tab. Once serialized, you can revert the items back into the content tree.
I'd suggest thinking of this in two parts. The first part is backing up the application, which is as simple as making sure your application is in some SCM system.
For that you can use Team Development for Sitecore. One of its features allows you to connect a Visual Studio project to your Sitecore instance.
You can select Sitecore items that you want to be stored in your solution and it will serialize them and place them into your solution.
You can then check them into your SCM system and sleep easier.
The thing to note is deciding which items to place in source control. Generally you can think of Sitecore items as developer-owned or content-editor-owned. The items you will place in your solution are the developer-owned ones; templates, sublayouts, layouts, and content items that you need for the site to function are good examples.
This way if something goes bad a base restoration is quick and easy.
The second part is backing up the content in Sitecore that has been added since your deployment. For that, as Trayek said above, use a SQL job to do the backups at whatever interval you are comfortable with.
If you're bored I have a post on using TDS (Team Development for Sitecore) you can check out at Working with Sitecore, Part Nine: TDS
Expanding a bit more on what Trayek said, my suggestion would be to set up Continuous Integration (CI) and automated deploys using TeamCity.
A good answer is also given here on Stack Overflow.
Basically, in your case TeamCity would automatically:
1. Take a backup of the current website (i.e. the code) and deploy the new code on top of it.
2. Scripts can also be written to take a differential backup of the SQL databases, if need be (see the sketch below).
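As a rough illustration of point 2, a differential backup in T-SQL could look like the following; it assumes a full backup of the same database already exists, and the database name and path are placeholders:

-- Differential backup: captures only the pages changed since the last full backup.
BACKUP DATABASE [Sitecore_Master]
    TO DISK = N'D:\Backups\Sitecore_Master_Diff.bak'
    WITH DIFFERENTIAL, INIT, COMPRESSION;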
Hope this helps.
Take a look at the Sitecore Instance Manager module. It works really well for packaging an entire Sitecore instance.

ColdFusion Report Builder - How can you set different datasources externally between prod/staging/dev?

ColdFusion Report Builder is great.
One small issue. We use ANT+CFANT to deploy.
We create the report, say, against a datasource called MyApp_dev on a dev box.
Our other server is the production server. It also contains a staging build to ensure everything is going smoothly before we publish to live. (thanks to Al Everett for bringing this clarification to my attention.)
Everything works great when the report is created.
We deploy the report to our staging server, which has a datasource of MyApp_Staging. That server may or may not also have the live app working under MyApp_Live. Ant pushes the update to staging just great.
Run the report, crashes and burns. Why?
It seems the report is looking for the MyApp_Dev datasource, even though the application is using the MyApp_Staging datasource.
In digging around I found a few approaches. I would like to do this one final, ideal way from the beginning instead of having to go back and redo dozens of reports when I have a new aha! moment.
1) Obvious: Pass the datasource into the cfreport tag. Doesn't work for ColdFusion Report Builder reports as of v8, or v9 as tested on Linux.
2) Most realistic option (but painful) so far: Pass the query in as an object to the ColdFusion Report Builder report. Let's think about this:
Create the report with Report Builder to my heart's content, using RDS etc. on my local box.
When I'm done, copy the query into a snippet of code, or into a database column, to be dynamically injected at runtime with the correct datasource.
Modify my "run report" event to pull the query from the database column, insert it into another dynamic cfquery, and potentially... evaluate (!?!) it? The fun side is that I can set the cfquery datasource to whatever I need for each environment.
When I modify the report's columns in CF Report Builder, I always have to update the query in the database. Is there a snippet of code that can extract this for me? Hmm.
3) Less than ideal. Suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) to let it seem similar.
Are there any eloquent ways to accomplish the above?
Thanks in Advance!
If you have different dev/staging/production boxes, why not just use the same datasource name on each? That'll save you from having the code figure out where it is.
Because security concerns at my current assignment preclude me from using RDS, I use option 2 as a matter of course. I also like it as it makes it easier to debug.