Today (October 24, 2017) the Reports API stopped returning the disabled_reason field. I have a program that uses GAM (Google Apps Manager) to generate a report on all of our user accounts and then parses the disabled_reason field ( https://developers.google.com/admin-sdk/reports/v1/reference/usage-ref-appendix-a/users-accounts ) to find the date on which the account was actually suspended. Since that field appears to have been removed as of yesterday, my program broke this morning. Does anyone know if there is a log of changes to the API, or a reason why this field has been removed?
Thanks - Dan
So far, the accounts parameters documentation was last updated on August 22, 2017, and the issue tracker on March 30, 2017. I suggest checking the issue tracker first to make sure this is not a bug.
I am setting up a development server on an AWS AMI with ColdFusion 2018 and MariaDB 10.5.4.
I did not find out what the current production versions were, but it is quite possible they were somewhat older; the application was launched in 2016.
The code is unchanged from production, and the database is a direct backup and restore, with no changes.
I am getting errors in the code, in a cfoutput over the query, when it tries to format the field named DateStamp. This is one example of the code that errors; it appears in many places:
#DateFormat(Q.DateStamp,"m/d/yyyy")# #TimeFormat(Q.DateStamp, "short")#
This is the error:
"The value class java.time.LocalDateTime cannot be converted to a date"
The table in MariaDB has the DateStamp column typed as datetime, and this is unchanged from production.
I don't know why this is expecting the field to be a LocalDateTime when it is a regular datetime. It has to be something in the configuration of this environment, but I'm having trouble understanding what. I have searched, but all I get are "how to handle LocalDateTime" type links, which don't help because I can't change all the code: this is a test environment that must at least start with the same code as production.
As per my comment on Adrian's answer: in cases like this the answer is often found by comparing the datasource configuration between environments. Most importantly, check whether the same database driver was chosen in both, then the various advanced settings on the datasource, and finally any version/compatibility settings on the database server itself. (The JDBC sketch at the end of this thread shows why the driver choice matters.)
You need to find out which versions of ACF, MySQL and Java are running in production. Even if the application was launched in 2016, there isn’t any guarantee it was released on ColdFusion 2016. It could be an older version of the server.
Selecting Q.DateStamp for a single id returns a value that looks like "2016-10-17 17:50:34".
Did you run the query in an IDE or did you run this through a cfquery? You need to make sure that it's returning as a DateTime object ({ts '2012-12-12 12:12:12'}) and not a string.
java.time.LocalDateTime
A date-time without a time-zone in the ISO-8601 calendar system, such as 2007-12-03T10:15:30.
MariaDB DateTime
MariaDB displays DATETIME values in 'YYYY-MM-DD HH:MM:SS.ffffff'
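To make the two quotes above concrete, here is a minimal JDBC sketch, assuming the datasource uses a recent MariaDB Connector/J driver and a placeholder table called myTable (not anything from the real schema). Recent driver versions hand back java.time.LocalDateTime from getObject() for DATETIME columns, while getTimestamp() still returns the java.sql.Timestamp that DateFormat()/TimeFormat() can convert, which is why the driver selected in the datasource matters:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class DateStampProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own host, schema and credentials,
        // and make sure the MariaDB Connector/J jar is on the classpath.
        String url = "jdbc:mariadb://localhost:3306/mydb";
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT DateStamp FROM myTable LIMIT 1")) {
            if (rs.next()) {
                // Recent MariaDB drivers map DATETIME to java.time.LocalDateTime here...
                Object asObject = rs.getObject("DateStamp");
                System.out.println(asObject.getClass().getName());
                // ...while getTimestamp() still returns java.sql.Timestamp, the type
                // ColdFusion's date functions have no trouble converting.
                Timestamp asTimestamp = rs.getTimestamp("DateStamp");
                System.out.println(asTimestamp);
            }
        }
    }
}
If the production datasource was defined with a different driver (or a much older build of the same one), that difference alone could explain why the same code and data behave differently in the new environment.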
I am trying to learn how to use the Cloud Billing API and am playing around with its methods. I copied a Java code snippet that shows how to use the updateBillingInfo method. I have a project in my cloud account with a billing account associated with it, and I wanted to change it to a different billing account.
Here's what I tried:
String name = "projects/My project";
ProjectBillingInfo info = new ProjectBillingInfo();
info.setBillingAccountName("billingAccounts/$BILLING_ID");
Cloudbilling.Projects.UpdateBillingInfo request = cloudbillingService.projects().updateBillingInfo(name, info);
ProjectBillingInfo response = request.execute();
My problem is that request.execute() (as well as the API explorer in the browser) throws an exception with code "500 - internal error encountered".
Am I not using it correctly? It was my understanding that after this, when I check my project in GCP, I should see my project listed under the new billing account. Help is much appreciated.
You are using an invalid project ID, since GCP project IDs have no spaces in them. Note that project IDs and project names are different things; it needs to be the ID, not the name. The rest of your code snippet seems fine; just make sure you put the actual project ID, like this: projects/your-project-id
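For illustration, here is a hedged version of the snippet with the name fixed; "your-project-id" and the billing account ID are placeholders, and cloudbillingService is assumed to be built exactly as in the original code:
import java.io.IOException;
import com.google.api.services.cloudbilling.Cloudbilling;
import com.google.api.services.cloudbilling.model.ProjectBillingInfo;

public class MoveProjectBilling {
    // Sketch only: replace "your-project-id" and the billing account ID with real values.
    static ProjectBillingInfo moveBilling(Cloudbilling cloudbillingService) throws IOException {
        // Use the project ID (lowercase, no spaces), not the project display name.
        String name = "projects/your-project-id";
        ProjectBillingInfo info = new ProjectBillingInfo()
                .setBillingAccountName("billingAccounts/XXXXXX-XXXXXX-XXXXXX");
        Cloudbilling.Projects.UpdateBillingInfo request =
                cloudbillingService.projects().updateBillingInfo(name, info);
        return request.execute();
    }
}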
I wanted to test out PowerBI Embedded, so I downloaded the sample app that can publish a pbix file and embed it.
So I created the simplest PowerBI file one can make, with Azure SQL (using the DirectQuery option) as the underlying data source.
I successfully imported the PowerBI file into my workspace collection.
I successfully changed the connection string of my PowerBI file.
After that, the code to patch the gateway with the username and password credentials fails.
Then when I tried to view the embedded report I got this error.
I believe the connection string is in the correct format because it was updated successfully. I also already tried pointing it to another SQL database, and then the error shows the other SQL database in the error message.
1) I thought this could be because the gateway does not get the credentials that I gave it. Is that correct?
2) Does someone know how I can fix this?
Thanks in advance!
As @Cuong Le stated, this was a Microsoft issue at first.
When the problem was fixed I still received a BadRequest exception. After trying to update the credentials with the PowerBI-CLI, the problem became clearer: I needed to grant access for Azure IP addresses to the relevant SQL database. Once I did that, I was able to update the credentials, both with the PowerBI-CLI and with the PowerBI API SDK. Unfortunately, the PowerBI API SDK's exception messages are not as informative as the PowerBI-CLI messages.
The exception message I got was the following:
[ powerbi ] {"error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","pbi.error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","parameters":{},"details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorCode","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Cannot open server 'engiep-dev-weeu-sql' requested by the login. Client with IP address 'xx.xx.xx.213' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."}},{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingNativeErrorCode","detail":{"type":1,"value":"40615"}}]}}}
The correct connection string format to use is:
Data Source=yourDataSource;Initial Catalog=yourDataBase;User ID=yourUser;Password=yourPass;
(Don't use quotes anywhere.)
I was experiencing the same issue; it is also an open issue on GitHub.
To solve this, I used the PowerBI-CLI 1.0.4 from npm and its update-connection operation (remember to add -d).
powerbi update-connection -c [workspace name] -k [access key] -w [workspace id] -d [dataset id] -s "Data Source=xxx.database.windows.net;Initial Catalog=xxx;User ID=xxx;Password=xxx"
If it fails, run the update-connection operation again.
The issue happens because the datasource credentials are sometimes not carried over to the workspace.
In the case of reports that use DirectQuery, credentials are never brought along with the pbix when it is imported; all private info is stripped out.
Hope this helps!
Thanks
We have a document library that has both Document Sets and Documents. We also have a workflow that is manually started by the user on any item in this library. The problem we are having is that the workflow doesn't start if the document is checked in; if the document is checked out, it works fine. The workflow also runs fine on a Document Set.
Looking into the log files, I see the following messages:
Skip lookup field SortBehavior as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Skip lookup field CheckedOutUserId as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Skip lookup field SyncClientId as it's not dependent lookup, but it has PrimaryFieldId ID 46fff461-81e3-b73a-9fba-f4f1e8088cbe
The target list of field Taxonomy Catch All Column, TaxCatchAll, does not exists in the current web or the current user does not have permissioin to see it. Skip it. 46fff461-81e3-b73a-9fba-f4f1e8088cbe
Immediately below these lines, I see the following message:
The file "http://sharepointurl.com/abc/TestWf/select_element.pdf" is not checked out. You must first check out this document before making changes......
The workflow is very simple and only logs a test message. I am not sure why SharePoint is trying to check out the document, but I have a feeling it has something to do with the above messages.
Does anyone have any idea why this is happening?
Thanks
We were able to fix the issue after getting some support on Microsoft TechNet forum.
Assuming the workflow is a SharePoint Designer workflow: open SharePoint Designer and connect to your site. Click Workflows in the left navigation, then click your workflow; the workflow information page will open. In the "Settings" area of the right pane, uncheck "Automatically update the workflow status to the current stage name". This will fix the problem.
I am using Codeception to run three acceptance tests, which basically are as follows:
Check that the email address 'admin@admin.com' exists
Create a new user account
Login to the website
Obviously this requires the database, so I have added 'Db' to the list of modules in acceptance.suite.yml. However, the generation of the report takes some time; is this normal, or do I have something wrong with my setup?
Below is the report (and time taken for each according to the html file it is generating)
check admin@admin.com account exists (AdminCept.php) (0.01s)
create new user account (CreateUserCept.php) (19.1s)
log in to the website (LoginCept.php) (21.72s)
Approx. 40 seconds in total (although the command line states 1:02; I guess that's because it restores the mock database dump.sql back into the database as well).
Can anybody shed any light on the matter?
Not really an answer, but closing this off: simply put, the report generation takes time.
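To expand on where the time likely goes: with the Db module enabled, Codeception can re-import the SQL dump between tests, which matches the timings above and the guess about dump.sql in the question. A rough sketch of the relevant part of acceptance.suite.yml, assuming a fairly standard setup (the dsn, credentials and paths are placeholders, and the exact layout depends on your Codeception version):
modules:
    enabled:
        - PhpBrowser:
            url: 'http://localhost/'
        - Db:
            dsn: 'mysql:host=localhost;dbname=testdb'
            user: 'root'
            password: ''
            dump: 'tests/_data/dump.sql'
            populate: true   # load the dump before the suite runs
            cleanup: true    # re-import the dump between tests; this is the slow part
Turning cleanup off (or trimming dump.sql) speeds the run up, at the cost of tests sharing database state.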