I'm trying to run a dataset to build a KPI on SSRS 2016 Enterprise edition, but it gives me the message below, although it runs fine in Report Builder:
[An Error has occurred.
The data set could not be processed.
There was a problem getting data from the Report server Web Services.]
I already changed the dataset timeout to zero and the database query timeout to zero in RsReportServer.config.
When I limit the dataset to one row (a shorter query), it runs fine, which means the issue could be a web service or session timeout.
Setting Time-out Values for Report and Shared Dataset Processing (SSRS)
In Reporting Services, you can specify time-out values to set limits on how system resources are used. The report server supports the following time-out values:
An embedded dataset query time-out value is the number of seconds that the report server waits for a response from the database. This value is defined in a report.
A shared dataset query time-out value is the number of seconds that the report server waits for a response from the database. This value is part of the shared dataset definition and can be changed when you manage the shared dataset on the report server.
A report execution time-out value is the maximum number of seconds that report processing can continue before it is stopped. This value is defined at the system level. You can vary this setting for individual reports.
Most time-out errors occur during query processing. If you are encountering time-out errors, try increasing the query time-out value. Make sure to adjust the report execution time-out value so that it is larger than the query time-out. The time period should be sufficient to complete both query and report processing.
https://learn.microsoft.com/en-us/sql/reporting-services/report-server/setting-time-out-values-for-report-and-shared-dataset-processing-ssrs
Related
I have several scheduled tasks that essentially perform the same type of functionality:
Request JSON data from an external API
Parse the data
Save the data to a database
The "Timeout (in seconds)" field in the Scheduled Task form is empty for each task.
Each CFM template has the following line of code at the top of the page:
<cfscript>
setting requesttimeout=299;
</cfscript>
However, I consistently see the following entries in the scheduled.log file:
"Information","DefaultQuartzScheduler_Worker-8","04/24/19","12:23:00",,"Task
default - Data - Import triggered."
"Error","DefaultQuartzScheduler_Worker-8","04/24/19","12:24:00",,"The
request has exceeded the allowable time limit Tag: cfhttp "
Notice that there is only a one-minute difference between the start of the task and its timing out.
I know that, according to Charlie Arehart, the timeout error messages that get logged are usually not indicative of the actual cause or point of the timeout; in fact, I have run tests and confirmed that the CFHTTP calls generally complete in 1-10 seconds.
Lastly, when I make the same request in a browser, it runs until the requesttimeout set in the CFM page is reached.
This leads me to believe that there is some "forced"/"built-in"/"unalterable" request timeout for scheduled tasks, or that they use the default timeout value for the server and/or application (which is set to 60 seconds for this server/application); yet I cannot find this documented anywhere.
If this is the case, is it possible to schedule a task in ColdFusion that runs longer than the forced request timeout?
I am trying to build a cursor-based pagination API on top of a Spanner dataset. To do this, I am using the read timestamp from the initial request to retrieve data and then encoding it into a cursor, which can then be used to do an "exact staleness" read (https://cloud.google.com/spanner/docs/timestamp-bounds) in subsequent paging requests.
For example, the processing of a request for the first page looks something like:
ReadOnlyTransaction tx = spanner.singleUseReadOnlyTransaction();
tx.executeQuery(statement); // result set containing the first page of data
tx.getReadTimestamp(); // read timestamp that gets returned in a cursor
And for subsequent requests:
ReadOnlyTransaction tx = spanner.singleUseReadOnlyTransaction(TimestampBound.ofReadTimestamp(cursorTs));
I'd also like to return a message to the user when the cursor timestamp has expired (the documentation linked to above states they are valid for roughly an hour) and to do this I have the following code:
try {
    // process the Spanner result set
} catch (SpannerException e) {
    if (ErrorCode.FAILED_PRECONDITION.equals(e.getErrorCode())) {
        // cursor has expired; return an appropriate error message
    }
}
This works fine when testing manually against a long-running Spanner database. However, in my test code I create a Spanner database and then tear it down once the test is complete, and in these tests the SpannerException is thrown only intermittently when I use a read timestamp that should definitely be expired (say, over a year old). In the cases where no exception is thrown, I get an empty result set. If I make multiple requests to Spanner in my test with this expired read timestamp, the database eventually throws the "failed precondition" error consistently.
Is this behaviour expected for a newly provisioned spanner database?
I believe the reason for this behavior is that you are using read-only transactions. As explained in the documentation, read-only transactions always observe a consistent state of the database and the transaction commit history at a chosen point. In your case, the database is created and torn down before and after your test completes, so there is no transaction commit history to observe until after a number of attempts.
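The question uses the Java client, but for illustration, here is a rough sketch of the same cursor pattern with the Go Spanner client (the table, column, and query are placeholder assumptions, and the row-position part of the cursor is omitted): the first page records the transaction's read timestamp, and subsequent pages read at that exact staleness and treat FAILED_PRECONDITION as an expired cursor.

package pagination

import (
	"context"
	"time"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
	"google.golang.org/grpc/codes"
)

// firstPage runs the initial query and returns the read timestamp
// to be encoded into the pagination cursor.
func firstPage(ctx context.Context, client *spanner.Client) (time.Time, error) {
	txn := client.Single()
	iter := txn.Query(ctx, spanner.Statement{SQL: "SELECT Id FROM Data ORDER BY Id LIMIT 100"})
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return time.Time{}, err
		}
		_ = row // build the first page of results
	}
	return txn.Timestamp() // valid once the read has executed
}

// nextPage reads at the exact staleness recorded in the cursor.
func nextPage(ctx context.Context, client *spanner.Client, cursorTs time.Time) error {
	txn := client.Single().WithTimestampBound(spanner.ReadTimestamp(cursorTs))
	iter := txn.Query(ctx, spanner.Statement{SQL: "SELECT Id FROM Data ORDER BY Id LIMIT 100"})
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			if spanner.ErrCode(err) == codes.FailedPrecondition {
				// cursor has expired; surface an appropriate error message
			}
			return err
		}
		_ = row // build the next page of results
	}
}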
Currently, I am working on a project that integrates MySQL with an IOCP server to collect sensor data and verify the collected data from the client.
However, MySQL sometimes loses the connection.
The queries themselves are simple: they insert a single row of records or get the average value over a date interval.
The data from each sensor flows into the database at the same time, every 5 seconds. When sensor messages arrive sporadically or overlap with a message from the client, the connection is dropped:
lost connection to mysql server during query
In response to this message, I changed max_allowed_packet and the interactive_timeout, net_read_timeout, net_write_timeout, and wait_timeout variables.
It seems that the error occurs when queries overlap.
Please let me know if you know of a solution.
I had a similar issue on a MySQL server with very simple queries where the number of concurrent queries was high. I had to disable the query cache to solve the issue. You could try disabling the query cache using the following statements:
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
Please note that a server restart will enable the query cache again. Put the configuration in the MySQL configuration file if you need it preserved.
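(Not part of the answer above, but for reference: on MySQL 5.x the equivalent persistent settings in the configuration file, my.cnf or my.ini, would look like the lines below. The query cache was removed entirely in MySQL 8.0.)

[mysqld]
query_cache_type = 0
query_cache_size = 0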
Can you run the command below and check the current timeouts?
SHOW VARIABLES LIKE '%timeout';
You can change a timeout, if needed:
SET GLOBAL <timeout_variable>=<value>;
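One more client-side angle, not from the answers above: "lost connection to MySQL server during query" often appears when a pooled connection sits idle past wait_timeout and the server closes it. The question's server is C++/IOCP, but the idea carries over to any connector: recycle connections before the server does. A minimal sketch with Go's database/sql and the go-sql-driver (the DSN and pool limits are placeholder assumptions):

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/sensors")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Recycle pooled connections well before the server-side
	// wait_timeout can close them; a stale connection otherwise
	// surfaces as "lost connection to MySQL server during query".
	db.SetConnMaxLifetime(3 * time.Minute)
	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(10)

	// A ping verifies a live connection before the first query.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}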
I am profiling some MS SQL queries with SQL Server Profiler for a C# application that I am developing with Visual Studio and IIS Express:
The duration that is given for the event "Audit Logout" (16876 ms) is the total time between login and logout. The duration for the query is only 60 ms.
Login/Logout events are related to the setting up / tearing down of the connection.
From What is "Audit Logout" in SQL Server Profiler?
I would like to understand the time difference of 16816 ms (= 16876 ms - 60ms) in more detail.
a) Is it possible to log more events (like a "debug mode")?
b) Is it right to assume that the time difference is only due to tearing down because the end time of the "Audit Login" event is the same as the start time of the query execution?
c) Is there some other tool for analyzing (setup and) tear down times?
d) Does the time difference depend on my query? In other words: would optimizing the query also help to reduce that time difference?
What I have observed so far is that it makes a difference whether I start my application for the first time (IIS Express is started by Visual Studio, the database is created using Entity Framework, and example data is written to the database) or for the second time, when the database already exists.
For a login after the first start, the time difference is about 15 s larger than for a login after the second start. The query marked in the example above is executed after user login, so I would expect the initialization of the database to have finished by then and to have no effect on the time difference. Nevertheless, it seems to have an influence.
Some related articles:
What is "Audit Logout" in SQL Server Profiler?
Fixing slow initial load for IIS
SQL Server audit logout creates huge number of reads
https://learn.microsoft.com/de-de/sql/relational-databases/event-classes/audit-logout-event-class
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/84ecfe9e-ff0e-4fc5-962b-cffdcbc619ee/audit-logout-event-seems-slow-on-occasion?forum=sqldatabaseengine
When starting SQL Profiler, a Trace Properties window is shown.
The second tab, Events Selection, is where the events to be captured can be selected.
Activate the option Show all events.
Enable, for example, the option "Showplan XML For Query Compile" under the "Performance" section to log more events.
Also see How to determine what is compiling in SQL Server
Your recommendations did not give me a solution; the database continues to have the problem. All the queries have good processing times; only the Audit Logout consistently shows a long duration. The network is also good. I think this logout time is affecting efficiency.
I'm testing out some BigTable queries on a 3-node cluster using the Go client, like:
r, err = tbl.ReadRow(ctx, "key1")
I'm getting results back within a few ms:
query 1: 129.748451ms
query 2: 3.256158ms
query 3: 2.474257ms
query 4: 2.814601ms
query 5: 2.850737ms
As you can see there's a significant setup connection delay on the first query.
Can anyone provide feedback whether this would be an acceptable value?
The queries originate from a GCE VM in the same zone (europe-west1-c) as the BigTable cluster.
Furthermore, is there any planned support for pooling Bigtable connections when running on App Engine?
Bigtable connections in Go are initialized asynchronously, starting when bigtable.NewClient() is called.
Connections are expensive objects that have significant initialization time.
The first ReadRow() call will block waiting for that connection to finish setting up. If you wait some amount of time between making the NewClient() call and the first ReadRow(), you should not see higher latency on the first read.
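Not from the original answer, but to illustrate the suggestion: a minimal sketch that issues a throwaway ReadRow() right after NewClient(), so the connection-setup cost is paid before the first real query (the project, instance, table, and row keys are placeholder assumptions):

package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()

	// NewClient returns quickly; the underlying connection is still
	// being established in the background.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	tbl := client.Open("my-table")

	// Warm-up: a throwaway read that blocks until the connection is
	// ready, so later reads do not pay the setup cost.
	start := time.Now()
	if _, err := tbl.ReadRow(ctx, "warmup-key"); err != nil {
		log.Printf("warm-up read failed: %v", err)
	}
	log.Printf("warm-up read took %v", time.Since(start))

	// Subsequent reads should now be in the low-millisecond range.
	start = time.Now()
	if _, err := tbl.ReadRow(ctx, "key1"); err != nil {
		log.Fatal(err)
	}
	log.Printf("query took %v", time.Since(start))
}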