ALTER DATABASE - Cannot process request. Not enough resources to process request. - azure-sqldw

I am working to automate some of my performance tests on Azure SQL Data Warehouse. I had been scaling up/down the databases using the Azure portal. I read in https://msdn.microsoft.com/en-us/library/mt204042.aspx that it is possible to use T-SQL to accomplish this via
ALTER DATABASE ...
My first attempt using T-SQL failed:
RunScript:INFO: Mon Feb 6 20:11:06 UTC 2017 : Connecting to host "logicalserver.database.windows.net" database "master" as "myuser"
RunScript:INFO: stdout from sqlcmd will follow...
ALTER DATABASE my_db MODIFY ( SERVICE_OBJECTIVE = 'DW1000' ) ;
Msg 49918, Level 16, State 1, Server logicalserver, Line 1
Cannot process request. Not enough resources to process request. Please retry you request later.
RunScript:INFO: Mon Feb 6 20:11:17 UTC 2017 : Return Code = "1" from host "logicalserver.database.windows.net" database "master" as "myuser"
RunScript:INFO: stdout from sqlcmd has ended ^.
I immediately went to the Azure portal, requested a scale, and it worked (taking 10 minutes to complete).
Is there any explanation?
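For reference, here is a minimal sketch of the scale command and one way to watch its progress, run from the master database of the logical server. The database name and service objective are the ones from the log above, and the monitoring query assumes sys.dm_operation_status is exposed on the logical server, as it is for Azure SQL Database:
ALTER DATABASE my_db MODIFY ( SERVICE_OBJECTIVE = 'DW1000' ) ;
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
WHERE resource_type_desc = 'Database' AND major_resource_id = 'my_db'
ORDER BY start_time DESC ;
The service objective change is asynchronous, so the ALTER DATABASE statement returns before the scale completes; a script would typically poll the DMV, and retry the ALTER after a delay if error 49918 comes back, rather than wait on the statement itself.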

Related

changes on primary are not getting replicated to standby in SAP ASE - ASE replication

Two SAP ASE servers are configured to replicate testdb using SAP RS (Replication Server), with a logical connection set up as a warm standby. In my case, all the threads and rep agents are running fine, but data changes on PDS.testdb are not getting replicated to RDS.testdb.
On Replication Server:
1> admin logical_status
2> go
Logical Connection Name: [102] manvith6605t1.testdb
Active Connection Name: [103] MANVITH6605T1.testdb
Active Conn State: Active/
Standby Connection Name: [104] APMW2K8R2641.testdb
Standby Conn State: Active/Waiting for Enable Marker
Controller RS: [16777317] SAMPLE_RS
Operation in Progress: None
State of Operation in Progress: None
Spid:
1> admin who_is_down
2> go
Spid Name State Info
---- --------------- ---------- ----------
On primary: deleted some rows of data.
1> select count(*) from mytable
2> go
-----------
24
(1 row affected)
On standby:
1> select count(*) from mytable
2> go
-----------
64
(1 row affected)
Feel free to ask for any clarifications.
Is this a new warm standby setup?
Have you been able to successfully replicate into the standby database in the past with this warm standby setup?
Did you by any chance run a switch active command recently?
Assuming this is a new setup, and switch active has not been run, I'm going to assume this may be an issue with how the standby connection was added to the setup ...
If the database connections have been added successfully and SRS thinks it should be replicating then admin logical_status should show both connections with a status of Active but this isn't the case ... the standby connection is showing Active/Waiting for Enable Marker.
I'm guessing that when you added the standby connection you opted to initialize the standby database via a dump-n-load operation. If you created the standby connection via a resource file you probably had something like rs.rs_init_by_dump: yes in the resource file; if you ran rs_init from the command line there would've been a related question that you probably said yes to.
When you tell rs_init that the standby database will be initialized via a dump-n-load operation the standard scenario looks like:
standby connection is created
standby DSI is configured to discard all incoming transactions until it sees a dump marker
admin logical_status should show the standby connection with a status of Active/Waiting for Enable Marker
operator performs a dump database in the active database (or dump transaction if this is a largish db and a database dump has already been taken and loaded into the standby database)
the dump {database|transaction} command places a dump marker in the active database's transaction log
the repagent forwards this dump marker to SRS
SRS forwards the dump marker to the DSI
the DSI, upon receiving the dump marker will suspend the connection into the standby database and start saving incoming transactions
operator loads the database (or transaction log) dump file into the standby database
operator issues the online database command against the standby database
operator resumes the DSI connection into the standby database
admin logical_status should show the standby connection with a status of Active/
the DSI starts applying transactions against the standby database
NOTE: I don't recall, off the top of my head, if the standby connection's status changes to Active/ a) upon receiving the dump marker (and shutting down the DSI) or b) upon resuming the standby DSI connection.
Your admin logical_status is showing us that the DSI is still waiting for a dump marker.
If this is a production database you'll likely need to perform a new database dump ... at which point the standby DSI should go down ... and then you would need to load the database dump into the standby database, online said database, then resume the DSI connection into the standby database. This is the only way to ensure your standby database will be brought online with no missing transactions (ie, the current setup is discarding transactions).
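As a sketch of that sequence, using the server and database names from the logical_status output above (the dump file path is only a placeholder):
On the active data server (MANVITH6605T1):
1> dump database testdb to '/backups/testdb.dmp'
2> go
On the standby data server (APMW2K8R2641), once the dump marker has suspended the standby DSI:
1> load database testdb from '/backups/testdb.dmp'
2> go
1> online database testdb
2> go
On the replication server (SAMPLE_RS):
1> resume connection to APMW2K8R2641.testdb
2> go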
If this is a development/test environment and/or you don't care about the possibility of the active and standby databases being out of sync, you should be able to do the following (sketched in commands after this list):
suspend the DSI into the standby database
resume the DSI into the standby database
verify admin logical_status now shows a status of Active/ for the standby database and if so ... and assuming no other issues ...
newly performed transactions in the active database should start replicating
NOTE: Your previous DELETE performed against the active database has probably been discarded by now so you'll probably want to start by manually running the same DELETE against the standby in order to get the table in sync (assuming that's the only difference in the two tables, eg, you haven't run any UPDATEs in the active database since turning on replication).
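For that quick-and-dirty route, the commands on the replication server would look something like this (standby connection name taken from the logical_status output above):
1> suspend connection to APMW2K8R2641.testdb
2> go
1> resume connection to APMW2K8R2641.testdb
2> go
1> admin logical_status
2> go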

Obfuscation process using Informatica

How can I check the status of an obfuscation process in Informatica? I started the process about 8 hours ago, and because the idle-time limit was exceeded, my VM was logged off, shutting down all the applications.
You can still check the session logs from the integration server. They'll be in the SessLogs folder under the installation directory.
The session logs will be in readable format only if you have selected the "Write backwards session logs" option.
You can check the session log on the server in \server\infa_shared\SessLogs. Session logs are saved on the server with a timestamp. To read the content of a log, open it through the Workflow Monitor: right-click on the session and select "Get session log".

Alternative to KILL 'SID' on Azure SQL Data Warehouse

I submit a series of SQL statements (each terminated with GO in sqlcmd) that I want to make a reasonable attempt to run on an Azure SQL Data Warehouse, and I've found how to make sqlcmd ignore errors. But I've seen that if I want to abort one statement in that sequence with:
KILL "SIDxxxxxxx";
The whole session ends:
Msg 111202, Level 16, State 1, Server adws_database, Line 1
111202;Query QIDyyyyyyyyyy has been cancelled.
Is there a way to cancel a query without ending the session in Azure SQL Data Warehouse, similar to how postgres's pg_cancel_backend() works?
In postgres, pg_terminate_backend(<pid>) seems to work similarly to the ADW KILL 'SIDxxxx' command.
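For reference, the difference between those two PostgreSQL calls (the pid value is a placeholder):
SELECT pg_cancel_backend(12345);    -- cancels the backend's current query; the session stays open
SELECT pg_terminate_backend(12345); -- ends the whole backend session, comparable to KILL 'SID...'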
Yes, a client can cancel a running request without aborting the whole session. In SSMS this is what the red square does during query execution.
Sqlcmd doesn't expose any way to cancel a running request, though. Other client interfaces do; for example, with the .NET SqlClient you can use SqlCommand.Cancel().
David

Simple query takes minutes to execute on a killed/inactive session

I'm trying to add simple failover functionality to my application that talks to an Oracle 11 database. To test that my session is up, I issue a simple query (select 1 from dual).
Now, when I simulate a network outage by killing my Oracle session with "alter system kill session 'sid,serial#';" and then execute this test query, it takes up to 5 minutes for the application to process it and return an error from the Execute method (I'm using the OCI API from C++):
Tue Feb 21 21:22:47 HKT 2012: Checking connection with test query...
Tue Feb 21 21:28:13 HKT 2012: Warning - OCI_SUCCESS_WITH_INFO: 3113: ORA-03113: end-of-file on communication channel
Tue Feb 21 21:28:13 HKT 2012: Test connection has failed, attempting to re-establish connection...
If I kill the session with the 'immediate' keyword at the end of the statement, the test query returns an error instantly.
Question 1: why does it take 5 minutes to execute my query? Are there any Oracle/PMON logs that can shed light on what is happening during this delay?
Question 2: is 'alter system kill session' a good choice for simulating a network failure? How close are the outcomes of this statement to a real-world network failure between the application and the Oracle DB?
Update:
Oracle version:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
There is a good chance that the program is waiting for rollback to complete.
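If that is the case, one rough way to check (a sketch run from a DBA session, using the standard v$ views) is to watch the killed session's remaining undo records:
SELECT s.sid, s.serial#, s.status, t.used_urec
FROM v$session s JOIN v$transaction t ON t.ses_addr = s.saddr
WHERE s.status = 'KILLED' ;
While rollback is in progress, used_urec counts down toward zero; the row disappears once the rollback has finished.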

S3ResponseError: S3ResponseError: 403 Forbidden

<RequestTime>Mon, 14 Mar 2011 10:09:28 GMT</RequestTime>
<ServerTime>2011-03-14T09:09:29Z</ServerTime></Error>
reason: The reason for this problem is that Amazon S3 allows only a small timestamp variation, up to 15 minutes, between the server and the requesting client (the user's PC). As Amazon serves a very large number of users, security matters a lot.
solution: I installed ntp on my Ubuntu machine and tried to sync its clock, but it is still throwing the same error.
How can I solve it?
My project is in Django.
Make sure you use UTC time for your requests. From the AWS docs:
Request Elements
Time stamp—Each request must contain the date and time the request was created, represented as a string in UTC.
I had the same problem: Update your date with the following:
rdate -s ntp.xs4all.nl
Substitute whatever NTP server you require.