I recently came across this problem: I created a trigger on the wrong table (it was supposed to deny deletions on the table PHI).
SET TERM ^ ;
CREATE OR ALTER TRIGGER PHI_BD0 FOR DHI
ACTIVE BEFORE DELETE POSITION 0
AS
begin
exception delnotallowed;
end
^
SET TERM ; ^
I overlooked that the trigger was created on table DHI instead of PHI. So I corrected the statement and ran it again, which usually recreates the trigger and should have fixed the problem.
SET TERM ^ ;
CREATE OR ALTER TRIGGER PHI_BD0 FOR PHI
ACTIVE BEFORE DELETE POSITION 0
AS
begin
exception delnotallowed;
end
^
SET TERM ; ^
The script executes successfully, but, strangely enough, the old trigger is still in place: there is no trigger on PHI, and the old trigger is still active on DHI.
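One way to check which table the trigger is actually attached to is to query the system tables, for example:

SELECT RDB$TRIGGER_NAME, RDB$RELATION_NAME
FROM RDB$TRIGGERS
WHERE RDB$TRIGGER_NAME = 'PHI_BD0';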
If I change another trigger the same way, without changing the target table, the changes are always accepted (as expected).
The only way I have found to get around the problem above is to drop the trigger before running the script. Of course this is not a real problem in itself, but things like this scare me, because I cannot rely on an exception being raised if something goes wrong.
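For completeness, the workaround looks like this (drop first, then recreate with the corrected table):

DROP TRIGGER PHI_BD0;

SET TERM ^ ;
CREATE TRIGGER PHI_BD0 FOR PHI
ACTIVE BEFORE DELETE POSITION 0
AS
begin
  exception delnotallowed;
end
^
SET TERM ; ^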
UPDATE:
I edited the complete post; sorry for posting the incomplete note in the first place. I hacked it in and wanted to complete it later, but accidentally posted it prematurely. Sorry for that.
I'd like to start by saying I'm no SAS wiz by any means.
I inherited SAS code from a team that no longer exists, written by people who no longer work here, so there is nobody around who is more familiar with how things work.
The structure of things is:
We have a SAS program that works as a scheduler, triggering a selection of smaller programs on a daily basis. It uses statements to check the time of day and, based on that, triggers programs stored on the server via %include statements.
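The scheduler pattern is roughly the following (a sketch; the hour check and the path are illustrative, not the real code):

data _null_;
  /* capture the current hour so the macro logic below can branch on it */
  call symputx('run_hour', hour(time()));
run;

%macro schedule;
  %if &run_hour. = 6 %then %do;
    /* illustrative server path; the real programs live on the server */
    %include "/sas/jobs/daily_drawdown.sas";
  %end;
%mend schedule;
%schedule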
This has worked flawlessly for the past 2 years, but suddenly, as of yesterday, all the programs triggered by this scheduler run with 0 observations.
If I manually open a program on the server (the same program the scheduler triggers), it runs fine. If the scheduler triggers it, the log shows that the data set has 0 observations and then stops the step.
This happens for every step in a program, starting from the first one, which can be as simple as the step outlined below:
data drawdown;
set server01.legacy_mapping_drawdown;
run;
If I run the above step manually, the log shows:
NOTE: The data set WORK.drawdown has 13643 observations and 107 variables.
If this is triggered by the %include statement, then the log reads:
NOTE: The data set WORK.drawdown has 0 observations and 107 variables.
WARNING: Data set WORK.drawdown was not replaced because this step was stopped.
I have no clue whatsoever as to why this would be happening.
The fact that this started happening on 02/02/2020 leads me to believe that the new year might have something to do with it.
The scheduler code hasn't been touched in a while, and the various programs are still being triggered; it's how they perform that changes, depending on whether they are run manually or via the scheduler.
I know there is little to no technical detail here, but there isn't much more to it, really.
Would appreciate any ideas on this.
Thanks.
I have a dataflow job processing data from pub/sub defined like this:
read from pub/sub -> process (my function) -> group into day windows -> write to BQ
I'm using Write.Method.FILE_LOADS because of bounded input.
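For reference, the write stage is configured roughly like this (a sketch; dayTableSpec is a hypothetical helper that maps a window to the day table, and the project/dataset names are placeholders):

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.DateTimeZone;

class WriteStage {
    // rows is the day-windowed output of the read/process stages.
    static void applyWriteBq(PCollection<TableRow> rows, TableSchema schema) {
        rows.apply("write-bq",
            BigQueryIO.writeTableRows()
                // derive the per-day destination table from the element's window
                .to(input -> new TableDestination(dayTableSpec(input.getWindow()), null))
                .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                .withSchema(schema));
    }

    // Hypothetical helper: name tables like my_project:my_dataset.events_20200202.
    static String dayTableSpec(BoundedWindow window) {
        return "my_project:my_dataset.events_"
            + window.maxTimestamp().toDateTime(DateTimeZone.UTC).toString("yyyyMMdd");
    }
}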
The job itself works fine, processing many GBs of data, but it fails and retries forever when it has to create another table. The job is meant to run continuously and create the day tables on its own; it handles the first few fine, but then gives me, indefinitely:
Processing stuck in step write-bq/BatchLoads/SinglePartitionWriteTables/ParMultiDo(WriteTables) for at least 05h30m00s without outputting or completing in state finish
Before this happens it also throws:
Load job <job_id> failed, will retry: {"errorResult":{"message":"Not found: Table <name_of_table> was not found in location US","reason":"notFound"}
The error itself is accurate, because the table doesn't exist. The problem is that the job should create it on its own, since CreateDisposition.CREATE_IF_NEEDED is set.
How many day tables it creates correctly before failing depends on the number of workers. It seems as if, once a worker has created one table, its CreateDisposition switches to CREATE_NEVER, which would cause the problem, but that is only my guess.
A similar problem was reported here, but without a definitive answer:
https://issues.apache.org/jira/browse/BEAM-3772?focusedCommentId=16387609&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16387609
The ProcessElement definition here seems to give some clues, but I cannot really tell how it behaves with multiple workers: https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L138
I am using Apache Beam SDK 2.15.0.
I encountered the same issue, which is still not fixed as of Beam 2.27.0 (January 2021). I therefore had to develop a workaround: a custom PTransform that checks whether the target table exists before the BigQueryIO stage. It uses the BigQuery Java client for this and a Guava cache, as well as a windowing strategy (fixed, check every 15s) to sustain heavy traffic of about 5000 elements per second. Here is the code: https://gist.github.com/matthieucham/85459eff5fdea8d115be520e2dd5ccc1
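Stripped of the windowing details, the core of the idea is an existence check behind a short-lived cache, roughly like this (an illustrative class, not the gist verbatim):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.TableId;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

class TableExistenceCache {
    // Short TTL so a table created moments ago is re-checked quickly.
    private static final Cache<String, Boolean> CACHE =
        CacheBuilder.newBuilder().expireAfterWrite(15, TimeUnit.SECONDS).build();

    static boolean exists(String project, String dataset, String table) throws Exception {
        return CACHE.get(project + "." + dataset + "." + table, () -> {
            BigQuery bq = BigQueryOptions.getDefaultInstance().getService();
            // getTable returns null when the table does not exist
            return bq.getTable(TableId.of(project, dataset, table)) != null;
        });
    }
}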
There was a bug in the past that caused this error, but that particular one was fixed in commit https://github.com/apache/beam/commit/d6b4dcec5f297f5c1bd08f345f0e1e5c756775c2#diff-3f40fd931c8b8b972772724369cea310 Can you check whether the version of Beam you are running includes this commit?
I built a workflow in SSIS.
At two steps, the workflow checks an if-condition.
If the result is true, the workflow should continue.
If it is false, the workflow should go back to the prior step and start over.
It is clear to me how to implement the if-condition. But how can I redirect the control flow to a prior step? If I just link one node back to the prior node, I get an error.
Is there a special node for this? Has anyone had a similar problem and found a solution?
Let me add an example here for others as well. In this example I use a control flow that contains:
1. Two Script Tasks, 'Task 1' and 'Task 2', which for the moment only call MessageBox.Show to display the corresponding task name.
2. One Expression Task checking the if condition.
3. A for loop continuing based on an expression.
4. A package Boolean variable named 'Flag', initially set to True.
The SSIS package layout, the for loop expression, and the Expression Task expression were shown as screenshots in the original answer; the images are not reproduced here.
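In their place, here is a hypothetical reconstruction of the two expressions (only the variable Flag comes from this answer; the counter condition is purely illustrative):

For Loop EvalExpression (the loop keeps iterating while this evaluates to True):

@[User::Flag]

Expression Task expression (re-evaluates Flag on each iteration):

@[User::Flag] = (@[User::Counter] < 10)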
The variable Flag can be changed in Script Task 1 under some special condition, or by other means as required. Once it becomes False, the loop exits and Task 2 starts running; otherwise Task 1 keeps executing.
I have a transformation with several steps that runs via a batch script using Windows Task Scheduler.
Sometimes the first step, or the nth step, fails and stops the entire transformation.
I want the transformation to run from start to end regardless of any errors. Is there any way of doing this?
1) One way is to use error handling; however, it is not available for all steps. You can right-click on a step and check whether the error handling option is available.
2) If you are getting errors because of an incorrect data type, for example you are expecting an integer value but for some specific record you get a string value, the step may fail. To handle such situations you can use a data validation step.
Basically, you can implement the logic based on the transformation you have created. The above are some of the general methods.
This is what is called "error handling": even though your transformation runs into some errors, you still want it to continue running.
Situations:
- Data type issues in the data stream.
  Ex: say you have a column X of data type integer, but by mistake you get a string value; you can define error handling to capture all of these records.
- While processing JSON data.
  Ex: the path you specified to retrieve the value of a JSON field may be missing or unresolvable for some data node; you can define error handling to capture all of the missing-path details.
- While updating a table.
  If you are updating a table by some key, and the key coming from the input stream is not present in the table, an error will occur; you can define error handling here as well.
Suppose I had the following structure for a script called mycode.do in Stata:
-some code to modify original data-
save new_data, replace
-some other code to perform calculations on new_data-
Now suppose I press the Break button to stop Stata after it has saved new_data in the script. My understanding is that Stata will undo the changes made to the data if it is interrupted with the Break button before it has finished. Following such an interruption, will Stata erase new_data.dta from disk if it didn't exist initially (or revert it to its original form if it already existed before mycode.do was executed)?
The Stata documentation says, "After you click on Break, the state of the system is the same as if you had never issued the original command." However, it sounds as if you expect it to treat an entire do-file as a "command". I do not believe that is the case. I believe that once the save has completed, the file new_data has been replaced, and Stata cannot revert the file to the version before the save.
The Stata Reference Manual also says, in the documentation for Stata release 13, [R] 16.1.4 Error handling in do-files, "If you press Break while executing a do-file, Stata responds as though an error has occurred, stopping the do-file." Example 4 discusses this further and seems to support my interpretation.
This seems to me to have interesting implications for Stata "commands" that are implemented as ado files.
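As a practical consequence, if the worry is that Break could leave new_data.dta half-updated on disk, one defensive pattern (a sketch of my own, not from the posts above) is to stage the save in a tempfile and publish it under the real name only as the last step:

* stage the intermediate save so Break never leaves new_data.dta half-done
tempfile staged
* -some code to modify original data-
save `staged'
* -some other code to perform calculations on the staged data-
use `staged', clear
save new_data, replace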