Why do cloud functions switch between numeric and alphanumeric execution ids?

Often the logs show strictly numeric execution IDs, like 1009612003154395.
Other times, execution IDs are alphanumeric, like zjxjkn9mp4p9.
Why do the formats vary? Are they as arbitrary as they seem? Can I infer anything from them?

An execution ID is just a string that uniquely identifies a single invocation of a function; that's all it means. The contents of the string are meaningless, but you can be sure it will be unique across all invocations of a particular function.
The one documented use of it (the only one I could find) is viewing the logs coming from that one invocation. This makes it easier to trace how a function executed, without having to sort through a bunch of log lines from other functions. See the documentation for logging:
You can even view the logs for a specific execution:
gcloud functions logs read FUNCTION_NAME --execution-id EXECUTION_ID
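The same ID also shows up as a label on each log entry, so (assuming the label is named execution_id, which is what the Cloud Functions log entries I have seen use) you can filter on it in Cloud Logging directly:
gcloud logging read 'resource.type="cloud_function" AND labels.execution_id="EXECUTION_ID"'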

Related

Parallel processes with dependency

There are two parallel processes. Each process has two steps. The second step of the first process is always executed after the first step. The second step of the second process is performed only under a certain condition.
Activity diagram:
How can I reflect an additional condition: to complete the second step of the second process, the first step of the first process must be completed?
I managed this much:
Flaws:
No match between the fork and the join
If the condition of the second process is not met, the token "hangs" before the join
Having looked at your solution once more, I think you are seeing issues where there are none. You are worried about the hanging token, but that is no issue in this case. If P22 is bypassed, the token from P11 will go down directly to the join node. P11 and P12 will also pass their tokens down with no issue, thereby creating that ghost token which gets stuck in the middle right join. Since the lower join now has two tokens, it will continue to the end, where the activity is terminated. At that point any free-running tokens (and even active actions) are terminated as well. All good.
I'll leave my former answer below for further inspiration. But basically they will all be implemented in similar ways, since they all represent a gateway.
Original answer
I guess that using an event would be the best way:
This way D can only start (and finish) once the event has been received, which is sent after A's completion.
Another way would be to use an object that stores the finalization of action A and which is read by D.
Note that the diagonal connectors through the ready object are ObjectFlows, which UML by default does not distinguish visually (unlike SysML).
P. 374 of UML 2.5 states:
Object tokens pass over ObjectFlows, carrying data through an Activity via their values, or carrying no data (null tokens). A null token can still be passed along an ObjectFlow and used like any other token. For example, an Action can output a null token to explicitly indicate that it did not produce an optional value, and a downstream DecisionNode (see sub clause 15.3) can test for this and branch accordingly.
So you can see the ready object as a buffer holding a token; no real data needs to be stored. Basically that's the same as an event. Implementation-wise you would use a semaphore or a stream to realize it, but of course at this level you would not care too much about such details.
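To make that concrete, here is a toy sketch in Python (all names and stubs are made up for illustration): the second step of the second process waits on an event that is set once A completes.
import threading

a_done = threading.Event()

def step(name):
    print(name, "done")        # placeholder for the real work

def condition():
    return True                # placeholder for the guard on the second step

def process1():
    step("P11 (A)")            # first step of process 1
    a_done.set()               # "send event": A has completed
    step("P12")                # second step of process 1

def process2():
    step("P21")                # first step of process 2
    if condition():            # second step runs only under a certain condition
        a_done.wait()          # D may only start once A has completed
        step("P22 (D)")

t1 = threading.Thread(target=process1)
t2 = threading.Thread(target=process2)
t1.start(); t2.start()
t1.join(); t2.join()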

AWS Step function - choice step - TimestampEquals matching with multiple input variables

I have been trying to create a step function with a choice step that acts as a rule engine. I would like to compare a date variable (from the stale input JSON) to another date variable that I generate with a lambda function.
AWS documentation does not go into details about the Timestamp comparator functions, but I assumed that it can handle two input variables. Here is the relevant part of the code:
{
  "Variable": "$.staleInputVariable",
  "TimestampEquals": "$.generatedTimestampUsingLambda"
}
Here is the error that I am getting when trying to update (!!!) the step function. I would like to highlight the fact that I don't even get to invoking the step function, as it fails while updating the definition.
Resource handler returned message: "Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: String does not match RFC3339 timestamp at ..... (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition; Request ID: 97df9775-7d2d-4dd2-929b-470c8s741eaf; Proxy: null)" (RequestToken: 030aa97d-35a5-a6a5-0ac5-5698a8662bc2, HandlerErrorCode: InvalidRequest)
The step function updates fine without the timestamp matching; therefore, I suspect this bit of code. Any guesses?
EDIT (08.Jun.2021):
A comparison – Two fields that specify an input variable to compare, the type of comparison, and the value to compare the variable to.
Choice Rules support comparison between two variables. Within a Choice Rule, the value of Variable can be compared with another value from the state input by appending Path to name of supported comparison operators.
Source: AWS docs
It clearly states that two variables can be compared, but to no avail. Still trying :)
When I explained the problem to one of my peers, I realised that the AWS documentation mentions a Path postfix (which I had confused with the $.). This Path needs to be appended to the operator name.
The following code works:
{
  "Variable": "$.staleInputVariable",
  "TimestampEqualsPath": "$.generatedTimestampUsingLambda"
}
Again, I would like to draw your attention to the "Path" word. That makes the magic!
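For context, here is roughly how that rule sits inside a complete Choice state (the state names are made up for illustration):
{
  "CompareTimestamps": {
    "Type": "Choice",
    "Choices": [
      {
        "Variable": "$.staleInputVariable",
        "TimestampEqualsPath": "$.generatedTimestampUsingLambda",
        "Next": "TimestampsMatch"
      }
    ],
    "Default": "TimestampsDiffer"
  }
}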
Looks like you indeed found a way around your initial challenge from the thread linked below.
Using an amazon step function, how do I write a choice operator that references current time?
However, I thought you wanted to compare $.staleInputVariable to the current timestamp, and I wince to think you had to configure a Lambda function (and test it!) to do only that.
If so, you could have achieved that simply by using the Context Object, $$.:
{
  "Variable": "$$.State.EnteredTime",
  "TimestampEqualsPath": "$.staleInputVariable"
}

How to read batch submit job payload in job?

aws_stepfunctions_tasks.BatchSubmitJob takes a payload parameter. Is it possible to access the values from that payload within the job? The use case is that the original code specified payload={"count.$": "$.count"} and container_overrides=aws_stepfunctions_tasks.BatchContainerOverrides(command=["--count", "Ref::count"]), which forces the $.count output of the previous job to be a string. Since I need to use the count for another value which must be an integer, I would like to avoid forcing the data type onto the previous job. Is this possible?
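For reference, a minimal CDK sketch of the setup described above (assuming CDK v2 inside a Stack; the construct ID and the job queue/definition handles are illustrative). The payload becomes the Batch job's parameters, and Ref::count in the container command is substituted from those parameters as text, which is why the string typing gets forced:
from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks

submit = tasks.BatchSubmitJob(
    self, "SubmitCountJob",
    job_name="count-job",
    job_queue_arn=job_queue_arn,              # assumed to exist elsewhere
    job_definition_arn=job_definition_arn,    # assumed to exist elsewhere
    # becomes the Batch job's "parameters" map
    payload=sfn.TaskInput.from_object({"count.$": "$.count"}),
    container_overrides=tasks.BatchContainerOverrides(
        # Ref::count substitutes the parameter value, always as a string
        command=["--count", "Ref::count"],
    ),
)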

What is a regular expression that satisfies all valid options for a JOB card in JCL?

I'm working on a program that will need to remove a JOB card from a JCL member. I'm having a lot of trouble building something that satisfies all possible options and configurations.
Below is a good guide on the JOB statement:
http://www.tutorialspoint.com/jcl/jcl_job_statement.htm
Some issues though:
There may be multiple job cards in a member
There may be comments in the job card
There may be characters in columns 73-80
There may be a SYSAFF, SET or similar statement directly following the JOB statement that should be retained, but which may begin with slashes and spaces just like a job card
Any help would be appreciated. Currently I have the following regular expression:
//.*JOB.*\n(//\s{4,}[^\s]+(\s|\d)*\n)+
Ultimately I only need to change the JOB name to fit the restriction of the FTP JES reader, which (under JESINTERFACELEVEL 1, which our site uses) requires the job name to be the submitting USERID plus exactly one character. Changing only the job name would also be acceptable.
With the information from your comment on Joe's answer, your task becomes easier.
//JJJJJAAA JOB other-stuff
If the second word is JOB and the first two characters of the first word are // and the third character is not *, then you have a JOB card. Remove the first word, replacing it with //JJJJJx, where x is your additional single character. JJJJJ represents the user-id.
This does assume that the user-id of the existing JOBs will be the same as the user-id of the new JOBs, in which case the replacement JOB name is not going to cause the extension of the JOB card.
If that is not the case, i.e. if the name on some or all of the original JOB cards is shorter than the new one (whether it is a user-id or not), then I'd recommend splitting the JOB card after the first comma (if present).
In the unlikely event that you have very long accounting information and nothing else, this may cause a JCL error when the above is true. If so, fix the accounting information, or get around the user-id limit. This is an unlikely situation :-)
If there is no accounting information but there is a long comment, this may cause a JCL error by accidentally hitting column 72 with data (so the next line will be treated as a continuation). In the unlikely event of that happening, fix it.
Neither of these two is worth coding for. They are worth verifying, though the simplest way to do that is to watch and pick them up if they fall over.
You do have one more thing to watch for, and this is whether any of your steps use DD * or DD DATA. If they do, then you have to discover if any use DLM=. If they do, you will have to switch off the search for the JOB card when encountering DLM=, and switch it on again when you reach the delimiter value starting in column one.
Your single character may cause you problems. You will have a limited number of jobnames possible per userid. Unless allowed, JOBs with the same name will not run at the same time.
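Putting this together, here is a minimal Python sketch of the detection rule and the DD */DD DATA handling described above (illustrative only: continuation cards, quoted DLM= values, and the fact that DD * data can also end at the next // statement are not handled):
def rename_job_cards(lines, userid, suffix="A"):
    out = []
    delim = None                                    # active instream delimiter, if any
    for line in lines:
        if delim is not None:                       # inside instream data: suspend the search
            if line.startswith(delim):              # delimiter starting in column one ends it
                delim = None
            out.append(line)
            continue
        words = line.split()
        if (line.startswith("//") and not line.startswith("//*")
                and len(words) >= 2 and words[1] == "JOB"):
            # a JOB card: replace the first word, keep the rest of the card
            line = "//" + userid + suffix + " " + line.split(None, 1)[1]
        elif (len(words) >= 3 and words[1] == "DD"
                and words[2].split(",")[0] in ("*", "DATA")):
            dlm = line.partition("DLM=")[2][:2]     # crude: two characters after DLM=
            delim = dlm if dlm else "/*"            # default delimiter is /*
        out.append(line)
    return out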
You will need to account for the two positional parameters: up to 142 bytes of accounting information and roughly 30 bytes for the programmer's name. Also, you will have to account for the optional keyword parameters:
ADDRSPC= BYTES= CARDS= CLASS= COND=
GROUP= LINES= MEMLIMIT= MSGCLASS= MSGLEVEL=
NOTIFY= PAGES= PASSWORD= PERFORM= PRTY=
RD= REGION= RESTART= SECLABEL= SCHENV=
TIME= TYPRUN= USER=
Dealing with JES commands like SYSAFF and other JCL statements like SET makes it very complicated.
You might want to approach it in steps: a regex to handle the "//" followed by up to 69 bytes, continued with a comma, except in the case of comments, which start with "//*".
It might help to know what you are trying to accomplish. You can ask JES to process the JCL for you and there are ways you can inspect the parsed JCL via macros, exits and control blocks.
In most cases it's the first card anyway. Or at least the first non-comment card.

Underlying mechanism in firing SQL Queries in Oracle

When we fire a SQL query like
SELECT * FROM SOME_TABLE_NAME
in Oracle, what exactly happens internally? Is there a parser at work? Is it written in C/C++?
Can anybody please explain?
Thanks in advance to all.
Short answer is yes, of course there is a parser module inside Oracle that interprets the statement text. My understanding is that the bulk of Oracle's source code is in C.
For general reference:
Any SQL statement potentially goes through three steps when Oracle is asked to execute it. Often, control is returned to the client between each of these steps, although the details can depend on the specific client being used and the manner in which calls are made.
(1) Parse -- I believe the first action is actually to check whether Oracle has a cached copy of the exact statement text. If so, it can save the work of parsing your statement again. If not, it must of course parse the text, then determine an execution plan that Oracle thinks is optimal for the statement. So conceptually at least there are two entities at work in this phase -- the parser and the optimizer.
(2) Execute -- For a SELECT statement this step would generally run just enough of the execution plan to be ready to return some rows to the client. Depending on the details of the plan, that might mean running the whole thing, or might mean doing just a very small fraction of the work. For any other kind of statement, the execute phase is when all of the work is actually done.
(3) Fetch -- This is when rows are actually returned to the client. Generally the client has a predetermined fetch array size which sets the maximum number of rows that will be returned by a single fetch call. So there may be many fetches made for a single statement. Of course if the statement is one that cannot return rows, then there is no fetch step necessary.
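To see the three phases from the client side, here is a small sketch using python-oracledb (the successor to cx_Oracle); the connection details are placeholders:
import oracledb

conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/xepdb1")
cur = conn.cursor()

sql = "SELECT * FROM some_table_name"
cur.parse(sql)           # (1) parse only; normally done implicitly by execute()
cur.arraysize = 100      # fetch array size: max rows per fetch round trip
cur.execute(sql)         # (2) execute: the plan is ready to produce rows
while True:
    rows = cur.fetchmany()   # (3) fetch: typically called many times per statement
    if not rows:
        break
    for row in rows:
        print(row)

cur.close()
conn.close()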
Manasi,
I think internally Oracle has its own parser, which parses the query and tries to compile it. I don't think it's related to C or C++, but I'd need to confirm.
-Justin Samuel.