Azure Web Jobs Pipeline [inject before a function instance is created]

Problem:
I have a Web Job that logs to Application Insights, plus a custom TelemetryInitializer (registered as a singleton, with no options) that adds contextual information to the request telemetry representing a function invocation. The issue is that Azure Web Jobs (as far as I know) doesn't provide a statically available execution context similar to HttpContext in ASP.NET. I tried to build my own based on AsyncLocal, initializing it via a FunctionInvocationFilter, but the context is not visible from the TelemetryInitializer, since the initializers are invoked from a thread created earlier, with a different execution context. For this to work, I need to initialize my context earlier, before the function instance (along with the Application Insights machinery) is created.
I searched the Web Jobs SDK sources, but couldn't find a place to inject my context-initialization logic.
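Roughly, the failing approach looks like this (a sketch; the class names and the use of the function instance id are illustrative):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

// AsyncLocal-backed ambient context, set from an invocation filter.
public static class FunctionContext
{
    private static readonly AsyncLocal<string> InvocationId = new AsyncLocal<string>();

    public static string Current
    {
        get => InvocationId.Value;
        set => InvocationId.Value = value;
    }
}

public class ContextInitializingFilter : IFunctionInvocationFilter
{
    public Task OnExecutingAsync(FunctionExecutingContext executingContext, CancellationToken cancellationToken)
    {
        // Too late: the Application Insights initializers run on an
        // execution context captured before this filter executes, so
        // they never see the value set here.
        FunctionContext.Current = executingContext.FunctionInstanceId.ToString();
        return Task.CompletedTask;
    }

    public Task OnExecutedAsync(FunctionExecutedContext executedContext, CancellationToken cancellationToken)
    {
        return Task.CompletedTask;
    }
}
```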
Question:
Does anybody know how to do that? Or maybe I can achieve the same differently?

This can be achieved with System.Diagnostics.Activity, which can act as the context for a logical operation. It is created very deep in the WebJobs pipeline, and the Application Insights telemetry initializers share that operation context.
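For example (a sketch assuming the Microsoft.ApplicationInsights SDK; the baggage keys are whatever you stamp yourself, e.g. via Activity.Current?.AddBaggage(...) from an invocation filter), a telemetry initializer can read the ambient activity:

```csharp
using System.Diagnostics;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class ActivityContextTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Activity.Current is the activity WebJobs created for the
        // current operation; baggage set anywhere inside that operation
        // flows here across threads and awaits.
        var activity = Activity.Current;
        if (activity == null) return;

        foreach (var item in activity.Baggage)
        {
            if (!telemetry.Context.GlobalProperties.ContainsKey(item.Key))
                telemetry.Context.GlobalProperties[item.Key] = item.Value;
        }
    }
}
```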

Related

Spring Cloud Function - Manual Bean Registration and Loading Configuration Classes

I am currently using Spring Cloud Function 3.0.7.RELEASE with the AWS adapter for Lambda.
We are using the limited-scope functional bean registration and understand that this does not include full Spring Boot auto-configuration. We are okay with this, as we value the speed and the significant reduction in cold-start times.
However, we do have configuration classes that we want to use, and we assume this needs to be done manually. What is the best practice for importing these classes?
We tried searching, but failed to find documentation on the differences in behavior between the limited-scope context and the full Spring Boot application context.
If I understand your question correctly, all you need to do is register those configuration classes manually and the rest will be autowired. There was a small issue with this which may or may not affect you; in any event, it was fixed and will be available in the 3.0.9 release next week.
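In the functional model, that manual registration can be done from an ApplicationContextInitializer. A minimal sketch (MyConfiguration is a placeholder for one of your own configuration classes):

```java
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.support.GenericApplicationContext;

public class FunctionConfiguration
        implements ApplicationContextInitializer<GenericApplicationContext> {

    @Override
    public void initialize(GenericApplicationContext context) {
        // Register the configuration class by hand; the beans it
        // declares then participate in autowiring as usual.
        context.registerBean(MyConfiguration.class);
    }
}
```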

Dynamically creating cron jobs at particular intervals

I'm looking to dynamically create cron jobs that get created and configured using request parameters sent by Cloud Functions or a normal HTTP request.
There is already a manual way via the Google Cloud console, but I want to automate that manual task by configuring and creating jobs according to the request parameters.
I am aware that we can provide a cron.yaml file holding all the configuration, but I need some help, or any reference that describes in detail how to achieve this.
I am also a beginner, so do correct me or suggest any alternate solution.
You'll want to use the Cloud Scheduler API. Specifically, this is a REST API that lets you do everything you could do via the console or the gcloud command.
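For instance, the gcloud equivalent of creating one HTTP-triggered job looks like this (a sketch; the job name, schedule, and target URL are placeholders you would fill from the request parameters):

```sh
gcloud scheduler jobs create http my-dynamic-job \
  --schedule="*/10 * * * *" \
  --uri="https://example.com/tasks/run" \
  --http-method=POST \
  --message-body='{"source":"api"}'
```

The REST API exposes the same operation as projects.locations.jobs.create, which is what you'd call from a Cloud Function.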

How to automate the Updating/Editing of Amazon Data Pipeline

I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put, and activate the pipeline.
My question is: how can I automate editing or updating the pipeline if something changes in the pipeline definition? Things I can imagine changing include the schedule time, addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, quite a few things cannot be edited, as mentioned in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate pipeline updates, I would have to delete and re-create/activate a new pipeline. If so, the next question is: how can I create an automated process that identifies the previous version's ID, deletes it, and creates a new one? Essentially I'm trying to build a release-management flow where the configuration JSON file is released and deployed automatically.
Most commands, like activate, delete, list-runs, put-pipeline-definition, etc., take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything that remains constant across updates or recreation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the tasks above; I need the pipeline-id for that).
Of course I could write shell scripts that grep and search the output, but is there a better way? Is there some other info I am missing?
Thanks a lot.
You cannot completely edit schedules or change references, so creating/deleting pipelines seems to be the best approach for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of it somewhere? You could store the last used id in a local file or in S3, for instance.
Some other ways I can think of are:

- If you have only one pipeline in the account, you can list-pipelines and use the only result.
- If you know the pipeline name, you can list-pipelines and find the id.
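Putting the name-based lookup together with the delete/re-create flow, a release script could look roughly like this (a sketch; MyPipeline and pipeline.json are placeholders):

```sh
# Look up the previous version's id by name.
PIPELINE_ID=$(aws datapipeline list-pipelines \
  --query "pipelineIdList[?name=='MyPipeline'].id" --output text)

# Delete it if it exists.
if [ -n "$PIPELINE_ID" ]; then
  aws datapipeline delete-pipeline --pipeline-id "$PIPELINE_ID"
fi

# Re-create, push the released definition, and activate.
NEW_ID=$(aws datapipeline create-pipeline \
  --name MyPipeline --unique-id "MyPipeline-$(date +%s)" \
  --query pipelineId --output text)

aws datapipeline put-pipeline-definition \
  --pipeline-id "$NEW_ID" --pipeline-definition file://pipeline.json
aws datapipeline activate-pipeline --pipeline-id "$NEW_ID"
```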

Multiple Ring sites on one Immutant?

Immutant allows applications to respond to web requests via Ring handlers. Each application can dynamically register any number of handlers, each with a unique context path. This allows you to have multiple Ring webapps that share the same deployment lifecycle.
So it says I can have multiple Ring apps on one Immutant, but can I (and should I) have two separate websites running on one Immutant: site1.com and site2.com?
This context path is considered the top-level context path - you have the option to bind a handler to a sub-context path that will be nested within the top-level path. The full context is stripped from the url's path before the request is processed, and the context and remaining path info are made available as part of the request map via the :context and :path-info keys, respectively.
It sounds like I can have an app running on site1.com/context1 and site1.com/context2, but not so much two separate domains.
The reason I'm asking is that Immutant takes up a lot of my server resources, so much so that I'm not sure I can run two Immutants. The correct question might be: how do I improve performance on my Immutant? (I'm not any good with servers/deployment.)
Source: http://immutant.org/documentation/0.1.0/web.html
The answer is complicated by the fact that there are currently two major Immutant version branches: 1.x and 2.x. 1.x requires far more resources than 2.x, but 2.x hasn't been officially released yet (though incremental releases are available).
Both versions support mounting Ring apps at various combinations of virtual host, e.g. site1.com, and context path, e.g. /context1. In Immutant 1.x, the :virtual-host setting is in your deployment descriptor, as is the :context-path for the entire project. This is somewhat confusing, since you can also specify a :context-path when starting your Ring handler. The one passed to immutant.web/start is resolved relative to the one set in the deployment descriptor, which is why it's referred to as a "sub context path" in the docs.
In 2.x, things are simpler, because there is no deployment descriptor. Everything is passed as an option to immutant.web/run.
Can you post a small example with what you have so far?
It seems like you could achieve it with the :host option to run: https://projectodd.ci.cloudbees.com/job/immutant2-incremental/lastSuccessfulBuild/artifact/target/apidocs/immutant.web.html
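A minimal 2.x-style sketch (assuming two Ring handler vars, app1 and app2, of your own; depending on your Immutant build the relevant option may be :host or :virtual-host):

```clojure
(require '[immutant.web :as web])

;; Mount each Ring app on its own hostname, both inside one Immutant:
;; requests to site1.com hit app1, requests to site2.com hit app2.
(web/run app1 :host "site1.com" :path "/")
(web/run app2 :host "site2.com" :path "/")
```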

Tomcat 5.5 Axis2 application scope problem - Unable to create single instance

I have deployed an Axis2 web service on Tomcat 5.5. The web service functions as expected, but I noticed I was getting duplicated log entries. After researching, it became clear that multiple instances of the class were being created: the first time it ran, one log entry; the second time, two entries; and so on.
I added the scope="application" parameter, but that has not solved the problem. I added it both on the service tag and as a separate parameter tag, to no avail.
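For reference, the service-tag placement I tried looks like this (a sketch; the service and class names are placeholders):

```xml
<!-- "application" scope asks Axis2 to reuse a single service instance
     for all requests instead of creating one per invocation. -->
<service name="myService" scope="application">
  <parameter name="ServiceClass">com.example.MyService</parameter>
</service>
```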
This class has many key global variables, logging being one of them. Frustrated as I am, I still haven't gotten to the point of deconstructing the globals (a major overhaul that breaks code conventions in my department). Are global variables the culprit? Or is there some other Tomcat/Axis2 config I am missing?
Will post services.xml or other code upon request.
Thanks in advance - Bill
I have solved the problem. I don't necessarily understand why, but I now have the correct behavior.
The services.xml file I created as part of the web service (WEB-INF/services/myService/META-INF) was being overridden by configuration in tomcat/conf/server.xml, where I had previously only referred to myService with a context block. For myService to have unique service-level parameters, it has to have its own config in tomcat/conf/server.xml, not just a context reference.
It seems to me that this is not the best config - services and contexts in server.xml. It's not dynamic that way. Unfortunately, I am following a standard set here many moons ago, so there's nothing I can do.