I am building a Mulesoft/Anypoint app for deployment on Cloudhub, and for diagnostic purposes want to be able to determine (from within the app) whether it is running on Cloudhub or in Anypoint Studio on my local development machine:
On Cloudhub, I want to use the Cloudhub connector to create notifications for exceptional situations - but using that connector locally causes an exception.
On my local machine, I want to use very verbose logs with full dumping of the payload (#[message.payloadAs(java.lang.String)]) but want to use much more concise logging when on Cloudhub.
What's the best way to distinguish the current runtime? I can't figure out any automatic system properties that expose this information.
(I realize that I could set my own property called something like system.env=LOCAL and override it with system.env=CLOUDHUB for deployment, but I'm curious whether the platform already provides this information in some way.)
As far as I can tell, the best approach is to use properties. The specific name and values you use don't matter as long as you're consistent. Here's an example:
In your local dev environment, set the following property in mule-app.properties:
system.environment=DEV
When you deploy to Cloudhub, use the deployment tool to change that property to:
system.environment=CLOUDHUB
Then in your message processors, you can reference this property:
<logger
message="#['${system.environment}' == 'DEV' ? 'verbose log message' : 'concise log message']"
level="ERROR"
doc:name="Exception Logger"
/>
When I call an API normally in Postman and run a test script that sets an environment value, it works. But when I call that API from a Postman Flow, the environment value doesn't change.
Script in my test:
pm.environment.set('email', body.email)
It looks like you are looking for this issue from the Discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
Environment works in a read-only mode, updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environment using flows.
I'm using Google Cloud Profiler (located at https://console.cloud.google.com/profiler) and would like to know how my profiling data changes across different builds of my application.
One way to do that would be to check the range of dates during which a particular commit was running in production, but that's time-consuming because I have to:
Get the start date/time of a release and determine the date/time of the next release
Set those dates manually in the profiler interface from the link above
That's really not terrible, but it'd be great to be able to set a BUILD_ID environment variable like I can in Cloud Build and then be able to access that from the UI. Is something like this possible? Or is my approach the best way to do this at the moment?
Comparing across service versions would likely be a simpler and more precise way to do this (as opposed to using the time interval to select for profiles). To compare across service versions, it is necessary that the profiling agents set the service version.
The service version can be specified in the configuration passed to the agent (for the Go, Python, or Node.js agent) or via the -cprof_service_version flag (for the Java agent). If one is setting the service version using the configuration passed to the agent (applicable for the Go, Python, and Node.js agents), it may be convenient to use a flag or command-line argument to set the service version, so that the source code won't need to be updated with each new version.
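For example, with the Python agent the service version can be passed when the profiler is started. A minimal sketch, assuming a service named my-service (the service name, default version, and command-line flag here are placeholders; in practice the version would be injected by your build system):

import argparse
import googlecloudprofiler

# Hypothetical flag so each build can pass its own version without code changes.
parser = argparse.ArgumentParser()
parser.add_argument('--service_version', default='0.0.0')
args = parser.parse_args()

# service_version ties the collected profiles to a specific build, so the
# Profiler UI can compare across versions instead of across time ranges.
googlecloudprofiler.start(
    service='my-service',                  # placeholder service name
    service_version=args.service_version,  # e.g. set by Cloud Build per release
)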
If one is running on Knative or App Engine standard, the service version should be auto-populated. These environments set the K_REVISION and GAE_VERSION environment variables (respectively), and the profiling agents (for all supported languages) use these environment variables to populate the service version. If one is running in another environment and modifying the source code is inconvenient or infeasible, one can set either the K_REVISION or GAE_VERSION environment variable in the environment running the application with the agent enabled to specify the service version.
My understanding is that the BUILD_ID is available at build time, but not at run time, so I don't know that it's possible for agents to use that directly.
(Disclosure: I work on Cloud Profiler at Google)
You can set the service version for this purpose. Please refer to the agent documentation for how to set it for supported languages.
For example, the Go agent documentation shows how to set ServiceVersion for Go services.
I have a Flask app running as an AWS Lambda Function deployed with Zappa and would like to activate X-Ray to get more information for the different functions.
Activating X-Ray with Zappa was easy enough - it only requires adding this line to zappa_settings.json:
"xray_tracing": true
Further, I installed the AWS X-Ray Python SDK and added a few decorators to some functions, like this:
@xray_recorder.capture()
When I deploy this as a Lambda function, it all works well. The problem is using the system locally, both when running tests and when running the Flask app in a local server instead of as a Lambda function.
When I use any of the functions that are decorated either in a test or through the local server, the following exception is thrown:
aws_xray_sdk.core.exceptions.exceptions.SegmentNotFoundException: cannot find the current segment/subsegment, please make sure you have a segment open
Which of course makes sense, because AWS Lambda handles the creation of segments.
Are there any good ways to deactivate capturing locally? This would be useful e.g. for running unit tests locally on functions that I would like to watch in X-Ray.
One of the feature requests for this SDK is to have a "disabled" global flag so everything becomes a no-op: https://github.com/aws/aws-xray-sdk-python/issues/26.
However, it still depends on what you are testing against. It's good practice to test what will actually run on Lambda. You can set some environment variables so the SDK thinks it is running on Lambda.
You can see that the SDK looks for two env vars in https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/core/lambda_launcher.py. One is LAMBDA_TASK_ROOT, set to true so the SDK knows to switch to Lambda mode. The other is _X_AMZN_TRACE_ID, which contains the tracing context normally passed by the Lambda container.
If you just want to test non-X-Ray code, you can set AWS_XRAY_CONTEXT_MISSING to LOG_ERROR so the SDK doesn't complain about the missing context and simply gives up capturing wrapped functions. This exercises much less of the code path than mimicking Lambda behavior. Ideally, the local Lambda testing tool would be X-Ray friendly. Are you using https://github.com/awslabs/aws-sam-cli? There is already an open issue for this feature: https://github.com/awslabs/aws-sam-cli/issues/217.
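For example, a minimal sketch for local test runs (conftest.py is just a hypothetical place to put this, and the trace ID below is a dummy placeholder):

# conftest.py (hypothetical location) - pick one of the two options below,
# and set the variables before aws_xray_sdk is imported anywhere.
import os

# Option 1: don't raise SegmentNotFoundException; log an error and skip capturing.
os.environ['AWS_XRAY_CONTEXT_MISSING'] = 'LOG_ERROR'

# Option 2: mimic the Lambda environment so the SDK switches to lambda-mode.
# os.environ['LAMBDA_TASK_ROOT'] = 'true'
# os.environ['_X_AMZN_TRACE_ID'] = 'Root=1-00000000-000000000000000000000000;Parent=0000000000000000;Sampled=1'  # dummy value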
I'm using AWS AppStream to stream a legacy .NET client. The app requires a parameter to start up correctly, which it gets via SessionContext passed into the create_streaming_url API call. I'd like to test this interaction locally without having to redeploy my app for every debug iteration as that takes well over half an hour. According to the AWS AppStream Docs session-context is stored in an environment variable that is only accessible via the AWS provided SessionContextRetriever.exe .NET application. The docs list the environment var as AppStream_Session_Context. I've tried setting this env var and running SessionContextRetriever.exe with no success. There is no documentation that I can find for SessionContextRetriever.exe but there's obviously something I'm missing here. Anybody have any experience with AppStream and session context?
The executable they provide doesn't come with a license, so I have to presume that it's copyrighted, licensed restrictively, etc. So decompiling it would not be a good idea. But if somebody were to do such a thing, I would expect them to find something like
Console.Write(Environment.GetEnvironmentVariable("APPSTREAM_SESSION_CONTEXT", EnvironmentVariableTarget.Machine));
So I suggest that you try setting the environment variable at the system level for testing. That is, a value set in a script won't be visible to this executable, because it's not looking at your current terminal session.
After setting the environment variable at the system level (using the Windows "Edit the system environment variables" dialog), I see the output from this executable.
Run PS as Administrator:
PS C:\Users\Public\Apps> setx -m AppStream_Session_Context "Value"
PS C:\Users\Public\Apps> .\SessionContextRetriever.exe
Value
I want to be able to use the job and build number within my cloudbees application (i.e access it as an environment variable).
In the application description, I can use "${JOB_NAME} #${BUILD_NUMBER}", but is this also possible somehow within the environment override fields?
I want to be able to set something like:
Name: runningversion
Value: ${JOB_NAME} #${BUILD_NUMBER}
I am assuming you are using the CloudBees Deployer plugin to deploy your application to our RUN#cloud service.
If that is the case, then you can achieve exactly what you want with the Override Environment section. You just need to add an entry like this (using the same name/value pair from the question):
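Name: runningversion
Value: ${JOB_NAME} #${BUILD_NUMBER}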
The in-line help for the Value field even indicates that it
Supports ${} style token macro expansion
That is a hint to let you know that you can do what you are trying to do... so if it doesn't work, then there is a bug!
Those Override Environment name-value pairs should be available, at the very least, as OS-level environment variables. For the Java-based ClickStacks (e.g. Tomcat, JBoss, Glassfish, Play, etc.), they should also be available as Java system properties, though that can require the ClickStack to be written to provide that support (the well-known ones produced by CloudBees should).