Pass job name and build number to cloudbees application - build

I want to be able to use the job name and build number within my CloudBees application (i.e. access them as environment variables).
In the application description, I can use "${JOB_NAME} #${BUILD_NUMBER}", but is this also possible somehow within the environment override fields?
I want to be able to set something like:
Name: runningversion
Value: ${JOB_NAME} #${BUILD_NUMBER}

I am assuming you are using the CloudBees Deployer plugin to deploy your application to our RUN#cloud service.
If that is the case then you can achieve exactly what you want with the Override Environment section. You just need to add an entry like the one in your question:
Name: runningversion
Value: ${JOB_NAME} #${BUILD_NUMBER}
The in-line help for the Value field even indicates that it "Supports ${} style token macro expansion" as a hint that you can do what you are trying to do... so if it doesn't work then there is a bug!
Those Override Environment name-value pairs should be available, at the very least, as OS-level environment variables. For the Java-based ClickStacks (e.g. Tomcat, JBoss, GlassFish, Play, etc.) they should also be available as Java system properties, although that can require the ClickStack to be written to provide that support (the well-known ones produced by CloudBees do).
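As a quick illustration of the consuming side, here is a minimal Java sketch of how the deployed application could read that override at runtime (assuming the pair is named runningversion as in the question; whether the system property is populated depends on the ClickStack, as noted above):

public class RunningVersion {
    public static void main(String[] args) {
        // OS-level environment variable set via the Override Environment section
        String fromEnv = System.getenv("runningversion");
        // Java system property; only present if the ClickStack exposes overrides this way
        String fromProperty = System.getProperty("runningversion");
        System.out.println("env: " + fromEnv + ", property: " + fromProperty);
    }
}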

Related

Would like to get build information from Google Cloud Profiler

I'm using Google Cloud Profiler (located at https://console.cloud.google.com/profiler) and would like to know how my profiling data changes across different builds of my application.
One way to do that would be to check the range of dates during which a particular commit was running on production, but that's time consuming because I have to:
Get the start date/time of a release and determine the date/time of the next release
Set those dates manually in the profiler interface from the link above
That's really not terrible, but it'd be great to be able to set BUILD_ID environment variable like I can in Cloud Build and then be able to access that from the UI. Is something like this possible? Or is my approach the best way to do this at the moment?
Comparing across service versions would likely be a simpler and more precise way to do this (as opposed to using the time interval to select for profiles). To compare across service versions, it is necessary that the profiling agents set the service version.
The service version can be specified in the configuration passed to the agent (for the Go, Python, or Node.js agent) or via the -cprof_service_version flag (for the Java agent). If you are setting the service version through the configuration passed to the agent (Go, Python, and Node.js), it may be convenient to supply it via a flag or command-line argument so that the source code won't need to be updated with each new version.
If one is running on Knative or App Engine standard, the service version should be auto-populated. These environments set the K_REVISION and GAE_VERSION environment variables (respectively), and the profiling agents (for all supported languages) use these environment variables to populate the service version. If one is running in another environment and modifying the source code is inconvenient or infeasible, one can set either the K_REVISION or GAE_VERSION environment variable in the environment running the application with the agent enabled to specify the service version.
My understanding is that the BUILD_ID is available at build time, but not at run time, so I don't know that it's possible for agents to use that directly.
(Disclosure: I work on Cloud Profiler at Google)
You can set the service version for this purpose. Please refer to the agent documentation for how to set it for supported languages.
For example, this shows using ServiceVersion for Go services.
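To make that concrete, here is a minimal sketch for the Go agent (cloud.google.com/go/profiler); the service name and the environment variable used to carry the version are illustrative choices, not something mandated by the agent:

package main

import (
	"log"
	"os"

	"cloud.google.com/go/profiler"
)

func main() {
	cfg := profiler.Config{
		Service:        "my-service",                 // illustrative service name
		ServiceVersion: os.Getenv("SERVICE_VERSION"), // e.g. injected at deploy time
	}
	if err := profiler.Start(cfg); err != nil {
		log.Fatalf("failed to start profiler: %v", err)
	}
	// ... rest of the application ...
}

With the version set this way, profiles can be filtered and compared by service version in the Profiler UI instead of by date range.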

Is it possible to create multi-level environment definitions in Rails?

Let's say I have an app that has the usual environments: development, staging, and production.
Then let's say I have a set of tasks that I need to run in an environment where a specific set of configuration options have been overridden -- let's say the DB host -- and these scripts (and their overrides) need to run in each environment.
One solution that comes to mind is to create a whole set of environments for each of these special cases, i.e.: dboverride-development.rb, dboverride-staging.rb, and dboverride-production.rb. Each of these environments would inherit from its main environment, but then override the necessary configuration options. But this seems cumbersome and involves a lot of code duplication.
Are there existing strategies or conventions for this use case in Rails (v4 specifically)?
You can use environment variables to override any specific options as shown in the Rails guides.
If you have a config/database.yml but no ENV['DATABASE_URL'] then this file will be used to connect to your database.
If you have both config/database.yml and ENV['DATABASE_URL'] set then Rails will merge the configuration together. To better understand this we must see some examples.
When duplicate connection information is provided the environment variable will take precedence:
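For instance, a sketch along the lines of the example in the Rails guides (the adapter and database names are illustrative):

# config/database.yml
development:
  adapter: sqlite3
  database: NOT_my_database

$ DATABASE_URL=postgresql://localhost/my_database rails runner 'puts ActiveRecord::Base.configurations["development"]'

Here the duplicate keys from DATABASE_URL win, so the development environment ends up on the postgresql adapter and the my_database database rather than the SQLite file named in the YAML.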

Determine whether I'm running on Cloudhub or Locally

I am building a Mulesoft/Anypoint app for deployment on Cloudhub, and for diagnostic purposes want to be able to determine (from within the app) whether it is running on Cloudhub or in Anypoint Studio on my local development machine:
On Cloudhub, I want to use the Cloudhub connector to create notifications for exceptional situations - but using that connector locally causes an exception.
On my local machine, I want to use very verbose logs with full dumping of the payload (#[message.payloadAs(java.lang.String)]) but want to use much more concise logging when on Cloudhub.
What's the best way to distinguish the current runtime? I can't figure out any automatic system properties that expose this information.
(I realize that I could set my own property called something like system.env=LOCAL and override it with system.env=CLOUDHUB for deployment, but I'm curious whether the platform already provides this information in some way.)
As far as I can tell the best approach is to use properties. The specific name and values you use don't matter as long as you're consistent. Here's an example:
In your local dev environment, set the following property in mule-app.properties:
system.environment=DEV
When you deploy to Cloudhub, use the deployment tool to change that property to:
system.environment=CLOUDHUB
Then in your message processors, you can reference this property:
<logger
message="#['${system.environment}' == 'DEV' ? 'verbose log message' : 'concise log message']"
level="ERROR"
doc:name="Exception Logger"
/>
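If you also want to switch whole blocks of processing on the same property (for example, only invoking the CloudHub connector when actually running on CloudHub, as described in the question), a choice router can key off it as well. A sketch, assuming Mule 3 syntax and the property name used above:

<choice doc:name="Environment router">
    <when expression="#['${system.environment}' == 'CLOUDHUB']">
        <!-- CloudHub-only processors, e.g. the CloudHub connector notification -->
    </when>
    <otherwise>
        <!-- local-only processors, e.g. verbose payload logging -->
    </otherwise>
</choice>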

Accessing user provided env variables in cloudfoundry in Spring Boot application

I have the following user provided env variable defined for my app hosted in cloudfoundry/pivotal webservices:
MY_VAR=test
I am trying to access like so:
System.getProperty("MY_VAR")
but I am getting null in return. Any ideas as to what I am doing wrong would be appreciated.
Environment variables and system properties are two different things. If you set an environment variable with cf set-env my-app MY_VAR test then you would retrieve it in Java with System.getenv("MY_VAR"), not with System.getProperty.
A better option is to take advantage of the Spring environment abstraction with features like the @Value annotation. As shown in the Spring Boot documentation, this allows you to specify values that get injected into your application as environment variables, system properties, static configuration, or external configuration without the application code explicitly retrieving the value.
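For instance, a minimal sketch of a bean that picks the variable up through the environment abstraction (the class and field names are illustrative; Spring resolves ${MY_VAR} against both system properties and OS environment variables):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class MyVarHolder {

    // Injected from the MY_VAR user-provided environment variable at startup
    @Value("${MY_VAR}")
    private String myVar;
}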
Another possibility leaning on Scott Frederick's answer (sorry, I can't comment on the original post):
User provided env vars can easily be accessed in the application.yml:
my:
  var: ${MY_VAR}
You can then use the @Value annotation like this:
@Value("${my.var}")
String myVar;

SSIS 2008R2 Dynamically Change Web Reference URL for Different Environments

I'm running MSSQL 2008 R2.
I want to be able to dynamically change the Web Reference URL in a Script Task when the package is deployed in different environments without having to change it manually and build it again each time.
I've got the Script Task to work with the Test Web Service,
I've added the Web Reference in the Script Task and set the URL behaviour to "Dynamic".
I've got a Package Variable called "WebServiceURL".
So what do I need to do now in the Script Task for it to use the "WebServiceURL" package variable when calling the Web Service?
I know very little C# .Net.
Thanks in advance.
Take a look at this question.
How can I dynamically switch web service addresses in .NET without a recompile?
You probably want to set the .URL property of the class that calls the web service to the URL from your package variable, since the .config file for SSIS packages varies depending on where it is called from.
You can read a package variable by selecting the specific variable in the "ReadOnlyVariables" property in the window that you get when you first double-click the Script Task. Once it's selected, you can use it in your code through the Dts.Variables collection (in a Script Component, by contrast, a Variables class is generated with each selected variable as a property).
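A minimal C# sketch of what that looks like inside the Script Task (assuming the package variable is named "WebServiceURL" as in the question, and "MyWebReference.MyService" stands in for whatever name you gave the generated Web Reference proxy):

public void Main()
{
    // Read the package variable listed under ReadOnlyVariables
    string url = Dts.Variables["User::WebServiceURL"].Value.ToString();

    // Create the generated proxy and override its design-time URL
    MyWebReference.MyService service = new MyWebReference.MyService();
    service.Url = url;

    // ... call the web service methods as before ...

    Dts.TaskResult = (int)ScriptResults.Success;
}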