Node.js - how to read environment variables from a Cloud Foundry application? - cloud-foundry

I am working on a Node.js project and need to log in to Cloud Foundry and read the environment variables (basically, credentials) of one application. So far, I have been able to log in and target the correct organization with the help of the '@sap/cf-tools' package. Now the last step remains, the equivalent of the command
cf env APP_NAME
The problem is that '@sap/cf-tools' doesn't support this functionality yet, and I haven't found any other package that does. Has anyone else faced this problem, and if so, how did you solve it? Is there some npm package I have overlooked? Or will I be forced to run "cf env" with the "-v" parameter and try to get to the desired environment variables through a series of axios calls? Thank you.

There's no need for a series of axios calls. Take a look at the following package: https://www.npmjs.com/package/@sap/xsenv. It should fit your use case.
var xsenv = require('@sap/xsenv');
// Read the service bindings exposed to the application (VCAP_SERVICES on Cloud Foundry)
var services = xsenv.readServices();
// Look up the bound service instance by name
var svc = services[process.env.SERVICE_NAME];
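Assuming svc is a service binding read from VCAP_SERVICES, its credentials typically sit under the credentials property; the field names below are purely illustrative:
var creds = svc.credentials;
// Hypothetical credential fields - the actual keys depend on the bound service
console.log('Connecting to ' + creds.host + ' as ' + creds.username);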

Related

AWS Amplify environment 'dev' not found

I'm working with AWS Amplify, specifically following this tutorial AWS-Hands-On-Tutorial.
I'm getting a build failure when I try to deploy the application.
So far I have tried creating multiple backend environments and connecting them to the frontend, hoping that this would alleviate the issue. The error message leads me to believe that the deploy is not set up to also detect the backend environment, even though I have it configured to do so.
I have also tried changing the environment that is deployed with the frontend by creating another develop branch, to see if that is the issue.
I've had no success with any of these; the build continues to fail. I have also tried running the 'amplify env add' command, as the error message suggests. I have not, however, tried "restoring its definition in your team-provider-info.json", as I'm not sure what that entails and can't find any information on it. Regardless, I would think creating a new environment would address any potential issues there, and it didn't. Any help is appreciated.
Due to the documentation being out of date, I completed the steps below to resolve this issue:
Under Build Settings, add a package version override for the Amplify CLI and leave it as 'latest'.
Where the tutorial advises you to "update your front end branch to point to the backend environment you just created. Under the branch name, choose Edit...", it says to use 'dev', but the earlier steps actually had us set up 'staging', so choose that instead.
Lastly, we need to set up a Service Role under General. Select General > Edit > Create New Service Role > accept the default options and save the role; it should be named amplifyconsole-backend-role. Once the role is saved, go back to General > Edit and select your role from the dropdown; if it doesn't show by default, start typing its name.
After completing these steps, I was able to successfully redeploy my build and push it to prod with authentication working. Hope this helps anyone who is running into this issue on Module 3 of the AWS Amplify Starter Tutorial!

How to get the public-facing name from a Google Cloud Function?

I am looking for a way to get the public-facing name from a Google Cloud Function.
Any idea if this is doable?
If you can also point me to the Google site that lists the other information about my project, that would be nice too. I could not find anything so far. :-( I will keep looking in the meantime.
Thanks
As mentioned in the comments, there's no way to retrieve the %APP_NAME% from inside Cloud Functions, since those are isolated from Firebase and reside on the GCP side. However, as a workaround to avoid hardcoding, you can set environment configuration variables with the following command:
firebase functions:config:set app.name="APP_NAME"
Then inside your Cloud Function you can retrieve your configuration variables by using:
functions.config().app.name
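For illustration, here is a minimal sketch of a Cloud Function that reads that configuration value; the function name whoAmI is just an example:
const functions = require('firebase-functions');
// Assumes the value was set via: firebase functions:config:set app.name="APP_NAME"
exports.whoAmI = functions.https.onRequest((req, res) => {
  res.send('Running as ' + functions.config().app.name);
});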

Do I need to deploy a function in gcloud in order to have OCR?

This GCloud tutorial has a "Deploying the function" section, with commands such as
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
But the Quickstart: Using Client Libraries does not mention it at all; all it needs is
npm install --save @google-cloud/storage
then a few lines of code will work.
So I'm confused: do I need the "deploy" in order to have OCR? In other words, what do or don't I get from "deploy"?
The command
npm install --save @google-cloud/storage
is an example of installing the Google Cloud client library for Node.js in your development environment, in this case for the Cloud Storage API. This example is part of the Setting Up a Node.js Development Environment tutorial.
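For example, here is a minimal sketch of using that client library locally, without deploying anything; it assumes application default credentials and a bucket name of your own:
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
// List the objects in a bucket you own (placeholder name)
async function listFiles(bucketName) {
  const [files] = await storage.bucket(bucketName).getFiles();
  files.forEach((file) => console.log(file.name));
}
listFiles('YOUR_IMAGE_BUCKET_NAME').catch(console.error);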
Once you have coded, tested, and set all the configurations for the app as described in the tutorial, the next step would be the deployment, in this example of a Cloud Function:
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
So note that these commands are two different steps for running OCR with Cloud Functions, Cloud Storage, and other Cloud Platform components in the tutorial example, using the Node.js environment.
While Cloud Functions (CF) are easy to understand, this answers my own question specifically: what does the "deploy" actually do?
For the code to work for you, it must be deployed (uploaded) to Google Cloud. For people like me who have never used GCF, this is new. My understanding was that all I needed to supply was credentials and whatever server/backend (sorry, cloud) settings are required when my local app calls the remote web API. That's where I got stuck. The key I missed is that the sample app itself is a set of server/backend event-handler trigger functions, and Google therefore requires them to be "deployed", just like deploying something during a staging or production release in a traditional corporate environment. So it's a real deploy. If you still don't get it, go to your GC admin page, open the menu, choose Cloud Functions, and on the "Overview" tab you will see them. Which leads to the next point:
The three gcloud deploy commands used in "Deploying the Functions" use ocr-extract, ocr-save and ocr-translate; these are not switches, they are function names, and you can name them anything. Now, still in the admin page, click on any of the three, then "Source". Bang, they are there, deployed (uploaded).
Google, as this is a tutorial and no one has dug into the command reference yet, I recommend adding a note telling readers that those three ocr-* names can be anything you want.
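To make the naming point concrete, here is a hedged sketch of a background function triggered by a Cloud Storage bucket; ocrExtract is a hypothetical entry point, while the deployed name (e.g. ocr-extract) is whatever you pass to gcloud functions deploy:
// index.js - the exported name is what --entry-point refers to
exports.ocrExtract = (event, context) => {
  // event carries the Cloud Storage object metadata that triggered the function
  console.log('Processing file ' + event.name + ' from bucket ' + event.bucket);
  // ...this is where the tutorial's code would call the Vision API to run OCR...
};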

AWS AppStream: How do I test session context with SessionContextRetriever.exe?

I'm using AWS AppStream to stream a legacy .NET client. The app requires a parameter to start up correctly, which it gets via the SessionContext passed into the create_streaming_url API call. I'd like to test this interaction locally without having to redeploy my app for every debug iteration, as that takes well over half an hour. According to the AWS AppStream docs, the session context is stored in an environment variable that is only accessible via the AWS-provided SessionContextRetriever.exe .NET application. The docs list the environment variable as AppStream_Session_Context. I've tried setting this env var and running SessionContextRetriever.exe with no success. There is no documentation that I can find for SessionContextRetriever.exe, but there's obviously something I'm missing here. Does anybody have any experience with AppStream and session context?
The executable they provide doesn't come with a license, so I have to presume that it's copyrighted and restrictively licensed, etc. De-compiling it would therefore not be a good idea. But if somebody were to do such a thing, I would expect them to find something like
Console.Write(Environment.GetEnvironmentVariable("APPSTREAM_SESSION_CONTEXT", EnvironmentVariableTarget.Machine));
So I suggest setting the environment variable at the system level for testing; setting it in a script won't be visible to this executable, because it isn't looking at your current terminal session.
After setting the environment variable at the system level (using the Windows "Edit system environment variables" dialog), I see the output from this executable.
Run PS as Administrator:
PS C:\Users\Public\Apps> setx -m AppStream_Session_Context "Value"
PS C:\Users\Public\Apps> .\SessionContextRetriever.exe
Value

Is there a programmatic way to export Postman Collections?

I have an ever-growing collection of Postman tests for my API that I regularly export and check in to source control, so I can run them as part of CI via Newman.
I'd like to automate the process of exporting the collection when I've added some new tests - perhaps even pull it regularly from Postman's servers and check the updated version into git.
Is there an API I can use to do this?
I would happily settle for a script I could run to export my collections and environments to named JSON files.
Such a feature should be available in Postman Pro when you use the cloud instance feature (I haven't used it yet, but I'll probably do so for continuous integration). I'm also interested, and I came across this information:
FYI, you don't even need to export the collection. You can use Newman to talk to the Postman cloud instance and call collections directly from there. Therefore, when a collection is updated in Postman, Newman will automatically run the updated collection on its next run.
You can also add in the environment URLs to have Newman automatically swap environments (we use this to run a healthcheck collection across all our environments [Dev, Test, Stage & Prod]).
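For illustration, a hedged sketch of running a cloud-hosted collection directly with Newman's Node API; the collection/environment UIDs and the POSTMAN_API_KEY variable are placeholders:
const newman = require('newman');
newman.run({
  // Pull the collection and environment straight from the Postman API
  collection: 'https://api.getpostman.com/collections/COLLECTION_UID?apikey=' + process.env.POSTMAN_API_KEY,
  environment: 'https://api.getpostman.com/environments/ENVIRONMENT_UID?apikey=' + process.env.POSTMAN_API_KEY,
  reporters: 'cli'
}, function (err) {
  if (err) { throw err; }
  console.log('Collection run complete.');
});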
You should check out this feature; Postman licences are not particularly expensive, and it can be worth it.
hope this helps
Alexandre
You have probably solved your problem by now, but for anyone else coming across this, Postman has an API that lets you access information about your collections. You could call /collections to get a list of all your collections, then query for the one(s) you want by their id. (If the link doesn't work, google "Postman API".)
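As a rough sketch (not an official script), something like the following could pull every collection and write each one to a named JSON file; it assumes Node 18+ for the built-in fetch and an API key in the POSTMAN_API_KEY environment variable:
const fs = require('fs/promises');
const headers = { 'x-api-key': process.env.POSTMAN_API_KEY };
async function exportCollections() {
  // List all collections in the account
  const list = await (await fetch('https://api.getpostman.com/collections', { headers })).json();
  for (const { uid, name } of list.collections) {
    // Fetch the full collection and write it to <name>.postman_collection.json
    const full = await (await fetch('https://api.getpostman.com/collections/' + uid, { headers })).json();
    await fs.writeFile(name + '.postman_collection.json', JSON.stringify(full.collection, null, 2));
  }
}
exportCollections().catch(console.error);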