Using other environments from Postman

We have our workflow split into multiple projects, each dealing with different concerns (the Central server handles anything authentication-related, the API server handles anything new-gen related, and each other project corresponds to its own app).
This makes our process of hitting an app's API as follows:
1. From the Central Server local environment, POST the authentication request.
2. Set the app.
3. Switch to that app's environment for the end user.
4. Hit the API of the app of the end user's choice.
This makes tests difficult to write, since we'd have to perform steps 1 through 3 across two different environments.
Is there a way to access the variables from one environment (e.g. Central Server Local) from another, in the test script?

Environment variables are scoped to their own environment. One level up from that, you could set a global variable in the test script and still access it after you change environments:
https://www.getpostman.com/docs/v6/postman/environments_and_globals/variables
Collection variables also sit above environment variables, but they cannot be set programmatically, only through the collection settings.
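To use the globals approach, for example, the Tests tab of the Central Server authentication request could promote whatever later requests need into globals (a minimal sketch; the variable and field names are illustrative, not from the original question):
// Assumes the auth response returns a token field
const body = pm.response.json();
pm.globals.set('auth_token', body.token);
// After switching to an app environment, any request's script can still read it
const token = pm.globals.get('auth_token');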

Related

Why doesn't an environment variable update in a Postman Flow?

When I call an API normally in Postman and run a test script that sets an environment value, it works, but when I use that API in a Postman Flow, the environment doesn't change.
The script in my test:
const body = pm.response.json(); // parse the JSON response body
pm.environment.set('email', body.email);
It looks like you are looking for this issue from the discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
Environment works in a read-only mode, updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environments using Flows.

Running multiple sites in one CF instance

I'm running two sites (Dev and QA) under one instance of CF 2018. When I send a REST call to a CFC on Dev, the QA site gets confused and starts delivering pages from the Dev site, probably due to Application or Session variables being changed.
I verified that the REST call is using the session variables from Application.cfm for the Dev environment, as it should. But the QA site somehow gets switched over to the Dev folder and starts running CFM modules from there. I can't find where the QA pages are getting redirected.
Thanks in advance for any ideas on things to look at.
Both DEV and QA are sharing the same application name and therefore the same application and session variables. You're switching context between requests, based on which domain / environment made the previous request.
In addition, you should convert your Application.cfm to Application.cfc and refactor how and when you're defining application and session variables. It feels like the CFM is just setting those values on every request instead of only once, which is why the different requests are switching context.
http://www.learncfinaweek.com/course/index/section/Application_cfc/item/Application_cfc/
You need to:
run each environment on its own CF instance
make the application name dynamic
point each instance's data sources to its own database
Isolating the instances allows you to track separate logs per environment.
Dynamic application names isolate each application's shared scopes from the others.
Separating the data per environment isolates in-development DB changes from QA testing.
https://coldbox.ortusbooks.com/getting-started/configuration/bootstrapper-application.cfc
Your code is in source control. You check out a copy for DEV and one for QA, each to its own "web root":
/code/DEV/repo
/code/QA/repo
In your Application.cfc:
this.name = hash( getCurrentTemplatePath() );
This creates a dynamic application name based on the current folder path, so DEV and QA will be different and the shared scopes are easily isolated. It also helps with local DEV efforts.
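A slightly fuller Application.cfc sketch along these lines (the datasource names and the assumption that the checkout path contains /DEV/ or /QA/ are illustrative, not from the original answer):
component {
    // Hashing the physical path gives DEV and QA distinct application names,
    // which keeps their application and session scopes isolated from each other
    this.name = hash( getCurrentTemplatePath() );
    this.sessionManagement = true;

    // Hypothetical convention: choose the datasource from the checkout folder (/code/DEV vs /code/QA)
    this.datasource = ( findNoCase( "/QA/", getCurrentTemplatePath() ) ? "myapp_qa" : "myapp_dev" );
}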
It's a bit of work, but there's no way to shortcut this.

Manage sqlite database with git

I have this small project that specifies SQLite as the database choice.
For this particular project, the framework is Django, and the server is hosted on Heroku. In order for the database to work, it must be set up with migration commands and credentials whenever the project is deployed to a continuous integration tool or a development site.
The problem is that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository, which we version-control with git. How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario? Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted, which makes the situation tricky.
that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository
If your deployment platform does not support your chosen database, then your development environment should probably be moved to one of the databases they do support. It is possible to run different databases in development and production, but it just seems like a source of headaches.
I have found a number of articles that state that Heroku just doesn't support SQLite in production and instead recommends Postgres.
How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario?
I assume that you are just extracting data from one database to give to another, so yes, as long as that script is a one-time batch operation each time the code is updated, it should be fine. You will want something else if you are adding or manipulating data in production and then exporting it to your git repository.
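For a Django project on Heroku specifically, that one-off step is commonly just the migration command run on each deploy, e.g. from the Procfile's release phase (a sketch, not part of the original answer):
release: python manage.py migrate --noinput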
Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted
An environment variable should solve that. Set environment variables with your credentials on the host machine and then retrieve them within the script. You are looking for something like this:
import os

# In practice these would be set on the host machine or CI environment, not in the script itself
os.environ['USER'] = 'username'
os.environ['PASSWORD'] = 'password'

# Retrieve the credentials from the environment
USER = os.getenv('USER')
PASSWORD = os.environ.get('PASSWORD')
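In a Django project the same idea typically ends up in settings.py, with the credentials read from the environment rather than hard-coded (a sketch assuming a Postgres database; the variable names are illustrative):
import os

# settings.py (excerpt): read DB credentials from the host's environment
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'my_project'),
        'USER': os.environ.get('DB_USER', ''),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}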

Accessing user provided env variables in cloudfoundry in Spring Boot application

I have the following user-provided env variable defined for my app hosted in Cloud Foundry / Pivotal Web Services:
MY_VAR=test
I am trying to access it like so:
System.getProperty("MY_VAR")
but I am getting null in return. Any ideas as to what I am doing wrong would be appreciated.
Environment variables and system properties are two different things. If you set an environment variable with cf set-env my-app MY_VAR test then you would retrieve it in Java with System.getenv("MY_VAR"), not with System.getProperty.
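In code, the difference looks like this (a minimal illustration using the MY_VAR variable from the question):
public class EnvVsProperty {
    public static void main(String[] args) {
        // Environment variable, set with `cf set-env my-app MY_VAR test` (takes effect after a restage)
        System.out.println(System.getenv("MY_VAR"));
        // JVM system property, only present if passed to the JVM as -DMY_VAR=...
        System.out.println(System.getProperty("MY_VAR"));
    }
}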
A better option is to take advantage of the Spring environment abstraction with features like the @Value annotation. As shown in the Spring Boot documentation, this lets values supplied as environment variables, system properties, static configuration, or external configuration be injected into your application without the application code explicitly retrieving them.
Another possibility, building on Scott Frederick's answer (sorry, I can't comment on the original post):
User-provided env vars can easily be accessed in application.yml:
my:
  var: ${MY_VAR}
You can then use the @Value annotation like this:
@Value("${my.var}")
String myVar;
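Putting the two answers together, a minimal sketch of the injection approach (the class name is illustrative, not from the original answers):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class MyVarHolder {

    // Resolved from the my.var property in application.yml, which maps to the MY_VAR env variable
    @Value("${my.var}")
    private String myVar;

    public String getMyVar() {
        return myVar;
    }
}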

Deploying Django as standalone internal app?

I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr), and will be employed on different projects. My team currently has a working prototype and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located in subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS or Active Directory, and our IT department is already over-extended. In short, we're not web developers and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy), with each retaining access to a common project-specific database (e.g. a SQLite DB living on the network drive or a DB on the MS SQL server). To use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution makes a nice executable for users not familiar with the command line (who also have trouble remembering the URL to put in their browser) but is also relatively easy to develop and maintain:
Use PyInstaller to package the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify manage.py to (see the sketch after this list):
Detect whether the app is frozen by PyInstaller and was run with no arguments (i.e., the user launched it by double-clicking it); if so, call execute_from_command_line(..) with arguments that start the Django development server.
Just before calling execute_from_command_line(..), start a thread that does a time.sleep(2) (to let the development server come up fully) and then calls webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect whether it is frozen and change things accordingly, such as the path to the DB, enabling the development server, etc.
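A sketch of the manage.py changes described above (the project name and port are assumptions, not from the original answer):
#!/usr/bin/env python
# Hypothetical manage.py for a PyInstaller-frozen Django app; names are illustrative
import os
import sys
import threading
import time
import webbrowser

from django.core.management import execute_from_command_line


def open_browser():
    time.sleep(2)  # give the development server a moment to come up
    webbrowser.open_new("http://127.0.0.1:8000")


if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # assumed project name
    frozen = getattr(sys, "frozen", False)  # PyInstaller sets sys.frozen on the bundled executable
    if frozen and len(sys.argv) == 1:
        # User double-clicked the executable: open the browser and start the dev server
        threading.Thread(target=open_browser, daemon=True).start()
        execute_from_command_line([sys.argv[0], "runserver", "--noreload", "127.0.0.1:8000"])
    else:
        execute_from_command_line(sys.argv)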
A couple additional notes.
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use. Maybe prompt in the app, or build the same app multiple times with different settings.py files. There's a variety of ways to hit this nail...
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials (which means they could be extracted by a knowledgeable user). This presents a security risk that may not be acceptable. Basically, don't make the app that the user is executing the only layer of security. The DB credentials used by the app a user is executing should only have the access that the user is allowed.