Local machine environment variable - Postman

I have the following problem: I use the same environment for a site on my work and home PCs, but the databases contain different records.
So when testing requests against my local environment, I constantly need to change the tested values.
Postman has different scopes for variables (see the documentation).
In my case the production variables are saved in the collection scope, and in the environment scope I override those values with my local configuration.
Collection variables
SITE_DOMAIN - https://www.prod.com/
USER_ID - 1234567890
Environment variable
SITE_DOMAIN - https://dev.loc/
USER_ID - 123
At home I have the same domain but a different user ID, and I need to change it in the environment variable every time I want to run a request at home.
I want to set USER_ID to a different value only on my home machine.
Recorded interface example
Is it possible to override a variable with a local machine scope? There is a local layer, but it isn't described in the documentation.

If I understand the question correctly:
You could add a value into the local environment file, local_dev or something, and run a check to see if it's there: add some logic to the pre-request script that looks for the value; if it exists, change the USER_ID variable to the one you want before the request is made, and if not, do nothing.
Roughly, something like this, but more elegant:
if (pm.environment.get('local_dev') === 'some_value') {
    pm.environment.set('USER_ID', 1234);
}
I might have totally misunderstood the question though.

As I see it, a local variable is a variable that we set in the Pre-request Script section via the pm.variables scope.
So we can override the environment value without changing it:
pm.variables.set("VAR_NAME", "VAR_VALUE");
Unfortunately this will run on every PC on each request, so we need to add some logic around it.
As suggested by Danny Dainton, we can add an environment variable that marks which PC we are on.
So as a workaround I add a PC_ENV variable to the local environment and put some logic for it in the Pre-request Script section.
if (pm.environment.get('PC_ENV') === 'home') {
    pm.variables.set("USER_ID", "35");
}
How do we use this? When we start working with Postman, we go to our environment and set the PC_ENV value to home or office, depending on where we are.
Recorded example
If we don't want the pre-request script to run every time, we can add all the local variable values for each PC and run it only once at the start of a session, guarded by a condition.
const needSetupEnvironment = true; // change to false when setup is finished
if (needSetupEnvironment) {
    const currentEnvironment = 'home'; // set the environment before starting work
    let userId;
    switch (currentEnvironment) {
        case 'home':
            userId = 35;
            break;
        default:
            userId = 123;
            break;
    }
    pm.environment.set('USER_ID', userId);
}
We can enable the script when we need to change the environment variables, run it once with the correct environment, and then disable it again.
Recorded example

Related

Updating Ember.js environment variables does not take effect using the in-repo addon config() method on ember serve

My goal here is to create an auto-incrementing build number that updates on both ember build and ember serve. In the end, if I can only use this on build, that's totally OK.
I originally asked this question:
In-repo addon writing public files on build causes endless build loop on serve
In that question I was attempting to solve this problem by writing out JSON files. The problem was mostly solved, but not when using ember serve.
Instead of doing that, I'm now trying to update the local environment, but this has a similar problem with ember serve. I've got the build number incrementing fine, and I can use the config() method to set custom/dynamic variables in the environment. The problem is that even though I can log the change in the terminal when config() is called, and I can see it run on serve when files change, I don't see the changes in the browser when I output Ember's ENV using ember serve. Here are my addon's methods so far.
Note: the appNumberSetup() function just reads a local JSON file in the project root and updates the build number; that's working fine. Anything about pubSettingsFile can be ignored, as I won't be using that moving forward.
init(parent, project) {
    this._super.init && this._super.init.apply(this, arguments);
    // we need to set up env in init() so config() and preBuild()
    // will see the update immediately
    this.settingsFile = path.resolve(this.appDir, this.settingsFileName);
    this.addonPubDataPath = path.resolve(this.appDir, 'lib', this.name, 'inc', 'public', 'build-data-output');
    this.pubSettingsFile = path.resolve(this.addonPubDataPath, this.pubSettingsFileName);
    // this only checks for .env variables and sets defaults
    this.dotEnvSetup();
    // must set this so preBuild skips processing a build number on build,
    // else we get the build number incremented twice on first run;
    // then appNumberSetup() disables it so subsequent serve preBuild() calls will run.
    this.skipPreBuild = true;
    this.appNumberSetup();
},
// this sends our created settings data to ENV.localBuildSettings in the app
config(environment, appConfig) {
    // this 'buildme' is just an experiment
    let x = `buildme${this.buildNumber}`;
    let r = {
        localBuildSettings: this.settings
    };
    r[`buildme${this.buildNumber}`] = this.buildNumber;
    this.dlog("Config ran...");
    this.dlog(JSON.stringify(r, null, 4));
    return r;
},
preBuild: function(result) {
    // init() disables preBuild() here, but subsequent builds with serve still
    // run appNumberSetup() to update this.settings for env and JSON
    if (this.skipPreBuild === true) {
        this.skipPreBuild = false;
    }
    else {
        // only run here after init runs
        this.appNumberSetup();
    }
    // don't do this... writing the file makes an endless loop on serve
    // this.saveSettingsFile(this.pubSettingsFile, this.settings);
},
this.settings is a local variable in the addon and is updated on build/serve; the JSON looks like this:
{
    "appVersion": 911,
    "appBuildNumber": 7117
}
Is there a way to update Ember's ENV with dynamic data? (like a new build number)
The addon config() appears to run on each change in ember serve, and it shows the build number in the terminal output. But it looks like it runs after postBuild(); maybe that's why I don't see the changes. Is there a way to update that environment during preBuild()?
I'm not sure of the specifics, but ember-cli-new-version does this. During the build stage it creates a VERSION.txt file; it might even do what you need already, without you needing to write it yourself.

Set or modify an AWS Lambda environment variable with Python boto3

I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable with the AWS Lambda console and don't set a value. After that I try this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])
os.environ['ENV_VAR'] = "new value"
In this case my value will never print.
I tried with:
os.putenv()
but it's the same result.
Do you know why this environment variable is not set?
Thank you!
Consider using the boto3 Lambda API call update_function_configuration to update the environment variable.
import boto3

# assumes a boto3 Lambda client; credentials and region come from the usual AWS configuration
client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how Lambda works. Environment variables cannot be set by a child process for its parent: a process can only set environment variables in its own environment and in those of its child processes.
This may be confusing if you set environment variables at the shell, but in that case the shell is the long-running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This will only set A for the Python process itself. If you run it several times, the initial value of A is always the shell's value, never the value Python sets.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that weren't the case, it wouldn't work reliably with AWS Lambda. Lambda runs your code on whatever compute resources are available at the time; it will typically cache runtimes for frequently executed functions, so in those cases data could be written to the filesystem to preserve it. But if the next invocation wasn't run in that runtime, your data would be lost.
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would read from that location, achieving the desired result.
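As a minimal sketch of that pattern using S3 (the bucket name, object key, and handler layout here are just placeholders for illustration, not part of the original answer):
import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'my-app-state-bucket'   # placeholder bucket name
KEY = 'state/last-value.json'    # placeholder object key

def load_state():
    # read the value saved by a previous invocation, if any
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        return json.loads(obj['Body'].read())
    except s3.exceptions.NoSuchKey:
        return {}

def save_state(state):
    # persist the value so the next invocation can read it
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=json.dumps(state))

def lambda_handler(event, context):
    state = load_state()
    previous = state.get('ENV_VAR')
    state['ENV_VAR'] = 'new value'
    save_state(state)
    return {'previous': previous}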
AWS Lambda just executes the piece of code with the given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, then you probably need to store it in a database or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, such as success or failure, in SQS and use it for the next trigger. Just throwing the options out here; the rest depends on your requirements.
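A rough sketch of that SQS-based handoff, with an assumed queue URL and message shape (not taken from the original answer):
import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-state-queue'  # placeholder

def publish_result(status):
    # store the outcome of this invocation for the next trigger to consume
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({'status': status}))

def read_previous_result():
    # fetch the message left by the previous invocation, if present
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    messages = resp.get('Messages', [])
    if not messages:
        return None
    msg = messages[0]
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])
    return json.loads(msg['Body'])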

Django session variables sometimes get lost in multi-threaded environment

I'm trying to cache a set of strings per session by storing each one in its own variable, using django.contrib.sessions.
I have the following code:
import copy

def get_result(request, operation):
    previous_result = request.session.get(operation.name)
    if previous_result:
        result = copy.deepcopy(previous_result)
    else:
        result = get_json_response(operation)
        request.session[operation.name] = copy.deepcopy(result)
    return result
get_result() is
triggered via ajax requests
used for many different operations which may be called at the same time
may be called multiple times per operation in one session
This code works perfectly fine in my local environment. However, on the production server, where gevent and Chaussette are installed, it fails.
Most of the time, request.session.get(operation.name) returns None even when it is not the first time that get_result has been called for that operation. In some cases it returns a value, and in some it doesn't; there seems to be no pattern to when it does and doesn't work.
I suspect that the inconsistency arises because different threads are referencing the session variable in different states. What would be the proper way to handle session variables in this case?
I did in fact have the same problems and also tried to save the session properly with the tweaks you posted.
In the end, what solved my problem was changing the default cache in settings.py to
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
Using FileBasedCache instead helps as well, but it crashes in the local (development) environment. DummyCache works for local as well as production.
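For reference, a minimal sketch of where that line lives in settings.py; the 'default' alias is Django's standard cache alias, and nothing beyond the BACKEND value comes from the original answer:
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}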

Flash Builder (Mobile) - Dynamic Web Service URL

For my Flash Builder 4.6 project I have an HTTP service defined which points at a URL on our website.
What I'd like to be able to do, though, is change the web service URL on the fly within the app, i.e. use the existing URL as the default but have an admin/settings screen that changes where the web service points (stored either in our SQLite database or in local memory).
This would be so that we could allow our customers to host their own version of the website/database but still be able to use/download the app through the app stores.
Has anyone had any experience with doing this?
EDIT: Adding some more details after the comments below.
When I created the HTTP service through the Flash Builder wizard, it generated two web service classes: a superclass and a subclass which inherits from it. All of the code that the wizard populates goes into the superclass.
I assume that the code I need to add should go in the subclass, but I do not know which function to put it in, or how.
Below is a sample of the superclass's constructor:
// initialize service control
_serviceControl = new mx.rpc.http.HTTPMultiService("websitehere");
var operations:Array = new Array();
var operation:mx.rpc.http.Operation;
var argsArray:Array;
operation = new mx.rpc.http.Operation(null, "loginRequest");
operation.url = "login.php";
operation.method = "GET";
argsArray = new Array("un","pw");
operation.argumentNames = argsArray;
operation.serializationFilter = serializer0;
operation.properties = new Object();
operation.properties["xPath"] = "/";
operation.contentType = "application/x-www-form-urlencoded";
operation.resultType = valueObjects.Data;
operations.push(operation);
_serviceControl.operationList = operations;
I'm not sure what property of the _serviceControl variable I would need to alter.
Also, when I search for my website in my code, it brings back a .fml file inside a .model directory which seems to get auto-refreshed if I change the service URL through the wizard. Would this not cause an issue?
I then have the challenge of accessing the user-defined URL. Within the app we use an SQLite database to store data, but I think it would probably be better to use a SharedObject, which we also use to know what account the user is logged into. How reliable is this? I assume I would be able to access it from the service?
The awkward thing is that we were planning to make this configurable on a settings screen accessed after logging in, but to log in the app would already need to know which server to point to.
If I'm reading your question correctly, your main aim is to dynamically change the URL for the services based on a user-defined variable.
This is very easy to accomplish, and even easier if you are using Parsley/Spicelib.
A few points:
Don't change the code in the superclass file; it will get overwritten whenever the service gets refreshed. Change everything in the generated subclass.
Shared objects are very good for small quantities of data but should never be used for massive datasets, e.g. storing a big ArrayCollection.
Anyway, here is how I achieve this.
In the subclass you can change the constructor function.
Here is how I change my URLs based on a config variable, but you can just as easily use a SharedObject instead.
public function SubClassConstructor() {
    if (CONFIG::DOMAIN_IDENT == "development" || CONFIG::DOMAIN_IDENT == "dev" || CONFIG::DOMAIN_IDENT == "d") {
        _serviceControl.endpoint = "http://yoururl1";
    }
    else if (CONFIG::DOMAIN_IDENT == "production" || CONFIG::DOMAIN_IDENT == "prod" || CONFIG::DOMAIN_IDENT == "p") {
        _serviceControl.endpoint = "http://yoururl2";
    }
}
Of course this isn't exactly what you're looking for, but it's a working solution. You can use bindings to a global ApplicationModel or a direct reference to the SharedObject; I guess you already know how to use the SharedObject.
Ask if you need any further help or guidance.
As cghrmauritius' solution didn't quite work for me, I am posting up the final solution that did work in my situation.
public function subConstructor()
{
    super();
    _serviceControl.baseURL = "http://url1";
}
Obviously for my final solution I need to implement the SharedObject as well, but overriding the URL was my main priority.

Determining dev vs production

What method should I use to determine if I'm on the dev system vs. production?
In this post from Ray Camden, he shows how to see what folder you're in, so that could be an indicator.
While in dev, I want error trapping turned off, the missing-template handler turned off, debug="yes" for cfstoredproc and cfquery, and the components reloaded on every onRequestStart.
I have two approaches to this, both of which have served me well. I'll start with the easiest approach first, which is what I'd call a "static" approach. I use this when I don't have many environment-specific settings... maybe a small handful.
I'm assuming you have an Application.cfc or .cfm file for your app. In there, you could set a variable, something like "application.environment", and by default it'd be set to "dev". Throughout your app you could inspect that variable to determine where you are.
When you package your application for deployment, you could then change that Application.cfc file to read "prod" instead.
Now, that's going to get annoying, so I just use Ant for this. I use something like this in my build.xml, which lives in the same directory as Application.cfc:
<replace file="Application.cfc" token="DEV" value="PROD" casesensitive="true" />
And then zip the app for deployment:
<zip destfile="${zipdir}/MyApp-Production.zip">
    <zipfileset dir="." prefix="MyApp" />
</zip>
Then I deploy the zip. If I'm working on a small project that uses FTP instead of some corporate enterprisey deployment hooey, then I'll just have an Ant task that FTPs files to my production server; it also performs that replace on Application.cfc and pushes that file, too.
For most of the apps where I work, we use two database tables to manage environments. We do this because we have a lot of different environments, and each one has different settings, usually centered around filesystem and network paths that differ per environment (let's not talk about why they're different... totally separate discussion). So we have a table we call "AppLocations":
LocationID | LocName | LocDesc | Setting1 | Setting2 | Setting 3| ......
1 | Local | 'Localhost Environment' | whatever.....
2 | Dev | 'Development Environment' | whatever....
3 | Test | 'Test Environment' | whatever.....
and so on.
Then, we have another table named "AppLocationHosts"
LocationID | LocHostName
1 | 'localhost'
2 | 'devservername'
2 | 'otherdevservername'
3 | 'testservername'
3 | 'othertestserver'
and so on.
Then, in Application.cfc, in onApplicationStart, we run this query:
SELECT TOP 1 *
FROM AppLocations
WHERE LocationID IN (SELECT LocationID FROM AppLocationHosts WHERE LocHostName = <cfqueryparam value="#CGI.HTTP_HOST#" cfsqltype="cf_sql_varchar"/>)
And from there, once we know what location we're in based on the http_host match, we set those "Setting" columns into the application scope:
<cfloop list="#qryAppPathLocations.ColumnList#" index="ColName">
    <cfset application[ColName] = qryAppPathLocations[ColName]>
</cfloop>
This approach isn't for everyone, but in our weird environment where consistency is unusual, it's been a very flexible approach.
Now, if you literally only have two environments, and one of them is "localhost" and the other is "www.myapp.com", then by far the easiest approach is to check http_host in onApplicationStart and, if you're on "www.myapp.com", do your production-specific setup. Perhaps you set something like "request.querydebug = true" here, and when you're in production you turn that off. Then your queries could use that flag to determine whether to turn debug on or off for the cfstoredproc and cfquery calls. Though I must say, I strongly recommend against that.
Can you just enable debugging in CFAdmin on your dev box for your IP and then use IsDebugMode()?
Dump the #server# scope and you'll see some keys that may help, e.g. the license mode of ColdFusion.
The solution we use is to take the IP of the current instance and check it against our known "dev" IPs. Simple, easy, works.
A lot of good answers here. I'd like to mention using cgi.server_name, which can be combined with a custom DNS entry to specify your dev environment. To get localhost working with IIS on Windows, set up the hosts file like this:
C:\Windows\System32\drivers\etc\hosts - add entry:
127.0.0.1 myapp.dev.mydomain.com.au
Then in IIS, map your site to this DNS name.
Your systest and UAT servers might be set up properly in your corporate DNS, such as:
myapp.systest.mydomain.com.au - systest
myapp.uat.mydomain.com.au - uat
myapp.mydomain.com.au - production
Then, in my Application.cfc I have a getEnvironment() function that is called on every load for ease of use:
// get the environment based on CGI variables - top of Application.cfc
this.stConfig = THIS.getEnvironment();
//... onApplicationStart
if (!stConfig.validEnvironment) {
    writeOutput("Environment #cgi.server_name# not recognised");
    return false;
}
// ...
public struct function getEnvironment() {
    stConfig = structNew();
    stConfig.validEnvironment = 1;
    switch (cgi.server_name) {
        // my dev environment
        case "myapp.dev.mydomain.com.au": {
            stConfig.env = "dev";
            // +++
            break;
        }
        // my systest environment
        case "myapp.systest.mydomain.com.au": {
            stConfig.env = "systest";
            // +++
            break;
        }
        // etc
        default: {
            // unrecognised host; onApplicationStart checks this flag and bails out
            stConfig.validEnvironment = 0;
        }
    }
    return stConfig;
}
I will also copy stConfig to the request scope.
Now, I've got a lot of other stuff in there too, and there are lots of ways to implement the storage of environment settings, but basically I find the combination of DNS and cgi.server_name particularly well suited to managing environments.
FWIW, I include INI files in Application.cfc based on the environment name, and use them for storing environment-specific configuration. I find getProfileSections() very useful for this, as the config files are very easy to work with. I have one common file that is shared between all environments, and then environment-specific ones for the settings that need to be tailored to each environment.
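The same layering idea, sketched with Python's configparser purely for illustration (the file names and keys are placeholders, not from the original setup):
import configparser

def load_settings(environment):
    # read the shared file first, then let the environment-specific
    # file override any values it redefines
    config = configparser.ConfigParser()
    config.read(['common.ini', f'{environment}.ini'])  # placeholder file names
    return config

# e.g. settings = load_settings('dev'); later files win for duplicated keys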
Is it possible to get the directory of the currently running application?
Consider this directory structure for the different "instances" of your application:
/home/deploy/DevLevel.0/MyApp
Production Version
/home/deploy/DevLevel.1/MyApp
Preview or Staging Version
/home/deploy/DevLevel.2/MyApp
Development Version
If you can read the path to the current application, it's easy to find the integer after DevLevel. With that in hand (set as a global variable/constant), use it to change settings or behavior at runtime:
DevLevel == 0 means "Production"
DevLevel >= 1 means "Development"
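As a rough sketch of pulling that integer out of the application path (shown in Python just to illustrate the idea; the regex and sample path are assumptions):
import re

def get_dev_level(app_path):
    # extract the integer that follows "DevLevel." in the deployment path
    match = re.search(r'DevLevel\.(\d+)', app_path)
    return int(match.group(1)) if match else 0

# get_dev_level('/home/deploy/DevLevel.2/MyApp') returns 2 (the development version)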
For example, in the credit card authorization code:
if (DevLevel > 0)
    enable_test_mode();
In error handling code:
if (DevLevel == 0)
    send_error_to_log();
else
    print_error();
Conclusion
The primary benefit here is that the code between the versions can remain 100% identical. No more "forgetting to enable this or disable that when moving code live".
Can this be implemented in ColdFusion?