Running multiple sites in one CF instance

I'm running 2 sites (Dev and QA) under one instance of CF 2018. When I send a REST call to a CFC on Dev, the QA instance gets confused and starts delivering pages from the Dev site, probably due to Application or Session vars being changed.
I verified that the REST call is using the session vars from Application.cfm for the dev environment, as it should. But the QA instance somehow gets switched over to the Dev folder and starts running cfm modules from there. I can't find where the QA pages are getting redirected.
Thanks in advance for any ideas on things to look at.

Both DEV and QA are sharing the same application name and therefore the same application and session variables, so you're switching context between requests based on which domain/environment made the previous request.
In addition, you should convert your Application.cfm to Application.cfc and refactor how and when you're defining application and session variables. It sounds like the CFM is setting those values on every request instead of only once, which is why the different requests keep switching context.
http://www.learncfinaweek.com/course/index/section/Application_cfc/item/Application_cfc/
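For illustration, here's a minimal Application.cfc sketch along those lines (the names and values are placeholders, not your actual settings):

// Application.cfc -- set shared values once, not on every request
component {
    this.name = "myAppDev";  // must differ between DEV and QA
    this.sessionManagement = true;
    this.applicationTimeout = createTimeSpan( 1, 0, 0, 0 );

    function onApplicationStart() {
        // runs once per application start, unlike code in Application.cfm
        application.environment = "dev";
        return true;
    }

    function onSessionStart() {
        // runs once per new session
        session.isLoggedIn = false;
    }
}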
You need to do three things:
run each environment on its own CF instance
make the application name dynamic
point each instance's data sources at its own database
Isolating the instances allows you to keep separate logs per environment.
A dynamic application name isolates each application's shared scopes from the other's.
The data should be separated per environment to isolate in-development DB changes from QA testing.
https://coldbox.ortusbooks.com/getting-started/configuration/bootstrapper-application.cfc
Your code is in source control. You check out a copy for DEV and one for QA, each to its own "web root":
/code/DEV/repo
/code/QA/repo
In your Application.cfc:
this.name = hash( getCurrentTemplatePath() );
This creates a dynamic application name based on the current folder path, so the DEV and QA checkouts get different names and their shared scopes stay isolated. It also helps with local DEV efforts.
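You can also key other per-environment settings off the same path. A sketch, assuming the checkout folders above and made-up datasource names:

// Application.cfc -- derive the environment from the checkout path
component {
    this.name = hash( getCurrentTemplatePath() );
    this.sessionManagement = true;

    function onApplicationStart() {
        // matches /code/DEV/repo vs /code/QA/repo from the checkouts above
        application.isDev = findNoCase( "/DEV/", getCurrentTemplatePath() ) > 0;
        application.dsn   = application.isDev ? "myapp_dev" : "myapp_qa";
        return true;
    }
}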
It's a bit of work, but there's no way to shortcut this.

Related

Django on a single server. Web and Dev environments with web/dev databases

In a couple of months, I'm receiving a single (physical) Ubuntu LTS server for the purpose of a corporate Intranet only web tools site. I've been trying to figure out a framework to go with. My preference at this point would be to go with Django. I've used RoR, CF and PHP heavily in the past.
My Django concern right now is how to have both a separate '/web/' and '/dev/' environment, when I'm only getting a single server. Of course this would include also needing separate 'web' and 'dev' databases (either separated by db name or having two different db instances running on the single server).
Option 1: I know I could only setup a 'web' (production) environment on Ubuntu and then use my corporate Windows laptop to develop Django tools. I've read this works fine except that a lot of 3rd party Django packages don't work on Windows. My other concern would be making code changes and then pushing to the Ubuntu server where I might introduce problems that didn't show up on the local Windows development environment.
Option 2: Somehow setup a separate Django 'web' and 'dev' environment on the same server. I've seen a lot of different and confusing information on this. Also adding to the complication is what I assume would be the need to have two database instances running on the same server. Or, how could you have two different Django environments for 'web' and 'dev' and have them point to different db tables based on name instead of needing two different db instances running?
Thanks for any advice. I'm actually having trouble relaxing and learning Django, not knowing how hard this is going to be to deal with. I could easily just put up with the pain of developing in basic PHP if this is too complicated. With plain PHP it's dead simple to have a '/web/' and a '/dev/' path and separate DBs just by checking the URL or file path for '/web/' or '/dev/' (and then pointing to the right DB, for example 'mytool_dev_v1' / 'mytool_web_v1').
There are multiple ways to solve this problem:
You can run two separate Django instances on the same server, in different virtual environments. You can configure them in multiple ways: using environment variables, or just separate 'production' and 'dev' config files and choosing which one gets loaded.
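For example, a settings.py fragment for the environment-variable approach might look like this (the DJANGO_ENV name is an assumption; the database naming scheme is the one from the question):

# settings.py -- sketch of env-var-driven configuration
import os

ENV = os.environ.get("DJANGO_ENV", "dev")  # "web" in production, "dev" otherwise

DEBUG = ENV != "web"

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mytool_%s_v1" % ENV,  # e.g. mytool_dev_v1 / mytool_web_v1
        "USER": "mytool",
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}

Note that two database names inside one PostgreSQL (or MySQL) instance is enough isolation here; you don't need two database server processes.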
You can use Docker containers to serve the different Django instances; I think that's the best way. You can configure them in the same manner: environment variables, or multiple config files for the 'dev' and 'prod' options.
If you want to serve two (or more) sites on the same server, you'll probably need to configure nginx to route requests to the separate containers or Django instances depending on the domain name or something else (the URL path, for example); a minimal sketch follows.
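A sketch of that nginx configuration, assuming domain-based routing and that the two instances listen on ports 8001 and 8002:

# nginx -- route each domain to its own Django instance (server names and ports assumed)
server {
    listen 80;
    server_name web.example.com;
    location / { proxy_pass http://127.0.0.1:8001; }
}
server {
    listen 80;
    server_name dev.example.com;
    location / { proxy_pass http://127.0.0.1:8002; }
}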
As far as I know, there is no problem configuring a separate database for each instance. You can also run your PostgreSQL or MySQL instance in a container, and nginx the same way.
I can't recommend developing your app on the same server where the production app is running. I'm convinced that development should happen on the developer's computer, but yeah... Windows is not the best platform for Django development, though it mostly works. Otherwise I'd recommend dual-booting, or at least VirtualBox with Ubuntu.

Using other environments from Postman

We have our workflow split into multiple projects, each dealing with different concerns (the Central server is for anything authentication-related, the API server for anything new-gen related, and each other project corresponds to its own app).
This makes our process of hitting an app API as follows:
1. From the Central Server local environment, post authentication
2. Set the app
3. Switch to the end user's app environment
4. Hit the API of the app of the end user's choice
This makes for tests that are difficult to write, in that we'd have to do steps 1 through 3 across two different environments.
Is there a way to access the variables from one environment (e.g. Central Server Local) from another, in the test script?
Environment variables are scoped to their own environment. One level up from that, you could set a global variable in the test script and still access it after you change environments:
https://www.getpostman.com/docs/v6/postman/environments_and_globals/variables
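For example, in the Tests tab of the authentication request (the variable name here is a placeholder):

// copy a value from the current environment into the global scope
pm.globals.set("authToken", pm.environment.get("authToken"));

Any request in any environment can then read it with pm.globals.get("authToken"), or as {{authToken}} in the request itself.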
Collection variables are also above environment variables but cannot be set programmatically, only through the collection settings.

Deploying Django as standalone internal app?

I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr) and will be employed on different projects. My team currently has a working prototype, and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located in subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS or Active Directory, and our IT department is already over-extended. In short, we're not web developers, and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy) while each retains access to a common project-specific database (e.g. a SQLite DB living on the network drive, or a DB on the MS SQL server). In order to use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution makes a nice executable for users not familiar with the command line (who also have trouble remembering the URL to put in their browser) while also keeping development relatively easy:
Use PyInstaller to package up the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify the manage.py to:
Detect if the app is frozen by PyInstaller and there are no arguments (i.e.: user executed it by double clicking it) and if so, then run execute_from_command_line(..) with arguments to start the Django development server.
Right before running the execute_from_command_line(..), pop off a thread that does a time.sleep(2) (to let the development server come up fully) and then webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect if frozen and change things around, such as the path to the DB server, enabling the development server, etc. (a sketch of these changes follows).
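Putting steps 2 and 3 together, a minimal sketch (the project module name, port, and the exact frozen checks are assumptions):

# manage.py -- launch the dev server and a browser when run as a frozen executable
import os
import sys
import threading
import time
import webbrowser

from django.core.management import execute_from_command_line

def open_browser():
    time.sleep(2)  # let the development server come up fully
    webbrowser.open_new("http://127.0.0.1:8000")

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
    # PyInstaller sets sys.frozen on the bundled executable
    if getattr(sys, "frozen", False) and len(sys.argv) == 1:
        threading.Thread(target=open_browser, daemon=True).start()
        # --noreload: the autoreloader re-executes the script, which breaks frozen apps
        execute_from_command_line([sys.argv[0], "runserver", "--noreload", "127.0.0.1:8000"])
    else:
        execute_from_command_line(sys.argv)

And the corresponding check in settings.py:

# settings.py -- adjust paths when running frozen
import os
import sys

if getattr(sys, "frozen", False):
    BASE_DIR = os.path.dirname(sys.executable)  # e.g. resolve the DB file next to the exe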
A couple additional notes.
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use: maybe prompt in the app, or build the same app multiple times with different settings.py files. There are a variety of ways to hit this nail...
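One way to let the user (or a per-project launcher script) pick the file, using an environment variable whose name is purely hypothetical:

# settings.py -- per-project SQLite file on the network share (paths are examples)
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.environ.get("DOCTOOL_DB", r"\\fileserver\projects\demo\db.sqlite3"),
    }
}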
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials (which means they could be extracted by a knowledgeable user). This presents a security risk that may not be acceptable. Basically, don't make the app the user is executing your only layer of security: the DB credentials used by that app should only grant the access the user is allowed.

Pain of configuring various environments in development and production (Rails 4 application)

As per best practices, my development team does not store the application config file in a repo for security reasons (we use a config/application.yml file to store configs). However, when we actually develop and deploy, this causes some problems:
A developer needs to add a new external URL that differs depending on which environment the application is running in. Since there is no config file in the repo, he cannot update a single file that gets synced when another developer pulls the code. Instead, he updates his local config/application.yml file, then every other developer updates their local file, and then we have to add the new ENV variable to the server's config/application.yml. There has to be a better solution.
If we stored the config/application.yml file in the repo and shared it among everyone and the servers, that would solve the problem of sharing/updating global configs, BUT it opens up the possibility that a developer accidentally starts their local application in production mode and touches live data or spams real users with test emails (this has happened, which is why it's a concern).
Is there a standard best practice for solving these types of problems? It seems I have to sacrifice either productivity or security; I can't really have both.
I've been thinking about creating a config/development.yml file in the repo that all developers share, which stores all environments EXCEPT production. That way they can share config/ENV items for development and sync them up. But in production, I would have a config/production.yml file that ONLY lives on the servers.
If the application is started in anything except production environment, it loads the development.yml file. If it is started in production, it loads the production.yml file. But since the production.yml file does NOT live in the repo (only on the servers), there's no chance that a developer can accidentally touch live data or spam real users, etc...
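A minimal sketch of that scheme, e.g. in an initializer (the constant name is a placeholder):

# config/initializers/app_config.rb -- load development.yml everywhere except production
require "yaml"

config_file = Rails.env.production? ? "production.yml" : "development.yml"
path = Rails.root.join("config", config_file)
APP_CONFIG = File.exist?(path) ? YAML.load_file(path).fetch(Rails.env, {}) : {}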
Have any professional developers tried a scheme like this? I've done a lot of googling but really haven't found a satisfactory solution.
Check out the RailsConfig gem. It allows you to do exactly what you stated, but with the ease of a gem, and it lets you and your dev team keep local YAML files that override shared settings. Settings files are merged in this order, with later files overriding earlier ones:
config/settings.yml
config/settings/#{environment}.yml
config/environments/#{environment}.yml
config/settings.local.yml
config/settings/#{environment}.local.yml
config/environments/#{environment}.local.yml
You would then just have config/settings/production.yml within your .gitignore so that it will not be checked into source control.

CF Builder Debugging config

G'day
For security reasons, our CFAdmin (and accordingly RDS) is accessed via one domain, say cfadmin.ourdomain.com, while the site itself is accessed via a different domain: www.ourdomain.com.
Via some miracle I have just been able to get both RDS and a server set up without RDS giving me "Could not initialize class com.adobe.rds.core.services.Messages" (this is a first), and it will let me launch a debugging session. However, it tries to hit the file I'm testing via cfadmin.ourdomain.com (and the actual website is not defined on that IIS site). I can understand why this happens, but I can't figure out how to tell the debugging config that the actual website is www.ourdomain.com.
Having either CFIDE accessible on www.ourdomain.com or the site accessible via cfadmin.ourdomain.com is not a possibility, so neither can be part of a proposed solution.
Anyone have any ideas?
Oh: this is on CF9.0.1.
UPDATE:
Sorry, just to be absolutely clear... this is our dev environment. This is all running on my local PC. However the local server (a VM running on my workstation) is configured the same as the prod environment (for obvious reasons), down to how CFAdmin is accessed.
This is not the answer you will like :) Your security folks were right in the first place. CF Builder debugging is a development tool designed to be used on a dev system, and it works best in a local dev environment where CF plus Apache or IIS is running on your local workstation.
On a production server, RDS itself is a weakness that you really don't want running in that environment (sorry). Having said that, here are my big ideas :)
Have your admin create a folder on cfadmin.ourdomain.com that points to your prod code. Depending on your code, you might get that to work.
Point cfadmin.ourdomain.com to your production code directly (no folder). After all, the CFIDE and other mappings used by RDS are aliases or virtual directories anyway.
That's all I've got, sorry...