Multiple ring sites on one immutant? - clojure

Immutant allows applications to respond to web requests via Ring
handlers. Each application can dynamically register any number of
handlers, each with a unique context path. This allows you to have
multiple Ring webapps that share the same deployment lifecycle.
So it says I can have multiple Ring apps on one Immutant, but can I (or should I) have two separate websites running on one Immutant: site1.com and site2.com?
This context path is considered the top-level context path - you have
the option to bind a handler to a sub-context path that will be nested
within the top-level path. The full context is stripped from the url's
path before the request is processed, and the context and remaining
path info are made available as part of the request map via the
:context and :path-info keys, respectively.
It sounds like I can have apps running on site1.com/context1 and site1.com/context2, but not so much on two separate domains.
The reason I'm asking is that Immutant takes up a lot of my server resources, so much so that I'm not sure I can run two Immutant instances. The correct question might be: how do I improve performance on my Immutant? (I'm not any good with servers/deployment.)
Source: http://immutant.org/documentation/0.1.0/web.html

The answer is complicated by the fact that there are currently two major Immutant version branches: 1.x and 2.x. 1.x requires far more resources than 2.x, but 2.x hasn't been officially released yet (though incremental releases are available).
Both versions support mounting Ring apps at various combinations of virtual host, e.g. site1.com, and context path, e.g. /context1. In Immutant 1.x, the :virtual-host setting is in your deployment descriptor, as is the :context-path for the entire project. This is somewhat confusing, since you can also specify a :context-path when starting your Ring handler. The one passed to immutant.web/start is resolved relative to the one set in the deployment descriptor, which is why it's referred to as a "sub context path" in the docs.
In 2.x, things are simpler, because there is no deployment descriptor. Everything is passed as an option to immutant.web/run.
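For illustration, here is a minimal sketch of what that looks like in 2.x; the handler names are made up, and the exact option names should be verified against the apidocs for the incremental build you're using:

    ;; minimal sketch, assuming an Immutant 2.x incremental build
    (ns example.sites
      (:require [immutant.web :as web]))

    (defn site1-handler [request]
      {:status 200 :headers {"Content-Type" "text/plain"} :body "site1"})

    (defn site2-handler [request]
      {:status 200 :headers {"Content-Type" "text/plain"} :body "site2"})

    (defn -main []
      ;; two sites in one process, distinguished by virtual host
      (web/run site1-handler {:port 8080 :virtual-host "site1.com"})
      (web/run site2-handler {:port 8080 :virtual-host "site2.com"})
      ;; or two apps on the same host, distinguished by context path
      (web/run site2-handler {:port 8080 :path "/context2"}))

Since both handlers share one JVM and one embedded server, this avoids the cost of running two separate Immutant instances.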

Can you post a small example with what you have so far?
It seems like you could achieve it with the :host option to run: https://projectodd.ci.cloudbees.com/job/immutant2-incremental/lastSuccessfulBuild/artifact/target/apidocs/immutant.web.html

Related

Django communicating with another python application?

Is it possible to have Django running on the server and have one Django application communicate with another Python process, say one that I developed, fetching a response from it or even just making it perform a particular action?
It can be synchronous or asynchronous; I have some idea of doing it asynchronously, where a package like hendrix, crossbar.io or even celery could be used. But I don't know what this kind of inter-communication is called, or how I should plan the architecture for it.
Going around in my head, I have the following two situations that I'm seeking a plan for:
1.
Say I have Django and an e-mail sender built with Python's smtplib. A user making a request to a view would make Django execute the Python module I developed for sending an email to a particular user (with an SMTP server from Google/Gmail). It could be synchronous or asynchronous.
OR
2.
I have Django (some application) and I want it to communicate with some server I maintain; say, for making this server execute some code or just fetch a file (if it is an FTP server). Is this an appropriate situation for the term 'microservices'? Or is there another term or workaround here?
Your first solution would be called an installable Python module, just like any package you install with pip. You can have this as a separate module if you need your code to be re-usable across multiple projects, or just future ones.
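For example, the e-mail sender from your first situation could live in a plain importable module; this is only a sketch with made-up names, using Python's standard smtplib:

    # mailer.py - a hypothetical reusable module, importable from any project
    import smtplib
    from email.message import EmailMessage

    def send_mail(host, port, user, password, to_addr, subject, body):
        msg = EmailMessage()
        msg["From"] = user
        msg["To"] = to_addr
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP(host, port) as server:
            server.starttls()
            server.login(user, password)
            server.send_message(msg)

A Django view can then simply import and call send_mail (synchronously), or hand the call off to a Celery task for asynchronous delivery.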
Your second solution would be a microservice. This will require setting your small module up as a service that could expose a REST API to communicate with and make it do whatever you intend it to do.
If your question is "what is the right approach", then I would tell you it depends on your use case. If this is just some re-usable code that you don't want to repeat over and over throughout your project, then just make it into a separate module. Whereas if it is a service that you expect other services to use and rely on, then make it into a microservice. You can use a microframework such as Flask for an easier and faster setup of your service. Otherwise, if it's just some code that you will use once and that serves a single piece of functionality in your application, then just write it and keep it there.
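A rough sketch of the microservice variant, assuming Flask on the service side and the requests library on the Django side (the URL and field names are invented):

    # service.py - a tiny hypothetical microservice exposing one REST endpoint
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/send-email", methods=["POST"])
    def send_email():
        payload = request.get_json()
        # call your own mailer code here
        return jsonify({"status": "queued", "to": payload["to"]})

    # from a Django view, the service is just an HTTP call away:
    #   import requests
    #   requests.post("http://mail-service:5000/send-email",
    #                 json={"to": "user@example.com", "subject": "Hi", "body": "..."})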
There are no rules or standards on which approach should be taken. I personally judge things depending on the use-case.
Hope this helps!

Dynamic database connection in Symfony 4

I am setting up a multi-tenant Symfony 4 application where each tenant has its own database.
I've set up two database connections in the doctrine.yaml config. One of the connections is static based on an env variable. The other one should have a dynamic URL based on a credential provider service.
    doctrine:
        dbal:
            connections:
                default:
                    url: "@=service('provider.db.credentials').getUrl()"
The above expression "@=service('provider.db.credentials').getUrl()" is not being parsed though.
When injecting "@=service('provider.db.credentials').getUrl()" as an argument into another service, the result of getUrl() on the provider.db.credentials service is injected. But when using it in the connection configuration for Doctrine, the expression is not parsed.
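For reference, this is roughly what the working service injection looks like (the consuming service name is just an example):

    # services.yaml - here the expression IS evaluated by the DI component
    App\SomeConsumer:
        arguments:
            - "@=service('provider.db.credentials').getUrl()"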
Does anyone have an idea how to solve this?
You're trying to rely on the ability of the Symfony services definition to use expressions for defining certain aspects of services. However, you need to remember that this functionality is part of the Dependency Injection component, which is able (but not limited) to use configuration files for services. To be more precise, this functionality is provided by the configuration loaders; you can take a look here for an example of how it is handled by the YAML configuration loader.
On the other hand, the configuration for the Doctrine bundle that you're trying to use is provided by the Config component. The fact that the Dependency Injection component uses the same file formats as the Config component may give the impression that these cases are handled in the same way, but they are actually completely different.
To sum it up: the expression inside the Doctrine configuration does not work as you expect because the Doctrine bundle's configuration processor doesn't expect to get an Expression Language expression and has no support for handling one.
While the explanations given above hopefully answer your question, you're probably also expecting some information about how to actually solve your problem.
There are at least two possible ways to do it, but choosing the correct one may require some additional information that is out of the scope of this question.
If you know which connection to choose at the time the container is built (your code assumes that this is the case, but you may not be aware of it), then you should use the compiler pass mechanism to update the Doctrine DBAL service definitions (which may be quite tricky). The reason this process is non-trivial is that configurations are loaded at an early stage of the container building process and provide no extension points. You can take a look into the sources if necessary. Anyway, while possible, I would not recommend going this way, and most likely you will not need it, because (I suppose) you need to select the connection at runtime rather than at container build time.
The probably more correct approach is to create your own wrapper around the DBAL Connection class that maintains a list of actual connections and provides the required connection depending on your application's logic. You can refer to the implementation details of the DBAL sharding feature as an example. The wrapper class can be defined directly through the Doctrine bundle configuration by using the wrapper_class key in the dbal configuration.
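A rough sketch of what that could look like; the class name is illustrative and the body is only a skeleton, not a drop-in implementation:

    # config/packages/doctrine.yaml
    doctrine:
        dbal:
            connections:
                default:
                    # illustrative name; the class must extend Doctrine\DBAL\Connection
                    wrapper_class: App\Doctrine\TenantConnection

    // src/Doctrine/TenantConnection.php (skeleton only)
    namespace App\Doctrine;

    use Doctrine\DBAL\Connection;

    class TenantConnection extends Connection
    {
        // keep per-tenant connection parameters here and decide, from your
        // application's logic, which underlying connection to hand out;
        // the DBAL sharding classes show one working way to structure this
    }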

Terraform Multiple State Files Best Practice Examples

I am trying to build out our AWS environments using Terraform but am hitting some issues scaling. I have a repository of just modules that I want to use repeatedly when building my environments and a second repository just to handle the actual implementations of those modules.
I am aware of HashiCorp's GitHub page that has an example, but there each environment is one state file. I want to split environments out but then have multiple state files within each environment. When the state files get big, applying small updates takes way too long.
In every example I've seen where multiple state files are used, the Terraform files are extremely un-DRY and not ideal.
I would prefer to be able to have different variable values between environments but have the same configuration.
Has anyone ever done anything like this? Am I missing something? I'm a bit frustrated because none of the Terraform examples out there are at scale, which makes it hard for a n00b such as myself to start down the right path. Any help or suggestions are very much appreciated!
The idea of environment unfortunately tends to mean different things to different people and organizations.
To some, it's simply creating multiple copies of some infrastructure -- possibly only temporary, or possibly long-lived -- to allow for testing and experimentation in one without affecting another (probably production) environment.
For others, it's a first-class construct in a deployment architecture, with the environment serving as a container into which other applications and infrastructure are deployed. In this case, there are often multiple separate Terraform configurations that each have a set of resources in each environment, sharing data to create a larger system from smaller parts.
Terraform has a feature called State Environments that serves the first of these use-cases by allowing multiple named states to exist concurrently for a given configuration, and allowing the user to switch between them using the terraform env commands to focus change operations on a particular state.
The State Environments feature alone is not sufficient for the second use-case, since it only deals with multiple states in a single configuration. However, it can be used in conjunction with other Terraform features, making use of the ${terraform.env} interpolation value to deal with differences, to allow multiple state environments within a single configuration to interact with a corresponding set of state environments within another configuration.
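A small illustration of that combination, with invented resource and variable names (in later Terraform versions the env commands and terraform.env were renamed to workspace and terraform.workspace):

    # create and switch between named states for this configuration:
    #   terraform env new staging
    #   terraform env select staging

    variable "instance_type_per_env" {
      type = "map"
      default = {
        default = "t2.micro"
        staging = "t2.small"
      }
    }

    resource "aws_instance" "example" {
      ami           = "ami-12345678"   # placeholder
      instance_type = "${lookup(var.instance_type_per_env, terraform.env)}"

      tags {
        Environment = "${terraform.env}"
      }
    }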
One "at scale" approach (relatively-speaking) is described in my series of articles Terraform Environment+Application Pattern, which describes a generalization of a successful deployment architecture with many separate applications deployed together to form an environment.
In that pattern, the environments themselves (which serve as the "container" for applications, as described above) are each created with a separate Terraform configuration, allowing each to differ in the details of how it is configured, but they each expose data in a standard way to allow multiple applications -- each using the State Environments feature -- to be deployed once for each environment using the same configuration.
This compromise leads to some duplication between the environment configurations -- which can be mitigated by using Terraform modules to share patterns between them -- but these then serve as a foundation to allow other configurations to be generalized and deployed multiple times without such duplication.

Use flask admin to set config parameters

As the title says, I have a small web app that doesn't use a database or models.
I'd like an interface to change some of Flask's own config parameters, and thought that flask-admin might get me there quickly. Is this easily possible?
You can't generally change configuration after starting the application without restarting the server.
The application (at least in production) will be served by multiple processes, possibly even on multiple servers. Changes to the configuration will only affect the process that handled the request, until the other processes are reaped and restarted. Even then, they may fork from a point after the configuration was read.
Extensions are not consistent about how they read configuration. Some read the configuration from current_app every request. Some only read it during init_app and store their own copy, so changing the configuration wouldn't change their copy.
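For example, an extension written like this (a made-up illustration) would never notice a later change to app.config:

    # hypothetical extension that copies a value once during init_app
    class ExamplePrefixer:
        def init_app(self, app):
            # later edits to app.config["EXAMPLE_PREFIX"] are invisible here
            self.prefix = app.config.get("EXAMPLE_PREFIX", "/default")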
Even if the configuration is read each time, some configuration just can't be changed, or requires other steps as well. For example, if you change databases, you should probably make sure you also close all connections to the old database, which the config knows nothing about. Another example, you could change debug mode but it won't do anything, because most of the logging is set up ahead of time.
The web app might not be the only thing relying on the configuration, so even if you could restart it automatically when configuration changed, you'd also need to restart dependent services such as Celery. And those services also might be on completely different machines or as different users.
Configuration is typically stored in Python files, so you'd need to create a serializer that can dump valid Python code, or write a config loader for a different format.
Flask-Admin might be able to be used to create a user interface for editing the configuration, but it wouldn't otherwise help with any of these issues.
It's not really worth it to try and change Flask.config after starting the application. It's just not designed for that. If you need runtime-editable settings, design a config system specifically for them, but don't expect to be able to generally change Flask.config.
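If you only need a handful of values to be editable at runtime, a small purpose-built store that every process reads on demand is usually enough. A rough sketch, with the file name and helper names made up:

    # settings_store.py - read fresh on each use instead of relying on Flask.config
    import json
    from pathlib import Path

    SETTINGS_FILE = Path("runtime_settings.json")

    def get_setting(key, default=None):
        if SETTINGS_FILE.exists():
            return json.loads(SETTINGS_FILE.read_text()).get(key, default)
        return default

    def set_setting(key, value):
        data = json.loads(SETTINGS_FILE.read_text()) if SETTINGS_FILE.exists() else {}
        data[key] = value
        SETTINGS_FILE.write_text(json.dumps(data))

A Flask-Admin view could call set_setting and request handlers could call get_setting, without ever mutating Flask.config; the multi-process caveats above still apply if the processes don't share a filesystem.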

Sitecore development and demo servers

I'm attempting to get an understanding of what is a best practice / recommended setup for moving information between multiple Sitecore installations. I have a copy of Sitecore set up on my machine for development. We need a copy of the system set up for demonstration to the client and for people to enter content prelaunch. How should I set things up so that people can enter content / modify the demonstration version of the site while still allowing me to continue development on my local machine and publish my updates without overwriting changes between the systems? Or is this not the correct approach for me to be taking?
I believe that the 'publishing target' feature is what I need to use, but as this is my first project working with Sitecore, I am looking for practical experience on how to manage this workflow.
Nathan,
You didn't specify what version of Sitecore, but I will assume 6.01+
Leveraging publishing targets will allow you to 'publish' your development Sitecore tree (or sub-trees) from your development environment to the destination, such as your QA server. However, there is potential that you publish /sitecore/content/home/* and then you wipe out your production content!
Mark mentioned using "Sitecore Packages" to move your content (as well as templates, layout items, etc.) over, which is the traditional way of moving items between environments. Also, note that the Staging Module is not needed for Sitecore 6.3+. The Staging Module was generally used to keep file systems in sync and to clear the cache of Content Delivery servers.
However, the one piece of the puzzle that is missing here is that you will still need to update your code and assets (.jpg, .css, .js, .dll, etc.) on the QA box.
The optimal solution would be to have your Sitecore items (templates, layout items, rendering items, and developer-owned content items) in source control right alongside your ASP.NET Web Application and any class library projects you may have. At a basic level, you can do this using the built-in "Serialization" features of Sitecore. Lars Nielsen wrote an article touching on this.
To take this to the next level, you would use a tool such as Team Development for Sitecore. This tool will allow you to easily bring your Sitecore items into Visual Studio and treat them as code. At this point you could set up automated builds, or continuous integration, so that your code and Sitecore items are automatically pushed to your QA environment. There are also configuration options to handle the scenario of keeping production content in place while still deploying developer-owned items.
I recommend you look at the Staging Module if you need to publish to multiple targets from the same instance, i.e. publish content from one tree over a firewall to a development site, to a QA site, etc.
If you're just migrating content from one instance to another piecemeal, you can use Sitecore packages, which are the standard tools for moving content. The packages serialize the content to XML, zip it up, and allow you to install it in other instances.