Cloud Foundry: how to let two apps share one database service - cloud-foundry

I bind a PostgreSQL service to two apps. They both update the database. The problem is that I use one of the apps to create the tables (the database schema) using the Spring JDBC namespace. But since the other app is provisioned with a different username and password, it cannot access the tables created by the first one. Is there any way for Cloud Foundry to provide the flexibility to resolve this issue?

For Spring apps, this can be achieved by taking advantage of "auto-reconfiguration". CF detects a bean of class javax.sql.DataSource under certain conditions and then replaces properties such as the username and password with the provisioned values. You can find very detailed instructions here: http://docs.cloudfoundry.com/frameworks/java/spring/spring.html
Therefore you can configure the datasource connection in the same way in both of your apps. As long as you bind the same PostgreSQL service to the two apps, then although CF may inject different values into each app, they can access the same tables without any explicit configuration.
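For example, both apps could declare the same ordinary local datasource bean; the bean id, driver, and connection values below are just placeholders, since auto-reconfiguration overrides them with the bound service's credentials:

```xml
<!-- A minimal sketch: the url/username/password here are placeholders.
     When the app is pushed with a bound PostgreSQL service, Cloud Foundry's
     auto-reconfiguration replaces them with the service's credentials. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.postgresql.Driver"/>
    <property name="url" value="jdbc:postgresql://localhost:5432/mydb"/>
    <property name="username" value="placeholder"/>
    <property name="password" value="placeholder"/>
</bean>
```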

Related

How to query tenants on Google Cloud Identity Platform?

I am trying to set up a multi-tenant app using Google Cloud Identity Platform. So far I have successfully set up a small app that can create a tenant and a user in said tenant. When you create a tenant you pass it a displayName. In my testing I noticed that it would create multiple tenants, all with the exact same display name. I started to consider how I might add a validation to prevent this by doing a lookup to see if a tenant with this displayName already exists. However, in the documentation here (https://cloud.google.com/identity-platform/docs/multi-tenancy-managing-tenants#tenant_management) I only see a way to list all of the tenants. I was hoping for a way to query the list on displayName. Is there some other way I can prevent the duplicate tenants?
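One workaround is to list all tenants and do the display-name check yourself before creating a new one. A minimal sketch of that check (the `Tenant` class here is a plain stand-in for the SDK's tenant object; in a real app the tenant list would come from `firebase_admin.tenant_mgt.list_tenants()` and the create callback would call `tenant_mgt.create_tenant()`):

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Tenant:
    # Stand-in for the SDK's tenant object; only display_name matters here.
    tenant_id: str
    display_name: str

def find_tenant_by_display_name(tenants: Iterable[Tenant],
                                display_name: str) -> Optional[Tenant]:
    """Return the first tenant whose display_name matches, else None."""
    for tenant in tenants:
        if tenant.display_name == display_name:
            return tenant
    return None

def create_tenant_if_absent(tenants: Iterable[Tenant],
                            display_name: str,
                            create_fn: Callable[[str], Tenant]) -> Tenant:
    """Call create_fn(display_name) only when no tenant already uses the name."""
    existing = find_tenant_by_display_name(tenants, display_name)
    if existing is not None:
        return existing
    return create_fn(display_name)
```

Note this is check-then-act: two concurrent requests can still race past the lookup, so if strict uniqueness matters you would also need your own lock or a uniqueness record in your own database.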

How to make an api completely independent from the main Django Project?

I have a Django project that is hosted internally, and this project feeds the database. I need an API that serves the data of that database to a public domain (the API does not do any DML, only selects), but this API needs to be hosted on a different machine, and even if the internally hosted project goes down, the API needs to keep working.
I am using DRF for the API.
I built an API project that has its own apps, views, serializers, and models (those models connect to the existing database tables) and only has the fields I need represented. Is this the right way to do this, or am I missing something? The thing I am worried about is that if one of the columns of a model changes name, I will have to change the model in the API, but that would be a very rare modification.
I don't know if I understood the question correctly, but it seems that you have a Django project along with the database hosted on a single machine. If that is the case, then when that server goes down, external APIs will not be able to fetch your data.
If, on the other hand, you have a dedicated server or RDBMS for only your database, then you will be able to fetch the data using any API connecting to that database server.

Python Django working with multiple database simultaneously

I've been trying to add in a functionality to my already existing Django application.
Currently, my application only services multiple users belonging to an organization.
I'm trying to accomplish a task wherein:
Multiple organizations can work with my application by having separate databases. This way organization-specific data is private to individual organizations.
For every organization that wants to subscribe to the web app services, the web app shall use a database template to create the database for the new organization and commission it.
The web app should handle/service all organizations simultaneously.
Is this possible?
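Yes — Django supports this through its multiple-databases machinery: you add one entry per organization to `DATABASES` and point queries at the right one with a database router. A minimal router sketch (the current tenant is read from a context variable that request middleware would have to set; all names here are hypothetical):

```python
import contextvars

# Set by (hypothetical) middleware once the request's organization is known.
current_tenant = contextvars.ContextVar("current_tenant", default="default")

class TenantRouter:
    """Route reads and writes to the current organization's database alias."""

    def db_for_read(self, model, **hints):
        return current_tenant.get()

    def db_for_write(self, model, **hints):
        return current_tenant.get()

    def allow_relation(self, obj1, obj2, **hints):
        # Only allow relations within a single organization's database.
        return obj1._state.db == obj2._state.db

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Every tenant database receives the same schema (the "template").
        return True
```

You would register it with `DATABASE_ROUTERS = ["path.to.TenantRouter"]` and, when commissioning a new organization, create its database and run `manage.py migrate --database=<alias>` against it. One caveat: the `DATABASES` setting is fixed at startup, so adding an organization at runtime means updating settings and reloading, or managing connections yourself.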

How do I use Blue-Green Deployment in Pivotal Web Services?

I read in this guide http://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html that this deployment is supported through the CLI.
Can I use the Pivotal Web Services web interface to do that?
Does the green instance connect to the production DB?
Any guide is greatly appreciated.
Yes, you can use the web interface to map and unmap routes. When you go to the detail page of your app, you can see a tab panel at the bottom. One of the tabs says 'Routes'. There you can map and unmap routes.
Yes, both instances should be connected to the same DB. During the time when both instances are live (the production route is mapped to both of them), you will have users on the old and the new version. If a user is on the old version, you want his/her edits to be stored and still present when he/she is switched over to the new version. Note: that means you should make sure that all your database migrations are backwards compatible.
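For reference, the CLI flow the linked guide describes boils down to the following map/unmap sequence (app, domain, and hostname values are placeholders); the Routes tab in the web interface performs the same mapping steps:

```
# Push the new (green) version alongside the live (blue) one, on a temp route
cf push myapp-green -n myapp-temp

# Map the production route to green: both versions now receive traffic
cf map-route myapp-green example.com -n myapp

# Unmap the production route from blue: green now takes all traffic
cf unmap-route myapp-blue example.com -n myapp
```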

Stratos 1.6.0 - Messaging between Storage Server and Data Services Server

I am configuring Stratos 1.6.0 and trying to get the following scenario working:
Create a database in Storage Server
Create a user in Storage Server
Assign the user to the database
Generate datasource for the user/database combination in the Storage Server
Create a DataService in the Data Services Server and use the datasource above
From what I can see in the code, when one creates a datasource in a Carbon application, the org.wso2.carbon.ndatasource.core.DataSourceRepository notifies the members in the cluster of the new datasource. These members then invalidate the registry cache.
The problem is that in the default clustering configuration in Stratos 1.6.0, the Storage Server and the Data Services Server are in different Tribes domains, so messaging via Tribes is not possible between the two types of applications.
How can one get the Data Services Server to update its datasource configuration when datasources are created in the Storage Server?
What you've mentioned in your query is exactly what's expected from providing the option to create a datasource via WSO2 Storage Server. However, there are certain technical complexities associated with sharing datasources across nodes/clusters of different Carbon products (other than the type of Carbon product in which the datasources are created), and we're currently in the middle of attending to them. Therefore, all things considered, a better way to integrate SS with DSS would be to first create your database/database user in WSO2 SS, then create datasources with that information (connection strings, user credentials, etc.) in WSO2 DSS and consume them.
Regards,
Prabath
P.S. You can refer to http://sparkletechthoughts.blogspot.in/2013/04/relational-storage-solution-using-wso2.html, which provides a comprehensive guide to creating databases/database users/privilege templates.