I need to have different values in substitution strings in DEV and PROD. How do I prevent overwriting the substitution strings when updating PROD? DEV and PROD are in separate databases.
I don't see how to exclude the application definitions in build options.
Is there a better way to meet this requirement?
Thanks
The way I see it, substitution strings are application items defined as constants. Only use them for strings that are always the same in any deployment instance of the app. As soon as the value needs to be changeable (for example dev has different value than prod), use application items instead.
If you insist on doing this with build options, then here is one approach:
Set the values of the application items using a computation or an application process (this is for production).
Create a 2nd set of computations or an app process with a sequence higher than the sequence of the one above (so this will override the original values) and set a build option on those (exclude on export).
That way, when you export the app, only the first set of computations / app process will be included.
However, my preference is to configure this in the database: have a settings table with a record indicating the status of the environment (prod/dev/stage/uat), and store the strings in a custom messages table (one record per app status/application item). In an application process or computation, get the values of the application items from there. The reason I prefer this is that the app doesn't need to know whether it is dev or prod, but the database should. This option has a couple of challenges if the same database and schema are used for prod and dev.
I have several applications in an Oracle APEX 19.2 workspace that use shared authentication. In order to access end-user metadata, I want to use an application item defined as global in the master application. It seems to be configured correctly: in a slave application, I can see the correct session value in the debugger window (Session State, View: Application Items).
But the usual replacement syntaxes do not work: I cannot access the value with any of these methods:
:VARIABLE
&VARIABLE.
apex_util.get_session_state('variable')
The only method that works is apex_util.fetch_app_item('variable',[application id]) - this is cumbersome, as I would like to work with application aliases, so I would need to translate the alias using the view apex_applications.
Is this working as intended or did I do something wrong?
Have you created the same application item in the slave application as well? You will also have to set it to Scope = Global. This will expose the value in the current application.
I'm using Siddhi to create an app which also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug when making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table, and create another one again in the same table); it seems the batch updates are causing some problems. So, I just want to give it a try after disabling batchUpdate (it is enabled by default). I just don't know how to configure it using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd like to get a quick response on how to set these fields.
Best regards...
When batchEnable has been set to true, it will perform the insert/update operation on a batch of events instead of performing those operations on each and every event. Simply put, this has been introduced to improve performance.
The default value of this parameter is currently set to "true".
However, the batchEnable configuration is done through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be configured in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
But currently there is no way to overwrite those system configurations from the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the Siddhi SDK. It is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Under the <database name="PostgreSQL"> tag, change <batchEnable>true</batchEnable> to <batchEnable>false</batchEnable> and save the file.
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config.
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false
We are a small team of developers working on an application using the PostgreSQL database backend. Each of us has a separate working directory and virtualenv, but we share the same PostgreSQL database server; even Jenkins is on the same machine.
So I am trying to figure out a way to allow us to run tests on the same project in parallel without running into a database name collision. Furthermore, sometimes a Jenkins build fails midway and the test database doesn't get dropped at the end, so a subsequent Jenkins build can get confused by the existing database and fail automatically.
What I've decided to try is this:
import os
from datetime import datetime

DATABASES = {
    'default': {
        # the usual lines ...
        'TEST_NAME': '{user}_{epoch_ts}_awesome_app'.format(
            user=os.environ.get('USER', os.environ['LOGNAME']),
            # This gives the number of seconds since the UNIX epoch
            epoch_ts=int((datetime.utcnow() - datetime.utcfromtimestamp(0)).total_seconds()),
        ),
        # etc
    }
}
So the test database name at each test run most probably will be unique, using the username and the timestamp. This way Jenkins can even run builds in parallel, I think.
It seems to work so far. But could it be dangerous? I'm guessing we're safe as long as we don't try to import the project settings module directly and only use django.conf.settings because that should be singleton-like and evaluated only once, right?
I'm doing something similar and have not run into any issue. The usual precautions should be followed:
Don't access settings directly.
Don't cause the values in settings to be evaluated in a module's top level. See the doc for details. For instance, don't do this:
from django.conf import settings

# This is bad because settings might still be in the process of being
# configured at this stage.
blah = settings.BLAH

def some_view(request):
    # This is okay because by the time views are called by Django,
    # the settings are supposed to be all configured.
    blah = settings.BLAH
Don't do anything that accesses the database in a module's top level. The doc warns:
If your code attempts to access the database when its modules are compiled, this will occur before the test database is set up, with potentially unexpected results. For example, if you have a database query in module-level code and a real database exists, production data could pollute your tests. It is a bad idea to have such import-time database queries in your code anyway - rewrite your code so that it doesn’t do this.
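To make that warning concrete, here is a tiny made-up illustration (the myapp module and Widget model are hypothetical names, not from the project in question); the only difference is where the query runs:

from myapp.models import Widget  # hypothetical app and model, for illustration only

# Bad: this query runs at import time, possibly against a real database and
# before the test database has been created.
# DEFAULT_WIDGET = Widget.objects.first()

def get_default_widget():
    # Good: the query only runs when the function is called, after test setup.
    return Widget.objects.first()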
Instead of the time, you could use the Jenkins executor number (available in the environment); that would be unique enough and you wouldn't have to worry about it changing.
As a bonus, you could then use --keepdb to avoid rebuilding the database from scratch each time... On the downside, failed and corrupted databases would have to be dropped separately (perhaps the settings.py can print out the database name that it's using, to facilitate manual dropping).
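For example, a sketch of the executor-based naming (assuming Jenkins exposes EXECUTOR_NUMBER in the build environment, which it does by default; the 'local' fallback is just a guess for runs outside Jenkins):

import os

# Combine the username with the Jenkins executor number so parallel builds on
# the same server get distinct but stable test database names.
TEST_NAME = '{user}_{executor}_awesome_app'.format(
    user=os.environ.get('USER', os.environ.get('LOGNAME', 'dev')),
    executor=os.environ.get('EXECUTOR_NUMBER', 'local'),
)

Because the name no longer changes between runs, --keepdb can then reuse the database created by a previous build on the same executor.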
How can I programmatically dump/query the Launch Services database in macOS (i.e. an analog of the command lsregister -dump)?
EDIT: I want to get the set of associations UTI -> Bundle_IDs. Using LSCopyAllRoleHandlersForContentType does not always work (here is a similar problem), so I concluded that the best working method is parsing the output of "lsregister -dump", but the location of lsregister changes from version to version.
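A minimal sketch of that approach, for reference: search the CoreServices framework tree for lsregister instead of hard-coding its path (the glob pattern and helper names are assumptions, not an official interface):

import glob
import subprocess

def find_lsregister():
    # lsregister lives inside the LaunchServices framework, but its exact
    # location differs between macOS versions, so search for it.
    matches = glob.glob(
        "/System/Library/Frameworks/CoreServices.framework/**/lsregister",
        recursive=True,
    )
    if not matches:
        raise FileNotFoundError("lsregister not found")
    return matches[0]

def dump_launch_services():
    # Returns the raw text dump; the UTI -> bundle id associations still
    # have to be extracted from it by parsing.
    return subprocess.run(
        [find_lsregister(), "-dump"], capture_output=True, text=True, check=True
    ).stdout

print(dump_launch_services()[:2000])  # preview the beginning of the dump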
We are building an n-tiered style application in Kohana 3.1 which distributes JSONP-powered widgets to our partners based on a partner_id.
Each partner needs to be able to call a widget and specify an environment parameter: test OR production with the initial call, which will be used to select the appropriate database.
We need our bootstrap to watch for the $_REQUEST['environment'] variable and then maintain the state of that variable whenever the partner makes a call to the widget service.
The problem is that all requests in the application use Bootstrap.php, but many of the requests are internal - i.e. they do not come with a partner_id or environment variable. We tried to use sessions to store these, but as these are server-to-server GET/POST calls, it does not seem possible to store and recall the session id in a cookie on the server (this is a browser-less GET).
Does anyone have any suggestions? We realise we could pass the environment variable with every single call internal or external, but this does not seem very robust.
We have a config file which stores partner settings (indexed by partner_id), such as the width and height of the widget and we thought about storing the partner's environment in here, but not all calls to the server would be made by a partner, so we would still need another way to trigger the environment for other calls and select the correct DB.
We also thought of storing a flat file for each partner which maintains the last requested environment, but again, as we have many internal requests after the initial one, we don't always know (i.e. we don't usually care) which partner_id was used in the initial call.
Hope this makes sense...!
The solution would be to call the models and methods that are needed to 'do stuff' from a single controller, keeping the partner_id only in the controller and sending the requested data back once all of the 'do stuff' methods have been run, as per the MVC model.
i.e., request from partner -> route -> controller -> calls models etc -> passes back to controller -> returns view to partner
That allows the partner_id to be kept by the controller and only passed to whatever models require it to 'do stuff', keeping within the MVC framework.
If you've not kept within the confines of MVC, then things will obviously get more complex and you'll need to store the variable somewhere.