Symfony 2 and Multiple databases with config.yml settings - doctrine-orm

I have a problem specifically in Symfony 2.3, where I need to generate the schema, update it, and access tables in two different databases. Both are MySQL.
The official guide "How to Work with multiple Entity Managers and Connections" doesn't fit this scenario, even though it's recommended in several articles.
Errors:
[RuntimeException]
Bundle "XyzBundle" does not contain any mapped entities.
Commands:
doctrine:generate:entities XyzBundle --no-backup
doctrine:schema:update --force --em=payment
#/app/config/config.yml
doctrine:
    #... connections (default, payment, etc.)
    orm:
        default_entity_manager: default
        auto_generate_proxy_classes: "%kernel.debug%"
        entity_managers:
            default:
                connection: default
                mappings:
                    MyBundle: ~
            payment:
                connection: payment
                mappings:
                    ZkPaymentBundle: ~
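For reference, the connections block elided above would typically look something like this in Symfony 2.3. This is only a sketch: the parameter names and the payment connection's credentials are placeholders, not the actual configuration.

```yaml
#/app/config/config.yml
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   pdo_mysql
                host:     "%database_host%"
                dbname:   "%database_name%"
                user:     "%database_user%"
                password: "%database_password%"
            payment:
                driver:   pdo_mysql
                host:     "%payment_database_host%"
                dbname:   "%payment_database_name%"
                user:     "%payment_database_user%"
                password: "%payment_database_password%"
```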

First, I found these excellent articles, where many configuration forms are mentioned.
Doctrine ORM
Install Gedmo Doctrine2 extensions in Symfony2
Even though the bundle isn't in the 'vendor/' dir, it was necessary to treat it as if it were an external one and declare its mapping explicitly.
#/app/config/config.yml (fragment, under doctrine.orm.entity_managers)
            payment:
                connection: payment
                mappings:
                    XyzBundle:
                        type: annotation
                        prefix: Xyz\XyzBundle\Entity
                        dir: "%kernel.root_dir%/../src/Xyz/XyzBundle/Entity"
Commands
doctrine:generate:entities Xyz/XyzBundle/Entity --no-backup
doctrine:schema:update --force --em=payment
I hope this saves others a lot of wasted time.

Related

How do I install Apache Superset CLI on Windows?

Superset offers a CLI for managing the Superset instance, but I am unable to find instructions for getting it installed and talking to my instance of Superset.
My local machine is Windows, but my instance of Superset is running in a hosted Kubernetes cluster.
-- Update 2 2022.08.06
After some continued exploration, I have found some steps that seem to be getting me closer.
# clone the Superset repo
git clone https://github.com/apache/superset
cd superset
# create a virtual environment using Python 3.9,
# which is compatible with the current version of numpy
py -3.9 -m venv .venv
.venv\Scripts\activate
# install the Superset package
pip install apache-superset
# install requirements (not 100% sure which requirements are needed)
pip install -r .\requirements\base.txt
pip install -r .\requirements\development.txt
# install psycopg2
pip install psycopg2
# run superset-cli
superset-cli
# error: The term 'superset-cli' is not recognized
# run superset
superset
superset will run, but now I'm getting an error from psycopg2 about unknown host:
Loaded your LOCAL configuration at [c:\git\superset\superset_config.py]
logging was configured successfully
2022-08-06 06:29:08,311:INFO:superset.utils.logging_configurator:logging was configured successfully
2022-08-06 06:29:08,317:INFO:root:Configured event logger of type <class 'superset.utils.log.DBEventLogger'>
Falling back to the built-in cache, that stores data in the metadata database, for the following cache: `FILTER_STATE_CACHE_CONFIG`. It is recommended to use `RedisCache`, `MemcachedCache` or another dedicated caching backend for production deployments
2022-08-06 06:29:08,318:WARNING:superset.utils.cache_manager:Falling back to the built-in cache, that stores data in the metadata database, for the following cache: `FILTER_STATE_CACHE_CONFIG`. It is recommended to use `RedisCache`, `MemcachedCache` or another dedicated caching backend for production deployments
Falling back to the built-in cache, that stores data in the metadata database, for the following cache: `EXPLORE_FORM_DATA_CACHE_CONFIG`. It is recommended to use `RedisCache`, `MemcachedCache` or another dedicated caching backend for production deployments
2022-08-06 06:29:08,322:WARNING:superset.utils.cache_manager:Falling back to the built-in cache, that stores data in the metadata database, for the following cache: `EXPLORE_FORM_DATA_CACHE_CONFIG`. It is recommended to use `RedisCache`, `MemcachedCache` or another dedicated caching backend for production deployments
2022-08-06 06:29:10,602:ERROR:flask_appbuilder.security.sqla.manager:DB Creation and initialization failed: (psycopg2.OperationalError) could not translate host name "None" to address: Unknown host
My config file c:\git\superset\superset_config.py has the following database settings:
DATABASE_HOST = os.getenv("DATABASE_HOST")
DATABASE_DB = os.getenv("DATABASE_DB")
POSTGRES_USER = os.getenv("POSTGRES_USER")
POSTGRES_PASSWORD = os.getenv("DATABASE_PASSWORD")
I could set those values in superset_config.py, or I could set the environment variables and let superset_config.py read them. However, my instance of Superset is running in a hosted Kubernetes cluster, and the superset-postgres service is not exposed by an external IP. The only service with an external IP is superset.
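As a side note, the could not translate host name "None" error above is consistent with os.getenv returning None for an unset variable, which then gets interpolated into the connection string as the literal text "None". A minimal sketch (the variable name and URI format here are illustrative, not Superset's exact internals):

```python
import os

# If DATABASE_HOST is not set in the environment, os.getenv returns None.
database_host = os.getenv("DATABASE_HOST_THAT_IS_UNSET")
print(database_host)  # None

# Interpolating None into a connection string yields the literal "None",
# which psycopg2 then tries (and fails) to resolve as a hostname.
uri = f"postgresql+psycopg2://user:pw@{database_host}:5432/superset"
print(uri)
```

Setting the environment variables before starting superset (or hard-coding the values in superset_config.py) avoids this.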
Still stuck...
I was way off track. Once I found the preset-io/backend-sdk repo on GitHub, it started coming together.
https://github.com/preset-io/backend-sdk
Install superset-cli
mkdir superset_cli
cd superset_cli
py -3.9 -m venv .venv
.venv\Scripts\activate
pip install -U setuptools setuptools_scm wheel #for good measure
pip install "git+https://github.com/preset-io/backend-sdk.git"
Example command
# Export resources (databases, datasets, charts, dashboards)
# into a directory as YAML files from a superset site: https://superset.example.org
mkdir export
superset-cli -u [username] -p [password] https://superset.example.org export export

FOSElasticaBundle configuration with Symfony 5 and AWS Elasticsearch

I am trying to connect to an AWS Elasticsearch domain using FOSElasticaBundle (version v6.0.0-beta4). According to the documentation, this bundle builds on the ruflin/Elastica library. After researching the documentation and the related questions here, I found some examples and configuration that I implemented; however, I am getting an error related to the Elastica configuration. My config:
# config/packages/fos_elastica.yaml
fos_elastica:
    clients:
        default:
            url: 'aws-elasticsearch-domain-url'
            aws_access_key_id: 'access-key'
            aws_secret_access_key: 'secret-key'
            aws_region: "aws-region"
            transport: "AwsAuthV4"
    indexes:
        # (indexes configuration...)
When populating the indexes, I am getting this error related to the AwsAuthV4 transport parameter:
In AwsAuthV4.php line 43:
Attempted to load class "SignatureV4" from namespace "Aws\Signature".
Did you forget a "use" statement for another namespace?
I am unsure whether this is not supported, not properly configured, or something else.
Make sure that when you are using FOSElasticaBundle you have also installed the aws/aws-sdk-php package. This package guarantees that the class Aws\Signature\SignatureV4 will be loaded as well:
composer require aws/aws-sdk-php
If you still have this problem, remember to run composer dump-autoload in the console.

PCF Staticfile_buildpack not considering Staticfile

I'm trying to deploy an Angular 6 application to PCF using the "cf push -f manifest.yml" command.
Deployment works fine, except it's not considering the options set in "Staticfiles".
To be specific, I've below values in Staticfile, to force HTTPS, and to include ".well-known" folder which will be excluded by default due to "." prefix.
force_https: true
host_dot_files: true
location_include: .well-known/assetlinks.json
I also have a manifest.yml file with the values below:
applications:
- name: MyApp
  instances: 1
  memory: 256M
  random-route: false
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
  path: ./dist
  routes:
  - route: myapp.example.com
Is there an alternate option to set these params in manifest.yml, or how do I achieve it?
First...
buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
Don't do this. You don't want to point to the master branch of a buildpack, it could be unstable. Point to a release, like https://github.com/cloudfoundry/staticfile-buildpack.git#v1.4.31 or use the system provided staticfile buildpack, like staticfile_buildpack.
To be specific, I've below values in Staticfiles
It's not Staticfiles, plural. It's Staticfile singular. Make sure you have the correct file name & that it's in the root of your project (i.e. same directory that you're pushing), which is ./dist based on path: in your manifest.
Update: For Angular, "Staticfile" should be placed under the "src" folder, which will put it under "dist" when building.
https://docs.cloudfoundry.org/buildpacks/staticfile/index.html#config-process
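For Angular CLI projects, one way to ensure Staticfile is copied into dist at build time is to list it in the build assets in angular.json. This is a sketch; the project name ("MyApp") and the other asset entries are assumptions that will differ per project:

```json
{
  "projects": {
    "MyApp": {
      "architect": {
        "build": {
          "options": {
            "assets": ["src/favicon.ico", "src/assets", "src/Staticfile"]
          }
        }
      }
    }
  }
}
```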

How do you define a custom DNS search in CloudFoundry?

Using CloudFoundry, is there a way to define a custom DNS search so host names are resolved?
We are using an Ubuntu stemcell and need to reach out to an external server. Using a FQDN this works, but we would prefer to use the host name only. Generally, this is configured in resolv.conf on a Unix/Linux box, but I wasn't sure how to define it in CloudFoundry.
One option here would be a Bosh add-on. A Bosh add-on will run on all VMs managed by your Bosh Director. Here are some example add-ons.
You'll want to use the os-conf-release for your add-on. It has a job called search_domain which lets you set the search domain on all of the Bosh deployed VMs.
I haven't tested it, but I believe a manifest like this should work.
releases:
- name: os-conf
  version: 12

addons:
- name: search-domain
  jobs:
  - name: search_domain
    release: os-conf
    properties:
      search_domain: my.domain.com
That would add my.domain.com to the list of search domains in resolv.conf. Hope that helps!

How to correctly add ENV["SECRET_KEY_BASE"] in Rails

I am having a difficult time setting up Rails 4.2 in production on a VM running Passenger and nginx, without using RVM or anything similar.
I got "Incomplete response received from application", and the nginx error log said something about a missing secret_key_base and secret_key, although there is no reference to the latter anywhere in the config directory.
I ran export SECRET_KEY_BASE='...', and in rails c production, ENV["SECRET_KEY_BASE"] displays the key, but after restarting nginx I still get the same error.
Placing the key directly in secrets solved that problem but is there an actual way to do this correctly?
Solution:
The solution that worked for me is to place export SECRET_KEY_BASE="<string obtained from rake secret>" in .bashrc
If you use rbenv, there is another solution below in the accepted answer.
If you are using rbenv you can add the rbenv-vars plugin and add a .rbenv-vars file containing (and don't check that into your repo)
SECRET_KEY_BASE='...'
Another solution is to add the SECRET_KEY_BASE manually to the secrets.yml file and also exclude that file from your repo.
A third answer I saw mentioned is adding
export SECRET_KEY_BASE='...'
to one of these files .bashrc .bash_profile .profile
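The reason the interactive export in the question didn't take effect is that environment variables are only inherited by child processes of the shell that set them; a daemon like nginx/Passenger started by init has its own environment. A quick illustration of the inheritance rule (pure Python, just to demonstrate the mechanism, not Rails itself):

```python
import os
import subprocess
import sys

# A variable set in this process is inherited by its child processes...
os.environ["SECRET_KEY_BASE"] = "demo-secret"
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('SECRET_KEY_BASE'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # demo-secret

# ...but a daemon started elsewhere (e.g. by init/systemd) never sees it,
# which is why shell-level exports don't reach nginx/Passenger.
```

That is why the variable must live somewhere the server process actually reads, such as .bashrc for shells, .rbenv-vars for rbenv, or /etc/environment system-wide.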
Your config/secrets.yml should have something like
development:
  secret_key_base: f91fe2e2e4a9bf8f8b6aa1c296bb9ec10f2bc91c08965176a642ea0927400651ea993512f83d9823bcc046555e40b8c257f5f19fab8c59b5a02c9d230a369fe7

test:
  secret_key_base: c116ac7c8f69018d1f4e10f632cac7a22348f0bd8ed8f21ca45460574d2f501f248418bc888e31556e16ba3ab58c3a7cba027140097abe3f511dddf6625fa8cd

# Do not keep production secrets in the repository,
# instead read values from the environment.
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
To set SECRET_KEY_BASE, first you'll need to generate it with
rake secret
Then take that output and edit your /etc/environment (depending on your distro, assuming Ubuntu here) and place it as such
SECRET_KEY_BASE=...
Restart your server and you should be gravy