I use YAML mapping, and when I run php bin/console doctrine:generate:entities App --no-backup, I get the error:
Can't find base path for "App\Entity\User"
doctrine.yaml:
doctrine:
    dbal:
        # configure these for your database server
        driver: 'pdo_mysql'
        server_version: '5.7'
        charset: utf8mb4
        # With Symfony 3.3, remove the `resolve:` prefix
        url: '%env(resolve:DATABASE_URL)%'
    orm:
        auto_generate_proxy_classes: '%kernel.debug%'
        naming_strategy: doctrine.orm.naming_strategy.underscore
        auto_mapping: true
        mappings:
            App:
                is_bundle: false
                type: 'yml'
                dir: '%kernel.project_dir%/config/doctrine'
                prefix: 'App\Entity'
                alias: 'App'
Sample User.orm.yml:
App\Entity\User:
    type: entity
    table: tbl_user
    repositoryClass: App\Entity\UserRepository
    fields:
        username:
            type: string
            length: 40
            nullable: true
        password:
            type: string
            length: 40
            nullable: true
This command is provided by sensio/generator-bundle, which is not compatible with Symfony 4 and never will be.
In its README, they advise using MakerBundle instead.
But there seems to be no equivalent of the generate:entities command there. Most IDEs (I'm using PhpStorm) have a helper to generate getters and setters.
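If you still want a command-line option, MakerBundle's make:entity command has a --regenerate flag that adds missing getters and setters to existing entities. I haven't checked how well it works with yml mappings (it's mainly documented for annotations), so treat this as a hint rather than a drop-in replacement:

php bin/console make:entity --regenerate App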
I'm trying to install and configure Apache using v1.2.2 of the SaltStack apache-formula:
salt server-test state.apply apache test=true
but I keep getting the following error:
server-test:
Data failed to compile:
ID apache-service-running in SLS apache.service.running is not a dictionary
My salt master file_roots and pillar_roots look like this:
file_roots:
  base:
    - /srv/salt/states
    - /srv/salt/formulas/php
    - /srv/salt/formulas/nginx
    - /srv/salt/formulas/apache-formula
    - /srv/salt/formulas/mysql-formula
    - /srv/salt/files

pillar_roots:
  base:
    - /srv/salt/pillar/base
  dev:
    - /srv/salt/pillar/dev
  prod:
    - /srv/salt/pillar/prod
I can't figure out where the problem is. Any hints?
pillar/base/apache/server-test.sls
apache:
  manage_service_states: False
  lookup:
    version: '2.4'
  # for each site name there must be a config in salt://configs/webserver/apache2/sites/
  site_names: [ 'server-test' ]
  security:
    ServerTokens: Prod
  modules:
    enabled:
      - ssl
      - alias
      - rewrite
      - headers
      - shib
      - http2
    #disabled:
    #  - php7.3
    # others are managed elsewhere
    #  - status
    #  - proxy
    #  - proxy_fcgi
  mpm:
    module: mpm_event
    params:
      start_servers: 3
      min_spare_threads: 50
      max_spare_threads: 100
      thread_limit: 64
      threads_per_child: 25
      max_request_workers: 2000
      server_limit: 80
      max_connections_per_child: 0
The formula is broken: #383
manage_service_states: False doesn't work.
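If you want to confirm that it's the formula rather than your pillar, rendering the compiled state is usually the quickest check. Something like the following (a debugging suggestion, not a fix) should show what the apache-service-running ID actually expands to:

salt server-test state.show_sls apache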
All afternoon I have been trying to get my head around concatenating a parameter in an ADO template. The parameter is a source path, and in the template a further folder level needs to be appended. I would like to achieve this with a "simple" concatenation.
The simplified template takes the parameter and uses it to form the inputPath for a PowerShell script, like this:
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I have tried various ways to achieve this concatenation:
'$(sourcePath)/NextFolder'
(see above)
'$(variables.sourcePath)/NextFolder'
(I know sourcePath is not a variable, but I tried this because a parameter used in a task condition apparently only works when referenced through variables)
'${{ parameters.sourcePath }}/NextFolder'
And some other variations, all to no avail.
I also tried to introduce a variables section in the template, but that is not possible.
I have searched the internet for examples and documentation, but found no direct answers; other issues seemed to hint at a solution, but nothing worked.
I would be very pleased if someone could help me out.
Thanks in advance.
We can add a variable in our temp YAML file, pass the sourcePath parameter into it, and then use it. Here is my demo script:
Main.yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: Tech-Talk/template

trigger: none

variables:
  - name: Test
    value: TestGroup

pool:
  # vmImage: windows-latest
  vmImage: ubuntu-20.04

extends:
  template: temp.yaml@templates
  parameters:
    agent_pool_name: ''
    db_resource_path: $(System.DefaultWorkingDirectory)
    # variable_group: ${{variables.Test}}
temp.yaml
parameters:
  - name: db_resource_path
    default: ""
  # - name: 'variable_group'
  #   type: string
  #   default: 'default_variable_group'
  - name: agent_pool_name
    default: ""

stages:
  - stage:
    jobs:
      - job: READ
        displayName: Reading Parameters
        variables:
          - name: sourcePath
            value: ${{parameters.db_resource_path}}
          # - group: ${{parameters.variable_group}}
        steps:
          - script: |
              echo sourcePath: ${{variables.sourcePath}}
          - powershell: echo "$(sourcePath)"
Here, I just use the working directory as the test path. You can use your own variables as well.
Attached is my build result:
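For completeness, once the parameter has been mapped to the sourcePath variable as shown in temp.yaml, the original PowerShell task can consume it either with macro syntax at runtime or by expanding the template parameter directly at compile time. A sketch (the filePath and argument are taken from the question and not verified end to end):

- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    # runtime expansion of the variable defined from the parameter
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
    # or compile-time expansion of the template parameter itself:
    # arguments: '-inputPath ''${{ parameters.db_resource_path }}/NextFolder'''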
Thanks, Yujun. In the meantime I did get it working. Apparently there was some typo blocking the script from executing correctly, as the solution looks like one of the options mentioned above.
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I need to copy the SSH public key from a local file, then use it in a uri task in my playbook.
Keep in mind, I cannot use the "authorized_key" module, as this is a system where I must use the API to configure public keys for users.
The code below keeps failing, and I am 100% sure it's because of the filter I am using. I am including the commented-out section that does work for the body.
Trying to use a lookup with a regex_search, I used [^\s]\s[^\s], which works in Python. Also, the key is in a different directory on my local host (../../ssh/ssh_key/key.pub).
Any ideas?
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users

    - name: Gather users key contents and create variable
      # shell: "cat ../keys/ssh_keys/zz123z.pub | awk '{print $1 FS $2}'"
      shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{users.user}}"

    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{users.user}}"
This is the error I receive when using -vvv
TASK [Add user's key to gitea] *************************************************
task path: /home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml:275
Wednesday 04 March 2020 18:14:29 -0500 (0:00:00.537) 0:00:01.991 *******
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/dave/projects/Infrastructure/ansible/AddTempUsers/addusers.yaml': line 275, column 13, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add user's key to gitea\n ^ here\n"
}
I FIGURED IT OUT!
I used shell with an awk command to gather the keys. (Note: I'm including one awk command for RSA keys and one for id_ed25519, which we use. The RSA one is commented out, but others can uncomment it if they want to use it.)
I used loop_control to iterate through the results.
Code below:
- name: copy public key to gitea
  hosts: localhost
  tasks:
    - name: include user to add as variable
      include_vars:
        file: users.yaml
        name: users

    - name: Gather users key contents and create variable
      # For RSA keys
      # shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '/-END PUBLIC KEY-/ { p = 0 }; p; /-BEGIN PUBLIC KEY-/ { p = 1 }'"
      # For id_ed25519 keys
      shell: "cat ../keys/ssh_keys/{{item.username}}.pub | awk '{print $1 FS $2}'"
      register: key
      with_items:
        - "{{users.user}}"

    - name: Add user's key to gitea
      uri:
        url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
        headers:
          Authorization: "token {{ users.GiteaApiToken }}"
        validate_certs: no
        return_content: yes
        status_code: 201
        method: POST
        body: "{\"key\": \"{{ key.results[ndx].stdout }}\", \"read_only\": true, \"title\": \"{{ item.username }} shared VM\"}"
        body_format: json
      with_items:
        - "{{users.user}}"
      loop_control:
        index_var: ndx
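As an aside, the lookup/regex_search approach from the question should also work without shelling out to awk. This is only a sketch (the relative path and regex are assumptions, and I haven't run it against Gitea):

- name: Add user's key to gitea
  uri:
    url: https://10.10.10.10/api/v1/admin/users/{{ item.username }}/keys
    headers:
      Authorization: "token {{ users.GiteaApiToken }}"
    validate_certs: no
    return_content: yes
    status_code: 201
    method: POST
    body:
      # lookup('file', ...) reads the key on localhost; regex_search keeps only "type key"
      key: "{{ lookup('file', '../keys/ssh_keys/' ~ item.username ~ '.pub') | regex_search('^\\S+\\s+\\S+') }}"
      read_only: true
      title: "{{ item.username }} shared VM"
    body_format: json
  with_items:
    - "{{users.user}}"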
I want to load the test data for my unit tests into the test db via DataFixtures.
The documentation says that if I set the environment variable, the test db should be used:
$ php bin/console doctrine:fixtures:load --env=test
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading App\DataFixtures\PropertyFixtures
> loading App\DataFixtures\UserFixtures
> loading App\DataFixtures\UserPropertyFixtures
However, when I check, the data ends up in my default database.
Where do I configure my test db with Symfony 4?
And where do I configure it so that DataFixtures knows where to write?
For my functional tests I configured the db setting in phpunit.xml.
What we did to solve it was the following. I don't know if it is a good way, so alternative solutions are still appreciated.
In config/packages there is the file doctrine.yaml.
I copied the file into the folder config/packages/test and edited it in the following way:
parameters:
    # Adds a fallback DATABASE_URL if the env var is not set.
    # This allows you to run cache:warmup even if your
    # environment variables are not available yet.
    # You should not need to change this value.
    env(DATABASE_URL): ''

doctrine:
    dbal:
        # configure these for your database server
        driver: 'pdo_mysql'
        server_version: '5.7'
        charset: utf8
        default_table_options:
            charset: utf8
            collate: utf8_unicode_ci
        url: mysql://%db_user_test%:%db_password_test%@%db_host_test%:%db_port_test%/%db_name_test%
    orm:
        auto_generate_proxy_classes: '%kernel.debug%'
        naming_strategy: doctrine.orm.naming_strategy.underscore
        auto_mapping: true
        mappings:
            App:
                is_bundle: false
                type: annotation
                dir: '%kernel.project_dir%/src/Entity'
                prefix: 'App\Entity'
So what I changed was the line with url: mysql://...
In my parameters I added db_user_test etc. and used them to connect to the test db.
It works, but I am still not sure if this is how it should be done.
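An alternative, if your setup loads .env.<APP_ENV> files (the Symfony 4.2+ recipes do): instead of a copied doctrine.yaml you can point the test environment at the test database via DATABASE_URL, for example in .env.test, and both bin/console --env=test and PHPUnit should pick it up. The credentials below are placeholders:

# .env.test (placeholder credentials)
DATABASE_URL="mysql://db_user_test:db_password_test@127.0.0.1:3306/db_name_test"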
Currently I have a "State" model and the following config details in shards.yml.
I am checking this in the "development" environment.
octopus:
  environments:
    - development
    - staging
    - production

  replicated: true
  fully_replicated: true

  development:
    slave1:
      host: 192.168.5.130
      adapter: mysql2
      database: mydb
      username: user
      password: Password
      reconnect: false

  staging:
    slave1:
      host: 192.168.1.2
      adapter: mysql2
      database: server_db
      username: admin
      password: fake_staging_password
      reconnect: false

  production:
    slave1:
      host: 192.168.1.5
      adapter: mysql2
      database: production_db_name
      username: admin
      password: fake_production_password
      reconnect: true
Whenever I issue State.all or any ActiveRecord query, I see the same SQL statement sent to the server two times.
For example, State.count sends the following SQL twice:
[Shard: slave1] (1.5ms) SELECT COUNT(*) FROM `states`
[Shard: slave1] (2.1ms) SELECT COUNT(*) FROM `states`
=> 35
Is this normal, or is there an issue in my settings?
I think you're supposed to set either replicated: true or fully_replicated: true, not both.
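As a sketch, the top of shards.yml would then look like this, with everything else unchanged (keeping replicated: true here is just one of the two options):

octopus:
  environments:
    - development
    - staging
    - production
  replicated: true
  # fully_replicated removed
  development:
    slave1:
      host: 192.168.5.130
      adapter: mysql2
      database: mydb
      username: user
      password: Password
      reconnect: false
  # staging and production unchanged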