Paster 'rights' command not working in CKAN 2.7.3 - CentOS 7

I'm trying to use the information at
http://docs.ckan.org/en/ckan-1.4.3/authorization.html
to create users and assign roles to specific packages, but the rights command is not working.
For instance:
paster --plugin=ckan rights -c /etc/ckan/default/development.ini list
I get the error:
Command 'rights' not known (you may need to run setup.py egg_info)
Known commands:
celeryd Celery daemon [DEPRECATED]
check-po-files Check po files for common mistakes
color Create or remove a color scheme.
config-tool Tool for editing options in a CKAN config file
create Create the file layout for a Python distribution
create-test-data Create test data in the database.
datapusher Perform commands in the datapusher
dataset Manage datasets
datastore Perform commands to set up the datastore
db Perform various tasks on the database.
exe Run #! executable files
front-end-build Creates and minifies css and JavaScript files
help Display help
jobs Manage background jobs
less Compile all root less documents into their CSS counterparts
make-config Install a package and create a fresh config file/directory
minify Create minified versions of the given Javascript and CSS files.
notify Send out modification notifications.
plugin-info Provide info on installed plugins.
points Show information about entry points
post Run a request for the described application
profile Code speed profiler
ratings Manage the ratings stored in the db
rdf-export Export active datasets as RDF
request Run a request for the described application
search-index Creates a search index for all datasets
serve Serve the described application
setup-app Setup an application, given a config file
sysadmin Gives sysadmin rights to a named user
tracking Update tracking statistics
trans Translation helper functions
user Manage users
views Manage resource views.
But if I create a user like this:
paster sysadmin add seanh -c /etc/ckan/default/development.ini
it works OK, so I don't think the problem is with my environment.
Note:
CentOS 7.4
CKAN 2.7.3
Thanks.

The 'rights' system was deprecated in the migration to CKAN 2.x, and the paster command was removed.
From CKAN 2.0 onwards, permissions are managed organization by organization and group by group. It's a simplification, catering for what is considered the most common use case.
However, if you need to control user permissions on a single dataset (rather than on all the datasets in an org/group together), then that dataset needs to be in an org or group of its own. Alternatively, you can customize the auth system using IAuthFunctions.
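For example, a minimal sketch of an IAuthFunctions plugin (the dataset ID and username are hypothetical placeholders, and the plugin still needs registering in setup.py and enabling in your config):

import ckan.plugins as plugins
from ckan.logic.auth.update import package_update as default_package_update

def package_update(context, data_dict):
    # Hypothetical per-dataset rule: only 'alice' may edit 'restricted-dataset'
    if data_dict and data_dict.get('id') == 'restricted-dataset':
        if context.get('user') != 'alice':
            return {'success': False, 'msg': 'Only alice may edit this dataset'}
    # Fall back to CKAN's default org/group-based authorization
    return default_package_update(context, data_dict)

class CustomAuthPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IAuthFunctions)

    def get_auth_functions(self):
        # Override the default auth function for the package_update action
        return {'package_update': package_update}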

Workflow migration from repo to repo

I'd like to ask how I can migrate mappings, worklets, and workflows from Informatica PowerCenter Integ to Prod.
The Integ and Prod environments are on different servers, so I can't just move objects from folder to folder.
Is it possible? I can't find any reference or tutorial.
Thank you in advance.
In PowerCenter, it's possible to copy from one environment to another. Ask everyone to check in their objects first and log off from both the source and target repositories.
Open Repository Manager, connect to the source repository and select the folder you want to copy.
Click Edit > Copy.
Connect to the target repository with the same user account used to connect to the source repository. If you do not have the same user, you need to use a deployment group/deployment folder.
In the Navigator, select the target repository and click Edit > Paste. You will get many options, such as replacing objects, using the latest version, checking out, etc. You can follow the link below for help:
https://docs.informatica.com/data-integration/powercenter/10-5/repository-guide/copying-folders-and-deployment-groups/copying-or-replacing-a-folder/steps-to-copy-or-replace-a-folder.html
Now, my preference would be to use a deployment group or deployment folder. It's easy to use and easy to control: for example, if you want to replace 10 objects out of hundreds, create a standard process for future migrations, or deploy automatically using a command task, you can do that as well.
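If you'd rather script the migration than use the Repository Manager GUI, the pmrep command-line tool can run the deployment. A rough sketch (repository, domain, user, group, and control-file names are all placeholders, and the exact options vary by PowerCenter version, so check the pmrep Command Reference):

pmrep connect -r REP_INTEG -d Domain_Integ -n deploy_user -x secret
pmrep DeployDeploymentGroup -p DG_Release1 -c deploy_control.xml -r REP_PROD

The deployment control file is an XML file describing what to copy or replace, equivalent to the options you pick in the Copy Wizard.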

Django with MySQL

I have successfully set up and used Django with MySQL on my local machine, but now if I put my project on, say, GitHub, what should the requirements be for any other person to be able to run it?
I am asking because my project uses a database stored on my local machine, and I did not upload the database to GitHub. With SQLite3 there is a database file inside the project itself, but this is not the case for MySQL, whose database is stored in a different location.
I mean that Django accesses the database from a different location (/var/lib/mysql), and when I try to copy the database from there into the project folder and specify its location in settings.py, I get an access-denied error.
So how can I solve this?
You would typically have a seed file for others to use. Others will create a database on their own systems and use your seed file to get started with the project.
It should not be necessary to copy the database files. Also, you should not just copy the MySQL directory like that. If you copy the whole directory then you might replace what somebody already has on their system, but if you copy only some of the files then you might be missing things like the MySQL user accounts. Besides, that is a backup procedure, not a deployment or distribution procedure.
For somebody else to get started with your project the normal process is:
Manually create the appropriate MySQL user and database, or provide a script to automate it (a sketch follows this list)
Run migrations: python manage.py migrate
Import initial data:
This can be with fixtures: python manage.py loaddata my_data.json
Or with a custom management command: python manage.py load_my_data
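For the first step (creating the user and database), the SQL involved can be as simple as this sketch (database, user, and password are placeholders; match them to what your settings.py expects):

CREATE DATABASE myproject CHARACTER SET utf8mb4;
CREATE USER 'myprojectuser'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON myproject.* TO 'myprojectuser'@'localhost';
FLUSH PRIVILEGES;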
However, if you really need to provide somebody with an almost-ready database, you can use mysqldump, which will produce an SQL text file, but the other person still needs to create the user account manually.
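For example (names are placeholders; the dump contains the schema and data, but not the MySQL accounts):

mysqldump -u myprojectuser -p myproject > seed.sql
mysql -u myprojectuser -p myproject < seed.sql

The first command runs on your machine to produce seed.sql; the second runs on the other person's machine after they have created the empty database and user.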
To add to himank's answer: if you need to provide additional data for the database, you can put your fixture data in the fixtures folder. The other person will then be able to load it manually with a command, or even run a script to populate the database with initial data.

How to create new user when configuration is locked using "Configuration Read-only"

We have a Drupal 8 site hosted at Pantheon and the site configuration is locked via the "Configuration Read-only" module.
I created a local clone of the site using Git and added a new user, but when I run git status it shows my branch as in sync with master. That said, it doesn't look like the newly added user was written to any of the config YAML files.
So I suspect that I will need to export the database from my local environment and import it into Pantheon, but this doesn't seem like the correct or safest method. Can someone please confirm? I haven't found any resources applicable to this scenario and want to ensure that I'm following best practice.
Users are Entities and as such are stored in the database, not in configuration.
If you want to synchronize your users across different environments, then you'll have to look into a way to retrieve database backups from Pantheon and import them into a different environment, or look into a module to sync the user entities. I found the content_sync module from a quick Google search, but I have not used it and cannot guarantee that it will work or fulfill your requirements.
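As a rough sketch of the backup route, assuming you have Pantheon's terminus CLI and Drush available (the site/environment names are placeholders, and the exact flags may differ between tool versions, so check the terminus and drush help):

terminus backup:create my-site.live --element=db
terminus backup:get my-site.live --element=db --to=live-db.sql.gz
gunzip live-db.sql.gz
drush sql:cli < live-db.sql

The last command, run from your local site root, imports the dump into your local database.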

Deploy multiple Content Delivery servers with the same configuration

I am building out a Sitecore farm with multiple Content Delivery (CD) servers. In the current process, I stand up each CD server and go through the manual steps of commenting out connection strings and enabling or disabling config files for each virtual machine/CD server, as detailed here:
https://doc.sitecore.net/Sitecore%20Experience%20Platform/xDB%20configuration/Configure%20a%20content%20delivery%20server
But since I have multiple servers, is there any sort of global configuration file where I could dictate the settings I want (essentially a settings template for CD servers), or a tool where I could load my desired settings/template for which config files are enabled/disabled, etc.? I have used the SIM tool for instance installation, but I am unsure whether it offers the loading of a pre-determined "template" for a CD server.
It just seems inefficient to stand up a server and then configure each one manually, versus a more automated process (e.g. akin to Sitecore Azure, but in this case I need to install the VMs on-premises).
There's nothing directly in Sitecore to achieve what you want. Depending on what tools you are using, there are some options to reach that goal, though.
Visual Studio / Build Server
You can make use of SlowCheetah config transforms to configure non-web.config files such as ConnectionStrings and AppSettings. You will need a different build profile for each environment you wish to create a build for, adding the appropriate config transforms and overrides. SlowCheetah is available as a NuGet package to add to your projects, and also as a Visual Studio plugin that provides additional tooling to help add the transforms.
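For example, a SlowCheetah transform for a CD build profile could remove the master connection string using standard XDT syntax (a sketch; the file and profile names are whatever your solution uses):

<?xml version="1.0" encoding="utf-8"?>
<connectionStrings xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- remove the CM-only connection string on CD builds -->
  <add name="master" xdt:Transform="Remove" xdt:Locator="Match(name)" />
</connectionStrings>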
Continuous Deployment
If you are using a continuous deployment tool like Octopus Deploy, then you can substitute variables in files on a per-environment and per-machine-role basis (e.g. CM vs CD). You also have the ability to write custom PowerShell steps to modify/transform/delete files as required. Since this can also run on a machine-role basis, you can write a step to remove unnecessary connection strings (master, reporting, tracking.history) on CD environments, as well as delete the other files specified in the Sitecore configuration guide.
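A hypothetical PowerShell step scoped to the CD machine role might look like this (the web root and file name are illustrative, not a definitive list of what to remove):

# Post-deployment step, run only on machines with the CD role
$webRoot = "C:\inetpub\wwwroot\MySite"   # placeholder path
Remove-Item "$webRoot\App_Config\Include\Sitecore.Analytics.Reporting.config" -ErrorAction SilentlyContinue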
Sitecore Config Overrides
Anything within the <sitecore> node in web.config can be modified and patched using the include file patching facilities built into Sitecore. If you have certain settings which need to be modified or deleted for a CD environment, then you can create a CD-specific override, which I place in /website/App_Config/Include/z.ProjectName/WebCD, and use a post-deployment PowerShell script in Octopus Deploy to delete this folder on CM environments. There are examples of patches within the Include folder, such as SwitchToMaster.config. In theory you could write a patch file to remove all the config sections mentioned in the deployment guide, but it would be easier to write a PowerShell step to delete these instead.
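As a sketch, a CD-specific patch file uses the patch namespace to override or delete individual elements (the setting shown is illustrative):

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- illustrative: change a setting's value on CD servers only -->
      <setting name="EnableEventQueues">
        <patch:attribute name="value">true</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>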
I tend to use all the above to aid in deploying to various environments for different server roles (CM vs CD).
I strongly recommend you take a look at Desired State Configuration (DSC), which will do exactly what you're talking about. You need to set up the actual configuration at least once, of course, but then it can be deployed to as many machines as you'd like. Changes to the config are automatically flowed to all machines built from it, and any changes made directly to the machines (referred to as configuration drift) are automatically corrected. This can be combined with Azure, which now has the capability to act as a 'pull server' through its Automation features.
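A minimal sketch of what a DSC configuration looks like (the node, path, and file names are hypothetical):

Configuration CDServer {
    Node "CD01" {
        # Hypothetical: ensure a CM-only config file is absent on CD boxes
        File RemoveReportingConfig {
            DestinationPath = "C:\inetpub\wwwroot\MySite\App_Config\Include\Sitecore.Analytics.Reporting.config"
            Ensure          = "Absent"
        }
    }
}
CDServer                                        # compile the configuration to a .mof
Start-DscConfiguration -Path .\CDServer -Wait -Verbose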
There's a lot of reading to do to get up to speed with this feature set, but it will solve your problem.
This is not a Sitecore tool per se.

Django Server Structure and Conventions

I'm interested in figuring out the best practice way of organising Django apps on a server.
Where do you place Django code? The (now old) Almanac says /home/django/domains/somesitename.com/, but I've also seen things placed in /opt/apps/somesitename/. I'm thinking the /opt/ idea sounds better as it's not global, but I haven't seen /opt/ used before, and presumably it might be better for apps to go in a site-specific deployer user's home directory.
Would you recommend having one global deployer user, one user per site, or one per site-env (e.g. sitenamelive, sitenamestaging)? I'm thinking one per site.
How do you version your config files? I currently put them in an /etc/ folder at the top level of source control, e.g. /etc/nginx/somesite-live.conf.
How do you provision your servers and do the deployment? I've resisted Chef and Puppet for years in the hope of something Python-based. Silver Lining doesn't seem ready yet, and I have big hopes for Patchwork (https://github.com/fabric/patchwork/). Currently we're just using some custom Fabric scripts to deploy, but the "server provisioning" is handled by a bash script and some manual steps for adding keys and creating users. I'm about to investigate Silk Deployment (https://bitbucket.org/btubbs/silk-deployment) as it seems closest to our setup.
Thanks!
I think more information is needed on what kinds of sites you are deploying: there would be differences based on the relations between the sites, both programmatically and 'legally' (as in a business relation):
Having a system account per 'site' can be handy if the sites are 'owned' by different people; if you are a web designer or programmer with a few clients, you might benefit from the separation.
If your sites are related (e.g. a forum site, a blog site, etc.), you might benefit from a single deployment system (like ours).
For libraries, if they're hosted on reputable sources (PyPI, GitHub, etc.), it's probably OK to leave them there and deploy from them; if they're on dodgy hosts which go up and down, we take a copy and put them in a /thirdparty folder in our Git repo.
FABRIC
Fabric is amazing, if it's set up and configured right for you:
We have a policy here which means nobody ever needs to log onto a server (which is mostly true; there are occasions where we want to look at the raw nginx log file, but it's a rarity).
We've got Fabric configured so that there are individual functional blocks (restart_nginx, restart_uwsgi, etc.), but also
higher-level 'business' functions which run all the little blocks in the right order. To update all our servers we merely type 'fab -i secretkey live deploy': 'live' sets the settings for the live servers, and 'deploy' deploys (the -i is optional if you have your .ssh keys set up right).
We even have a control flag so that if the live setting is used, it will ask 'are you sure?' before performing the deploy.
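A rough sketch of that structure in a Fabric 1.x fabfile (the host names and service are placeholders):

from fabric.api import env, sudo, task
from fabric.contrib.console import confirm
from fabric.utils import abort

@task
def live():
    # Select the live servers; hosts are placeholders
    env.hosts = ['web1.example.com', 'web2.example.com']
    env.is_live = True

@task
def deploy():
    # The 'are you sure?' guard for live deploys
    if getattr(env, 'is_live', False) and not confirm('Deploy to LIVE?', default=False):
        abort('Deploy cancelled.')
    restart_uwsgi()   # one of the small functional blocks

def restart_uwsgi():
    sudo('service uwsgi restart')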
Our code layout
So our code base layout looks a bit like this:
/ <-- folder containing readme file etc
/bin/ <-- folder containing nginx & uwsgi binaries (!)
/config/ <-- folder containing nginx config and pip list but also things like pep8 and pylint configs
/fabric/ <-- folder containing fabric deployment
/logs/ <-- holding folder that nginx logs get written into (but not committed)
/src/ <-- actual source is in here!
/thirdparty/ <-- third party libs that we didn't trust the hosting of for pip
Possibly controversial, because we load our binaries into our repo, but it means that if I upgrade nginx on the boxes and want to roll back, I just do it by manipulating Git. I know what works against what build.
How our deploy works:
All our source code is hosted on a private Bitbucket repo (we have a lot of repos and a few users; that's why Bitbucket is better for us than GitHub). We have a user account for the 'servers' with its own SSH key for Bitbucket.
The deploy task in Fabric performs the following on each server (a sketch follows the list):
IRC bot announces the start in the IRC channel
git pull
pip install (from a pip requirements list in our repo)
syncdb
South migrate
uwsgi restart
Celery restart
IRC bot announces completion in the IRC channel
start availability testing
announce results of the availability testing (and post a report to a private pastebin)
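In Fabric 1.x terms, the core of that sequence might look roughly like this (paths, service names, and the IRC helper are placeholders; the real deploy also runs the availability testing and reporting):

from fabric.api import cd, run, sudo, task

def announce(message):
    # Placeholder for the IRC bot announcement step
    print(message)

@task
def deploy():
    announce('deploy starting')
    with cd('/opt/apps/oursite/src'):
        run('git pull')
        run('pip install -r ../config/requirements.txt')
        run('python manage.py syncdb --noinput')
        run('python manage.py migrate')   # South migrations
    sudo('service uwsgi restart')
    sudo('service celeryd restart')
    announce('deploy complete')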
The 'availability test' (think unit tests, but against the live server) hits all the web pages and APIs using the 'test' account to make sure it gets back sane data without affecting live stats.
We also have a backup Git service, so if Bitbucket is down the deploy fails over to it gracefully, and we even have Jenkins integration so that a commit to the 'deploy' branch triggers a deployment.
The scary bit
Because we use cloud computing and expect high throughput, our boxes auto-spawn. There's a default image which contains a copy of the Git repo etc., but invariably it will be out of date, so there's a startup script which does a deployment to itself, meaning new boxes added to the cluster are automatically up to date.