How to set up the Redmine WS API Key programmatically? - redmine

I need to enable the Redmine WS and then generate and store its API key at installation time, in order to use it later in other scripts (e.g. fetch_changesets). Is there a Ruby command to do this?

/usr/share/redmine/script/runner "Setting.sys_api_enabled = 1" -e production
/usr/share/redmine/script/runner "Setting.sys_api_key = 'abcd'" -e production

Related

Django Cookiecutter Database restore

I'm trying to restore the database with the maintenance script provided. But there is a check in the script which doesn't allow me to restore if the user is postgres.
Any reason for that?
It is a convention not to use the postgres superuser for this, similar to the convention of operating a Linux server with a regular user account instead of the root account.
You can remove that check from the script if you want to proceed anyway. However, cookiecutter-django should have generated a .env/.production/.postgres file with a username other than postgres.
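For reference, that generated .env/.production/.postgres file typically contains variables along these lines (the variable names follow cookiecutter-django's template; the values here are placeholders):
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=myproject
POSTGRES_USER=someuser
POSTGRES_PASSWORD=somepassword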

AWS Amplify: Same admin query on two separate apps

So here's my situation...I have two React apps that need to talk to the same Cognito User Pool. I've been able to accomplish this by copying the aws-exports.js file from the first app to the second app I created (not sure if this is something I should be doing or not but it is working). The issue I am having however is when I run an Admin Query on the second app (to say list users in the Cognito User Pool) I get a 403 (Forbidden) error. Has anyone ever run into this before? Googling all day has not helped me so I figured I would ask.
You'll need "multi-frontend" solution:
https://docs.amplify.aws/cli/teams/multi-frontend
Here is some useful info for this:
Open the Amplify Console and select the "first" app (where the backend was created).
Go to the first app's "backend" section.
Select the "Backend environments" tab.
Look for the "Edit backend" box and this text: "To continue working on the backend, install the Amplify CLI and make updates by running the command below from the root of your project folder".
Copy that command, then paste and run it in the second app's root; it looks like the example below.
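That command is an amplify pull with your app's ID and backend environment filled in; a sketch (the appId and env name below are placeholders for your own values):
amplify pull --appId d1a2b3c4example --envName dev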
Beware!
Do not modify (and push) the backend from the second application.
If you use git-branch-based environments, you must always switch the env AND the branch in parallel. Do not pull the "master" backend for your "dev" env.
Try to avoid making changes in the Amplify Console if you modify things with the Amplify CLI; those changes cannot be synchronized.
If you store multiple apps in a git monorepo:
https://docs.amplify.aws/cli/usage/monorepo

How to get Postman Environment and Postman Globals URLs for passing to Newman?

Newman help specifies that collection, environment and globals can be passed as a path or as a URL. I can see how to get a collection URL from Postman (by going to Share > Collection Link).
How can I get the URLs to Environment and Globals in Postman, so I could pass them to newman?
Using Newman with the Postman Pro API:
Generate an API key
Fetch a list of your collections from:
https://api.getpostman.com/collections?apikey=$apiKey
Get the collection link via its UID:
https://api.getpostman.com/collections/$uid?apikey=$apiKey
Obtain the environment URI from:
https://api.getpostman.com/environments?apikey=$apiKey
Using the collection and environment URIs acquired in steps 3 and 4, run the collection as follows (here $uid is the collection's uid and $envUid is the environment's uid):
newman run "https://api.getpostman.com/collections/$uid?apikey=$apiKey" \
--environment "https://api.getpostman.com/environments/$envUid?apikey=$apiKey"
Newman package: https://www.npmjs.com/package/newman
Using the Postman desktop app, try the following steps:
View your collection in Postman.
From the collection details view (press the arrow next to the collection name), select View in Web.
In the Postman web view, next to the Postman logo on the right, there is a drop-down.
From the drop-down, select the workspace your test collection is in.
On the collection list page, you will see Environments as a tab next to Collections.
Select the Environments tab, then select the specific environment you want.
The URL of this page is the environment URL you can pass to Newman.
Via Postman, I exported my environment as a JSON file and then hosted that file on a web server.
I didn't get how to obtain the Postman Globals URL for passing to Newman.
I am only able to get the collection and environment URLs.
From the command line, use the newman command line options:
-e <source>, --environment <source>
Specify an environment file path or URL. Environments provide a set
of variables that one can use within collections.
-g <source>, --globals <source>
Specify a file path or URL for global variables. Global variables are
similar to environment variables but have a lower precedence and can
be overridden by environment variables having the same name.
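If there is no API URL for your globals, a common workaround is to export the globals (and environment) from Postman as JSON files and pass the file paths instead; the file names below are placeholders:
newman run my-collection.json -e my-environment.json -g my-globals.json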
If you use Newman as a Node.js module, provide environment and globals as options to newman.run():
newman.run({environment: <source>, globals: <source>}, callback)

Sitecore EXM List Manager in distributed environment with Lucene Search Configuration

We are using a Lucene index instead of Solr. We are currently facing an issue with our List Manager on the CD server. The code below throws an exception on the CD server because it is unable to instantiate List Manager from the Sitecore Configuration Factory.
newsRecipientList = listRepository.GetEditableRecipientCollection("{my list guid }");
I've already gone through the Sitecore documentation for List Manager in a scaled environment, but it only talks about Solr.
https://doc.sitecore.net/sitecore_experience_platform/digital_marketing/the_list_manager/configure_the_list_manager_in_a_scaled_environment
Any guidance on Sitecore configuration for List Manager using Lucene is much appreciated.
Sitecore Exception Stacktrace
Value can not be null : listManager
at Sitecore.Modules.EmailCampaign.Factories.BusinessLogicFactory.<>c__DisplayClassd.b__b()
at Sitecore.Modules.EmailCampaign.Core.InstanceCreator.GetConfiguredInstanceOrDefault[TResult](String configurationPath, Func`1 defaultInstanceBuilder)
at Sitecore.Modules.EmailCampaign.Factories.BusinessLogicFactory.<>c__DisplayClassd.<CreateDefaultFactory>b__a()
at Sitecore.Modules.EmailCampaign.Factories.InitializedOnce`1.get_Value()
at Sitecore.Modules.EmailCampaign.ListManager.ListManagerCollectionRepository.GetEditableRecipientCollection(String recipientCollectionId)
If you followed the guide for the Delivery (CD) environment, ListManager is disabled there, and that might be the reason why you see that error. Does the same code work on CM (where List Manager is enabled)?
Since List Manager is not available in the CD environment, we need to call the Sitecore API to update/add contacts. The class below exposes APIs to modify contact lists.
Sitecore.Modules.EmailCampaign.ClientApi
We need to add a connection string on the CD server in order to call these APIs.
<add name="EmailCampaignClientService" connectionString="url=http://<Your CM Server host>/sitecore%20modules/web/emailcampaign/ecmclientservice.asmx;timeout=60000" />

How to configure CouchDB authentication in Docker?

I'm trying to build a Dockerized CouchDB to run in AWS that bootstraps authentication for my app. I've got a Dockerfile that installs CouchDB 1.6.1 and sets up the rest of the environment the way I need it. However, before I put it on AWS and potentially expose it to the wild, I want to put some authentication in place. The docs show this:
http://docs.couchdb.org/en/1.6.1/api/server/authn.html
which hardly explains the configuration properly or what is required for basic security. I've spent the afternoon reading SO questions, docs and blogs, all about how to do it, but there's no consistent story and I can't tell whether what worked in 2009 still works now, or which parts are obsolete. I see a bunch of possible settings in the current ini files, but they don't match what I'm seeing in my web searches. I'm about to start trying various random suggestions I've gleaned from various readings, but thought I would ask before doing trial-and-error work.
Since I want it to run in AWS I need it to be able to start up without manual modifications. I need my Dockerfile to do the configuration, so using Futon isn't going to cut it. If I need to I can add a script to run on start to handle what can't be done there.
I believe that I need to set up an admin user, then define a role for users, provide a validation function that checks for the proper role, then create users that have that role. Then I can use the cookie authentication (over SSL) to restrict access to my app that provides the correct login and handles the session/cookie.
It looks like some of it can be done in the Dockerfile. Do I need to configure authentication_handlers, and an admin user in the ini file? And I'm guessing that the operations that modify the database will need to be done by some runtime script. Has anyone done this, or seen some example of it being done?
UPDATE:
Based on Kxepal's suggestion I now have it working. My Dockerfile is derived from klaemo's docker-couchdb, as mentioned below. The solution is to force the database to require authentication, but a fresh install starts out as Admin-Party. To stop that you have to create an admin user, which secures the system data but leaves other databases open. First, create an admin user in your Dockerfile:
RUN sed -e '/^\[admins\]$/a admin=openpassword\n' -i /usr/local/etc/couchdb/local.ini
(just following klaemo's sed pattern of using -e). When CouchDB runs, it will salt and hash this password and replace it in the local.ini file. I extracted that hashed password and replaced "openpassword" with it so that my Dockerfile doesn't contain the password in plain text. CouchDB can tell by its form not to hash it again.
The normal pattern to now secure the other databases is to create users/roles and use them in a validation function to deny access to the other databases. Since I am only interested in getting a secure system in place for testing I opted to defer this and just use the settings in local.ini to force everyone to be authenticated.
The Dockerfile now needs to set the require_valid_user flag:
RUN sed -e '/^\[couch_httpd_auth\]$/a require_valid_user = true\n' -i /usr/local/etc/couchdb/local.ini
And that requires uncommenting the WWW-Authenticate setting:
RUN sed -e 's/^;WWW-Authenticate/WWW-Authenticate/' -i /usr/local/etc/couchdb/local.ini
Since the setting shows Basic realm="administrator", the NSURLProtectionSpace in my iOS app needs to use @"administrator" as the realm.
After this I now have a Dockerfile that creates a CouchDB server that does not allow anonymous modification or reading.
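To sanity-check the result, a quick sketch (the host, port, and credentials are placeholders for whatever your container exposes): the anonymous request should now be rejected with a 401, while the admin credentials should succeed:
curl -i http://localhost:5984/_all_dbs
curl -i -u admin:openpassword http://localhost:5984/_all_dbs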
This hasn't solved all of my configuration issues since I need to populate a database, but since I use a python script to do that and since I can pass credentials when I run that, I have solved most problems.
To set up the auth configuration during the image build, you need to look not at the API, but at the configuration for server admins. TL;DR: just put an [admins] section into the local.ini file with your username and password in plain text; on start, CouchDB will replace the password with its hash, and CouchDB won't be in the Admin Party state.
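For illustration, that plain-text section in local.ini might look like this before the first start (the credentials are placeholders):
[admins]
admin = mysecretpassword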
P.S. Did you check the docker-couchdb project?