I have several config items for my module: one adds a field to the user entity and another adds a role. Both configs live in \MY_MODULE\config\install and the install works great, YAY!!!!!
Now when I uninstall the module, the fields and the user role are still enabled in the system. This makes no sense to me.
Does anyone know why, or what I need to do to get them removed on uninstall without writing code in hook_uninstall? (That is what I ended up doing for the fields, but it still does not make sense to me.)
Content of \MY_MODULE\config\install\user.role.timeoffadministrator.yml:
langcode: en
status: true
dependencies: { }
id: timeoffadministrator
label: 'Time Off Administrator'
weight: 1
is_admin: false
permissions:
  - 'access comments'
  - 'access content'
  - 'add time off entities'
  - 'edit time off entities'
  - 'view published time off entities'
Thanks for any help you can provide.
I finally ran across the item that fixed this issue. For the yml files there is a dependencies section that needs to be added. Adding the below fixed my issues with both the fields and the role.
dependencies:
  module:
    - timeoff
  enforced:
    module:
      - timeoff
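For reference, the complete role file then looks roughly like this; it is simply the question's config with the dependency block merged in, nothing else changes:
langcode: en
status: true
dependencies:
  module:
    - timeoff
  enforced:
    module:
      - timeoff
id: timeoffadministrator
label: 'Time Off Administrator'
weight: 1
is_admin: false
permissions:
  - 'access comments'
  - 'access content'
  - 'add time off entities'
  - 'edit time off entities'
  - 'view published time off entities'
With the enforced dependency in place, uninstalling the timeoff module removes the role (and the field configs, if they get the same block) instead of leaving them behind.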
I credit this thread for the answer (two people posted that solution there: Robert Ben Parkinson and Ahmad):
https://drupal.stackexchange.com/questions/164612/how-do-i-remove-a-configuration-object-from-the-active-configuration
Is it possible to change the name of an Expo project without having to go through the entire process of building it again and submitting to the app store?
I accidentally didn't update the project name in the app.json file and now am stuck with an app called exmilti in TestFlight.
It took a few days to build, submit, and get the approval for TestFlight so I would love to avoid that process if there is a simple fix.
When I attempt to rebuild it in the CLI with the new name I get an error:
Reason: Unexpected response, raw:
{"responseId":"ed00c05f-82a0-41d6-9a7c-b48d04e68a1a","resultCode":35,"resultString":"There were errors in the data supplied. Please correct and re-submit.","userString":"Multiple profiles found with the name 'com.myComapanyName.AppName AppStore'. Please remove the duplicate profiles and try again."}
Which suggests to me that I am going to have to fully remove the app from TestFlight (yikes), then re-upload the newly named app and wait for it to be approved again.
Any advice?
Update - I did not find an easier way so I just compiled the project and pushed it via expo again with the correct name.
It didn't take as long as I had expected.
So I'm trying to create a project with Google Cloud Deployment Manager.
I've structured the setup roughly as below:
# Structure
Org -> Folder1 -> Seed-Project (location where I am running Deployment Manager from)
Organization:
  IAM:
    {Seed-Project-Number}@cloudservices.gserviceaccount.com:
      - Compute Network Admin
      - Compute Shared VPC Admin
      - Organisation Viewer
      - Project Creator
# DeploymentManager Resource:
type: cloudresourcemanager.v1.project
name: MyNewProject
parent:
  id: '{folder1-id}'
  type: folder
projectId: MyNewProject
The desired result is that MyNewProject should be created under Folder1.
However, it appears as if the Deployment Manager service account does not have sufficient permissions:
$ CLOUDSDK_CORE_PROJECT=Seed-Project gcloud deployment-manager deployments \
create MyNewDeployment \
--config config.yaml \
--verbosity=debug
Error message:
- code: RESOURCE_ERROR
location: /deployments/MyNewDeployment/resources/MyNewProject
message: '{"ResourceType":"cloudresourcemanager.v1.project",
"ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"The
caller does not have permission","status":"PERMISSION_DENIED","statusMessage":"Forbidden","requestPath":"https://cloudresourcemanager.googleapis.com/v1/projects/MyNewProject","httpMethod":"GET"}}'
I've done some digging, and it appears to be calling the resourcemanager.projects.get method. The 'Compute Shared VPC Admin (roles/compute.xpnAdmin)' role should provide this permission, as documented here: https://cloud.google.com/iam/docs/understanding-roles
Except that doesn't seem to be the case, so what's going on?
Edit
I'd like to add some additional information gathered from debugging efforts:
These are the API requests from Deployment Manager (from the seed project).
You can see that the caller is an anonymous service account, which isn't what I'd expect to see. (I'd expect to see {Seed-Project-Number}@cloudservices.gserviceaccount.com as the calling account here.)
Edit-2
config.yaml
imports:
  - path: composite_types/project/project.py
    name: project.py
resources:
  - name: MyNewProject
    type: project.py
    properties:
      parent:
        type: folder
        id: "{folder1-id}"
      billingAccountId: billingAccounts/REDACTED
      activateApis:
        - compute.googleapis.com
        - deploymentmanager.googleapis.com
        - pubsub.googleapis.com
      serviceAccounts: []
composite_types/project/* is an exact copy of the templates found here:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/project
The key thing is that this is a GET operation, not an attempt to create the project. It is done to verify the global uniqueness of the requested project ID, and if the ID is not unique, PERMISSION_DENIED is thrown.
Lousy error message, lots of wasted developer hours!
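Worth adding (my note, not part of the original answer): you can see the same behaviour outside Deployment Manager by issuing the equivalent GET yourself with the standard CLI; when the ID already belongs to another account you get the same "caller does not have permission" response shown above:
gcloud projects describe MyNewProject
Requesting a more unique project ID avoids the collision entirely.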
Probably late, but just to share that I ran into a similar issue today. I double-checked every permission mentioned in the README for the service account under which the Deployment Manager job runs ({Seed-Project-Number}@cloudservices.gserviceaccount.com in the question), and it turned out that the Billing Account User role was not assigned, contrary to what I thought earlier. Granting that and running it again worked.
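In case it saves someone a search: the Billing Account User role is granted on the billing account (or at the organization level), not on the seed project. A sketch of the grant with gcloud, where the billing account ID is a placeholder and, depending on your gcloud version, the command may still live under beta:
# Grant 'Billing Account User' to the Deployment Manager service account
# on the billing account referenced in config.yaml.
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
  --member="serviceAccount:{Seed-Project-Number}@cloudservices.gserviceaccount.com" \
  --role="roles/billing.user"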
The shares at my company are becoming unwieldy and we have now officially run out of letters to map shares to, having exhausted A, B, and H-Z. Not all of our users need access to some of these shares, but there are enough people who need access to enough different shares that we can't simply recycle letters that are already used by other shares. At this point we're going to need to start moving shares over to network locations.
Adding a network location shortcut in My Computer isn't difficult; I right-click and use the wizard. But how do I do it through Group Policy? I don't want to have to set up 100 or so computers manually.
This absolutely can be done using only existing Group Policy preferences, but it's a little tedious.
Background Info
When you create a network location shortcut it actually creates three things.
A read-only folder with the name of your network shortcut
A target.lnk within that folder with your destination
A desktop.ini file that contains the following
[.ShellClassInfo]
CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
Flags=2
I found this information on this Spiceworks community forum post.
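To make that concrete, here is a rough sketch (my own, not from the forum post) of creating those three artifacts by hand for the current user; SHARENAME and \\server\share are placeholders:
@echo off
rem Sketch only: create the network-location folder and its desktop.ini by hand.
set "NLOC=%APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME"
mkdir "%NLOC%"
attrib +r "%NLOC%"
(
  echo [.ShellClassInfo]
  echo CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
  echo Flags=2
) > "%NLOC%\desktop.ini"
rem The third piece, target.lnk pointing at \\server\share, still needs to be
rem created, e.g. by the Shortcuts preference described below.
The Group Policy preferences below simply create the same three pieces on every targeted machine.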
How to make it happen
I figured out how to do this from a comment in the same forum post linked above.
You need to create four settings in a group policy. All of the settings are located in the Group Policy editor under User Configuration > Preferences > Windows Settings, as seen in this image.
Folders Setting
Add a new Folder preference with the following settings, as seen in this image.
Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME
Read-only checked
Ini Files Settings
There are two settings that you must create here, as seen in this image.
Create one for the CLSID2 setting, as seen in this image:
File Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\desktop.ini
Section Name: .ShellClassInfo
Property Name: CLSID2
Property Value: {0AFACED1-E828-11D1-9187-B532F1E9575D}
And another for the Flags setting, as seen in this image:
File Path: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\desktop.ini
Section Name: .ShellClassInfo
Property Name: Flags
Property Value: 2
Shortcuts Setting
Add a new shortcut preference with the following settings, as seen in this image.
Name: %APPDATA%\Microsoft\Windows\Network Shortcuts\SHARENAME\target
Target type: File System Object
Location: <Specify full path>
Target path: SHARETARGET
Closing Notes
This will work to create the network location using group policy. I would recommend using item level targeting to keep all of your network locations in one group policy.
It can be a handful to manage all of these separate preferences, so I created an application to help with managing the shares and the user security group filters. Here is my application on GitHub; you must create the first share using the settings above, but the application can handle adding more shares, deleting shares, and updating existing shares.
You can make a .bat script which you can add to a startup policy to run:
net use <driver letter> \\<servername>\<sharename> /user:<username> <password>
Example:
@echo off
net use w: \\server\sharename /user:Test TestPassword
This will map \\server\sharename to the letter W on every computer.
And you can modify it to run only on some computers or users.
Let's say you want only the user 'MikeS' to run this command; then you put something like this:
IF "%USERNAME%"=="MikeS" (
    net use w: \\server\sharename /user:Test TestPassword
)
I'm having trouble with Google App Engine's Datastore indexes. When running my app via the GoogleAppEngineLauncher, the app works fine. When deploying the app, I get the following error:
NeedIndexError: no matching index found.
The suggested index for this query is:
- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc
The error is generated after this line of code:
bars = bar_query.fetch(10)
Before the above line of code, it reads:
bar_query = Bar.query(ancestor=guestbook_key(guestbook_name)).order(-Bar.rating)
My index.yaml file contains the exact "suggested" index, below the # AUTOGENERATED line:
- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc
Am I maybe missing something? I removed the index.yaml file and deployed the app again (via the command line), and one fewer file was uploaded, so the index.yaml file is definitely being deployed.
Everything is working fine locally. I'm working on the latest Mac OS X. The command used for deployment was:
appcfg.py -A app-name --oauth2 update app
The datastore I implemented is loosely based on the guestbook tutorial app.
Any help would be greatly appreciated.
EDIT:
My ndb.Model is defined as follows:
class Bar(ndb.Model):
    content = ndb.StringProperty(indexed=False)
    lat = ndb.FloatProperty(indexed=False)
    lon = ndb.FloatProperty(indexed=False)
    rating = ndb.IntegerProperty(indexed=True)
    url = ndb.TextProperty(indexed=False)
Check https://appengine.google.com/datastore/indexes to see if this index is present and its status is set to "Serving". It's possible that the index is still being built.
The development environment emulates the production environment. It does not really have indexes in the Datastore sense.
Probably a little late now, but running "gcloud app deploy index.yaml" helped, since running deploy by itself ignored the index.yaml file.
As others have said, the dashboard at https://appengine.google.com/datastore/indexes will be showing "pending" for a while.
I stumbled on the same issue and your comments pointed me in the right direction. Here's how Google says to handle this:
According to the Google documentation, when using
gcloud app deploy
the index.yaml file is not uploaded (the question is: why not?). Anyway, one has to upload this index file manually.
To do so, the documentation gives the following command:
gcloud datastore create-indexes index.yaml
(supposing you execute this from the same directory as the index.yaml file)
Once you have done this you can go to the Datastore console and you will see that the index has been created. It will then start to be built (this took some 5 minutes in my case), and once the index is being served you can start your application.
I fixed this issue by moving the index that the error says is missing above the # AUTOGENERATED line in the index.yaml file.
In your case the yaml file will look like:
indexes:

- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc

# AUTOGENERATED
Then all you have to do is update your app and then update the indexes; you update the indexes by running the following command:
appcfg.py [options] update_indexes <directory>
With <directory> being the directory containing your index.yaml file. You should then see that index on your dashboard at https://appengine.google.com/datastore/indexes
The update will initially show "Pending", but once the index says "Serving" you will be able to make your query.
This NeedIndexError can be triggered by different causes. I arrived here with a slightly different problem, so I'll try to explain everything I was doing wrong in order to show things that can be done:
I thought I had to have only one index per kind of entity. That's not true: as far as I found, you need as many indexes as the different queries you will need to make (see the sketch after this list).
While on the development web server, indexes are auto-generated and placed below the # AUTOGENERATED line in the index.yaml file.
After modifying indexes, I first run gcloud datastore indexes create index.yaml and wait until the indexes are Serving in https://console.cloud.google.com/datastore/indexes?project=your-project.
I clean up unused indexes by executing gcloud datastore indexes cleanup index.yaml; be careful not to delete indexes that are still being used in production. Reference here
Be aware that if you don't specify a direction on your index properties, it will be asc by default. So if you are trying to make a descending (-) sort query, it will again raise the error.
Things I think are true but can't back with evidence beyond my particular problem; take them as a kind of brainstorming:
Indexes matter when querying data, not when uploading it.
Manually creating the # AUTOGENERATED line does not seem to be necessary if you are generating indexes manually. Reference here
Since the development server adds indexes below the # AUTOGENERATED line as you make queries, you can "accidentally" solve your problem by adding this line, while the real problem is a missing manual index update using the gcloud datastore indexes create index.yaml command. Reference here and here
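To make the one-index-per-query and direction points above concrete, here is a sketch of what index.yaml would need if you ran both a descending and an ascending ancestor query on rating (it uses the Bar kind from the question, but it is my own illustration, not part of the original answer):
indexes:

- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: desc

- kind: Bar
  ancestor: yes
  properties:
  - name: rating
    direction: asc
Two queries, two entries: .order(-Bar.rating) needs the desc entry, and .order(Bar.rating) needs the asc one.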
In my case, I uploaded the index file manually like below:
gcloud datastore indexes create "C:\Path\of\your\project\index.yaml"
Then you should confirm the update:
Configurations to update:
descriptor: [C:\Path\of\your\project\index.yaml]
type: [datastore indexes]
target project: [project_name]
Do you want to continue (Y/n)? y
Then you can go to the Datastore console to check whether the index has been created, via this link:
https://console.cloud.google.com/datastore/indexes
Is there any plugin for Redmine to display linked changesets for a specific issue?
I have specified a repository (Mercurial) and I would like to see the changesets when clicking on an issue.
You don't need a plugin for this - it's built-in. All you need to do when committing to your repository is to enter the issue number preceded by a # in your message.
For an example, check out an issue on the redmine web site: http://www.redmine.org/issues/6317
There's an Associated revisions section which lists source control changesets. They appear on that issue because they have #6317 in their messages.
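For example, with Mercurial as in the question, a commit like the following would get linked to that issue (the message text is made up; what matters is the #6317 reference, plus a keyword if your settings require one, as described below):
hg commit -m "Fix validation error on save, refs #6317"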
Depending on the configuration of redmine, a keyword before the issue number is needed in order to have the changeset linked to the issue. Those keywords can be modified in the settings of redmine:
Examples for the above settings:
refs #1234
references #1234, #1337
issue #1234 #1337 & #1001
If you'd like to omit keywords and simply link a changeset to all issue numbers found in the log message, enter a * into the Referencing keywords textbox.
Finally, in order to have redmine check for new changesets and parse the repository's log messages, you'll need to click on the Repository tab, or configure a rake task to do it regularly.
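If you go the rake route, the built-in task is redmine:fetch_changesets. A typical cron entry might look like this (the Redmine path is a placeholder, and depending on your Redmine version you may need to prefix the rake call with bundle exec):
# Fetch new changesets for all projects every 15 minutes.
*/15 * * * * cd /path/to/redmine && rake redmine:fetch_changesets RAILS_ENV=production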