I created an API Connect project with the command
apic loopback
When I try to launch the API Designer, I receive the error below:
sdil@sdil-VirtualBox:~/Project/test-apic/todo4$ apic edit
The user model "User" is attached to an application that does not specify
whether other sessions should be invalidated when a password or
an email has changed. Session invalidation is important for security
reasons as it allows users to recover from various account breach
situations.
We recommend turning this feature on by setting
"logoutSessionsOnSensitiveChanges" to true in
server/config.json (unless you have implemented your own solution
for token invalidation).
We also recommend enabling "injectOptionsFromRemoteContext" in
User's settings (typically via common/models/*.json file).
This setting is required for the invalidation algorithm to keep
the current session valid.
Learn more in our documentation at
https://loopback.io/doc/en/lb2/AccessToken-invalidation.html
Error: loopback.errorHandler is no longer available. Please use the module "strong-error-handler" instead.
When I check the declarations in package.json, I do see strong-error-handler listed:
"dependencies": {
...
"strong-error-handler": "^2.0.0",
}
How do I fix this to get the API Designer running?
I sort of recognize this problem, actually. We had the new strong-error-handler installed, but the old one was still active.
Follow the steps in "Migration from old LoopBack error handler" here:
https://loopback.io/doc/en/lb3/Using-strong-error-handler.html#migration-from-old-loopback-error-handler
That should eliminate the old one completely.
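For reference, the core of that migration (a minimal sketch of the documented change) is to swap the old handler registration in server/middleware.json:

Before:
"final:after": {
  "loopback#errorHandler": {}
}

After:
"final:after": {
  "strong-error-handler": {}
}

Also make sure nothing in server/server.js still calls loopback.errorHandler(), and set "logoutSessionsOnSensitiveChanges": true in server/config.json to address the session-invalidation warning quoted above.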
I've created a public/private key pair for the Google Cloud Platform as described here.
The problem: I can't find a shred of documentation describing where to put it. This thing is not the typical SSH key pair, but rather a JSON file.
Where should it be stored on a Mac to allow the gcloud command to authenticate and push to GCP?
If you are authenticating locally with a service account to build/push with gcloud, you should set the GOOGLE_APPLICATION_CREDENTIALS environment variable in your Mac terminal to point to the JSON key file:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Once this environment variable is defined, all requests will be authenticated as that service account using the key info from the JSON file.
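One caveat worth flagging: GOOGLE_APPLICATION_CREDENTIALS is read by the Google Cloud client libraries (Application Default Credentials), while the gcloud CLI keeps its own credential store. If gcloud itself needs to act as the service account, you can also activate the key directly:

gcloud auth activate-service-account --key-file="/home/user/Downloads/service-account-file.json"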
Please consider looking at the doc below for reference:
https://cloud.google.com/docs/authentication/production
CaioT's answer is the right one if you want to use a service account key file locally.
However, the question ideally shouldn't need to be asked, because keeping service account key files around is bad practice. They are necessary in only a few cases; otherwise, they are a security weakness in your projects.
Take a closer look at this key file. In the end, it's only a file, stored on your Mac (or elsewhere) with no special security protections. You can copy it, edit it, or copy its contents without any problem. You can send it by email, or push it to a Git repository (which might be public!)...
If several developers work on the same project, it quickly becomes a mess to know who manages the keys. And when you have a leak, it's hard to know which key has been used and needs to be revoked...
So, have a closer look at this part of the documentation. I have also written some articles proposing alternatives to using them. Let me know if you are interested.
I run my app locally, and it uses Datastore.
The app is written in Java and uses Objectify. The code looks like this:
ofy().transact(() -> { ofy().load().type(PersonEntity.class).list(); });
This simple query runs successfully when my app connects to my GCP Project's Datastore.
But when I use the cloud-datastore-emulator, this query is rejected with the error message "Only ancestor queries are allowed inside transactions."
This restriction on non-ancestor queries seems to have been removed in Firestore in Datastore mode, but the cloud-datastore-emulator still seems to enforce it.
My questions are:
Does the cloud-datastore-emulator not support Firestore in Datastore mode?
Is there any way to emulate Firestore in Datastore mode?
gcloud SDK version: 346.0.0
Well, the answer to your question is: it should support it, as the emulator is supposed to support everything that the production environment does. That being said, I went through the documentation after seeing your question and found that it states:
The Cloud SDK includes a local emulator of the production Datastore mode environment.
But if you follow the link, there are hints that this is an emulator for both the legacy Datastore and Firestore in Datastore mode, which might be why you are seeing this behavior. With that information at hand, it might be a good idea to open a case in Google's Issue Tracker so that their engineering team can clarify whether this is expected behavior and, if not, fix the issue.
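In the meantime, if you need the emulator to accept the transactional read, one workaround is to restructure it as an ancestor query so it satisfies the emulator's rule. A minimal sketch, assuming a hypothetical parent entity PersonGroup that your PersonEntity instances reference via @Parent:

import java.util.List;
import com.googlecode.objectify.Key;
import static com.googlecode.objectify.ObjectifyService.ofy;

// Hypothetical: all PersonEntity rows share one parent key (one entity group).
Key<PersonGroup> parent = Key.create(PersonGroup.class, "default");
List<PersonEntity> people = ofy().transact(() ->
    ofy().load().type(PersonEntity.class).ancestor(parent).list());

The usual caveat applies: putting everything in a single entity group limits write throughput, so this is only a stopgap for local testing.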
After adding a new policy and disabling an outdated policy in the PDP console (a change that displays correctly in the PDP Policy view), the connected PDP process used by a Java client did not reflect the logic added by the new policy; it still acted according to the older, disabled rules. We also tried the "Clear Decision Cache" and "Clear Attribute Cache" widgets on the PDP Extension screen, and the PEP still shows the same issue.
A graceful restart of WSO2 did resolve the error. The server is running the WSO2 IS 5.1 release. From an operational standpoint, a restart is a rather disruptive action and should be avoided.
Are there any configuration or command options available in the WSO2 IS package to drop the cache and dynamically refresh an active policy without disrupting ongoing services?
This is an already tested and working scenario in 5.1.0.
As I understand it, you want to edit a policy and have the changes take effect after you publish the new policy, without any other operation, right? Yes, when you publish the same policy again with new changes, it will replace the policy in the DB and in the cache across the cluster as well. The changes should be reflected at that time.
Actually, the scenario described by Harsha is not the same as the one Claude asked about. Changing the policy and publishing might work, but disabling or even deleting a policy from the PDP does not become effective unless the server is restarted.
There is a new ticket in JIRA:
Disabling/Deleting Policy from PDP Configuration does not work
If I create a HIT in the Sandbox via MTurk's GUI, is it possible to transfer it to the Production site, or do I have to re-create the HIT manually on the Production site?
In particular, is it possible to download the .input, .question and .properties files for a HIT created via the GUI in the sandbox, in order to use them to generate the same HIT on the Production site via the CLT?
The obvious way seems to be using MTurk HIT layouts. However, reading the docs, I don't see how, or even whether, it is possible to do this using the CLT. The doc on HITLayoutParameter requires using CreateHIT, but this is not an available command in the CLT (which only has loadHITs).
I have seen other questions ("Creating mTurk HIT from Layout with parameters using boto and python" and "Create a MTurk HIT from an existing template") about ways to do it with boto, but I am still wondering whether it's doable with the CLT.
The live and sandbox modes are completely separate and no transfer is possible from one to the other.
You will need to implement this programmatically by storing the specs of the sandbox HIT and creating a live HIT.
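For illustration, here is a minimal sketch of that programmatic route using the AWS SDK for Java (aws-java-sdk-mturk) rather than the CLT; the HIT ID is a placeholder, and only the core fields are copied:

import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.mturk.AmazonMTurk;
import com.amazonaws.services.mturk.AmazonMTurkClientBuilder;
import com.amazonaws.services.mturk.model.CreateHITRequest;
import com.amazonaws.services.mturk.model.GetHITRequest;
import com.amazonaws.services.mturk.model.HIT;

public class CopyHit {
    public static void main(String[] args) {
        // Sandbox and production are separate endpoints with separate data.
        AmazonMTurk sandbox = AmazonMTurkClientBuilder.standard()
            .withEndpointConfiguration(new EndpointConfiguration(
                "https://mturk-requester-sandbox.us-east-1.amazonaws.com", "us-east-1"))
            .build();
        AmazonMTurk production = AmazonMTurkClientBuilder.standard()
            .withEndpointConfiguration(new EndpointConfiguration(
                "https://mturk-requester.us-east-1.amazonaws.com", "us-east-1"))
            .build();

        // Read the spec of the sandbox HIT ("HIT_ID_FROM_SANDBOX" is a placeholder).
        HIT hit = sandbox.getHIT(new GetHITRequest().withHITId("HIT_ID_FROM_SANDBOX")).getHIT();

        // Re-create it against the production endpoint with the same core fields.
        production.createHIT(new CreateHITRequest()
            .withTitle(hit.getTitle())
            .withDescription(hit.getDescription())
            .withKeywords(hit.getKeywords())
            .withReward(hit.getReward())
            .withMaxAssignments(hit.getMaxAssignments())
            .withAssignmentDurationInSeconds(hit.getAssignmentDurationInSeconds())
            .withLifetimeInSeconds(3600L) // lifetime is not copied 1:1; set as needed
            .withQuestion(hit.getQuestion()));
    }
}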
Another option is to use a service like TurkPrime.com, which allows you to copy HITs from sandbox to live mode.
I've deployed a copy of Opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) but the dashboard provider was still using the previous authentication?
I also discovered a neat option while researching this, though: the GitHub page mentions that you can also specify security at a provider level in the JSON using the AdminGroups and ViewGroups properties!
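For what it's worth, a minimal sketch of what that provider-level setting might look like in the SQL provider's JSON settings file (the group names here are hypothetical placeholders, and the exact property casing may vary by build; check the Opserver readme for your version):

{
  "viewGroups": "Opserver-SQL-View",
  "adminGroups": "Opserver-SQL-Admin",
  ...
}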