Apex Automatic backups - oracle-apex

I'm using Oracle APEX (on an Autonomous Database), and usually there is an automatic backup each night if the application has changed; 25 backups are kept.
It's a great feature.
However it's simply stopped working for me. I wonder if I've messed with the setting of the number of backups allowed?
The documentation says:
About Configuring the Number of Backups
By default, Application Express only retains the 25 most recent backups. Older backups are deleted to make room for new ones. Instance administrators can configure the number of backups per application, or disable automatic backups altogether. To configure the number of backups for each application, sign in to the Administration Services application and navigate to Manage Instance, Feature Configuration, Application Development, and update the Number of backups per application attribute. Valid values are between 0 and 30. A value of zero (0) disables the backup feature.
(https://docs.oracle.com/en/database/oracle/application-express/20.2/htmdb/managing-application-backups.html#GUID-764CE12A-6EE1-457C-88A7-FDC0ED9FA827)
It's very likely I've set a value of 0... around the time it stopped working, I remember wondering whether I could increase the number of backups. However, when I log into Administration Services || Manage Instance || Feature Configuration, the Application Development option does not appear. I cannot find it anywhere.
Has anyone got any idea where I can find it or other suggestions to address the issue?
(APEX 20.2, Autonomous (ATP) DB)

Related

How can I have my instances refresh automatically in Google Cloud Run?

I have a Cloud Run application built in Python and Docker. The application serves a dashboard that runs queries against data and displays visualizations and statistics. Currently, if I want the app to load quickly I have to set the minimum number of instances to a number greater than 0; I typically use 10. This is great for serving the app immediately, but the instances can become outdated. I would essentially like to keep a minimum number of instances available so the app is served immediately, but I would like them to refresh, or shut down and start up, once every few hours or at least once a day. Is there a way to achieve this?
I have tried looking into Cloud Scheduler to somehow get the Cloud Run application to refresh on a schedule, but I was unclear on how to make the whole thing shut down and reload, especially without serving another revision.
I have a design in mind, but I have never tested it. Try this:
At startup, store time.Now() in a global variable. That gives you the startup time.
Implement a health check probe. The health check answers OK (HTTP 200) if now minus the startup time (the global variable) is below X (1 hour, for instance). Otherwise it answers KO (HTTP 500).
Deploy your Cloud Run service with the new health check feature.
That way, after the X duration has elapsed, the instance will declare itself unhealthy, and Cloud Run will evict it and create a new one.
It should work. Let me know, I'm interested in the result!
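If it helps, here is a rough sketch of what that deployment step could look like. Cloud Run health checks are declared in the service YAML and applied with gcloud run services replace; the service name, image, probe path, and period below are placeholders, and /healthz is assumed to be the handler described above (200 while the instance is younger than X, 500 afterwards):

# Sketch only: names, image, path, and period are assumptions to adapt.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-dashboard
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/my-dashboard
        livenessProbe:
          httpGet:
            path: /healthz
          periodSeconds: 300
          failureThreshold: 1
EOF
gcloud run services replace service.yaml --region us-central1

Once applied, a failing liveness probe makes Cloud Run terminate that instance and start a fresh one, without deploying a new revision.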

Can't use Composer environment after re-enabling service

I have been trying to find a way to save on the costs of Airflow by disabling it when not in use. I have discovered that if we disable the composer.googleapis.com service while not in use that Google does not charge for the service while it is disabled, although it does continue to charge for other resources that are still active. Unfortunately, if the service is disabled for more than an hour or so, the service is not usable after re-enabling it. After the service has been disabled for an extended period of time, the Composer Environment Details Page shows
An error occurred with retrieving the last operation on this environment
and
This environment cannot be edited due to the errors that occurred during environment creation/update. Please investigate the logs to determine the cause, or create a new environment.
And gcloud composer environments describe shows state: ERROR
The one error that I did see in the logs was a duplicate key error when the airflow_monitoring DAG was rescheduled after a little over an hour. Therefore, I created a new Composer environment, disabled all DAGs, disabled the Composer service, waited a while, then enabled it again. The environment was once again in an error state.
The Cloud Composer documentation states:
If you disable the Cloud Composer API, environments become unusable within an hour of service deactivation unless you re-enable the API. If you re-enable the API, you are billed for the service usage that occurs while the Cloud Composer service is deactivating.
Maybe this is poorly worded, but to me it sounds like the environment becomes unusable within an hour if you disable the API, but if you re-enable it any time later, it will become usable again. I am wondering if it really means that if you disable it, you must re-enable it within 1 hour or it will become permanently unusable.
Is there a way to disable the composer.googleapis.com service for longer than an hour and then get it working again after the service has been re-enabled? Is there something I can restart, or some way to clear the error state? Is there more I should do before disabling it?
I am using composer-1.10.4-airflow-1.10.6 with Python 3.
Thanks.
No, there is no way to disable the composer.googleapis.com service for more than an hour and then have environments be functional after re-enablement.
GCP services are not meant to be enabled/disabled on the fly in this manner, and disablement of a service is meant to be performed with the intention of disabling it for the long term. Keeping a service disabled for long enough means Google-managed components created for the service (specifically for your project) will be decommissioned, and in Composer's case, this will render your environments permanently unusable.
The error state in the environment cannot be cleared. If you want to save on costs, you should delete Composer environments as opposed to deactivating the service entirely. The "service" is not cluster-like and isn't meant to be toggled on and off.
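If cost is the driver, a teardown/recreate cycle with the gcloud CLI is the supported route. A minimal sketch, assuming hypothetical environment and location names (the environment's Cloud Storage DAG bucket is not deleted along with it, so DAGs can be copied into a new environment later):

# Delete the environment while it is not needed (names are placeholders).
gcloud composer environments delete my-environment --location us-central1
# Recreate it when needed again; version and sizing flags omitted for brevity.
gcloud composer environments create my-environment --location us-central1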

Trouble configuring Presto's memory allocation on AWS EMR

I am really hoping to use Presto in an ETL pipeline on AWS EMR, but I am having trouble configuring it to fully utilize the cluster's resources. This cluster would exist solely for this one query, and nothing more, then die. Thus, I would like to claim the maximum available memory for each node and for the one query by increasing query.max-memory-per-node and query.max-memory. I can do this when I'm configuring the cluster by adding these settings in the "Edit software settings" box of the cluster creation view in the AWS console. But the Presto server doesn't start, reporting an IllegalArgumentException in the server.log file, saying that max-memory-per-node exceeds the usable heap space (which, by default, is far too small for my instance type and use case).
I have tried to use the session setting set session resource_overcommit=true, but that only seems to override query.max-memory, not query.max-memory-per-node, because in the Presto UI, I see that very little of the available memory on each node is being used for the query.
Through Google, I've been led to believe that I need to also increase the JVM heap size by changing the -Xmx and -Xms properties in /etc/presto/conf/jvm.config, but it says here (http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto.html) that it is not possible to alter the JVM settings in the cluster creation phase.
To change these properties after the EMR cluster is active and the Presto server has been started, do I really have to manually ssh into each node and alter jvm.config and config.properties, and restart the Presto server? While I realize it'd be possible to manually install Presto with a custom configuration on an EMR cluster through a bootstrap script or something, this would really be a deal-breaker.
Is there something I'm missing here? Is there not an easier way to make Presto allocate all of a cluster to one query?
As advertised, increasing query.max-memory-per-node, and also by necessity the -Xmx property, indeed cannot be achieved on EMR until after Presto has already started with the default options. To increase these, the jvm.config and config.properties found in /etc/presto/conf/ have to be changed, and the Presto server restarted on each node (core and coordinator).
One can do this with a bootstrap script using commands like
sudo sed -i "s/query.max-memory-per-node=.*GB/query.max-memory-per-node=20GB/g" /etc/presto/conf/config.properties
sudo restart presto-server
and similarly for /etc/presto/conf/jvm.config. The only caveats are that the logic in the bootstrap action needs to execute only after Presto has been installed, and that the server on the coordinating node needs to be restarted last (and possibly with different settings if the master node's instance type is different from that of the core nodes).
You might also need to change resources.reserved-system-memory from the default by specifying a value for it in config.properties. By default, this value is 0.4 * (the -Xmx value), which is how much memory Presto claims for the system pool. In my case, I was able to safely decrease this value and give more memory to each node for executing the query.
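Putting those pieces together, a bootstrap-action helper might look roughly like the sketch below. The 20GB/24G/8GB values are assumptions for illustration, and the script is meant to be launched in the background (e.g. with nohup ... &) from the bootstrap action, since bootstrap actions run before EMR installs Presto:

#!/bin/bash
# Wait until EMR has installed Presto and written its default configuration.
while [ ! -f /etc/presto/conf/config.properties ]; do sleep 10; done
# Raise the per-node query memory limit (value assumed).
sudo sed -i "s/query.max-memory-per-node=.*GB/query.max-memory-per-node=20GB/g" /etc/presto/conf/config.properties
# Optionally shrink the system pool reservation (value assumed).
echo "resources.reserved-system-memory=8GB" | sudo tee -a /etc/presto/conf/config.properties
# Enlarge the JVM heap to match (value assumed).
sudo sed -i "s/^-Xmx.*/-Xmx24G/" /etc/presto/conf/jvm.config
sudo restart presto-server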
As a matter of fact, there are configuration classifications available for Presto in EMR. However, please note that these may vary depending on the EMR release version. For a complete list of the available configuration classifications per release version, please visit 1 (make sure to switch between the tabs according to your desired release version). Specifically regarding jvm.config properties, you will see in 2 that these are not currently configurable via configuration classifications. That being said, you can always edit the jvm.config file manually per your needs.
1: Amazon EMR 5.x Release Versions
2: Considerations with Presto on Amazon EMR - Some Presto Deployment Properties not Configurable
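For the config.properties settings, this means the change can be applied at cluster creation time, for example as below. The release label, instance details, and the 20GB value are placeholders, and the presto-config classification name should be checked against the list for your release; jvm.config, as noted above, has no classification and still has to be edited on the nodes:

aws emr create-cluster \
  --name presto-etl \
  --release-label emr-5.13.0 \
  --applications Name=Presto \
  --instance-type m4.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations '[{"Classification":"presto-config","Properties":{"query.max-memory-per-node":"20GB"}}]'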

Google CloudSQL instance non-responsive, how to get support?

When it comes to databases, we want to leave managing them to the pros, which is why we went for a managed solution in the form of a Cloud SQL 2nd gen DB instance. Today the instance stopped responding. I clicked restart; it has been restarting for hours and is still not responding. I have tried to clone the instance, which is also not responding.
I don't know what else to do, our db is crippled and the service that uses it is down. These things happen, fine.
The thing that shocked me is that I am unable to contact anybody to resolve this problem. I understand that I can pay for a support subscription, at $150 p/m and up. This confuses me, though: the GCloud console UI is not responding, so am I incorrect in assuming I should not have to pay for support for the core product to at least work?
This leads me to my main question, if I want to continue using Google Cloud products in production, do I NEED a support subscription?
The same thing happened to us yesterday. The Cloud SQL instance did not respond for an hour and a half (from 18:00 to 19:30 GMT+1).
We couldn't do anything; we tried to back up the instance to a bucket, but the command returned an error saying that another operation was in progress.
We are a small startup and can't pay for a support plan, but when we signed up for the Cloud SQL service we assumed this kind of situation wouldn't happen.
Honestly, after this I believe that Cloud SQL is not a good option unless you also contract a gold or platinum support plan. It is frustrating that when something fails you cannot do anything, or even report the error.
Try the gcloud command line tool in your active shell, instead of the console UI. Try exporting the data from your SQL instance to a Google Cloud Storage bucket with this command:
gcloud sql instances export <sql-instance-name> gs://<bucket-name>/backup.sql
The SQL instance's service account by default has read and write access to the Google Cloud Storage bucket.
Create a new SQL instance using this command:
gcloud sql instances create <new-sql-instance-name>
Now, add the data to the new SQL instance using this command:
gcloud sql instances import <new-sql-instance-name> gs://<bucket-name>/backup.sql
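If the export or import is rejected because another operation is already in progress (as mentioned above), you can at least inspect what is pending from the CLI; the instance name is a placeholder:

gcloud sql operations list --instance=<sql-instance-name> --limit=10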
You can get free or premium support here. You do not need a subscription to get help; it all depends on your needs and the level of urgency you anticipate for possible future problems.
If you have a recent backup of your database, you may consider re-creating it in another instance, from there.
You may consider posting your issue in the Google Cloud SQL Product Issue Tracker. This way, it will enjoy much better visibility from developers and Google support, without attracting any extra costs.

Missing SQL tab when using AD provider

I've deployed a copy of Opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) but the dashboard provider was still using the previous authentication?
I also discovered a neat option while researching this, though: the GitHub page mentions that you can also specify security at a provider level in the JSON using the AdminGroups and ViewGroups properties!