How to reduce notification duration? - questdb

When running a query via the Web Console, a notification appears.
However, it stays on screen too long, and it blocks my view and gets in the way of other tasks.
How do I configure how long notifications in the Web Console stay visible before they automatically disappear?

At the moment, this is not a user-configurable feature, but the project is open to feature requests via GitHub issues.

Related

Recommended way to run a web server on demand, with auto-shutdown (on AWS)

I am trying to find the best way to architect a low cost solution to provide an on-demand web server for a certain amount of time.
The context is as follows: I have a large amount of data sitting on S3. From time to time, users will want to consult that data. I've written a Flask app that can display the data in a nice way for them. Being poorly written, it really only accepts a single user session at a time. Currently, therefore, they have to download the Flask app and run it on their own machine.
I would like to find a way for users to request a cloud-based web server that would run the Flask app (through a docker container for example) on-demand, and give them access to it quickly, without having to do much if anything on their own machine.
Every user wanting to view the data would have their own web server created on demand (to avoid multiple users sharing the same web server, which wouldn't work with my Flask app).
Critically, and in order to avoid cost, the web server would terminate itself automatically after some (configurable) idle time (possibly with the Flask app informing the user that it's about to shut down, so that they can "renew" the lease).
Initially I thought that AWS Fargate might be a good fit: it can run Docker containers, is quite configurable in terms of the CPU/disk it can get (my Flask app is resource-hungry), and at least on paper could be used in a way that there is zero cost when users are not consulting the data (bar S3 costs, naturally). But it's when it comes to the details that I'm not sure...
How to ensure that every new user gets their own Fargate instance?
How to shut down the instance automatically after idle time?
Is Fargate quick enough in terms of boot time?
The closest I can think of is AWS App Runner. It's built on top of Fargate and provides an intelligent scale-out mechanism (which you are probably not interested in) as well as a scale-to-(almost)-zero capability. The way it works is that when the endpoint is receiving requests and doing work, you pay for the entire Fargate task (CPU/memory) you have selected in the configuration. If the endpoint is doing nothing, you only pay for the memory (note the memory cost is roughly 20% of the entire cost, so it's not scale to 0 but "quasi"). Check out the pricing examples at the bottom of this page.
Please note you can further optimize costs by pausing/starting the endpoint (when it's paused you pay nothing), but in that case you need to create the logic that pauses/restarts it.
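For illustration, here is a minimal boto3 sketch of that pause/restart logic (the service ARN is a placeholder, and the idle-detection trigger, e.g. a small scheduled Lambda, is an assumption you would build yourself):

    import boto3

    apprunner = boto3.client("apprunner")

    # Placeholder ARN -- substitute your own App Runner service ARN.
    SERVICE_ARN = "arn:aws:apprunner:us-east-1:123456789012:service/my-flask-app/abc123"

    def set_paused(should_pause):
        """Pause or resume the service; the caller decides what counts as idle."""
        status = apprunner.describe_service(ServiceArn=SERVICE_ARN)["Service"]["Status"]
        if should_pause and status == "RUNNING":
            apprunner.pause_service(ServiceArn=SERVICE_ARN)
        elif not should_pause and status == "PAUSED":
            apprunner.resume_service(ServiceArn=SERVICE_ARN)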
Another option you may want to explore is using Lambda this way (which would allow you to use the same container image and benefit from Lambda's intrinsic scale to zero). But your comment that "Lambda doesn't have enough power, and the timeout is not flexible" may make that a show stopper.

Detect Presence in AWS AppSync

I am working on an app that relies heavily on detecting when users go offline and go back online. I wanted to do this with AWS AppSync, but I can't seem to find a way to do this in the documentation. Is there a way to do it in AppSync?
Thanks for the question. Detecting presence is not currently supported out of the box, but you can likely build similar features yourself depending on the use case.
For example, a resolver on a subscription field is invoked every time a new device tries to open a subscription. You can use this resolver field to update some data source to tell the rest of your system that some user is currently subscribed. If using something like DynamoDB, you can use a TTL field to have records automatically removed after a certain amount of time and then require a user to "ping" every N minutes to specify that they are still online.
You could also have your application call a mutation when it first starts to register the user as online, then have the application call another mutation when the app closes to register it as offline. You could combine this with TTLs to prevent stale records in situations where the app crashes or something prevents the call to register as offline.
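As a rough illustration of the TTL-based ping pattern (the "presence" table, its "user_id" key, and the "expires_at" TTL attribute are all assumptions), the data-source side could look something like this in Python:

    import time
    import boto3

    # Hypothetical table: configure its TTL on the "expires_at" attribute so
    # DynamoDB removes stale presence records automatically.
    table = boto3.resource("dynamodb").Table("presence")

    def register_ping(user_id, ttl_minutes=5):
        """Mark a user online; they drop offline if they stop pinging."""
        table.put_item(Item={
            "user_id": user_id,
            "expires_at": int(time.time()) + ttl_minutes * 60,
        })

    def is_online(user_id):
        """Check expires_at directly, since TTL deletion can lag behind."""
        item = table.get_item(Key={"user_id": user_id}).get("Item")
        return bool(item and item["expires_at"] > time.time())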
Thanks for the suggestion and hope this helps in the meantime.

Suppress message "The Mapping task failed to run. Another instance of the task is currently running"

I have set up multiple jobs in Informatica Cloud to sync data from Oracle with Informatica objects. The job is scheduled to run every 3 minutes per the business requirements. Sometimes the job runs long due to a secure agent resource crunch, and my team then receives multiple emails like the one below:
The Mapping task failed to run. Another instance of the task is currently running.
Is there any way to suppress these failure emails in the mapping?
This won't get set at the mapping level but at the session or Integration Service level; see the following thread: https://network.informatica.com/thread/7312
This type of error occurs when a workflow/session is already running and you try to re-run it. Use a script to check whether it is already running, and wait if so. If you want to run multiple instances of the same workflow:
In Workflow Properties, enable 'Configure Concurrent Execution' by checking the check box.
Once it is enabled, you have 2 options:
Allow concurrent run with the same instance name
Allow concurrent run only with unique instance names
Notifications configured at the task level override those at the org level, so you could do this by configuring notifications at the task level and only sending out warnings to the broader list. That said, some people should still receive the error-level warning, because if it recurs multiple times within a short period there may be another issue.
Another thought: a batch process that runs every three minutes but takes longer than three minutes to complete is usually an opportunity to improve the design. Often a business requirement for short batch intervals reflects a "near real time" desire. If you also have the Cloud Application Integration service, you may want to set up an event to trigger the batch run. If there is still overlap based on events, you can use the Cloud Data Integration API to create a dynamic version of the task each time. For really simple integrations you could perform the integration in CAI, which does allow multiple instances to run at the same time.
HTH

How to add time based restrictions to prevent a Windows user from logging in?

I am trying to add some modification to the Windows login screen that only allows a user to log in if they have time remaining for that day. I also want to allow them to increase the time via a connection with the wireless device.
I have seen similar things done, with Asus Face Login, and Bluetooth login, and I would have no problem writing the code, but how do I put the restrictions at the login screen?
Am I correct in assuming that this can only be done using C++? If so: I program mainly in Java, so what environments would you recommend for C++?
Any suggestions and links to resources are appreciated!
Thanks
I am not entirely sure, but it could be done as follows:
There is a Task Scheduler in Windows that lets a user run a particular program at logon, so you can develop a program that runs at logon.
The program could work something like this:
First, it gets the current user and checks the time available to that user. This can be done by saving the time spent by the user in an internal database or file.
If the user has crossed the specified limit, it writes the details to the file/database and fires the system("logoff") command; logoff is the Windows command to log a user off.
Further, on each successful login it records the login time, and on logoff it updates the elapsed time in the file/database.
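For what it's worth, a minimal sketch of that idea in Python rather than C++ (the usage-file location, the daily allowance, and the once-a-minute polling are all assumptions):

    import json
    import subprocess
    import time

    USAGE_FILE = r"C:\ProgramData\timelimit\usage.json"  # hypothetical location
    DAILY_LIMIT_SECONDS = 2 * 60 * 60  # e.g. a two-hour daily allowance

    def seconds_used_today():
        try:
            with open(USAGE_FILE) as f:
                return json.load(f).get(time.strftime("%Y-%m-%d"), 0)
        except FileNotFoundError:
            return 0

    def save_usage(seconds):
        with open(USAGE_FILE, "w") as f:
            json.dump({time.strftime("%Y-%m-%d"): seconds}, f)

    # Run at logon (e.g. via Task Scheduler): poll once a minute and end the
    # session once the daily allowance is exhausted.
    used = seconds_used_today()
    while used < DAILY_LIMIT_SECONDS:
        time.sleep(60)
        used += 60
        save_usage(used)
    subprocess.run(["logoff"])  # "logoff" is the built-in Windows command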
This is how I suppose it can be implemented. If you have a better way, please share.

Speeding up page loads while using external APIs

I'm building a site with Django that lets users move content around between a bunch of photo services. As you can imagine, the application makes a lot of API calls.
For example: a user connects Picasa, Flickr, Photobucket, and Facebook to their account. Now we need to pull content from 4 different APIs to keep this user's data up to date.
Right now I have a function that updates each API, and I run them all simultaneously via threading. (All the APIs that are not enabled return false on the second line, so it's not much overhead to run them all.)
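In case it's useful, here is a sketch of that fan-out pattern with a thread pool (the update_* names are stand-ins for the per-service update functions mentioned above):

    from concurrent.futures import ThreadPoolExecutor

    def update_all(user, updaters):
        """Run every per-service updater concurrently; disabled services
        return false immediately, so running them all costs little."""
        with ThreadPoolExecutor(max_workers=len(updaters)) as pool:
            futures = [pool.submit(fn, user) for fn in updaters]
            return [f.result() for f in futures]

    # e.g. update_all(user, [update_picasa, update_flickr,
    #                        update_photobucket, update_facebook])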
Here is my question:
What is the best strategy for keeping content up to date using these APIs?
I have two ideas that might work:
Update the APIs periodically (like a cron job), and whatever we have at the time is what the user gets.
benefits:
It's easy and simple to implement.
We'll always have pretty good data when a user loads their first page.
pitfalls:
we have to make API calls all the time for users who are not active, which wastes a lot of bandwidth
It will probably make the API providers unhappy
Trigger the updates when the user logs in (on a pageload)
benefits:
we save a bunch of bandwidth and run less risk of pissing off the API providers
doesn't require NEARLY the amount of resources on our servers
pitfalls:
we either have to do the update asynchronously (and won't have anything on first login), or...
the first page will take a very long time to load because we're getting all the API data (I've measured 26 seconds this way)
Edit: the design is very light; it has only two images, an external CSS file, and two external JavaScript files.
Also, the 26-second figure comes from the Firebug network monitor running on a machine on the same LAN as the server.
Personally, I would opt for the second method you mention. The first time you log in, you can query each of the services asynchronously, showing the user some kind of activity/status bar while the processes are running. You can then populate the page as you get the results back from each of the services.
You can then cache the results of those calls per user so that you don't have to call the APIs each time.
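A rough sketch of that per-user caching with Django's cache framework (the key scheme and the 15-minute timeout are assumptions):

    from django.core.cache import cache

    def get_service_content(user, service_name, fetch_fn, timeout=15 * 60):
        """Return cached API results for this user/service, refreshing on a miss."""
        key = "api-content:%s:%s" % (user.pk, service_name)
        content = cache.get(key)
        if content is None:
            content = fetch_fn(user)  # the slow external API call
            cache.set(key, content, timeout)
        return content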
That lightens the load on your servers, loads your page fast, and provides the user with some indication of activity (along with incremental updates to the page as their content loads). I think those add up to the best user experience you can provide.