I have a server farm with 2 web servers, 1 app server, and 1 database server.
What I am trying to do from a timer job is to loop through all web applications present in the farm and perform some action against certain web applications.
I have a timer job that is provisioned against the Central Administration web application with a lock type of ContentDatabase passed as a parameter, and I have scheduled this job to run every 2 minutes. The timer job is provisioned by a farm-scoped feature.
The Central Administration site is hosted on the web servers but not on the app server.
My timer job never runs.
In Central Administration, I see the job is scheduled to run from the 2 web servers but not from the app server.
In the SharePoint log on the app server I see one interesting entry: "Job definition TestJob, id 779b3ccf-df47-4c4d-aaea-7b3ae2f6502a has no online instance for service Microsoft.SharePoint.Administration.SPWebService, id db200214-6eb2-4696-bb3b-53cb12119650, ignoring".
If I stop the SharePoint Timer Service on the app server, my timer job runs; after the job has run once, if I start the timer service on the app server again, the job keeps running every 2 minutes as expected. From a blog post I came to know that "for a particular server to be eligible to run the job, it must be provisioned with the service or web app associated with the job", and my app server does not host the web applications.
My questions are:
Why is my timer job trying to run from the app server?
If I pass a specific web server while provisioning the timer job, is that correct? I think the timer job then becomes tied to that particular server, and removing that server or taking it down may stop the timer job from running.
Due to some technical issues we stopped the AWS server, and when we started it again all the delayed jobs were showing in the queue but none of them was running, so I need to start the delayed job worker. When I used the following command I got some issues, which are shown in this picture:
I would like to start a web server on-demand as an inetd "tcp/wait" service which shuts itself down after a programmable period of inactivity.
Many web servers already support inetd "tcp/nowait" mode, but this mode has the disadvantage that a new process needs to be forked for every new connection. It is therefore slower and more resource-intensive than running a dedicated server daemon.
A web server supporting inetd's "tcp/wait" would only be launched by inetd for the first request, then serve any number of requests using the same server instance until no requests occurred for some period of idle time, in which case the server instance automatically terminates and lets inetd start it again once the next period of activity starts.
Such a tcp/wait inetd web server should have approximately the same efficiency as a dedicated (i.e. permanently running) web server during times of activity. However, it will automatically shut down in times of inactivity, saving system resources.
Irregular "anti-demand"-driven shutdowns will also clean up any memory leaks in the web server and possibly in associated FastCGI services (which would terminate together with the web server).
I know that it is already possible to use systemd's socket activation in combination with lighttpd's -i option to implement what I want.
However, I want a solution that also works without systemd, depending on nothing other than a running Internet superserver, no matter how the latter has been started (inetd/xinetd started by sysvinit, runit, manually, or systemd's socket activation replacing inetd/xinetd).
I have a WebJob that is triggered by an Azure Storage Queue. When I test this locally everything works fine. When I publish the WebJob (as part of an Azure WebSite) and then go to the Azure Management Portal and try to start the WebJob, it throws an error.
I had this running earlier, but it was having problems so I deleted the job in the management portal and tried to republish the web site with the web job.
Any suggestions on how to figure out what's going on?
In the old Azure Management Portal there was no clear way I could find to kill the process (i.e. stop the WebJob if one was running). Using the new portal, I looked at all the processes running on the site, and there was the WebJob running 26 threads. After killing the process I was able to start the recently uploaded one.
I'm using Django+celery for my 1st ever web development project, and rabbitmq as the broker. My celery workers are running on a different system from the web server and are executing long-running tasks. During the task execution, the task output will be dumped to local log files on the workers. I'd like to display these task log files through the web server so the user can know in real-time where the execution is, but I've no idea how I should transfer these log files between the workers and the system where the web server is. Any suggestion is appreciated.
Do not move the logs; just log to the same place. It can be any database (relational or non-relational) accessible from both the web server and the Celery workers. You can even create (or look for) an appropriate Python logging handler that saves log records to the centralized storage.
Maybe the solution isn't to move the logs but to aggregate them. Take a look at some logging tools like Splunk, Loggly, or Logscape.
I have developed a web service in C# (ASP.NET) and published it in IIS on Windows Server 2008. A web application takes data from this web service, but from time to time it stops returning anything. After I restart the web service it works normally again. I don't understand why the web service intermittently stops returning results. What can cause this? Any help, please?
Hard to say without knowing any details, but a first step would be checking the event logs.
Set the Idle Time-out for the Application Pool to 0, instead of the default value of 20 minutes, so that there is no timeout when your service is idle.
Using inetmgr:
1. Open IIS Manager. For information about opening IIS Manager, see Open IIS Manager (IIS 7).
2. In the Connections pane, expand the server node and click Application Pools.
3. On the Application Pools page, select the application pool for which you want to specify idle time-out settings, and then click Advanced Settings in the Actions pane.
4. In the Idle Time-out (minutes) box, type a number of minutes, and then click OK.
Using cmd:

appcmd set config /section:applicationPools /[name='string'].processModel.idleTimeout:timeSpan

For example, to set a 30-minute idle time-out for an application pool named Marketing:

appcmd set config /section:applicationPools /[name='Marketing'].processModel.idleTimeout:0.00:30:00

To disable the idle time-out entirely (as recommended above), use a timeSpan of 00:00:00.