I have reinstalled WAMP server and my project is running much faster than before. The only difference is that there are no tmp files in the tmp folder (which was 34 GB!).
Apache maintains website access and error logs that can grow in size very quickly. PHP also has similar logs (if enabled via configuration).
C:\WampDeveloper\Logs
C:\WampDeveloper\Temp
Once Apache log files grow to several hundred MB, performance issues can arise.
Also, the Temp folder holds lots of session and temporary data files that don't get properly cleaned up, which causes its own issues.
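One way to keep this under control (a sketch only; the log paths are assumptions based on the WampDeveloper folders above, and the rotatelogs path is relative to Apache's ServerRoot) is to let Apache rotate its own logs so no single file grows into the hundreds of MB, and to empty the Temp folder while the services are stopped:
# httpd.conf sketch: rotate access/error logs daily with the bundled rotatelogs tool
# (file locations are assumptions; adjust to your layout)
ErrorLog  "|bin/rotatelogs.exe C:/WampDeveloper/Logs/error.%Y-%m-%d.log 86400"
CustomLog "|bin/rotatelogs.exe C:/WampDeveloper/Logs/access.%Y-%m-%d.log 86400" common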
To speed things up, read WAMP is Running Very Slow, which covers these points:
Windows Hosts file
IPv6
Firewall and Anti-Virus software
Power Plan
Local Issues
Your Browser
Clear Your WAMP Log Files
Apache
MySQL
etc...
Every time I change any config in PHP or Apache sections, MAMP asks to restart MySQL (and all the other services).
Is there any way to prevent that? (Using MAMP PRO 4.2.1 on macOS High Sierra)
I shouldn't have to restart MySQL if I add a new entry to /etc/hosts, for instance.
(Besides the fact that MAMP crashes 5 out of 10 times when it tries to restart the whole thing, and 10 out of 10 times can't restart MySQL properly.)
I can only provide this anecdotal experience.
I was having the same issue. I deleted old hosts, going from about 16 down to 6. I also deleted unused SQL databases (16 down to 6). MAMP continued to hang, but eventually pulled out of the hang after the most recent database update.
First I ran a system clean with Cocktail, wiping caches and any system DNS caches, and rebooted. Then, with phpMyAdmin, I deleted old databases, restarted the servers, and then updated the current database project after manually dropping all the tables. I suspected that mysqld was the cause of the hang, but when I examined the process records it was difficult to be sure. I think there are issues with memory swapping and addresses, based on my guesstimate.
I'm about to ditch MAMP for Docker when I get a bit of time to learn Docker.
I am using ColdFusion MX8 server, and one of the scheduled tasks had been running for 2 years, but suddenly, as of 01/12/2014, scheduled tasks are not running. When I browse the file in a browser, the file runs successfully without error.
I am not sure whether there is an update or license expiration problem. I am aware that in the middle of this year Adobe ended support for ColdFusion 8.
The most common cause of a problem like this is external to the server. When you say you browsed to the file and it worked in a browser, it is very important to know whether that test was performed on the server desktop. Knowing that you can browse to the file from your own desktop or laptop is of little value.
The most common source of issues like this is a change in the DNS or network stack that is interfering with resolution. For example, if the internal DNS serving your DMZ suddenly starts serving the "external" address, your server suddenly can't browse to your domain. Or the IP served for the domain in question goes from 127.0.0.1 to some other IP that the server can't access correctly due to a reverse proxy, load balancer, or some other rule. Finally, sometimes Apache or IIS is altered so that an IP that previously was serviced (127.0.0.1 being the most common example) now does not respond.
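A quick way to check this from the server's own desktop (the host name below is a placeholder) is to confirm what IP the server itself resolves for the site and whether it can reach it:
rem Run from a command prompt on the server itself
nslookup www.your-scheduled-site.com
ping www.your-scheduled-site.com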
If it is something intrinsic to the scheduler service, then Frank's advice is pretty good; especially look for "proxy scheduler" entries in the log, as they can give you good clues. I would also log the results of the scheduled task to a file, then check the file. If it exists, then your scheduled tasks ARE running; they are just not succeeding. Good luck!
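For example, a minimal sketch of that kind of breadcrumb at the top of the scheduled template (the log name and file path are assumptions) lets you tell "never ran" apart from "ran but failed":
<!--- Hypothetical: first lines of the scheduled .cfm template --->
<cflog file="scheduledTaskTest" text="Task started at #now()#">
<!--- Optionally also append to a plain text file (path is an assumption) --->
<cffile action="append" file="C:\temp\task_heartbeat.txt" output="Ran at #now()#">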
I've seen the cf scheduling service crash in CF8. The rest of CF is unaffected.
Have you tried restarting the server?
Here are your concerns:
Your File (works since you tested it manually).
Your Scheduled Task (failed).
Your ColdFusion Application (Service) (any changes here?).
Your Server (what about here?).
To test your problem create a duplicate task and schedule it. Leave the other one in place (maybe set your new one to run earlier). Use the same file too. See if it completes.
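A minimal sketch of registering such a duplicate task in code (the task name, URL, and schedule are assumptions; the CF Administrator works just as well):
<!--- Hypothetical duplicate task pointing at the same template --->
<cfschedule action="update"
    task="myTask_duplicate"
    operation="HTTPRequest"
    url="http://127.0.0.1/path/to/the_same_file.cfm"
    startdate="12/01/2014"
    starttime="08:00 AM"
    interval="daily">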
If it doesn't, then you have a larger problem. Since the ColdFusion server sits atop the JVM, there could be something happening there. Things just don't stop working unless something got corrupted or you got compromised. If you hardened your server by rearranging/renaming the file structure to make it more secure, it would break your task.
So going back: if your test schedule works, then determine what is different between the two. Note that you have logging capabilities: Logging abilities for CF8.
If you are not directly in charge of maintaining this server, then I would recommend asking around to see whether there was recent maintenance and, if so, what was done to the server.
I've got some code that runs in Enterprise Guide (SAS Enterprise build, Windows locally, Unix server), which imports a large table via a local install of PC Files Server. It runs fine for me, but is slow to the point of uselessness for the system tester.
When I use his SAS identity on my windows PC, the code works; but when I use my SAS identity on his machine it doesn't, so it appears to be a problem with the local machine. We have the same version of EG (same hot fixes installed) connecting to the same server (with the same roles) running the same code in the same project, connecting to the same Access database.
Even a suggestion of what to test next would be greatly appreciated!
/* Libref to the Access database via the local PC Files Server */
libname ACCESS_DB pcfiles path="&db_path"
    server=&_CLIENTMACHINE
    port=9621;

/* Read the source table, keeping only the variables needed */
data permanent.&output_table (keep=[lots of vars]);
    format [lots of vars];
    length [lots of vars];
    set ACCESS_DB.&source_table (rename=([some awkward vars]));
    if [var]=[value];
    [build some new vars, nothing scary];
run;
Addenda: The PC Files Server is running on the same machine where the EG project is being run in both cases; we both have the same version installed. &db_path is the location of the Access database, on a network file store that both users can access (in fact, other, smaller tables can be retrieved by both users in a sensible amount of time). This server is administered by IT and is not a server we as the business can get software installed on.
The resolution of your problem will require more details and is best handled through a dialogue with SAS Tech Support. The "online ticket" form is here, or you can call them by phone.
For example, is the PCFILES server running locally on both your machine and your tester's machine? If yes, is the file referenced by &db_path on a network file server, and does your tester have similar access (meaning both of you can reach it the same way)? Have you considered installing the PC Files Server on your file server rather than on your local PC? Too many questions, I think, for a forum like this. But I could be wrong (it's happened before); perhaps others will have a great answer.
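For what it's worth, if the PC Files Server were moved to a central Windows host, the only change in the code would be the SERVER= value; a sketch, with the host name below being an assumption:
/* Hypothetical: PC Files Server running on a central host instead of the client */
libname ACCESS_DB pcfiles path="&db_path"
    server="pcfiles-host"
    port=9621;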
After editing files in my development environment and saving them to my guest OS (CentOS), the Guest delivers a cached version of the edited files (.css or .js).
At first I thought this was a local browser caching issue, but I've deleted, disabled, incinerated, etc., every local cache in all 4 browsers and on the laptop (non-host) hard drive.
In addition, I tested using a machine (that has never accessed the guest) and the guest still delivered the unedited files.
I've then disabled all caching modules in Apache, so I'm pretty sure (but not positive, and open to any suggestions) that Apache is not the culprit.
Either my guest or my host is caching files somehow/somewhere and I can't figure out how or where.
This has been a very frustrating 48 hours - any help would be greatly appreciated.
Background:
VirtualBox v 4.0.12
Guest: CentOS 5.5/LAMP (Being used as a local development server) Internal IP 192.168.12.62
Host: Windows Server 2008 (Network Config: Bridged) Internal IP 192.168.12.42
Development files are stored on the Host and shared with the Guest via "Shared Folders"
Application development is done on a third machine (laptop) connected to the host via mapped network drive. Internal IP 192.168.12.32
I've configured Apache with numerous virtual IPs, 192.168.12.150-180.
Please let me know if I've left anything out.
This forum post confirms the problem. Here's the bug report. The vboxsf shared-folder driver doesn't play nicely with sendfile. The Apache workaround, as previously mentioned:
EnableSendFile Off
For the curious, here are the SendFile docs.
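As a sketch of where the directive goes, it can live in the global config or just in the vhost that serves the shared folder; EnableMMAP Off is often suggested alongside it for vboxsf shares (the DocumentRoot below is an assumption):
# Hypothetical vhost serving content from the VirtualBox shared folder
# (DocumentRoot path is an assumption)
<VirtualHost 192.168.12.150:80>
    DocumentRoot "/mnt/shared/project"
    EnableSendfile Off
    EnableMMAP Off
</VirtualHost>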
First, my setup that is used for testing purposes:
3 Virtual Machines running with the following configuration:
MS Windows 2008 Server Standard Edition
Latest version of AppFabric Cache
Each one has a local network share where the config file is stored (I have added all the machines in each config)
The cache is distributed but not highly available (we don't have the Enterprise version of Windows).
Each host is configured as lead, so according to the documentation at least one host should be allowed to crash.
Each machine has the website I am testing installed, and local cache configured.
One Linux machine that is used as a proxy (Varnish) to distribute the traffic for testing purposes.
That's the setup; now on to the problem. The scenario I am testing simulates one of the servers crashing and then bringing it back into the cluster. I have problems both with the server crashing and with bringing it back up. These are the steps I am using to test it:
Direct the traffic with Varnish on the Linux machine to one server only.
Log in to make sure there is something in the cache.
Unplug the network cable for one of the other servers (simulates that server crashing)
Now I get a cache timeout and a service error. I want the application to still be up on the servers that didn't crash, and it takes some time for the cache to come back up on the remaining servers. Is that how it should be? Plugging the network cable back in and starting the host causes a similar problem.
So my question is whether I have missed something. What I would like to see happen is that if one server crashes, the cache should still remain up, since a majority of the lead hosts (2 of 3) are still up, and starting the crashed server again should bring it back gracefully into the cluster without causing any problems on the other hosts. But that might not be how it works?
I ran through a similar test scenario a few months ago where I had a test client generating load on a 3 lead-server cluster with a variety of Puts, Gets, and Removes. I rebooted one of the servers multiple times while the load test was running and the cache stayed online. If I remember correctly, there were a limited number of errors as that server rebooted, but overall the cache appeared to remain healthy.
I'm not sure why you're not seeing similar results, but I would try removing the Varnish proxy from your test and see if that helps.
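It can also help to watch the cluster from PowerShell while you pull the cable, to see whether the surviving hosts actually report UP; a minimal sketch using the standard AppFabric caching cmdlets:
# Run in an elevated PowerShell on one of the cache hosts
Import-Module DistributedCacheAdministration
Use-CacheCluster              # connect to the cluster from this host
Get-CacheHost                 # service status of every host in the cluster
Get-CacheClusterHealth        # per-cache health of the named caches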