ColdFusion 2018 scheduled tasks not working

We have recently begun migrating to ColdFusion 2018 Enterprise, but have found that the scheduled tasks do not work. Although the relevant cfm file works if run in the browser on the same server, if we try to run it as a scheduled task then it does not work (although it will say it has run successfully on the screen).
The log file just contains a single line for each run:
Information","DefaultQuartzScheduler_Worker-5","11/20/20","12:48:18","","Task default.takename triggered."
From what I understand, there should be additional lines for the HTTP request etc., but there are none.
We have tried various usernames and passwords, including admin accounts, to make sure it is not a permissions issue, but nothing seems to make any difference.
We have also tried outputting to a file, but nothing ever populates the file, although it does update the file's modified date with the date/time the task ran (or create a new file if necessary).
Does anyone have any experience with this type of problem?

This ended up being an IIS permissions issue. We resolved it by enabling anonymous authentication both for the directory containing the relevant cfm files and for the "jakarta" directory that I believe ColdFusion's IIS connector uses for some integration requirements. Scheduled tasks then ran as expected.
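For reference, here is a sketch of making the same change from the command line with IIS's appcmd; the site name and application path ("Default Web Site", "myapp") are placeholders for your own:

%windir%\system32\inetsrv\appcmd set config "Default Web Site/myapp" -section:system.webServer/security/authentication/anonymousAuthentication /enabled:"True" /commit:apphost
%windir%\system32\inetsrv\appcmd set config "Default Web Site/jakarta" -section:system.webServer/security/authentication/anonymousAuthentication /enabled:"True" /commit:apphost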

Lost ability to edit code in AWS Lambda console

I have several Lambdas deployed to AWS, all created as single-file functions in the console. All was working fine until I flushed my caches and cookies in Chrome. Since then, the function code no longer shows up in the browser (any browser; I tried three). Also, all the Lambda functions now think they are zip-file based, so I cannot re-enter the code from my git repo. The functions still operate properly; I just cannot edit them.
All new functions I create are also not editable in the console. Something general/global has changed, not specific to any one function.
What can cause this? And across all browsers?
Most importantly, how can I fix this?
You can download your code as a zip file if you click on Actions > Export Function and then Download deployment package. Maybe re-uploading the packages will fix your issue.
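If the console export is also misbehaving, the same package can be pulled down (and pushed back up) with the AWS CLI; the function name here is a placeholder:

# Print a presigned URL for the deployment package, then download it with curl/wget
aws lambda get-function --function-name my-function --query 'Code.Location' --output text
# Re-upload a fixed package if needed
aws lambda update-function-code --function-name my-function --zip-file fileb://my-function.zip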

Flows Disappeared from Project

I've been using Dataprep for months, and have a lot of different flows built in one of my projects. I was working with it this morning, but now when I log in, the project in Dataprep is blank, like I'm a brand new user. I'm starting to panic because months of work has vanished! Does anyone have any suggestions on what to do?
Things I've tried without success:
I switched into a different project and I can see that project's flows listed.
Logged out/in.
Restarted browser.
Thank you for your help, you are correct. It turns out we received an email from Google with the subject "[Action Required] Please migrate off JSON-RPC and Global HTTP Batch Endpoints" (specifically storage#v1). We were not using this API in the solutions we developed within this project, so one of our developers deactivated it. It showed the affected dependencies, which included the Dataflow API. DataPrep was not disabled, nor did it need to be re-enabled before accessing it again...it just lost its metadata, as both Ali T and James commented.
Google Cloud Support recommends exporting the recipes and flows (manually, I believe) as the best way to prevent losing DataPrep work in the future.
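For what it's worth, if a needed API was disabled this way, re-enabling it is a one-liner with the gcloud CLI; the project ID is a placeholder:

gcloud services enable dataflow.googleapis.com --project=my-project-id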

vQmod fails to write to vqcache directory on OpenCart

Can anyone suggest why vQmod fails to write cache files and instead writes an empty file named vq2-C to /vqmod/vqcache?
Environment
Windows Server 2012
Plesk Panel 12.5
PHP 5.3.29
MySQL 5.6.26
OpenCart 2.1.0.1
vQmod 2.6.1
Issue
vQmod fails to save modifications to /vqmod/vqcache/vq2-*.php
On each page load it applies modifications specified in /vqmod/xml/*.xml and writes an empty file named vq2-C to /vqmod/vqcache.
Background
The affected website was migrated from another Windows box with similar configuration.
In short, the old server ran Plesk Panel 12.0; the new server is Plesk 12.5, so it has minor updates to software versions.
Both sites run on PHP 5.3.29, and the new server follows OWASP recommendations more closely, so it has more PHP functions disabled, e.g. fopen_with_path.
Investigation so far
Running the vQmod installer again reports: VQMOD ALREADY INSTALLED!
File permissions
/vqmod/logs and /vqmod/vqcache have modify permissions and files are written there. Permissions are applied through Plesk Panel and checked over remote desktop; enabling global write permissions on the web root through Plesk does not change anything.
Logs
vQmod logs have no useful information; only skipped files are noted, e.g. VQModObject::parseMods - Could not resolve path for [ catalog/language/english/module/featured.php] (SKIPPED).
No php_error.log files are generated.
Failed Request Tracing does not pick up any issues.
Tests
All /vqmod/xml files have been removed except vqmod_opencart.xml and one that modifies column_left.tpl. These modifications are applied successfully but no cache files are generated in /vqmod/vqcache.
If I remove /vqmod/checked.cache and /vqmod/mods.cache the files are regenerated on next page load.
vQmod versions - rolled back to 2.5.1 but the issue persists.
Other considerations
When one particular vQmod modification is enabled, page load time is unacceptably slow (up to 20 seconds). The modification displays the first 4 products from subcategories on the parent category page. I've not gone through the code yet but assume it's hitting the database pretty hard.
On the original server, page load was under 2 seconds. I doubt this is related to the cache issue, as that seems to be a permissions problem.
I had a similar issue with mine and later found out that it was caused by directory permissions; in this case it might have been caused by the move. Set permissions on the vqcache folder recursively to 777. It worked for me.
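On a Windows/IIS box like the one described here, the rough equivalent of chmod 777 is granting the worker-process identity modify rights. A sketch with icacls, assuming the default IIS_IUSRS group and a placeholder path:

icacls "C:\inetpub\vhosts\example.com\httpdocs\vqmod\vqcache" /grant "IIS_IUSRS:(OI)(CI)M" /T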
My apologies, I should have updated this sooner.
The issue is a case-sensitive preg_replace: on Windows, the drive-letter case returned by getCwd() can differ from the case in $sourcePath, so the working-directory prefix is never stripped. The cache filename is then built from the full path and appears to get cut off at the invalid ":" character, which would explain the empty vq2-C file.
In short, changing line #120 of vqmod.php from
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~', '', $sourcePath);
to
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~i', '', $sourcePath);
means the /vqmod/vqcache/vq2-*.php cache files are written and the website runs as normal.
Explained in more detail at https://github.com/vqmod/vqmod/issues/81
I don't think the preg_quote call should have the i in its delimiter argument, but it's in the original code so I left it in.

ColdFusion 8 scheduled task not running?

I started a job as a web developer at a company a few months ago, managing a bunch of ColdFusion applications among other things. Apparently a scheduled task was set up many years ago and worked fine until it stopped working under one of the previous web developers, a couple of years ago. No one knows why it stopped working, but it is now my job to fix it. This is my first job as a web developer, I didn't know CF when I started (I barely knew it existed), and I only started learning about scheduled tasks this morning, so just know that I am a total newbie.
The file is a basic one: it just updates a table in the database. If you run the URL in the browser (which is what they have been doing for the past couple of years), it runs fine and everything is updated. The scheduled task, which was set to run every night, has not been doing the update. I've tried turning on the log in CF Admin, setting the task to run at various times this morning, and also just telling it to run manually, and according to the log it is executing (with no errors), but the update never happens. I tried commenting out most of the file and just telling it to send a basic e-mail, with no variables or anything, but I got the same result.
Any ideas? I have no idea what to try from here. I tried looking for a solution online, but the only post I found similar to my situation is this, where people seem to be suggesting that the issue may be variables that are not available to the scheduler:
coldfusion scheduled task not sending emails
There are no variables on my page right now, though. I tried running the task via CFSCHEDULE, per the suggestion on that page, but I got the same result as before. Some of the other suggestions (server monitor/FusionReactor/cflog) I just plain don't know how to do, so I have not tried those.
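(For reference, running a task on demand with the cfschedule tag looks roughly like this; the task name below is a placeholder, not the actual task from this question:)

<cfschedule action="run" task="myNightlyTask">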
Edit: Right now, this is the only code in the page which is not commented out:
<cfmail
to="[e-mail address]"
from="[e-mail address]"
Subject="is it running at all?">
Is it running?
</cfmail>
Edit 2: Okay, now I've got something like this before and after the code for the e-mail:
<cflog
text = "before e-mail"
application = "yes"
log = "Scheduler"
type = "information">
I see the log messages if I actually go to the URL for the file (and the e-mail is sent as well), but not if I tell it to run the scheduled task from CF admin. Because the e-mail sends when I open the file in the browser, I don't think it is a problem with the mail server.
Edit 3: Yes, the e-mail addresses are plain, hard-coded strings.
I'm not exactly sure what you mean by "covered" by an Application.cfm file though. There is an Application.cfm file in the top-level of the site, but not within this particular sub-directory. There are a number of Application scope variables, but none that are used in the file as it is now.
Edit 4: Thank you for the explanation. As I said, total n00b when it comes to CF, so I appreciate the help. The Application.cfm page for this application checks to see if you are logged in and, if you are not, redirects you to the login page. Could that be the issue?
Edit 5: YAY! It seems like that was the issue. Thank you thank you thank you! Leigh, please submit that as an answer so that I can choose it. You are my hero!
(From the comments)
A long shot, but is your scheduled task inside a directory covered by an Application.cfm/Application.cfc file? The reason for asking is that the code inside the parent Application.cfm file executes first, before your .cfm script. Is there any code inside the Application.cfm file that aborts a request or redirects (such as a permissions check)?
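To illustrate the failure mode, here is a minimal sketch (not the actual application code; the session key and file name are assumptions) of how a login guard in Application.cfm can silently break a scheduled task. The scheduler's request carries no logged-in session, so it is redirected before the task script ever runs, while the log still reports success:

<!--- Application.cfm: executes before every request in this directory --->
<!--- (assumes session management is enabled via cfapplication) --->
<cfif NOT structKeyExists(session, "loggedIn")>
    <!--- A scheduled task has no logged-in session, so it gets bounced here --->
    <cflocation url="login.cfm" addtoken="no">
</cfif>

One possible fix is to exempt the task's script from the guard, e.g. by also testing listLast(cgi.script_name, "/") against the script's name (a hypothetical "nightly_update.cfm"), or simply to move the script outside the guarded directory.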

Is there an ideal way to move from Staging to Production for ColdFusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites each into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site was QA'd, the code was committed, the production server's working directory ran an SVN update, and that triggered a code copy from the working directory to the actual live code. This worked fine, but it had many moving parts and still required some form of server access on each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind (a sketch of these steps in Ant follows the list):
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc.; the idea being one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
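As referenced above, here is a minimal Ant sketch of those steps, assuming a command-line svn client on the path; the repository URL, paths, and version number are all placeholders:

<project name="site-build" default="package">
    <property name="version" value="1.0.0"/>
    <property name="repo" value="https://svn.example.com/mysite"/>

    <!-- Tag the build in SVN, then export a clean copy (no .svn metadata) of that tag -->
    <target name="export">
        <exec executable="svn" failonerror="true">
            <arg value="copy"/>
            <arg value="${repo}/trunk"/>
            <arg value="${repo}/tags/${version}"/>
            <arg value="-m"/>
            <arg value="Tag build ${version}"/>
        </exec>
        <exec executable="svn" failonerror="true">
            <arg value="export"/>
            <arg value="${repo}/tags/${version}"/>
            <arg value="build/site"/>
        </exec>
    </target>

    <!-- One artifact to hand to QA, and to production on approval -->
    <target name="package" depends="export">
        <mkdir dir="dist"/>
        <zip destfile="dist/site-${version}.zip" basedir="build/site"/>
    </target>
</project>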
Move whole code bases over as a build, rather than just changed files. This way you know that what is put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base and overlays per site. Export core, then export the site-specific overlay over it. This ensures any core updates that the site-specific changes don't override make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core & site combination. Track the version numbers of core and site as part of a given build.
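A hedged sketch of that overlay step in Ant, assuming the core and site trees have already been exported to the placeholder paths shown:

<!-- Lay the exported core down first, then the site-specific overlay on top -->
<copy todir="build/site" overwrite="true">
    <fileset dir="build/core"/>
</copy>
<copy todir="build/site" overwrite="true">
    <fileset dir="sites/siteA"/>
</copy>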
If your software is wrapped in an installer (the Nullsoft installer NSIS, for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See the deployment notes below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, plus whatever credentials and connection information are necessary to make that happen, most likely via FTP. Once the file is transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
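One possibility for that last piece, offered as an assumption rather than something tested here: Ant's optional sshexec task (which needs the JSch library on Ant's classpath) can run the extraction on the remote box. Host, user, key, and paths below are placeholders:

<!-- Unzip a previously uploaded build on the production server -->
<sshexec host="prod.example.com"
         username="deploy"
         keyfile="${user.home}/.ssh/id_rsa"
         trust="true"
         command="unzip -o /var/www/builds/site-1.0.0.zip -d /var/www/site"/>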
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying files around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised at the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.
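For instance, the "calling a web address" step is Ant's built-in get task; the URL here is a placeholder:

<!-- Hit a cache-flush URL after deployment and keep the response for the build log -->
<get src="https://www.example.com/admin/flushcache.cfm" dest="build/flush-response.html"/>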