ColdFusion CFPRINT and UPS labels

I am trying to use ColdFusion's CFPRINT to print UPS labels to a network printer. The source labels (PNG files) are fine: I can print them locally to the Zebra printer and they print and scan wonderfully. The barcodes produced by CFPRINT, however, are of such poor quality that a barcode scanner cannot read them. My research shows that ColdFusion uses the JPedal Java library, which resizes the images to 72 DPI, and that is just not crisp enough for a scanner.
I read about a JPedal setting, org.jpedal.upscale=2, but I have no clue as to where you would set it.
Any suggestions on how to fix this CFPRINT resolution issue in ColdFusion?

(Just to add a bit more detail to the comments)
That is a JVM argument. There are several ways to apply it:
Add the setting to your jvm.config file manually. Back up the file first. Then add -Dorg.jpedal.upscale=2 to the end of the java.args section. Save the changes and restart the CF server. Do not skip the backup step! Errors in the jvm.config file can prevent the server from starting, so it is important to have a good copy you can restore if needed.
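For illustration, the java.args line in jvm.config would end up looking something like this (the existing flags vary by installation and are shown here purely as placeholders; the only addition is the final -Dorg.jpedal.upscale=2):
# the flags before the new setting are placeholders - keep whatever your file already has
java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -Dorg.jpedal.upscale=2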
Open the CF Administrator and select Server Settings > Java and JVM > JVM Arguments. Add -Dorg.jpedal.upscale=2 to the end of the arguments. Save the settings and restart the CF server.
Again, I would strongly recommend making a backup of the jvm.config file first. As @Mark noted in the comments, some versions of CF have been known to mangle the jvm.config file, which could prevent the server from starting. But as long as you have a good backup, simply restore it and you are good to go.
IIRC, you could also set the property at runtime, via code. However, timing will be more of a factor. Their API states system properties must be set before accessing JPedal. The docs are not clear on exactly what that means. However, the implication is that the system property is only read once, so if you set it too late, it will have no effect.
// untested
// Set the JPedal upscale property at runtime. This must run early,
// before anything touches JPedal (i.e. before the first CFPRINT call).
sys = createObject("java", "java.lang.System");
// System.setProperty updates the live system properties directly,
// so there is no need to fetch and replace the whole Properties object.
sys.setProperty("org.jpedal.upscale", "2");
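If you do go the runtime route, one option (an assumption on my part, not something the docs spell out) is to run that snippet in onApplicationStart() of your Application.cfc, so it executes once, well before the first CFPRINT call.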
Side note, I was not familiar with that setting, but a quick search turned up the CF8 Update 1 Release Notes, which mention that this setting "improves sharpness, but it also doubles the image size" and increases memory usage. Just something to keep in mind.

Related

Configuring Jetty to do console-capture and requestlog to a directory outside of jetty.base?

Looking at the docs, all the logging paths specified for console-capture and requestlog are relative to jetty.base, normally ${jetty.base}/logs. That's OK for many purposes, but I really want logs to go into /var/log/jetty, just like a lot of other processes do. I've tried setting this in console-capture as /var/log/jetty, but that just tries to save log files in ${jetty.base}/var/log/jetty, which isn't what I need.
Is there some way to do this? I'm looking for the simplest possible approaches to saving logs. This is the last thing I need to do before my Jetty installation is fully in production. Overall it's been great. This is all with Jetty 9, latest release, on Ubuntu.
Start by not using console-capture.
You have progressed beyond the limited scope of console-capture with your requirement.
You'll want a formal logging framework; pick one, like "logback" (which the Jetty devs recommend), java.util.logging, or log4j.
Use one of the logging-* modules to set up Jetty's server classpath to start using that logging library.
Now configure that logging library (for example, if you are using "logback", the file ${jetty.base}/resources/logback.xml is what you configure; a sketch follows below).
Finally, configure your access logging to use slf4j.
Boom, all of your logging is now going to your logging library of choice, and its configuration can be used to slice / dice / roll over / filter / etc. the logging in any way you want.
You can have it split into different logging output files, combine them into one, roll on different rules (size, number of lines, duration, time, etc).
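As a rough sketch (the file names, paths, and rollover policy here are assumptions, not Jetty defaults), a logback.xml that sends everything to /var/log/jetty with daily rollover might look like:
<!-- hypothetical ${jetty.base}/resources/logback.xml; paths are illustrative -->
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/jetty/jetty.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- roll daily, keep 30 days of history -->
      <fileNamePattern>/var/log/jetty/jetty.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>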
Definitely made some progress on this. For some reason, console redirect was taking the absolute path correctly while the request logger was not. For the latter, I made my configuration relative: ../../../var/log/jetty
This seems like a clunky way to do it, but it does seem to work. I'm still getting a failure on startup, but weirdly enough it's running fine and I don't see exceptions, so I need to figure that out now.

web.config vs. text file for storing a comma-separated value

We have a collection of VB.NET / IIS web services on some of our servers, and they have web.config files in the websites' root directories that they already read configuration from. There is a new setting that needs to be added; it will immediately be quite a bit longer than the others, and it will only grow. It's essentially a comma-separated value, and I want to keep it in a configuration file of some sort.
At first I started doing this with a text file, but there was a problem with that. The text file's contents could change while web service threads and processes are running, so they would essentially need to re-read the file every time they accessed its values. I thought about some sort of caching, but unless the web services are completely restarted each time the file is updated, caching would prevent updates to the file from taking effect immediately. And re-reading a text file on every request is slow...
Then came the idea of putting that value in web.config, along with the other settings the services already use. When web.config is altered, the changes can be cached in code, and they take effect immediately. However, web.config is, well, web.config; it's not just a trivial text file that you read from in code. IIS treats web.config in a special manner.
I'm tempted to think any negative consequences of putting a comma-separated value in web.config would be outweighed, compared to storing it in a text file (or a database, which probably can't be used for this anyway), but I guess I had better ask.
What are the implications of storing a possibly lengthy comma-separated value in web.config, instead of in its own little text file? Is either file a particularly good or bad idea? To me, it seems like web.config would be easy to work with, without having to re-read the file over and over, but there's certainly more to it than the average user is aware of. Thanks!
I recommend using the Application Cache for this. You can insert the parsed values into the cache with a dependency on the file, so the cached copy is invalidated automatically as soon as the file changes; that avoids both the per-request re-read and the stale-data problem:
http://msdn.microsoft.com/en-us/library/vstudio/6hbbsfk6(v=vs.100).aspx

Logging Etiquette

I have a server program that I am writing. In this program I log a lot. Is it customary in logging (for a server) to overwrite the log of previous runs, append to the file with some sort of new-run header, or create a new log file (it won't be restarted too often)?
Which of these solutions is the way of doing things under Linux/Unix/MacOS?
Also, can anyone suggest a logging library for C++/C? I need one, regardless of the answer to the above question.
Take a look in /var/log/ ... you'll see that files are structured like:
serverlog
serverlog.1
serverlog.2
This is done by logrotate, which is called from a cron job. But everything is simply in chronological order within the files, so you should just append to the same log file each time and let logrotate split it up if needed.
You can also add a configuration file to /etc/logrotate.d/ to control how a particular log is rotated. Depending on how big your logfiles are, it might be a good idea to add information about your logging there. You can look at the other files in that directory to see the syntax.
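As a sketch (the file name, log path, and rotation settings below are placeholders, not anything your system already has), a hypothetical /etc/logrotate.d/myserver entry could look like:
# hypothetical /etc/logrotate.d/myserver: rotate weekly, keep 4
# compressed copies, and don't complain if the log is missing or empty
/var/log/myserver/serverlog {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}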
This is a rather complex issue. I don't think that there is a silver bullet that will kill all your concerns in one go.
The first step in deciding what policy to follow would be to set your requirements. Why is each entry logged? What is its purpose? In most cases this will result in some rather concrete facts, such as:
You need to be able to compare the current log with past logs. Even when an error message is self-evident, the process that led to it can be determined much faster by playing spot-the-difference, rather than puzzling through the server execution flow diagram - or, worse, its source code. This means that you need at least one log from a past run - overwriting blindly is a definite No.
You need to be able to find and parse the logs without going out of your way. That means using whatever facilities and policies are already established. On Linux it would mean using the syslog facility for important messages, to allow them to appear in the usual places.
There is also some good advice to heed:
Time is important. Not only because there's never enough of it, but also because log files without proper timestamps for each entry are practically useless. Make sure that each entry has a timestamp - most system-wide logging facilities will do that for you. Also make sure that the clocks on all your computers are as accurate as possible - using NTP is a good way to do that.
Log entries should be as self-contained as possible, with minimal cruft. You don't need to have a special header with colors, bells and whistles to announce that your server is starting - a simple MyServer (PID=XXX) starting at port YYYYY would be enough for grep (or the search function of any decent log viewer) to find.
You need to determine the granularity of each logging channel. Sending several GB of debugging log data to the system logging daemon is not a good idea. A good approach might be to use separate log files for each logging level and facility, so that e.g. user activity is not mixed up with low-level data that is only useful when debugging the code.
Make sure your log files are in one place, preferably separated from other applications. A directory with the name of your application is a good start.
Stay within the norm. Sure you may have devised a new nifty logfile naming scheme, but if it breaks the conventions in your system it could easily confuse even the most experienced operators. Most people will have to look through your more detailed logs in a critical situation - don't make it harder for them.
Use the system log handling facilities. E.g. on Linux that would mean appending to the same file and letting an external daemon like logrotate handle the log files. Not only would it be less work for you, it would also automatically maintain any general logging policies as a whole.
Finally: always copy important log data to the system log as well. Operators watch the system logs. Please, please, please don't make them look in other places, just to find out that your application is about to launch the ICBMs...
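To make the syslog suggestion concrete for C/C++ on Linux, the classic syslog(3) API is enough to get started; a minimal sketch (the program name, port, and messages are made up for illustration):
// sketch: send important messages to the system log via syslog(3)
#include <syslog.h>

int main() {
    // identify the program ("MyServer" is a placeholder), prepend the
    // PID to each entry, and log under the daemon facility
    openlog("MyServer", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "starting at port %d", 8080);
    syslog(LOG_ERR, "could not open config file: %s", "/etc/myserver.conf");
    closelog();
    return 0;
}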
https://stackoverflow.com/questions/696321/best-logging-framework-for-native-c
For the logging, I would suggest writing to a log file and cleaning it out at some regular interval to avoid it growing too fat. Overwriting the logs of previous runs is usually a bad idea.

Setting the label for a Windows network mapping

Is it possible to give a network drive mapping (as created with the WNetAddConnection functions or the "Map network drive..." GUI) a label other than the default "<Target Name> (<Target Path>) (<Drive Letter>:)" one?
I tried giving SetVolumeLabel a go, but this always fails, and I see nothing in the WNet APIs to specify the display label.
This isn't a 100% solution but it's more of an answer than a comment...
If you rename a mapped network drive through the GUI (by right-clicking on it and going to 'Rename'), it adds a value to the registry. Reading around on various sites (notably this one) it looks like Windows may sporadically delete this value by itself, so this may not be a permanent solution...
I have just done it manually through regedit and it worked in the GUI, so I see no reason why it shouldn't work programmatically as well.
Add a string value called _LabelFromReg with a value of whatever you want the label to be to the registry key
HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\##<server-name>#<share-name>
This key should already exist if you have already created the share.
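To do the same thing programmatically, a plain registry write should suffice; here is a minimal Win32 sketch (the server/share names shown in the example key and the label text are placeholders):
// sketch: set the _LabelFromReg value under a drive's MountPoints2 key
#include <windows.h>
#include <cwchar>

bool SetMappedDriveLabel(const wchar_t* subKey, const wchar_t* label) {
    // e.g. subKey = L"Software\\Microsoft\\Windows\\CurrentVersion\\"
    //               L"Explorer\\MountPoints2\\##myserver#myshare"
    // ("myserver" / "myshare" are placeholders for your actual share)
    HKEY key;
    if (RegOpenKeyExW(HKEY_CURRENT_USER, subKey, 0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return false;
    // REG_SZ data must include the terminating null, counted in bytes
    const DWORD bytes = static_cast<DWORD>((wcslen(label) + 1) * sizeof(wchar_t));
    const LONG rc = RegSetValueExW(key, L"_LabelFromReg", 0, REG_SZ,
                                   reinterpret_cast<const BYTE*>(label), bytes);
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}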
Apparently (see the link above) you then need to make that key read-only to prevent the OS from changing it back at will - I don't know how you would do that programmatically, but I'm sure it can be done.
I know there are huge gaps in this answer, but maybe it's a poke in the right direction?

Fusebox 5 issues when replacing files

Today, no matter what I did, my application just would not recognise a change I made to a file I uploaded. I even put a cfabort at the top of the page and it was simply ignored.
Now, this is a production server, so there are some things I normally have to do for the Fusebox framework to pick up new pages. However, all the usual processes failed, and I tried numerous others as well. Let me list them:
Normal Process:
&fusebox.parseAll=1&fusebox.password=whatever <- Did not work
&fusebox.load=1&fusebox.password=whatever <- Did not work
Other things I tried:
* changed mode from production to development-full-load <- Did not work
* called onApplicationStart to reset app <- Did not work
* changed the application name to reset app <- Did not work
* deleted parsed folder and regenerated <- Did not work
No matter what I did (there may have been more that I just don't recall at present), nothing would refresh the page. The only thing that worked, after I was at my wits' end, was to stop the Railo server, restart it, and then rerun the first thing I had tried:
&fusebox.parseAll=1&fusebox.password=whatever
That worked. So my only assumption can be that somehow, somewhere in one of the applications, the cached code was being used to regenerate the parsed files instead of the actual updated files.
Has anyone experienced this before, and do you have any solutions to avoid it? I cannot keep restarting my production application just to update a changed file.
Thanks
From what you've said, it sounds like Trusted Caching may be turned on. That's an odd name, but it basically means "I trust that these files will not change, so don't bother checking". The main thing is that Railo doesn't check your cfm/cfc files for changes, which is faster, but of course very annoying when you do make changes.
On Railo, that can happen at the per-mapping level, so the first thing is to check all your mappings to see if the "Trusted" option is enabled. Unless your site has enough traffic that it's beneficial, it's probably more hassle than it's worth for a Fusebox app, so for any relevant mappings, unless you specifically need it, go ahead and disable it.
There is also a similar global caching option: in the Railo Web Admin, go to Settings > Performance/Caching; most likely you want "Inspect Templates" set to "Once". If it is set to "Never", that is the same as the Trusted cache, which again is faster but not best for a changing site.
However, you may have noticed there is a "Clear template cache" button below it - if you prefer to keep the setting on "Never", you can press this button each time the code changes, and it will rebuild the cache with the latest files.