MXUnit mocking permission denied - ColdFusion

I was finally able to get MXUnit and mocking working on my local Windows install, but after the sysadmin installed it on our Linux server I get the following error, and only when I use mocking. It works fine for another app that does not require mocking.
Offending code:
mockLogger = getMockBox().createMock('coldbox.system.logging.Logger');
mockLogger.$("info").$("debug").$("warn").$("error");
model.$property(propertyName="logger", mock=mockLogger);
Error:
/shared/coldbox/system/testing/stubs/9DA00BFE-CBB2-164D-DAB9269585B3E317.cfm (Permission denied)
Is there something that I should be setting in my test/Application.cfc?

The error occurs because MXUnit / MockBox is trying to create the file specified, but CF doesn't have permission to write to that location.
The usual fix is to update the permissions on that stubs directory so that CF can write and access the files there. (Use chown/chmod, or ask the sys admin to do it.)
The other option is to use a different location that CF does have permission to write to. You can set this by passing the generationPath argument to MockBox when you initialise it, either...
new coldbox.system.testing.MockBox( generationPath="path" )
... if you're initialising it yourself, or from a unit test...
getMockBox().init( generationPath="path" )
The path provided needs to be relative - i.e. something cfinclude can use, so it might be worth setting up a mapping.
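For example, here is a minimal sketch of that approach - the /mockstubs mapping name and the temp directory are arbitrary choices for illustration, and it assumes per-application mappings are enabled on the server. In test/Application.cfc:
component {
    this.name = "mxunitTests";
    // point a mapping at a directory the CF user can definitely write to
    this.mappings["/mockstubs"] = getTempDirectory();
}
Then in the test, hand that mapping to MockBox:
mockBox = getMockBox().init( generationPath="/mockstubs" );
mockLogger = mockBox.createMock('coldbox.system.logging.Logger');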

Related

Informatica Job encounters "Actual File does not have execute permission" error

I'm trying to create a simple mapping task job in Informatica Cloud that copies a text file from a subdirectory to its parent directory. Even if I give both folders 777 permissions on the secure agent where the process runs, I get the following error when I run the process:
"[ERROR]
com.informatica.cloud.api.adapter.runtime.exception.FatalRuntimeException:
Actual File does not have execute permission!!"
How do I resolve this issue?
We found the issue. Salesforce automatically started enforcing "enhanced domains" in sandboxes even though our org isn't ready to use that feature yet. I learned from my client that this was only happening in our sandbox, and the issue started happening when this change was implemented. We temporarily disabled the feature in the Salesforce sandbox and will reactivate it once our third party vendor has our org ready to use enhanced domains.

C++ MSI Package Administrative Privileges

Here is the issue that I am having:
I have a C++ application that works by writing data to .txt files, and I want to create an MSI package for the application.
When I build and run my app everything is fine, but when I run my MSI setup file, the installed application does not get granted the privileges it needs to function.
I can't find a way to allow the app to write to the .txt files it needs, even if I include them in the package and set them as system files.
If I "Run as administrator" all is well, but that isn't really practical, as I need it to function while running as a regular user.
Is there any way to prompt the user during installation to agree to an install with admin rights, so that elevation doesn't have to be granted manually at each launch?
Anything that can get my code running again would be brilliant, thanks.
Longer Writeup: System.UnauthorizedAccessException while running .exe under program files (several other options in addition to the ones listed below).
Per-User Folder: I would suggest installing the files in question to a per-user folder (writeable by the user, for example My Documents), or installing them as templates to a per-machine folder (not writeable by normal users, for example %ProgramFiles%) and having your application copy the templates from the per-machine location to the current user's My Documents folder. Then you write to the files there, where a regular user will have write access. You could also write to a network share that is set up for users to have access.
Elevation: It is possible to require the application to run elevated (link might be outdated; for .NET it is slightly different), but this is a horrible approach for something as simple as writing to text files. I would never require such elevation. Elevated rights are pervasive, and you don't want your application to run with the keys to the city - you become a hacker target, and bugs in your tool become armed and dangerous.
ACL Modification: It is also possible to install the text files to a per-machine location and apply ACL permissioning to them so that they are writeable by regular users even without elevated rights. There is some information on how to do this here (bullet point 2). This approach is frowned upon these days, but it will work. Be aware that your ACL permissioning shouldn't be too tight: if your write operation creates a new file, deletes the old one, and renames the new file to the old name, then you obviously need file-create permission in addition to file-write. NTFS offers very fine-grained control; GenericWrite should do the trick, I think.
Some Links (loosely connected, added for easy retrieval):
Create folder and file on Current user profile, from Admin Profile
Why is it a good idea to limit deployment of files to the user-profile or HKCU when using MSI?
Create a .config folder in the user folder
There is no connection at all, regarding privileges, between the install of an application and the running of that application. In other words, there is nothing you can do in an MSI install that grants elevated privileges to the app being installed. It would be a massive security breach if a limited user could create an MSI setup that installed an app that then ran elevated.
So this question actually has nothing to do with Windows Installer - it's about whether you require users to be limited users or elevated users. If it's acceptable that users must be privileged, then you give the app an elevation manifest. If limited users will use it, then all writes or modifications to files or registry entries must be to locations available to limited users. It also means that the app won't be able to perform privileged operations, such as starting or stopping services.
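For reference, the elevation manifest mentioned above is a small XML file embedded in (or placed next to) the executable. A minimal example using the standard Windows manifest schema (this is generic, not tied to any particular installer tool):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- triggers the UAC elevation prompt every time the app starts -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
As the answers above note, though, writing to per-user locations is almost always the better design.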

ColdFusion 9 cffile error Access is Denied

I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to be able to manually sync files to their application folder. We have multiple servers for each instance that is randomly picked by BigIP. So just writing to the C:\ drive would only copy the file to the server the developer is currently accessing. So if the developer were to close out the browser and go right back in to make sure their changes worked, if they happen to get sent to a different server, they won't see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user that the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System Account" user, which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck with this tool. You will need access to the cfexecute ColdFusion tag, and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows. You can read some documentation here and here. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify the username as \\server\username. NOTE: I would suggest using a share that you create rather than the administrative share c$, but that is what you had in your example.
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before, I would highly recommend playing around with the options/functionality on the command line first. Then, once you get it working to your liking, just paste those options into the code above.
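To make that concrete, here is the whole sequence with made-up values - server2, appshare, cfsync, the password, and the folder names are all placeholders you would replace with your own:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\server2\appshare MyP@ssw0rd /user:server2\cfsync"
variable="shareLog"
timeout="30">
</cfexecute>

<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="C:\folder1\apps \\server2\appshare\apps *.* /E /PURGE /NDL /NP /XA:SH /XO"
variable="robocopyLog"
timeout="60">
</cfexecute>

<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\server2\appshare /d"
variable="shareLog"
timeout="30">
</cfexecute>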
I ran into a similar issue with this, and it had me scratching my head as well. We are using Active Directory along with a UNC path to \\SERVER\SHARE\webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our CF service as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder but still had no luck.
Ultimately, what I found was causing the problem was that the Inetpub folder (parent folder of our webroot) had sharing turned on, but that sharing did not include read/write access for our CF service domain account.
So while we had sharing on Inetpub and more powerful user permissions on the Inetpub/webroot folder, the share permissions (or lack thereof) took precedence over the more granular webroot user security permissions.
Hope this helps someone else.

Connecting to neo4j using ColdFusion

Has anyone here successfully connected to neo4j using ColdFusion?
I was able to connect to neo4j 1.6.1 using this guide as a starting point: http://ghostednotes.com/2010/04/29/using-neo4j-graph-databases-with-coldfusion . However, it was a short-lived success. I have since uninstalled neo4j 1.6.1 and installed 1.7.
I am now running Apache and CF 9.0.1 on Windows XP as a local dev box. I added ...\neo4j-community-1.7\lib to my CF class path, and the libraries are listed in the CF Server Java Class Path. neo4j is running fine, as I can use its administrator interface: http://localhost:7474/webadmin/# . CF and Apache are also running fine; I use them daily.
While the code below works, I'd really like to 'see' what's going on using the neo4j web administrator, so I can coordinate learning neo4j with using the data in a CF application.
Code: (Works)
dbroot = "/tmp/neo4jtest1/";
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'var/myFirstGraphDB');
So I tried to connect to the neo4j db graph.db. However, the code fails.
Code: (fails)
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'graph.db');
Error:
Object instantiation exception.
An exception occurred while instantiating a Java object. The class must not be an interface or an abstract class. Error: ''.
If I remove the "." in graph.db it does create a "graphdb" in the neo4j data folder, and successfully connects to it. However, that db is not viewable with their admin :(
I'm a novice, so please dumb down your answer.
Ok, I think what you're trying to achieve is not possible. It is not possible to access Neo4J within CF (via Java) and have the admin interface working (caveat 1 applies).
If you have put all the jars of the Neo4J package into Adobe CF, then most likely the Neo4J admin interface is looking at its own Neo4J file system. When you create the embedded server, it is not connecting to the same database, because it simply can't.
Embedded Neo4J doesn't work like a standard database connection. One embedded Neo4J instance reads and writes to one directory location (key word: directory - it doesn't open a single file but a whole bunch of them). No two Neo4J instances can access the same directory location (caveat 2 applies).
Ok, the caveats:
1- it is possible, in theory, to manually start up the admin interface programmatically so that it uses the embedded server that you create via Java. The Java code looks simple enough (taken from Using the server (including web administration) with an embedded database):
// Create your embedded graph db somewhere
srv = CreateObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper")
.init(graphDb);
srv.start();
// The server is now running
// until we stop it:
srv.stop();
I did not get this working, mostly because the admin server has a bunch of dependencies that were incompatible with the rest of my setup, so I can't advise on how well the above will work.
2- it is possible to have 1 read/write Neo4J accessing one location and then have multiple read-only Neo4Js (EmbeddedReadOnlyGraphDatabase) reading the same location (but I've never tried it).
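If you want to experiment with caveat 2, the read-only instance is created the same way as the embedded one, just with the read-only class - a rough, untested sketch reusing the dbroot from the question:
readOnlyDb = createObject('java', 'org.neo4j.kernel.EmbeddedReadOnlyGraphDatabase')
    .init(dbroot & 'var/myFirstGraphDB');
// read-only queries go here, while another process owns the read/write instance
readOnlyDb.shutdown();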
You do have the option of using the REST interface - either manually, or via the Neo4J Java REST Binding (kinda slow, though).
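Going the manual route is just HTTP; for example, a quick cfhttp sketch that creates a node via the REST API (this assumes the standalone server from the question, listening on the default localhost:7474):
<cfhttp url="http://localhost:7474/db/data/node" method="post" result="httpResp">
    <cfhttpparam type="header" name="Accept" value="application/json">
    <cfhttpparam type="header" name="Content-Type" value="application/json">
    <cfhttpparam type="body" value='#serializeJSON({name="my first node"})#'>
</cfhttp>
<!--- the response describes the new node, including its URI --->
<cfdump var="#deserializeJSON(httpResp.fileContent)#">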
It might be worth reading the Deployment Scenarios documentation before getting too deep in this.
There is at least one CF/Neo4J bridge out there, but it's pretty incomplete. I have one that I worked on, but I need to figure out if I can open source it!
Just a small addition to otupman's comments: I can confirm his theory of connecting to the admin interface from CF. Adding the following jars to the CF class path seemed to be enough to get the basics up and running. You may need additional jars if you are using more advanced features. Note that I am using Tomcat, so the exact jars may differ slightly for your environment.
neo4j-community-1.7/lib/*.* (entire directory)
neo4j-community-1.7/system/lib: (ONLY the jars below)
asm-3.1.jar
asm-analysis-3.2.jar
asm-commons-3.2.jar
asm-tree-3.2.jar
asm-util-3.2.jar
commons-configuration-1.6.jar
jackson-core-asl-1.8.3.jar
jackson-jaxrs-1.8.3.jar
jackson-mapper-asl-1.8.3.jar
jersey-core-1.9.jar
jersey-multipart-1.9.jar
jersey-server-1.9.jar
jetty-6.1.25.jar
jetty-util-6.1.25.jar
neo4j-server-1.7-static-web.jar
neo4j-server-1.7.jar
rrd4j-2.0.7.jar
Then started the server and database in onApplicationStart
factory = createObject("java", "org.neo4j.graphdb.factory.GraphDatabaseFactory");
dbroot = ExpandPath("/neo4jtest/");
graphDb = factory.newEmbeddedDatabase(dbroot & 'myFirstGraphDB');
Bootstrapper = createObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper");
graphServer = Bootstrapper.init( graphDb );
graphServer.start();
application.graphServer = graphServer;
application.graphDb = graphDB;
And closed both in onApplicationEnd
application.graphDb.shutDown();
application.graphServer.stop();
Edit: After some further testing, I think it is better to load them once in onServerStart and then use a shutdown hook to close them. But since this is just for a local development box, it is less critical.

Django custom management commands from admin interface

I asked a previous question about getting a Django command to run on a schedule. I got a solution for that question, but I still want to get my commands to run from the admin interface. The obstacle I'm hitting is that my custom management commands aren't getting recognized once I get to the admin interface.
I traced this back to the __init__.py file of the django/core/management utility. There seems to be some strange behavior going on. When the server first comes up, a dictionary variable _commands is populated with the core commands (from django/core/management/commands). Custom management commands from all of the installed apps are also pushed into the _commands variable for an overall dictionary of all management commands.
Somehow, though, between when the server starts and when django-chronograph goes to run the job from the admin interface, the _commands variable loses the custom commands; the only commands left in the dictionary are the core commands. I'm not sure why this is. Could it be a path issue? Am I missing some setting? Is it a django-chronograph-specific problem? So forget scheduling: how might I run a custom management command from the Django admin graphical interface, to prove that it can indeed be done? Or rather, how can I make sure that custom management commands are available from said interface?
I'm also using django-chronograph, and for me it works fine. I also once ran into the problem that my custom commands were not recognized by the auto-discovery feature. I think the first reason was that the custom command had an error in it, so it might be an idea to check whether your custom commands run without problems from the command line (e.g. python manage.py yourcommand).
The second reason was indeed some strange path issue. I'll check back with my hosting provider to provide you with a solution. Will post back to you in a few days.
I am the "unix guy" mentioned above by tom tom.
As far as I remember, there were some issues in the chronograph code itself, so it would be a good idea to use the code tom tom posted in the comments.
Where on the filesystem is django-chronograph stored (in your app folder, in an extra "lib" folder, or in your site-packages)?
If you have it in site-packages or another folder that is on your global PYTHONPATH, pathing should be no issue.
The cron process itself DOES NOT USE THE SAME PYTHONPATH as your Django app. Remember: you start the cron process via your crontab, right? So there are two different processes that do not "know" each other: the cron process AND the Django process (initialized by the web server). So I would suggest calling the following script via crontab, which exports the PYTHONPATH again:
#!/bin/bash
# give the cron job the same PYTHONPATH the web-served Django process uses
PYTHONPATH=/path/to/libs:/path/to/project_root:/path/to/other/libs/used/in/project
export PYTHONPATH
# now manage.py can import the project and all custom commands
python /path/to/project/manage.py cron
That way the cron-started process has the same PYTHONPATH information as your project.
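For reference, the matching crontab entry could look like this (the ten-minute schedule and the wrapper script name are just examples):
# run the Django cron command every ten minutes via the wrapper script
*/10 * * * * /bin/bash /path/to/cron_wrapper.sh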
Greetings from Vienna, Austria
berni