I'm running CF9 dev edition, using the built-in web server, on Windows XP SP3.
I don't seem to be able to get a command line to run from CF...
Error looks like this:
Timeout period expired without completion of C:\Program Files\Vis\vis.exe
If I run this from the CMD - it works fine...
C:\Program Files\Vis\vis.exe "C:\Documents and Settings\me.POV-ISP\My Documents\actX.vjb"
Trying to run it from CF using this code (or many other variations), it times out.
<cfexecute name = "C:\Program Files\Vis\vis.exe"
arguments = "C:\Documents and Settings\me.POV-ISP\My Documents\actX.vjb"
variable="result" timeout="600" errorVariable="errorv"/>
I understand from reading other threads that it MAY be permissions, but WHAT should the permissions be? I installed CF as me - it's run locally, through my account, the same account I use to run CMD.
Thoughts?
The first thing I would try is changing your path from C:\Documents and Settings\... (in Windows 7 this is actually a shortcut/pointer, not a true path) to C:\Users\me.POV-ISP\Documents\actX.vjb, just in case ColdFusion can't resolve the pointer in the system environment.
Failing that, it's either syntax or permissions, right?
To rule out syntax, I'd run this stock example from the Live Docs, slightly modified for Windows 7:
<cfexecute name = "C:\Windows\System32\NETSTAT.EXE"
arguments = "-e"
outputFile = "C:\Temp\output.txt"
timeout = "1">
</cfexecute>
If the output file shows up, you know your problem is syntax. If it doesn't, we can attack the service...
Open up your Services panel (right-click My Computer -> Manage -> Services and Applications -> Services).
Select the ColdFusion 9 Application Server -> Properties.
Change the Log On account to your own. This should give ColdFusion access to your Documents and all of the other resources you can reach as a user. You may also want to try outputting to a folder on the C: drive that is less likely to have conflicting permission issues.
Best of luck.
-Dave
You can also try overriding the server-wide request timeout in the individual file you're working on, with the following tag:
<cfsetting requesttimeout = "10000">
where 10000 means 10000 seconds. You might want to start at 600 seconds (10 minutes) and move up from there. I would put it before the cfexecute. You can try using it alone first, or in conjunction with the cfexecute timeout.
Let us know how it goes!
If none of the approaches above works, get hold of Sysinternals Process Monitor and run it while you run the request. Process Monitor generates thousands of events (filtering them down is very necessary), including every file and registry access attempt and whether it succeeded.
You may well find a clue in the logging it does.
I want to detect the current Windows 10 update status programmatically.
I tried WUAPI and it works well, but there are some problems with it.
First, it takes a long time to get update information.
Second, it cannot be used offline.
Is there any other method to detect the current Windows 10 update status?
Is there a registry key or system file that reflects it?
I tried Procmon to analyse it, but there are too many files and registry keys linked with Windows Update.
Thank you...
There is no documented way to access the search results that Automatic Updates is using (the results that the Windows Update page in Settings displays).
However, there are two things that might be of use to you:
You can use IAutomaticUpdatesResults::LastInstallationSuccessDate to immediately see the last time the computer installed updates successfully. If all you want to know is "Is this PC processing updates successfully?", then this may be all you need.
You can use a Windows Update API search to see what updates are needed. Here's a script you can use as a starting point. If you use this script as written, it will go online to find newly-released updates, which isn't what you want in your scenario. But you can set your IUpdateSearcher object's Online property to false before calling Search. Doing that will perform an offline scan, in which WU just re-evaluates the updates it already knows about. This will work offline and will also return faster results.
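Since the question mentions wuapi directly, a rough C++ sketch of that offline re-scan might look like the following. This is only a sketch: error handling, interface cleanup and the search criteria are simplified, and depending on your SDK you may need to link an additional library for the CLSID/IID definitions.

// Minimal sketch: offline Windows Update scan via the WUA COM API (wuapi.h).
#include <windows.h>
#include <wuapi.h>
#include <iostream>

int main()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    IUpdateSession* session = nullptr;
    CoCreateInstance(CLSID_UpdateSession, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IUpdateSession, reinterpret_cast<void**>(&session));

    IUpdateSearcher* searcher = nullptr;
    session->CreateUpdateSearcher(&searcher);
    searcher->put_Online(VARIANT_FALSE);   // offline scan: re-evaluate updates WU already knows about

    BSTR criteria = SysAllocString(L"IsInstalled=0");
    ISearchResult* result = nullptr;
    searcher->Search(criteria, &result);

    IUpdateCollection* updates = nullptr;
    LONG count = 0;
    result->get_Updates(&updates);
    updates->get_Count(&count);
    std::wcout << L"Updates still needed: " << count << std::endl;

    // Real code should check each HRESULT, release the interfaces, free the BSTR and call CoUninitialize().
    return 0;
}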
"COM API
The COM API is a good way to directly access Windows Update without having to parse logs. Applications of this API range from finding available updates on the computer to installing and uninstalling updates.
You could use the Microsoft.Update.Session class to run an update search and then count the number of updates available to see if there are any updates for the computer.
PowerShell Example:
$updateObject = New-Object -ComObject Microsoft.Update.Session
$updateObject.ClientApplicationID = "Serverfault Example Script"
$updateSearcher = $updateObject.CreateUpdateSearcher()
$searchResults = $updateSearcher.Search("IsInstalled=0")
Write-Host $searchResults.Updates.Count
If the returned result is more than 0, then there are updates for the computer that need to be downloaded and/or installed. You can easily adapt the PowerShell script to fit your application.
Just a heads-up: it appears that the search function is not async, so it would freeze your application while searching. In that case you will want to make it async."
Registry method
Source:
https://serverfault.com/questions/891188/is-it-possible-to-detect-the-windows-update-status-via-registry-to-see-if-the-s
I am trying to enumerate Windows power plans through very straightforward code that has been working for several years. On my Windows 10 64-bit machine, however, I am getting errors when I try to enumerate a collection of power plans.
I realize Visual FoxPro code is like a fossil, but it is still pretty easy to read:
loSchemes = CREATEOBJECT("Collection")
loWMIService = GETOBJECT("winmgmts:\\.\root\cimv2\power")
loItems = loWMIService.ExecQuery("SELECT * FROM Win32_PowerPlan")
FOR EACH loItem IN loItems
loSchemes.Add(loItem)
ENDFOR
The code has been working for years, from Windows XP up through (I think, anyway) Windows 10.
The error happens after loItems is instantiated via the ExecQuery() method call. The object exists and has visible properties, but if I try to access anything in the debugger, it says the expression cannot be evaluated. If I wrap the iterating FOR loop in a TRY..CATCH, the error I get is:
OLE error code 0x80070668: Only administrators have permission to add,
remove, or configure server software during a Terminal services remote
session. If you want to install or configure software on the server,
contact your network administrator.
So, it looks like I am being locked out from power plan information because the process thinks I am remotely trying to change the configuration even though I am accessing WMI data from the local machine (where I do have administrator rights, incidentally).
I downloaded a "WMI Explorer" tool from CodePlex, and I actually get the same problem there when I try to iterate over Win32_PowerPlan. The log returns an error:
Failed to enumerate instances from Win32_PowerPlan. ERROR:
(That is the literal response -- no actual error is listed.)
This makes me think this isn't just some sort of Foxpro issue.
Edit
I downloaded a WMI process explorer from Sapien, and it displays a UAC prompt every time it starts, running with elevated privileges. Both the 32-bit and 64-bit versions of that software can query Win32_PowerPlan and display results. I then ran the CodePlex WMI Explorer as administrator and it was also able to iterate over Win32_PowerPlan without errors. So the issue appears to be unrelated to "bit"-ness and has everything to do with access to this WMI data requiring an elevated process, even though I am running locally under a login that does, in fact, have administrator privileges. Needless to say, I am still stumped...
For the record, I can still access all sorts of other information via WMI: processor info, memory usage, processes, services, IP address, and OS description. All of those modules still work perfectly. But when it comes to the \root\cimv2\power namespace and power plans, no joy.
Further edit
Some other questions mention ImpersonationLevel, saying I might need to explicitly set the level to "impersonate" (an enumeration constant = 3). I am experimenting with my WMIService object and can read and write the impersonation level, but it is 3 by default. I raised it to 4 ("delegate") and still cannot access the power plan items: the query runs fine, but an error is thrown when I try to access any properties of loItems. If I reduce the impersonation level to 1 ("anonymous"), an "Access denied" error is thrown on the ExecQuery() call. Finally, level 2 ("identify") allows the query and I can access the Count property without an error being thrown, but zero items are returned where there should be 5. I am now more confused than ever.
In summary, I cannot access power plan information from my local machine, even though I have administrator privileges, regardless. This is on a Windows 10 Professional 64-bit installation (all updates up-to-date), definitely no Terminal Server software installed.
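For readers trying to reproduce this outside Visual FoxPro, the same query via the standard WMI COM boilerplate in C++, with the impersonation level set explicitly on the proxy, looks roughly like the sketch below (error checking and COM cleanup omitted; the namespace and query string mirror the FoxPro code above). If this also fails unless the process is elevated, that would at least confirm the behaviour is tied to the power namespace rather than to FoxPro.

// Sketch: enumerate Win32_PowerPlan from root\cimv2\power with explicit impersonation.
#define _WIN32_DCOM
#include <comdef.h>
#include <wbemidl.h>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\cimv2\\power"),
                           nullptr, nullptr, nullptr, 0, nullptr, nullptr, &services);

    // The equivalent of impersonationLevel=impersonate, applied to the WMI proxy.
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, nullptr, EOAC_NONE);

    IEnumWbemClassObject* enumerator = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"), _bstr_t(L"SELECT * FROM Win32_PowerPlan"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &enumerator);

    // ... walk enumerator->Next(...) and read each Win32_PowerPlan instance here ...

    CoUninitialize();
    return 0;
}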
I am having a problem calling the function MsiOpenDatabase (https://msdn.microsoft.com/en-us/library/aa370338(v=vs.85).aspx) from inside a program when I choose to "run as administrator". When I run it under an admin account, but without explicitly starting the executable elevated, it all works just fine, which indicates that the path to the MSI file etc. should be correct.
So, when running elevated, MsiOpenDatabase() returns error code 110 (0x6E), i.e. ERROR_OPEN_FAILED.
I have tried to call MsiGetLastErrorRecord as explained here (https://msdn.microsoft.com/en-us/library/aa370124(v=vs.85).aspx), but nothing happens when I try to print the code in a message box; it simply never gets there.
I do not have Visual Studio on the target machine, so debugging is a bit of a pain.
Target machine is Windows 7 x64. Application is 32-bit.
But just the pure fact that it works unelevated yet fails when run as administrator... it feels like there should be some kind of answer that can be derived from that fact?
Thankful for any help!
EDIT:
I finally solved it!
Apparently I had to go to the network share where the MSI file is located (the one I am trying to call MsiOpenDatabase on), right-click a file there and choose "Run as administrator", because then and only then did I get a UAC dialog asking for credentials. (I was able to open Windows Explorer as admin and navigate to the network share without problems, so I never thought this would be what was causing the trouble.) After having done that, I was able to run my application and it no longer failed on any MsiOpenDatabase call.
But why must I go through this procedure to run a file on a network share, when I already had access (execute rights) with the same user when not elevated? Why does Windows need to ask the same user for credentials if it is already running elevated on the very same account that already has access to the network share? It seems strange to me, but I suppose I am missing some crucial part?
SAMPLE CODE
LPCTSTR szPersist = MSIDBOPEN_READONLY;
MSIHANDLE handleDB;
UINT result = MsiOpenDatabase(strPath, szPersist, &handleDB); // strPath is something like _T("\\server\MSI\Setup.msi");
The result variable has the value 110 when this error occurs, as explained above; keep the part in the edit section in mind. I find it strange, but perhaps someone knows UAC better than I do and can explain why I have to provide credentials again by going to a file on the network share and choosing "Run as administrator" to get it working (since I had already provided credentials as non-admin with the same account earlier at that very same network share location)?
This is standard UAC behavior since Windows Vista and is not related to MSI at all. Do a Google search for "uac network drives".
You should be closing your MSI handles though, as I commented above. Use PMSIHANDLE instead of MSIHANDLE.
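For illustration, here is a sketch of what that looks like. PMSIHANDLE (from msiquery.h) closes the handle automatically when it goes out of scope; the helper function name and the error handling are just for this example, not from the original code.

#include <windows.h>
#include <msi.h>
#include <msiquery.h>          // PMSIHANDLE
#pragma comment(lib, "msi.lib")

// Hypothetical helper: open an .msi read-only and let PMSIHANDLE's destructor
// call MsiCloseHandle for us.
UINT OpenDatabaseReadOnly(LPCTSTR szPath)
{
    PMSIHANDLE hDatabase;
    UINT result = MsiOpenDatabase(szPath, MSIDBOPEN_READONLY, &hDatabase);
    if (result != ERROR_SUCCESS)
        return result;         // e.g. 110 (ERROR_OPEN_FAILED) in the scenario above

    // ... use the database via MsiDatabaseOpenView() etc. ...

    return ERROR_SUCCESS;      // hDatabase is released here; no explicit MsiCloseHandle needed
}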
I am trying to find a way to launch a custom daemon from my program. The daemon itself is implemented using the double-fork mechanism and works fine if launched directly.
So far I have come across various ways to start a daemon:
Create an init script and install it to init.d directory.
Launch the program using start-stop-daemon command.
Create .desktop file and place in one of the autostart paths.
While the first two methods are meant for starting the service from the command line, the third method is for autostarting the service (or any other application) at user login.
So far my guess is that the program can be executed directly using the exec() family of functions, or that the start-stop-daemon command can be executed via the system() function.
Is there a better way to start/stop service?
Generally, startups are done from shell scripts that call your C++ program, which then does its double fork. Note that it should also close unneeded file descriptors, call setsid() and possibly setpgid()/setpgrp() (I can't remember whether these apply to Linux too), possibly chdir("/"), etc. There are a number of fairly standard things to do, described in the Stevens book - for more info see http://software.clapper.org/daemonize/daemonize.html
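To make those steps concrete, here is a minimal daemonize() sketch along the lines described above (not the poster's actual code; error handling, umask policy, PID files and signal handling are left out):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void daemonize()
{
    if (fork() > 0) exit(0);   // first fork: the parent returns to the caller/shell
    setsid();                  // new session: detach from the controlling terminal
    if (fork() > 0) exit(0);   // second fork: the session leader exits, so we can never reacquire a tty

    umask(0);
    chdir("/");                // don't keep any mounted filesystem busy

    // Redirect the standard descriptors to /dev/null; close anything else you inherited.
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO) close(fd);
}

int main()
{
    daemonize();
    // ... the daemon's real work goes here ...
    return 0;
}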
If the daemon is supposed to run with root or other system user account, then the system /etc/init/ or /etc/init.d/ mechanisms are appropriate places to have scripts to stop|start|status|etc your daemon.
If the daemon is supposed to be for the user, and run under his/her account, you have a couple of options.
1) The .desktop file - I'm not personally a fan, but if it also does something for you on logging out (like letting you trigger shutting down your daemon), it might be viable.
2) For console logins, ~/.bash_login and ~/.bash_logout - you can have these run commands supported by your daemon's wrapper to start it and (later) shut it down. The latter can be done by saving the PID in a file, or by having .bash_login keep it in a variable that .bash_logout uses later. This may involve some tweaking to make sure the two scripts get run, once each, by the outermost login shell only (normal .bashrc stuff stays in .bashrc, and .bash_login would need to read it in for the login shell before starting the daemon, so PATH and so on would be set up by then).
3) For graphical environments, you'd need to find the wrapper script from which things like your X window manager are run. I'm using lightdm, and at some point /etc/X11/Xsession.d/40x11-common_xsessionrc ends up running my ~/.xsessionrc, which gives me a hook to start up anything I want (I have it run my ~/.xinitrc, which runs my window manager and everything), as well as a place to shut everything down later. The lack of standardization in how control is handed to the user makes finding the hook pretty annoying, since just using a different login manager (e.g. lightdm versus gdm) can change where the hook is.
4) A completely different approach is to just have the user's crontab start the daemon. Run "man 5 crontab" and look for the special @reboot entry to have tasks run at boot. I haven't used it myself - there's a chance it's root-restricted, but it's easy to test - and you only need to make sure your daemon exits gracefully (and quickly) at system shutdown when the system sends it a SIGTERM signal (see /etc/init.d/sendsigs for details).
Hope something from that helps.
The build of an XPages application containing several JARs, Java sources and ~50 XPage/Custom Control elements takes about a minute on the server over a WAN. I replicated the application locally, and build time dropped to ~10 seconds.
A few days ago the build of the local application became extremely slow, about 2-5 minutes. After some experiments I found a workaround: disabling the TCP port in the location document drops build times back to just a few seconds. Even though it works, it does not help much - testing requires the user to be authenticated, so I need to replicate design changes to the remote or a local server - and that means changing location (online/offline) every time.
UPDATE 2013-04-04: I duplicated my current location document and removed the home and directory servers. To my surprise, with this location the build times went back to a few seconds - with the TCP port enabled, so replication is possible. The bigger surprise was that returning the home/directory servers to the new location did not reproduce the problem - in fact, they do not affect performance. I know this because I renamed the current location document and everything went back to normal. From my understanding, "something" in the client configuration was tied to the location name. Thanks to Simon's tips I will investigate further.
The question is still open: I am looking for some (Eclipse) preference controlling this behavior - unintended communication with the server during the build of a local application.
Solution:
Teamstudio CIAO hooks into Designer and checks every update of a design element. It seems like a lack of code optimization to me: it checks whether the currently built design element (every single one, one by one) should be controlled in the CIAO configuration database.
This explains why the problem was solved by renaming the location document. I was disappointed yesterday, when the performance problems started again. Fortunately, I recalled that CIAO had been set up for that location document around that time. CIAO uses the teamstudio.ini file in the data directory to configure which CIAO configuration database is used for each location document. Look for the entry:
CIAOConfigDb[location name]=server name;CIAO\CIAOConfig.nsf
For development on local replicas with a connection to the server (for replication or a local server), use a location document with CIAO disabled.
This works only with the property ForceConfigLocation=0.
Not a solution (yet!), but may help in the investigation. I'll update further if you post results later.
Debug instructions.
Add the following to the shortcut that launches the Designer client.
-RPARAMS -console -debug -separateSysLogFiles -consoleLog
Start the designer client. This will also open up the OSGi console.
Reproduce the issue. While it is still in progress in the OSGi console type the following:
dump threads
Do this three times, with a small amount of time between each dump. Once done, open the three heap dumps (in the IBM_TECHNICAL_SUPPORT folder) in the Heap Dump Analyser.
It will show you which threads are consistent across all three dumps. Take a look at those and look for package names/calls that point to a functional area. Once you have that, you can try adding debug logging for the related class.
For example: let's say you notice "com.ibm.designer.domino.ui.commons." in the thread; then you would edit the rcpinstall.properties file. It is located at:
<Notes Install>\Data\workspace\.config\rcpinstall.properties
and you would add (start with FINE, then FINEST if nothing):
com.ibm.designer.domino.ui.commons.level=FINE
Now when you restart the Designer client it will generate debug output in the workspace\logs folder for that package. You then need to go through the trace logs, looking for the time when the delay occurred, and see whether they make any references to related design elements.
Other open applications may get built at the same time (which looks like a bug to me). Be sure to close all other applications and the server-based replica. Open applications have their icon showing in the application list, and they stay open even if you close and reopen Designer. In Designer 9, right-click the application and select "Close Application". In 8.5 you need to use Package Explorer to close it.
Another good way is to use Working Sets. Only applications in the open Working Set will be built (AFAIK). Have a Working Set with this one app only (and the app only in this Working Set).
update 1
If these don't help I would delete/rename bookmark.nsf, Cache.NDK and desktop8.ndk. Then open just this one app and see what happens.
update 2
Check that there are no referenced projects. Right-click the application and select "Project Properties". From there, open "Project References" and make sure no check boxes are checked.
update 3
Based on your update, I would check the items whose names start with $ in the location document. Sometimes there are saved IP addresses etc. which could cause this problem. All of those items can be removed.
If possible (and if you are not using it yet), try version 9 of Domino Designer (you do not have to use Domino 9 on the server to do that - it works fine with Domino 8.5.3).
For our projects, build times went down from a few minutes to only a few seconds. I guess they finally noticed at IBM that the build process used to rely heavily on the connection to the server and did something about it.
With the new Designer you don't even have to replicate to local; you can work directly on your local server.