Windows service does not recognize network path. What's the workaround? - c++

We have a Buffalo NAS drive as a backup drive.
When we map this drive as B:\, our backup application recognizes it and runs fine as a regular application.
But when run as a service, it does not recognize the mapping and crashes.
I tried giving \\\192.168.x.x\Backups\ as the backup path; the service runs, but then a lot of submodules fail because they treat the \\\ as an escape character.
What is the workaround so that the Windows service can see the mapped drive?
I am trying to run zip.exe via CreateProcess():
""C:\Users\jvenkatraj\Documents\SQLite\Debug\zip.exe" -9 -q -g -u "\\\192.168.123.60\Backup\store\location1\50\f2\25\43\d8\88\b9\68\49\8d\2b\d0\08\9e\7e\df\z.zip" "\\\192.168.123.60\Backup\store\temp\SPD405.tmp\file_contents""
The backslashes are messing with the quotes. It is a WCHAR buffer, and I can't change it to any other type without having to redefine it elsewhere as well. How many backslashes should I use?
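
In case it helps with the backslash question: in a C or C++ string literal every backslash is written twice, and a UNC path only ever starts with two backslashes at runtime, so there is no place where three belong. A rough sketch of building such a command line for CreateProcessW, with the paths shortened to placeholders rather than the real ones above:
#include <windows.h>

int wmain()
{
    // Runtime value: "C:\Tools\zip.exe" -9 -q -g -u "\\192.168.123.60\Backup\store\z.zip" "\\192.168.123.60\Backup\store\temp\file_contents"
    // Each "\\" in the literal becomes one backslash, so the four backslashes
    // below produce the two that a UNC path actually needs.
    wchar_t cmd[] =
        L"\"C:\\Tools\\zip.exe\" -9 -q -g -u "
        L"\"\\\\192.168.123.60\\Backup\\store\\z.zip\" "
        L"\"\\\\192.168.123.60\\Backup\\store\\temp\\file_contents\"";

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CreateProcessW may modify the command-line buffer, so it must be writable.
    if (CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE, 0,
                       nullptr, nullptr, &si, &pi))
    {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}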

You can map a network drive inside the service itself using the WNetAddConnection2 API function.
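A minimal sketch of what that call could look like from C++ inside the service; the server address, share name, and credentials are placeholders:
#include <windows.h>
#include <winnetwk.h>
#pragma comment(lib, "mpr.lib")

// Map \\192.168.x.x\Backups to B: for the service's own logon session.
DWORD MapBackupShare()
{
    NETRESOURCEW nr = {};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = const_cast<wchar_t*>(L"B:");
    nr.lpRemoteName = const_cast<wchar_t*>(L"\\\\192.168.x.x\\Backups");

    // Returns NO_ERROR (0) on success. Pass NULL for the password and user
    // name to connect with the service account's own credentials instead.
    return WNetAddConnection2W(&nr, L"share-password", L"share-user",
                               CONNECT_TEMPORARY);
}
Drive mappings are per logon session, which is why a drive mapped by the interactive user is invisible to the service and the service has to create the connection itself.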

Create a symbolic link somewhere to the NAS share:
mklink /D c:\nas-backups \\192.168.x.x\Backups
and point your backup application to c:\nas-backups\etc.
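If you would rather have the service create that link from code instead of running mklink, a hedged sketch using the CreateSymbolicLinkW API (same placeholder share as above) could look like this; note that creating symbolic links normally requires administrator rights:
#include <windows.h>

// Create c:\nas-backups as a directory symlink to the NAS share.
bool LinkNasShare()
{
    return CreateSymbolicLinkW(L"c:\\nas-backups",
                               L"\\\\192.168.x.x\\Backups",
                               SYMBOLIC_LINK_FLAG_DIRECTORY) != 0;
}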

Try running your service under a user that "sees" the network, such as the "Network Service" user or even as the "human" user that mapped the network drive.
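For reference, the logon account can be changed in the Services console (services.msc), with sc config, or programmatically; a rough sketch of the programmatic route, with the service name as a placeholder:
#include <windows.h>

// Reconfigure an installed service to log on as NT AUTHORITY\NetworkService.
bool RunServiceAsNetworkService(const wchar_t* serviceName)
{
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) return false;

    SC_HANDLE svc = OpenServiceW(scm, serviceName, SERVICE_CHANGE_CONFIG);
    bool ok = false;
    if (svc)
    {
        // SERVICE_NO_CHANGE leaves the type/start/error settings untouched;
        // only the logon account (with an empty password) is replaced.
        ok = ChangeServiceConfigW(svc, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE,
                                  SERVICE_NO_CHANGE, nullptr, nullptr, nullptr,
                                  nullptr, L"NT AUTHORITY\\NetworkService",
                                  L"", nullptr) != FALSE;
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return ok;
}
Keep in mind that Network Service authenticates on the network with the computer's credentials, so the NAS share has to accept that account; otherwise stick with explicit credentials via WNetAddConnection2 or a dedicated user account.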

The easiest way to do this would probably be to access the network path directly (C# shown):
string path = @"\\192.168.x.x\Backups\";
Another thing you have to make sure of is that the service has access to this path. If your service is logged on as a user that does NOT have access, you have to change the service's logon credentials to a user/domain account that does have access to this path.

Related

How to retrieve heapdump in PCF using SMB

I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemory.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
Kindly help.
I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemory
You don't need to set these options. The Java buildpack takes care of this for you: by default it installs a jvmkill agent which will automatically write a heap dump when the JVM runs out of memory.
https://github.com/cloudfoundry/java-buildpack/blob/main/docs/jre-open_jdk_jre.md#jvmkill
In addition, the jvmkill agent is smart enough that if you bind a SMB or NFS volume service to your application, it will automatically save the heap dumps to that location. From the doc link above...
If a Volume Service with the string heap-dump in its name or tag is bound to the application, terminal heap dumps will be written with the pattern <CONTAINER_DIR>/<SPACE_NAME>-<SPACE_ID[0,8]>/<APPLICATION_NAME>-<APPLICATION_ID[0,8]>/<INSTANCE_INDEX>--<INSTANCE_ID[0,8]>.hprof
The key is that you name the bound volume service appropriately, i.e. the name must contain the string heap-dump.
You may also do the same thing with non-terminal heap dumps using the Java Memory Agent that the Java buildpack can install for you upon request.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
To retrieve the heap dumps you need to somehow access the file server. I say "somehow" because it entirely depends on what you are allowed to do in your environment.
You may be permitted to mount the SMB/NFS volume directly to your PC. You could then access the files directly.
You may be able to retrieve the files through some other protocol like HTTP or FTP or SFTP.
You may be able to mount the SMB or NFS volume to another application, perhaps using the static file buildpack, to serve up the files for you.
You may need to request the files from an administrator with access.
Your best bet is to talk with the admin for your SMB or NFS server. He or she can inform you about the options that are available to you in your environment.

How to read/get files from Google Cloud Compute Engine Disk without connecting into it?

I accidentally messed up the permissions of the file system, so attempting to use sudo (e.g. to read protected files) shows the message sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set.
This answer (https://askubuntu.com/a/471503) suggests logging in as root to do so; however, I never set a root password, and this answer (https://stackoverflow.com/a/35017164/4343317) suggests using sudo passwd. Obviously I am stuck in an infinite loop between the two answers.
How can I read/get the files from the Google Cloud Compute Engine disk without logging in to the VM (I have full control of the VM instance and the disk)? Is there another, "higher" way to log in as root (such as from the gcloud tool or the Google Cloud console) to access the VM disk externally?
Thanks.
It looks like the following recipe may be of value:
https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk
What this article says is that you can shut down your VM, detach its boot disk, and then attach it as a data disk to a second VM. In that second VM you will have the ability to make changes. However, if you don't know what changes are needed to restore the system to sanity, then, as @John Hanley says, you might want to use this mounting technique to copy off your work, destroy the tainted VM, recreate a fresh one, copy your work back in, and start from there.

How to deactivate UFW from outside the VM on a Google Cloud Compute Instance

I accidentally enabled UFW on my Google Cloud Compute Debian instance, and unfortunately port 22 is now blocked. I've tried every way to get inside the VM, but I can't...
I'm trying to access it through the serial port, but it's asking for a user and password that were never set.
Does anyone have any idea what I can do?
If I could 'edit' the files on the disk, it would be possible to change the firewall rules and disable it. I already thought of mounting the VM disk on another instance, but Google doesn't allow me to "hot detach" it.
I also tried creating another VM from a snapshot of the VM disk, but of course the new instance came with the same problem.
There are lots of important files inside and I can't get in...
This is the classic example of locking yourself out of the house with the key inside.
There are several ways to get back inside a virtual machine on Google Cloud Platform when SSH is not working; in my view the easiest is to use a startup script.
Startup scripts run as root when your machine starts, so you can change the configuration without accessing the virtual machine at all.
Therefore you can:
simply run a command to deactivate UFW and then access the machine again
if that is not enough and you really need interactive access to fix the configuration, set a username and password for the root user from the startup script and then log in through the serial console, i.e. without SSH (basically as if your keyboard were directly connected to the hardware). Note that as soon as you access the instance you should remove, or at least change, that password, since it was visible to anyone with access to the project. A safer way is to write the password to a private file in a bucket and download it onto the instance with the startup script.
Note that you can redirect a command's output to a file and then upload that file to a bucket if you need to debug the script, read the content of a file, understand what is going on, etc.
The easiest way is to create a startup-script that disables UFW and gets executed whenever the instance boots:
Go into your Google Cloud Console, open your VM instance, and click the Edit button.
Scroll down to "Custom metadata" and add "startup-script" as the key, with the following script as the value:
#! /bin/bash
/usr/sbin/ufw disable
Click save and reboot your instance.
Once the firewall is disabled, delete that startup-script and click Save so that it won't get executed on future boots.
You can also try the Google serial console. From there you can allow SSH again:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
ufw allow ssh

Can ColdFusion access a network drive while running under local system account?

We have a set of files that we need ColdFusion to copy to a network share. However, we are unable to change the user that the ColdFusion service is running under, which means that ColdFusion does not have adequate permissions to access any network shares. We do have a username and password that would give us access, but we cannot have the entire ColdFusion service running under that account.
Is there any way to do these file copy operations from within ColdFusion? Possibly by spawning a cfthread under the new user, accessing the underlying Java, or using some other third-party component? Our fallback is to create a batch file, run from Windows Task Scheduler, that copies all files in a local directory to the network share, but that's a suboptimal solution as it requires setup and maintenance outside of the CF codebase.
One option is something I have used in the past. It requires access to the cfexecute tag, however. If you are able to run that tag (some hosting providers do not allow it), then you can do something like the following.
Map the network drive via a Windows share (note that any output is written to the netMessage variable):
This is where you would specify the remote username and password
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\#remoteServerName#\#remoteShareName#\ #remoteAccountPassword# /user:#remoteServerName#\#remoteAccountUsername#"
variable="netMessage"
timeout="30">
</cfexecute>
Copy the files to the network drive via the mapped drive that you just created (note that any output is being written to the robocopyMessage variable):
I am using robocopy here and suggest you look into it instead of a plain copy:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="#localDirectory# \\#remoteServerName#\#remoteShareName#\ #robocopyArguments#"
variable="robocopyMessage"
timeout="300">
</cfexecute>
Now clean up by disconnecting the mapped network drive (note that any output is written to the netMessage variable):
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\#remoteServerName#\#remoteShareName#\ /d"
variable="netMessage"
timeout="30">
</cfexecute>
You could also put this code within cfthread tags if you wish.

ColdFusion 9 cffile error Access is Denied

I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to manually sync files to their application folder, and we have multiple servers per instance, picked at random by BigIP. Writing to the C:\ drive would only copy the file to the server the developer is currently hitting, so if the developer closed the browser, went right back in to check their change, and happened to land on a different server, they wouldn't see it.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System Account", which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows, and its documentation is readily available online. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc.). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify username as \\server\username. NOTE I would suggest using a share that you create rather than the administrative share c$ but that is what you had in your example.
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above, followed by the desired directory path. The [files_to_copy] is a list of files or a wildcard (*.*), and the [options] are RoboCopy's options. See the RoboCopy documentation for the full list of options; it is extensive. To mirror a folder structure, see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated, /XA:SH to exclude system and hidden files, and /XO to skip copying older files. You can exclude other files/directories specifically or by using wildcards.
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before I would highly recommend playing around with the options/functionality using the command line first. Then once you get it working to your liking just paste those options into the code above.
I ran into a similar issue with this, and it had me scratching my head as well. We are using Active Directory along with a UNC path to SERVERSHARE/webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our CF service as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). The same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder, but still had no luck.
Ultimately, what I found was causing the problem was that the Inetpub folder (the parent folder of our webroot) had sharing turned on, but that share did not include Read/Write access for our CF service domain account.
So while we had sharing enabled on Inetpub and more permissive user permissions on the Inetpub/webroot folder, the share permissions (or lack thereof) took precedence over the more granular webroot user security permissions.
Hope this helps someone else.