MAMP PRO 6.x on macOS Catalina can't access the folder with my websites anymore

I upgraded from MAMP PRO 5 to 6 and now all the websites are marked red.
In the "document root" field it says "restricted folder", and when I try to add a new host I can't select a folder - they're all greyed out.
My files are located in /myUser/Library/Webserver/Documents/…
I gave apachectl, httpd and MAMP PRO full disk access in System Preferences.
Any idea if I can fix this without moving the whole folder to a different location?
The existing websites are working, though (e.g. I can start Apache and the local websites respond).
(running macOS Catalina)

It's possible that changing the name of htdocs to localhost will do the trick. However, here are the steps I took to correct the problem:
Stopped MAMP Pro servers
Copied Applications/MAMP/htdocs to my Dropbox folder (I'm guessing it could go in the documents folder or anywhere else)
Clicked on localhost in MAMP and chose the new document root location (Dropbox/htdocs in my case)
Chose new document roots for all my hosts at the new location
Of note, MAMP automatically changed the name of htdocs to localhost. This is the basis for my assertion at the beginning of this post.

I'm having the same problem on Mojave. I filed a bug report with the developer and will post an update if I hear back.
My files are located in /Applications/MAMP/htdocs and I cannot choose any folder within that folder.
Interestingly, launching the existing (red-marked) hosts works - just not editing them or creating new ones.

It is possible to use the same document root with multiple hostnames. This can be done via the "Aliases" option in the "General" section.
This answer comes a little late and is not the answer to the original question, but maybe this will help others to work around the limitation of MAMP PRO 6.
Unfortunately, I didn't get a tip in the right direction from MAMP PRO support or via a Google search.
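For context, MAMP PRO's Aliases option corresponds to an extra hostname on the Apache virtual host it generates. A rough sketch of such a vhost (illustrative hostnames and path, not MAMP's exact generated output):

<VirtualHost *:80>
    ServerName site-one.test
    # The MAMP PRO "Aliases" entry becomes an additional hostname:
    ServerAlias site-two.test
    DocumentRoot "/Users/myUser/Sites/shared-docroot"
</VirtualHost>

Both hostnames then serve the same document root.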

Related

Perforce (AWS Lightsail Windows Server Instance) - Unreal Engine Source Control - Move Perforce Depot

I'll give a bit of background on the setup we have and why. Currently a friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32 GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. We wanted two depots, one on disk E and one on disk D, the reason being that the C disk was only 20 GB (12 GB free after Windows).
I have tried multiple things (not got much hair left after this) to try and map the depots created to each HDD but have had little success and need your wisdom!
I've followed both the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD as well as changing the depot storage location in P4Admin on the Server via RDP to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created folders UE_WIP and UE_LIVE on each disk).
When I open up P4V on my local machine I'm able to happily connect (as per screenshot) and set my workspace to my local machine (it detects both depots). This is where we're getting stuck. I then open up a new Unreal Engine project, save it to the following local directory: E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great - it detects the workspace and everything connects to the server.
When I click Submit to Source Control, I get 'Failed Checking Source Control'. When I try adding via P4V, manually marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP drive depot or the LIVE drive depot. To resolve this, do we need:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory? E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where your local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that the workspace is defined as being where your files are. If you'd rather keep the files in one of the local directories above that you've already defined as your workspace, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
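To make the first option concrete, here is a sketch of a client spec (hypothetical, reusing the names from the question) that maps the existing Test directory straight into the WIP depot:

Client: bsmith
Root: E:\DELETE\Perforce\Test
View:
//WIP/Test/... //bsmith/...

With this spec, everything under the Root (including Saved\...) is in view and lands under //WIP/Test/... on submit.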
Either way, once they're in your workspace, you can add them to the depot. Simple as that!

vQmod fails to write to vqcache directory on OpenCart

Can anyone suggest why vQmod fails to write cache files and instead writes an empty file named vq2-C to /vqmod/vqcache?
Environment
Windows Server 2012
Plesk Panel 12.5
PHP 5.3.29
MySQL 5.6.26
OpenCart 2.1.0.1
vQmod 2.6.1
Issue
vQmod fails to save modifications to /vqmod/vqcache/vq2-*.php
On each page load it applies modifications specified in /vqmod/xml/*.xml and writes an empty file named vq2-C to /vqmod/vqcache.
Background
The affected website was migrated from another Windows box with similar configuration.
In short the old server ran Plesk Panel 12.0, the new server is Plesk 12.5 so has minor updates to software versions.
Both sites run on PHP 5.3.29, and the new server follows OWASP recommendations more closely so has more PHP functions disabled, e.g. fopen_with_path.
Investigation so far
Running the vQmod installer again reports: VQMOD ALREADY INSTALLED!
File permissions
/vqmod/logs and /vqmod/vqcache have Modify permissions and files are written there. Permissions are applied through Plesk Panel and verified over Remote Desktop; enabling global write permissions on the web root through Plesk does not change anything.
Logs
vQmod logs have no useful information; only skipped files are noted, e.g. VQModObject::parseMods - Could not resolve path for [ catalog/language/english/module/featured.php] (SKIPPED).
No php_error.log files are generated.
Failed Request Tracing does not pick up any issues.
Tests
All /vqmod/xml files have been removed except vqmod_opencart.xml and one that modifies column_left.tpl. These modifications are applied successfully but no cache files are generated in /vqmod/vqcache.
If I remove /vqmod/checked.cache and /vqmod/mods.cache the files are regenerated on next page load.
vQmod versions - rolled back to 2.5.1 but the issue persists.
Other considerations
When one particular vQmod modification is enabled, page load time is unacceptably slow (up to 20 seconds). The modification displays the first 4 products from subcategories on the parent category page. I've not gone through the code yet but assume it's hitting the database pretty hard.
On the original server page load was under 2 seconds. I doubt this is related to the cache issue, as that seems to be a permissions problem.
I had such an issue with mine; I later found out it was directory permissions, which in this case might have been caused by the move. Set permissions of the vqcache folder recursively to 777. It worked for me.
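For reference, that change looks roughly like this (illustrative paths; note that 777 is world-writable, so tighten it once things work). On a Unix-like host:

chmod -R 777 /path/to/vqmod/vqcache

On the Windows/Plesk setup described in the question, the rough equivalent is granting Modify recursively to the web user, e.g.:

icacls "C:\inetpub\vhosts\example.com\httpdocs\vqmod\vqcache" /grant "IIS_IUSRS:(OI)(CI)M" /T

(IIS_IUSRS is an assumption; Plesk installs commonly use their own psacln group for web users.)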
My apologies, I should have updated this sooner.
The issue relates to a case-sensitive preg_replace.
In short, changing line #120 of vqmod.php from
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~', '', $sourcePath);
to
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~i', '', $sourcePath);
means the /vqmod/vqcache/vq2-*.php cache files are written and the website runs as normal.
Explained in more detail at https://github.com/vqmod/vqmod/issues/81
I don't think preg_quote should have the i argument, but it's in the original code so I left it in.
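To see why the added /i modifier matters, here is a minimal standalone sketch (hypothetical paths, not vQmod's actual code). On Windows the drive-letter casing reported by getcwd() can differ from the casing of paths built elsewhere, so the case-sensitive match fails, nothing is stripped, and the full path later degenerates into the bogus vq2-C cache filename:

<?php
// Hypothetical values: cwd reported with 'C:', source path built with 'c:'.
$cwd        = 'C:\\inetpub\\vhosts\\site\\httpdocs\\';
$sourcePath = 'c:\\inetpub\\vhosts\\site\\httpdocs\\catalog\\controller\\common\\home.php';

// Case-sensitive: the prefix does not match, so nothing is stripped.
echo preg_replace('~^' . preg_quote($cwd, '~') . '~', '', $sourcePath), "\n";

// Case-insensitive (the fix): the prefix matches and is removed.
echo preg_replace('~^' . preg_quote($cwd, '~') . '~i', '', $sourcePath), "\n";

(The second preg_quote argument here is just the '~' delimiter, which matches how the function is documented.)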

Issue while logging in with VMware vSphere client 5.5

I am getting the below error while logging in with the VMware vSphere Client 5.5:
"The type initializer for 'VirtualInfrastructure.Utils.ClientsXml' threw an exception."
Just recently had this. Try right-clicking the shortcut and choosing "Run as administrator". This worked for me.
To avoid having to do this every time, set the shortcut itself to "Run as administrator". Even if a standard user launches it, it should still start without prompting for an administrator ID.
I got the same problem this morning and don't know what's causing it; the vSphere Client was working fine last week, and the only change was a Veeam backup over the weekend.
My solution was to open the website of your ESXi server, click the "Download vSphere Client" link to download the latest 5.5 client, and upgrade the existing client.
Check your disk space and/or the file permissions of your temporary directory.
VMware may be unable to create necessary files in your %temp% directory, which will cause the exact error you're experiencing.
I had this problem recently. It happened because I had updated the user's TEMP location in Control Panel - System - Advanced System Settings so temp files were stored on the E: drive, but the temp folder path on the E: drive did not exist. Creating the correct folder structure on the E: drive fixed the problem.
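A quick way to check for this (plain Windows commands, nothing VMware-specific):

echo %TEMP%
dir "%TEMP%"

If the dir command fails, the configured path doesn't exist; creating it (or pointing TEMP back to a valid folder) should clear the error.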
Yes, you need to run it as administrator. Simply double-clicking it runs into a permissions problem, even from an administrator-level profile.

ColdFusion 9 cffile error Access is Denied

I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to be able to manually sync files to their application folder. We have multiple servers for each instance that is randomly picked by BigIP. So just writing to the C:\ drive would only copy the file to the server the developer is currently accessing. So if the developer were to close out the browser and go right back in to make sure their changes worked, if they happen to get sent to a different server, they won't see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System account", which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows; its documentation is available online. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc.). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify the username as \\server\username. NOTE: I would suggest using a share that you create rather than the administrative share c$, but that is what you had in your example.
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before I would highly recommend playing around with the options/functionality using the command line first. Then once you get it working to your liking just paste those options into the code above.
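For reference, here is what the three steps look like with illustrative values filled in (server name, share, credentials and paths are all hypothetical; the RoboCopy options are the ones discussed above):

<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\server2\websync P@ssw0rd /user:server2\syncuser"
variable="shareLog"
timeout="30">
</cfexecute>

<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="C:\folder1\folder2 \\server2\websync\folder1\folder2 *.* /E /PURGE /NDL /NP /XA:SH /XO"
variable="robocopyLog"
timeout="60">
</cfexecute>

<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use \\server2\websync /d"
variable="shareLog"
timeout="30">
</cfexecute>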
I ran into a similar issue with this and it had me scratching my head as well. We are using Active Directory along with a UNC path to \\SERVER\SHARE\webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our CF service as a domain account and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder but still had no luck.
Ultimately what I found was causing the problem was that the Inetpub folder (the parent folder of our webroot) had sharing turned on, but that sharing did not include read/write access for our CF service domain account.
So while we had sharing on Inetpub and more powerful user permissions on the Inetpub/webroot folder, the share permissions (or lack thereof) took precedence over the more granular webroot security permissions.
Hope this helps someone else.

ColdFusion: Setting up a second website on CF8 Enterprise

Windows 2008 Server R2 64-bit, ColdFusion 8 Enterprise Edition (multi-server configuration), Plesk
Recently purchased dedicated hosting on HostGator and set up a website with help from a server guy. I'm doing a second site on my own this time. I've successfully created the new site in Plesk, pointed the domain, etc., but now I need to set up a new instance of ColdFusion for this new site and I'm not sure how to go about it. From my googling, I'm guessing it's simply a case of setting up a new "instance", which can be done via the ColdFusion Administrator. Is this correct? Is there anything else I need to know? Any gotchas waiting to bite me?
Thanks in advance for any help.
Well you don't need to set up a new CF instance for a new website: there's not a one-to-one relationship between the two ideas.
The minimum you really need to do is set up the new website in a directory within the CF application root (or within a CF-mapped directory if the dir is outside that root), and then run the web server connector (wsconfig.exe) to configure the website to pass requests for CF files to CF.
On a standard install, wsconfig.exe is located in the [coldfusion]/runtime/bin dir, and on a multiserver install it's in [JRun]/bin.
If you do add a new instance, you still need to run wsconfig to connect the website to the CF instance.
user460114: a new instance == a new "server", so if the first instance crashes for some reason, the second will still run, because each uses separate heap space and lives its own life.
You could use this for some other purpose, like a failover instance, a cluster, or a staging instance. It's not needed in your case.
You just need to make a new directory in the webroot. Here's an example:
C:\inetpub\www\old_site
C:\inetpub\www\new_site
Then set new_domain.com to point to ..\new_site, and leave old_site.com pointing to ..\old_site.
Cheers