I've installed the Remote System Explorer plug-in for Eclipse and set up an SSH connection to our development server using public key authentication and a custom port (I'm not sure whether any of these customisations relate to the problem).
Browsing the file system works great, and I can even create folders and files, e.g. /tmp/foo/bar.txt. But I can't figure out how to "push" changes I've made to the server.
My problem is this: I open a file, write some text and save it in Eclipse. The asterisk next to the file name vanishes (indicating it's saved), and if I close and re-open the file in Eclipse the changes are present. But they're not visible on the server: e.g. changes to an .html file don't show up in the web browser, and cat bar.txt (the file mentioned earlier) produces empty output.
Creating folders or files works as intended, but updates to file contents don't show up on the remote system, even though they persist in Eclipse.
Is there some button I'm missing to push my "local" changes to the remote server? Can I get rid of all this caching entirely? As we're working in a team, it's crucial that our files are always up to date, and an extra caching layer would definitely spoil all the fun :(
My IDE configuration is as follows:
Eclipse IDE for C/C++ Developers
Version: Juno Service Release 2
Build id: 20130225-0426
Get a handle to the IFileService that is hosting the file ... this gives you the ability to upload files.
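For illustration, a rough sketch of that approach (this is from memory and untested; the subsystem lookup and the exact upload(...) signature are assumptions, so verify them against the RSE Javadoc before relying on this):
import java.io.File;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.rse.core.model.IHost;
import org.eclipse.rse.core.subsystems.ISubSystem;
import org.eclipse.rse.services.files.IFileService;
import org.eclipse.rse.subsystems.files.core.subsystems.IFileServiceSubSystem;

// host is your existing RSE connection (an IHost).
for (ISubSystem ss : host.getSubSystems()) {
    if (ss instanceof IFileServiceSubSystem) {
        IFileService service = ((IFileServiceSubSystem) ss).getFileService();
        // Explicitly push a local copy of the file to the remote side.
        service.upload(new File("C:/local/bar.txt"), // local source file
                "/tmp/foo", "bar.txt",               // remote parent dir + file name
                false, null, null,                   // text mode, default encodings
                new NullProgressMonitor());
        break;
    }
}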
I'll give a bit of background on the setup we have and why. Currently my friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32 GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. We wanted two depots, one on disk E and one on disk D, the reason being that the C disk was only 20 GB (12 GB free after Windows).
I've tried multiple things (not got much hair left after this) to map the created depots to each disk, but have had little success and need your wisdom!
I've followed both the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD, as well as changing the depot storage location in P4Admin on the server via RDP, to point to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created UE_WIP and UE_LIVE folders on each disk).
When I open up P4V on my local machine I'm able to connect happily (as per the screenshot) and set up a workspace on my local machine (it detects both depots). This is where we're getting stuck. I then create a new Unreal Engine project, save it to the following local directory, E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great: it detects the workspace and everything connects to the server.
When I click 'Submit to Source Control' I get 'Failed Checking Source Control'; when I try adding the files manually via P4V, marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP drive depot or the LIVE drive depot. To resolve this, do we need:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory? E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where your local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that you define your workspace as being where your files are; if you want to have the files in one of the local directories above that you've already defined as being where your workspace lives, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
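For example, a minimal sketch of the first option, keeping the files where they are and mapping the whole workspace to the WIP depot (adjust names as needed):
Client: bsmith
Root: E:\DELETE\Perforce\Test
View:
//WIP/... //bsmith/...
With that spec, E:\DELETE\Perforce\Test\Saved\... maps to //WIP/Saved/..., so the files fall inside the client view.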
Either way, once they're in your workspace, you can add them to the depot. Simple as that!
I am getting the error below when logging in with VMware vSphere Client 5.5:
"The type initializer for 'VirtualInfrastructure.Utils.ClientsXml' threw an exception."
Just recently had this. Try right-clicking the shortcut and choosing "Run as administrator". This worked for me.
To avoid having to do this all the time, set the shortcut itself to "Run as administrator". Even if a user launches it, it should still start without prompting for an administrator ID.
I got the same problem this morning and don't know what's causing the issue. The vSphere Client was working fine last week; the only change was a backup by Veeam over the weekend.
My solution: open the website of your ESXi server, click the "Download vSphere Client" link to download the latest version 5.5 client, and upgrade the existing client.
Check your disk space and/or file permissions of your temporary directory.
VMware may be unable to create the necessary files in your %temp% directory, which will cause the exact error you're experiencing.
I had this problem recently. It happened because I had updated the user's TEMP file location in Control Panel - System - Advanced System Settings so that TEMP files were stored on the E: drive, but the temp folder path on the E: drive did not exist. Creating the correct folder structure on the E: drive fixed the problem.
Yes, you need to Run as Administrator. This is due to a permissions problem when simply double-clicking it, even in an administrator-level profile.
I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to manually sync files to their application folder. We have multiple servers for each instance, and BigIP randomly picks one. Just writing to the C:\ drive would only copy the file to the server the developer is currently accessing, so if the developer were to close the browser and go right back in to make sure their changes worked, and they happened to get sent to a different server, they wouldn't see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local c:\ drive). To be able to do this, you have to change the user the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System Account" user, which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
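If you'd rather script the service change than click through services.msc, something like this from an elevated prompt should work (the service name and account here are assumptions; check the exact service name in services.msc first, and note that sc requires the space after obj= and password=):
sc config "ColdFusion 9 Application Server" obj= "DOMAIN\cfservice" password= "yourPassword"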
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag, and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows. You can read some documentation here and here. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify the username as \\server\username. NOTE: I would suggest using a share that you create rather than the administrative share c$, but that is what you had in your example.
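For example, with a hypothetical share and account filled in, the assembled arguments attribute would read:
arguments="use \\server\cfshare myP@ssw0rd /user:\\server\cfsync"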
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
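Putting the options mentioned above together, the arguments attribute might look like this (paths are hypothetical; options as discussed):
arguments="C:\folder1\folder2\folder3\folder4\folder5 \\server\c$\folder1\folder2\folder3\folder4\folder5 *.* /E /PURGE /NDL /NP /XA:SH /XO"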
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before, I would highly recommend playing around with the options/functionality from the command line first. Then, once you get it working to your liking, just paste those options into the code above.
I ran into a similar issue with this and it had me scratching my head as well. We are using Active Directory along with a UNC path to \\SERVER\SHARE\webroot. The application was working fine, with the exception of using CFFILE to create a directory. We were running our CFService as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder, but still had no luck.
Ultimately what I found was causing the problem was that the Inetpub Folder (parent folder to our webroot) had sharing turned on but that sharing did not include 'Read/Write' sharing for our CFService domain account.
So while we had sharing on Inetpub, and more powerful user permissions turned on for the Inetpub/webroot folder, the sharing permissions (or lack thereof) took precedence over the more granular webroot user security permissions.
Hope this helps someone else.
I have a web service (.asmx) that takes an array of bytes of an image as a parameter and saves the image on our server. It works fine, other than the fact that I cannot do anything with the file afterwards as I apparently don't have the permissions.
When I use FileStream in my code to save the file to my server folder, is there anywhere in code that I can set read/write/execute permissions?
EDIT: Our server is a Linux server.
I wrote the web service on my Windows machine (Visual Studio 2010) and packaged it in MonoDevelop for deployment to our Linux server. The folder where I store the images is in the same folder as my .asmx file (on the Linux server). Whenever my web service stores an image in that folder, the permissions default to rw-------. I would like them to default to rwxrwxrwx.
Try this (you'll need using System.IO; and using System.Security.AccessControl; at the top of the file):
// Get a FileSecurity object that represents the
// current security settings.
FileSecurity fSecurity = File.GetAccessControl(strFilePath1);
// Add a FileSystemAccessRule to the security settings.
fSecurity.AddAccessRule(new FileSystemAccessRule("IUSR_SOMESERVER",
    FileSystemRights.FullControl, AccessControlType.Allow));
// Set the new access settings.
File.SetAccessControl(strFilePath1, fSecurity);
Adapt the code to what you want (user, file path, and ACL).
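One caveat, given the edit above saying the server is actually Linux: File.GetAccessControl/SetAccessControl are Windows ACL APIs and won't achieve anything useful under Mono on Linux. A minimal sketch of the POSIX route instead, assuming the Mono.Posix assembly is referenced (the path is hypothetical, and world-writable rwxrwxrwx is rarely a wise default):
using Mono.Unix.Native;

// chmod the saved image to rwxrwxrwx (0777), as asked for in the question.
Syscall.chmod("/srv/www/images/upload.jpg",
    FilePermissions.S_IRWXU | FilePermissions.S_IRWXG | FilePermissions.S_IRWXO);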
How do you check for permissions to write to a directory or file?
Here is an article on file permissions; I hope you can apply it to reading the file, since it sounds like you do not have trouble writing it. Or maybe you have trouble writing it properly, and that is what causes the permissions to get locked, depending on where you save on your server.
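If you just need a runtime check, a simple approach is to attempt the operation and catch the failure. A minimal C# sketch (the helper name is mine; pass whatever directory you plan to save into):
using System;
using System.IO;

static bool CanWriteTo(string directory)
{
    // Probe with a uniquely named file and clean it up immediately.
    string probe = Path.Combine(directory, Path.GetRandomFileName());
    try
    {
        using (File.Create(probe)) { }
        File.Delete(probe);
        return true;
    }
    catch (UnauthorizedAccessException) { return false; }
    catch (IOException) { return false; }
}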
My Silverlight 5 application has a third-party grid. I need to export the grid and open Excel on the user's machine. The grid supports an export feature which writes the content to Stream stream = dialog.OpenFile(), and the export is working fine. The new requirement is to open the file instead of just saving it. I cannot run my app OOB, and I'd also hate to push the file to my service/website and download it from there. Is there any effective workaround or solution to open the content in the user's Excel application without making the application trusted with certificates?
It is an essential security feature of Silverlight not to allow starting other applications, or even opening websites, when running in the browser and not running as a trusted application.
So, as you might have guessed (since there were no answers to this question for about one and a half years), the answer to your question is: no, there is no effective workaround for your problem.
I had that problem myself and went for the solution of sending the file to a web service and storing it on a network share. From there the user is then able to open it...