I used this screen in Visual Studio 2017 to set the certificate for my project.
But it decided to copy the certificate into the project:
What can I do to prevent VS from copying my certificate into the source code?
I don't want to accidentally check it in. As a workaround, I am going to add it to the .gitignore file, but I would rather it be completely outside of the source.
[EDIT] In response to @RodrigoM's keen observation that the certificate is safe because it has a password: that is true. The system prompted me for the password, and now I am wondering where that password is stored ;). So I have an additional question:
Where is the password stored that I entered when I added my certificate to the solution?
Adding the certificate file to .gitignore is enough to be sure that an accidental check-in will never happen. Consider the default .gitignore file for Visual Studio; it already contains a rule for the .pfx extension (line 219). Just in case, you can also set Build Action to None and Copy to Output Directory to Do not copy.
Another option is to add the certificate file as a link (Project => Add => Existing Item => Add as Link). You will still see the certificate in your Solution Explorer, but the actual file will be located in some directory on your machine, far away from the folder with your git checkout.
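For illustration only, here is roughly how such a linked item might look inside an old-style .csproj, with Build Action set to None and no copy to the output directory; the relative path and file name are made up for the example:

<ItemGroup>
  <!-- Hypothetical certificate kept outside the repository and referenced as a link -->
  <None Include="..\..\private\MyAppCert.pfx">
    <Link>MyAppCert.pfx</Link>
    <CopyToOutputDirectory>Never</CopyToOutputDirectory>
  </None>
</ItemGroup>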
Hope it helps.
When I upload a set of files to a bucket on Google Storage, they are automatically assigned file types ("text/html", "application/json", etc.). But when I do a directory upload via the developer console, the files in the directory all get type "application/octet-stream". How do I get Google Storage to automatically assign file types to the contents of an uploaded directory?
This is likely a bug in the developer console. This problem comes from adding a directory via drag-and-drop. Uploading a directory via the "Upload Folder" button fixes this problem (it associates the correct file types with the files in the directory).
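If you already have a batch of objects that were uploaded with "application/octet-stream" and don't want to re-upload them, one workaround (not from the console itself, just a suggestion based on the standard gsutil CLI; the bucket and folder names below are made up) is to reset the Content-Type with gsutil setmeta, for example:

gsutil -m setmeta -h "Content-Type:image/png" "gs://my-bucket/my-folder/*.png"

Repeat with the appropriate type for each extension ("text/html" for *.html, "application/json" for *.json, and so on).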
As pointed out in your response above, this does seem to be a bug.
This does seem to be a bug. All Google Cloud Platform bugs should be reported on the public issue tracker; you might want to file one there so that it can be investigated further.
We are trying to publish a web service from our Dev box onto the UAT box.
There are no errors when building the web-service, but when trying to publish (using UNC path: \\TEST-SERVER\c$\inetpub\wwwroot\PerformanceReviewWebService) we get the below error message and the process fails:
3>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets(1475,5):
Error : Could not open Source file: The given path's format is not supported.
What can we do to track down this error and resolve it?
On the target box we have checked the security of the folder:
..\c$\inetpub\wwwroot\PerformanceReviewWebService
and we definitely have access to write into that directory.
We have no build errors.
Here are a few things you can try that may help debug or resolve the issue:
Try launching Visual Studio as an Administrator and retry publishing.
Try publishing to a local location. Does this succeed?
If it does, there is likely a problem either with accessing the UNC path or with file/security permissions (see the publish-profile sketch after this list).
If local publishing does not succeed, make sure you haven't mistakenly moved any of your bin, content, or web.config files to a different location.
Check for files in Visual Studio that are light grey or have an exclamation mark next to them (they may be missing from the project).
Try reverting the project to a prior state where publishing was successful, then reapply your changes gradually until the project is up to date.
Create a new empty project and attempt to publish it to the same location. If this works, something is corrupted or wrong in your existing project. You can try excluding files or folders from the build to narrow down which ones may be causing the problem.
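If local publishing works but the UNC target still fails, it can also be worth opening the publish profile itself (under Properties\PublishProfiles\*.pubxml) and checking that the target path is a plain UNC path with no stray characters. As a rough sketch only (the exact elements can vary by Visual Studio version), a file-system profile pointing at the path from the question would look something like:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <publishUrl>\\TEST-SERVER\c$\inetpub\wwwroot\PerformanceReviewWebService</publishUrl>
    <DeleteExistingFiles>False</DeleteExistingFiles>
  </PropertyGroup>
</Project>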
In my case, a referenced web project shared content with the web project I wanted to publish. After removing that content from the referenced project, publishing worked again.
I am looking for a way to share the EB configuration so that anyone on my team with valid AWS creds can deploy the code. By default, EB adds the following to your .gitignore file.
# Elastic Beanstalk Files
.elasticbeanstalk/*
!.elasticbeanstalk/*.cfg.yml
!.elasticbeanstalk/*.global.yml
Do I need to check in these files to share them with the team?
In my opinion, AWS royally messed up with their .gitignore defaults. This was confusing at first because it seemed like the entries were there for a good reason, but we couldn't find one. Maybe they are just a precaution so you don't commit something you shouldn't. However, firstly, modifying a project's .gitignore is not something a tool should be doing by default, in my opinion. And secondly, no one should be committing code they haven't reviewed.
As Kush notes in his reply, you can add the files to a nested directory that is tracked by your VCS. I'm assuming the reason for this is so that different developers can maintain different configurations. We have zero use for anything remotely resembling this, but it's worth noting, as I'm sure someone will find it useful.
We've completely removed these entries from our project and now commit the entire .elasticbeanstalk and .ebextensions directories.
Assuming you have CLI access, you can create a template and share it with a command like:
eb config save dev-env --cfg prod
Now, open this file in a text editor to modify/remove sections as necessary for your production environment.
Note: AWSConfigurationTemplateVersion is a required field. Do not remove it from the configuration file.
Checking Configurations into Version Control
If you want to check in your saved configurations so that anyone with access to your code can use the same settings in their own environments, or if you want to track different versions of the saved configurations, move the file into the .elasticbeanstalk/ directory. Saved configurations are located in the .elasticbeanstalk/saved_configs/ folder; by moving the configuration file up one level into the .elasticbeanstalk/ folder, it can be checked in and will still work with the EB CLI. After you move the file, you must add and commit it.
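Putting those steps together, a minimal command sketch (the environment name dev-env and template name prod are just the examples used above, and the file name assumes the EB CLI's default prod.cfg.yml naming under saved_configs/):

eb config save dev-env --cfg prod
mv .elasticbeanstalk/saved_configs/prod.cfg.yml .elasticbeanstalk/
git add .elasticbeanstalk/prod.cfg.yml
git commit -m "Share prod saved configuration"

Note that the default .gitignore rules shown in the question already un-ignore .elasticbeanstalk/*.cfg.yml, so no -f flag is needed for git add.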
Refer to this AWS Blog Post.
I run a website (www.pixelscrapper.com) that serves file downloads of images, and zipped collections of images (zip files containing multiple images, PSDs, vectors, etc.). These files are hosted on Amazon S3 and served via download URLs generated by the AWS SDK for PHP (v1).
Just recently, users trying to download our zip files using Firefox have started getting "This file is not commonly downloaded" warnings (after the download finishes), which force them to override the warning before accessing the file via the Firefox download manager. Naturally, this kind of warning causes concern for our users.
This warning shows up in Firefox only: Chrome, Edge, and Internet Explorer show no warnings when downloading and opening our zip files. The warning also only seems to show up for (surprise, surprise) files that have been added to the site more recently and have relatively few total downloads, but many of our files never receive large numbers of downloads, so this warning has the potential to plague many of our files indefinitely.
My question is: is there anything I can do to prevent this warning, by adjusting headers, signing files in some way, etc.? (From what I understand, Chrome and Edge also have "uncommon file" warnings, but they don't seem to be concerned with our files; why is this warning only firing in Firefox?) I've searched around on Stack Overflow and elsewhere, but most of the questions I've seen about "uncommon download" warnings are targeted at Chrome or Internet Explorer, and I can't seem to find any Firefox-specific information about this warning.
Here's a sample file download url (generated by aws sdk) that is causing warnings:
https://pixelscrapper-user-content.s3.amazonaws.com/template-attachment/user-2/node-13574/paper-037-template-polka-dots.zip?response-content-disposition=attachment%3B%20filename%3D%22ps_marisa-lerin_13574_paper-037-template-polka-dots_cu.zip%22&AWSAccessKeyId=AKIAIWM7MZMHRPA6FHEA&Expires=1495386939&Signature=HDmwRFPX81CIVrQgu1BkEyR9iRQ%3D
Here's an inspection of headers in Firefox:
UPDATE:
The issue here is not the nasty-looking url generated by the aws sdk: I checked downloading the same zip file (containing one jpg, one psd) from the following "clean" url, and it still gives the warning: http://pixelscrapper-misc-files.s3.amazonaws.com/ps_marisa-lerin_13574_paper-037-template-polka-dots_cu.zip
Check the settings under Tools -> Options -> Security. You may need to uncheck the options under the "General" section of that dialog box; simply unchecking "Warn me about unwanted and uncommon software" resolves this.
There is no need to change the other settings. I hope it helps.
Also see: https://support.mozilla.org/en-US/kb/how-does-phishing-and-malware-protection-work
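If you would rather flip the underlying setting directly, that checkbox appears to correspond to a Safe Browsing preference in about:config; the name below is my assumption based on recent Firefox versions, so verify it in your build:

browser.safebrowsing.downloads.remote.block_uncommon = false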
I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to be able to manually sync files to their application folder. We have multiple servers for each instance, and BigIP randomly picks which one a request goes to. So writing to the C:\ drive would only copy the file to the server the developer is currently accessing, and if the developer were to close the browser and go right back in to make sure their changes worked, they might get sent to a different server and not see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user that the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System Account" user, which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag, and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows. You can read some documentation here and here. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify the username as \\server\username. NOTE: I would suggest using a share that you create rather than the administrative share c$, but that is what you had in your example.
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
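For illustration, here is the same copy step with hypothetical values plugged in, using the folder names from the question and the options mentioned above:

<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="C:\folder1\folder2\folder3\folder4\folder5 \\server\c$\folder1\folder2\folder3\folder4\folder5 *.* /E /PURGE /XA:SH /XO /NDL /NP"
variable="robocopyLog"
timeout="60">
</cfexecute>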
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before, I would highly recommend playing around with the options/functionality using the command line first. Then, once you get it working to your liking, just paste those options into the code above.
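For example, a command-line dry run to start from (the paths are placeholders, and the /L switch only lists what would be copied without actually copying anything):

robocopy C:\folder1\folder2\folder3\folder4\folder5 \\server\c$\folder1\folder2\folder3\folder4\folder5 *.* /E /PURGE /XA:SH /XO /NDL /NP /L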
I ran into a similar issue with this, and it had me scratching my head as well. We are using Active Directory along with a UNC path to SERVERSHARE/webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our CFService as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder but still had no luck.
Ultimately, what I found was causing the problem was that the Inetpub folder (the parent folder of our webroot) had sharing turned on, but that sharing did not include Read/Write permissions for our CFService domain account.
So while we had sharing on Inetpub and more powerful user permissions turned on for the Inetpub/webroot folder, the sharing permissions (or lack thereof) took precedence over the more granular webroot user security permissions.
Hope this helps someone else.