I have multiple Jetty server instances behind a load balancer.
I am setting the following temp directory in jetty.xml:
<Call name="setAttribute">
  <Arg>javax.servlet.context.tempdir</Arg>
  <Arg>/some/dir/foo</Arg>
</Call>
Can multiple servers instances point to the same temp directory?
http://www.eclipse.org/jetty/documentation/9.3.x/ref-temporary-directories.html#_setting_a_specific_temp_directory
Yes, multiple server instances can point to the same temp directory. If you are trying to pull something from that directory, I'd recommend persisting the temp directory as explained in the documentation you linked.
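For reference, a minimal sketch of that persistence setting in a context XML file, assuming a Jetty 9.3 WebAppContext and reusing the path from your example (per the linked docs, persistTempDirectory keeps the work directory across restarts):

<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <!-- Keep and reuse the same temp directory across restarts -->
  <Set name="tempDirectory">/some/dir/foo</Set>
  <Set name="persistTempDirectory">true</Set>
</Configure>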
I'm having a weird issue I've not encountered before: my new Elastic Beanstalk environment is not respecting my .htaccess files. It's odd because I don't recall this ever being a problem before. In fact, I have an older EB environment that I set up years ago and it's fine with the same code.
This new environment is 64 bit Amazon Linux 2.
Looking elsewhere, there are guides saying you need to edit your /etc/httpd/conf/httpd.conf file, but my EC2 instance doesn't have one. (I also don't have an /etc/apache2 directory.) The closest it has is an /etc/httpd/conf.d/php.conf file.
I don't recall ever having to do this previously, and obviously I'm a bit concerned that my EC2 instance will forget any changes to any .conf files if new instances are spawned in the future.
How do I get my EB/EC2 instance to honor my .htaccess files?
From v3.0.0 onwards, Amazon changed their Elastic Beanstalk PHP Platforms to use nginx as their server instead of Apache. This is not mentioned anywhere when you're creating your platform, so it can catch you unawares.
If you want to use Apache, you need to select a platform version of v2.x.x.
See the full history of Elastic Beanstalk PHP platforms for specific details.
As an update to the other answer: on version 3+ platforms you can now switch back to Apache by changing the proxy server in the environment configuration.
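For example, on the Amazon Linux 2 platforms this can be done with an .ebextensions option setting (the file name proxy.config is an assumption; the namespace and option come from the Elastic Beanstalk configuration options):

# .ebextensions/proxy.config
option_settings:
  aws:elasticbeanstalk:environment:proxy:
    ProxyServer: apache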
I am trying to set up an Active-Active deployment for the WSO2 API Manager by following this URL: Configuring an Active-Active Deployment
Everything is working fine except step 5, where I am trying to set up NFS. I moved the /repository/deployment/server folder to another drive, e.g. to:
D:/WSO2AM/Deployment/server
so that both nodes can share the deployment folder.
Not knowing which config files to change to point the deployment folder to a location other than the default, I edited the "RepositoryLocation" element in carbon.xml and set it to D:/WSO2AM/Deployment/server, but it looks like that is not enough. When I start the server, I get the following error message:
FATAL - SynapseControllerFactory The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
[2019-03-12 15:54:49,332] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
org.apache.synapse.SynapseException: The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
I would appreciate it if someone could help me set up NFS so that both nodes can share the same deployment folder and I don't have to worry about syncing them through some other mechanism.
Thanks
After struggling for almost a day, I found a solution in a completely separate WSO2 thread:
Enable Artifact Synchronization
In that thread, they ask you to create an SMB share (for Windows) for the deployment and tenants directories; for APIM purposes, we need to create an SMB share for the /repository/deployment/server directory.
Creating the symbolic link is just one command, as seen below:
mklink /D <APIM_HOME>\repository\deployment\server D:\WSO2\Shared\deployment\server
We need to create the symlink on both nodes, pointing to the same location; a rough sketch follows below.
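As a sketch, the commands on each node might look like this, run from an elevated command prompt (the backup step is an assumption; mklink refuses to create a link where a real folder already exists):

:: Move the existing local folder out of the way first
move <APIM_HOME>\repository\deployment\server <APIM_HOME>\repository\deployment\server.bak

:: Link the local path to the shared location (same command on both nodes)
mklink /D <APIM_HOME>\repository\deployment\server D:\WSO2\Shared\deployment\server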
Once done, no configuration changes are needed on the APIM side; it will work by default with this scenario configured.
I created an instance in AWS Elastic Beanstalk and use it with a git repository.
There are two files outside this repository, config.php and .htaccess.
I can create them with vim inside the instance via SSH, but when I upload a new version they are erased.
What is the correct way to work with files outside the repository, like DB connection settings and custom configurations?
The idea behind Elastic Beanstalk (and other application PaaS's as this isn't unique to Elastic Beanstalk) is that the server the application runs on is in essence stateless. This means that any local changes that you make to the instance will be gone if that instance is replaced.
This can be the case when using AutoScaling Groups that cause instances to be terminated and created based on demand. This can also happen if your instance has issues and is deemed in a bad state.
Thus if you SSH into an EC2 instance, create files, and then push a new version of your application, your instance is torn down, brought back up, and the files aren't there anymore.
If you want to persist information that isn't in version control (often application secrets like API keys, credentials, specific configuration, etc.), then one way to do that is to add it to environment variables which you can learn about here: http://docs.aws.amazon.com/gettingstarted/latest/deploy/envvar.html
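For example, with the EB CLI you can set environment properties on the environment itself, so they never live in the repository (the variable names and values below are placeholders):

# Set environment properties on the running environment via the EB CLI
eb setenv DB_HOST=mydb.example.com DB_USER=appuser DB_PASSWORD=change-me

The application then reads them at runtime like any other environment variable.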
@Josh Davis is correct in saying "The idea behind Elastic Beanstalk (and other application PaaS's as this isn't unique to Elastic Beanstalk) is that the server the application runs on is in essence stateless. This means that any local changes that you make to the instance will be gone if that instance is replaced."
In layman's terms, that means that the server can be rebuilt at any time and any data that was persisted to disk is lost.
If you'd like to persist the above two files without version control then I would suggest using ebextensions >> http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Example:
files:
  "/home/ec2-user/myfile":
    mode: "000755"
    owner: root
    group: root
    source: http://foo.bar/myfile
  "/home/ec2-user/myfile2":
    mode: "000755"
    owner: root
    group: root
    content: |
      # this is my file
      # with content
The first example will download the file from http://foo.bar/myfile and create it, with its contents, at /home/ec2-user/myfile on the file system.
The second example will create a file with the contents you specify; in this case the file will contain # this is my file and # with content.
If you use the second option, always run it through a YAML validator >> http://www.yamllint.com/
I'm just learning about Chef, and I'm confused about the best way to handle a situation where there's a single config file that may need to be modified by multiple recipes.

The specific case I have is with AWS CloudWatch Logs. It has a single configuration file (no way that I see to include other files), which needs to list out all of the logfiles it collects. I have different recipes (e.g. a web server, a mail server, etc.) that will cause different logfiles to be created. When those recipes run, I want their logfiles to be added to the AWS Logs config file. But it doesn't seem safe to just append that to the file, because if the recipe is re-run, it will be added again.

And I want to keep the logic about which logfiles exist within the recipe where the software that creates them is installed. I'd like to be able to just add a recipe to the runlist and have its logs get added to the config file (and vice versa), without the mail server recipe having to know anything about the files the web server creates and so on.
What is the right way to do this?
You need to have a CloudWatch recipe that actually creates the config file. Each of your other recipes can add some node attributes to indicate what log files they need added, and then the CloudWatch cookbook could grab those attributes and use them to build the config file.
For example:
In the webserver recipe
node.default['logs']['nginx']['somevhost'] = %w(/path/to/log /path/to/other/log)
In a cloudwatch recipe
node['logs'].each do |app, components|
  components.each do |component, log_files|
    # add log_files to some collection structured for CloudWatch's config file
  end
end
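To turn those attributes into the single config file, a minimal sketch of the cloudwatch recipe might look like this (the destination path, template name, and attribute layout are assumptions for illustration):

# Flatten every advertised log path into one list...
log_files = []
node['logs'].each do |_app, components|
  components.each do |_component, paths|
    log_files.concat(paths)
  end
end

# ...then render the single CloudWatch Logs config file from it.
template '/etc/awslogs/awslogs.conf' do
  source 'awslogs.conf.erb'
  variables(log_files: log_files.uniq)
end

Because the template is rewritten from scratch on every Chef run, re-running a recipe never appends duplicate entries.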
Even better
I would suggest you take a look at Infochimps's silverware cookbook. It has a nice library you can use to advertise your log files in the webserver and mailserver cookbooks, and then discover them in the CloudWatch cookbook.
I am getting the following error:
The cause of this exception was:
java.io.FileNotFoundException:
//server/c$/folder1/folder2/folder3/folder4/folder5/login.cfm
(Access is denied).
When doing this:
<cffile action="copy"
destination="#copyto#\#apfold#\#applic#\#files#"
source="#path#\#apfold#\#applic#\#files#">
If I try to write to C:\folder1\folder2\folder3\folder4\folder5\login.cfm, it works fine. The problem with doing it this way is that this is a script for developers to be able to manually sync files to their application folder. We have multiple servers for each instance, randomly picked by BigIP. So just writing to the C:\ drive would only copy the file to the server the developer is currently accessing. If the developer were to close the browser and go right back in to make sure their changes worked, and they happened to get sent to a different server, they wouldn't see their change.
Since it works with writing to C:\, I know the permissions are correct. I've also copied the path out of the error message and put it in the address bar on the server and it got to the folder/file fine. What else could be stopping it from being able to access that server?
It seems that you want to access a file via UNC notation on a network folder (even if it incidentally refers to a directory on the local C:\ drive). To be able to do this, you have to change the user the ColdFusion 9 Application Server service runs as. By default, this service runs as the "Local System Account" user, which you need to change to an actual user. Have a look at the following link to find out how to do this: http://mlowell.hubpages.com/hub/Coldfusion-Programming-Accessing-a-shared-network-drive
Note that you might have to add a user with the same name as the one used for the CF 9 service to all of the file servers.
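If you prefer the command line to the Services console, a rough sketch of the same change might look like this (the service name and account are assumptions; check the exact name in services.msc):

:: Run the ColdFusion 9 service as a real domain user instead of Local System
sc config "ColdFusion 9 Application Server" obj= "DOMAIN\cfuser" password= "secret"

:: Restart the service so the new account takes effect
net stop "ColdFusion 9 Application Server"
net start "ColdFusion 9 Application Server"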
If you don't want to enable FTP on your servers, another option would be to use RoboCopy to keep the servers in sync. I have had very good luck using this tool. You will need access to the cfexecute ColdFusion tag, and you will need to create share(s) on your servers.
RoboCopy is an executable that comes with Windows. You can read some documentation here and here. It has some very powerful features and can be set to "mirror" the contents of directories from one server to the other. In this mode it will keep the folders identical (new files added, removed files deleted, updated files copied, etc). This is how I have used it.
Basically, you will create a share on your destination servers and give access to a specific user (can be local or domain). On your source server you will run some ColdFusion code that:
Logically maps a drive to the destination server
Runs the RoboCopy utility to copy files to the destination server
Then disconnects the mapped drive
The ColdFusion service on your source server will need access to C:\WINDOWS\system32\net.exe and C:\WINDOWS\system32\robocopy.exe. If you are using ColdFusion sandbox security you will need to add entries for these executables (on the source server only). Here are some basic code examples.
First, map to the destination server:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} {password} /user:{username}"
variable="shareLog"
timeout="30">
</cfexecute>
The {share_name} here would be something like \\server\c$. {username} and {password} should be obvious. You can specify the username as \\server\username. NOTE: I would suggest using a share that you create rather than the administrative share c$, but that is what you had in your example.
Next, copy the files from the source server to the destination server:
<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
arguments="{source_folder} {destination_folder} [files_to_copy] [options]"
variable="robocopyLog"
timeout="60">
</cfexecute>
The {source_folder} here would be something like C:\folder1\folder2\folder3\folder4\folder5\ and the {destination_folder} would be \\server\c$\folder1\folder2\folder3\folder4\folder5\. You must begin this argument with the {share_name} from the step above followed by the desired directory path. The [files_to_copy] is a list of files or wildcard (*.*) and the [options] are RoboCopy's options. See the links that I have included for the full list of options. It is extensive. To mirror a folder structure see the /E and /PURGE options. I also typically include the /NDL and /NP options to limit the output generated. And the /XA:SH to exclude system and hidden files. And the /XO to not bother copying older files. You can exclude other files/directories specifically or by using wildcards.
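For illustration, here is the same call filled in with the paths from this question and the options described above (still using the administrative share from your example):

<cfexecute name="C:\WINDOWS\system32\robocopy.exe"
    arguments="C:\folder1\folder2\folder3\folder4\folder5 \\server\c$\folder1\folder2\folder3\folder4\folder5 *.* /E /PURGE /NDL /NP /XA:SH /XO"
    variable="robocopyLog"
    timeout="60">
</cfexecute>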
Then, disconnect the mapped drive:
<cfexecute name="C:\WINDOWS\system32\net.exe"
arguments="use {share_name} /d"
variable="shareLog"
timeout="30">
</cfexecute>
Works like a charm. If you go this route and have not used RoboCopy before, I would highly recommend playing around with the options/functionality using the command line first. Then, once you get it working to your liking, just paste those options into the code above.
I ran into a similar issue with this and it had me scratching my head as well. We are using Active Directory along with a UNC path to \\SERVER\SHARE\webroot. The application was working fine with the exception of using CFFILE to create a directory. We were running our CF service as a domain account, and permissions were granted on the webroot folder (residing on the UNC server). This same domain account was also being used to connect to the UNC path within IIS. I even went so far as to grant Full Control on the webroot folder but still had no luck.
Ultimately, what I found was causing the problem was that the Inetpub folder (the parent folder of our webroot) had sharing turned on, but that sharing did not include 'Read/Write' sharing for our CFService domain account.
So while we had sharing on Inetpub and more powerful user permissions on the Inetpub/webroot folder, the share permissions (or lack thereof) took precedence over the more granular webroot user security permissions.
Hope this helps someone else.