Creating new instances + hosts file - google-cloud-platform

So I have been trying to create an Ansible playbook which creates a new instance on GCP and creates a test file inside that instance. I've been using this example project from GitHub as a template. In this example project, there is an ansible_hosts file which contains this host entry:
[gce_instances]
myinstance[1:4]
but I don't have any idea what it actually does.

The fragment you provided is core Ansible and not actually related to anything GCP-specific. This is a good reference doc: Working with Inventory.
At a high level,
[gce_instances]
myinstance[1:4]
the hosts file defines the machine identities against which Ansible executes. With the hosts file, you can define groups of hosts, which lets you apply Ansible playbooks to subsets of hosts at a time.
In the example, a group is created that is called gce_instances. There is nothing special or magic about the name; it isn't a keyword or phrase with any special meaning here.
Within a group, we specify the hostnames that we wish to work against.
The example given uses a range pattern and is simply shorthand for:
[gce_instances]
myinstance1
myinstance2
myinstance3
myinstance4
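Once the group exists, a playbook targets it by name in its hosts: line. A minimal sketch, using a throwaway play that simply pings every host in the group (the play itself is illustrative, not part of the example project):
- hosts: gce_instances
  tasks:
    - name: Verify connectivity to every host in the group
      ping: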

Related

How to deploy a serverless application to specific / limited Google Cloud regions using Terraform?

I have been following this tutorial to run global services with Google Cloud Run using Terraform.
That is enabled by using a data source to retrieve all Cloud Run regions as a list and deploying to all of them:
data "google_cloud_run_locations" "default" { }
Then, deploying the Cloud Run service using for_each construct in HCL:
for_each = toset(data.google_cloud_run_locations.default.locations)
I want to achieve something similar, but be able to specify a limited set of regions instead of deploying to all of them, for example a list I declare in terraform.tfvars.
I suppose only slight modifications would be needed, if that is possible.
More information:
As per the official docs, I can specify a location where I want to run my service.
This link shows how to configure Cloud Run to deploy to all available regions.
What I want to do is to deploy to more than one region (but not all) with Terraform, e.g.
["us-west1", "us-central1", "us-east1"]
Is it possible or would I need to change the data source that retrieves all Cloud Run regions?
The google_cloud_run_locations data source does not allow filtering, because the API endpoint only supports returning all possible locations. Therefore, we need to do the filtering in the Terraform DSL. There is no intrinsic function equivalent to filter, select, etc. from other languages, so we use a for expression here.
All possible locations are stored in the attribute data.google_cloud_run_locations.default.locations, so we would filter on that list with a regular expression. Given the example in the question of limiting to the list ["us-west1", "us-central1", "us-east1"]:
for_each = toset([
  for location in data.google_cloud_run_locations.default.locations :
  location if can(regex("us-(?:west|central|east)1", location))
])
The conditional selects only the locations which match the regular expression: the can function returns true when the regex call succeeds and false when it does not. The regular expression can easily be modified for a different subset of locations if necessary.
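If the goal is to drive the subset from terraform.tfvars rather than a regular expression, a list variable combined with the contains function works as well. A minimal sketch, where the variable name, service name, and container image are purely illustrative:
variable "cloud_run_regions" {
  type    = list(string)
  default = ["us-west1", "us-central1", "us-east1"]  # override in terraform.tfvars
}

data "google_cloud_run_locations" "default" {}

resource "google_cloud_run_service" "default" {
  # keep only the locations that appear in the variable
  for_each = toset([
    for location in data.google_cloud_run_locations.default.locations :
    location if contains(var.cloud_run_regions, location)
  ])

  name     = "my-service"
  location = each.value

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"  # placeholder image
      }
    }
  }
}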

Using multiple SSH keys for different hosts with Ansible EC2 Inventory Plugin

I am trying to use Ansible to install applications across a number of existing AWS EC2 instances which use different SSH keys and usernames on different Linux OSes. Because of the changing state of the existing instances I am attempting to use Ansible's Dynamic Inventory via the aws_ec2 inventory plugin, as recommended.
I am able to group the hosts by key_name but now need to run the Ansible playbook against this inventory using the relevant SSH key and username according to the group, structured as the below example output from ansible-inventory -i inventory.aws_ec2.yml --graph:
#all:
|--#_SSHkey1:
| |--hostnameA
| |--hostnameB
|--#_SSHkey2:
| |--hostnameC
|--#_SSHkey3:
| |--hostnameD
| |--hostnameE
| |--hostnameF
|--#aws_ec2:
| |--hostnameA
| |--hostnameB
| |--hostnameC
| |--hostnameD
| |--hostnameE
| |--hostnameF
|--#ungrouped:
I have tried creating a separate hosts file (as per the below) using the groups as listed above, providing the path to the relevant SSH key but I am unsure how you would use this with the dynamic inventory.
[SSHkey1]
ansible_user=ec2-user
ansible_ssh_private_key_file=/path/to/SSHkey1
[SSHkey2]
ansible_user=ubuntu
ansible_ssh_private_key_file=/path/to/SSHkey2
[SSHkey3]
ansible_user=ec2-user
ansible_ssh_private_key_file=/path/to/SSHkey3
This is not explained in the official Ansible documentation here and here, but it should be a common use case. A lot of the documentation I have found refers to an older method of dynamic inventory based on a Python script (ec2.py), which is deprecated and so no longer relevant (for instance this AWS post).
I have found a similar unanswered question here (Part 3).
Any links to examples, documentation or explanations would be greatly appreciated as this seems to be a relatively new way of creating a dynamic inventory and I am finding it hard to locate clear, detailed documentation.
Edit
Using group variables as suggested by #larsks in the comments worked. I was initially caught out by the fact that the key names returned from the inventory plugin are prefixed with an underscore, so the group names need to be of the form _SSHkey.
The answer was to use group variables, as suggested in the comments. The key names returned from the inventory plugin are prefixed with an underscore, so the group names need to be of the form _SSHkey.
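Concretely, that means putting the connection settings into group_vars files named after the generated groups rather than into a static hosts file. A minimal sketch for the first group, reusing the username and key path from the attempt above (the group_vars layout is the standard Ansible convention, not something specific to this plugin):
# group_vars/_SSHkey1.yml -- placed alongside inventory.aws_ec2.yml or the playbook
ansible_user: ec2-user
ansible_ssh_private_key_file: /path/to/SSHkey1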
Have you considered using the SSH config file, ~/.ssh/config? You can put host-specific connection information there. Host, HostName, User, and IdentityFile are the four options you need:
Host ec1
Hostname 10.10.10.10
User ubuntu
IdentityFile ~/.ssh/ec1-ubuntu.rsa
Then when you ssh to 'ec1', ssh will connect to host 10.10.10.10 as user ubuntu with the specified RSA key. 'ec1' can be any name you like; it does not have to be the actual host name, IP, or FQDN. Make it match your inventory name.
Warning: make certain the ~/.ssh directory is mode 0700 and the files within it are 0600 (chmod 700 ~/.ssh; chmod 600 ~/.ssh/*), and that the owner is correct, or ssh will give you fits. On Ubuntu, /var/log/auth.log will help with troubleshooting.

How to set up different uploaded file storage locations for Laravel 5.2 in local deployment and AWS EB w/ S3?

I'm working on a Laravel 5.2 application where users can send a file by POST, the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to store in a specified local folder on my machine. And when I deploy to AWS-EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard code something like \Storage::disk('s3')->put(...) because that won't work locally.
What I'm trying to do here is similar to what I was able to do for environment variables for database connectivity... I was able to find some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as us-west-2 even though S3 shows my environment as Oregon.
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.
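For completeness, the s3 disk in /config/filesystems.php has to read those environment variables. A sketch of what that disk entry can look like, assuming the stock key names from Laravel 5.2's filesystems config:
// excerpt from config/filesystems.php -- wires the s3 disk to the env vars defined above
's3' => [
    'driver' => 's3',
    'key'    => env('S3_KEY'),
    'secret' => env('S3_SECRET'),
    'region' => env('S3_REGION'),
    'bucket' => env('S3_BUCKET'),
],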

How to restrict createObject() on certain java classes or packages?

I want to create a secure ColdFusion environment, for which I am using a multiple-sandbox configuration. The following tasks are easily achievable using the friendly administrator interface:
Restricting CF tags like cfexecute, cfregistry, and cfhttp.
Disabling access to internal ColdFusion Java components.
Allowing access only to certain servers and port ranges for third-party resources.
The rest is handled by configuring the web server accordingly.
The Problem:
So I was satisfied with the setup, only to discover later that, regardless of the restriction applied to the cfexecute tag, one can easily use java.lang.Runtime to execute system files or scripts:
String[] cmd = {"cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\""};
Process p = Runtime.getRuntime().exec(cmd);
or using the java.lang.ProcessBuilder:
ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\"");
....
Process myProcess = pb.start();
The problem is that I cannot find any solution which allows me to disable these two classes, java.lang.Runtime and java.lang.ProcessBuilder, for createObject().
As a note: I have tried the file restrictions in the sandbox and OS permissions as well, but unfortunately they seem to apply to file I/O operations only, and I cannot mess with the security policies of the system libraries as they might be used internally by ColdFusion.
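For reference, the kind of call I am trying to block looks like this in CFML (the variable name is just for illustration):
<!--- illustrative: instantiating java.lang.Runtime from CFML via createObject() --->
<cfset runtimeObj = createObject("java", "java.lang.Runtime").getRuntime()>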
Following the useful suggestions from #Leigh and #Miguel-F, I tried my hand at implementing a Security Manager and policy. Here's the outcome:
1. Specifying an additional policy file at runtime instead of making changes to the default java.policy file. To enable this, we add the following parameters to the JVM arguments using the CFAdmin interface, or alternatively append them to the jvm.args line in the jvm.config file:
-Djava.security.manager -Djava.security.policy="c:/policies/myRuntime.policy"
There is a nice GUI utility, policytool.exe, inside jre\bin\ which allows you to manage policy entries easily and efficiently.
2. We have enforced the Security Manager and provided our custom security policy file, which contains:
grant codeBase "file:///D:/proj/secTestProj/main/-" {
    permission java.io.FilePermission
        "<<ALL FILES>>", "read, write, delete";
};
Here we are granting FilePermission on all files with the read, write, and delete actions, excluding execute from the list, as we do not want any type of file to be executed via the Java runtime.
Note: The codeBase can be set to an empty string if we want the policy to be applied to all applications irrespective of the source.
I really wished for a deny rule in the policy file to make things easier, similar to the grant rule we're using, but unfortunately there isn't one. If you need to put in place a set of complex security policies, you can use the Prograde library, which implements a policy file with deny rules (Stack Overflow ref.).
You could surely replace <<ALL FILES>> with individual files and set permissions accordingly, or, for better control, use a combination of <<ALL FILES>> and individual file permissions.
References: Default Policy Implementation and Policy File Syntax, Permissions in JDK and Controlling Applications
This approach solves our core issue: denying execution of files via the Java runtime by specifying which permissions are allowed on a file. In the other approach, we can implement the Security Manager directly in our application and define the policy file from there, instead of defining it in the JVM args.
// set the policy file as the system security policy
System.setProperty("java.security.policy", "file:/C:/java.policy");
// create a security manager
SecurityManager sm = new SecurityManager();
// alternatively, get the current security manager using System.getSecurityManager()
// set the system security manager
System.setSecurityManager(sm);
To be able to set it, we need these permissions inside our policy file:
permission java.lang.RuntimePermission "setSecurityManager";
permission java.lang.RuntimePermission "createSecurityManager";
permission java.lang.RuntimePermission "usePolicy";
Using a Security Manager object inside an application has its own advantages, as it exposes many useful methods. For instance, checkExec(String cmd) checks whether a calling thread is allowed to create a subprocess or not.
// perform the check
try {
    sm.checkExec("notepad.exe");
}
catch (SecurityException e) {
    // do something... show warning.
}

Same Logical Path for Multiple Instances of ColdFusion

I am running two instances of CF9. Both instances have a Logical Path called SharedCode defined under Mappings, each pointing to a different directory. However, when I reference the mapping from the second instance, it points to the directory mapped in the first (default) instance.
The mappings are like so:
Default instance: SharedCode --> D:\Websites\SharedCode
Second instance: SharedCode --> D:\Websites\CF2\SharedCode
My code references the mapping like so: SharedCode\cfc\foo.cfm. If I run expandPath('\SharedCode\') in the second instance, it outputs D:\Websites\SharedCode\.
After some investigation, it looks as though ColdFusion does not allow mappings with the same Logical Path in separate instances. Is this true, and is there a solution that doesn't involve making each Logical Path unique?
Sounds like the code running in your second instance isn't actually connected to the second instance. You can check by dumping the server scope in each instance and seeing whether they reference the same root directory. If they are the same, you'll need to use the Web Server Configuration tool to point each web site at the correct CF instance.
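A quick sketch of that check in CFML, run from a test page served by each site (the mapping name comes from the question; nothing else here is specific to this setup):
<!--- dump the server scope and the resolved mapping path, then compare between instances --->
<cfdump var="#server#" label="server scope">
<cfoutput>#expandPath('\SharedCode\')#</cfoutput>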