Specify static IP on salt-cloud profile deploy - VMware

I've set up a new Salt master and am trying to automate the deployment of new VMs with static IPs (no DHCP available) from a template.
I can deploy VMs OK using my template via a cloud profile with a default IP defined there, but I can't find a way to override that IP address dynamically on deployment. I was hoping to pass the hostname/IP into the CLI call or via the salt-api so I can initiate the deployment from another application.
I've tried to pass the IP into a state as dynamic pillar data; this configures the VM hostname OK, but I couldn't see how to pass the IP into the profile, as the profile conf doesn't accept pillar variables.
salt-call state.apply vm-new pillar='{"hostname": "salt-test", "ip": "172.0.0.11"}'
vm-new.sls
{{ pillar['hostname'] }}:
  cloud.profile:
    - name: {{ pillar['hostname'] }}
    - profile: centos7
cloud.profiles.d/centos7.conf
...
  devices:
    network:
      Network adapter 1:
        name: 'VM Network'
        switch_type: standard
        ip: 172.0.0.90
        subnet_mask: 255.255.255.0
        gateway: [172.0.0.1]
...
I then tried using a map file, but passing pillar data doesn't seem to work there either.
# salt-cloud -m cloud.maps.d/centos7.map pillar='{"hostname": "salt-test", "ip": "172.0.0.11"}'
[ERROR ] Rendering exception occurred: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'hostname'
[ERROR ] Rendering map cloud.maps.d/centos7.map failed, render error:
Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'hostname'
No nodes defined in this map
centos7.map
centos7:
  - {{ pillar['hostname'] }}:
      devices:
        network:
          Network adapter 1:
            ip: {{ pillar['ip'] }}
I have spent a while digging around the docs and GitHub issues. A couple of people were trying to do similar things, but hardcoding the IPs in the map file solved their issue. Is it possible to do what I'm trying to do? Any advice/pointers on where to look next?

I've encountered a similar requirement where I needed to dynamically set some EC2 instance attributes (e.g. hostname). As of a few months before this writing, this use case was not possible, so I ended up building a Salt execution module (e.g. execmodule.provision_instances) that dynamically generates a map file from my predefined profiles with default values, and eventually calls salt.cloud.CloudClient.map_run with the generated map file.
It worked well by calling the exec module (e.g. salt-call execmodule.provision_instances). It would be better if we could simply pass pillars instead of specifying a map file.
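Here is a minimal sketch of what such an execution module might look like, assuming the VMware network keys from the question; the module name, function signature, default values and the /etc/salt/cloud path are illustrative assumptions, not a drop-in implementation:

# _modules/execmodule.py -- sketch only
import os
import tempfile

import yaml

import salt.cloud


def provision_instances(hostname, ip, profile='centos7',
                        subnet_mask='255.255.255.0', gateway='172.0.0.1'):
    '''
    Render a one-off cloud map that overrides the static IP for a single
    VM, then hand it to salt.cloud.CloudClient.map_run.

    CLI Example:
        salt-call execmodule.provision_instances salt-test 172.0.0.11
    '''
    map_data = {
        profile: [{
            hostname: {
                'devices': {
                    'network': {
                        'Network adapter 1': {
                            'ip': ip,
                            'subnet_mask': subnet_mask,
                            'gateway': [gateway],
                        },
                    },
                },
            },
        }],
    }

    # Write the generated map somewhere the master can read it.
    fd, map_path = tempfile.mkstemp(suffix='.map')
    try:
        with os.fdopen(fd, 'w') as fh:
            yaml.safe_dump(map_data, fh, default_flow_style=False)
        client = salt.cloud.CloudClient('/etc/salt/cloud')
        return client.map_run(map_path)
    finally:
        os.remove(map_path)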
Note: Since this thread is old, salt cloud maps may already support passing pillars to map runs, please check.

I have tested salt.modules.win_ip.set_static_ip for Windows VMs and it works. For example, you can run this command on the salt master to set the IP of all Windows machines:
salt -G 'os_family:Windows' ip.set_static_ip 'Local Area Connection' 10.1.2.3/24 gateway=10.1.2.1
You can read the official documentation for salt.modules.win_ip for details.
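If you want to trigger the same call from another application rather than the CLI (as the question mentions), a small sketch using Salt's LocalClient could look like the following; the grain target, interface name and addresses are just the example values from above, and note that older Salt releases spell the tgt_type parameter expr_form:

# Run on the salt master; sketch only.
import salt.client


def set_windows_static_ip(target, iface, cidr, gateway):
    client = salt.client.LocalClient()
    return client.cmd(
        target,
        'ip.set_static_ip',
        arg=[iface, cidr],
        kwarg={'gateway': gateway},
        tgt_type='grain',  # expr_form='grain' on older Salt versions
    )


if __name__ == '__main__':
    # Equivalent of the CLI example above:
    print(set_windows_static_ip('os_family:Windows', 'Local Area Connection',
                                '10.1.2.3/24', '10.1.2.1'))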

Related

Nextjs 404s on buildManifest across multiple EC2 instances

Context: I have a simple Next.js and KeystoneJS app. I've made duplicate deployments on 2 AWS EC2 instances. Each instance also has an Nginx reverse proxy routing port 80 to 3000 (my app's port). The 2 instances are also behind an application load balancer.
Problem: When routing to my default url, my application attempts to fetch the buildManifest for the nextjs application. This, however, 404s most of the time.
My Guess: Because the requests are coming in so close together, my load balancer is routing the second request for the buildManifest to the other instance. Since I did a separate yarn build on that instance, the build ids are different, and therefore it is not fetching the correct build. This request 404s and my site is broken.
My Question: Is there a way to ensure that all requests made from instance A get routed to instance A? Or is there a better way to do my builds on each instance such that their ids are the same? Is this a use case for Docker?
I have had a similar problem with our load balancer, and specifying a custom build ID seems to have fixed it. There's a dedicated GitHub issue about this; here is what my next.config.js looks like now:
const execSync = require("child_process").execSync;
const lastCommitCommand = "git rev-parse HEAD";

module.exports = {
  async generateBuildId() {
    return execSync(lastCommitCommand).toString().trim();
  },
};
If you are using a custom build directory in your next.config.js file, then remove it and use the default build directory.
For example, remove a line like this from your next.config.js file:
distDir: "build"
Credits: https://github.com/serverless-nextjs/serverless-next.js/issues/467#issuecomment-726227243

How to check if gremlin is properly connected to aws neptune instance

I have launched an AWS Neptune instance and installed apache-tinkerpop-gremlin-console version 3.3.3 on a Windows 10 machine.
neptune-remote.yml looks like:
hosts: [abc-nept.XXXXXX.us-XXXX-1.neptune.amazonaws.com]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
After running gremlin.bat, the next command is:
:remote connect tinkerpop.server conf/neptune-remote.yaml
Now at this stage I am able to make queries and they are working! So the question is: how can I check whether I am actually connected to the AWS Neptune instance or not?
I assume your question is related to having multiple :remote instances configured. Obviously, if you've simply created:
:remote connect tinkerpop.server conf/neptune-remote.yaml
then the only place your data could be going to or coming from is Neptune. The Console does allow multiple :remote instances that you can switch between, so if you also had one for a local Gremlin Server then you might want to confirm which one you're sending requests to. You just do this:
gremlin> :remote
==>Remote - Gremlin Server - [localhost/127.0.0.1:8182]
You'll be able to see the "current" :remote and thus know whether it is for Neptune or your local Gremlin Server instance.

Inspec (Kitchen) Multiple Control / Target Types

It doesn't seem as if I am able to run Inspec against multiple targets using different controls. For instance I have the following:
control "aws" do
describe aws_ec2_instance(name: 'Terraform Test Instance') do
it { should exist }
end
end
And I have the following:
control 'operating_system' do
  describe command('lsb_release -a') do
    its('stdout') { should match(/Ubuntu/) }
  end
end
When I run InSpec directly I can pass -t for either ssh or aws: the aws control works when I pass the aws target (aws://us-east-1), and the operating system control passes when I pass the ssh target.
Is there a way to make BOTH of these run using Kitchen-Inspec? I found the feature request about multiple targets that the InSpec team closed as out of scope (Issue 268), but I didn't know whether Kitchen addressed this since it wraps InSpec.
No, this is not supported by Kitchen-Inspec. We only aim to support host-based tests since it's part of an integration testing cycle for that host.

How to set DNS server and network interface for boto3?

I would like to upload files to S3 using boto3.
The code will run on a server without DNS configured, and I want the upload process to be routed through a specific network interface.
Any idea if there's any way to solve these issues?
1) Add the endpoint addresses for S3 to /etc/hosts; see this list: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
2) Configure a specific route to the network interface; see this info on Super User:
https://superuser.com/questions/181882/force-an-application-to-use-a-specific-network-interface
As for setting a network interface, I did a workaround that allows setting the source IP for each connection made by boto.
Just change the AWSHTTPConnection class in botocore's awsrequest.py as follows:
a) Before __init__() of AWSHTTPConnection, add:
source_address = None
b) Inside __init__(), add:
if AWSHTTPConnection.source_address is not None:
    kwargs["source_address"] = AWSHTTPConnection.source_address
Now, from your code you should do the following before you start using boto:
from botocore.awsrequest import AWSHTTPConnection
AWSHTTPConnection.source_address = (source_ip_str, source_port)
Use source_port = 0 in order to let the OS choose a random port (you probably want this option; see the Python socket docs for more details).
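If you'd rather not edit botocore's source files, roughly the same effect can be achieved by wrapping AWSHTTPConnection.__init__ from your own code. This is only a sketch and it relies on the same assumption as the patch above, namely that the installed botocore passes extra keyword arguments through to http.client.HTTPConnection:

import functools

from botocore.awsrequest import AWSHTTPConnection


def force_source_address(source_ip, source_port=0):
    # Patch AWSHTTPConnection so every outgoing connection binds to
    # source_ip; port 0 lets the OS pick an ephemeral port.
    original_init = AWSHTTPConnection.__init__

    @functools.wraps(original_init)
    def patched_init(self, *args, **kwargs):
        kwargs.setdefault("source_address", (source_ip, source_port))
        original_init(self, *args, **kwargs)

    AWSHTTPConnection.__init__ = patched_init


# Call this once, before creating any boto3 clients, e.g.:
# force_source_address("10.0.0.5")
# s3 = boto3.client("s3")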

Enabling HA namenodes on a secure cluster in Cloudera Manager fails

I am running a CDH4.1.2 secure cluster and it works fine with the single namenode+secondarynamenode configuration, but when I try to enable High Availability (quorum based) from the Cloudera Manager interface it dies at step 10 of 16, "Starting the NameNode that will be transitioned to active mode namenode ([my namenode's hostname])".
Digging into the role log file gives the following fatal error:
Exception in namenode join
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: [my namenode's fqhn]:[my namenode's fqhn]:0
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:206)
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:158)
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:147)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:143)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
How can I resolve this?
It looks like you have two problems:
1) The NameNode's IP address is resolving to "my namenode's fqhn" instead of a regular hostname. Check your /etc/hosts file to fix this.
2) You need to configure dfs.https.port. With Cloudera Manager Free Edition you must have had to add the appropriate configs to the safety valves to enable security; as part of that, you also need to configure dfs.https.port.
Given that this code path is traversed even in the non-HA mode, I'm surprised that you were able to get your secure NameNode to start up correctly before enabling HA. In case you haven't already, I recommend that you first enable security, test that all HDFS roles start up correctly and then enable HA.