VM and host on same subnet: VirtualBox automatic; KVM how?

Using VirtualBox on a host at 192.168.1.1, I get VMs like 192.168.1.2 ... automatically.
KVM/virt-manager puts VMs on the 192.168.122.0 subnet by default, with routing between the 122.0 and 1.0 subnets.
AFAICT, my KVM guests can have addresses on the 192.168.1.0 subnet if I configure a bridge properly.
I've yet to find a resource which says 'the minimal configuration to provide a KVM instance on the same subnet as the host is:'
They may be saying that, but with terminology I'm not familiar with.
Any suggestions?
Thanks,
Kent

The minimum configuration is a working bridge, as described here:
https://wiki.ubuntu.com/KvmWithBridge
After making these changes you should have a br0 interface with an IP address, and eth0 (or whatever your setup is) should not have an IP address. Then when creating a new VM use a shared device for the networking and specify br0.
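For reference, a sketch along the lines of that wiki page, assuming the classic ifupdown stack with the bridge-utils package installed; the interface names here are examples, adjust to your hardware:

    # /etc/network/interfaces: eth0 loses its address, br0 takes over
    auto eth0
    iface eth0 inet manual

    auto br0
    iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

With that in place, a VM attached to br0 gets its lease from the same DHCP server as the host, i.e. an address on the 192.168.1.0 subnet.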

Either this changed in a newer version, or I overlooked it before:
the 'macvtap' options provide what I am looking for.
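For anyone landing here later: in libvirt domain XML the equivalent of that virt-manager option is a 'direct' interface, roughly like this (eth0 stands for whichever host NIC you attach to):

    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
    </interface>

One caveat worth knowing: with macvtap in bridge mode the guest can reach the rest of the 192.168.1.0 subnet, but typically cannot talk to the host itself.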
Thanks,
Kent

Related

How to deal with AWS routing if we need to use a loopback interface in an EC2 instance

I am relatively new to AWS. I am trying something basic like this:
One Ubuntu instance is connected to a vSRX instance. Say the Ubuntu instance's eth1 IP is 20.0.0.100 and the vSRX's corresponding interface IP is 20.0.0.101. Now I want to configure a loopback interface (a virtual interface) inside the vSRX and assign it the IP 99.99.99.99. Obviously AWS knows nothing about this 99.x network. My question is: how can I build that knowledge into AWS routing and make sure that traffic to 99.99.99.99 goes via 20.0.0.101 as the next hop? Is this possible?
Thanks in advance.
I was thinking of creating a subnet in my VPC first with the 99.x network, but I do not want to unnecessarily burn a large number of IPs, and I believe /32 is not an acceptable CIDR in AWS, so my journey stopped there. I am now thinking of trying a CIDR of 99.99.99.96/29, but after that should I add it as a local route? How could I specify that traffic for the 99.x range should go via a specific IP?
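One direction that might work, sketched with placeholder IDs: VPC route tables accept more-specific routes (including /32) that point at an instance or network interface, and the appliance then needs its source/destination check disabled so it may forward traffic not addressed to itself:

    # route the loopback address via the vSRX's 20.0.0.101 interface
    aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 99.99.99.99/32 \
        --network-interface-id eni-0123456789abcdef0
    # let the vSRX handle traffic not addressed to itself
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check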

EC2 Classic Link - Determine Classic IP Range

So, I am working to migrate from EC2 Classic to VPC (yeah, I know, long time in coming and this was an inherited platform).
I have created a VPC and when I go to turn on Classic Link, I get the following error:
The CIDR range of vpc-[id_here] overlaps with the Classic IP space
I looked and was not able to find a way to determine which IP range(s) Classic uses. Is there a way to find out so I can make my VPCs not stomp all over it?
Thanks!
10.0.0.0/8 as documented here.
As in the comment above:
"VPCs that are in the 10.0.0.0/16 and 10.1.0.0/16 IP address ranges can be enabled for ClassicLink

Google cloud virtual instance cannot ping my Mac (checked firewalls)

I have a virtual machine instance running on Google Cloud Compute Engine — a preemptible free-tier CPU running Ubuntu 17.04. The end goal is to connect it to a MongoDB running on my local machine, a 2015 Macbook Pro (OS 10.12.6). But first, I've been trying to ensure the VM can reach my Mac via ping.
Running ping <VM's external IP> from my Mac works.
pinging my Mac from another Mac on the same wifi network works.
Running ping <Mac's IP> from the VM via the browser terminal does not work.
I've disabled my Mac's firewall. I've also configured my VM's firewall rules to allow all inbound and outbound traffic, to no avail:
[screenshots: ingress firewall rules, egress firewall rules]
How might I get this instance to ping my Mac successfully?
Does your Mac's IP address begin with 10., 192.168., or something between 172.16. and 172.31.? These are private addresses, only reachable within your Mac's local network, which is (part of) why the GCE VM cannot reach your Mac.
This is part of a very common configuration. An ISP only allocates one (or a small number) of IP addresses to your home or business. A router on the network performs NAT to share that IP address between computers on the local network, which instead use private IP addresses for themselves. As the router doesn't know what to do with the inbound MongoDB traffic, it blocks it.
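A quick way to check this on the Mac itself (en0 is the usual wifi interface; yours may differ):

    # the address your Mac actually holds; a 10./172.16-31./192.168. value means NAT
    ipconfig getifaddr en0
    # the public address the internet sees (third-party service, for comparison)
    curl -s https://ifconfig.me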
There are two common ways around this, usually found in your router settings:
"Port forwarding", where you tell the router to forward all traffic on, say, port 1234 to your Mac. This can get MongoDB working, but not ping.
A "DMZ", available if you have whole extra IP addresses, where your router directly forwards an entire extra IP to your Mac. If you have only one IP address this is not an option, as that IP needs to be shared with the other devices on the wifi.
You likely also have a firewall on the router. If you use a DMZ or port forwarding, you must make sure that firewall allows the traffic through too.
That said, I'm not sure this is a sensible thing to do. Opening up your local network to the internet can create major security issues, plus it is likely less reliable and more expensive (the free tier only provides 1 GB of egress per month, and your DB traffic could exceed this).
Actually running MongoDB on instances within GCE is almost certainly a better option in every regard for you.

Upload local Vagrant package.box to AWS

So, I've been working locally in a Vagrant Ubuntu box for the past month: I've spent a lot of time customizing it and installing exactly the software I want on it. I started all of this through the normal Vagrant tutorial (i.e., nothing special). I packaged my local Vagrant box into a package.box file. Now, I want to move my development environment (i.e. the package.box file) to an Amazon EC2 instance on AWS. I know I'm not supposed to ask for software recommendations, but my question is basically: is this possible to do and, if it is, could you point me to some examples of people doing it? I've read that Packer might be an option, but it looks to me (a very inexperienced perspective) like maybe I should have started with that instead of trying to use it now. Any help would be appreciated - I don't want to spend a couple of weeks setting up a new environment when I have one set up locally.
Edit:
Progress! I followed @error2007s's link and followed the tutorial. I'm at the point where I've uploaded the VMDK image to S3 and provisioned an instance from it (all done automatically with the ec2-import-instance command on the CLI). However, I don't see a public IP to access the new instance after I start it up.
I think this is related to cloud-init somehow, but I'm not sure what that really is. I tried it with both the /etc/cloud/cloud.cfg file that came with the box as well as the one listed here, and neither of the two boxes I uploaded gave me a public IP to access.
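(For context, that command's general shape per the AWS EC2 CLI docs; every value below is a placeholder, not my exact setup:)

    ec2-import-instance package-disk1.vmdk -f VMDK -a x86_64 -t m3.medium \
        -b my-import-bucket -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY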
Edit 2:
Here are some things I see in the Console (They all seem right to me, but a more experienced eye might see something wrong):
Subnet info:
    Auto-assign Public IP: yes
    Network ACL:
VPC info:
    DNS resolution: yes
    DNS hostnames: yes
    ClassicLink DNS Support: no
    VPC CIDR: 172.31.0.0/16
DHCP Option Set:
    Options: domain-name = ec2.internal, domain-name-servers = AmazonProvidedDNS
From my perspective, those all look right, or am I missing something?
I assigned an Elastic IP per these instructions, but when I run ssh ec2-user@<elastic-ip>, it says ssh: connect to host <elastic-ip> port 22: Connection refused. The security group assigned to the instance is set to allow all protocols on all ports. Also, this is the first time I've encountered an Elastic IP and I'm unsure exactly what it does.
Amazon enables you to transfer your VM to AWS as an EC2 instance. Check this tutorial; it is simpler:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
You want to use the Vagrant AWS provider found here:
https://github.com/mitchellh/vagrant-aws
This is a Vagrant 1.2+ plugin that adds an AWS provider to Vagrant,
allowing Vagrant to control and provision machines in EC2 and VPC.
This will allow you to provision your AWS instances using Vagrant, allowing you to migrate the same local development environment to an AWS EC2 instance.
There is a good tutorial here:
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
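In practice the moving parts are small; from the plugin's README (your Vagrantfile still needs an aws provider block with your AMI, keypair, and credentials):

    # install the provider plugin
    vagrant plugin install vagrant-aws
    # add the placeholder "dummy" box the plugin uses
    vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
    # boot on EC2 instead of locally
    vagrant up --provider=aws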
Hi, I have found these articles but have not yet tested them myself. I'm still in the middle of organizing my personal notes and identifying my technology stack. I intend to have a Homestead Vagrant box replicated as an EC2 instance, so I won't have to configure the instance(s) manually.
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
https://www.tothenew.com/blog/using-vagrant-to-deploy-aws-ec2-instances/
https://foxutech.com/how-to-deploy-on-amazon-ec2-with-vagrant/
https://blog.scottlowe.org/2016/09/15/using-vagrant-with-aws/
https://devops.com/devops-primer-using-vagrant-with-aws/
I find their approaches similar. The only thing I am worried about is the "vagrant box add" part.
I asked myself: what if I had to do this setup again for familiarization purposes? What would happen, since I already added a vagrant box (the dummy one, as instructed in the tutorials) previously? See the check below.
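For what it's worth, that part is easy to check and redo:

    # see which boxes are already installed
    vagrant box list
    # re-adding a box with the same name fails unless you pass --force to overwrite it
    vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box --force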

Multiple Public IP Addresses on AWS EC2 (VPC)

Thanks to everyone in advance -
I have an ec2 instance with the following network config:
eth0 - internal-ipaddressA
eth1 - internal-ipaddressB
public-elastic-ipaddressA associated with internal-ipaddressA
public-elastic-ipaddressB associated with internal-ipaddressB
I configured sshd to listen on both these addresses explicitly:
internal-ipaddressA
internal-ipaddressB
I can ssh to public-elastic-ipaddressA and then ssh from there to internal-ipaddressA AND internal-ipaddressB, just to confirm sshd is working correctly on both addresses.
All that said, I am unable to ssh to public-elastic-ipaddressB if it is associated with any network interface other than the primary one, which was created by default when the instance was launched.
Am I missing some sort of special routing or ACL/security configurations here?
Thanks!
Sam
The sshd process is probably bound to the first address.
You should look at /etc/ssh/sshd_config. The ListenAddress property contains the address it listens on (man page).
The address is probably first set by cloud-init.
It's a routing problem. You need to put each network interface of the instance in a different subnet of the VPC, or the packets won't be routed back from the instance to the destination.
Another solution is to assign two internal IPs to the same network interface and then configure them in the OS as eth0 and eth0:1, but this won't achieve your objective.
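A third option, not mentioned above, is to keep both interfaces and add source-based policy routing on the instance, so replies from internal-ipaddressB leave via eth1. A sketch, with a hypothetical gateway of 10.0.0.1 and table name eth1rt:

    # create a second routing table for eth1
    echo "2 eth1rt" | sudo tee -a /etc/iproute2/rt_tables
    # default route for that table via the subnet's router
    sudo ip route add default via 10.0.0.1 dev eth1 table eth1rt
    # traffic sourced from the second address consults the new table
    sudo ip rule add from <internal-ipaddressB> lookup eth1rt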