How do I update the hardware details for a system in Beaker? - beaker-testing

The hardware details shown in Beaker for my system are out of date, because the hardware has been modified. How can I update Beaker's records so that they reflect the current hardware?

The inventory details for a system are gathered automatically using the /distribution/inventory task. The easiest way to run this task is to use the bkr machine-test workflow command to generate and submit an appropriate job definition:
bkr machine-test --inventory --family=RedHatEnterpriseLinux6 \
    --arch=x86_64 --machine=host1.example.com
This command will submit a job running /distribution/inventory on host1.example.com after provisioning RHEL6.
There is also a standalone command, beaker-system-scan, available in the Beaker harness yum repositories; the /distribution/inventory task uses it under the covers. You can install it and run it by hand on the system if you want to update the system's hardware details in Beaker without reprovisioning it.
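As a minimal sketch, assuming the harness yum repository is already configured on the system and the package carries the same name as the command, a by-hand run might look like:
# Sketch: run as root on the system whose details are out of date.
yum install -y beaker-system-scan
beaker-system-scan    # gathers the hardware details and reports them to Beaker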

Related

How to apply rolling updates in VM instances instead of using Managed Instance group in GCP?

Problem: I want to apply patch updates to a VM instance which is not part of a Managed Instance Group. The patch update could be:
1. A change in the version of the current OS of a VM instance, that is, a change from Ubuntu-16-v1 to Ubuntu-16-v2.
2. An upgrade of the boot OS, that is, changing from Ubuntu-16 to Ubuntu-18.
3. Installation of a new package on the existing machine.
Exploration:
For problems 1 and 2 above, I have explored and tried the rolling update feature of Managed Instance Groups in the Google Cloud Platform, and it seems a good approach for the problem stated, but what would be the best approach, with best practices, for someone who is not using a Managed Instance Group?
For problem 3 above, I have tried the OS patch management service of GCP, but is there any other method I could use?
Create an "image" from the boot disks of your existing Compute Engine instances.
For updating with newer configurations and software, group images into an "image family", which always points to the latest image in the family.
See https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#setting_families
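As a hedged sketch of how this might look with the gcloud CLI (the disk, zone, family, and project names are placeholders, not taken from the question):
# Sketch: create an image from an instance's boot disk and add it to a family.
gcloud compute images create ubuntu-16-v2 \
    --source-disk=my-instance-boot-disk \
    --source-disk-zone=us-central1-a \
    --family=ubuntu-16-family
# New VMs can then always boot from the latest image in that family:
gcloud compute instances create patched-vm \
    --image-family=ubuntu-16-family --image-project=my-project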
For your use case, I think you should use an IaC tool like Terraform to recreate similar VMs with the same name, disk, internal address, etc., and either call the script from the repo directly on a scheduled date automatically or provide self-patch instructions.
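A minimal sketch of that scheduled invocation, assuming the Terraform configuration lives in a repo already checked out on the host that runs the schedule (paths are placeholders):
# Sketch: re-apply the IaC definition on a schedule (e.g. from cron or CI).
cd /opt/infra/gcp && git pull --quiet
terraform apply -auto-approve    # recreates the VMs from the updated definition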
Here is the likely process:
1. Send an email notification to all the VM owners that auto-patching is scheduled on XYZ.
2. The email content should include the list of instances going to be patched/updated, the list of actions, and patch-team contact details.
3. The email should also include a link for skipping this auto-update and performing the steps in the "self patching instruction" documents instead.
4. The self-patching documents should have a command to call the auto-patch wrapper script, for example:
curl -u "encrypted-auth:x-oauth-basic" -k \
    -H 'Accept: application/vnd.github.VERSION.raw' \
    'https://github.com/api/v3/repos/xyz/images/contents/gcp/patch_OS_update.sh?ref=master' \
    | bash -s -- -q
The above script can also have other options, such as querying the patchset available for a particular VM or scanning the VM for pending updates.

App is getting installed for every test method in Amazon device farm.

The app is getting installed for every test method in AWS Device Farm, but the same code works fine on real devices. Are there any capabilities to be added to get rid of this issue?
There is a solution for this problem; however, you need to install the app on the device once before you start execution.
If you install the app (apk file) manually, then you don't need to add the "app" desired capability. Instead you can just add two capabilities: "appPackage" and "appActivity":
capabilities.setCapability("appPackage", "com.your.app.package.name");
capabilities.setCapability("appActivity", ".ui.ActivityName");
If you use "app" capability in desired capabilities then appium tries to install the apk on the device every time when driver is initialized. Removing this capability and adding appPackage and appActivity is the best way to avoid re-installing the app every time.
AWS Device Farm does not install the app every time; however, they do run each test individually rather than in bulk.

Exporting VM/vAPP in vCloud environment

We have a customised vCloud environment. We are trying to download the vApp image as an OVF file to migrate it to another environment. I am following this procedure:
1. Stop the VM.
2. Click the download button in the settings.
3. It asks for the download location and the type of image (OVA/OVF).
4. It initiates the download.
Now my problem lies with step 4. When I click download, it initiates the download, and I can see "enabling download" while this happens. After some unpredictable amount of time (it may be 1, 2, 3, or 4 hours) the process fails, and I have to repeat it multiple times (at least 3 to 5) before the actual download starts and the VM image is actually copied to disk.
I am not able to predict the actual time of the VM download, or why the process fails many times before it starts the actual export.
Can someone answer the questions below?
1. Does vCloud enable the download functionality before it allows us to download the VM? If it does, how much time does it take for this functionality to be enabled?
2. Can we enable this functionality beforehand, so that vCloud starts the VM download instantly once I shut down the machine and begin the VM export?
3. Do you think using a CLI tool like ovftool will make the process faster and prevent it from failing, so that I know the actual VM download time and we can prepare a plan for the migration?
From my limited understanding of working with the API and SDK, I do not think question 1 is possible... and if it is, it's not straightforward, at least to me.
As for question 3: yes, even if you are not using the CLI for scripting and automation purposes, it would definitely help.
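As a hedged sketch of such an export (the host, org, and vApp names are placeholders; ovftool prompts for credentials and streams the image straight to disk):
# Sketch: export a vApp from vCloud Director to OVF with ovftool.
ovftool "vcloud://user@vcloud.example.com:443?org=MyOrg&vapp=MyVapp" \
    /exports/MyVapp.ovf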

How to replicate code changes across multiple AWS instances?

We have a load-balanced setup in AWS with two instances. We do pretty frequent code updates, utilizing SVN. I need to know how easy it is to push code changes across all the instances in our cluster. Can we simply take 'snapshots' and create new volumes each time for the instances? ...or?
I would not do updates via EBS snapshots. Think of EBS volumes as a hard disk: you would not change your hard disk just because you have an update for your software.
As you have your code in a version control system, code updates should be quite simple: log in to your (multiple) servers and do a git pull or svn update. This fetches the latest code files onto each server. Depending on the type of application, you may have to do some other tasks afterwards, such as running build scripts or emptying caches.
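A minimal sketch of that by-hand approach, assuming SSH access and using placeholder hostnames and paths:
# Sketch: update each server in turn; hostnames and paths are placeholders.
for host in web1.example.com web2.example.com; do
    ssh "$host" 'cd /var/www/app && svn update'
done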
The problem is that this kind of setup does not scale well: if you have n servers, you will have to log in and run this command n times. Therefore it makes sense to look into remote management tools that let you do it in one step. With a lot of these tools you also get a complete configuration management stack: you define a set of recipes or tasks (like installed packages, configuration files, fetching the latest code, necessary build steps) for each of your servers, and when you boot up a new server it fetches the latest version of its configuration and installs itself.
Popular configuration management tools include Puppet and Salt. Both tools include remote execution, which should make publishing your code base easier: you would only have to fire one command on your master server, and it automatically executes the task on all its minions / slave servers.
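With Salt, for example, that one command might look like the following sketch (the target pattern and path are placeholders):
# Sketch: run the update on every matching minion from the Salt master.
salt 'web*' cmd.run 'cd /var/www/app && svn update'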

vmware - revert to snapshot from within the GUEST?

I have virtual machines running on VMware ESXi and VMware Workstation.
I need to execute "revert to snapshot" from inside the guest.
I have done so much searching, but all solutions proposed so far suggest doing it from "outside": either some external machine or the host itself.
Other workarounds suggest enabling automatic reverting to snapshot on the power-off event.
Please do not suggest anything in that direction. I really need to execute it from within the guest, for example:
as a scheduled task
as a batch script (at the end of completing some other tasks)
Edit:
This is the reason why I think there must be some way to achieve this: inside the guest, "VMware Tools" is running as a system service, so I would expect this component to also expose functionality to trigger the host / hypervisor to revert the current VM to a snapshot.
If this is not possible currently, it should be implemented as a new feature :)
In case it's currently not possible to execute it "from inside": that would also be an "answer"...
I've actually done this pretty recently, try this:
Install VMware vSphere PowerCLI 5.1 (it's a command-line scripting interface for ESX).
Write a script (perhaps in Notepad) that contains the following code:
# Connect to vCenter, then revert the VM to the named snapshot.
Connect-VIServer <vCenter Server IP>
Set-VM <VM name> -Snapshot <Snapshot name> -Confirm:$false
This will connect to your vCenter server and revert your VM to the specified snapshot.
Save the script as revert_snapshot.ps1 (PowerShell file extension)
Using the Windows Task Scheduler, create a new task. The General and Triggers tabs are self-explanatory, but the Actions tab is where you'll configure the scheduled task to launch your PowerShell script.
For 'Action' select 'Start a Program'. Under 'Program/script', enter the following:
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
For the 'Add arguments' field, you'll specify the path of your PowerShell script:
-psc "C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\vim.psc1" "<path to your script>"
Note: vim.psc1 is not available in the latest version of PowerCLI.
Save your task and run it manually as a test. Be patient, as sometimes the cmdlet for logging into vCenter (Connect-VIServer) can take a few seconds to connect.
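For a quick manual check outside the Task Scheduler, the same invocation can be run from a command prompt; the script path below is a placeholder for wherever you saved revert_snapshot.ps1:
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -psc "C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\vim.psc1" "C:\scripts\revert_snapshot.ps1"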