I am trying to create a custom CentOS image for use with OpenStack Ironic. I am following the guide here: https://docs.openstack.org/image-guide/centos-image.html. I created the image and deployed it on my bare-metal server. On the Ironic side it appears to have successfully written ('dd') the image to the server. But when the server boots, it can't find any of the partitions, failing with the error /dev/disk/by-uuid/XXXX does not exist. I am able to boot it into rescue mode, but I am unsure how to debug it from there.
I also used the same procedure to deploy a custom Ubuntu image, and that works perfectly fine. Does anyone have any suggestions to solve this?
Okay, after much tinkering, I have found the problem. This is more of a CentOS 7 problem than an OpenStack problem.
I found the kickstart script that generates the CentOS cloud build (https://github.com/CentOS/sig-cloud-instance-build/blob/master/cloudimg/CentOS-7-x86_64-GenericCloud-201606-r1.ks). It turns out that it includes the dracut-config-generic package, which my custom CentOS image did not. After some searching, I found the dracut man page (https://www.systutorials.com/docs/linux/man/8-dracut/), which states:
On RHEL-7 the hostonly mode is the default mode. Generic "non-hostonly" images are created, if the dracut-config-generic rpm is installed. The rescue kernel entry in the bootloader menu is also a generic image.
Without dracut-config-generic, the initramfs is built in hostonly mode, so the image can only boot in the virtualized environment it was built in. After adding this package, I can deploy the image through OpenStack Ironic successfully.
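For anyone building their own image, the fix is small. A minimal sketch, assuming you run it inside the image (or its chroot) while building; the package name and dracut options are the standard CentOS 7 ones, but verify against your build:

# install the package that switches dracut to generic (non-hostonly) mode
yum install -y dracut-config-generic
# rebuild the initramfs for every installed kernel so it carries drivers
# for hardware beyond the build VM's
dracut --force --regenerate-all

If you build from a kickstart like the one linked above, adding dracut-config-generic to the %packages section achieves the same thing.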
Hope this helps anyone else attempting this.
I’m looking to learn about Cloud Foundry and I’m trying to get a development instance of it set up on my local Windows 10 PC. But I’m not having any luck.
I’m finding a lot of information about PCF Dev, which was deprecated a while ago. I also looked at its replacement, CF Dev (https://github.com/cloudfoundry-attic/cfdev). Its Git page mentions that the repository is no longer receiving updates. I still went ahead and tried installing it using the instructions in the README:
cf install-plugin -r CF-Community cfdev
But the link it uses to download the plugin is broken:
Starting download of plugin binary from repository CF-Community...
Get "https://d3p1cc0zb2wjno.cloudfront.net/cfdev/cfdev-v0.0.18-rc.36-windows.exe": dial tcp: lookup d3p1cc0zb2wjno.cloudfront.net: no such host
Can anyone recommend a way to get a development instance of Cloud Foundry set up on my local machine so I can play around with it?
Thanks
Yes, steer clear of pcf-dev and cf-dev. They may still work, but they are definitely not getting updates and will be way out of date by now.
My understanding, although I haven't tried this process in a while, is that the way to run Cloud Foundry locally is with VirtualBox, using bosh-deployment and cf-deployment.
For instructions on installing BOSH into VirtualBox using bosh-deployment, see its Install section.
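Roughly, the bosh-deployment route looks like this. This is a sketch based on the bosh-lite-on-VirtualBox instructions as I remember them; the ops files, IPs, and network name are the values from that guide and may have changed, so follow the current docs:

git clone https://github.com/cloudfoundry/bosh-deployment.git
bosh create-env bosh-deployment/bosh.yml \
  --state ./state.json \
  -o bosh-deployment/virtualbox/cpi.yml \
  -o bosh-deployment/virtualbox/outbound-network.yml \
  -o bosh-deployment/bosh-lite.yml \
  -o bosh-deployment/bosh-lite-runc.yml \
  -o bosh-deployment/jumpbox-user.yml \
  --vars-store ./creds.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork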
With BOSH installed, follow the cf-deployment deployment guide to get CF installed. You can skip to step 4, since you're installing into VirtualBox. Be sure to read the entire document before you begin; however, pay particular attention to the section with specific instructions for running locally.
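Once the director is up, the cf-deployment step is along these lines. Again a sketch, not verified recently; the stemcell, ops file, and system domain are the guide's placeholders:

bosh alias-env vbox -e 192.168.50.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
bosh -e vbox upload-stemcell <warden stemcell from bosh.io>
bosh -e vbox -d cf deploy cf-deployment/cf-deployment.yml \
  -o cf-deployment/operations/bosh-lite.yml \
  --vars-store deployment-vars.yml \
  -v system_domain=bosh-lite.com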
I am trying to establish an LDAP connection (<cfldap>) from within a Docker image of ColdFusion 2021. It would be hard to post any relevant code here simply because it would expose our AD tree; however, the same code works just fine from an installed copy of CF2021 on a Linux server.
The reason for using a Docker image (vs. an install) in this instance is an attempt to set up a local development environment on macOS. So far, everything seems to be working great, with the exception of LDAP calls.
Note: I have successfully run an ldapsearch call from a bash shell within the container.
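For context, the check was along these lines; the host, bind DN, and base DN below are placeholders rather than our real values:

# generic form of the ldapsearch that succeeded from a shell inside the container
ldapsearch -x -H ldap://ldap.example.com \
  -D "cn=binduser,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(sAMAccountName=someuser)"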
The error I'm getting:
An error has occurred while trying to execute query :Could not resolve
a valid ldap host
The Docker image repo I pulled from:
https://hub.docker.com/r/adobecoldfusion/coldfusion
Update: I've just noticed CF version differences between the server that isn't having the problem and my local environment:
Linux Version: 2021,0,01,325996 (installed a few weeks ago non-Docker)
Local MACOS: 2021,0,02,328618 (Docker)
Update 2: We've installed a fresh ColdFusion 2021 Docker image on a Linux box directly connected to our network, and we are still seeing this issue. This narrows the issue down to Adobe ColdFusion 2021's interaction with Docker and its ability to do <CFLDAP>.
Update 3: 10-13-2021 - It would appear the CF team is aware of this, has confirmed the bug, and is looking into it.
Update 4: 11-12-2021 - This bug is related to the version of Java running within the Docker image. "Adobe CF support suggested updating to JAVA SE 11.0.13 (LTS) inside the docker container", which worked when tested. Expect Adobe to fix this in future Docker CF2021 releases.
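For anyone wanting to confirm which JVM their container is running before fixed images ship, something like this works (the container name is a placeholder, and I'm assuming java is on the PATH in the Adobe image):

# print the JVM version bundled with the ColdFusion 2021 container
docker exec -it coldfusion java -version

The swap itself amounts to making a Java SE 11.0.13 JRE visible to the container (e.g. via a volume), pointing java.home in ColdFusion's jvm.config at it, and restarting the container; the exact jvm.config location in the Docker image may differ from a native install.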
I am stuck on a technical issue in a project, and I think the forum could help me out.
I have an EC2 instance (type p2.xlarge) running on AWS. I cloned a repository onto this instance, which requires PyTorch and CUDA dependencies (this part has been taken care of).
Now, the issue is that I want to work on and run this codebase (which is on the AWS instance) from my local PyCharm IDE. In short, I don't have the resources on my laptop to run the repository, so I have to run it on an AWS instance, but for debugging purposes a local IDE would be a great option.
Is it possible to do that? In other words, I can SSH into the AWS instance and run the code, but everything is then done through the command line. I would like to connect through PyCharm, see the code from the AWS instance on my local machine, and change, debug, or run it as if it were local, while it actually executes on the instance.
Please suggest a solution to it.
Thanks in advance.
EDIT-1:
After following @Cromulent's suggestion, I have arrived here:
Setting up the remote:
Upload happening between the local and remote repos.
I still don't understand the requirement of syncing the local and remote folders when I only want to open the remote folder in my PyCharm IDE and work on it.
I think after this setup I have to change the code in the local copy, and PyCharm will sync it to the remote copy. But how will I run the remote code (using the resources, i.e. the GPUs, of the remote instance, not my local machine) from PyCharm in this scenario? I am just syncing it; to run it I would still have to SSH in through the command line and run the script, which does not serve the purpose.
EDIT-2:
After @Cromulent's suggestions:
Actually, it did work, but I am still not able to run the remote code from my local PyCharm.
I am getting the below error when running any remote script. If I run the same script over SSH in the terminal, it runs normally. I tried to fix the problem using this post on Stack Overflow, but that didn't work either.
ssh://ubuntu#ec2-52-41-247-169.us-west-2.compute.amazonaws.com:22/home/ubuntu/anaconda3/bin/python -u <08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py
/home/ubuntu/anaconda3/bin/python: can't open file '<08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py': [Errno 2] No such file or directory
Below is a screenshot of the above problem:
PyCharm Professional supports remote Python interpreters (either the globally installed Python interpreter or a virtualenv). It works by creating an SSH connection to the server and then running the code on the remote host. The results are then displayed locally in PyCharm Professional. You can do remote debugging as well.
You MUST be using the Professional edition of PyCharm, though. The free Community edition does not support this feature.
You can find the documentation here:
https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html
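Under the hood, running a script through a remote interpreter is roughly equivalent to this one-liner (the key path is an assumption; the host, interpreter, and script path are taken from your error output):

ssh -i ~/.ssh/my-key.pem ubuntu@ec2-52-41-247-169.us-west-2.compute.amazonaws.com \
    /home/ubuntu/anaconda3/bin/python /home/ubuntu/InsightProject/scripts/download_flownet2.py

The stray <08ad9807-...> prefix in your error suggests the deployment path mapping isn't set up, rather than a problem with the interpreter itself.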
One more solution is to deploy a Jupyter Notebook on your remote server. Then you will be able to use it from PyCharm Professional Edition.
Don't forget to add rules for the Jupyter port (e.g. allow 8888) in your AWS console (security group) and on your instance.
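If you prefer the CLI over the console for the security group rule, it's something like this; the group ID and CIDR are placeholders:

# allow inbound TCP 8888 from your own IP only
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8888 \
    --cidr 203.0.113.4/32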
To configure a remote interpreter for your notebook, do this (source):
Open the Jupyter Notebook page of the Settings/Preferences dialog.
On this page, select or clear the Markdown cells rendering enabled option, and specify the username and password. Note that for single-user notebooks these fields are optional - leave them blank.
Fill in the username (for JupyterHub) and password.
Click the link Configure remote interpreter. You'll find yourself at the Project Interpreter page.
Configure the remote interpreter, as described in the section Configuring Python Interpreter.
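A common way to wire this up, assuming Jupyter is installed in the instance's Anaconda environment (the key path and host are placeholders):

# on the EC2 instance: start Jupyter without opening a browser
jupyter notebook --no-browser --port=8888

# on your local machine: forward the port over SSH, then point PyCharm's
# Jupyter server URL (or a browser) at http://localhost:8888
ssh -i ~/.ssh/my-key.pem -N -L 8888:localhost:8888 ubuntu@<ec2-public-dns>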
You will want to configure a remote interpreter.
I tried the above approach, but it didn't work for me. I have edited my post so that I can get additional input from the community, but I didn't get any after the first answer was posted.
My friend figured out an alternative way to fix the issue. He uses NoMachine on the local machine to open a connection to the remote desktop. Then you can install PyCharm directly on the remote machine and work there. I hope this helps others.
The solution is in his blog post. (Thanks to Shaobo Guan)
Another solution would be to use VNC instead of NoMachine
I recently initialized a GPU instance on Google Cloud, installed Anaconda and all required dependencies, and then stopped the instance. Now when I start the instance, Anaconda is no longer installed. I find this very strange. Please let me know if you know any details about this. I also looked through Google's documentation and didn't find any mention that it should behave like this.
https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance
No, this should not happen if the programs were installed properly on the persistent/boot disk file system.
If the programs were installed on tmpfs or another memory-backed file system, then after the instance is rebooted the memory contents would be lost, and with them the data and any links to it.
However, this is not normally the case, as VM instance packages are installed on the persistent disk.
I guess your installation failed for some reason. Check if the packages are still installed. If you are using a Red Hat Linux variant, you can use 'yum list installed' to see all installed packages, or 'yum list installed | grep -i <package-to-search-for>' to filter for a particular package.
If the package shows up, then the issue could be related to a misconfiguration or other problem somewhere. Use dmesg and/or cat /var/log/messages to view the logs and try to find any problems there which may be related to Anaconda or GPU software.
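A few quick checks, assuming Anaconda went to its default ~/anaconda3 prefix (adjust if you installed elsewhere):

# does the install directory still exist?
ls -ld ~/anaconda3
# which disk/file system does it live on?
df -h ~/anaconda3
# confirm it is not sitting on a memory-backed mount
mount | grep tmpfs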
I just encountered the same problem. I know this question is dated, but this might help a complete beginner like myself. In my case, I needed to SSH into the instance instead of just being in the project-level virtual environment.
gcloud beta compute ssh --zone "europe-west2-c" "myinstancename" --project "fired-brimstone-234534"
I'm working on a project that I haven't touched in about 4 months. Previously, everything on deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (I don't need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but that has the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on versions 1.5 and earlier. Make sure that your Docker client version is above 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
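To confirm, check the client version on the Elastic Beanstalk instance itself; this assumes you have the EB CLI configured for SSH access to the environment:

# SSH into the instance, then print the Docker client version
eb ssh
docker --version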