Running conda with proxy - python-2.7

I'm using Anaconda 2.7 on Windows, and my internet connection goes through a proxy.
Previously, when using plain Python 2.7 (not Anaconda), I installed packages like this:
pip install {packagename} --proxy proxy-us.bla.com:123
Is there a way to run conda with a proxy argument? I didn't see one in the conda help.
Thanks

Alternatively, from version 4.4.x you can use the commands below:
conda config --set proxy_servers.http http://id:pw@address:port
conda config --set proxy_servers.https https://id:pw@address:port

You can configure a proxy with conda by adding it to the .condarc, like
proxy_servers:
  http: http://user:pass@corp.com:8080
  https: https://user:pass@corp.com:8080
or set the HTTP_PROXY and HTTPS_PROXY environment variables. Note that in your case you need to add the scheme to the proxy url, like https://proxy-us.bla.com:123.
See http://conda.pydata.org/docs/config.html#configure-conda-for-use-behind-a-proxy-server.
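For example, in a Windows command prompt you could set these just for the current session and then run conda, reusing the proxy address from the question (the package name is only an example):
set HTTP_PROXY=http://proxy-us.bla.com:123
set HTTPS_PROXY=https://proxy-us.bla.com:123
conda install numpy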

One mistake I was making was saving the file as a.condarc or b.condarc.
Save it only as .condarc, paste the following into the file, and save it in your home directory. Make the necessary changes to the hostname, user, etc.
channels:
  - defaults
show_channel_urls: True
allow_other_channels: True
proxy_servers:
  http: http://user:pass@hostname:port
  https: http://user:pass@hostname:port
ssl_verify: False
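To confirm that conda picked the file up, a check along these lines should echo the proxy settings back (the key filter for --show works in reasonably recent conda versions):
conda config --show proxy_servers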

The approach I settled on is to set the proxy environment variables right before running conda or pip install/update commands. Simply run:
set HTTP_PROXY=http://username:password@proxy_url:port
For example, your actual command could look like:
set HTTP_PROXY=http://yourname:your_password@proxy.your_company.com:8080
If your company uses an HTTPS proxy, then also:
set HTTPS_PROXY=https://username:password@proxy_url:port
Once you exit the Anaconda Prompt this setting is gone, so your username/password won't be saved after the session.
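For instance, a single Anaconda Prompt session might look like this (the credentials, proxy address, and package names are just the placeholders and examples from above):
set HTTP_PROXY=http://yourname:your_password@proxy.your_company.com:8080
set HTTPS_PROXY=https://yourname:your_password@proxy.your_company.com:8080
conda update conda
pip install requests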
I didn't choose the other methods mentioned in the Anaconda documentation or other sources, because they all require hardcoding the username/password into one of:
Windows environment variables (this also requires restarting the Anaconda Prompt the first time)
conda .condarc or .netrc configuration files (these also won't work for pip)
a batch/script file loaded when starting the Anaconda Prompt (this might also require configuring the path)
All of these are less secure and need to be updated whenever the password changes; and if you forget where to update them, more troubleshooting will come your way.

I was able to get it working without putting in the username and password:
conda config --set proxy_servers.https https://address:port

You can configure a proxy with conda by adding it to the .condarc, like
proxy_servers:
  http: http://user:pass@corp.com:8080
  https: https://user:pass@corp.com:8080
Then, in the Anaconda PowerShell Prompt ((base) PS C:\Users\user>), run:
conda update -n root conda

On Mac, what worked for me was going to Keychain Access and updating the password for the entry for the company's internal repo.


Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which updates the cfusion/lib/ XML files (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script that reads the WDDX XML file and replaces the attribute values. I'm having trouble finding information about how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, 9 to 11, and 11 to 2016. Environments had to be mixed as it took time to verify the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts copied updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
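A deploy step for this approach can be a plain copy of the environment-specific files into the ColdFusion lib directory; the repository layout and install path below are only assumptions:
# copy the checked-in settings for this environment into place
ENV=dev   # or staging, production, ...
cp config/coldfusion/$ENV/neo-*.xml /opt/coldfusion/cfusion/lib/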
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command-line utility. It isn't only capable of setting Administrator configuration: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of deployment script
Install CFConfig as part of deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this json file in source control for each type/env of box.
Use CFConfig to import the config.json as part of deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe@11.0.19
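For the export step on the existing box (step 3 above), a command along these lines should work; the path and engine version mirror the placeholders used in the import call:
# Export config settings from the reference server
box cfconfig export to=/<path-to-config>/config.json from=/opt/ColdFusion/cfusion/ fromFormat=adobe@11.0.19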

CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/free/win-64/repodata.json.bz2>

I'm setting up a Django virtual environment for the first time. I've installed the Anaconda distribution of Python on my D drive, so initially I set up the paths for Python and conda (Scripts) manually in the advanced system settings. But now, when I create the environment using the command
conda create --name mydjang0 django
the command prompt shows an error like this:
C:\Users\AABHA GAUTAM> conda create --name mydjang0 django
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/pro/win-64/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/pro/win-64/repodata.json.bz2 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))'))
If you already have a .condarc file in your home folder, update it by running this command:
conda config --set ssl_verify false
If you do not have a .condarc file, create one and then run the above command. I have added both commands below.
conda config --add channels conda-forge
conda config --set ssl_verify false
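You can confirm the setting took effect with, for example:
conda config --show ssl_verify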
Instead of using the command prompt, use the Anaconda Prompt as administrator, and run the same command there. I also faced the same issue with the command prompt.
Just delete the .condarc file from C:\Users\<username>.

RuntimeError: This command is using a remote connection in offline mode [CondaError]

When I created an environment in conda, I got this error right after answering the Proceed prompt:
[root@MyServer]# conda create -n py26 python=2.6 anaconda --offline
Fetching package metadata .........
Solving package specifications: ..............
......................
....
...
Proceed ([y]/n)? y
CondaError: RuntimeError(u'EnforceUnusedAdapter called with url https://repo.continuum.io/pkgs/free/linux-64/jpeg-8d-0.tar.bz2\nThis command is using a remote connection in offline mode.\n',)
CondaError: RuntimeError(u'EnforceUnusedAdapter called with url https://repo.continuum.io/pkgs/free/linux-64/jpeg-8d-0.tar.bz2\nThis command is using a remote connection in offline mode.\n',)
CondaError: RuntimeError(u'EnforceUnusedAdapter called with url https://repo.continuum.io/pkgs/free/linux-64/jpeg-8d-0.tar.bz2\nThis command is using a remote connection in offline mode.\n',)
even though I can see that my env was created successfully:
[root@MyServer]# conda env list
# conda environments:
#
py26 /opt/Anaconda/Anaconda2-4.4.0/envs/py26
py27 /opt/Anaconda/Anaconda2-4.4.0/envs/py27
root * /opt/Anaconda/Anaconda2-4.4.0
Does that error affect the environment I have created?
After a few hours of searching I found out that this issue comes from a bug in conda version 4.3.x (see the issue on GitHub).
To fix it, you will have to install conda 4.4.x.
Also, check out the update notes for that version to enable conda in your shell.
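A typical way to get onto 4.4.x is to update conda itself in the root environment, and the 4.4 release notes suggest sourcing conda.sh for the new shell integration; the path below is the install prefix shown in the env list above:
conda update -n root conda
. /opt/Anaconda/Anaconda2-4.4.0/etc/profile.d/conda.sh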

How to install Ansible in Amazon Linux EC2?

I picked this Amazon Linux AMI: Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type - ami-1a033c7a.
I installed Ansible using the command:
sudo pip install ansible
It shows the install completes. When I run ansible --version, it shows:
ansible 2.4.1.0
config file = None
configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Nov 2 2017, 19:20:38) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Why is config file = None? Shouldn't it show /etc/ansible/ansible.cfg? I do not see /etc/ansible/hosts, not even the folder /etc/ansible. Did I install correctly? Where is the folder /etc/ansible?
Why is config file = None?
Because at the time of running ansible --version, no config file was found.
Shouldn't it show /etc/ansible/ansible.cfg?
No. It should show the ansible.cfg actually being used.
Per documentation, Ansible tries to find the config file in:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
ansible --version will show you the exact path to the one being used.
Strictly speaking the last point is not always true, as package managers and virtual environment managers might cause the /etc directory to be located elsewhere.
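For example, you can point Ansible at a specific config file for a single command and confirm it is picked up (the path is a placeholder):
ANSIBLE_CONFIG=/home/ec2-user/ansible.cfg ansible --version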
Did I install correctly?
You didn't mention any error or warning during the installation, and ansible --version returned a proper response.
There is no reason not to believe it's installed properly.
Where is the folder /etc/ansible?
It doesn't exist on your system. The installation package creates neither a default inventory file nor a configuration file.
Create one.
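If you want the conventional /etc/ansible layout, a minimal sketch would be (the hostname is a placeholder):
sudo mkdir -p /etc/ansible
printf '[defaults]\ninventory = /etc/ansible/hosts\n' | sudo tee /etc/ansible/ansible.cfg
echo 'myserver.example.com' | sudo tee /etc/ansible/hosts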
Here I answer the question myself.
There are many ways to install Ansible, and you get different default settings depending on the OS. Many tutorials just assume ansible_hosts and ansible.cfg are already in /etc/ansible, which is not correct if you install Ansible using pip.
In fact, if you install Ansible using pip, you will not see ansible.cfg and ansible_hosts in /etc/ansible; even the folder /etc/ansible does not exist. But never mind, you can create these two files yourself as follows.
Suppose you want to store ansible_hosts and ansible.cfg in /home/ec2-user; then you can:
echo <remote_host> > /home/ec2-user/ansible_hosts
export ANSIBLE_INVENTORY=/home/ec2-user/ansible_hosts
wget https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
mv ansible.cfg /home/ec2-user/
export ANSIBLE_CONFIG=/home/ec2-user/ansible.cfg
Then if you run ansible --version, you will see:
ansible 2.4.1.0
config file = /home/ec2-user/ansible.cfg
....
and if you test an Ansible ad-hoc command (my remote host runs Ubuntu, so I use -u ubuntu; change it to yours):
ansible all -m ping -u ubuntu
then you will see Ansible ping the remote host successfully.
This shows Ansible does work.

Rails doesn't pick up SECRET_KEY_BASE environment variable

Here's what I'm working with right now:
Ubuntu Trusty 14.04
Rails 4.2.6
Ruby 2.2.3
Passenger
Nginx
When I try to visit the IP I get this message:
Incomplete response received from application
When I look at nginx/error.log I see:
Missing `secret_token` and `secret_key_base` for 'production' environment, set these values in `config/secrets.yml`
On the server I did:
RAILS_ENV=production bundle exec rake secret
I placed that result into each of these files for good measure:
~/.bashrc
~/.bash_profile
~/.profile
/app/shared/config/local_env.yml
For all shell scripts the format is:
export SECRET_KEY_BASE="[key]"
For the local_env.yml I used just:
SECRET_KEY_BASE="[key]"
I've also tried entering it without quotation marks.
I've restarted the server each time I made a change. No cigar.
What else might be the issue?
-- UPDATE
I've even added the secret key to the secrets.yml file directly. So now I'm thinking my issue is either something to do with passenger/nginx or with a typo somewhere.
It is more likely that the environment variables are not actually set than that Rails is not picking them up. You're raking secrets, which I don't do; I set them up manually in the Unix /etc/environment file, and do not check any secrets into source control. But the following are a few steps that should help you either resolve or home in on the problem.
On your Ubuntu server for system wide environment variables
1- $env
Look for your SECRET_TOKEN and SECRET_KEY_BASE. The error tells you that these are not set, this is just a technique to check env. (RAILS_ENV will also be shown in the list if it is set.)
2- $sudo nano /etc/environment
Add the following lines -- use your actual values between double quotes. Do not use a [key] or any programmatic replacement.
export SECRET_TOKEN="T99ABC..."
export SECRET_KEY_BASE="99ABC..."
3- $logout / $login to reload environment vars
4- $env - Check the environment again
Look for your SECRET_TOKEN and SECRET_KEY_BASE to be set.
5- Try deploying again. If it fails, check the environment vars using $env again. It will tell you if something in your deploy is smashing your SECRET_* env vars.
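As a quick sanity check after logging back in, printing the variable from a fresh shell should show the value you set:
printenv SECRET_KEY_BASE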