Connecting to AWS Kinesis with the Erlang library kinetic
I am trying to connect to Kinesis using the Erlang library kinetic (https://github.com/AdRoll/kinetic). My development.config has my AWS key and secret in it, but I am not sure what metadata_base_url should be, or what else I need to make it work. Currently I have:
%% -*- erlang -*-
[{kinetic,
  [{args, [
    % All of these values are optional;
    % kinetic will get all of the context from the instance
    {metadata_base_url, "https://kinesis.us-east-1.amazonaws.com"},
    {aws_access_key_id, "mykey"},
    {aws_secret_access_key, "mysecret"},
    {iam_role, "kinetic"},
    {lhttpc_opts, [{max_connections, 5000}]}
  ]}]
}].
Below are the results when I try to start it:
kinetic (master) $ make
==> lhttpc (get-deps)
==> jiffy (get-deps)
==> meck (get-deps)
==> kinetic (get-deps)
==> lhttpc (compile)
==> jiffy (compile)
==> meck (compile)
==> kinetic (compile)
Compiled src/kinetic.erl
kinetic (master) $ erl -pa ebin -pa deps/*/ebin -s inets -s crypto -s ssl -s lhttpc -config development -s kinetic
Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
Eshell V5.10.4 (abort with ^G)
1>
=INFO REPORT==== 2-Dec-2014::11:51:31 ===
application: kinetic
exited: {{shutdown,
{failed_to_start_child,kinetic_config,
{{badmatch,{error,403}},
[{kinetic_config,new_args,1,
[{file,"src/kinetic_config.erl"},{line,127}]},
{kinetic_config,update_data,1,
[{file,"src/kinetic_config.erl"},{line,42}]},
{kinetic_config,init,1,
[{file,"src/kinetic_config.erl"},{line,55}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}}},
{kinetic,start,[normal,[]]}}
type: temporary
When I removed the metadata_base_url setting:
=ERROR REPORT==== 2-Dec-2014::12:41:30 ===
{failed_connect,[{to_address,{"169.254.169.254",80}},{inet,[inet],etimedout}]}
=INFO REPORT==== 2-Dec-2014::12:41:30 ===
application: kinetic
exited: {{shutdown,
{failed_to_start_child,kinetic_config,
{{badmatch,
{error,
{failed_connect,
[{to_address,{"169.254.169.254",80}},
{inet,[inet],etimedout}]}}},
[{kinetic_config,new_args,1,
[{file,"src/kinetic_config.erl"},{line,127}]},
{kinetic_config,update_data,1,
[{file,"src/kinetic_config.erl"},{line,42}]},
{kinetic_config,init,1,
[{file,"src/kinetic_config.erl"},{line,55}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}}},
{kinetic,start,[normal,[]]}}
type: temporary
It seems that when running the kinetic application outside of an EC2 instance, you need to specify the region in the config:
[{kinetic,
  [{args, [
    {region, "us-east-1"}, %% just an example
    ...
  ]}]
}].
and use a patched version of kinetic that won't try to discover the region.
A second solution is to set the metadata_base_url option to point at an HTTP service of your own that, on a GET request for "/latest/meta-data/placement/availability-zone", responds with your availability zone (and thus your region); a sketch of such a service follows.
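For illustration, here is a minimal sketch of such a stand-in metadata service, written with Python's standard library (the port, bind address, and returned zone are assumptions for the example; depending on the kinetic version, it may also ask the metadata service for IAM role credentials, which this sketch does not serve):

from http.server import BaseHTTPRequestHandler, HTTPServer

AZ_PATH = "/latest/meta-data/placement/availability-zone"

class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == AZ_PATH:
            # kinetic derives the region from the availability zone,
            # so answer with a zone inside the target region.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"us-east-1a")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8169), MetadataHandler).serve_forever()

You would then point the config at it with {metadata_base_url, "http://127.0.0.1:8169"}.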
I've never used AWS, so some of these statements might be imprecise.
The error message says that the request to https://kinesis.us-east-1.amazonaws.com/latest/meta-data/placement/availability-zone returned a 403: Forbidden. Per the HTTP documentation, this means the server understood the request but you do not have access to that resource. Note that "/latest/meta-data/..." is the EC2 instance metadata path: as the second error report shows, metadata_base_url defaults to http://169.254.169.254 (the instance metadata service, which is only reachable from inside EC2, hence the timeout), so pointing it at the Kinesis API endpoint asks Kinesis for a resource that does not exist there.
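You can see this for yourself from any shell (a hypothetical check, just to observe what that endpoint answers for a metadata-style path):

$ curl -i https://kinesis.us-east-1.amazonaws.com/latest/meta-data/placement/availability-zone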
Related
Ansible GCP dynamic inventory: Failed to connect to the host via ssh: Permission denied (publickey)
Configuration

I followed the steps in the below links to set up my GCP dynamic inventory.
https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
http://matthieure.me/2018/12/31/ansible_inventory_plugin.html

In short, these were the steps:

I installed the needed prerequisites:
$ pip install requests google-auth

I created a service account with sufficient privileges and set its credentials.

I added the below to the /etc/ansible/ansible.cfg file:
[inventory]
enable_plugins = gcp_compute

I created a file called hosts.gcp.yml which holds the dynamic inventory setup (as shown below):
projects:
  - my-project-id
hostnames:
  - name
filters: []
auth_kind: serviceaccount
service_account_file: my/credentials_path.json
keyed_groups:
  - key: zone

and tried to run the below command, which worked fine:
macbook@MacBooks-MacBook-Pro Ansible % ansible-inventory --graph -i hosts.gcp.yml
@all:
  |--@_us_central1_a:
  |  |--test
  |--@ungrouped:

but when running the below command I got the following errors:
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -m ping
test | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname test: nodename nor servname provided, or not known",
    "unreachable": true
}

I then commented out the - name option from the hosts.gcp.yml file but got another error:
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -m ping
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: macbook@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}

This raises the following questions:
1- Is an SSH setup (creating users and copying ssh-keys) needed on the host machines when using dynamic inventories (I don't think so)?
2- Why is Ansible resorting to SSH though a dynamic inventory is set? What if the host didn't expose SSH to the public or didn't have a public IP?

Your kind support is highly appreciated. Thanks.

A more verbose output of the test:
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -vvv -m ping
ansible [core 2.11.6]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/Users/macbook/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/4.7.0/libexec/lib/python3.9/site-packages/ansible
  ansible collection location = /Users/macbook/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]
  jinja version = 3.0.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
redirecting (type: inventory) ansible.builtin.gcp_compute to google.cloud.gcp_compute
Parsed /Users/macbook/xxxx/Projects/xxxx/Ansible/hosts.gcp.yml inventory source with ansible_collections.google.cloud.plugins.inventory.gcp_compute plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<34.132.201.8> ESTABLISH SSH CONNECTION FOR USER: None
<34.132.201.8> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/macbook/.ansible/cp/026bb454d7 34.132.201.8 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<34.X.X.8> (255, b'', b'macbook@34.X.X.8: Permission denied (publickey).\r\n')
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: macbook@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}

macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -u ansible -vvv -m ping
ansible [core 2.11.6]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/Users/macbook/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/4.7.0/libexec/lib/python3.9/site-packages/ansible
  ansible collection location = /Users/macbook/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]
  jinja version = 3.0.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
redirecting (type: inventory) ansible.builtin.gcp_compute to google.cloud.gcp_compute
Parsed /Users/macbook/xxxx/Projects/xxx/Ansible/hosts.gcp.yml inventory source with ansible_collections.google.cloud.plugins.inventory.gcp_compute plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<34.132.201.8> ESTABLISH SSH CONNECTION FOR USER: ansible
<34.132.201.8> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/Users/macbook/.ansible/cp/46d2477dfb 34.132.201.8 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<34.X.X.8> (255, b'', b'ansible@34.X.X.8: Permission denied (publickey).\r\n')
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ansible@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}
A dynamic inventory is used only to collect data about your machines. If you want to get access to them, you still use SSH: you must add your SSH public key to the VM's config and specify the username. Add these lines to the [defaults] section of your ansible.cfg:

host_key_checking = false
remote_user = <username that you specified in the VM's config>
private_key_file = <path to the private ssh key>
Most probably Ansible can't establish an SSH connection to the hosts (listed in hosts.gcp.yml) because they don't recognize the SSH key of the machine that tries to ping them. Since you're using a MacBook, it's clearly not a GCP VM, which means your GCP VMs don't have its public SSH key by default. You can add your MacBook's key (found in ~/.ssh/id_rsa.pub) to the list of authorized keys that all GCP VMs will accept, without any action on the VMs' side (see the example command below). As for the first question - it's clearly a DNS issue - however, I'm not versed enough with this tool, so you'd have to tell whether you can ping all the VMs using their DNS names directly from your Mac's terminal. If so, the issue is with the Ansible configuration; otherwise it's a DNS issue that prevents your computer from resolving the DNS names of your VMs. Additionally, ansible-inventory --graph -i /file/path works "offline" and will only show the structure of your inventory, regardless of whether the hosts exist or are reachable.
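One way to add it, assuming the gcloud CLI is configured and the project uses OS Login (both assumptions; adding a project-wide key in the console under Compute Engine > Metadata > SSH keys is the alternative):

$ gcloud compute os-login ssh-keys add --key-file ~/.ssh/id_rsa.pub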
There are a couple of points in your question, one about inventory and one about connections.

Inventory

Your hosts.gcp.yml file is for a dynamic inventory plugin, as you said. What that means is that Ansible will run the GCP inventory plugin using the settings in that file, and the plugin will call GCP's API and generate a list of hosts to use as inventory. What the ansible-inventory command returns is what the ansible command will use also. In the example bit of output you pasted into your question, it looks like "test" is the only host it sees.

Connections

When you run the ansible command it will run the module against each host. It will first get the hostname returned by inventory, and then connect to that host using the transport type you specified. This is true even for the ping module. From the ping module's doc page: "This is NOT ICMP ping, this is just a trivial test module that requires Python on the remote-node." Meaning, it makes a connection.

Potential Gotchas

Is inventory returning the correct hostname for your environment? What is the connection type you're using?

As for hostname, you set "hostnames" to "name" in your inventory file. Just be sure that's right. It might not be in your case (see the sketch after this answer for one common alternative).

As for connection type, if you haven't configured it, then by default it will be "smart", which uses SSH. You can find what you're using by doing this:

ansible-config dump | grep DEFAULT_TRANSPORT

You can change the connection type with the --connection option to the ansible command, or any of the other ways ansible lets you specify config options. Connection type is set independently from inventory type. They are two separate steps. The connection type is set via config or the command line option and is not based on what inventory plugin you're using.

Your Problem

To resolve your problem, figure out what hostnames ansible-inventory is actually returning, and what connection type you're using. Then see if you can connect to that hostname using that connection type. If the hostname being returned is "test" and your connection type is "smart" or "ssh", then try actually connecting with ssh to "test". From the command line, literally do:

ssh test

If that succeeds, then ansible should successfully connect to that host when it's run. If that doesn't succeed, then you have to do whatever you need to do to fix it in order for ansible to run successfully. Likewise, if you set a connection plugin different from SSH, then you should try to connect to your host using whatever that connection method uses in order to ensure that those types of connections are actually working.

More info about all this can be found in ansible's user guide. See, for example, "Connecting to remote nodes".
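As a concrete illustration of the hostname gotcha: the gcp_compute inventory plugin supports a compose section for deriving connection variables from instance attributes. A sketch (hedged: the attribute path follows the GCP API's instance shape; check it against your plugin version) that keeps the instance name for display but points SSH at the external IP:

projects:
  - my-project-id
hostnames:
  - name
auth_kind: serviceaccount
service_account_file: my/credentials_path.json
compose:
  # assumption: the first NIC has a NAT (external) IP attached
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP

With this, ansible-inventory still shows "test", but the ping module connects to the VM's public address.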
PowerShell 7 and AWS Module SSM: How do I actually connect?
It's dead simple: how can I do with the PowerShell module the identical thing the Python CLI command does? I see it returns a websocket - should I just connect?

Using python/bash on a Mac:

$ aws ssm start-session --target i-xxx
Starting session with SessionId: christian.bongiorno@xxx.com-0544a5b3ac9cdc6cd
sh-4.2$

Now in PowerShell on the same Mac:

PS /Users/cbongiorno/Downloads> Install-Module -Name AWS.Tools.SimpleSystemsManagement
PS /Users/cbongiorno/Downloads> Start-SSMSession -Target i-xxx

SessionId                                     StreamUrl                                                                                                                      TokenValue
---------                                     ---------                                                                                                                      ----------
christian.bongiorno@xxx.com-011bb75b5d9188ab3 wss://ssmmessages.us-east-1.amazonaws.com/v1/data-channel/christian.bongiorno@xxx.com-011bb75b5d9188ab3?role=publish_subscribe AAEAAXDjmEubBvyBryaMbiCP5WdWX…

PS /Users/cbongiorno/Downloads> Resume-SSMSession -SessionId christian.bongiorno@xxx.com-011bb75b5d9188ab3

SessionId                                     StreamUrl                                                                                                                              TokenValue
---------                                     ---------                                                                                                                              ----------
christian.bongiorno@xxx.com-011bb75b5d9188ab3 wss://ssmmessages.us-east-1.amazonaws.com/v1/data-channel/christian.bongiorno@sterlingts.com-011bb75b5d9188ab3?role=publish_subscribe AAEAAeHX3Op/NJ2tU4qjfsHIjS80v…

With PowerShell, I get no errors, but I also get no shell - I get this object. It should give me a terminal on that host.
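A hedged note on what the Python CLI is doing: aws ssm start-session does more than the StartSession API call - it hands the returned SessionId, StreamUrl, and TokenValue to the separate session-manager-plugin binary, which speaks the WebSocket protocol and provides the interactive shell. Start-SSMSession only makes the API call, which is why you get the session object but no terminal. If the plugin is installed, it can in principle be driven by hand; the invocation below mirrors what the CLI passes to the plugin, but the exact argument order is an assumption to verify against your plugin version:

session-manager-plugin '{"SessionId": "...", "StreamUrl": "wss://...", "TokenValue": "..."}' us-east-1 StartSession '' '{"Target": "i-xxx"}' https://ssm.us-east-1.amazonaws.com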
Looking for logs when an EC2 instance is initializing
I am using openshift-ansible (https://github.com/openshift/openshift-ansible), partially customized for our needs. The part launching the instances was modified to set the group_id; nothing more was changed in it.

When creating an openshift master, all works fine. However, when creating 2 openshift nodes, I can see the 2 instances being created in the "Running instances" panel of the EC2 Dashboard. The instances stay in state "Initializing" for a few seconds and then automatically switch to "Shutting down". Ansible, on its side, was still in the task of launching the instances.

So my question is: is there a way to analyze logs of AWS instances when new instances are being created?

Log of the last ansible task:

TASK: [Launch instance(s)] ****************************************************
REMOTE_MODULE ec2 region=eu-west-1 keypair=ggkey1-eu-west state=present instance_type=m3.large user_data='#cloud-config
mounts:
- [ xvdb ]
- [ ephemeral0 ]
write_files:
- content: |
    DEVS=/dev/xvdb
    VG=docker_vg
  path: /etc/sysconfig/docker-storage-setup
  owner: root:root
  permissions: '"'"'0644'"'"'
' vpc_subnet_id=subnet-60cf1205 image=ami-33ba2a44 count=2
EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076 && echo $HOME/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076']
PUT /tmp/tmp4r8qve TO /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ec2
EXEC ['/bin/sh', '-c', u'LANG=C LC_CTYPE=C /usr/bin/env python2 /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ec2; rm -rf /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ >/dev/null 2>&1']
failed: [localhost] => {"failed": true}
msg: wait for instances running timeout on Fri Sep 11 13:21:43 2015

$ ansible --version
ansible 1.9.2
  configured module search path = None
$ uname -a
Linux ip-172-31-42-45 3.10.0-123.8.1.el7.x86_64 #1 SMP Mon Sep 22 19:06:58 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@ip-172-31-42-45 : ~/uha-rbox-spawner$

Thanks,
Is there a way to analyze logs of the instances of AWS when new instances are being created?

You are looking for "get console output". You can see it in the AWS (http) Console, or you can fetch it with the awscli or the API of your choice. "Get console output" is slightly confusing, since the AWS Console is also a "console". Think of it as "system logs" (as the Console labels it), or simply "what would show on a screen in a datacenter".
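For example, with the awscli (the instance ID here is a placeholder; the region is taken from the ansible log above):

$ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --region eu-west-1 --output text

Note that console output is captured with some delay after boot, so instances that shut down within seconds may show little or nothing at first; when it is there, the cloud-init/user_data messages in it are often what explain an immediate shutdown.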
subprocess.Popen still waits when starting Rserve
Local system: OSX 10.10.3 Yosemite, Python 2.7
Remote server: NAME="Amazon Linux AMI" VERSION="2014.09"

I'm trying to use Python to start (restart) the Rserve process on the server. I'm able to start it as ec2-user either directly ($ R CMD Rserve --vanilla) or using a SysV init script ($ service Rserve restart); however, the Python script still hangs waiting for the started Rserve process to finish. Since it's a daemon, it doesn't finish, and the script hangs. I have tried both subprocess.call and subprocess.Popen.

log.error('Attempting to restart Rserve environment...')
PEM = "/Users/xxx/ra.pem"
USER = "ec2-user"
PORT = '22'
SERVER = str(callobj.rhandler_host)  # ends up confirmed correct IP
command = ['/sbin/service', 'Rserve', 'restart']  # Starts Rserve and hangs
# command = ['R', 'CMD', 'Rserve', '--vanilla']  # Starts Rserve and hangs
# command = ['ls']  # lists and finishes
ssh = ['ssh', '-i', PEM, '-p', PORT, USER + '@' + SERVER]
run = ssh + command
proc = subprocess.Popen(run,
                        shell=False,
                        # stdout=subprocess.PIPE)  # Tried this, no change
                        stdin=None, stdout=None, stderr=None,
                        close_fds=True)

OUTPUT:

2015-06-06 16:51:51,362 - MRATrefactor - ERROR - [R] environment does not appear to be running on server '54.xxx.xxx.x2'. Connection denied, server not reachable or not accepting connections
2015-06-06 16:51:51,363 - MRATrefactor - ERROR - rserveHandler.connect: ; *args: None; **kwargs: None; RserveNotRunning: [R] environment does not appear to be running on server '54.xxx.xxx.x2'. Connection denied, server not reachable or not accepting connections. Rserve may not be running.
2015-06-06 16:51:51,363 - MRATrefactor - ERROR - Attempting to restart Rserve environment...
PID: 8205
Cannot match 8205 to running processes. Grepping for process [expecting PID: 8205]...
grep count: 0
0
Stopping Rserve: [FAILED]
Starting Rserve:
R version 3.1.1 (2014-07-10) -- "Sock it to Me"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Rserv started in daemon mode.
ec2-user 8385 1 0 14:51 ? 00:00:00 /usr/lib64/R/bin/Rserve --vanilla
PID: 8385
Starting Rserve: [ OK ]

(and the script hangs here... apparently (maybe) waiting...?)
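A hedged sketch of a common fix for this kind of hang: ssh keeps the connection open as long as the remote command's stdout/stderr are open, and a daemonized Rserve can inherit those descriptors from the init script, so redirecting them on the remote side usually lets ssh (and therefore Popen) return. The exact command layout below is an assumption modeled on the snippet above:

import subprocess

# Values modeled on the question's snippet; SERVER is a placeholder
# (the original uses callobj.rhandler_host).
PEM = "/Users/xxx/ra.pem"
USER = "ec2-user"
PORT = "22"
SERVER = "54.xxx.xxx.x2"

# The redirections are plain words in the argument list: ssh joins the
# remote-command arguments with spaces and hands the result to the remote
# shell, which performs the redirection there (shell=False only affects
# the local side).
command = ["/sbin/service", "Rserve", "restart",
           "</dev/null", ">/dev/null", "2>&1"]
ssh = ["ssh", "-i", PEM, "-p", PORT, USER + "@" + SERVER]

proc = subprocess.Popen(ssh + command, shell=False, close_fds=True)
proc.wait()  # returns once the restart command itself finishes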
Capistrano git:check failed exit status 128
I got this when I ran cap production git:check (I took out my real IP address and user name):

DEBUG [4d1face0] Running /usr/bin/env git ls-remote -h foo@114.215.183.110:~/git/deepot.git on 114.***.***.***
DEBUG [4d1face0] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/deepot/git-ssh.sh /usr/bin/env git ls-remote -h foo@114.***.***.***:~/git/deepot.git )
DEBUG [4d1face0] Error reading response length from authentication socket.
DEBUG [4d1face0] Permission denied (publickey,password).
DEBUG [4d1face0] fatal: The remote end hung up unexpectedly
DEBUG [4d1face0] Finished in 0.715 seconds with exit status 128 (failed).

Below is my deploy file:

set :user, 'foo'
set :domain, '114.***.***.***'
set :application, 'deepot'
set :repo_url, 'foo@114.***.***.***:~/git/deepot.git'
set :deploy_to, '/home/#{fetch(:user)}/local/#{fetch(:application)}'
set :linked_files, %w{config/database.yml config/bmap.yml config/cloudinary.yml config/environments/development.rb config/environments/production.rb}

Below is my production.rb:

role :app, %w{foo@114.***.***.***}
role :web, %w{foo@114.***.***.***}
role :db, %w{foo@114.***.***.***}
server '114.***.***.***', user: 'foo', roles: %w{web app}, ssh_options: {keys: %w{/c/Users/Administrator/.ssh/id_rsa}, auth_methods: %w(publickey)}

I can successfully SSH onto foo@114.***.***.*** without entering any password using Git Bash. (I am on a Windows 7 machine and the deployment server is Ubuntu 12.04.) Any help will be appreciated. Thanks!
Try generating a new private/public key pair and providing a passphrase for it. If that works, the problem is that your current key doesn't use a passphrase.
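A sketch of that suggestion, with hedged placeholders (the key filename and comment are arbitrary; on Windows 7 with Git Bash the keys would typically live under /c/Users/Administrator/.ssh):

$ ssh-keygen -t rsa -b 4096 -C "deploy@deepot" -f /c/Users/Administrator/.ssh/id_rsa_deploy
$ ssh-copy-id -i /c/Users/Administrator/.ssh/id_rsa_deploy.pub foo@114.***.***.***

(If ssh-copy-id is not available in your Git Bash, append the contents of the .pub file to ~/.ssh/authorized_keys on the server by hand.) Then point Capistrano at the new key in production.rb:

server '114.***.***.***', user: 'foo', roles: %w{web app}, ssh_options: {keys: %w{/c/Users/Administrator/.ssh/id_rsa_deploy}, auth_methods: %w(publickey)}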