I'm using the latest version of AWS OpenSearch, but when I go to the Trace Analytics dashboard it does not show the traces sent by Data Prepper.
The application is manually instrumented with OpenTelemetry.
Data Prepper is running in Docker (opensearchproject/data-prepper:latest).
OpenSearch is running on the latest version.
Sample Configuration
data-prepper-config.yaml
ssl: false
pipelines.yaml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - otel_trace_raw:
  sink:
    - opensearch:
        hosts: [ "https://opensearch-domain" ]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-raw
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["https://opensearch-domain"]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-service-map
remote-collector.yaml
...
exporters:
  otlp/data-prepper:
    endpoint: data-prepper-address:21890
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
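(As a side note for debugging: one way to confirm spans are actually leaving the collector is to add the standard logging exporter alongside the Data Prepper exporter. This is only a sketch, assuming a collector build that ships the logging exporter.)
# debugging-only sketch; the 'logging' exporter is assumed to be available in this collector build
exporters:
  logging:
    loglevel: debug
  otlp/data-prepper:
    endpoint: data-prepper-address:21890
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp/data-prepper]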
When I go to the Query Workbench and run the query SELECT * FROM otel-v1-apm-span, I get the list of received trace spans. But I can't see any charts on the Trace Analytics dashboard (both Traces and Services); it's just an empty dashboard.
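Since the Services view is, as far as I understand, built from the service-map index, it may also be worth checking in the Query Workbench whether that index has any documents at all (assuming the default index name written by Data Prepper):
SELECT * FROM otel-v1-apm-service-map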
I'm also getting a warning:
WARN org.opensearch.dataprepper.plugins.processor.oteltrace.OTelTraceRawProcessor - Missing trace group for SpanId: xxxxxxxxxxxx
The traceGroupFields are also empty:
"traceGroupFields": {
  "endTime": null,
  "durationInNanos": null,
  "statusCode": null
}
Is there something wrong with my setup? Any help is appreciated.
I am trying to apply some of the CIS security benchmark recommendations to Kubernetes version 1.21.4 via kOps (1.21.0) for a self-hosted Kubernetes cluster on AWS.
However, when I set protectKernelDefaults: true in the kubelet config and enable the EventRateLimit admission plugin in the kube-apiserver config, the cluster fails to come up.
I am trying to bring up a new cluster with these settings, not trying to update an existing one.
The kOps cluster YAML I am trying to use is:
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: k8s.sample.com
spec:
  cloudLabels:
    team_number: "0"
    environment: "dev"
  api:
    loadBalancer:
      type: Internal
      additionalSecurityGroups:
        - sg-id
      crossZoneLoadBalancing: false
    dns: { }
  authorization:
    rbac: { }
  channel: stable
  cloudProvider: aws
  configBase: s3://state-data/k8s.sample.com
  etcdClusters:
    - cpuRequest: 200m
      etcdMembers:
        - encryptedVolume: true
          instanceGroup: master-eu-west-3a
          name: a
      memoryRequest: 100Mi
      name: main
      env:
        - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
          value: 2d
        - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
          value: 1m
        - name: ETCD_LISTEN_METRICS_URLS
          value: http://0.0.0.0:8081
        - name: ETCD_METRICS
          value: basic
    - cpuRequest: 100m
      etcdMembers:
        - encryptedVolume: true
          instanceGroup: master-eu-west-3a
          name: a
      memoryRequest: 100Mi
      name: events
      env:
        - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
          value: 2d
        - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
          value: 1m
        - name: ETCD_LISTEN_METRICS_URLS
          value: http://0.0.0.0:8081
        - name: ETCD_METRICS
          value: basic
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeControllerManager:
    enableProfiling: false
    logFormat: json
  kubeScheduler:
    logFormat: json
    enableProfiling: false
  kubelet:
    anonymousAuth: false
    logFormat: json
    protectKernelDefaults: true
    tlsCipherSuites: [ TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 ]
  kubeAPIServer:
    auditLogMaxAge: 7
    auditLogMaxBackups: 1
    auditLogMaxSize: 25
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /srv/kubernetes/audit/policy-config.yaml
    enableProfiling: false
    logFormat: json
    enableAdmissionPlugins:
      - NamespaceLifecycle
      - LimitRanger
      - ServiceAccount
      - PersistentVolumeLabel
      - DefaultStorageClass
      - DefaultTolerationSeconds
      - MutatingAdmissionWebhook
      - ValidatingAdmissionWebhook
      - NodeRestriction
      - ResourceQuota
      - AlwaysPullImages
      - EventRateLimit
      - SecurityContextDeny
  fileAssets:
    - name: audit-policy-config
      path: /srv/kubernetes/audit/policy-config.yaml
      roles:
        - Master
      content: |
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
          - level: Metadata
  kubernetesVersion: 1.21.4
  masterPublicName: api.k8s.sample.com
  networkID: vpc-id
  sshKeyName: node_key
  networking:
    calico:
      crossSubnet: true
  nonMasqueradeCIDR: 100.64.0.0/10
  subnets:
    - id: subnet-id1
      name: sn_nodes_1
      type: Private
      zone: eu-west-3a
    - id: subnet-id2
      name: sn_nodes_2
      type: Private
      zone: eu-west-3a
    - id: subnet-id3
      name: sn_utility_1
      type: Utility
      zone: eu-west-3a
    - id: subnet-id4
      name: sn_utility_2
      type: Utility
      zone: eu-west-3a
  topology:
    dns:
      type: Private
    masters: private
    nodes: private
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "kms:CreateGrant",
            "kms:Decrypt",
            "kms:DescribeKey",
            "kms:Encrypt",
            "kms:GenerateDataKey*",
            "kms:ReEncrypt*"
          ],
          "Resource": [
            "arn:aws:kms:region:xxxx:key/s3access"
          ]
        }
      ]
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "kms:CreateGrant",
            "kms:Decrypt",
            "kms:DescribeKey",
            "kms:Encrypt",
            "kms:GenerateDataKey*",
            "kms:ReEncrypt*"
          ],
          "Resource": [
            "arn:aws:kms:region:xxxx:key/s3access"
          ]
        }
      ]
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: k8s.sample.com
  name: master-eu-west-3a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210720
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-3a
  role: Master
  subnets:
    - sn_nodes_1
    - sn_nodes_2
  detailedInstanceMonitoring: false
  additionalSecurityGroups:
    - sg-id
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: k8s.sample.com
  name: nodes-eu-west-3a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210720
  machineType: t3.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-eu-west-3a
  role: Node
  subnets:
    - sn_nodes_1
    - sn_nodes_2
  detailedInstanceMonitoring: false
  additionalSecurityGroups:
    - sg-id
Note: I have changed some of the values above to remove specific details.
I have tried the protectKernelDefaults and EventRateLimit settings separately and tried to bring up the cluster, and it doesn't work in those cases either.
When I enable protectKernelDefaults and SSH to the master node to check the /var/log directory, kube-scheduler.log, kube-proxy.log, kube-controller-manager.log, and kube-apiserver.log are all empty.
When I enable EventRateLimit and SSH to the master node to check the /var/log directory, the API server fails to come up, and all the other log files contain failures stating that they are unable to connect to the API server.
kube-apiserver.log contains the following:
Log file created at: 2021/08/23 05:35:51
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:35:54
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:36:11
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:36:32
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0823 05:36:32.654990 1 flags.go:59] FLAG: --add-dir-header="false"
Log file created at: 2021/08/23 05:37:15
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:38:44
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:41:35
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:46:47
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:51:57
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/08/23 05:56:59
Running on machine: ip-10-100-120-9
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Any pointers to what is happening would help. Thanks in advance.
The issue with the default kernel settings was a bug in kOps: the installer did not set the sysctl values that the kubelet expects.
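For reference, the sysctl values the kubelet verifies when protectKernelDefaults is enabled are, to the best of my knowledge, the ones below; treat the file name and the exact list as a sketch and verify them against your kubelet version:
# /etc/sysctl.d/90-kubelet.conf (hypothetical file name)
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxkeys=1000000
kernel.keys.root_maxbytes=25000000
They can then be applied with sysctl --system.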
The issue with the admission controller was simply a missing admission control configuration file.
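For reference, EventRateLimit only works when kube-apiserver is given an admission control configuration file via --admission-control-config-file. A minimal sketch (the file paths and limits below are assumptions, not values from the setup above):
# /srv/kubernetes/admission/admission-config.yaml (hypothetical path)
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: eventconfig.yaml

# /srv/kubernetes/admission/eventconfig.yaml (hypothetical path, referenced above)
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Server
    qps: 50
    burst: 100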
I have used the following in my config file for statsd:
It worked fine when provided with an access ID and access key, but iamRole fails. Any insight would be helpful.
{
  backends: [ "aws-cloudwatch-statsd-backend" ],
  cloudwatch:
  {
    iamRole: 'role_attached_to_EC2_with_CloudWatchAgentServerPolicy',
    region: 'US_EAST_1'
  }
}
I am being shown the following error:
node node_modules/statsd/stats.js localConfig.js
4 Jul 21:55:17 - [2783] reading config file: localConfig.js
4 Jul 21:55:17 - server is up INFO
/home/ubuntu/webapp-backend/node_modules/awssum/lib/amazon/amazon.js:67
throw MARK + 'accessKeyID is required';
^
amazon: accessKeyID is required
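In case it helps with debugging: a generic way to confirm that instance-profile credentials are actually exposed on the EC2 instance (independent of the statsd backend) is to query the instance metadata service:
# prints the name of the role attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# prints the temporary AccessKeyId/SecretAccessKey/Token for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>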
I am trying to download a file from an S3 bucket. I did aws configure and also exported my access key and secret key, but I am still getting the same error. Please advise.
Code:
- name: Download xx tarball
  s3:
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  when: 'st.stat.exists == false'
Error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No Authentication Handler found: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials "}
ansible --version
ansible 2.0.0.2
uname -a
Linux ip-xx-xxx-xx-x 4.4.0-1026-aws #35-Ubuntu SMP Thu Jul 20 21:59:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
You need to check a couple of things:
I- boto is installed on the target host where you need to download the file from S3:
sudo -H pip install boto
II- If this is a remote host, then use this format:
- name: Download xx tarball
  s3:
    aws_access_key: "{{ AWS_S3_ACCESS_KEY }}"
    aws_secret_key: "{{ AWS_S3_SECRET_KEY }}"
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  when: st.stat.exists == false
Note: when you export the AWS credentials, it works locally but not for the remote host, so you need to pass the credentials to the module for it to work on the remote host.
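Alternatively, if you want to keep using the exported variables, you could pass them through to the task environment instead of as module parameters. A sketch, assuming the credentials are exported on the control machine where the env lookups run:
- name: Download xx tarball
  s3:
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  # forward the control machine's exported credentials to the remote host for this task
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
  when: st.stat.exists == false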
Hope this helps.
I'm using Salt Stack and I want to try provisioning new EC2 instances using the salt-cloud command, but I'm getting an auth failure from the salt-cloud command:
[root@salt:~] # salt-cloud -p base_ec2_public ops.example.com
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '3a5e33e2-d1a9-44fa-983c-26691d4f8ee7'}
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '163079c6-2b79-4301-80c8-77ba0d7c896d'}
[ERROR ] There was a profile error: string indices must be integers, not str
This is my /etc/salt/cloud.providers.d/aws.conf file
----
my-ec2-us-east-public-ips:
  # Set up the location of the salt master
  #
  minion:
    master: salt.example.com
  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1
  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  # private_ips - The salt-cloud command is run inside the EC2
  # public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: public_ips
  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay betwee retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1
  # Set the EC2 access credentials (see below)
  #
  id: "REDACTED"
  key: "REDACTED"
  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default
  # Optionally configure default region
  # Use salt-cloud --list-locations <provider> to obtain valid regions
  #
  location: us-east-1
  availability_zone: us-east-1a
  #
  ssh_username: ec2-user
  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::REDACTED:user/bluethundr'
  driver: ec2

my-ec2-us-east-private-ips:
  # Set up the location of the salt master
  #
  minion:
    master: salt.example.com
  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  # private_ips - The salt-master is also hosted with EC2
  # public_ips - The salt-master is hosted outside of EC2
  #
  ssh_interface: private_ips
  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay betwee retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1
  # Set the EC2 access credentials (see below)
  #
  id: "REDACTED"
  key: "REDACTED"
  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  # This one should NOT be specified if VPC was not configured in AWS to be
  # the default. It might cause an error message which says that network
  # interfaces and an instance-level security groups may not be specified
  # on the same request.
  #
  securitygroup: default
  # Optionally configure default region
  #
  location: us-east-1
  availability_zone: us-east-1a
  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL -> ec2-user
  # CentOS -> ec2-user
  # Ubuntu -> ubuntu
  #
  ssh_username: ec2-user
  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::REDACTED:user/bluethundr'
  driver: ec2
And this is my /etc/salt/cloud.profiles.d/aws_profiles.conf
base_ec2:
  provider: my-ec2-us-east-public-ips
  image: ami-869a9cee
  size: t2.micro
  ssh_username: ec2-user

base_ec2_private:
  provider: my-ec2-us-east-private-ips
  image: ami-869a9cee
  size: t2.micro
  ssh_username: ec2-user

base_ec2_public:
  provider: my-ec2-us-east-public-ips
  image: ami-e565ba8c
  size: t2.micro
  ssh_username: ec2-user

base_ec2_db:
  provider: my-ec2-us-east-public-ips
  image: ami-e565ba8c
  size: m1.xlarge
  ssh_username: ec2-user
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdi, tags: {"Environment": "production"} }
  # optionally add tags to profile:
  tag: {'Environment': 'production', 'Role': 'database'}
  # force grains to sync after install
  sync_after_install: grains

base_ec2_vpc:
  provider: my-ec2-us-east-public-ips
  image: ami-a73264ce
  size: m1.xlarge
  ssh_username: ec2-user
  script: /etc/salt/cloud.deploy.d/user_data.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      # auto assign public ip (not EIP)
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af413
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  tag: {'Environment': 'production', 'Role': 'database'}
  sync_after_install: grains
Here's some debug output of the command I'm trying to get working:
[root@salt:~] # salt-cloud -p base_ec2_public ops.example.com -l debug
[DEBUG ] Reading configuration from /etc/salt/cloud
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: salt.example.com
[DEBUG ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/aws.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/aws.conf
[DEBUG ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/aws_profiles.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/aws_profiles.conf
[DEBUG ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO ] salt-cloud starting
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Could not LazyLoad saltify.destroy: 'saltify.destroy' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_sizes: 'saltify.avail_sizes' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_images: 'saltify.avail_images' is not available.
[DEBUG ] Could not LazyLoad saltify.avail_locations: 'saltify.avail_locations' is not available.
[DEBUG ] LazyLoaded rackspace.reboot
[DEBUG ] LazyLoaded openstack.list_locations
[DEBUG ] LazyLoaded rackspace.list_locations
[DEBUG ] Using AWS endpoint: ec2.us-east-1.amazonaws.com
[DEBUG ] AWS Request: https://ec2.us-east-1.amazonaws.com/?Action=DescribeInstances&Version=2014-10-01
[DEBUG ] AWS Response Status Code: 401
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': '0f483305-6cb2-4c09-ae2f-ec804fd3beea'}
[DEBUG ] Failed to execute 'ec2.list_nodes()' while querying for running nodes: An error occurred while listing nodes: AWS was not able to validate the provided access credentials
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 2383, in run_parallel_map_providers_query
    cloud.clouds[data['fun']]()
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3496, in list_nodes
    nodes = list_nodes_full(get_location())
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3346, in list_nodes_full
    return _list_nodes_full(location)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 3436, in _list_nodes_full
    instances['error']['Errors']['Error']['Message']
SaltCloudSystemExit: An error occurred while listing nodes: AWS was not able to validate the provided access credentials
[DEBUG ] Generating minion keys for 'ops.jokefire.com'
[DEBUG ] LazyLoaded cloud.fire_event
[DEBUG ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event - data = {'profile': 'base_ec2_public', 'event': 'starting create', '_stamp': '2016-09-13T19:24:13.555913', 'name': 'ops.jokefire.com', 'provider': 'my-ec2-us-east-public-ips:ec2'}
[INFO ] Creating Cloud VM ops.jokefire.com in us-east-1
[DEBUG ] Using AWS endpoint: ec2.us-east-1.amazonaws.com
[DEBUG ] AWS Request: https://ec2.us-east-1.amazonaws.com/?Action=DescribeAvailabilityZones&Filter.0.Name=region-name&Filter.0.Value.0=us-east-1&Version=2014-10-01
[DEBUG ] AWS Response Status Code: 401
[ERROR ] AWS Response Status Code and Error: [401 401 Client Error: Unauthorized] {'Errors': {'Error': {'Message': 'AWS was not able to validate the provided access credentials', 'Code': 'AuthFailure'}}, 'RequestID': 'e9912cf2-2e9b-496f-b607-4b9bae8b8938'}
[ERROR ] There was a profile error: string indices must be integers, not str
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/salt/cloud/cli.py", line 284, in run
    self.config.get('names')
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1454, in run_profile
    ret[name] = self.create(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/__init__.py", line 1284, in create
    output = self.clouds[func](vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 2512, in create
    data, vm_ = request_instance(vm_, location)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1742, in request_instance
    az_ = get_availability_zone(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1094, in get_availability_zone
    zones = _list_availability_zones(vm_)
  File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/ec2.py", line 1242, in _list_availability_zones
    ret[zone['zoneName']] = zone['zoneState']
TypeError: string indices must be integers, not str
Can someone take a stab at this and let me know why I'm getting auth failures? The redacted AWS keys were taken straight from the AWS interface and copied into the cloud.providers file.
It seems the EC2 credentials are not being accepted. You may need to check the key/ID pair for the EC2 credentials and the IAM policy attached to them.
For the credentials, make sure the "REDACTED" strings are replaced with your real key/ID in the actual file.
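As a quick sanity check (a suggestion beyond the above), you can also verify the same ID/key pair outside of salt-cloud with the AWS CLI; if this also fails with an AuthFailure, the problem is with the credentials themselves rather than with the Salt configuration:
export AWS_ACCESS_KEY_ID=<your-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
aws ec2 describe-instances --region us-east-1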