I'm running the Elasticsearch image using Docker.
Other services work fine, such as creating an index and getting the list of indices.
But when I try to create mappings for an already existing index, it returns the following error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: mappings definition"
}
],
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters:mappings definition "
},
"status": 400
}
This is the mapping I'm trying to create:
{
"mappings": {
"myfile": {
"dynamic": "strict",
"properties": {
"property1":{
"type":"keyword"
},
"property2":{
"type":"long"
}
}
}
}
}
URL I'm using to create the mappings through Postman:
Method: POST http://localhost:9200/my-domain-name/_mapping
with the above mappings as the request body.
The Elasticsearch service in my Docker Compose file is:
elasticsearch:
container_name: tqd-elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
depends_on:
- "localstack"
logging:
driver: none
ports:
- 9300:9300
- 9200:9200
networks:
- "local"
networks:
local:
driver: "bridge"
What am I doing wrong over here?
Elasticsearch has deprecated mapping types (_type) since version 7.x, so the mapping body must not be wrapped in a type name.
You can use the request below from Postman and it will work for you.
URL: POST http://localhost:9200/my-domain-name/_mapping
{
"dynamic": "strict",
"properties": {
"property1": {
"type": "keyword"
},
"property2": {
"type": "long"
}
}
}
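For reference, the same request as a curl command (a sketch assuming Elasticsearch is reachable on localhost:9200 and the index my-domain-name already exists):
curl -X POST "http://localhost:9200/my-domain-name/_mapping" \
  -H 'Content-Type: application/json' \
  -d '{
    "dynamic": "strict",
    "properties": {
      "property1": { "type": "keyword" },
      "property2": { "type": "long" }
    }
  }'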
Related
I am configuring Kubernetes on AWS EC2.
I use Elasticsearch's Packetbeat to get geo information about the clients accessing the service.
Istio is used as the service mesh for Kubernetes, and a CLB is used as the load balancer.
I want to know the client IP accessing the service and the domain address the client accesses.
my packetbeat.yml
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
index.number_of_shards: 2
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.auto_promiscuous_mode: true
packetbeat.interfaces.with_vlans: true
packetbeat.protocols:
- type: dhcpv4
ports: [67, 68]
- type: dns
ports: [53]
include_authorities: true
include_additionals: true
- type: http
ports: [80,5601,8081,8002,5000, 8000, 8080, 9200]
send_request: true
send_response: true
send_header: ["User-Agent", "Cookie", "Set-Cookie"]
real_ip_header: "X-Forwarded-For"
- type: mysql
ports: [3306, 3307]
- type: memcache
ports: [11211]
- type: redis
ports: [6379]
- type: pgsql
ports: [5432]
- type: thrift
ports: [9090]
- type: mongodb
ports: [27017]
- type: cassandra
ports: [9042]
- type: tls
ports: [443, 993, 995, 5223, 8443, 8883,8883, 9243, 15021, 15443, 32440]
send_request: true
send_response: true
send_all_headers: true
include_body_for: ["text/html", "application/json"]
packetbeat.procs.enabled: true
packetbeat.flows:
timeout: 30s
period: 10s
fields: ["server.domain"]
processors:
- include_fields:
fields:
- source.ip
- server.domain
- add_docker_metadata:
- add_host_metadata:
- add_cloud_metadata:
- add_kubernetes_metadata:
host: ${HOSTNAME}
indexers:
- ip_port:
matchers:
- field_format:
format: '%{[ip]}:%{[port]}'
# with version 7 of Packetbeat use the following line instead of the one above.
#format: '%{[destination.ip]}:%{[destination.port]}'
output.elasticsearch:
hosts: ${ELASTICSEARCH_ADDRESS}
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
pipeline: geoip-info
setup.kibana:
host: 'https://myhost:443'
My CLB listener:
The CLB has proxy protocol enabled.
But Packetbeat doesn't give me the data I want.
Search result for the TLS log:
"client": {
"port": 1196,
"ip": "10.0.0.83"
},
"network": {
"community_id": "1:+ARNMwsOGxkBkrmWfCVawtA1GKo=",
"protocol": "tls",
"transport": "tcp",
"type": "ipv4",
"direction": "egress"
},
"destination": {
"port": 8443,
"ip": "10.0.1.77",
"domain": "my host domain"
},
Search result for flow.final: true:
"event": {
"duration": 1051434189423,
"kind": "event",
"start": "2022-10-28T05:25:14.171Z",
"action": "network_flow",
"end": "2022-10-28T05:42:45.605Z",
"category": [
"network_traffic",
"network"
],
"type": [
"connection"
],
"dataset": "flow"
},
"source": {
"geo": {
"continent_name": "Asia",
"region_iso_code": "KR",
"city_name": "Has-si",
"country_iso_code": "KR",
"country_name": "South Korea",
"region_name": "Gg",
"location": {
"lon": 126.8168,
"lat": 37.2072
}
},
"port": 50305,
"bytes": 24174,
"ip": "my real ip address",
"packets": 166
},
I can find each piece if I search separately, but there is no point of contact between the two.
I would like to see a log that combines the two above: the domain the client accesses plus the real client IP.
Please help me.
I'm using the OpenSearch 1.2 version deployed on AWS.
I was following this AWS tutorial but with my own sensor data. I've created the following IoT Core Rule OpenSearch Action:
OpenSearchTopicRule:
Type: AWS::IoT::TopicRule
Properties:
TopicRulePayload:
Actions:
- OpenSearch:
Endpoint: !Join ['', ['https://', !GetAtt OpenSearchServiceDomain.DomainEndpoint]]
Id: '${newuuid()}'
Index: sensors
RoleArn: !GetAtt IoTOSActionRole.Arn
Type: sensor_data
Sql: SELECT *, timestamp() as ts FROM 'Greenhouse/+/Sensor/Status'
The IoTOSActionRole has the proper es:ESHttpPut permission. But when I try to create an index that would match the Type: sensor_data attribute with the following command sent from Postman:
curl --location --request PUT 'https://search-iot***-avt***i.eu-west-1.es.amazonaws.com/sensors' \
--header 'Content-Type: application/json' \
--data-raw '{
"mappings": {
"sensor_data": {
"properties": {
"ts": { "type": "long",
"copy_to": "datetime"},
"datetime": {"type": "date",
"store": true},
"deviceID": {"type": "text",
"store": true},
"humidity": {"type": "integer",
"store": true},
"temperature": {"type": "integer",
"store": true},
"lux": {"type": "integer",
"store": true},
"soil": {"type": "integer",
"store": true}
}}}}'
I receive an error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [sensor_data : {properties={datetime={store=true, type=date}, temperature={store=true, type=integer}, humidity={store=true, type=integer}, soil={store=true, type=integer}, deviceID={store=true, type=text}, lux={store=true, type=integer}, ts={copy_to=datetime, type=long}}}]"
}
],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [...]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [...}]"
}
},
"status": 400
}
I've tried removing the 'type' "sensor_data" attribute as mentioned here (but that's an Elasticsearch solution) and that allowed me to create an index with that mapping,
{
"acknowledged": true,
"shards_acknowledged": true,
"index": "sensors"
}
and then an index pattern in OpenSearch Dashboards, but what happens then is that the IoT Core Rule, even though it gets triggered, does not result in any data being ingested into the OpenSearch domain. So I guess the IoT Core Action tries to send the data with type sensor_data, but there is no corresponding type in OS. Additionally, when I open the Discover tab in the OS dashboard I get this notice:
"undefined" is not a configured index pattern ID
Showing the default index pattern: "sensors*" (d97775d0-***a2fd725)
Sample data:
{
"deviceID": "Tomatoes",
"Greenhouse": 1,
"date": "05-05",
"time": "09:35:39",
"timestamp": 1651743339,
"humidity": 60,
"temperature": 33.3,
"lux": 9133.333,
"soil": 78
}
What PUT call do I have to make to create a sensor_data type mapping in OS that would match the type specified in the IoT Core OpenSearch Action Rule?
UPDATE
I've tried the same API call with Elasticsearch 7.10 and received the same mapper_parsing_exception response. I've also tried removing the "store": true attribute. The only call that is accepted is the one omitting the sensor_data type attribute, sent to the .../sensors/_mappings API.
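For reference, a sketch of the typeless variant that was accepted (same endpoint and fields as the original request, with only the sensor_data level removed):
curl --location --request PUT 'https://search-iot***-avt***i.eu-west-1.es.amazonaws.com/sensors' \
--header 'Content-Type: application/json' \
--data-raw '{
  "mappings": {
    "properties": {
      "ts": { "type": "long", "copy_to": "datetime" },
      "datetime": { "type": "date", "store": true },
      "deviceID": { "type": "text", "store": true },
      "humidity": { "type": "integer", "store": true },
      "temperature": { "type": "integer", "store": true },
      "lux": { "type": "integer", "store": true },
      "soil": { "type": "integer", "store": true }
    }
  }
}'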
I'm trying to create this API Gateway (gist) with an Authorizer and an ANY method.
I run into this error:
The following resource(s) failed to create: [BaseLambdaExecutionPolicy, ApiGatewayDeployment]
I've checked the parameters passed into this template from my other stacks and they're correct. I've checked this template and it's valid.
My template is modified from this template with "Runtime": "nodejs8.10".
This is the same stack (gist) that is created successfully using Swagger 2. I just want to replace Swagger 2 with AWS::ApiGateway::Method.
Update 6 Jun 2019:
I tried to create the whole nested stack using the working version of the API Gateway stack, then create another API Gateway with the template that doesn't work, using the parameters I get from the nested stack. Then I get this:
The REST API doesn't contain any methods (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; Request ID: ID)
But I did specify the method in my template following AWS docs:
"GatewayMethod": {
"Type" : "AWS::ApiGateway::Method",
"DependsOn": ["LambdaRole", "ApiGateway"],
"Properties" : {
"ApiKeyRequired" : false,
"AuthorizationType" : "Cognito",
"HttpMethod" : "ANY",
"Integration" : {
"IntegrationHttpMethod" : "ANY",
"Type" : "AWS",
"Uri" : {
"Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaFunction.Arn}/invocations"
}
},
"MethodResponses" : [{
"ResponseModels": {
"application/json": "Empty"
},
"StatusCode": 200
}],
"RequestModels" : {"application/json": "Empty"},
"ResourceId" : {
"Fn::GetAtt": ["ApiGateway", "RootResourceId"]
},
"RestApiId" : {
"Ref": "ApiGateway"
}
}
},
Thanks to @John's suggestion, I tried creating the nested stack with the version that worked and passing in the parameters for the version that doesn't work.
The reason for that error is:
CloudFormation might try to create Deployment before it creates Method
from balaji's answer here.
So this is what I did:
"methodANY": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"AuthorizationType": "COGNITO_USER_POOLS",
...},
"ApiGatewayDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"DependsOn": "methodANY",
...
I also found this article on cloudonaut.io by Michael Wittig helpful.
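For completeness, a sketch of how the Deployment resource might look with the DependsOn in place (the RestApiId reference "ApiGateway" and the stage name "dev" are assumptions):
"ApiGatewayDeployment": {
  "Type": "AWS::ApiGateway::Deployment",
  "DependsOn": "methodANY",
  "Properties": {
    "RestApiId": { "Ref": "ApiGateway" },
    "StageName": "dev"
  }
}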
I'm trying to define my AWS API Gateway infrastructure using Swagger/OpenAPI. Everything is working so far; however, I have problems enabling the requirement for an API key on my endpoints.
My Swagger file looks like this (shortened):
---
swagger: 2.0
basePath: /dev
info:
title: My API
description: Proof of concept
schemes:
- https
securityDefinitions:
api_key:
type: apiKey
name: X-Api-Key
in: header
paths:
/example-path:
options:
consumes:
- application/json
produces:
- application/json
x-amazon-apigateway-integration:
type: mock
requestTemplates:
application/json: |
{
"statusCode" : 200
}
responses:
"default":
statusCode: "200"
responseParameters:
method.response.header.Access-Control-Allow-Methods: "'GET,HEAD,OPTIONS'"
method.response.header.Access-Control-Allow-Headers: "'Content-Type,Authorization,X-Amz-Date,X-Api-Key,X-Amz-Security-Token'"
method.response.header.Access-Control-Allow-Origin: "'*'"
responseTemplates:
application/json: |
{}
responses:
200:
description: Default response for CORS method
headers:
Access-Control-Allow-Headers:
type: "string"
Access-Control-Allow-Methods:
type: "string"
Access-Control-Allow-Origin:
type: "string"
get:
security:
- api_key: []
x-amazon-apigateway-integration:
# Further definition of the endpoint, calling Lambda etc...
Linked inside a CloudFormation template, the Swagger file is processed successfully. But when I open the endpoint in the AWS web console, the API Key Required flag is still false.
Any suggestions? Thanks.
Found the solution: the API key header has to be named x-api-key (all lowercase).
It seems the setting is only recognized during import if it is named this way.
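For illustration, a sketch of the corrected securityDefinitions block from the question, with only the header name changed:
securityDefinitions:
  api_key:
    type: apiKey
    name: x-api-key
    in: header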
To make the API key required, you need to add "x-amazon-apigateway-api-key-source" : "HEADER" to the security scheme block.
See an example:
"components" : {
"securitySchemes" : {
"api-key" : {
"type" : "apiKey",
"name" : "x-api-key",
"in" : "header",
"x-amazon-apigateway-api-key-source" : "HEADER"
}
}
}
Here is an example using proxy requests. Your JSON should look like this (OpenAPI 3):
{
"openapi": "3.0.3",
"info": {
"title": "User Portal",
"description": "API focused in User Portal.",
"version": "v1"
},
"paths": {
"users/{proxy+}": {
"options": {
"x-amazon-apigateway-integration": {
"httpMethod": "OPTIONS",
"payloadFormatVersion": "1.0",
"type": "MOCK"
}
},
"x-amazon-apigateway-any-method": {
"produces":[ "application/json"],
"parameters": [
{
"name": "proxy",
"in": "path",
"required": "true",
"type": "string"
}
],
"responses": {},
"security": [
{
"api-key": []
}
],
"x-amazon-apigateway-integration": {
"uri":"https://test.com.br/users/{proxy}",
"httpMethod":"ANY",
"type": "HTTP_PROXY"
}
}
}
},
"components" : {
"securitySchemes" : {
"api-key" : {
"type" : "apiKey",
"name" : "x-api-key",
"in" : "header",
"x-amazon-apigateway-api-key-source" : "HEADER"
}
}
}
}
In OpenAPI 2 you can add this to your YAML:
swagger: 2.0
basePath: /dev
info:
title: My API
description: Proof of concept
schemes:
- https
securityDefinitions:
api_key:
type: apiKey
name: X-Api-Key
in: header
x-amazon-apigateway-api-key-source: HEADER
If you have trouble using API integration with OpenAPI, see this article: Working with API Gateway extensions to OpenAPI.
How can I create an RDS instance with the create-environment or another subcommand of aws elasticbeanstalk? I've tried several combinations of parameters to no avail. Below is an example.
APP_NAME="randall-railsapp"
aws s3api create-bucket --bucket "$APP_NAME"
APP_VERSION="$(git describe --always)"
APP_FILE="deploy-$APP_NAME-$APP_VERSION.zip"
git archive -o "$APP_FILE" HEAD
aws s3 cp "$APP_FILE" "s3://$APP_NAME/$APP_FILE"
aws --region us-east-1 elasticbeanstalk create-application-version \
--auto-create-application \
--application-name "$APP_NAME" \
--version-label "$APP_VERSION" \
--source-bundle S3Bucket="$APP_NAME",S3Key="$APP_FILE"
aws --region us-east-1 elasticbeanstalk create-environment \
--application-name "$APP_NAME" \
--version-label "$APP_VERSION" \
--environment-name "$APP_NAME-env" \
--description "randall's rails app environment" \
--solution-stack-name "64bit Amazon Linux 2014.03 v1.0.0 running Ruby 2.1 (Puma)" \
--cname-prefix "$APP_NAME-test" \
--option-settings file://test.json
And the contents of test.json:
[
{
"OptionName": "EC2KeyName",
"Namespace": "aws:autoscaling:launchconfiguration",
"Value": "a-key-is-here"
},
{
"OptionName": "EnvironmentType",
"Namespace": "aws:elasticbeanstalk:environment",
"Value": "SingleInstance"
},
{
"OptionName": "SECRET_KEY_BASE",
"Namespace": "aws:elasticbeanstalk:application:environment",
"Value": "HAHAHAHAHAHA"
},
{
"OptionName": "DBPassword",
"Namespace": "aws:rds:dbinstance",
"Value": "hunter2"
},
{
"OptionName": "DBUser",
"Namespace": "aws:rds:dbinstance",
"Value": "random"
},
{
"OptionName": "DBEngineVersion",
"Namespace": "aws:rds:dbinstance",
"Value": "9.3"
},
{
"OptionName": "DBEngine",
"Namespace": "aws:rds:dbinstance",
"Value": "postgres"
}
]
Anyone know why this is failing? Anything I specify with an aws:rds:dbinstance namespace seems to get removed from the configuration.
Just setting the aws:rds:dbinstance options does not create an RDS database.
Currently you can create an RDS instance using one of the following techniques:
Create using AWS Console
Use eb cli
Use Resources section of ebextensions to create an RDS resource
The first two approaches are the most convenient, as they do all the heavy lifting for you, but for the third one you have to do some extra work. The third approach is what you would want to use if you are not using the console or the eb CLI.
You can create an RDS resource for your beanstalk environment using the following ebextension snippet. Create a file called 01-rds.config in the .ebextensions directory of your app source.
Resources:
AWSEBRDSDatabase:
Type: AWS::RDS::DBInstance
Properties:
AllocatedStorage: 5
DBInstanceClass: db.t2.micro
DBName: myawesomeapp
Engine: postgres
EngineVersion: 9.3
MasterUsername: myAwesomeUsername
MasterUserPassword: myCrazyPassword
This file is in YAML format, so indentation is important. You could also use JSON if you like.
These are not option settings, so you cannot pass them with --option-settings test.json. You just need to bundle this file with your app source.
Read more about what properties you can configure on your RDS database here. On this page you can also find what properties are required and what properties are optional.
Let me know if the above does not work for you or if you have any further questions.
As of December 2017 we use the following ebextensions:
$ cat .ebextensions/rds.config
Resources:
AWSEBRDSDBSecurityGroup:
Type: AWS::RDS::DBSecurityGroup
Properties:
EC2VpcId:
Fn::GetOptionSetting:
OptionName: "VpcId"
GroupDescription: RDS DB VPC Security Group
DBSecurityGroupIngress:
- EC2SecurityGroupId:
Ref: AWSEBSecurityGroup
AWSEBRDSDBSubnetGroup:
Type: AWS::RDS::DBSubnetGroup
Properties:
DBSubnetGroupDescription: RDS DB Subnet Group
SubnetIds:
Fn::Split:
- ","
- Fn::GetOptionSetting:
OptionName: DBSubnets
AWSEBRDSDatabase:
Type: AWS::RDS::DBInstance
DeletionPolicy: Delete
Properties:
PubliclyAccessible: true
MultiAZ: false
Engine: mysql
EngineVersion: 5.7
BackupRetentionPeriod: 0
DBName: test
MasterUsername: toor
MasterUserPassword: 123456789
AllocatedStorage: 10
DBInstanceClass: db.t2.micro
DBSecurityGroups:
- Ref: AWSEBRDSDBSecurityGroup
DBSubnetGroupName:
Ref: AWSEBRDSDBSubnetGroup
Outputs:
RDSId:
Description: "RDS instance identifier"
Value:
Ref: "AWSEBRDSDatabase"
RDSEndpointAddress:
Description: "RDS endpoint address"
Value:
Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Address"]
RDSEndpointPort:
Description: "RDS endpoint port"
Value:
Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Port"]
AWSEBRDSDatabaseProperties:
Description: Properties associated with the RDS database instance
Value:
Fn::Join:
- ","
- - Ref: AWSEBRDSDatabase
- Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Address"]
- Fn::GetAtt: ["AWSEBRDSDatabase", "Endpoint.Port"]
With custom options like these:
$ cat .ebextensions/custom-options.config
option_settings:
"aws:elasticbeanstalk:customoption":
DBSubnets: subnet-1234567,subnet-7654321
VpcId: vpc-1234567
The only thing is that you have to explicitly pass the RDS_* environment variables to your application yourself.
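One possible way to do that is another ebextensions file (a sketch only, not verified: it assumes the AWSEBRDSDatabase resource from rds.config above and the backtick syntax for embedding CloudFormation functions in option_settings values):
$ cat .ebextensions/rds-env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    RDS_HOSTNAME: '`{"Fn::GetAtt": ["AWSEBRDSDatabase", "Endpoint.Address"]}`'
    RDS_PORT: '`{"Fn::GetAtt": ["AWSEBRDSDatabase", "Endpoint.Port"]}`'
    RDS_DB_NAME: test
    RDS_USERNAME: toor
    RDS_PASSWORD: "123456789"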
The other answers did not work in my environment as of Sept 2015. After much trial and error, the following worked for me:
config template snippet (YAML):
aws:rds:dbinstance:
DBAllocatedStorage: '5'
DBDeletionPolicy: Delete
DBEngine: postgres
DBEngineVersion: 9.3.9
DBInstanceClass: db.t2.micro
DBPassword: PASSWORD_HERE
DBUser: USERNAME_HERE
MultiAZDatabase: false
.ebextensions/rds.config file (JSON):
{
"Parameters": {
"AWSEBDBUser": {
"NoEcho": "true",
"Description": "The name of master user for the client DB Instance.",
"Default": "ebroot",
"Type": "String",
"ConstraintDescription": "must begin with a letter and contain only alphanumeric characters"
},
"AWSEBDBPassword": {
"NoEcho": "true",
"Description": "The master password for the DB instance.",
"Type": "String",
"ConstraintDescription": "must contain only alphanumeric characters"
},
"AWSEBDBName": {
"NoEcho": "true",
"Description": "The DB Name of the RDS instance",
"Default": "ebdb",
"Type": "String",
"ConstraintDescription": "must contain only alphanumeric characters"
}
},
"Resources": {
"AWSEBAutoScalingGroup": {
"Metadata": {
"AWS::ElasticBeanstalk::Ext": {
"_ParameterTriggers": {
"_TriggerConfigDeployment": {
"CmpFn::Insert": {
"values": [
{
"CmpFn::Ref": "Parameter.AWSEBDBUser"
},
{
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
},
{
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
]
}
}
},
"_ContainerConfigFileContent": {
"plugins": {
"rds": {
"Description": "RDS Environment variables",
"env": {
"RDS_USERNAME": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBUser"
}
},
"RDS_PASSWORD": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
}
},
"RDS_DB_NAME": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
},
"RDS_HOSTNAME": {
"Fn::GetAtt": [
"AWSEBRDSDatabase",
"Endpoint.Address"
]
},
"RDS_PORT": {
"Fn::GetAtt": [
"AWSEBRDSDatabase",
"Endpoint.Port"
]
}
}
}
}
}
}
}
},
"AWSEBRDSDatabase": {
"Type": "AWS::RDS::DBInstance",
"DeletionPolicy": "Delete",
"Properties": {
"DBName": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBName"
}
},
"AllocatedStorage": "5",
"DBInstanceClass": "db.t2.micro",
"Engine": "postgres",
"DBSecurityGroups": [
{
"Ref": "AWSEBRDSDBSecurityGroup"
}
],
"MasterUsername": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBUser"
}
},
"MasterUserPassword": {
"Ref": {
"CmpFn::Ref": "Parameter.AWSEBDBPassword"
}
},
"MultiAZ": false
}
},
"AWSEBRDSDBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"DBSecurityGroupIngress": {
"EC2SecurityGroupName": {
"Ref": "AWSEBSecurityGroup"
}
},
"GroupDescription": "Enable database access to Beanstalk application"
}
}
}
}
I had the same problem, couldn't get it to work via .ebextensions, and I don't like the EB CLI tool.
EB CLI uses an undocumented API feature, and a customized version of the botocore library ('eb_botocore') to make this work. :(
So I went ahead and forked botocore, and merged in the API data file used by eb_botocore: https://github.com/boto/botocore/pull/396
Then I ran 'python setup.py install' on both my modified botocore and aws-cli (both at master), and aws-cli now accepts a --template-specification option on the 'aws elasticbeanstalk create-environment' command. Hooray!
Example usage:
aws elasticbeanstalk create-environment\
...various options...\
--option-settings file://option-settings.json \
--template-specification file://rds.us-west-2.json
where rds.us-west-2.json is:
{
"TemplateSnippets": [{
"SnippetName": "RdsExtensionEB",
"Order": 10000,
"SourceUrl":
"https://s3.amazonaws.com/elasticbeanstalk-env-resources-us-west-2/eb_snippets/rds/rds.json"
}]
}
(It appears you must select the snippet specific to your EB region.)
option-settings.json contains RDS-related settings similar to the ones listed in the question (DBEngine, DBInstanceClass, DBAllocatedStorage, DBPassword).
It works great. I hope the AWS CLI team allows us to use this feature in the official tool in the future. I'm guessing it's not a trivial change or they would have done it already, but it's a pretty major omission functionality-wise from the Elastic Beanstalk API and AWS CLI tool, so hopefully they take a crack at it.