Istio VirtualService not used in k8s Service - istio

Hi, I'm very new to Istio/K8s, and I'm trying to make an existing service of mine, test-service, use a new VirtualService that I've created.
Here are the steps I took:
kubectl config set-context --current --namespace my-namespace
I create my VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-service
  namespace: my-namespace
spec:
  hosts:
  - test-service
  http:
  - fault:
      delay:
        fixedDelay: 60s
        percentage:
          value: 100
    route:
    - destination:
        host: test-service
        port:
          number: 9100
Then I apply it to K8s:
kubectl apply -f test-service.yaml
But now when I invoke test-service using gRPC, I can reach the service, but the fault delay is not happening.
I don't know in which log I can check whether test-service is using the VirtualService that I created or not.
Here is my gRPC Service config:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-service",
    "namespace": "my-namespace",
    "selfLink": "/api/v1/namespaces/my-namespace/services/test-service",
    "uid": "8a9bc730-4125-4b52-b373-7958796b5df7",
    "resourceVersion": "317889736",
    "creationTimestamp": "2021-07-07T10:39:54Z",
    "labels": {
      "app": "test-service",
      "app.kubernetes.io/managed-by": "Helm",
      "version": "v1"
    },
    "annotations": {
      "meta.helm.sh/release-name": "test-service",
      "meta.helm.sh/release-namespace": "my-namespace"
    },
    "managedFields": [
      {
        "manager": "Go-http-client",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-07-07T10:39:54Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:meta.helm.sh/release-name": {},
              "f:meta.helm.sh/release-namespace": {}
            },
            "f:labels": {
              ".": {},
              "f:app": {},
              "f:app.kubernetes.io/managed-by": {},
              "f:version": {}
            }
          },
          "f:spec": {
            "f:ports": {
              ".": {},
              "k:{\"port\":9100,\"protocol\":\"TCP\"}": {
                ".": {},
                "f:port": {},
                "f:protocol": {},
                "f:targetPort": {}
              }
            },
            "f:selector": {
              ".": {},
              "f:app": {}
            },
            "f:sessionAffinity": {},
            "f:type": {}
          }
        }
      },
      {
        "manager": "dashboard",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2022-01-14T15:51:28Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:ports": {
              "k:{\"port\":9100,\"protocol\":\"TCP\"}": {
                "f:name": {}
              }
            }
          }
        }
      }
    ]
  },
  "spec": {
    "ports": [
      {
        "name": "test-service",
        "protocol": "TCP",
        "port": 9100,
        "targetPort": 9100
      }
    ],
    "selector": {
      "app": "test-service"
    },
    "clusterIP": "****************",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}

According to the Istio documentation, fault injection only works for HTTP traffic, not for gRPC:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection
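One more thing that may be worth checking, independent of the fault-injection limitation: Istio's protocol selection has historically keyed off the Service port's name prefix (http-, http2-, grpc-, ...). The Service above names its 9100 port "test-service", so the sidecar may classify the traffic as plain TCP, in which case the VirtualService's http routes are never consulted at all. A minimal sketch of the rename, assuming that naming convention (the name grpc-test-service is hypothetical, not from the original manifest):

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: my-namespace
spec:
  selector:
    app: test-service
  ports:
  - name: grpc-test-service  # the "grpc-" prefix is what Istio inspects; was "test-service"
    protocol: TCP
    port: 9100
    targetPort: 9100

Running istioctl proxy-config routes <pod-name> -n my-namespace can also show which routes the sidecar actually received.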

Related

API gateway - message "select an integration response." when creating stack using cloudformation

This is what I am expecting to see in API Gateway after creating the stack.
But this is what actually happens.
In the method response, it shows the message "select an integration response.", but I did add the model in the method response, and "HTTP status: Proxy" should be shown.
What's going on?
resources.json:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "HelloWorldApi": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "hello-api",
        "Description": "API used for practice",
        "FailOnWarnings": true
      }
    },
    "getBannerMethod": {
      "Type": "AWS::ApiGateway::Method",
      "DependsOn": ["HelloWorldApi"],
      "Properties": {
        "RestApiId": {
          "Ref": "HelloWorldApi"
        },
        "ResourceId": {
          "Ref": "BannerResource"
        },
        "HttpMethod": "GET",
        "MethodResponses": [
          {
            "ResponseModels": { "application/json": "Empty" },
            "ResponseParameters": {
              "method.response.header.Access-Control-Allow-Origin": "'*'"
            },
            "StatusCode": "200"
          },
          {
            "StatusCode": "500"
          }
        ],
        "AuthorizationType": "NONE",
        "Integration": {
          "Credentials": {
            "Fn::ImportValue": {
              "Fn::Sub": "${RolesStack}-ApiGatewayRoleArn"
            }
          },
          "IntegrationHttpMethod": "POST",
          "Type": "AWS_PROXY",
          "Uri": {
            "Fn::Join": ["",
              [
                "arn:aws:apigateway:",
                {
                  "Ref": "AWS::Region"
                },
                ":lambda:path/2015-03-31/functions/",
                {
                  "Fn::GetAtt": ["getBannerHandler", "Arn"]
                },
                "/invocations"
              ]
            ]
          }
        }
      }
    }
  }
}
Just add this inside Integration:
"IntegrationResponses": [{
"ResponseParameters":{
"method.response.header.Access-Control-Allow-Origin": "'*'"
},
"StatusCode" : "200"
}]
The block below
"MethodResponses":[
{
"ResponseModels" : {"application/json" : "Empty"},
"ResponseParameters":{
"method.response.header.Access-Control-Allow-Origin": "'*'"
},
"StatusCode" : "200"
},
{
"StatusCode": "500"
}
],
is set at the method response level. What you are looking at on the Lambda side is the integration response level; for that you have to set IntegrationResponses.
Full template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "HelloWorldApi": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "hello-api",
        "Description": "API used for practice",
        "FailOnWarnings": true
      }
    },
    "getBannerMethod": {
      "Type": "AWS::ApiGateway::Method",
      "DependsOn": ["HelloWorldApi"],
      "Properties": {
        "RestApiId": {
          "Ref": "HelloWorldApi"
        },
        "ResourceId": {
          "Ref": "BannerResource"
        },
        "HttpMethod": "GET",
        "MethodResponses": [
          {
            "ResponseModels": { "application/json": "Empty" },
            "ResponseParameters": {
              "method.response.header.Access-Control-Allow-Origin": "'*'"
            },
            "StatusCode": "200"
          },
          {
            "StatusCode": "500"
          }
        ],
        "AuthorizationType": "NONE",
        "Integration": {
          "Credentials": {
            "Fn::ImportValue": {
              "Fn::Sub": "${RolesStack}-ApiGatewayRoleArn"
            }
          },
          "IntegrationHttpMethod": "POST",
          "IntegrationResponses": [{
            "ResponseParameters": {
              "method.response.header.Access-Control-Allow-Origin": "'*'"
            },
            "StatusCode": "200"
          }],
          "Type": "AWS_PROXY",
          "Uri": {
            "Fn::Join": ["",
              [
                "arn:aws:apigateway:",
                {
                  "Ref": "AWS::Region"
                },
                ":lambda:path/2015-03-31/functions/",
                {
                  "Fn::GetAtt": ["getBannerHandler", "Arn"]
                },
                "/invocations"
              ]
            ]
          }
        }
      }
    }
  }
}
For those looking for the quick hack workaround to get it working from the console (like me):
I found the answer here: https://github.com/hashicorp/terraform-provider-aws/issues/11561
The only way to fix this issue is to log in to the AWS Console and do the following:
Go to "Integration request", uncheck "Use Lambda Proxy integration", and then check it again.
After performing the above steps, the method response correctly shows the mapped model.

How to add AWS IoT provisioning template in Cloudformation template / CDK

I am using a CloudFormation template to create a stack that includes an IoT fleet provisioning template, and according to the documentation the IoT provisioning template body should be of string type.
My IoT fleet provisioning template looks like this:
{
  "Parameters": {
    "SerialNumber": {
      "Type": "String"
    },
    "AWS::IoT::Certificate::Id": {
      "Type": "String"
    }
  },
  "Resources": {
    "certificate": {
      "Properties": {
        "CertificateId": {
          "Ref": "AWS::IoT::Certificate::Id"
        },
        "Status": "Active"
      },
      "Type": "AWS::IoT::Certificate"
    },
    "policy": {
      "Properties": {
        "PolicyName": "mypolicy"
      },
      "Type": "AWS::IoT::Policy"
    },
    "thing": {
      "OverrideSettings": {
        "AttributePayload": "MERGE",
        "ThingGroups": "REPLACE",
        "ThingTypeName": "REPLACE"
      },
      "Properties": {
        "AttributePayload": {
          "SerialNumber": {
            "Ref": "SerialNumber"
          }
        },
        "ThingName": {
          "Ref": "SerialNumber"
        }
      },
      "Type": "AWS::IoT::Thing"
    }
  }
}
The CloudFormation template looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Description: "Template to create iot"
Resources:
  FleetProvisioningTemplate:
    Type: AWS::IoT::ProvisioningTemplate
    Properties:
      Description: Fleet provisioning template
      Enabled: true
      ProvisioningRoleArn: "arn:aws:iam::1234567890:role/IoT-role"
      TemplateBody: String
      TemplateName: mytemplate
I tried to use the JSON string of the IoT provisioning template as the template body, but it didn't work. My question is: how can I create an IoT provisioning template using a CloudFormation template?
Update
It turned out I can add the IoT provisioning template as a literal block:
AWSTemplateFormatVersion: '2010-09-09'
Description: "Template to create iot"
Resources:
  FleetProvisioningTemplate:
    Type: AWS::IoT::ProvisioningTemplate
    Properties:
      Description: Fleet provisioning template
      Enabled: true
      ProvisioningRoleArn: "arn:aws:iam::1234567890:role/IoT-role"
      TemplateBody: |
        {
          "Parameters": {
            "SerialNumber": {
              "Type": "String"
            },
            "AWS::IoT::Certificate::Id": {
              "Type": "String"
            }
          },
          "Resources": {
            "certificate": {
              "Properties": {
                "CertificateId": {
                  "Ref": "AWS::IoT::Certificate::Id"
                },
                "Status": "Active"
              },
              "Type": "AWS::IoT::Certificate"
            },
            "policy": {
              "Properties": {
                "PolicyName": "cto-full-function-dev"
              },
              "Type": "AWS::IoT::Policy"
            },
            "thing": {
              "OverrideSettings": {
                "AttributePayload": "MERGE",
                "ThingGroups": "DO_NOTHING",
                "ThingTypeName": "REPLACE"
              },
              "Properties": {
                "AttributePayload": {},
                "ThingGroups": [],
                "ThingName": {
                  "Ref": "SerialNumber"
                },
                "ThingTypeName": "cto"
              },
              "Type": "AWS::IoT::Thing"
            }
          }
        }
      TemplateName: mytemplate
But as soon as I added the PreProvisioningHook, as the CloudFormation documentation describes, the template failed with an invalid request error.
AWSTemplateFormatVersion: '2010-09-09'
Description: "Template to create iot"
Resources:
  LambdaHook:
    Type: AWS::Lambda::Function
    ....
  FleetProvisioningTemplate:
    Type: AWS::IoT::ProvisioningTemplate
    Properties:
      Description: Fleet provisioning template
      Enabled: true
      ProvisioningRoleArn: "arn:aws:iam::1234567890:role/IoT-role"
      PreProvisioningHook:
        TargetArn: { "Fn::GetAtt": [ "LambdaHook", "Arn" ] }
        PayloadVersion: "1.0"
      TemplateBody: |
        {
          "Parameters": {
            "SerialNumber": {
              "Type": "String"
            },
            "AWS::IoT::Certificate::Id": {
              "Type": "String"
            }
          },
          "Resources": {
            "certificate": {
              "Properties": {
                "CertificateId": {
                  "Ref": "AWS::IoT::Certificate::Id"
                },
                "Status": "Active"
              },
              "Type": "AWS::IoT::Certificate"
            },
            "policy": {
              "Properties": {
                "PolicyName": "cto-full-function-dev"
              },
              "Type": "AWS::IoT::Policy"
            },
            "thing": {
              "OverrideSettings": {
                "AttributePayload": "MERGE",
                "ThingGroups": "DO_NOTHING",
                "ThingTypeName": "REPLACE"
              },
              "Properties": {
                "AttributePayload": {},
                "ThingGroups": [],
                "ThingName": {
                  "Ref": "SerialNumber"
                },
                "ThingTypeName": "cto"
              },
              "Type": "AWS::IoT::Thing"
            }
          }
        }
      TemplateName: mytemplate
I also asked the question here, but no luck. Did anyone have the same issue and fix it?
I finally figured it out, and I want to share it in case someone is having the same question.
The AWS IoT documentation doesn't mention this, but if you want to add a PreProvisioningHook to your provisioning template, you need to give IoT access to the Lambda (a.k.a. the PreProvisioningHook). So in the CloudFormation template, add something like this:
LambdaAddPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt PreProvisionHook.Arn
    Principal: iot.amazonaws.com
In the Provisioning Template resource, make sure you have this:
PreProvisioningHook:
  PayloadVersion: '2020-04-01'
  TargetArn: { "Fn::GetAtt": [ "PreProvisionHook", "Arn" ] }
In CDK you can opt to use a shorthand too:
preProvisioningHookLambda.grantInvoke(new iam.ServicePrincipal('iot.amazonaws.com')) // allow iot to invoke this function
This is the TS code I am using, for everyone's reference:
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as lambdaNodeJS from '@aws-cdk/aws-lambda-nodejs';
import * as iot from "@aws-cdk/aws-iot";

const props = {
  stage: 'development'
}
const PolicyName = "DevicePolicy";
const templateName = 'DeviceProvisioningTemplateV1';
const templateBody = {
  Parameters: {
    SerialNumber: {
      Type: "String"
    },
    ModelType: {
      Type: "String"
    },
    "AWS::IoT::Certificate::Id": {
      Type: "String"
    }
  },
  Resources: {
    certificate: {
      Properties: {
        CertificateId: {
          Ref: "AWS::IoT::Certificate::Id"
        },
        Status: "Active"
      },
      Type: "AWS::IoT::Certificate"
    },
    policy: {
      Properties: {
        PolicyName
      },
      Type: "AWS::IoT::Policy"
    },
    thing: {
      OverrideSettings: {
        AttributePayload: "MERGE",
        ThingGroups: "DO_NOTHING",
        ThingTypeName: "REPLACE"
      },
      Properties: {
        ThingGroups: [],
        ThingName: {
          Ref: "SerialNumber"
        }
      },
      Type: "AWS::IoT::Thing"
    }
  }
};

const preProvisioningHookLambda = new lambdaNodeJS.NodejsFunction(this, `provisioning-hook-lambda-${props?.stage}`, {
  entry: './src/lambda/provisioning/hook.ts',
  handler: 'handler',
  bundling: {
    externalModules: []
  },
  timeout: cdk.Duration.seconds(5)
});
preProvisioningHookLambda.grantInvoke(new iam.ServicePrincipal('iot.amazonaws.com')) // allow iot to invoke this function

// Give the AWS IoT service permission to create or update IoT resources such as things and certificates in your account when provisioning devices
const provisioningRole = new iam.Role(this, `provisioning-role-arn-${props?.stage}`, {
  assumedBy: new iam.ServicePrincipal('iot.amazonaws.com'),
});
provisioningRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSIoTThingsRegistration'));
new cdk.CfnOutput(this, 'provisioningRoleArn ', { value: provisioningRole.roleArn || 'undefined' });

const provisioningTemplate = new iot.CfnProvisioningTemplate(this, `provisioning-hook-template-${props?.stage}`, {
  provisioningRoleArn: provisioningRole.roleArn,
  templateBody: JSON.stringify(templateBody),
  enabled: true,
  templateName,
  preProvisioningHook: {
    payloadVersion: '2020-04-01',
    targetArn: preProvisioningHookLambda.functionArn,
  }
});
new cdk.CfnOutput(this, 'preProvisioningLambdaFunctionName ', { value: preProvisioningHookLambda.functionName || 'undefined' });
new cdk.CfnOutput(this, 'provisioningTemplateName ', { value: provisioningTemplate.templateName || 'undefined' });
Based on the answer by Z Wang, this is how you do it in the AWS CDK:
myLambda.addPermission('InvokePermission', {
  principal: new ServicePrincipal('iot.amazonaws.com'),
  action: 'lambda:InvokeFunction',
});

Why is AWS CloudFormation throwing "Encountered unsupported property InstanceGroups"?

When I deploy the AWS CloudFormation script below, I get the following error: "Encountered unsupported property InstanceGroups".
I have used InstanceGroups in the past without any issues. Here is an example of others using it: https://noise.getoto.net/tag/amazon-emr/
I am using EMR 5.17.0, which I have used for setups before.
{
  "Description": "Spark ETL EMR CloudFormation",
  "Resources": {
    "EMRCluster": {
      "Type": "AWS::EMR::Cluster",
      "Properties": {
        "Applications": [
          {
            "Name": "Hadoop"
          },
          {
            "Name": "Spark"
          },
          {
            "Name": "Ganglia"
          },
          {
            "Name": "Zeppelin"
          }
        ],
        "AutoScalingRole": "EMR_AutoScaling_DefaultRole",
        "BootstrapActions": [
          {
            "Path": "s3://somepath/scripts/install_pip36_dependencies.sh",
            "Args": [
              "relay==0.0.1"
            ],
            "Name": "install_pip36_dependencies"
          }
        ],
        "Configurations": [
          {
            "Classification": "yarn-site",
            "Properties": {
              "yarn.scheduler.fair.preemption": "False",
              "yarn.resourcemanager.am.max-attempts": "1"
            },
            "Configurations": []
          },
          {
            "Classification": "core-site",
            "Properties": {
              "fs.s3.canned.acl": "BucketOwnerFullControl"
            },
            "Configurations": []
          }
        ],
        "EbsRootVolumeSize": 10,
        "InstanceGroups": [
          {
            "Name": "Master",
            "Market": "ON_DEMAND",
            "InstanceRole": "MASTER",
            "InstanceType": "m5.2xlarge",
            "InstanceCount": 1,
            "EbsConfiguration": {
              "EbsBlockDeviceConfigs": [
                {
                  "VolumeSpecification": {
                    "SizeInGB": 100,
                    "VolumeType": "64"
                  },
                  "VolumesPerInstance": 1
                }
              ],
              "EbsOptimized": "True"
            }
          },
          {
            "Name": "Core",
            "Market": "ON_DEMAND",
            "InstanceGroupType": "CORE",
            "InstanceType": "m5.2xlarge",
            "InstanceCount": 5,
            "EbsConfiguration": {
              "EbsBlockDeviceConfigs": [
                {
                  "VolumeSpecification": {
                    "SizeInGB": 100,
                    "VolumeType": "gp2"
                  },
                  "VolumesPerInstance": 1
                }
              ],
              "EbsOptimized": "True"
            }
          },
          {
            "Name": "Task - 3",
            "Market": "ON_DEMAND",
            "InstanceGroupType": "TASK",
            "InstanceType": "m5.2xlarge",
            "InstanceCount": 2,
            "EbsConfiguration": {
              "EbsBlockDeviceConfigs": [
                {
                  "VolumeSpecification": {
                    "SizeInGB": 32,
                    "VolumeType": "gp2"
                  },
                  "VolumesPerInstance": 1
                }
              ],
              "EbsOptimized": "True"
            }
          }
        ],
        "LogUri": "s3://somepath/emr-logs/",
        "Name": "EMR CF",
        "ReleaseLabel": "emr-5.17.0",
        "ServiceRole": "EMR_DefaultRole",
        "VisibleToAllUsers": "True"
      }
    }
  }
}
When the CF script is loaded, it should create an AWS EMR cluster.
AWS recommends that you set MasterInstanceGroup and CoreInstanceGroup under Instances.
Here is an example of the Instances property of an EMR cluster with Hadoop, HBase, Spark, Ganglia and Zookeeper:
Instances:
  Ec2KeyName: !Ref KeyName
  Ec2SubnetId: !ImportValue MySubnetPrivateA
  EmrManagedMasterSecurityGroup: !ImportValue EmrMasterSgId
  AdditionalMasterSecurityGroups:
    - !ImportValue EmrMasterAdditionalSgId
  EmrManagedSlaveSecurityGroup: !ImportValue EmrSlaveSgId
  AdditionalSlaveSecurityGroups:
    - !ImportValue EmrSlaveAdditionalSgId
  ServiceAccessSecurityGroup: !ImportValue EmrServiceSgId
  MasterInstanceGroup:
    InstanceCount: 1
    InstanceType: !Ref MasterInstanceType
    Market: ON_DEMAND
    Name: Master
  CoreInstanceGroup:
    InstanceCount: !Ref NumberOfCoreInstances
    InstanceType: !Ref CoreInstanceType
    Market: ON_DEMAND
    Name: Core
  TerminationProtected: false
VisibleToAllUsers: true
JobFlowRole: !Ref EMRClusterinstanceProfile
ReleaseLabel: !Ref ReleaseLabel
LogUri: !Ref LogUri
Name: !Ref EMRClusterName
AutoScalingRole: EMR_AutoScaling_DefaultRole
ServiceRole: !Ref EMRClusterServiceRole
Tags:
  - Key: "cluster_name"
    Value: "master.emr.my.com"
You can see the complete AWS template here.
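Note that the "Task - 3" group from the question has no equivalent slot under Instances, which only carries the master and core groups. A minimal sketch of one option, assuming the separate AWS::EMR::InstanceGroupConfig resource type that CloudFormation provides for task groups (the logical name TaskInstanceGroup is hypothetical):

TaskInstanceGroup:
  Type: AWS::EMR::InstanceGroupConfig
  Properties:
    JobFlowId: !Ref EMRCluster   # the cluster resource from the question
    InstanceRole: TASK
    InstanceCount: 2
    InstanceType: m5.2xlarge
    Market: ON_DEMAND
    Name: "Task - 3"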

Force PersistentVolumeClaim and Deployment to land in same availability zone

I have a Kubernetes cluster in AWS with EC2 worker nodes in the following AZs, along with corresponding PersistentVolumes in each AZ:
us-west-2a
us-west-2b
us-west-2c
us-west-2d
My problem is that I want to create a Deployment with a volume mount that references a PersistentVolumeClaim and guarantee they land in the same AZ; right now it is luck whether both the Deployment and the PersistentVolumeClaim end up in the same AZ, and if they don't, the Deployment fails to find the volume mount.
I create 4 PersistentVolumes by manually creating EBS volumes in each AZ and copying the IDs into the specs.
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv-2"
  },
  "spec": {
    "capacity": {
      "storage": "1Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "awsElasticBlockStore": {
      "volumeID": "vol-053f78f0c16e5f20e",
      "fsType": "ext4"
    }
  }
}
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "mydata",
    "namespace": "staging"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Mi"
      }
    }
  }
}
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "myapp",
    "namespace": "default",
    "labels": {
      "app": "myapp"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "myapp"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "myapp"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "hello",
            "image": "centos:7",
            "volumeMounts": [
              {
                "name": "mydata",
                "mountPath": "/etc/data/"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "mydata",
            "persistentVolumeClaim": {
              "claimName": "mydata"
            }
          }
        ]
      }
    }
  }
}
You could try setting annotations for the region and availability zone, as mentioned here and here.
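As a sketch of that idea: label each manually created PersistentVolume with the zone its EBS volume lives in, so the scheduler's volume-zone check keeps pods that claim it in the same AZ. This assumes the legacy failure-domain.beta.kubernetes.io labels in use at the time (newer clusters use topology.kubernetes.io/zone), and the us-west-2b placement below is hypothetical; it must match the volume's actual AZ:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
  labels:
    failure-domain.beta.kubernetes.io/region: us-west-2
    failure-domain.beta.kubernetes.io/zone: us-west-2b   # must match the AZ of vol-053f78f0c16e5f20e
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-053f78f0c16e5f20e
    fsType: ext4

With the labels in place, a pod that mounts the claim bound to this volume should only be scheduled onto nodes in the same zone, which keeps the Deployment and its volume together.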

Kubernetes PetSet - PersistentVolumeClaim "ProvisioningFailed" on AWS

I'm in the process of migrating Elasticsearch to a Kubernetes PetSet, but there's a problem when provisioning the PersistentVolumeClaim.
I have two volumes in AWS, "vol-84094e30" and "vol-87e59f33". Both are 5 GiB, 100/3000 IOPS, in the us-west-2a availability zone, and of volume type gp2.
I have a StorageClass definition for the volume type:
{
  "kind": "StorageClass",
  "apiVersion": "apps/v1alpha1",
  "metadata": {
    "name": "fast"
  },
  "provisioner": "kubernetes.io/aws-ebs",
  "parameters": {
    "type": "gp2",
    "zone": "us-west-2a"
  }
}
… two PersistentVolume definitions:
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "es-persistent-vol",
    "labels": {
      "type": "amazonEBS"
    }
  },
  "spec": {
    "capacity": {
      "storage": "5Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "awsElasticBlockStore": {
      "volumeID": "vol-87e59f33",
      "fsType": "ext4"
    }
  }
}
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "es-persistent-vol2",
    "labels": {
      "type": "amazonEBS"
    }
  },
  "spec": {
    "capacity": {
      "storage": "5Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "awsElasticBlockStore": {
      "volumeID": "vol-84094e30",
      "fsType": "ext4"
    }
  }
}
… an Elastic Search service:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "elasticsearch-logging",
    "namespace": "kube-system",
    "labels": {
      "k8s-app": "elasticsearch-logging",
      "kubernetes.io/name": "Elasticsearch"
    }
  },
  "spec": {
    "ports": [
      {
        "port": 9200,
        "protocol": "TCP",
        "targetPort": "db"
      }
    ],
    "selector": {
      "k8s-app": "elasticsearch-logging"
    }
  }
}
… and an Elastic Search PetSet:
{
  "apiVersion": "apps/v1alpha1",
  "kind": "PetSet",
  "metadata": {
    "name": "elasticsearch-logging-v1",
    "namespace": "kube-system",
    "labels": {
      "k8s-app": "elasticsearch-logging",
      "version": "v1",
      "kubernetes.io/cluster-service": "true"
    }
  },
  "spec": {
    "serviceName": "elasticsearch-logging-v1",
    "replicas": 2,
    "template": {
      "metadata": {
        "annotations": {
          "pod.beta.kubernetes.io/initialized": "true"
        },
        "labels": {
          "app": "elasticsearch-data"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "gcr.io/google_containers/elasticsearch:v2.4.1",
            "name": "elasticsearch-logging",
            "resources": {
              "limits": {
                "cpu": "1000m"
              },
              "requests": {
                "cpu": "100m"
              }
            },
            "ports": [
              {
                "containerPort": 9200,
                "name": "db",
                "protocol": "TCP"
              },
              {
                "containerPort": 9300,
                "name": "transport",
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "name": "es-persistent-storage",
                "mountPath": "/data"
              }
            ]
          }
        ]
      }
    },
    "volumeClaimTemplates": [
      {
        "metadata": {
          "name": "es-persistent-storage",
          "annotations": {
            "volume.beta.kubernetes.io/storage-class": "fast"
          },
          "labels": {
            "type": "amazonEBS"
          }
        },
        "spec": {
          "accessModes": [
            "ReadWriteOnce"
          ],
          "resources": {
            "requests": {
              "storage": "5Gi"
            }
          }
        }
      }
    ]
  }
}
When I create all of these, the PersistentVolumeClaim (defined in the PetSet's volumeClaimTemplates) is unable to provision the volumes (ProvisioningFailed - no volume plugin matched). I think it may be AWS specific; I've looked on various forums for (what I thought were) the same issue, but they were not applicable to AWS.
Any help is much appreciated.
Here are the kubectl describe outputs:
$ kubectl describe pv
Name: es-persistent-vol
Labels: type=amazonEBS
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-87e59f33
FSType: ext4
Partition: 0
ReadOnly: false
No events.
Name: es-persistent-vol2
Labels: type=amazonEBS
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-84094e30
FSType: ext4
Partition: 0
ReadOnly: false
No events.
$ kubectl describe pvc
Name: es-persistent-storage-elasticsearch-logging-v1-0
Namespace: kube-system
Status: Pending
Volume:
Labels: app=elasticsearch-data
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
18s 1s 3 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
Name: es-persistent-storage-elasticsearch-logging-v1-1
Namespace: kube-system
Status: Pending
Volume:
Labels: app=elasticsearch-data
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
19s 2s 3 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
$ kubectl describe petset
Name: elasticsearch-logging-v1
Namespace: kube-system
Image(s): gcr.io/google_containers/elasticsearch:v2.4.1
Selector: app=elasticsearch-data
Labels: k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,version=v1
Replicas: 2 current / 2 desired
Annotations: <none>
CreationTimestamp: Wed, 02 Nov 2016 13:06:23 +0000
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {petset } Normal SuccessfulCreate pvc: es-persistent-storage-elasticsearch-logging-v1-0
1m 1m 1 {petset } Normal SuccessfulCreate pet: elasticsearch-logging-v1-0
1m 1m 1 {petset } Normal SuccessfulCreate pvc: es-persistent-storage-elasticsearch-logging-v1-1
$ kubectl describe service el
Name: elasticsearch-logging
Namespace: kube-system
Labels: k8s-app=elasticsearch-logging
kubernetes.io/name=Elasticsearch
Selector: k8s-app=elasticsearch-logging
Type: ClusterIP
IP: 192.168.157.15
Port: <unset> 9200/TCP
Endpoints: <none>
Session Affinity: None
No events.