k8s how to update specific value in secret - amazon-web-services

I need to update a secret with a specific value (the secret contains additional data). My question is how I can update just that one value and not all of the secret's data; I don't want to override the existing data. In other words, if the secret has additional values I don't want to override them, only the entry foo.
updSec := v1.Secret{
    TypeMeta: metav1.TypeMeta{},
    ObjectMeta: metav1.ObjectMeta{
        Name:      "d-values",
        Namespace: "terv",
    },
    Immutable:  nil,
    Data:       nil,
    StringData: nil,
    Type:       "Opaque",
}
updSec.Data["foo"] = newVal
if err := r.Client.Update(ctx, &updSec); err != nil {
    return ctrl.Result{}, err
}
The issue is that the secret already exists, and here I'm creating a new object, so I'm not sure how to do it right. For the secret called d-values I just need to update the key foo to newVal.
Update
When trying the code in the answer, after I run
patch, err := yaml.Marshal(updSec)
the data looks like the following
and the Patch call fails with an error. Any idea if that is related?
If I try with c.Client.Update it works, but not with Patch; yet Patch is the right way, since if the secret already has other entries they should be kept.

I don't think you can update a single key using the Update method, but you can certainly do that using Patch instead. Here's an example that uses a StrategicMergePatch; it will replace the key val2 in a secret with the value newval:
package main

import (
    "context"
    "encoding/json"
    "flag"
    "fmt"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    var kubeconfig *string
    var namespace *string
    var secretname *string

    namespace = flag.String("namespace", "", "namespace of secret")
    secretname = flag.String("name", "", "name of secret")
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    if *namespace == "" {
        panic(fmt.Errorf("you must specify a namespace"))
    }
    if *secretname == "" {
        panic(fmt.Errorf("you must specify a secret name"))
    }

    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    secretClient := clientset.CoreV1().Secrets(*namespace)
    ctx := context.TODO()

    updSec := v1.Secret{
        Data: map[string][]byte{
            "val2": []byte("newval"),
        },
    }
    payloadBytes, err := json.Marshal(updSec)
    if err != nil {
        panic(err)
    }

    if _, err = secretClient.Patch(ctx, *secretname,
        types.StrategicMergePatchType, payloadBytes, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Fetch updated secret
    sec, err := secretClient.Get(ctx, *secretname, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    secJson, err := json.MarshalIndent(sec, "", " ")
    if err != nil {
        panic(err)
    }
    fmt.Print(string(secJson))
}
For example, if I create a secret like this:
kubectl create secret generic \
--from-literal val1=key1 \
--from-literal val2=key2 example
And then run the above code like this:
go run main.go -namespace default -name example
The code will output the updated secret. Looking at the data section, we see:
"data": {
"val1": "a2V5MQ==",
"val2": "bmV3dmFs"
},
And if we decode val2 we see:
$ kubectl get secret example -o json | jq '.data.val2|@base64d'
"newval"
Using the Operator SDK
If you're working with the Operator SDK, you can use Update if you're first reading the existing value, like this:
// Read the existing secret
secret := &corev1.Secret{}
if err := r.Get(ctx, req.NamespacedName, secret); err != nil {
    panic(err)
}
// Check if it needs to be modified (i.e. the current value differs from the
// desired new value)
val, ok := secret.Data["val2"]
// If yes, update the secret with a new value and then write
// the entire object back with Update
if !ok || !bytes.Equal(val, []byte("newval")) {
    ctxlog.Info("needs update", "secret", secret)
    secret.Data["val2"] = []byte("newval")
    if err := r.Update(ctx, secret); err != nil {
        panic(err)
    }
}
You can use the Patch method if you only want to submit a partial update:
if !ok || !bytes.Equal(val, []byte("val2")) {
ctxlog.Info("needs update", "secret", secret)
newVal := corev1.Secret{
Data: map[string][]byte{
"val2": []byte("newval"),
},
}
patch, err := json.Marshal(newVal)
if err != nil {
panic(err)
}
if err := r.Client.Patch(ctx, secret, client.RawPatch(types.StrategicMergePatchType, patch)); err != nil {
panic(err)
}
}
This is pretty much identical to the earlier example. There are examples of using the client.Patch method in the docs, but I'll be honest, I don't find the example very clear.
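If the RawPatch construction feels opaque, a hedged alternative (my addition, not part of the original answer) is controller-runtime's client.MergeFrom helper, which builds the patch from a copy of the object you already read. The sketch below reuses the secret, ctx and r from the Operator SDK examples above:
// Minimal sketch, assuming the secret was already fetched with r.Get as above.
// client.MergeFrom computes a merge patch containing only the fields that
// changed since the snapshot, so other keys in the secret stay untouched.
base := secret.DeepCopy()              // snapshot of the object as read
secret.Data["val2"] = []byte("newval") // mutate only the key we care about
if err := r.Client.Patch(ctx, secret, client.MergeFrom(base)); err != nil {
    panic(err)
}
Like the RawPatch version, this sends only the changed data to the API server.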

Related

Google Cloud Platform presignURL using Go

Trying to upload a picture to Google Cloud Platform, I always get the same error: "<?xml version='1.0' encoding='UTF-8'?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message><StringToSign>GOOG4-RSA-SHA256 20.................951Z".
I added a service account to the bucket with the roles Storage Admin and Storage Object Admin, as you can see in the pic.
I generated a key for the service account and downloaded it as a .json file, then I generate a presigned URL using this code:
// key is the downloaded .json key file from the GCP service-account
// the return string is the presignedURL
func getPresignedURL(path, key string) (string, error) {
sakeyFile := filepath.Join(path, key)
saKey, err := ioutil.ReadFile(sakeyFile)
if err != nil {
log.Fatalln(err)
}
cfg, err := google.JWTConfigFromJSON(saKey)
if err != nil {
log.Fatalln(err)
}
bucket := "mybucket"
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
return "", fmt.Errorf("storage.NewClient: %v", err)
}
defer client.Close()
opts := &storage.SignedURLOptions{
Scheme: storage.SigningSchemeV4,
Method: "PUT",
Headers: []string{
"Content-Type:image/jpeg",
},
Expires: time.Now().Add(15 * time.Minute),
GoogleAccessID: cfg.Email,
PrivateKey: cfg.PrivateKey,
}
u, err := client.Bucket(bucket).SignedURL("mypic.jpeg", opts)
if err != nil {
return "", fmt.Errorf("Bucket(%q).SignedURL: %v", bucket, err)
}
return u, nil
}
The presigned URL looks good, something like this:
https://storage.googleapis.com/djedjepicbucket/mypic.jpeg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=djedje%40picstorage-363707.iam.gserviceaccount.com%2F20220926%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20220926T081951Z&X-Goog-Expires=899&X-Goog-Signature=3f330715d7a38ea08f99134a16f464fb............5ad800a7665dfb1440034ab1f5ab045252336&X-Goog-SignedHeaders=content-type%3Bhost
Then I read a file (picture) from disk and upload it using the presigned URL:
// the uri is the presignedURL
func newfileUploadRequest(uri string, params map[string]string, paramName, path string) (*http.Request, error) {
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer file.Close()
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
if err != nil {
return nil, err
}
_, err = io.Copy(body, file)
for key, val := range params {
_ = writer.WriteField(key, val)
}
err = writer.Close()
if err != nil {
return nil, err
}
req, err := http.NewRequest("PUT", uri, body)
req.Header.Set("Content-Type", "image/jpeg")
return req, err
}
Then I execute the request:
// the previous func
request, err := newfileUploadRequest(purl, extraParams, "picture", filepath.Join(path, "download.jpeg"))
if err != nil {
    log.Fatal(err)
}
client := &http.Client{}
resp, err := client.Do(request)
if err != nil {
    log.Fatal(err)
} else {
    body := &bytes.Buffer{}
    _, err := body.ReadFrom(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    resp.Body.Close()
    fmt.Println(resp.StatusCode)
    fmt.Println(resp.Header)
    fmt.Println(body)
}
Unfortunately, I always get the same error back:
403
map[Alt-Svc:[h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"] Content-Length:[884] Content-Type:[application/xml; charset=UTF-8] Date:[Mon, 26 Sep 2022 08:22:19 GMT] Server:[UploadServer] X-Guploader-Uploadid:[ADPyc......................ECL_4W]]
<?xml version='1.0' encoding='UTF-8'?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message><StringToSign>GOOG4-RSA-SHA256
20220926T081951Z
20220926/auto/storage/goog4_request
c5f36838af4......................8ffb56329c1eb27f</StringToSign><CanonicalRequest>PUT
/djedjepicbucket/mypic.jpeg
X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=djedje%40picstorage-363707.iam.gserviceaccount.com%2F20220926%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20220926T081951Z&X-Goog-Expires=899&X-Goog-SignedHeaders=content-type%3Bhost
content-type:multipart/form-data; boundary=5be13cc........................dd6aef6823
host:storage.googleapis.com
content-type;host
UNSIGNED-PAYLOAD</CanonicalRequest></Error>
Actually I have tried many other ways as well, but I basically always get this (more or less) same error back. Does someone have an idea what I am forgetting? (I have been on this for 2 days now...) Thank you.
=============================
I edited the question; that code now works perfectly.
Found the answer: in both the getPresignedURL() and newfileUploadRequest() funcs, the header must be set to "Content-Type:application/octet-stream" (or "Content-Type:image/jpeg", for instance, if the picture needs to be displayed via its URL); then the pic is uploaded without issue.
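In other words, the Content-Type signed into the URL (SignedURLOptions.Headers in getPresignedURL) and the Content-Type actually sent on the PUT must be the same string. As a rough sketch (my own simplification, not the poster's final code), a version of newfileUploadRequest that sends the file as a plain request body instead of going through the multipart writer:
// Sketch only: uri is the presigned URL and path points at the picture.
// The Content-Type set here must match the one listed in
// SignedURLOptions.Headers when the URL was signed ("image/jpeg" here).
func newfileUploadRequest(uri, path string) (*http.Request, error) {
    data, err := os.ReadFile(path) // read the picture into memory
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequest("PUT", uri, bytes.NewReader(data))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "image/jpeg")
    return req, nil
}
The rest of the calling code from the question can stay the same; only the body construction and the header change.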

When using HeadObject in AWS SDK v2 for Go, why does it give undefined?

Thanks in advance :). I'm using the following code to get metadata from an S3 object after listing all the objects in a bucket, but I don't know why it gives the error undefined: s3.HeadObject when running go run listObjects.go -bucket xxxx -prefix xxxx
I tried two solutions: passing the client created from the config, and from the context as it appears in this link [1]. But both gave the same error. Can you give me any clue?
package main

import (
    "context"
    "flag"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

var (
    bucketName      string
    objectPrefix    string
    objectDelimiter string
    maxKeys         int
)

func init() {
    flag.StringVar(&bucketName, "bucket", "", "The `name` of the S3 bucket to list objects from.")
    flag.StringVar(&objectPrefix, "prefix", "", "The optional `object prefix` of the S3 Object keys to list.")
    flag.StringVar(&objectDelimiter, "delimiter", "",
        "The optional `object key delimiter` used by S3 List objects to group object keys.")
    flag.IntVar(&maxKeys, "max-keys", 0,
        "The maximum number of `keys per page` to retrieve at once.")
}

// Lists all objects in a bucket using pagination
func main() {
    flag.Parse()
    if len(bucketName) == 0 {
        flag.PrintDefaults()
        log.Fatalf("invalid parameters, bucket name required")
    }

    // Load the SDK's configuration from environment and shared config, and
    // create the client with this.
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatalf("failed to load SDK configuration, %v", err)
    }

    client := s3.NewFromConfig(cfg)

    // Set the parameters based on the CLI flag inputs.
    params := &s3.ListObjectsV2Input{
        Bucket: &bucketName,
    }
    if len(objectPrefix) != 0 {
        params.Prefix = &objectPrefix
    }
    if len(objectDelimiter) != 0 {
        params.Delimiter = &objectDelimiter
    }

    // Create the Paginator for the ListObjectsV2 operation.
    p := s3.NewListObjectsV2Paginator(client, params, func(o *s3.ListObjectsV2PaginatorOptions) {
        if v := int32(maxKeys); v != 0 {
            o.Limit = v
        }
    })

    // Iterate through the S3 object pages, printing each object returned.
    var i int
    log.Println("Objects:")
    for p.HasMorePages() {
        i++
        // Next Page takes a new context for each page retrieval. This is where
        // you could add timeouts or deadlines.
        page, err := p.NextPage(context.TODO())
        if err != nil {
            log.Fatalf("failed to get page %v, %v", i, err)
        }
        // Log the objects found
        // Headobject function is called
        for _, obj := range page.Contents {
            input := &s3.HeadObjectInput{
                Bucket: &bucketName,
                Key:    obj.Key,
            }
            result, err := &s3.HeadObject(client, input)
            if err != nil {
                panic(err)
            }
            fmt.Println("Object:", *obj.Key)
        }
    }
}
./listObjects.go:86:20: undefined: s3.HeadObject
Doing the HeadObject call as an auxiliary function works. In aws-sdk-go-v2, HeadObject is a method on the *s3.Client rather than a package-level function, which is why s3.HeadObject is undefined:
package main

import (
    "context"
    "flag"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

var (
    bucketName      string
    objectPrefix    string
    objectDelimiter string
    maxKeys         int
)

func init() {
    flag.StringVar(&bucketName, "bucket", "", "The `name` of the S3 bucket to list objects from.")
    flag.StringVar(&objectPrefix, "prefix", "", "The optional `object prefix` of the S3 Object keys to list.")
    flag.StringVar(&objectDelimiter, "delimiter", "",
        "The optional `object key delimiter` used by S3 List objects to group object keys.")
    flag.IntVar(&maxKeys, "max-keys", 0,
        "The maximum number of `keys per page` to retrieve at once.")
}

// Lists all objects in a bucket using pagination
func main() {
    flag.Parse()
    if len(bucketName) == 0 {
        flag.PrintDefaults()
        log.Fatalf("invalid parameters, bucket name required")
    }

    // Load the SDK's configuration from environment and shared config, and
    // create the client with this.
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatalf("failed to load SDK configuration, %v", err)
    }

    client := s3.NewFromConfig(cfg)

    // Set the parameters based on the CLI flag inputs.
    params := &s3.ListObjectsV2Input{
        Bucket: &bucketName,
    }
    if len(objectPrefix) != 0 {
        params.Prefix = &objectPrefix
    }
    if len(objectDelimiter) != 0 {
        params.Delimiter = &objectDelimiter
    }

    // Create the Paginator for the ListObjectsV2 operation.
    p := s3.NewListObjectsV2Paginator(client, params, func(o *s3.ListObjectsV2PaginatorOptions) {
        if v := int32(maxKeys); v != 0 {
            o.Limit = v
        }
    })

    // Iterate through the S3 object pages, printing each object returned.
    var i int
    log.Println("Objects:")
    for p.HasMorePages() {
        i++
        // Next Page takes a new context for each page retrieval. This is where
        // you could add timeouts or deadlines.
        page, err := p.NextPage(context.TODO())
        if err != nil {
            log.Fatalf("failed to get page %v, %v", i, err)
        }
        // Log the objects found
        // Headobject function is called
        for _, obj := range page.Contents {
            fmt.Println("Object:", *obj.Key)
            OpHeadObject(client, bucketName, *obj.Key)
        }
    }
}

func OpHeadObject(sess *s3.Client, bucketName, objectName string) {
    input := &s3.HeadObjectInput{
        Bucket: &bucketName,
        Key:    &objectName,
    }
    resp, err := sess.HeadObject(context.TODO(), input)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.StorageClass) // that you want.
}

How do I get pagination working with exclusiveStartKey for dynamodb aws-sdk-go-v2?

I'm trying to create a pagination endpoint for a DynamoDB table I have, but I've tried everything to get the exclusiveStartKey to be the correct type and nothing seems to work.
example code:
func GetPaginator(tableName string, limit int32, lastEvaluatedKey string) (*dynamodb.ScanPaginator, error) {
    svc, err := GetClient()
    if err != nil {
        logrus.Error(err)
        return nil, err
    }
    input := &dynamodb.ScanInput{
        TableName: aws.String(tableName),
        Limit:     aws.Int32(limit),
    }
    if lastEvaluatedKey != "" {
        input.ExclusiveStartKey = map[string]types.AttributeValue{
            "id": &types.AttributeValueMemberS{
                Value: lastEvaluatedKey,
            },
        }
    }
    paginator := dynamodb.NewScanPaginator(svc, input)
    return paginator, nil
}
Edit:
Okay, so I'm creating an API that requires pagination. The API needs to have a query parameter where the lastEvaluatedId can be defined. I can then use the lastEvaluatedId to pass as the ExclusiveStartKey on the ScanInput. However, when I do this I still receive the same item from the database. I've created a test.go file and will post the code below:
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

type PaginateID struct {
    ID string `dynamodbav:"id" json:"id"`
}

func main() {
    lastKey := PaginateID{ID: "ae82a99d-486e-11ec-a7a7-0242ac110002"}
    key, err := attributevalue.MarshalMap(lastKey)
    if err != nil {
        fmt.Println(err)
        return
    }

    cfg, err := config.LoadDefaultConfig(context.TODO(), func(o *config.LoadOptions) error {
        o.Region = os.Getenv("REGION")
        return nil
    })
    if err != nil {
        fmt.Println(err)
        return
    }

    svc := dynamodb.NewFromConfig(cfg, func(o *dynamodb.Options) {
        o.EndpointResolver = dynamodb.EndpointResolverFromURL("http://localhost:8000")
    })

    input := &dynamodb.ScanInput{
        TableName:         aws.String("TABLE_NAME"),
        Limit:             aws.Int32(1),
        ExclusiveStartKey: key,
    }

    paginator := dynamodb.NewScanPaginator(svc, input)
    if paginator.HasMorePages() {
        data, err := paginator.NextPage(context.TODO())
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(data.Items[0]["id"])
        fmt.Println(data.LastEvaluatedKey["id"])
    }
}
When I run this test code, I get this output:
&{ae82a99d-486e-11ec-a7a7-0242ac110002 {}}
&{ae82a99d-486e-11ec-a7a7-0242ac110002 {}}
So the item that is returned has the same id that I am passing to ScanInput.ExclusiveStartKey, which means the scan is not starting from the ExclusiveStartKey; it starts from the beginning every time.
The aws-sdk-go-v2 DynamoDB query and scan paginator constructors have a bug (see my github issue, includes the fix). They do not respect the ExclusiveStartKey param.
As an interim fix, I copied the paginator type locally and added one line into the constructor: nextToken: params.ExclusiveStartKey.
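Until a fixed SDK version is available, another workaround (my own sketch, not the SDK paginator) is to loop over Scan directly and feed each page's LastEvaluatedKey back in as ExclusiveStartKey. Here svc is a *dynamodb.Client as in the question, and the table name and page size are placeholders:
// scanAll pages through a table by hand, honouring ExclusiveStartKey.
// types is github.com/aws/aws-sdk-go-v2/service/dynamodb/types.
func scanAll(ctx context.Context, svc *dynamodb.Client, tableName string) ([]map[string]types.AttributeValue, error) {
    var items []map[string]types.AttributeValue
    input := &dynamodb.ScanInput{
        TableName: aws.String(tableName),
        Limit:     aws.Int32(25), // page size, placeholder value
    }
    for {
        out, err := svc.Scan(ctx, input)
        if err != nil {
            return nil, err
        }
        items = append(items, out.Items...)
        if len(out.LastEvaluatedKey) == 0 {
            break // an empty LastEvaluatedKey means there are no more pages
        }
        // Resume the next Scan exactly where this page stopped.
        input.ExclusiveStartKey = out.LastEvaluatedKey
    }
    return items, nil
}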
So basically what you need to do is get the LastEvaluatedKey and pass it back as ExclusiveStartKey.
You cannot use the scan paginator's internal attributes because they are not exported; instead, I suggest you use the page returned by calling NextPage.
The following snippet shows an example:
func GetPaginator(ctx context.Context, tableName string, limit int32, lastEvaluatedKey map[string]types.AttributeValue) (*dynamodb.ScanOutput, error) {
    svc, err := GetClient()
    if err != nil {
        logrus.Error(err)
        return nil, err
    }
    input := &dynamodb.ScanInput{
        TableName: aws.String(tableName),
        Limit:     aws.Int32(limit),
    }
    if len(lastEvaluatedKey) > 0 {
        input.ExclusiveStartKey = lastEvaluatedKey
    }
    paginator := dynamodb.NewScanPaginator(svc, input)
    // NextPage takes a context and returns the next page along with any error.
    return paginator.NextPage(ctx)
}
Keep in mind that the result of paginator.NextPage(ctx) could be nil in case there are no more pages, or you can check HasMorePages() first.
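For completeness, a usage sketch (my own illustration) of the function above: the caller keeps the LastEvaluatedKey from each returned page and passes it back in on the next call. The table name and limit are placeholders:
// First page: no start key yet, so pass nil.
page, err := GetPaginator(ctx, "TABLE_NAME", 25, nil)
if err != nil {
    logrus.Error(err)
    return
}
for _, item := range page.Items {
    fmt.Println(item["id"]) // each item is a map[string]types.AttributeValue
}
// LastEvaluatedKey is empty once the whole table has been scanned.
if len(page.LastEvaluatedKey) > 0 {
    nextPage, err := GetPaginator(ctx, "TABLE_NAME", 25, page.LastEvaluatedKey)
    if err != nil {
        logrus.Error(err)
        return
    }
    fmt.Println(len(nextPage.Items))
}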

How to add a file to a temporary directory?

I have created a temp dir using tmpDir, err := ioutil.TempDir(dir, "OAS"), and I used this path to add a swagger file extracted from AWS to this temp dir: path = tmpDir + "/" + apiName + ".json", but it doesn't work. I also tried path = <path>/OAS/apiName.json; it didn't work either. So my question is: if I want to add a file to this tempDir, how do I define its path?
cmd_3, err := exec.Command("aws", "apigateway", "get-export", "--rest-api-id", api_id, "--stage-name", stageName, "--export-type", "swagger", path).Output()

pwd, err := os.Getwd()
if err != nil {
    return
}
dir = pwd
} //gets the path where the program is executed from

apiName := flagApiNameToGet
stageName := flagStageName
path = tmpDir + "/" + apiName + ".json"

// Searching for API ID:
for _, item := range apis.Items {
    if item.Name == apiName {
        fmt.Printf("API ID found: %+v ", item.Id)
        api_id := item.Id
        cmd_3, err := exec.Command("aws", "apigateway", "get-export", "--rest-api-id", api_id, "--stage-name", stageName, "--export-type", "swagger", path).Output()
        if err != nil {
            return err
        }
        output := string(cmd_3[:])
        fmt.Println(output)
        found = true
        break
    }
}
func execute() {
    tmpDir, err := ioutil.TempDir(dir, "OAS")
    if err != nil {
        fmt.Println("Error creating temporary directory to store OAS")
        return
    }
    fmt.Println("Temporary directory created:", tmpDir)
    defer os.RemoveAll(tmpDir)
    err = getOAS()
    if err != nil {
        utils.HandleErrorAndExit("Error getting OAS from AWS. ", err)
    }
    err = initializeProject()
    if err != nil {
        utils.HandleErrorAndExit("Error initializing project. ", err)
    }
    fmt.Println("Temporary directory deleted")
}
Since the tmpDir variable is global, change your code to:
var err error
tmpDir, err = ioutil.TempDir(dir, "OAS")
Spotted the difference? := versus =. The other function doesn't see the locally declared (shadowing) variable tmpDir.
Here is an example of your code in the playground; as you can see, the global var dir is empty in the other function call. Fixed version.
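To make the shadowing explicit, here is a minimal, self-contained sketch (my own illustration, using ioutil.TempDir as in the question):
package main

import (
    "fmt"
    "io/ioutil"
)

// Package-level variable, like tmpDir in the question.
var tmpDir string

func createShadowed() {
    // ':=' declares a NEW local tmpDir that shadows the package-level one,
    // so every other function still sees the zero value "".
    tmpDir, err := ioutil.TempDir("", "OAS")
    if err != nil {
        panic(err)
    }
    fmt.Println("inside createShadowed:", tmpDir)
}

func createGlobal() {
    // '=' assigns to the package-level variable, which other functions can read.
    var err error
    tmpDir, err = ioutil.TempDir("", "OAS")
    if err != nil {
        panic(err)
    }
}

func main() {
    createShadowed()
    fmt.Println("after createShadowed:", tmpDir) // prints an empty string
    createGlobal()
    fmt.Println("after createGlobal:", tmpDir) // prints the temp dir path
}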

Unable to obtain kubeconfig of an AWS EKS cluster in Go code

I have created an AWS EKS cluster. In order to obtain its kubeconfig, I usually run aws eks update-kubeconfig --name cluster-name --region us-west-2 using a shell.
However, I now wish to obtain the kubeconfig in Go without having to run anything in the shell (the goal being to create and then manipulate an EKS cluster in a Go test). I am able to describe an EKS cluster using this code:
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/eks"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-west-2"),
    }))
    eksSvc := eks.New(sess, aws.NewConfig().WithRegion("us-west-2"))
    clusterOutput, err := eksSvc.DescribeCluster(&eks.DescribeClusterInput{
        Name: aws.String("cluster-name"),
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("%#v\n", clusterOutput)
}
After that, I currently have no idea how I could get the kubeconfig of that cluster in order to then use it with the Go client for Kubernetes, without having to run aws eks separately.
I have checked the AWS documentation, the AWS CLI codebase, and the eksctl codebase with no luck so far. The connection to an EKS cluster is only documented in this webpage, and it uses a shell: https://aws.amazon.com/premiumsupport/knowledge-center/eks-cluster-connection/
Any ideas?
The general flow goes something like this:
1. DescribeCluster (as you have done) and extract some necessary data
2. Using the necessary data, get a token using aws-iam-authenticator's package token
3. Using that token, create a Kubernetes clientset with the help of client-go.
package main

import (
    "encoding/base64"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/eks"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

func newClientset(cluster *eks.Cluster) (*kubernetes.Clientset, error) {
    log.Printf("%+v", cluster)
    gen, err := token.NewGenerator(true, false)
    if err != nil {
        return nil, err
    }
    opts := &token.GetTokenOptions{
        ClusterID: aws.StringValue(cluster.Name),
    }
    tok, err := gen.GetWithOptions(opts)
    if err != nil {
        return nil, err
    }
    ca, err := base64.StdEncoding.DecodeString(aws.StringValue(cluster.CertificateAuthority.Data))
    if err != nil {
        return nil, err
    }
    clientset, err := kubernetes.NewForConfig(
        &rest.Config{
            Host:        aws.StringValue(cluster.Endpoint),
            BearerToken: tok.Token,
            TLSClientConfig: rest.TLSClientConfig{
                CAData: ca,
            },
        },
    )
    if err != nil {
        return nil, err
    }
    return clientset, nil
}

func main() {
    name := "wonderful-outfit-1583362361"
    region := "us-east-2"
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String(region),
    }))
    eksSvc := eks.New(sess)

    input := &eks.DescribeClusterInput{
        Name: aws.String(name),
    }
    result, err := eksSvc.DescribeCluster(input)
    if err != nil {
        log.Fatalf("Error calling DescribeCluster: %v", err)
    }

    clientset, err := newClientset(result.Cluster)
    if err != nil {
        log.Fatalf("Error creating clientset: %v", err)
    }

    nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
    if err != nil {
        log.Fatalf("Error getting EKS nodes: %v", err)
    }
    log.Printf("There are %d nodes associated with cluster %s", len(nodes.Items), name)
}
Here's my go.mod for versions:
module github.com/swoldemi/sandbox

go 1.14

require (
    github.com/aws/aws-sdk-go v1.29.19
    k8s.io/apimachinery v0.0.0-20190612125636-6a5db36e93ad
    k8s.io/client-go v0.0.0-20190425172711-65184652c889
    sigs.k8s.io/aws-iam-authenticator v0.5.0
)
Adding to Simon's great answer, you may want to also pass the session in the GetTokenOptions struct, like so:
opts := &token.GetTokenOptions{
    ClusterID: aws.StringValue(cluster.Name),
    Session:   sess,
}
tok, err := gen.GetWithOptions(opts)
Otherwise the gen.GetWithOptions(opts) call will try to read your AWS credentials from local sources (e.g. ~/.aws/credentials) and may fail with this error:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors