How to send a namespace filter to Amazon Managed Prometheus via awscli - amazon-web-services

I want to send a PromQL query to Amazon Managed Prometheus via awscli, but I am not able to filter the result based on namespace.
I am able to apply the same filter against a local Prometheus via prometheus_api_client.PrometheusConnect, but cannot use the same approach with AWS (because of auth).
Is there any way?
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222/api/v1/query?query=sum(storage_level_sst_num) by (namespace, instance, level_index)"
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "instance": "10.0.3.68:1250",
          "level_index": "0_MVGroup",
          "namespace": "benchmark"
        },
        "value": [
          1665730049.128,
          "8"
        ]
      }
    ]
  }
}
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)"
{"message":null}
Traceback (most recent call last):
File "/opt/homebrew/bin//awscurl", line 33, in <module>
sys.exit(load_entry_point('awscurl==0.26', 'console_scripts', 'awscurl')())
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 521, in main
inner_main(sys.argv[1:])
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 515, in inner_main
response.raise_for_status()
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num%7Bnamespace=benchmark%7D)%20by%20(instance,%20level_index)
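Note that the decoded URL in the 400 error shows the matcher arriving as namespace=benchmark with no quotes: the inner double quotes around benchmark are consumed by the shell's outer double quotes, so the PromQL that reaches the workspace is invalid. Escaping the inner quotes (\"benchmark\") or URL-encoding them as %22 should avoid the 400. Alternatively, the query can be sent from Python by SigV4-signing the request yourself; the following is a minimal sketch (assuming the boto3/botocore and requests packages; the signing code is not from the original post):

# Sketch: query Amazon Managed Prometheus with a SigV4-signed GET request.
# Assumes credentials are available to boto3 (env vars, profile, or role).
import boto3
import requests
from urllib.parse import urlencode, quote
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = "ap-southeast-1"
WORKSPACE_URL = ("https://aps-workspaces.ap-southeast-1.amazonaws.com/"
                 "workspaces/ws-2222-222-222-222-222/api/v1/query")

# The label value keeps its double quotes; quote() percent-encodes them.
promql = 'sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)'
url = WORKSPACE_URL + "?" + urlencode({"query": promql}, quote_via=quote)

credentials = boto3.Session().get_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "aps", REGION).add_auth(request)

response = requests.get(url, headers=dict(request.headers))
print(response.json())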

Related

Connect to AWS managed redis cache, exposed by an upstream service, using kube-proxy

I have the following scenario in a k8s cluster:
an AWS managed Redis cluster which is exposed by an upstream service called redis.
I have opened a tunnel locally using kube-proxy.
curl 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "redis",
    "namespace": "intekspersistence",
    "selfLink": "/api/v1/namespaces/intekspersistence/services/redis",
    ...
  "spec": {
    "type": "ExternalName",
    "sessionAffinity": "None",
    "externalName": "xxx.xxx.usw2.cache.amazonaws.com"
  },
  "status": {
    "loadBalancer": {
    }
  }
As shown, I am able to route to the redis service locally and it's pointing to the actual Redis host.
Now I am trying to validate and ping the Redis host using the Python script below.
from redis import Redis
import logging
logging.basicConfig(level=logging.INFO)
redis = Redis(host='127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis')
if redis.ping():
    logging.info("Connected to Redis")
Upon running this, it throws a "host not found" error, probably due to the inclusion of the port in the host.
python test.py
Traceback (most recent call last):
File "test.py", line 7, in <module>
if redis.ping():
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 1378, in ping
return self.execute_command('PING')
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 1192, in get_connection
connection.connect()
File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis:6379. Name or service not known.
Is there a workaround to trim the port from the host, or to route to the host using the above Python module?
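For what it's worth, two things seem to be in play here: redis-py expects a bare hostname and a numeric port as separate arguments rather than a URL path, and the API-server service proxy behind that 127.0.0.1:31997 URL only forwards HTTP, so the raw Redis protocol cannot be tunnelled through it anyway. A minimal sketch of the redis-py side, assuming the ExternalName target from the Service is reachable from wherever the script runs (usually only from inside the VPC/cluster):

# Sketch: pass host and port separately; the host is the ExternalName target
# copied from the Service above (placeholder as in the question).
from redis import Redis

redis_client = Redis(
    host="xxx.xxx.usw2.cache.amazonaws.com",  # ExternalName of the redis Service
    port=6379,
)

if redis_client.ping():
    print("Connected to Redis")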

Celery not connecting to RabbitMQ on AWS Elastic Beanstalk when deployed using Docker containers

I have 3 Docker containers - my_django_app, rabbitmq, and celery_worker. I have implemented this on my local system using the following docker-compose.yml:
version: '3'
services:
  web: &my_django_app
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - "80:8000"
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:latest
  celery_worker:
    <<: *my_django_app
    command: celery -A MyDjangoApp worker --autoscale=10,1 --loglevel=info
    ports: []
    depends_on:
      - rabbitmq
When I run this on my local system, it works perfectly fine. I then deployed these images to AWS Elastic Beanstalk (multi-container environment) using the following Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "Authentication": {
    "Bucket": "cred-keeper",
    "Key": "index.docker.io/.dockercfg"
  },
  "containerDefinitions": [{
      "Authentication": {
        "Bucket": "cred-keeper",
        "Key": "index.docker.io/.dockercfg"
      },
      "command": [
        "celery",
        "-A",
        "MyDjangoApp",
        "worker",
        "--autoscale=10,1",
        "--loglevel=info"
      ],
      "essential": true,
      "image": "myName/my_django_app:latest",
      "name": "celery_worker",
      "memory": 150
    },
    {
      "essential": true,
      "image": "rabbitmq:latest",
      "name": "rabbitmq",
      "memory": 256
    },
    {
      "Authentication": {
        "Bucket": "cred-keeper",
        "Key": "index.docker.io/.dockercfg"
      },
      "command": [
        "python3",
        "manage.py",
        "runserver",
        "0.0.0.0:8000"
      ],
      "essential": true,
      "image": "myName/my_django_app:latest",
      "memory": 256,
      "name": "web",
      "portMappings": [{
        "containerPort": 8000,
        "hostPort": 80
      }]
    }
  ],
  "family": "",
  "volumes": []
}
I checked the logs for the 3 containers by downloading them from AWS Elastic Beanstalk; the web and rabbitmq containers are working just fine, but celery_worker shows logs like:
[2020-06-30 20:17:22,885: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbitmq:5672//: failed to resolve broker hostname.
Trying again in 2.00 seconds... (1/100)
[2020-06-30 20:17:24,898: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbitmq:5672//: failed to resolve broker hostname.
Trying again in 4.00 seconds... (2/100)
[2020-06-30 20:17:28,914: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbitmq:5672//: failed to resolve broker hostname.
Trying again in 6.00 seconds... (3/100)
.
.
.
[2020-06-30 20:16:45,662: CRITICAL/MainProcess] Unrecoverable error: OperationalError('failed to resolve broker hostname')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 137, in _connect
host, port, family, socket.SOCK_STREAM, SOL_TCP)
File "/usr/local/lib/python3.7/socket.py", line 752, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 430, in ensure_connection
callback, timeout=timeout)
File "/usr/local/lib/python3.7/site-packages/kombu/utils/functional.py", line 344, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 283, in connect
return self.connection
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 839, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 794, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.7/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 311, in connect
self.transport.connect()
File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 77, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 148, in _connect
"failed to resolve broker hostname"))
OSError: failed to resolve broker hostname
The CELERY_BROKER_URL in my Django settings is "amqp://rabbitmq". Also, my celery.py is as follows:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Speeve.settings')
app = Celery('MyDjangoApp')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
What do I need to do in order for my celery container to work properly on AWS Elastic Beanstalk?
Please help!
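One thing worth checking (not stated in the original post): in a multi-container Elastic Beanstalk environment the containers resolve each other by name only if they are linked in Dockerrun.aws.json (or if an externally resolvable broker endpoint is used), so the hostname rabbitmq may simply not resolve from the celery_worker container. A small sketch of making the broker configurable via the environment, so the deployed environment can point at whatever host does resolve (the variable name is an example, not from the original post):

# settings.py (sketch): read the broker URL from the environment, falling back
# to the docker-compose service name used locally.
import os

CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "amqp://rabbitmq")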

gcloud sql instances patch fails with invalid data error

When trying to add high availability to an existing Cloud SQL instance using:
gcloud sql instances patch $INSTANCE --project $PROJECT --availability-type regional
the process fails with this message:
The following message will be used for the patch API method.
{"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
ERROR: (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
It also fails using the web interface.
gcloud version: Google Cloud SDK [280.0.0]
This is the output of the log (not much help that I can see):
2020-02-14 11:01:34,476 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances']
2020-02-14 11:01:34,510 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances', u'patch']
2020-02-14 11:01:34,517 DEBUG root Running [gcloud.sql.instances.patch] with arguments: [--availability-type: "regional", --project: "$PROJECT", INSTANCE: "$INSTANCE"]
2020-02-14 11:01:35,388 INFO ___FILE_ONLY___ The following message will be used for the patch API method.
2020-02-14 11:01:35,398 INFO ___FILE_ONLY___ {"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
2020-02-14 11:01:35,865 DEBUG root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
Traceback (most recent call last):
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 981, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line 807, in Run
resources = command_instance.Run(args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 306, in Run
return RunBasePatchCommand(args, self.ReleaseTrack())
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 278, in RunBasePatchCommand
instance=instance_ref.instance))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\third_party\apis\sql\v1beta4\sql_v1beta4_client.py", line 697, in Patch
config, request, global_params=global_params)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 731, in _RunMethod
return self.ProcessHttpResponse(method_config, http_response, request)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 737, in ProcessHttpResponse
self.__ProcessHttpResponse(method_config, http_response, request))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 604, in __ProcessHttpResponse
http_response, method_config=method_config, request=request)
HttpBadRequestError: HttpError accessing <https://sqladmin.googleapis.com/sql/v1beta4/projects/$PROJECT/instances/$INSTANCE?alt=json>: response: <{'status': '400', 'content-length': '269', 'x-xss-protection': '0', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 14 Feb 2020 10:01:35 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 400,
"message": "The incoming request contained invalid data.",
"errors": [
{
"message": "The incoming request contained invalid data.",
"domain": "global",
"reason": "invalidRequest"
}
]
}
}
>
2020-02-14 11:01:35,868 ERROR root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
2020-02-14 11:01:35,898 DEBUG root Metrics reporting process started...
Edit:
When using the gcloud CLI command (gcloud sql instances patch with 3 input parameters):
Both $PROJECT and $INSTANCE do exist, since gcloud sql databases list --instance $INSTANCE --project $PROJECT works fine.
availability-type=regional is documented, so it should work.
I'm not constructing the request manually, I'm using the gcloud CLI.
When using the console.cloud.google.com web interface:
Main menu -> SQL -> select instance -> Enable High Availability.
It's a button, no parameters added by myself.
Both return the same error "The incoming request contained invalid data."
I can't see what I may be doing wrong.
Please check the data in your incoming request.
I used the method instances.patch and it worked as expected for me, with the project, the instance name, and this request body:
"settings": {
"availabilityType": "REGIONAL",
"databaseFlags": [
{
"name": "sql_mode",
"value": "TRADITIONAL"
},
{
"name": "default_time_zone",
"value": "+01:00"
}
]
}
}
Curl command:
curl --request PATCH \
  'https://sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance?key=[YOUR_API_KEY]' \
  --header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{"settings":{"availabilityType":"REGIONAL","databaseFlags":[{"name":"sql_mode","value":"TRADITIONAL"},{"name":"default_time_zone","value":"+01:00"}]}}' \
  --compressed
Response 200:
{
  "kind": "sql#operation",
  "targetLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance",
  "status": "PENDING",
  "user": "#cloud.com",
  "insertTime": "2020-02-14T12:35:37.615Z",
  "operationType": "UPDATE",
  "name": "3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
  "targetId": "your-instance",
  "selfLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/wave25-vladoi/operations/3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
  "targetProject": "your-project"
}
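The same patch can also be issued from Python through the Cloud SQL Admin API client; a minimal sketch, assuming application default credentials and the google-api-python-client package (neither of which appears in the original answer):

# Sketch: send the same instances.patch body through the Cloud SQL Admin API.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "settings": {
        "availabilityType": "REGIONAL",
        "databaseFlags": [
            {"name": "sql_mode", "value": "TRADITIONAL"},
            {"name": "default_time_zone", "value": "+01:00"},
        ],
    }
}

operation = (
    sqladmin.instances()
    .patch(project="your-project", instance="your-instance", body=body)
    .execute()
)
print(operation["status"])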

Unable to invoke lambda function from localstack via aws cli

I have the lambda function pushed, and I can see it in localstack, based on the command/output below:
aws lambda get-function --function-name books1 --endpoint-url=http://localhost:4574
{
  "Code": {
    "Location": "http://localhost:4574/2015-03-31/functions/books1/code"
  },
  "Configuration": {
    "Version": "$LATEST",
    "FunctionName": "books1",
    "CodeSize": 50,
    "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:books1",
    "Environment": {},
    "Handler": "main",
    "Runtime": "go1.x"
  }
}
When I try to invoke it, as shown below, I get an error. My localstack is running inside a Docker container.
aws --endpoint-url=http://localhost:4574 lambda invoke --function-name books1 /tmp/output.json
An error occurred (InternalFailure) when calling the Invoke operation (reached max retries: 4): Error executing Lambda function: Unable to find executor for Lambda function "books1". Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 269, in run_lambda
event, context=context, version=version, asynchronous=asynchronous)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 466, in execute
process.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 462, in do_execute
result = lambda_function(event, context)
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 390, in generic_handler
'Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker') % lambda_name)
Exception: Unable to find executor for Lambda function "books1". Note that Node.js and .NET Core Lambdas currently require LAMBDA_EXECUTOR=docker
This lambda is written in Go and when I manually execute it on real AWS, it works just fine.
You should run the localstack container with the LAMBDA_EXECUTOR=docker environment variable set and the /var/run/docker.sock:/var/run/docker.sock volume mounted:
docker run \
-itd \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LAMBDA_EXECUTOR=docker \
-p 4567-4583:4567-4583 -p 8080:8080 \
--name localstack \
localstack/localstack
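Once the container is recreated with those options, the invoke should go through. For reference, a minimal sketch of the same invocation from Python with boto3 (the endpoint and function name are taken from the question; the dummy credentials are an assumption, since localstack does not normally validate them):

# Sketch: invoke the function against the localstack Lambda endpoint.
import boto3

lambda_client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4574",
    region_name="us-east-1",
    aws_access_key_id="test",          # dummy value, localstack ignores it
    aws_secret_access_key="test",      # dummy value, localstack ignores it
)

response = lambda_client.invoke(FunctionName="books1")
print(response["Payload"].read().decode())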

Boto is unable to access a bucket inside an ECS container which has the correct IAM role (but Boto3 can)

I have an ECS container and have attached an IAM policy like the one below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": "arn:aws:s3:::my_s3_bucket/*"
    }
  ]
}
The container has both boto and boto3 installed in it.
I am able to list the bucket using boto3 but not with boto. Please see the code below:
import boto3
s3_conn = boto3.client('s3')
s3_conn.list_objects(Bucket='my_s3_bucket')
'Owner': {u'DisplayName': 'shoppymcgee', u'ID': 'adf3425700e4f995d8773a8b********'}, u'Size': 116399950}, {u'LastModified': datetime.datetime(2013, 5, 18, 6, 35, 6, tzinfo=tzlocal()), u'ETag': '"2b4a4d60458cde1685c93dabf98c6e19"', u'StorageClass': 'STANDARD', u'Key': u'2013/05/18/SLPH_201305180605_eligible-product-feed.txt', u'Owner': {u'DisplayName': 'shoppymcgee', u'ID': 'adf3425700e4f995d8773a8be6b0df09d06751f3274d8be5e8ae04761a5eef09'},
import boto
conn = boto.connect_s3()
print conn
S3Connection:s3.amazonaws.com
mybucket = conn.get_bucket('my_s3_bucket')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 509, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 528, in head_bucket
response = self.make_request('HEAD', bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 671, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 1071, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 943, in _mexe
request.body, request.headers)
Version of boto - boto==2.48.0
Version of boto3 and botocore - botocore==1.7.41 and boto3==1.4.7
Boto does not support the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which is what gives containers/tasks the ability to use the task-specific IAM role.
If you do a search on GitHub for that environment variable in Boto's repository, you'll come up with no code hits and an open issue asking for it to be implemented - https://github.com/boto/boto/search?q=AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Until support is added (if it ever is, given boto's maintenance state), the only way to use boto is to call the credentials endpoint manually (curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI), retrieve the credentials, and pass them to boto yourself (be careful of the expiry of the temporary credentials, though).
Or migrate to boto3.
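For reference, a minimal sketch of the manual approach described above (the environment variable and the JSON keys come from the ECS task credentials endpoint; error handling and credential refresh are left out):

# Sketch: fetch the task-role credentials from the ECS credentials endpoint
# and hand them to boto explicitly. The temporary credentials expire, so a
# real implementation should refresh them before "Expiration".
import os
import boto
import requests

relative_uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
creds = requests.get("http://169.254.170.2" + relative_uri).json()

conn = boto.connect_s3(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    security_token=creds["Token"],
)
bucket = conn.get_bucket("my_s3_bucket")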