Data not being updated in AWS when using Kinesis Data Streams

I'm trying to implement Kinesis Data Streams in my React Native application using Amplify.
I have followed the steps in the official documentation, but no data shows up when I check the AWS console.
This is my code:
Analytics.addPluggable(new AWSKinesisProvider());

Analytics.configure({
  AWSKinesis: {
    // OPTIONAL - Amazon Kinesis service region
    region: 'us-east-2',
    // OPTIONAL - The buffer size for events in number of items.
    bufferSize: 1000,
    // OPTIONAL - The number of events to be deleted from the buffer when flushed.
    flushSize: 100,
    // OPTIONAL - The interval in milliseconds to perform a buffer check and flush if necessary.
    flushInterval: 5000, // 5s
    // OPTIONAL - The limit for failed recording retries.
    resendLimit: 5
  }
});

Analytics.record({
  data: {
    mydata: "new data"
  },
  streamName: '*myStreamName*'
}, 'AWSKinesis');
After executing, this is what the debugger console looks like:
LOG [DEBUG] 31:18.860 Credentials - getting credentials
LOG [DEBUG] 31:18.862 Credentials - picking up credentials
LOG [DEBUG] 31:18.863 Credentials - getting new cred promise
LOG [DEBUG] 31:18.866 Credentials - checking if credentials exists and not expired
LOG [DEBUG] 31:18.868 Credentials - are these credentials expired? {"accessKeyId": "****", "authenticated": true, "expiration": 2020-11-03T11:29:53.000Z, "identityId": "****", "secretAccessKey": "****", "sessionToken": "****"}
LOG [DEBUG] 31:18.871 Credentials - credentials not changed and not expired, directly return
LOG [DEBUG] 31:18.905 AWSKinesisProvider - set credentials for analytics undefined
LOG [DEBUG] 31:20.576 AWSKinesisProvider - init clients
LOG [DEBUG] 31:20.579 AWSKinesisProvider - initialize kinesis with credentials {"accessKeyId": "****", "authenticated": true, "identityId": "****", "secretAccessKey": "****", "sessionToken": "****"}
LOG [DEBUG] 31:20.624 AWSKinesisProvider - putting records to kinesis with records [{"Data": [123, 34, 109, 121, 100, 97, 116, 97, 34, 58, 34, 104, 101, 108, 108,
111, 32, 105, 109, 32, 110, 101, 119, 32, 104, 101, 114, 101, 34, 125], "PartitionKey": "partition-us-east-2:****"}]
LOG [DEBUG] 31:22.311 AWSKinesisProvider - Upload records to stream *myStreamName*
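The log stops at the upload step, so it helps to check whether the records actually reached the stream. Here is a minimal read-back sketch with boto3 (not part of the original post; it assumes the stream name and region from the question, and a single-shard stream):

import boto3

# Read back whatever landed on the stream's first shard.
kinesis = boto3.client('kinesis', region_name='us-east-2')
shards = kinesis.describe_stream(StreamName='myStreamName')['StreamDescription']['Shards']
iterator = kinesis.get_shard_iterator(
    StreamName='myStreamName',
    ShardId=shards[0]['ShardId'],
    ShardIteratorType='TRIM_HORIZON',  # start from the oldest available record
)['ShardIterator']
print(kinesis.get_records(ShardIterator=iterator)['Records'])

If records come back here, the data is arriving and you may simply be looking at the wrong place in the console; if nothing comes back, check that the Cognito role behind those credentials is allowed kinesis:PutRecords on the stream.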

Related

Confluent Schema Registry on Strimzi - pods not getting created

I have Strimzi Kafka installed on GKE (GCP), and I'm trying to install the Confluent Schema Registry by following this link:
https://github.com/lsst-sqre/strimzi-registry-operator
Steps followed:
Step 1: Installed strimzi-registry-operator in namespace schema-registry-operator.
(Note: Strimzi Kafka is installed in namespace kafka.)
Commands used:
helm repo add lsstsqre https://lsst-sqre.github.io/charts/
helm repo update
helm install ssr lsstsqre/strimzi-registry-operator -n schema-registry-operator --values values.yaml
values.yaml:
------------
# -- Name of the Strimzi Kafka cluster
clusterName: "versa-kafka-gke"
# -- Namespace where the Strimzi Kafka cluster is deployed
clusterNamespace: "kafka"
# -- Namespace where the strimzi-registry-operator is deployed
operatorNamespace: "strimzi-registry-operator"
Step 2:
Installed the KafkaTopic (registry-schemas) and KafkaUser in namespace 'kafka'.
(Note: Strimzi Kafka is also installed in the namespace kafka.)
kafkatopic.yaml
----------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: registry-schemas
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  partitions: 1
  replicas: 3
  config:
    # http://kafka.apache.org/documentation/#topicconfigs
    cleanup.policy: compact
kafkauser.yaml
--------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: confluent-schema-registry
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  authentication:
    type: tls
  authorization:
    # Official docs on authorizations required for the Schema Registry:
    # https://docs.confluent.io/current/schema-registry/security/index.html#authorizing-access-to-the-schemas-topic
    type: simple
    acls:
      # Allow all operations on the registry-schemas topic
      # Read, Write, and DescribeConfigs are known to be required
      - resource:
          type: topic
          name: registry-schemas
          patternType: literal
        operation: All
        type: allow
      # Allow all operations on the schema-registry* group
      - resource:
          type: group
          name: schema-registry
          patternType: prefix
        operation: All
        type: allow
      # Allow Describe on the __consumer_offsets topic
      - resource:
          type: topic
          name: __consumer_offsets
          patternType: literal
        operation: Describe
        type: allow
Step 3:
Installed StrimziSchemaRegistry in namespace strimzi-schema-operator.
Here is what I see in the namespace schema-registry-operator:
(base) Karans-MacBook-Pro:schema-registry-yamls karanalang$ kc get all -n schema-registry-operator
NAME                                             READY   STATUS    RESTARTS   AGE
pod/strimzi-registry-operator-7867fbc985-rddqw   1/1     Running   0          121m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/strimzi-registry-operator   1/1     1            1           121m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/strimzi-registry-operator-7867fbc985   1         1         1       121m
Also, when I check the logs of the strimzi-registry-operator pod, I see the following error:
Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
    await coro
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
    async for raw_event in stream:
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
    async for raw_event in stream:
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
    objs, resource_version = await fetching.list_objs(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
    rsp = await api.get(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
    response = await request(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
    await errors.check_response(response)  # but do not parse it!
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
    raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'kind': 'secrets'}, 'code': 403})
[2022-11-30 23:27:39,605] kopf._cogs.clients.w [DEBUG ] Stopping the watch-stream for strimzischemaregistries.v1beta1.roundtable.lsst.codes in 'kafka'.
[2022-11-30 23:27:39,606] kopf._core.reactor.o [ERROR ] Watcher for strimzischemaregistries.v1beta1.roundtable.lsst.codes#kafka has failed: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 148, in check_response
    response.raise_for_status()
  File "/opt/venv/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1004, in raise_for_status
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.44.0.1:443/apis/roundtable.lsst.codes/v1beta1/namespaces/kafka/strimzischemaregistries')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
    await coro
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
    async for raw_event in stream:
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
    async for raw_event in stream:
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
    objs, resource_version = await fetching.list_objs(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
    rsp = await api.get(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
    response = await request(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
    await errors.check_response(response)  # but do not parse it!
  File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
    raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
A few questions on this:
1. I don't see the Schema Registry pod (which would be listening on port 8081) getting created; only the StrimziSchemaRegistry object is created in namespace strimzi-schema-operator.
2. How do I get access to the Schema Registry URL so I can upload schemas to it?
3. How do I resolve the permission error above?
4. Do I need to create a separate service account for installing the Schema Registry?
Please advise. TIA!
Update:
This is an existing issue with the Strimzi Schema Registry operator (https://github.com/lsst-sqre/strimzi-registry-operator/issues/79).
Essentially, the ServiceAccount is not created in the correct namespace; I re-created the ServiceAccount in the namespace strimzi-registry-operator to resolve the issue.
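For anyone hitting the same 403, here is a minimal RBAC sketch of what the operator's ServiceAccount needs in the kafka namespace. The names are taken from the error message; the chart may wire this up differently, so treat it as an illustration rather than the operator's actual manifests:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: strimzi-registry-operator
  namespace: kafka
rules:
  # The watch-stream errors show the operator needs to list secrets and its CRs here.
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["roundtable.lsst.codes"]
    resources: ["strimzischemaregistries"]
    verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-registry-operator
  namespace: kafka
subjects:
  - kind: ServiceAccount
    name: strimzi-registry-operator
    namespace: schema-registry-operator
roleRef:
  kind: Role
  name: strimzi-registry-operator
  apiGroup: rbac.authorization.k8s.io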
However, I'm facing another issue (existing issue: https://github.com/lsst-sqre/strimzi-registry-operator/issues/84) - the schema registry is not getting created.
Additional details:
Schema-Registry-operator is deployed in namespace 'strimzi-registry-operator'.
Strimzi Kafka (cluster versa-kafka-gke) is deployed in namespace 'kafka'.
Part of the Strimzi Kafka YAML, with version & listeners:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: versa-kafka-gke #1
spec:
  kafka:
    version: 3.0.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
KafkaUser (confluent-schema-registry) & KafkaTopic (registry-schemas) are deployed in namespace 'kafka'.
Confluent Schema Registry is deployed in namespace 'kafka'.
Error:
Events:
  Type    Reason    Age   From   Message
  ----    ------    ----  ----   -------
  Error   Logging   73s   kopf   Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
    result = await invoke_handler(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
    result = await invocation.invoke(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
    await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
    bootstrap_server = get_kafka_bootstrap_server(
  File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
    raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
Normal Logging 73s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Normal Logging 12s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Error Logging 12s kopf Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
    result = await invoke_handler(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
    result = await invocation.invoke(
  File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
    await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
    bootstrap_server = get_kafka_bootstrap_server(
  File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
    raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
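As an aside on the AttributeError itself: the operator appears to be written against an older kopf API. Current kopf releases expose kopf.TemporaryError / kopf.PermanentError rather than kopf.Error, so the failing line in deployments.py presumably intends something like this sketch (an assumption from the traceback, not the operator's actual source):

import kopf

def report_and_retry(msg: str) -> None:
    # kopf.Error does not exist in current kopf; TemporaryError tells kopf
    # to retry the handler after `delay` seconds, matching delay=10 above.
    raise kopf.TemporaryError(msg, delay=10)

Even with that fixed (or with kopf pinned to a version the operator supports), whatever condition get_kafka_bootstrap_server was trying to report would still need to be resolved separately.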

How to send namespace in amazon-managed-prometheus via awscurl

I want to send a PromQL query to Amazon Managed Prometheus via awscurl, but I am not able to filter the result based on namespace.
I am able to apply the same filter to a local Prometheus via prometheus_api_client.PrometheusConnect, but cannot use the same approach in AWS (because of auth).
Is there any way?
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222/api/v1/query?query=sum(storage_level_sst_num) by (namespace, instance, level_index)"
{
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {
                "metric": {
                    "instance": "10.0.3.68:1250",
                    "level_index": "0_MVGroup",
                    "namespace": "benchmark"
                },
                "value": [
                    1665730049.128,
                    "8"
                ]
            }
        ]
    }
}
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)"
{"message":null}
Traceback (most recent call last):
File "/opt/homebrew/bin//awscurl", line 33, in <module>
sys.exit(load_entry_point('awscurl==0.26', 'console_scripts', 'awscurl')())
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 521, in main
inner_main(sys.argv[1:])
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 515, in inner_main
response.raise_for_status()
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num%7Bnamespace=benchmark%7D)%20by%20(instance,%20level_index)
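Not an answer from the thread, but the encoded URL in the 400 gives the cause away: %7Bnamespace=benchmark%7D shows that the inner double quotes around benchmark were eaten by the shell (the whole URL is itself double-quoted), so PromQL received an invalid label matcher. A sketch of the same call with the URL single-quoted so the label quotes survive (workspace ID and metric as in the question):

awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps \
  'https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)'

Note that $KEY1 and $KEY still expand because they sit outside the single quotes; only the URL argument is protected.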

Connect to AWS managed redis cache, exposed by an upstream service, using kube-proxy

I have the following scenario in the k8s cluster.
An AWS managed Redis cluster, which is exposed by an upstream service called redis.
I have opened a tunnel locally using kube-proxy:
curl 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "redis",
        "namespace": "intekspersistence",
        "selfLink": "/api/v1/namespaces/intekspersistence/services/redis",
        ...
    },
    "spec": {
        "type": "ExternalName",
        "sessionAffinity": "None",
        "externalName": "xxx.xxx.usw2.cache.amazonaws.com"
    },
    "status": {
        "loadBalancer": {}
    }
}
As shown, I am able to route to the redis service locally, and it's pointing to the actual Redis host.
Now I am trying to validate and ping the Redis host using the Python script below.
from redis import Redis
import logging

logging.basicConfig(level=logging.INFO)

redis = Redis(host='127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis')
if redis.ping():
    logging.info("Connected to Redis")
Upon running this, it throws a host-not-found error (probably due to the inclusion of the port and path in the host).
python test.py
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    if redis.ping():
  File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 1378, in ping
    return self.execute_command('PING')
  File "/home/appsadm/.local/lib/python2.7/site-packages/redis/client.py", line 898, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 1192, in get_connection
    connection.connect()
  File "/home/appsadm/.local/lib/python2.7/site-packages/redis/connection.py", line 563, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to 127.0.0.1:31997/api/v1/namespaces/intekspersistence/services/redis:6379. Name or service not known.
Is there a workaround to trim the port and path from the host? Or another way to route to the host using the above Python module?
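Not from the original thread, but two separate problems stand out here. First, redis-py expects a bare hostname, with the port passed as a separate argument; a URL path is never a valid host. Second, the API-server proxy opened above only speaks HTTP, so it cannot carry the Redis wire protocol even with a clean host:port. Assuming some plain TCP tunnel to the ElastiCache endpoint is listening locally (e.g. a port-forward to a pod that can reach it), the client call would look like this sketch:

from redis import Redis
import logging

logging.basicConfig(level=logging.INFO)

# Host and port are separate arguments; 6379 assumes the tunnel forwards
# straight to the ElastiCache endpoint's default Redis port.
redis = Redis(host='127.0.0.1', port=6379)
if redis.ping():
    logging.info("Connected to Redis")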

Cannot connect to AWS Amplify PubSub -> Socket error:undefined

I've been trying out all ways to get Amplify PubSub working, without any luck. It seems all the documentation is rather outdated.
Here is what I have done so far. Please note all hashes are made up ;-)
1. Created a fresh React Native app
2. Installed the Amplify packages
3. Installed the Amplify CLI
4. Invoked $ amplify configure
5. Invoked $ amplify init
6. Invoked $ amplify add auth
7. Invoked $ amplify push, which created the aws-exports.js object
8. Created a super simple component:
import React, { PureComponent } from 'react';
import { View } from 'react-native';
import { withAuthenticator } from 'aws-amplify-react-native';
import Amplify, { Auth, PubSub } from 'aws-amplify';
import { AWSIoTProvider } from '@aws-amplify/pubsub/lib/Providers';
import awsmobile from './aws-exports';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: awsmobile.aws_cognito_region,
    userPoolId: awsmobile.aws_user_pools_id,
    identityPoolId: awsmobile.aws_cognito_identity_pool_id,
    userPoolWebClientId: awsmobile.aws_user_pools_web_client_id,
  },
  Analytics: {
    disabled: true,
  },
});

Amplify.addPluggable(
  new AWSIoTProvider({
    aws_pubsub_region: 'ap-southeast-2',
    aws_pubsub_endpoint:
      'wss://a123456789d-ats.iot.ap-southeast-2.amazonaws.com/mqtt',
  })
);

Amplify.Logger.LOG_LEVEL = 'DEBUG';

class App extends PureComponent {
  componentDidMount() {
    if (this.props.authState === 'signedIn') {
      Auth.currentCredentials().then((creds) => {
        // get the principal that needs to be attached to the policy
        console.log('principal to be attached', creds.identityId);
        PubSub.subscribe('topic_1').subscribe({
          next: (data) => console.log(JSON.stringify(data, null, 2)),
          error: (msg) => console.log('ERROR: ', msg.error),
          close: () => console.log('Done'),
        });
      });
    }
  }

  render() {
    return <View />;
  }
}

export default withAuthenticator(App);
9. Attached the AWS root certificate to my iPhone (see below)
10. Created an IAM policy for AWS IoT:
    IoTAppPolicy
    iot:*
    arn:aws:iot:ap-southeast-2:1234567890:*
11. Attached the principal I got from Auth.currentCredentials to the policy (see the aside after this list):
    aws iot attach-principal-policy --policy-name IoTAppPolicy --principal ap-southeast-2:db1234bc-5678-90123-4567-89ae0e123b4
12. Attached these policies to the Auth role:
    AWSIoTDataAccess
    AWSIoTConfigAccess
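An aside on step 11: attach-principal-policy works but is flagged as deprecated in the AWS CLI; if I'm not mistaken, the newer equivalent (same policy, same Cognito identity) is:

aws iot attach-policy --policy-name IoTAppPolicy --target ap-southeast-2:db1234bc-5678-90123-4567-89ae0e123b4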
Yet, when I run the app I get the following error log:
[DEBUG] 51:42.745 SignIn - Sign In for test@test.com
[DEBUG] 51:44.616 AuthClass CognitoUserSession {idToken: CognitoIdToken, refreshToken: CognitoRefreshToken, accessToken: CognitoAccessToken, clockDrift: 0}
[DEBUG] 51:44.616 Credentials - set credentials from session
[DEBUG] 51:46.247 Credentials - Load credentials successfully
[DEBUG] 51:46.248 AuthClass - succeed to get cognito credentials
[DEBUG] 51:47.150 Hub - Dispatching to auth with {event: "signIn", data: CognitoUser, message: "A user 2d...de has been signed in"}
[DEBUG] 51:47.151 SignIn CognitoUser {username: "2d...de", pool: CognitoUserPool, Session: null, client: Client, signInUserSession: CognitoUserSession, …}
[DEBUG] 51:47.152 AuthClass - Getting the session from this user: CognitoUser {username: "2d...de", pool: CognitoUserPool, Session: null, client: Client, signInUserSession: CognitoUserSession, …}
[DEBUG] 51:47.152 AuthClass - Succeed to get the user session CognitoUserSession {idToken: CognitoIdToken, refreshToken: CognitoRefreshToken, accessToken: CognitoAccessToken, clockDrift: 1}
[DEBUG] 51:47.574 AuthPiece - verified user attributes {verified: {…}, unverified: {…}}
[INFO] 51:47.575 Authenticator - Inside handleStateChange method current authState: signIn
[DEBUG] 51:47.575 VerifyContact - no unverified contact
[INFO] 51:47.578 Authenticator - authState has been updated to signedIn
[DEBUG] 51:47.581 AuthClass - getting current credentials
[DEBUG] 51:47.582 Credentials - getting credentials
[DEBUG] 51:47.582 Credentials - picking up credentials
[DEBUG] 51:47.582 Credentials - getting new cred promise
[DEBUG] 51:47.582 Credentials - checking if credentials exists and not expired
[DEBUG] 51:47.583 Credentials - credentials not changed and not expired, directly return
[DEBUG] 51:47.583 AnalyticsClass - Analytics has been disabled
[DEBUG] 51:47.584 PubSub - subscribe options undefined
[DEBUG] 51:47.584 MqttOverWSProvider - Subscribing to topic(s) topic1
[DEBUG] 51:47.584 Credentials - getting credentials
[DEBUG] 51:47.584 Credentials - picking up credentials
[DEBUG] 51:47.584 Credentials - getting new cred promise
[DEBUG] 51:47.585 Credentials - checking if credentials exists and not expired
[DEBUG] 51:47.585 Credentials - are these credentials expired?
[DEBUG] 51:47.585 Credentials - credentials not changed and not expired, directly return
[DEBUG] 51:47.586 Signer {region: "ap-southeast-2", service: "iotdevicegateway"}
[DEBUG] 51:47.590 MqttOverWSProvider - Creating new MQTT client cca4e07f-a15a-46ce-904d-483a83162018
[WARN] 52:50.152 MqttOverWSProvider - cca4e07f-a15a-46ce-904d-483a83162018 {
"errorCode": 7,
"errorMessage": "AMQJS0007E Socket error:undefined.",
"uri": "wss://a123456789d.iot.ap-southeast-2.amazonaws.com/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAU7VGXF6UOWZIUFOM%2F20200502%2Fap-southeast-2%2Fiotdevicegateway%2Faws4_request&X-Amz-Date=20200502T055147Z&X-Amz-SignedHeaders=host&X-Amz-Signature=010d8..7dba&X-Amz-Security-Token=IQ..lk%3D"
}
ERROR: Disconnected, error code: 7
[DEBUG] 52:50.153 MqttOverWSProvider - Unsubscribing from topic(s) topic1
Any idea why I can't connect to the topic?
Who would have thought that a package was missing! After installing
yarn add amazon-cognito-identity-js
everything worked fine on my physical iPhone.
I went to the AWS IoT Core page and published a test message from there and voilà I get the following object back
{
  "message": "Hello from AWS IoT console"
}

Boto is unable to access a bucket inside an ECS container which has the correct IAM role (but Boto3 can)

I have an ECS container to which I have attached an IAM policy like the one below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": "arn:aws:s3:::my_s3_bucket/*"
        }
    ]
}
and it has both boto and boto3 installed.
I am able to list the bucket using boto3, but not with boto. Please see the code below:
import boto3
s3_conn = boto3.client('s3')
s3_conn.list_objects(Bucket='my_s3_bucket')
'Owner': {u'DisplayName': 'shoppymcgee', u'ID': 'adf3425700e4f995d8773a8b********'}, u'Size': 116399950}, {u'LastModified': datetime.datetime(2013, 5, 18, 6, 35, 6, tzinfo=tzlocal()), u'ETag': '"2b4a4d60458cde1685c93dabf98c6e19"', u'StorageClass': 'STANDARD', u'Key': u'2013/05/18/SLPH_201305180605_eligible-product-feed.txt', u'Owner': {u'DisplayName': 'shoppymcgee', u'ID': 'adf3425700e4f995d8773a8be6b0df09d06751f3274d8be5e8ae04761a5eef09'},
import boto
conn = boto.connect_s3()
print conn
S3Connection:s3.amazonaws.com
mybucket = conn.get_bucket('my_s3_bucket')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 528, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 671, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 1071, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 943, in _mexe
    request.body, request.headers)
Version of boto: boto==2.48.0
Versions of boto3 and botocore: botocore==1.7.41 and boto3==1.4.7
Boto does not support the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which is what gives containers/tasks the ability to use a task-specific IAM role.
If you search Boto's repository on GitHub for that environment variable, you'll get no code hits, just an open issue asking for it to be implemented: https://github.com/boto/boto/search?q=AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Until support is added (if it ever is, given boto's maintenance state), the only way to use boto is to call the metadata service manually (curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI), retrieve the credentials, and pass them to boto yourself (be careful of the expiry on the temporary credentials, though).
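A minimal sketch of that manual workaround (it assumes the standard ECS credentials endpoint; the JSON fields AccessKeyId, SecretAccessKey, Token, and Expiration come from that documented contract):

import os

import boto
import requests

# boto3/botocore read this endpoint automatically; boto2 ignores it.
url = 'http://169.254.170.2' + os.environ['AWS_CONTAINER_CREDENTIALS_RELATIVE_URI']
creds = requests.get(url, timeout=2).json()

# Hand the temporary task-role credentials to boto2 explicitly. They expire
# (see creds['Expiration']), so a long-lived process must re-fetch them.
conn = boto.connect_s3(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    security_token=creds['Token'],
)
print(conn.get_bucket('my_s3_bucket'))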
Or migrate to boto3.