How to access a list nested inside dictionaries in Python - python-2.7

Example:
{'positions': {u'_total': 1, u'values': [{u'startDate': {u'year': 2000, u'month': 7}, u'title': u'ABCD', u'company': {u'industry': u'ABCD', u'size': u'1001-5001', u'type': u'ABCD', u'id': 1234, u'name': u'ABCD'}, u'summary': u'ABCD', u'isCurrent': ..., u'id': ...}]}}
I am trying to access the "company" dictionary.
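A minimal sketch of the lookup, assuming the structure above (with placeholders standing in for the truncated isCurrent and id values):

# Sketch only: the structure is copied from the question, with
# placeholders for the truncated values.
data = {
    'positions': {
        u'_total': 1,
        u'values': [
            {
                u'title': u'ABCD',
                u'company': {u'industry': u'ABCD', u'id': 1234, u'name': u'ABCD'},
            },
        ],
    },
}

# 'values' is a list, so index into it before keying into the nested dict.
company = data['positions']['values'][0]['company']
print(company['name'])  # ABCD

# To handle every position, iterate over the list:
for position in data['positions']['values']:
    print(position['company']['id'])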

Nothing being written into the Redshift table [closed]

I have this AWS Lambda function for writing to Redshift. It executes without error but doesn't actually create the table. Does anyone have any thoughts on what might be wrong or what checks I could perform?
import json
import boto3
import botocore.session as bc
from botocore.client import Config

print('Loading function')

secret_arn = 'arn:aws:secretsmanager:<some secret stuff here>'
cluster_id = 'cluster_id'

bc_session = bc.get_session()
region = boto3.session.Session().region_name
session = boto3.Session(
    botocore_session=bc_session,
    region_name=region
)
config = Config(connect_timeout=180, read_timeout=180)
client_redshift = session.client("redshift-data", config=config)

def lambda_handler(event, context):
    query_str = "create table db.lambda_func (id int);"
    try:
        result = client_redshift.execute_statement(
            Database='db',
            SecretArn=secret_arn,
            Sql=query_str,
            ClusterIdentifier=cluster_id
        )
        print("API successfully executed")
        print('RESULT: ', result)
        stmtid = result['Id']
        response = client_redshift.describe_statement(Id=stmtid)
        print('RESPONSE: ', response)
    except Exception as e:
        raise Exception(e)
    return str(result)
RESULT: {'ClusterIdentifier': 'redshift-datalake', 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()), 'Database': 'db', 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'SecretArn': 'arn:aws:secretsmanager:<secret_here>', 'ResponseMetadata': {'RequestId': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'content-type': 'application/x-amz-json-1.1', 'content-length': '249', 'date': 'Thu, 16 Feb 2023 16:56:09 GMT'}, 'RetryAttempts': 0}}

RESPONSE: {'ClusterIdentifier': 'redshift-datalake', 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()), 'Duration': -1, 'HasResultSet': False, 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'QueryString': 'create table db.lambda_func (id int);', 'RedshiftPid': 0, 'RedshiftQueryId': 0, 'ResultRows': -1, 'ResultSize': -1, 'SecretArn': 'arn:aws:secretsmanager:<secret_here>', 'Status': 'PICKED', 'UpdatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 904000, tzinfo=tzlocal()), 'ResponseMetadata': {'RequestId': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4', 'content-type': 'application/x-amz-json-1.1', 'content-length': '437', 'date': 'Thu, 16 Feb 2023 16:56:09 GMT'}, 'RetryAttempts': 0}}
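Note the 'Status': 'PICKED' in the RESPONSE above: the Redshift Data API is asynchronous, so describe_statement ran before the statement had actually executed, and any failure (permissions, a missing schema, etc.) would never be printed. A minimal sketch of a polling check, reusing client_redshift and stmtid from the code above:

# Hedged sketch: poll until the statement reaches a terminal state so that
# a FAILED status (and its error message) becomes visible.
import time

def wait_for_statement(client, statement_id, poll_seconds=1, timeout_seconds=60):
    """Poll the Redshift Data API until the statement finishes or fails."""
    elapsed = 0
    while elapsed < timeout_seconds:
        desc = client.describe_statement(Id=statement_id)
        status = desc['Status']
        if status == 'FINISHED':
            return desc
        if status in ('FAILED', 'ABORTED'):
            raise RuntimeError('Statement %s: %s' % (status, desc.get('Error')))
        time.sleep(poll_seconds)
        elapsed += poll_seconds
    raise TimeoutError('Statement did not finish within %s seconds' % timeout_seconds)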

Pyomo Debug No Solution

I have a very strange problem. I made a notebook containing my model, a very simple linear problem with some constraints, and it works, i.e. the solution is found. The environment is made with conda.
In this case I get:
{'Problem': [{'Name': 'unknown',
              'Lower bound': 23.02222576,
              'Upper bound': 23.02222576,
              'Number of objectives': 1,
              'Number of constraints': 395,
              'Number of variables': 961,
              'Number of nonzeros': 200,
              'Sense': 'minimize'}],
 'Solver': [{'Status': 'ok',
             'User time': -1.0,
             'System time': 0.02,
             'Wallclock time': 0.01,
             'Termination condition': 'optimal',
             'Termination message': 'Model was solved to optimality (subject to tolerances), and an optimal solution is available.',
             'Statistics': {'Branch and bound': {'Number of bounded subproblems': None,
                                                 'Number of created subproblems': None},
                            'Black box': {'Number of iterations': 448}},
             'Error rc': 0,
             'Time': 0.020696401596069336}],
 'Solution': [OrderedDict([('number of solutions', 0), ('number of solutions displayed', 0)])]}
Now I copied the notebook code into my big Python service. Same code, more or less.
The solver is cbc, installed system-wide.
The solution is never found.
Before posting my code, I would like to understand how to debug this.
In the second case, I always get:
{'Problem': [{'Name': 'unknown',
              'Lower bound': None,
              'Upper bound': inf,
              'Number of objectives': 1,
              'Number of constraints': 395,
              'Number of variables': 961,
              'Number of nonzeros': 202,
              'Sense': 'minimize'}],
 'Solver': [{'Status': 'warning',
             'User time': -1.0,
             'System time': 0.01,
             'Wallclock time': 0.01,
             'Termination condition': 'infeasible',
             'Termination message': 'Model was proven to be infeasible.',
             'Statistics': {'Branch and bound': {'Number of bounded subproblems': 0,
                                                 'Number of created subproblems': 0},
                            'Black box': {'Number of iterations': 0}},
             'Error rc': 0,
             'Time': 0.024977684020996094}],
 'Solution': [OrderedDict([('number of solutions', 1), ('number of solutions displayed', 1)]),
              {'Status': 'unknown', 'Problem': {}, 'Objective': {}, 'Variable': {}, 'Constraint': {}}]}
The number of constraints and variables is the same in both cases, and I could swear the code is the same. The only difference is the cbc solver, which in the first case comes from conda.
I checked the variables with model.pprint(), and in the second case I saw that some of them never get increased or changed.
Another strange thing is that the number of iterations is zero in the second case.
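For this kind of situation, a minimal debugging sketch, assuming the Pyomo model object is available as model (log_infeasible_constraints is a Pyomo utility; the exact option names may vary by version):

# Hedged sketch: surface where the infeasibility comes from.
import logging
from pyomo.environ import SolverFactory
from pyomo.util.infeasibility import log_infeasible_constraints

solver = SolverFactory('cbc')
results = solver.solve(model, tee=True)  # tee=True streams the raw cbc log

# Log every constraint violated at the current variable values.
logging.basicConfig(level=logging.INFO)
log_infeasible_constraints(model, log_expression=True, log_variables=True)

# Writing the model to an LP file with readable names makes it possible to
# diff the model generated in the notebook against the one in the service.
model.write('debug_model.lp', io_options={'symbolic_solver_labels': True})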
[EDIT]
My fault. The configuration parameters were different between the two environments.

Trouble trying to get size of Celery queue using redis-cli (for a Django app)

I'm using Django==2.2.24 and celery[redis]==4.4.7.
I want to get the length of my celery queues, so that I can use this information for autoscaling purposes in AWS EC2.
I found the following piece of documentation:
https://docs.celeryproject.org/en/v4.4.7/userguide/monitoring.html#redis
Redis
If you’re using Redis as the broker, you can monitor the Celery
cluster using the redis-cli command to list lengths of queues.
Inspecting queues
Finding the number of tasks in a queue:
$ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME
The default queue is named celery. To get all available queues,
invoke:
$ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*
Note
Queue keys only exists when there are tasks in them, so if a key doesn’t exist it simply means there are no messages in that queue.
This is because in Redis a list with no elements in it is
automatically removed, and hence it won’t show up in the keys command
output, and llen for that list returns 0. Also, if you’re using Redis
for other purposes, the output of the keys command will include
unrelated values stored in the database. The recommended way around
this is to use a dedicated DATABASE_NUMBER for Celery, you can also
use database numbers to separate Celery applications from each other
(virtual hosts), but this won’t affect the monitoring events used by
for example Flower as Redis pub/sub commands are global rather than
database based.
Now, my Celery configuration (in Django) has the following relevant part:
from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('email', Exchange('email'), routing_key='email'),
    Queue('haystack', Exchange('haystack'), routing_key='haystack'),
    Queue('thumbnails', Exchange('thumbnails'), routing_key='thumbnails'),
)
So I tried this:
$ redis-cli -n 0 -h ${MY_REDIS_HOST} -p 6379 llen haystack
(yes, celery is configured to use redis database number 0)
I tried all 4 of my queues, and I always get 0, which is simply not possible. Some of these queues are usually very active, or my website wouldn't be working properly.
One key part of the documentation is that I can list the available queues, so I tried it:
$ redis-cli -n 0 -h ${MY_REDIS_HOST} -p 6379 keys \*
And I get about 20,000 lines of something like this:
celery-task-meta-b30fb605-d7b6-48db-b8cd-493458566876
celery-task-meta-e10ec56c-6601-420b-9f87-de6455968e76
celery-task-meta-14558a3a-1153-4f02-91f8-614bc29f6775
celery-task-meta-4c266854-512b-48af-8356-c786c507eb9e
celery-task-meta-e4ad4298-3d74-4986-8831-4c4d3c3e79f2
celery-task-meta-dfab0202-3975-46ce-9670-0d4cf3e278db
celery-task-meta-494fcb21-5995-495d-8980-0d8aa7edf0b8
celery-task-meta-345c4857-87f9-4e3f-8028-a6ef8cf93f5d
celery-task-meta-a4a48d00-68dc-4d30-87dd-869d2a20c347
celery-task-meta-d14fc394-6415-442b-8a5d-c9a4f37a9509
If I exclude all the celery-task-meta lines:
$ redis-cli -n 0 -h ${MY_REDIS_HOST} -p 6379 keys \* | grep -v celery-task-meta
I get this:
_kombu.binding.celeryev
_kombu.binding.default
_kombu.binding.thumbnails
_kombu.binding.email
unacked
_kombu.binding.celery.pidbox
_kombu.binding.haystack
unacked_index
_kombu.binding.reply.celery.pidbox
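(For reference: the _kombu.binding.* keys are the sets kombu uses to record bindings, not the queues themselves; an actual queue is a plain Redis list named after the queue, and it only exists while messages are waiting. A hedged sketch with redis-py to check which keys are really lists - the hostname is a placeholder:)

# Hedged sketch: print every key that is actually a Redis list, with its length.
# Assumes redis-py and the same host/db as the redis-cli commands above.
import redis

r = redis.Redis(host='MY_REDIS_HOST', port=6379, db=0)

for key in r.scan_iter('*'):
    if r.type(key) == b'list':  # queues are plain lists; bindings are sets
        print(key, r.llen(key))

If no queue-named list keys show up, the workers may simply be draining tasks as fast as they arrive, in which case llen returning 0 at the moment of sampling is expected.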
I tried to use the celery CLI to get the information, and this is some relevant output:
$ celery --app my-app inspect active_queues
-> celery#683a8e8bc84f: OK
* {'name': 'thumbnails', 'exchange': {'name': 'thumbnails', 'type': 'direct', 'arguments': None, 'durable': True, 'passive': False, 'auto_delete': False, 'delivery_mode': None, 'no_declare': False}, 'routing_key': 'thumbnails', 'queue_arguments': None, 'binding_arguments': None, 'consumer_arguments': None, 'durable': True, 'exclusive': False, 'auto_delete': False, 'no_ack': False, 'alias': None, 'bindings': [], 'no_declare': None, 'expires': None, 'message_ttl': None, 'max_length': None, 'max_length_bytes': None, 'max_priority': None}
-> celery#bf11d4c3bd6f: OK
* {'name': 'email', 'exchange': {'name': 'email', 'type': 'direct', 'arguments': None, 'durable': True, 'passive': False, 'auto_delete': False, 'delivery_mode': None, 'no_declare': False}, 'routing_key': 'email', 'queue_arguments': None, 'binding_arguments': None, 'consumer_arguments': None, 'durable': True, 'exclusive': False, 'auto_delete': False, 'no_ack': False, 'alias': None, 'bindings': [], 'no_declare': None, 'expires': None, 'message_ttl': None, 'max_length': None, 'max_length_bytes': None, 'max_priority': None}
-> celery#86151417b361: OK
* {'name': 'default', 'exchange': {'name': 'default', 'type': 'direct', 'arguments': None, 'durable': True, 'passive': False, 'auto_delete': False, 'delivery_mode': None, 'no_declare': False}, 'routing_key': 'default', 'queue_arguments': None, 'binding_arguments': None, 'consumer_arguments': None, 'durable': True, 'exclusive': False, 'auto_delete': False, 'no_ack': False, 'alias': None, 'bindings': [], 'no_declare': None, 'expires': None, 'message_ttl': None, 'max_length': None, 'max_length_bytes': None, 'max_priority': None}
-> celery#9a5360a82f14: OK
* {'name': 'haystack', 'exchange': {'name': 'haystack', 'type': 'direct', 'arguments': None, 'durable': True, 'passive': False, 'auto_delete': False, 'delivery_mode': None, 'no_declare': False}, 'routing_key': 'haystack', 'queue_arguments': None, 'binding_arguments': None, 'consumer_arguments': None, 'durable': True, 'exclusive': False, 'auto_delete': False, 'no_ack': False, 'alias': None, 'bindings': [], 'no_declare': None, 'expires': None, 'message_ttl': None, 'max_length': None, 'max_length_bytes': None, 'max_priority': None}
and
$ celery --app my-app inspect scheduled
-> celery#683a8e8bc84f: OK
- empty -
-> celery#86151417b361: OK
- empty -
-> celery#bf11d4c3bd6f: OK
- empty -
-> celery#9a5360a82f14: OK
- empty -
The command above seems to work well: if there are active tasks, they are shown there, even though in my copy/paste it says empty.
So, does anybody know what I might be doing wrong and why I can't get the real size of my queues?
Thanks!

Flask/Jinja2: escaping backslash from JSON

I have a text field in PostgreSQL which was saved as JSON.
I am running query.all() and then passing the result to the template (the result is multiple rows with a particular field in JSON).
Inside the template/Jinja2 I am running a for loop to print the required field, which looks like this.
Is there a way to have this available as JSON in Jinja2?
"{\"base\": {\"id\": 2, \"name\": \"Traditional Pulao Rice\"}, \"dessert\": {\"id\": 9, \"name\": \"Ladoo\"}, \"protein\": {\"id\": 5, \"name\": \"Chicken Malai\"}, \"side\": {\"id1\": 7, \"id2\": 8, \"name1\": \"Channa Chaat\", \"name2\": \"Baked Sweet Potato\"}}"
You can use json.loads on your available JSON and then you can use it in Jinja,
e.g.: json.loads("{\"base\": {\"id\": 2, \"name\": \"Traditional Pulao Rice\"}, \"dessert\": {\"id\": 9, \"name\": \"Ladoo\"}, \"protein\": {\"id\": 5, \"name\": \"Chicken Malai\"}, \"side\": {\"id1\": 7, \"id2\": 8, \"name1\": \"Channa Chaat\", \"name2\": \"Baked Sweet Potato\"}}")
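A minimal sketch of how that could look in the Flask view, so the template receives real dicts (the Meal model and its menu column are hypothetical names):

# Hedged sketch: parse the stored JSON string in the view, not the template.
import json
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/meals')
def meals():
    rows = Meal.query.all()  # hypothetical SQLAlchemy model with a text column `menu`
    parsed = [json.loads(row.menu) for row in rows]
    return render_template('meals.html', meals=parsed)

In the template the fields are then plain dict lookups, e.g. {{ meal['base']['name'] }} inside {% for meal in meals %}.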

Dealing with None in a list-of-list-of-dictionary

Over this for loop:
for i in range(len(data_df)):
    for j in range(len(data_df.iloc[i].incident_updates)):
        print(data_df.iloc[i].incident_updates[j].get("affected_components"))
I get the response like this:
[{u'old_status': u'operational', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'operational'}]
[{u'old_status': u'partial_outage', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'operational'}]
[{u'old_status': u'operational', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'partial_outage'}]
[{u'old_status': u'partial_outage', u'code': u'31tyncvy5ng7', u'name': u'AB', u'new_status': u'operational'}]
[{u'old_status': u'operational', u'code': u'31tyncvy5ng7', u'name': u'AB', u'new_status': u'partial_outage'}]
None
None
[{u'old_status': u'partial_outage', u'code': u'xvgbw19sgbrj', u'name': u'AC', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'31tyncvy5ng7', u'name': u'AC', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'zg1gfkycdf6p', u'name': u'AC', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'lvj41y83ghdg', u'name': u'AD', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'2qdjrpnyn4mb', u'name': u'AB', u'new_status': u'operational'}, {u'old_status': u'partial_outage', u'code': u'24zyv2d3p2jf', u'name': u'AC', u'new_status': u'operational'}]
None
[{u'old_status': u'operational', u'code': u'31tyncvy5ng7', u'name': u'AA', u'new_status': u'major_outage'}]
None
None
...
So if you notice, there are some Nones. I understand there is no value there, so I wanted to convert each None into a list of dictionaries that looks like this: [{'None': 'None'}] - so that it matches the other lists in the output, so I can save it to another list, and so I can get the name from it (using list[i][j].get()). I am sure you get the point.
Here is the problem:
I was able to convert this for loop's output to a list-of-list-of-dictionaries (using list.append()) and also convert the None entries to the string 'NONE', using
affected_components_variable = ["NONE" if val is None else val for val in affected_components_variable]
affected_components_variable is the list that contains the aforementioned for-loop output. It looks like this:
[[{u'old_status': u'operational', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'operational'}]
[{u'old_status': u'partial_outage', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'operational'}]
[{u'old_status': u'operational', u'code': u'rwdp37x0698r', u'name': u'AA', u'new_status': u'partial_outage'}]
...
'NONE'
...
This does not serve my purpose, because when I do list[i][j].get() and it reaches a 'NONE', it throws an error that reads AttributeError: 'str' object has no attribute 'get' - for obvious reasons.
To avoid all that, I could do this:
for i in range(len(data_df)):
    for j in range(len(data_df.iloc[i].incident_updates)):
        for k in range(len(data_df.iloc[i].incident_updates[j].get("affected_components"))):
            print(data_df.iloc[i].incident_updates[j].get("affected_components")[k].get("name"))
and that throws the same error, for the same obvious reason.
So, what I think is that I need to insert [{'None': 'None'}] at the list level, so that when I do list[i][j].get() I get a proper response.
Any help is greatly appreciated.
Why not set a variable like
noneResponse = [{u'None': u'None'}]
and then assign it when the value is None:
AffectedComponentsVariable = data_df.iloc[i].incident_updates[j].get("affected_components")
AffectedComponentsVariable = noneResponse if AffectedComponentsVariable is None else AffectedComponentsVariable
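Putting it together, a minimal sketch of the full loop with the substitution applied inline, assuming data_df from the question:

# Hedged sketch: substitute the placeholder while building the list, so every
# element is a list of dicts and .get("name") never hits a bare string.
none_response = [{u'None': u'None'}]

affected_components_variable = []
for i in range(len(data_df)):
    for j in range(len(data_df.iloc[i].incident_updates)):
        components = data_df.iloc[i].incident_updates[j].get("affected_components")
        affected_components_variable.append(components if components is not None else none_response)

for components in affected_components_variable:
    for component in components:
        # placeholder rows have no "name" key, so .get returns None
        # instead of raising AttributeError
        print(component.get("name"))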