Pyomo Debug No Solution

I have a very strange problem. I made a notebook containing my model, a very simple linear problem with some constraints, and it works, i.e. the solution is found. The environment is made with conda.
In this case I get:
{'Problem': [{'Name': 'unknown',
  'Lower bound': 23.02222576,
  'Upper bound': 23.02222576,
  'Number of objectives': 1,
  'Number of constraints': 395,
  'Number of variables': 961,
  'Number of nonzeros': 200,
  'Sense': 'minimize'}],
 'Solver': [{'Status': 'ok',
  'User time': -1.0,
  'System time': 0.02,
  'Wallclock time': 0.01,
  'Termination condition': 'optimal',
  'Termination message': 'Model was solved to optimality (subject to tolerances), and an optimal solution is available.',
  'Statistics': {'Branch and bound': {'Number of bounded subproblems': None,
    'Number of created subproblems': None},
   'Black box': {'Number of iterations': 448}},
  'Error rc': 0,
  'Time': 0.020696401596069336}],
 'Solution': [OrderedDict([('number of solutions', 0),
   ('number of solutions displayed', 0)])]}
Now I copied the notebook code into my big Python service. Same code, more or less.
The solver is cbc, installed system-wide.
The solution is never found.
Before posting my code, I would like to understand how to debug this.
In the second case, I always get:
{'Problem': [{'Name': 'unknown',
  'Lower bound': None,
  'Upper bound': inf,
  'Number of objectives': 1,
  'Number of constraints': 395,
  'Number of variables': 961,
  'Number of nonzeros': 202,
  'Sense': 'minimize'}],
 'Solver': [{'Status': 'warning',
  'User time': -1.0,
  'System time': 0.01,
  'Wallclock time': 0.01,
  'Termination condition': 'infeasible',
  'Termination message': 'Model was proven to be infeasible.',
  'Statistics': {'Branch and bound': {'Number of bounded subproblems': 0,
    'Number of created subproblems': 0},
   'Black box': {'Number of iterations': 0}},
  'Error rc': 0,
  'Time': 0.024977684020996094}],
 'Solution': [OrderedDict([('number of solutions', 1),
   ('number of solutions displayed', 1)]),
  {'Status': 'unknown', 'Problem': {}, 'Objective': {}, 'Variable': {}, 'Constraint': {}}]}
The number of constraints and variables is the same in both cases, and I could swear the code is the same. The only difference is the cbc solver, which in the first case comes from conda.
I checked the variables with model.pprint(), and in the second case I saw that some of them are never increased or changed.
Another strange thing is that the number of iterations is zero in the second case.
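For reference, the outcome can be checked programmatically rather than by reading the raw results dict; a minimal sketch, assuming the model object is named model, the objective component is named objective, and cbc is on the PATH:

import pyomo.environ as pyo
from pyomo.opt import SolverStatus, TerminationCondition

results = pyo.SolverFactory('cbc').solve(model, tee=True)  # tee=True echoes the CBC log

if results.solver.termination_condition == TerminationCondition.infeasible:
    print("CBC proved the model infeasible")
elif results.solver.status == SolverStatus.ok:
    print("Objective:", pyo.value(model.objective))  # 'objective' is an assumed component name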
[EDIT]
My fault. The configuration parameters were different between the two environments.
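For anyone hitting a similar discrepancy: a general way to catch configuration drift like this is to dump exactly what Pyomo hands to the solver in each environment and diff the files, and to ask Pyomo which constraints are violated. A minimal sketch, assuming the model object is named model:

import logging
logging.basicConfig(level=logging.INFO)

# Write the problem exactly as it is sent to CBC; run this in both
# environments and diff the two .lp files.
model.write("model_env1.lp", io_options={"symbolic_solver_labels": True})

# Log constraints that are violated at the current variable values.
from pyomo.util.infeasible import log_infeasible_constraints
log_infeasible_constraints(model)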

Related

Nothing being written into the Redshift table [closed]

I have this AWS Lambda function for writing to Redshift. It executes without error but doesn't actually create the table. Does anyone have any thoughts on what might be wrong or what checks I could perform?
import json
import boto3
import botocore.session as bc
from botocore.client import Config

print('Loading function')

secret_arn = 'arn:aws:secretsmanager:<some secret stuff here>'
cluster_id = 'cluster_id'

bc_session = bc.get_session()
region = boto3.session.Session().region_name
session = boto3.Session(
    botocore_session=bc_session,
    region_name=region,
)
config = Config(connect_timeout=180, read_timeout=180)
client_redshift = session.client("redshift-data", config=config)

def lambda_handler(event, context):
    query_str = "create table db.lambda_func (id int);"
    try:
        # execute_statement only submits the SQL; it returns before the
        # statement has actually run
        result = client_redshift.execute_statement(
            Database='db',
            SecretArn=secret_arn,
            Sql=query_str,
            ClusterIdentifier=cluster_id,
        )
        print("API successfully executed")
        print('RESULT: ', result)
        stmtid = result['Id']
        response = client_redshift.describe_statement(Id=stmtid)
        print('RESPONSE: ', response)
    except Exception:
        raise  # re-raise with the original traceback
    return str(result)
RESULT: {'ClusterIdentifier': 'redshift-datalake',
 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()),
 'Database': 'db',
 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa',
 'SecretArn': 'arn:aws:secretsmanager:<secret_here>',
 'ResponseMetadata': {'RequestId': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '249',
   'date': 'Thu, 16 Feb 2023 16:56:09 GMT'},
  'RetryAttempts': 0}}
RESPONSE: {'ClusterIdentifier': 'redshift-datalake',
 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()),
 'Duration': -1,
 'HasResultSet': False,
 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa',
 'QueryString': 'create table db.lambda_func (id int);',
 'RedshiftPid': 0,
 'RedshiftQueryId': 0,
 'ResultRows': -1,
 'ResultSize': -1,
 'SecretArn': 'arn:aws:secretsmanager:<secret_here>',
 'Status': 'PICKED',
 'UpdatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 904000, tzinfo=tzlocal()),
 'ResponseMetadata': {'RequestId': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '437',
   'date': 'Thu, 16 Feb 2023 16:56:09 GMT'},
  'RetryAttempts': 0}}
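Note that describe_statement here runs immediately after submission, which is why Status is still PICKED: execute_statement is asynchronous, so the CREATE TABLE can still fail (for example on permissions or a bad schema name) after the Lambda has returned. A polling sketch, assuming the client_redshift and stmtid names from the code above:

import time

while True:
    desc = client_redshift.describe_statement(Id=stmtid)
    if desc['Status'] in ('FINISHED', 'FAILED', 'ABORTED'):
        break
    time.sleep(1)  # wait for the statement to reach a terminal state

if desc['Status'] == 'FAILED':
    print('Statement failed:', desc.get('Error'))  # Redshift's error message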

Problems with Image Label Adjustment Job in Amazon SageMaker Ground Truth

I'm trying to create an Image Label Adjustment Job in Ground Truth and I'm having some trouble. I have a dataset of images with pre-made bounding boxes. An external Python script creates the dataset.manifest file with the JSON record for each image. Here are the first four lines of that manifest file:
{"source-ref": "s3://automatic-defect-detection/LM-WNB1-M-0000126254-camera_2_0022.jpg", "bounding-box": {"image_size": [{"width": 2048, "height": 1536, "depth": 3}], "annotations": [{"class_id": 0, "width": 80, "height": 80, "top": 747, "left": 840}]}, "bounding-box-metadata": {"class-map": {"0": "KK"}, "type": "groundtruth/object-detection", "human-annotated": "yes"}}
{"source-ref": "s3://automatic-defect-detection/LM-WNB1-M-0000126259-camera_2_0028.jpg", "bounding-box": {"image_size": [{"width": 2048, "height": 1536, "depth": 3}], "annotations": [{"class_id": 0, "width": 80, "height": 80, "top": 1359, "left": 527}]}, "bounding-box-metadata": {"class-map": {"0": "KK"}, "type": "groundtruth/object-detection", "human-annotated": "yes"}}
{"source-ref": "s3://automatic-defect-detection/LM-WNB1-M-0000126256-camera_3_0006.jpg", "bounding-box": {"image_size": [{"width": 2048, "height": 1536, "depth": 3}], "annotations": [{"class_id": 3, "width": 80, "height": 80, "top": 322, "left": 1154}, {"class_id": 3, "width": 80, "height": 80, "top": 633, "left": 968}]}, "bounding-box-metadata": {"class-map": {"3": "FF"}, "type": "groundtruth/object-detection", "human-annotated": "yes"}}
{"source-ref": "s3://automatic-defect-detection/LM-WNB1-M-0000126253-camera_2_0019.jpg", "bounding-box": {"image_size": [{"width": 2048, "height": 1536, "depth": 3}], "annotations": [{"class_id": 2, "width": 80, "height": 80, "top": 428, "left": 1058}]}, "bounding-box-metadata": {"class-map": {"2": "DD"}, "type": "groundtruth/object-detection", "human-annotated": "yes"}}
Now the problem is that I'm creating private jobs in Amazon SageMaker to try it out. I have the manifest file and the images in an S3 bucket, and it actually kind of works. I select the input manifest and activate the "Existing-labels display options". The existing labels for the bounding boxes do not appear automatically, so I have to enter them manually (I don't know why), but if I do that and preview before creating the adjustment job, the bounding boxes appear perfectly and I can adjust them. The problem is that, with me being the only worker invited to the job, the job never appears for me to work on; it just auto-completes. I can see afterwards that the images are there with my pre-made bounding boxes, but the job never shows up so I can adjust those boxes. I don't have the "Automated data labeling" option activated. Is there something missing in my manifest file?
There can be multiple reasons for this. First of all, the automated data labeling option is not supported for label adjustment and verification tasks, so that's ruled out.
It looks like the adjustment job may not be set up properly. Some things to check:
Have you specified the task timeout and task expiration time? If these values are too low, the tasks will expire before anybody can pick them up.
Have you checked the "I want to display existing labels from the dataset for this job." box? It should be checked in your case.
Are your existing labels fetched properly? If not, you either need to review your manifest file or provide the label values manually (which I guess you are doing); a quick validation sketch follows this list.
Since you are the only worker in the workforce, do you have the correct permissions to access the labeling task?
How many images do you have? Have you set a minimum batch size when setting up the label adjustment job?
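On the point about labels not being fetched, a quick way to rule out a malformed manifest is to validate it line by line. A sketch, assuming the manifest was downloaded locally as dataset.manifest and uses the attribute names from the question:

import json

required = ('source-ref', 'bounding-box', 'bounding-box-metadata')
with open('dataset.manifest') as f:
    for lineno, line in enumerate(f, start=1):
        record = json.loads(line)  # raises ValueError on a malformed line
        missing = [key for key in required if key not in record]
        if missing:
            print(f'line {lineno}: missing {missing}')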

Empty datapoints received while retrieving AWS S3 Request metrics

Following is my payload:

import boto3
from datetime import datetime

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'AllStorageTypes'
        }
    ],
    MetricName='BytesUploaded',
    StartTime=datetime(2021, 3, 11),
    EndTime=datetime(2021, 3, 14),
    Period=86400,
    Statistics=['Maximum', 'Average']
)
and this is the response
{'Label': 'BytesUploaded', 'Datapoints': [], 'ResponseMetadata': {'RequestId': '1c6b02e9-9a8f-48e9-a2fd-1e21fd31a096', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '1c6b02e9-9a8f-48e9-a2fd-1e21fd31a096', 'content-type': 'text/xml', 'content-length': '336', 'date': 'Tue, 16 Mar 2021 05:51:05 GMT'}, 'RetryAttempts': 0}}
From the AWS Console, I'm able to see datapoints for the same timestamps. I tried increasing the timeframe, but it still gives the same result.
Can someone help me please? Thanks.
First, if the Period value is the refresh period, you may need to reduce it; in the example I checked, the period was 300.
Second, try changing EndTime, like:
from datetime import datetime
from datetime import timedelta

EndTime=datetime.utcnow()  # inside the get_metric_statistics call
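Putting both suggestions together, the adjusted call might look like this (a sketch; bucket, metric, and dimensions are taken from the question):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {'Name': 'BucketName', 'Value': 'foo-bar'},
        {'Name': 'StorageType', 'Value': 'AllStorageTypes'},
    ],
    MetricName='BytesUploaded',
    StartTime=datetime.utcnow() - timedelta(days=3),
    EndTime=datetime.utcnow(),
    Period=300,  # 5-minute buckets, as in the example mentioned above
    Statistics=['Maximum', 'Average'],
)
print(response['Datapoints'])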

Batch in Postman

I need to post data from Postman, but there is a results limit (200 results per query). I have 45,000 results, so I need to run the query many times to get all the data.
"select" : "(**)", "start": 0, "count": 200,
"select" : "(**)", "start": 201, "count": 401,
"select" : "(**)", "start": 402, "count": 502,
"select" : "(**)", "start": 503, "count": 603
Is there any way to run the query in batches of 1,000, for example?
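Postman's Collection Runner can repeat a request, but if scripting is an option, the paging fits in a short loop. A sketch, assuming start is a zero-based offset and count is the page size; the URL and response shape are made up for illustration:

import requests

url = 'https://example.com/api/search'  # hypothetical endpoint
all_results = []
for start in range(0, 45000, 200):
    payload = {'select': '(**)', 'start': start, 'count': 200}
    resp = requests.post(url, json=payload)
    resp.raise_for_status()
    all_results.extend(resp.json()['results'])  # assumed response key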

How to access a list placed inside dictionaries in Python

Example:
{'positions': {u'_total': 1,
  u'values': [{u'startDate': {u'year': 2000, u'month': 7},
    u'title': u'ABCD',
    u'company': {u'industry': u'ABCD',
     u'size': u'1001-5001',
     u'type': u'ABCD',
     u'id': 1234,
     u'name': u'ABCD'},
    u'summary': u'ABCD',
    u'isCurrent': ...,
    u'id': ...}]}}
I am trying to access the "company" dictionary.
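Assuming the structure above is stored in a variable named data, the 'company' dictionary sits inside the first element of the 'values' list:

company = data['positions']['values'][0]['company']
print(company['name'])

# Or iterate, in case 'values' holds more than one position:
for position in data['positions']['values']:
    print(position['company']['id'], position['company']['name'])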