Batch Transform Error for deepAR algorithm - amazon-web-services
Describe the bug
Hi,
I'm running Amazon SageMaker Batch Transform against the DeepAR model trained in the tutorial
Stock Price Prediction, using SageMaker DeepAR.
Batch transform code
from sagemaker.transformer import Transformer

# Create a SageMaker model from the completed training job
model = session.create_model_from_job(
    estimator._current_job_name,
    name='{}-test'.format(estimator._current_job_name))

test_transformer = Transformer(
    model,           # model name
    1,               # instance count
    'ml.m4.xlarge',  # instance type
    output_path='s3://sagemaker-eu-west-1-xxxxxxxxxxxx/sagemaker/stock-prediction/output',
    sagemaker_session=session,
    strategy='BatchStrategy',  # note: the SDK documents 'MultiRecord'/'SingleRecord' as the valid values here
    assemble_with='Line')

test_transformer.transform(
    's3://sagemaker-eu-west-1-xxxxxxxxxxxx/sagemaker/stock-prediction/source/D/test/test.json',
    split_type='Line')
test_transformer.wait()
JSON input
{"start": "2018-07-24 00:00:00", "target": [81.73, 79.86, 83.39, 82.91, 82.91, 82.97, 82.97, 82.69, 81.5, 81.18, 82.34, 82.34, 83.51, 83.51, 84.05, 84.48, 84.81, 83.58, 83.58, 83.29, 83.29, 82.45, 81.31, 81.8, 81.41, 81.41, 81.75, 81.75, 82.93, 82.29, 81.08, 81.29, 81.29, 83.19, 83.19, 84.39, 84.23, 84.6, 83.41, 83.41, 82.79, 82.79, 81.88, 81.25, 80.73, 81.07, 81.07, 81.1, 81.1, 80.76, 81.32, 82.47, 82.93, 82.93, 82.54, 82.54, 82.67, 83.54, 85.32, 85.77, 85.77, 83.5, 83.5, 79.0, 79.03, 79.0, 77.71, 77.71, 78.14, 78.14, 78.7, 78.26, 78.26, 77.68, 77.68, 76.87, 76.87, 76.39, 75.35, 74.3, 74.49, 74.49, 75.21, 75.21, 75.75, 75.24, 75.13, 74.64, 74.64, 74.44, 74.44, 73.5, 72.69, 74.5, 75.02, 75.02, 76.4, 76.4], "dynamic_feat": [[82.2, 81.31, 83.39, 84.23, 84.23, 83.12, 83.12, 83.27, 82.28, 81.66, 83.02, 83.02, 84.38, 84.38, 84.79, 84.68, 85.42, 84.19, 84.19, 84.02, 84.02, 83.51, 82.93, 82.15, 81.78, 81.78, 82.45, 82.45, 83.15, 83.66, 81.76, 81.46, 81.46, 83.33, 83.33, 85.22, 84.68, 85.75, 83.98, 83.98, 82.89, 82.89, 82.95, 82.11, 81.84, 81.19, 81.19, 81.5, 81.5, 81.05, 81.45, 83.35, 83.25, 83.25, 83.05, 83.05, 83.81, 83.82, 85.48, 86.74, 86.74, 85.26, 85.26, 83.2, 79.03, 79.16, 78.44, 78.44, 78.7, 78.7, 79.44, 78.99, 78.99, 77.94, 77.94, 77.39, 77.39, 76.78, 76.0, 75.21, 75.52, 75.52, 75.86, 75.86, 75.98, 75.47, 76.4, 75.2, 75.2, 75.94, 75.94, 74.53, 74.09, 74.97, 75.02, 75.02, 79.38, 79.38], [81.06, 79.38, 81.22, 82.45, 82.45, 82.51, 82.51, 81.97, 80.88, 79.22, 82.11, 82.11, 82.86, 82.86, 84.05, 83.53, 84.57, 82.9, 82.9, 83.28, 83.28, 81.92, 80.79, 81.3, 80.46, 80.46, 81.73, 81.73, 81.95, 81.18, 80.92, 80.76, 80.76, 82.43, 82.43, 84.26, 83.52, 83.17, 83.21, 83.21, 82.15, 82.15, 81.44, 81.03, 80.43, 80.46, 80.46, 80.65, 80.65, 80.18, 80.37, 82.28, 82.42, 82.42, 82.04, 82.04, 82.26, 82.56, 83.85, 85.26, 85.26, 83.4, 83.4, 78.51, 77.35, 78.18, 77.32, 77.32, 78.09, 78.09, 78.0, 78.05, 78.05, 77.24, 77.24, 76.72, 76.72, 75.66, 75.12, 73.74, 74.3, 74.3, 73.8, 73.8, 75.25, 74.65, 75.06, 73.08, 73.08, 74.07, 74.07, 72.85, 72.44, 73.48, 73.62, 73.62, 75.33, 75.33], [81.13, 81.1, 81.59, 84.18, 84.18, 82.55, 82.55, 82.6, 81.88, 79.95, 82.32, 82.32, 82.96, 82.96, 84.56, 84.09, 84.77, 83.83, 83.83, 83.48, 83.48, 83.46, 82.86, 81.86, 81.46, 81.46, 82.1, 82.1, 82.06, 83.43, 81.76, 81.26, 81.26, 82.44, 82.44, 84.85, 84.57, 83.59, 83.63, 83.63, 82.46, 82.46, 82.68, 81.45, 81.48, 80.85, 80.85, 80.67, 80.67, 81.02, 81.25, 82.42, 82.58, 82.58, 82.14, 82.14, 83.5, 82.72, 83.93, 86.52, 86.52, 85.21, 85.21, 82.98, 77.98, 78.19, 77.59, 77.59, 78.31, 78.31, 78.85, 78.55, 78.55, 77.64, 77.64, 77.16, 77.16, 76.65, 75.89, 73.86, 74.63, 74.63, 74.49, 74.49, 75.42, 75.27, 76.19, 74.16, 74.16, 75.89, 75.89, 73.16, 73.42, 73.59, 73.86, 73.86, 75.48, 75.48]]}
{"start": "2018-07-24 00:00:00", "target": [196.9, 193.9, 197.8, 197.3, 197.3, 197.45, 197.45, 197.0, 189.85, 186.45, 184.45, 184.45, 186.65, 186.65, 188.15, 188.05, 189.0, 186.5, 186.5, 187.45, 187.45, 185.9, 182.9, 184.05, 182.65, 182.65, 184.9, 184.9, 185.3, 160.85, 153.95, 155.45, 155.45, 160.1, 160.1, 159.9, 160.35, 161.1, 158.05, 158.05, 156.6, 156.6, 155.15, 152.7, 151.4, 150.15, 150.15, 150.35, 150.35, 147.0, 149.6, 149.35, 150.7, 150.7, 151.1, 151.1, 150.0, 151.9, 158.0, 159.1, 159.1, 156.8, 156.8, 152.9, 151.7, 151.5, 149.95, 149.95, 151.5, 151.5, 152.75, 146.95, 146.95, 144.3, 144.3, 143.85, 143.85, 143.1, 139.2, 137.0, 140.15, 140.15, 138.4, 138.4, 140.6, 138.95, 137.75, 131.5, 131.5, 133.3, 133.3, 133.3, 129.45, 138.5, 136.1, 136.1, 143.55, 143.55], "dynamic_feat": [[197.25, 196.25, 198.1, 198.35, 198.35, 198.3, 198.3, 197.7, 196.45, 186.8, 187.75, 187.75, 188.65, 188.65, 189.35, 188.4, 190.05, 187.1, 187.1, 188.45, 188.45, 187.25, 186.35, 185.0, 183.9, 183.9, 186.05, 186.05, 185.65, 186.1, 158.95, 155.95, 155.95, 161.4, 161.4, 160.15, 160.7, 162.45, 159.55, 159.55, 157.6, 157.6, 157.55, 155.2, 153.55, 152.15, 152.15, 151.9, 151.9, 150.55, 149.85, 151.5, 151.35, 151.35, 151.8, 151.8, 151.7, 152.45, 158.4, 160.35, 160.35, 157.95, 157.95, 156.85, 151.9, 151.7, 151.1, 151.1, 152.75, 152.75, 154.25, 148.25, 148.25, 144.45, 144.45, 144.0, 144.0, 145.1, 142.0, 140.1, 141.3, 141.3, 140.55, 140.55, 142.3, 139.45, 140.75, 133.0, 133.0, 134.6, 134.6, 134.25, 133.85, 140.95, 136.2, 136.2, 146.65, 146.65], [194.0, 192.5, 195.7, 196.15, 196.15, 196.75, 196.75, 195.25, 189.3, 183.0, 184.05, 184.05, 185.25, 185.25, 187.7, 186.55, 187.85, 185.35, 185.35, 186.05, 186.05, 184.55, 182.05, 183.15, 181.25, 181.25, 184.6, 184.6, 184.0, 157.9, 153.35, 154.1, 154.1, 157.05, 157.05, 157.75, 159.3, 159.1, 157.95, 157.95, 155.95, 155.95, 154.45, 149.75, 151.35, 148.85, 148.85, 150.25, 150.25, 146.65, 146.75, 149.25, 149.25, 149.25, 150.25, 150.25, 148.8, 149.65, 154.0, 158.3, 158.3, 156.35, 156.35, 151.6, 148.5, 149.65, 148.95, 148.95, 150.4, 150.4, 151.15, 145.05, 145.05, 141.4, 141.4, 142.5, 142.5, 142.35, 138.8, 136.1, 137.6, 137.6, 137.5, 137.5, 139.35, 137.75, 137.7, 127.9, 127.9, 132.25, 132.25, 130.6, 129.15, 130.9, 133.1, 133.1, 135.9, 135.9], [194.15, 195.75, 196.05, 198.2, 198.2, 196.85, 196.85, 196.7, 195.65, 183.6, 186.9, 186.9, 185.8, 185.8, 188.5, 187.5, 188.3, 186.5, 186.5, 186.35, 186.35, 186.95, 186.15, 184.25, 182.6, 182.6, 184.9, 184.9, 184.0, 185.65, 158.5, 155.25, 155.25, 157.8, 157.8, 159.8, 159.9, 159.95, 158.9, 158.9, 156.9, 156.9, 157.25, 154.3, 151.85, 151.85, 151.85, 150.55, 150.55, 150.2, 148.6, 149.9, 151.15, 151.15, 150.5, 150.5, 151.55, 150.05, 154.1, 159.65, 159.65, 157.6, 157.6, 156.25, 150.75, 150.3, 150.15, 150.15, 151.1, 151.1, 152.2, 148.25, 148.25, 143.6, 143.6, 143.4, 143.4, 144.75, 141.7, 136.1, 139.05, 139.05, 140.15, 140.15, 139.4, 139.4, 140.55, 132.7, 132.7, 134.55, 134.55, 131.6, 133.1, 131.4, 135.35, 135.35, 136.8, 136.8]]}
{"start": "2018-07-24 00:00:00", "target": [59.14, 57.88, 59.51, 59.29, 59.29, 59.35, 59.35, 59.15, 58.24, 57.41, 58.16, 58.16, 58.25, 58.25, 58.89, 59.06, 59.04, 57.2, 57.2, 57.23, 57.23, 56.45, 55.05, 55.29, 54.92, 54.92, 55.29, 55.29, 55.87, 55.3, 54.41, 54.75, 54.75, 56.11, 56.11, 56.3, 56.67, 56.65, 55.7, 55.7, 55.0, 55.0, 54.2, 54.28, 54.32, 54.48, 54.48, 54.5, 54.5, 54.05, 54.53, 55.09, 55.54, 55.54, 55.41, 55.41, 55.77, 56.32, 57.21, 57.61, 57.61, 56.13, 56.13, 54.74, 54.83, 55.59, 54.35, 54.35, 54.88, 54.88, 56.0, 56.44, 56.44, 55.61, 55.61, 54.6, 54.6, 54.42, 53.6, 52.7, 52.89, 52.89, 52.92, 52.92, 53.23, 52.76, 52.41, 51.39, 51.39, 50.81, 50.81, 50.48, 50.0, 51.35, 51.37, 51.37, 52.43, 52.43], "dynamic_feat": [[59.72, 58.79, 59.75, 59.6, 59.6, 59.62, 59.62, 59.59, 58.85, 57.71, 58.53, 58.53, 58.99, 58.99, 59.4, 59.27, 59.38, 58.11, 58.11, 57.86, 57.86, 57.35, 56.71, 55.72, 55.29, 55.29, 55.59, 55.59, 55.99, 56.37, 54.97, 55.0, 55.0, 56.21, 56.21, 56.98, 56.73, 57.33, 56.27, 56.27, 55.44, 55.44, 54.98, 54.66, 55.09, 54.5, 54.5, 54.68, 54.68, 54.45, 54.54, 55.61, 56.14, 56.14, 55.85, 55.85, 56.47, 56.47, 57.56, 58.81, 58.81, 57.08, 57.08, 56.16, 54.88, 55.61, 55.36, 55.36, 55.1, 55.1, 56.43, 57.09, 57.09, 55.94, 55.94, 55.26, 55.26, 54.6, 54.3, 53.39, 53.84, 53.84, 53.21, 53.21, 53.54, 52.88, 53.52, 52.25, 52.25, 52.22, 52.22, 50.86, 51.35, 51.77, 51.37, 51.37, 54.53, 54.53], [58.42, 57.31, 57.82, 58.89, 58.89, 59.17, 59.17, 58.82, 58.06, 56.76, 57.69, 57.69, 58.09, 58.09, 58.85, 58.59, 58.84, 56.97, 56.97, 57.23, 57.23, 56.06, 54.75, 55.05, 54.22, 54.22, 55.16, 55.16, 54.84, 54.36, 54.35, 54.35, 54.35, 55.17, 55.17, 56.3, 56.01, 55.69, 55.63, 55.63, 54.75, 54.75, 53.86, 53.87, 54.25, 54.1, 54.1, 54.0, 54.0, 53.75, 53.78, 55.02, 55.16, 55.16, 55.13, 55.13, 55.32, 55.84, 56.51, 57.36, 57.36, 56.05, 56.05, 54.22, 53.73, 54.47, 54.12, 54.12, 54.33, 54.33, 55.26, 56.27, 56.27, 55.37, 55.37, 54.54, 54.54, 53.63, 53.52, 52.13, 52.75, 52.75, 52.03, 52.03, 53.0, 52.28, 52.37, 48.77, 48.77, 50.81, 50.81, 49.78, 49.84, 50.57, 50.34, 50.34, 51.37, 51.37], [58.43, 58.59, 57.86, 59.54, 59.54, 59.21, 59.21, 59.46, 58.54, 57.16, 57.78, 57.78, 58.31, 58.31, 59.25, 58.88, 59.0, 57.81, 57.81, 57.52, 57.52, 57.26, 56.69, 55.48, 55.1, 55.1, 55.51, 55.51, 54.92, 56.14, 54.88, 54.84, 54.84, 55.18, 55.18, 56.62, 56.35, 55.83, 55.94, 55.94, 55.21, 55.21, 54.85, 53.91, 54.5, 54.37, 54.37, 54.09, 54.09, 54.44, 54.11, 55.35, 55.44, 55.44, 55.26, 55.26, 56.37, 56.09, 56.73, 58.56, 58.56, 57.03, 57.03, 55.97, 53.86, 54.47, 54.94, 54.94, 54.35, 54.35, 56.06, 56.7, 56.7, 55.89, 55.89, 55.11, 55.11, 54.52, 54.13, 52.15, 53.22, 53.22, 52.89, 52.89, 53.14, 52.77, 53.31, 51.83, 51.83, 52.07, 52.07, 50.09, 50.83, 50.69, 50.74, 50.74, 51.4, 51.4]]}
{"start": "2018-07-24 00:00:00", "target": [57.1, 55.92, 58.18, 58.38, 58.38, 58.32, 58.32, 57.9, 56.2, 55.02, 56.02, 56.02, 56.0, 56.0, 55.86, 57.14, 56.88, 55.42, 55.42, 55.06, 55.06, 54.8, 53.74, 53.72, 53.4, 53.4, 53.76, 53.76, 54.42, 53.92, 53.24, 53.44, 53.44, 55.02, 55.02, 55.76, 55.44, 55.28, 54.52, 54.52, 53.6, 53.6, 52.9, 52.7, 52.22, 52.06, 52.06, 52.0, 52.0, 51.6, 51.7, 52.84, 54.72, 54.72, 55.56, 55.56, 57.14, 58.36, 59.08, 59.12, 59.12, 58.34, 58.34, 57.54, 57.58, 58.72, 58.0, 58.0, 58.4, 58.4, 58.3, 57.9, 57.9, 57.1, 57.1, 56.04, 56.04, 56.06, 54.52, 53.42, 53.4, 53.4, 54.84, 54.84, 57.0, 56.46, 55.9, 55.0, 55.0, 54.62, 54.62, 53.88, 50.78, 51.76, 51.56, 51.56, 53.52, 53.52], "dynamic_feat": [[57.42, 56.92, 58.36, 58.72, 58.72, 58.46, 58.46, 58.44, 57.58, 55.4, 56.44, 56.44, 56.64, 56.64, 57.22, 57.24, 57.36, 56.32, 56.32, 55.46, 55.46, 55.52, 54.98, 54.38, 53.78, 53.78, 54.34, 54.34, 54.62, 55.2, 53.8, 53.46, 53.46, 55.36, 55.36, 56.08, 55.54, 56.32, 54.94, 54.94, 54.52, 54.52, 53.74, 53.16, 53.08, 52.06, 52.06, 52.22, 52.22, 52.0, 51.82, 53.28, 55.46, 55.46, 55.8, 55.8, 57.56, 58.44, 59.44, 60.46, 60.46, 58.84, 58.84, 59.02, 57.62, 59.24, 58.2, 58.2, 58.94, 58.94, 58.8, 58.62, 58.62, 57.52, 57.52, 56.48, 56.48, 56.6, 55.76, 53.86, 54.38, 54.38, 55.64, 55.64, 57.66, 56.94, 57.16, 55.32, 55.32, 56.0, 56.0, 54.0, 53.52, 52.08, 51.56, 51.56, 54.72, 54.72], [55.94, 55.88, 57.54, 57.98, 57.98, 58.1, 58.1, 57.7, 55.78, 54.38, 55.5, 55.5, 55.8, 55.8, 55.86, 55.86, 56.74, 54.44, 54.44, 54.68, 54.68, 54.34, 53.3, 53.32, 52.36, 52.36, 53.62, 53.62, 53.7, 53.34, 53.12, 52.88, 52.88, 53.88, 53.88, 55.64, 54.82, 54.42, 54.32, 54.32, 52.98, 52.98, 52.38, 52.48, 52.14, 51.22, 51.22, 51.66, 51.66, 51.32, 51.44, 51.82, 53.44, 53.44, 54.94, 54.94, 56.62, 57.4, 58.34, 58.66, 58.66, 58.08, 58.08, 56.9, 56.66, 57.36, 57.2, 57.2, 58.1, 58.1, 57.68, 57.74, 57.74, 56.96, 56.96, 56.02, 56.02, 55.42, 54.32, 52.86, 53.26, 53.26, 53.2, 53.2, 53.86, 55.74, 55.84, 53.48, 53.48, 54.26, 54.26, 53.08, 50.78, 51.24, 50.42, 50.42, 51.56, 51.56], [56.0, 56.48, 58.08, 58.18, 58.18, 58.22, 58.22, 58.08, 57.14, 54.9, 55.56, 55.56, 56.44, 56.44, 57.0, 56.04, 57.28, 55.86, 55.86, 54.74, 54.74, 55.5, 54.96, 54.1, 53.34, 53.34, 53.92, 53.92, 53.76, 54.86, 53.8, 53.24, 53.24, 53.9, 53.9, 55.88, 55.54, 54.7, 54.42, 54.42, 54.52, 54.52, 53.46, 52.76, 52.7, 52.06, 52.06, 51.68, 51.68, 52.0, 51.7, 51.9, 53.56, 53.56, 55.1, 55.1, 56.96, 57.54, 58.46, 60.26, 60.26, 58.44, 58.44, 58.74, 56.82, 57.36, 57.64, 57.64, 58.28, 58.28, 58.62, 58.34, 58.34, 57.44, 57.44, 56.32, 56.32, 56.44, 55.6, 53.06, 54.2, 54.2, 53.4, 53.4, 55.76, 56.92, 56.96, 55.2, 55.2, 55.92, 55.92, 53.42, 53.16, 51.6, 51.16, 51.16, 51.56, 51.56]]}
{"start": "2018-07-24 00:00:00", "target": [150.38, 145.92, 151.7, 151.12, 151.12, 151.46, 151.46, 152.22, 146.6, 143.4, 145.34, 145.34, 145.48, 145.48, 146.88, 147.82, 147.84, 144.38, 144.38, 143.98, 143.98, 142.9, 138.44, 139.44, 138.74, 138.74, 139.0, 139.0, 140.72, 138.82, 137.94, 138.0, 138.0, 141.46, 141.46, 143.38, 143.16, 142.86, 140.84, 140.84, 137.94, 137.94, 136.2, 136.14, 136.24, 136.08, 136.08, 137.5, 137.5, 137.38, 138.4, 140.78, 144.0, 144.0, 144.48, 144.48, 148.24, 151.1, 152.5, 154.38, 154.38, 152.94, 152.94, 150.48, 151.3, 153.86, 151.6, 151.6, 152.5, 152.5, 152.94, 152.08, 152.08, 149.3, 149.3, 147.0, 147.0, 146.54, 143.54, 140.48, 140.16, 140.16, 144.42, 144.42, 147.42, 147.1, 145.58, 143.98, 143.98, 142.34, 142.34, 139.7, 133.7, 137.0, 136.88, 136.88, 142.42, 142.42], "dynamic_feat": [[151.6, 148.98, 152.34, 153.42, 153.42, 152.2, 152.2, 153.36, 151.6, 143.9, 145.62, 145.62, 147.08, 147.08, 149.1, 148.72, 148.8, 146.78, 146.78, 144.92, 144.92, 144.68, 143.36, 140.74, 139.32, 139.32, 140.62, 140.62, 141.16, 142.3, 138.52, 138.4, 138.4, 142.14, 142.14, 144.92, 143.52, 145.36, 141.4, 141.4, 139.18, 139.18, 138.3, 137.2, 138.04, 136.32, 136.32, 138.94, 138.94, 137.76, 138.56, 141.88, 144.7, 144.7, 145.26, 145.26, 150.24, 151.96, 154.1, 157.2, 157.2, 153.62, 153.62, 155.0, 151.54, 155.68, 152.52, 152.52, 154.46, 154.46, 154.46, 154.26, 154.26, 150.68, 150.68, 147.76, 147.76, 148.06, 146.1, 142.8, 142.62, 142.62, 146.72, 146.72, 149.32, 147.4, 149.3, 144.84, 144.84, 146.78, 146.78, 140.74, 139.54, 137.88, 136.88, 136.88, 146.34, 146.34], [148.84, 145.26, 149.66, 150.52, 150.52, 150.92, 150.92, 150.7, 145.36, 140.38, 143.34, 143.34, 144.92, 144.92, 146.8, 146.04, 146.64, 143.24, 143.24, 143.68, 143.68, 142.34, 137.46, 138.9, 136.54, 136.54, 139.0, 139.0, 138.02, 136.66, 137.52, 136.86, 136.86, 139.3, 139.3, 143.38, 141.64, 140.66, 140.12, 140.12, 136.42, 136.42, 134.9, 135.42, 135.54, 134.2, 134.2, 135.6, 135.6, 136.04, 136.56, 140.68, 141.14, 141.14, 143.2, 143.2, 146.5, 149.3, 151.9, 152.8, 152.8, 151.2, 151.2, 149.06, 147.94, 150.44, 149.86, 149.86, 152.08, 152.08, 151.68, 151.86, 151.86, 149.02, 149.02, 146.56, 146.56, 144.54, 142.88, 138.1, 140.12, 140.12, 139.66, 139.66, 145.0, 145.52, 145.4, 139.16, 139.16, 141.68, 141.68, 138.54, 133.0, 135.0, 134.02, 134.02, 137.92, 137.92], [149.02, 148.64, 150.44, 153.32, 153.32, 151.82, 151.82, 151.32, 150.12, 142.12, 143.76, 143.76, 145.3, 145.3, 147.72, 146.94, 147.5, 145.56, 145.56, 144.06, 144.06, 144.54, 143.34, 140.06, 138.06, 138.06, 140.3, 140.3, 138.3, 141.78, 138.16, 138.08, 138.08, 139.34, 139.34, 144.72, 143.38, 141.74, 140.48, 140.48, 138.26, 138.26, 138.0, 136.24, 136.1, 135.94, 135.94, 135.6, 135.6, 137.46, 137.92, 140.98, 141.38, 141.38, 143.64, 143.64, 149.14, 149.66, 152.2, 156.9, 156.9, 151.72, 151.72, 153.92, 148.38, 150.46, 151.34, 151.34, 152.58, 152.58, 154.06, 153.48, 153.48, 150.46, 150.46, 147.6, 147.6, 147.8, 145.62, 138.32, 141.8, 141.8, 140.16, 140.16, 146.96, 147.4, 148.96, 143.52, 143.52, 146.56, 146.56, 139.58, 138.58, 136.08, 135.3, 135.3, 137.98, 137.98]]}
JSON.out
When I open the output file JSON.out, it contains 5 lines like the one below (one line per stock instance, I assume).
I also get the same error when running the batch transform from the console.
{"error":"The field dynamic_feat needs to be provided in the full prediction range but request has dynamic_feat only for 0 time units in the prediction range when trying to predict for 91 time units."}
To reproduce
Replace the Forecasting and Plotting code in the DeepAR notebook with the Batch Transform code above.
Expected behavior
The output JSON should contain 3 months of predicted values for each target stock.
System information
SageMaker Python SDK version:
Framework name (eg. PyTorch) or algorithm (eg. KMeans):
Framework version:
Python version: 3.7.7
CPU or GPU: CPU
Custom Docker image (Y/N): N
I think the error message describes what the batch transform job expects.
Batch transform predicts future values. Since your prediction horizon is 91 time units (mentioned in the error message), you are expected to provide dynamic features for all of those time units.
As stated in the documentation:
If the model was trained with the dynamic_feat field, you must provide this field for inference. In addition, each of the features has to have the length of the provided target plus the prediction_length. In other words, you must provide the feature value in the future.
Right now your target and each dynamic feature have 98 values. You are expected to provide an additional 91 values for each dynamic feature; given those additional 91 values, the predictions of the target will be made.
Precisely!
If you train with dynamic features, you're basically giving the model an auxiliary regression.
That is: for the model to predict x periods into the future, it needs the values of that auxiliary regression over those x periods.
So, if you trained with dynamic features, imagine you are doing inference with a time series of 10 time steps and predicting the next 3.
Your time-series input will be (length 10):
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
But your input dynamic feature series must be (length 13):
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, x1, x2, x3]
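To make that concrete for the request format above, here is a minimal sketch (not from the thread) that patches each JSON line before it is uploaded for the batch transform. The helper and its future_feats argument are hypothetical; where the 91 future values come from (known covariates, forecasts of the features, etc.) is up to you. DeepAR only requires that each dynamic feature ends up with len(target) + prediction_length values.

import json

PREDICTION_LENGTH = 91  # the horizon reported in the error message

def extend_dynamic_feat(line, future_feats):
    # future_feats: one list of PREDICTION_LENGTH future values per dynamic feature
    request = json.loads(line)
    for feat, future in zip(request["dynamic_feat"], future_feats):
        assert len(future) == PREDICTION_LENGTH
        feat.extend(future)  # len(feat) becomes len(target) + PREDICTION_LENGTH
    return json.dumps(request)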
Related
I am creating a graph using Plotly; however, I am not sure how to do it for India. Below is the code for the US in Python.
I am using the Choropleth function and a dictionary of states and their short forms:

figure = {'data': [go.Choropleth(
    # colorscale = "Blues",
    locations=geo_dist['State'],
    z=geo_dist['Log Num'].astype(float),
    locationmode='USA-states',
    text=geo_dist['text'],
    geo='geo',
    colorbar_title='Num in Log2',
    marker_line_color='white',
    colorscale=['#fdf7ff', '#835af1'],
)],
    'layout': {'title': 'Geographic Segmentation for US', 'geo': {'scope': 'asia'}}}

This is done for the US; I want it for India.
Fundamentally, if you want to use geometry that is not included in Plotly, you need to provide GeoJSON. I have found a source of GeoJSON that defines the geometry of the Indian states. You reference a data frame in your sample code, so I have simulated it; then it is a simple case of linking the data frame to the geometry using locations and featureidkey.

import requests
import pandas as pd
import numpy as np
import plotly.graph_objects as go

# Fetch GeoJSON for Indian states (ADM1 boundaries)
r = requests.get("https://www.geoboundaries.org/gbRequest.html?ISO=IND&ADM=ADM1")
india = requests.get(r.json()[0]["gjDownloadURL"]).json()

# Simulated data frame: state names and their ISO short forms
geo_dist = pd.DataFrame(
    columns=["text", "State"],
    data=[
        ["Tamil Nadu", "IN-TN"], ["Puducherry", "IN-PY"], ["Himachal Pradesh", "IN-HP"],
        ["Uttarakhand", "IN-UT"], ["Sikkim", "IN-SK"], ["Delhi", "IN-DL"],
        ["Uttar Pradesh", "IN-UP"], ["Haryana", "IN-HR"], ["Punjab", "IN-PB"],
        ["Chandigarh", "IN-CH"], ["Rajasthan", "IN-RJ"], ["Jammu and Kashmir", "IN-JK"],
        ["Gujarat", "IN-GJ"], ["Madhya Pradesh", "IN-MP"], ["Maharashtra", "IN-MH"],
        ["Union Territory of Dadra & Nagar Haveli", "IN-DN"], ["Daman and Diu", "IN-DD"],
        ["Bihar", "IN-BR"], ["West Bengal", "IN-WB"], ["Jharkhand", "IN-JH"],
        ["Chhattisgarh", "IN-CT"], ["Odisha", "IN-OR"], ["Goa", "IN-GA"],
        ["Kerala", "IN-KL"], ["Karnataka", "IN-KA"], ["Andhra Pradesh", "IN-AP"],
        ["Andaman and Nicobar Islands", "IN-AN"], ["Assam", "IN-AS"], ["Tripura", "IN-TR"],
        ["Arunachal Pradesh", "IN-AR"], ["Lakshadweep", "IN-LD"], ["Meghalaya", "IN-ML"],
        ["Manipur", "IN-MN"], ["Nagaland", "IN-NL"], ["Mizoram", "IN-MZ"],
        ["Telangana", "IN-TG"],
    ],
)
geo_dist["Log Num"] = np.random.uniform(1, 10, len(geo_dist))

figure = {
    "data": [
        go.Choropleth(
            # colorscale = "Blues",
            locations=geo_dist["State"],
            featureidkey="properties.shapeISO",  # link the State column to the GeoJSON
            geojson=india,
            z=geo_dist["Log Num"].astype(float),
            # locationmode="USA-states",
            text=geo_dist["text"],
            geo="geo",
            colorbar_title="Num in Log2",
            marker_line_color="white",
            colorscale=["#fdf7ff", "#835af1"],
        )
    ],
    "layout": {
        "title": "Geographic Segmentation for India",
        "geo": {"fitbounds": "locations"},
    },
}
go.Figure(figure)
Call a function if a checkbox is selected google sheets
I am trying to find a solution to an issue I have. I have checkboxes in column A. What I need is: when one of them is checked, populate a function/formula in the cell next to it (column B). For example, I check the checkbox in cell A10 and the formula is populated in cell B10. The formula/function I have is a long IF function. Hoping someone will be able to assist.
Try:

=ARRAYFORMULA(TRANSPOSE(SPLIT(QUERY(QUERY({"";
 IFERROR(REPT("♦ ", LEN(SUBSTITUTE(TRANSPOSE(SPLIT(QUERY(IF(INDIRECT("A12:A"&
 MAX(IF(A12:A=TRUE, ROW(A12:A), )))=FALSE, "♣", "♥"),,999^99), "♥")), " ", ))-COUNTA(
 IF($B$11=Sheet5!A1, FILTER(Sheet5!C1:C, Sheet5!C1:C <>"", NOT(REGEXMATCH(Sheet5!B1:B, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!J1, FILTER(Sheet5!L1:L, Sheet5!L1:L <>"", NOT(REGEXMATCH(Sheet5!K1:K, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!S1, FILTER(Sheet5!U1:U, Sheet5!U1:U <>"", NOT(REGEXMATCH(Sheet5!T1:T, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!AB1, FILTER(Sheet5!AD1:AD, Sheet5!AD1:AD<>"", NOT(REGEXMATCH(Sheet5!AC1:AC, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!AK1, FILTER(Sheet5!AM1:AM, Sheet5!AM1:AM<>"", NOT(REGEXMATCH(Sheet5!AL1:AL, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!AT1, FILTER(Sheet5!AV1:AV, Sheet5!AV1:AV<>"", NOT(REGEXMATCH(Sheet5!AU1:AU, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!BC1, FILTER(Sheet5!BE1:BE, Sheet5!BE1:BE<>"", NOT(REGEXMATCH(Sheet5!BD1:BD, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!BL1, FILTER(Sheet5!BN1:BN, Sheet5!BN1:BN<>"", NOT(REGEXMATCH(Sheet5!BM1:BM, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!BU1, FILTER(Sheet5!BW1:BW, Sheet5!BW1:BW<>"", NOT(REGEXMATCH(Sheet5!BV1:BV, "MATERIAL|DISPOSAL|PLANTS"))),
 IF($B$11=Sheet5!CD1, FILTER(Sheet5!CF1:CF, Sheet5!CF1:CF<>"", NOT(REGEXMATCH(Sheet5!CE1:CE, "MATERIAL|DISPOSAL|PLANTS"))))
 ))))))))))+1))},
 IF(COUNTIF(A12:A, TRUE)<2, "offset 1", ), 0)&
 QUERY(IF(A12:A=TRUE,
 IF($B$11=Sheet5!A1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!C1:C, NOT(REGEXMATCH(Sheet5!B1:B, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!J1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!L1:L, NOT(REGEXMATCH(Sheet5!K1:K, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!S1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!U1:U, NOT(REGEXMATCH(Sheet5!T1:T, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!AB1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!AD1:AD, NOT(REGEXMATCH(Sheet5!AC1:AC, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!AK1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!AM1:AM, NOT(REGEXMATCH(Sheet5!AL1:AL, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!AT1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!AV1:AV, NOT(REGEXMATCH(Sheet5!AU1:AU, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!BC1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!BE1:BE, NOT(REGEXMATCH(Sheet5!BD1:BD, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!BL1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!BN1:BN, NOT(REGEXMATCH(Sheet5!BM1:BM, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!BU1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!BW1:BW, NOT(REGEXMATCH(Sheet5!BV1:BV, "MATERIAL|DISPOSAL|PLANTS")))),
 IF($B$11=Sheet5!CD1, "♦"&TEXTJOIN("♦", 1, FILTER(Sheet5!CF1:CF, NOT(REGEXMATCH(Sheet5!CE1:CE, "MATERIAL|DISPOSAL|PLANTS")))),
 )))))))))), ), "where Col1 is not null", 0),,999^99), "♦")))
Ray - RLlib - Error with Custom env - continuous action space - DDPG - offline experience training?
Error while using offline experiences for DDPG. The custom environment dimensions (action space and state space) seem to be inconsistent with what the RLlib DDPG trainer expects. Ubuntu, Ray 0.7 (latest Ray), DDPG example, offline dataset. I used the sampler builder for the offline dataset. I estimated DQN with this experience data and it ran through. I changed the environment action space to be continuous (Box(,1)) and DDPG did not work.

# imports added for completeness
import gym
import numpy as np
from gym.spaces import Box
from ray.rllib.agents import ddpg
from ray.tune import run_experiments
from ray.tune.registry import register_env

TRAIN_BATCH_SIZE = 512

class mmt_ctns_offline_logs(gym.Env):
    def __init__(self):
        # one-dimensional action space, values range 0 to 50 max
        self.action_space = Box(0, 50, shape=(,1), dtype=np.float32)
        # 58 columns in state space
        self.observation_space = Box(-100000, 100000, shape=(,58), dtype=np.float32)

# register custom environment
register_env("mmt_env_ctnaction", lambda config: mmt_ctns_offline_logs())

# define the configuration. Some of these are defaults, but I have explicitly
# defined them for clarity (within my team)
config_dict = {"env": "mmt_env_ctnaction",
    "evaluation_num_episodes": 50,
    "num_workers": 11,
    "sample_batch_size": 512,
    "train_batch_size": TRAIN_BATCH_SIZE,
    "input": "<experience_replay_folder>/",
    "output": "<any_folder>",
    "gamma": 0.99,
    "horizon": None,
    "optimizer_class": "SyncReplayOptimizer",
    "optimizer": {"prioritized_replay": True},
    "actor_hiddens": [128, 64],
    "actor_hidden_activation": "relu",
    "critic_hiddens": [64, 64],
    "critic_hidden_activation": "relu",
    "n_step": 1,
    "target_network_update_freq": 500,
    "input_evaluation": [],
    "ignore_worker_failures": True,
    'log_level': "DEBUG",
    "buffer_size": 50000,
    "prioritized_replay": True,
    "prioritized_replay_alpha": 0.6,
    "prioritized_replay_beta": 0.4,
    "prioritized_replay_eps": 1e-6,
    "compress_observations": False,
    "lr": 1e-3,
    "actor_loss_coeff": 0.1,
    "critic_loss_coeff": 1.0,
    "use_huber": False,
    "huber_threshold": 1.0,
    "l2_reg": 1e-6,
    "grad_norm_clipping": True,
    "learning_starts": 1500,
}

config = ddpg.DEFAULT_CONFIG.copy()  # dqn.DEFAULT_CONFIG.copy()
for k, v in config_dict.items():
    config[k] = v
config_ddpg = config

run_experiments({
    'NM_testing_DDPG_offpolicy_noIS': {
        'run': 'DDPG',
        'env': 'mmt_env_ctnaction',
        'config': config_ddpg,
        'local_dir': "/oxygen/narasimham/ray/tmp/mmt/mmt_user_27_DDPG/"
    },
})

Expected: results from DDPG iterations. Actual error:

ray.exceptions.RayTaskError: ray_DDPGTrainer:train() (pid=89635, host=ip-10-114-53-179)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/ray/rllib/utils/tf_run_builder.py", line 49, in get
    self.feed_dict, os.environ.get("TF_TIMELINE_DIR"))
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/ray/rllib/utils/tf_run_builder.py", line 91, in run_timeline
    fetches = sess.run(ops, feed_dict=feed_dict)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 877, in run
    run_metadata_ptr)
  File "/home/ubuntu/anaconda3/envs/tf_p36n/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1076, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (512,) for Tensor 'default_policy/action:0', which has shape '(?, 1)'
During handling of the above exception, another exception occurred:
Try with the action space defined as follows:

self.action_space = Box(0, 50, shape=(1,), dtype=np.float32)
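For completeness, a minimal sketch of the corrected environment (class name and bounds taken from the question; the observation space needs the same treatment, since shape=(,58) is likewise a syntax error):

import gym
import numpy as np
from gym.spaces import Box

class mmt_ctns_offline_logs(gym.Env):
    def __init__(self):
        # shape must be a well-formed tuple: (1,), not (,1)
        self.action_space = Box(0, 50, shape=(1,), dtype=np.float32)
        self.observation_space = Box(-100000, 100000, shape=(58,), dtype=np.float32)

With shape=(1,), batched actions have shape (batch, 1), matching the '(?, 1)' placeholder named in the error.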
Export tensorflow graph with batchnorm to opencv dnn
First, build a network with batch norm:

import tensorflow as tf  # import added for completeness

net = tf.layers.conv2d(inputs=features, filters=64, kernel_size=[3, 3], strides=(2, 2), padding='same')
training = tf.Variable(False, name='training')
net = tf.contrib.layers.batch_norm(net, is_training=training)
net = tf.nn.relu(net)
net = tf.reshape(net, [-1, 64 * 7 * 7])
# net = tf.layers.dense(inputs=net, units=class_num, kernel_initializer=tf.contrib.layers.xavier_initializer(), name='regression_output')
# ......
# after training, save the graph and weights
sess.run(loss, feed_dict={features: train_imgs, x: real_delta, training: False})
saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')

After that, I freeze the graph -> optimize -> transform:

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph_final.pb --input_checkpoint=reshape_final.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/optimize_for_inference.py --input frozen_graph.pb --output opt_graph.pb --frozen_graph True --input_names input --output_names regression_output/BiasAdd

~/Qt/3rdLibs/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=opt_graph.pb --out_graph=fused_graph.pb --inputs=input --outputs=regression_output/BiasAdd --transforms="fold_constants fold_batch_norms fold_old_batch_norms sort_by_execution_order"

Load the model:

std::string const model("/home/ramsus/Qt/blogCodes2/deep_homography/cnn/tensorflow/fused_graph.pb");
dnn::Net net = dnn::readNetFromTensorflow(model);
if(net.empty()){
    std::cerr<<"Can't load network by using the mode file:"<<std::endl;
    std::cerr<<model<<std::endl;
    throw std::runtime_error("net is empty");
}

It throws these error messages:

BatchNorm/moments/mean:Mean(conv2d/convolution)(BatchNorm/moments/mean/reduction_indices) keep_dims:[ ] Tidx:[ ] T:0
OpenCV Error: Unspecified error (Unknown layer type Mean in op BatchNorm/moments/mean) in populateNet, file /home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 1077
/home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1077: error: (-2) Unknown layer type Mean in op BatchNorm/moments/mean in function populateNet

How can I solve this issue? Thanks.
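No answer is recorded here, but one commonly suggested workaround (an assumption on my part, not a confirmed fix from this page) is to rebuild the inference graph with is_training as a Python constant rather than a tf.Variable: with a constant False, tf.contrib.layers.batch_norm uses the stored moving averages instead of computing tf.nn.moments, so the Mean op that OpenCV's importer rejects never appears and fold_old_batch_norms can fold the rest. A minimal sketch:

# Hypothetical inference-graph rebuild: is_training is a Python bool,
# so batch_norm relies on moving averages and emits no Mean op.
net = tf.layers.conv2d(inputs=features, filters=64, kernel_size=[3, 3],
                       strides=(2, 2), padding='same')
net = tf.contrib.layers.batch_norm(net, is_training=False)  # constant, not tf.Variable
net = tf.nn.relu(net)

Restore the trained checkpoint into this graph, then re-run the freeze/optimize/transform steps above.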
Python lightgbm feature_importance() error?
1. Environment info
Operating System: Windows
Python version: Python 2.7.13

2. Error Message:
ValueError: No JSON object could be decoded

# imports added for completeness
import lightgbm as lgb
import numpy as np

lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
params = {
    'task': 'train',
    'boosting': 'gbdt',
    'objective': 'binary',
    'metric': {'l2', 'auc'},
    'num_leaves': 62,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': 20
}
gbm = lgb.train(params, lgb_train, num_boost_round=250, valid_sets=lgb_eval)
print('Start predicting...')
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)
y_pred = np.round(y_pred)
print gbm.feature_importance()
Follow this link: https://github.com/Microsoft/LightGBM/issues/615. According to the contributor, this is a small bug: infinite numbers in the model dump cannot be handled by the JSON parser. The fix has since been merged, so upgrading LightGBM should resolve it.
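If upgrading is not an option, one thing worth checking (my assumption, not from the linked issue) is whether the training data itself contains infinities, since those can propagate into the model dump that feature_importance() parses on old versions:

import numpy as np

# Hypothetical pre-check: +/-inf in the features can end up in the
# model's JSON dump, which the old parser cannot decode.
X = np.asarray(X_train, dtype=float)
assert np.isfinite(X).all(), "X_train contains non-finite values"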