Django Zappa Lambda Deploy "botocore.errorfactory.ResourceNotFoundException" - django

I am trying to deploy a simple Django application with Zappa (https://romandc.com/zappa-django-guide/), but I am getting the following error. Is there a permission issue or some other problem with the dev setup?
Traceback (most recent call last):
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 753, in deploy
function_name=self.lambda_name)
File "e:\personal\envs\py3\lib\site-packages\zappa\core.py", line 1286, in get_lambda_function
FunctionName=function_name)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetFunction operation: Function not found: arn:aws:lambda:ap-south-1:122866061462:function:frankie-dev
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 2778, in handle
sys.exit(cli.handle())
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 512, in handle
self.dispatch_command(self.command, stage)
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 549, in dispatch_command
self.deploy(self.vargs['zip'])
File "e:\personal\envs\py3\lib\site-packages\zappa\cli.py", line 786, in deploy
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
File "e:\personal\envs\py3\lib\site-packages\zappa\core.py", line 1069, in create_lambda_function
response = self.lambda_client.create_function(**kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 586, in _make_api_call
api_params, operation_model, context=request_context)
File "e:\personal\envs\py3\lib\site-packages\botocore\client.py", line 641, in _convert_to_request_dict
api_params, operation_model)
File "e:\personal\envs\py3\lib\site-packages\botocore\validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "Layers", must be one of: FunctionName, Runtime, Role, Handler, Code, Description, Timeout, MemorySize, Publish, VpcConfig, DeadLetterConfig, Environment, KMSKeyArn, TracingConfig, Tags

I had the exact same error trying to deploy a Flask app with Zappa and then realized I was using an old botocore package version. I changed all the package versions in my requirements.txt file to the ones on Zappa's GitHub page, and that fixed the issue for me!
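If you want to confirm that an outdated botocore is the culprit before repinning, a minimal sketch is to print the versions the environment actually resolves (run it inside the same virtualenv Zappa uses); the "Layers" parameter was only added to Lambda's create_function in later botocore releases, so an older install will reject it exactly as in the traceback above:
# Minimal sketch: confirm which boto3/botocore versions Zappa is picking up.
# An old botocore will reject newer parameters such as "Layers".
import boto3
import botocore

print("boto3:", boto3.__version__)
print("botocore:", botocore.__version__)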

Related

boto3 SSO token expired earlier than AWSCLI

I am facing an issue where my SSO token expires earlier when I try to create a session programmatically using boto3, but NOT when I use the AWS CLI.
python version: 3.8.12
boto3 version: 1.21.46
awscli version: aws-cli/2.4.27 Python/3.8.8 Darwin/21.6.0 exe/x86_64 prompt/off
Sample boto3 code (boto3-test.py)
import boto3
session = boto3.Session(profile_name='RoleA')
sts = session.client('sts')
print(sts.get_caller_identity())
Steps to reproduce:
aws sso login --profile RoleA
aws sts get-caller-identity --profile RoleA (SUCCESS)
python boto3-test.py (SUCCESS)
WAIT FOR 1 HOUR ...
aws sts get-caller-identity --profile RoleA (SUCCESS)
python boto3-test.py (FAIL)
I have checked ~/.aws/sso/cache and ~/.aws/cli/cache; the expiresAt and Expiration values in both cache files are still valid. I am expecting boto3 to discover the token cache the same way the AWS CLI does, but it seems it does not. Any clue why the two are not in sync?
Error from boto3-test.py
Traceback (most recent call last):
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/credentials.py", line 2056, in _get_credentials
response = client.get_role_credentials(**kwargs)
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/tester/venv/lib/python3.8/site-packages/botocore/client.py", line 745, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.UnauthorizedException: An error occurred (UnauthorizedException) when calling the GetRoleCredentials operation: Session token not found or invalid
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 731, in _make_api_call
http, parsed_response = self._make_request(
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/client.py", line 751, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 107, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 180, in _send_request
request = self.create_request(request_dict, operation_model)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/endpoint.py", line 120, in create_request
self._event_emitter.emit(event_name, request=request,
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 358, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 229, in emit
return self._emit(event_name, kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/hooks.py", line 212, in _emit
response = handler(**kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 95, in handler
return self.sign(operation_name, request)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 159, in sign
auth = self.get_auth_instance(**kwargs)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/signers.py", line 239, in get_auth_instance
frozen_credentials = self._credentials.get_frozen_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 632, in get_frozen_credentials
self._refresh()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 527, in _refresh
self._protected_refresh(is_mandatory=is_mandatory_refresh)
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 543, in _protected_refresh
metadata = self._refresh_using()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 684, in fetch_credentials
return self._get_cached_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 694, in _get_cached_credentials
response = self._get_credentials()
File "/Users/tesster/venv/lib/python3.8/site-packages/botocore/credentials.py", line 2058, in _get_credentials
raise UnauthorizedSSOTokenError()
botocore.exceptions.UnauthorizedSSOTokenError: The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
To all: not sure what the underlying issue was, but I resolved it by removing ~/.aws/sso and ~/.aws/cli entirely and letting them be recreated. I also upgraded boto3 to version 1.24.90.
Things just worked by themselves after that.
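A minimal sketch of that workaround, assuming the standard cache locations under ~/.aws (adjust if your config paths are customized); after running it, log in again with aws sso login:
# Minimal sketch: clear the cached SSO/CLI credentials so they are
# recreated on the next `aws sso login`, as described above.
import shutil
from pathlib import Path

for cache_dir in (Path.home() / ".aws" / "sso", Path.home() / ".aws" / "cli"):
    if cache_dir.exists():
        shutil.rmtree(cache_dir)
        print(f"removed {cache_dir}")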

Boto3 EC2 CreateSnapshots Validation Error with ExcludeDataVolumeIds

I am trying to create snapshots with Boto3.
snapshot_response = client.create_snapshots(
    Description='super important backup',
    InstanceSpecification={
        'InstanceId': 'i-something',
        'ExcludeBootVolume': True,
        'ExcludeDataVolumeIds': ['vol-something1', 'vol-something2'],
    },
    CopyTagsFromSource='volume'
)
But it fails with this error.
[ERROR] ParamValidationError: Parameter validation failed:
Unknown parameter in InstanceSpecification: "ExcludeDataVolumeIds", must be one of: InstanceId, ExcludeBootVolume
Traceback (most recent call last):
File "/something/some_dir/some_script.py", line 50, in some_function
snapshot_response = client.create_snapshots(
File "/var/runtime/botocore/client.py", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 691, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/var/runtime/botocore/client.py", line 739, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/var/runtime/botocore/validate.py", line 360, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
Has anyone got this sorted?
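One detail worth noting from the traceback: the error is raised from /var/runtime/botocore, i.e. the botocore bundled with the Lambda runtime, which may simply be too old to know the ExcludeDataVolumeIds parameter. A minimal sketch (an assumption on my part, not from the original post) to check which copy and version the function actually resolves:
# Minimal sketch: print the botocore version and install location at runtime.
# If __file__ points at /var/runtime, the Lambda-bundled (often older) copy
# is in use rather than one packaged with the function or a layer.
import botocore

print("botocore:", botocore.__version__)
print("loaded from:", botocore.__file__)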

Amazon VPC Endpoint for AWS Appflow

We have a Glue Python Shell pipeline job trying to invoke AWS AppFlow through the boto3 APIs. The challenge is that we need to design our solution without internet access within Glue.
Previously we had challenges upgrading the boto3 APIs in AWS Glue Python Shell to the latest version without internet access, but we found a solution and documented it in the following link. In our architecture, we use AWS Glue Python Shell as our lightweight data-pipeline engine, leveraging the boto3 APIs.
Git Glue Boto3 Bug & Solution
The following AppFlow API Python code works perfectly fine in our local Jupyter notebooks, where the AWS AppFlow API is invoked over the internet.
## Extra code to update the boto3 version, as per the link above, is omitted
import boto3
print(boto3.__version__)
client = boto3.client('appflow')
response = client.describe_flow(
    flowName='sf_non_pii'
)
But when we use the same code in AWS Glue, it fails because it cannot access the internet.
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://appflow.us-east-2.amazonaws.com/describe-flow"
Please find the logs below
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/botocore/httpsession.py", line 263, in send
chunked=self._chunked(request.headers),
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.6/site-packages/urllib3/util/retry.py", line 386, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.6/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 167, in _new_conn
% (self.host, self.timeout),
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f5f8802c0f0>, 'Connection to appflow.us-east-2.amazonaws.com timed out. (connect timeout=60)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 123, in <module>
runpy.run_path(temp_file_path, run_name='__main__')
File "/usr/local/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/local/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/glue-python-scripts-a3cgmro3/ref_appflow.py", line 44, in <module>
File "/tmp/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/tmp/botocore/client.py", line 663, in _make_api_call
operation_model, request_dict, request_context)
File "/tmp/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/tmp/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/tmp/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/tmp/botocore/endpoint.py", line 256, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/tmp/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/tmp/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/tmp/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/tmp/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/tmp/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/tmp/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/tmp/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/tmp/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/tmp/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/tmp/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/tmp/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/tmp/botocore/httpsession.py", line 287, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://appflow.us-east-2.amazonaws.com/describe-flow"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/runscript.py", line 142, in <module>
raise e_type(e_value).with_traceback(new_stack)
TypeError: __init__() takes 1 positional argument but 2 were given
We faced similar issues with Redshift, AWS Athena, SSM, etc., but for those services VPC endpoints are available. Creating the endpoints enabled our Glue job to connect privately within the VPC. Please refer to the link below for the supported products:
AWS services that you can use with AWS PrivateLink
But since AppFlow is a newer product, no VPC endpoint is available yet, apart from some unclear documentation about a throwaway create-and-delete VPC endpoint, linked below:
Private Amazon AppFlow flows
Please let us know how we can invoke the boto3 APIs for AppFlow without going over the internet.
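For reference, the general pattern for services that do support PrivateLink is to create an interface endpoint in the VPC and, if private DNS is disabled on it, point the boto3 client at the endpoint explicitly. A minimal sketch of that pattern with a hypothetical endpoint DNS name (at the time of the question AppFlow had no such endpoint, so this is illustrative only, not a confirmed fix):
# Minimal sketch: point a boto3 client at a VPC interface endpoint.
# The endpoint_url below is a hypothetical placeholder following the usual
# vpce-*.{service}.{region}.vpce.amazonaws.com naming; with private DNS
# enabled on the endpoint, the default service URL resolves privately and
# endpoint_url can be omitted.
import boto3

client = boto3.client(
    'appflow',
    region_name='us-east-2',
    endpoint_url='https://vpce-0123456789abcdef0-abcd1234.appflow.us-east-2.vpce.amazonaws.com',
)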

`capture_tpu_profile` not able to access the TPU

I have the following TPU:
$ gcloud compute tpus list
NAME ZONE ACCELERATOR_TYPE NETWORK_ENDPOINTS NETWORK RANGE STATUS
daniels-tpu us-central1-a v3-8 10.240.1.10:8470 default 10.240.1.8/29 READY
But it is not accessible via capture_tpu_profile:
$ capture_tpu_profile --tpu=daniels-tpu
TensorFlow version 1.15.2 detected
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0131 10:15:16.553251 4571966912 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 938, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 707, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 752, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/capture_tpu_profile", line 8, in <module>
sys.exit(run_main())
File "/usr/local/lib/python3.7/site-packages/cloud_tpu_profiler/main.py", line 85, in run_main
tf.compat.v1.app.run(main)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python3.7/site-packages/cloud_tpu_profiler/main.py", line 105, in main
[FLAGS.tpu], zone=FLAGS.tpu_zone, project=FLAGS.gcp_project))
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/distribute/cluster_resolver/tpu_cluster_resolver.py", line 330, in __init__
self._request_compute_metadata('project/project-id'))
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/distribute/cluster_resolver/tpu_cluster_resolver.py", line 124, in _request_compute_metadata
resp = urlopen(req)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1347, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1321, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
The error didn't go away even after adding the --tpu_zone=us-central1-a flag.
When you use capture_tpu_profile to capture a profile, a .tracetable file is saved to your Google Cloud Storage bucket; if this location is not provided, the captured profile has nowhere to go.
Can you try adding the path of your bucket with the following command:
capture_tpu_profile --tpu=tpu-name --logdir=${MODEL_DIR}
--logdir=${MODEL_DIR} is the Cloud Storage location where your model and checkpoints are stored.
All the details are covered in this documentation.
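For the TPU in the question, that would look something like the following (the bucket path is a hypothetical placeholder):
capture_tpu_profile --tpu=daniels-tpu --tpu_zone=us-central1-a --logdir=gs://your-bucket/model-dir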

Pipeline will fail on GCP when writing tensorflow transform metadata

I hope somebody here can help. I've been googling this error like crazy but haven't found anything.
I have a pipeline that works perfectly when executed locally but it fails when executed on GCP. The following are the error messages that I get.
Workflow failed. Causes: S03:Write transform fn/WriteMetadata/ResolveBeamFutures/CreateSingleton/Read+Write transform fn/WriteMetadata/ResolveBeamFutures/ResolveFutures/Do+Write transform fn/WriteMetadata/WriteMetadata failed., A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
Traceback (most recent call last):
File "preprocess.py", line 491, in <module>
main()
File "preprocess.py", line 487, in main
transform_data(args,pipeline_options,runner)
File "preprocess.py", line 451, in transform_data
eval_data |= 'Identity eval' >> beam.ParDo(Identity())
File "/Library/Python/2.7/site-packages/apache_beam/pipeline.py", line 335, in __exit__
self.run().wait_until_finish()
File "/Library/Python/2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 897, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 294, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10607)
def start(self):
File "apache_beam/runners/worker/operations.py", line 295, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10501)
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 300, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:9702)
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 225, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in loads
return load(file)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1083, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: __new__() takes exactly 4 arguments (1 given)
Any ideas??
Thanks,
Pedro
If the pipeline works locally but fails on GCP, it's possible that you're running into a version mismatch.
Which TensorFlow, tf.Transform, and Beam versions are you running locally and on GCP?
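A minimal sketch for comparing the two environments: run the same version check locally and on the Dataflow workers (for example from a trivial pipeline step, so it lands in the worker logs) and pin whichever packages differ:
# Minimal sketch: print the library versions so the local environment can
# be compared against the Dataflow workers. These are the standard version
# attributes exposed by Beam, TensorFlow, and tf.Transform.
import apache_beam as beam
import tensorflow as tf
import tensorflow_transform as tft

print("apache_beam:", beam.__version__)
print("tensorflow:", tf.__version__)
print("tensorflow_transform:", tft.__version__)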