Error when trying to run Lambda locally using Docker with WSL2

Trying to get the AWS SAM CLI working to locally test Lambda functions. I've set up the hello-world Python function, which I can successfully build and validate until I add the --use-container flag, at which point I get the errors below.
I have Docker Desktop installed and running. I'm using WSL2 with Ubuntu 20.04 on a Windows 11 machine.
mylaptop:~/projects/lambda/lambda-python3.8$ sam build --use-container
Starting Build inside a container
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: /home/projects/lambda/lambda-python3.8/hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: HelloWorldFunction
<3>init: (15570) ERROR: UtilConnectUnix:467: connect failed 111
Traceback (most recent call last):
File "docker/credentials/store.py", line 80, in _execute
File "subprocess.py", line 411, in check_output
File "subprocess.py", line 512, in run
subprocess.CalledProcessError: Command '['/usr/bin/docker-credential-desktop.exe', 'get']' returned non-zero exit status 255.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker/auth.py", line 264, in _resolve_authconfig_credstore
File "docker/credentials/store.py", line 35, in get
File "docker/credentials/store.py", line 93, in _execute
docker.credentials.errors.StoreError: Credentials store docker-credential-desktop.exe exited with "".
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "samcli/__main__.py", line 12, in <module>
File "click/core.py", line 829, in __call__
File "click/core.py", line 782, in main
File "click/core.py", line 1259, in invoke
File "click/core.py", line 1066, in invoke
File "click/core.py", line 610, in invoke
File "click/decorators.py", line 73, in new_func
File "click/core.py", line 610, in invoke
File "samcli/lib/telemetry/metric.py", line 166, in wrapped
File "samcli/lib/telemetry/metric.py", line 124, in wrapped
File "samcli/lib/utils/version_checker.py", line 41, in wrapped
File "samcli/cli/main.py", line 87, in wrapper
File "samcli/commands/build/command.py", line 201, in cli
File "samcli/commands/build/command.py", line 262, in do_cli
File "samcli/commands/build/build_context.py", line 248, in run
File "samcli/lib/build/app_builder.py", line 221, in build
File "samcli/lib/build/build_strategy.py", line 79, in build
File "samcli/lib/build/build_strategy.py", line 89, in _build_functions
File "samcli/lib/build/build_strategy.py", line 171, in build_single_function_definition
File "samcli/lib/build/app_builder.py", line 654, in _build_function
File "samcli/lib/build/app_builder.py", line 819, in _build_function_on_container
File "samcli/local/docker/manager.py", line 115, in run
File "samcli/local/docker/manager.py", line 85, in create
File "samcli/local/docker/manager.py", line 160, in pull_image
File "docker/api/image.py", line 396, in pull
File "docker/auth.py", line 48, in get_config_header
File "docker/auth.py", line 324, in resolve_authconfig
File "docker/auth.py", line 235, in resolve_authconfig
File "docker/auth.py", line 281, in _resolve_authconfig_credstore
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-desktop.exe exited with "".')
[15567] Failed to execute script __main__

I ran docker-credential-desktop.exe version, which produced the same 111 error message, so I was able to isolate the issue to something related to docker-credential-desktop.exe. After googling around and trying lots of different suggestions, this finally worked for me, with no restart required.
mv ~/.docker ~/.docker_old
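If you would rather not discard the whole directory, the error above comes from the credsStore entry in ~/.docker/config.json, which tells the Docker client to shell out to docker-credential-desktop.exe for registry credentials. A gentler variant of the same fix (back the file up first; the exact value on your machine may differ) is to delete just that key:
cp ~/.docker/config.json ~/.docker/config.json.bak
# edit ~/.docker/config.json and remove the line that looks like:
#   "credsStore": "desktop.exe"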

Related

AWS SAM Pipeline CLI deployment fails repeatedly

I am following an official AWS Skill Builder tutorial entitled "Automating the Deployment Pipeline". I am able to initialize my serverless project with a "hello world" template, but when I attempt to initialize my pipeline, I get through all of the setup prompts; at the very end, when it asks "Should we proceed with the creation? [y/N]", I get the "Updating the required resources..." message followed by the following error:
File "samcli/__main__.py", line 12, in <module>
File "click/core.py", line 1130, in __call__
File "click/core.py", line 1055, in main
File "click/core.py", line 1657, in invoke
File "click/core.py", line 1657, in invoke
File "click/core.py", line 1404, in invoke
File "click/core.py", line 760, in invoke
File "click/decorators.py", line 84, in new_func
File "click/core.py", line 760, in invoke
File "samcli/lib/telemetry/metric.py", line 175, in wrapped
File "samcli/lib/telemetry/metric.py", line 150, in wrapped
File "samcli/commands/pipeline/init/cli.py", line 42, in cli
File "samcli/commands/pipeline/init/cli.py", line 51, in do_cli
File "samcli/commands/pipeline/init/interactive_init_flow.py", line 80, in
do_interactive
File "samcli/commands/pipeline/init/interactive_init_flow.py", line 105, in
_generate_from_app_pipeline_templates
File "samcli/commands/pipeline/init/interactive_init_flow.py", line 250, in
_generate_from_pipeline_template
File "samcli/commands/pipeline/init/interactive_init_flow.py", line 190, in
_prompt_run_bootstrap_within_pipeline_init
File "samcli/commands/pipeline/bootstrap/cli.py", line 403, in do_cli
File "samcli/lib/pipeline/bootstrap/stage.py", line 293, in bootstrap
File "samcli/lib/utils/managed_cloudformation_stack.py", line 89, in update_stack
File "samcli/lib/utils/managed_cloudformation_stack.py", line 185, in
_create_or_update_stack
File "samcli/lib/utils/managed_cloudformation_stack.py", line 290, in _update_stack
File "botocore/waiter.py", line 54, in wait
File "botocore/waiter.py", line 351, in wait
botocore.exceptions.WaiterError: Waiter StackUpdateComplete failed: Waiter encountered a terminal failure state: For expression "Stacks[].StackStatus" we matched expected path: "UPDATE_ROLLBACK_COMPLETE" at least once
I am fairly new to AWS and have no idea why it keeps failing, as I believe I am following the tutorial to a tee. Forgive me if my question is too vague. Any feedback to point me in the right direction is much appreciated.
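UPDATE_ROLLBACK_COMPLETE is a terminal state meaning the stack update failed and CloudFormation rolled it back; the waiter only reports the state, not the underlying cause. One way to dig out the root cause is to list the failed events of the managed stack (the stack name below is a placeholder; use the name shown during the "Updating the required resources..." step or in the CloudFormation console):
aws cloudformation describe-stack-events \
    --stack-name <managed-pipeline-stack-name> \
    --query "StackEvents[?contains(ResourceStatus, 'FAILED')].[LogicalResourceId, ResourceStatusReason]" \
    --output table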

Compatibility with GCP aiplatform, bigquery and cloud-storage on hyperparameter tuning docker image

I am doing hyperparameter tuning on GCP using this scikit docker image. When I add the aiplatform package as a dependency, things break. The error comes from the bigquery import.
from google.cloud import bigquery
The error message is below.
The replica workerpool0-0 exited with a non-zero status of 1.
Traceback (most recent call last):
[...]
File "/root/.local/lib/python3.7/site-packages/trainer/task.py", line 7, in
from google.cloud import storage, bigquery
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/__init__.py", line 35, in
from google.cloud.bigquery.client import Client
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/client.py", line 60, in
from google.cloud.bigquery import _pandas_helpers
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/_pandas_helpers.py", line 40, in
from google.cloud.bigquery import schema
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/schema.py", line 19, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/__init__.py", line 23, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/types.py", line 23, in
from google.cloud.bigquery_v2.proto import encryption_config_pb2
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/proto/encryption_config_pb2.py", line 64, in
file=DESCRIPTOR,
File "/root/.local/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
From the logs, I can see the system downloading google-cloud-aiplatform v1.17.0. According to the scikit docker image, google-cloud-storage v1.35.0 is installed, but google-cloud-aiplatform pulls in google-cloud-storage v2.5.0.
I am thinking I need to downgrade google-cloud-aiplatform to a specific version. Does anyone know which version, or how else to resolve this problem?
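Before pinning aiplatform itself, it may be worth trying the two workarounds the error message itself suggests, applied in the training container (the exact pin below is illustrative, not tested against this image):
pip install "protobuf<4"
# or, at the cost of much slower pure-Python parsing:
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python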
UPDATE: FWIW, if I downgrade to google-cloud-aiplatform==1.15.1, the problem above goes away. However, the problem below shows up instead.
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.local/lib/python3.7/site-packages/trainer/hpt.py", line 170, in
staging_bucket=f'{args.bucket_uri}'
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/initializer.py", line 138, in init
backing_tensorboard=experiment_tensorboard,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata.py", line 235, in set_experiment
experiment_name=experiment, description=description
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/experiment_resources.py", line 247, in get_or_create
project=project, location=location, credentials=credentials
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 283, in ensure_default_metadata_store_exists
encryption_spec_key_name=encryption_key_spec_name,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 123, in get_or_create
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 241, in _get
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 73, in __init__
self._gca_resource = self._get_gca_resource(resource_name=metadata_store_name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/base.py", line 617, in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 425, in __getattr__
return getattr(self._clients[self._default_version], name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 359, in __getattr__
client_info=self._client_info,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/client.py", line 547, in __init__
api_audience=client_options.api_audience,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 190, in __init__
("grpc.max_receive_message_length", -1),
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 241, in create_channel
**kwargs,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 318, in create_channel
default_host=default_host,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 239, in _create_composite_credentials
credentials, scopes=scopes, default_scopes=default_scopes
TypeError: with_scopes_if_required() got an unexpected keyword argument 'default_scopes'
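For what it's worth, this final error, with_scopes_if_required() got an unexpected keyword argument 'default_scopes', typically indicates that the container's google-auth predates the default_scopes parameter, so upgrading it is a cheap experiment (the version bound here is an assumption, not verified against this image):
pip install --upgrade "google-auth>=1.25.0"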

Can't compile Cython program using pyximport on Mac M1

I'm trying to use Cython (version 0.29.30) to speed up my computations on a Mac M1, and I'm testing the .pyx file via pyximport, but I got:
/Users/xxxx/.pyxbld/temp.macosx-10.9-universal2-3.10/pyrex/binomial.c:697:10: fatal error: 'ios' file not found
#include "ios"
^~~~~
1 error generated.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/unixccompiler.py", line 117, in _compile
self.spawn(compiler_so + cc_args + [src, '-o', obj] +
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/ccompiler.py", line 910, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/spawn.py", line 91, in spawn
raise DistutilsExecError(
distutils.errors.DistutilsExecError: command '/usr/bin/clang' failed with exit code 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 214, in load_module
so_path = build_module(module_name, pyxfilename, pyxbuild_dir,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 186, in build_module
so_path = pyxbuild.pyx_to_dll(pyxfilename, extension_mod,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyxbuild.py", line 102, in pyx_to_dll
dist.run_commands()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 529, in build_extension
objects = self.compiler.compile(sources,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command '/usr/bin/clang' failed with exit code 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 459, in load_module
module = load_module(fullname, self.path,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 231, in load_module
raise exc.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 214, in load_module
so_path = build_module(module_name, pyxfilename, pyxbuild_dir,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyximport.py", line 186, in build_module
so_path = pyxbuild.pyx_to_dll(pyxfilename, extension_mod,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyximport/pyxbuild.py", line 102, in pyx_to_dll
dist.run_commands()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/command/build_ext.py", line 529, in build_extension
objects = self.compiler.compile(sources,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
ImportError: Building module binomial failed: ["distutils.errors.CompileError: command '/usr/bin/clang' failed with exit code 1\n"]
I'm quite new to Cython. When I ran cython xxx.pyx directly, it worked. The answers to this question were a great help, but I was still confused and looking for a better solution. I tried the first answer, which switches to the C++ compiler:
>>> script_args = ["--cython-cplus"]
>>> setup_args = {"script_args": script_args}
>>> pyximport.install(setup_args=setup_args, language_level=3)
but got:
distutils.errors.DistutilsExecError: command '/usr/bin/clang++' failed with exit code 1
So I tried the second answer, and it works fine. But this means I have to create a .pyxbld file for each .pyx file, which is quite annoying (a sketch of such a file is below).
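For reference, the per-module workaround is a <modulename>.pyxbld file next to the .pyx; pyximport calls its make_ext() hook when building the module. A sketch for the binomial module from the traceback above, forcing C++ compilation so C++ headers like "ios" resolve:
# binomial.pyxbld -- must sit next to binomial.pyx
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(
        name=modname,
        sources=[pyxfilename],
        language="c++",  # build with the C++ toolchain instead of plain C
    )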
How can I solve this?

Problem with Dataflow runner and jaydebeapi (one-time problem)

Info on the Dataflow pipeline we're referring to in this incident:
- the pipeline is responsible for moving data from an Oracle source to BigQuery;
- the pipeline is written in Python 3.6;
- it uses ojdbc, the JDK and jaydebeapi;
- our code ensures that all required libraries etc. are always installed on all Dataflow workers before execution.
Problem description:
On 21/10 we experienced a problem with a Dataflow worker (in the europe-west3 region); see the log below. It seems it couldn't load or use the jaydebeapi library.
2020-10-21 17:28:42.792 CEST Error message from worker: Traceback (most recent call last):
File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 490, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "apache_beam/runners/common.py", line 496, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "/usr/local/lib/python3.7/site-packages/libs/dataflow/common.py", line 269, in start_bundle
jars=[f"/tmp/{self.ojdbc_lib}"]
File "/usr/local/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 412, in connect
jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
File "/usr/local/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 199, in _jdbc_connect_jpype
convertStrings=True)
File "/usr/local/lib/python3.7/site-packages/jpype/_core.py", line 216, in startJVM
ignoreUnrecognized, convertStrings, interrupt)
SystemError: java.lang.ClassNotFoundException: org.jpype.classloader.DynamicClassLoader
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 638, in do_work
work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 179, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 662, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 664, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 666, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/common.py", line 1014, in apache_beam.runners.common.DoFnRunner.start
File "apache_beam/runners/common.py", line 999, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 1045, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 446, in raise_with_traceback
raise exc.with_traceback(traceback)
File "apache_beam/runners/common.py", line 997, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 490, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "apache_beam/runners/common.py", line 496, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "/usr/local/lib/python3.7/site-packages/libs/dataflow/common.py", line 269, in start_bundle
jars=[f"/tmp/{self.ojdbc_lib}"]
File "/usr/local/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 412, in connect
jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
File "/usr/local/lib/python3.7/site-packages/jaydebeapi/__init__.py", line 199, in _jdbc_connect_jpype
convertStrings=True)
File "/usr/local/lib/python3.7/site-packages/jpype/_core.py", line 216, in startJVM
ignoreUnrecognized, convertStrings, interrupt)
SystemError: java.lang.ClassNotFoundException: org.jpype.classloader.DynamicClassLoader [while running 'Read from Oracle source/Read from database']
The problem occurred several times after running exactly the same code again, then disappeared, and everything has worked well since with the same code. Do you have any idea what could have happened? It seems to us that it was something with infrastructure/worker provisioning.
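One thing worth ruling out, since JPype fails here to locate its own bootstrap class (org.jpype.classloader.DynamicClassLoader), is that individual workers resolved different package versions at provisioning time. Pinning the Java-bridge stack in the pipeline's requirements.txt forces identical installs on every worker; a sketch, with placeholder versions rather than recommendations:
# requirements.txt (hypothetical pins; match them to your tested environment)
jaydebeapi==1.2.3
JPype1==1.3.0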

Pipeline will fail on GCP when writing tensorflow transform metadata

I hope somebody here can help. I've been googling this error like crazy but haven't found anything.
I have a pipeline that works perfectly when executed locally, but it fails when executed on GCP. Below are the error messages I get.
Workflow failed. Causes: S03:Write transform fn/WriteMetadata/ResolveBeamFutures/CreateSingleton/Read+Write transform fn/WriteMetadata/ResolveBeamFutures/ResolveFutures/Do+Write transform fn/WriteMetadata/WriteMetadata failed., A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
Traceback (most recent call last):
File "preprocess.py", line 491, in <module>
main()
File "preprocess.py", line 487, in main
transform_data(args,pipeline_options,runner)
File "preprocess.py", line 451, in transform_data
eval_data |= 'Identity eval' >> beam.ParDo(Identity())
File "/Library/Python/2.7/site-packages/apache_beam/pipeline.py", line 335, in __exit__
self.run().wait_until_finish()
File "/Library/Python/2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 897, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 294, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10607)
def start(self):
File "apache_beam/runners/worker/operations.py", line 295, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10501)
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 300, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:9702)
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 225, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in loads
return load(file)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1083, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: __new__() takes exactly 4 arguments (1 given)
Any ideas??
Thanks,
Pedro
If the pipeline works locally but fails on GCP, it's possible that you're running into a version mismatch.
What TF, tf.Transform, and Beam versions are you running locally and on GCP?
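A quick way to compare the two environments is to run the same version check in both (locally, and on a GCP worker via its startup logs or an SSH session) and diff the output:
pip freeze | grep -i -E "tensorflow|transform|beam"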