I have an AWS DMS migration task that extracts data from MongoDB and sends it to S3 as CSV files.
After the task had been running for a few days, I noticed that DMS showed no errors in the console, but the data was no longer being updated.
Looking at the logs, I noticed the task is stuck in a loop with the following errors:
2022-12-09T13:07:28 [SOURCE_CAPTURE ]I: Connection string: 'mongodb://user:****#host:27017/?retryWrites=false&authSource=admin&ssl=true' (mongodb_imp.c:398)
2022-12-09T12:46:58 [SOURCE_CAPTURE ]I: MongoDB version: 4.2.23 (mongodb_imp.c:200)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Target components st_0_PHZ527O26SQUKBXSWUIR4AOPAAAO4TPUIBEEMUQ was reattached after 6 seconds. (subtask.c:1393)
2022-12-09T12:46:58 [TASK_MANAGER ]I: Starting component st_0_PHZ527O26SQUKBXSWUIR4AOPAAAO4TPUIBEEMUQ (subtask.c:1399)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Failed to open repository of URVPYB3AR42L2WPI3ARERUDPO1W43LQASV3PBXI task. [1000153] (file_imp.c:4916)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Failed to get task repository of the URVPYB3AR42L2WPI3ARERUDPO1W43LQASV3PBXI task. [1000153] (file_utils.c:217)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Could not get table definition for table: my_table_name. [1000100] (file_apply.c:527)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Failed to create file writers for on directory: /rdsdbdata/data/tasks/URVPYB3AR42L2WPI3ARERUDPO1W43LQASV3PBXI/bucketFolder [1000100] (file_apply.c:569)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Could not init data file writers. [1000100] (file_apply.c:608)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Error executing command [1000100] (streamcomponent.c:1980)
2022-12-09T12:46:58 [TASK_MANAGER ]I: Stream component failed at subtask 0, component st_0_PHZ527O26SQUKBXSWUIR4AOPAAAO4TPUIBEEMUQ [1000100] (subtask.c:1414)
2022-12-09T12:46:58 [TARGET_APPLY ]I: Target component st_0_PHZ527O26SQUKBXSWUIR4AOPAAAO4TPUIBEEMUQ was detached because of recoverable error. Will try to reattach (subtask.c:1537)
2022-12-09T12:46:58 [SOURCE_CAPTURE ]I: Destroying mongoc client: '2103315958236' (mongodb_imp.c:1161)
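For context, this is roughly how I've been checking the task status and forcing it to resume from the CLI (the task ARN below is a placeholder, not my real one):
# Check the reported status of the replication task
aws dms describe-replication-tasks \
  --filters Name=replication-task-arn,Values=arn:aws:dms:us-east-1:123456789012:task:EXAMPLE
# Stop the task, then resume it from where it left off
aws dms stop-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE
aws dms start-replication-task \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
  --start-replication-task-type resume-processing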
I haven't found anything about this error yet. If anyone has any ideas, that would help.
Thanks
Related
I am getting an error while running a task in an AWS ECS cluster
I'm running into this error when I try to create a file in EFS from ECS:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr:
b'mount.nfs4: mounting fs-xxxxxxxxxxxxxxxxxx.efs.eu-west-1.amazonaws.com:/opt/data/ failed,
reason given by server: No such file or directory' : unsuccessful EFS utils command execution; code: 32
I allowed traffic on port 2049, but it still gives me the same error.
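For reference, the port 2049 rule I added looks roughly like this (the security group IDs are placeholders for my real ones):
# Allow NFS traffic from the ECS tasks' security group to the EFS mount target's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --source-group sg-0fedcba9876543210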
I want to see the logs of my Elastic Beanstalk environment from the command line, so I'm running the following:
eb logs --cloudwatch-logs enable --cloudwatch-log-source instance Humboialpha2021-env
However, I'm seeing the following error:
Enabling instance log streaming to CloudWatch for your environment
After the environment is updated you can view your logs by following the link:
https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs:prefix=/aws/elasticbeanstalk/Humboialpha2021-env/
Printing Status:
2021-03-27 16:19:05 INFO Environment update is starting.
2021-03-27 16:19:16 INFO Updating environment Humboialpha2021-env's configuration settings.
2021-03-27 16:19:54 INFO Instance deployment successfully detected a JAR file in your source bundle.
2021-03-27 16:19:57 INFO Instance deployment successfully generated a 'Procfile'.
2021-03-27 16:19:58 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2021-03-27 16:20:02 ERROR [Instance: i-04fb7a3d67a219ca7] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2021-03-27 16:20:02 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-27 16:20:02 ERROR Unsuccessful command execution on instance id(s) 'i-04fb7a3d67a219ca7'. Aborting the operation.
2021-03-27 16:20:03 ERROR Failed to deploy configuration.
ERROR: ServiceError - Failed to deploy configuration.
How do I fix this?
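For what it's worth, I can still pull the raw instance logs (including the eb-engine.log referenced above) without CloudWatch streaming, roughly like this:
# Retrieve the full log bundle from the environment's instances
eb logs --all Humboialpha2021-env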
I am using this command to try to get a jenkins-x cluster set up and running:
jx create cluster aws --ng
I've also tried:
jx create cluster aws
The output looks like this:
Waiting to for a valid kops cluster state...
WARNING: retrying after error: exit status 2
error: Failed to successfully validate kops cluster state: after 25 attempts, last error: exit status 2
All help appreciated.
Try the kops validate cluster command, as shown below:
AWS_ACCESS_KEY_ID=<YOUR_KEY_HERE> AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY_HERE> kops validate cluster --wait 10m --state="s3://<YOUR_S3_BUCKET_NAME_HERE>" --name=<YOUR_CLUSTER_NAME_HERE>
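If you prefer, a variant of the same check (same placeholders as above) is to export the state store once so you don't have to pass it on every call:
export AWS_ACCESS_KEY_ID=<YOUR_KEY_HERE>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY_HERE>
export KOPS_STATE_STORE="s3://<YOUR_S3_BUCKET_NAME_HERE>"
kops validate cluster --wait 10m --name=<YOUR_CLUSTER_NAME_HERE>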
I tried to set up an Elastic Beanstalk worker tier using the following command:
eb create -t worker
But I receive the following error:
2015-11-04 16:44:01 UTC+0800 ERROR Stack named 'awseb-e-wh4epksrzi-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry, AWSEBSecurityGroup].
2015-11-04 16:43:58 UTC+0800 ERROR Creating security group named: sg-7ba1f41e failed Reason: Resource creation cancelled
Is there something specific I need to do to run this from the command line?
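For reference, I've been trying to see why the security group creation was cancelled by reading the CloudFormation events for the stack named in the error, roughly like this:
# List the stack events to find the first resource failure (stack name copied from the error above)
aws cloudformation describe-stack-events \
  --stack-name awseb-e-wh4epksrzi-stack \
  --query 'StackEvents[].[Timestamp,ResourceStatus,LogicalResourceId,ResourceStatusReason]' \
  --output table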
I found the eb command line buggy. Try using the web console instead; it's much more reliable.