How can I resolve "prettier --write found some errors" when committing?

I am new to Git and Cypress, and we also use CircleCI. When I try to commit, I get the errors below and don't know how to fix them. Are the errors in the files pointed out by "[error]"?
✖ prettier --write found some errors. Please fix them and try committing again.
.eslintrc.js 54ms
README.md 91ms
cypress.json 9ms
cypress/integration/SaveFormTest.js 49ms
cypress/integration/SaveFormTest_orig.js 20ms
cypress/support/commands.js 19ms
cypress_original.json 5ms
jest.config.js 7ms
src/api/requests.js 29ms
src/components/FileUploadTile/FileUploadTile.js 25ms
src/components/FileUploadTile/UploadButton/UploadButton.js 11ms
src/components/TopPanel/TopPanel.js 32ms
src/config/messages/en.js 9ms
src/constants/constants.js 13ms
src/containers/UserWidget/UserWidget.js 47ms
src/pages/LoginPage/LoginPage.js 30ms
src/pages/MainPage/MainPage.test.js 20ms
src/pages/TrainingQueuePages/TrainingQueueDashboardPage/ActionButtons/ActionButtons.js 15ms
src/pages/TrainingQueuePages/TrainingQueueDashboardPage/TrainingQueueList/RecordTile/RecordTile.js 18ms
src/pages/TrainingQueuePages/TrainingQueueDashboardPage/TrainingQueueList/RecordTile/RecordTile.test.js 17ms
src/store/trainingQueueDashboard/actions.js 17ms
src/store/trainingQueueDashboard/actions.test.js 34ms
src/store/trainingQueueDashboard/reducers.js 17ms
src/store/trainingQueueDashboard/reducers.test.js 32ms
src/store/user/constants.js 5ms
src/store/user/reducers.js 14ms
src/store/user/reducers.test.js 23ms
[error] cypress_old.json: SyntaxError: Unexpected token (1:7)
[error] > 1 | b0VIM 8.1��B^.�rrenukaalurkarRenukas-MBP~renukaalurkar/Documents/cypress_tests_renuka/KW-Offers-FE/cypress.jsonutf-8
[error] | ^
[error] 2 | U3210#"! Utpada������w\E$������} "defau}} "defaultCommandTimeout": 100000 }, "test_admin_pass":${TEST_ADMIN_PASS} "test_admin":${TEST_ADMIN}, "test_tc_pass":${TEST_TC_PASS}, "test_tc":${TEST_TC}, "test_pass":${TEST_PASS}, "test_user":${TEST_USER}, "password":${PASSWORD}, "user_name", "host":"https://app.kw-offers.master.kw-offers-dev.com/", "env":{{
husky > pre-commit hook failed (add --no-verify to bypass)
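For reference, the b0VIM 8.1 header in the [error] output suggests that cypress_old.json is a stray Vim swap file rather than real JSON, which would explain why prettier cannot parse it. As a sketch (the file name is taken from the log above, and prettier may also be invoked through a package script), the two commands the hook output points at can be run by hand:
# Re-run prettier on the failing file to reproduce the parse error outside the hook
npx prettier --write cypress_old.json
# Or bypass the husky pre-commit hook for a single commit, as the last line suggests
git commit --no-verify -m "your message"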


Celery unexpectedly closes TCP connection

I'm using RabbitMQ 3.8.2 with Erlang 22.2.7 and having a problem consuming tasks. My configuration is django-celery-rabbitmq. While publishing messages to a queue everything goes fine until the length of the queue reaches 1200 messages. After that point RabbitMQ starts to close the AMQP connections with the following errors:
...
2022-11-01 09:35:25.327 [info] <0.20608.9> accepting AMQP connection <0.20608.9> (185.121.83.107:60447 -> 185.121.83.116:5672)
2022-11-01 09:35:25.483 [info] <0.20608.9> connection <0.20608.9> (185.121.83.107:60447 -> 185.121.83.116:5672): user 'rabbit_admin' authenticated and granted access to vhost '/'
...
2022-11-01 09:36:59.129 [warning] <0.19994.9> closing AMQP connection <0.19994.9> (185.121.83.108:36149 -> 185.121.83.116:5672, vhost: '/', user: 'rabbit_admin'):
client unexpectedly closed TCP connection
...
[error] <0.11162.9> closing AMQP connection <0.11162.9> (185.121.83.108:57631 -> 185.121.83.116:5672):
{writer,send_failed,{error,enotconn}}
...
2022-11-01 09:35:48.256 [error] <0.20201.9> closing AMQP connection <0.20201.9> (185.121.83.108:50058 -> 185.121.83.116:5672):
{inet_error,enotconn}
...
Then the django-celery consumer disappears from the queue's consumer list, messages become "ready", and the Celery pods are unable to ack a message after its job finishes, with the following error:
ERROR: [2022-11-01 09:20:23] /usr/src/app/project/celery.py:114 handle_message Error while handling Rabbit task: [Errno 104] Connection reset by peer
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/amqp/connection.py", line 514, in channel
    return self.channels[channel_id]
KeyError: None

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/app/project/celery.py", line 76, in handle_message
    message.ack()
  File "/usr/local/lib/python3.10/site-packages/kombu/message.py", line 125, in ack
    self.channel.basic_ack(self.delivery_tag, multiple=multiple)
  File "/usr/local/lib/python3.10/site-packages/amqp/channel.py", line 1407, in basic_ack
    return self.send_method(
  File "/usr/local/lib/python3.10/site-packages/amqp/abstract_channel.py", line 70, in send_method
    conn.frame_writer(1, self.channel_id, sig, args, content)
  File "/usr/local/lib/python3.10/site-packages/amqp/method_framing.py", line 186, in write_frame
    write(buffer_store.view[:offset])
  File "/usr/local/lib/python3.10/site-packages/amqp/transport.py", line 347, in write
    self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
I have noticed that message size also affects this behavior. In the case above each message is roughly 1000-1500 characters. If I decrease it to 50 characters, the threshold at which RabbitMQ starts to close AMQP connections shifts to 4000-5000 messages.
I suspect the problem is a lack of resources for RabbitMQ, but I don't know how to find out what exactly is going wrong. If I run htop on the server, the 2 available CPUs are never under high load (less than 20% each) and RAM usage is 400 MB of 3840 MB, so nothing seems wrong there. Is there any resource-checking command for RabbitMQ? The tasks do not take long to complete, about 10 seconds each.
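For reference, RabbitMQ ships with CLI tools for exactly this kind of resource check (a sketch; exact subcommands vary slightly between RabbitMQ versions, and these assume they are run on the broker host):
rabbitmqctl status                                # node health: memory use, file descriptors, alarms
rabbitmq-diagnostics memory_breakdown             # where the broker's memory is going
rabbitmqctl list_connections name state           # open AMQP connections and their state
rabbitmqctl list_queues name messages consumers   # queue depth and consumer counts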
Also maybe there are some missing heartbeats from the client (I had the problem earlier, but not now, there are currently no error messages about that).
Also, if I run sudo journalctl --system | grep rabbitmq, I get the following output:
......
May 24 05:15:49 oms-git.omsystem sshd[809111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.154.63.169 user=rabbitmq
May 24 05:15:51 oms-git.omsystem sshd[809111]: Failed password for rabbitmq from 43.154.63.169 port 37010 ssh2
May 24 05:15:51 oms-git.omsystem sshd[809111]: Disconnected from authenticating user rabbitmq 43.154.63.169 port 37010 [preauth]
May 24 16:12:32 oms-git.omsystem sudo[842182]: ad : TTY=pts/3 ; PWD=/var/log/rabbitmq ; USER=root ; COMMAND=/usr/bin/tail -f -n 1000 rabbit#XXX-git.log
......
Maybe there is another issue here with the firewall, but I don't see any error messages about that in /var/log/rabbitmq/rabbit#XXX.log.
My Celery configuration on the client is:
CELERY_TASK_IGNORE_RESULT = True
CELERY_RESULT_BACKEND = 'django-db'
CELERY_CACHE_BACKEND = 'django-cache'
CELERY_SEND_EVENTS = False
CELERY_BROKER_POOL_LIMIT = 30
CELERY_BROKER_HEARTBEAT = 30
CELERY_BROKER_CONNECTION_TIMEOUT = 600
CELERY_PREFETCH_MULTIPLIER = 1
CELERY_WORKER_CONCURRENCY = 1
CELERY_TASK_ACKS_LATE = True
Currently I'm running the pod with the following command:
celery -A project.celery worker -l info -f /var/log/celery/celery.log -Ofair
I have also tried various arguments to limit prefetch or turn off heartbeats, but it didn't work:
celery -A project.celery worker -l info -f /var/log/celery/celery.log --without-heartbeat --without-gossip --without-mingle
celery -A project.celery worker -l info -f /var/log/celery/celery.log --prefetch-multiplier=1 --pool=solo --
I expect there to be no limitation on queue length, with every Celery pod in my Kubernetes cluster consuming and acking messages without errors.

k8s: Error pulling images from ECR

We constantly get Waiting: ImagePullBackOff during CI upgrades. Does anybody know what's happening? The k8s cluster is 1.6.2, installed via kops. During upgrades we run kubectl set image, and over the last 2 days we have been seeing the following error:
Failed to pull image "********.dkr.ecr.eu-west-1.amazonaws.com/backend:da76bb49ec9a": rpc error: code = 2 desc = net/http: request canceled
Error syncing pod, skipping: failed to "StartContainer" for "backend" with ErrImagePull: "rpc error: code = 2 desc = net/http: request canceled"
journalctl -r -u kubelet
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: W0726 09:32:40.731903 840 docker_sandbox.go:263] NetworkPlugin kubenet failed on the status hook for pod "backend-1277054742-bb8zm_default": Unexpected command output nsenter: cannot open : No such file or directory
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724387 840 generic.go:239] PLEG: Ignoring events for pod frontend-1493767179-84rkl/default: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724371 840 kuberuntime_manager.go:858] getPodContainerStatuses for pod "frontend-1493767179-84rkl_default(0fff3b22-71c8-11e7-9679-02c1112ca4ec)" failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724358 840 kuberuntime_container.go:385] ContainerStatus for 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d error: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724329 840 remote_runtime.go:269] ContainerStatus "2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d" from runtime service failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: with error: exit status 1
Try running kubectl create configmap -n kube-system kube-dns
For more context, check out the known issues with Kubernetes 1.6: https://github.com/kubernetes/kops/releases/tag/1.6.0
This may be caused by a known Docker bug where shutdown occurs before the content is synced to disk on layer creation. The fix is included in Docker v1.13.
The workaround is to remove the empty files and re-pull the image.
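On the affected node, that might look roughly like this (a sketch, assuming Docker is the container runtime, the default /var/lib/docker root, and reusing the image name from the error above):
# List zero-byte layer files left behind by the bug before removing them
sudo find /var/lib/docker -type f -empty
# Remove the broken local image and pull it again
docker rmi ********.dkr.ecr.eu-west-1.amazonaws.com/backend:da76bb49ec9a
docker pull ********.dkr.ecr.eu-west-1.amazonaws.com/backend:da76bb49ec9a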

Error when Cognito tries to refresh credentials by itself

I am getting an error that looks like this, but I don't really know why.
Up to this point I was using Amazon's end-to-end implementation of developer authentication. Everything seems to work, but as soon as I try to use DynamoDB to do something I get this error.
AWSiOSSDKv2 [Error] AWSCredentialsProvider.m line:528 | __40-[AWSCognitoCredentialsProvider refresh]_block_invoke352 | Unable to refresh. Error is [Error Domain=com.amazonaws.service.cognitoidentity.DeveloperAuthenticatedIdentityProvider Code=0 "(null)"]
The request failed. Error: [Error Domain=com.amazonaws.service.cognitoidentity.DeveloperAuthenticatedIdentityProvider Code=0 "(null)"]
Any help?
UPDATE 1: LOG OUTPUT FROM COGNITOSYNCDEMO
I removed the information I thought should be private and replaced it with [redacted info].
2016-02-19 15:32:42.594 CognitoSyncDemo[2895:67542] initializing clients...
2016-02-19 15:32:43.028 CognitoSyncDemo[2895:67542] json: { "identityPoolId": "[redacted info]", "identityId": "[redacted info]", "token": "[redacted info]",}
2016-02-19 15:32:43.056 CognitoSyncDemo[2895:67542] Error in registering for remote notifications. Error: Error Domain=NSCocoaErrorDomain Code=3010 "REMOTE_NOTIFICATION_SIMULATOR_NOT_SUPPORTED_NSERROR_DESCRIPTION" UserInfo={NSLocalizedDescription=REMOTE_NOTIFICATION_SIMULATOR_NOT_SUPPORTED_NSERROR_DESCRIPTION}
2016-02-19 15:32:54.449 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Debug] AWSCognitoSQLiteManager.m line:1455 | -[AWSCognitoSQLiteManager filePath] | Local database is: /Users/MrMacky/Library/Developer/CoreSimulator/Devices/29BB1E0D-538D-4167-9069-C02A0628F1B3/data/Containers/Data/Application/1A86E139-5484-4F29-A3FD-25F81DE055EB/Documents/CognitoData.sqlite3
2016-02-19 15:32:54.451 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Debug] AWSCognitoSQLiteManager.m line:221 | __39-[AWSCognitoSQLiteManager getDatasets:]_block_invoke | query = 'SELECT Dataset, LastSyncCount, LastModified, ModifiedBy, CreationDate, DataStorage, RecordCount FROM CognitoMetadata WHERE IdentityId = ?'
2016-02-19 15:33:00.946 CognitoSyncDemo[2895:67542] json: { "identityPoolId": "[redacted info]", "identityId": "[redacted info]", "token": "[redacted info]",}
2016-02-19 15:33:00.947 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Error] AWSCognitoService.m line:215 | __36-[AWSCognito refreshDatasetMetadata]_block_invoke180 | Unable to list datasets: Error Domain=com.amazon.cognito.AWSCognitoErrorDomain Code=-4000 "(null)"
Looking at the exception, it appears you are trying to do push sync from the iOS Simulator. You cannot receive remote notifications on the Simulator.

puppet file function doesn't load contents

I am trying to use the Puppet file function (not the type) in the following way:
class iop_users {
  include 's3file::curl'
  include 'stdlib'

  # file() returns the contents of the first path in the list that exists
  $secretpath = file('/etc/secret', '/dev/null')

  notify { 'show secretpath':
    message => "secretpath is ${secretpath}",
  }

  s3file { '/opt/utab.yaml':
    source => "mybucket/${secretpath}/utab.yaml",
    ensure => 'latest',
  }

  exec { 'fix perms':
    command => '/bin/chmod 600 /opt/utab.yaml',
    require => S3file['/opt/utab.yaml'],
  }

  if ( $::virtual == 'xenhvm' and defined(S3file['/opt/utab.yaml']) ) {
    $uhash = loadyaml('/opt/utab.yaml')
    create_resources(iop_users::usercreate, $uhash)
  }
}
If I run this, here is some typical output. The manifest fails because the initial "secret" used to build the path is never loaded:
https_proxy=https://puppet:3128 puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for ip-10-40-1-68.eu-west-1.compute.internal
Info: Applying configuration version '1431531382'
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: % Total % Received % Xferd Average Speed Time Time Time Current
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: Dload Upload Total Spent Left Speed
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: curl: (56) Received HTTP code 404 from proxy after CONNECT
Error: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Error: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: change from notrun to 0 failed: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Notice: /Stage[main]/Iop_users/Exec[fix perms]: Dependency Exec[fetch /opt/utab.yaml] has failures: true
Warning: /Stage[main]/Iop_users/Exec[fix perms]: Skipping because of failed dependencies
Notice: secretpath is
Notice: /Stage[main]/Iop_users/Notify[show secretpath]/message: defined 'message' as 'secretpath is '
Notice: Finished catalog run in 1.28 seconds
However, on the same host where the above puppet agent run fails, if I use "apply" to try it outside the context of a manifest, it works fine:
puppet apply -e '$z=file("/etc/secret") notify { "z": message => $z}'
Notice: Compiled catalog for ip-x.x.x.x.eu-west-1.compute.internal in environment production in 0.02 seconds
Notice: wombat
Notice: /Stage[main]/Main/Notify[z]/message: defined 'message' as 'wombat
'
Notice: Finished catalog run in 0.03 seconds
What am I doing wrong? Are there any better alternative approaches?
As usual, I was confused about the way Puppet works.
Apparently, functions are always executed on the master, so any files loaded this way must exist on the master.
As soon as I added an /etc/secret file on the puppet master, it all worked.
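The two runs above illustrate the difference (a sketch restating the commands from the question; the proxy setting is the one used there):
# puppet apply compiles the catalog locally, so file() reads /etc/secret on this host
puppet apply -e '$z=file("/etc/secret") notify { "z": message => $z}'
# puppet agent asks the master to compile the catalog, so file() is evaluated on the
# master and /etc/secret must exist there, not on the agent
https_proxy=https://puppet:3128 puppet agent -t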

How do I properly install CouchDB using build-couchdb?

I'm trying CouchDB on Ubuntu 11.10. Several tests were failing, so I followed this article's advice and tried to install from build-couchdb, but I'm getting some nasty errors trying to start couchdb after a successful build.
Does anyone know what this crash report means?
Does anyone know why 1.0.1 would be installed, and not the latest build version 1.1.0?
Thanks!
$ build/bin/couchdb
Apache CouchDB 1.0.1 (LogLevel=info) is starting.
=CRASH REPORT==== 8-Jan-2012::22:19:54 ===
crasher:
initial call: couch_event_sup:init/1
pid: <0.80.0>
registered_name: []
exception exit: {{badmatch,
{'EXIT',
{{badmatch,{error,enoent}},
[{couch_log,init,1},
{gen_event,server_add_handler,4},
{gen_event,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}}},
[{couch_event_sup,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}
in function gen_server:init_it/6
ancestors: [couch_primary_services,couch_server_sup,<0.32.0>]
messages: []
links: [<0.79.0>,<0.6.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 377
stack_size: 24
reductions: 116
neighbours:
=SUPERVISOR REPORT==== 8-Jan-2012::22:19:54 ===
Supervisor: {local,couch_primary_services}
Context: start_error
Reason: {{badmatch,{'EXIT',{{badmatch,{error,enoent}},
[{couch_log,init,1},
{gen_event,server_add_handler,4},
{gen_event,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}}},
[{couch_event_sup,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}
Offender: [{pid,undefined},
{name,couch_log},
{mfargs,{couch_log,start_link,[]}},
{restart_type,permanent},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 8-Jan-2012::22:19:54 ===
Supervisor: {local,couch_server_sup}
Context: start_error
Reason: shutdown
Offender: [{pid,undefined},
{name,couch_primary_services},
{mfargs,{couch_server_sup,start_primary_services,[]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,supervisor}]
=CRASH REPORT==== 8-Jan-2012::22:19:54 ===
crasher:
initial call: application_master:init/4
pid: <0.31.0>
registered_name: []
exception exit: {bad_return,
{{couch_app,start,
[normal,
["/etc/couchdb/default.ini",
"/etc/couchdb/local.ini"]]},
{'EXIT',
{{badmatch,{error,shutdown}},
[{couch_server_sup,start_server,1},
{application_master,start_it_old,4}]}}}}
in function application_master:init/4
ancestors: [<0.30.0>]
messages: [{'EXIT',<0.32.0>,normal}]
links: [<0.30.0>,<0.7.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 24
reductions: 156
neighbours:
=INFO REPORT==== 8-Jan-2012::22:19:54 ===
application: couch
exited: {bad_return,{{couch_app,start,
[normal,
["/etc/couchdb/default.ini",
"/etc/couchdb/local.ini"]]},
{'EXIT',{{badmatch,{error,shutdown}},
[{couch_server_sup,start_server,1},
{application_master,start_it_old,4}]}}}}
type: temporary
Marcello is right in his comment. The log indicates that you are somehow (I'm not sure how) running version 1.0.1, but build-couchdb would be building version 1.1.1. (The {badmatch,{error,enoent}} from couch_log:init in the crash report also suggests a missing file, i.e. something referenced by /etc/couchdb/default.ini or local.ini, such as the log file path, does not exist.)
Perhaps you could update your question with the output of these commands?
pwd
./build/bin/couchdb
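To tell which install is actually being started, comparing the system-wide binary with the freshly built one should help (a sketch; the paths assume the default build-couchdb layout from the question, and -V simply prints the version):
which couchdb          # whichever couchdb comes first on $PATH, if any
couchdb -V             # version of that system-wide install
pwd                    # confirm you are inside the build-couchdb checkout
./build/bin/couchdb    # the binary produced by build-couchdb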