Second node of PXC doesn't join the cluster - percona-xtradb-cluster

I'm trying to set up PXC with 3 nodes. The 1st node was bootstrapped successfully, but when I try to start the second node it can't get an SST.
Logfile from 2nd node:
2020-10-07T13:17:16.480905Z 0 [Note] [MY-000000] [WSREP] Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address 'xxx.xxx.xxx.xxx:4444' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '154359' --mysqld-version '8.0.20-11.1' --binlog 'mysql-bin' )
2020-10-07T13:17:17.203042Z 0 [Warning] [MY-000000] [WSREP-SST] Found a stale sst_in_progress file: /var/lib/mysql//sst_in_progress
2020-10-07T13:17:17.642030Z 1 [Note] [MY-000000] [WSREP] Prepared SST request: xtrabackup-v2|xxx.xxx.xxx.xxx:4444/xtrabackup_sst//1
2020-10-07T13:17:17.642609Z 1 [Note] [MY-000000] [Galera] Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 10), state transfer needed: yes
2020-10-07T13:17:17.643130Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2020-10-07T13:17:17.643718Z 1 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: 5
2020-10-07T13:17:17.644031Z 1 [Note] [MY-000000] [Galera] Check if state gap can be serviced using IST
2020-10-07T13:17:17.644344Z 1 [Note] [MY-000000] [Galera] Local UUID: 00000000-0000-0000-0000-000000000000 != Group UUID: 613f9455-07f0-11eb-9e01-139f2b6b4973
2020-10-07T13:17:17.644667Z 1 [Note] [MY-000000] [Galera] ####### IST uuid:00000000-0000-0000-0000-000000000000 f: 0, l: 57, STRv: 3
2020-10-07T13:17:17.645190Z 1 [Note] [MY-000000] [Galera] IST receiver addr using ssl://xxx.xxx.xxx.xxx:4568
2020-10-07T13:17:17.645589Z 1 [Note] [MY-000000] [Galera] IST receiver using ssl
2020-10-07T13:17:17.646458Z 1 [Note] [MY-000000] [Galera] Prepared IST receiver for 0-57, listening at: ssl://xxx.xxx.xxx.xxx:4568
2020-10-07T13:17:17.647629Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:17:17.648009Z 1 [Note] [MY-000000] [Galera] Requesting state transfer failed: -11(Resource temporarily unavailable). Will keep retrying every 1 second(s)
2020-10-07T13:17:18.651866Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:17:18.969089Z 0 [Note] [MY-000000] [Galera] (6b19a1b7, 'ssl://0.0.0.0:4567') turning message relay requesting off
2020-10-07T13:17:19.654067Z 0 [Warning] [MY-000000] [Galera] Member 1.0 (engine2) requested state transfer from 'engine3', but it is impossible to select State Transfer donor: Resource temporarily unavailable
2020-10-07T13:18:57.356706Z 0 [Note] [MY-000000] [WSREP-SST] pigz: skipping: <stdin> empty
2020-10-07T13:18:57.360327Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************* FATAL ERROR **********************
2020-10-07T13:18:57.363359Z 0 [ERROR] [MY-000000] [WSREP-SST] Possible timeout in receving first data from donor in gtid/keyring stage
2020-10-07T13:18:57.363393Z 0 [ERROR] [MY-000000] [WSREP-SST] Line 1108
2020-10-07T13:18:57.363412Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************************************************
2020-10-07T13:18:57.363430Z 0 [ERROR] [MY-000000] [WSREP-SST] Cleanup after exit with status:32
2020-10-07T13:18:57.384013Z 0 [ERROR] [MY-000000] [WSREP] Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address 'xxx.xxx.xxx.xxx:4444' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '154359' --mysqld-version '8.0.20-11.1' --binlog 'mysql-bin' : 32 (Broken pipe)
2020-10-07T13:18:57.384488Z 0 [ERROR] [MY-000000] [WSREP] Failed to read uuid:seqno from joiner script.
2020-10-07T13:18:57.384541Z 0 [ERROR] [MY-000000] [WSREP] SST script aborted with error 32 (Broken pipe)
2020-10-07T13:18:57.385257Z 3 [Note] [MY-000000] [Galera] Processing SST received
2020-10-07T13:18:57.385338Z 3 [Note] [MY-000000] [Galera] SST request was cancelled
2020-10-07T13:18:57.385387Z 3 [ERROR] [MY-000000] [Galera] State transfer request failed unrecoverably: 32 (Broken pipe). Most likely it is due to inability to communicate with the cluster primary component. Restart required.
2020-10-07T13:18:57.385421Z 3 [Note] [MY-000000] [Galera] ReplicatorSMM::abort()
2020-10-07T13:18:57.385453Z 3 [Note] [MY-000000] [Galera] Closing send monitor...
2020-10-07T13:18:57.385484Z 3 [Note] [MY-000000] [Galera] Closed send monitor.
2020-10-07T13:18:57.385519Z 3 [Note] [MY-000000] [Galera] gcomm: terminating thread
2020-10-07T13:18:57.385733Z 3 [Note] [MY-000000] [Galera] gcomm: joining thread
2020-10-07T13:18:57.385762Z 3 [Note] [MY-000000] [Galera] gcomm: closing backend
2020-10-07T13:18:57.945476Z 1 [ERROR] [MY-000000] [Galera] Requesting state transfer failed: -77(File descriptor in bad state)
2020-10-07T13:18:57.945563Z 1 [ERROR] [MY-000000] [Galera] State transfer request failed unrecoverably: 77 (File descriptor in bad state). Most likely it is due to inability to communicate with the cluster primary component. Restart required.

A couple of things could be causing this. 1) Make sure ports 4444, 4567, and 4568 are open between all nodes. 2) Make sure you have copied the SSL certificates from node1 over to node2 BEFORE starting node2. Please read the PXC documentation on setting up new nodes.
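The two checks above can be sketched as a small shell snippet. The hostnames, and the assumption that the certificates live in the default datadir, are illustrative placeholders, not details from the question:

```shell
# Check that a single TCP port is reachable; uses bash's /dev/tcp so no
# extra tools are needed on the joiner.
check_port() {
    timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Check all three ports PXC needs (4444 SST, 4567 group comms, 4568 IST)
# against a given donor host.
check_sst_ports() {
    for port in 4444 4567 4568; do
        if check_port "$1" "$port"; then
            echo "port $port reachable"
        else
            echo "port $port BLOCKED"
        fi
    done
}

# Usage (hypothetical donor hostname):
#   check_sst_ports pxc1.example.com

# Copy the SSL certificates generated on the bootstrapped node to the joiner
# BEFORE starting it (paths assume the default datadir):
#   scp root@pxc1.example.com:/var/lib/mysql/{ca.pem,server-cert.pem,server-key.pem} /var/lib/mysql/
```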


"queueproxy" : error reverse proxying request; sockstat: sockets TCP: inuse 27 orphan 2 tw 20 alloc 593 mem 52

I have an issue with respect to the above subject, using:
knative v1.2.5
istio 1.12.7
Every 20 minutes we see the below error in the queue proxy:
error: "context canceled"
knative.dev/key: "test-common-service/test-app-0-0-0"
knative.dev/pod: "test-app-0-0-0-deployment-xxxxxxx-xxxxx"
logger: "queueproxy"
message: "error reverse proxying request; sockstat: sockets: used 44
TCP: inuse 27 orphan 2 tw 20 alloc 593 mem 52
UDP: inuse 0 mem 3
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
Can someone please let me know how I can fix this?
Thank you!
It looks like you may have requests which are hitting request timeouts in the queue proxy. What is your typical request latency, and what is the Revision's timeoutSeconds set to?
It's also possible that Istio is cancelling (resetting) some of the TCP connections to the queue-proxy, but that seems unlikely.
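For reference, timeoutSeconds lives on the Revision template of the Knative Service. A minimal sketch (the name and namespace are taken from the log above; the image is a placeholder):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test-app
  namespace: test-common-service
spec:
  template:
    spec:
      # Maximum time the queue-proxy waits for the user container to respond;
      # requests exceeding this are cancelled, which can surface as
      # "error reverse proxying request" / "context canceled".
      timeoutSeconds: 300
      containers:
        - image: registry.example.com/test-app:latest  # placeholder image
```

You can inspect the effective value on the running revision with `kubectl get revision test-app-0-0-0 -n test-common-service -o jsonpath='{.spec.timeoutSeconds}'`.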

Why does Chilkat Http experience handshake error at "https" but not at "http"?

The "http" addresses worked fine.
The "https" version gives me a handshake error:
This is the error that I get when I call the following:
Dim lSuccess&
lSuccess = nHttp.Download("https://autoconfig.thunderbird.net/v1.1/gmx.de", "d:\weg.xml")
lSuccess returns 0, which means an error occurred.
ChilkatLog:
Download:
DllDate: Aug 1 2014
ChilkatVersion: 9.5.0.43
UnlockPrefix: *******
Username: *******
Architecture: Little Endian; 32-bit
Language: ActiveX
VerboseLogging: 0
url: https://autoconfig.thunderbird.net/v1.1/gmx.de
toLocalPath: d:\weg.xml
currentWorkingDir: C:\Program Files (x86)\Microsoft Visual Studio\VB98
a_httpDownload:
httpDownloadFile:
localFilePath: d:\weg.xml
localFileAlreadyExists: 0
quickHttpRequest:
httpVerb: GET
url: https://autoconfig.thunderbird.net/v1.1/gmx.de
openHttpConnection:
Opening connection directly to HTTP server.
httpHostname: autoconfig.thunderbird.net
httpPort: 443
ssl: 1
socket2Connect:
connect2:
connectImplicitSsl:
clientHandshake:
clientHandshake2:
processAlert:
TlsAlert:
level: fatal
descrip: handshake failure
--TlsAlert
--processAlert
Failed to read incoming handshake messages. (1)
--clientHandshake2
--clientHandshake
Client handshake failed. (3)
--connectImplicitSsl
ConnectFailReason: 0
--connect2
--socket2Connect
ConnectFailReason: 0
connectElapsedMs: 32
--openHttpConnection
--quickHttpRequest
outputLocalFileSize: 0
numOutputBytesWritten: 0
httpDownloadFile failed.
--httpDownloadFile
a_httpDownload failed.
--a_httpDownload
totalElapsedMs: 47
Failed.
--Download
--ChilkatLog
You're using a VERY OLD version of Chilkat; update Chilkat to the latest version. TLS and TLS server requirements evolve over the years. One cannot expect any implementation to work forever when the external world is always changing.

Method PeerConnection::SetLocalDescription in NativeAPI crashes

I'm trying to set up a data channel between a server written in C++ and a Python client. The server crashes with SIGSEGV when it tries to set the local session description created in the CreateAnswer method.
The server and client exchange SDP information via WebSocket and should open the data channel without video and audio streams. Both programs run under docker-compose as separate services, so no audio or video devices are available. I use the WebRTC Native API from the m76 branch.
Crashing handler:
static void OnAnswerCreated(WebRTCManagerImpl* impl_, webrtc::SessionDescriptionInterface* desc) {
    LOG4CPLUS_INFO_FMT(impl_->logger_, "Answer created session_id %s", desc->session_id().c_str());
    std::string offer_string;
    desc->ToString(&offer_string);
    LOG4CPLUS_DEBUG_FMT(impl_->logger_, "Offer string: %s", offer_string.c_str());
    impl_->peer_connection_->SetLocalDescription(&impl_->set_session_description_observer_, desc);
    impl_->signaling_->SendSessionDescription(*desc);
}
I create my connection with this factory:
webrtc::PeerConnectionFactoryDependencies CreatePeerConnectionFactoryDependencies() {
    webrtc::PeerConnectionFactoryDependencies dependencies;
    dependencies.network_thread = nullptr;
    dependencies.worker_thread = nullptr;
    dependencies.signaling_thread = nullptr;
    dependencies.call_factory = webrtc::CreateCallFactory();
    dependencies.task_queue_factory = webrtc::CreateDefaultTaskQueueFactory();
    dependencies.event_log_factory = absl::make_unique<webrtc::RtcEventLogFactory>(dependencies.task_queue_factory.get());

    cricket::MediaEngineDependencies mediaDependencies;
    mediaDependencies.task_queue_factory = dependencies.task_queue_factory.get();
    mediaDependencies.adm = rtc::scoped_refptr<webrtc::FakeAudioDeviceModule>(new webrtc::FakeAudioDeviceModule);
    mediaDependencies.audio_encoder_factory = webrtc::CreateBuiltinAudioEncoderFactory();
    mediaDependencies.audio_decoder_factory = webrtc::CreateBuiltinAudioDecoderFactory();
    mediaDependencies.audio_processing = webrtc::AudioProcessingBuilder().Create();
    mediaDependencies.video_encoder_factory = webrtc::CreateBuiltinVideoEncoderFactory();
    mediaDependencies.video_decoder_factory = webrtc::CreateBuiltinVideoDecoderFactory();
    dependencies.media_engine = cricket::CreateMediaEngine(std::move(mediaDependencies));

    return dependencies;
}
webrtc::PeerConnectionFactoryDependencies deps = CreatePeerConnectionFactoryDependencies();
deps.signaling_thread = signaling_thread_.get();
// deps.network_thread = network_thread.get();
// deps.worker_thread = worker_thread.get();
peer_connection_factory_ = webrtc::CreateModularPeerConnectionFactory(std::move(deps));
The call stack:
<unknown> 0x0000000001e798f7
webrtc::PeerConnection::ValidateSessionDescription(webrtc::SessionDescriptionInterface const*, cricket::ContentSource) 0x00000000005e74dc
webrtc::PeerConnection::SetLocalDescription(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) 0x00000000005bb677
void webrtc::ReturnType<void>::Invoke<webrtc::PeerConnectionInterface, void (webrtc::PeerConnectionInterface::*)(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*), webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>(webrtc::PeerConnectionInterface*, void (webrtc::PeerConnectionInterface::*)(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*), webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) 0x000000000059b814
webrtc::MethodCall2<webrtc::PeerConnectionInterface, void, webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>::OnMessage(rtc::Message*) 0x0000000000598f5f
webrtc::internal::SynchronousMethodCall::Invoke(rtc::Location const&, rtc::Thread*) 0x00000000007198fc
webrtc::MethodCall2<webrtc::PeerConnectionInterface, void, webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*>::Marshal(rtc::Location const&, rtc::Thread*) 0x0000000000593706
webrtc::PeerConnectionProxyWithInternal<webrtc::PeerConnectionInterface>::SetLocalDescription(webrtc::SetSessionDescriptionObserver*, webrtc::SessionDescriptionInterface*) 0x000000000058c982
preprocessor::p2p::WebRTCManager::WebRTCManagerImpl::OnAnswerCreated webrtc_manager.cpp:226
std::__invoke_impl<void, void (*&)(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, webrtc::SessionDescriptionInterface*), preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*&, webrtc::SessionDescriptionInterface*> invoke.h:60
std::__invoke<void (*&)(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, webrtc::SessionDescriptionInterface*), preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*&, webrtc::SessionDescriptionInterface*> invoke.h:95
std::_Bind<void (*(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, std::_Placeholder<1>))(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, webrtc::SessionDescriptionInterface*)>::__call<void, webrtc::SessionDescriptionInterface*&&, 0ul, 1ul>(std::tuple<webrtc::SessionDescriptionInterface*&&>&&, std::_Index_tuple<0ul, 1ul>) functional:467
std::_Bind<void (*(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, std::_Placeholder<1>))(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, webrtc::SessionDescriptionInterface*)>::operator()<webrtc::SessionDescriptionInterface*, void>(webrtc::SessionDescriptionInterface*&&) functional:549
std::_Function_handler<void (webrtc::SessionDescriptionInterface*), std::_Bind<void (*(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, std::_Placeholder<1>))(preprocessor::p2p::WebRTCManager::WebRTCManagerImpl*, webrtc::SessionDescriptionInterface*)> >::_M_invoke(std::_Any_data const&, webrtc::SessionDescriptionInterface*&&) std_function.h:316
std::function<void (webrtc::SessionDescriptionInterface*)>::operator()(webrtc::SessionDescriptionInterface*) const std_function.h:706
preprocessor::p2p::CreateSessionDescriptionObserver::OnSuccess webrtc_manager.cpp:79
webrtc::WebRtcSessionDescriptionFactory::OnMessage(rtc::Message*) 0x0000000000b90785
rtc::MessageQueue::Dispatch(rtc::Message*) 0x00000000005712f8
rtc::Thread::ProcessMessages(int) 0x0000000000553398
rtc::Thread::Run() 0x0000000000552993
rtc::Thread::PreRun(void*) 0x0000000000552950
start_thread 0x00007ffff76536db
clone 0x00007ffff608a88f
WebRTC logs:
(audio_processing_impl.cc:435): Capture analyzer activated: 0
Capture post processor activated: 0
Render pre processor activated: 0
(webrtc_voice_engine.cc:196): WebRtcVoiceEngine::WebRtcVoiceEngine
(webrtc_video_engine.cc:479): WebRtcVideoEngine::WebRtcVideoEngine()
(webrtc_voice_engine.cc:219): WebRtcVoiceEngine::Init
(webrtc_voice_engine.cc:227): Supported send codecs in order of preference:
(webrtc_voice_engine.cc:230): opus/48000/2 { minptime=10 useinbandfec=1 } (111)
(webrtc_voice_engine.cc:230): ISAC/16000/1 (103)
(webrtc_voice_engine.cc:230): ISAC/32000/1 (104)
(webrtc_voice_engine.cc:230): G722/8000/1 (9)
(webrtc_voice_engine.cc:230): ILBC/8000/1 (102)
(webrtc_voice_engine.cc:230): PCMU/8000/1 (0)
(webrtc_voice_engine.cc:230): PCMA/8000/1 (8)
(webrtc_voice_engine.cc:230): CN/32000/1 (106)
(webrtc_voice_engine.cc:230): CN/16000/1 (105)
(webrtc_voice_engine.cc:230): CN/8000/1 (13)
(webrtc_voice_engine.cc:230): telephone-event/48000/1 (110)
(webrtc_voice_engine.cc:230): telephone-event/32000/1 (112)
(webrtc_voice_engine.cc:230): telephone-event/16000/1 (113)
(webrtc_voice_engine.cc:230): telephone-event/8000/1 (126)
(webrtc_voice_engine.cc:233): Supported recv codecs in order of preference:
(webrtc_voice_engine.cc:236): opus/48000/2 { minptime=10 useinbandfec=1 } (111)
(webrtc_voice_engine.cc:236): ISAC/16000/1 (103)
(webrtc_voice_engine.cc:236): ISAC/32000/1 (104)
(webrtc_voice_engine.cc:236): G722/8000/1 (9)
(webrtc_voice_engine.cc:236): ILBC/8000/1 (102)
(webrtc_voice_engine.cc:236): PCMU/8000/1 (0)
(webrtc_voice_engine.cc:236): PCMA/8000/1 (8)
(webrtc_voice_engine.cc:236): CN/32000/1 (106)
(webrtc_voice_engine.cc:236): CN/16000/1 (105)
(webrtc_voice_engine.cc:236): CN/8000/1 (13)
(webrtc_voice_engine.cc:236): telephone-event/48000/1 (110)
(webrtc_voice_engine.cc:236): telephone-event/32000/1 (112)
(webrtc_voice_engine.cc:236): telephone-event/16000/1 (113)
(webrtc_voice_engine.cc:236): telephone-event/8000/1 (126)
(apm_helpers.cc:32): Setting AGC mode to 0
(audio_processing_impl.cc:699): Highpass filter activated: 0
(audio_processing_impl.cc:717): Gain Controller 2 activated: 0
(audio_processing_impl.cc:719): Pre-amplifier activated: 0
(webrtc_voice_engine.cc:309): WebRtcVoiceEngine::ApplyOptions: AudioOptions {aec: 1, agc: 1, ns: 1, hf: 1, swap: 0, audio_jitter_buffer_max_packets: 200, audio_jitter_buffer_fast_accelerate: 0, audio_jitter_buffer_min_delay_ms: 0, audio_jitter_buffer_enable_rtx_handling: 0, typing: 1, experimental_agc: 0, extended_filter_aec: 0, delay_agnostic_aec: 0, experimental_ns: 0, residual_echo_detector: 1, }
(render_delay_buffer.cc:341): Applying total delay of 5 blocks.
(matched_filter.cc:450): Filter 0: start: 0 ms, end: 128 ms.
(matched_filter.cc:450): Filter 1: start: 96 ms, end: 224 ms.
(matched_filter.cc:450): Filter 2: start: 192 ms, end: 320 ms.
(matched_filter.cc:450): Filter 3: start: 288 ms, end: 416 ms.
(matched_filter.cc:450): Filter 4: start: 384 ms, end: 512 ms.
(audio_processing_impl.cc:699): Highpass filter activated: 0
(audio_processing_impl.cc:717): Gain Controller 2 activated: 0
(audio_processing_impl.cc:719): Pre-amplifier activated: 0
(apm_helpers.cc:48): Echo control set to 1 with mode 0
(audio_processing_impl.cc:699): Highpass filter activated: 0
(audio_processing_impl.cc:717): Gain Controller 2 activated: 0
(audio_processing_impl.cc:719): Pre-amplifier activated: 0
(audio_processing_impl.cc:699): Highpass filter activated: 0
(audio_processing_impl.cc:717): Gain Controller 2 activated: 0
(audio_processing_impl.cc:719): Pre-amplifier activated: 0
(apm_helpers.cc:62): NS set to 1
(webrtc_voice_engine.cc:447): Stereo swapping enabled? 0
(webrtc_voice_engine.cc:452): NetEq capacity is 200
(webrtc_voice_engine.cc:458): NetEq fast mode? 0
(webrtc_voice_engine.cc:464): NetEq minimum delay is 0
(webrtc_voice_engine.cc:470): NetEq handle reordered packets? 0
(webrtc_voice_engine.cc:481): Delay agnostic aec is enabled? 0
(webrtc_voice_engine.cc:491): Extended filter aec is enabled? 0
(webrtc_voice_engine.cc:501): Experimental ns is enabled? 0
(webrtc_voice_engine.cc:511): Setting AGC to 1
(webrtc_voice_engine.cc:533): Typing detection is enabled? 1
(audio_processing_impl.cc:699): Highpass filter activated: 1
(audio_processing_impl.cc:717): Gain Controller 2 activated: 0
(audio_processing_impl.cc:719): Pre-amplifier activated: 0
(webrtc_sdp.cc:3255): Ignored line: a=sctpmap:5000 webrtc-datachannel 65535
(rtc_event_log_impl.cc:63): Creating legacy encoder for RTC event log.
(peer_connection_factory.cc:361): Using default network controller factory
(bitrate_prober.cc:69): Bandwidth probing enabled, set to inactive
(paced_sender.cc:421): ProcessThreadAttached 0xec072e20
(cpu_info.cc:53): Available number of cores: 8
(aimd_rate_control.cc:105): Using aimd rate control with back off factor 0.85
(remote_bitrate_estimator_single_stream.cc:71): RemoteBitrateEstimatorSingleStream: Instantiating.
(remote_estimator_proxy.cc:44): Maximum interval between transport feedback RTCP messages (ms): 250
(openssl_identity.cc:44): Making key pair
(peer_connection.cc:5531): Local and Remote descriptions must be applied to get the SSL Role of the SCTP transport.
(openssl_identity.cc:92): Returning key pair
(openssl_certificate.cc:58): Making certificate for WebRTC
(openssl_certificate.cc:108): Returning certificate
(p2p_transport_channel.cc:519): Set backup connection ping interval to 25000 milliseconds.
(p2p_transport_channel.cc:528): Set ICE receiving timeout to 2500 milliseconds
(p2p_transport_channel.cc:535): Set ping most likely connection to 0
(p2p_transport_channel.cc:542): Set stable_writable_connection_ping_interval to 2500
(p2p_transport_channel.cc:555): Set presume writable when fully relayed to 0
(p2p_transport_channel.cc:564): Set regather_on_failed_networks_interval to 300000
(p2p_transport_channel.cc:583): Set receiving_switching_delay to 1000
(jsep_transport_controller.cc:1214): Creating DtlsSrtpTransport.
(dtls_srtp_transport.cc:61): Setting RTCP Transport on 0 transport 0
(dtls_srtp_transport.cc:66): Setting RTP Transport on 0 transport dc004830
(p2p_transport_channel.cc:465): Received remote ICE parameters: ufrag=YAvY, renomination disabled
(peer_connection.cc:4185): Session: 7301418690559709073 Old state: kStable New state: kHaveRemoteOffer
(peer_connection.cc:5531): Local and Remote descriptions must be applied to get the SSL Role of the SCTP transport.
(peer_connection.cc:5559): Local and Remote descriptions must be applied to get the SSL Role of the session.
(paced_sender.cc:293): Elapsed time (12680 ms) longer than expected, limiting to 2000 ms
Signal: SIGSEGV (Segmentation fault)
I guess the problem is not in the callback but in the connection initialization. What am I doing wrong?
I've found the error in my code:
peer_connection_->SetRemoteDescription(&set_session_description_observer_, desc.get());
I passed the raw pointer while the smart pointer still owned the memory, and the smart pointer later released it out from under the connection.

Django Rest Docker with MySQL create a superuser

I have dockerized my existing Django Rest project which uses MySQL database.
My dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    depends_on:
      - db
    ports:
      - "8000:8000"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: libraries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
My commands docker-compose build and docker-compose up are successful, and the output of the latter is:
D:\Development\personal_projects\library_backend>docker-compose up
Starting librarybackend_db_1 ... done
Starting librarybackend_web_1 ... done
Attaching to librarybackend_db_1, librarybackend_web_1
db_1 | 2018-02-13T10:11:48.044358Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
db_1 | 2018-02-13T10:11:48.045250Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 1 ...
db_1 | 2018-02-13T10:11:48.047697Z 0 [Note] InnoDB: PUNCH HOLE support available
db_1 | 2018-02-13T10:11:48.047857Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
db_1 | 2018-02-13T10:11:48.048076Z 0 [Note] InnoDB: Uses event mutexes
db_1 | 2018-02-13T10:11:48.048193Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
db_1 | 2018-02-13T10:11:48.048297Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
db_1 | 2018-02-13T10:11:48.048639Z 0 [Note] InnoDB: Using Linux native AIO
db_1 | 2018-02-13T10:11:48.048928Z 0 [Note] InnoDB: Number of pools: 1
db_1 | 2018-02-13T10:11:48.049119Z 0 [Note] InnoDB: Using CPU crc32 instructions
db_1 | 2018-02-13T10:11:48.050256Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
db_1 | 2018-02-13T10:11:48.056054Z 0 [Note] InnoDB: Completed initialization of buffer pool
db_1 | 2018-02-13T10:11:48.058064Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
db_1 | 2018-02-13T10:11:48.069243Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
db_1 | 2018-02-13T10:11:48.081867Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
db_1 | 2018-02-13T10:11:48.082237Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
db_1 | 2018-02-13T10:11:48.096687Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
db_1 | 2018-02-13T10:11:48.097392Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
db_1 | 2018-02-13T10:11:48.097433Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
db_1 | 2018-02-13T10:11:48.097666Z 0 [Note] InnoDB: Waiting for purge to start
db_1 | 2018-02-13T10:11:48.147792Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 13453508
db_1 | 2018-02-13T10:11:48.148222Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
db_1 | 2018-02-13T10:11:48.148657Z 0 [Note] Plugin 'FEDERATED' is disabled.
db_1 | 2018-02-13T10:11:48.151181Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180213 10:11:48
db_1 | 2018-02-13T10:11:48.152154Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
db_1 | 2018-02-13T10:11:48.152545Z 0 [Warning] CA certificate ca.pem is self signed.
db_1 | 2018-02-13T10:11:48.153982Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
db_1 | 2018-02-13T10:11:48.154147Z 0 [Note] IPv6 is available.
db_1 | 2018-02-13T10:11:48.154261Z 0 [Note] - '::' resolves to '::';
db_1 | 2018-02-13T10:11:48.154373Z 0 [Note] Server socket created on IP: '::'.
db_1 | 2018-02-13T10:11:48.160505Z 0 [Warning] 'user' entry 'root#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.160745Z 0 [Warning] 'user' entry 'mysql.session#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.160859Z 0 [Warning] 'user' entry 'mysql.sys#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161025Z 0 [Warning] 'db' entry 'performance_schema mysql.session#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161147Z 0 [Warning] 'db' entry 'sys mysql.sys#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.161266Z 0 [Warning] 'proxies_priv' entry '# root#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.168523Z 0 [Warning] 'tables_priv' entry 'user mysql.session#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.168734Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys#localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-02-13T10:11:48.172735Z 0 [Note] Event Scheduler: Loaded 0 events
db_1 | 2018-02-13T10:11:48.173195Z 0 [Note] mysqld: ready for connections.
db_1 | Version: '5.7.20' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
db_1 | 2018-02-13T10:11:48.173365Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
db_1 | 2018-02-13T10:11:48.173467Z 0 [Note] Beginning of list of non-natively partitioned tables
db_1 | 2018-02-13T10:11:48.180866Z 0 [Note] End of list of non-natively partitioned tables
web_1 | Operations to perform:
web_1 | Apply all migrations: account, admin, auth, authtoken, contenttypes, libraries, sessions, sites
web_1 | Running migrations:
web_1 | No migrations to apply.
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | February 13, 2018 - 10:11:50
web_1 | Django version 1.10.3, using settings 'config.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
I can now access my app by hitting localhost:8000. However, as it creates a fresh database instance in the container, I do not know how I can create a superuser there and log in to my admin interface. Normally, without Docker, I run the command python manage.py createsuperuser, which starts an interactive prompt for entering the admin user's credentials.
How should I handle this?
If I have an existing database with data in it, how can I use it to populate the tables of the database in the container?
I could create a superuser by simply running: docker-compose run web python manage.py createsuperuser, which opened an interactive prompt to enter the admin credentials; subsequently I could log in through my admin interface.
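If you need this non-interactively (e.g. in CI), one hedged sketch is to pipe a one-off Python snippet into manage.py shell. The service name `web` and the credentials are placeholders, and this assumes the default Django auth User model:

```shell
# Defines (but does not run) a helper that creates the superuser only if it
# does not already exist, so it is safe to re-run on every deploy.
create_superuser() {
    echo "from django.contrib.auth.models import User; \
User.objects.filter(username='admin').exists() or \
User.objects.create_superuser('admin', 'admin@example.com', 'changeme')" \
        | docker-compose run --rm -T web python manage.py shell
}

# Usage: create_superuser
# (-T disables TTY allocation so the snippet can be piped in.)
```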

k8s: Error pulling images from ECR

We constantly get Waiting: ImagePullBackOff during CI upgrades. Does anybody know what's happening? The k8s cluster is 1.6.2, installed via kops. During upgrades we run kubectl set image, and for the last 2 days we have been seeing the following error:
Failed to pull image "********.dkr.ecr.eu-west-1.amazonaws.com/backend:da76bb49ec9a": rpc error: code = 2 desc = net/http: request canceled
Error syncing pod, skipping: failed to "StartContainer" for "backend" with ErrImagePull: "rpc error: code = 2 desc = net/http: request canceled"
journalctl -r -u kubelet
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: W0726 09:32:40.731903 840 docker_sandbox.go:263] NetworkPlugin kubenet failed on the status hook for pod "backend-1277054742-bb8zm_default": Unexpected command output nsenter: cannot open : No such file or directory
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724387 840 generic.go:239] PLEG: Ignoring events for pod frontend-1493767179-84rkl/default: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724371 840 kuberuntime_manager.go:858] getPodContainerStatuses for pod "frontend-1493767179-84rkl_default(0fff3b22-71c8-11e7-9679-02c1112ca4ec)" failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724358 840 kuberuntime_container.go:385] ContainerStatus for 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d error: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: E0726 09:32:40.724329 840 remote_runtime.go:269] ContainerStatus "2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d" from runtime service failed: rpc error: code = 2 desc = Error: No such container: 2421109e0d1eb31242c5088b547c0f29377816ca068a283b8fe6c2d8e7e5874d
Jul 26 09:32:40 ip-10-0-49-227 kubelet[840]: with error: exit status 1
Try running kubectl create configmap -n kube-system kube-dns.
For more context, check out the known issues with Kubernetes 1.6: https://github.com/kubernetes/kops/releases/tag/1.6.0
This may be caused by a known Docker bug where shutdown occurs before the content is synced to disk on layer creation. The fix is included in Docker v1.13.
The workaround is to remove the empty files and re-pull the image.
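A sketch of that workaround. The /var/lib/docker path is the default Docker root and the exact location of the zero-byte files depends on your storage driver, so these commands are assumptions; inspect the output before deleting anything:

```shell
# Lists zero-byte files under the given directory (default: the docker root).
find_empty_files() {
    find "${1:-/var/lib/docker}" -type f -size 0 2>/dev/null
}

# On the affected node, one would inspect the list, remove the empty files,
# restart docker, and re-pull the image, e.g.:
#   find_empty_files | xargs -r rm --
#   systemctl restart docker
#   docker pull <your-image>
```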