I have a pretty simple application which sends TCP logs to promtail (so promtail doesn't gather data from a file).
The problem is that the last log sent to promtail isn't stored in Loki until another log comes in.
Let's say I log "foo": I'll see nothing in Grafana. Then if I log "bar" to promtail, I'll only see "foo" in Grafana; then if I log another "foo", I'll only see "foo" and "bar"...
I have this architecture:
- a Dockerised C++ app which uses the spdlog lib to log over TCP;
- a Dockerised promtail which only gathers from TCP;
- Dockerised Loki and Grafana, both vanilla versions.
My log format is the following:
<165>4 %Y-%m-%dT%H:%M:%S.%eZ MY_APPLICATION MY_PROJECT - 1 - %v
The %v is composed of various strings that I concatenate into one before logging.
Promtail doesn't show any errors.
For the network nerds: when I Ctrl+C (end) my app, the log buffered in promtail is finally stored in Loki and visible in Grafana. I suspect my logs lack some kind of "end of log" marker for that to happen...
Thanks for the help, any suggestion or even question will be appreciated!
(Sorry for my not-so-good English.)
Here is some code to reproduce the problem:
C++ app
#include <spdlog/async.h>
#include <spdlog/sinks/tcp_sink.h>

static std::shared_ptr<spdlog::logger> tcp_logger_ptr;

void init_logger() {
    spdlog::init_thread_pool(4096, 1);
    // Promtail's syslog TCP listener (Docker bridge gateway in my setup).
    auto sink_config = spdlog::sinks::tcp_sink_config("172.17.0.1", 1514);
    tcp_logger_ptr = spdlog::create_async<spdlog::sinks::tcp_sink_mt>("TCP_logger", sink_config);
    tcp_logger_ptr->set_pattern(PROMTAIL_LOG_PATTERN); // the RFC 5424-style pattern shown above
}
Then, to log:
void tcp_log(spdlog::level::level_enum log_type, const char* log_message) {
    tcp_logger_ptr->log(log_type, log_message);
}
Promtail docker config
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
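If it helps, the behaviour can also be reproduced without the C++ app. Here is a minimal Python sketch (an assumption on my side: promtail accepts newline-delimited RFC 5424 messages over TCP, which the spdlog setup above suggests) that sends a single log line and then keeps the connection open:
import socket
import time

# Address the C++ app sends to (promtail listens on port 1514).
PROMTAIL_ADDR = ("172.17.0.1", 1514)
# One RFC 5424-style line matching the pattern used by the C++ app.
MSG = "<165>4 2023-01-01T00:00:00.000Z MY_APPLICATION MY_PROJECT - 1 - foo\n"

with socket.create_connection(PROMTAIL_ADDR) as sock:
    sock.sendall(MSG.encode())
    # Keep the connection open: the line only shows up in Grafana once
    # another message arrives or promtail's idle_timeout expires.
    time.sleep(120)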
Ok, thanks everyone for the help! (lol)
I figured it out, and it is pretty disappointing to be honest...
The answer was the idle_timeout: 60s setting.
It just tells promtail that if it receives nothing on the connection for 60s, it flushes what it has stored (my log in this case). So you see it coming: with an idle_timeout of 0.1 the behaviour is pretty good.
Related
I've been trying to use the Chilkat library to play around and learn about using the Microsoft Graph APIs, but I keep getting TLS errors (connectFailReason 109) with even the simplest GETs and POSTs. This is what a typical log looks like:
ChilkatLog:
  QuickGetSb:
    DllDate: May 29 2021
    ChilkatVersion: 9.5.0.87
    UnlockPrefix: XXXXXXXXXXXXXXXX
    Architecture: Little Endian; 32-bit
    Language: C++ Builder / clang / 32-bit
    VerboseLogging: 0
    Component successfully unlocked using purchased unlock code.
    url: https://graph.microsoft.com/v1.0/users
    httpRequestStr:
      a_quickReq:
        quickHttpRequest:
          httpVerb: GET
          url: https://graph.microsoft.com/v1.0/users
          openHttpConnection:
            Opening connection directly to HTTP server.
            httpHostname: graph.microsoft.com
            httpPort: 443
            tls: True
            socket2Connect:
              connect2:
                connectImplicitSsl:
                  clientHandshake:
                    clientHandshake2:
                      ProcessHelloRetryRequest:
                        readHandshakeMessages:
                          WindowsError: An existing connection was forcibly closed by the remote host.
                          WindowsErrorCode: 0x2746
                          maxToReceive: 5
                          Failed to receive data on the TCP socket
                          Failed to read beginning of SSL/TLS record.
                          b: 0
                          dbSize: 0
                          nReadNBytes: 0
                          idleTimeoutMs: 60000
                        --readHandshakeMessages
                      --ProcessHelloRetryRequest
                    --clientHandshake2
                  --clientHandshake
                  Client handshake failed. (3)
                --connectImplicitSsl
                connectFailReason: 109
                ConnectFailReason: 109
              --connect2
            --socket2Connect
            connect: Socket fatal error.
          --openHttpConnection
        --quickHttpRequest
      --a_quickReq
    --httpRequestStr
    Failed.
  --QuickGetSb
--ChilkatLog
The library is not the very latest version, but it isn't TOO old (about a year and a half - version 9.5.0.86). I didn't want to upgrade just yet because I have some "live" projects using this dev box (and this is just a "learning journey"), so I was wondering if anyone can tell me whether the library version is the most likely issue or whether, perhaps, I'm missing some simple settings in the CkHttp object - the only thing I really do with it is set the auth token (which seems to have been retrieved correctly, judging from the logs I output).
The actual API calls are pretty straightforward - mostly simple (slightly modified) examples from the Chilkat website. But even the simplest http.quickGetStr("https://graph.microsoft.com/v1.0/me"); fails with a log similar to the above.
So, if anyone can suggest any properties to set on CkHttp to solve this issue (or confirm that the library needs to be upgraded to access graph.microsoft.com - if, indeed, that is the case), I would greatly appreciate it.
Marko
This problem is already fixed. Contact support#chilkatsoft.com to get a pre-release build for v9.5.0.92.
Good day. I apologize for asking about obvious things: I write in PHP and know Python at the level of "I started learning it yesterday". I've already spent a few days on this - but to no avail.
I downloaded the Twisted example of the SSH server for version 20.3 from here: https://docs.twistedmatrix.com/en/twisted-20.3.0/conch/examples/. Line 162 has an execCommand method that I need to implement to make it work. Then I noticed a comment in this method: "We don't support command execution sessions". Hence the question: does this comment apply only to the example, or to the Twisted library entirely? I.e., is it possible to implement this method so that the example server works as I need?
More information. I don't think this info is required to answer my question above.
Why do I need it? I'm trying to put together an environment for writing functional (!) tests (there would be no such problems with unit tests, I guess). Our API uses the SSH client (phpseclib/SSH2) in 30%+ of endpoints. Whatever I did, I only ever got 3 kinds of results, depending on how I implemented this method: (result: success, response: "" - empty; result: success, response: "1"; result: failed, response: "Unable to fulfill channel request at… SSH2.php:3853"). Those were for the SSH2 client. If the error occurs (3rd case), the server shows these logs in the terminal:
[SSHServerTransport, 0,127.0.0.1] Got remote error, code 11 reason: ""
[SSHServerTransport, 0,127.0.0.1] connection lost
I just found that this works:
def execCommand(self, protocol, cmd):
    protocol.write('Some text to return')
    protocol.session.conn.sendEOF(protocol.session)
If I don't send the EOF, the client throws a timeout error.
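For completeness, here is a slightly fuller sketch of the same idea - not the official example, just my reading of the Twisted conch API - which, besides the EOF, also reports an exit status and closes the channel, since some SSH clients wait for those before considering the command finished:
import struct

def execCommand(self, protocol, cmd):
    # protocol is an SSHSessionProcessProtocol; write() sends data back
    # to the client as the command's stdout.
    protocol.write(b'Some text to return\n')
    session = protocol.session
    # Tell the client the "command" exited with status 0.
    session.conn.sendRequest(session, b'exit-status', struct.pack('>L', 0))
    # No more output will follow.
    session.conn.sendEOF(session)
    # Close the channel so the client doesn't sit waiting for a timeout.
    session.loseConnection()
This is meant as a drop-in replacement for the execCommand method of the example's session class.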
I was using the sample application and sending a broadcast command to the Google Assistant using the --text_input option and everything was working correctly.
Now, in the latest version, the Google Assistant responds with: "Something went wrong. Something went wrong," but I don't get an error, even with verbose turned on. Do I need to do something different than before? Other commands, such as "What time is it?" work correctly. Here is the output from the broadcast request:
$ ./run_assistant --text_input "Broadcast Dinner" --credentials_file ./credentials.json
Using locale en-US
assistant_sdk robots_pem:
assistant_sdk CreateCustomChannel(embeddedassistant.googleapis.com:443, creds, arg)
assistant_sdk wrote first request: config { audio_out_config { encoding: LINEAR16 sample_rate_hertz: 16000 } dialog_state_in { language_code: "en-US" } device_config { device_id: "default" device_model_id: "default" } text_query: "Broadcast Dinner" }
assistant_sdk waiting for response ...
assistant_sdk Got a response
$
I also posted this on the Google Communities page, I hope it's OK to post the question in both places.
Same issue... it was working a week or so ago, then I went on hols and it isn't anymore.
If you use the voice input and ask it to broadcast, it says "Sorry, I can't find any other speakers connected to your home network, so I can't broadcast your message".
I don't think this is a problem with your code.
I have the same issue on a project of mine that has been working without issue for months, and then around the 14th May it all stopped working.
We need Google to resolve this one.
I have a failing test because of a timeout. Here's what I see in the log output:
2018-05-15 10:47:56.152 WARN com.datastax.driver.core.NettyUtil [UserDataServiceSpec-cassandra-plugin-default-dispatcher-27] [] [] - Found Netty's native epoll transport, but not running on linux-based operating system. Using NIO instead.
2018-05-15 10:48:38.616 ERROR n.f.c.indexing.UniquelyIndexingActor [UserDataServiceSpec-akka.actor.default-dispatcher-39] [UserDataServiceSpec-akka.actor.default-dispatcher-29] [akka.tcp://UserDataServiceSpec#127.0.0.1:51627/user/$c/user-email-indexer] - Persistence failure when replaying events for persistenceId [user-email-indexer]. Last known sequence number [0]
akka.persistence.RecoveryTimedOut: Recovery timed out, didn't get snapshot within 30000 milliseconds
2018-05-15 10:48:38.617 ERROR a.c.sharding.PersistentShardCoordinator [UserDataServiceSpec-akka.actor.default-dispatcher-39] [UserDataServiceSpec-akka.actor.default-dispatcher-30] [akka.tcp://UserDataServiceSpec#127.0.0.1:51627/system/sharding/userdataCoordinator/singleton/coordinator] - Persistence failure when replaying events for persistenceId [/sharding/userdataCoordinator]. Last known sequence number [0]
akka.persistence.RecoveryTimedOut: Recovery timed out, didn't get snapshot within 30000 milliseconds
2018-05-15 10:48:38.618 INFO akka.actor.LocalActorRef [UserDataServiceSpec-akka.actor.default-dispatcher-39] [UserDataServiceSpec-akka.actor.default-dispatcher-35] [akka://UserDataServiceSpec/user/$c/user-email-indexer] - Message [akka.persistence.SnapshotProtocol$LoadSnapshotFailed] from Actor[akka://UserDataServiceSpec/system/cassandra-snapshot-store#-750137778] to Actor[akka://UserDataServiceSpec/user/$c/user-email-indexer#1201357728] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2018-05-15 10:48:38.619 INFO akka.actor.LocalActorRef [UserDataServiceSpec-akka.actor.default-dispatcher-37] [UserDataServiceSpec-akka.actor.default-dispatcher-39] [akka://UserDataServiceSpec/system/sharding/userdataCoordinator/singleton/coordinator] - Message [akka.cluster.sharding.ShardCoordinator$RebalanceTick$] from Actor[akka://UserDataServiceSpec/system/sharding/userdataCoordinator/singleton/coordinator#-75387958] to Actor[akka://UserDataServiceSpec/system/sharding/userdataCoordinator/singleton/coordinator#-75387958] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
(Don't ask me why we are not using in-memory storage to test persistent actors. It's not relevant to the problem right now.)
I am not experienced with Akka and the JVM, but the messages I see are just the equivalent of "you screwed up, man". There is no hint in them about how to fix the problem or why this RecoveryTimedOut occurs.
If someone could give me some advice on how to diagnose the problem, that would be nice.
UniquelyIndexingActor is created as a cluster singleton.
Try adding this to your config:
akka {
  persistence {
    journal-plugin-fallback {
      recovery-event-timeout = 60s
    }
  }
}
It solved the problem for me. I found a reference to it in https://github.com/akka/akka/blob/master/akka-persistence/src/main/resources/reference.conf
Update: I am using a sricam SP019 IP (wireless) camera.
I have been able to find the RTSP URL for my camera, "rtsp://IP_ADDRESS:554/onvif1", and have managed to play it in VLC and in the onvifer Android app provided.
The app also provided the following info:
- Encoding: H264
- Transport Protocol: RTP/RTSP/TCP
- RTP packets received: some non-zero number
- RTP packets lost: 0
- RTSP port: 554
However, I still keep getting the error shown below.
===========================================
I am currently working on a project that requires me to interface with an IP camera (company name: sricam) using OpenCV 3.3.1.
Already tried:
I have posted on the OpenCV forum (here) but have not received any reply yet. I also tried all the options in this, but keep getting this error related to the GStreamer library.
My question:
It would be extremely helpful if someone can just point me in the right direction as a minimum.
Thanks!
When it comes to the camera URL, there should be some default value in the documentation (but it might have been changed when the camera was configured). I guess it is best to start looking there.
Did you try looking at this page?
https://www.ispyconnect.com/man.aspx?n=Sricam
Try it like this.
It worked for me (OSX, sricam sp005):
import os
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"

import cv2
vcap = cv2.VideoCapture("rtsp://[IP_CAM_ADD]", cv2.CAP_FFMPEG)
Hope this is helpful to somebody.
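Since the question mentions the stream playing over RTP/RTSP/TCP, a variant worth trying - just a sketch based on the details above, not something I have tested with this exact camera - is forcing TCP transport and using the full onvif1 path that worked in VLC:
import os
# Ask OpenCV's FFmpeg backend to use TCP as the RTSP transport.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

import cv2

# URL built from the details in the question (IP_ADDRESS is a placeholder).
vcap = cv2.VideoCapture("rtsp://IP_ADDRESS:554/onvif1", cv2.CAP_FFMPEG)
ok, frame = vcap.read()
print("frame received:", ok)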