What does "←[37m" mean in the terminal when running Flask? - flask

When I type flask run and go to 127.0.0.1:5000/myfirstpage, I can see the following output in my terminal:
127.0.0.1 - - [29/Apr/2021 14:55:34] "←[37mGET /myfirstpage HTTP/1.1←[0m" 200 -
I understand that 127.0.0.1 is my localhost server, /myfirstpage the path, HTTP/1.1 the version of the Hypertext Transfer Protocol, and 200 the HTTP status code for 'successfully responded to the request'.
But what do ←[37m and ←[0m stand for?

Those look like ANSI terminal escape sequences that your console isn't interpreting: the ← is how the ESC character (0x1B) is rendered when the terminal doesn't understand the codes, which is common in older Windows consoles.
According to https://www.lihaoyi.com/post/BuildyourownCommandLinewithANSIescapecodes.html they are:
White: \u001b[37m
Reset: \u001b[0m
Also have a look at the ANSI escape code table on Wikipedia.
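You can reproduce the sequences yourself; here is a minimal sketch in plain Python (no Flask needed):
WHITE = "\u001b[37m"  # same bytes that wrap the request line in your Flask log
RESET = "\u001b[0m"   # restores the default color
# On a terminal that understands ANSI codes this prints in white; on one
# that doesn't, you will see the same ←[37m ... ←[0m artifacts as in your log.
print(WHITE + '"GET /myfirstpage HTTP/1.1"' + RESET, "200 -")
If you are on Windows, one common fix is to install colorama and call colorama.init() early in your app, which makes the console interpret (or strip) these codes.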

Related

How to send multiple HTTP/2 requests over the same connection with libcurl

I'm using https://curl.haxx.se/libcurl/c/http2-download.html to send multiple HTTP/2 requests to a demo HTTP server. This server is based on Spring WebFlux. To verify whether libcurl can send HTTP/2 requests concurrently, the server delays 10 seconds before returning a response. In this way, I hope to observe the server receiving multiple HTTP/2 requests at almost the same time over the same connection, and the client receiving the responses after 10 seconds.
However, I noticed that the server received the requests sequentially. It seems that the client doesn't send the next request before getting the response to the previous one.
Here is the server log; the requests arrived 10 seconds apart.
2021-05-07 17:14:57.514 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:07.532 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
2021-05-07 17:15:17.541 INFO 31352 --- [ctor-http-nio-2] i.g.h.mongo.controller.PostController : Call get 609343a24b79c21c4431a2b1
Can anyone help me figure out my mistake? Thank you.
For me,
curl -v --http2 --parallel --config urls.txt
did exactly what you need, where urls.txt was like
url = "localhost:8080/health"
url = "localhost:8080/health"
The result was that curl first sent the first request via HTTP/1.1, received a 101 upgrade to HTTP/2, immediately sent the second request without waiting for the response, and then received the two 200 responses in succession.
Note: -v is added for verbosity, to validate that it works as expected; you don't need it other than for printing the underlying protocol conversation.
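If you need the same behavior from libcurl code rather than the curl tool, the two things to check are that you use the multi interface with multiplexing enabled (CURLMOPT_PIPELINING set to CURLPIPE_MULTIPLEX on the multi handle) and, optionally, CURLOPT_PIPEWAIT on each easy handle so new transfers wait to reuse the existing connection instead of opening another one. Below is a rough sketch using pycurl (the Python binding to libcurl), reusing the demo URL from above; constant names may vary slightly with your pycurl version:
import io
import pycurl

multi = pycurl.CurlMulti()
multi.setopt(pycurl.M_PIPELINING, 2)  # 2 == CURLPIPE_MULTIPLEX (pycurl.PIPE_MULTIPLEX in recent versions)

handles = []
for _ in range(2):
    buf = io.BytesIO()
    easy = pycurl.Curl()
    easy.setopt(pycurl.URL, "http://localhost:8080/health")
    easy.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_2_0)
    easy.setopt(pycurl.PIPEWAIT, 1)  # prefer multiplexing over opening a new connection
    easy.setopt(pycurl.WRITEDATA, buf)
    multi.add_handle(easy)
    handles.append((easy, buf))

# Drive both transfers concurrently with the usual multi loop.
num_active = len(handles)
while num_active:
    multi.select(1.0)
    while True:
        ret, num_active = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break

for easy, buf in handles:
    print(easy.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()), "bytes")
    multi.remove_handle(easy)
    easy.close()
With a server that delays its responses like the one above, both requests should now reach the server at roughly the same time over one connection. Note that over cleartext HTTP the first request still goes out as HTTP/1.1 and is upgraded, so multiplexing only starts once the connection is HTTP/2; over HTTPS with ALPN it is negotiated immediately.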

Live Stream from AWS MediaLive service not viewable from VLC

I am trying to build a custom live streaming service as documented here:
https://aws.amazon.com/solutions/implementations/live-streaming-on-aws/
I used the pre-provided CloudFormation template for "Live Streaming on AWS with MediaStore", which provisioned all the relevant resources for me. Next, I wanted to test my custom streamer.
I used OBS Studio to stream my webcam output to the MediaLivePushEndpoint that was created during CloudFormation provisioning. OBS suggests that it is streaming the webcam output to the AWS MediaLive RTMP endpoint.
Now, to confirm that I can watch the stream, I set the Input Network Stream in VLC player to the CloudFront endpoint that was created for me (which looks like this: https://aksj2arbacadabra.cloudfront.net/stream/index.m3u8), but VLC is unable to fetch the stream and fails with the following error messages in the logs. What am I missing? Thanks!
...
http debug: outgoing request: GET /stream/index.m3u8 HTTP/1.1 Host: d2lasasasauyhk.cloudfront.net Accept: */* Accept-Language: en_US User-Agent: VLC/3.0.11 LibVLC/3.0.11 Range: bytes=0-
http debug: incoming response: HTTP/1.1 404 Not Found Content-Type: application/x-amz-json-1.1 Content-Length: 31 Connection: keep-alive x-amzn-RequestId: HRNVKYNLTdsadasdasasasasaPXAKWD7AQ55HLYBBXHPH6GIBH5WWY x-amzn-ErrorType: ObjectNotFoundException Date: Wed, 18 Nov 2020 04:08:53 GMT X-Cache: Error from cloudfront Via: 1.1 5085d90866d21sadasdasdad53213.cloudfront.net (CloudFront) X-Amz-Cf-Pop: EWR52-C4 X-Amz-Cf-Id: btASELasdasdtzaLkdbIu0hJ_asdasdasdbgiZ5hNn1-utWQ==
access error: HTTP 404 error
main debug: no access modules matched
main debug: dead input
qt debug: IM: Deleting the input
main debug: changing item without a request (current 2/3)
main debug: nothing to play
Updates based on Zach's response:
Here are the parameters I used while deploying the CloudFormation template for live streaming using MediaLive (notice that I am using RTMP_PUSH):
I am using MediaLive and not MediaPackage, so when I go to my channel in MediaLive, I see this:
Notice that it says it cannot find "stream [stream]", but I confirmed that the RTMP endpoint I added to OBS is exactly the one that was created as an output for me by my CloudFormation stack:
Finally, when I go to MediaStore to see if there are any objects, it is completely empty:
Vader,
Thank you for the clarification here; I can see the issue is with your settings in OBS. When you set up your input for MediaLive, you created a unique Application Name and Instance, which are part of the URI: the Application Name is LiveStreamingwithMediaStore and the Instance is stream. In OBS, you want to remove stream from the end of the Server URI and place it in the Stream Key field, where you currently have a 1.
OBS Settings:
Server: rtmp://server_ip:1935/Application_Name/
Stream Key: Instance_Name
Since you posted the screenshot here on an open forum (which really helped determine the issue, but also exposes settings that would allow someone else to send to your RTMP input), I would suggest that you change the Application Name and Instance.
Zach
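Once OBS is sending to the right Application Name and Stream Key, objects should start appearing in the MediaStore container within a few seconds. If you want to verify that from code instead of the console, here is a hedged sketch using boto3 (the container name is a guess based on the Application Name above; substitute the one from your CloudFormation stack):
import boto3

CONTAINER = "LiveStreamingwithMediaStore"  # hypothetical; use your stack's container name

# Resolve the container's data endpoint, then list what MediaLive has written.
endpoint = boto3.client("mediastore").describe_container(
    ContainerName=CONTAINER)["Container"]["Endpoint"]
data = boto3.client("mediastore-data", endpoint_url=endpoint)
for item in data.list_items(Path="/stream").get("Items", []):
    print(item["Type"], item["Name"])
An empty listing while OBS reports it is streaming usually points back to the Server URI / Stream Key mismatch described above.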

Error when starting aca-py agent with the seed parameter

I am trying to start the aca-py agent with the command:
aca-py start --wallet-name user3 --wallet-key user3 --wallet-type indy --genesis-file /<PATH_TO_GENESIS_FILE>/docker_pool_transactions_genesis --ledger-pool-name local_pool --inbound-transport http 127.0.0.1 8001 --admin 127.0.0.1 9001 --endpoint http://127.0.0.1:8001 --outbound-transport http --log-level DEBUG --admin-insecure-mode --seed 00000000000000000000000000000001
But it is giving me the following error:
aries_cloudagent.config.base.ConfigError: Ledger rejected transaction request: client request invalid: could not authenticate, verkey for 4cLztgZYocjqTdAZM93t27 cannot be found
Why is this happening and how can I solve it?
This is because you are spinning up with a public DID. ACA-Py checks whether the verkey linked to the DID (derived from the seed) is on the ledger, so before spinning up aca-py you have to publish the DID to the ledger. Go to the VON network management page (http://localhost:9000), paste the seed into the first textbox under "Authenticate a New DID", and publish the DID.
You should then see a new record on the ledger of type NYM, with the verkey linked to the NYM (aka the DID).
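The "Authenticate a New DID" page in von-network just posts to the webserver's /register endpoint, so you can also publish the DID from a script. A minimal sketch with Python requests (assuming the standard BCGov von-network running on localhost:9000; field names may differ in other versions):
import requests

payload = {
    "alias": "user3",  # hypothetical label for the new DID
    "seed": "00000000000000000000000000000001",  # same seed passed to aca-py --seed
    "role": "TRUST_ANCHOR",
}
resp = requests.post("http://localhost:9000/register", json=payload)
resp.raise_for_status()
print(resp.json())  # should include the DID and verkey now written to the ledger
After this succeeds, restarting aca-py with the same --seed should get past the ConfigError.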

How to configure the Jetty request.log date format?

Can someone advise? I have an issue with request.log on some of my Jetty instances.
It looks like the date in the log record is locale dependent; in the example below it is formatted with the Russian locale (фев meaning February, i.e. 18 February), despite the fact that the system locale on this RHEL 6.6 + Jetty 9.2.1 instance is set to en_US.UTF-8.
10.1.182.45 - - [18/фев/2017:16:17:11 +0200] "GET /auth/ HTTP/1.0"
10.1.182.45 - - [18/фев/2017:16:17:23 +0200] "GET /auth/ HTTP/1.0"
10.1.182.45 - - [18/фев/2017:16:17:59 +0200] "GET /auth/ HTTP/1.0"
I would like to change the format to "18/Feb/2017", because on other similar instances it is in English, and I can't determine which factor affects this.
I didn't find such an option in the Jetty configuration files for request.log; there was only the time zone setting, and the system locale is already en_US.UTF-8.
The NCSA request log has a Locale, and it uses Java's Locale.getDefault() to figure it out for your system.
Locale logLocale = Locale.getDefault();
As for how to change it, you can either ...
Set the default Java Locale to something more appropriate for everything running in your JVM (for example, by starting the JVM with -Duser.language=en -Duser.country=US).
Or, in your chosen NCSA log configuration, use .setLogLocale(Locale) to set the Locale you want it to use.

Why doesn't CloudFront return the cached version of these identical URLs?

I have a server on EB (Elastic Beanstalk) running a Tomcat application, and I also have a CloudFront cache set up to cache duplicate requests so that they don't go to the server.
I have two behaviours set up
/artist/search
/Default(*)
and Default(*) is set to:
Allowed HTTP Methods: GET, PUT
Forward Headers: None
Headers: Customize
Timeout: 84,0000
Forward Cookies: None
Forward Query Strings: Yes
Smooth Streaming: No
Restricted View Access: No
so there is effectively no timeout, and the only thing it forwards is query strings.
Yet I can see from looking at the localhost_access_log file that my server is receiving duplicate requests:
127.0.0.1 - - [22/Apr/2015:10:58:28 +0000] "GET /artist/cee3e39e-fb10-414d-9f11-b50fa7d6fb7a HTTP/1.1" 200 1351114
127.0.0.1 - - [22/Apr/2015:10:58:29 +0000] "GET /artist/cee3e39e-fb10-414d-9f11-b50fa7d6fb7a HTTP/1.1" 200 1351114
127.0.0.1 - - [22/Apr/2015:10:58:38 +0000] "GET /artist/cee3e39e-fb10-414d-9f11-b50fa7d6fb7a HTTP/1.1" 200 1351114
I can also see from my CloudFront Popular Objects page that there are many objects that sometimes hit and sometimes miss, including these artist URLs; I was expecting only one miss and all the rest to be hits.
Why would this be?
Update
Looking more carefully, it seems (although I'm not sure about this) that pages are less likely to be cached as the size of the artist page increases. Even more strangely, when the main artist page is large it also seems to re-fetch everything referenced in that page, such as icons (PNGs), but not when the artist page is small. This is the worst outcome for me, because it is the large artist pages that need the most processing to create on the server; that is why I am using CloudFront, to avoid recreating these pages in the first place.
What you are seeing is a combination of two things:
Each individual CloudFront POP (point of presence) requests objects from your origin separately, so if your viewers are in different locations you can expect multiple requests to your origin server (and they will be misses).
I'm not sure about the report date range you are looking at, but CloudFront also eventually evicts less popular objects to make room in the cache for new objects.
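You can watch this happening from the client side via the X-Cache and Age response headers that CloudFront adds. A small sketch with Python requests (the distribution domain here is hypothetical):
import requests

URL = "https://d1234abcd.cloudfront.net/artist/cee3e39e-fb10-414d-9f11-b50fa7d6fb7a"  # hypothetical domain

# Request the same object twice from one location: the first is typically
# "Miss from cloudfront"; the second should be "Hit from cloudfront" if cached.
for _ in range(2):
    r = requests.get(URL)
    print(r.status_code, r.headers.get("X-Cache"), "Age:", r.headers.get("Age"))
If the second request from the same location still misses, the object was either evicted or isn't cacheable; check the Cache-Control headers Tomcat sends and the TTL settings on the behavior.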