I'm using a Raspberry Pi 4 and I'm trying to make it speak using gtts-cli. It works, but the first time I play a clip it skips about a second of audio.
I run this command:
gtts-cli -l en 'Good morning my dear friend' | mpg321 -q -
It works, but the first time I run it, it misses the word "Good". If I run it again quickly after the first command finishes, it includes all the words and sounds OK. If I wait a minute and try again, I get the same problem.
Then I try to create an mp3 from the gtts-cli command:
gtts-cli -l en 'Good morning my dear friend' --output test.mp3
If I then play it with mpg321, I have the same problem, so it's not gtts-cli.
I try different players, like play from sox, but I get the same issue.
RESOLVED: check this out:
https://raspberrypi.stackexchange.com/questions/132715/skip-1second-in-play-mp3-the-first-time/132722?noredirect=1#comment225721_132722
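For anyone hitting the same thing, a rough workaround sketch (this assumes the audio output goes to sleep between plays and swallows the first moment of playback; it may not be what the linked answer recommends): pad the clip with a short leading silence so the part that gets cut off is silence rather than speech.
# generate the clip once, then play it with ~1 second of silence prepended (sox's pad effect)
gtts-cli -l en 'Good morning my dear friend' --output test.mp3
play -q test.mp3 pad 1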
I am trying to make a script that shows the latest video of a channel. My current method works, but it keeps listing all the videos.
I'm wondering if it is possible to get only the first line of this youtube-dl channel request.
Input:
youtube-dl -i --get-filename -o "%(title)s, %(id)s" "youtube.com/user/pewdiepie/videos"
Current output:
WARNING: The url doesn't specify the protocol, trying with http
Saying goodbye is hard.., EOWP5Y7eErE
This illegal Swedish tradition is insane..., 8ch5LLc1z7s
My 12 Things I Can't LiveWithout., 7tB4jwvvuhQ
Lot of big changes lately.., https://www.youtube.com/watch?v=psHriqExm6U
Why I deleted the subreddit, https://www.youtube.com/watch?v=7yyjFudHYuw
Mr Beast passed me in subs.., https://www.youtube.com/watch?v=vHtqsuA8WJ4
Rating Liver Kings Apology, https://www.youtube.com/watch?v=tlNBB0jdL3w
'My mom fakes cancer', https://www.youtube.com/watch?v=wjPIcTU3AAM
Twitter Level Predator Exposed, https://www.youtube.com/watch?v=djr8j-4fS3A
Gordon Ramsey vs Fake Italian.., https://www.youtube.com/watch?v=X98VPQCE_WI
Twins wants to be Pregnant (At the same time...), https://www.youtube.com/watch?v=W27TjYZhytw
I Spent 12 Hours at a Japanese Arcade, It was weird. (Collab w_ #PewDiePie ), https://www.youtube.com/watch?v=eIbqhi5Ikq8
Dr Phil destroys Gamer boy, https://www.youtube.com/watch?v=s0d-AFtXtkM
too big waves for me.., https://www.youtube.com/watch?v=8XR9OzAeDqQ
She fell for it!, https://www.youtube.com/watch?v=LzCA5zHayyk
Does this mean Im cursed now, https://www.youtube.com/watch?v=UhjjjqmGAkI
I took my car off road ... (Oops), https://www.youtube.com/watch?v=cLFJjMokld8
^C ERROR: Interrupted by user
^ One line for every uploaded video.
Wanted output: Saying goodbye is hard.., https://www.youtube.com/watch?v=EOWP5Y7eErE (one line, and then the program exits)
(I would usually resolve this just by placing a | head -1 at the end of the command, but then the command stays active and my script is paused.)
Version: 2021.12.17
OS: Debian
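A possible direction worth sketching (not from the original post): youtube-dl's --playlist-end option stops enumeration after the given playlist index, so no head -1 is needed and the process exits on its own once the first entry is printed. A sketch with the same output template (protocol added to avoid the warning):
youtube-dl -i --get-filename --playlist-end 1 -o "%(title)s, %(id)s" "https://www.youtube.com/user/pewdiepie/videos"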
I am following the tutorials Center for High Throughput Computing and Introduction to Configuration on the HTCondor website to set up a partitionable slot. Before any configuration I run
condor_status
and get the following output.
I update the file 00-minicondor in /etc/condor/config.d by adding the following lines at the end of the file.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=4
SLOT_TYPE_1_PARTITIONABLE = TRUE
and reconfigure
sudo condor_reconfig
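(As a side note, a quick sanity check that the new values are actually being read, assuming condor_config_val reflects the settings after the reconfig, would be:)
condor_config_val NUM_SLOTS NUM_SLOTS_TYPE_1 SLOT_TYPE_1 SLOT_TYPE_1_PARTITIONABLE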
Now with
condor_status
I get this output as expected. Now, I run the following command to check that everything is fine
condor_status -af Name Slotype Cpus
and find slot1#ip-172-31-54-214.ec2.internal undefined 1 instead of slot1#ip-172-31-54-214.ec2.internal Partitionable 4 61295, which is what I would expect. Moreover, when I try to submit a job that asks for more than 1 CPU, it does not allocate resources for it as it should (it stays waiting forever).
I don't know if I made some mistake during the installation process or what could be happening. I would really appreciate any help!
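(For completeness, a sketch of the kind of multi-CPU test job in question; the file name and resource values here are placeholders, not the exact job from the post:)
# write a minimal submit file that requests 2 CPUs, then submit it
cat > multi_cpu_test.sub <<'EOF'
universe       = vanilla
executable     = /bin/sleep
arguments      = 300
request_cpus   = 2
request_memory = 1024M
queue
EOF
condor_submit multi_cpu_test.sub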
EXTRA INFO: If it can be of any help, I have installed HTCondor with the command
curl -fsSL https://get.htcondor.org | sudo /bin/bash -s -- --no-dry-run
on Ubuntu 18.04 running on an old p2.xlarge instance (it has 4 cores).
UPDATE: After rebooting the whole thing, it seems to be working. I can now submit jobs with different CPU requests and it starts them properly.
The only issue I would say persists is that the memory allocation is not showing properly, for example:
But in reality it is allocating enough memory for the job (in this case around 12 GB).
If I run again
condor_status -af Name Slotype Cpus
I still get something I am not supposed to get:
But at least it is showing the correct number of CPUs (even if it just says undefined).
What is the output of condor_q -better when the job is idle?
UPDATE
I split the file into multiple files, each with roughly 1.5 million lines, and had no issues.
I am attempting to pipe roughly 15 million lines of SADD and HSET commands, properly formatted for Redis mass insertion, into Redis 6.0.6, but it fails with the following message:
ERR Protocol error: too big mbulk count string
I use the following command:
echo -e "$(cat load.txt)" | redis-cli --pipe
I run the DBSIZE command in redis-cli and it shows no increase during the entire time.
I can also use the formatting app I wrote (a C++ app using the redis-plus-plus client library), which correctly formats the lines and writes them to std::cout, with the following command:
./app | redis-cli --pipe
but it exits right away and only sometimes produces the error message.
If I take roughly 400,000 lines from load.txt, put them in a smaller file, and then use echo -e etc., it loads fine. The problem seems to be the large number of lines.
Any suggestions? It's not a formatting issue as far as I know. I can code my app to write all the commands into Redis directly, but mass insertion should be faster and I'd prefer that route.
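(For reference, a minimal sketch of the kind of RESP-encoded stream the mass-insertion protocol expects; the key and input file names are placeholders, and it assumes ASCII data so that the string length equals the byte count:)
# emit one SADD per input line in RESP format (CRLF terminators required) and pipe it in
while IFS= read -r member; do
  printf '*3\r\n$4\r\nSADD\r\n$5\r\nmyset\r\n$%d\r\n%s\r\n' "${#member}" "$member"
done < members.txt | redis-cli --pipe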
I am retrieving a list of files in a folder with tens of thousands of files. I've let the script execute for a few hours, but it never seems to progress past this part. Is there a way to diagnose what's going on, or to see the progress of the listing? It used to take a few minutes to load the list, so I am not sure why it's suddenly hanging now.
Code
function EchoDump($echoOut = 0)
{
    if ($echoOut)
        echo str_repeat("<!-- Agent Smith -->", 1000);
    else
        return str_repeat("<!-- Agent Smith -->", 1000);
}

$sftp_connection = new Net_SFTP($website, 2222);
$login_result = $sftp_connection->login($credentials->login(), $credentials->password());
echo "<pre>Login Status: " . ($login_result ? "Success" : "Failure") . "</pre>";

if ($sftp_connection->chdir($photosFolder))
    echo "<pre>Changed to $photosFolder</pre>";
else
{
    echo "<pre>Failed to change to $photosFolder</pre>";
}

echo "<pre>Downloading List.</pre>";
EchoDump(1);

$rawlist = $sftp_connection->rawlist();

echo "<pre>List downloaded.</pre>";
EchoDump(1);
Output
Login Status: Success
Changed to /vspfiles/photos
Downloading List.
And then it will just sit with Downloading List as the last output forever.
Real-time logging might help provide some insight. You can enable that by adding define('NET_SSH2_LOGGING', 3); to the top of your script. Once you've got that, maybe post it on pastebin.org for dissection.
That said, I will say that rawlist() doesn't return directory info on demand; it returns everything. It's not the most efficient of designs, but back in the phpseclib 1.0 days iterators didn't exist. phpseclib 1.0 works with PHP 4, so it had to work with what it had. The current master branch requires PHP 5.6 and is under active development, so maybe this behavior will change, but we'll see.
I am trying to use the textcleaner script to clean up real-life images that I am using with OCR. The issue I am having is that the images sent to me are sometimes rather large (3.5 MB - 5 MB, 12 MP pictures). The command I run with textcleaner ( textcleaner -g -e none -f <int # 10 - 100> -o 5 result1.jpg out1.jpg ) takes about 10 seconds at -f 10 and minutes or more at -f 100.
To get around this I tried using ImageMagick to compress the image so it was much smaller. Using convert -strip -interlace Plane -gaussian-blur 0.05 -quality 50% main.jpg result1.jpg I was able to take a 3.5 MB file and convert it almost losslessly to ~400 KB. However, when I run textcleaner on this new file it STILL acts like it's a 3.5 MB file (the times are almost exactly the same). I have tested the same textcleaner settings against a file that is ~400 KB without any compression, and it is almost instant, while -f 100 takes about 12 seconds.
I am about out of ideas. I would like to follow the example here, as I am in almost exactly the same situation. However, at the current speed of transformation an entire OCR process could take over 10 minutes, when I need it to be around 30 seconds.
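A direction worth sketching (an assumption, not something from the original post: textcleaner's runtime presumably scales with pixel dimensions rather than file size, and the -quality recompression above leaves the 12 MP dimensions untouched): actually downscale the image before cleaning it.
# hypothetical pre-step: halve the pixel dimensions (quartering the pixel count) before running textcleaner
convert main.jpg -strip -resize 50% result1.jpg
textcleaner -g -e none -f 10 -o 5 result1.jpg out1.jpg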