Icecast 2 ~ use pls or m3u on a listen.mp3 from caster.fm

Today I decided to set up an online radio station using a free account on caster.fm. I've concluded the problem isn't anything to do with them; it's a general Icecast 2 question.
I own a game server, and there is a plugin that allows radio streams, but only in the form of M3U & PLS playlists, not stream files. I know you can point the PLS/M3U at the stream's mount, but it doesn't seem to work. I added this to a file I called listen.pls:
[playlist] File1=http://shaincast.caster.fm:port/listen.mp3 Title1= Velocity FM
The port is specified in the actual file.
But it doesn't seem to work. I understand PLS is only a playlist; could this be the cause?

You need some line breaks in your file. I'd also make sure you don't have a space after your equals sign... parsers for these INI-style files vary and can be a bit finicky sometimes.
[playlist]
File1=http://shaincast.caster.fm:80/listen.mp3
Title1=Velocity FM
You could also switch to M3U, which is more broadly compatible (though it sounds like in your case you can handle either):
#EXTM3U
#EXTINF:-1,Velocity FM
http://shaincast.caster.fm:80/listen.mp3

Related

What is the path from BITMAP[+WAVE(s)] to RTSP (Twitch) via C/C++ in Windows?

So I'm trying to get a basic tool to output video/audio to Twitch. I'm new to this side (AV) of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, with third-party libraries where that's not available.
What are the steps for getting raw bitmap and wave data into a codec, then into an RTSP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out, and will use that only as a last resort.
So I capture the monitor via Output Duplication, and also the system sound as one wave and the microphone as another. I'm trying to push this to Twitch. I know there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there's no network code integrated into it. There's also the libav* collection in FFmpeg.
What are the basic steps for sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a fairly short conceptual explanation and I'll take it from there. Try to also cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it)?
Assume absolute noob level in this area (concept-wise not code-wise).

How to track the number of times my console application in C++14 has been launched?

I'm building a barebones Notepad-styled project (console-based, no GUI as of now) and I'd like to track, display (and later use in some ways) the number of times the console application has been launched. I don't know if this helps, but I'm building my console application on Windows 10, though I'd like it to run on Windows 7+ as well as on Linux distros such as Ubuntu and the like.
I'd prefer not to store the details in a file and then subsequently read from it to maintain the count. Please suggest a way, or any other resource that details how to do this.
I'd put a strikethrough on my quote above, but SO doesn't have it apparently.
Note that this is my first time building such a project, so I may not be familiar with advanced stuff... So when you're answering, please try to explain at a level suitable for a not-so-experienced software developer.
Thanks & Have a great one!
Edit: It seems that the general advice is to use a text file to preserve portability and to account for the fact that if, down the line, I need to store some extra info, the text file will come in super handy. In light of this, I'll focus my efforts on the text file.
Thanks to all for keeping my efforts from de-railing!
"I prefer not storing the details in a file"
In the comments, you wrote that the reason is security and you consider using a file as "over-kill" in this case.
Security can be solved easily - just encrypt the file. You can use a library like this to get it done.
In addition, since you are writing to and reading from the file only once each time the application is opened/closed, and the file should take only a small number of bytes to store such data, I think it's the right, portable solution.
If you still don't want to use a file, you can use the Windows registry to store the data, but that solution is not portable.

How to create extended (custom) file property in Windows?

We have a proprietary file format which has embedded in it a product-code.
I am just starting down the path of "enabling the end-user to sort / filter by product-code when opening a file".
The simplest approach for us might be to simply have another drop-down in our customized Open File Dialog in which to choose a product-code to filter by.
However, I think it might be more useful to the end-user if we could present this information as a column in the details view for this file type - just as name, date-modified, type, size, etc., are also detail properties of a file-type (or perhaps generic to all files).
My vague understanding is that XP and prior Windows versions embedded this sort of metadata in an alternate data stream in NTFS. However, starting in Vista, Microsoft stopped using alternate data streams because of their dependence on NTFS and hence their fragility (i.e. they can't be sent as a file attachment, can't be moved to a FAT-formatted thumb drive, etc.).
Things I need to know but haven't figured out yet:
Is it possible / practical to create a custom extended file property for our file type that exposes the product-code to the Windows shell, so that it can be seen in Windows Explorer (and hence file dialogs)? If so, how?
If that is doable, then how do we configure things so that the product-code column is displayed by default for folders containing our file type?
Can anyone point me to a good starting point on the above? We certainly don't have to accomplish this by publishing a custom extended file property - but that seems like a sensible approach, in the absence of any way to measure the costs of going this route.
If you have sensible alternative approaches to the problem, I'd be interested in those as well!
Just found: http://www.codeproject.com/Articles/830/The-Complete-Idiot-s-Guide-to-Writing-Shell-Extens
CRAP! It seems I'm very late to the banquet, and MS has already removed this functionality from their shell: http://xpwasmyidea.blogspot.com/2009/10/evil-conspiracy-behind-customizable.html
By far the easiest approach to developing a shell extension is to use a library made for the purpose.
I can recommend EZShellExtension because I have used it in the past to add columns and thumbnails/preview for a custom file format for our company.

Moving, renaming huge amount of text files based on content and size

*Update July 4*
I ended up doing the following (a code sketch follows below):
Sort the files by date.
Check whether the last sentence is the same as another file's.
If yes: if the current file is bigger, it becomes the new message to be chosen; if smaller, remove it. Once no more files with the same last sentence can be found, choose the survivor and move it to another folder.
If no: move on. Loop until all files with a certain date have been checked.
Thanks all for the help!!
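For reference, here is a minimal Perl sketch of that approach. The folder names, the eight-digit date in each file name, and using the last non-blank line as the conversation "signature" are all assumptions based on the description above, not tested against the real mailbox:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename qw(basename);
use File::Copy qw(move);

# Group the files by the date encoded in their names (e.g. 20110102...).
my %by_date;
for my $file (glob 'mail/*') {
    my ($date) = basename($file) =~ /(\d{8})/ or next;
    push @{ $by_date{$date} }, $file;
}

for my $files (values %by_date) {
    # Within one day, files that end in the same last line belong to the
    # same conversation; keep only the largest (most complete) one.
    my %keep;    # last-line signature => largest file seen so far
    for my $file (@$files) {
        open my $fh, '<', $file or die "$file: $!";
        my $last = '';
        while (<$fh>) { $last = $_ if /\S/ }    # remember last non-blank line
        close $fh;
        $keep{$last} = $file
            if !exists $keep{$last} or -s $file > -s $keep{$last};
    }
    # Move the keepers; whatever is left behind in mail/ is a duplicate.
    move($_, 'kept/' . basename($_)) for values %keep;
}
AppleScript could drive the same logic, but Perl ships with Mac OS X 10.7 and copes fine with 30k small text files.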
I'm busy with a big project where I have a huge number of emails that I have to filter, imported from gmail through thunderbird. There is a big problem though.
Because Gmail uses conversations but Thunderbird doesn't format them as such, what I have is a text file for each email that also contains the complete previous conversation - so a whole new text file for each reply. To clarify, an example of a conversation:
Me: Hi, how are you?
You, replying: Good!
Me: Great!
In Gmail this looks exactly as above, but for me these are now 3 files:
file 1:
Me, sent at 11:41:
Hi, how are you?
file 2:
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
file 3:
Me, sent at 11:43:
Great!
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
As you can understand, this is no problem with 3 files: I just throw away files 1 and 2 and only use file 3. That's precisely what I want to do. But considering there are around 30k files in total, I would very much like to automate that.
Unfortunately it's not possible to do this completely by file name, though it can be done partially. The files are named after their date, for instance 20110102 for Jan 2, 2011. However, as there are multiple email conversations on a given day, I would lose a lot if I just sorted by date and kept only the largest file.
I hope the problem is clear and you can help me with this.
I work on Mac OS X 10.7. I've tried using AppleScript, but either my script is not good or AppleScript can't handle that number of files.
Maybe you have a recommendation for software or a script of some kind? I'm open to anything and not unfamiliar with programming.
Thanks in advance!
As your task is basically just text processing, any language you're familiar with, including AppleScript, PHP, bash, or C, should be able to do the job. I think @inTide's breaking the problem down into discrete steps is what you need to do, building one portion at a time in the language of your choice.
Pick a language that you're familiar with, start writing the code for the first step, and make sure it's working as you expect. Then expand, adding a small bit of new functionality at each point and making sure that functionality works before moving on. Without an example of the code you've written or a better description of how AppleScript is failing for you, additional advice is difficult.

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    # Read in chunks delimited by blank lines rather than line by line;
    # for binary data this simply yields bigger chunks to match against.
    local $/ = "\n\n";

    my $dir = 'f:/TestPix';
    my @pix = glob "$dir/*";

    foreach my $file (@pix) {
        open my $pic, '<:raw', $file;    # :raw -- pictures are binary data
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
                last;    # one match is enough; go on to the next picture
            }
        }
        close $pic;
    }
}
Excitingly, the code seems to really work, and it does the job of filtering out the pictures that have been photoshopped. But the problem is that many pictures are edited by other utilities. I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions, like
if (/Adobe/ || /ACDSee/ || /some other picture editor/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. It gives you access to whatever non-image information is embedded in the image. However, as other people have said, it's possible to strip this information out, of course.
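For instance, here's a minimal sketch using the Image::ExifTool CPAN module (the directory path and the particular tags inspected are assumptions for illustration):
#!perl
use strict;
use warnings;
use Image::ExifTool;

my $exifTool = Image::ExifTool->new;

for my $file (glob 'f:/TestPix/*') {
    # Ask only for the tags that usually name the editing program:
    # 'Software' (EXIF) and 'CreatorTool' (XMP).
    my $info = $exifTool->ImageInfo($file, 'Software', 'CreatorTool');
    my $editor = $info->{Software} // $info->{CreatorTool};
    print "$file: ", defined $editor ? $editor : 'no editor tag', "\n";
}
This reports other editors (ACDSee, GIMP, etc.) as well, without hard-coding a regex for each one - though, again, only when the editor chose to write those tags in the first place.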
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image and one which was that way from the start - it's all just a bag of pixels in the end, and any metadata can be removed or forged at will.
The name of the graphics program used to edit the image is not part of the image data itself but part of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some may offer the option of not storing it) nor reliable - if you forged an image, you might have forged the metadata as well.
So the answer to your question is: no, there's no way to universally tell whether the pic was edited or not, although some image-editing software may write its signature into the image file, and it may be left there through carelessness on the part of the person doing the editing.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but on various Unix distros you can use file to identify many different types of files, and on Mac OS X, GraphicConverter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know whether the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that analyzes every pixel in your image to determine whether it was doctored. That solution would probably involve machine learning trained on millions of photos, both doctored and not, but it's more theoretical than practical: it would be extremely complex to develop, would probably take years, and even then it probably still wouldn't be correct 100% of the time.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (not relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images, the first of which was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
JPEGsnoop (http://www.impulseadventure.com/photo/jpeg-snoop.html) is a tool that does the job fairly well.
If there has been any cloning, there is a variation in pixel density (or concentration) which sometimes shows up on manual inspection: a Photoshop-cloned area will have an even pixel density (by which I mean, compared with the pixel variation of a scanned image).