I spent most of yesterday putting together a collection of regular expressions to convert all my image names and paths to lower case. Today, I processed a folder full of files and was surprised to discover that many image names are still capitalized.
So I decided to take it one step at a time, first renaming .jpg files, then .gif files, .png files, and so on.
I'm working on a Mac, using Dreamweaver and TextWrangler as my text editors. The following regex almost works for .jpg files, with one major flaw: it deletes the extension...
([\w/-]+)\.jpe?g
\L\1
In other words, it changes South-America.jpg to south-america.
How can I change it so that it retains the file extension? I assume I can then just change it to...
([\w/-]+)\.png
\L\1
...to process png's, etc.
([\w\/-]+)(\.jpe?g)
and replace with \L\1\2
It's deleting your extension because you never save it in a capture group.
You could perhaps capture the extension too?
([\w/-]+)(\.jpe?g)
\L\1\2
And I think you should be able to use something like this for all the files:
([\w/-]+)(\.[^.]+$)
\L\1\2
Or if you specifically want to convert those jpegs, pngs and gifs:
([\w/-]+)(\.(?:jpe?g|gif|png))
\L\1\2
If it's okay for the extension to become lowercase as well, you could just do
^(.*)$
\L\1
As long as you're certain that all lines contain file names.
If you want to process only certain file formats, use
^(.*\.(jpe?g|png|gif))$
\L\1
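If you would rather script the whole conversion than run several editor replacements, the same capture-group idea works in a short script. Here is a minimal sketch in Python; note that Python's re module has no \L operator, so a replacement function does the lowercasing, and the sample line of HTML is made up for illustration:

import re

# Group 1 is the path, group 2 the extension; both get lowercased.
pattern = re.compile(r'([\w/-]+)(\.(?:jpe?g|gif|png))', re.IGNORECASE)

# Made-up sample input, just to show the substitution.
text = '<img src="Images/South-America.jpg"> <img src="Icons/Lake-Titicaca.PNG">'

lowered = pattern.sub(lambda m: (m.group(1) + m.group(2)).lower(), text)
print(lowered)
# <img src="images/south-america.jpg"> <img src="icons/lake-titicaca.png">

The editor replacements above are fine for one-off passes; the script is just handy when you want to process a whole folder of files at once.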
I migrated my notes from Evernote to markdown files with yarle. Unfortunately, it created a lot of separate folders for the attachments (although I set it up for one folder only).
I moved all attachments to one folder, so the file paths to the attachments in the markdown files need to be updated.
I think regex would be right for this, but I don't have any knowledge of regex and would be really thankful for help.
File paths look like this: ![[./_attachmentsMove/Coordination_Patterns.resources/CoordinationPattern_Ipsi.MOV]]
All file paths are identical up to ![[./_attachmentsMove/.
The second folder varies, e.g. Coordination_Patterns.resources/.
I want to delete everything but the filename.extension itself, e.g. ![[CoordinationPattern_Ipsi.MOV]].
An example of the other filepaths:
![[./_attachmentsMove/Jonglieren_(Hände).resources/07 Jonglieren.MOV]]
(second folder changes, filename changes, I also have .png and .mov).
I use MassReplaceIt (a Mac app) which lets me replace expressions in documents with regex. If someone has a solution using the terminal/command line, I'll try that as well of course :)
See if this regex suffices:
(?<=!\[\[)[^\]]+/(?=[^\]/]+]])
Replace with empty string.
It should delete everything from just after the ![[ up to and including the last / before the next ]].
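If you want to try it outside MassReplaceIt, here is a minimal command-line sketch in Python using that same pattern. The "notes" folder name is a placeholder, so point it at wherever your markdown files live, and run it on a copy first:

import pathlib
import re

# Delete the folder part of ![[...]] links, keeping only filename.extension.
pattern = re.compile(r'(?<=!\[\[)[^\]]+/(?=[^\]/]+\]\])')

# "notes" is a placeholder; adjust it to your markdown folder.
for md_file in pathlib.Path("notes").glob("*.md"):
    text = md_file.read_text(encoding="utf-8")
    cleaned = pattern.sub("", text)
    if cleaned != text:
        md_file.write_text(cleaned, encoding="utf-8")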
I've been searching and I can't seem to find even a simple grep command to do what I want. I want to take a URL such as r2---sn-vgqs7nes.googlevideo.com (but not r3---sn-2xouxaxq5u5-5cxs.googlevideo.com) and put it into a separate file. Everything between r2---sn- and .googlevideo.com changes. A few examples of the variants:
r2---sn-vgqs7nes.googlevideo.com
r4---sn-ab5l6n67.googlevideo.com
r4---sn-5hnednes.googlevideo.com
r12---sn-ab5l6nsz.googlevideo.com
r6---sn-a5mlrn7d.googlevideo.com
r3---sn-vgqsrn76.googlevideo.com
r6---sn-p5qlsne7.googlevideo.com
r2---sn-qxo7snel.googlevideo.com
r4---sn-q4f7sn7z.googlevideo.com
r1---sn-o097znez.googlevideo.com
r6---sn-q4f7sn7e.googlevideo.com
The characters between sn- and .googlevideo.com are randomized.
Also, r(number) goes up to r20. Basically, I want to extract these from a log file which constantly updates and append them to one that doesn't, so I can use them later. From, let's say, /opt/var/log/messages to /opt/var/log/list. I'd also like to check that a URL doesn't already exist in the list before adding it. Thanks in advance for any help.
#john-goofy The URLs go from r1 to r20 for each variant. URLs such as r3---sn-(2xouxaxq5u5-5cxs).googlevideo.com don't need to be collected. It's important that those variants (the ones with the part in parentheses) are not collected, because blocking them blocks the videos entirely. They also go from r1 to r20, and the part in parentheses stays the same apart from one letter, as in sn-2xouxaxq5u5-(5cxs).googlevideo.com. So, my desired output would be this:
Not collected:
- (r1-20) ---sn-2xouxaxq5u5-5cxs.googlevideo.com
- (r1-20) ---sn-2xouxaxq5u5-5cxe.googlevideo.com
- (r1-20) ---sn-2xouxaxq5u5-5cx?.googlevideo.com
- The third one, whose letter I forget.
- manifest.googlevideo.com
Collected:
Everything else, such as the ones in my OP. I already have a few thousand collected, but doing each one manually takes way too long.
(Blocking all of these gets rid of YouTube ads for the most part. I think some ads are also served from the URLs above, but blocking those blocks everything.)
And it would all go from /opt/var/log/messages to /opt/var/log/list.
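A plain grep with the pattern below would pull the host names out of the log, but since you also want de-duplication and appending to a second file, here is a rough sketch in Python. The character classes and the r1-r20 range are taken from your description, so you may need to tweak them against your real log lines:

import re
from pathlib import Path

# Collect rN---sn-xxxxxxxx.googlevideo.com hosts. Tokens with an extra
# hyphen (e.g. sn-2xouxaxq5u5-5cxs) never match because [a-z0-9]+ cannot
# cross a hyphen, and manifest.googlevideo.com never matches the prefix.
pattern = re.compile(r'\br\d{1,2}---sn-[a-z0-9]+\.googlevideo\.com\b')

log_path = Path("/opt/var/log/messages")
list_path = Path("/opt/var/log/list")

known = set(list_path.read_text().split()) if list_path.exists() else set()

new_hosts = []
for host in pattern.findall(log_path.read_text()):
    if host not in known:          # only add URLs not already in the list
        known.add(host)
        new_hosts.append(host)

with list_path.open("a") as out:
    for host in new_hosts:
        out.write(host + "\n")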
I have a text file in Notepad++ that contains about 66,000 words all on one line: a set of 200 "lines" of output, all unique, placed on a single line in the basic JSON form {output:[{output1},{output2},...]}.
There is a set of characters matching the regex "id":.........,"kind":"track" that occurs about 285 times in total, and I am trying to either single them out or copy all of them at once.
Basically, without some super complicated regex, I am stuck because I can't figure out how to highlight all of them at once, and the Remove Unbookmarked Lines feature does not apply because this is all on one line. I have only managed to Mark every single occurrence.
So does this require a large number of steps to get the file onto multiple lines and work from there, or is there something else I am missing?
Edit: I have come up with a set of Macro schemes that make the process of doing this manually work much faster. It's another alternative but still takes a few steps and quite some time.
Edit 2: I intended there to be an answer for actually just highlighting the different sections all at once, but I guess that is not possible. The answer here turns out to be more useful in my case, allowing me to have a list of IDs without everything else.
You seem to already have a regex which matches single instances of your pattern, so assuming it works and that we must use Notepad++ for this:
Replace .*?("id":.........,"kind":"track").*?(?="id".........,"kind":"track"|$) with \1.
If this textfile is valid JSON, this opens you up to other, non-notepad++ options, like using Python with the json module.
Edited to remove unnecessary steps
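To expand on the JSON suggestion: if the file really is valid JSON with an "output" array, the extraction is a few lines of Python with the json module. This is only a sketch; the exact key names and the file name are guesses based on your description:

import json

# "data.json" is a placeholder for your one-line file.
with open("data.json", encoding="utf-8") as f:
    data = json.load(f)

# Keep only the ids of entries whose "kind" is "track".
track_ids = [item["id"] for item in data["output"] if item.get("kind") == "track"]

print("\n".join(str(track_id) for track_id in track_ids))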
I've got a folder with about 1300 PNG icons. What I need is an HTML file with all of them inside, like:
<img src="path-to-image.png" alt="file name without .png" id="file-name-without-.png" class="icon"/>
It's easy as hell, but with that number of files it's a pure waste of time to do it manually. Any ideas how to automate it?
If you need it just once, then do a "dir" or "ls" and redirect it to a file, then use an editor with macro ability like Notepad++ to record modifying a single line the way you want, then hit play macro for the remainder of the file. If it's dynamic, use PHP.
I would not use C++ to do this. I would use vi, honestly, because running regular expressions repeatedly is all that is needed for this.
But you can do this in C++. I would start with a plain text file with all the file names, generated by dir or ls at the command prompt.
Then write code that takes a line of input and turns it into a line formatted the way you want. Test this and get it working on a single line first.
The RE engine of C++ is probably overkill (and is not all that well supported in compilers); substr and basic find-and-replace are all you need. Is there a string library you are familiar with? std::string would do.
To generate the file name without .png, check that the last four characters exist and are .PNG (if not, report an error), then strip them. To turn dashes into spaces, copy the characters to a new string, but whenever you read a dash, write a space instead. Everything else is just string concatenation.
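For completeness, here is roughly what that looks like as a script rather than editor macros or C++. A sketch in Python, assuming the id keeps the dashed file name while the alt text swaps dashes for spaces (as described above); the folder and output file names are placeholders:

from pathlib import Path

icons_dir = Path("icons")        # placeholder: the folder with the ~1300 PNGs
tags = []

for png in sorted(icons_dir.glob("*.png")):
    name = png.stem                      # file name without .png
    alt = name.replace("-", " ")         # dashes become spaces for the alt text
    tags.append(f'<img src="{png.as_posix()}" alt="{alt}" id="{name}" class="icon"/>')

# Write all the tags into one HTML fragment.
Path("icons.html").write_text("\n".join(tags), encoding="utf-8")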
I have a txt file that I'm trying to import as a flat file into SQL 2008 that looks like this:
"123456","some text"
"543210","some more text"
"111223","other text"
etc…
The file has more than 300,000 rows and the text fields are large (usually 200-500 chars), so scanning the file by hand is very time consuming and prone to error. Other similar (and even more complex) files were imported successfully.
The problem with this one is that some lines contain quotes in the text… (this came from an export from an old SuperBase DB that didn't let you specify a text qualifier; there's nothing I can do with the file other than clean it up and try to import it).
So the "offending" lines look like this:
"123456","this text "contains" a quote"
"543210","And the "above" text is bad"
etc…
You can see the problem here.
Now, 300,000 lines are not too many if I could search with a regex-capable text editor; I'd manually remove the quotes from each offending line. The problem is not the number of offending lines, but the impossibility of finding them with a simple search. I'm sure there are fewer than 500, but spread those across a 300,000-line txt file and you know what I mean.
Based upon that, what would be the best regex I could use to identify these lines?
My first thought is: tell me which lines contain more than 4 quotes (").
But I couldn’t come up with anything (I’m not good at Regex beyond the basics).
this pattern ^("[^"]+){4,} will match "lines containing more than 4 quotes"
you can experiment with replacing 4 with 5 or more, depending on your data.
I think that you can be more direct with a Regex than you're planning to be. Depending on your dialect of Regex, something like this should do it:
^"\d+",".*".*"
You could also use a regex to remove the outside quotes and use a better delimiter instead. For example, search for ^"([0-9]+)","(.*)"$ and replace it with \1+++++DELIM+++++\2.
Of course, this doesn't directly answer your question, but it might solve the problem.
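If a script is an option, the two ideas above (count the quotes, then rebuild the line from its outer structure) take only a few lines. A sketch in Python; the file name is a placeholder, and simply dropping the inner quotes is just one possible fix, you might prefer doubling or escaping them instead:

import re

# A clean line has exactly four " characters: "id","text"
line_format = re.compile(r'^"(\d+)","(.*)"$')

with open("export.txt", encoding="utf-8") as f:    # placeholder file name
    for lineno, line in enumerate(f, start=1):
        line = line.rstrip("\n")
        if line.count('"') > 4:                    # offending line
            m = line_format.match(line)
            if m:
                inner = m.group(2).replace('"', "")   # drop the stray quotes
                print(f'line {lineno}: "{m.group(1)}","{inner}"')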