Delete specific lines in multiple text files with a freeware app?

Does anyone know how to delete specific lines in multiple text files with a freeware app?
For instance, the first 2 lines, the last 2 lines, lines 4 and 6, etc.
Replace Pioneer and Sobolsoft can do this, but they are paid apps.
I'm an average computer user, not a programmer, so I don't know how to work with scripts. I don't have to do this often, so I'm looking for freeware.

I've found a freeware app that can do this.
It's called Text Converter Basic and can be found here.
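For anyone comfortable running a short script instead of a dedicated app, here is a minimal sketch of the same idea in Python; the folder name is hypothetical and the line numbers just mirror the "line 4 and 6" example above:

from pathlib import Path

LINES_TO_DROP = {4, 6}          # 1-based line numbers to remove from every file

for path in Path("my_text_files").glob("*.txt"):   # hypothetical folder
    lines = path.read_text(encoding="utf-8").splitlines(keepends=True)
    kept = [line for i, line in enumerate(lines, start=1) if i not in LINES_TO_DROP]
    path.write_text("".join(kept), encoding="utf-8")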

How to automatically increment/add/renumber numbers in Sublime Text 3 over many related files in a continuous way?

I need to increment/add/renumber numbers (BibTeX keys), selected using a regex, over several hundred TeX files, maintaining the sequence from one file to the next when the files are sorted in alphanumeric order.
Files:
latex-01.tex
latex-02.tex
latex-03.tex
etc.
Each file contains something like:
Text ... [bibkey01a] ...
More text [bibkey02] ...
I know it is easily possible to do this on one file. I have found several similar pages on Stack Overflow and other forums, but they all deal with only one file at a time.
I could open each file, renumber the values using TextPastry or Sublime-Evaluate, manually carry over the proper value to the next file, and repeat the procedure for all the files.
That is possible, but it is a daunting task when one has several hundred related files whose values need to be renumbered in a continuous, related way. Also, it would be quite easy to make a mistake and carry over the wrong number.
How can I automatically increment/add/renumber numbers in Sublime Text 3 over many related files in a continuous way?
It seems Sublime Text 3 plus extensions cannot do what I need at the moment.
Of course I can do it with a script. I believe Emacs can do it too, using helm-swoop and wgrep together with a replace expression that contains Elisp code.
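As one possible script route, here is a minimal sketch, assuming the keys look like the [bibkeyNN] examples above and that every occurrence of the same old key should map to the same new number; the pattern and the file glob are assumptions to adjust:

import re
from pathlib import Path

mapping = {}                                   # old key -> new sequential number
pattern = re.compile(r"\[(bibkey\w+)\]")       # assumed key format, e.g. [bibkey01a]

def renumber(match):
    old = match.group(1)
    if old not in mapping:                     # first time we meet this key
        mapping[old] = len(mapping) + 1
    return "[bibkey{:02d}]".format(mapping[old])

# process the files in alphanumeric order so the sequence carries over
for path in sorted(Path(".").glob("latex-*.tex")):
    text = path.read_text(encoding="utf-8")
    path.write_text(pattern.sub(renumber, text), encoding="utf-8")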

Pentaho report localization into unsupported languages

I am from Slovakia, and I wouldn't be surprised if most of you haven't heard of it.
However, that causes me trouble when it comes to reports. We need 3 (soon 4) language versions of each report: Slovak is the main language, then Polish and English.
Since Pentaho supports neither Polish nor Slovak, keeping these reports localized is a real pain for me.
What I do is:
Create the report in Slovak
Write down all the phrases from the report
Send the phrases to one of our partners to translate
Create a copy of the report in the pl/ or en/ directory
Open it in Report Designer and edit every phrase accordingly
Save it as another language version
As you can imagine, the process is very time consuming and error prone. Plus, every time I add a new parameter to a report or change its data source (which is a BeanShell script), I need to do it in 3 separate files. As a result, the language versions are usually out of date, way behind the main language version.
I have tried to automate it with OneSky and wrote a Python script that works in 2 stages (a rough sketch of the extract step is shown after the two lists below):
Stage 1 (extract and upload):
Change the *.prpt file suffix to *.zip
Extract phrases from these files: ~/datadefinition.xml, ~/layout.xml, ~/styles.xml, ~/datasources/inline-ds.xml
Put those phrases into a *.po file
Upload the *.po file to OneSky
Stage 2 (download and import):
Change the *.prpt file suffix to *.zip
Download the translated *.po file from OneSky
Run through the ~/datadefinition.xml, ~/layout.xml, ~/styles.xml, ~/datasources/inline-ds.xml files and replace the original phrases with the translated ones
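To make the extract step from Stage 1 a bit more concrete, here is a minimal sketch, assuming the .prpt is a plain ZIP archive and that the phrases to collect are the text content of elements in the XML parts listed above; the report file name is hypothetical:

import zipfile
import xml.etree.ElementTree as ET

PARTS = ["datadefinition.xml", "layout.xml", "styles.xml",
         "datasources/inline-ds.xml"]

def extract_phrases(prpt_path):
    phrases = []
    with zipfile.ZipFile(prpt_path) as z:          # a .prpt is just a ZIP
        for name in PARTS:
            if name not in z.namelist():
                continue
            root = ET.fromstring(z.read(name))
            for elem in root.iter():               # collect non-empty text nodes
                if elem.text and elem.text.strip():
                    phrases.append(elem.text.strip())
    return phrases

for phrase in extract_phrases("report.prpt"):      # hypothetical file name
    print(phrase)

Writing the collected phrases into the *.po file and uploading it to OneSky would then be the remaining steps from the list above.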
While this approach works fine, it does not translate everything, and the process still has flaws. I need to go through it every time I make even the slightest change to a report's data source or fix a small mistake. Even a small fix in the SQL code has to be made in 3 files. That, of course, increases the chance of a mistake being made.
So, I was wondering: how are you solving the issue of translating your reports?
I will share the very simple method we follow:
1) Create a properties file in key=value format for each language, for the resource labels (static values).
2) Put it into the resources folder (report-designer/resources/).
3) Based on a parameter you can specify which properties file to select, and you put keys into the value fields so the report knows which value to display in which language.
4) If you need to translate data coming from the database, you have to design a data warehouse and specify all the mappings, so the report can fetch the data accordingly.
5) For converting dates, currency symbols, or number formats you can use built-in functions that handle all of this; I am using MySQL, and MySQL has functions which can handle all such things.
It is difficult to explain the entire process here, but if you can get an idea from this, it may be useful to you.
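As a rough illustration of step 1 (the file names and keys below are made up, not Pentaho defaults), the per-language properties files might look like:

# resources/labels_sk.properties
report.title=Mesačný prehľad
report.customer=Zákazník

# resources/labels_en.properties
report.title=Monthly overview
report.customer=Customer

The report then references the keys, and the parameter from step 3 decides which of these files supplies the values.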

How can I find only folders with ALL_CAPITAL letters in the name excluding hidden folders?

I am in the middle of a project that was supposed to be simple and easy, but I have been pulling my hair out for a week, so I am asking for your help.
I am trying to simply count all folders that have names in ALL_CAPITAL letters (excluding any hidden folders).
The project:
My grandfather has been taking pictures and video for years and now has tens of thousands of slides and videos that we are digitizing and storing on a NAS drive. All videos are contained in their own folder, which allows accompanying files to be stored with them, for example a text document with info about the trip.
The naming convention I have adopted is simple. The folder with the actual video files is in ALL_CAPITAL letters with underscores between the words. Containing folders are in Regular_text with underscores for the spaces. So the drive looks like this...
FOLDER: Family_2004
  contains FOLDER: Bahama_Cruise
    which contains FOLDERS: CRUISE_DAY_1, CRUISE_(2004), JOE_AND_SALLY_CRUISE
FOLDER: CRUISE_DAY_1
  contains FILES: CRUISE_DAY_1.mp4, memorable_day_1_events.txt
FOLDER: CRUISE_(2004)
  contains FOLDERS: AUDIO_TS, VIDEO_TS
    which contain unaltered DVD files, including .IFO, .BUP, and .VOB
This way, using a WD media player, he can click on any file with an ALL_CAPITAL name and see the video. He would like to keep track of how many videos are on the system.
I thought I could do this very simply in Windows 7 with the search type:folder -AUDIO_TS -VIDEO_TS. However, as I understand it, Windows 7 cannot do a case-sensitive search.
I have used FileLocator Pro for years and thought I should be able to use a regex to define the search. I have tried dozens of combinations, but I am obviously not good with regular expressions and cannot figure out how to make a simple search that counts only the folders with names in ALL_CAPITALS, excluding any hidden folders.
EDIT:
As an addition to the request: FileLocator uses Perl-compatible regex syntax.
These are the only two tools I am familiar with, but I am not stuck on any particular method. If anyone could suggest a way to accomplish this, it would be much appreciated.
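As an alternative to wrestling with a FileLocator regex, a short script can do the counting directly. A minimal sketch, assuming the NAS is reachable at a drive path and treating dot-prefixed folders as hidden (Windows hidden-attribute folders would need an extra check):

import os
import re

ROOT = r"Z:\videos"                       # hypothetical path to the NAS share
ALL_CAPS = re.compile(r"^[A-Z0-9_()]+$")  # only capitals, digits, _, ( )
EXCLUDE = {"AUDIO_TS", "VIDEO_TS"}

count = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    # prune hidden folders so the walk never descends into them
    dirnames[:] = [d for d in dirnames if not d.startswith(".")]
    for d in dirnames:
        if d in EXCLUDE:
            continue
        if ALL_CAPS.match(d) and any(c.isalpha() for c in d):
            count += 1

print("ALL_CAPITAL folders found:", count)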

Moving and renaming a huge number of text files based on content and size

*Update July 4*
I ended up doing the following:
Sort on date
Check if last sentence is the same
If yes: if it is bigger, this is the new message to be chosen; if smaller, remove it. If no more of the same can be found, choose this one and move it to another folder.
If no: move on. Loop again until all files with a certain date have been checked.
Thanks all for the help!!
I'm busy with a big project where I have a huge number of emails that I have to filter, imported from Gmail through Thunderbird. There is a big problem, though.
Because Gmail uses conversations but Thunderbird doesn't format them as such, what I have is a separate text file for each email, which also contains the complete previous conversation. So there is a whole new text file for each reply. To clarify, an example of a conversation:
Me: Hi, how are you?
You, replying: Good!
Me: Great!
In Gmail this looks exactly as above, but for me these are now 3 files:
file 1:
Me, sent at 11:41:
Hi, how are you?
file 2:
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
file 3:
Me, sent at 11:43:
Great!
You, sent at 11:42:
Good!
Me, sent at 11:41:
Hi how are you?
As you can understand, this is no problem with 3 files: I just throw away files 1 and 2 and only use file 3. That's precisely what I want to do. But considering there are around 30k files in total, I would very much like to automate that.
It is unfortunately not possible to do this completely by file name, though it partially is. The files are named after their date, for instance 20110102 for Jan 2, 2011. However, as there are multiple email conversations on a given day, I would lose a lot if I just sorted by date and only kept the largest file.
I hope the problem is clear and you can help me with this.
I work on Mac OS X 10.7. I've tried using AppleScript, but either my script is not good or AppleScript can't handle the number of files.
Maybe you have a recommendation for software or a script of some kind? I'm open to anything and not unfamiliar with programming.
Thanks in advance!
As your task is basically just text processing, any language you're familiar with, including AppleScript, PHP, bash, or C, should be able to do the job. I think #inTide's suggestion of breaking the problem down into discrete steps is what you need, building one portion at a time in the language of your choice.
Pick a language that you're familiar with, start by writing the code for the first step, and make sure it's working as you expect. Then expand, adding a small bit of new functionality at each point and making sure that functionality works before moving on. Without an example of the code you've written or a better description of how AppleScript is failing for you, additional advice is difficult.
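For what it's worth, here is a minimal sketch of the approach described in the July 4 update, assuming the file names start with YYYYMMDD and that a reply always ends with the full text of the earlier message; the folder names are made up:

import os
import shutil
from collections import defaultdict

SRC, DEST = "mail_export", "mail_keep"      # hypothetical folder names
os.makedirs(DEST, exist_ok=True)

groups = defaultdict(list)                  # date prefix -> file names
for name in os.listdir(SRC):
    groups[name[:8]].append(name)           # first 8 characters: YYYYMMDD

for date, names in groups.items():
    texts = {}
    for n in names:
        with open(os.path.join(SRC, n), encoding="utf-8", errors="replace") as f:
            texts[n] = f.read().strip()
    for n in names:
        # a file is redundant if its whole text reappears at the end of
        # another file from the same day, i.e. it was quoted in a later reply
        redundant = any(m != n and texts[m].endswith(texts[n]) for m in names)
        if not redundant:
            shutil.copy2(os.path.join(SRC, n), os.path.join(DEST, n))

This keeps only the latest message of each conversation per day and copies it to a separate folder, which matches the keep-the-biggest-with-the-same-tail idea from the update.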

Convert .odt .doc .ods files to .txt files

I want to convert all my .odt, .doc, .xls, and .pdf files to .txt files.
I want to convert these files to text files using a shell script or a Perl script.
There's a program for .odt files and the like:
odt2txt - available in the repos.
$ unoconv --format=txt document1.odt
Should produce document1.txt.
OpenOffice has a built-in document converter capable of handling a bunch of formats; take a look at unoconv: http://dag.wieers.com/home-made/unoconv/
That being said, I have had some trouble getting it to work in the past. If you're having trouble, take a look at similar programs for AbiWord (another open-source word processor).
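If there are many files, a minimal batch sketch along these lines might help, assuming unoconv is installed and on the PATH (spreadsheets may need --format=csv instead, and PDFs usually convert better with a dedicated tool such as pdftotext):

import subprocess
from pathlib import Path

for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix.lower() in {".odt", ".doc", ".xls", ".pdf"}:
        # writes a .txt file next to the original, as unoconv does by default
        subprocess.run(["unoconv", "--format=txt", str(path)], check=False)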
For Word documents, you can try antiword, at least on Linux. It's a command-line utility that takes a Word document as an argument and spits out the text from that document (as best as it can figure) to standard output. You may be able to specify an output file too; I can't remember the details, as I haven't used it in a while. I'm not sure if it can handle OO documents.
It's certainly possible to do this, though there is something strange and impenetrable about the OO project and its documentation that makes things like this hard to research and follow. However, OO can convert all of those types, not just its native ones, and it can do it via two different forms of automatic control.
These are the two general approaches.
You can start OO and tell it to execute a macro which does this job for you for a given file. You then just have to write the macro and a script to loop over your files. The syntax is something like
$ oowriter -headless filename macro://dir/Standard.Module1.sMySub
The other thing OO has is a network API. This is based on something called UNO.
$ oowriter -accept=accept-string
Notifies the OpenOffice.org software that upon the creation of
"UNO Acceptor Threads", a "UNO Accept String" will be used.
You will need some sort of client library; I think they have one for Python at least. Using this technology, a Python program (or some other scripting language with an OO client library) could drive the program and convert all the files. Since OO reads MSO formats, it should be able to handle all of them.
Open the file in LibreOffice, click "File" > "Save As", and scroll down to find the text option. Click that and it will be saved as a text file.
FYI, I had an .odt file that was 339.2 KB in size. When I saved it as text, the file shrank to only 5.0 KB. Another reason for saving your files as text files.
For the Microsoft formats, look into the wvWare tools.
Open the .ods file normally in LibreOffice
Highlight the text to be converted
Open a terminal
Run vi
Press "i" to get into insert mode
Press Ctrl-Shift-V to paste
Done!
Need some formatting?
Save the file as
Get out of vi
Run:
$ cat filename | column > filename2
This worked in openSUSE running KDE.
Substitute "kwrite" for "vi" if you want.