I have thousands of PSD files to save as PNG. The files are identical except for a small piece of text in the center of the image. Is there a way to automate the job?
Yes. Open your Actions window and create a new action. Record yourself opening a file, saving it as PNG, and closing it.
Then go to File -> Automate -> Batch, point it to your PSD folder, and select your action. It will run through the files, saving each one as a PNG.
A quick google search may help if you're new to actions.
XnView does the job pretty well. It can batch convert most files into most formats. It also has batch transformations and batch renaming among other things.
I use it regularly to convert PSDs to JPG/PNG/GIF.
I would use IrfanView's powerful batch engine. Free and super-fast.
Go to the folder in IrfanView Thumbnails
Select all files
Right-click and choose "Start batch dialog with selected files"
Select PNG as the output format.
Yes, you can create a Photoshop action to save the PNG and then run it via Batch. Unfortunately, this gets tricky when you want to use the action option and also specify the destination where the processed files are saved.
Enter Dr. (Russell) Brown's Image Processor Pro, an extension for Photoshop that does exactly what most people need. It's dead simple and can even stack multiple processes and output formats/destinations for each file.
It's part of Dr. Brown’s Services
- http://russellbrown.com/scripts.html
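If Photoshop isn't strictly required, the batch conversion can also be scripted outside it. A minimal sketch, assuming ImageMagick's `magick` CLI is installed; the `[0]` selector picks the flattened composite of each PSD:

```python
import subprocess
from pathlib import Path

def build_convert_cmd(psd_path: Path) -> list[str]:
    # "[0]" selects the merged composite image of the PSD.
    return ["magick", f"{psd_path}[0]", str(psd_path.with_suffix(".png"))]

def batch_convert(folder: str) -> None:
    # Convert every .psd in the folder to a .png alongside it.
    for psd in sorted(Path(folder).glob("*.psd")):
        subprocess.run(build_convert_cmd(psd), check=True)
```

This keeps the filenames and only changes the extension; anything fancier (output subfolder, resizing) is a one-line change to the command list.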
Background
I'm trying out SageMaker Ground Truth, an AWS service to help you label your data before using it in your ML algorithms.
The labeling job requires a manifest file which contains a JSON object per row that contains a source or a source-ref, see also the Input Data section of the documentation.
Setup
source-ref is a reference to where the document is located in an S3 bucket, like so:
my-bucket/data/manifest.json
my-bucket/data/123.txt
my-bucket/data/124.txt
...
The manifest file looks like this (based on the blog example):
{"source-ref": "s3://my-bucket/data/123.txt"}
{"source-ref": "s3://my-bucket/data/124.txt"}
...
The problem
When I create the job, all I get is the source-ref value s3://my-bucket/data/123.txt as the text; the contents of the file are not displayed.
I have tried creating jobs using a manifest that does not contain the s3 protocol, but I get the same result.
Is this a bug on their end, or am I missing something?
Observations
I tried making all the files public, thinking there might be a permissions issue, but that didn't help
I ensured that the content type of the files was text (S3 -> object -> Properties -> Metadata)
If I use "source" and inline the text, it works properly, but I should be able to use individual documents, since there is a limit on the file size, especially if I have to label many or large documents!
I am a member of the AWS SageMaker Ground Truth team. Sorry to hear that you are having difficulties with certain features of our product.
From your post I presume you have multiple text files, and each text file contains multiple lines. For text classification, in order to show a preview in the console, we currently support only the inline mode, using "source" to contain each line.
We understand it is not convenient to create such a manifest with embedded text, as it is non-trivial and time-consuming. That is why we provide a crawling feature in the console (please see the "create input manifest" link over the input manifest box) that takes an input S3 prefix, crawls all text files (with extensions .txt, .csv) under that prefix, reads each line of each of the text files, and creates a manifest with each line as {"source":""}. Please let us know if you can crawl to create your manifest.
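The crawler's output can also be reproduced locally. A rough sketch of that transformation; the {"source": ...} line format is taken from the description above, and the directory paths are placeholders:

```python
import json
from pathlib import Path

def build_inline_manifest(data_dir: str, manifest_path: str) -> int:
    # Read every line of every .txt/.csv file in data_dir and emit one
    # {"source": "<line>"} JSON object per line, as the console crawler does.
    count = 0
    with open(manifest_path, "w", encoding="utf-8") as out:
        for path in sorted(Path(data_dir).iterdir()):
            if path.suffix not in (".txt", ".csv"):
                continue
            for line in path.read_text(encoding="utf-8").splitlines():
                if line.strip():
                    out.write(json.dumps({"source": line}) + "\n")
                    count += 1
    return count
```

The resulting file can be uploaded to S3 and pointed to as the input manifest.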
Please note that currently the crawler will only work if you created the s3://my-bucket/data/ folder from the console and then uploaded all the text files into this folder (instead of using the S3 CLI sync tool to upload a local data/ directory).
Sorry if our documentation is not clear; we are definitely taking your feedback to improve our product. For any questions, please reach us here: https://aws.amazon.com/contact-us/
The problem is with your preprocessing Lambda. The preprocessing Lambda receives the objects from the manifest (in batches, AFAIK), i.e. the S3 sources. It must read the files and return the actual content. It sounds like your preprocessing is passing the file locations instead of the content. Refer to the documentation; any example preprocessing Lambda for text should be easily adjustable to your case.
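A sketch of such a pre-annotation Lambda, with the S3 fetch made injectable for testing. The event and response shapes ("dataObject", "source-ref", "taskInput") are assumptions based on the Ground Truth docs and should be checked against the current pre-annotation Lambda contract:

```python
from urllib.parse import urlparse

def fetch_s3_text(uri: str) -> str:
    # In the Lambda runtime this downloads the object via boto3.
    import boto3  # available in the AWS Lambda environment
    parsed = urlparse(uri)
    obj = boto3.client("s3").get_object(Bucket=parsed.netloc,
                                        Key=parsed.path.lstrip("/"))
    return obj["Body"].read().decode("utf-8")

def lambda_handler(event, context, fetch=fetch_s3_text):
    # Resolve "source-ref" to the file's content; fall back to an
    # inline "source" if that is what the manifest row contains.
    data = event["dataObject"]
    text = fetch(data["source-ref"]) if "source-ref" in data else data["source"]
    return {"taskInput": {"taskObject": text}}
```

The key point matches the answer: the value handed to the task template must be the text itself, not the S3 URI.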
I am working on an image processing project. I am not familiar with HTML. Here is what I think.
My C++ application can read an image and write it back to file after processing. The procedure is: the user clicks the mouse in a fixed region of my web page, the click position is passed as a parameter to my application, my C++ application uses the position data to process the image and output it to a file, and finally my web page displays the image.
So is it possible to implement this?
I'm afraid it's not possible with HTML alone.
It should be possible with a server-side script written in PHP (for example). Alternatively, you can make your program watch an upload folder and save the processed images into another folder; you will still need PHP or something similar, though.
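The answer suggests PHP; the same server-side glue can be sketched with Python's standard library instead. `process_image`, `out.png`, and the `x`/`y` query parameters are placeholder names for the C++ binary and its interface:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def build_command(query: str, app="./process_image", out="out.png") -> list[str]:
    # Turn a query string like "x=10&y=20" into the argument
    # list for the C++ image processor.
    params = parse_qs(query)
    return [app, params["x"][0], params["y"][0], out]

class ClickHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Run the C++ app with the click position, then serve its output image.
        subprocess.run(build_command(urlparse(self.path).query), check=True)
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        with open("out.png", "rb") as f:
            self.wfile.write(f.read())

# HTTPServer(("", 8000), ClickHandler).serve_forever()  # start the server
```

The web page only needs a click handler that requests `/process?x=...&y=...` and swaps in the returned image.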
I need an animation in my program. My designer draws the animation in Flash and provides me with a *.fla file. All I need is to grab 30-40 PNGs from this file and store them in my internal storage.
Is it possible to grab resources from a *.fla file with C++? Perhaps some Adobe OLE objects can help?
Please advise.
Thanks in advance.
If I asked an artist to make me an icon I wouldn't expect to need to write code to convert a .3DS model into a usable icon format.
You can save yourself a lot of time and hassle by having your designer use File->Export and give you PNGs of the layers and frames instead of a .FLA file if that's the format you require for your implementation.
If that's not possible for some reason, you can probably find a Flash decompiler with a command-line option that you could launch from your program to extract assets as part of your loading sequence. That is generally frowned upon, though: it is no more the intended use of the proprietary .swf/.fla format than extracting source code from a binary executable is.
Assuming
You are using CS5
The assets used internally in the FLA are already PNGs, as you want them to be
then simply get the FLA saved as an XFL file, and you will be able to grab them from the library folder (but then why not just get them to mail you the PNGs?).
If for some reason you can only get access to the FLA and not the designer, you can do it programmatically by renaming the .fla to .zip and extracting; you then have the XFL format.
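Since a CS5+ FLA saved in XFL form is just a ZIP archive, that extraction step can be scripted. A sketch assuming the bitmap assets live as PNG entries (typically under LIBRARY/) inside the archive:

```python
import shutil
import zipfile
from pathlib import Path

def extract_pngs(fla_path: str, out_dir: str) -> list[str]:
    # Open the FLA as a ZIP archive and copy out every PNG entry.
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(fla_path) as archive:
        for name in archive.namelist():
            if name.lower().endswith(".png"):
                target = Path(out_dir) / Path(name).name
                with archive.open(name) as src, open(target, "wb") as dst:
                    shutil.copyfileobj(src, dst)
                extracted.append(str(target))
    return sorted(extracted)
```

The same could be done from C++ with any ZIP library, since no Flash-specific parsing is involved.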
An application at our company uses pdfimages (from xpdf) to check whether certain pages in a PDF file, on which we know there is no text, consist of a single image.
For this we run pdfimages on that page and count whether zero, one, or two or more output files are created (they could be JPG, PPM, PGM or PBM).
The problem is that for some PDF files we get millions of 14-byte PPM images, and the process has to be killed manually.
We know that by assigning the process to a job object we can restrict how long it runs. But it would probably be better if we could ensure that the process creates new files at most twice during its execution.
Do you have any clue for doing that?
Thank you.
One approach is to monitor the directory for file creations: http://msdn.microsoft.com/en-us/library/aa365261(v=vs.85).aspx - the monitoring app could then terminate the PDF image extraction process.
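A portable sketch of that watchdog idea, polling the output directory and terminating the extraction process once more than a given number of files appear (the Win32 change-notification API linked above would avoid the polling):

```python
import subprocess
import time
from pathlib import Path

def run_with_file_limit(cmd, watch_dir, max_files=2, poll=0.1, timeout=60.0):
    # Launch the extraction tool, then poll the output directory and
    # terminate the process as soon as it exceeds max_files files.
    proc = subprocess.Popen(cmd)
    deadline = time.monotonic() + timeout
    while proc.poll() is None and time.monotonic() < deadline:
        count = sum(1 for p in Path(watch_dir).iterdir() if p.is_file())
        if count > max_files:
            proc.terminate()
            break
        time.sleep(poll)
    if proc.poll() is None:      # also enforce the overall timeout
        proc.terminate()
    return proc.wait()
```

With a fast poll interval the runaway case is cut off after a handful of 14-byte files instead of millions, and the nonzero return code distinguishes "killed" from "finished normally".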
Another would be to use a simple ramdisk which limited the number of files that could be created: you might modify something like http://support.microsoft.com/kb/257405.
If you can set up a FAT16 filesystem, the root directory is typically limited to 512 entries; with such small files that limit would be reached quickly.
Also, aside from my 'joke' comment, you might want to check out _setmaxstdio and see if that helps ( http://msdn.microsoft.com/en-us/library/6e3b887c(VS.71).aspx ).
I was wondering if anyone knew how to access the metadata (the date in particular) in JPG, ARW and DNG files.
I recently lost my folder structure after a merge operation gone bad and would like to rename the recovered files according to their metadata.
I'm planning to create a little C++ app to dig into each file and get the metadata.
Any input is appreciated.
(Alternatively, if you know of an app that already does this, I'd like to know :)
Have you looked at the libexif project http://libexif.sourceforge.net/?
OK, so I did a Google search (probably should have started with that) for "batch rename based on exif data arw dng jpg",
and the first page that popped up was ExifTool by Phil Harvey.
It supports recent ARW and DNG files, and with some command-line magic I should be able to get it to do pretty much what I want:
exiftool -r -d images/%Y-%m-%d/%Y%m%d_%%.4c.%%e "-filename<filemodifydate" pics
- moves files in the pics folder (and subfolders) into date folders (images/YYYY-MM-DD/) and renames them YYYYMMDD_####.ext
hope this helps others
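For completeness, the same date-based layout can be reproduced with a small script. This sketch uses the file modification date, like the exiftool command above does via filemodifydate (reading the actual EXIF date from ARW/DNG would need a library such as exiftool or libexif):

```python
import shutil
import time
from pathlib import Path

def organize_by_mtime(src_dir: str, dest_root: str) -> list[str]:
    # Move files into dest_root/YYYY-MM-DD/ and rename them
    # YYYYMMDD_####.ext, numbering them in sorted order.
    moved = []
    files = sorted(p for p in Path(src_dir).rglob("*") if p.is_file())
    for i, path in enumerate(files):
        stamp = time.localtime(path.stat().st_mtime)
        day_dir = Path(dest_root) / time.strftime("%Y-%m-%d", stamp)
        day_dir.mkdir(parents=True, exist_ok=True)
        target = day_dir / (time.strftime("%Y%m%d", stamp) + f"_{i:04d}{path.suffix}")
        shutil.move(str(path), target)
        moved.append(str(target))
    return moved
```

Unlike exiftool this ignores the camera's EXIF timestamp, so it only matches when the modification dates survived the recovery.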
You should also try Adobe XMP SDK, which is great for its supported formats (JPEG, PNG, TIFF and DNG).