Finding start of compressed data for items in a zip with zip4j - zip4j

I'm trying to find the start of compressed data for each zip entry using zip4j. It's a great library for returning the local header offset, which Java's ZipFile does not expose. However, I'm wondering if there is a more reliable way than what I'm doing below to get the start of the compressed data. Thanks in advance.
offset = header.getOffsetLocalHeader();
offset += 30; // add fixed local file header size
offset += header.getFileNameLength(); // add filename field length
offset += header.getExtraFieldLength(); //add extra field length
//not quite the right number, sometimes have to add 4
//seems to be some header data that is outside the extra field value
offset += 4;
Edit
Here is a sample zip:
https://alexa-public.s3.amazonaws.com/test.zip
The code below decompresses each item properly but won't work without the +4.
String path = "/Users/test/Desktop/zip test/test.zip";
List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();
for (FileHeader header : fileHeaders) {
    long offset = 30 + header.getOffsetLocalHeader() + header.getFileNameLength() + header.getExtraFieldLength();
    //fudge factor!
    offset += 4;
    RandomAccessFile f = new RandomAccessFile(path, "r");
    byte[] buffer = new byte[(int) header.getCompressedSize()];
    f.seek(offset);
    f.read(buffer, 0, (int) header.getCompressedSize());
    f.close();
    Inflater inf = new Inflater(true);
    inf.setInput(buffer);
    byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
    inf.inflate(inflatedContent);
    inf.end();
    FileOutputStream fos = new FileOutputStream(new File("/Users/test/Desktop/" + header.getFileName()));
    fos.write(inflatedContent);
    fos.close();
}

The reason you have to add 4 to the offset in your example is that the extra data field for this entry in the central directory (= file header) has a different size than the one in the local file header, and it is perfectly legal per the zip specification for the two sizes to differ. In fact, the extra field in question, the Extended Timestamp extra field (signature 0x5455), is officially defined with different lengths in the two places.
Extended Timestamp extra field (signature 0x5455)
Local-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
| (AcTime) | Long | time of last access (UTC/GMT) |
| (CrTime) | Long | time of original creation (UTC/GMT) |
Central-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
In the sample zip file you attached, the tool that created the zip writes 4 bytes more of this extra field in the local header than in the central directory.
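Each extra-field record is self-describing: a 2-byte tag and a 2-byte data size, both little-endian, followed by the payload. If you want to see the mismatch for yourself, you can walk the raw extra bytes from both headers with a few lines of plain Java (the class and method names below are mine, not zip4j's):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal walker for a zip extra-field byte stream: each record is a
// little-endian short tag, a little-endian short data size, then
// `size` payload bytes. Layout per the zip spec; names are illustrative.
public class ExtraFieldWalker {
    public static class Record {
        public final int tag;
        public final int size;
        public Record(int tag, int size) { this.tag = tag; this.size = size; }
    }

    static int shortLE(byte[] b, int off) {
        return (b[off] & 0xFF) | ((b[off + 1] & 0xFF) << 8);
    }

    public static List<Record> walk(byte[] extra) {
        List<Record> records = new ArrayList<>();
        int pos = 0;
        while (pos + 4 <= extra.length) {
            int tag = shortLE(extra, pos);
            int size = shortLE(extra, pos + 2);
            records.add(new Record(tag, size));
            pos += 4 + size; // skip record header and payload
        }
        return records;
    }

    public static void main(String[] args) {
        // A UT (0x5455) record as it might appear in a local header:
        // flags byte + 4-byte mod time = 5 payload bytes.
        byte[] extra = {0x55, 0x54, 5, 0, 1, 0, 0, 0, 0};
        for (Record r : walk(extra)) {
            System.out.println(Integer.toHexString(r.tag) + " size=" + r.size);
        }
    }
}
```

Running this on the extra bytes of the local header and of the central-directory entry for the same file would show the UT record with two different sizes.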
Relying on the extra field length from the central directory to reach the start of data is therefore error-prone. A more reliable way to achieve what you want is to read the extra field length from the local header itself. I have modified your code slightly to use the extra field length from the local header, not the central directory, to reach the start of data.
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.model.FileHeader;
import net.lingala.zip4j.util.RawIO;
import org.junit.Test;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;
public class ZipTest {

    private static final int OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE = 28;

    private RawIO rawIO = new RawIO();

    @Test
    public void testExtractWithDataOffset() throws IOException, DataFormatException {
        String basePath = "/Users/slingala/Downloads/test/";
        String path = basePath + "test.zip";
        List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();
        for (FileHeader header : fileHeaders) {
            RandomAccessFile f = new RandomAccessFile(path, "r");
            byte[] buffer = new byte[(int) header.getCompressedSize()];
            f.seek(header.getOffsetLocalHeader() + OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE);
            int extraFieldLength = rawIO.readShortLittleEndian(f);
            f.skipBytes(header.getFileNameLength() + extraFieldLength);
            f.read(buffer, 0, (int) header.getCompressedSize());
            f.close();
            Inflater inf = new Inflater(true);
            inf.setInput(buffer);
            byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
            inf.inflate(inflatedContent);
            inf.end();
            FileOutputStream fos = new FileOutputStream(new File(basePath + header.getFileName()));
            fos.write(inflatedContent);
            fos.close();
        }
    }
}
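As a sanity check that the local-header lengths are the authoritative ones, here is a self-contained sketch using only java.util.zip: it builds a single-entry archive in memory (so the first local header sits at offset 0), reads the name and extra lengths from the local header at offsets 26 and 28, and inflates the bytes that follow. Class and method names are illustrative only:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.Inflater;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Stdlib-only demonstration: the start of compressed data is
// 30 + nameLength + extraLength past the LOCAL header.
public class LocalHeaderDemo {
    static int shortLE(byte[] b, int off) {
        return (b[off] & 0xFF) | ((b[off + 1] & 0xFF) << 8);
    }

    public static String roundTrip(String content) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry("a.txt"));
            zos.write(content.getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
        byte[] zip = bos.toByteArray();
        // Single-entry zip: the first local header starts at offset 0.
        int nameLen = shortLE(zip, 26);   // file name length field
        int extraLen = shortLE(zip, 28);  // extra field length field
        int dataStart = 30 + nameLen + extraLen;
        Inflater inf = new Inflater(true); // raw deflate stream
        inf.setInput(zip, dataStart, zip.length - dataStart);
        byte[] out = new byte[4096];
        int n = inf.inflate(out);
        inf.end();
        return new String(out, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello zip"));
    }
}
```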
On a side note, I wonder why you want to read the data, deal with the inflater and extract the content yourself? With zip4j you can extract all entries with ZipFile.extractAll(), or you can extract each entry in the zip file with streams using ZipFile.getInputStream(). A skeleton example is:
ZipFile zipFile = new ZipFile("filename.zip");
FileHeader fileHeader = zipFile.getFileHeader("entry_name_in_zip.txt");
InputStream inputStream = zipFile.getInputStream(fileHeader);
Once you have the input stream, you can read the content and write it to any output stream. This way you can extract each entry in the zip file without having to deal with the inflater yourself.
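The copy itself is plain java.io; a minimal sketch of such a read/write loop (class name is mine, nothing zip4j-specific):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Generic stream-copy loop you would feed the stream returned by
// zipFile.getInputStream(fileHeader) into, writing to any OutputStream.
public class StreamCopy {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("payload".getBytes()), out);
        System.out.println(out.toString());
    }
}
```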


How do I import an elliptic P-256 public key using TSS.MSR (C++)

I am trying to import a public key from another system into mine using Microsoft's TSS.MSR library (C++) in order to set up a Diffie-Hellman key exchange.
However I get the following error:
"TPM Error - TPM_RC::SIZE: An attempt was made to join or substitute a drive for which a directory on the drive is the target of a previous substitute."
Here is my sample code:
storagePrimaryHandle = MakeStoragePrimary();
TPMT_PUBLIC eccTemplate(TPM_ALG_ID::SHA256,
    TPMA_OBJECT::decrypt |
    TPMA_OBJECT::fixedParent |
    TPMA_OBJECT::fixedTPM |
    TPMA_OBJECT::sensitiveDataOrigin |
    TPMA_OBJECT::userWithAuth,
    NullVec,
    TPMS_ECC_PARMS(
        TPMT_SYM_DEF_OBJECT(TPM_ALG_ID::_NULL, 0, TPM_ALG_ID::_NULL),
        TPMS_KEY_SCHEME_ECDH(TPM_ALG_ID::SHA256),
        TPM_ECC_CURVE::NIST_P256,
        TPMS_NULL_KDF_SCHEME()),
    TPMS_ECC_POINT()
);

// Import the public key
// Create a vector with the 64-byte public key
vector<BYTE> pubVector(publicKey, publicKey + publicKeyLength);
// Indicate this is an uncompressed key
pubVector.insert(pubVector.begin(), 1, 0x04);
inPublic = _tpm.Create(storagePrimaryHandle, TPMS_SENSITIVE_CREATE(), eccTemplate, pubVector, vector<TPMS_PCR_SELECTION>());
A few things to note:
1) If I pass an empty vector in place of pubVector, it works.
2) If I leave off the 0x04 (indicating an uncompressed public key), it still fails.
My work is based on the code at:
https://github.com/microsoft/TSS.MSR/tree/master/TSS.CPP/Samples
I figured it out:
The Create method creates a key from scratch; I needed the LoadExternal method to load just the public key portion:
// publicKey is a BYTE array holding the 64-byte P-256 public key (without the leading 04 = uncompressed)
vector<BYTE> pubKeyX(publicKey, publicKey + 32);
vector<BYTE> pubKeyY(publicKey + 32, publicKey + 64);
TPMT_PUBLIC eccTemplate(TPM_ALG_ID::SHA1,
    TPMA_OBJECT::decrypt |
    TPMA_OBJECT::fixedParent |
    TPMA_OBJECT::fixedTPM |
    TPMA_OBJECT::sensitiveDataOrigin |
    TPMA_OBJECT::userWithAuth,
    NullVec,
    TPMS_ECC_PARMS(
        TPMT_SYM_DEF_OBJECT(TPM_ALG_ID::_NULL, 0, TPM_ALG_ID::_NULL),
        TPMS_KEY_SCHEME_ECDH(TPM_ALG_ID::SHA256),
        TPM_ECC_CURVE::NIST_P256,
        TPMS_NULL_KDF_SCHEME()),
    TPMS_ECC_POINT(pubKeyX, pubKeyY)
);
pubHandle = _tpm.LoadExternal(TPMT_SENSITIVE::NullObject(), eccTemplate, TPM_HANDLE::FromReservedHandle(TPM_RH::_NULL));

How to use log4cxx RollingFileAppender on Windows

I'm trying to use log4cxx to log my application using RollingFileAppender on a Windows C++ console application. I would like to create a new log file every time the size reaches 1MB. Furthermore, when the desired size is reached, the file should be zipped automatically. The maximum number of files created must be 10; after which older files should be overwritten.
I'm using:
apache-log4cxx-0.10.0
apr-util-1.6.1
apr-1.7.0
This is my code:
log4cxx::rolling::RollingFileAppender* fileAppender1 = new log4cxx::rolling::RollingFileAppender();
fileAppender1->setLayout(log4cxx::LayoutPtr(new log4cxx::PatternLayout(L"[%d{ISO8601}{GMT}] %-4r [%t] %c | %-5p | %m%n")));
fileAppender1->setAppend(true);
log4cxx::helpers::Pool p;
fileAppender1->activateOptions(p);
log4cxx::rolling::FixedWindowRollingPolicy* rollingPolicy = new log4cxx::rolling::FixedWindowRollingPolicy();
rollingPolicy->setMinIndex(1);
rollingPolicy->setMaxIndex(10);
rollingPolicy->setFileNamePattern(L"j_log_%i.log");
log4cxx::rolling::SizeBasedTriggeringPolicy* triggerPolicy = new log4cxx::rolling::SizeBasedTriggeringPolicy();
triggerPolicy->setMaxFileSize(1024*1024);
fileAppender1->setRollingPolicy(rollingPolicy);
fileAppender1->setTriggeringPolicy(triggerPolicy);
LoggerPtr logger(Logger::getLogger("LogConsole1"));
logger->addAppender(fileAppender1);
logger->setLevel(log4cxx::Level::getTrace());
for (int i = 0; i < 10000; i++)
{
    LOG4CXX_INFO(logger, "Created FileAppender appender");
    LOG4CXX_INFO(logger, "LOGGER1");
}
The result is a file named ".1" (without any extension) with content like this (which looks correct):
[2019-09-13 07:44:58,619] 21063 [0x00003e14] LogConsole1 | INFO | Created FileAppender appender
[2019-09-13 07:44:58,622] 21066 [0x00003e14] LogConsole1 | INFO | LOGGER1
The problems are:
The file does not have the proper name
The file does not roll over (only one file is created, even when its size exceeds 1MB)
On the application console I see many exceptions like: "log4cxx: Exception during rollover"
What am I doing wrong?
I do not completely understand your file pattern, but the docs do not use the L (wide-string) prefix in their pattern.
In my projects I use
rollingPolicy->setFileNamePattern("file.%i.log");
sometimes with a string variable, which works well.
I cannot find the configuration in your code snippet.
As far as I know, you have to register the appender using the BasicConfigurator:
log4cxx::BasicConfigurator::configure(log4cxx::AppenderPtr(yourAppenderPointer));
This appends your appender to the root logger and works in my case.
Here is my full code snippet of my initialize.
void someclass::initLogger(std::string fileName) {
    std::string::size_type found = fileName.find(".log");
    std::string strippedFileName;
    if (found != std::string::npos)
    {
        strippedFileName = fileName.substr(0, found);
    }
    else
    {
        strippedFileName = fileName;
        fileName = fileName + ".log";
    }

    // initialize the rolling file appender
    rollingFileAppender = new log4cxx::rolling::RollingFileAppender();
    rollingPolicy = new log4cxx::rolling::FixedWindowRollingPolicy();
    rollingPolicy->setMinIndex(1);
    rollingPolicy->setMaxIndex(3);
    log4cxx::LogString fileNamePattern = strippedFileName + ".%i.log";
    rollingPolicy->setFileNamePattern(fileNamePattern);
    trigger = new log4cxx::rolling::SizeBasedTriggeringPolicy();
    trigger->setMaxFileSize(1024);
    rollingFileAppender->setRollingPolicy(rollingPolicy);
    rollingFileAppender->setTriggeringPolicy(trigger);
    rollingFileAppender->setLayout(log4cxx::LayoutPtr(new log4cxx::PatternLayout(LOGFILE_LAYOUT_PATTERN)));
    rollingFileAppender->setFile(fileName);
    rollingFileAppender->setAppend(true);

    // initialize a console appender
    consoleAppender = new log4cxx::ConsoleAppender(log4cxx::LayoutPtr(new log4cxx::PatternLayout(LOGFILE_LAYOUT_PATTERN)));

    log4cxx::helpers::Pool p;
    rollingFileAppender->activateOptions(p);
    log4cxx::BasicConfigurator::configure(log4cxx::AppenderPtr(consoleAppender));
    log4cxx::BasicConfigurator::configure(log4cxx::AppenderPtr(rollingFileAppender));
}
This logs to the specified file via the rolling file appender and also prints to the terminal via the console appender.
It produces one file named fileName.log and up to three rolled files named fileName.i.log.

Is it really possible to append data to a text file in MFC using the CFile and CStdioFile classes?

Is it really possible to append data to a text file in MFC by using the CFile and CStdioFile classes? If yes, then how?
I used the following code to append the data, but it only keeps the latest (last entered) data.
UpdateData(TRUE);
CStdioFile file_object;//(L"D://Docs//Temp.txt",
CFile::modeCreate | CFile::modeReadWrite | CFile::modeRead);
CString str = L"D://Docs//Temp.txt";
CString fc1, fc2;
BOOL bFile = file_object.Open(str,
CFile::modeCreate | CFile::modeReadWrite | CFile::modeRead);
if (bFile)
file_object.Seek(file_object.GetLength(), CFile::end);
fc1.Format(L"%f", m_CelTemp);
file_object.WriteString(L"Temp in Celsius is:");
file_object.WriteString(fc1);
file_object.WriteString(L"\n");
fc2.Format(L"%f", m_FarTemp);
file_object.WriteString(L"Temp in Fahrenheit is:");
file_object.WriteString(fc2);
file_object.WriteString(L"\n");
UpdateData(FALSE);
Here is sample code you can try:
CStdioFile file;
file.Open(_T("_FILE_PATH_HERE"),CFile::modeCreate|CFile::modeWrite|CFile::modeNoTruncate);
file.SeekToEnd();
file.WriteString(_T("Write Text Here\r\n")); // \r\n to move the cursor to the next line
file.Close();
CFile::modeCreate Creates a new file if no file exists. If the file already exists, a CFileException is raised.
CFile::modeNoTruncate Creates a new file if no file exists; otherwise, if the file already exists, it is attached to the CFile object.
CFile::modeWrite Requests write access only.
file.SeekToEnd(); Sets the value of the file pointer to the logical end of the file.
You should use CFile::SeekToEnd() method to set the value of the file pointer to the logical end of the file in order to append data.
Here is an example:
CStdioFile f;
CString sDataToWrite(_T("Data\r\n"));
if (f.Open(_T("C:\\Files\\file.txt"), CFile::modeCreate | CFile::modeNoTruncate | CFile::modeWrite))
{
    f.SeekToEnd();
    f.WriteString(sDataToWrite);
    f.Close(); // only close if the open succeeded
}
First off, what you posted wouldn't compile, starting with line #3. Please post real code.
Best guess is that the problem lies with CFile::modeCreate, which "creates a new file if no file exists; if the file already exists, a CFileException is raised" (see the reference docs at CFile::CFile). You probably want CFile::modeNoTruncate, which means "open existing".
Yes, it's possible to append data using CFile and CStdioFile.
You have to use
FileObj.Open(L"FileName.txt", CFile::modeCreate | CFile::modeWrite | CFile::modeNoTruncate);
A simple example:
CFile FileObj;
//CStdioFile FileObj;
//FileObj.Open(L"FileName.txt", CStdioFile::modeCreate | CStdioFile::modeWrite | CStdioFile::modeNoTruncate);
FileObj.Open(L"FileName.txt", CFile::modeCreate | CFile::modeWrite | CFile::modeNoTruncate);
FileObj.SeekToEnd();
CString Message = m_strName; // whatever your message is
FileObj.Write(Message, Message.GetLength() * sizeof(TCHAR)); // CFile::Write takes a byte count, not a character count
FileObj.Flush();
FileObj.Close();

Communication client-server with OCaml marshalled data

I want to build a client-side js_of_ocaml application with a server in OCaml, with the constraints described below, and I would like to know if the approach below is right or if there is a more efficient one. The server can sometimes send large quantities of data (> 30MB).
In order to make the communication between client and server safer and more efficient, I am sharing a type t in a .mli file like this :
type client_to_server =
| Say_Hello
| Do_something_with of int
type server_to_client =
| Ack
| Print of string * int
Then, this type is marshalled into a string and sent on the network. I am aware that on the client side, some types are missing (Int64.t).
Also, in an XMLHTTPRequest sent by the client, we want to receive more than one marshalled object from the server, sometimes in streaming mode (i.e., process each marshalled object as it arrives, during the request's loading state, and not only in the done state).
These constraints force us to use the field responseText of the XMLHTTPRequest with the content-type application/octet-stream.
Moreover, when we get the response back from responseText, an encoding conversion is made because JavaScript strings are UTF-16. Since the marshalled object is binary data, we do what is necessary to retrieve our binary data (by overriding the charset with x-user-defined and applying a mask to each character of the responseText string).
The server (an HTTP server in OCaml) does something simple like this:
let process_request req =
  let res = process_response req in
  let s = Marshal.to_string res [] in
  send s
However, on the client side, js_of_ocaml's JavaScript primitive for caml_marshal_data_size needs an MlString. But in streaming mode, we don't want to convert the JavaScript string to an MlString (which can iterate over the full string); we prefer to do the size check and unmarshalling (and apply the mask for the encoding problem) only on the bytes actually read. Therefore, I have written my own marshal primitives in JavaScript.
The client code for processing requests and responses is:
external marshal_total_size : Js.js_string Js.t -> int -> int = "my_marshal_total_size"
external marshal_from_string : Js.js_string Js.t -> int -> 'a = "my_marshal_from_string"
let apply (f : server_to_client -> unit) (str : Js.js_string Js.t) (ofs : int) : int =
  let len = str##length in
  let rec aux pos =
    let tsize =
      try Some (pos + My_primitives.marshal_total_size str pos)
      with Failure _ -> None
    in
    match tsize with
    | Some tsize when tsize <= len ->
      let data = My_primitives.marshal_from_string str pos in
      f data;
      aux tsize
    | _ -> pos
  in
  aux ofs
let reqcallback f req ofs =
  match req##readyState, req##status with
  | XmlHttpRequest.DONE, 200 ->
    ofs := apply f req##responseText !ofs
  | XmlHttpRequest.LOADING, 200 ->
    ignore (apply f req##responseText !ofs)
  | _, 200 -> ()
  | _, i -> process_error i

let send (f : server_to_client -> unit) (order : client_to_server) =
  let order = Marshal.to_string order [] in
  let msg = Js.string (my_encode order) in (* Do some stuff *)
  let req = XmlHttpRequest.create () in
  req##_open(Js.string "POST", Js.string "/kernel", Js._true);
  req##setRequestHeader(Js.string "Content-Type",
                        Js.string "application/octet-stream");
  req##onreadystatechange <- Js.wrap_callback (reqcallback f req (ref 0));
  req##overrideMimeType(Js.string "application/octet-stream; charset=x-user-defined");
  req##send(Js.some msg)
And the primitives are:
//Provides: my_marshal_header_size
var my_marshal_header_size = 20;
//Provides: my_int_of_char
function my_int_of_char(s, i) {
return (s.charCodeAt(i) & 0xFF); // utf-16 char to 8 binary bit
}
//Provides: my_marshal_input_value_from_string
//Requires: my_int_of_char, caml_int64_float_of_bits, MlStringFromArray
//Requires: caml_int64_of_bytes, caml_marshal_constants, caml_failwith
var my_marshal_input_value_from_string = function () {
/* Quite the same thing but with a custom Reader which
will call my_int_of_char for each byte read */
}
//Provides: my_marshal_data_size
//Requires: caml_failwith, my_int_of_char
function my_marshal_data_size(s, ofs) {
  function get32(s, i) {
    return (my_int_of_char(s, i) << 24) | (my_int_of_char(s, i + 1) << 16) |
           (my_int_of_char(s, i + 2) << 8) | my_int_of_char(s, i + 3);
  }
  if (get32(s, ofs) != (0x8495A6BE | 0))
    caml_failwith("MyMarshal.data_size");
  return get32(s, ofs + 4);
}

//Provides: my_marshal_total_size
//Requires: my_marshal_data_size, my_marshal_header_size, caml_failwith
function my_marshal_total_size(s, ofs) {
  if (ofs < 0 || ofs > s.length - my_marshal_header_size)
    caml_failwith("Invalid argument");
  else
    return my_marshal_header_size + my_marshal_data_size(s, ofs);
}
Is this the most efficient way to transfer large OCaml values from server to client, or what would time- and space-efficient alternatives be?
Have you tried using EventSource? https://developer.mozilla.org/en-US/docs/Web/API/EventSource
You could stream JSON data instead of marshalled data.
Json.unsafe_input should be faster than unmarshalling.
class type eventSource =
  object
    method onmessage :
      (eventSource Js.t, event Js.t -> unit) Js.meth_callback
        Js.writeonly_prop
  end
and event =
  object
    method data : Js.js_string Js.t Js.readonly_prop
    method event : Js.js_string Js.t Js.readonly_prop
  end

let eventSource : (Js.js_string Js.t -> eventSource Js.t) Js.constr =
  Js.Unsafe.global##_EventSource

let send (f : server_to_client -> unit) (order : client_to_server) url_of_order =
  let url = url_of_order order in
  let es = jsnew eventSource (Js.string url) in
  es##onmessage <- Js.wrap_callback (fun e ->
    let d = Json.unsafe_input (e##data) in
    f d);
  ()
On the server side, you then need to rely on Deriving_Json (http://ocsigen.org/js_of_ocaml/2.3/api/Deriving_Json) to serialize your data:
type server_to_client =
  | Ack
  | Print of string * int
  deriving (Json)

let process_request req =
  let res = process_response req in
  let data = Json_server_to_client.to_string res in
  send data
Note 1: Deriving_Json serializes OCaml values to JSON using js_of_ocaml's internal representation of values. Json.unsafe_input is a fast deserializer for Deriving_Json that relies on browser-native JSON support.
Note 2: Deriving_Json and Json.unsafe_input take care of OCaml string encoding.

Photoshop Action with save as unique name step

I have a need to create an action that will:
1. copy a selected part (selected by hand) of an image in an already opened file
2. paste selection into new file
3. save the new file as a jpg, but not with the default file name of "untitled.jpg" - instead use a unique name or an auto-increment suffix
Because the action will be run multiple times on different selections from the same image, saving each selection with a unique name or auto-incremented suffix would save the step of manually supplying the filename each time a different selection is saved.
I can create an action that gets to the save-as step, but don't know if it is possible to modify the default save as name as described above. Is it possible?
No. Tried it before with no success. You have to save manually.
I don't think this is possible with an action, but you can write a script to do it.
I have created a script for a similar task. It uses a technique to generate unique filenames and save the file.
/************************************************************************
* Author: Nishant Kumar
* Description: This script iterates through a template to create
* jpg images with id card numbers.
* Date: 08-03-2015
***********************************************************************/
// current id count
var id_count = 0;
// total no. of id cards to produce
var total_id_cards = 42;
// no. of cards per sheet
var card_per_sheet = 21;
// save path relative to the current file
var save_path = app.activeDocument.path;

// iterate, number the cards and save the file
do {
    // number the cards (card_per_sheet of them) in this document
    for (var i = 0; i < card_per_sheet; i++) {
        id_count++;
        app.activeDocument.layers[i].textItem.contents = id_count;
    }
    // create JPEG save options with standard settings
    jpgSaveOptions = new JPEGSaveOptions();
    jpgSaveOptions.embedColorProfile = true;
    jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
    jpgSaveOptions.matte = MatteType.NONE;
    jpgSaveOptions.quality = 12;
    // save with incremental file names (1.jpeg, 2.jpeg, ...); make sure the path exists
    jpgFile = new File(save_path + "/output/" + id_count / card_per_sheet + ".jpeg");
    app.activeDocument.saveAs(jpgFile, jpgSaveOptions, true, Extension.LOWERCASE);
} while (id_count < total_id_cards);
I know this is old, but still: you can use the following script.
How to use a script:
Copy the following script into Notepad and save it in a directory similar to "C:\Program Files (x86)\Adobe\Adobe Photoshop CS2\Presets\Scripts" with the extension JSX.
To run the script in Photoshop, go to File > Scripts > "Your Script".
#target photoshop
main();

function main() {
    if (!documents.length) return;
    var Name = app.activeDocument.name.replace(/\.[^.]+$/, '');
    Name = Name.replace(/\d+$/, '');
    try {
        var savePath = activeDocument.path;
    } catch (e) {
        alert("You must save this document first!");
        return; // savePath is undefined if the document was never saved
    }
    var fileList = savePath.getFiles(Name + "*.jpg").sort().reverse();
    var Suffix = 0;
    if (fileList.length) {
        Suffix = Number(fileList[0].name.replace(/\.[^\.]+$/, '').match(/\d+$/));
    }
    Suffix = zeroPad(Suffix + 1, 4);
    var saveFile = File(savePath + "/" + Name + "_" + Suffix + ".jpg");
    SaveJPG(saveFile);
}

function SaveJPG(saveFile) {
    // create JPEG save options with standard settings
    jpgSaveOptions = new JPEGSaveOptions();
    jpgSaveOptions.embedColorProfile = true;
    jpgSaveOptions.formatOptions = FormatOptions.STANDARDBASELINE;
    jpgSaveOptions.matte = MatteType.NONE;
    jpgSaveOptions.quality = 12;
    // save the active document under the computed incremental name
    activeDocument.saveAs(saveFile, jpgSaveOptions, true, Extension.LOWERCASE);
};

function zeroPad(n, s) {
    n = n.toString();
    while (n.length < s) n = '0' + n;
    return n;
};
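The naming scheme in the script (scan for the highest existing numeric suffix, add one, zero-pad) is not Photoshop-specific. For reference, here is the same idea sketched in Java against a plain in-memory list of names standing in for savePath.getFiles(); all names are hypothetical:

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Find the highest existing "_NNNN.jpg" suffix among the given names
// and return the next zero-padded file name for the same base.
public class NextName {
    static final Pattern SUFFIX = Pattern.compile("_(\\d+)\\.jpg$");

    public static String next(String base, List<String> existing) {
        int max = 0;
        for (String name : existing) {
            Matcher m = SUFFIX.matcher(name);
            if (name.startsWith(base) && m.find()) {
                max = Math.max(max, Integer.parseInt(m.group(1)));
            }
        }
        return String.format("%s_%04d.jpg", base, max + 1); // zero-pad to 4 digits
    }

    public static void main(String[] args) {
        System.out.println(next("card", List.of("card_0001.jpg", "card_0002.jpg")));
    }
}
```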