How to create layer combinations in Photoshop

I have multiple layers in a file and I want to create combinations of those layers, like layer 1 + layer 2, layer 1 + layer 3, and so on.
See the example in the image.
I want to automate this process. As you can see in the image, I have placed the first layer (the one to be used in every output) on the left side, and I need a tool that turns off the layers on the right side one by one and saves the image each time, as layer01-layer02.jpg, layer01-layer03.jpg, and so on.
Kindly help me with this.
EDIT:
A script on GitHub helped me solve the problem:
Photoshop script that separately saves top level layers inside the selected group
Just place the layer you want to keep in all images outside the group, and put all the other layers in a layer group.
Then select the group and run the script; it will save all combinations with the layer that is placed outside the group.
Now, if anyone here knows scripting, I have one question.
When we run the script it asks for a file name and then appends incremental numbers to it, so if we enter abc it saves the images as abc1, abc2, and so on.
What I want is for the script to append the layer name instead: if the layer names are japan and america, it should save the files as abcjapan and abcamerica.
Can it be done?

You will need to cycle through all the layers in your document and set the visibility as needed before exporting the image. To set the visibility you'll need something like this:
for (var layerIndex = 0; layerIndex < app.activeDocument.artLayers.length; layerIndex++) {
    var layer = app.activeDocument.artLayers[layerIndex];
    // do some logic to determine if the layer should be visible
    layer.visible = true;
}
And see the answer to this question re: saving your jpeg once you've set the layer visibility as needed.
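As for appending the layer name instead of a counter, here is a minimal sketch of the renaming logic. The `buildOutputName` helper and the plain `layers` array are hypothetical stand-ins: in the actual GitHub script the layers would come from `app.activeDocument` and you would splice this into the save loop where the counter is currently appended.

```javascript
// Sketch: build the output file name from the base name plus each layer's
// name, instead of an incremental number. The array below stands in for
// the group's artLayers collection in Photoshop.
function buildOutputName(baseName, layerName) {
  return baseName + layerName + ".jpg";
}

// Example layer names from the question (japan, america):
var layers = [{ name: "japan" }, { name: "america" }];
var names = [];
for (var i = 0; i < layers.length; i++) {
  // In the real script you would make layers[i] visible here, hide the
  // others, then save the document under outName.
  var outName = buildOutputName("abc", layers[i].name);
  names.push(outName);
}
// names now holds one file name per layer, e.g. "abcjapan.jpg"
```

In the GitHub script, look for the place where the numeric suffix is concatenated onto the user-entered name and substitute the current layer's name there.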

Related

Android/Java: Is there a fast way to filter large data saved in a list? And how to get high-quality pictures with small storage space on the server?

I have two questions.
The first one is:
I have a large amount of data coming from the server, which I save in a list. The customer can filter this data by 7 filters plus two text watchers, and this makes the filtering operation slow: it takes 4 seconds each time.
I tried putting the filter keywords (like length or width) in one if statement with && between them, but that didn't give me a result. I also tried replacing the TextWatcher with a Spinner, but that wasn't useful either.
I'm using a single for loop.
So the question is: how can I apply multiple filters to a list containing up to 2000 rows with minimal or zero delay?
The second is:
I save between 2 and 8 pictures on the server in string form.
The question is: when I get these pictures from the server, how can I show them in high quality? When I show them now I can see the pixels, and this is not good for the customer.
I don't want these pictures to take up a lot of space on the server, and at the same time I want good quality when I retrieve them for display.
I'm using Android/Java.
Thank you
The answer to my first question: if you want to use a filter (like when you are using an online clothes shop and you want to filter by the lowest price), you should use a hash map instead of an ordinary list; it will be faster.
The answer to my second question: if you want to store images in a database, you should save each one as a link, not as a string or any other datatype.
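To illustrate the hash-map idea, here is a small sketch, written in JavaScript for brevity; the same pattern maps directly to a Java HashMap<String, List<Row>>. The field names (length, width) are taken from the question; the sample rows are made up.

```javascript
// Sketch: pre-index the rows by one filter field so a filter becomes a
// single map lookup plus a scan over a small bucket, instead of rescanning
// the whole 2000-row list on every keystroke.
var rows = [
  { id: 1, length: "10", width: "5" },
  { id: 2, length: "10", width: "7" },
  { id: 3, length: "20", width: "5" }
];

// Build the index once, right after loading the data from the server.
var byLength = {};
for (var i = 0; i < rows.length; i++) {
  var key = rows[i].length;
  if (!byLength[key]) byLength[key] = [];
  byLength[key].push(rows[i]);
}

// Filtering on "length" is now a lookup; further conditions (e.g. width)
// are combined with && over the small bucket only.
function filterRows(lengthValue, widthValue) {
  var bucket = byLength[lengthValue] || [];
  var out = [];
  for (var j = 0; j < bucket.length; j++) {
    if (widthValue === null || bucket[j].width === widthValue) out.push(bucket[j]);
  }
  return out;
}
```

For example, `filterRows("10", "5")` touches only the two rows in the "10" bucket instead of all three. In Java you would rebuild the index whenever fresh data arrives from the server.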

ML.Net LearningPipeline always has 10 rows

I have noticed that the Microsoft.Ml.Legacy.LearningPipeline.Row count is always 10 in the SentimentAnalysis sample project, no matter how much data is in the test or training datasets.
https://github.com/dotnet/samples/blob/master/machine-learning/tutorials/SentimentAnalysis.sln
Can anyone explain the significance of 10 here?
// LearningPipeline allows you to add steps in order to keep everything together
// during the learning process.
// <Snippet5>
var pipeline = new LearningPipeline();
// </Snippet5>
// The TextLoader loads a dataset with comments and corresponding positive or negative sentiment.
// When you create a loader, you specify the schema by passing a class to the loader containing
// all the column names and their types. This is used to create the model, and train it.
// <Snippet6>
pipeline.Add(new TextLoader(_dataPath).CreateFrom<SentimentData>());
// </Snippet6>
// TextFeaturizer is a transform that is used to featurize an input column.
// This is used to format and clean the data.
// <Snippet7>
pipeline.Add(new TextFeaturizer("Features", "SentimentText"));
//</Snippet7>
// Adds a FastTreeBinaryClassifier, the decision tree learner for this project, and
// three hyperparameters to be used for tuning decision tree performance.
// <Snippet8>
pipeline.Add(new FastTreeBinaryClassifier() { NumLeaves = 50, NumTrees = 50, MinDocumentsInLeafs = 20 });
// </Snippet8>
The debugger is showing only a preview of the data - the first 10 rows. The goal here is to show a few example rows and how each transform is operating on them to make debugging easier.
Reading in the entire training data and running all the transformations on it is expensive and only happens when you reach .Train(). As the transformations are only operating on a few rows, their effect might be different when operating on the entire dataset (e.g. the text dictionary will likely be bigger), but hopefully the preview of data shown before running through the full training process is helpful for debugging and making sure transforms are applied to the correct columns.
If you have any ideas on how to make this clearer or more useful, it would be great if you can create an issue on GitHub!

PDI - Check data types of field

I'm trying to create a transformation that reads CSV files and checks the data types of each field in the CSV.
Like this: field A should be a string(1) (a single character) and field B should be an integer/number.
What I want is to check/validate: if A is not string(1), set Status = Not Valid, and likewise if B is not an integer/number. Then every file with the status Not Valid will be moved to an error folder.
I know I can use the Data Validator to do the check, but how do I move the file based on that status? I can't find any step to do it.
You can read the files in a loop and add steps as follows:
After the data validation, filter the rows with a negative result (not matched), then use an Add constants step with error = 1 and a Set variables step for the error field with default value 0.
After the transformation finishes, add a Simple evaluation step in the parent job to check the value of the ERROR variable: if it has the value 1, move the file; otherwise continue.
I hope this helps.
You can do the same as in this question. Once the files are read, use a Group by to get one flag per file. However, this time you cannot do it in one transformation; you have to use a job.
Your use case is among the samples shipped with your PDI distribution, in the folder your-PDI/samples/jobs/run_all. Open Run all sample transformations.kjb and replace the Filter 2 of Get Files - Get all transformations.ktr with your own logic, which includes a Group by so that you have one status per file and not one status per row.
In case you wonder why you need such complex logic for such a task, remember that PDI starts all the steps of a transformation at the same time. That is its great power, but it also means you do not know whether you can move the file before every row has been processed.
Alternatively, you have the quick-and-dirty solution from your similar question: change the Filter rows step to a type check, and the final Synchronize after merge step to a Process files/Move.
And a final piece of advice: instead of checking the type with a Data Validator, which is a good solution in itself, you may use a JavaScript step as shown there. It is more flexible if you need maintenance in the long run.
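For illustration, the type check could look something like this inside a Modified Java Script Value step. This is a sketch under assumptions: the field names A and B come from the question, and `validateRow` is a hypothetical helper; in the real step the input fields are available directly as variables and the result would be emitted as a new output field.

```javascript
// Sketch: validate that A is a one-character string and B is a whole
// number, returning the Status value used in the question.
function validateRow(A, B) {
  var valid = true;
  // A must be a string of exactly one character (string(1)).
  if (typeof A !== "string" || A.length !== 1) valid = false;
  // B must look like an integer (optionally negative).
  if (String(B).match(/^-?\d+$/) === null) valid = false;
  return valid ? "Valid" : "Not Valid";
}
```

A Group by on this status, as described above, then gives you the one flag per file that the parent job checks before moving the file.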

How to use TimeSeriesForecasting in KnowledgeFlow?

The Weka Explorer provides a Time Series Forecasting perspective, and it is easy to use.
However, what should I do if I want to use the KnowledgeFlow for time series forecasting?
And what if I want to save the original dataset together with the predictions?
Solution (thanks to the help of people from the WekaList, especially Mark Hall and Eibe Frank):
Open the KnowledgeFlow and load the dataset with an ArffLoader.
Go to Settings, enable the Time Series Forecasting perspective, then right-click the ArffLoader and send it to all perspectives.
Go to the Time Series Forecasting perspective and set up a model.
Run the model and copy the model to the clipboard.
Press Ctrl+V and click to paste the model onto the Data mining process canvas.
Save the predictions along with the original data using an ArffSaver.

Arcpy.CompositeBands can't access the input datasets but the path is correct. Any ideas?

I have about 30 rasters with 4 bands each from which I am trying to create composites, so that I can eventually bring all of the rasters together into one large raster. But the first step is to create the composite rasters. I would like to do this all at once, and I found a few examples on various sites on how to do it, including ESRI's. I've pieced them together to create my own code; unfortunately, I keep getting error 000271: Cannot open the input datasets.
I know the path is correct because arcpy.ListRasters() returns the files in the folder as a large list, so the problem is definitely with the Composite Bands tool. I've looked up possible solutions to this problem, but I did not understand them or how they worked, so if you do have an answer or suggestion, could you comment on your code (if you write one) or your answer so I know what is going on and why?
About the data: they are all ERDAS IMAGINE image rasters with 4 image color bands: R, G, B, and whatever N is. All but a few rasters have bands named Layer_1, Layer_2, and so on; the few remaining ones use Band_1, Band_2, and so on. Here is my code:
arcpy.env.workspace = r'\\network\folder\subfolder1\subfolder2\All_RGBN'
ws = arcpy.env.workspace
outws = r'\\network\folder\subfolder1\subfolder2\RGBN_Composit'

for ras in arcpy.ListRasters("*.img"):
    name = outws + "\\" + ras
    try:
        arcpy.CompositeBands_management("Layer_1.img;Layer_2.img;Layer_3.img;Layer_4.img", name)
    except:
        arcpy.CompositeBands_management("Band_1.img;Band_2.img;Band_3.img;Band_4.img", name)
Thanks!
If your rasters have multiple bands, they are already composites. Composite Bands should be used when your bands are distinct raster datasets that you want to merge into one raster.
If you want to merge all your rasters (composite or not) into one single dataset, you should create a Mosaic Dataset or a Raster Catalog and load your rasters into it.
And FYI, you get the error message from the Composite Bands tool because your raster bands (the inputs) are not correctly referenced; you should write something like:
ras + "\\Layer_x" instead of "Layer_x.img"
But doing this will output the exact same raster as the original one.