Move document to another folder of the same library using Power Automate workflow - sharepoint-2013

I am new to workflows. I have created a workflow that needs to move documents which have not been modified for over one year to another folder of the same library.
Source library : AAATestDocument
Destination : AAATestDocument/ArchivedDocuments
It moves documents to the "ArchivedDocuments" folder, but at the end it also checks the archive folder itself and throws an error saying "/sites/TestSite/AAATestDocument/ArchivedDocuments does not exist."
Any idea how to resolve this issue for folders? It should check recursively in all subfolders, if there are any.
Here is my workflow.
Here the expression is set to 1 day to test the flow: "startOfDay(addDays(utcNow(),1))"

You could add conditions in your flow like the below:
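For example (assuming the flow uses the SharePoint "Get files (properties only)" action on AAATestDocument and then loops over the results in an Apply to each), one condition that helps is to skip anything that is already in the archive: before the "Move file" action, check that an expression such as contains(item()?['{Path}'], 'ArchivedDocuments') is false. The one-year cutoff itself can be written as Modified less than addDays(utcNow(), -365) instead of the one-day test expression above. These action and field names are only a sketch of one way to set it up, not necessarily the exact flow shown in the screenshot.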


Error when trying to redo the In-Flight Entertainment System M2DOC example

I want to use M2DOC instead of the HTML generation for sharing my Capella model.
I installed m2doc and generated the Word document of the SA layer just by running "generate documentation" (right-click on the "Template SA Complete.genconf" downloaded with m2doc) (see screenshot).
Generate SA documentation
This way, the SA_Complete.docx is generated correctly.
I want to use this template on my own project. So first I try to recreate a "Template SA Complete.genconf" in the In-Flight Entertainment project:
- right-click on the docx template, "Initialize Documentation Configurations"
- variable name: click on "In-Flight Entertainment System.capella" (screenshot: variable_name)
- right-click on (my) "Template SA Complete.genconf" > "Validate Documentation Template"
And I have some errors...
The first one : '''Couldn't find the 'isRepresentationDescriptionName(EClassifier=Capability,java.lang.String)' service <--- The predicate never evaluates to a boolean type([Nothing(Couldn't find the 'isRepresentationDescriptionName(EClassifier=Capability, java.lang.String)'service)])'''
error_msg
I followed this procedure : https://www.m2doc.org/ref-doc/1.0.0/index.html#initializing-a-generation-configuration
I started from the "beginning" (without changing anything in the template) to check that I was starting from a good basis, but that's apparently not the case... Why? Thank you.
You need to check that the SiriusSession option is referencing the .aird file in your Capella project. You should use the relative path to the .aird file. Once the SiriusSession is properly defined, the Sirius services will be available.
You can use existing .genconf files in the IFE example to see how the configuration is done.

Data Management API: get the full folder path of the document

When using the https://developer.api.autodesk.com/data/v1/projects/:project_id/folders/:folder_id/search endpoint, I am receiving both deleted and non-deleted documents.
I know there are posts stating that using included.attributes.hidden would resolve the issue; however, I noticed that if you delete the parent folder of a document, the document's included.attributes.hidden still shows false (not deleted).
I am thinking of a workaround: get the full folder hierarchy under the searched folder, then cross-check it with the document's parents to know whether the document is deleted.
Going the recursive route of calling the parent of a parent until I reach the searched folder is definitely not practical.
I need assistance in the following:
1- Is there a way to get the whole hierarchy under the searched folder?
2- Any other suggestion to know if a file is deleted?
After checking with the engineering team, I got some comments:
It is by design that documents/items still have hidden=false when their folder is deleted.
To dump the whole hierarchy of a folder, you may start with the top folder and check subfolders recursively with GET:Folder Contents. Since GET:Folder Contents tells you whether a folder is hidden or not, you can skip such a folder (and, accordingly, the items inside it). By default, hidden items are not returned. Finally, build the folder & items tree.
If you search items with the GET:Folder Search API, you will get items that are themselves un-hidden even though their folder has been deleted, as you are experiencing. So you would have to double-check whether their folders are hidden or not, which is tedious.
So I would suggest you build the tree from the top.
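A minimal sketch of that top-down walk, assuming a token with data:read scope in ACCESS_TOKEN and the usual JSON:API response shape (data entries of type "folders"/"items", attributes.hidden, links.next.href pagination); error handling and the version data under "included" are left out:

import requests

BASE = "https://developer.api.autodesk.com/data/v1"
ACCESS_TOKEN = "..."  # hypothetical token with data:read scope

def folder_contents(project_id, folder_id):
    """Yield every entry of a folder, following JSON:API pagination."""
    url = f"{BASE}/projects/{project_id}/folders/{folder_id}/contents"
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("data", [])
        url = body.get("links", {}).get("next", {}).get("href")  # None when done

def build_tree(project_id, folder_id):
    """Recursively collect non-hidden folders and items under folder_id."""
    tree = {"folders": {}, "items": []}
    for entry in folder_contents(project_id, folder_id):
        if entry.get("attributes", {}).get("hidden"):
            continue  # deleted: skip the folder/item and everything below it
        if entry["type"] == "folders":
            tree["folders"][entry["attributes"]["displayName"]] = build_tree(project_id, entry["id"])
        elif entry["type"] == "items":
            tree["items"].append(entry["attributes"]["displayName"])
    return tree

# Any item returned by the Search endpoint but missing from this tree
# lives under a deleted (hidden) folder.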

Beam/Dataflow ReadAllFromParquet doesn't read anything but my job still succeeds?

I have a Dataflow job which:
Reads a text file from GCS with other filenames in it
Passes the filenames to ReadAllFromParquet to read the .parquet files
Writes to BigQuery
Despite my job 'succeeding' it basically doesn't have an output collection past the ReadAllFromParquet step.
I successfully read the filenames into a list such as: ['gs://my_bucket/my_file1.snappy.parquet', 'gs://my_bucket/my_file2.snappy.parquet', 'gs://my_bucket/my_file3.snappy.parquet']
I am also confirming this list is correct and the GCS paths to the files are correct using a logger on the step before ReadAllFromParquet.
This is what my pipeline looks like (omitting the full code for brevity, but I am confident it normally works, as I have the exact same pipeline for .csv using ReadAllFromText and it works fine):
with beam.Pipeline(options=pipeline_options_batch) as pipeline_2:
    try:
        final_data = (
            pipeline_2
            | 'Create empty PCollection' >> beam.Create([None])
            | 'Get accepted batch file: {}'.format(runtime_options.complete_batch) >> beam.ParDo(OutputValueProviderFn(runtime_options.complete_batch))
            | 'Read all filenames into a list' >> beam.ParDo(FileIterator(runtime_options.files_bucket))
            | 'Read all files' >> beam.io.ReadAllFromParquet(columns=['locationItemId', 'deviceId', 'timestamp'])
            | 'Process all files' >> beam.ParDo(ProcessSch2())
            | 'Transform to rows' >> beam.ParDo(BlisDictSch2())
            | 'Write to BigQuery' >> beam.io.WriteToBigQuery(
                table=runtime_options.comp_table,
                schema=SCHEMA_2,
                project=pipeline_options_batch.view_as(GoogleCloudOptions).project,  # options.display_data()['project']
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,  # create the table if it does not exist
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND  # add to existing rows, partitioning
            )
        )
    except Exception as exception:
        logging.error(exception)
        pass
That's what my job diagram looks like after:
Does somebody have an idea what might be going wrong here and what's the best way to debug?
My ideas currently:
A bucket permissions issue. I noticed the bucket I am reading from is odd, as earlier I couldn't download the files despite being a project Owner; the project Owners only had 'Storage Legacy Bucket Owner'. I added 'Storage Admin' and it then worked fine when manually downloading files with my own account. As per the Dataflow documentation, I have ensured that both the default compute service account and the Dataflow one have 'Storage Admin' on this bucket. However, maybe that's all a red herring, as ultimately if there were a permissions issue I should see it in the log and the job would fail?
ReadAllFromParquet expects the file patterns in a different format? I have shown an example of the list I supply above (in my diagram I can see the input collection correctly shows elements added = 48 for the 48 files in the list). I know this format works for ReadAllFromText, so I assumed the two are equivalent and should work.
=========
EDIT:
Noticed something else potentially consequential. Comparing against my other job, which uses ReadAllFromText and works fine, I noticed a slight mismatch in the naming that is worrying.
This is the name of the output collection for my working job:
And that's the name on my parquet job that doesn't actually read anything:
Note specifically
Read all files/ReadAllFiles/ReadRange.out0
vs
Read all files/Read all files/ReadRange.out0
The first part of the path is the name of my step for both jobs.
But I believe the second part to be the ReadAllFiles class from apache_beam.io.filebasedsource (https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/filebasedsource.py), which both ReadAllFromText and ReadAllFromParquet call.
Seems like a potential bug, but I can't seem to trace it in the source code.
=============
EDIT 2
After some more digging, it seems that ReadAllFromParquet just isn't functional yet. ReadFromParquet calls apache_beam.io.parquetio._ParquetSource, whereas ReadAllFromParquet simply calls apache_beam.io.filebasedsource._ReadRange.
I wonder if there's a way to turn this on if it's an experimental function?
You didn't mention whether you are using the latest Beam SDK; try using SDK 2.16 to test the latest changes.
The docs state that ReadAllFromParquet is an experimental function, as is ReadFromParquet; nonetheless, ReadFromParquet is reported as working in this thread: Apache-Beam: Read parquet files from nested HDFS directories. You might want to try using that function instead.
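For instance, if the .parquet files can be matched by a single glob (the gs://my_bucket/*.snappy.parquet pattern below is hypothetical), a minimal sketch of the ReadFromParquet variant would look like this; the ParDo and WriteToBigQuery steps from your pipeline would follow it unchanged:

import logging

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical pattern covering the same files that were listed one by one above.
PARQUET_PATTERN = 'gs://my_bucket/*.snappy.parquet'

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    rows = (
        pipeline
        # ReadFromParquet takes a file pattern directly, so the upstream
        # "read all filenames into a list" step is no longer needed.
        | 'Read parquet files' >> beam.io.ReadFromParquet(
            PARQUET_PATTERN,
            columns=['locationItemId', 'deviceId', 'timestamp'])
        # Each element is a dict keyed by column name; logging a few records
        # confirms that the read step actually emits something.
        | 'Log records' >> beam.Map(lambda record: logging.info(record) or record)
    )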

Set screenshot path from the default project location to a different folder location

I have a suite with 50 test cases. When I execute the suite, all the failure screenshots end up in the project's folder. I want to store those screenshots in a different directory, named after the test case. I want this to be a one-time setup rather than doing it explicitly for every test case.
There are quite a few ways to change the default screenshot directory.
One way is to set the screenshot_root_directory argument when importing Selenium2Library. See the importing section of Selenium2Library's documentation, and importing libraries in the user guide.
Another way is to use the Set Screenshot Directory keyword, which does pretty much the same thing as specifying a path when importing the library. However, with this keyword you can change the path to a new one whenever you like; for example, you could give each test case its own screenshot directory. Based on your question, this may be the best solution.
And finally, you may also post-process the screenshots using an external tool, or even a listener, that moves all screenshots to another directory. The previously mentioned solutions are in most cases much better, but you may still want this in some cases, say, when the directory the screenshots should end up in is only created after the tests have finished executing.
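A minimal sketch of such a listener (Robot Framework listener API v3), assuming the default selenium-screenshot-*.png naming and accepting that moving the files afterwards breaks the links in log.html:

import glob
import os
import shutil

from robot.libraries.BuiltIn import BuiltIn

class ScreenshotMover:
    """Move failure screenshots into a folder named after each failed test."""
    ROBOT_LISTENER_API_VERSION = 3

    def end_test(self, data, result):
        if result.passed:
            return
        outdir = BuiltIn().get_variable_value('${OUTPUTDIR}')
        target = os.path.join(outdir, result.name)
        os.makedirs(target, exist_ok=True)
        # Default SeleniumLibrary/Selenium2Library screenshot naming.
        for shot in glob.glob(os.path.join(outdir, 'selenium-screenshot-*.png')):
            shutil.move(shot, target)

# Usage (hypothetical path): robot --listener ScreenshotMover.py tests/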
I suggest you do the following:
For the new directory, put this immediately after the place where you open the browser, for example:
Open Browser    ${URL}    chrome
Set Screenshot Directory    ${OUTPUT FILE}${/}..${/}${TEST_NAME}${/}
To replace the default screenshot name with your own, create the following keyword:
sc
    Capture Page Screenshot    filename=${SUITE_NAME}-{index}.png
Then register it to run on failure in the test case's Setup:
Register Keyword To Run On Failure    sc
In the above example, a new folder is created with the test case name, and on failure the screenshot is captured with the suite name (instead of 'selenium-screenshot-1.png').

How to undo moving files in FileZilla

I have a problem with FileZilla. I accidentally moved a folder to the wrong directory, so now my website shows errors.
How can I solve it? Please help me.
Many thanks in advance for your answer.
You cannot undo, but you should understand why that happened and how to prevent that from happening again in the future.
This happened because FileZilla allows drag-and-drop moves of both folders (directories) and files, which is very dangerous on production servers.
There has actually been a feature request open for 11 years now; please add your vote to the list to get this done: https://trac.filezilla-project.org/ticket/2191
In the meantime, please consider using other software that lets the user set this behavior as an option:
WinSCP: http://winscp.net/eng/docs/screenshots
WS-FTP Pro: https://trac.filezilla-project.org/attachment/ticket/2191/ws_ftp-professional-options.gif
EDIT: The FileZilla team responded (sort of) to the feature request, and you can block drag and drop in the XML config file. It's better than nothing.
You cannot undo FTP moves. The only way to rectify the problem is to manually move the folder back to its original location.
I suggest you be more careful next time.
If you don't know where the folder belongs, download the x-cart script package and check where the directory belongs.
Sorry for the late answer. I got the log entries for the moved files from FileZilla:
Status: Renaming '/var/www/html/brb/abc.js' to '/var/www/html/brb/node_modules/abc.js'
Status: /var/www/html/brb/abc.js -> /var/www/html/brb/node_modules/abc.js
Status: Renaming '/var/www/html/brb/xyz.html' to '/var/www/html/brb/node_modules/xyz.html'
Status: /var/www/html/brb/xyz.html -> /var/www/html/brb/node_modules/xyz.html
I wrote a script in JS to build the reverse commands:
let x = ['/var/www/html/brb/abc.js -> /var/www/html/brb/node_modules/abc.js',
         '/var/www/html/brb/xyz.html -> /var/www/html/brb/node_modules/xyz.html'];
let cmd = [];
x.forEach(p => {
    let path = p.split(' -> ');  // [original path, new path]
    cmd.push(`mv ${path[1]} ${path[0]}`);
});
console.log(cmd);
Output:
['mv /var/www/html/brb/node_modules/abc.js /var/www/html/brb/abc.js',
 'mv /var/www/html/brb/node_modules/xyz.html /var/www/html/brb/xyz.html']
Use any editor (VS Code etc.) to remove the string quotes, then execute the commands in the server terminal:
mv /var/www/html/brb/node_modules/abc.js /var/www/html/brb/abc.js
mv /var/www/html/brb/node_modules/xyz.html /var/www/html/brb/xyz.html
A simple answer is: copy the folder back to the desired location, then delete it from the place you mistakenly moved it to.
Now, what if you overwrote a file?
I had just edited a file locally, then downloaded the file from my server over it, and all my local updates were gone. There seems to be no real solution to this, but there is one possibility, and I was lucky enough that it applied in my case.
If you were working on that local file, it is most probably still open in your browser. Do not refresh it. Copy the content back piece by piece and update the file again. You can also open the developer tools of both the old and the new page, compare them line by line, and do the job.
I had the same issue and resolved it manually.
The log panel was helpful here; it is the large panel below the connection form at the top.
From that log panel, I was able to figure out all the files that were moved, with their current and previous locations.
I copied all those log lines, pasted them into Notepad, then manually selected all the files and moved them all at once back to their original directory.
Screenshot: The log panel showing last actions