Remove prefix for upload image in Vtiger - vtiger

When uploading an image, vTiger adds a prefix to the filename.
Before upload: IMG_NAME.png
After upload: 26540_IMG_NAME.png
How can I remove the '26540_' prefix?

It is not recommended to change the standard way files are stored, because the prefix ('26540_' in your case) is a unique identifier added before the filename. With it, vTiger can treat two uploads of the same filename as different files; without it, they would collide.
But if you still don't want the prefix added, customize the code as follows:
Open \data\CRMEntity.php and search for the function uploadAndSaveFile().
Comment out the line:
$upload_status = move_uploaded_file($filetmp_name, $upload_file_path . $current_id . "_" . $binFile);
and add this instead ($current_id removed):
$upload_status = move_uploaded_file($filetmp_name, $upload_file_path . $binFile);
Save the script and test. Cheers!
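For reference, the stock naming scheme is simply the record id joined to the original filename with an underscore, which is what guarantees uniqueness; a one-line sketch:

```python
def stored_name(record_id, filename):
    # vTiger-style stored name: '<record id>_<original filename>'
    return '{0}_{1}'.format(record_id, filename)

print(stored_name(26540, 'IMG_NAME.png'))  # 26540_IMG_NAME.png
```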

Related

Is it possible to specify name of .rmd file when knitting/rendering in r-markdown?

After knitting to a word doc, I would like to specify the name of the .rmd file when it is saved. For instance I have set the name of the word doc to include date and time so that each word doc version is saved as a different file:
In the YAML header (this is a header option, not an R chunk):
knit: (function(inputFile, encoding) { rmarkdown::render(inputFile, encoding = encoding, output_file = paste0(substr(inputFile, 1, nchar(inputFile) - 4), '_', lubridate::today(), '_', paste0(lubridate::hour(lubridate::now()), lubridate::minute(lubridate::now())), '.docx')) })
So in my directory I have the following:
FileName_2019-05-27_1741.docx
FileName.rmd
FileName_2019-05-27_1329.docx
FileName_2019-05-26_1420.docx
I'd like to have the .rmd files automatically saved the same way with date and time in case I want to go back and look at an earlier version of my .rmd file.
The code below worked for me, thanks to an earlier tip to copy/rename the file (I had been looking for ways to save the file, as opposed to copying it):
file.copy(from = "FileName.rmd",
          to = paste0('FileName_', lubridate::today(), '_',
                      paste0(lubridate::hour(lubridate::now()),
                             lubridate::minute(lubridate::now())),
                      '.rmd'))
I entered this in a new code chunk, as I couldn't figure out how to do it within the header (which is where I had the code to name the Word file). It does exactly what I need now!
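The same naming scheme, sketched in Python for comparison. Note that strftime('%H%M') zero-pads the minutes, which the paste0(hour(now()), minute(now())) approach above does not: in the R version, 17:05 becomes '175' rather than '1705'.

```python
import datetime

def versioned_name(base, stamp, ext='.rmd'):
    # builds names like FileName_2019-05-27_1741.rmd
    return '{0}_{1}{2}'.format(base, stamp.strftime('%Y-%m-%d_%H%M'), ext)

print(versioned_name('FileName', datetime.datetime(2019, 5, 27, 17, 41)))
# FileName_2019-05-27_1741.rmd
```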

get list of files in a sharepoint directory using python

I have a URL for a SharePoint directory (intranet) and need an API to return the list of files in that directory, given the URL. How can I do that using Python?
Posting in case anyone else comes across this issue of getting files from a SharePoint folder given just the folder path.
This link really helped me: https://github.com/vgrem/Office365-REST-Python-Client/issues/98. I found plenty of information about doing this over raw HTTP but not in Python, so hopefully this helps someone else who needs a more explicit Python reference.
I am assuming you are already set up with a client_id and client_secret for the SharePoint API. If not, you can use this for reference: https://learn.microsoft.com/en-us/sharepoint/dev/solution-guidance/security-apponly-azureacs
I basically wanted to grab the names/relative URLs of the files within a folder, then get the most recent file in the folder and read it into a dataframe.
I'm sure this isn't the most "Pythonic" way to do it, but it works, which is good enough for me.
!pip install Office365-REST-Python-Client

from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
import io
import datetime
import pandas as pd

sp_site = 'https://<org>.sharepoint.com/sites/<my_site>/'
relative_url = "/sites/<my_site>/Shared Documents/<folder>/<sub_folder>"

client_credentials = ClientCredential(credentials['client_id'], credentials['client_secret'])
ctx = ClientContext(sp_site).with_credentials(client_credentials)

libraryRoot = ctx.web.get_folder_by_server_relative_path(relative_url)
ctx.load(libraryRoot)
ctx.execute_query()

# if you want to get the folders within <sub_folder>
folders = libraryRoot.folders
ctx.load(folders)
ctx.execute_query()
for myfolder in folders:
    print("Folder name: {0}".format(myfolder.properties["ServerRelativeUrl"]))

# if you want to get the files in the folder
files = libraryRoot.files
ctx.load(files)
ctx.execute_query()

# collect the important file properties for each file in the folder
# (rows are gathered in a list first; DataFrame.append was removed in pandas 2.0)
rows = []
for myfile in files:
    # parse TimeLastModified into a real datetime so it sorts correctly
    mod_time = datetime.datetime.strptime(myfile.properties['TimeLastModified'], '%Y-%m-%dT%H:%M:%SZ')
    rows.append({'Name': myfile.properties['Name'],
                 'ServerRelativeUrl': myfile.properties['ServerRelativeUrl'],
                 'TimeLastModified': myfile.properties['TimeLastModified'],
                 'ModTime': mod_time})
    # print statements if needed
    # print("File name: {0}".format(myfile.properties["Name"]))
    # print("File link: {0}".format(myfile.properties["ServerRelativeUrl"]))
    # print("File last modified: {0}".format(myfile.properties["TimeLastModified"]))
df_files = pd.DataFrame(rows, columns=['Name', 'ServerRelativeUrl', 'TimeLastModified', 'ModTime'])

# get the index of the most recently modified file and its ServerRelativeUrl
newest_index = df_files['ModTime'].idxmax()
newest_file_url = df_files.loc[newest_index, 'ServerRelativeUrl']

# get the Excel file identified by newest_file_url
response = File.open_binary(ctx, newest_file_url)

# save the data to a BytesIO stream
bytes_file_obj = io.BytesIO()
bytes_file_obj.write(response.content)
bytes_file_obj.seek(0)  # rewind to the start of the stream

# load the Excel file from the BytesIO stream
df = pd.read_excel(bytes_file_obj, sheet_name='Sheet1', header=0)
Here is another helpful link listing the file properties you can view: https://learn.microsoft.com/en-us/previous-versions/office/developer/sharepoint-rest-reference/dn450841(v=office.15). Scroll down to the file properties section.
Hopefully this is helpful to someone. Again, I am not a pro, and most of the time I need things to be a bit more explicit and written out. Maybe others feel that way too.
You need to do two things here:
1. Get a list of files (which can be directories or simple files) in the directory of your interest.
2. Loop over each item in this list of files and check whether the item is a file or a directory. For each directory, repeat steps 1 and 2.
You can find more documentation at https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-folders-and-files-with-rest#working-with-files-attached-to-list-items-by-using-rest
def getFilesList(directoryName):
    ...
    return filesList

# This will tell you if the item is a file or a directory.
def isDirectory(item):
    ...
    return True  # or False
Hope this helps.
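The two steps can be sketched with plain data structures; here a nested dict stands in for the SharePoint folder tree (real code would use folder.folders and folder.files from Office365-REST-Python-Client in place of the dict lookups):

```python
def list_all_files(tree, prefix=''):
    # 'tree' is a nested dict standing in for a SharePoint folder:
    # dict values are subfolders, None values are plain files.
    paths = []
    for name, node in tree.items():
        path = prefix + '/' + name
        if isinstance(node, dict):   # a directory: apply steps 1 and 2 again
            paths.extend(list_all_files(node, path))
        else:                        # a simple file
            paths.append(path)
    return paths

tree = {'Shared Documents': {'report.xlsx': None, 'archive': {'old.xlsx': None}}}
print(sorted(list_all_files(tree)))
# ['/Shared Documents/archive/old.xlsx', '/Shared Documents/report.xlsx']
```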
I have a url for sharepoint directory
Assuming you are asking about a library, you can use SharePoint's REST API and make a web service call to:
https://yourServer/sites/yourSite/_api/web/lists/getbytitle('Documents')/items?$select=Title
This will return a list of documents at: https://yourServer/sites/yourSite/Documents
See: https://msdn.microsoft.com/en-us/library/office/dn531433.aspx
You will of course need the appropriate permissions / credentials to access that library.
You cannot use "server name/sites/Folder name/Subfolder name/_api/web/lists/getbytitle('Documents')/items?$select=Title" as the URL in the SharePoint REST API.
The URL structure should be as below, where WebSiteURL is the URL of the site/subsite containing the document library from which you are trying to get files, and Documents is the display name of the document library:
WebSiteURL/_api/web/lists/getbytitle('Documents')/items?$select=Title
If you want to list metadata field values, add the field names, separated by commas, to $select.
Quick tip: if you are not sure about the REST API URL formation, try pasting the URL into the Chrome browser (you must be logged in to the SharePoint site with appropriate permissions) and check whether you get a proper result as XML. If you do, put that URL into your code and run it. This way you save the time of repeatedly running your Python code.
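The URL formation above is plain string building, so it can be sketched without any SharePoint access (the site URL and library title here are placeholders, not real endpoints):

```python
def rest_files_url(site_url, library_title, fields=('Title',)):
    # Build the SharePoint REST endpoint: WebSiteURL/_api/web/lists/
    # getbytitle('<Title>')/items?$select=<fields>
    return "{0}/_api/web/lists/getbytitle('{1}')/items?$select={2}".format(
        site_url.rstrip('/'), library_title, ','.join(fields))

print(rest_files_url('https://yourServer/sites/yourSite/', 'Documents'))
# https://yourServer/sites/yourSite/_api/web/lists/getbytitle('Documents')/items?$select=Title
```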

how to upload files from different subfolders into different subfolders with Django

I have many files under a folder (for example, 'datasets'), and these files are split across many different subdirectories.
I want to let the user specify the folder 'datasets' in a form and upload all the files in Django. After the upload, the Django view function will extract some pieces of information from each file and save them into the database. How can I do this?
The following is the structure of my files to be uploaded:
datasets
- subfolder 1
  - file1
  - file2
  - subfolder 1a
    - file3
- subfolder 2
  - file4
  - file5
FileField.upload_to is defined as follows:
This attribute provides a way of setting the upload directory and file name, and can be set in two ways. In both cases, the value is passed to the Storage.save() method. ... upload_to may also be a callable, such as a function. This will be called to obtain the upload path, including the filename. This callable must accept two arguments and return a Unix-style path (with forward slashes) to be passed along to the storage system. The two arguments are:
So what you need to do is create a function that checks the filename or its content and decides where the file should be saved:
def get_upload_path(instance, filename):
    # the condition is just an example; decide based on the filename
    # (or on fields of 'instance') where the file should go
    if filename.endswith('.csv'):
        return 'path1/' + filename
    return 'path2/' + filename
And your model will change to
image = models.FileField(upload_to=get_upload_path)
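Since the question is about a whole folder tree, a callable in the same spirit can preserve the subfolder structure when the client supplies relative paths (for example via a directory upload). This is a hypothetical sketch: the 'datasets' root and the traversal check are assumptions for illustration, not Django requirements.

```python
import posixpath

def get_upload_path(instance, filename):
    # Hypothetical: 'filename' may carry a relative path such as
    # 'subfolder 1/file1' when the client uploads a directory; keep
    # that structure under an assumed 'datasets/' root.
    safe = posixpath.normpath(filename).lstrip('/')
    if safe.startswith('..'):
        raise ValueError('invalid upload path: %s' % filename)
    return posixpath.join('datasets', safe)

print(get_upload_path(None, 'subfolder 1/file1'))  # datasets/subfolder 1/file1
```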

Django rest Framework, change filename of ImageField

I have an API endpoint with Django Rest Framework to upload an image.
class MyImageSerializer(serializers.ModelSerializer):
image = serializers.ImageField(source='image')
I can upload images, but they are saved with the filename that is sent from the client, which can lead to collisions. I would instead like to upload the file to my CDN with a timestamped filename.
Generating the filename is not the problem, just saving the image with it.
Does anyone know how to do that?
Thanks.
If your image is of type ImageField from Django, then you don't really have to do anything, not even declare it in your serializer like you did. It's enough to add it to the fields attribute, and Django will handle collisions by appending an _index to each new filename that would otherwise collide: if you upload a file named 'my_pic.jpg' 5 times, you will end up with 'my_pic.jpg', 'my_pic_1.jpg', 'my_pic_2.jpg', 'my_pic_3.jpg', 'my_pic_4.jpg' on your server.
Now, this is done by Django's FileSystemStorage implementation (see here), but if you want to append a timestamp to your filename instead, all you have to do is write a storage class that overrides the get_available_name() method. Example:
import datetime
import os

from django.core.files.storage import FileSystemStorage

class MyFileSystemStorage(FileSystemStorage):
    def get_available_name(self, name, max_length=None):
        # insert a timestamp before the extension, so 'pic.jpg'
        # becomes e.g. 'pic_2014-08-20-12-00-00.jpg'
        root, ext = os.path.splitext(name)
        stamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
        return '{0}_{1}{2}'.format(root, stamp, ext)
And the image field in your model:
image = models.ImageField(upload_to='your upload dir', storage=MyFileSystemStorage())
Important update
As of August 20, 2014, this is no longer an issue, since Django fixed a vulnerability related to this behaviour (thanks @mlissner for pointing it out). The important excerpt:
We've remedied the issue by changing the algorithm for generating file names if a file with the uploaded name already exists. Storage.get_available_name() now appends an underscore plus a random 7 character alphanumeric string (e.g. "_x3a1gho"), rather than iterating through an underscore followed by a number (e.g. "_1", "_2", etc.).
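The behaviour quoted above can be approximated in a few lines of plain Python (this mimics the described scheme, it is not Django's actual implementation):

```python
import os
import random
import string

def add_random_suffix(name, length=7):
    # append '_' plus a random 7-character alphanumeric string
    # before the extension, as in 'my_pic_x3a1gho.jpg'
    root, ext = os.path.splitext(name)
    suffix = ''.join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(length))
    return '{0}_{1}{2}'.format(root, suffix, ext)

print(add_random_suffix('my_pic.jpg'))  # e.g. my_pic_x3a1gho.jpg
```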

exclude a folder and match all .html pattern files in a root folder using regex

I am doing a migration from HTML to Drupal, using the Migrate module.
In our custom migration script I need to match all .html files in every folder except the images folder, and pass the regex to $list_files = new MigrateListFiles([], [], $regex).
Below is the format of the HTML files:
/magazines/sample.html
/test/index.html
/test/format_ss1.html
/test/folder/newstyle_1.html
/images/two.html
I need to get only the first two .html files, i.e. we are excluding files that end with '_[0-9]' or '_ss[0-9]', and .html files in the images folder.
I have successfully excluded files 3 and 4, but I am not able to exclude the .html files in the images folder.
$regex = '/[a-zA-Z0-9\-][^_ss\d][^_\d]+\.html/'; // this handles files 3 and 4
But I also need to exclude the images folder. I have tried:
$regex = '/[^images\/][a-zA-Z0-9\-][^_ss\d][^_\d]+\.html/'; // not working
The following works in a plain PHP script:
$regex = '~^(?!/images/)[a-zA-Z0-9/-]+(?!_ss\d|\d)\.html$~'; // works in a PHP script
Can someone help me out with this?
Try
/((?!images)[0-9a-zA-Z])+/[^_]*[^\d]+\.html
Matches:
/magazines/sample.html
/test/index.html
/test/folder/newstyle.html
/test/format_ss.html
Does not match:
/test/format_ss1.html
/test/folder/newstyle_1.html
/images/two.html
/images/1.html
/test/folder/newstyle1.html
/test/folder/newstyle_12.html
Is this acceptable?
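For a quick sanity check outside Drupal, the question's working PHP pattern can be exercised with Python's re module; the constructs it uses (anchors, character classes, negative lookaheads) behave the same there:

```python
import re

# the working PHP pattern from the question, reused verbatim
pattern = re.compile(r'^(?!/images/)[a-zA-Z0-9/-]+(?!_ss\d|\d)\.html$')

paths = [
    '/magazines/sample.html',
    '/test/index.html',
    '/test/format_ss1.html',
    '/test/folder/newstyle_1.html',
    '/images/two.html',
]
print([p for p in paths if pattern.match(p)])
# ['/magazines/sample.html', '/test/index.html']
```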
This is a Drupal/Migrate-specific issue: the regex only applies to the filename (not the directory), as it eventually gets passed to https://api.drupal.org/api/drupal/includes%21file.inc/function/file_scan_directory/7
file_scan_directory($dir, $mask, $options = array(), $depth = 0)
$mask: The preg_match() regular expression of the files to find.
I think the only way to exclude certain directories is to return FALSE from the prepareRow() function when the row has a path you don't require.
function prepareRow($row)
The prepareRow() method is called by the source class's next() method, after loading the data row. The argument $row is a stdClass object containing the raw data as provided by the source. There are two primary reasons to implement prepareRow():
1. To modify the data row before it passes through any further methods and handlers: for example, fetching related data, splitting out source fields, or combining or creating new source fields based on some logic.
2. To conditionally skip a row (by returning FALSE).
https://www.drupal.org/node/1132582