Does anyone know a way to publish .Rmd files with the Academic theme on Hugo? Normally .md and HTML files are easily read by the Academic theme, but .Rmd files are nowhere to be found in the posts. I can't find a way to make my website react to them.
This is a portion of my config.toml, which I thought might be relevant:
paginate = 10 # Number of items per page in paginated lists.
enableEmoji = true
footnotereturnlinkcontents = "<sup>^</sup>"
ignoreFiles = [".ipynb$", ".ipynb_checkpoints$", "_files$", "_cache$"]
[outputs]
home = [ "HTML", "RSS", "JSON", "WebAppManifest" ]
section = [ "HTML", "RSS" ]
[mediaTypes."application/manifest+json"]
suffixes = ["webmanifest"]
[outputFormats.WebAppManifest]
mediaType = "application/manifest+json"
rel = "manifest"
Thank you very much for your attention.
A list of supported file formats in Hugo can be found here. As you can see, .Rmd is not among them: .Rmd files have to be converted to .html (or .md) before Hugo can understand and render them.
If you use blogdown and run the command blogdown::serve_site() instead of the usual hugo server CLI command, this conversion should happen automatically, if I am not mistaken.
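If you keep the .Rmd sources in your site repo, you can also tell Hugo itself to ignore them (blogdown generates the .html files for you). A sketch, extending the ignoreFiles line already shown in the config above, along the lines of what the blogdown setup docs recommend:

```toml
ignoreFiles = [".ipynb$", ".ipynb_checkpoints$", "_files$", "_cache$", "\\.Rmd$", "\\.Rmarkdown$"]
```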
Best
I'm trying to add a new skin to django-ckeditor, but it doesn't work and shows a blank screen in django-admin.
I have installed the kama skin and extracted it to static\ckeditor\ckeditor\skins\kama, but it doesn't seem to work.
Below are my CKEditor config options:
CKEDITOR_CONFIGS = {
    'default': {
        'toolbar': None,
        'toolbarCanCollapse': True,
        'skin': 'kama',
        'uiColor': '#b7d6ec',
    },
}
Can someone please help resolve this issue? I'm very new to django-ckeditor and its config, and I'm building my first website.
Thanks in advance!
I know it's a bit late, but I managed to get my Office2013 skin working by mimicking CKEditor's directory structure.
django-admin collectstatic -c
This did the trick for me. It replaced an old bundle-aaaaaaaa.js file with another file that was picked up when I reloaded my page.
(And then I discovered that my problem was that I had moono-lisa skin inside my static/djangocms_ckeditor/ckeditor/skins folder, but moono in my settings)
I have a URL for a SharePoint directory (intranet) and need an API to return the list of files in that directory, given the URL. How can I do that using Python?
Posting in case anyone else comes across this issue of getting files from a SharePoint folder given just the folder path.
This link really helped me do it: https://github.com/vgrem/Office365-REST-Python-Client/issues/98. I found plenty of info about doing this over raw HTTP, but not much in Python, so hopefully this serves as a Python reference for someone else.
I am assuming you are all set up with a client_id and client_secret for the SharePoint API. If not, you can use this for reference: https://learn.microsoft.com/en-us/sharepoint/dev/solution-guidance/security-apponly-azureacs
I basically wanted to grab the names/relative URLs of the files within a folder, then get the most recent file in the folder and load it into a dataframe.
I'm sure this isn't the "Pythonic" way to do it, but it works, which is good enough for me.
!pip install Office365-REST-Python-Client
from office365.runtime.auth.client_credential import ClientCredential
from office365.runtime.client_request_exception import ClientRequestException
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
import io
import datetime
import pandas as pd
sp_site = 'https://<org>.sharepoint.com/sites/<my_site>/'
relative_url = "/sites/<my_site>/Shared Documents/<folder>/<sub_folder>"
# credentials is assumed to be a dict holding your app's client_id and client_secret
client_credentials = ClientCredential(credentials['client_id'], credentials['client_secret'])
ctx = ClientContext(sp_site).with_credentials(client_credentials)
libraryRoot = ctx.web.get_folder_by_server_relative_path(relative_url)
ctx.load(libraryRoot)
ctx.execute_query()
#if you want to get the folders within <sub_folder>
folders = libraryRoot.folders
ctx.load(folders)
ctx.execute_query()
for myfolder in folders:
    print("Folder name: {0}".format(myfolder.properties["ServerRelativeUrl"]))
#if you want to get the files in the folder
files = libraryRoot.files
ctx.load(files)
ctx.execute_query()
#collect the important file properties for me for each file in the folder
rows = []
for myfile in files:
    #use mod_time to get a better date format
    mod_time = datetime.datetime.strptime(myfile.properties['TimeLastModified'], '%Y-%m-%dT%H:%M:%SZ')
    rows.append({'Name': myfile.properties['Name'],
                 'ServerRelativeUrl': myfile.properties['ServerRelativeUrl'],
                 'TimeLastModified': myfile.properties['TimeLastModified'],
                 'ModTime': mod_time})
#DataFrame.append was removed in pandas 2.0, so build the frame from a list of dicts instead
df_files = pd.DataFrame(rows, columns=['Name', 'ServerRelativeUrl', 'TimeLastModified', 'ModTime'])
#print statements if needed
# print("File name: {0}".format(myfile.properties["Name"]))
# print("File link: {0}".format(myfile.properties["ServerRelativeUrl"]))
# print("File last modified: {0}".format(myfile.properties["TimeLastModified"]))
#get index of the most recently modified file and the ServerRelativeUrl associated with that index
newest_index = df_files['ModTime'].idxmax()
newest_file_url = df_files.loc[newest_index, 'ServerRelativeUrl']
# Get Excel File by newest_file_url identified above
response = File.open_binary(ctx, newest_file_url)
# save data to BytesIO stream
bytes_file_obj = io.BytesIO()
bytes_file_obj.write(response.content)
bytes_file_obj.seek(0) # set file object to start
# load Excel file from BytesIO stream
df = pd.read_excel(bytes_file_obj, sheet_name='Sheet1', header=0)
Here is another helpful link listing the file properties you can view: https://learn.microsoft.com/en-us/previous-versions/office/developer/sharepoint-rest-reference/dn450841(v=office.15). Scroll down to the file properties section.
Hopefully this is helpful to someone. Again, I am not a pro and most of the time I need things to be a bit more explicit and written out. Maybe others feel that way too.
You need to do two things here:
1. Get a list of files (which can be directories or simple files) in the directory of your interest.
2. Loop over each item in this list of files and check whether the item is a file or a directory. For each directory, repeat steps 1 and 2.
You can find more documentation at https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-folders-and-files-with-rest#working-with-files-attached-to-list-items-by-using-rest
def getFilesList(directoryName):
    ...
    return filesList

# This will tell you if the item is a file or a directory.
def isDirectory(item):
    ...
    return True  # or False
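The two steps above can be sketched as a generic recursion. In this sketch the SharePoint calls are abstracted behind two callables (in practice they would wrap the get_folder_by_server_relative_path / .folders / .files calls shown in the earlier answer); all names here are illustrative:

```python
def walk_folder(folder_url, list_children, is_directory):
    """Recursively collect file URLs under folder_url.

    list_children(url) -> iterable of child item URLs (step 1)
    is_directory(url)  -> True if the item is a folder

    In a real script both callables would wrap SharePoint REST calls.
    """
    files = []
    for item in list_children(folder_url):
        if is_directory(item):
            # step 2: recurse into sub-directories
            files.extend(walk_folder(item, list_children, is_directory))
        else:
            files.append(item)
    return files
```

You can sanity-check the traversal logic locally with an in-memory fake tree before wiring in the actual REST calls.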
Hope this helps.
I have a url for sharepoint directory
Assuming you are asking about a library, you can use SharePoint's REST API and make a web service call to:
https://yourServer/sites/yourSite/_api/web/lists/getbytitle('Documents')/items?$select=Title
This will return a list of documents at: https://yourServer/sites/yourSite/Documents
See: https://msdn.microsoft.com/en-us/library/office/dn531433.aspx
You will of course need the appropriate permissions / credentials to access that library.
You cannot use "server name/sites/Folder name/Subfolder name/_api/web/lists/getbytitle('Documents')/items?$select=Title" as the URL in the SharePoint REST API.
The URL structure should be as below, where WebSiteURL is the URL of the site/subsite containing the document library from which you are trying to get files, and Documents is the display name of the document library:
WebSiteURL/_api/web/lists/getbytitle('Documents')/items?$select=Title
If you want to list metadata field values, add the field names, separated by commas, to $select.
Quick tip: if you are not sure about the REST API URL formation, try pasting the URL into a Chrome browser (you must be logged in to the SharePoint site with appropriate permissions) and check whether you get a proper XML result. If you do, put that URL into your code and run it; this way you save the time of repeatedly running your Python code just to test the URL.
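To make the URL formation concrete, here is a small helper that builds the endpoint described above (the function name and signature are illustrative, not part of any library):

```python
def sharepoint_items_url(web_site_url, library_title, select_fields=("Title",)):
    """Build the REST endpoint for list items, following the structure:
    WebSiteURL/_api/web/lists/getbytitle('Documents')/items?$select=Title
    """
    base = web_site_url.rstrip("/")
    fields = ",".join(select_fields)
    return f"{base}/_api/web/lists/getbytitle('{library_title}')/items?$select={fields}"
```

For example, sharepoint_items_url("https://yourServer/sites/yourSite", "Documents", ("Title", "Modified")) yields a URL ending in $select=Title,Modified.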
I use DjangoCMS (3.2.3) on Django 1.8.12 along with djangocms-blog (0.7). I would like to link blog posts
- in other posts
- on other DjangoCMS pages
The app-hooked blog page is available with CMS' Link plugin. However, I do not see how I could link individual posts.
The only workaround I have found, which strikes me as dirty, is using the apphook URL and hard-coding the post's slug directly after it. It also only works if the post URLs are in "slug-only" mode, i.e. without categories etc.
Thanks for any thoughts!
Currently there is no generic way to link objects handled by apphooks in the same way as django CMS pages. Providing a solution in a specific application is not trivial, as you basically need a custom widget to do this.
I just came across djangocms-styledlink. Besides its styling capabilities, this link package also allows configuring links to other apps. For djangocms-blog I added the following lines to the setup:
DJANGOCMS_STYLEDLINK_MODELS = [
    {
        'type': _('CMS Pages'),
        'class_path': 'cms.models.Page',
        'manager_method': 'public',
        'filter': { 'publisher_is_draft': False },
    },
    {
        'type': _('Blog pages'),
        'class_path': 'djangocms_blog.models.Post',
        'filter': { 'publish': True },
    }
]
It seems that currently djangocms-styledlink only works with Python 2.
I'm working on a project to scrape every interview found here into an HTML-ready document, to be later dumped into a DB that will automatically update our website with the latest content. You can see an example of my current scraping script, which I asked a question about the other day: WWW::Mechanize Extraction Help - PERL
The problem I can't seem to wrap my head around is whether what I'm trying to accomplish is even possible. Because I don't want to have to guess when a new interview is published, my hope is to scrape the page that has a directory listing of all the interviews and have my program automatically fetch the content at any new URL (new interview).
Again, the site in question is here (scroll down to see the listing of interviews): http://millercenter.org/president/clinton/oralhistory
My initial thought was to put a regex of .\ at the end of the link above, in hopes that it would automatically search any links found under that page. I can't seem to get this to work using WWW::Mechanize, however. I will post what I have below; if anyone has any guidance or experience with this, your feedback would be greatly appreciated. I'll also summarize my tasks below the code so that you have a concise understanding of what we hope to accomplish.
Thanks to any and all that can help!
#!/usr/bin/perl -w
use strict;
use WWW::Mechanize;
use WWW::Mechanize::Link;
use WWW::Mechanize::TreeBuilder;
my $mech = WWW::Mechanize->new();
WWW::Mechanize::TreeBuilder->meta->apply($mech);
$mech->get("http://millercenter.org/president/clinton/oralhistory/\.");
# find all <dl> tags
my @list = $mech->find('dl');
foreach ( @list ) {
    print $_->as_HTML();
}

# # find all links
# my @links = $mech->links();
# foreach my $link (@links) {
#     print $link->url, "\n";
# }
To summarize what I'm hoping is possible:
Extract the content of every interview found here in an HTML ready document like I did here: WWW::Mechanize Extraction Help - PERL. This would require the 'get' action to be able to traverse the pages listed under the /oralhistory/ directory, which can perhaps be solved using a regex?
Possibly extract the respondent name and position fields on the directory page to be populated in a title field (this isn't that big of a deal if it can't be done)
No, you can't use wildcards in URLs. :-(
You'll have to parse the listing page yourself, and then get and process the pages in a loop.
Extracting specific fields from a page's contents will be a straightforward task with WWW::Mechanize...
UPDATE: answering OP comment:
Try this logic:
use strict;
use warnings;
use WWW::Mechanize;
use LWP::Simple;
use File::Basename;
my $mech = WWW::Mechanize->new( autocheck => 1 );
$mech->get("http://millercenter.org/president/clinton/oralhistory");

# find all links that point to individual interview pages
my @links = $mech->find_all_links( url_regex => qr{/oralhistory/[\w-]+$} );
foreach my $link (@links) {
    my $url       = $link->url_abs();
    my $localfile = basename($url);
    my $localpath = "./$localfile";
    print "$localfile\n";
    getstore($url, $localpath);
}
My answer is focused on the approach of how to do this. I'm not providing code.
There are no IDs in the links, but the names of the interview pages seem to be fine to use. You need to parse them out and build a lookup table.
Basically you start by building a parser that fetches all the links that look like an interview. That is fairly simple with WWW::Mechanize. The page URL is:
http://millercenter.org/president/clinton/oralhistory
All the interviews follow this schema:
http://millercenter.org/president/clinton/oralhistory/george-mitchell
So you can find all links in that page that start with http://millercenter.org/president/clinton/oralhistory/. Then you make them unique, because there is this teaser box slider thing that showcases some of them, and it has a read more link to the page. Use a hash to do that like this:
my %seen;
foreach my $url (@urls) {
    $mech->get($url) unless $seen{$url};
    $seen{$url}++;
}
Then you fetch the page, do your stuff, and write it to your database. Use the URL or the interview-name part of the URL (e.g. george-mitchell) as the primary key. If there are other presidents and you want those as well, adapt the key in case the same name shows up for several presidents.
Then you go back and add a cache lookup into your code. You grab all the IDs from the DB before you start fetching the page, and put those in a hash.
# prepare query and stuff...
my %cache;
while (my $res = $sth->fetchrow_hashref) {
    $cache{$res->{id}}++;
}
# later...
foreach my $url (@urls) {
    next if $cache{$url}; # or grab the ID out of the url
    next if $seen{$url};
    $mech->get($url);
    $seen{$url}++;
}
You also need to filter out the links that are not interviews. One of those would be http://millercenter.org/president/clinton/oralhistory/clinton-description, which is the read more of the first paragraph on the page.
Does anyone know how I can test image upload using WebTest? My current code is:
form['avatar'] =('avatar', os.path.join(settings.PROJECT_PATH, 'static', 'img', 'avatar.png'))
res = form.submit()
In the response I get the following error "Upload a valid image. The file you uploaded was either not an image or a corrupted image.".
Any help will be appreciated.
Power was right. Unfortunately (or not) I found his answer after I had spent half an hour debugging WebTest. Here is a bit more information.
Trying to pass only the path to the files raises the following exception:
webtest/app.py", line 1028, in _get_file_info
ValueError: upload_files need to be a list of tuples of (fieldname,
filename, filecontent) or (fieldname, filename); you gave: ...
The problem is that it doesn't tell you that it will automatically append the field name to the tuple being sent, turning a 3-item tuple into a 4-item one. The final solution was:
avatar = ('avatar',
file(os.path.join(settings.PROJECT_PATH, '....', 'avatar.png')).read())
Too bad there is no decent example out there, but I hope this will help someone else too.
Nowadays the selected answer didn't help me.
But I found a way to provide the expected files via the .submit() method args:
form.submit(upload_files=[('avatar', '/../file_path.png')])
With Python 3:
form["avatar"] = ("avatar.png", open(os.path.join(settings.PROJECT_PATH, '....', 'avatar.png'), "rb").read())