Uploading images with TinyMCE - django

I'm trying to install TinyMCE in my Django project for blog posts. I have the initial content block working, but it isn't allowing me to upload images, failing with the following error:
Forbidden (CSRF token missing.): /admin/mhpapp/testmodel/add/static/images/images
I have the app added in my settings.py
urls.py:
urlpatterns = [
    path('tinymce/', include('tinymce.urls')),
]
settings.py:
TINYMCE_DEFAULT_CONFIG = {
"height": "320px",
"width": "960px",
"menubar": "file edit view insert format tools table help",
"plugins": "advlist autolink lists link image charmap print preview anchor searchreplace visualblocks code "
"fullscreen insertdatetime media table paste code help wordcount spellchecker",
"toolbar": "undo redo | bold italic underline strikethrough | fontselect fontsizeselect formatselect | alignleft "
"aligncenter alignright alignjustify | outdent indent | numlist bullist checklist | forecolor "
"backcolor casechange permanentpen formatpainter removeformat | pagebreak | charmap emoticons | "
"fullscreen preview save print | insertfile image media pageembed template link anchor codesample | "
"a11ycheck ltr rtl | showcomments addcomment code",
"custom_undo_redo_levels": 10,
"images_upload_url": 'static/images/images',
"images_upload_handler": "tinymce_image_upload_handler"
}
TINYMCE_EXTRA_MEDIA = {
'css': {
'all': [
],
},
'js': [
"https://cdn.jsdelivr.net/npm/js-cookie@3.0.1/dist/js.cookie.min.js",
"admin/js/tinymce-upload.js",
],
}
I am currently testing on a local server, but I would also like this to work on my production server, hosted on my own Ubuntu VPS using nginx.
Any help would be greatly appreciated.
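A note on why the error path looks the way it does: images_upload_url here is a relative URL, so the browser resolves it against the admin page currently open, which is why the failing POST goes to /admin/mhpapp/testmodel/add/static/images/images. A minimal sketch of that resolution (the /upload/images/ path below is just a hypothetical example of an absolute alternative):

```python
from urllib.parse import urljoin

# The browser resolves a relative upload URL against the page you are on.
# With the admin "add" page as the base, the relative images_upload_url
# produces exactly the path seen in the CSRF error above.
base = "/admin/mhpapp/testmodel/add/"
print(urljoin(base, "static/images/images"))
# -> /admin/mhpapp/testmodel/add/static/images/images

# An absolute path is resolved independently of the current page:
print(urljoin(base, "/upload/images/"))
# -> /upload/images/
```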

Related

amplify configuration not showing anything

I'm trying to set up my mobile app with Amplify. The first time I run amplify configuration, it prompts me that I have missing plugins.
The following official plugins are missing or inactive:
awscloudformation: provider | amplify-provider-awscloudformation@4.33.0
analytics: category | amplify-category-analytics@2.19.1
api: category | amplify-category-api@2.27.0
auth: category | amplify-category-auth@2.25.0
function: category | amplify-category-function@2.26.3
hosting: category | amplify-category-hosting@undefined
hosting: category | amplify-console-hosting@undefined
interactions: category | amplify-category-interactions@2.6.1
notifications: category | amplify-category-notifications@2.17.1
predictions: category | amplify-category-predictions@2.6.1
storage: category | amplify-category-storage@2.10.3
xr: category | amplify-category-xr@2.6.1
codegen: util | amplify-codegen@2.19.0
flutter: frontend | amplify-frontend-flutter@0.2.0
android: frontend | amplify-frontend-android@2.14.2
ios: frontend | amplify-frontend-ios@2.16.0
javascript: frontend | amplify-frontend-javascript@2.19.0
mock: util | amplify-util-mock@3.27.0
Then it asked me to select my backend provider, but there is nothing for me to choose.
I think it is caused by the missing plugins; how do I install those plugins?
Try re-installing Amplify with this command: npm install -g @aws-amplify/cli --unsafe-perm=true.

Hugo site isn't starting locally

I'm currently trying to build a Hugo site locally, and no content is showing. I'd love more troubleshooting steps, or anything that can help me do a clean rebuild so I don't have to transfer all my posts over to a Google site.
I've tried re-instantiating the site, rebuilding it with hugo, starting the server with hugo server and hugo server -D, but I'm only getting a blank screen.
I have pages that aren't drafts, so something should definitely be showing. It's possible the public or index folder are goofed, but I'm not sure.
hugo version: Hugo Static Site Generator v0.48/extended darwin/amd64
go version: go version go1.11.2 darwin/amd64
config.toml:
baseURL = ""
languageCode = "en-us"
title = ""
theme = "ananke"
[menu]
[[menu.main]]
identifier = "Posts"
name = "Posts"
pre = "<i class='fa fa-road'></i>"
url = "/posts/"
weight = -100
[params]
featured_image = "images/space-cat-wallpaper.jpg"
twitter = ""
When building the pages with hugo:
| EN
+------------------+----+
Pages | 72
Paginator pages | 0
Non-page files | 0
Static files | 21
Processed images | 0
Aliases | 1
Sitemaps | 1
Cleaned | 0
Total in 88 ms
When starting the local instance with hugo server -D:
| EN
+------------------+-----+
Pages | 117
Paginator pages | 5
Non-page files | 0
Static files | 21
Processed images | 0
Aliases | 1
Sitemaps | 1
Cleaned | 0
Total in 120 ms
Watching for changes in /Users/jschalz/Desktop/hugo-jschalz.github.io-2/{content,data,layouts,static,themes}
Watching for config changes in /Users/jschalz/Desktop/hugo-jschalz.github.io-2/config.toml
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
After running hugo -v --debug -D I get the following warnings and then a LOT of debug noise:
WARN 2019/06/16 16:33:21 No translation bundle found for default language "en"
WARN 2019/06/16 16:33:21 Translation func for language en not found, use default.
WARN 2019/06/16 16:33:21 i18n not initialized, check that you have language file (in i18n) that matches the site language or the default language.
Navigating to localhost:1313 gives me a blank screen.
First, hugo -v --debug -D could tell you more.
Second, to be really sure something is generated, try:
hugo server --renderToDisk --gc --cleanDestinationDir
Check that files are actually created on disk (as opposed to being served from memory).
Note: I always prefer adding the following to my config.toml:
builddrafts = true
It is useful when starting a project, to be sure everything is generated.
The OP ladygremlin confirms in the comments:
I think the builddrafts = true in the config.toml fixed it!
I also upgraded to the newest version of hugo.

Get multiple variations from Google Translate API

When we make a query to the Translate API
https://translation.googleapis.com/language/translate/v2?key=$API_KEY&q=hello&source=en&target=e
I only get one result:
{
"data": {
"translations": [
{
"translatedText": "....."
}
]
}
}
Is it possible to get all variations (alternatives) of that word, not only 1 translation?
Microsoft Azure supports this; see https://learn.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-dictionary-lookup.
For example, querying https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es with the request body
[
{"Text":"hello"}
]
gives you a list of translations like this:
[
{
"normalizedSource": "hello",
"displaySource": "hello",
"translations": [
{
"normalizedTarget": "diga",
"displayTarget": "diga",
"posTag": "OTHER",
"confidence": 0.6909,
"prefixWord": "",
"backTranslations": [
{
"normalizedText": "hello",
"displayText": "hello",
"numExamples": 1,
"frequencyCount": 38
}
]
},
{
"normalizedTarget": "dime",
"displayTarget": "dime",
"posTag": "OTHER",
"confidence": 0.3091,
"prefixWord": "",
"backTranslations": [
{
"normalizedText": "tell me",
"displayText": "tell me",
"numExamples": 1,
"frequencyCount": 5847
},
{
"normalizedText": "hello",
"displayText": "hello",
"numExamples": 0,
"frequencyCount": 17
}
]
}
]
}
]
You can see 2 different translations in this case.
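For completeness, here is a small sketch of pulling the alternatives (and their confidence scores) out of a dictionary-lookup response like the one above, assuming it has already been parsed into Python objects (the payload is abridged from the sample response):

```python
# Abridged dictionary-lookup payload, as parsed from the JSON above.
response = [
    {
        "normalizedSource": "hello",
        "translations": [
            {"displayTarget": "diga", "posTag": "OTHER", "confidence": 0.6909},
            {"displayTarget": "dime", "posTag": "OTHER", "confidence": 0.3091},
        ],
    }
]

# Flatten every entry's translations into (word, confidence) pairs.
alternatives = [
    (t["displayTarget"], t["confidence"])
    for entry in response
    for t in entry["translations"]
]
print(alternatives)  # [('diga', 0.6909), ('dime', 0.3091)]
```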
The Translation API service doesn't support the retrieval of multiple translations of a word, as mentioned in the FAQ Documentation:
Is it possible to get multiple translations of a word?
No. This feature is only available via the web interface at
translate.google.com
If this feature doesn't cover your current needs, you can use the Send Feedback button (located at the lower-left and upper-right corners of the service's public documentation), or take a look at the Issue Tracker tool, in order to raise a Translation API feature request and notify Google about this desired functionality.
An approach: mapping Wiktionary entries using POS tags, related terms, and the Google-translated word.
TL;DR
The question is titled 'get-multiple-variations-from-google-translate-api', but in short, you (still) can't do this using Google's service alone (as of Sept. 2022). It seems most companies, such as Google, want to keep charging for this service. This answer provides an approach that uses a (free) service as a pivot to get the term, related terms, and their POS (Part of Speech) tags, e.g. noun, verb, etc., before translating those terms and then re-querying the service.
This alternative creates a small pipeline that queries Wiktionary before (on the source language), and after (on the translated terms target language) the translation (using Google).
The small pipeline is written in python and bash.
Rationale
We could get word senses, for each POS (Part of Speech) and corresponding synonyms, then translate for each word sense since Google only translates word to word, and then match word senses for the corresponding target language using a tool such as Wiktionary.
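As a toy illustration of the matching step (with made-up sense entries, not real Wiktionary output): after translating, keep only the target-language senses whose POS tag matches the source sense being translated.

```python
# Hypothetical, simplified sense entries of the form {'pos': ..., 'text': ...}.
source_sense = {"pos": "noun", "text": "help: action given to provide assistance"}
target_senses = [
    {"pos": "verb", "text": "ajudar: to help, aid"},
    {"pos": "noun", "text": "ajuda: help, assistance"},
]

# Keep only target senses whose part of speech matches the source sense.
matches = [s for s in target_senses if s["pos"] == source_sense["pos"]]
print(matches)  # [{'pos': 'noun', 'text': 'ajuda: help, assistance'}]
```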
Wiktionary
Fortunately, someone has already created a python library to query Wiktionary for multiple languages.
Script to get definitions / synonyms from Wiktionary (using python):
(requires wiktionaryparser )
e.g. python -m pip install wiktionaryparser
import sys
import json

from wiktionaryparser import WiktionaryParser

parser = WiktionaryParser()
# sys.argv[1] is a language name, e.g. 'english'
parser.set_default_language(sys.argv[1])
# sys.argv[2] is the word to look up
print(
    json.dumps(
        [
            [
                {
                    'pos': d.get('partOfSpeech'),
                    'text': d.get('text'),
                    # keep only the first example, if any
                    'examples': d.get('examples')[0] if d.get('examples') else [],
                    'related': d.get('relatedWords'),
                }
                for d in w.get('definitions')
            ]
            for w in parser.fetch(sys.argv[2])
        ],
        indent=2,
    )
)
Google translate + Wiktionary
The bash script below gets Wiktionary definitions, splits on synonym lists and correlates translations based on POS (Part of Speech).
To be honest, this script is a bit convoluted; it uses a lot of utilities, but it works. It could be refactored into Python, like the Wiktionary part, by anyone wanting to make something a bit more robust.
This GitHub post provided some of the script below, which calls the free Google Translate API.
#!/bin/bash
sl=$1
tl=$2
wiki_sl=$3
wiki_tl=$4
string=$5
ua='Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
#echo "$string"
result="{\"${sl}\":[],\"${tl}\":[]}"
#set -x
while IFS= read line; do
# line could be better named 'synonym' here
pos="$(echo ${line} | jq -r ".pos")"
sl_result="$(echo $line | jq . -c)"
tl_result=""
opt_single="single?client=gtx&sl=${sl}&tl=${tl}&dt=t&q=${string//[[:blank:]]/+}"
full_url="http://translate.googleapis.com/translate_a/${opt_single}"
response=$(curl -sA "${ua}" "${full_url}")
tl_word="$(echo ${response} | jq -r '.[[0][0]][] | .[0:1][0]')"
echo "${tl_word}" | grep -q " " && continue 1
tl_result_new="$(python ./get_wiki.py "${wiki_tl}" "${tl_word}" | jq -r -c --arg POS "$pos" '.[][] | select(.pos==$POS)'),"
# making json
tl_result="[${tl_result_new}"
# iterate over synonyms
while IFS= read qry; do
opt_single="single?client=gtx&sl=${sl}&tl=${tl}&dt=t&q=${qry//[[:blank:]]/+}"
full_url="http://translate.googleapis.com/translate_a/${opt_single}"
response=$(curl -sA "${ua}" "${full_url}")
tl_word="$(echo ${response} | jq -r '.[[0][0]][] | .[0:1][0]')"
echo "${tl_word}" | grep -q " " && continue 1
tl_result_new="$(python ./get_wiki.py "${wiki_tl}" "${tl_word}" | jq -r -c --arg POS "$pos" '.[][] | select(.pos==$POS)'),"
# adding to json
tl_result="${tl_result},${tl_result_new}"
done< <(echo "${line}" | jq -c -r ' .related[].words[]' | \
sed -e 's/.*://;s/"//g;s/^ *//g;s/ *$//g' | tr ',' '\n')
tl_result="$(echo "${tl_result_new}" | sed 's/,$//g')"
[ -z "${tl_result}" ] && tl_result=null
[ -z "${sl_result}" ] && sl_result=null
result="{\"${sl}\":${sl_result},\"${tl}\":${tl_result}}"
echo "$result" | jq "."
done< <(python ./get_wiki.py "$wiki_sl" "$string" | \
jq -c -r '.[][]|select(.related[].relationshipType=="synonyms")') 2> /dev/null | jq -c '[.]'
How to use:
The first two arguments are for Google: the source and target languages, in that order, as two-letter codes.
The next two arguments are for Wiktionary: the source and target languages as full words, e.g. 'english', 'french'.
The final (fifth) argument is the single word to be translated.
./translate.sh en pt english portuguese help
In fact, the python 'wiktionaryparser' lib occasionally breaks and can throw an error, because it is a web-scraping library; that is why I add 2> /dev/null to silence stderr:
./translate.sh en pt english portuguese help 2> /dev/null
This script isn't perfect, but it is a starting point and a proof of concept showing that this is possible using a free tool such as Wiktionary.
English to Portuguese
$ ./translate.sh en pt english portuguese help 2> /dev/null
Output:
[
{
"en": {
"pos": "noun",
"text": [
"help (usually uncountable, plural helps)",
"(uncountable) Action given to provide assistance; aid.",
"(usually uncountable) Something or someone which provides assistance with a task.",
"Documentation provided with computer software, etc. and accessed using the computer.",
"(usually uncountable) One or more people employed to help in the maintenance of a house or the operation of a farm or enterprise.",
"(uncountable) Correction of deficits, as by psychological counseling or medication or social support or remedial training."
],
"examples": "I need some help with my homework.",
"related": [
{
"relationshipType": "synonyms",
"words": [
"(action given to provide assistance): aid, assistance"
]
}
]
},
"pt": {
"pos": "noun",
"text": [
"assistência f (plural assistências)",
"assistance, aid, help",
"protection"
],
"examples": [],
"related": [
{
"relationshipType": "related terms",
"words": [
"assistir"
]
}
]
}
}
]
[
{
"en": {
"pos": "verb",
"text": [
"help (third-person singular simple present helps, present participle helping, simple past helped or (archaic) holp, past participle helped or (archaic) holpen)",
"(transitive) To provide assistance to (someone or something).",
"(transitive) To assist (a person) in getting something, especially food or drink at table; used with to.",
"(transitive) To contribute in some way to.",
"(intransitive) To provide assistance.",
"(transitive) To avoid; to prevent; to refrain from; to restrain (oneself). Usually used in nonassertive contexts with can."
],
"examples": "Risk is everywhere. […] For each one there is a frighteningly precise measurement of just how likely it is to jump from the shadows and get you. “The Norm Chronicles” […] aims to help data-phobes find their way through this blizzard of risks.",
"related": [
{
"relationshipType": "synonyms",
"words": [
"(provide assistance to): aid, assist, come to the aid of, help out; See also Thesaurus:help",
"(contribute in some way to): contribute to",
"(provide assistance): assist; See also Thesaurus:assist"
]
}
]
},
"pt": {
"pos": "verb",
"text": [
"ajudar (first-person singular present indicative ajudo, past participle ajudado)",
"to help, aid; to assist"
],
"examples": "Ajude-me! ― Help me!",
"related": [
{
"relationshipType": "related terms",
"words": [
"ajuda",
"ajudante"
]
}
]
}
}
]
English to Latin
$ ./translate.sh en la english latin body | jq '.'
[
{
"en": {
"pos": "noun",
"text": [
"body (countable and uncountable, plural bodies)",
"Physical frame.",
"Main section.",
"Coherent group.",
"Material entity.",
"(printing) The shank of a type, or the depth of the shank (by which the size is indicated).",
"(geometry) A three-dimensional object, such as a cube or cone."
],
"examples": "I saw them walking from a distance, their bodies strangely angular in the dawn light.",
"related": [
{
"relationshipType": "synonyms",
"words": [
"See also Thesaurus:body",
"See also Thesaurus:corpse"
]
}
]
},
"la": {
"pos": "noun",
"text": [
"cadāver n (genitive cadāveris); third declension",
"A corpse, cadaver, carcass"
],
"examples": [],
"related": []
}
}
]
When it doesn't work
Sometimes there is no output at all.
Shortcomings of this approach, and going further
Despite many words being on Wiktionary, and many synonyms being present, they are not always inside the 'related' field; sometimes synonyms appear in the 'text' field, which gives word senses. I suspect the partial information wiktionaryparser provides matches what is on the Wiktionary site itself.
One could use any dictionary tool or online thesaurus, such as WordNet, to first get possible POS tags and a word's synsets, or query a fastText model to get a word's nearest neighbors, then filter to only words that are nearest neighbors of the 'text' field in Wiktionary.

How to run "ssd_object_detection.cpp"?

I'm trying to learn about object detection with deep learning, and I'm trying to run the sample code ssd_object_detection.cpp. This code requires the following parameters as input:
const char* params
= "{ help | false | print usage }"
"{ proto | | model configuration }"
"{ model | | model weights }"
"{ camera_device | 0 | camera device number}"
"{ video | | video or image for detection}"
"{ min_confidence | 0.5 | min confidence }";
Following the instructions in this code, I downloaded some pretrained models from this page: https://github.com/weiliu89/caffe/tree/ssd#models
So I obtained a folder with the following files:
deploy.prototxt
finetune_ssd_pascal.py
solver.prototxt
test.prototxt
train.prototxt
VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel
Then, I copied all this data to my project folder, and tried this code:
const char* params
= "{ help | false | print usage }"
"{ proto |test.prototxt| model configuration }"
"{ model |VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel| model weights }"
"{ camera_device | 0 | camera device number}"
"{ video |MyRoute...| video or image for detection}"
"{ min_confidence | 0.5 | min confidence }";
But at output I get the following error:
[libprotobuf ERROR C:\build\master_winpack-build-win64-vc14\opencv\3rdparty\protobuf\src\google\protobuf\text_format.cc:298] Error parsing text-format opencv_caffe.NetParameter: 13:18: Message type "opencv_caffe.TransformationParameter" has no field named "resize_param".
OpenCV Error: Unspecified error (FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: test.prototxt) in cv::dnn::ReadNetParamsFromTextFileOrDie, file C:\build\master_winpack-build-win64-vc14\opencv\modules\dnn\src\caffe\caffe_io.cpp, line 1145
I used all the .prototxt files and tried other possible solutions, but I still can't run the example code.
Can someone explain what parameters I should use in this code, or what I am doing wrong?
Sorry about my bad English, and thanks in advance.

Regex not working on Python 2.6.6

Hello, I have a regex problem.
This is the text structure:
TK00123456: Change a lot gibberish 16:34. --- access : [ more
gibberish Module](http://somewebsite.com/selectedModuleCode=Support
form.aspx longblob) summary --- | Properties | | --- Creator | more
gibberish | 16/01/2018 16:26:53 Manager | External Status |
Working on Resolution
Proper English Text
This is my regex:
re.match(r'(?s)Change(.*?)Working', text)
Output:
None
Using same RegEx on https://regex101.com/
Match 1 Full match 12-270
`Change a lot gibberish 16:34. --- access :
[ more gibberish
Module](http://somewebsite.com/selectedModuleCode=Support form.aspx
longblob) summary --- | Properties | | --- Creator | more gibberish |
16/01/2018 16:26:53 Manager | External Status |
Working`
I have Python version 2.6.6 on RHEL, and I can't upgrade to Python 2.7 if that is the problem.
Any suggestions?
You are looking for re.search() rather than re.match():
import re
string = """
TK00123456: Change a lot gibberish 16:34. --- access : [ more gibberish Module](http://somewebsite.com/selectedModuleCode=Support form.aspx longblob) summary --- | Properties | | --- Creator | more gibberish | 16/01/2018 16:26:53 Manager | External Status |
Working on Resolution
Proper English Text
"""
rx = re.compile(r'(?s)Change(.*?)Working')
print(rx.search(string).group(0))
Explanation: re.match() only matches at the beginning of the string, and the string does not start with Change (see the TK00123456: there?).
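To see the difference side by side, a minimal sketch (using a simplified stand-in for the OP's text, not the exact input):

```python
import re

text = "TK00123456: Change a lot of text Working on Resolution"
pattern = r'(?s)Change(.*?)Working'

# re.match() anchors at position 0; the string starts with "TK00123456:",
# not "Change", so this returns None.
print(re.match(pattern, text))  # None

# re.search() scans the whole string for the first match.
m = re.search(pattern, text)
print(m.group(0))  # Change a lot of text Working
```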